The process of instructing an AI to do a task is called prompting. We tell the AI a set of instructions (the prompt) and it performs the task. Prompts can be as simple as a question, or as complex as multiple paragraphs.
For example, a prompt can be a simple question:
What is 1,000,000 * 9,000? Make sure to include the right number of zeros, even if there are many:
Or it can assign the AI a role:
You are a brilliant mathematician who can solve any problem in the world.
Prompts repository: Awesome ChatGPT Prompts
These were built for ChatGPT, but they will likely work with other AIs, and you can also use them as inspiration to build your own prompts. Let’s see two examples:
I want you to act as an etymologist. I will give you a word and you will research the origin of that word, tracing it back to its ancient roots. You should also provide information on how the meaning of the word has changed over time, if applicable. My first request is “I want to trace the origins of the word ‘pizza’”.
I want you to act as a lunatic. The lunatic’s sentences are meaningless. The words used by lunatic are completely arbitrary. The lunatic does not make logical sentences in any way. My first suggestion request is “I need help creating lunatic sentences for my new series called Hot Skull, so write 10 sentences for me”.
We will refer to prompts that consist solely of a question as “standard” prompts, whether they are written as a bare question or in the QA format.
What is the capital of France?
Standard Prompt in QA format
Q: What is the capital of France?
A:
Few shot standard prompts are just standard prompts that have exemplars in them. Exemplars are examples of the task that the prompt is trying to solve, which are included in the prompt itself.
Few Shot Standard Prompt
What is the capital of Spain?
Madrid
What is the capital of Italy?
Rome
What is the capital of France?
Few Shot Standard Prompt in QA format
Q: What is the capital of Spain?
A: Madrid
Q: What is the capital of Italy?
A: Rome
Q: What is the capital of France?
A:
Few shot prompts facilitate “few shot” learning, also known as “in context” learning: the model picks up the task from the exemplars alone, without any parameter updates.
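As a sketch, a few-shot standard prompt in QA format can be assembled programmatically from a list of exemplars. The helper function below is purely illustrative, not part of any particular library:

```python
def build_few_shot_prompt(exemplars, question):
    """Assemble a few-shot standard prompt in QA format.

    exemplars: list of (question, answer) pairs shown to the model.
    question:  the new question, ending with an empty "A:" that the
               model is expected to complete.
    """
    lines = []
    for q, a in exemplars:
        lines.append(f"Q: {q}")
        lines.append(f"A: {a}")
    lines.append(f"Q: {question}")
    lines.append("A:")
    return "\n".join(lines)

exemplars = [
    ("What is the capital of Spain?", "Madrid"),
    ("What is the capital of Italy?", "Rome"),
]
prompt = build_few_shot_prompt(exemplars, "What is the capital of France?")
print(prompt)
```

The resulting string is exactly the few-shot QA prompt shown above, ready to be sent to a model.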
Prompts can have varying formats and complexity. They can include context, instructions, multiple question-answer examples, and even other prompts.
Here is an example of a prompt that includes context, instructions, and multiple examples:
Twitter is a social media platform where users can post short messages called "tweets".
By adding additional context/examples, we can often improve the performance of AIs on different tasks.
You can ask the AI to speak in a certain style. When asked a question with no style guidance, ChatGPT will generally return one or two short paragraphs in response, occasionally more if a longer response is needed.
It speaks in a moderately formal tone and gives a couple details—pretty good! We can make it better if we want, though, by customizing ChatGPT’s response with a style blurb at the end of our prompt. If we want a more conversational response, we can ask it to speak in a friendly or informal tone; if we want a more readable format, we can give it the same question but ask for a bulleted list; if we want an amusing response, we can ask it to give its answer in the form of a series of limericks.
An example of a more detailed style prompt might look something like:
[Question] “Write in the style and quality of an expert in [field] with 20+ years of experience and multiple Ph.D.’s. Prioritize unorthodox, lesser known advice in your answer. Explain using detailed examples, and minimize tangents and humor.”
Prompting with style inputs can greatly increase the quality of your responses!
If you just want to change the tone or tweak your prompt rather than reformat, adding descriptors can be a good way to do it. Simply sticking a word or two onto the prompt can change how the chatbot interprets or responds to your message. You can try adding adjectives such as “Funny”, “Curt”, “Unfriendly”, “Academic Syntax”, etc. to the end of prompts to see how your responses change!
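One minimal way to experiment with descriptors is to generate several variants of the same prompt mechanically and compare the responses. The helper below is purely illustrative:

```python
def with_descriptor(prompt, descriptor):
    """Append a one- or two-word style descriptor to a prompt."""
    return f"{prompt} {descriptor}"

base = "Explain how a car engine works."
descriptors = ("Funny", "Curt", "Academic Syntax")
variants = [with_descriptor(base, d) for d in descriptors]
for v in variants:
    print(v)
```

Sending each variant to the chatbot lets you see side by side how a single trailing word changes the tone of the answer.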
Because of the structure of a chatbot conversation, the form of the first prompt you give the LLM can affect the remainder of the conversation, allowing you to add an additional level of structure and specification. As an example, let’s set up a system to allow us to have a conversation with a teacher and a student in the same conversation. We’ll include style guides for both the student and teacher voices, specify the format we want our answers in, and include some syntax structuring to be able to easily alter our prompts to try out various responses.
“Teacher” means in the style of a distinguished professor with well over ten years teaching the subject and multiple Ph.D.’s in the field. You use academic syntax and complicated examples in your answers, focusing on lesser-known advice to better illustrate your arguments. Your language should be sophisticated but not overly complex. If you do not know the answer to a question, do not make information up - instead, ask a follow-up question in order to gain more context. Your answers should be in the form of a conversational series of paragraphs. Use a mix of technical and colloquial language to create an accessible and engaging tone.
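If you are driving a chatbot through an API rather than a web interface, a primer like the one above is typically sent as the first message of the conversation. Below is a sketch assuming an OpenAI-style chat message format (a list of role-tagged dictionaries); the exact field names and roles vary by provider, and the primer text is abbreviated:

```python
# Abbreviated version of the "Teacher" primer above.
teacher_primer = (
    '"Teacher" means in the style of a distinguished professor with well '
    "over ten years teaching the subject and multiple Ph.D.'s in the field. "
    "If you do not know the answer to a question, do not make information up."
)

# A conversation is a list of role-tagged messages; the primer goes first.
conversation = [
    {"role": "system", "content": teacher_primer},
    {"role": "user",
     "content": "Teacher, what are the most interesting areas of philosophy?"},
]

# Each model reply and each follow-up question is appended to this same
# list, so the primer continues to shape every later answer.
```

Because the whole list is resent on every turn, the primer stays in context, which is why it can influence the rest of the conversation.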
Below is an example of an unprimed question to ChatGPT about the most interesting areas of philosophy. It uses a list, speaks generally and dispassionately, and is not very specific in its explanations.
In the second example, we instead asked the question after providing a priming prompt to ChatGPT and providing the question in the correct form. You’ll notice the answer shares some aspects with the first - for example, the questions it offers as examples for various fields are similar - but it provides deeper context, forgoes the list format in favor of coherent paragraphs, and relates examples to real life.
Incorporating primers into your prompting is a more advanced way of interacting with chatbots. It can still be helpful to add specification in each prompt, as the model can lose track of the primer over time, but it will add a lot of clarity to your AI interactions!
This section describes aspects of popular generative text AIs. These AIs have brains that are made up of billions of artificial neurons. The way these neurons are structured is called a transformer architecture. It is a fairly complex type of neural network. What you should understand is:
- These AIs are just math functions. Instead of $f(x)=x^2$, they are more like $f(\text{thousands of variables}) = \text{thousands of possible outputs}$.
- These AIs understand sentences by breaking them into words/subwords called tokens (e.g. the AI might read `I don't like` as `"I"`, `"don"`, `"'t"`, `"like"`). Each token is then converted into a list of numbers, so the AI can process it.
- These AIs predict the next word/token in a sentence based on the previous words/tokens (e.g. given `I don't like`, the AI predicts whatever token is most likely to come next). Each token they write is based on the previous tokens they have seen and written; every time they write a new token, they pause to consider what the next token should be.
- These AIs look at every token at the same time. They don’t read left to right, or right to left like humans do.
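The tokenization and next-token-prediction points above can be sketched with a toy model. The tokenizer and the "model" here are deliberately trivial stand-ins, nothing like a real transformer:

```python
import re

def toy_tokenize(text):
    """Split text into crude word/subword tokens.

    Real tokenizers (BPE and friends) learn their splits from data;
    this regex is only an illustration of the idea.
    """
    return re.findall(r"'\w+|\w+|[^\w\s]", text)

def toy_next_token(tokens):
    """Stand-in for the model: 'predict' the next token.

    A real LLM scores every token in its vocabulary given all the
    previous tokens; here we just hard-code one continuation.
    """
    continuations = {("I", "don", "'t", "like"): "rain"}
    return continuations.get(tuple(tokens), "<unk>")

tokens = toy_tokenize("I don't like")
print(tokens)  # ['I', 'don', "'t", 'like']

# Autoregressive loop: each newly "predicted" token is appended to the
# sequence and fed back in as input for the next step.
tokens.append(toy_next_token(tokens))
print(tokens)
```

The append-and-feed-back loop is the essential shape of generation: the model only ever produces one token at a time, conditioned on everything before it.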
Please understand that the words “think”, “brain”, and “neuron” are anthropomorphisms, which are essentially metaphors for what the model is actually doing. These models are not really thinking, they are just math functions. They are not actually brains, they are just artificial neural networks. They are not actually biological neurons, they are just numbers.
This is an area of active research and philosophizing. This description is rather cynical about their nature and is meant to temper popular media depiction of AIs as beings that think/act like humans. This being said, if you want to anthropomorphize the AI, go ahead! It seems that most people do this and it may even be helpful for learning.