Prompt Engineering: The Career of the Future

With the No-Code revolution around the corner and the arrival of new-age technologies like GPT-3, we may see a stark difference between the careers of today and the careers of tomorrow…

Shubham Saboo
May 10, 2021 · 6 min read
“If I had asked people what they wanted, they would have said faster horses.” — Henry Ford

GPT-3 101

GPT-3 from OpenAI has captured public attention unlike any other AI model in the 21st century. The sheer flexibility of the model in performing a range of generalized tasks with near-human fluency and accuracy is what makes it so exciting. It has created a paradigm shift in the world of Natural Language Processing (NLP), where until now models were trained with narrow, task-specific approaches to excel at one or two tasks.

GPT-3 is the first step towards democratizing access to technology. It enables audiences from all walks of life to solve complex technical problems from the comfort of a user-friendly interface that allows you to design training prompts for specific AI problems using natural language. Knowing the technical nitty-gritty is no longer a mandatory prerequisite to shape your ideas into solutions!

GPT-3 belongs to the third generation of the generative pre-trained transformer family. This makes it capable of producing highly coherent and contextually relevant results, with a wide-ranging vocabulary and a human-like touch in its responses, owing to an unprecedentedly large underlying knowledge base.

“Extrapolating the spectacular performance of GPT-3 into the future suggests that the answer to life, the universe, and everything is just 4.398 trillion parameters.” — Geoffrey Hinton

GPT-3 in Action…

I have used GPT-3 in a number of demos to showcase its creative potential and predictive capabilities.

Why GPT-3?

In the history of AI, GPT-3 is to date the most ambitious human effort to get as close as possible to the sophistication of the human brain. It is the largest model trained so far, with 175 billion parameters, making it capable of producing human-like results on a variety of language tasks.

Unlike most AI systems, which are designed to perform a single specific task, GPT-3 is designed to be task-agnostic, with a general-purpose, simple-to-use “text-in, text-out” interface that can potentially perform any number of tasks given the right training prompt. The easy-to-use API has given birth to a new Software 3.0 ecosystem, virtually touching every aspect of human life.
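
To make the “text-in, text-out” idea concrete, here is a minimal sketch of a completion call using OpenAI's Python client as it existed at the time of writing; the prompt text and parameter values are my own illustrative assumptions, not recommendations:

    import openai

    openai.api_key = "YOUR_API_KEY"  # assumed placeholder for your own key

    # Text in: a plain-English description of the task.
    response = openai.Completion.create(
        engine="davinci",  # the largest GPT-3 engine available at the time
        prompt="Correct this sentence: She no went to the market.",
        max_tokens=32,
        temperature=0.3,
    )

    # Text out: the model's completion.
    print(response.choices[0].text.strip())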

Creating AI solutions has never been easy, but with GPT-3 all you need is a sensible training prompt written in plain English. It sounds like magic, but we are living in an era of rapid technological advancement, where there is a very thin line between the “relevant” and the “obsolete”.

(Illustration: the transition from “relevant” to “obsolete”)

Prompt Engineering & Design

When creating any GPT-3 application, the first and most important thing to consider is the design and content of the training prompt. Prompt design is the most significant step in priming the GPT-3 model to give a favorable and contextual response.

In a way, prompt design is like playing a game of charades!

The secret to writing good prompts is understanding what GPT-3 knows about the world and how to get the model to use that information to generate useful results. In a game of charades, we give the other person just enough information to figure out the word using their intelligence. Similarly, with GPT-3 we give the model just enough context in the form of a training prompt to figure out the pattern and perform the given task. We don't want to interrupt the model's natural flow of intelligence by giving it all the information at once.

As a rule of thumb, when designing the training prompt you should aim for a zero-shot response from the model; if that isn't possible, move forward with a few examples rather than providing an entire corpus. The standard flow for training prompt design should look like: Zero-Shot → Few-Shot → Corpus-Based Priming.
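
To see what that escalation looks like in practice, here is a sentiment-classification task expressed both ways; the wording of these prompts is my own illustrative assumption:

    Zero-shot prompt (no examples, just the task description):

        Decide whether the sentiment of the tweet is positive or negative.
        Tweet: "I loved the new Batman movie!"
        Sentiment:

    Few-shot prompt (a handful of examples to establish the pattern):

        Tweet: "I hate waiting in long queues." Sentiment: negative
        Tweet: "What a wonderful sunny day!" Sentiment: positive
        Tweet: "I loved the new Batman movie!" Sentiment: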

Following is the five-step formula for creating efficient and effective training prompts for any type of task:

  • Step 1: Define the problem you are trying to solve and bucket it into one of the possible natural language tasks: classification, Q&A, text generation, creative writing, etc.
  • Step 2: Ask yourself whether there is a way to get a solution zero-shot (i.e., without priming the GPT-3 model with any external training examples).
  • Step 3: If you think that you need external examples to prime the model for your use case, go back to Step 2 and think really hard.
  • Step 4: Now think of how you might frame the problem in a textual fashion given the “text-in, text-out” interface of GPT-3. Think about all the possible ways to represent your problem in textual form.
  • Step 5: If you do end up using external examples, use as few as possible and try to include variety without overfitting the model or skewing its predictions (see the sketch after this list).
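
Putting the formula into practice, here is a minimal sketch of a few-shot Q&A prompt sent through OpenAI's Python client of the time; the question/answer pairs, stop sequence, and parameter values are my own illustrative assumptions:

    import openai

    openai.api_key = "YOUR_API_KEY"  # assumed placeholder

    # A handful of varied Q&A pairs prime the model without overfitting it.
    prompt = (
        "Q: What is the capital of France?\n"
        "A: Paris\n"
        "Q: Who wrote Hamlet?\n"
        "A: William Shakespeare\n"
        "Q: What is the largest planet in our solar system?\n"
        "A:"
    )

    response = openai.Completion.create(
        engine="davinci",
        prompt=prompt,
        max_tokens=16,
        temperature=0.0,  # factual Q&A benefits from low randomness
        stop=["\n"],      # stop at the end of the answer line
    )
    print(response.choices[0].text.strip())  # expected: "Jupiter"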

The GPT-3 Playground

The large text area is where you interact with the GPT-3 engine. The first paragraph, which appears in a bold font, is what GPT-3 will take as input. I started this paragraph with the prefix Text: (also known as the start sequence) and followed it by pasting text that I copied from one of the Wikipedia articles. This is the key aspect of training the GPT-3 model: you teach it what type of text you want it to generate by giving it examples. In many cases a single example is sufficient, but you can provide more depending on the sophistication of your use case.

The second paragraph starts with the same Text: prefix, which also appears in bold. This second appearance of the prefix is the last part of the input. We are giving GPT-3 a paragraph that has the prefix and a text sample, followed by a line that only has the prefix. This gives GPT-3 the cue that it needs to generate some text to complete the second paragraph so that it matches the first in tone and style.
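
Put together, the Playground input looks roughly like this; the sample paragraph below is my own stand-in for whatever text you paste after the first prefix:

    Text: Alan Turing was an English mathematician and computer scientist
    who formalised the concepts of algorithm and computation with the
    Turing machine.

    Text: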

Once you have your training text and your options set to your liking, you press the “Submit” button at the bottom, and GPT-3 analyzes the input text and generates some more to match. If you press “Submit” again, GPT-3 runs again and produces another chunk of text.

Technical Nuances of the API (in Layman's Terms)

  • Temperature & Top P are not “creativity dials”; they control the randomness of the response.
  • A low temperature makes the model stick to the most likely next tokens, giving near-deterministic output; a high temperature lets it sample from less likely tokens, producing more varied responses that still fit the context.
  • Best Of generates several completions on the server side and returns the one the API scores highest, i.e. the completion with the best average log probability per token.
  • Frequency Penalty reduces verbatim repetition by penalizing tokens in proportion to how often they have already appeared; adjust it with the slider based on your use case.
  • Presence Penalty reduces the likelihood of repeating any token that has already appeared at all, nudging the model towards new topics. The sketch below shows where these parameters fit in an API call.
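
As a rough illustration, here is how those knobs map onto parameters of a completion request in OpenAI's Python client of the time; the values are arbitrary assumptions for demonstration:

    import openai

    openai.api_key = "YOUR_API_KEY"  # assumed placeholder

    response = openai.Completion.create(
        engine="davinci",
        prompt="Write a tagline for an ice cream shop.",
        max_tokens=32,
        temperature=0.8,        # higher = more randomness when sampling tokens
        top_p=1.0,              # nucleus sampling; usually tune this OR temperature
        best_of=3,              # generate 3 completions server-side, return the best-scored one
        frequency_penalty=0.5,  # penalize tokens in proportion to how often they appear
        presence_penalty=0.3,   # penalize any token that has already appeared at all
    )
    print(response.choices[0].text.strip())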

Conclusion

GPT-3 will redefine the way we look at technology and the way we communicate with our devices, and it will lower the barrier to accessing advanced technology. GPT-3 can write, create, and converse. It has great potential to create an array of business opportunities and to take a giant leap towards an abundant society. It will create disruptions, and new career opportunities will emerge as a by-product of those advancements. GPT-3, in some sense, is the foundational step towards creating the careers of the future!


If you would like to learn more or want me to write more on this subject, feel free to reach out.

My social links: LinkedIn | Twitter | Github

If you liked this post or found it helpful, please take a minute to press the clap button; it increases the post's visibility for other Medium users.
