"ChatGPT Prompt Engineering for developers" is an amazing course that teaches how to use Large Language Models especially ChatGPT efficiently. This course is designed by Isa Fulford and Andrew Ng. It is free for a limited time. The introduction lecture explains some of the contents that will be covered and some basics of Large Language Models(LLM).
What is an LLM? It is a deep-learning neural network, trained on large amounts of data, that can understand and generate text (and, in multimodal variants, images and videos) in a human-like fashion. ChatGPT is a Large Language Model that generates text, but the correctness of the text depends on how effectively we prompt the model. Also, ChatGPT itself is a web user interface, but OpenAI provides APIs that software developers can use to build applications quickly. An application built this way can understand the end user's queries and act accordingly.
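For instance, here is a minimal sketch (my own illustration, not code from the course) of calling OpenAI's Chat Completions API with the official openai Python package; the model name and parameters are assumptions chosen just for the example:

```python
# Minimal sketch of an OpenAI API call (openai package, v1.x style).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[
        {"role": "user", "content": "What is the capital of India?"}
    ],
    temperature=0,  # low temperature keeps factual answers consistent
)
print(response.choices[0].message.content)
```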
There are two types of LLMs:
1) Base LLM:
It predicts the next word based on a large training dataset, such as text from the internet. E.g. "A friend in need is ________________." will be completed as "a friend indeed" by the Base LLM, since the proverb appears widely on the internet. But if the prompt is "What is the capital of India?", the completion might instead be "What is India's largest city?", "What is India's population?", etc. The Base LLM may return a list of questions rather than the actual answer, because the question asked above could just as well be part of a quiz, in which case the most likely continuation is the rest of the quiz.
2) Instruction-tuned LLM:
This model is trained to follow instructions. An Instruction-tuned LLM is built by starting from a Base LLM and further fine-tuning it on inputs and outputs that are instructions, using Reinforcement Learning from Human Feedback (RLHF). The model is trained to be honest and harmless, so it is less likely to give toxic outputs. E.g. for "What is the capital of India?", an instruction-tuned LLM will answer "New Delhi". Instruction-tuned LLMs are the main focus of research today.
But when an Instruction-tuned LLM does not provide the desired answer, most probably the instructions are not clear enough. E.g. for "Write something about Albert Einstein.", the generated text will be very generic. It's necessary to specify the context of the text to be generated, e.g. "Write about Albert Einstein's personal life." or "Write about the scientific contributions of Albert Einstein." It's also important to specify the tone of the text to be generated, such as professional, journalistic, or casual. This helps the Instruction-tuned LLM generate text that is appropriate for the user.
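As a quick illustration (my own example, not one from the course), compare a vague prompt with one that states both the context and the desired tone:

```python
# Vague prompt: the model has to guess the context and tone.
vague_prompt = "Write something about Albert Einstein."

# Specific prompt: context (scientific contributions) and tone (professional)
# are stated explicitly, so the output is far more predictable.
specific_prompt = (
    "Write a short, professional summary of Albert Einstein's "
    "scientific contributions, aimed at readers of a science blog."
)
```

The second prompt leaves much less for the model to guess, so the output is far more likely to match what the user actually wanted.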
This was a basic introduction to LLMs. The course helps in understanding how to make API calls to these LLMs efficiently.
The course above covers the following topics:
Prompting best practices and guidelines
Summarizing
Inferring
Transforming
Expanding
Building a chatbot
Thank you for taking the time to read this blog!