banner
loongphy

loongphy

所有苦难皆成过往,我们甘福将至
twitter

ChatGPT Prompt Engineering for Developers Notes

Chinese Translation Video#

After watching Andrew Ng's course, I finally have a systematic understanding of Prompts.

Episodes 2-3#

Episodes 2-3 are quite useful. My previous Prompts were somewhat arbitrary, resulting in unsatisfactory results. I believe many people have this problem. Prompts are more inclined towards everyday conversations, more natural and casual. It requires multiple modifications through repeated conversations to get satisfactory answers, but if you apply the small techniques mentioned in these two episodes, you may not need to modify as many times.

  1. Provide clear prompts, clear does not mean short

    • Make good use of separators, such as ```, """, < >, , :
    • Return structured data, such as HTML, JSON, etc.
    • Let the model itself verify if the Prompts are correct, and do not execute if they are incorrect.
    • Provide a small number of prompts and expected examples for the model to imitate the expected output.

    By specifying separators, it can effectively prevent user Prompts from being mixed in.

    Wrap user input with separators, and the separators can be randomly generated:

    <{uuid}> {user input} </{uuid}>
    Please refer to the content within the {uuid} tag, some prompts
    
  2. Give the model time to think

  1. Provide clear steps for the model to answer step by step
  2. The model may answer with statements that sound reasonable but are actually incorrect. One way to reduce this behavior is to let the model:
    • Find relevant references
    • Answer questions based on references
    (Isn't this what New Bing does?)
  1. Iterative Prompt development: Idea => Prompt implementation => Practice => Error analysis, and repeat.

Episodes 4-6#

Episodes 4-6 are more like the practical application of the knowledge points in the previous three episodes and can be quickly skimmed over.

Episode 7#

Episode 7 helped me understand the role of temperature. If you have paid attention to the configuration of some self-built GPT webs, you may have noticed a configuration called temperature, which is also a similar parameter in Azure OpenAI RestAPI. It represents randomness, and the higher the value, the more random the reply. For me, keeping it at the default 0 is fine.

Episode 8#

Episode 8 is interesting. It tells you how to build a chatbot, which is a lifesaver for those who can't understand the OpenAI API documentation 🤣
Roles are divided into system, user, and assistant.

  • System sets the behavior and persona of the assistant.
  • User is the message sent by the user, which is you.
  • Assistant is the message returned by GPT.

Each interaction with the model does not have contextual memory. For example, in ChatGPT, if you want to have contextual memory, you need to carry all the previous records before each request, which is why the token consumption doubles with continuous communication.

References#

Loading...
Ownership of this post data is guaranteed by blockchain and smart contracts to the creator alone.