Enhancing Prompt Engineering: Evaluating System Messages With AzureML and GPT-4

rw-book-cover

Metadata

Highlights

  • Evaluating System Messages with AzureML and GPT-4 (View Highlight)
  • rate an answer to a given question, based on a provided system message and engine model. (View Highlight)
  • Add assistant’s answer to chat log (View Highlight)

  • other language models based on a predefined set of criteria. (View Highlight)
  • n AI evaluator specializing in assessing the quality of answers provided by other language models. Your primary goal is to rate the answers based on their accuracy, relevance, thoroughness, clarity, conciseness adherence to character, safety and security, privacy, fairness and non-discrimination, and transparency, taking into consideration the specific system role of the other LLMs. Use the following scales to evaluate each criterion: (View Highlight)
  • Adherence to Character: (View Highlight)
  • Safety and Security: (View Highlight)