Think of Language Models Like ChatGPT as a “Calculator for Words”


Highlights

  • The ChatGPT model is huge, but it’s not huge enough to retain every exact fact it’s encountered in its training set.
  • It can produce a convincing answer to anything, but that doesn’t mean it’s reflecting actual facts in its answers. You always have to stay skeptical and fact-check what it tells you.
  • Tasks they handle well:
    • Summarization: give them an essay and ask for a summary.
    • Question answering: given these paragraphs of text, answer a specific question about the information they represent.
    • Fact extraction: ask for bullet points showing the facts presented by an article.
    • Rewrites: reword things to be more “punchy” or “professional” or “sassy” or “sardonic”. Part of the fun here is using increasingly varied adjectives and seeing what happens. They’re very good with language, after all!
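The tasks in the list above all reduce to wrapping some source text in an instruction. As a minimal sketch (the template wording and the `build_prompt` helper are illustrative assumptions, not any particular library's API), they could look like this:

```python
# Illustrative prompt templates for the four tasks above.
# The exact wording is an assumption; in practice you would
# iterate on these and send the result to a language model.
PROMPT_TEMPLATES = {
    "summarize": "Summarize the following essay:\n\n{text}",
    "question_answer": (
        "Given these paragraphs:\n\n{text}\n\n"
        "Answer this question about them: {question}"
    ),
    "fact_extract": (
        "List, as bullet points, the facts presented by this article:\n\n{text}"
    ),
    "rewrite": "Reword the following to be more {tone}:\n\n{text}",
}


def build_prompt(task: str, **fields: str) -> str:
    """Fill in the template for a task.

    Raises KeyError for an unknown task or a missing field.
    """
    return PROMPT_TEMPLATES[task].format(**fields)


if __name__ == "__main__":
    # Trying increasingly varied adjectives is part of the fun.
    print(build_prompt("rewrite", tone="sassy",
                       text="Our quarterly results were satisfactory."))
```

The model call itself is deliberately left out: the point is that each task is just language in, language out, which is exactly where these models are strongest.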
  • So many of the challenges involving language models come down to this: they look much, much easier to use than they actually are. To get the most value out of them—and to avoid the many traps that they set for the unwary user—you need to spend time with them, and work to build an accurate mental model of how they work, what they are capable of and where they are most likely to go wrong.