Humane Ingenuity 46: Can Engineered Writing Ever Be Great?



Highlights

  • A simple LLM has the same issue a pool table has: the ball will always follow the same path across the surface, in a predictable route, given its initial direction, thrust, and spin. Without additional interventions, an LLM will select the most common word that follows the prior word, based on its predetermined internal calculus. This is, of course, a recipe for unvaried familiarity, as the angle of the human prompt, like the pool cue, can overdetermine the flow that ensues.
  • “Temperature,” an internal variable, can be raised from 0, which produces perfect fidelity to the physics, i.e., always selecting the most likely next word, to something more like 0.8, which slightly weakens the gravitational pull in its textspace, so that less common words will be chosen more frequently. (A minimal code sketch of this knob follows the highlights below.)
  • Writing well isn’t about using less frequent words or having more frequent tangents. Great writing forges alternative pathways with intentionality. Styles and directions are not shifted randomly, but as needed to strengthen one’s case or to jolt the reader after a span of more mundane prose.
  • As we await the next generation of engineered writing, of tools like ChatGPT that are based on large language models (LLMs), it is worth pondering whether they will ever create truly great and unique prose, rather than the plausible-sounding mimicry they are currently known for.
    • Note: This article examines whether LLMs, such as ChatGPT, can create great writing that is uniquely their own instead of mimicking existing writing.
  • ChatGPT is an indiscriminate generalist in what it has read, because it wants to be able to write on virtually any topic.
  • Good writing isn’t just the selection and ordering of words, the output; good writing is the product of good reading.
  • If we want our LLMs to be truly variable and creative, we would have to train the models not on a mass of texts or even the texts of a set of “good writers” (if we could even agree on who those are!), but on a limited, odd array of texts one human being has ingested over their lifetime.
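
For readers curious what the “temperature” knob in the highlight above actually does, here is a minimal sketch of temperature-scaled next-word sampling in Python. The candidate words, their scores, and the function name are illustrative assumptions for this note, not anything taken from the article or from any particular LLM; real models rank tens of thousands of tokens, but the mechanics are the same: temperature 0 always picks the most likely word, while higher values let less common words through.

```python
import math
import random

def sample_next_word(scores, temperature):
    """Choose the next word from a {word: score} dict, scaled by temperature.

    temperature == 0 is greedy decoding: always the single most likely word.
    Higher temperatures (e.g. 0.8) flatten the distribution, so less
    common words get picked more often.
    """
    if temperature <= 0:
        # Greedy choice: "perfect fidelity to the physics."
        return max(scores, key=scores.get)

    # Temperature-scaled softmax over the candidate words.
    scaled = {w: s / temperature for w, s in scores.items()}
    top = max(scaled.values())
    weights = {w: math.exp(s - top) for w, s in scaled.items()}
    total = sum(weights.values())
    probs = [weights[w] / total for w in scores]

    # Weighted random draw from the scaled distribution.
    return random.choices(list(scores), weights=probs, k=1)[0]

# Hypothetical scores for words that might follow "the cue ball rolls ..."
candidates = {"forward": 2.5, "slowly": 1.8, "sideways": 0.6, "backwards": 0.2}

print(sample_next_word(candidates, 0))    # always "forward"
print(sample_next_word(candidates, 0.8))  # usually "forward", sometimes a rarer word
```

As the article's pool-table analogy suggests, the randomness added at higher temperatures only loosens the model's pull toward the most probable word; it does not, by itself, supply the intentional shifts in style and direction the author associates with great writing.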