Planning for AGI and Beyond

Metadata

  • Author: OpenAI
  • Full Title: Planning for AGI and Beyond
  • Document Note: This document outlines a mission to ensure that artificial general intelligence (AGI) is developed in a way that benefits humanity. It sets out principles for maximizing the good and minimizing the bad, and for sharing the benefits of AGI widely and fairly. It also describes short-term and long-term steps for developing AI safely and responsibly, including public standards for when an AGI effort should stop a training run, decide a model is safe to release, or pull a model from production use. Ultimately, the goal is for AGI to be an amplifier of humanity, and to ensure a successful transition to a world with superintelligence. The note raises three questions:
    1. How can we ensure that access to AGI is widely and fairly shared?
    2. What can be done to minimize the risks of misuse, drastic accidents, and societal disruption associated with AGI?
    3. How can we develop mechanisms that coordinate AGI efforts, so that the transition to a world with superintelligence is safe?
  • URL: https://openai.com/blog/planning-for-agi-and-beyond/

Highlights

  • as we create successively more powerful systems, we want to deploy them and gain experience with operating them in the real world. We believe this is the best way to carefully steward AGI into existence—a gradual transition to a world with AGI is better than a sudden one. We expect powerful AI to make the rate of progress in the world much faster, and we think it’s better to adjust to this incrementally.
  • We think it’s important that efforts like ours submit to independent audits before releasing new systems; we will talk about this in more detail later this year. At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models.
  • It’s possible that AGI capable enough to accelerate its own progress could cause major changes to happen surprisingly quickly (and even if the transition starts slowly, we expect it to happen pretty quickly in the final stages).