https://openai.com/blog/our-approach-to-alignment-research/
Our approach to aligning AGI is empirical and iterative. We are improving our AI systems’ ability to learn from human feedback and to assist humans at evaluating AI. Our goal is to build a sufficiently aligned AI system that can help us solve all other alignment problems….