AI systems are acting with increasing autonomy in real-world settings. OpenAI's approach to navigating the transition to AGI responsibly pairs building capable systems with developing the monitoring methods needed to deploy and manage them safely. To that end, OpenAI has implemented a monitoring system for its coding agents, designed to learn from real-world usage, identify risks, and improve safety as AI capabilities progress. The system reviews agent interactions, raises alerts on problematic behavior, and surfaces potential issues for human review, limiting harmful consequences and strengthening agent security.
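The pipeline described above — review agent actions, flag problematic behavior, escalate to humans — can be sketched in miniature. This is a toy illustration, not OpenAI's implementation: the rule names, severity labels, and data shapes are all assumptions, standing in for classifiers and taxonomies that are not public.

```python
from dataclasses import dataclass

# Hypothetical severity ladder; the real taxonomy is not public.
SEVERITIES = ("none", "low", "high")


@dataclass
class AgentEvent:
    agent_id: str
    action: str  # e.g. a shell command the coding agent ran


@dataclass
class Finding:
    event: AgentEvent
    severity: str
    reason: str


# Toy rules standing in for learned classifiers: flag actions that touch
# credentials, tamper with sandboxing, or reach out to the network.
RULES = [
    ("credential access", "high", lambda a: ".ssh" in a or "secrets" in a),
    ("sandbox tamper", "high", lambda a: "disable-sandbox" in a),
    ("network egress", "low", lambda a: "curl" in a or "wget" in a),
]


def review(event: AgentEvent) -> Finding:
    """Score a single agent action against the rule set."""
    for reason, severity, matches in RULES:
        if matches(event.action):
            return Finding(event, severity, reason)
    return Finding(event, "none", "no rule matched")


def triage(events: list[AgentEvent]) -> list[Finding]:
    """Return only the findings that need human review, worst first."""
    flagged = [f for f in map(review, events) if f.severity != "none"]
    flagged.sort(key=lambda f: SEVERITIES.index(f.severity), reverse=True)
    return flagged
```

In a real deployment the rule lambdas would be replaced by model-based classifiers, and `triage` would feed a human review queue rather than return a list; the structure of the loop, though, matches the review-alert-escalate flow the post describes.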
