The International AI Safety Report 2026 is an independent, expert-led global assessment of general-purpose AI (GPAI). The report was developed with guidance from more than 100 independent experts representing over 30 countries and international organizations, including the European Union, the OECD, and the United Nations. According to the report, experts had full discretion over its content, ensuring political independence and credibility.
The report’s primary focus is on frontier AI and emerging risks. It concentrates on the most capable general-purpose AI systems, which can perform a wide variety of tasks across multiple domains, rather than on narrow or legacy AI. At the same time, these systems pose emerging risks, some of which are already documented in real-world harms. The report emphasizes frontier capabilities, where risks are hardest to predict and manage.
AI capabilities are advancing faster than the available reliable evidence on risks. Policymakers therefore face a trade-off: acting too early risks locking in ineffective or misguided regulation, while acting too late risks severe societal harm. In response, the report synthesizes what is currently known, clearly identifies evidence gaps, and supports proportionate and adaptive governance.
GPAI already delivers benefits in healthcare, scientific research, education, productivity, and innovation. However, adoption remains highly uneven across regions, and without trust and safety these benefits may fail to scale. Misuse, malfunctions, and systemic risks can undermine public trust, while slow deployment can reduce potential developmental gains.
AI capabilities are improving rapidly but unevenly. Key drivers of progress include larger models with improved training, as well as inference-time scaling that uses greater computing power at runtime. These advances enable multi-step reasoning and have produced major gains in mathematics, software engineering, and scientific problem-solving. However, significant limitations remain. AI capabilities are often “jagged”: systems can excel at complex tasks while failing at simpler ones, such as counting objects, physical reasoning, or error recovery. This creates an evaluation gap in which laboratory benchmarks do not reliably predict real-world performance or risk.
The report identifies three core categories of AI risk. The first is malicious use in crime and exploitation, including scams, fraud, blackmail, deepfakes, and non-consensual imagery. The second is influence operations, where AI-generated content can be just as persuasive as human messaging. The third is cybercrime, where AI can be used to find security loopholes and create malware that is then exploited by criminals and state-sponsored actors. In addition, the report highlights biological and chemical risks, where AI systems can provide expert-level technical guidance. Safeguards have been introduced in response, particularly where developers cannot rule out serious misuse.
Conclusion
General-purpose AI presents transformational benefits alongside serious and evolving risks. Layered risk management, adaptive governance, and societal resilience are essential to ensure that AI advances support, rather than undermine, development, democracy, and global stability.