The OECD Due Diligence Guidance for Responsible AI (DDG-RAI) is designed to translate Responsible Business Conduct (RBC) principles into the AI system value chain. Due diligence is defined as a continuous and systematic process of identifying, preventing, and mitigating adverse impacts, and of accounting for how those impacts are addressed, both before and after decisions are made. The DDG-RAI is therefore a normative governance instrument. It is not a technical AI governance manual or a compliance checklist.
Key features of the DDG-RAI
Through this approach, DDG-RAI concentrates on connecting three governance domains: Responsible Business Conduct (MNE Guidelines), AI-specific risk governance (OECD AI Principles), and emerging national and regional AI regulations.
DDG-RAI functions as a meta-framework that harmonizes and contextualizes existing AI regulations from a due diligence perspective, without replacing them.
Risk identification in the DDG-RAI
The document conceptualizes risk as actual or potential adverse impacts on people, society, and the environment, rather than as model failure, bias metrics, or robustness and accuracy problems. In other words, DDG-RAI frames AI risk as social harm, not technical failure.
This document aligns AI governance with human rights, labour standards, and environmental responsibility, rather than engineering performance. The OECD, as a global policy organization serving the public good, holds enterprises accountable for downstream and indirect harms, even when AI systems technically function “as intended.” DDG-RAI thereby closes a major gap in many AI frameworks that focus on how systems work rather than what they do in the real world. This approach is consistent with the OECD’s long-standing RBC logic and extends it into AI governance.
Six Steps RBC Due Diligence Framework
The document outlines a six-step RBC due diligence framework. The first step is embedding RBC into policies and management systems. The second is identifying and assessing actual and potential adverse impacts. The third is ceasing, preventing, and mitigating adverse impacts. The fourth is tracking implementation and results. The fifth is communicating how impacts are addressed, and the sixth is providing for or cooperating in remediation.
The document stresses that due diligence is iterative, not linear: it is continuous, adaptive, and responsive to post-deployment realities. DDG-RAI rejects the idea that responsibility depends on intent, direct causation, or ownership of the final system. Instead, responsibility depends on whether an enterprise causes, contributes to, or is directly linked to an adverse impact.
This significantly raises accountability expectations for data suppliers, compute providers, investors, and downstream users.
Three groups in the DDG-RAI
DDG-RAI focuses on value chain governance through a whole-of-value-chain approach. The document divides actors into three groups: AI input suppliers (Group 1), AI developers and deployers (Group 2), and AI users (Group 3).
This framework dismantles the common notion that AI responsibility belongs only to developers. Investors and cloud providers are explicitly recognized as governance actors rather than neutral infrastructure. Furthermore, the document makes clear to enterprises operating outside the tech sector that using AI does not outsource responsibility. The framework aligns well with real-world AI ecosystems, in which roles are overlapping and non-linear, making DDG-RAI particularly policy-relevant for enforcement and grievance mechanisms.
Stakeholder Engagement in the DDG-RAI
In this document, stakeholder engagement is treated as an operational infrastructure. Key analytical points include engagement being framed as two-way, continuous, and conducted in good faith rather than as symbolic consultation. Workers, trade unions, and affected communities are prioritized over abstract “users.” Stakeholder engagement is explicitly linked to trust, risk anticipation, and market resilience.
Notably, DDG-RAI recognizes practical constraints, especially for SMEs, and allows for proportionality, pooled engagement mechanisms, and higher-level engagement for low-risk systems. This realism strengthens the document’s credibility.
Legal Compliance and the DDG-RAI
The report states that it does not substitute for legal compliance and should not be treated as a compliance blueprint. However, it positions itself as playing a strategic pre-regulatory role, particularly as many legal regimes (such as the EU AI Act and Corporate Sustainability Due Diligence Directive) already reflect RBC logic. Enterprises implementing this guidance are therefore likely to be better positioned for future regulatory convergence. As such, the guidance acts as a normative floor that anticipates stricter legal obligations without directly imposing them.
Strengths and Limitations of DDG-RAI
The key strengths of DDG-RAI include its integration of AI governance with established international RBC norms, its shift of AI risk discourse from technical failure to social harm, and its clarification of responsibility across complex AI value chains. Additionally, it provides interoperability across jurisdictions and frameworks, and it places strong emphasis on stakeholder engagement and remediation.
The key limitations of the document include its high-level nature, which may limit immediate operationalization for smaller firms. Further, its voluntary status may constrain enforceability without complementary regulation. The document is also less detailed on geopolitical misuse, military AI, and state-led deployments. More importantly, the guidance assumes a baseline level of institutional capacity that may not exist in many developing countries.
Conclusion
The DDG-RAI should be understood as a governance architecture for AI accountability. It should not be understood as a technical AI risk manual. Its real value lies in embedding AI into existing global responsibility regimes, redefining what “AI risk” means in policy terms, and preparing enterprises for a future in which AI governance is inseparable from business conduct.