As organizations adopt artificial intelligence (AI) technologies, incorporating AI-specific risks into the existing SOC 2 framework becomes critical. The AICPA’s Trust Services Criteria (TSC), grounded in the Committee of Sponsoring Organizations of the Treadway Commission (COSO) Internal Control—Integrated Framework, provides the foundational principles that can help guide how organizations incorporate AI into their SOC 2 control environment.
The COSO Internal Control Framework consists of five key components that work together to support effective internal control: the control environment, risk assessment, control activities, information and communication, and monitoring activities.
The COSO framework provides a structured approach to governance, risk assessment, and control activities, which directly applies to managing AI risks such as privacy, data protection, bias and fairness, and governance. By embedding COSO principles—like clear control environments, risk assessment processes, and monitoring—organizations can help ensure AI systems align with ethical standards, regulatory requirements, and accountability for decision-making.
Depending on the type of AI your organization deploys, a range of risks must be thoughtfully evaluated, spanning areas such as data privacy and protection, bias and fairness, and governance.
To effectively address these risks, organizations can embed AI-specific controls within each element of the TSC framework. By tailoring these controls to the nature and complexity of the AI systems in use, your organization can proactively manage risk while fostering responsible innovation.
SOC 2 reports have become a widely used mechanism for companies using AI to establish customer trust.
AI applications work with large and diverse datasets. It’s essential for your company to implement measures that protect sensitive data and comply with relevant data protection regulations, and SOC 2 reports can cover the controls that support those measures.
SOC 2 examinations evaluate whether controls relevant to the in-scope system and selected criteria are designed and operating effectively, which often includes controls across the data lifecycle, ensuring that information—particularly data used in AI systems—is collected, stored, processed, and retired in a secure, compliant manner.
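As one illustration of a data lifecycle control, a retention review over training datasets can be partially automated. The sketch below is a minimal, hypothetical Python example; the catalog structure, classification labels, and retention periods are assumptions made for illustration, not requirements of the Trust Services Criteria.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical retention policy, in days, keyed by data classification.
RETENTION_DAYS = {"public": 3650, "internal": 1825, "restricted": 730}

def records_due_for_disposal(catalog, today=None):
    """Return IDs of catalog entries whose retention period has elapsed.

    `catalog` is assumed to be a list of dicts with 'id', 'classification',
    and 'collected_at' (an ISO-8601 timestamp) describing training data.
    """
    today = today or datetime.now(timezone.utc)
    due = []
    for record in catalog:
        limit = timedelta(days=RETENTION_DAYS[record["classification"]])
        collected = datetime.fromisoformat(record["collected_at"])
        if today - collected > limit:
            due.append(record["id"])
    return due

# Example: a restricted dataset collected in early 2022 is flagged for secure disposal.
catalog = [{"id": "ds-001", "classification": "restricted",
            "collected_at": "2022-01-15T00:00:00+00:00"}]
print(records_due_for_disposal(catalog))
```

In practice, a control like this would feed an evidenced disposal workflow rather than delete data directly, so that the disposition of each dataset remains auditable.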
AI technologies introduce distinct governance challenges and associated risks. SOC 2 criteria can be supported by controls designed to address these governance risks.
When AI solutions depend on third-party cloud providers, external datasets, or APIs, SOC 2 includes criteria for assessing and managing risks arising from relevant third-party service providers—protecting both your organization’s and your customers’ information.
If your organization’s platform supports decision-making, the risk management framework should include procedures addressing corrupted data inputs and model drift or degradation over time.
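For example, ongoing monitoring for input drift can compare the distribution of recent production inputs against a training-time baseline. The following is a simplified sketch using a population stability index (PSI); the 0.2 alert threshold and the synthetic data are illustrative assumptions rather than prescribed values.

```python
import numpy as np

def population_stability_index(baseline, current, bins=10):
    """Compare a numeric feature's production distribution to its training baseline."""
    # Bin edges come from the baseline (training-time) distribution.
    edges = np.quantile(baseline, np.linspace(0.0, 1.0, bins + 1))
    # Clip production values into the baseline range so every value lands in a bin.
    current = np.clip(current, edges[0], edges[-1])
    expected, _ = np.histogram(baseline, bins=edges)
    actual, _ = np.histogram(current, bins=edges)
    expected = np.clip(expected / expected.sum(), 1e-6, None)
    actual = np.clip(actual / actual.sum(), 1e-6, None)
    return float(np.sum((actual - expected) * np.log(actual / expected)))

# Illustrative use: a PSI above roughly 0.2 is often treated as material drift.
rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)   # stand-in for training inputs
production = rng.normal(0.5, 1.2, 5_000)  # stand-in for recent live inputs
psi = population_stability_index(baseline, production)
if psi > 0.2:
    print(f"PSI {psi:.2f} exceeds threshold; escalate for model review")
```

A monitoring control of this kind is typically paired with a documented escalation path, such as retraining, recalibration, or temporary rollback, so that detected drift leads to a recorded response.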
The processing integrity trust services category requires that data be processed completely, accurately, in a timely manner, and with proper authorization. In the context of AI, this means implementing algorithms and models that consistently deliver reliable outputs, with robust controls governing inputs, processing, and results.
SOC 2’s processing integrity criteria can be supported by validation mechanisms, error detection, and correction procedures to prevent unreliable or misleading outcomes. These safeguards may be embedded throughout the AI model life cycle.
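One way to embed such safeguards at the inference stage is to validate each request against documented input bounds and to flag out-of-range outputs before they are returned. The sketch below is a hypothetical Python example; the field names, allowed ranges, and score bounds are assumptions made for illustration.

```python
# Hypothetical pre- and post-scoring checks for a credit-scoring model.
EXPECTED_FIELDS = {"age": (18, 120), "income": (0, 10_000_000), "debt_ratio": (0.0, 1.0)}
SCORE_RANGE = (300, 850)

def validate_input(record):
    """Reject requests that are incomplete or outside documented bounds."""
    errors = []
    for field, (low, high) in EXPECTED_FIELDS.items():
        value = record.get(field)
        if value is None:
            errors.append(f"missing field: {field}")
        elif not (low <= value <= high):
            errors.append(f"{field}={value} outside [{low}, {high}]")
    return errors

def validate_output(score):
    """Flag model outputs that fall outside the documented score range."""
    low, high = SCORE_RANGE
    return [] if low <= score <= high else [f"score {score} outside [{low}, {high}]"]

# Example: a request with an out-of-range debt ratio is logged and rejected
# before it ever reaches the model.
request = {"age": 42, "income": 85_000, "debt_ratio": 1.7}
issues = validate_input(request)
if issues:
    print("rejected:", issues)
```

Logging both the rejected requests and any flagged outputs provides the kind of evidence an examiner would expect when evaluating processing integrity controls.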
By integrating these controls, your organization can align AI operations with SOC 2’s processing integrity expectations—helping to ensure compliance and delivering transparent and dependable outcomes consistent with stated service commitments and system requirements.
As AI becomes more deeply embedded in organizational processes, SOC 2 reporting can be applied to address its distinctive risks and governance challenges.
Comprehensive policies governing AI use and development, supported by thorough risk assessments and documentation of vulnerabilities, form the foundation of responsible AI governance. This oversight is particularly critical in highly regulated sectors such as healthcare and finance, where AI decisions may affect individual welfare or financial stability.
A mature SOC report in this context may demonstrate your organization’s commitment to principles such as transparency, sound governance, data privacy, and security—while evidencing a culture of accountability and continuous improvement. In doing so, SOC reporting over AI reinforces trust, integrity, and confidence in automated decision-making systems.
SOC 2 reports have become widely used tools for companies using AI to establish customer trust and meet vendor expectations and market demands. These reports provide independent assurance that your organization’s systems, processes, and controls are designed and operating effectively to manage risks.
While traditional SOC examinations focus on security, availability, confidentiality, and privacy, applying SOC principles to AI introduces new dimensions—particularly around model development, validation, and monitoring. By incorporating AI-specific risks into SOC examinations, you can demonstrate transparency and accountability in how your organization’s algorithms operate and evolve.
Although both traditional and AI-focused SOC 2 reports assess risk and controls, they differ in focus, scope, and depth—especially when processing integrity is included in the examination. For AI-driven applications, processing integrity may encompass controls related to model inputs, processing, and outputs to ensure systems function as intended, consistent with stated objectives.
These AI-centric considerations complement the standard operational SOC controls, such as access management, data handling, and change control, resulting in a more holistic evaluation. Ultimately, SOC 2 reporting tailored to AI environments strengthens confidence among clients and stakeholders by validating not only operational soundness but also the governance and technical rigor behind intelligent systems.
SOC 2, governed by the AICPA’s Trust Services Criteria, doesn’t currently describe how to implement controls—it focuses on what outcomes must be achieved. AI-specific governance instruments such as ISO 42001, NIST AI RMF, or the EU AI Act have not been formally incorporated into the SOC 2 framework.
The AICPA has acknowledged the growing role of AI, especially generative AI, and has begun issuing nonauthoritative guidance on AI use in various service areas. Over time, this type of guidance may influence how firms design controls and apply existing SOC 2 criteria, even if the underlying TSC do not change.
To learn more about implementing AI controls or including them in your SOC 2 reporting, contact your firm professional.
Baker Tilly US, LLP, Baker Tilly Advisory Group, LP and Moss Adams LLP and their affiliated entities operate under an alternative practice structure in accordance with the AICPA Code of Professional Conduct and applicable laws, regulations and professional standards. Baker Tilly Advisory Group, LP and its subsidiaries, and Baker Tilly US, LLP and its affiliated entities, trading as Baker Tilly, are members of the global network of Baker Tilly International Ltd., the members of which are separate and independent legal entities. Baker Tilly US, LLP and Moss Adams LLP are licensed CPA firms that provide assurance services to their clients. Baker Tilly Advisory Group, LP and its subsidiary entities provide tax and consulting services to their clients and are not licensed CPA firms. ISO certification services offered through Moss Adams Certifications LLC. Investment advisory offered through either Moss Adams Wealth Advisors LLC or Baker Tilly Wealth Management, LLC.