Representing AI Controls in Your SOC 2 Report


As organizations adopt artificial intelligence (AI) technologies, incorporating AI-specific risks into the existing SOC 2 framework becomes critical. The AICPA’s Trust Services Criteria (TSC), grounded in the Committee of Sponsoring Organizations of the Treadway Commission (COSO) Internal Control—Integrated Framework, provides the foundational principles that can help guide how organizations incorporate AI into their SOC 2 control environment.

The COSO Internal Control Framework consists of five key components that work together to ensure effective internal control.

  • The Control Environment serves as the foundation, setting the tone at the top through ethics, governance, and organizational culture.
  • Risk Assessment focuses on identifying and analyzing risks that could impact the achievement of objectives.
  • Control Activities include the policies and procedures implemented to mitigate those risks.
  • Information and Communication ensures that relevant information is shared internally and externally to support decision-making and accountability.
  • Monitoring Activities involve ongoing evaluations to confirm that controls are functioning as intended and remain effective over time.


The COSO framework provides a structured approach to governance, risk assessment, and control activities, which directly applies to managing AI risks such as privacy, data protection, bias and fairness, and governance. By embedding COSO principles—like clear control environments, risk assessment processes, and monitoring—organizations can help ensure AI systems align with ethical standards, regulatory requirements, and accountability for decision-making.

Depending on the type of AI your organization deploys, a range of risks must be thoughtfully evaluated. These risks can be grouped into three overarching categories:

  • Privacy and data protection. AI systems often rely on vast datasets, raising concerns about data leakage, inadequate safeguards, and the misuse of sensitive information.
  • Bias and fairness. Without careful oversight, AI can perpetuate or amplify bias due to flawed training data or lack of transparency in decision-making processes.
  • Governance and compliance. Ethical AI use demands strong access controls, proper error handling, and alignment with evolving legal and regulatory standards.

To effectively address these risks, organizations can embed AI-specific controls within each element of the TSC framework. By tailoring these controls to the nature and complexity of the AI systems in use, your organization can proactively manage risk while fostering responsible innovation.


Data Privacy and Confidentiality

AI applications work with large and diverse datasets. SOC 2 reports can cover:

  • Controls preventing unauthorized access
  • Use of anonymized data when necessary
  • Controls governing data retention and deletion

It’s essential for your company to implement measures that protect sensitive data and comply with relevant data protection regulations.
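To make the anonymized-data control above concrete, the following minimal Python sketch pseudonymizes direct identifiers with a keyed hash before a dataset is released for model training. The field names, the demonstration key, and the choice of HMAC-SHA256 are illustrative assumptions rather than a prescribed method; pseudonymization is also a weaker safeguard than full anonymization and may need to be paired with other controls.

    import hashlib
    import hmac

    # Hypothetical set of direct-identifier fields to pseudonymize before
    # a dataset is released for model training.
    IDENTIFIER_FIELDS = {"email", "ssn", "phone"}

    def pseudonymize(record: dict, secret_key: bytes) -> dict:
        """Replace direct identifiers with keyed hashes (HMAC-SHA256)."""
        out = {}
        for field, value in record.items():
            if field in IDENTIFIER_FIELDS and value is not None:
                digest = hmac.new(secret_key, str(value).encode(), hashlib.sha256)
                out[field] = digest.hexdigest()  # keyed hash resists dictionary attacks
            else:
                out[field] = value
        return out

    # Demonstration only; in practice the key would come from a secrets
    # manager, never from source code.
    record = {"email": "user@example.com", "age": 42}
    print(pseudonymize(record, secret_key=b"demo-only-key"))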

SOC 2 examinations evaluate whether controls relevant to the in-scope system and selected criteria are suitably designed and operating effectively. For AI systems, this often includes controls across the data lifecycle, ensuring that information is collected, stored, processed, and retired in a secure, compliant manner (a brief sketch of the retention step follows the list):

  • Collection. Define and enforce criteria for what data may be collected, from which sources, and under what legal and ethical conditions.
  • Storage and access. Restrict and monitor access to AI training data through encryption, role-based permissions, least-privilege principles, and other security mechanisms.
  • Processing. Ensure AI models use only approved datasets and that processes maintain data accuracy, integrity, and relevance.
  • Retention and disposal. Implement clear policies for AI data retention and secure destruction once data is no longer required.
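As a sketch of the retention and disposal step, the code below sweeps expired datasets against a policy table. The retention periods, metadata fields, and secure-deletion hook are assumptions for illustration; actual values and destruction mechanisms would come from your data governance policy.

    from datetime import datetime, timedelta, timezone

    # Hypothetical retention policy, in days, keyed by data classification.
    RETENTION_DAYS = {"training_raw": 365, "training_derived": 730, "inference_logs": 90}

    def is_expired(classification: str, created_at: datetime) -> bool:
        """Return True when a dataset has exceeded its retention window."""
        limit = timedelta(days=RETENTION_DAYS[classification])
        return datetime.now(timezone.utc) - created_at > limit

    def sweep(datasets: list[dict], destroy) -> list[str]:
        """Destroy expired datasets and return their IDs for the audit trail."""
        destroyed = []
        for ds in datasets:
            if is_expired(ds["classification"], ds["created_at"]):
                destroy(ds["id"])  # secure-deletion hook supplied by the caller
                destroyed.append(ds["id"])
        return destroyed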

AI technologies introduce distinct governance challenges, including risks related to:

  • Bias
  • Misuse
  • Unauthorized model training
  • Data privacy

SOC 2 criteria can be supported by controls such as the following (the logging and monitoring control is sketched after the list):

  • Periodic risk assessments that incorporate AI-specific threats and mitigation strategies
  • Logging and monitoring of data access and use, for transparency in AI training and decision-making
  • Incident response procedures to address breaches, unauthorized data usage, or model compromise
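A minimal sketch of the logging and monitoring control follows, assuming a structured audit event is sufficient for the report's purposes. The dataset identifier, field names, and decorator pattern are illustrative, not a required design.

    import json
    import logging
    from datetime import datetime, timezone
    from functools import wraps

    logger = logging.getLogger("ai_data_access")

    def audited(dataset_id: str):
        """Decorator that records a structured audit event on each data access."""
        def decorator(fn):
            @wraps(fn)
            def wrapper(*args, user: str, purpose: str, **kwargs):
                logger.info(json.dumps({
                    "event": "data_access",
                    "dataset": dataset_id,
                    "user": user,
                    "purpose": purpose,  # e.g., "model_training"
                    "at": datetime.now(timezone.utc).isoformat(),
                }))
                return fn(*args, user=user, purpose=purpose, **kwargs)
            return wrapper
        return decorator

    @audited("training-set-v3")  # hypothetical dataset identifier
    def load_training_data(path: str, *, user: str, purpose: str):
        ...  # actual data loading elided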

When AI solutions depend on third-party cloud providers, external datasets, or APIs, SOC 2 includes criteria for assessing and managing risks arising from relevant third-party service providers—protecting both your organization’s and your customers’ information.

Bias and Fairness

If your organization’s platform supports decision-making, the risk management framework should include procedures addressing corrupted data inputs and model drift or degradation over time. These may include:

  • Access controls to model code and datasets
  • Logging and monitoring of model behavior
  • Versioning of all model changes (a minimal registry sketch follows this list)
  • Infrastructure security
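The versioning bullet above can be illustrated with an append-only registry that ties a model artifact to the exact data it was trained on and the person who approved it. The file-based registry and field names below are simplifying assumptions; the same record pattern also supports the audit-trail controls discussed later.

    import hashlib
    import json
    from datetime import datetime, timezone
    from pathlib import Path

    def file_sha256(path: Path) -> str:
        """Content hash that uniquely identifies a model artifact or dataset."""
        return hashlib.sha256(path.read_bytes()).hexdigest()

    def register_model_version(registry: Path, model_file: Path,
                               training_data: Path, approved_by: str) -> dict:
        """Append a version record linking the model binary, the exact
        training data, and the approver."""
        entry = {
            "model_sha256": file_sha256(model_file),
            "data_sha256": file_sha256(training_data),
            "approved_by": approved_by,
            "registered_at": datetime.now(timezone.utc).isoformat(),
        }
        with registry.open("a") as f:  # append-only by convention
            f.write(json.dumps(entry) + "\n")
        return entry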

The processing integrity trust services category requires that system processing be complete, valid, accurate, timely, and authorized. In the context of AI, this means instituting algorithms and models that consistently deliver reliable outputs, with robust controls governing inputs, processing, and results.

SOC 2’s processing integrity criteria can be supported by validation mechanisms, error detection, and correction procedures to prevent unreliable or misleading outcomes. These safeguards may be embedded throughout the AI model life cycle:

1. Model Life Cycle Controls

  • Development. Use only approved and authorized training datasets, with oversight on feature selection and model architecture.
  • Validation. Rigorously test models for accuracy, bias, and error rates prior to deployment (a minimal release gate is sketched after this list).
  • Versioning. Document and track all model updates, retraining activities, and architecture changes through formal change management.
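A minimal release gate for the validation step might look like the sketch below. The thresholds and the group-gap fairness measure are hypothetical stand-ins for whatever your model risk policy actually specifies.

    # Hypothetical release-gate thresholds; real values would come from the
    # organization's model risk policy.
    MIN_ACCURACY = 0.90
    MAX_GROUP_GAP = 0.05  # largest allowed accuracy gap between groups

    def release_gate(overall_accuracy: float,
                     accuracy_by_group: dict[str, float]) -> bool:
        """Block deployment unless accuracy and fairness thresholds are met."""
        if overall_accuracy < MIN_ACCURACY:
            return False
        gap = max(accuracy_by_group.values()) - min(accuracy_by_group.values())
        return gap <= MAX_GROUP_GAP

    # Example: clears the accuracy bar but fails the fairness bar.
    print(release_gate(0.93, {"group_a": 0.95, "group_b": 0.88}))  # False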

2. Data Integrity and Input Controls

  • Ensure input data is accurate, complete, and authorized.
  • Implement validation tools to avoid garbage-in, garbage-out risks that compromise model reliability (see the sketch below).
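As a sketch of such an input check, the code below validates a simple two-field schema. The field names and ranges are invented for illustration; a real pipeline would quarantine, rather than score, any record that fails.

    def validate_input(record: dict) -> list[str]:
        """Return a list of violations; an empty list means the record
        may enter the model pipeline."""
        errors = []
        if not isinstance(record.get("customer_id"), str):
            errors.append("customer_id must be a string")
        amount = record.get("amount")
        if not isinstance(amount, (int, float)) or not 0 <= amount <= 1_000_000:
            errors.append("amount must be a number between 0 and 1,000,000")
        return errors

    print(validate_input({"customer_id": 42, "amount": -5}))  # both checks fail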

3. Audit Trails and Evidence Mapping

  • Maintain traceable records of data flows, model changes, and AI system interactions.
  • Provide evidence to demonstrate compliance and enable swift investigation of anomalies.

4. Error Handling and Incident Response

  • Monitor and log errors in model outputs or processes.
  • Establish corrective action protocols and stakeholder communication plans when integrity is compromised.

5. Continuous Monitoring

  • Continuously assess model performance, accuracy, and anomalous behavior.
  • Address issues such as bias, model drift, or exposure to malicious or corrupt training data through timely remediation (a simple drift check is sketched below).
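One common drift measure is the population stability index (PSI). The sketch below compares a baseline feature distribution against production values and escalates when a widely cited, though policy-specific, threshold of 0.25 is exceeded. The binning scheme, sample data, and threshold are illustrative assumptions, and the alert hook doubles as the kind of escalation trigger described under error handling above.

    import math

    def psi(expected: list[float], actual: list[float], bins: int = 10) -> float:
        """Population stability index between a baseline distribution and
        the distribution observed in production."""
        lo = min(min(expected), min(actual))
        hi = max(max(expected), max(actual))
        width = (hi - lo) / bins or 1.0  # guard against a degenerate range

        def proportions(values):
            counts = [0] * bins
            for v in values:
                counts[min(int((v - lo) / width), bins - 1)] += 1
            # small floor avoids log(0) for empty bins
            return [max(c / len(values), 1e-6) for c in counts]

        e, a = proportions(expected), proportions(actual)
        return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

    # Hypothetical check: training-time scores vs. last week's production scores.
    baseline = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
    recent = [0.5, 0.6, 0.6, 0.7, 0.8, 0.8, 0.9, 0.9]
    if psi(baseline, recent) > 0.25:
        print("drift alert: route to the model risk owner for review")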

By integrating these controls, your organization can align AI operations with SOC 2’s processing integrity expectations, helping to ensure compliance and deliver transparent, dependable outcomes consistent with stated service commitments and system requirements.

Governance and Compliance

As AI becomes more deeply embedded in organizational processes, SOC 2 reporting can be applied to address its distinctive risks and governance challenges. Key considerations include:

  • Understanding how AI-driven decisions are made and whether model outputs are explainable, traceable, and auditable
  • Managing who has authority to modify or retrain models, ensuring accountability and preventing unauthorized changes
  • Assessing ethical risks and unintended consequences, such as bias, discrimination, or data misuse, which can impact organizational integrity and stakeholder trust

Comprehensive policies governing AI use and development, supported by thorough risk assessments and documentation of vulnerabilities, form the foundation of responsible AI governance. This oversight is particularly critical in highly regulated sectors such as healthcare and finance, where AI decisions may affect individual welfare or financial stability.

A mature SOC report in this context may demonstrate your organization’s commitment to principles such as transparency, sound governance, data privacy, and security—while evidencing a culture of accountability and continuous improvement. In doing so, SOC reporting over AI reinforces trust, integrity, and confidence in automated decision-making systems.

How Different Is a SOC 2 Report Covering AI-Enabled Systems?

SOC 2 reports have become widely used tools for companies using AI to establish customer trust and meet vendor expectations and market demands. These reports provide independent assurance that your organization’s systems, processes, and controls are designed and operating effectively to manage risks.

While traditional SOC examinations focus on security, availability, confidentiality, and privacy, applying SOC principles to AI introduces new dimensions—particularly around model development, validation, and monitoring. By incorporating AI-specific risks into SOC examinations, you can demonstrate transparency and accountability in how your organization’s algorithms operate and evolve.

Although both traditional and AI-focused SOC 2 reports assess risk and controls, they differ in focus, scope, and depth—especially when processing integrity is included in the examination. For AI-driven applications, processing integrity may encompass consideration of controls related to model inputs, processing, and outputs to ensure systems function as intended consistent with stated objectives.

These AI-centric considerations complement the standard operational SOC controls, such as access management, data handling, and change control, resulting in a more holistic evaluation. Ultimately, SOC 2 reporting tailored to AI environments strengthens confidence among clients and stakeholders by validating not only operational soundness but also the governance and technical rigor behind intelligent systems.

Is It Anticipated That Additional AI Frameworks Will Be Added to SOC 2?

SOC 2, governed by the AICPA’s Trust Services Criteria, doesn’t currently describe how to implement controls—it focuses on what outcomes must be achieved. AI-specific governance instruments such as ISO 42001, NIST AI RMF, or the EU AI Act have not been formally incorporated into the SOC 2 framework.

The AICPA has acknowledged the growing role of AI, especially generative AI, and has begun issuing nonauthoritative guidance on AI use in various service areas. Over time, this type of guidance may influence how firms design controls and apply existing SOC 2 criteria, even if the underlying TSC do not change.

We’re Here to Help

To learn more about implementing AI controls or including them in your SOC 2 reporting, contact a firm professional.

