
AI governance from ISACA-style audit programs


(similarities, differences, and what actually changes for auditors)

AI governance does not require a new audit universe, new tooling, or a separate audit methodology. That said, ISO/IEC 42001 can connect the dots: as a full management-system standard, it supports organization-wide AI compliance with whatever regulation comes next.


Where to start? With an AI risk assessment, followed by smarter control mapping - recognizing that AI risks sit inside systems, processes, and decisions auditors already review.

Below is how AI governance aligns with - and differs from - familiar ISACA audit domains.


1. IT general controls (ITGCs)

Similarities

  • Controls stay connected to risk assessment and impact analysis
  • Access management still applies (who can configure, deploy, and modify AI systems)
  • Change management still governs updates (model changes, prompt changes, data updates)
  • Logging and monitoring remain core evidence
  • Segregation of duties still matters
  • Controls span a wider area - not only technical systems, and not only finance departments

What’s different

  • “Changes” may include model retraining, fine-tuning, or prompt updates — not just code
  • Privileged access may include AI configuration rights or API-level permissions
  • Logs must show not just system access, but AI decisions and overrides

Audit focus shift
From: “Is access controlled?”
To: “Is access controlled to AI decision-making capability?”
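
To make that last shift tangible, here is a minimal sketch (in Python, with illustrative field names - not a prescribed schema) of a log entry that evidences an AI decision and a human override, not just system access:

  from dataclasses import dataclass, field
  from datetime import datetime, timezone

  @dataclass
  class AIDecisionLogEntry:
      # Who/what produced the decision
      model_id: str            # e.g. "credit-scoring-v3"
      model_version: str       # ties the decision to a specific deployed version
      # The decision itself, not just the access event
      input_ref: str           # pointer to the input record (avoid logging raw PII)
      decision: str            # the AI output that was acted upon
      confidence: float        # model-reported confidence, if available
      # Human oversight trail
      overridden: bool = False
      override_by: str | None = None   # user ID of the human who overrode
      override_reason: str | None = None
      timestamp: str = field(
          default_factory=lambda: datetime.now(timezone.utc).isoformat()
      )

  # Example: a decision that a human reviewer reversed
  entry = AIDecisionLogEntry(
      model_id="credit-scoring-v3",
      model_version="3.2.1",
      input_ref="application-48211",
      decision="decline",
      confidence=0.81,
      overridden=True,
      override_by="analyst-107",
      override_reason="Income documentation received after scoring",
  )
  print(entry)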


2. Third-party risk management

Similarities

  • Vendor due diligence
  • Contractual controls
  • Ongoing monitoring
  • Right-to-audit concepts

What’s different

  • AI vendors may embed AI functionality deep inside “non-AI” products
  • Risk depends on how the AI is used, not just who supplies it
  • EU AI Act introduces downstream accountability, even when AI is outsourced

Audit focus shift
From: “Is the vendor secure?”
To: “Do we understand, document, and govern how the vendor’s AI affects our decisions and users?”
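
As an illustration of what "understand, document, and govern" can look like, a minimal vendor AI inventory record might capture fields like these (names are assumptions, not a standard):

  from dataclasses import dataclass

  @dataclass
  class VendorAIRecord:
      vendor: str
      product: str
      ai_embedded: bool        # AI may sit inside a "non-AI" product
      use_case: str            # risk depends on how the AI is used
      affects_decisions: bool  # does vendor AI influence our decisions or users?
      eu_ai_act_role: str      # e.g. "deployer" - downstream accountability applies
      contract_clauses: list[str]  # e.g. right-to-audit, change notification

  # Example: AI embedded inside an HR tool bought as ordinary "software"
  record = VendorAIRecord(
      vendor="ExampleCo",
      product="Resume screening suite",
      ai_embedded=True,
      use_case="Shortlisting job applicants",
      affects_decisions=True,
      eu_ai_act_role="deployer",
      contract_clauses=["right-to-audit", "model change notification"],
  )
  print(record)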


3. SDLC / system development lifecycle

Similarities

  • Design reviews
  • Risk assessments
  • Testing and validation
  • Approval before production

What’s different

  • AI models may evolve after deployment
  • Testing must include bias, accuracy, explainability, and unintended outcomes
  • “Requirements” may include regulatory or ethical constraints, not just functionality

Audit focus shift
From: “Was the system built correctly?”
To: “Was the AI designed, trained, and monitored to stay within acceptable risk boundaries?”
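
A hedged sketch of what testing beyond functionality can mean in practice: a pre-production gate that checks both accuracy and a simple demographic-parity ratio. Thresholds and names are illustrative, not authoritative:

  # A minimal pre-production gate: the model must clear accuracy AND
  # fairness thresholds before approval. Thresholds here are illustrative.
  def selection_rate(outcomes: list[int]) -> float:
      return sum(outcomes) / len(outcomes)

  def validation_gate(accuracy: float,
                      outcomes_by_group: dict[str, list[int]],
                      min_accuracy: float = 0.90,
                      min_parity_ratio: float = 0.80) -> bool:
      rates = {g: selection_rate(o) for g, o in outcomes_by_group.items()}
      parity_ratio = min(rates.values()) / max(rates.values())
      print(f"accuracy={accuracy:.2f}, parity_ratio={parity_ratio:.2f}")
      return accuracy >= min_accuracy and parity_ratio >= min_parity_ratio

  # Example: accurate overall, but group B is selected far more often
  approved = validation_gate(
      accuracy=0.93,
      outcomes_by_group={"group_a": [1, 0, 0, 0], "group_b": [1, 1, 1, 0]},
  )
  print("approved for production:", approved)  # False: parity ratio 0.33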


4. Change management

Similarities

  • Formal change approval
  • Impact analysis
  • Rollback planning
  • Version control
  • Risk evaluation for mergers and organizational changes

What’s different

  • Some AI changes happen continuously (learning systems, model updates)
  • Business users may influence AI behavior through prompts or configuration
  • Risk classification may change over time (especially under EU AI Act)

Audit focus shift
From: “Was the change authorized?”
To: “Was the AI behavior change assessed, approved, and monitored for risk impact?”
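
One way to operationalize this, sketched with illustrative categories: classify each change by whether it can alter AI behavior, and let that classification trigger a risk reassessment before approval:

  from enum import Enum

  class AIChangeType(Enum):
      CODE = "code"
      MODEL_RETRAIN = "model_retrain"
      FINE_TUNE = "fine_tune"
      PROMPT_UPDATE = "prompt_update"
      CONFIG = "config"

  # Illustrative rule: changes that can alter AI behavior trigger a
  # risk reassessment before approval, not just a standard code review.
  BEHAVIOR_CHANGING = {AIChangeType.MODEL_RETRAIN,
                       AIChangeType.FINE_TUNE,
                       AIChangeType.PROMPT_UPDATE}

  def requires_risk_reassessment(change: AIChangeType) -> bool:
      return change in BEHAVIOR_CHANGING

  for change in AIChangeType:
      print(change.value, "->", requires_risk_reassessment(change))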


5. Governance, risk, and compliance (GRC)

Similarities

  • Risk registers
  • Policies and standards
  • Management oversight
  • Board reporting
  • Objectives, targets, and KPIs

What’s different

  • AI risk spans legal, ethical, operational, and reputational domains
  • Accountability may cross multiple business owners
  • Regulators increasingly expect documented AI governance, not informal practices

Audit focus shift
From: “Is risk documented?”
To: “Is AI risk owned, monitored, and escalated like any other enterprise risk?”
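
For illustration, an AI-aware risk register entry might add fields like a single accountable owner, risk domains, and an EU AI Act classification on top of the familiar structure (field names are assumptions):

  from dataclasses import dataclass

  @dataclass
  class AIRiskEntry:
      risk_id: str
      description: str
      owner: str               # one accountable owner, even if impact crosses teams
      domains: list[str]       # legal, ethical, operational, reputational
      eu_ai_act_class: str     # e.g. "high-risk" - drives documentation duties
      escalation_path: str     # who is informed when thresholds are breached
      board_reported: bool

  entry = AIRiskEntry(
      risk_id="AI-004",
      description="Chatbot gives incorrect eligibility advice to customers",
      owner="Head of Customer Operations",
      domains=["legal", "operational", "reputational"],
      eu_ai_act_class="limited-risk",
      escalation_path="CRO -> Risk Committee",
      board_reported=True,
  )
  print(entry)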


6. Risk management: from AI risk analysis to AI impact assessment

Similarities

  • Uses existing enterprise risk management processes
  • Feeds into risk registers, control selection, and management reporting
  • Supports informed decision-making and prioritization
  • Aligns with familiar risk scoring and treatment approaches

What’s different

  • AI risk is not only technical — it includes legal, ethical, operational, and societal impact
  • The EU AI Act explicitly requires impact-based thinking, not just likelihood and severity
  • Impact may affect individuals, customers, employees, or third parties — not only the organization

Audit focus shift
From: “Have AI risks been identified and analyzed?”
To: “Has the organization assessed the real-world impact of AI on people, decisions, and outcomes — and adjusted controls accordingly?”

Why it matters
AI impact assessment connects governance to reality. It demonstrates that AI risk management is not theoretical, but grounded in how AI actually affects users and stakeholders.
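
A minimal sketch of impact-based scoring, assuming an illustrative weighting: the classic likelihood-times-severity product, plus an explicit uplift for every affected party outside the organization:

  # Classic risk scoring plus an explicit impact dimension: who is affected
  # outside the organization. Scales and weights below are illustrative.
  def ai_impact_score(likelihood: int,      # 1-5
                      severity: int,        # 1-5, impact on the organization
                      affected_parties: list[str]) -> int:
      base = likelihood * severity
      # External impact (customers, employees, third parties) raises priority
      external = [p for p in affected_parties if p != "organization"]
      return base + 5 * len(external)

  score = ai_impact_score(
      likelihood=3,
      severity=2,
      affected_parties=["organization", "customers", "job applicants"],
  )
  print("impact-adjusted score:", score)  # 6 base + 10 external = 16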


7. Internal audit

Similarities

  • Internal audit retains independence and assurance role
  • Uses standard audit planning, scoping, testing, and reporting
  • Leverages risk-based audit approaches already in place

What’s different

  • Audit scope must include AI-specific risks, not just systems that happen to use AI
  • Auditors assess both:
    • AI risk management (how risks are identified, assessed, monitored)
    • AI-related processes (design, deployment, monitoring, human oversight)
  • Evidence includes governance artifacts, not just technical logs

Audit focus shift
From: “Does the process operate as designed?”
To: “Does the process effectively govern AI behavior and risk across its lifecycle?”

Why it matters
Internal audit becomes a key assurance function for AI governance — validating not only control existence, but control effectiveness in managing evolving AI risks.
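
To illustrate the dual scope described above, a simple work-program structure might pair a risk-management test with a process test for each audit area (content is illustrative only):

  # Each audit area pairs a risk-management test with a process test,
  # and names the governance artifacts expected as evidence.
  AI_AUDIT_PROGRAM = {
      "risk identification": {
          "risk_mgmt_test": "AI risks recorded and scored in the register",
          "process_test": "New AI use cases trigger a risk intake step",
          "evidence": ["risk register extract", "intake workflow records"],
      },
      "human oversight": {
          "risk_mgmt_test": "Override thresholds defined for high-impact decisions",
          "process_test": "Overrides are logged with reviewer and rationale",
          "evidence": ["oversight policy", "decision/override logs"],
      },
  }

  for area, tests in AI_AUDIT_PROGRAM.items():
      print(area, "->", tests["evidence"])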


8. AI risk domains: quality, security, and intellectual property protection

AI-related risk should not be treated as a single abstract category.
For effective governance and auditable oversight, AI risk must be evaluated across three distinct but interconnected domains: quality, security, and intellectual property (IP) protection.

Quality risk

What it covers

  • Accuracy, reliability, and consistency of AI outputs
  • Bias, fairness, and unintended discrimination
  • Model drift and performance degradation over time
  • Fitness for purpose and alignment with business intent

Audit considerations

  • Defined quality criteria and acceptance thresholds
  • Ongoing performance monitoring and validation
  • Documented review and escalation when quality degrades
  • Evidence that human oversight exists for high-impact decisions

Audit focus
From: “Does the AI work?”
To: “Does the AI produce outcomes that are consistently reliable, explainable, and appropriate for its intended use?”
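
A minimal sketch of ongoing quality monitoring, with an assumed tolerance: compare recent accuracy to the validated baseline and escalate when degradation exceeds the threshold:

  # Minimal drift check: compare rolling accuracy to the validated baseline
  # and escalate when degradation exceeds a tolerance. Numbers illustrative.
  def check_quality(baseline_accuracy: float,
                    recent_accuracy: float,
                    tolerance: float = 0.05) -> str:
      drop = baseline_accuracy - recent_accuracy
      if drop > tolerance:
          return f"ESCALATE: accuracy fell {drop:.2%} below baseline"
      return "OK: within accepted quality threshold"

  print(check_quality(baseline_accuracy=0.92, recent_accuracy=0.85))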


Security risk

What it covers

  • Data integrity and confidentiality
  • Model tampering or poisoning
  • Unauthorized access to AI systems or outputs
  • Exposure through APIs, integrations, or third-party platforms

Audit considerations

  • Access controls for AI configuration, models, and data
  • Secure development and deployment practices
  • Monitoring for anomalous behavior and misuse
  • Incident response processes that explicitly include AI-related events

Audit focus
From: “Is the system secure?”
To: “Is the AI protected against misuse, manipulation, and unintended exposure throughout its lifecycle?”
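
As one illustrative misuse signal among many: flag callers whose request volume spikes far above their historical norm, which can indicate scraping or model-extraction attempts (the ratio threshold is an assumption):

  # Minimal misuse signal: flag callers whose request volume spikes far
  # above their historical norm. The spike ratio is an illustrative choice.
  def flag_anomalous_usage(history: dict[str, int],
                           today: dict[str, int],
                           spike_ratio: float = 10.0) -> list[str]:
      flagged = []
      for caller, count in today.items():
          normal = history.get(caller, 1)
          if count / normal >= spike_ratio:
              flagged.append(caller)
      return flagged

  print(flag_anomalous_usage(
      history={"svc-billing": 120, "user-42": 15},
      today={"svc-billing": 130, "user-42": 400},
  ))  # -> ['user-42']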


Intellectual property (IP) protection risk

What it covers

  • Use of proprietary or licensed data in training and fine-tuning
  • Leakage of confidential or copyrighted information through outputs
  • Ownership and usage rights of AI-generated content
  • Contractual and legal exposure related to AI models and datasets

Audit considerations

  • Clear rules for acceptable training data and prompts
  • Controls preventing disclosure of sensitive or proprietary information
  • Documentation of data sources, licenses, and usage rights
  • Alignment with legal, procurement, and contractual requirements

Audit focus
From: “Is data managed correctly?”
To: “Is intellectual property protected throughout AI development, deployment, and use?”
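
A sketch of one such control, with hypothetical registry entries: no dataset enters training or fine-tuning unless its source, license, and permitted use are documented:

  # Before a dataset is used for training or fine-tuning, verify that its
  # source, license, and permitted use are documented. Entries illustrative.
  DATA_REGISTRY = {
      "support-tickets-2024": {"license": "internal", "training_allowed": True},
      "scraped-forum-dump":   {"license": "unknown",  "training_allowed": False},
  }

  def approve_for_training(dataset: str) -> bool:
      record = DATA_REGISTRY.get(dataset)
      if record is None:
          print(f"{dataset}: not in registry - block until documented")
          return False
      if not record["training_allowed"]:
          print(f"{dataset}: license '{record['license']}' does not permit training")
          return False
      return True

  print(approve_for_training("scraped-forum-dump"))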


What this means for ISACA members

  • Risk management expands from risk analysis to impact awareness
  • Internal audit evolves from system auditing to decision-governance assurance
  • AI governance becomes auditable, defensible, and board-ready
  • Organizations stay ahead of regulatory and stakeholder expectations without reinventing audit practices

AI does not change the fundamentals of risk and audit.
It changes what needs to be examined, documented, and evidenced.
