AI Governance for SMEs: The Path to AI Act Compliance

The EU AI Act entered into force in 2024, and the bulk of its obligations apply from August 2026. Learn how, as an SME, you can systematically identify AI risks, integrate them into ISO 27001 and ISO 42001, and remain compliant.


87% of the security professionals surveyed cite AI-related security vulnerabilities as the fastest-growing cyber risk. This is according to the Global Cybersecurity Outlook 2026, an annual survey by the World Economic Forum of more than 300 executives and security experts worldwide.

At the same time, more and more SMEs are turning to AI-powered tools: chatbots in customer service, automated data analysis, and AI-based code generation. While large corporations are establishing their own AI governance teams, SMEs often lack the resources and expertise to manage AI risks in a structured manner.

The EU AI Act changes that. From August 2026, the bulk of its binding requirements will apply to all companies that develop, use, or distribute AI systems. Company size is irrelevant.

What the EU AI Act Regulates

The EU AI Act is the world’s first comprehensive AI regulation. It follows a risk-based approach and categorizes AI systems into four classes:

Unacceptable risk. Prohibited AI systems, such as social scoring or real-time remote biometric identification in publicly accessible spaces. These prohibitions have been in effect since February 2, 2025. Not directly relevant for most SMEs, but you should be aware of the boundaries.

High risk. This is where the strictest requirements apply. Affected are AI systems in areas such as personnel recruitment, creditworthiness checks, critical infrastructure, or education. The complete list is provided in Annex III of the Regulation. Comprehensive documentation, risk management systems, human oversight, and technical robustness are required.

Limited risk. AI systems such as chatbots are subject to transparency requirements. Users must know that they are interacting with an AI.

Minimal risk. Most AI applications fall into this category, such as spam filters or AI-powered recommendation systems. There are no specific obligations; voluntary codes of conduct are recommended.

Why "Users" Are Also Affected

Many SMEs assume that, as mere users of AI tools, they are not affected. This is incorrect. The AI Act distinguishes between providers and deployers, and deployers have specific obligations of their own: staff trained for human oversight, retention of automatically generated logs for at least six months, and ensuring that input data is relevant to the system's intended purpose.

For example: If you use an AI-based applicant tracking system, you are considered the deployer of a high-risk AI system, with all the consequences that entails.

First Step: The AI Inventory

Before you can manage risks, you need transparency. A structured AI inventory shows which AI systems are in use at your company, including those the IT department isn’t aware of.

This happens more often than you might think: Employees use ChatGPT, Copilot, or Midjourney on their own, without authorization and without a data protection review. In technical terms, this is called "shadow AI," meaning AI usage that bypasses official IT processes.

For each identified AI system, you should record the following (a minimal data model is sketched after this list):

System identification. Name, provider, version, area of use, and responsible department.

Risk classification. Which of the four AI Act categories does the system fall into? A comparison with Annex III provides clarity.

Data basis. What data does the system process? Is personal data involved? This is where the AI Act and GDPR directly overlap.

Decision-making scope. Does the system make autonomous decisions or does it support human decision-makers? The higher the level of autonomy, the stricter the requirements.

Supplier relationship. Who is the provider? What contractual guarantees exist regarding compliance, updates, and transparency?
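
How you capture these fields is up to you; a spreadsheet is perfectly adequate. For teams that prefer code, here is a minimal sketch in Python. All class and field names are illustrative choices, not terms mandated by the AI Act:

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskClass(Enum):
    # The four AI Act categories described above
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystemRecord:
    """One row of the AI inventory."""
    name: str                      # system identification
    provider: str
    version: str
    area_of_use: str
    responsible_department: str
    risk_class: RiskClass          # result of the Annex III comparison
    processes_personal_data: bool  # data basis (GDPR overlap)
    autonomy: str                  # "autonomous" or "decision support"
    contractual_guarantees: list[str] = field(default_factory=list)

# Example entry: a chatbot used in customer service
inventory = [
    AISystemRecord(
        name="Support Chatbot",
        provider="ExampleVendor",
        version="2.3",
        area_of_use="customer service",
        responsible_department="Support",
        risk_class=RiskClass.LIMITED,
        processes_personal_data=True,
        autonomy="decision support",
        contractual_guarantees=["DPA signed", "update commitments"],
    )
]
```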

Integration with ISO 27001 and ISO 42001

You don’t have to start from scratch. If you already operate an ISMS in accordance with ISO 27001, you have a solid foundation. And with ISO 42001, there has been a dedicated standard for AI management systems since December 2023.

What ISO 27001 Already Covers

ISO 27001:2022, with its risk-based approach, offers several Annex A controls that are directly applicable to AI risks:

A.5.1 Information security policies. Extend existing policies to include AI-specific regulations: Which tools are approved? What data may be fed into AI systems? Who approves the use of new applications?

A.8.9 Configuration management. AI systems require documented configurations: parameters, model versions, update cycles.

A.8.25 Secure development life cycle. Anyone who develops or customizes AI systems must meet requirements for secure development processes, testing, and validation.

A.5.21 Managing information security in the ICT supply chain. Third-party AI tools are part of your supply chain, which makes vendor risk management a central component.

What ISO 42001 Adds

ISO/IEC 42001 fills the gap left by ISO 27001 regarding AI-specific risks. Both standards use the same high-level structure and can therefore be implemented together without duplicating processes.

ISO 27001 secures your information. ISO 42001 secures your AI systems. Where ISO 27001 ends, ISO 42001 begins: with 39 AI-specific controls in Annex A. These include data quality and data provenance, bias monitoring and fairness, human oversight, model evaluation and validation, as well as transparency and explainability.

The standard scales with company size. Experience shows that companies with only a few AI systems can complete the process more quickly than corporations with hundreds of applications. Certifications are possible and are already being issued: IBM received certification for its Granite model in November 2025, and Darktrace was certified by BSI in July 2025.

Important: ISO 42001 is not currently a harmonized standard for direct AI Act compliance. However, it provides the structural "how," while the AI Act defines the regulatory "what." Combining both creates a robust governance foundation.

For those who want to dive even deeper: ISO 42005 expands the framework to include AI impact assessments—that is, the systematic evaluation of the effects AI systems have on individuals and society.

AI Risk Assessment as Part of Your ISMS

Integrate AI risks into your existing risk management process. For each identified AI system, assess the likelihood of occurrence and the impact (a simple scoring sketch follows this list):

Bias and discrimination. AI systems can make biased decisions. This is particularly critical in HR tools, customer scoring, or credit decisions.

Data breaches. AI systems that process personal data can inadvertently disclose it. One example is so-called prompt injection, where maliciously crafted inputs override a system's instructions and can cause it to reveal confidential information from its context or connected data sources.

Faulty outputs. So-called hallucinations—that is, fabricated but convincing-sounding responses from language models—can have business-critical consequences. This also applies to incorrect classifications in image recognition systems or faulty predictions.

Dependence on third-party providers. If a business process depends on an AI service that fails or changes its terms, an availability issue arises.

Lack of traceability. If you cannot explain why an AI system made a particular decision, you have a transparency and compliance issue—especially when the regulatory authority asks.
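
A classic likelihood-times-impact score is enough to fold these risks into an ISO 27001 risk register. A sketch in Python; the scales, thresholds, and example ratings are purely illustrative, not prescribed by any standard:

```python
# 5x5 risk scoring, as commonly used in ISO 27001 risk registers.

def risk_score(likelihood: int, impact: int) -> int:
    """Both inputs on a 1 (low) to 5 (high) scale."""
    return likelihood * impact

def risk_level(score: int) -> str:
    if score >= 15:
        return "high - treat immediately"
    if score >= 8:
        return "medium - plan treatment"
    return "low - accept and monitor"

# Example ratings: (likelihood, impact)
ai_risks = {
    "bias in HR screening tool": (3, 5),
    "prompt injection on support chatbot": (4, 4),
    "hallucinated figures in reports": (4, 3),
    "vendor discontinues AI service": (2, 4),
}

for risk, (likelihood, impact) in ai_risks.items():
    score = risk_score(likelihood, impact)
    print(f"{risk}: {score} ({risk_level(score)})")
```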

Implementation in 6 Phases

How can you effectively implement AI governance without overburdening your team?

Phase 1: Assessment (Weeks 1–2)

Create your AI inventory. Interview department heads, review IT procurement lists, and monitor network traffic for unknown AI services. A simple spreadsheet is sufficient to start with. The important thing is to get started.
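
For the network side, a pragmatic first pass is to scan proxy or DNS log exports for the domains of well-known AI services. A sketch, assuming a log export with one hostname per line; the domain list is illustrative and needs to reflect your own environment:

```python
# Scan a proxy/DNS log export for traffic to well-known AI services.
# The domain list and the one-hostname-per-line format are assumptions;
# adapt both to your proxy or DNS resolver.

KNOWN_AI_DOMAINS = {
    "chat.openai.com", "api.openai.com",
    "copilot.microsoft.com", "gemini.google.com",
    "claude.ai", "midjourney.com",
}

def find_shadow_ai(log_path: str) -> set[str]:
    hits = set()
    with open(log_path, encoding="utf-8") as log:
        for line in log:
            host = line.strip().lower()
            if any(host == d or host.endswith("." + d) for d in KNOWN_AI_DOMAINS):
                hits.add(host)
    return hits

if __name__ == "__main__":
    for host in sorted(find_shadow_ai("dns_hosts.log")):
        print("AI service contacted:", host)
```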

Phase 2: Risk Classification (Weeks 3–4)

Classify each identified system according to the AI Act’s risk framework. Prioritize high-risk systems and those that process personal data.
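
To make the Annex III comparison repeatable, a rough keyword match against the areas listed there can serve as a first filter. The keywords below are illustrative and incomplete; the result is a triage aid, and the legal assessment still needs a human:

```python
# First-pass classifier: flags systems whose area of use matches one of
# the Annex III high-risk areas. Not a legal determination.

ANNEX_III_KEYWORDS = {
    "recruitment", "hiring", "applicant",   # employment
    "credit", "creditworthiness",           # access to essential services
    "education", "exam", "grading",         # education and training
    "critical infrastructure",
}

def triage_risk_class(area_of_use: str, is_chatbot: bool = False) -> str:
    text = area_of_use.lower()
    if any(keyword in text for keyword in ANNEX_III_KEYWORDS):
        return "high (verify against Annex III)"
    if is_chatbot:
        return "limited (transparency duties)"
    return "minimal (document and move on)"

print(triage_risk_class("applicant tracking for recruitment"))  # high
print(triage_risk_class("customer service", is_chatbot=True))   # limited
```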

Phase 3: Policy Development (Weeks 5–6)

Create an AI usage policy with clear rules. Establish approval processes and define responsibilities. The policy must be practical; a 50-page document that no one reads is useless. The ENISA Guidelines on AI Security provide good guidance for its structure.

Phase 4: Technical Measures (Weeks 7–10)

Implement access restrictions for AI systems, logging and monitoring of AI usage, data classification for inputs, and automated compliance checks.
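
The data classification for inputs can be approximated with a pre-submission filter that blocks obviously sensitive content before it reaches an external AI service. A deliberately simple sketch using regular expressions; the patterns are illustrative, and a real deployment would use a DLP product or a dedicated classifier:

```python
import re

# Pre-submission filter for prompts sent to external AI services.
# The patterns below are illustrative and will not catch everything.

BLOCKED_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "internal label": re.compile(r"\b(confidential|internal only)\b", re.IGNORECASE),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the list of policy violations found in a prompt."""
    return [name for name, pattern in BLOCKED_PATTERNS.items()
            if pattern.search(prompt)]

violations = check_prompt("Summarize the confidential offer for max@example.com")
if violations:
    print("Blocked, found:", ", ".join(violations))  # email address, internal label
```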

Phase 5: Training and Awareness (Weeks 11–12)

Train your employees. Keep it practical—not just a compliance lecture. Which AI tools are permitted? What data should not be entered? Who should employees contact if they have questions?

Phase 6: Ongoing Monitoring

AI governance is not a one-time project. New tools enter the market, existing systems change, and regulatory requirements continue to evolve. Without ongoing monitoring and regular reviews, your compliance will quickly become outdated. Whether you manage this manually, via spreadsheet, or with a GRC platform depends on the size of your AI portfolio.
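
Even a spreadsheet-level process benefits from an automated reminder of which inventory entries are due for review. A minimal sketch; the six-month interval is a policy choice of your own, not a regulatory requirement:

```python
from datetime import date, timedelta

# Flag inventory entries whose last review is older than the review interval.
REVIEW_INTERVAL = timedelta(days=182)  # roughly six months, a policy choice

last_reviews = {
    "Support Chatbot": date(2026, 1, 10),
    "Applicant Tracking AI": date(2025, 6, 1),
}

def overdue(today: date) -> list[str]:
    return [name for name, reviewed in last_reviews.items()
            if today - reviewed > REVIEW_INTERVAL]

print("Review overdue:", overdue(date(2026, 3, 1)))  # Applicant Tracking AI
```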

Overlaps with NIS2 and GDPR

AI governance does not exist in isolation. The overlaps with existing regulations are significant and offer synergies.

GDPR. Any AI system that processes personal data is subject to the General Data Protection Regulation. Article 22 of the GDPR governs automated individual decision-making and gives data subjects the right not to be subject to a decision based solely on automated processing. The AI Act further tightens these requirements.

NIS2. If your company falls under the NIS2 Directive, AI systems are considered part of the infrastructure that must be protected. Risk management, incident reporting, and supply chain security also apply to AI components.

If you already operate an ISMS in accordance with ISO 27001 and work in compliance with the GDPR, you already have the necessary structures in place. These need to be expanded to include the AI dimension, not built from scratch.

Fines: What to Expect in Case of Violations

The AI Act provides for three levels of fines:

Violations of prohibited AI practices (Art. 5). Up to 35 million euros or 7% of global annual turnover.

Violations of high-risk requirements. Up to 15 million euros or 3%.

False or incomplete information provided to authorities. Up to 7.5 million euros or 1%.

The higher of the two amounts applies in each case.
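
A quick worked example of the "whichever is higher" rule, with a fictitious turnover figure:

```python
# Top fine tier (Art. 5 violations): the higher of the two caps applies.
turnover = 600_000_000  # fictitious global annual turnover in euros
fine_cap = max(35_000_000, 0.07 * turnover)
print(f"Maximum fine: {fine_cap:,.0f} euros")  # 42,000,000 euros
```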

Lower caps apply in some cases for SMEs. The amounts remain substantial nonetheless. Added to this is the personal liability of management, an issue that is already on many agendas due to NIS2 and is gaining further significance through the AI Act.

Conclusion

The EU AI Act is now a reality. The requirements must be implemented by August 2026. Those who start conducting a risk assessment now, integrate AI risks into existing management systems, and train their employees will be able to achieve this within a manageable timeframe.

The foundation is already in place: ISO 27001 provides the structure, ISO 42001 the AI-specific depth, and the AI Act the regulatory framework. Those who bring these three building blocks together will have a robust governance foundation—provided they start now.