The EU AI Act applies to your organization. Even if you have never operated in Europe.
Most US manufacturers, CPG companies, and PE-backed industrials have material EU AI Act exposure they have not yet mapped. The governance gap the Act reveals will not close on its own.
The Act reaches you based on what your AI does, not where your company is incorporated.
The EU AI Act's extraterritorial scope is the part most US executives miss. The regulation applies to any organization — regardless of headquarters, offices, or physical EU presence — where an AI system's output is used within the European Union.
A US manufacturer whose AI-assisted quality system influences decisions at an EU customer site is inside the Act's scope. A CPG company whose demand planning model outputs are shared with EU distribution partners is inside the scope. A PE-backed industrial whose portfolio company uses AI-assisted HR or safety systems with EU employees is inside the scope.
The question is not whether you operate in Europe. The question is whether your AI outputs touch decisions that affect EU markets, EU employees, or EU customers.
You sell into EU markets
Any AI system influencing pricing, product quality, allocation decisions, or customer service for EU buyers creates compliance exposure regardless of where the system runs.
You have EU employees or partners
AI systems used in hiring, performance management, safety monitoring, or operational decisions affecting EU workers or partners fall under high-risk classification requirements.
Your AI outputs travel
If a model trained or run in the US generates recommendations, scores, or decisions that flow to EU operations, the output is in scope even if the system never crosses a border.
High-risk classification is not a technical judgment. It is a governance obligation.
The EU AI Act organizes AI systems into risk tiers. High-risk systems face the most demanding requirements: documented risk management, human oversight protocols, named accountability, technical documentation, and registration with EU authorities.
For manufacturing and industrial organizations, high-risk classification is more common than most legal or technology teams expect. The Act specifically names critical infrastructure, workplace safety systems, employment decisions, and supply chain systems influencing significant operational decisions.
Non-compliance is not a technical violation. It is a governance failure with financial consequences: fines of up to 35 million euros or 7% of global annual turnover, whichever is higher, for the most serious violations.
What triggers high-risk classification
- AI systems used in employment, worker management, or safety-critical operations
- Supply chain and logistics AI influencing significant decisions affecting EU operations
- AI used in quality control for products entering EU markets
- Demand planning and allocation systems with EU distribution exposure
- AI systems where output drives decisions affecting worker conditions or fundamental rights
What compliance requires
- Complete inventory of AI systems in production
- Risk classification for each system
- Named human accountability for AI-influenced decisions
- Documented escalation and override protocols
- Board-level oversight and reporting structure
- Technical documentation sufficient for regulatory audit
- For US companies without EU presence: appointment of an authorized EU representative
The deadline may shift. The governance requirement does not.
The core EU AI Act framework became enforceable in August 2025. Full high-risk system requirements are scheduled for August 2, 2026.
A Digital Omnibus proposal currently under EU review may extend the standalone high-risk deadline to late 2027 for organizations that can demonstrate good-faith compliance progress. That proposal has not been finalized.
Two things are certain regardless of the final date.
First, EU customers, partners, and counterparties are already requesting evidence of AI governance readiness. The market clock started before the regulatory clock. Organizations that cannot demonstrate governance maturity are losing conversations they do not know they are in.
Second, the governance structures the Act requires — named accountability, human oversight, decision audit trails, board reporting — are not compliance overhead. They are the conditions under which AI investments actually scale and deliver value.
The organizations that treat the EU AI Act as a governance building opportunity rather than a compliance deadline will be better positioned regardless of when enforcement lands.
Governance design. Diagnostic assessment. Board-ready output. One engagement.
Hawksroost and VTCDO work together as a single integrated engagement — governance architecture from the board level, diagnostic tools that reveal the operational reality underneath it.
AI Governance Assessment
Evaluates your governance and accountability layer: AI use policy, system inventory, named decision ownership, escalation protocols, and board reporting structure. Output includes a risk register, regulatory exposure assessment, and a defined 90-day action roadmap.
AI Readiness Assessment
Scores your organization across nine dimensions of AI and data readiness. Provides the context your board needs to understand AI risk as a business risk, not a technology risk.
EU AI Act Advisory Engagement
Hawksroost delivers the human layer: board-ready governance framework design, EU AI Act exposure mapping, named accountability structure, escalation and oversight protocols, and a board summary your leadership team can act on.
AI Governance Assessment · AI Readiness Assessment · EU AI Act Advisory and Board Governance Framework
The credential stack for this work is specific. Most advisors have one piece of it.
This is not theoretical governance advice. Jay Hawkinson has held the accountability position this work is designed to support — as the first functional Chief Data and AI Officer at Valmont Industries and head of a 60-person data and AI organization at Lamb Weston. He has seen what happens when governance is built correctly before AI scales, and what happens when it is not.
The EU AI Act asks boards and executives to govern AI with the same rigor they apply to financial controls. That requires someone who understands both the boardroom and the operating reality underneath it.
Board governance — premier director certification program
Carnegie Mellon University Heinz College — responsible AI governance for boards
CERT at Carnegie Mellon University — risk and threat governance
Valmont Industries · Lamb Weston · Belden · AFL
Invitation-only community of senior technology executives
Annual recognition of top data and analytics leaders
"Jay has a unique ability to break down data and AI topics and make them relatable to the company at large. He shares his knowledge freely and selflessly — I have personally been the beneficiary of his generosity."
Joshua Dixon · President, CEO and Board Director · Industrial Manufacturing
15-year professional relationship across Belden and Valmont Industries
If your board is starting to ask AI governance questions and the room goes quiet, this is the conversation to have.
Hawksroost engages selectively. The first conversation is a fit check, not a sales call. We discuss your context, your EU exposure, and whether there is a genuine match. No obligation.
Start a Confidential Conversation · (877) 417-1184 · euaiact@hawksroost.com