
EU SLAPS BAN ON 'UNACCEPTABLE' AI USES: COMPLIANCE CRACKDOWN ARRIVES, INFLICTING HEAVY FINES ON OFFENDERS

As the first compliance deadline for the European Union's AI Act approaches on February 2, 2025, tech companies across Europe are scrambling to meet the new legislation, which establishes the first comprehensive set of rules for artificial intelligence practices. The Act, in force since August 1, 2024, represents an ambitious effort by lawmakers to regulate this transformative technology, and its implementation will significantly influence how AI systems are built and operated in the future.

The groundbreaking legislation classifies AI systems by risk level, designating them as Minimal, Limited, High, or Unacceptable risk. While most AI applications fall under the first three categories, practices deemed 'Unacceptable' are banned outright. These forbidden practices include social scoring by AI, manipulative and exploitative AI, crime-predicting AI, AI systems that use biometrics to infer personal attributes, AI that collects real-time biometric data in public places for law enforcement, AI that interprets emotions at work or school, and AI that builds facial recognition databases by scraping images from the internet or CCTV footage.
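The four tiers can be read as an ordered scale, with the Unacceptable category sitting above the rest and triggering an outright ban. The sketch below is purely illustrative: the RiskTier enum and the example systems mapped to each tier are assumptions made for demonstration, not terminology or classifications taken from the Act itself.

    from enum import IntEnum

    class RiskTier(IntEnum):
        """Hypothetical ordering of the AI Act's four risk categories."""
        MINIMAL = 1
        LIMITED = 2
        HIGH = 3
        UNACCEPTABLE = 4  # prohibited outright

    # Illustrative (assumed) mapping of example systems to tiers.
    example_systems = {
        "spam filter": RiskTier.MINIMAL,
        "customer-service chatbot": RiskTier.LIMITED,
        "CV-screening tool": RiskTier.HIGH,
        "social scoring system": RiskTier.UNACCEPTABLE,
    }

    # Flag anything that falls in the banned tier.
    banned = [name for name, tier in example_systems.items()
              if tier is RiskTier.UNACCEPTABLE]
    print(banned)  # ['social scoring system']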

Penalties for non-compliance are steep: companies found to be deploying an AI application deemed unacceptable under the EU's thresholds face fines of up to €35 million (~$36 million) or up to 7% of their prior fiscal year's annual revenue, whichever is greater.
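To make the 'whichever is greater' cap concrete, the minimal sketch below applies the two thresholds described above; the €35 million and 7% figures come from the penalty provisions as reported here, while the function name and the example turnover figure are illustrative assumptions.

    def max_fine_eur(prior_year_turnover_eur: float) -> float:
        """Upper bound on a fine for a prohibited AI practice:
        the greater of EUR 35 million or 7% of prior-year revenue."""
        FIXED_CAP_EUR = 35_000_000   # EUR 35 million
        TURNOVER_SHARE = 0.07        # 7% of the prior fiscal year's revenue
        return max(FIXED_CAP_EUR, TURNOVER_SHARE * prior_year_turnover_eur)

    # Illustrative example: with EUR 2 billion in prior-year revenue,
    # 7% is EUR 140 million, which exceeds the EUR 35 million floor.
    print(f"{max_fine_eur(2_000_000_000):,.0f}")  # 140,000,000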

While all organizations must be fully compliant by February 2, 2025, another crucial deadline looms in August, when enforcement provisions take effect. Non-compliant companies could face severe penalties, with potential sanctions having far-reaching impacts on AI-driven businesses.

In a proactive move last September, more than 100 companies signed the EU AI Pact, a voluntary agreement committing them to start applying the principles of the AI Act ahead of its legal deadlines. Notably absent from the signatories, however, were tech giants such as Meta and Apple.

While the AI Act imposes stringent restrictions, there are exceptions: law enforcement agencies, for instance, are permitted to use certain biometric systems in public places where doing so helps locate abduction victims or averts imminent threats to life and safety.

However, the future of AI regulation in the EU remains unclear. The European Commission is expected to publish additional guidelines in early 2025, but the interplay between the AI Act and existing laws such as the General Data Protection Regulation (GDPR), the revised Network and Information Security Directive (NIS2), and the Digital Operational Resilience Act (DORA) is still uncertain.

As this first compliance deadline fast approaches, the world is watching to see how the EU's pioneering legislation will affect both local and global AI industries. The AI Act represents a comprehensive effort to harness AI's potential while curbing its known and hypothetical risks, and its enforcement could set a precedent for AI policymaking worldwide, shaping how the technology develops in the years to come.