The EU AI Act – The Shape of Things to Come

Authored by Jon Arnold from J Arnold & Associates

April 1, 2024
The EU AI Act and the contact center

In 1984, the US telecom sector was deregulated, which was arguably the biggest spark for much-needed innovation in the communications technology world – the spark from which the Internet emerged, along with everything that has followed since. For today’s generation, it may be difficult to understand how profound the breakup of AT&T was back then. Telecom was regulated for good reason, but when those monopoly powers resulted only in high prices and no innovation for subscribers, something had to give.

Fast-forward 40 years – during which time communications technology has remained largely unregulated – and we have arrived at another pivotal crossroads. Back in 1984, the world was a simpler place, with few concerns about the potential downsides of “high tech”. The pace of innovation that started with the Internet was unprecedented, and it has only accelerated with the rise of AI. Technology plays a far greater role in our lives today, and the mythical “man vs. machine” paradigm is becoming very real in 2024.

"...the mythical “man vs. machine” paradigm is becoming very real..."

The challenge of balancing technology innovation and regulation

This brings us to the EU AI Act, which was approved by the EU Parliament in March 2024. The EU has been at the forefront of trying to strike a balance between responsible regulation and maintaining a healthy environment for innovation. Starting with GDPR in 2016, the AI Act is the latest foundational move to provide guardrails in Europe before technology becomes uncontrollable.

"...trying to strike a balance between responsible regulation and maintaining a healthy environment for innovation."

Complementing the AI Act are the Digital Markets Act and the Digital Services Act; each covers different ground, but together they add up to a holistic approach aimed at keeping technology progress in check, especially in the shadow of Big Tech. This may seem at odds with the Silicon Valley model of unbridled innovation, but one doesn’t need to look far for indications that the bad guys can easily get the upper hand in today’s Wild West of AI. We’ve all seen scams and deepfakes, and it’s getting increasingly difficult to distinguish them from legitimate forms of communication.

As a technology analyst, my world is communications technology: how vendors are trying to help workers be more productive, and how organizations can provide better customer experiences. While the EU AI Act very much impacts this world by making these technologies more trustworthy, there are bigger issues at stake.

There have been valid and growing concerns about the use of technologies – especially AI – to undermine trust in the media, our institutions, and democracy itself. Nowhere is that more concerning than in politics: many democracies are holding elections this year, and never before has technology been so well positioned to influence the outcomes.

With so many of those democracies based in the EU, it’s not surprising to see the AI Act being passed now, but there is plenty to consider within communications technology alone. In that context, it’s important to ask who the AI Act is trying to protect. To illustrate, here are four audiences to consider.

1. End users – consumers and employees

First and foremost, end users need protection, and the AI Act tries to provide it in a couple of ways. Before citing examples, it’s worth noting that “end users” goes beyond consumers (citizens, really), where most of the regulatory focus will be. However, employees also need protection, as they are increasingly engaging with AI-driven applications in the workplace.

One way of providing this protection is to establish trust by requiring transparency from AI providers. In this regard, the regulations serve to ensure that end users – at home or at work – are made aware when they are using AI-driven applications or consuming AI-generated content. The intent is to remove the onus on end users to gauge on their own whether they are interacting with human or AI-based sources. It’s too early to tell how effectively AI providers will be able to do this, but as the AI Act takes hold, this will be an area to watch for much-needed innovation.

A second area will be the establishment of an EU AI Office, not just to oversee regulatory compliance and enforcement, but also to give consumers a place to report complaints, abuses, and violations. This form of public sector consumer protection isn’t new, but it’s especially important here given how new and poorly understood AI is. With AI touching every facet of our lives, we all need to know that avenues exist to be part of the solution – not just as consumers, but as citizens as well.

2. AI applications providers

For regulations to be effective, providers need to know the boundaries. With AI evolving so quickly, this will always be a moving target, but the AI Act has a two-tier model built around the level of risk posed to the public. In short, the tiers are delineated by the level of impact a provider can have: lower-risk providers are subject to fewer restrictions and lighter penalties. This tier would mainly be the realm of smaller AI providers that don’t have the resources to create comprehensive language models capable of impacting the public en masse.

The second tier applies more to the larger AI players with “high impact capabilities” that could create “systemic risk”. Parsing out the specifics here is better left to the regulators, but the main idea is to create a more transparent environment, where AI players must show how their models are built and trained, and how they stay on the right side of copyright laws. While this clearly seems like the right thing to do, in the absence of regulation AI players tend to view these elements as competitive differentiators, making it nearly impossible to hold them accountable when bad things happen.

3. The big technology players

This audience is an extension of the above, but applies to a subset of AI players – namely, those in the Big Tech camp – and is governed mainly by the Digital Markets Act, which closely complements the AI Act. These are the players who, by virtue of their scale and resources, have “high impact capabilities”, and the Digital Markets Act rightly views them as “gatekeepers”.

Not only do they have unmatched reach with consumers – and businesses as well – but they can – and do – exhibit anti-competitive behaviors to keep smaller players on the sidelines. Both AI and cloud are volume businesses, and that scale serves as a barrier to entry, giving Big Tech players an unfair competitive advantage to maintain market dominance in the EU.

In particular, six players have been named as “gatekeepers”: five US-based, and one from China. In alphabetical order, they are Alphabet, Amazon, Apple, Meta and Microsoft, along with ByteDance from China (parent of TikTok).

Aside from the transparency issues cited above, the Act focuses here on ensuring a more open market, where smaller players can reach consumers and businesses and offer the same services as the Big Tech players. Since the latter control access to the EU market – hence “gatekeepers” – these regulations serve to ensure that consumers and businesses have free choice beyond Big Tech’s offerings, and especially to ensure that EU-based AI providers have fair access to their home market.

To back that up, hefty financial penalties can be imposed, ranging from 3% to 7% of annual global turnover, depending on the severity of the violation and the size of the provider. By having a range in place, this deterrent can be effective across the board, as 7% of turnover represents a massive cost of non-compliance for a Big Tech player.
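To put those percentages in perspective, here is a minimal sketch of the exposure at each end of the range. The 3% and 7% rates are the ones cited above; the turnover figures are hypothetical, used purely for illustration.

```python
# Illustrative only: rough maximum-penalty exposure under the
# percentage-of-turnover model described above. The turnover figures
# below are hypothetical, not actual company financials.

def max_penalty(annual_global_turnover_eur: int, rate_percent: float) -> float:
    """Return the maximum fine for a given annual turnover and penalty rate."""
    return annual_global_turnover_eur * rate_percent / 100

# A hypothetical smaller provider at the lower (3%) end of the range...
smaller = max_penalty(50_000_000, 3)        # EUR 1.5 million
# ...versus a hypothetical Big Tech-scale player at the 7% end.
big_tech = max_penalty(200_000_000_000, 7)  # EUR 14 billion
```

Even at the lower rate, the fine scales directly with revenue, which is what makes a turnover-based deterrent effective across providers of very different sizes.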

4. EU economy

In today’s highly polarized and politicized world, it’s not surprising to see a strong geopolitical tone to the AI Act, along with the Digital Markets and Digital Services Acts. These regulations are largely driven by self-interest – doing what’s best for the EU. Aside from the long-standing dominance of the US giants – Big Tech – EU AI players must also compete with China-based companies like ByteDance, so these new regulations reflect a protectionist trend that has been underway since well before the recent rise of AI.

Ultimately, the EU AI Act is about giving EU-based AI players a fighting chance against these US and Chinese tech giants. This is a tricky balance to strike for two reasons. First, the regulations cannot be so onerous as to constrain the innovation that smaller EU AI players need to compete in a global market. 

"...the EU AI Act is about giving EU-based AI players a fighting chance..."

Second, with AI evolving faster than anyone can track, this new regulatory framework needs to be agile and flexible enough to adapt to changing conditions. The EU is a massive economic bloc with complex governance challenges, and if there is too much bureaucracy, the regulations will quickly become outdated and impossible to enforce.

Implications for IT leaders

The EU AI Act represents an end-to-end, holistic approach to ensuring the transparency, responsibility and accountability needed to make AI a net benefit for both consumers and providers. The need for guardrails and a competitive playing field is widely recognized, but it’s also clear that self-regulation alone will not be sufficient.

"The EU AI Act represents an end-to-end, holistic approach...to make AI a net benefit for both consumers and providers."

For North America, however, the AI Act will be more a model of leadership – a modern approach to regulation in a digital world – than a solution for that market. US-based IT leaders may welcome what the AI Act is striving for, but it won’t be their reality any time soon.

The dynamics of the US economy and political landscape present a different set of challenges that the EU AI Act isn’t well suited to address. The 2023 US Executive Order on AI has similar aspirations, but it is driven by different priorities and will take some time to come into effect. As such, IT leaders in the US will be operating in a less certain environment, meaning a more cautious approach to AI will be needed.

Regardless of how far or fast IT wants to move with AI, it should be clear from this analysis that a responsible approach requires an organization-wide effort. Unlike deploying new software or network hardware for a particular business unit such as the contact center, AI connects everything to everything else, so its impact is not contained to a particular department or line of business.

Issues such as compliance risk, data security and personal privacy are central to all forms of AI, and they cannot be left to others to address on a piecemeal, ad hoc basis. With the EU AI Act now passed, it serves as an important point of reference for how IT should be thinking about these things. The framework may not be in place yet outside the EU, but it represents a well-considered model for IT leaders to work towards.

Jon Arnold

As Principal of J Arnold & Associates, Jon is an independent research analyst providing thought leadership and go-to-market counsel with a focus on the business-level impact of disruptive communications technologies.


