February 22, 2024

Why Europe shouldn’t allow AI companies to put profits before people

The soap opera-like ouster and quick return of OpenAI CEO Sam Altman provided plenty of fodder for tongue-in-cheek comments online, but it also exposed some serious fault lines. One critique I liked was: “How are we supposed to solve the AI alignment problem when aligning just a few board members poses an insurmountable challenge?”

As the company behind ChatGPT, OpenAI may be one of the most recognizable names, but artificial intelligence is bigger than any one company. It is a technology with enormous consequences, yet it remains almost completely unregulated. The EU has an opportunity to meaningfully address that challenge – but not if it bends the knee to Big Tech’s continued onslaught. Encouragingly, Members of the European Parliament have so far stood firm despite enormous pressure, trying to save this landmark legislation. This weekend, EU Commissioner Thierry Breton spoke out against what he called selfish lobbying efforts by France’s Mistral AI and other AI companies that do not serve the public interest. These lawmakers need and deserve our support at this crucial time.

Europe is ready to take the lead in a world that is waking up to the need to regulate AI. From the US Executive Order to the recent AI Safety Summit hosted by Britain at Bletchley Park, countries everywhere recognize that if we want to share the benefits of this incredible technology, we must limit its risks. The EU AI Act will be the first comprehensive legal framework that aims to achieve exactly this, but a handful of tech companies are holding the political process hostage and threatening to sink the ship unless their systems are exempted from regulation. Capitulating will damage European innovation, put profits before public safety and be an insult to democracy. Our legislators should not bend the knee.

On November 10, negotiations broke down after France and Germany pushed back against the proposed regulation of ‘foundation models’. Together with Italy, they then released a non-paper articulating their demands, asking that companies building foundation models be subject only to voluntary commitments. Foundation models are general-purpose machine learning systems, such as OpenAI’s GPT-4 (the basis for ChatGPT), which can be applied to a variety of downstream applications and functions. Regulating these models would force AI companies to ensure they are safe before deploying them, rather than waiting to act until after dangerous systems have been released, which poses a clear risk of public harm. Given growing concerns about the potential risks posed by these advanced systems – including mass disinformation, enabled bioterrorism, hacking of critical infrastructure, large-scale cyberattacks and more – it is a prudent provision to include.


We have seen firsthand the need for codified legal protections, rather than relying on corporate self-regulation. For example, the psychological damage that social media inflicts on young women and girls has become increasingly apparent. The companies operating the platforms and channels that host harmful content have been aware of this harm for years, but failed to act. Voluntary commitments are neither sufficient nor reliable. If we want to prevent people from getting hurt, we need prevention instead of cure. We need enforceable safety standards and risk mitigation for high-performance AI from the start.

So why the objection? Holdouts claim this will hinder innovation for companies looking to adapt and adopt AI, but this is simply not true. Regulating foundation models is essential for innovation, as it will protect smaller European users downstream from compliance requirements and from liability if something goes wrong. Only a handful of very well-resourced companies develop the most impactful foundation models, but thousands of small companies in the EU have already applied them to concrete business applications, and many more plan to do so. We need balanced obligations across the value chain; the broadest shoulders must bear the greatest burden.

This is reflected in the composition of the opposing sides. The European DIGITAL SME Alliance, with 45,000 business members, wants to regulate foundation models. Two European AI companies (France’s Mistral AI and Germany’s Aleph Alpha) and a handful of large American companies do not. Their argument is also not supported by practical experience. My own country, Estonia, is subject to exactly the same EU rules and regulations as Germany, yet has a vibrant and thriving startup ecosystem. If those opposed to the regulation of foundation models, like Mistral’s Cedric O, want to point fingers, they should look elsewhere. While opponents of regulation claim to protect the EU’s innovation ecosystem, such a step back would more likely shift the financial and legal burden from large companies to startups, which have neither the ability nor the resources to change the underlying models.

France and Germany also argue that regulating foundation models will undermine Europe’s ability to compete in AI on the global stage. This argument does not hold up. The proposed tiered approach, already a compromise between the Parliament and the Council of the EU, allows for targeting so that competitors to large AI companies can emerge without heavy restrictions. European lawmakers should tune out the fear-mongering of Big Tech and its newest allies, and remember the purpose of the law: to achieve a fair and balanced framework that enables innovation while preventing harm. It should not become a legislative tool to anoint a few Silicon Valley-backed AI leaders with sectoral supremacy and zero requirements, while preventing thousands of European companies from maximizing the technology’s potential.

Parliament supports regulating foundation models, as do many in the Commission and Council. The business community agrees, as do the thousands of AI experts deeply concerned about the dangers of these increasingly powerful systems if left unchecked. A handful of tech companies must not be allowed to hold our political process to ransom by threatening to blow up this historic legislation and throw away three years of work. They must not be allowed to put their profits before our safety, and their market conquest before European innovation.
