Artificial intelligence has become commonplace in the lives of billions of people around the world. According to research, 56% of companies have adopted AI in at least one function, especially in emerging countries. That is six percentage points higher than in 2020. AI is being used for everything from optimizing service operations to recruiting talent. It can capture biometric data and is already serving medical applications, the justice system, and finance, making important decisions in people’s lives.
However, one major challenge remains: regulating its use. Is a global consensus possible, or is a fragmented regulatory environment inevitable?
The concept of AI evokes the fears of Orwell’s novel “1984” and its vision of “Big Brother is watching you”. Algorithm-based products that violate human rights are already being developed. Now is the time to introduce standards and regulations to protect society from surveillance and other nightmare scenarios. The US and EU can take the lead on this issue, especially since they have historically shared principles regarding the rule of law and democracy. But on either side of the Atlantic, different moral values underpin those principles, and they do not necessarily translate into similar practical rules. The US emphasizes procedural fairness, transparency and non-discrimination, while the EU emphasizes data privacy and fundamental rights. The challenge, then, is to find common rules for digital services operating across continents.
Why AI ethics aren’t good enough
Not all uses of AI are palatable or built on palatable values. Even AI subject only to self-declared ethical safeguards can be discriminatory and destructive. Consider for a moment the AI that underlies China’s so-called “social credit” system. It ranks China’s population, and those deemed untrustworthy are punished for everything from jaywalking to playing too many video games, with penalties including the loss of rights such as unrestricted internet speeds.
Imposing mandatory rules on AI will help prevent the technology from violating human rights. The EU’s AI Act, intended to address these kinds of problems, is the first law of its kind from a major regulator. Other jurisdictions, such as China and the UK, have also entered the regulatory race to have a say in shaping the technology that will dominate our lives in this century.
Why global regulation is a challenge
The AI Act sorts applications into three risk categories. Systems such as China’s social credit application pose an “unacceptable risk” and are banned outright. “High-risk” applications, such as resume-scanning tools, must comply with legal requirements designed to prevent discrimination. Finally, systems considered neither high-risk nor unacceptable are left largely unregulated.
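The three-tier scheme can be pictured as a simple classification table. The sketch below is purely illustrative: the tier names paraphrase the article, the example use cases and the `classify` helper are assumptions for demonstration, not the legal text of the AI Act.

```python
from enum import Enum

class RiskTier(Enum):
    """The three AI Act tiers as described in the article (paraphrased)."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "allowed only under strict legal requirements"
    MINIMAL = "largely unregulated"

# Hypothetical mapping of example use cases to tiers,
# following the article's examples.
EXAMPLE_CLASSIFICATION = {
    "social_scoring": RiskTier.UNACCEPTABLE,  # e.g. a social credit system
    "resume_screening": RiskTier.HIGH,        # must meet anti-discrimination rules
    "spam_filter": RiskTier.MINIMAL,          # outside the first two tiers
}

def classify(use_case: str) -> RiskTier:
    """Return the tier for a known example use case; anything not
    explicitly listed falls into the default, lightly regulated tier."""
    return EXAMPLE_CLASSIFICATION.get(use_case, RiskTier.MINIMAL)
```

The default-to-minimal behavior mirrors the structure of the Act as described above: only enumerated categories attract obligations, and everything else is left alone.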
Regulation is necessary, but neither the US nor the EU can impose it alone, and reaching a global consensus on the values underpinning such regulation is unlikely. Even within the EU and US, challenges and disagreements exist: in some countries, domestic rules conflict with regional approaches. Similarly, if the EU and US do not work together, discord could fragment the world’s digital infrastructure.
Discovering common value-based principles
We need to agree on basic principles: what we expect from AI and what its values should be. Implicit values operate even when principles are not explicitly stated; science is value-laden and culturally situated, and algorithms may incorporate racist or otherwise unfair discrimination. Some research advocates replacing implicit bias with the principles of empathy, self-control and duty. Justice, fairness and human rights, though vague and culturally dependent, are also important values on which common principles can rest.
Some researchers advocate stakeholder engagement, which is important for building empathy as an underlying principle. It is essential to involve people who have previously been excluded from the AI regulatory process but who are affected by its consequences.
To move forward, it is important to set the right principles. Strong leadership is required, but even more important is the development of clear technical rules that can be effectively enforced.
Who Should Lead AI Standardization?
Technical standardization bodies are leading the regulation of AI, through international associations such as IEEE and ISO, national bodies such as NIST in the US, and CEN, CENELEC, AFNOR, Agoria and Dansk Standard in Europe. In this landscape, one of the key issues is the extent of government involvement. There are concerns about politicians’ ability to understand the technology and make complex regulatory decisions, but governments need to be involved if the technology is to remain under democratic control.
This is very important in a democracy because of the risks associated with concentrated power. Great power can be abused, and Silicon Valley’s major tech industry players have a disproportionate amount of influence over standard setting. Some of them regard the EU’s human-rights-centric speech regulations as an unacceptable constraint on First Amendment freedoms. When a self-proclaimed “free speech absolutist” proposed to buy Twitter last April, his stated policy threatened to loosen its content moderation, putting it at odds with new European moderation rules that introduce algorithmic accountability requirements for large platforms like Twitter.
The application and optimization of technical standards requires collaboration among parliamentarians, policy makers, academics and engineers, and the support of various stakeholder groups such as businesses, citizens and human rights groups. Without this balance, Big Tech lobbyists and geopolitics will have a disproportionate amount of influence.
Not all is lost
Despite all the challenges, there is hope. The US experience may give the impression that governments cannot improve society through regulation, but major paradigm shifts have occurred and been addressed before. Regulation takes space, time and energy. Societies have had to adapt to technology in the past: governments adapted to rail infrastructure and oil, whose emergence brought similar challenges.
Finally, counterintuitive as it may seem, the West should pay attention to how China plans to regulate AI. A recent Chinese law on algorithm recommendation services aims to embed mainstream Chinese values into “Made in China” AI systems, which will surely be sold and used worldwide. It is time for the United States and the EU to defend liberal democratic values and human rights by promoting and funding transatlantic research and development programs that may lead to digital technologies that not only align with our values but also positively enhance our humanity.
Article based on talks by the following professors at the “Transatlantic Dialogue on Humanity and AI Regulation” conference at HEC Paris in May 2022: David Restrepo Amariles, HEC Paris; Gregory Lewkowicz, Université Libre de Bruxelles; Janine Hiller; Anjanette Raymond, Scott Shackelford and Isak Asare, Indiana University; Winston Maxwell, Telecom Paris; Roger Brownsword, King’s College London; Carina Prunkl and Rebecca Williams, Oxford University; Kevin Werbach, UPenn Wharton Business School; Philip Butler, Iliff School; Gregory Voss, Toulouse Business School; Robert Geraci, Manhattan College; Martin Ebers, University of Tartu; Ryan Calo, University of Washington; Margaret Hu, Penn State University; Aaron McKain, Divya Siddarth and Julio Ponce, North Central University; Joost Joosten, University of Barcelona; Pablo Baquero, HEC Paris; Nizan Packin, University of Haifa; Konstantinos Karachalios, IEEE; and the Head of the AI Initiative at the OECD.
David Restrepo Amariles is Associate Professor in Data Law and Artificial Intelligence at HEC Paris.