Geneviève Castagnet and Gérald Santucci: AI Safety and Regulation: If Only Cultural Differences Were Acknowledged Rationally?

  • Geneviève: Head of AI Ethics, SNCF
  • Gérald: President of the European Education New Society Association (ENSA)

We live in a time of unparalleled global turmoil. From the impacts of climate change, biodiversity loss and ecosystem collapse, to pandemics, involuntary migration, technological acceleration, cyber-attacks, geopolitical conflicts, societal polarization, and the spread of artificial intelligence (AI) misinformation and disinformation, today’s leaders in government, industry and civil society are confronted with entirely new categories of challenges.

In this context, the rapidly evolving agenda of issues that touches upon various aspects of AI development and deployment requires the close attention of all people who are concerned with its safety and governance.

In the mid-2000s, three disruptive elements converged to create the AI boom and, consequently, its ubiquity and its inherent risks: algorithms known as convolutional neural networks (CNNs) met the power of modern graphics processing units (GPUs) and the availability of big data. Broadly speaking, Europe is relatively strong in algorithms, the U.S. is relatively strong in chips and software, and China is relatively strong in big data. These distinct national strengths are currently a reason for fierce competition; we believe they should instead be a reason to foster coopetition.