Some liken AI to social media, which also fundamentally changed the way we communicate and connect with each other. Just like social media, AI could create a host of opportunities for positive developments. But it would be foolish to pretend there are no drawbacks.
ISO has a proven track record of doing exactly this, and ISO/IEC 42001 is evidence that AI is high on ISO’s agenda today. As the world’s first AI management system standard, ISO/IEC 42001 addresses the unique challenges AI poses, such as ethical considerations, transparency, and continuous learning. The standard is designed for entities providing or using AI-based products or services, ensuring the responsible development and use of AI systems.
Now, we have a chance to do this again. Artificial intelligence (AI) isn’t the first technology to affect day-to-day life for people all over the world, and it certainly won’t be the last. As a university professor, the founder of a company, and a long-standing leader in international standards development, I have a fortunate vantage point. These perspectives give me a clear view of both the current applications of AI and its promise. To me, there’s no doubt that responsible governance is the only way to deliver on the potential of AI while avoiding its negative side effects.
The main challenge is that AI is currently evolving along many tracks at different speeds. But the challenges and potential risks of AI are global. This calls for inclusive, fair, and flexible solutions. To bring all of these streams together and move forward responsibly, we must gather all stakeholders from all over the world around the same table.
By taking into account all voices, ISO consistently works toward building International Standards that are inclusive and, most important, flexible. From the humble JPEG to global telephone networks and broadcasting systems, many of today’s technologies wouldn’t have been possible without standards.
For the last 30 years, the JPEG image format has been a staple for the internet’s billions of users. While the technologies used to display images have evolved tremendously during the past few decades, the JPEG format is still used everywhere. This is a great example of what can happen when a new technology develops under consensus-based, responsive, and inclusive governance.
We stand on the brink of a new world powered by AI technologies. This provides an opportunity to minimize global risks by listening to all voices equally. To deliver on the promise of AI, we must act fast. Being static is not an option.
That governance must encompass education, technology, and regulation. But most important, it must be founded on inclusive and reliable International Standards.
Governing the future
Finally, we need regulation, but we need to be careful about what we regulate. AI technologies and the tools that use them are complex and fast-moving. Regulation must be designed and implemented with enough foresight that it’s still relevant by the time it goes into effect. Creating these smart frameworks is challenging work, but it’s pivotal to the responsible use of AI going forward.
Connecting streams
The private sector—driven by shareholder value and competition—is innovating faster than anyone else. This means it is effectively setting standards as it goes, simply because it is the first to wade into unknown territory. This is not negative per se, but it leaves many key voices out of the debate. Scientists, engineers, consumer associations, governments, and others must all weigh in and come together to establish the mechanisms needed to guide AI toward a benign and prosperous future.
I prefer to equate AI to cars, because they are a perfect illustration of how groundbreaking technology can be used positively with responsible governance. People need a license to legally drive a car, which is obtained through training; cars are significantly safer and easier to use than they used to be, thanks to technological progress; and the industry is heavily regulated across the world. History shows that effective and responsible governance must be built on the three building blocks of education, technology, and regulation.
Second, there must be technological solutions to counteract risks. Solutions already exist for threats like misinformation, but we must do more to create effective antidotes for AI’s risks.
Published by ISO News.