AI Is Here—and Everywhere | Quality Digest

The year 2023 was one of AI hype. Regardless of whether the narrative was that AI was going to save the world or destroy it, it often felt as if visions of what AI might be someday overwhelmed the current reality. And though anticipating future harms is a critical component of overcoming ethical debt in tech, getting too swept up in the hype risks creating a vision of AI that seems more like magic than a technology that can still be shaped by explicit choices. But taking control requires a better understanding of that technology.


Still, making predictions for a year out doesn’t seem quite as risky. What can be expected of AI in 2024?

A flood of AI-generated content primed to exploit algorithmic filters and recommendation engines could soon overpower critical functions such as information verification, information literacy, and serendipity provided by search engines, social media platforms, and digital services.



Three AI researchers look to the challenges ahead in 2024

We think it’s possible to make this happen, and we hope that universities rushing to hire more technical AI experts put just as much effort into hiring AI ethicists. We also hope that media outlets help cut through the hype, that everyone reflects on their own uses of this technology and its consequences, and that tech companies listen to informed critiques when weighing the choices that continue to shape the future.


Published Jan. 3, 2024, on The Conversation.


Published: Wednesday, January 31, 2024 – 12:01

However, throughout the year, people recognized that a failure to teach students about AI might put them at a disadvantage, and many schools rescinded their bans. This is not to say we should be revamping education to put AI at the center of everything. But if students don’t learn about how AI works, they won’t understand its limitations—and therefore how it’s useful and appropriate to use and how it’s not. This is true not only for students; the more people understand how AI works, the more empowered they are to use it and critique it.

Speaking of problems, the very people sounding the loudest alarms about AI—like Elon Musk and Sam Altman—can’t seem to stop themselves from building ever more powerful AI. Expect them to keep doing more of the same. They’re like arsonists reporting the blaze they themselves stoked, begging the authorities to restrain them. Along those lines, the greatest hope for 2024—though it seems slow in coming—is stronger AI regulation at the national and international levels.

Anjana Susarla, professor of information systems, Michigan State University