
Published Jan. 3, 2024, on The Conversation.


AI Is Here—and Everywhere

Three AI researchers look to the challenges ahead in 2024

The year 2023 was one of AI hype. Regardless of whether the narrative was that AI was going to save the world or destroy it, it often felt as if visions of what AI might someday be overwhelmed the current reality. And though anticipating future harms is a critical component of overcoming ethical debt in tech, getting too swept up in the hype risks creating a vision of AI that seems more like magic than a technology that can still be shaped by explicit choices. But taking control requires a better understanding of that technology.

So our prediction, or perhaps hope, for 2024 is that there will be a huge push to learn. In 1966, Joseph Weizenbaum, the creator of the ELIZA chatbot, wrote that machines are “often sufficient to dazzle even the most experienced observer,” but that once their “inner workings are explained in language sufficiently plain to induce understanding, its magic crumbles away.” The challenge with generative AI is that, in contrast to ELIZA’s very basic pattern-matching and substitution methodology, it is much more difficult to find language “sufficiently plain” to make the magic crumble away.

We think it’s possible to make this happen, and we hope that universities rushing to hire more technical AI experts put just as much effort into hiring AI ethicists. We also hope that media outlets help cut through the hype, that everyone reflects on their own uses of this technology and its consequences, and that tech companies listen to informed critiques in considering what choices continue to shape the future.


First, the race is on! Progress in AI had been steady since the days of Minsky’s prime, but the public release of ChatGPT in 2022 kicked off an all-out competition for profit, glory, and global supremacy. Expect more powerful AI, in addition to a flood of new AI applications.

The big technical question is how soon and how thoroughly AI engineers can address the current Achilles’ heel of deep learning: what might be called generalized hard reasoning, things like deductive logic. Will quick tweaks to existing neural-net algorithms be sufficient, or will a fundamentally different approach be required, as neuroscientist Gary Marcus suggests? Armies of AI scientists are working on this problem, so expect some headway in 2024.

Meanwhile, new AI applications are likely to result in new problems, too. You might soon start hearing about AI chatbots and assistants talking to each other, having entire conversations on your behalf but behind your back. Some of it will go haywire: comically, tragically, or both. Deepfakes (AI-generated images and videos that are difficult to detect) are likely to run rampant despite nascent regulation, causing more sleazy harm to individuals and democracies everywhere. And there are likely to be new classes of AI calamities that wouldn’t have been possible even five years ago.

Speaking of problems, the very people sounding the loudest alarms about AI, such as Elon Musk and Sam Altman, can’t seem to stop themselves from building ever more powerful AI. Expect them to keep doing more of the same. They’re like arsonists calling in the blaze they stoked themselves, begging the authorities to restrain them. Along those lines, the greatest hope for 2024, though it seems slow in coming, is stronger AI regulation at national and international levels.

Anjana Susarla, professor of information systems, Michigan State University

The deluge of synthetic content produced by generative AI could spawn a world where malicious people and institutions can manufacture synthetic identities and orchestrate large-scale misinformation. A flood of AI-generated content primed to exploit algorithmic filters and recommendation engines could soon overpower critical functions such as information verification, information literacy, and the serendipity provided by search engines, social media platforms, and digital services.

A new bipartisan bill introduced in Congress aims to codify algorithmic literacy as a key part of digital literacy. With AI increasingly intertwined with everything people do, it’s clear that the time has come to focus not on algorithms as pieces of technology but on the contexts in which algorithms operate: people, processes, and society.

Source: https://www.qualitydigest.com/inside/innovation-article/ai-here-and-everywhere-013124.html