As soon as Apple announced its plans to inject generative AI into the iPhone, it was as good as official: The technology is now all but unavoidable. Large language models will soon lurk on most of the world’s smartphones, generating images and text in messaging and email apps. AI has already colonized web search, appearing in Google and Bing. OpenAI, the $80 billion start-up that has partnered with Apple and Microsoft, feels ubiquitous; the auto-generated products of ChatGPT and DALL-E are everywhere. And for a growing number of consumers, that’s a problem.

Rarely has a technology risen—or been forced—into prominence amid such controversy and consumer anxiety. Certainly, some Americans are excited about AI, but a majority said in one recent survey that they are concerned AI will increase unemployment; in another, three out of four said they believe it will be abused to interfere with the upcoming presidential election. And many AI products have failed to impress. The launch of Google’s “AI Overview” was a disaster; the search giant’s new bot cheerfully told users to add glue to pizza and that potentially poisonous mushrooms were safe to eat. Meanwhile, OpenAI has been mired in scandal, incensing former employees with a controversial nondisclosure agreement and allegedly ripping off one of the world’s most famous actors for a voice-assistant product. Thus far, much of the resistance to the spread of AI has come from watchdog groups, concerned citizens, and creators worried about their livelihoods. Now a consumer backlash to the technology has begun to unfold as well—so much so that a market has sprung up to capitalize on it.


Obligatory "fuck 99.9999% of all AI use-cases, the people who make them, and the techbros that push them."

[-] BurningRiver@beehaw.org 11 points 5 months ago* (last edited 5 months ago)

Can you trust whatever AI you use, implicitly? I already know the answer, but I really want to hear people say it. These AI hype men are seriously promising us capabilities that may appear down the road, without actually demonstrating use cases that are relevant today. “Some day it may do this, or that”. Enough already, it’s bullshit.

[-] Zaktor@sopuli.xyz 3 points 5 months ago* (last edited 5 months ago)

Yes? AI is a lot of things, and most have well-defined accuracy metrics that regularly exceed human performance. You're likely already experiencing it as a mundane tool you don't really think about.

If you're referring specifically to generative AI, that's still premature, but as I pointed out, the interactive chat form most people worry about is 18 months old and making shocking levels of performance gains. That's not the perpetual "10 years away" it's been for the last 50 years, that's something that's actually happening in the near term. Jobs are already being lost.

People are scared about AI taking over because they rightly recognize it as a threat. That's not because these tools are worthless; if they were, you'd have nothing to fear.

this post was submitted on 13 Jun 2024
269 points (100.0% liked)

Technology

A nice place to discuss rumors, happenings, innovations, and challenges in the technology sphere. We also welcome discussions on the intersections of technology and society. If it’s technological news or discussion of technology, it probably belongs here.