submitted 1 month ago by yogthos@lemmy.ml to c/technology@lemmy.ml
[-] FaceDeer@fedia.io 0 points 1 month ago

OpenAI is no longer the cutting edge of AI, IMO. It'll be fine if they close down. They blazed the trail and set the AI revolution in motion, but now lots of other companies have picked it up and are doing better at it than them.

[-] pizza_the_hutt@sh.itjust.works 31 points 1 month ago

There is no AI Revolution. There never was. Generative AI was sold as an automation solution to companies looking to decrease labor costs, but it's not actually good at that. Moreover, there's not enough good, accurate training material to make generative AI much smarter or more useful than it already is.

Generative AI is a dead end, and big companies are just now starting to realize that, especially after the Goldman Sachs report on AI. Sam Altman is just a snake oil salesman, another failing-upwards executive who told a bunch of other executives what they wanted to hear. It's just now becoming clear that the emperor has no clothes.

[-] SkyNTP@lemmy.ml 7 points 1 month ago

Generative AI is not smart to begin with. LLMs are basically compressed versions of the internet that statistically predict what a sentence needs to be to look "right". There's a big difference between appearing right and being right. Without a critical approach to information, independent reasoning, and individual sensing, these AIs are incapable of any meaningful intelligence.
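The "predict what looks right" idea can be shown with a toy bigram model (this is a deliberately crude sketch, nothing like a real LLM, but the failure mode is the same in kind): the model emits whatever continuation is statistically most common in its training text, with no concept of whether that continuation is true.

```python
from collections import Counter, defaultdict

# Toy next-word model: count which word follows which in a tiny corpus,
# then always emit the statistically most common continuation.
corpus = "the sky is blue the sky is blue the sky is green".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    # Returns the most frequent follower -- it "looks right" by frequency
    # alone, with no notion of whether it is actually correct.
    return follows[word].most_common(1)[0][0]

print(predict("is"))  # "blue" wins 2-to-1 over "green"
```

If the training data had said "green" more often, the model would confidently say the sky is green; frequency, not truth, decides the output.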

In my experience, the emperor and most people around them still have not figured this out.

[-] yogthos@lemmy.ml 7 points 1 month ago
[-] anachronist@midwest.social 5 points 1 month ago

Generative AI is just classification engines run in reverse. Classification engines are useful, but they've been around and making incremental improvements for at least a decade. Also, just like self-driving cars, they've been writing checks they can't honor. For instance, legal coding and radiology were supposed to have been automated by classification engines long ago.
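The "run in reverse" point can be illustrated with a toy class-conditional word model (a hedged sketch with made-up labels and data, not a real system): the same frequency counts that score which class a text belongs to can also be sampled from to generate text typical of that class.

```python
import random
from collections import Counter

# Toy class-conditional word counts: the forward direction classifies,
# the reverse direction generates.
training = {
    "weather": "sunny rain cloud wind rain sunny".split(),
    "finance": "stock bond price market stock price".split(),
}
model = {label: Counter(words) for label, words in training.items()}

def classify(words):
    # Forward: pick the class whose counts best match the input words.
    return max(model, key=lambda c: sum(model[c][w] for w in words))

def generate(label, n, seed=0):
    # Reverse: sample n words in proportion to their class frequency.
    rng = random.Random(seed)
    vocab = list(model[label].elements())
    return [rng.choice(vocab) for _ in range(n)]

print(classify(["rain", "wind"]))  # "weather"
```

The generated text is statistically plausible for its class but carries no meaning, which is the commenter's point in miniature.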

[-] bizarroland@fedia.io 4 points 1 month ago* (last edited 1 month ago)

It's sort of like how you can create a pretty good text message on your phone using voice to text but no courtroom is allowing AI transcription.

There's still too much risk that it will capitalize the wrong word, replace a word with one that merely sounds close to what was said, or do something else entirely unanticipated, to trust it with our legal process.

If they could guarantee 100% accurate transcription of the spoken word, it would put the entire field of court stenography out of business and generate tens of millions of dollars in contracts for the company that figures it out.

Not going to do it because even today a phone can't tell the difference between the word holy and the word holy. (Wholly)

this post was submitted on 28 Jul 2024
141 points (97.3% liked)

Technology


This is the official technology community of Lemmy.ml for all news related to creation and use of technology, and to facilitate civil, meaningful discussion around it.


Ask in a DM before posting product reviews or ads. Otherwise, all such posts are subject to removal.


Rules:

1: All Lemmy rules apply

2: Do not make low-effort posts

3: NEVER post naziped*gore stuff

4: Always post article URLs or their archived version URLs as sources, NOT screenshots. Help the blind users.

5: Personal rants about Big Tech CEOs like Elon Musk are unwelcome (this does not include posts about their companies affecting a wide range of people)

6: no advertisement posts unless verified as legitimate and non-exploitative/non-consumerist

7: crypto related posts, unless essential, are disallowed

founded 5 years ago