US experts who work in artificial intelligence fields seem to have a much rosier outlook on AI than the rest of us.

In a survey comparing the views of a nationally representative sample of the general public (5,410 respondents) to a sample of 1,013 AI experts, the Pew Research Center found that "experts are far more positive and enthusiastic about AI than the public" and "far more likely than Americans overall to believe AI will have a very or somewhat positive impact on the United States over the next 20 years" (56 percent vs. 17 percent). And perhaps most glaringly, 76 percent of experts believe these technologies will benefit them personally, while only 15 percent expect to be harmed.

The public does not share this confidence. Only about 11 percent of the public says that "they are more excited than concerned about the increased use of AI in daily life." They're much more likely (51 percent) to say they're more concerned than excited, whereas only 15 percent of experts shared that pessimism. Unlike the majority of experts, just 24 percent of the public thinks AI will be good for them, whereas nearly half the public anticipates they will be personally harmed by AI.

[–] ImmersiveMatthew@sh.itjust.works 1 points 21 hours ago (1 children)

I too am a developer, and I am sure you will agree that while the overall intelligence of models continues to rise, the promise of AGI will likely remain elusive without a concerted focus on enhancing logic. AI cannot really advance without dramatically improved logic, yet logic remains rather stagnant even in the latest reasoning models, at least when it comes to coding.

I would argue that if we had much better logic, with all other metrics staying the same, we would have AGI now and developer jobs would be at risk. Given the lack of discussion about these logic gaps, I do not foresee AGI arriving anytime soon, even with bigger models coming.

[–] Clent@lemmy.dbzer0.com 6 points 19 hours ago

If we had AGI, the number of jobs that would be at risk would be enormous. But these LLMs aren't it.

They are language models and until someone can replace that second L with Logic, no amount of layering is going to get us there.

Those layers are basically all the previous AI techniques laid over the top of an LLM, but anyone who has a basic understanding of languages can tell you how illogical they are.
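
To make the "layering" idea concrete, here is a minimal sketch of one common pattern: the LLM proposes an answer and a separate, deterministic checker enforces the logic the model itself lacks. The function names (`propose_answer`, `check_arithmetic`) and the retry loop are purely illustrative assumptions, not any specific library's API or the commenters' actual setup.

```python
# Sketch: generate-then-verify layering over an LLM.
# `propose_answer` is a stand-in for whatever model call you use; it is NOT a real API.

import re


def propose_answer(question: str) -> str:
    """Placeholder for an LLM call; returns a (possibly wrong) free-text answer."""
    return "The answer is 42."


def extract_number(text: str) -> int | None:
    """Pull the first integer out of the model's free-text reply."""
    match = re.search(r"-?\d+", text)
    return int(match.group()) if match else None


def check_arithmetic(expression: str, claimed: int) -> bool:
    """Deterministic 'logic layer': verify the claim instead of trusting the model."""
    allowed = set("0123456789+-*/() ")
    if not set(expression) <= allowed:
        raise ValueError("unsupported expression")
    return eval(expression) == claimed  # safe here only because of the whitelist above


def answer_with_verification(expression: str, max_attempts: int = 3) -> int | None:
    """Retry the model until the checker accepts, or give up."""
    for _ in range(max_attempts):
        reply = propose_answer(f"What is {expression}?")
        claimed = extract_number(reply)
        if claimed is not None and check_arithmetic(expression, claimed):
            return claimed
    return None  # no logically verified answer was produced


if __name__ == "__main__":
    print(answer_with_verification("6 * 7"))  # 42: the stubbed reply passes the check
    print(answer_with_verification("2 + 2"))  # None: the stubbed reply fails the check
```

The point of the sketch is that the verification step lives entirely outside the language model, which is exactly the kind of bolted-on logic the comment above is describing.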