(page 2) 50 comments
[-] rational_lib@lemmy.world 7 points 12 hours ago

As I use copilot to write software, I have a hard time seeing how it'll get better than it already is. The fundamental problem of all machine learning is that the training data has to be good enough to solve the problem. So the problems I run into make sense, like:

  1. Copilot can't read my mind and figure out what I'm trying to do.
  2. I'm working on an uncommon problem where the typical solutions don't work.
  3. Copilot is unable to tell when it doesn't "know" the answer, because of course it's just simulating communication and doesn't really know anything.

2 and 3 could be alleviated, though probably not solved completely, with more and better data or engineering changes. But obviously AI developers started by training the models on the most useful data and the strategies they thought would work best. 1 seems fundamentally unsolvable.

I think there could be some more advances in finding more and better use cases, but I'm a pessimist when it comes to any serious advances in the underlying technology.

[-] KeenFlame@feddit.nu 14 points 16 hours ago

I am so tired of the AI hype and hate. Please give me my gen-art interest back; please just make programming art obscure again, I beg of you.

[-] barsoap@lemm.ee 3 points 11 hours ago* (last edited 11 hours ago)

It's still quite obscure to actually mess with AI art instead of just throwing prompts at it and getting slop of varying quality. And I don't mean ControlNet, but GitHub repos of ComfyUI plugins with little explanation beyond a link to a paper, or "this is absolutely mathematically unsound but fun to mess with". Messing with stuff other than conditioning or mere model selection.

[-] Etterra@lemmy.world 18 points 18 hours ago

Good. I look forward to all these idiots finally accepting that they drastically misunderstood what LLMs actually are and are not. I know their idiotic brains are only able to understand simple concepts like "line must go up" and follow them like religious tenets though, so I'm sure they'll waste everyone's time and increase enshittification with some other new bullshit once they quietly remove their broken (and unprofitable) AI from stuff.

[-] finitebanjo@lemmy.world 4 points 14 hours ago

There's no bracing for this; the OpenAI CEO said the same thing like a year ago and people are still shovelling money at this dumpster fire today.

[-] j4p@lemm.ee 6 points 16 hours ago

Sigh. I hope LLMs get dropped from the AI bandwagon, because I do think they have some really cool use cases and I love just running my little local models. Cutting government spending like a madman, writing the next great American novel, or eliminating actual jobs are not those use cases.

[-] art@lemmy.world 4 points 15 hours ago

It's had all the signs of a bubble for the last few years.

[-] Someplaceunknown@fedia.io 212 points 1 day ago

"LLMs such as they are, will become a commodity; price wars will keep revenue low. Given the cost of chips, profits will be elusive," Marcus predicts. "When everyone realizes this, the financial bubble may burst quickly."

Please let this happen

[-] orl0pl@lemmy.world 28 points 22 hours ago

Market crash and third world war. What a time to be alive!

[-] iAvicenna@lemmy.world 5 points 16 hours ago

So long; see you all in the next hype. Any guesses?

[-] Semi_Hemi_Demigod@lemmy.world 185 points 1 day ago

I wish just once we could have some kind of tech innovation without a bunch of douchebag techbros thinking it's going to solve all the world's problems with no side effects while they get super rich off it.

[-] oyo@lemm.ee 7 points 15 hours ago

Of course, most of them don't actually even believe it; that's just the pitch to get that VC juice. It's basically fraud all the way down.

[-] Strider@lemmy.world 4 points 17 hours ago

Soooo... Without capitalism?

[-] Semi_Hemi_Demigod@lemmy.world 3 points 16 hours ago

Pretty much.

[-] ohwhatfollyisman@lemmy.world 56 points 1 day ago

... bunch of douchebag techbros thinking it's going to solve all the world's problems with no side effects...

One doesn't imagine any of them even remotely thinks a technological panacea is feasible.

... while they get super rich off it.

because they're only focusing on this.

[-] Decker108@lemmy.ml 6 points 17 hours ago

Nice, looking forward to it! So much money and time wasted on pipe dreams and hype. We need to get back to some actually useful innovation.

[-] DirigibleProtein@aussie.zone 48 points 1 day ago
[-] Greg@lemmy.ca 61 points 1 day ago

largely based on the notion that LLMs will, with continued scaling, become artificial general intelligence

Who said that LLMs were going to become AGI? LLMs as part of an AGI system make sense, but not LLMs alone becoming AGI. Only articles and blog posts from people who didn't understand the technology were making those claims, which helped feed the hype.

I 100% agree that we're going to see an AI market correction. It's going to take a lot of hard human work to achieve the real value of LLMs. The hype is distracting from the real valuable and interesting work.

[-] b3an@lemmy.world 3 points 12 hours ago

I read a lot, I guess, and I didn't understand why they think like this. From what I see, there are constant improvements in MANY areas! Language models are getting faster and more efficient. Code is getting better across the board as people use it to improve their own, contributing to the whole of code improvement and project participation and development. I feel like we really are at the beginning of a lot of better things, and it's iterative as it progresses. I feel hopeful.

[-] mutant_zz@lemmy.world 25 points 22 hours ago

Microsoft Research published a paper about GPT-4 titled "Sparks of AGI".

I don't think they really believe it, but it's good for bringing in VC money.

[-] Greg@lemmy.ca 5 points 19 hours ago

That is a very VC-baiting title. But it doesn't appear from the abstract that they're claiming LLMs will develop to the complexity of AGI.

[-] Chozo@fedia.io 26 points 1 day ago

Journalists have no clue what AI even is. Nearly every article about AI is written by somebody who couldn't tell you the difference between an LLM and an AGI, and should be dismissed as spam.

[-] zbyte64@awful.systems 17 points 1 day ago

The call is coming from inside the house. Former Google CEO Eric Schmidt claims it will be like alien intelligence, so we should just trust it to make political decisions for us, bro: https://www.computing.co.uk/news/2024/ai/former-google-ceo-eric-schmidt-urges-ai-acceleration-dismisses-climate

[-] halcyoncmdr@lemmy.world 97 points 1 day ago

No shit. This was obvious from day one. This was never AGI, and was never going to be AGI.

Institutional investors saw an opportunity to make a shit ton of money and pumped it up as if it was world changing. They'll dump it like they always do, it will crash, and they'll make billions in the process with absolutely no negative repercussions.

[-] JustARaccoon@lemmy.world 3 points 16 hours ago

Until OpenAI announces a new 5t model or something, and then the hype refreshes.

[-] Boxscape@lemmy.sdf.org 32 points 1 day ago

Well duhhhh.
Language models are insufficient.
They also need:

[-] LavenderDay3544@lemmy.world 6 points 19 hours ago

AI was 99% a fad. Besides OpenAI and Nvidia, none of the other corporations bullshitting about AI have made anything remotely useful using it.

[-] model_tar_gz@lemmy.world 5 points 11 hours ago* (last edited 3 hours ago)

Absolutely not true. Disclaimer: I do work for NVIDIA as a forward-deployed AI Engineer/Solutions Architect, meaning I don't build AI software internally for NVIDIA but I embed with their customers' engineering teams to help them build their AI software and deploy and run their models on NVIDIA hardware and software. Edit: any opinions stated are solely my own; NVIDIA has a PR office to state any official company opinions.

To state this as simply as possible: I wouldn't have a job if our customers weren't seeing tremendous benefit from AI technology. The companies I work with typically are very sensitive to CapEx and OpEx costs of AI—they self-serve in private clouds. If it doesn't help them make money (revenue growth) or save money (efficiency), then it's gone—and so am I. I've seen it happen; entire engineering teams laid off because a technology just couldn't be implemented in a cost-effective way.

LLMs are a small subset of AI and Accelerated-Compute workflows in general.

[-] LavenderDay3544@lemmy.world 2 points 11 hours ago* (last edited 11 hours ago)

To state this as simply as possible: I wouldn’t have a job if our customers weren’t seeing tremendous benefit from AI technology.

Right, because corporate management doesn't ever blindly and stupidly overinvest in fads that blow up in their faces...

The companies I work with typically are very sensitive to CapEx and OpEx costs of AI—they self-serve in private clouds. If it doesn't help them make money (revenue growth) or save money (efficiency), then it's gone—and so am I.

You clearly have no clue what you're on about. As someone with degrees and experience in both CS and finance, all I have to say is that's not at all how these things work. Plenty of companies lose money on these things in the hope that their FP&A projection fever dreams will come true, and they're wrong much more often than you seem to think. FP&A is more art than science, and you can get financial models to support any argument you want to make to convince management to keep investing in what you think they should. And plenty of CEOs and boards are stupid enough to buy it. A lot of the AI hype has been bought and sold that way, in the hope that it would be worthwhile eventually or that other alternatives couldn't be just as good or better.

I’ve seen it happen; entire engineering teams laid off because a technology just couldn’t be implemented in a cost-effective way.

This is usually what happens once they finally realize that spending money on hype doesn't pay off, and they go back to more established business analytics, operations research, and conventional software, which never makes mistakes if it's programmed correctly.

LLMs are a small subset of AI and Accelerated-Compute workflows in general.

No one ever said otherwise. And we're talking about AI only, no moving the goalposts to accelerated computing, which is a mechanism through which to implement a wide range of solutions and not a specific one in and of itself.

[-] model_tar_gz@lemmy.world 3 points 10 hours ago

That’s fair. I see what I see at an engineering and architecture level. You see what you see at the business level.

That said, I stand by my statement, because I and most of my colleagues in similar roles get continued, repeated, and expanded-scope engagements. Definitely in LLMs and genAI in general, especially over the last 3-5 years or so, but definitely not just in LLMs.

“AI” is an incredibly wide and deep field; much more so than the common perception of what it is and does.

Perhaps I’m just not as jaded in my tech career.

operations research, and conventional software which never makes mistakes if it's programmed correctly.

Now this is where I push back. I spent the first decade of my tech career doing ops research/industrial engineering (in parallel with process engineering). You’d shit a brick if you knew how much “fudge-factoring” and “completely disconnected from reality—aka we have no fucking clue” assumptions go into the “conventional” models that inform supply-chain analytics, business process engineering, etc. To state that they “never make mistakes” is laughable.

[-] jj4211@lemmy.world 4 points 17 hours ago

I would say LLMs specifically are in that ballpark. Things like machine vision have been boringly productive and relatively unhyped.

There's certainly some utility to LLMs, but it's hard to see through all the crazy overestimation and the way they're being shoved everywhere by grifters.

[-] intelisense@lemm.ee 4 points 18 hours ago

Nvidia made money, but I've not seen OpenAI do anything useful, and they're not even profitable.

[-] RecluseRamble@lemmy.dbzer0.com 11 points 23 hours ago

Of course it'll crash. Saying it's imminent, though, suggests someone needs to exercise their shorts.

this post was submitted on 13 Nov 2024
555 points (95.4% liked)

Technology
