[-] model_tar_gz@lemmy.world 88 points 2 months ago

I’m an AI Engineer, been doing this for a long time. I’ve seen plenty of projects that stagnate, wither and get abandoned. I agree with the top 5 in this article, but I might change the priority sequence.

Five leading root causes of the failure of AI projects were identified

  • First, industry stakeholders often misunderstand — or miscommunicate — what problem needs to be solved using AI.
  • Second, many AI projects fail because the organization lacks the necessary data to adequately train an effective AI model.
  • Third, in some cases, AI projects fail because the organization focuses more on using the latest and greatest technology than on solving real problems for their intended users.
  • Fourth, organizations might not have adequate infrastructure to manage their data and deploy completed AI models, which increases the likelihood of project failure.
  • Finally, in some cases, AI projects fail because the technology is applied to problems that are too difficult for AI to solve.

4 & 2 —> 1: Even IF they have enough data to train an effective model, most organizations have no clue how to handle the sheer variety, volume, velocity, and veracity of the big data that AI needs. Handling that is a specialized engineering discipline in its own right (data engineering). Let alone how to deploy and manage the infra that models need—another specialized discipline has emerged to handle that aspect (ML engineering). Often they sit at the same desk.

1 & 5 —> 2: Stakeholders seem to want AI to be a boil-the-ocean solution. They want it to do everything and be awesome at it. What they often don’t realize is that AI can be a really awesome specialist tool that really sucks in scenarios it hasn’t been trained on. Transfer learning is a thing, but it requires fine-tuning and additional training. Huge models like LLMs are starting to bridge this somewhat, but at the expense of the really sharp specialization. So without a really clear understanding of what AI can do really well, and perhaps more importantly, which problems are a poor fit for AI solutions, of course they’ll be destined to fail.

3 —> 3: This isn’t a problem with just AI. It’s all shiny new tech. Standard Gartner hype cycle stuff. Remember how they were saying we’d have crypto-refrigerators back in 2016?

[-] WanderingVentra@lemm.ee 21 points 2 months ago* (last edited 2 months ago)

Not to derail, but may I ask how you became an AI Engineer? I'm a software dev by trade, but it feels like a hard field to get into even if I start training for the AI part of it, because I'd need the data to practice =(

But it's such a big buzzword that I feel like I need to start looking in that direction if I want to stay employed.

[-] turkalino@lemmy.yachts 19 points 2 months ago

if I want to stay employed

I think this is a little paranoid. Somebody has to handle the production models - deploying them to servers, maintaining the servers, developing the APIs and front ends that provide access to the models… I don’t think software dev jobs are going anywhere

[-] technocrit@lemmy.dbzer0.com 5 points 2 months ago* (last edited 2 months ago)

For me it helps to have a project. I learned SciKit in order to analyze trading data to beat the "market". I was focusing on crypto but there's lots of trading data available in general. Unsurprisingly I didn't make any money, but it was fun to learn more about data processing, statistics, and modeling with functions.
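For anyone curious what that kind of experiment looks like, here's a minimal scikit-learn sketch on synthetic data. The seeded random walk stands in for real prices, and the window size and train/test split are arbitrary illustration choices. It also shows why the money tends not to materialize: past returns carry essentially no signal about future ones, so the out-of-sample R² lands near (or below) zero:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)

# Synthetic "prices": a pure random walk, which has no learnable signal.
prices = 100 + np.cumsum(rng.normal(0, 1, 500))
returns = np.diff(prices)

# Features: the previous 5 returns; target: the next return.
window = 5
X = np.array([returns[i:i + window] for i in range(len(returns) - window)])
y = returns[window:]

split = 400
model = LinearRegression().fit(X[:split], y[:split])
r2 = model.score(X[split:], y[split:])  # out-of-sample fit: roughly zero
```

The fun part is less the score and more everything around it: cleaning data, building features, and learning what a validation split actually protects you from.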

(FWIW I'm crypto-neutral depending on the topic and anti-"AI" because it doesn't exist.)

[-] ChickenLadyLovesLife@lemmy.world 3 points 2 months ago

Ha ha I got into genetic algorithms for the same reason, market prediction. Ended up exactly at zero in terms of net gains and losses - if you don't count commissions, anyway. :(
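For the curious, a toy genetic algorithm fits in a page of stdlib Python. This sketch evolves a vector whose entries sum to a made-up target; the fitness function, population size, and mutation rate are arbitrary illustration choices, not anything from an actual trading setup:

```python
import random

random.seed(0)

# Toy fitness: how close a candidate's genes sum to a target value.
TARGET = 42.0

def fitness(genes):
    return -abs(sum(genes) - TARGET)

def mutate(genes, rate=0.3):
    # Nudge each gene with probability `rate`.
    return [g + random.gauss(0, 1) if random.random() < rate else g
            for g in genes]

def crossover(a, b):
    # Single-point crossover: splice two parents at a random cut.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(pop_size=50, gene_count=5, generations=100):
    pop = [[random.uniform(-10, 10) for _ in range(gene_count)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 5]  # keep the fittest 20% (elitism)
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

On a toy target it converges nicely; on market data the "fitness" you optimize for in-sample is mostly noise, which is how you end up exactly at zero.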

[-] KellysNokia@lemmy.world 4 points 2 months ago

Kaggle has some good free datasets to practice

[-] rainynight65@feddit.org 5 points 2 months ago

Re 1, 3 and 5: maybe it's on the AI projects to stop providing shiny solutions in search of a problem, and to properly engage with potential customers and stakeholders to get a clear understanding of the problems that actually need solving.

This was precisely the context of a conversation I had at work yesterday. Some of our product managers attended a conference that was rife with AI stuff, and a customer rep actually took to the stage and said 'I have no need for any of that because none of it helps me solve the problems I need to solve.'

[-] model_tar_gz@lemmy.world 5 points 2 months ago

I don’t disagree. Solutions finding problems is not the optimal path—but it is a path that pushes the envelope of tech forward, and a lot of these shiny techs do eventually find homes and good problems to solve and become part of a quiver.

But I will always advocate starting with the customer and working backwards from there to arrive at the simplest engineered solution. Sometimes that’s an ML model. Sometimes an expert system. Sometimes a simpler heuristics/rules-based system. That all falls under the ‘AI’ umbrella, by the way. :D
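A rules-based system really can be as simple as an ordered list of predicates. Here's a hypothetical ticket-triage sketch in Python; the rules and labels are invented for illustration, with earlier rules taking priority:

```python
# Each rule is (predicate, label). First match wins, so order encodes priority.
RULES = [
    (lambda t: "refund" in t or "charge" in t, "billing"),
    (lambda t: "password" in t or "login" in t, "account"),
    (lambda t: "crash" in t or "error" in t, "bug"),
]

def triage(ticket_text, default="general"):
    text = ticket_text.lower()
    for predicate, label in RULES:
        if predicate(text):
            return label
    return default
```

No training data, no infra, fully auditable—and often all the "AI" a problem actually needs.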

[-] Hackerman_uwu@lemmy.world 2 points 2 months ago

Also in the industry and I gotta say it’s not often I agree with every damn point. You nailed it. Thanks for posting!

this post was submitted on 29 Aug 2024
1037 points (97.9% liked)

Technology

59205 readers
2519 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each another!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, to ask if your bot can be added please contact us.
  9. Check for duplicates before posting, duplicates may be removed

Approved Bots


founded 1 year ago
MODERATORS