577
submitted 4 months ago by rosschie@lemdro.id to c/technology@lemmy.world

The incident in northern California marked the latest mishap blamed on the electric vehicle company's Autopilot tech

[-] NutWrench@lemmy.world 35 points 4 months ago

This is that "AI" that investors keep jerking themselves purple over.

Real "self-driving cars" will not be available in our lifetimes.

[-] baseless_discourse@mander.xyz 13 points 4 months ago* (last edited 4 months ago)

Self-driving vehicles have existed for decades, and they are very safe.

They are trains 🚊 / trams 🚋

[-] HomerianSymphony@lemmy.world 5 points 4 months ago

Trams and trains have drivers.

[-] Serinus@lemmy.world 5 points 4 months ago

I can afford to have a driver if I'm splitting the cost with 400 of my closest friends.

[-] rottingleaf@lemmy.world 1 points 4 months ago

We-ell, there have been bugs causing train collisions, but there have also been train collisions caused by driver error or some other misfortune, so.

[-] ours@lemmy.world 1 points 4 months ago

Subways, not trains/trams, which makes sense since they are a mostly closed system. The French one is closed off, and doors slide open on the platform so that passengers can board the cars. This particular system also runs on pneumatic tires on a rail. I guess for easier accuracy with braking/acceleration?

[-] DaTingGoBrrr@lemmy.ml 7 points 4 months ago

I don't believe that. Based on how far AI has come in recent years, I think it's only a matter of time before someone (other than Tesla) manages to do it well.

The biggest problem with Tesla Autopilot is Elon. Just the fact that he insists on using only camera-based vision because "people only need their eyes to drive" should tell you all you need to know about their AI.

[-] rottingleaf@lemmy.world -1 points 4 months ago

And why do you believe and think that? Most of us criticizing it do so because we have some idea of what machine learning is and what it simply doesn't solve. That knowledge isn't hard to come by.

[-] DaTingGoBrrr@lemmy.ml 4 points 4 months ago

I think it's unreasonable to state that it won't happen within our lifetime. That's hopefully 60+ years away for me, which is a long time for computing and general AI development to advance. Just look at how much has happened in the technology field over the past 30 years.

"It always seems impossible until it's done." - Nelson Mandela

[-] rottingleaf@lemmy.world 1 points 4 months ago

Sorry, but this is again abstractions and philosophy, in the genre of Steve Jobs-themed motivational texts. Which I hate with boredom (I get tired of hating with passion too quickly).

Many things have been called "AI" and many more will be. I'm certain some will bring very important change, and those may even use ML somewhere: for the classification and clustering parts most likely, and maybe even extrapolation, but that would be subject to a system of symbolic logic working above them at least, and they'll have to find a way of adding entropy.

What they call "AI" now definitely won't. Fundamentally.

“It always seems impossible until it’s done.” - Nelson Mandela

Quoting the one guy who wasn't hanged/shot/beheaded while many, many, many more people trying the same thing were. Survivorship bias and such.

[-] scratchee@feddit.uk 2 points 4 months ago

They don’t have to be any good, they just have to be significantly better than humans. Right now they’re… probably about average, there’s plenty of drunk or stupid humans bringing the average down.

It's true that isn't good enough: unlike humans, self-driving cars will be judged collectively, so people will focus on their dumbest antics. But once their average is significantly better than the human average, that will start to outweigh the individual examples.

[-] rottingleaf@lemmy.world 1 points 4 months ago

Right now they are not that at all.

When people say neural nets are unable to reason, they don't mean something fuzzy-cloudy the way normies do, which could be rebutted by some other fuzzy-cloudy stuff. They literally mean that neural nets are unable to reason. They are not capable of logic.
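The "not capable of logic" claim is stronger than any textbook result, but there is a classic, checkable version of the same idea: Minsky and Papert's observation that a single linear threshold unit cannot represent even XOR, a two-bit logical function. A brute-force toy sketch of that limit (all names hypothetical, Python chosen for illustration):

```python
import itertools

# XOR is not linearly separable, so no single linear threshold unit
# (w1*x1 + w2*x2 + b > 0) can reproduce its truth table.
def perceptron(w1, w2, b, x1, x2):
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

xor_table = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
grid = [x / 2 for x in range(-8, 9)]  # weights and bias in [-4, 4], step 0.5

# Exhaustively try every weight/bias combination on the grid.
solutions = [
    (w1, w2, b)
    for w1, w2, b in itertools.product(grid, repeat=3)
    if all(perceptron(w1, w2, b, x1, x2) == y for (x1, x2), y in xor_table.items())
]
print(len(solutions))  # 0: no single linear unit computes XOR
```

Multi-layer networks do get around this particular limit, of course; the search above only illustrates what one thresholded weighted sum can and cannot express.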

[-] scratchee@feddit.uk 1 points 4 months ago

Reasoning is obviously useful, but I'm not convinced it's required to be a good driver. In fact, most driving decisions must be made rapidly; I doubt humans can be described as "reasoning" when we're just reacting to events. Decisions that take long enough could be handed to a human ("should we rush for the ferry, or divert for the bridge?"). It's only in the middling bit between where we will maintain this big advantage ("that truck ahead is bouncing around, I don't like how the load is secured, so I'm going to back off"). That's a big advantage, but how much of our time is spent with our minds fully focused and engaged anyway? Once we're on autopilot, is there much reasoning going on?

Not that I think this will be quick, I expect at least another couple of decades before self driving cars can even start to compete with us outside of specific curated situations. And once they do they’ll continue to fuck up royally whenever the situation is weird and outside their training, causing big news stories. The key question will be whether they can compete with humans on average by outperforming us in quick responses and in consistently not getting distracted/tired/drunk.

[-] DaTingGoBrrr@lemmy.ml 2 points 4 months ago

Innovators and visionaries are what drive us forward. If they had listened to all the naysayers, we would have gotten nowhere. I will keep being optimistic about future developments in technology.

[-] rottingleaf@lemmy.world 1 points 4 months ago

Innovators and visionaries are what drive us forward. If they had listened to all the naysayers, we would have gotten nowhere.

How can you not see that these sentences say nothing?

What's "forward"? From Hypercard and Genera times to today is "forward"?

Who are "innovators and visionaries"? I mean, that'd be many people, but no Steve Jobs in the list, if that's what made you write this.

Who are "nay-sayers"? If that's, say, Richard Stallman, then all his nays on technology (as it happens, he's kinda weird on other things) were correct.

And the final question: why do you think you can in any way feel the wind of change when you don't know the fundamental basics of the area of human knowledge you "believe" in? Don't you think it's not a wind of change, just the usual marketing aimed at clueless people?

Say, I see a lot of promising and wonderful things, but people who don't know the fundamentals get excited over something stupid that's being advertised to them.

[-] DaTingGoBrrr@lemmy.ml 1 points 4 months ago

Blue LEDs, the TV, radio, airplanes, the personal computer, the light bulb, nuclear fission, optical microscopes, and attosecond laser pulses are among some things previously thought to be impossible.

Who said anything about Steve Jobs? I never mentioned anyone specific and as you say, there are many people that would make that list.

I would consider the "experts" and laymen with a sceptical attitude towards innovation to be naysayers.

I think it's weird how so many people suddenly became experts on AI as soon as OpenAI released ChatGPT.

I don't like the current trend of companies putting half-assed AI into everything. AI is the new buzzword to bring in hype. But that doesn't mean I can't see the value it could potentially bring in the future once it's more developed. Development within the AI field has only just begun.

My use of the word AI is very broad. I am not saying that ChatGPT could drive a car. But I 100% believe that we will have self-driving cars before I die of old age.

[-] rottingleaf@lemmy.world 1 points 4 months ago

Blue LEDs, the TV, radio, airplanes, the personal computer, the light bulb, nuclear fission, optical microscopes, and attosecond laser pulses are among some things previously thought to be impossible.

Let's just say it's not so clear. Humans have made limited mechanical computers of various principles since Antiquity, and analog devices to compute various things were being made even before electricity. Even the well-known scene with Archimedes, the water, the crown, and "eureka" is an example. Flying machines, the same, though we wouldn't have airplanes until good enough propulsion.

Romans and Byzantines would even make mechanical servants to pour wine, or devices that played music.

It's rather that people wouldn't have any context to think about such specifics. But they didn't consider such things impossible.

Meanwhile, pi is still not 4, just as it wasn't anywhere near 4 in Sargon the Great's time. I mean, that depends on the geometry chosen.

Who said anything about Steve Jobs? I never mentioned anyone specific and as you say, there are many people that would make that list.

I did, in the comment you were answering, so I made such a guess.

I would consider the “experts” and laymen with a sceptical attitude towards innovation to be nay-sayers.

The sceptical attitude is to the cargo cult of "innovation" without understanding the matters in question.

I think it’s weird how so many people suddenly became experts on AI as soon as OpenAI released ChatGPT.

Dunno what "AI" is, but learning enough about ML takes a few evenings. It's not a complex matter. All the market value is not in complexity; it's in datasets.

But that doesn’t mean I can not see the value it can potentially bring in the future once it’s more developed. The developments within the AI-field has only just begun.

Something that extrapolates from datasets won't become more useful by being "more developed".

My use of the word AI is very broad. I am not saying that ChatGPT could drive a car. But I 100% believe that we will have self-driving cars before I die of old age.

Then you should have said that in the beginning and there'd have been no argument. Only then it has nothing to do with all these bullshit companies, because what they are doing is snake oil, not "AI".

[-] DaTingGoBrrr@lemmy.ml 1 points 4 months ago* (last edited 4 months ago)

Then you should have said that in the beginning and there'd have been no argument. Only then it has nothing to do with all these bullshit companies, because what they are doing is snake oil, not "AI".

I'm with you on the current use of machine learning being snake oil but I never said anything about ML. I'm not sure how my first post was unclear. You just made a lot of assumptions.

According to Google I am using the term correctly.

AI is the broader concept of enabling a machine or system to sense, reason, act, or adapt like a human

ML is an application of AI that allows machines to extract knowledge from data and learn from it autonomously

Edit: I was apparently too tired to see that you wrote machine learning in your initial reply

Edit 2: I feel like this discussion has gone way off topic and I am done with it. The OP claimed that we will not see real self-driving cars within our lifetime, and I disagree with that.

[-] echodot@feddit.uk 3 points 4 months ago

Now, AI may or may not be overhyped, but Tesla's self-driving nonsense isn't AI regardless. It's just pattern recognition; it is not the neural net everyone assumes it is.

It really shouldn't be legal. This tech will never work, because it doesn't include lidar, so it lacks depth perception. Of course humans also don't have lidar, but we have depth perception built in thanks to billions of years of evolution. Computers don't do too well with stereoscopic vision for 3D calculations, and could really do with actual depth information being provided to them.

If you lack depth perception and higher reasoning skills, for a moment you might actually think that a train driving past you is a road. 3D perception would have told the software that the train was vertical, not horizontal, and thus a barrier rather than a driving surface.
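For context on why stereo depth is fragile: in the standard pinhole stereo model, depth is inversely proportional to disparity, so a one-pixel matching error at long range produces a depth error of many metres. A minimal sketch of that relation (the focal length and baseline numbers are made up for illustration, not from any real car):

```python
def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Pinhole stereo model: depth Z = f * B / d, in metres."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# With a hypothetical 700 px focal length and 0.54 m camera baseline,
# a single pixel of disparity error near d = 5 px shifts the estimate by ~19 m:
print(stereo_depth(700.0, 0.54, 5.0))  # 75.6
print(stereo_depth(700.0, 0.54, 4.0))  # 94.5
```

Lidar sidesteps this entirely by measuring range directly per point, which is why its error stays roughly constant with distance while stereo error grows with the square of it.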

[-] FishFace@lemmy.world 1 points 4 months ago

It's just pattern recognition; it is not the neural net everyone assumes it is.

Tesla's current iteration of self-driving is based on neural networks. Certainly the computer vision is; there's no other way we have of doing computer vision that works at all well, and, according to this article from last year, it's true for the decision-making too.

Of course, the whole task of self-driving is "pattern recognition"; neural networks are just one way of achieving that.

[-] FishFace@lemmy.world 0 points 4 months ago

We have gone from cruise control to cars being able to drive themselves quite well in about a decade. The last percentage points of reliability are of course the hardest, but that's a tremendously pessimistic take.

this post was submitted on 15 Jul 2024
577 points (98.0% liked)
