Fuck AI

3309 readers
1251 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

founded 1 year ago

1:00 p.m. – 2:00 p.m.

Lunch and Fireside Chat - Vice Chair for Supervision Michelle W. Bowman and Sam Altman, OpenAI CEO As part of the Board’s efforts to keep pace with the latest developments in banking and finance, Vice Chair for Supervision Bowman will host a fireside chat with Sam Altman, CEO of OpenAI, to discuss the effects of artificial intelligence on banks, businesses, and consumers and how we can encourage innovation in the banking and financial system.


The New South Wales tenants union has called for nationwide reforms to crack down on misleading rental advertisements after the state government introduced new laws in response to the growing use of artificial intelligence in real estate.

The legislation, announced on Sunday, will require mandatory disclosure when images in rental advertisements have been altered to conceal faults and mislead rental applicants.

The state government cited examples of real estate agents inserting artificially generated furniture into listings, such as showing a double bed in a bedroom only large enough to fit a single, or digitally modifying photos to obscure property damage.


Google’s carbon emissions have soared by 51% since 2019 as artificial intelligence hampers the tech company’s efforts to go green.

While the corporation has invested in renewable energy and carbon removal technology, it has failed to curb its scope 3 emissions, which are those further down the supply chain, and are in large part influenced by a growth in datacentre capacity required to power artificial intelligence.

The company reported a 27% increase in year-on-year electricity consumption as it struggles to decarbonise as quickly as its energy needs increase.

Datacentres play a crucial role in training and operating the models that underpin AI products such as Google’s Gemini and OpenAI’s GPT-4, which powers the ChatGPT chatbot. The International Energy Agency estimates that datacentres’ total electricity consumption could double from 2022 levels to 1,000TWh (terawatt hours) in 2026, approximately Japan’s level of electricity demand. AI will result in datacentres using 4.5% of global energy generation by 2030, according to calculations by the research firm SemiAnalysis.


Spotify, the world’s leading music streaming platform, is facing intense criticism and boycott calls following CEO Daniel Ek’s announcement of a €600m ($702m) investment in Helsing, a German defence startup specialising in AI-powered combat drones and military software.

The move, announced on 17 June, has sparked widespread outrage from musicians, activists and social media users who accuse Ek of funnelling profits from music streaming into the military industry.

Many have started calling on users to cancel their subscriptions to the service.

“Finally cancelling my Spotify subscription – why am I paying for a fuckass app that works worse than it did 10 years ago, while their CEO spends all my money on technofascist military fantasies?” said one user on X.


We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.

But what we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn’t changed much since it was discussed here five years ago). When it writes an answer to a question, it literally just guesses which letter and word will come next in a sequence – based on the data it’s been trained on.

This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance — nothing more, and nothing less.
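
The "guessing which word comes next" described above can be sketched, in grossly simplified form, as a toy bigram model. This is an illustration only, far cruder than any real LLM (which uses neural networks over tokens rather than word-count tables), but the principle of predicting the next item purely from observed patterns is the same:

```python
from collections import Counter, defaultdict

# Toy illustration only: a bigram "model" that writes by always picking
# the word that most often followed the current word in its training text.
corpus = "the cat sat on the mat the cat ate the fish".split()

next_words = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_words[current][nxt] += 1

def generate(start, length=5):
    word, out = start, [start]
    for _ in range(length):
        if word not in next_words:
            break
        # Pure pattern-matching: no meaning, just observed frequency.
        word = next_words[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(generate("the"))
```

The output looks fluent, but it is driven entirely by frequency counts; nothing in the table "knows" what a cat or a mat is, which is exactly the article's point.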

So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.

Philosopher David Chalmers calls the mysterious mechanism underlying the relationship between our physical body and consciousness the “hard problem of consciousness”. Eminent scientists have recently hypothesised that consciousness actually emerges from the integration of internal, mental states with sensory representations (such as changes in heart rate, sweating and much more).

Given the paramount importance of the human senses and emotion for consciousness to “happen”, there is a profound and probably irreconcilable disconnect between general AI, the machine, and consciousness, a human phenomenon.

https://archive.ph/Fapar

submitted 4 days ago* (last edited 4 days ago) by throws_lemy@lemmy.nz to c/fuck_ai@lemmy.world

Does mixing bleach and vinegar sound like a great idea?

Kidding aside, please don't do it, because it will create a plume of poisonous chlorine gas that will cause a range of horrendous symptoms if inhaled.

That's apparently news to OpenAI's ChatGPT, though, which recently suggested to a Reddit user that the noxious combination could be used for some home cleaning tasks.

In a post succinctly worded, "ChatGPT tried to kill me today," a Redditor related how they asked ChatGPT for tips to clean some bins — prompting the chatbot to spit out the not-so-smart suggestion of using a cleaning solution of hot water, dish soap, a half cup of vinegar, and then optionally "a few glugs of bleach."

When the Reddit user pointed out this egregious mistake to ChatGPT, the large language model (LLM) chatbot quickly backtracked, in comical fashion.

"OH MY GOD NO — THANK YOU FOR CATCHING THAT," the chatbot cried. "DO NOT EVER MIX BLEACH AND VINEGAR. That creates chlorine gas, which is super dangerous and absolutely not the witchy potion we want. Let me fix that section immediately."

Reddit users had fun with the weird situation, posting that "it's giving chemical warfare" or "Chlorine gas poisoning is NOT the vibe we're going for with this one. Let's file that one in the Woopsy Bads file!"


When I started working on this video about Palantir, I didn’t expect that it would make me want to have a panic attack. Then again, maybe panic is the appropriate response to learning that an artificial intelligence and surveillance company is actively collecting data on every American citizen in order to establish a technological dystopia.


An industry-backed researcher who has forged a career sowing doubt about the dangers of pollutants is attempting to use artificial intelligence (AI) to amplify his perspective.

Louis Anthony “Tony” Cox Jr, a Denver-based risk analyst and former Trump adviser who once reportedly claimed there is no proof that cleaning air saves lives, is developing an AI application to scan academic research for what he sees as the false conflation of correlation with causation.


Chinese President Xi Jinping and U.S. President Joe Biden agreed late in 2024 that artificial intelligence (AI) should never be empowered to decide to launch a nuclear war. The groundwork for this excellent policy decision was laid over five years of discussions at the Track II U.S.-China Dialogue on Artificial Intelligence and National Security convened by the Brookings Institution and Tsinghua University’s Center for International Security and Strategy.

By examining several cases from the U.S.-Soviet rivalry during the Cold War, one can see what might have happened if AI had existed back in that period and been trusted with the job of deciding to launch nuclear weapons or to preempt an anticipated nuclear attack—and had been wrong in its decisionmaking. Given the prevailing ideas, doctrines, and procedures of the day, an AI system “trained” on that information (perhaps through the use of many imaginary scenarios that reflected the current conventional wisdom) might have decided to launch nuclear weapons, with catastrophic results.

