This morning I was searching for vegan options for hide glue.

https://www.woodworkersjournal.com/dr-jekylls-hyde-glue-the-vegans-alternative/

I ended up on that page, which I think is a joke. So then I searched for it on Amazon.

This morning on my work computer I got this Dr. Jekyll thing served up to me by Microsoft's AI-driven shit blaster.

You tell me WTF. My phone is not connected to my home computer, and my work computer is not connected to either. How the fuck do they figure out the connection? And why? I'm totally not interested in this particular story, or even the fake glue that was the start of the joke.

submitted 4 days ago by lemmee_in@lemm.ee to c/fuck_ai@lemmy.world

The photograph in Arthur’s article showed what had happened in a particular street. Taken with a telephoto lens from an upper storey of a building, it showed a chaotic and almost surreal scene: about 70 vehicles of all sizes jumbled up and scattered at crazy angles along the length of the street.

It was an astonishing image which really stopped me in my tracks. Not surprisingly, it also went viral on social media. And then came the reaction: “AI image, fake news.” The photograph was so vivid, so uncannily sharp and unreal, that it looked to viewers like something that they could have faked themselves using Midjourney or Dall-E or a host of other generative AI tools.

submitted 3 days ago* (last edited 2 days ago) by mistahbenny@lemmy.ml to c/fuck_ai@lemmy.world

It can be, and always will be, used as a subject of art, at the very least and unconditionally... The best thing we can do as a collective is try to isolate it in order to make it "learn itself". Or make it nonexistent, which will be pretty hard.

What a wonderful piece of Linux propaganda 😁. Look at this piece of shit spying on me at work, doing who knows what that apparently needs more than one process.

submitted 1 week ago by lemmee_in@lemm.ee to c/fuck_ai@lemmy.world

Meta has historically restricted its LLMs from uses that could cause harm – but that has apparently changed. The Facebook giant has announced it will allow the US government to use its Llama model family for, among other things, defense and national security applications.

submitted 1 week ago by ptz@dubvee.org to c/fuck_ai@lemmy.world

The CEO of AI search company Perplexity, Aravind Srinivas, has offered to cross picket lines and provide services to mitigate the effect of a strike by New York Times tech workers.

submitted 1 week ago by ptz@dubvee.org to c/fuck_ai@lemmy.world

Cross-posted from "AI-driven bot network trying to help Trump win US election" by @MicroWave@lemmy.world in !news@lemmy.world


Summary

Researcher Elise Thomas uncovered a network of AI-driven bots on X (formerly Twitter) promoting Donald Trump ahead of the U.S. presidential election.

The bots, which sometimes inadvertently reveal their AI origins, were identified through telltale signs like outdated hashtags and accidental “refusals.” The accounts, many of which were blue check-verified, act as amplifiers for central “originator” accounts.

Though suspended by X after Thomas reported them, the network highlights the potential of AI to automate disinformation, making it challenging to attribute and detect such operations in future elections.


If anyone didn't see this coming, I would like to know what the property taxes are like on the rock you're living under.

Meta is one of several tech companies vying for a nuclear boost.

submitted 1 week ago* (last edited 1 week ago) by FlyingSquid@lemmy.world to c/fuck_ai@lemmy.world

Meta has faced a setback in its plan to build data centers run on nuclear power. The FT reports that CEO Mark Zuckerberg told staff last week that the land it was planning to build a new data center on was discovered to be the home of a rare bee species, which would have complicated the building process.

OpenAI’s Whisper tool may add fake text to medical transcripts, investigation finds.

AI search could break the web. (www.technologyreview.com)
submitted 2 weeks ago by Dot@feddit.org to c/fuck_ai@lemmy.world

submitted 2 weeks ago* (last edited 2 weeks ago) by Dot@feddit.org to c/fuck_ai@lemmy.world

This article is talking about phishing websites made by scammers, with obvious signs that they were made by LLMs.

I thought it might be interesting here.

submitted 2 weeks ago by lemmee_in@lemm.ee to c/fuck_ai@lemmy.world

Meta is “working with the public sector to adopt Llama across the US government,” according to CEO Mark Zuckerberg.

The comment, made during his opening remarks for Meta’s Q3 earnings call on Wednesday, raises a lot of important questions: Exactly which parts of the government will use Meta’s AI models? What will the AI be used for? Will there be any kind of military-specific applications of Llama? Is Meta getting paid for any of this?

When I asked Meta to elaborate, spokesperson Faith Eischen told me via email that “we’ve partnered with the US State Department to see how Llama could help address different challenges — from expanding access to safe water and reliable electricity, to helping support small businesses.” She also said the company has “been in touch with the Department of Education to learn how Llama could help make the financial aid process more user friendly for students and are in discussions with others about how Llama could be utilized to benefit the government.”

She added that there was “no payment involved” in these partnerships.

Yeah, fuck them, for now, until the government relies on their AI.

submitted 2 weeks ago by Dot@feddit.org to c/fuck_ai@lemmy.world
  • A new OpenAI study using their SimpleQA benchmark shows that even the most advanced AI language models fail more often than they succeed when answering factual questions, with OpenAI's best model achieving only a 42.7% success rate.
  • The SimpleQA test contains 4,326 questions across science, politics, and art, with each question designed to have one clear correct answer. Anthropic's Claude models performed worse than OpenAI's, but smaller Claude models more often declined to answer when uncertain (which is good!).
  • The study also shows that AI models significantly overestimate their capabilities, consistently giving inflated confidence scores. OpenAI has made SimpleQA publicly available to support the development of more reliable language models.

These are better than those weird videos.

Fuck AI

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.
