submitted 13 hours ago by theacharnian@lemmy.ca to c/fuck_ai@lemmy.world

cross-posted from: https://lemmy.ca/post/28449417

Canadian mega landlord using AI ‘pricing scheme’ as it massively hikes rents


cross-posted from: https://lemmy.world/post/19416727

Artificial intelligence is worse than humans in every way at summarising documents and might actually create additional work for people, a government trial of the technology has found.

Amazon conducted the test earlier this year for Australia's corporate regulator, the Australian Securities and Investments Commission (ASIC), using submissions made to an inquiry. The outcome of the trial was revealed in an answer to a question on notice at the Senate select committee on adopting artificial intelligence.

The trial involved testing generative AI models before selecting one to ingest five submissions from a parliamentary inquiry into audit and consultancy firms. The most promising model, Meta's open-source Llama2-70B, was prompted to summarise the submissions with a focus on ASIC mentions, recommendations and references to more regulation, and to include page references and context.
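
To make "prompted to summarise with a focus on ASIC mentions, recommendations and references to more regulation" concrete, here is a minimal sketch of the kind of prompt template such a trial implies. This is my own illustration, not ASIC's or Amazon's actual setup; the template wording and function name are hypothetical.

    # Hypothetical sketch only; not ASIC's or Amazon's actual code.
    PROMPT_TEMPLATE = """Summarise the following inquiry submission.
    Focus on:
    - any mentions of ASIC
    - any recommendations made in the submission
    - any references to more regulation
    Include page references and surrounding context for each point.

    Submission:
    {submission_text}
    """

    def build_summary_prompt(submission_text: str) -> str:
        # Fill the template with one submission's text before sending it to the model.
        return PROMPT_TEMPLATE.format(submission_text=submission_text)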

Ten ASIC staff, of varying levels of seniority, were also given the same task with similar prompts. Then, a group of reviewers blindly assessed the summaries produced by both humans and AI for coherency, length, ASIC references, regulation references and for identifying recommendations. They were unaware that this exercise involved AI at all.

These reviewers overwhelmingly found that the human summaries beat their AI competitors on every criterion and on every submission, scoring 81% on an internal rubric compared with the AI's 47%.


If the only reason people care about NaNoWriMo is for the name and hashtag, somebody already pitched Writevember as a replacement. Honestly sounds better to me anyway.

I've heard other people say the tools/gamification/etc on the NaNoWriMo platform were really helpful though. For those people, how difficult would it be to potentially patch that stuff into the WriteFreely platform? As one of the only long-form Fediverse-native platforms still being actively developed, maybe they'd appreciate the boost in code contributions.


Google researchers had their AI "make" a Doom level, and now they're claiming they have a game engine. It is arrogant nonsense, and it only proves how desperate they are to take jobs away from every type of creator they can.

It's particularly offensive to do this with Doom, since making maps for that game is a particular art form, and individual creators are regarded very highly. To traipse into their scene and claim you can do it automatically is just... it's just disgusting.

#Doom #AI #Google #Techbro #GameDesign #GameDev #JimSterling #Jimquisition #StephanieSterling #Games #Gaming #Videogames

submitted 1 week ago by lemmee_in@lemm.ee to c/fuck_ai@lemmy.world

A Swedish financial services firm specialising in direct payments, pay-after-delivery options, and instalment plans is preparing to reduce its workforce by nearly 50 per cent as artificial intelligence automation becomes more prevalent.

Klarna, a buy-now, pay-later company, has reduced its workforce by over 1,000 employees in the past year, partially attributed to the increased use of artificial intelligence.

The company plans to implement further job cuts, reducing headcount by nearly 2,000 more positions. Klarna's employee count has already fallen from approximately 5,000 to 3,800 over the past year.

A company spokesperson stated that the number of employees is expected to decrease to approximately 2,000 in the coming years, although they did not provide a specific timeline. In Klarna's interim financial report released on Tuesday, the company attributed the job cuts to its increasing reliance on artificial intelligence, enabling it to reduce its human workforce.

Klarna claims that its AI-powered chatbot can handle the workload previously managed by 700 full-time customer service agents. The company has reduced the average resolution time for customer service inquiries from 11 minutes to two while maintaining consistent customer satisfaction ratings compared to human agents.


I ran an AI startup back in 2017, and this was a huge deal for us; I've seen no actual improvement in this problem since. The NYTimes is spot on, IMO.

submitted 1 week ago by lemmee_in@lemm.ee to c/fuck_ai@lemmy.world

Meta has quietly unleashed a new web crawler to scour the internet and collect data en masse to feed its AI model.

The crawler, named the Meta External Agent, was launched last month, according to three firms that track web scrapers and bots across the web. The automated bot essentially copies, or “scrapes,” all the data that is publicly displayed on websites, for example the text in news articles or the conversations in online discussion groups.

A representative of Dark Visitors, which offers a tool for website owners to automatically block all known scraper bots, said Meta External Agent is analogous to OpenAI’s GPTBot, which scrapes the web for AI training data. Two other entities involved in tracking web scrapers confirmed the bot’s existence and its use for gathering AI training data.

While close to 25% of the world’s most popular websites now block GPTBot, only 2% are blocking Meta’s new bot, data from Dark Visitors shows.
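
For site owners who want to opt out, blocking works the same way as for GPTBot: a robots.txt rule keyed to the crawler's user-agent string. A minimal sketch, assuming the bot announces itself with the token "meta-externalagent" (check Meta's or Dark Visitors' documentation for the exact current tokens):

    # Hypothetical robots.txt entries; verify the exact user-agent tokens before relying on them.
    User-agent: GPTBot
    Disallow: /

    User-agent: meta-externalagent
    Disallow: /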

Earlier this year, Mark Zuckerberg, Meta’s cofounder and longtime CEO, boasted on an earnings call that his company’s social platforms had amassed a data set for AI training that was even “greater than the Common Crawl,” an entity that has scraped roughly 3 billion web pages each month since 2011.

submitted 2 weeks ago by ZDL@ttrpg.network to c/fuck_ai@lemmy.world

Just in case that URL doesn't replicate the session properly, I've added a screenshot of the session at the end.

A few things are obvious here. First, the choice to trumpet the "strengths" of degenerative AI while qualifying the weaknesses is clearly a choice made in the programming of the system. In later interactions it claims that this was not specifically programmed into it, but, as it says, it's a black box, and there's no way to confirm or deny anything it claims.

Which is, you know, pretty much the reason why degenerative AI can't be trusted.

submitted 2 weeks ago by admin@lemmy.haley.io to c/fuck_ai@lemmy.world

cross-posted from: https://lemm.ee/post/40428405

cross-posted from: https://flipboard.social/users/TechDesk/statuses/113013778572529137

With the next generation of AI photo editing tools built into Google's flagship Pixel 9 family, our basic assumptions about photographs capturing a reality we can believe in are about to be seriously tested, and @theverge shows us why.

“An explosion from the side of an old brick building. A crashed bicycle in a city intersection. A cockroach in a box of takeout. It took less than 10 seconds to create each of these images with the Reimagine tool in the Pixel 9’s Magic Editor. They are crisp. They are in full color. They are high-fidelity. There is no suspicious background blur, no tell-tale sixth finger. These photographs are extraordinarily convincing, and they are all extremely f---ing fake.” Take a look at the pictures for yourself as The Verge ponders the implications of these new capabilities.

https://flip.it/AO_SK3

#AI #GenerativeAI #ArtificialIntelligence #Google #Pixel #Pixel9 #Smartphones #Photography #Tech


Lionsgate has parted ways with Eddie Egan, the marketing consultant who came up with the “Megalopolis” trailer that included fake quotes from famous film critics.

The studio pulled the trailer on Wednesday, after it was pointed out that the quotes trashing Francis Ford Coppola’s previous work did not actually appear in the critics’ reviews, and were in fact made up.

submitted 2 weeks ago by lemmee_in@lemm.ee to c/fuck_ai@lemmy.world

Software engineers may have to develop other skills soon as artificial intelligence takes over many coding tasks.

That's according to Amazon Web Services CEO Matt Garman, who shared his thoughts on the topic during an internal fireside chat held in June, per a recording of the meeting obtained by Business Insider.

"If you go forward 24 months from now, or some amount of time — I can't exactly predict where it is — it's possible that most developers are not coding," said Garman, who became AWS's CEO in June.

"Coding is just kind of like the language that we talk to computers. It's not necessarily the skill in and of itself," the executive said. "The skill in and of itself is like, how do I innovate? How do I go build something that's interesting for my end users to use?"

This means the job of a software developer will change, Garman said.

"It just means that each of us has to get more in tune with what our customers need and what the actual end thing is that we're going to try to go build, because that's going to be more and more of what the work is as opposed to sitting down and actually writing code," he said.

submitted 2 weeks ago by Emperor@feddit.uk to c/fuck_ai@lemmy.world

cross-posted from: https://feddit.uk/post/16468762

There are few miniature painting contests as prestigious as the Golden Demon, Games Workshop's showcase for the artistry and talent in the Warhammer hobby. After the March 2024 Golden Demon was marred by controversy around AI content in a gold-medal-winning entry, GW has revised its guidelines, and any kind of AI assistance is now out.

The Warhammer 40k single miniature category at the Adepticon 2024 Golden Demon was won by Neil Hollis, who submitted a custom, dinosaur-riding Aeldari Exodite (a fringe Warhammer 40k faction that has long been part of the lore but never received models). The model’s base included a backdrop image which, it emerged, had been generated using AI software.

Online discussions soon turned sour as fans quarrelled over the eligibility of the model, the relevance of a backdrop in a competition about painting miniatures, the ethics of AI-generated media, and Hollis’ responses to criticism.

Games Workshop didn’t issue any statements at the time, but it has since updated the rules for the next Golden Demon tournament. In the FAQs section of the latest Golden Demons rules packet, the answer to the question “Am I allowed to use Artificial Intelligence to generate any part of my entry?” is an emphatic “No”.


I'm currently trying to move off Gmail, taking all my emails with me if possible. However, many comments are about why I shouldn't host my own server. So it got me thinking that there should be a new kind of email system, one not based on all the previous crud from the before times that we still use today.

And indeed, it looks like AI will be the driving force that ends email, just as spam did to the telephone. Sure, the telephone is still around, but no one uses phone conferencing anymore, for example; we use Teams, Zoom, and other shitty pay services. So the field is ripe for reinventing email. Users may not see a big difference, but the tech behind it could hopefully be simplified and decentralized, as it was meant to be.

submitted 2 weeks ago* (last edited 2 weeks ago) by ptz@dubvee.org to c/fuck_ai@lemmy.world

Many Procreate users can breathe a sigh of relief now that the popular iPad illustration app has taken a definitive stance against generative AI. "We're not going to be introducing any generative AI into our products," Procreate CEO James Cuda said in a video posted to X. "I don't like what's happening to the industry, and I don't like what it's doing to artists."

The creative community's ire toward generative AI is driven by two main concerns: that AI models have been trained on their content without consent or compensation, and that widespread adoption of the technology will greatly reduce employment opportunities. Those concerns have driven some digital illustrators to seek out alternative solutions to apps that integrate generative AI tools, such as Adobe Photoshop. "Generative AI is ripping the humanity out of things. Built on a foundation of theft, the technology is steering us toward a barren future," Procreate said on the new AI section of its website. "We think machine learning is a compelling technology with a lot of merit, but the path generative AI is on is wrong for us."

I love seeing a product where not shoving in "AI" is the feature. Hope to see more.

submitted 2 weeks ago by lemmee_in@lemm.ee to c/fuck_ai@lemmy.world

Voters in Wyoming's capital city face a decision on Tuesday: whether to elect a mayoral candidate who has proposed letting an artificial intelligence bot run the local government.

Earlier this year, the candidate in question – Victor Miller – filed for himself and his customized ChatGPT bot, named Vic (Virtual Integrated Citizen), to run for mayor of Cheyenne, Wyoming. He has vowed to helm the city's business with the AI bot if he wins.

Miller has said that the bot is capable of processing vast amounts of data and making unbiased decisions.

submitted 3 weeks ago by VerbFlow@lemmy.world to c/fuck_ai@lemmy.world

I received a comment from someone telling me that one of my posts had bad definitions, and he was right. Despite the massive problems caused by AI, it's important to specify what an AI does, how it is used, for what reason, and what type of people use it. I suppose judges might already be doing this, but regardless, an AI used by one dude for personal entertainment is different from a program used by a megacorporation to replace human workers, and must be judged differently. Here, then, are some specifications. If these are still too vague, please help with them.

a. What does the AI do?

  1. It generates a single image from a prompt, using a model trained on a dataset of images (like Stable Diffusion, DALL-E, &c);
  2. It generates a single string of text from a prompt, using a model trained on a dataset of text (like ChatGPT, Gemini, &c);
  3. It generates a single sound from a prompt, using a model trained on a dataset of sound samples (like AIVA, MuseNet, &c).

b. What is the AI used for?

  1. It is used for drollery (applicable to a1 and a2);
  2. It is used for pornography (a1);
  3. It is used to replace stock images (a1);
  4. It is used to write apologies (a2);
  5. It is used to write scientific papers (this actually happened. a2);
  6. It is used to replace illustration that the user would've done themselves (a1);
  7. It is used to replace illustration by a wage-laborer (a1);
  8. It is used to write physical books to print out (a2);
  9. It is used to mock and degrade persons (a1, a3);
  10. It is used to mock and degrade persons sexually (a1, a3);
  11. It is used for propaganda (a1, a2, a3).

c. Who is using the AI?

  1. A lower-class to middle-class person;
  2. An upper-class person;
  3. A small business;
  4. A large business;
  5. An anonymous person;
  6. An organization dedicated to shifting public perception.

This was really tough to do. I'll see if I can touch it up myself. As of now, Lemmy cannot do lists in lists.

submitted 3 weeks ago by ptz@dubvee.org to c/fuck_ai@lemmy.world

Artists defending a class-action lawsuit are claiming a major win this week in their fight to stop the most sophisticated AI image generators from copying billions of artworks to train AI models and replicate their styles without compensating artists. In an order on Monday, US district judge William Orrick denied key parts of motions to dismiss from Stability AI, Midjourney, Runway AI, and DeviantArt. The court will now allow artists to proceed with discovery on claims that AI image generators relying on Stable Diffusion violate both the Copyright Act and the Lanham Act, which protects artists from commercial misuse of their names and unique styles.

"We won BIG," an artist plaintiff, Karla Ortiz, wrote on X (formerly Twitter), celebrating the order. "Not only do we proceed on our copyright claims," but "this order also means companies who utilize" Stable Diffusion models and LAION-like datasets that scrape artists' works for AI training without permission "could now be liable for copyright infringement violations, amongst other violations." Lawyers for the artists, Joseph Saveri and Matthew Butterick, told Ars that artists suing "consider the Court's order a significant step forward for the case," as "the Court allowed Plaintiffs' core copyright-infringement claims against all four defendants to proceed."


Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.
