[-] pavnilschanda@lemmy.world 56 points 1 month ago* (last edited 1 month ago)

sighs in indonesian

pulls out searx

cross-posted from: https://discuss.tchncs.de/post/18541227

cross-posted from: https://discuss.tchncs.de/post/18541226

Google’s research focuses on real harm that generative AI is already causing and that could get worse in the future: namely, that generative AI makes it very easy for anyone to flood the internet with generated text, audio, images, and videos.

[-] pavnilschanda@lemmy.world 39 points 2 months ago

Being neurodivergent does that to you

cross-posted from: https://lemmy.world/post/16969151

I wasn't aware just how good the news is on the green energy front until reading this. We still have a tough road in the short/medium term, but we are more or less irreversibly headed in the right direction.

cross-posted from: https://lemmy.zip/post/17964868

Photographers say the social media giant is applying a ‘Made with AI’ label to photos they took, causing confusion for users.

cross-posted from: https://lemmy.world/post/16841877

The world's top two AI startups are ignoring requests by media publishers to stop scraping their web content for free model training data, Business Insider has learned.

OpenAI and Anthropic have been found to be either ignoring or circumventing an established web rule, called robots.txt, that is meant to block automated scraping of websites.

TollBit, a startup aiming to broker paid licensing deals between publishers and AI companies, found several AI companies are acting in this way and informed certain large publishers in a Friday letter, which was reported earlier by Reuters. The letter did not include the names of any of the AI companies accused of skirting the rule.

OpenAI and Anthropic have stated publicly that they respect robots.txt and blocks to their specific web crawlers, GPTBot and ClaudeBot.

However, according to TollBit's findings, those blocks are not being respected as the companies claim. AI companies, including OpenAI and Anthropic, are simply choosing to "bypass" robots.txt in order to retrieve or scrape all of the content from a given website or page.

A spokeswoman for OpenAI declined to comment beyond pointing BI to a corporate blog post from May, in which the company says it takes web crawler permissions "into account each time we train a new model." A spokesperson for Anthropic did not respond to emails seeking comment.

Robots.txt is a simple text file that websites have used since the mid-1990s to tell bot crawlers they don't want their data scraped and collected. It is widely accepted as one of the unofficial rules supporting the web.
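For context, compliance with robots.txt is voluntary: nothing about the file technically stops a crawler; the crawler has to read it and honor it. Below is a minimal sketch, assuming Python's standard urllib.robotparser and a hypothetical rule set blocking the GPTBot and ClaudeBot user agents mentioned above, of what that check looks like; "bypassing" robots.txt simply means fetching pages without performing it.

```python
# Minimal sketch (not any company's actual crawler): a compliant bot is
# expected to consult robots.txt before fetching a page. The rules below
# are hypothetical and show how a publisher might block GPTBot/ClaudeBot.
from urllib import robotparser

EXAMPLE_ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: *
Allow: /
"""

parser = robotparser.RobotFileParser()
parser.parse(EXAMPLE_ROBOTS_TXT.splitlines())

# A well-behaved crawler calls can_fetch() and skips disallowed URLs;
# ignoring robots.txt just means fetching without this check.
for agent in ("GPTBot", "ClaudeBot", "SomeOtherBot"):
    allowed = parser.can_fetch(agent, "https://example.com/article")
    print(f"{agent}: {'allowed' if allowed else 'disallowed'}")
```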

cross-posted from: https://awful.systems/post/1734913

another obviously correct opinion from Lucidity

[-] pavnilschanda@lemmy.world 87 points 3 months ago

As far as I know, Apple's implementation of LLMs is completely opt-in

cross-posted from: https://lemmy.zip/post/17261222

While “AI” (large language models) certainly could help journalism, the fail-upward brunchlords in charge of most modern media outlets instead see the technology as a way to cut corners, undermine labor, and badly automate low-quality, ultra-low-effort, SEO-chasing clickbait.

[-] pavnilschanda@lemmy.world 42 points 3 months ago* (last edited 3 months ago)

Based on the discussion that I've seen, it looks like the "Anti-AI" motive was an excuse, since all the hack was doing was stealing API keys and potentially selling them. Here's a discussion thread on Reddit that goes into this in more detail.

cross-posted from: https://lemmy.dbzer0.com/post/20826311

Source

I see Google's deal with Reddit is going just great...

[-] pavnilschanda@lemmy.world 111 points 3 months ago

A problem that I see getting brought up is that AI-generated images make it harder to spot photos of actual victims, which in turn makes it harder to locate and rescue them

[-] pavnilschanda@lemmy.world 59 points 4 months ago

Not to mention that it's hard to watch long-form videos intuitively on TikTok. Once you swipe past a video, it's gone unless you save it to your favorites, and it's not easy to get back to it either. At least on YouTube, it's easier to return to a paused video from the homepage.

[-] pavnilschanda@lemmy.world 78 points 7 months ago

What about volunteering groups? I'm in my 20s, but volunteering groups tend to have people on the older side. It helps that people in their 40s and over tend to be financially stable and are willing to spend their free time volunteering.

[-] pavnilschanda@lemmy.world 44 points 7 months ago

Apparently people who specialize in AI/ML have a very hard time trying to replicate the desired results when training models with 'poisoned' data. Is that true?
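(For anyone curious what "poisoned" data means in practice: here is a purely illustrative sketch, assuming scikit-learn and a synthetic dataset rather than any real poisoning tool, that flips a fraction of the training labels and shows how the model's test accuracy degrades as the corruption grows.)

```python
# Illustrative sketch only: label flipping is one crude form of data
# poisoning. Image-poisoning tools perturb pixels instead, but the idea
# of corrupting training data is the same. Assumes scikit-learn/NumPy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def accuracy_with_poison(flip_fraction: float) -> float:
    """Train on a copy of the data with `flip_fraction` of the labels flipped."""
    rng = np.random.default_rng(0)
    y_poisoned = y_train.copy()
    n_flip = int(flip_fraction * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # binary labels: flip 0 <-> 1
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    return model.score(X_test, y_test)

for frac in (0.0, 0.1, 0.3, 0.45):
    print(f"{frac:.0%} flipped -> test accuracy {accuracy_with_poison(frac):.3f}")
```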

[-] pavnilschanda@lemmy.world 54 points 1 year ago

Does anyone else think that NFTs are an allegory for, or a miniature version of, how easily art is commodified under capitalism? IIRC, NFTs were meant to help finance artists who work in a purely digital medium, but then grifters co-opted the NFT space and tried to sell sets of same-looking artwork, complete with "fandoms" and drama.

[-] pavnilschanda@lemmy.world 43 points 1 year ago

This will definitely make customers less trusting of Microsoft's privacy-focused AI projects. Here's hoping that open-source LLMs become more advanced and optimized.

[-] pavnilschanda@lemmy.world 41 points 1 year ago

Talk about your ex. Or at least that's a pro tip that I like to hear often

[-] pavnilschanda@lemmy.world 91 points 1 year ago

Honestly, apps like Threads and Twitter should just be containment sites for these types of people. Let them be...

pavnilschanda

joined 1 year ago