
Hermansson logged in to Google and began looking up results for the IQs of different nations. When he typed in “Pakistan IQ,” rather than getting a typical list of links, Hermansson was presented with Google’s AI-powered Overviews tool, which, confusingly to him, was on by default. It gave him a definitive answer of 80.

When he typed in “Sierra Leone IQ,” Google’s AI tool was even more specific: 45.07. The result for “Kenya IQ” was equally exact: 75.2.

Hmm, these numbers seem very low. I wonder how these scores were determined.

[-] grue@lemmy.world 5 points 4 weeks ago

I don't understand the title. LLM hallucinations have nothing to do with JAQing off.

[-] kitnaht@lemmy.world 22 points 4 weeks ago* (last edited 4 weeks ago)

Problem is, it wasn't a hallucination - it was referencing a paper that has been debunked. These aren't made-up numbers; they're VERY specific numbers that come from a VERY specific paper.

This one, if I'm not mistaken: https://www.sciencedirect.com/science/article/abs/pii/S0160289610000450 -- created by the Nazi sympathizer Richard Lynn and backed by the Pioneer Fund.

The problem is that this paper also managed to get cited more than 22,000 times, creating a feedback effect that reinforced the AI's training.

[-] grue@lemmy.world 1 points 4 weeks ago

Okay, but it's still got nothing to do with the dishonest rhetorical technique called "JAQing off" (a.k.a. "Just Asking Questions," a.k.a. "sealioning").

[-] kitnaht@lemmy.world 2 points 4 weeks ago

It's kind of a ... symptom ... of the community we're in. I wouldn't read into it too deeply.

I think the usual output from the AI Overview (or at least the goal) is to give a long and ostensibly Fair and Balanced summary. So in this case you'd expect it to throw out "some say that people from Australia are extra dumb because of these studies, but others contend that those studies were badly performed" or whatever. Answering the question in more words, representing both sides, so that it can pretend not to be partisan.

[-] grue@lemmy.world 1 points 4 weeks ago

Let me be more clear about this: an LLM trying to answer a question (successfully or otherwise) is doing basically the opposite of a human asking questions (disingenuously, as in "JAQing off," or otherwise).

I wasn't trying to solicit comments trying to explain what the LLM was doing; my point was simply that OP is confused and used a term incorrectly in the title.

[-] Amoeba_Girl@awful.systems 8 points 4 weeks ago

i like turtles

[-] khalid_salad@awful.systems 2 points 3 weeks ago* (last edited 3 weeks ago)

It's a reference to the fact that the kind of person who would try to justify this sort of race science is also the kind of person who is "just asking questions." Combined with the tech industry's tepid "it's just a tool, it's not inherently evil" bullshit, I think OP's point is obvious to anyone who isn't a pedant or deliberately acting in bad faith.

[-] froztbyte@awful.systems 2 points 3 weeks ago

you may wish to read the sidebar

this post was submitted on 25 Oct 2024
77 points (100.0% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community
