Architeuthis

[–] Architeuthis@awful.systems 10 points 17 hours ago* (last edited 17 hours ago) (2 children)

Apparently the hacker who publicized a copy of the no-fly list was leaked an article containing Yarvin's home address, which she promptly posted on Bluesky. Won't link because I don't think we've had the doxxing discussion, but it's easily findable now.

I'm mostly posting this because the article featured this photo:

I figured eventually some proprietary work would make it into the wild via autocomplete. Copilot used to be cool with inserting other programmers' names and emails in author notes, for instance, though they seem to have started filtering that out in the meantime.

Copilot licenses let you specifically opt out of your prompts and your code being used to train new models, so it would be a big deal.

[–] Architeuthis@awful.systems 1 point 1 day ago* (last edited 1 day ago) (2 children)

We should be so lucky; the ensuing barrage of lawsuits about illegally cribbing company IP would probably make the book authors' class-action damages pale in comparison.

[–] Architeuthis@awful.systems 9 points 3 days ago (1 children)

This is too corny and overdramatic for my tastes. It reads a bit like satire, complete with the religious undertones piled on at the end.

[–] Architeuthis@awful.systems 4 points 4 days ago

Getting love-bombed at that rationalist con he went to recently probably didn't help matters.

[–] Architeuthis@awful.systems 16 points 4 days ago* (last edited 4 days ago) (3 children)

The common clay of the new west:

transcript:

ChatGPT has become worthless

[Business & Professional]

I’m a paid member and asked it to help me research a topic and write a guide and it said it needed days to complete it. That’s a first. Usually it could do this task on the spot.

It missed the first deadline and missed 5 more. 3 weeks went by and it couldn’t get the task done. Went to Claude and it did it in 10 minutes. No idea what is going on with ChatGpt but I cancelled the pay plan.

Anyone else having this kind of issue?

[–] Architeuthis@awful.systems 6 points 4 days ago* (last edited 4 days ago) (1 children)

if one person came out and spilled the beans, it’d suggest that there might be more people who didn’t

I mean, after his full-throated defense of Lynn's IQ map (featuring disgraced nazi college dropout Cremieux/TP0 as a subject matter expert), what other beans might be interesting enough to spill? Did he lie about becoming a kidney donor?

I think the emails are important because a) they make a case that, for all his performative high-mindedness, deference to science, and whinging about polygenic selection, he came to his current views through the same white supremacist/great replacement milieu as every other pretentious gutter racist out there, and b) he is so consistently disingenuous that the previous statement might not even matter much... he might honestly believe that priming impressionable well-off techies towards blood-and-soil fascism precursors was worth it if we end up allowing unchecked human genetic experimentation to come up with 260-IQ babies that might have a fighting chance against shAItan.

I guess it could come out that, despite his habit of including conflict-of-interest disclosures, his public views may be way more for sale than is generally perceived.

[–] Architeuthis@awful.systems 10 points 6 days ago* (last edited 6 days ago)

I wonder if this is just a really clumsy attempt to invent stretching the Overton window from first principles, or if he really is so terminally rationalist that he thinks a political ideology is a sliding scale of fungible points and that being 23.17% ancap can be a meaningful statement.

That the exchange of ideas between friends is supposed to work a bit like the principle of communicating vessels is a pretty weird assumption, too. Also, if he thinks it's ok to admit that he straight up tries to manipulate friends in this way, imagine how he approaches non-friends.

Between this and him casually admitting that he keeps "culture war" topics alive on the substack because they get a ton of clicks, it's a safe bet that he can't be thinking too highly of his readership, although I suspect there is an esoteric/exoteric teachings divide that is mostly non-obvious from the online perspective.

[–] Architeuthis@awful.systems 8 points 6 days ago* (last edited 6 days ago) (13 children)

In his early blog posts, Scott Alexander talked about how he was not leaping through higher education in a single bound

He starts his recent article on AI psychosis by mixing up psychosis with schizophrenia (he calls psychosis a biological disease), so that tracks.

Other than that, I think it's ok in principle to be ideologically opposed to something even if you and yours happened to benefit from it. Of course, it immediately becomes iffy if it's a mechanism for social mobility that you don't plan on replacing, since in that case you are basically advocating for pulling up the ladder behind you.

[–] Architeuthis@awful.systems 15 points 6 days ago* (last edited 6 days ago)

Shamelessly reproduced from the other place:

A quick summary of his last three posts:

"Here's a thought experiment I came up with to try to justify the murder of tens of thousands of children."

"Lots of people got mad at me for my last post; have you considered that being mad at me makes me the victim and you a Nazi?"

"I'm actually winning so much right now: it's very normal that people keep worriedly speculating that I've suffered some sort of mental breakdown."

[–] Architeuthis@awful.systems 11 points 6 days ago (5 children)

I’m even grateful, in a way, to SneerClub, and to Woit and his minions. I’m grateful to them for so dramatically confirming that I’m not delusional: some portion of the world really is out to get me. I probably overestimated their power, but not their malevolence. […]

Honestly, what he should actually be grateful for is that all his notoriety ever amounted to[1] was a couple of obscure forums going 'look at this dumb asshole' and moving on.

He is an insecure and toxic serial overreactor with shit opinions and a huge unpopular-young-nerd chip on his shoulder, who comes off as being one mildly concerted troll effort away from a psych ward at all times. And probably not even that, judging from Graham Linehan's life trajectory.

[1] besides Siskind using him to broaden his influence on incels and gamergaters.

 

An excerpt has surfaced from the AI2027 podcast with Siskind and the ex-AI researcher, where the dear doctor makes the case for how an AGI could build an army of terminators in a year if it wanted.

It goes something like: OpenAI is worth as much as all US car companies (except Tesla) combined, so it could buy up every car factory and convert it to a murderbot factory, because that's kind of like what the US gov did in WW2 to build bombers, reaching peak capacity in three years; and an AGI would obviously be more efficient than a US wartime gov, so let's say one year. Generally a completely unassailable syllogism from very serious people.

Even /r/ssc commenters are calling him out about the whole AI doomer thing getting more noticeably culty than usual. edit: The thread even features a rare, heavily downvoted Siskind post, at -10 at the time of this edit.

The latter part of the clip is the interviewer pointing out that there might be technological bottlenecks that could require upending our entire economic model before stuff like curing cancer could be achieved, positing that if we somehow had AGI-like tech in the 1960s it would probably have had to use its limited means to invent the entire tech tree that leads to late-2020s GPUs out of thin air, international supply chains and all, before starting on the road to becoming really useful.

Siskind then goes "nuh-uh!" and ultimately proceeds to give Elon's metaphorical asshole a tongue bath of unprecedented depth and rigor, all but claiming that what's keeping modern technology down is the inability to extract more man-hours from Grimes' ex, and that's how we should view the eventual AGI-LLMs: like wittle Elons that don't need sleep. And didn't you know, having non-experts micromanage everything in a project is cool and awesome actually.

 

Kind of sounds like ultimately it would have been very illegal to do.

"We made the decision for the nonprofit to retain control of OpenAI after hearing from civic leaders and engaging in constructive dialogue with the offices of the Attorney General of Delaware and the Attorney General of California," OpenAI board chairman Bret Taylor said in a statement.

Asked about Musk's suit on a call with reporters, Altman said, "You all are obsessed with Elon, that's your job — like, more power to you. But we are here to think about our mission and figure out how to enable that. And that mission has not changed."

 

The types of information processed includes names, dates of birth, gender and ethnicity, and a number that identifies people on the police national computer.

Also to be shared – and listed under “special categories of personal data” – are “health markers which are expected to have significant predictive power”, such as data relating to mental health, addiction, suicide and vulnerability, and self-harm, as well as disability.

archive.is

 

Copy-pasting the rules from last year's thread:

Rules: no spoilers.

The other rules are made up as we go along.

Share code by link to a forge, home page, pastebin (Eric Wastl has one here) or code section in a comment.

 

Would've been way better if the author hadn't felt the need to occasionally hand it to Siskind for what amounts to keeping the mask on, even while he notes several instances where scotty openly discusses how maintaining a respectable facade is integral to his agenda of infecting polite society with neoreactionary fuckery.

 

AI Work Assistants Need a Lot of Handholding

Getting full value out of AI workplace assistants is turning out to require a heavy lift from enterprises. ‘It has been more work than anticipated,’ says one CIO.

aka we are currently in the process of realizing we are paying for the privilege of being the first to test an incomplete product.

Mandell said if she asks a question related to 2024 data, the AI tool might deliver an answer based on 2023 data. At Cargill, an AI tool failed to correctly answer a straightforward question about who is on the company’s executive team, the agricultural giant said. At Eli Lilly, a tool gave incorrect answers to questions about expense policies, said Diogo Rau, the pharmaceutical firm’s chief information and digital officer.

I mean, imagine all the non-obvious stuff it must be getting wrong at the same time.

He said the company is regularly updating and refining its data to ensure accurate results from AI tools accessing it. That process includes the organization’s data engineers validating and cleaning up incoming data, and curating it into a “golden record,” with no contradictory or duplicate information.

Please stop feeding the thing too much information, you're making it confused.

Some of the challenges with Copilot are related to the complicated art of prompting, Spataro said. Users might not understand how much context they actually need to give Copilot to get the right answer, he said, but he added that Copilot itself could also get better at asking for more context when it needs it.

Yeah, exactly like all the tech demos showed -- wait a minute!

[Google Cloud Chief Evangelist Richard Seroter said] “If you don’t have your data house in order, AI is going to be less valuable than it would be if it was,” he said. “You can’t just buy six units of AI and then magically change your business.”

Never mind that that's exactly how we've been marketing it.

Oh well, I guess you'll just have to wait for chatgpt-6.66, which will surely fix everything, while voiced by Charlize Theron's non-union equivalent.

 

An AI company has been generating porn with gamers' idle GPU time in exchange for Fortnite skins and Roblox gift cards

"some workloads may generate images, text or video of a mature nature", and that any adult content generated is wiped from a users system as soon as the workload is completed.

However, one of Salad's clients is CivitAi, a platform for sharing AI-generated images which has previously been investigated by 404 Media. It found that the service hosts image-generating AI models of specific people, whose image can then be combined with pornographic AI models to generate non-consensual sexual images.

Investigation link: https://www.404media.co/inside-the-ai-porn-marketplace-where-everything-and-everyone-is-for-sale/

 

For Thursday's sentencing the US government indicated they would be happy with a 40-50 year prison sentence, and in the list of reasons they cite there's this gem:

  1. Bankman-Fried's effective altruism and own statements about risk suggest he would be likely to commit another fraud if he determined it had high enough "expected value". They point to Caroline Ellison's testimony in which she said that Bankman-Fried had expressed to her that he would "be happy to flip a coin, if it came up tails and the world was destroyed, as long as if it came up heads the world would be like more than twice as good". They also point to Bankman-Fried's "own 'calculations'" described in his sentencing memo, in which he says his life now has negative expected value. "Such a calculus will inevitably lead him to trying again," they write.

Turns out making it a point of pride that you have the morality of an anime villain does not endear you to prosecutors, who knew.
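
For anyone who wants the coin-flip "calculus" spelled out, here's a minimal sketch; the 2.1 payoff multiplier is my made-up stand-in, since the quote only says "more than twice as good":

```python
# Toy model of the coin-flip bet, with the current world's value set to 1.0.
# The 2.1 multiplier is invented for illustration; he only said "more than twice".

def expected_value(p_heads: float, win_multiplier: float) -> float:
    """Expected world-value after one flip: tails destroys everything."""
    p_tails = 1.0 - p_heads
    return p_tails * 0.0 + p_heads * win_multiplier

print(expected_value(0.5, 2.1))  # 1.05 > 1.0, so the "calculus" says flip

# The catch: the same logic says keep flipping, and after n flips the
# probability the world still exists at all is 0.5 ** n.
print(0.5 ** 10)  # ~0.001
```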

Bonus: SBF's lawyers' list of assertions for asking for a shorter sentence includes this hilarious bit of reasoning:

They argue that Bankman-Fried would not reoffend, for reasons including that "he would sooner suffer than bring disrepute to any philanthropic movement."

 

rootclaim appears to be yet another group of people who, having stumbled upon the idea of Bayes' rule as a good-enough alternative to critical thinking, decided to try their luck at becoming a Serious and Important Arbiter of Truth in a Post-Mainstream-Journalism World.

This includes a Randi-esque challenge where they'll take a $100K bet that you can't prove them wrong on a select group of topics they've done deep dives on, like whether the 2020 election was stolen (91% nay) or whether covid was man-made and leaked from a lab (89% yay).

Also their methodology yields results like 95% certainty on Usain Bolt never having used PEDs, so it's not entirely surprising that the first person to take their challenge appears to have wiped the floor with them.

Don't worry though, they have taken the results of the debate to heart, and according to their postmortem blogpost they learned many important lessons, like how they need to (checks notes) gameplan against the rules of the debate better? What a way to spend $100K... Maybe once you've reached a conclusion using the Sacred Method, changing your mind becomes difficult.

I've included the novel-length judges' opinions in the links below; a cursory look indicates they are notably less charitable towards rootclaim's views than the postmortem lets on, pointing at stuff like logical inconsistencies and the inclusion of data that on closer inspection appears basically irrelevant to the thing they are trying to model probabilities for.
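
For a sense of how the Sacred Method manufactures those confident percentages, here's a minimal sketch with invented numbers (mine, not rootclaim's actual figures): in odds form, Bayes' rule just multiplies your prior odds by one likelihood ratio per piece of evidence, so stacking enough hand-assigned ratios walks any 50/50 prior to near certainty.

```python
# Odds-form Bayesian updating: posterior odds = prior odds x the product of
# the likelihood ratios assigned to each piece of evidence.
# All numbers below are invented for illustration.

def posterior_probability(prior_prob: float, likelihood_ratios: list[float]) -> float:
    odds = prior_prob / (1.0 - prior_prob)  # convert probability to odds
    for lr in likelihood_ratios:
        odds *= lr                          # one multiplicative update per item
    return odds / (1.0 + odds)              # back to a probability

# Ten pieces of "evidence", each generously scored as favoring your
# hypothesis 2:1. Individually weak, together they push a 50/50 prior
# past 99.9% -- as long as you pretend they're all independent and relevant.
print(posterior_probability(0.5, [2.0] * 10))  # ~0.999
```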

There's also like 18 hours of video of the debate if anyone wants to really get into it, but I'll tap out here.

ssc reddit thread

quantian's short writeup on the birdsite, will post screens in comments

pdf of judge's opinion that isn't quite book length, 27 pages, judge is a microbiologist and immunologist PhD

pdf of other judge's opinion that's 87 pages, judge is an applied mathematician PhD with a background in mathematical virology -- despite the length this is better organized and generally way more readable, if you can spare the time.

rootclaim's post mortem blogpost, includes more links to debate material and judge's opinions.

edit: added additional details to the pdf descriptions.

 

edited to add tl;dr: Siskind seems ticked off because recent papers on the genetics of schizophrenia are increasingly pointing out that at current minuscule levels of prevalence, even with the commonly accepted 80% heritability, actually developing the disorder is all but impossible unless at least some of the environmental factors are also in play. This is understandably very worrisome, since it indicates that even high-heritability issues might be solvable without immediately employing eugenics.

Also notable because I don't think it's very often that eugenics grievances breach the surface in such an obvious way in a public Siskind post, including the claim that the whole thing is just HBD denialists spreading FUD:

People really hate the finding that most diseases are substantially (often primarily) genetic. There’s a whole toolbox that people in denial about this use to sow doubt. Usually it involves misunderstanding polygenicity/omnigenicity, or confusing GWAS’ current inability to detect a gene with the gene not existing. I hope most people are already wise to these tactics.

 

... while at the same time not really worth worrying about, so we should be concentrating on unnamed alleged mid-term risks.

EY tweets are probably the lowest-effort sneerclub content possible, but the birdsite threw this in my face this morning, so it's only fair you suffer too. Transcript follows:

Andrew Ng wrote:

In AI, the ratio of attention on hypothetical, future, forms of harm to actual, current, realized forms of harm seems out of whack.

Many of the hypothetical forms of harm, like AI "taking over", are based on highly questionable hypotheses about what technology that does not currently exist might do.

Every field should examine both future and current problems. But is there any other engineering discipline where this much attention is on hypothetical problems rather than actual problems?

EY replied:

I think when the near-term harm is massive numbers of young men and women dropping out of the human dating market, and the mid-term harm is the utter extermination of humanity, it makes sense to focus on policies motivated by preventing mid-term harm, if there's even a trade-off.
