zogwarg

joined 2 years ago
[–] zogwarg@awful.systems 7 points 2 days ago

An interesting talk on the impact of AI slop bug bounty submissions on the curl project (youtube).

[–] zogwarg@awful.systems 1 points 4 days ago

I've definitely heard some of those in real life.

[–] zogwarg@awful.systems 16 points 4 days ago (1 children)

Having spent too much time listening to his shit, I don't think it's purely propagandistic. What he describes is too esoteric to work as effective propaganda; I think some of it is the Nazis-being-drawn-to-the-occult type of shit.

[–] zogwarg@awful.systems 2 points 5 days ago

We have:

No more sycophancy—now the AI tells you what it believes. [...] We get common knowledge, which recently seems like an endangered species.

Followed by:

We could also have different versions of articles optimized for different audiences. The question is, how many audiences, but I think that for most articles, two good options would be “for a 12 years old child” and “standard encyclopedia article”. Maybe further split the adult audience to “layman” and “expert”?

You have got to love the consistency.

And the accidentally (or not so accidentally?) imperialistic:

The first idea is translation to languages other than English. Those languages often have fewer speakers, and consequently fewer Wikipedia volunteers. But for AI encyclopedia, volunteers are not a bottleneck. The easiest thing it could do is a 1:1 translation from the English version. But it could also add sources written in the other language, optimize the article for a different audience, etc.

And also a deep misunderstanding of translation: there is no such thing as a 1:1 translation; it always requires re-interpretation.

[–] zogwarg@awful.systems 7 points 1 week ago (1 children)

My eyes are bleeding. WARNING: psychic damage will occur.

[–] zogwarg@awful.systems 2 points 1 week ago

When I was a kid in France it was BASIC on TI and Casio graphing calculators. While in principle I agree that not every child will enjoy math, the sieve of Eratosthenes, LCM, and GCD are good exercises for a first program. And I think it's easy to grasp that it's a lot less tedious to write a program for it than to do it by hand.
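For a sense of scale, here's a minimal sketch in Python (a stand-in for the calculator BASIC we actually wrote; the function names are just illustrative):

```python
def sieve(n):
    """Return all primes up to n using the sieve of Eratosthenes."""
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            # Cross out every multiple of p, starting from p*p.
            for multiple in range(p * p, n + 1, p):
                is_prime[multiple] = False
    return [i for i, prime in enumerate(is_prime) if prime]

def gcd(a, b):
    """Greatest common divisor by Euclid's algorithm."""
    while b:
        a, b = b, a % b
    return a

def lcm(a, b):
    """Least common multiple via the gcd."""
    return a * b // gcd(a, b)

print(sieve(50))                 # [2, 3, 5, ..., 47]
print(gcd(84, 36), lcm(84, 36))  # 12 252
```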

[–] zogwarg@awful.systems 9 points 1 week ago (1 children)

I was thinking about why so many in the radical left participate in "speedrunning". The reason is the left's lack of work ethic ('go fast' rather than 'do it right') and, in a Petersonian sense, to elevate alternative sexual archetypes in the marketplace ('fastest mario'). Obviously, there are exceptions to this and some people more in the center or right also "speedrun". However, they more than sufficient to prove the rule, rather than contrast it. Consider how woke GDQ has been, almost since the very beginning. Your eyes will start to open. Returning to the topic of the work ethic... A "speedrunner" may well spend hours a day at their craft, but this is ultimately a meaningless exercise, since they will ultimately accomplish exactly that which is done in less collective time by a casual player. This is thus a waste of effort on the behalf of the "speedrunner". Put more simply, they are spending their work effort on something that someone else has already done (and done in a way deemed 'correct' by the creator of the artwork). Why do they do this? The answer is quite obvious if you think about it. The goal is the illusion of speed and the desire (SUBCONSCIOUS) to promote radical leftist, borderline Communist ideals of how easy work is. Everyone always says that "speedruns" look easy. That is part of the aesthetic. Think about the phrase "fully automated luxury Communism" in the context of "speedrunning" and I strongly suspect that things will start to 'click' in your mind. What happens to the individual in this? Individual accomplishment in "speedrunning" is simply waiting for another person to steal your techniques in order to defeat you. Where is something like "intellectual property" or "patent" in this necessarily communitarian process? Now, as to the sexual archetype model and 'speedrunning' generally... If you have any passing familiarity with Jordan Peterson's broader oeuvre and of Jungian psychology, you likely already know where I am going with this. However, I will say more for the uninitiated. Keep this passage from Maps of Meaning (91) in mind: "The Archetypal Son... continually reconstructs defined territory, as a consequence of the 'assimilation' of the unknown [as a consequence of 'incestuous' (that is, 'sexual' – read creative) union with the Great Mother]" In other words, there is a connection between 'sexuality' and creativity that we see throughout time (as Peterson points out with Tiamat and other examples). In the sexual marketplace, which archetypes are simultaneously deemed the most creative and valued the highest? The answer is obviously entrepreneurs like Elon Musk and others. Given that we evolved and each thing we do must have an evolutionary purpose (OR CAUSE), what archetype is the 'speedrunner' engaged in, who is accomplishing nothing new? They are aiming to make a new sexual archetype, based upon 'speed' rather than 'doing things right' and refuse ownership of what few innovations they can provide to their own scene, denying creativity within their very own sexual archetype. This is necessarily leftist. The obvious protest to this would be the 'glitchless 100% run', which in many ways does aim to play the game 'as intended' but seems to simply add the element of 'speed' to the equation. This objection is ultimately meaningless when one considers how long a game is intended to be played, in net, by the creators, even when under '100%' conditions. There is still time and effort wasted for no reason other than the ones I proposed above. 
By now, I am sure that I have bothered a number of you and rustled quite a few of your feathers. I am not saying that 'speedrunning' is bad, but rather that, thinking about the topic philosophically, there are dangerous elements within it. That is all.

[–] zogwarg@awful.systems 5 points 1 week ago

Are they drawn to the cult because they are obsessed with status, or does the cult foster this obsession? Yes.

[–] zogwarg@awful.systems 7 points 1 week ago* (last edited 1 week ago)

It's almost endearing (or sad) that he believes (or very strongly wants to believe) his experience is "typical". Exploring the boundaries of what you are attracted to typically doesn't involve this much evo-psych psychobabble, or even this much fragile masculinity.

[–] zogwarg@awful.systems 2 points 1 week ago (1 children)
[–] zogwarg@awful.systems 3 points 1 week ago

Some of it is driven by translation agencies, which will refer work to freelance translators.

I would say the biggest gap is that many customers aren't even bothering to use translators at all, and the ones that do realize the output needs fixing up don't really understand the work involved; many people misunderstand translation as being a 1:1 process and think that machine translation gets you most of the way there.

It's also a question of whether they're willing to pay that much more when the shitty translation is "good enough".

One big issue is that translation has a low barrier to entry, and many people will accept stupid work at stupid rates, so to keep rates high you have to prove the added value.

(Proving the added value has also gotten harder, as some clients, even more often than before, will "correct" your work before publishing it, as highlighted in the article.)

[–] zogwarg@awful.systems 12 points 2 weeks ago (2 children)

It's also a lot less pleasant a task: it's like wearing a straitjacket, and compared to CAT (e.g. automatically using glossaries for technical terms) it actually slows you down if the machine translation is quite far from how you would naturally phrase things.

Source: my parents are professional translators. (They've certainly seen work dry up; they don't do MTPE, it's still not really worth their time; they still get $$$ for critically important stuff and live interpreting. [Live interpreting is definitely a skill that takes time to learn compared to translation.])

 

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Semi-obligatory thanks to @dgerard for starting this)

 

Source: nitter, twitter

Transcribed:

Max Tegmark (@tegmark):
No, LLM's aren't mere stochastic parrots: Llama-2 contains a detailed model of the world, quite literally! We even discover a "longitude neuron"

Wes Gurnee (@wesg52):
Do language models have an internal world model? A sense of time? At multiple spatiotemporal scales?
In a new paper with @tegmark we provide evidence that they do by finding a literal map of the world inside the activations of Llama-2! [image with colorful dots on a map]


With this dastardly deliberate simplification of what it means to have a world model, we've been struck a mortal blow in our skepticism towards LLMs; we have no choice but to convert, surely!

(*) Asterisk:
Not an actual literal map. What they really mean is that they've trained "linear probes" (each its own mini-model) on the activation layers, over a bunch of inputs, minimizing loss against latitude and longitude (and/or time, blah blah).

And yes, from the activations you can get a fuzzy distribution of lat,long on a map, and yes they've been able to isolate individual "neurons" whose activation seems to correlate with latitude and longitude. (Frankly, not being able to find one would have been surprising to me; this doesn't mean LLMs aren't just big statistical machines, in this case trained on data containing literal lat,long tuples for cities in particular.)
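For the curious, a rough sketch of what training such a probe amounts to, in Python with scikit-learn (the shapes, names, and random stand-in data are all hypothetical, not the paper's actual setup):

```python
import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical setup: one activation vector per place-name prompt,
# taken from some intermediate layer of the LLM.
n_prompts, hidden_dim = 5000, 4096
activations = np.random.randn(n_prompts, hidden_dim)   # stand-in for real activations
lat_long = np.random.uniform([-90, -180], [90, 180],   # stand-in for real city coordinates
                             size=(n_prompts, 2))

# The "probe" is just a linear regression from activations to (lat, long),
# fit to minimize squared error -- nothing about the LLM itself changes.
probe = Ridge(alpha=1.0).fit(activations, lat_long)
predicted = probe.predict(activations)  # fuzzy lat/long you can scatter on a map

# A "longitude neuron" is then just an activation dimension whose value
# correlates strongly with longitude across the prompts.
corr_with_longitude = np.array([
    np.corrcoef(activations[:, i], lat_long[:, 1])[0, 1] for i in range(hidden_dim)
])
print("most longitude-correlated dimension:", corr_with_longitude.argmax())
```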

It's a neat visualization and result, but it is sort of comically missing the point.


Bonus sneers from @emilymbender:

  • You know what's most striking about this graphic? It's not that mentions of people/cities/etc from different continents cluster together in terms of word co-occurrences. It's just how sparse the data from the Global South are. -- Also, no, that's not what "world model" means if you're talking about the relevance of world models to language understanding. (source)
  • "We can overlay it on a map" != "world model" (source)
 

Nitter link

With interspersed sneerious rephrasing:

In the close vicinity of sorta-maybe-human-level general-ish AI, there may not be any sharp border between levels of increasing generality, or any objectively correct place to call it AGI. Any process is continuous if you zoom in close enough.

The profound mysteries of reality carving mean I get to move the goalposts as much as I want. Besides, I need to reiterate now that the foompocalypse is imminent!

Unless, empirically, somewhere along the line there's a cascade of related abilities snowballing. In which case we will then say, post facto, that there's a jump to hyperspace which happens at that point; and we'll probably call that "the threshold of AGI", after the fact.

I can't prove this, but it's the central tenet of my faith: we will recognize the face of god when we see it. I regret that our hindsight-is-20/20 event is so ~~conveniently~~ inconveniently placed in the future, the bad one no less.

Theory doesn't predict-with-certainty that any such jump happens for AIs short of superhuman.

See how much authority I have, it is not "My Theory" it is "The Theory", I have stared into the abyss and it peered back and marked me as its prophet.

If you zoom out on an evolutionary scale, that sort of capability jump empirically happened with humans--suddenly popping out writing and shortly after spaceships, in a tiny fragment of evolutionary time, without much further scaling of their brains.

The forward arrow of Progress™ is inevitable! S-curves don't exist! The y-axis is practically infinite!
We should extrapolate only from the past (eugenically scaled, certainly) century!
Almost 10,000 years of written history, and millions of years of unwritten history for the human family, count for nothing!

I don't know a theoretically inevitable reason to predict certainly that some sharp jump like that happens with LLM scaling at a point before the world ends. There obviously could be a cascade like that for all I currently know; and there could also be a theoretical insight which would make that prediction obviously necessary. It's just that I don't have any such knowledge myself.

I know the AI god is a NeCeSSarY outcome, I'm not sure where to plant the goalposts for LLMs and still be taken seriously. See how humble I am for admitting fallibility on this specific topic.

Absent that sort of human-style sudden capability jump, we may instead see an increasingly complicated debate about "how general is the latest AI exactly" and then "is this AI as general as a human yet", which--if all hell doesn't break loose at some earlier point--softly shifts over to "is this AI smarter and more general than the average human". The world didn't end when John von Neumann came along--albeit only one of him, running at a human speed.

Let me vaguely echo some of my beliefs:

  • History is driven by great men (one of which I must be, but cannot so openly say), see our dearest elevated and canonized von Neumann.
  • JvN was so much above the average plebeian man (IQ and eugenics good?) and the AI god will be greater.
  • The greatest single entity/man will be the epitome of Intelligence™, breaking the wheel of history.

There isn't any objective fact about whether or not GPT-4 is a dumber-than-human "Artificial General Intelligence"; just a question of where you draw an arbitrary line about using the word "AGI". Albeit that itself is a drastically different state of affairs than in 2018, when there was no reasonable doubt that no publicly known program on the planet was worthy of being called an Artificial General Intelligence.

No no no, General (or Super) Intelligence is not a completely un-scoped metric. Again, it is merely a fuzzy boundary where I will be able to arbitrarily move the goalposts while claiming my opponents are the ones doing so!

We're now in the era where whether or not you call the current best stuff "AGI" is a question of definitions and taste. The world may or may not end abruptly before we reach a phase where only the evidence-oblivious are refusing to call publicly-demonstrated models "AGI".

Purity-testing ahoy, you will be instructed to say shibboleth three times and present your Asherah poles for inspection. Do these mean unbelievers not see these N-rays as I do? What do you mean that what we have (or almost have, I don't want to be too easily dismissed) is not evidence of sparks of intelligence?

All of this is to say that you should probably ignore attempts to say (or deniably hint) "We achieved AGI!" about the next round of capability gains.

Wasn't Sam the Altman so recently cheeky? He'll ruin my grift!

I model that this is partially trying to grab hype, and mostly trying to pull a false fire alarm in hopes of replacing hostile legislation with confusion. After all, if current tech is already "AGI", future tech couldn't be any worse or more dangerous than that, right? Why, there doesn't even exist any coherent concern you could talk about, once the word "AGI" only refers to things that you're already doing!

Again I reserve the right to remain arbitrarily alarmist to maintain my doom cult.

Pulling the AGI alarm could be appropriate if a research group saw a sudden cascade of sharply increased capabilities feeding into each other, whose result was unmistakeably human-general to anyone with eyes.

Observing intelligence is famously something eyes are SufFicIent for! No, this is not my implied racist judge-someone-by-the-color-of-their-skin values seeping through.

If that hasn't happened, though, deniably crying "AGI!" should be most obviously interpreted as enemy action to promote confusion; under the cover of selfishly grabbing for hype; as carried out based on carefully blind political instincts that wordlessly notice the benefit to themselves of their 'jokes' or 'choice of terminology' without there being allowed to be a conscious plan about that.

See, unbelievers! I can also detect the currents of misleading hype; I am no buffoon. Only these hypesters are not undermining your concerns, they are undermining mine: namely, damaging our ability to appear serious and recruit new cult members.

 

source nitter link

@EY
This advice won't be for everyone, but: anytime you're tempted to say "I was traumatized by X", try reframing this in your internal dialogue as "After X, my brain incorrectly learned that Y".

I have to admit, for a brief moment I thought he was correctly expressing displeasure at twitter.

@EY
This is of course a dangerous sort of tweet, but I predict that including variables into it will keep out the worst of the online riff-raff - the would-be bullies will correctly predict that their audiences' eyes would glaze over on reading a QT with variables.

Fool! This bully (is it weird to speak in the third person?) thinks using variables here makes it MORE sneerworthy, especially since this appears to be general advice, yet I would struggle to think of a single instance in my life where it's been applicable.

 

Source Tweet

@ESYudkowsky: Remember when you were a kid and thought you might have psychic powers, so you dealt yourself face-down playing cards and tried to guess whether they were red or black, and recorded your accuracy rate over several batches of tries?

|

And then remember how you had absolutely no idea to do stats at that age, so you stayed confused for a while longer?


Apologies for the use of the Japanese, but it is a very apt description: https://en.wikipedia.org/wiki/Chūnibyō
