corbin

joined 2 years ago
[–] corbin@awful.systems 3 points 3 hours ago

We have EFTs via ABA routing numbers, and they are common for B2B transactions. Retail customers prefer payment processors for the ability to partially or totally reverse fraudulent transactions, though; contrast the fairly positive reputation of PayPal's Venmo with the big banks' Zelle, which doesn't offer as much fraud protection.

Now, you might argue that folks in the USA are too eager to transmit money to anybody that asks, and that they should put more effort into resisting being defrauded.

[–] corbin@awful.systems 6 points 3 hours ago

Side sneer: the table-saw quote comes from this skeet by Simon W. I've concluded that Simon doesn't know much about the practice of woodworking, even though he seems to have looked up the basics of the history. Meanwhile I have this cool-looking chair design open in a side tab and hope to build a couple during July.

Here's a better take! Slop-bots are like wood glue: a slurry of proteins that can join any two pieces of wood, Whatever their shapes may be, as long as they have a flat surface in common. (Don't ask where the proteins come from.) It's not hard to learn to mix in sawdust so that Whatever non-flat shapes can be joined. Or, if we start with flat pieces of Whatever wood, we can make plywood. Honestly, sawdust is inevitable and easier than planing, so just throw Whatever wood into a chipper and use the shards to make MDF. MDF is so cheap that we can imagine Whatever shape made with lumber, conceptually decompose it into Whatever pieces of MDF are manufacturable, conceptually slice those pieces into Whatever is flat and easy to ship, and we get flat-paks.

So how did flat-paks change carpentry? Well, ignoring that my family has always made their own furniture in the garage, my grandparents bought from trusted family & friends, my parents bought from Eddie Bauer, and I buy from IKEA. My grandparents' furniture was sold as part of their estate, my parents still have a few pieces like dining tables and chairs, and my furniture needs to be replaced every decade because it is cheap and falls apart relatively quickly. Similarly, using slop-bots to produce software is going to make a cheap good that needs to be replaced often and has high maintenance costs.

To be fair to Simon, the cheapness of IKEA furniture means that it can be readily hacked. I've hacked lots of my furniture precisely because I have a spare flat-pak in the closet! But software is already cheap to version and backup, so it can be hacked too.

[–] corbin@awful.systems 6 points 5 days ago

Frankly this isn't even half as good as their off-the-cuff comments two years ago. There's a lot of poser energy here as they try to invoke the concepts of "senior engineer" and "CEO" as desirable, achievable, precise vocations rather than job titles. In particular, this bit:

Look, CEOs, I'm one of you so I get it.

This is one of the most out-of-touch positions I've ever seen. In no particular order: CEOs generally don't understand, CEOs form a Big Club and you ain't in it, CEOs don't actually have power in their organization but exercise power delegated from the board of directors, CEOs are inherently disrespectable because their jobs are superfluous, and finally CEOs don't take business advice from one-person companies unless it's through a paid contract.

The job title naturally associated with a one-person limited-liability company is usually "manager" or "owner", and it says nothing about job responsibilities.

Finally, while I think that their zest for fiction is admirable, it would help to critically consider what they're endorsing. Dune's Butlerian Jihad resulted in the neo-Catholicism which suffuses the narrative; it's not a desirable outcome. Paraphrasing the Unabomber is in fairly poor taste, especially considering that they are sitting in a city in Canada and not a shack in the wilderness of Montana.

[–] corbin@awful.systems 8 points 5 days ago

Well, yes. It's not a new concept; it was a staple of Cold War sci-fi like The Three Stigmata, and we know from studies of e.g. Pentecostal worship that it is pretty easy to broadcast a suggestion to a large group of vulnerable people and get at least some of them to radically alter their worldview. We also know a reliable formula for changing people's beliefs; we use the same formula in sensitivity training as we did in MKUltra, including belief challenges, suspension of disbelief, induction/inception, lovebombing, and depersonalization. We also have a constant train of psychologists attempting to nudgelord society, gently pushing mass suggestions and trying to slowly change opinions at scale.

Fundamentally your sneer is a little incomplete. MKUltra wasn't just about forcing people to challenge their beliefs via argumentation and occult indoctrination, but also psychoactive inhibition-lowering drugs. In this setting, the drugs are administered after institutionalization.

[–] corbin@awful.systems 2 points 6 days ago

Read carefully. On p1-2, the judge makes it clear that "the incentive for human beings to create artistic and scientific works" is "the ability of copyright holders to make money from their works"; to the law, there isn't any other reason to publish art. This is why I'm so dour on copyright, folks; it's not for you who love to make art and prize it for its cultural impact and expressive power, but for folks who want to trade art for money.

On p3, a contrast appears between Chhabria and Alsup (yes, that Alsup); the latter knows what a computer is and how to program it, and this makes him less respectful of copyright overall. Chhabria doesn't really hide that they think Meta didn't earn their summary judgment, presumably because they disagree with Alsup about whether this is a "competitive or creative displacement." That's fair given the central pillar of the decision on p4:

Llama is not capable of generating enough text from the plaintiffs' books to matter, and the plaintiffs are not entitled to the market for licensing their works as AI training data.

An analogy might make this clearer. Suppose a transient person on a street corner is babbling. Occasionally they spout what sounds like a quote from a Star Wars film. Intrigued, we prompt the transient to recite the entirety of Star Wars, and they proceed to mostly recreate the original film, complete with sound effects and voice acting, only getting a few details wrong. Does it matter whether the transient paid to watch the original film (as opposed to somebody else paying the fee)? No, their recreation might be candid and yet not faithful enough to infringe. Is Lucas entitled to a licensing fee for every time the transient happens to learn something about Star Wars? Eh, not yet, but Disney's working on it. This is why everybody is so concerned about whether the material was pirated, regardless of how it was paid for; they want to say that what's disallowed is not the babbling on the street but the access to the copyrighted material itself.

Almost every technical claim on p8-9 is simplified to the point of incorrectness. They are talking points about Transformers turned into aphorisms and then axioms. The wrongest claim is on p9, that "to be able to generate a wide range of text … an LLM's training data set must be large and diverse" (it need only be diverse, not large), followed by the claim that an LLM's "memory" must be trained on books or equivalent "especially valuable training data" in order to "work with larger amounts of text at once" (conflating hyperparameters with learned parameters). These claims show how the judge fails to actually engage with the technical details and thus paints with a broad brush dipped in the wrong color.
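The conflation is easy to make concrete. Here's a toy sketch (every name and number is illustrative, not any real model's): the context window is a hyperparameter fixed by engineers before training ever sees a book, while the learned parameters are what the training data actually shapes.

```python
from dataclasses import dataclass

# Hyperparameters are chosen up front by engineers; they are not
# learned from books or anything else in the training set.
@dataclass
class Hyperparameters:
    context_length: int = 2048  # "how much text at once" -- fixed, not learned
    d_model: int = 512
    n_layers: int = 8

def parameter_count(hp: Hyperparameters) -> int:
    # Crude estimate of the *learned* parameters: per-layer weight
    # matrices dominated by d_model^2 terms. These are what training
    # data shapes; the context window above is not among them.
    return hp.n_layers * 12 * hp.d_model ** 2
```

The judge's "memory" language blurs the first field with the second function's output.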

On p12, the technical wrongness overflows. Any language model can be forced to replicate a copyrighted work, or to avoid replication, by sampling techniques; this is why perplexity is so important as a metric. What would have genuinely been interesting is whether Llama is low-perplexity on the copyrighted works, not the rate of exact replications, since that's the key to getting Llama to produce unlimited Harry Potter slash or whatever.
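To make the metric concrete, here's a minimal sketch of perplexity computed from per-token log-probabilities; a real logprobs list would come from scoring the text with the model, and the numbers in the comment are made up for illustration.

```python
import math

def perplexity(logprobs):
    """exp of the mean negative log-probability per token."""
    return math.exp(-sum(logprobs) / len(logprobs))

# A model that assigns probability 0.5 to every token of a passage
# scores perplexity 2 on it; perplexity near 1 on a copyrighted work
# is the memorization signal, regardless of the exact-replication rate.
```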

On p17, the judge ought to read up on how Shannon and Markov initially figured out information theory. LLMs read like Shannon's model, and in that sense they're just like humans: left to right, top to bottom, chunking characters into words, predicting shapes and punctuation. Pretending otherwise is powdered-wig sophistry or perhaps robophobia.
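Shannon's model fits in a few lines, which is part of the point. A toy character-level version (corpus and names are mine):

```python
import random
from collections import Counter, defaultdict

def train(text):
    """Count which character follows each character."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(text, text[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(counts, start, n, rng=random.Random(0)):
    """Read/write left to right, sampling one character at a time."""
    out = [start]
    for _ in range(n):
        options = counts.get(out[-1])
        if not options:
            break
        chars, weights = zip(*options.items())
        out.append(rng.choices(chars, weights=weights)[0])
    return "".join(out)
```

Scale the state up from one character to a few thousand tokens and the reading order is the same: left to right, predicting what comes next.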

On p23 Meta cites fuckin' Sega v. Accolade! This is how I know y'all don't read the opinions; you'd be hyped too. I want to see them cite Galoob next. For those of you who don't remember the 90s, the NES and Genesis were video game consoles, and these cases established our right to emulate them and write our own games for them.

p28-36 is the judge giving free legal advice. I find their line of argumentation tenuous. Consider Minions; Minions are bad, Minions are generic, and Minions can be used to crank out infinite amounts of slop. But, as established at the top, whoever owns Minions has the right to profit from Minions, and that is the lone incentive by which they go to market. However, Minions are arbitrary; there's no reason why they should do well in the market, given how generic and bad they are. So if we accept their argument then copyright becomes an excuse for arbitrary winners to extract rent from cultural artifacts. For a serious example, look up the ironic commercialization of the Monopoly brand.

[–] corbin@awful.systems 1 points 1 week ago

Top-level commenters would do well to read Authors Guild v. Google, from two decades ago. They're also invited to rend their garments and gnash their teeth at Google, if they like.

[–] corbin@awful.systems 21 points 1 week ago

Last Week Tonight's rant of the week is about AI slop. A YouTube video is available here. Their presentation is sufficiently down-to-earth to be shareable with parents and extended family, focusing on fake viral videos spreading via Facebook, Instagram, and Pinterest; and dissecting several examples of slop in order to help inoculate the audience.

[–] corbin@awful.systems 0 points 3 weeks ago (3 children)

What a deeply dishonorable lawsuit. The complaint is essentially that Disney and Universal deserve to be big powerful movie studios that employ and systematically disenfranchise "millions of" artists (p8).

Disney claims authorship over Darth Vader (Lucas) and Yoda (Oz); Elsa and Ariel (Andersen); the folk characters Aladdin, Mulan, and Snow White; Lightning McQueen & Buzz Lightyear (Lasseter et al), Sulley (Gerson & Stanton), Iron Man (Lee, Kirby, et al), and Homer Simpson (Groening). Not only did Disney not design or produce any of these characters, it purchased the rights to them. I will give Universal partial credit for not claiming to invent any of their infamous movie monsters, but they do claim to have created Shrek (Steig). Still, this is some original-character-do-not-steal snottiness; these avaricious executives and attorneys appropriated art from artists and are claiming it as their own so that they can sue another appropriator.

Here is a sample of their attitude, p16 of the original complaint:

Disney's copyright registrations for the entertainment properties in The Simpsons franchise encompass the central characters within.

See, they're the original creator and designated beneficiary, because they have Piece of Paper, signed by Government Authority, and therefore they are Owner. Who the fuck are Matt Groening or Tracey Ullman?

I will not contest Universal's claim to Minions.

One weakness of the claim is that it's not clear whether Midjourney infringes, Midjourney's subscribers infringe, or Midjourney infringes when collaborating with its subscribers. It seems like they're going to argue that Midjourney commits the infringing act, although p104 contains hedges that will allow Disney to argue either way. Another weakness is the insistence that Midjourney could filter infringing queries but chooses not to; this is a standard part of amplifying damages in copyright claims, but it might not stand up under scrutiny, since Midjourney can argue that it's hard to, say, distinguish infringing queries from parodic or satirical queries, which would otherwise infringe but are permitted as fair use. On the other hand, this lawsuit could be an attempt to open a new front in Disney's long-standing attempt to eradicate fair use.

As usual, I'm not defending Midjourney, who I think stand on their own demerits. But I'm not ever going to suck Disney dick given what they've done to the animation community. I wish y'all would realize the folly of copyright already.

[–] corbin@awful.systems -1 points 1 month ago (1 children)

Incomplete sneer, ten-yard penalty. First down, plus coach has to go read Chasing the Rainbow: The Non-conscious Nature of Being (Oakley & Halligan, 2017) to see what psychology thinks of itself once the evidence is rounded up in one place.

[–] corbin@awful.systems -4 points 1 month ago (12 children)

I'm gonna be polite, but your position is deeply sneerworthy; I don't really respect folks who don't read. The article has quite a few quotes from neuroscientist Anil Seth (not to be confused with AI booster Anil Dash) who says that consciousness can be explained via neuroscience as a sort of post-hoc rationalizing hallucination akin to the multiple-drafts model; his POV helps deflate the AI hype. Quote:

There is a growing view among some thinkers that as AI becomes even more intelligent, the lights will suddenly turn on inside the machines and they will become conscious. Others, such as Prof Anil Seth who leads the Sussex University team, disagree, describing the view as "blindly optimistic and driven by human exceptionalism." … "We associate consciousness with intelligence and language because they go together in humans. But just because they go together in us, it doesn't mean they go together in general, for example in animals."

At the end of the article, another quote explains that Seth is broadly aligned with us about the dangers:

In just a few years, we may well be living in a world populated by humanoid robots and deepfakes that seem conscious, according to Prof Seth. He worries that we won't be able to resist believing that the AI has feelings and empathy, which could lead to new dangers. "It will mean that we trust these things more, share more data with them and be more open to persuasion." But the greater risk from the illusion of consciousness is a "moral corrosion", he says. "It will distort our moral priorities by making us devote more of our resources to caring for these systems at the expense of the real things in our lives" – meaning that we might have compassion for robots, but care less for other humans.

A pseudoscience has an illusory object of study. For example, parapsychology studies non-existent energy fields outside the Standard Model, and criminology asserts that not only do minds exist but some minds are criminal and some are not. Robotics/cybernetics/artificial intelligence studies control loops and systems with feedback, which do actually exist; further, the study of robots directly leads to improved safety in workplaces where robots can crush employees, so it's a useful science even if it turns out to be ill-founded. I think that your complaint would be better directed at specific AGI position papers published by techbros, but that would require reading. Still, I'll try to salvage your position:

Any field of study which presupposes that a mind is a discrete isolated event in spacetime is a pseudoscience. That is, fields oriented around neurology are scientific, but fields oriented around psychology are pseudoscientific. This position has no open evidence against it (because it's definitional!) and aligns with the expectations of Seth and others. It is compatible with definitions of mind given by Dennett and Hofstadter. It immediately forecloses the possibility that a computer can think or feel like humans; at best, maybe a computer could slowly poorly emulate a connectome.

[–] corbin@awful.systems 7 points 1 month ago

Oh, sorry. We're in agreement and my sentence was poorly constructed. The computation of a matrix multiplication usually requires at least pencil and paper, if not a computer. I can't compute anything larger than a 2 × 2. But I'll readily concede that Strassen's specific trick is simple enough that a mentalist could use it.
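For the curious, the trick itself: seven multiplications instead of the naive eight. A sketch in Python (the variable names are mine):

```python
def strassen_2x2(A, B):
    """Multiply two 2x2 matrices using Strassen's 7 multiplications."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    # Seven products instead of the naive eight.
    p1 = a * (f - h)
    p2 = (a + b) * h
    p3 = (c + d) * e
    p4 = d * (g - e)
    p5 = (a + d) * (e + h)
    p6 = (b - d) * (g + h)
    p7 = (a - c) * (e + f)
    # Recombine into the four entries of the product.
    return [[p5 + p4 - p2 + p6, p1 + p2],
            [p3 + p4, p1 + p5 - p3 - p7]]
```

Small enough to track mentally, which is the concession above.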

[–] corbin@awful.systems 8 points 1 month ago (2 children)

Only the word "theoretical" is outdated. The Beeping Busy Beaver problem is hard even with a Halting oracle, and we have a corresponding Beeping Busy Beaver Game.

 

Sorry, no sneer today. I'm tired of this to the point where I'm dreaming up new software licenses.

A trans person no longer felt safe in our community and is no longer developing. In response, at least four different forums full of a range of Linux users and developers (Lemmy #1, Lemmy #2, HN, Phoronix (screenshot)) posted their PII and anti-trans hate.

I don't have any solutions. I'm just so fucking disappointed in my peers and I feel a deep inadequacy at my inability to get these fuckwads to be less callous.

 

After a decade of cryptofascism and failed political activism, our dear friend jart is realizing that they don't really have much of a positive legacy. If only there was something they could have done about that.

 

In this big thread, over and over, people praise the Zuck-man for releasing Llama 3's weights. How magnanimous! How courteous! How devious!

Of course, Meta is doing this so that they don't have to worry about another 4chan leak of weights via Bittorrent.

 

Sometimes what is not said is as sneerworthy as what is said.

It is quite telling to me that HN's regulars and throwaway accounts have absolutely nothing to say about the analysis of cultural patterns.

 

Possibly the worst defense yet of Garry Tan's tweeting of death threats towards San Francisco's elected legislature. In yet more evidence for my "HN is a Nazi bar" thesis, this take is from an otherwise-respected cryptographer and security researcher. Choice quote:

sorry, but 2Pac is now dad music, I don't make the rules

Best sneer so far is this comment, which links to this Key & Peele sketch about violent rap lyrics in the context of gang violence.

 

Choice quote:

Actually I feel violated.

It's a KYC interview, not a police interrogation. I've always enjoyed KYC interviews; I get to talk about my business plans, or what I'm going to do with my loan, or how I ended up buying/selling stocks. It's hard to empathize with somebody who feels "violated" by small talk.

 

In today's episode, Yud tries to predict the future of computer science.

 

Choice quote:

Putting “ACAB” on my Tinder profile was an effective signaling move that dramatically improved my chances of matching with the tattooed and pierced cuties I was chasing.

 

As usual, I struggle to form a proper sneer in the face of such sheer wrongheadedness. The article is about a furry who was dating a Nazifur and was battered for it; the comments are full of complaints about the overreach of leftism. Choice quote:

Anti-fascists see fascism everywhere (your local police department) the same way the John Birch Society saw communism everywhere (Dwight Eisenhower.). Or maybe they are just jealous that the fascists have cool uniforms and boots. Or maybe they think their life isn’t meaningful enough and it has to be like a comic book or a WWII movie.

Well, I do wear a Captain America shirt often…

 

A well-respected pirate, neighbor, and Lisper is also a chud. Welcome to HN, the Nazi Bar where everybody's also an expert in technology.

 

Eminent domain? Never heard of it! Sounds like a fantasy from the "economical illiterate."

Edit: This entire thread is a trash fire, by the way. I'm only highlighting the silliest bit from one of the more aggressive landlords.
