[-] ebu@awful.systems 19 points 2 months ago* (last edited 2 months ago)

there were bits and pieces that made me feel like Jon Evans was being a tad too sympathetic to Eliezer and others whose track record really should warrant a somewhat greater degree of scepticism than he shows, but i had to tap out at this paragraph from chapter 6:

Scott Alexander is a Bay Area psychiatrist and a writer capable of absolutely magnificent, incisive, soulwrenching work ... with whom I often strongly disagree. Some of his arguments are truly illuminatory; some betray the intellectual side-stepping of a very smart person engaged in rationalization and/or unwillingness to accept the rest of the world will not adopt their worldview. (Many of his critics, unfortunately, are inferior writers who misunderstand his work, and furthermore suggest it’s written in bad faith, which I think is wholly incorrect.) But in fairness 90+% of humanity engages in such rationalization without even worrying about it. Alexander does, and challenges his own beliefs more than most.

the fact that Jon praises Scott's half-baked, anecdote-riddled, Red/Blue/Gray trichotomy as "incisive" (for playing the hits to his audience), and his appraisal of the meandering transhumanist non-sequitur reading of Allen Ginsberg's Howl as "soulwrenching" really threw me for a loop.

and then the later description of that ultimately rather banal New York Times piece as "long and bad" (a hilariously hypocritical set of adjectives for a self-proclaimed fan of some of Scott's work to use), and the slamming of Elizabeth Sandifer as being an "inferior writer who misunderstands Scott's work", for, uh, correctly analyzing Scott's tendencies to espouse and enable white supremacist and sexist rhetoric... yeah, it pretty much tanks my ability to take what Jon is writing at face value.

i don't get how, after spending so many words being gentle but firm about Eliezer's (lack of) accomplishments, he puts out such a full-throated defense of Scott Alexander (and the subsequent smearing of his """enemies"""). of all people, why him?

[-] ebu@awful.systems 15 points 3 months ago* (last edited 3 months ago)

Would you rather have a dozen back and forth interactions?

these aren't the only two possibilities. i've had some interactions where i got handed one ref sheet and a sentence description and the recipient was happy with the first sketch. i've had some where i got several pieces of references from different artists alongside paragraphs of descriptions, and there were still several dozen attempts. tossing in ai art just increases the volume, not the quality, of the interaction

Besides, this is something I've heard from other artists, so it's very much a matter of opinion.

i have interacted with hundreds of artists, and i have yet to meet an artist that does not, to at least some degree, have some kind of negative opinion on ai art, except those for whom image-generation models were their primary (or more commonly, only) tool for making art. so if there is such a group of artists that would be happy to be presented with ai art and asked to "make it like this", i have yet to find them

Annoying, sure, but not immoral.

annoying me is immoral actually

[-] ebu@awful.systems 17 points 3 months ago

as someone who only draws as a hobbyist, but who has taken commissions before, i think it would be very annoying to have a prospective client go "okay so here's what i want you to draw" and then send over ai-generated stuff. if only because i know said client is setting their expectations for the hyper-processed, over-tuned look of the machine instead of what i actually draw

[-] ebu@awful.systems 18 points 3 months ago

i couldn't resist

Reddit post titled "The Anti-AI crowd is so toxic and ridiculous that it's actually pushed me FURTHER into AI art"

at least when this rhetoric popped up around crypto and GameStop stocks, there was a get-rich-quick scheme attached to it. these fuckers are doing it for free

[-] ebu@awful.systems 20 points 4 months ago

it is a little entertaining to hear them do extended pontifications on what society would look like if we had pocket-size AGI, life-extension or immortality tech, total-immersion VR, actually-good brain-computer interfaces, mind uploading, etc. etc. and then turn around and pitch a fit when someone says "okay so imagine if there were a type of person that wasn't a guy or a girl"

[-] ebu@awful.systems 15 points 4 months ago

typically one prefers their questions be answered correctly. but hey, you are free to be wrong faster now

[-] ebu@awful.systems 15 points 4 months ago

The point is that even if the chances of [extinction by AGI] are extremely slim

the chances are zero. i don't buy into the idea that the "probability" of some made-up cataclysmic event is worth thinking about as any other number because technically you can't guarantee that a unicorn won't fart AGI into existence which in turn starts converting our bodies into office equipment

It's kind of like with the trinity nuclear test. Scientists were almost 100% confident that it wont cause a chain reaction that sets the entire atmosphere on fire

if you had done just a little bit of googling instead of repeating something you heard off of Oppenheimer, you would know this was basically never put forward as a serious possibility (archive link)

which is actually a fitting parallel for "AGI", now that i think about it

EDIT: Alright, well this community was a mistake..

if you're going to walk in here and diarrhea AGI Great Filter sci-fi nonsense onto the floor, don't be surprised if no one decides to take you seriously

...okay it's bad form but i had to peek at your bio

Sharing my honest beliefs, welcoming constructive debates, and embracing the potential for evolving viewpoints. Independent thinker navigating through conversations without allegiance to any particular side.

seriously do all y'all like. come out of a factory or something

[-] ebu@awful.systems 16 points 5 months ago* (last edited 5 months ago)

i cant stop scrolling through this hot garbage, it just keeps getting better

cut-off tweet from the same account saying that AIs are now capable of hypnotizing humans

[-] ebu@awful.systems 17 points 5 months ago* (last edited 5 months ago)

i'll take trolls "pretending" to not understand computational time over fascists "pretending" to gush over other fascists any day

[-] ebu@awful.systems 15 points 5 months ago

i can't tell if this is a joke suggestion, so i will very briefly treat it as a serious one:

getting the machine to do critical thinking will require it to be able to think first. you can't squeeze orange juice from a rock. putting word prediction engines side by side, on top of each other, or ass-to-mouth in some sort of token centipede, isn't going to magically emerge the ability to determine which statements are reasonable and/or true

and if i get five contradictory answers from five LLMs on how to cure my COVID, and i decide to ignore the one telling me to inject bleach into my lungs, that's me using my regular old intelligence to filter bad information, the same way i do when i research questions on the internet the old-fashioned way. the machine didn't get smarter, i just have more bullshit to mentally toss out

[-] ebu@awful.systems 18 points 6 months ago

it's funny how your first choice of insult is accusing me of not being deep enough into llm garbage. like, uh, yeah, why would i be

but also how dare you -- i'll have you know i only choose the most finely-tuned, artisanally-crafted models for my lawyering and/or furry erotic roleplaying needs

[-] ebu@awful.systems 19 points 6 months ago* (last edited 6 months ago)

as previously discussed, the rabbit r1 turns out to be (gasp) just an android app.

in a twist no one saw coming, the servers running "rabbit os" are reported to just be running Ubuntu, and the "large action model" that was supposed to be able to watch humans use interfaces and learn how to use them turns out to just be a series of hardcoded places to click in Playwright.

