[-] ebu@awful.systems 25 points 1 month ago

because it encodes semantics.

if it really did so, performance wouldn't swing up or down when you change syntactic or symbolic elements of problems. the only information encoded is language-statistical

[-] ebu@awful.systems 21 points 3 months ago* (last edited 3 months ago)

I don't think emojis should be the place to have a socio-political discussion.

have some entirely non-political emojis:

🗳️: BALLOT BOX WITH BALLOT

🇹🇼: FLAG: TAIWAN

🇵🇸: FLAG: PALESTINIAN TERRITORIES

🗽: STATUE OF LIBERTY

🤡: FACE OF "NON-POLITICAL" PERSON

[-] ebu@awful.systems 21 points 4 months ago* (last edited 4 months ago)

"rat furry" :3

"(it's short for rationalist)" >:(

[-] ebu@awful.systems 20 points 4 months ago* (last edited 4 months ago)

simply ask the word generator machine to generate better words, smh

this is actually the most laughable/annoying thing to me. it betrays such a comprehensive lack of understanding of what LLMs do and what "prompting" even is. you're not giving instructions to an agent, you are feeding a list of words to prefix to the output of a word predictor

in my personal experiments with offline models, using something like "below is a transcript of a chat log with XYZ" as a prompt instead of "You are XYZ" immediately gives much better results. not good results, but better
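the difference between the two framings can be sketched as plain prompt-string construction — the function names here are illustrative, not any real library's API, and "XYZ" is whatever persona you're testing:

```python
def instruction_prefix(name: str, user_msg: str) -> str:
    """instruction-style framing: addresses the model as if it were an agent."""
    return f"You are {name}.\nUser: {user_msg}\n{name}:"

def transcript_prefix(name: str, user_msg: str) -> str:
    """transcript-style framing: presents the output as the continuation of a
    chat log, which plays to what the model actually is -- a text predictor."""
    return (
        f"Below is a transcript of a chat log with {name}.\n\n"
        f"User: {user_msg}\n{name}:"
    )

print(transcript_prefix("XYZ", "hello"))
```

either string gets prepended to the context before sampling; the only change is whether the prefix pretends there's an agent being instructed or just describes a document to be continued.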

[-] ebu@awful.systems 20 points 4 months ago

it is a little entertaining to hear them do extended pontifications on what society would look like if we had pocket-size AGI, life-extension or immortality tech, total-immersion VR, actually-good brain-computer interfaces, mind uploading, etc. etc. and then turn around and pitch a fit when someone says "okay so imagine if there were a type of person that wasn't a guy or a girl"

[-] ebu@awful.systems 25 points 5 months ago

data scientists can have little an AI doomerism, as a treat

[-] ebu@awful.systems 22 points 5 months ago

the upside: we can now watch "disruptive startups" go through the acquire funding -> slapdash development -> catastrophic failure -> postmortem cycle at breakneck speeds

[-] ebu@awful.systems 20 points 5 months ago* (last edited 5 months ago)

i really, really don't get how so many people are making the leaps from "neural nets are effective at text prediction" to "the machine learns like a human does" to "we're going to be intellectually outclassed by Microsoft Clippy in ten years".

like it's multiple modes of failing to even understand the question happening at once. i'm no philosopher; i have no coherent definition of "intelligence", but it's also pretty obvious that all LLMs are doing is statistical extrapolation on language. i'm just baffled at how many so-called enthusiasts and skeptics alike just... completely fail at the first step of asking "so what exactly is the program doing?"

[-] ebu@awful.systems 21 points 6 months ago

syncthing is an extremely valuable piece of software in my eyes, yeah. i've been using a single synced folder as my google drive replacement and it works nearly flawlessly. i have a separate system for off-site backups, but as a first line of defense it's quite good.

[-] ebu@awful.systems 24 points 7 months ago

correlation? between the rise in popularity of tools that exclusively generate bullshit en masse and the huge swelling in volume of bullshit on the Internet? it's more likely than you think

it is a little funny to me that they're talking about using AI to detect AI garbage as a mechanism of preventing the sort of model/data collapse that happens when data sets start to become poisoned with AI content. because it seems reasonable to me that if you start feeding your spam-or-real classification data back into the spam-detection model, you'd wind up with exactly the same degradations of classification, and your model might start calling every article that has a sentence starting with "Certainly," a machine-generated one. maybe they're careful to only use human-curated sets of real and spam content, maybe not

it's also funny how nakedly straightforward the business proposition for SEO spamming is, compared to literally any other use case for "AI". you pay $X to use this tool, you generate Y articles which reach the top of Google results, you collect $(X+P) in click revenue, and you do it again. meanwhile "real" businesses are trying to gauge exactly what single-digit percent of bullshit they can afford to get away with putting in their support systems or codebases, while trying to avoid situations like being forced to give refunds to customers under a policy your chatbot hallucinated (archive.org link) or having to issue an apology for generating racially diverse Nazis (archive).

[-] ebu@awful.systems 21 points 7 months ago

i absolutely love the "clarification" that an email address is PII only if it's your real, primary, personal email address, and any other email address (that just so happens to be operated and used exclusively by a single person, even to the point of uniquely identifying that person by that address) is not PII

[-] ebu@awful.systems 23 points 7 months ago* (last edited 7 months ago)

Actually, that email exchange isn’t as combative as I expected.

i suppose the CEO completely barreling forward past multiple attempts to refuse conversation while NOT screaming slurs at the person they're attempting to lecture, is, in some sense, strictly better than the alternative


ebu

joined 8 months ago