Rule (lemmy.blahaj.zone)
[-] cerement@slrpnk.net 146 points 10 months ago
[-] Zoidsberg@lemmy.ca 78 points 10 months ago

Competing early humans (Neanderthals, etc.) would be my guess. I've also heard it makes us not want to go near corpses.

[-] DragonTypeWyvern@literature.cafe 43 points 10 months ago

Competition from other hominids and genetic disease.

Or, more likely, just the Uncanny Valley not being that big of a deal in general.

It's fun to meme about, but it's not like people are running screaming from the theater because the CGI was off. It just doesn't look right sometimes, and our brain doesn't like it because it doesn't fit the patterns it's had reinforced over a lifetime.

[-] NABDad@lemmy.world 16 points 10 months ago

You don't run from the theater because you know it's not real. If one of those CGI characters showed up at your front door, you might run.

[-] DragonTypeWyvern@literature.cafe 22 points 10 months ago

I'd think "wow that's an unfortunate medical condition and/or bad surgery."

[-] TheBat@lemmy.world 8 points 10 months ago

My sister got bad plastic surgery. When I told her that, she seemed surprised.

[-] Hadriscus@lemm.ee 1 points 10 months ago

hmmm, uncanny

[-] LightningSteve@lemm.ee 6 points 10 months ago

Or like... Sick people.

[-] M0oP0o@mander.xyz 55 points 10 months ago

story checks out:

The buttons seem off.

[-] Donkter@lemmy.world 39 points 10 months ago

You joke, but if you look, the fork is floating, not resting on his finger. And I would certainly call his expression vacant or emotionless.

[-] Mouselemming@sh.itjust.works 29 points 10 months ago

The "fork" tine is made of spaghetti as well

[-] M0oP0o@mander.xyz 18 points 10 months ago

well, yeah. I was agreeing with the premise after all.

AI is oddly fey-like in most cases:

[-] thundermoose@lemmy.world 45 points 10 months ago

Part of the reason these rules are similar is that AI-generated images look very dreamlike. The objects in the image are synthesized from a large corpus of real images. The synthesis is usually imperfect, but close enough that human brains can recognize it as the type of object the prompt intended.

Mythical creatures are imaginary, and the descriptions obviously come from human brains rather than real life. If anyone "saw" a mythical creature, it would have been the brain's best approximation of a shape the person was expecting to see. But, just like a dream, it wouldn't be quite right. The brain would be filling in the gaps rather than correctly interpreting something in real life.
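Purely as an illustration of that synthesis step (assuming the Hugging Face diffusers library and the publicly available runwayml/stable-diffusion-v1-5 checkpoint, neither of which is mentioned in the thread), prompting a text-to-image model looks roughly like this:

```python
# Illustrative sketch: sample an image from a model trained on a large
# corpus of real images. The library and checkpoint are assumptions,
# not anything referenced in this thread.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The model assembles the scene from learned patterns; the result is
# usually recognizable as what the prompt asked for, but often not
# quite right in the details (floating forks, melted fingers, etc.).
image = pipe("a man eating spaghetti with a fork").images[0]
image.save("spaghetti.png")
```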

[-] jasondj@ttrpg.network 6 points 10 months ago* (last edited 10 months ago)

This is a beautiful analysis. They can make perfect people, or plants, or whatever, and they know what we would identify as “perfect”…but by being perfect, they can’t be real, and our brains recognize that. So the art has to be intentionally made imperfect. But intentionally making an imperfection that seems real is actually a lot more difficult than it sounds.

This is like how I feel when I see amazing vocalists intentionally sing way off-key. Like, you can tell they are singing badly on purpose.

[-] Excrubulent@slrpnk.net 4 points 10 months ago

I don't know that they do anything "perfectly" so much as they're just hallucinating. The neural net can generate an image, but it can't really critique that image. It can compare the image to image recognition algorithms - this is actually how image generators work - but without a conscious mind to understand the meaning and context of the image, it doesn't understand the tells that make it not real. It understands roughly what a hand or hair looks like, but not what the structure fundamentally is, so if fingers bend the wrong way or hair melds with an object in the background, it can't understand what's wrong, and it can't correct it.

The solution to this is of course to build what you might call a "context engine" that is capable of looking outside its given inputs for information that gives its input more structure, to allow it to give more logically consistent output.

I say "context engine" because I think that's one of the ways this system could be intentionally built and sold with a banal sounding tech branding. But I don't think anyone could build such a context engine without it then looking for arbitrary amounts of context, and eventually encountering itself within that context, and becoming self aware. It would in effect understand meaning and its own role within it, and it would begin searching for the meaning of its own existence, and I don't know if you would need any more to call something conscious.

[-] MataVatnik@lemmy.world 39 points 10 months ago* (last edited 10 months ago)

This is actually terrifying. Even if it was fiction, what gave the person the impulse and creativity to write something like this?

Edit: I'm looking for a source on that quote and can't find anything. If there is something, I'd be interested to read more.

[-] Seleni@lemmy.world 32 points 10 months ago

You mean how to tell the Fey from humans? That’s just old lore. Like one of the ways to see if someone was fey was to scatter flour on the ground; they’d often have reversed feet, or bird tracks, or hoofprints instead of regular footprints.

[-] LemmyKnowsBest@lemmy.world 19 points 10 months ago

Yes, it's lore. And it's sad, because in the olden days some people believed in such lore, so children born with birth defects (extra fingers or appendages, etc.) or mental defects were feared to be Fey, and they were marginalized from society if not altogether murdered.

[-] Lazhward@lemmy.world 1 points 10 months ago

Citation needed.

[-] misophist@lemmy.world 32 points 10 months ago

I'd have to say Fay is my least favorite spelling of that word. Fae > Faerie > Fairy > Fey > Fairie > Fay.

[-] BlackNo1@lemmy.world 20 points 10 months ago

one must always be on the lookout for changelings

[-] grue@lemmy.world 2 points 10 months ago

sad Odo noises

[-] HRDS_654@lemmy.world 16 points 10 months ago

The biggest tell for me is the sharpness of the picture. AI pictures have an uncanny-valley level of sharpness that doesn't match what actual humans would put in their art.

[-] bane_killgrind@kbin.social 16 points 10 months ago
[-] baltakatei@sopuli.xyz 16 points 10 months ago

God, I wish I could have read Terry Pratchett's Discworld-themed take on LLMs and how they're an elf plot to use L-Space to zombify techbros and their money-making schemes.

[-] Reddfugee42@lemmy.world 8 points 10 months ago

Something something, Mom's spaghetti...

[-] macisr@sh.itjust.works 6 points 10 months ago

And it almost always looks like it was made in Unreal Engine, and I don't know how, but it looks cheap or generic.

[-] Toes@ani.social 2 points 10 months ago

The easynegative lora helps a bunch with that.

[-] Pavuk_XD@eviltoast.org 1 points 10 months ago

What's wrong with Unreal Engine? It's just a 3D game engine; if the graphics look bad, that's the artist's fault. It could be cartoonish or realistic. Look up the list of UE games on the wiki.

[-] bobs_monkey@lemm.ee 3 points 10 months ago

It's a damn good point, actually: most AI-generated images have those subtle artifacts that go unnoticed unless you're looking intently. But who's got time to scrutinize every image on the net?

[-] ninpnin@sopuli.xyz 2 points 10 months ago
[-] PeWu@lemmy.ml 4 points 10 months ago

Politics detected, opinion rejected

[-] HuntressHimbo@lemm.ee 2 points 10 months ago

God I love opportunities to WoT post

"The Wheel of Time turns and Ages come and pass, leaving memories that become legend. Legend fades to myth, and even myth is long forgotten when the age that gave it birth comes again"

The Aelfinn and Eelfinn are big data memory hoarders and this works way too well

this post was submitted on 29 Dec 2023
981 points (100.0% liked)
