
Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

Last week's thread

(Semi-obligatory thanks to @dgerard for starting this)

[-] fasterandworse@awful.systems 7 points 1 month ago

is this a possible thing: all the AI assistant stuff being forced onto us in the next gen hardware is gonna need significant computing power bumps to support it. does that create a potential surplus of computing power in all devices, one that could time very well with an excessive skeuomorphic UI design response to the decade of bland flatness we've endured, and end up cooking the cpus on everyone else's devices?

[-] rook@awful.systems 13 points 1 month ago

is this creating a potential surplus of computing power in all devices

Haha, no. Flat UI was done for reasons of fashion, not efficiency. UI will always expand to consume the available memory and compute, regardless of how boring it looks. Exhibit A: Electron!

[-] fasterandworse@awful.systems 6 points 1 month ago

yeah but I didn't say that flat ui was created for efficiency. Any efficiency of a flat ui is cancelled out by the excesses of client-side JS. I know it is fashion, I was there. But I also know that the designers who design with it have a sense that it is efficient.

[-] nightsky@awful.systems 10 points 1 month ago* (last edited 1 month ago)

The ongoing trend of "flat UI" is largely not due to processing power though. Even inexpensive computers have CPUs and GPUs that could push very fancy graphics without problems, see what the same machines can do in game graphics (and I don't mean high-end gaming, I mean the kind of simple gaming that can run on a low-end laptop these days). Some of the early GUIs in the 1980s had "flat design" due to performance limitations, but that went away in the 1990s. Today it could still be a reason in some embedded system scenarios with simple microcontrollers, but not in a desktop or laptop computer, and also not in smartphones or tablets.

The reason we have the bland flat design is the same why we still have things like "all surfaces are ugly glossy black plastic" (luckily this one is on its way out) or "war on physical buttons" aka "touchscreens everywhere"... it's simply a design trend.

[-] cstross@wandering.shop 12 points 1 month ago

@nightsky "touchscreens everywhere" isn't an aesthetic choice, it's a cost-of-goods choice: which adds more to the cost of a physical product, a bunch of bespoke embossed buttons/keys for specific tasks, or a single mass-produced touchscreen?

It's the same reason modern electronics uses embedded microcontrollers rather than actual properly designed task-specific gate arrays.

[-] fasterandworse@awful.systems 6 points 1 month ago

I hear you, but I didn't say flat ui is due to processing power. My line of thought is that a sudden bump in available processing power might prompt designers to feel that elaborate uis are fine now. Despite flat ui not being an efficiency thing, it is definitely perceived as one by the average designer, who doesn't know how much of the css used to render it is generated client-side via js.
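
To make that concrete, a hypothetical sketch (mine, not lifted from any particular framework) of the kind of thing I mean: a "flat" button whose minimal-looking styles don't exist until client-side JS has computed and injected them at runtime.

```typescript
// Hypothetical sketch: "flat" styling that only comes into existence
// after client-side JS builds and injects it.
function injectStyle(className: string, rules: Record<string, string>): void {
  const css = Object.entries(rules)
    .map(([prop, value]) => `${prop}: ${value};`)
    .join(" ");
  const sheet = document.createElement("style");
  sheet.textContent = `.${className} { ${css} }`;
  // styling work moved from a static .css file into the browser at runtime
  document.head.appendChild(sheet);
}

// The button renders as the plainest thing imaginable...
injectStyle("flat-button", {
  background: "#ffffff",
  color: "#222222",
  border: "none",
  "box-shadow": "none",
  "border-radius": "0",
});
```

...so the flatness gets paid for in JS execution rather than saved by it.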

[-] istewart@awful.systems 8 points 1 month ago

Just chiming in to say to hell with skeuomorphism, I still want Apple Platinum back. Bonus points if it comes with an option for Dark Platinum that was only present in the early releases of OS X Server.

[-] froztbyte@awful.systems 6 points 1 month ago

as to the computing side, and with the proviso that in my own estimation of my skills I am at best slightly less than "dangerously clueless": unfortunately not as much as might be desired, because the kind of chips being added is fairly specialised silicon

it's not impossible that people may find other uses for it over time but to the best of my knowledge as it stands right now much of this shit is dead weight the moment this bubble pops

(I don't think it will all go entirely away; there are some ML uses that are not complete trash. but that's a long different arc)

I'm not sure I follow the skeu side of your comment?

[-] fasterandworse@awful.systems 7 points 1 month ago

that's exactly the catch I was hoping wouldn't be the case. When the AI shit is abandoned, is the hardware useful for regular stuff...

So, from what you're saying: Generative AI is fucking up in the past, present, and future

[-] froztbyte@awful.systems 6 points 1 month ago

broad brush strokes, yes largely that

there's some extremely fucking interesting details in the weeds, but that's beyond the scope of merely a comment (and also I don't feel equipped to make a goodpost about it as yet)

[-] istewart@awful.systems 6 points 1 month ago

My baseline understanding is that "NPUs," as such, are vector accelerators with perhaps lower precision and definitely lower peak TDP. I say this because much of the incremental ML research I've skimmed over seems to be about getting away with lower precision, dropping from FP16 down to FP8 or even FP4 wherever the accuracy holds up.
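
As a rough toy illustration (my own sketch, not drawn from any NPU documentation or real FP8/FP4 format): quantise the weights down to fewer representable levels and check how far a big dot product drifts from the full-precision result.

```typescript
// Toy stand-in for FP8/FP4: quantise weights to 256 / 16 evenly spaced
// levels and compare a large dot product against the full-precision result.
function fakeQuant(values: number[], levels: number): number[] {
  const maxAbs = values.reduce((m, v) => Math.max(m, Math.abs(v)), 0);
  const scale = maxAbs / (levels / 2 - 1); // symmetric, evenly spaced levels
  return values.map((v) => Math.round(v / scale) * scale);
}

function dot(a: number[], b: number[]): number {
  return a.reduce((sum, v, i) => sum + v * b[i], 0);
}

const n = 4096;
const w = Array.from({ length: n }, () => Math.random() * 2 - 1); // "weights"
const x = Array.from({ length: n }, () => Math.random() * 2 - 1); // "activations"
const reference = dot(w, x); // full-precision result

for (const [label, levels] of [["~fp8", 256], ["~fp4", 16]] as const) {
  const err = Math.abs(dot(fakeQuant(w, levels), x) - reference) / Math.abs(reference);
  console.log(`${label}: relative error ${err.toExponential(2)}`);
}
```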

I'm still confused as to why and how this is an acceptable tradeoff compared to firing up an iGPU with precise power/TDP stepping. Perhaps it's one of those situations where the power budget and latency to fire up the whole GPU block, or burst it to max power, end up costing as much as the actual calculation. I think for purposes of this discussion, we also need a source that sheds light on the architectural differences between NPUs and GPU shader/execution units.
