[-] S410@lemmy.ml 5 points 7 months ago

Again, the state being a piece of shit doesn't mean everyone who lives and operates in that country automatically supports every single decision the state makes.

You know that Israel buys most of its weapons from the US, right? In other words, the US actively supplies Israel with weapons knowing full well what Israel is using them for. Are you going to boycott all US companies too, now?

If you want to hurt a state, you should get your own country to stop doing business with that state - that is, stop the natural resources and weapons trading - not go after civilians and civilian businesses, which aren't responsible for the decisions already made and hold no power to overturn them. (Countries known for aggressive behavior aren't exactly known for being particularly democratic.)

[-] S410@lemmy.ml 6 points 8 months ago* (last edited 8 months ago)

https://bugzilla.redhat.com/show_bug.cgi?id=2216594
https://gitlab.freedesktop.org/drm/amd/-/issues/2145
Looks like you're hitting a known bug that isn't fixed yet.
Reportedly, ROCm 5.6.1 is the latest working version, so you could try to downgrade. Something like `dnf install rocm-*-5.6.1` should do the trick.
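If it helps, here's a rough sketch of what that could look like (untested; the exact package names are an assumption based on Fedora's rocm-* packaging, so check what `dnf list installed 'rocm-*'` actually shows on your machine):

```
# See which ROCm packages are currently installed
dnf list installed 'rocm-*'

# Install the reportedly-working 5.6.1 builds; if a newer version is already
# installed, you may need 'dnf downgrade' instead of 'dnf install'
sudo dnf install 'rocm-*-5.6.1'
# or: sudo dnf downgrade 'rocm-*-5.6.1'

# Optional: pin the version so a later 'dnf upgrade' doesn't pull the broken
# release back in (needs the versionlock plugin, python3-dnf-plugin-versionlock)
sudo dnf versionlock add 'rocm-*'
```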

[-] S410@lemmy.ml 10 points 8 months ago

Did you add a repo for RHEL8 to your Fedora install? Please, undo that.
Please, don't blindly follow instructions you find online, particularly when it comes to installing something as important as drivers.
Installing drivers from third-party sources should only be done as a last resort, and only if you know exactly what you're doing.
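If you did add it, here's a rough sketch of undoing it (the actual .repo filename will differ on your system; the one below is just a placeholder):

```
# Find the repo file that was added
ls /etc/yum.repos.d/

# Remove it (replace the filename with the one you actually find)
sudo rm /etc/yum.repos.d/rhel8-rocm.repo   # placeholder name

# Drop the cached metadata and roll any packages that came from that repo
# back to the versions Fedora itself ships
sudo dnf clean all
sudo dnf distro-sync
```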

[-] S410@lemmy.ml 18 points 8 months ago

AMDGPU drivers are part of the Linux kernel itself, so you shouldn't need to install them manually. They're already there.
What Blender seems to want is ROCm HIP. The rocm-hip package might be what it's looking for? Try installing it and see if that works.
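Something along these lines, maybe (the package name and split vary between distros, so treat this as a guess for Fedora rather than a known-good recipe):

```
# Install the HIP runtime Blender's Cycles backend looks for
sudo dnf install rocm-hip

# rocminfo (a separate package) can confirm the GPU is visible to ROCm
sudo dnf install rocminfo && rocminfo
```

After that, the GPU should show up under Edit > Preferences > System > Cycles Render Devices > HIP in Blender, assuming everything went well.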

[-] S410@lemmy.ml 13 points 8 months ago* (last edited 8 months ago)

Ah yes, blanket banning things using logic riddled with faulty generalizations and proof by assertion. Totally not a bad idea! Totally!

[-] S410@lemmy.ml 32 points 8 months ago

Off-topic, but why on earth would anyone use .rar? It's a proprietary format. The reason there's basically no software to create or modify .rar archives is the licensing, which makes it illegal to write software that can do it.

Looking at rarlab's website, it appears that only the macOS version has an ARM build. For Linux, only x86 and x64 are listed.

So, either use macOS, use emulation to run the x86/x64 version, or break the law.

[-] S410@lemmy.ml 38 points 8 months ago

I love region-locked websites! I love region-locked websites! I love region-locked websites!

[-] S410@lemmy.ml 0 points 8 months ago* (last edited 8 months ago)

Machine learning doesn't retain an exact copy either. Just how on earth do you think a model trained on terabytes of data can be only a few gigabytes in size, yet contain "exact copies" of everything? If "AI" could function as a compression algorithm, it'd definitely be used as one. But it can't, so it isn't.
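As a rough back-of-envelope illustration (the figures are approximate public numbers, not exact): Stable Diffusion 1.x weights are on the order of 2 GB in fp16, while the LAION-2B-en dataset behind it holds roughly 2.3 billion images, which leaves less than one byte of model capacity per training image:

```
# ~2e9 bytes of weights spread over ~2.3e9 training images
echo "scale=2; (2 * 10^9) / (2.3 * 10^9)" | bc   # ≈ .86 bytes per image
```

There's obviously no way to stuff an "exact copy" of an image into a fraction of a byte.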

Machine learning can definitely re-create certain things really closely, but to do it well, it generally requires a lot of repeats in the training set. Which, granted, is a big problem that exists right now, and one that people are trying to solve. But even right now, if you want an "exact" re-creation of something, cherry-picking is almost always necessary, since (unsurprisingly) ML systems have a tendency to create things that have not been seen before.

Here's an image from an article claiming that machine learning image generators plagiarize things.

However, if you take a second to look at the image, you'll see that the prompters literally ask for screencaps of specific movies with specific actors, etc., and even then the resulting images aren't one-to-one copies. It doesn't take long to spot differences, like different lighting, slightly different poses, different backgrounds, etc.

If you got ahold of a human artist specializing in photoreal drawings and asked them to re-create a specific part of a movie they've seen a couple dozen or a couple hundred times, they'd most likely produce something remarkably similar in accuracy. Very similar to what machine learning image generators are capable of at the moment.

[-] S410@lemmy.ml 26 points 8 months ago

This is the evilest, worstest, and most upsetting thing I've read all day

[-] S410@lemmy.ml 4 points 8 months ago* (last edited 8 months ago)

The act of learning is absorbing and using massive amounts of data. Almost any child can, for example, re-create copyrighted cartoon characters in their drawings or whistle copyrighted tunes.

If you look at pretty much any and all human-created works, you will be able to trace elements of those works to many different sources. We usually call that "sources of inspiration". Of course, in the case of human-created works, it's not a big deal. Generally, it's considered transformative and fair use.

[-] S410@lemmy.ml 11 points 8 months ago

Every work is protected by copyright, unless stated otherwise by the author.
If you want to create a capable system, you want real data and you want a wide range of it, including data that is rarely considered to be a protected work, despite being one.
I can guarantee you that you're going to have a pretty hard time finding a diverse dataset containing things like napkin doodles or bathroom stall writing that's compiled with the permission of every copyright holder involved.

[-] S410@lemmy.ml 21 points 8 months ago

They're not wrong, though?

Almost all information that currently exists has been created in the last century or so. Only a fraction of all that information can be legally acquired for use, and only a fraction of that already small fraction has been explicitly released under permissive licenses.

Things that we don't even think about as "protected works" are in fact just that. Doesn't matter what it is: napkin doodles, writings on bathroom stall walls, letters written to friends and family. All of those things are protected, unless stated otherwise. And, I don't know about you, but I've never seen a license notice attached to a napkin doodle.

Now, imagine trying to raise a child while avoiding every piece of information like that; information that you aren't licensed to use. You wouldn't end up with a person well suited to exist in the world. They'd lack education in science and technology, they'd have no understanding of pop culture, they'd know no brand names, etc.

Machine learning models are similar. You can train them that way, sure, but they'd be basically useless for real-world applications.

