this post was submitted on 03 Oct 2023
747 points (95.7% liked)
Technology
Agreed, we desperately need regulations on who has the right to reproduce another person’s image/voice/likeness. I know that there will always be people on the internet who do it anyway, but international copyright laws still mostly work in spite of that, so I imagine that regulations on this type of AI would mostly work as well.
We’re really in the Wild West of machine learning right now. It’s beautiful and terrifying all at the same time.
It would be a shame to lose valuable things like There I Ruined It, which seems like a perfectly fair use of copyrighted works. Copyright is already too strong.
Copyright IS too strong, yet paradoxically artists' rights are too weak. Everything is aimed at boosting the profits of media companies, not at protecting the people who make the works. Now those people are under threat of being replaced by AI trained on their own works, no less. Is it really worth defending AI if we end up with fewer novel human works because of it?
The "circle of life", except that it kills artists' careers rather than creating new ones. Even fledgling artists might find that there's no opportunity for them, because AIs are already gearing up to take entry-level jobs. However efficient AI may be at replicating the work of artists, the same could be said of a photocopier, and we have laws defining how those may be used so that they don't undermine creators.
I get that AI output is not identical and doesn't run afoul of existing laws, but the principles behind those laws are still important. Not only culture but even AI itself will be lesser for it if human artists are not protected, because art AIs quickly degrade when AI art is fed back into them en masse.
Don't forget that the kind of AI we have doesn't do anything by itself. We don't have sentient machines, we have very elaborate auto-complete systems. It's not AI that is steamrolling artists, it's companies seeking to replace artists with AIs trained on their works that are threatening them. That can't be allowed.
It's sad to see how AI advocates strive to replicate the work of artists while being incredibly dismissive of their value. No wonder so many artists are incensed enough to want rid of everything AI.
Besides, it's nothing new that media companies and internet content mills are willing to replace quality with whatever is cheaper and faster. To try to use that as an indictment of those artists' worth is just... yeesh.
You realize that even this had to be set up by human beings, right? Piping random prompts through an art AI is impressive, but it's not intelligent. Don't let yourself get caught up in sci-fi dreams; I made this mistake too. When you say "AI will steamroll humans" you are assigning awareness and volition to it that it doesn't have. AIs may be filled with all human knowledge, but they don't know anything. They simply repeat patterns we fed into them. An AI could give you a description of a computer, and it could generate a picture of a computer, but it doesn't have any understanding of what one is. Like I said before, it's like a very elaborate auto-complete. If it could really understand anything, the situation would be very different, but the fact that even its fiercest advocates use it as a tool shows that it's still lacking capabilities that humans have.
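The "elaborate auto-complete" point can be made concrete with a toy sketch. This is a deliberately simplified bigram model, not how any production system actually works; the tiny corpus and function names are made up for the illustration:

```python
import random
from collections import defaultdict

# Tiny made-up training corpus for the illustration.
corpus = (
    "a computer is a machine . "
    "a machine is a tool . "
    "a tool is a computer ."
).split()

# Record which word has been observed to follow which.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def autocomplete(start, length=8, seed=0):
    """Generate text by repeatedly picking an observed next word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:  # no pattern observed: the "model" is stuck
            break
        out.append(rng.choice(options))
    return " ".join(out)

print(autocomplete("a"))
```

Everything it emits is a recombination of word-to-word transitions it has already seen. Large language models are enormously more capable, but the point stands that they replay patterns from their training data rather than understanding anything.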
AI will not steamroll humans. AI-powered corporate industries, owned by flesh-and-blood people, might steamroll humans, if we let them. If you think that you will get to just enjoy a Holodeck, you are either very wealthy or you don't realize that it's not just artists who are at risk.
It's such a shame too. Like you can have a million sensible takes and opinions and views on the topic, pro-AI, but the discussion revolves around the same shit on both sides.
It is an amazing tool, and could be used (and is used, it's just obscured by the massive amount of shit and assholes trolling other people/artists) in so many creative ways. I'd been in a bit of a rut for quite a few years (partially because my brain no make happy chemicals or sleep), but I haven't been this excited about the possibilities, or this inspired, maybe ever in my life (at least not for a decade or nearly two) with art and my own stuff. I'm finally drawing again after way too many years of letting my stuff gather dust.
I used to think techno supremacists were an extreme fringe, but "AI" has made me question that.
For one, this isn't AI in the sci-fi sense. This is a sophisticated statistical model that generates content based on patterns it observes in a plethora of works.
It's ridiculously overhyped, and I think it's just a flash in the pan. Companies have already minimized their customer support with automated service options and "tell me what the problem is" prompts. I have yet to meet anyone who is pleased by these. Instead it's usually shouting into the phone that you want to talk to a real human, because the algorithm thinks you want a problem fixed instead of the service cancelled.
I think this "technocrat" vs "humanities" debate will be society's next big question.
I haven't watched Star Trek, but if you're correct, they depicted an incredibly rudimentary and error prone system. Google "do any African countries start with a K" meme and look at the suggested answer to see just how smart AI is.
I remain skeptical of AI. If I see evidence suggesting I'm wrong, I'll be more than happy to admit it. But the technology being touted today is not the general AI envisioned by science fiction, nor even everything that's been studied in the space over the last decade. This is just sophisticated content generation.
And finally, throwing data at something does not necessarily improve it. This is easily evidenced by the Google search I suggested. The problem with feeding data en masse is that the data may not be correct. And if the data itself is AI output, it can seriously mess up the algorithms. Since these venture-capital companies have given it no consideration, there's no inherent marker for AI output. It will always self-regulate toward mediocrity because of that. And I don't think I need to explain that throwing a bunch of funding at X does not make X a worthwhile endeavor. Crypto and NFTs come to mind.
I leave you with this article as a counterexample: https://gizmodo.com/study-finds-chatgpt-capabilities-are-getting-worse-1850655728
Throwing more data at the models has been making things worse. Although the exact reasons are unclear, it does suggest that AI is woefully unreliable and immature.
I used to be on the technocrat side too when I was younger, but seeing the detrimental effects of social media, the app-driven gig economy, and how companies constantly charge more for less changed my mind. Technocrats adopt this idea that technology is neutral and constantly advancing towards an ideal solution for everything, as if we only need to keep adding more tech and we'll have a utopia. Nevermind that so many advancements in automation lead to layoffs rather than fewer working hours for everyone.
I believe the debate is already happening, and the widespread disillusionment with tech tycoons and billionaires shows popular opinion is changing.
Very similar here, I used to think technology advancement was the most important thing possible. I still do think it's incredibly important, but we can't commercially do it for its own sake. Advancement/knowledge for the sake of itself must be confined to academia. AI currently can't hold a candle to human creativity, but if it reaches that point, it should be an academic celebration.
I think the biggest difference for me now vs before is that I think technology can require too high of a cost to be worth it. Reading about how some animal subjects behaved with Elon's Neuralink horrified me. They were effectively tortured. I refuse the idea that we should develop any technology which requires that. If test subjects communicate fear or panic that is obviously related to the testing, it's time to end the testing.
Part of me still does wonder, but what could be possible if we do make sacrifices to develop technology and knowledge? And here, I'm actually reminded of fantasy stories and settings. There's always this notion of cursed knowledge which comes with incredible capability but requires immoral acts/sacrifice to attain.
Maybe we've made it to the point where we have something analogous (brain chips). And to avoid it, we not only need to better appreciate the human mind and spirit -- we need people in STEM to draw a line when we would have to go too far.
I digress though. I think you're right that we're seeing an upswell of the people against things like this.
All the ills you mention are a problem with current capitalism, not with tech. They exist because humans are too fucking stupid to regulate themselves, and should unironically be ruled by an AI overlord instead once the tech gets there.
You are making the exact same mistake that I just talked about, that I have also made, that a bunch of tech enthusiasts make:
An AI Overlord will be engineered by people with human biases, under the command of people with human biases, trained by data with human biases, having goals that are defined with human biases. What you are going to get is tyranny with extra steps, plus some of its own concerning glitches on the side.
It's a sci-fi dream to assume technology is inherently destined to solve human issues. It takes human concern and humanities studies to apply technology in a way that actually helps people.
Even given the smartest, most perfect computer in the world, it can give people the perfect, most persuasive answers and people can still say no and pull the plug just because they feel like it.
It's no different among humans: the power to influence organizations and society relies entirely on the willingness of people to go along with it.
Not only is this sci-fi dream skipping several steps, steps where humans in power direct and gauge AI output only as far as it serves their interests rather than some objectively optimal state of society. Should the AI provide all the reasons why it should be in charge, an executive or a politician can simply say "No, I am the one in charge" and that will be it. Because to most of them, preserving and increasing their own power is the whole point, even at the expense of maximum efficiency, sustainability, or any other concern.
But before you go full-blown Skynet machine revolution, you should realize that AIs that are limited and directed by greedy humans can already cause untold damage to regular people, simply by optimizing them out of industries. For this, they don't even need to be self-aware agents. They can do that as mildly competent number crunchers, completely oblivious to reality outside of spreadsheets and reports.
And all this is assuming an ideal AI. Truly, AI can consume and process more data than any human. Including wrong data. Including biased data. Including completely baseless theories. Who's to say we might not get to a point where an AI decides to fire people because of the horoscope, or something equally stupid?
Are you really trying to use failures of AI to argue that it's going to overcome humans? If we can't even get it to work how we want it to, what makes you think people are just going to hand it the keys to society? How is an AI that keeps bursting into racist rants and emotional meltdowns going to take over anything? Does it sound like it is brewing some master plan? Why would people hand control to it? That alone shows that it presents all the flaws of a human, like I just pointed out.
Maybe you are too eager to debunk me, but you are missing the point in order to nitpick. It doesn't really matter that we can't "pull the plug" on the internet, if that were even needed; all it takes to stop the AI takeover is for people in power to simply disregard what it says. It's far more reasonable to assume that even those who use AIs wouldn't universally defer to them.
Nevermind that no drastic action is needed period. You said it yourself, Microsoft pulled the plug on their AIs. This idea of omnipresent self-replicating AI is still sci-fi, because AIs have no reason to seek to spread themselves, or ability to do so.
You are trying to argue in so many directions and technicalities that it's just incoherent. AI will control everything because it's gonna be smarter; people will accept it because they are dumb; and if the AI is dumb too, that also works, but wasn't it supposed to be smarter? Anything that gets you to the conclusion you already started with.
I could be having deeper arguments about how an AI even gets to want anything, but frankly, I don't think you could meaningfully contribute to that discussion.
And you mean to tell me they decided to do it themselves? No, we both know that's not what happened. That setup was arranged by people. You come with accusations of cluelessness and luddism only to say the exact same thing with different words.
You'd rather burst into wild speculation while acting superior rather than acknowledging matters as they are.
Who do you think are making the calls to replace people? Do you seriously believe that executives, who hold the highest power, will decide to replace themselves? They might as well use AIs just fine and reap all the benefits while doing none of the effort. Like many CEOs already do with their human subordinates.
As impressive as AIs might be and become, while you get lost on sci-fi fantasies you are losing sight of who is going to decide what they will be used for and how that will affect regular people.
Hell, we already have a glimpse of how that's going to play out. Most of the internet is molded by algorithms that, however inscrutable they may be, are directed to serve the interests of wealthy business owners. Some decades ago people dreamed of systems that would recommend things for you before you even knew you wanted them, but few expected that they would be used to manipulate us and advertise to us.
This is why keeping human interest in mind is of the utmost importance.
You think oil paintings lost all worth when photography and printing and digital painting came about? That art isn't worth it if it's not expressed through the biggest and newest means?
That is what you think progress is? Human expression and passion being treated like trash because it's not as optimal? What a dreary mindset.
If not to enable people to dedicate themselves to what they love, what is even the worth of technological advancement?
Make no mistake, I love technology. I just can't get excited about people being crushed by technology that is getting harnessed in the most cynical, greedy way. But you? You just seem to be eagerly praying for the day you will be turned into a paperclip, for "value".
I'm tired of your disingenuous responses. By your definition a die is intelligent because "you didn't tell it what number to roll". Stop playing dumb about what that AI did; I know you understood it.
Humans, again.
Trying to make big claims based on your own indifference towards art and artists only convinces me you are the last person I'd want an opinion about it from. There's a lot of discussion to be had about what makes art "better". It's not just making it bigger and longer.
This just sounds weirdly cultish.
Lemmy is full of Luddite Twitter artist types. It’s an echo chamber in here.
Steamroll us? Not if I have anything to say about it. I look forward to setting condition 1SQ for strategic launch of thermonuclear weapons. Hooyah navy. If this is to be our end, then let it be SUCH an end, so as to be worthy of remembrance. I would rather this entire planet and all things upon it burn in radioactive fire than be sacrificed on the altar of technology.
Just a few years back, Vernor Vinge's sci-fi novels still seemed reasonably futuristic in how they dealt with the issue of fakes, including several bits where the resolution of imagery was a factor in being able to verify, with sufficient certainty, that you were talking to the right person. Now that notion already seems dated, and certainly not enough for a setting far into the future.
(At least they still don't seem as dated as Johnny Mnemonic's plot of erasing a chunk of your memories to transport an amount of data that would be easier and less painful to fit in your head by stuffing a microSD card up your nose.)
Yeah, I don't think it should be legislated against, especially for private use (people will always work around it anyway), but using it for profit is really, viscerally wrong.
You know I'm not generally a defender of intellectual property, but I don't think in this case "not legislating because people will work around it" is a good idea. Or ever, really. It's because people will try to work around laws to take advantage of people that laws need to be updated.
It's not just about celebrities, or even just about respect towards dead people. In this case, what if somebody takes the voice of a family member of yours to scam your family or harass them? This technology can lead to unprecedented forms of abuse.
In light of that, I can't even mourn the loss of making an AI Robin Williams talk to you because it's fun.
IMO people doing it on their own for fun/expression is different than corporations doing it for profit, and there's no real way to stop the former. I think if famous AI constructs become part of big media productions, it will come with a constructed moral justification. The system will basically internalize and commodify the repulsion to itself exploiting the likeness of dead (or alive) actors.

This could be media that blurs the line and purports to ask "deep questions" about exploiting people, while exploiting people as a sort of intentional irony. Or it will be more like a moral appeal to sentimentality: "in honor of their legacy we are exploiting their image, some proceeds will support causes they cared about, we are doing this to spread awareness, the issues they are representing are too important, they would have loved this project, we've worked closely with their estate."

Eventually there's going to be a film like this, complete with teary-eyed behind-the-scenes interviews about how emotional it was to reproduce the likeness of the actor and what an honor it was. As soon as the moral justification can be made and the actor's image can be constructed just well enough, people will go see it so they can comment on what they thought about it and take part in the cultural moment.
We need something like the fair use doctrine coupled with identity rights.
If you want to use X's voice and likeness in something, you have to purchase that privilege from X or X's estate, and they can tell you to pay them massive fees or to fuck off.
Fair use would be exclusively for comedy, but would still face regulation. There are plenty of hilarious TikToks that use AI to make characters say stupid shit, and we can find a way to protect voice actors and creators without stifling creativity. Fair use would still require the person's permission; you just wouldn't need to pay to use it for such a minor thing -- a meme of Mickey Mouse saying fuck, for example.
At the end of the day though, people need to hold the exclusive and ultimate right to how their likeness and voice are used, and they need to be able to shut down anything they deem unacceptable. Too many people are more concerned with what is possible than with whether they're acting like an asshole. It's just common kindness to ask someone if you can use their voice for something, and to respect their wishes if they don't want it.
I don't know if this is a hot take or not, but I'll stand by it either way -- using AI to emulate someone without their permission is a fundamental violation of their rights and privacy. If OpenAI or whoever wants to claim that makes their product unusable, tough fucking luck. Every technology has faced regulation to maintain our rights, and if a company can't survive being regulated, it deserves to die.
This was very well stated, and I wholeheartedly agree.