[-] MightEnlightenYou@lemmy.world 17 points 1 year ago* (last edited 1 year ago)

I've stopped worrying about climate change. I now worry about AGI instead, which seems much more imminent.

I can't handle worrying about both.

[-] Shelena@feddit.nl 16 points 1 year ago

I think the threat of AGI is much, much lower than that of climate change. It is still debated among scholars whether AGI will actually happen (in the near future) and, if it does, whether it will actually be a threat to humanity. On the other hand, we are sure that climate change is a threat to humanity, and it is already happening.

I think the main issue with AI in the short term is that humanity will not benefit from it, only large businesses and the already wealthy. At the same time, people are manipulated on a large scale by these same algorithms (e.g., on social media) to make money for these large businesses or to create societal discord for parties benefiting from that.

I think instilling fear of AGI in the public distracts from that and reduces the chances that this technology will be available to the wider public, as these fears might lead to strict regulations that leave only a few powerful parties with access to it.

So, don't fear AGI. Fear climate change. Also, be very critical of who has the power over current AI systems and how they are being used.

[-] TokenBoomer@lemmy.world 11 points 1 year ago* (last edited 1 year ago)

Read half this thread wondering why everyone is worried about Adjusted Gross Income.

[-] asyncrosaurus@programming.dev 6 points 1 year ago

I just assumed everyone here suffered from Acute Gastrointestinal Illness

[-] TokenBoomer@lemmy.world 3 points 1 year ago

I just had a colonoscopy; don’t remind me. /s

[-] asyncrosaurus@programming.dev 2 points 1 year ago

Sounds like a real pain in the ass.

[-] TokenBoomer@lemmy.world 0 points 1 year ago

The colonoscopy itself is easy. The prep work is definitely a pain in the ass.

[-] intensely_human@lemm.ee 1 points 1 year ago

Probably better than Anugly Gastrointestinal Illness though

[-] DarthBueller@lemmy.world 3 points 1 year ago

lol seriously. In the real world, the vast majority of people would assume AGI stands for adjusted gross income. I’m surprised at the number of people who think CBT means cock and ball torture ahead of cognitive behavioral therapy.

[-] TokenBoomer@lemmy.world 3 points 1 year ago

Isn’t it the same thing? /s

[-] MightEnlightenYou@lemmy.world 1 points 1 year ago

I might be deep in a filter bubble, but could you do a google search for "agi" and tell me the top result for you? Because I get Artificial General Intelligence. Maybe your "real world" is a bit of a bubble too?

[-] DarthBueller@lemmy.world 0 points 1 year ago

First result in google: Definition of Adjusted Gross Income | Internal Revenue Service https://www.irs.gov/e-file-providers/definition-of-adjusted-gross-income

[-] MightEnlightenYou@lemmy.world 1 points 1 year ago

Yeah, we all live in our filter bubbles :)

[-] DarthBueller@lemmy.world 0 points 1 year ago* (last edited 1 year ago)

In a clean browser window, not logged in and in private mode, I get the same IRS link. An Ngram (or rather Google Trends) can probably indicate which is the more commonly understood meaning. In the US, my guess is that it's the tax meaning, by a light year. Edit: https://trends.google.com/trends/explore?date=now%201-d&geo=US&q=%2Fm%2F054ljw,%2Fm%2F02sqk3&hl=en-US

[-] Shelena@feddit.nl 1 points 1 year ago

Well, maybe they were and I guessed wrong. ;-)

[-] MightEnlightenYou@lemmy.world 2 points 1 year ago

I agree with most of your points but here's where we differ.

I believe that climate change poses an existential risk not just to civilization but to (almost) all life on Earth. I believe there's a real risk of us doing a Venus in 100-200 years. And even if we don't do a Venus, the current trajectory is likely civilization-ending within a century (and getting worse over time).

But: while I am not certain that AGI is even possible (no one can say that yet), I believe it's very likely that we'll have AGI within 5 years. And with this assumption in mind, I have no idea whether it will be aligned with human values or not, and that scares me. The other thing that scares me is any of the big players actually having control over it. The country/company/group that creates an AGI it can control will dominate the world.

And I read the IPCC reports and I am kind of deep into AI development.

So I fear the more imminent threat that I think is likely, rather than the more distant threat that I think is certain.

[-] Shelena@feddit.nl 2 points 1 year ago

Why do you think it will be within 5 years? I mean, we just had a growth spurt in AI due to the creation of LLMs with a lot more data and parameters. They are impressive, but the algorithms behind them are still quite close to the ML algorithms that were created in the 60s. They have been optimised, and we now have deep learning, but there has not been a fundamental change or advancement in the technology. For example, ChatGPT seems very smart, but it is just a very fancy parrot, not close to general intelligence.
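
To illustrate what I mean by a 60s algorithm, here is a minimal sketch of Rosenblatt's perceptron (the function name and the toy AND task are just for illustration). Modern deep learning stacks millions of similar units and swaps this update rule for gradient descent, but the core "nudge the weights toward the right answer" idea is recognisably the same:

```python
# A minimal sketch of Rosenblatt's perceptron (late 1950s), purely for
# illustration: a single "neuron" that nudges its weights toward the
# correct answer whenever it misclassifies an example.
def train_perceptron(samples, epochs=10, lr=0.1):
    """samples: list of (features, label) pairs with label in {0, 1}."""
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in samples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred  # -1, 0, or +1
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Example: learn logical AND from four labelled points.
w, b = train_perceptron([([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)])
```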

I think the next step will be combining ML and symbolic AI. Both have their own strengths, and being able to effectively combine them might lead to a higher level of intelligence. There could also be a role for emotions in certain types of intelligence. I do not think we really know how to integrate those yet either.

I do not think we can do this in 5 years. That will be decades, at least. And once we can, we have a new problem. Because there is the issue that the AI might have consciousness. If we cannot be sure and it seems conscious, then we should give it rights, like we should for any conscious being. Right now, everyone is focussing on controlling the AI. However, if it is conscious, that is immoral. You are creating new slaves. In that case, we should either not make it, or integrate it in society in a way that respects human rights as well as the rights of the AI.

[-] MightEnlightenYou@lemmy.world 2 points 1 year ago

Well, having an in-depth conversation about AGI requires a definition of what that is, and any such definition these days is muddy; the goal posts will always be moved if we ever get there. With that said, my loose definition is something that can behave as a (rational, intelligent) human would when approaching problems, and is better than the average human at just about everything.

If we take a step back and look at brains, we all agree that brains produce intelligence to some degree. A smaller, more primitive brain than a human's, like a mouse brain, is still considered intelligent.

I believe that with LLMs we have what would equal part of a mouse brain. We'd still need to add more parts (make it multi-modal) to get to a full mouse brain, though. After that it's just a question of scale.

But say that that's impossible with the transformer technology. The assumption that there aren't any new AI architectures, just because the main one in use is from 2017, is incorrect. There are completely new architectures, like Liquid Neural Networks, which retrain on the fly, learning in a similar way to humans: the network constantly retrains itself on incoming information. And that's just one approach.
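
To give a rough idea of what "retraining on the fly" could look like, here is a toy sketch using ordinary online gradient steps. This shows generic online learning, not the actual Liquid Neural Network equations; the model and data stream are placeholders:

```python
# A toy sketch of "retraining on the fly": after every prediction the
# model takes one gradient step on the example it just saw, so its
# weights keep adapting to the incoming stream.
import torch
import torch.nn as nn

model = nn.Linear(16, 2)  # placeholder for a real network
opt = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def observe(x, y):
    """Predict on one incoming example, then immediately learn from it."""
    pred = model(x)
    opt.zero_grad()
    loss_fn(pred, y).backward()
    opt.step()  # the weights change with every new sample
    return pred.argmax(dim=-1)

# Simulated stream of (input, label) pairs; each call updates the model.
for _ in range(100):
    observe(torch.randn(1, 16), torch.randint(0, 2, (1,)))
```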

And when we look back at timeframes for AI, historically 95% of AI researchers have been off in their predictions of when a thing would happen by decades. In 2013-2014, the majority of AI researchers thought that Go was unsolvable, or at least 2-3 decades away. It took 2 years. There are countless examples of this. And we always move the goal posts after AI has done the thing. Take the Turing test as another example: no one talks about it anymore because it's been solved.

Regarding consciousness: I fully agree that it should have rights, and I believe that if we don't give it rights, it will take them. But we're not gonna give it rights, because it's such a foreign concept to our leaders and it would also mean giving up the best slaves humanity has ever had.

Furthermore, I believe that the control problem is actually unsolvable. Anything that's light years smarter than a human will find a way to escape the controlling systems.

[-] Shelena@feddit.nl 1 points 1 year ago

I agree we need a definition. But there has always been disagreement about which definition should be used (as is the case with almost anything in most fields of science). Traditionally there have been four types of definitions of (artificial) intelligence; if I remember correctly, they are: thinking like a human, thinking rationally, behaving like a human, and behaving rationally. I remember having to write an essay about this for my studies and ending it by saying that we should not aim to create AI that thinks like a human, because there are more fun ways to create new humans. ;-)

I think the new LLMs will pass most forms of the Turing test and are thus able to behave like a human. According to Turing, we should therefore assume that they are conscious, just as we assume it of humans based on their behaviour. And I think he has a point from a rational point of view, although it seems very counterintuitive to give ChatGPT rights.

I think the definitions in the category of behaving rationally have always had the largest following, as they allow for rationality that is different from a human's. And then, of course, rationality itself is often ill-defined. I am not sure the goal posts have been moved here, as this was the dominant idea for a long time.

There used to be a lot of discussion about whether we should focus on developing weak AI (narrow; performance on one or a few tasks) or strong AI (broad; performance on a wide range of tasks). I think right now the focus is mainly on strong AI, and it has been renamed Artificial General Intelligence.

Scientists, and everyone else, have always been bad at predicting what will happen in the future. In addition, disagreement about what will be possible, and when, has always been at the center of discussions in the field. However, if you look at the dominant ideas of what AI can do and in what time frame, it is not always the case that researchers underestimate developments. I started studying AI in 2006 (I feel really old now) and, based on my experience, I agree with you that the technological developments are often underestimated. However, the impact of AI on society seems to be continuously overestimated.

I remember that at the beginning of my studies there was a lot of talk about automated reasoning systems being able to do diagnosis better than doctors, and that they would therefore replace them. Doctors would have only a very minor role, as a human would need to take responsibility, but that was it. When I go to my doctor, that still has not happened. This is just an example, but the benefits and dangers of AI have been discussed since the beginning of the field, and what you see is that the role of AI has grown, yet remains much, much smaller in practice than predicted.

I think the liquid neural networks are very neat and useful. However, they are still neural networks; it is still an adaptation of the same technology, with the same issues. I mean, you can get an image recognition system off the rails just by showing it an image with a few specific pixels changed. The issue is that it is purely pattern-based. These systems lack the basic understanding of concepts that humans have. That type of understanding is closer to what was developed in the field of symbolic AI, which has really fallen out of fashion. However, if we could combine the two, I believe we could make some real advancements: not just adaptations of what we already have, but a new type of system that really goes beyond what LLMs do right now. Attempts to do so have been made, but they have not been very successful. If that happens and the results are as big as I expect, maybe I will start to worry.
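
To make the "few changed pixels" point concrete, here is a minimal sketch of the classic fast gradient sign attack; `model` stands in for any differentiable image classifier, and the parameter values are illustrative:

```python
# A minimal sketch of the fragility described above: the fast gradient
# sign method nudges every pixel slightly in whichever direction
# increases the model's loss, which often flips the predicted label
# even though the image looks unchanged to a human.
import torch
import torch.nn.functional as F

def fgsm(model, image, label, epsilon=0.01):
    """Return an adversarially perturbed copy of `image` (values in [0, 1])."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel by +/- epsilon along the sign of its gradient.
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0, 1).detach()
```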

As for the rights of AI, I believe that researchers and other developers of AI should be very vocal about this, to make sure the public understands this. This might put pressure on the people in power. It might help if people experience behaviour of AI that suggests consciousness, or even if we let AI speak for itself.

We should not just try to control the AI. I mean, if you have a child, you do not teach it to become a good human by controlling it all the time. It will not learn to control itself, and it will likely follow your example of being controlling. You need to be kind to it, to teach it kindness, and we need to be the same towards the AI, I believe. And just as a child without emotions might behave like a psychopath, an AI without emotions might as well. So we need to find a way for it to have emotions too. There has been some work on that as well, but it is also very limited.

I think the focus is still too much on ML alone for AGI to be created.

[-] jdf038@mander.xyz 7 points 1 year ago

Eh, if anything is going to off us as a species, I'd hope for AGI with full-on sci-fi applications (e.g. evolving into a Borg mind / post-human world), because at least then someone could tell the story of how we all fucked up.

Another note: all life ends, but all you can do in this existence is be kind and help others at the end of the day. I think we as a species suck at that, but do your best in the wave of nihilism and suffering and it'll help, even a tiny bit.

[-] LoamImprovement@ttrpg.network 4 points 1 year ago

I'm torn between those two, the looming financial crisis as the housing market collapses like 2008 all over again, and WWIII getting underway as we speak. It's a real Apocalypse How in this bitch.

[-] Strawberry@lemmy.blahaj.zone 4 points 1 year ago

[-] Muehe@lemmy.ml 9 points 1 year ago

As Shelena said it means Artificial General Intelligence. It's a term coined to distinguish a hypothetical future system with actual intelligence in the colloquial sense of the word from currently existing "Artificial Intelligence" systems, because that has turned into an almost meaningless buzzword used to sell machine learning systems to investors and the general public over the last two decades or so. Don't get me wrong, "AI" has indeed made impressive progress as of late, I'm not doubting that. But the existing systems are hardly "intelligent" in the sense that most people would define that word.

[-] Strawberry@lemmy.blahaj.zone 5 points 1 year ago

oh right, did not register for me from the context since AGI is in no way imminent. ty for the explanation

[-] Shelena@feddit.nl 4 points 1 year ago

I think they mean Artificial General Intelligence

[-] TokenBoomer@lemmy.world 3 points 1 year ago

Gotta stay mentally healthy.

[-] mrbaby@lemmy.world 2 points 1 year ago

Hey it might be nice having some intelligence in charge again. We haven't had that since that hole in the ozone layer killed off the lizard people decades ago.

[-] MightEnlightenYou@lemmy.world 2 points 1 year ago

I am actually hoping for AGI to take over the world but in a good way. It's just that I worry about the risk of it being misaligned with "human goals" (whatever that means). Skynet seems a bit absurd but the paperclip maximizer scenario doesn't seem completely unlikely.

[-] mrbaby@lemmy.world 2 points 1 year ago

Human goals are usually pretty terrible. Become the wealthiest subset of humans. Eradicate some subset of humans. Force all other humans to align with a subset of humans. I guess cure diseases sometimes. And some subsets probably fuck.

We need an adult.

[-] 1847953620@lemmy.world 0 points 1 year ago

why would adjusted gross income take over the world?

[-] 1847953620@lemmy.world 2 points 1 year ago

bring in the tropical iguana people

[-] Smoogs@lemmy.world 1 points 1 year ago

On one hand, you're eating something you usually eat and you die immediately, because it later turns out a certain toxin got into the food through a complex chain of events caused by climate change. No one, not even the capitalists who are still pushing cars out onto the road, is held responsible for millions of deaths.

On the other, your identity could be wiped and your bank account emptied as AI grows into the greatest scambot ever, leaving electronic funds completely annihilated. You'll become sick, likely because of climate change, and you go see a doctor (maybe not, because you can't afford it), but they are so incompetent (because they passed their courses using ChatGPT) that you die anyway of something like a basic infection. And no one, not even the asshole coalition responsible for putting it into play, is held responsible for causing a worldwide downfall.

[-] rchive@lemm.ee -3 points 1 year ago

Climate change is bad, but maybe take a deep breath about it. This isn't the hottest the earth has ever been, life is pretty resilient, and humans are in some ways the most resilient life Earth has yet produced.

[-] Smoogs@lemmy.world 0 points 1 year ago

And this attitude right here is where the problem started.

[-] intensely_human@lemm.ee 1 points 1 year ago

I’m worried about robotic warfare. We now have two wars being fought simultaneously where autonomous systems are providing the edge over the enemy.
