Google has plunged the internet into a “spiral of decline”, the co-founder of the company’s artificial intelligence (AI) lab has claimed.

Mustafa Suleyman, the British entrepreneur who co-founded DeepMind, said: “The business model that Google had broke the internet.”

He said search results had become plagued with “clickbait” to keep people “addicted and absorbed on the page as long as possible”.

Information online is “buried at the bottom of a lot of verbiage and guff”, Mr Suleyman argued, so websites can “sell more adverts”, fuelled by Google’s technology.

[-] squaresinger@feddit.de 213 points 1 year ago

The part about Google isn't wrong.

But the second half of the article, where he says that AI chatbots will replace Google search because they give more accurate information, that simply is not true.

[-] Enkers@sh.itjust.works 74 points 1 year ago* (last edited 1 year ago)

I'd say they at least give more immediately useful info. I've got to scroll past 5-8 sponsored results, and then the next top results are AI-generated garbage anyway.

Even though I think he's mostly right, the AI techbro gameplan is obvious. Position yourself as a better alternative to Google search, burn money by the barrelful to capture the market, then begin enshittification.

In fact, enshittification has already begun; responses are comparatively expensive to generate. The more users they onboard, the more they have to scale back the quality of those responses.

[-] nilloc@discuss.tchncs.de 2 points 1 year ago

ChatGPT is already getting worse at code commenting and programming.

The problem is that enshittification is basically a requirement in a capitalist economy.

[-] ribboo@lemm.ee 23 points 1 year ago

I mean, most top searches are AI-generated bullshit nowadays anyway. Adding "Reddit" to a search is basically the only decent way to get a proper answer. But those answers aren't much more reliable than ChatGPT's. You have to apply the same sort of skepticism and fact-checking regardless.

Google has really gotten horrible over the years.

[-] SmashingSquid@notyour.rodeo 4 points 1 year ago

Most of the results after the first page on Google are usually the same as the usable results, just mirrored on some shady site full of ads and malware.

[-] twinnie@feddit.uk 12 points 1 year ago

I already go to ChatGPT more than Google. If you pay for it then the latest version can access the internet and if it doesn’t know the answer to something it’ll search the internet for you. Sometimes I come across a large clickbait page and I just give ChatGPT the link and tell it to get the information from it for me.

[-] madnificent@lemmy.world 40 points 1 year ago

Do you fact-check the answers?

Whether you need to fact-check depends on what you're using it for.

I'm a software developer, and if I can't remember how to do an inner join in SQL, it's easier to ask ChatGPT to write it for me, and I'll know whether it's right because this is my field of expertise.
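For what it's worth, this is the kind of answer I mean. A quick sketch using Python's built-in sqlite3 with made-up tables; an inner join keeps only the rows that match on both sides:

```python
import sqlite3

# In-memory database with two hypothetical tables to join.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, item TEXT);
    INSERT INTO users VALUES (1, 'alice'), (2, 'bob');
    INSERT INTO orders VALUES (10, 1, 'keyboard'), (11, 1, 'mouse'), (12, 3, 'monitor');
""")

# INNER JOIN drops non-matching rows: bob has no orders, and
# order 12 points at a user that doesn't exist, so neither appears.
rows = conn.execute("""
    SELECT users.name, orders.item
    FROM users
    INNER JOIN orders ON orders.user_id = users.id
    ORDER BY orders.id
""").fetchall()
print(rows)  # [('alice', 'keyboard'), ('alice', 'mouse')]
```

The point being: I can verify that output at a glance, which is exactly why fact-checking isn't a burden here.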

If I’m asking it how to perform open heart surgery on my cat, then sure I’m probably going to want several second opinions as that is not my area of expertise.

When using a calculator do you use two different calculators to check that the first one isn’t lying?

Also, you made a massive assumption that the stuff OP was using it for was something that warranted fact checking.

I can see why you would use it. Why would I want to search Google for "inner join SQL" when it's going to give me so many useless links that don't give me the info I need in a concise manner?

Even time-wasting searches have been ruined. Example: "Top Minecraft Java seeds 1.20" gives me pages littered with ads, or those awful "page 1 of 10" galleries you must click through.

Many websites are literally unusable at this point, and I use ad blockers and things like Consent-O-Matic. But there are still pop-up ads, "sub to our newsletter" prompts, scam ads, etc., so much so that I'll just leave the site and forgo learning the new thing I wanted to learn.

[-] Steeve@lemmy.ca 2 points 1 year ago

The new release of GPT-4 searches Bing, reads the results, summarizes, and provides sources, so it's easier to fact check than ever if you need to.

[-] Baines@lemmy.world 21 points 1 year ago

give it time, algos will fuck those results as well

[-] Dave@lemmy.nz 10 points 1 year ago

ChatGPT powers Bing Chat, which can access the internet and find answers for you, no purchase necessary (if you're not on Edge, you might need to install a browser extension to access it, as they're still trying to push Edge).

[-] madnificent@lemmy.world 5 points 1 year ago

Do you fact-check the answers?

[-] Redredme@lemmy.world 9 points 1 year ago

That's such a strange question. It's almost like you imply that Google results do not need fact checking.

They do. Everything found online does.

[-] otter@lemmy.ca 12 points 1 year ago

With Google, it depends on what webpage you end up on. Some require more checking; others are more trustworthy.

Generative AI can hallucinate about anything

[-] dojan@lemmy.world 20 points 1 year ago* (last edited 1 year ago)

There are no countries in Africa starting with K.

LLMs aren’t trained to give correct answers, they’re trained to generate human-like text. That’s a significant difference.
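A toy sketch of why (not a real LLM, just a made-up bigram model for illustration): the model only learns which words tend to follow which, so "fluent" and "true" are completely unrelated properties of its output.

```python
import random
from collections import defaultdict

# Tiny "training corpus" containing one fluent but false sentence.
corpus = [
    "the capital of australia is sydney",   # false: it's Canberra
    "the capital of france is paris",
]

# Learn which word follows which (a bigram table).
follows = defaultdict(list)
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        follows[a].append(b)

def generate(start: str, n: int = 6, seed: int = 0) -> str:
    """Sample likely next words; truth never enters the objective."""
    random.seed(seed)
    out = [start]
    for _ in range(n):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

print(generate("the"))  # plausible word order, possibly a false claim
```

Real models are vastly more sophisticated, but the training objective is the same family: predict likely text, not correct text.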

[-] Takumidesh@lemmy.world 2 points 1 year ago

They also aren't valuable for asking direct questions like this.

Their value comes in call-and-response discussions, being able to pair program and work through a problem, for example. It isn't about it spitting out a working solution, but about it being able to assess a piece of information in a different way than you can, which creates a new analysis of the information.

It's extraordinarily good at finding things you miss in text.

[-] dojan@lemmy.world 1 points 1 year ago

Yeah. There are definitely tasks suited to LLMs. I've used them to condense text, write emails, and even for project planning, because they do give decently good ideas if you prompt them right.

Not sure I'd use them for finding information though, even with the ability to search for it. I'd much rather just search for it myself so I can select the sources, then have the LLM process it.

[-] madnificent@lemmy.world 1 points 1 year ago

Agree.

I found it more tempting to accept the initial answers I got from GPT-4 (and derivatives) because they are so well written. I know there are more like me.

With the advent of working LLMs, reference manuals should gain importance too. I check them more often than before because LLMs have forced me to. That could be very positive.

[-] yoz@aussie.zone 4 points 1 year ago

It's already happening at my work. Many are using Bing AI instead of Google.

[-] DudeDudenson 3 points 1 year ago

Don't worry, they'll start monetizing LLMs and injecting ads into them soon enough, and we'll be back to square one.

[-] Aceticon@lemmy.world 0 points 1 year ago

I suspect that client-side AI might actually be the kind of thing that filters the crap from search results and actually gets you what you want.

That would only be chat-style AI if it turns out natural-language queries are better at determining what the user is looking for than people crafting more traditional query strings.

I'm thinking each person could train their AI based on which query results they went for in unfiltered queries, with some kind of user-provided suitability feedback to account for clickbait (i.e. somebody selecting a result because it looks good but it turns out it's not).
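Something like this, as a hypothetical sketch (all names and the scoring scheme are made up): keep per-user counts of words from results the user clicked vs. everything they were shown, and rerank new results by how "clicked-like" their titles look.

```python
from collections import Counter

clicked_words = Counter()   # words from results the user actually chose
shown_words = Counter()     # words from everything the user was shown

def record(title: str, clicked: bool) -> None:
    """Log one search result the user saw, and whether they clicked it."""
    for w in title.lower().split():
        shown_words[w] += 1
        if clicked:
            clicked_words[w] += 1

def score(title: str) -> float:
    """Average smoothed click rate per word; higher = more like past picks."""
    words = title.lower().split()
    return sum((clicked_words[w] + 1) / (shown_words[w] + 2) for w in words) / len(words)

# Feedback: the user skips listicle clickbait, clicks reference docs.
record("top 10 amazing sql tricks you must know", clicked=False)
record("sql inner join reference and examples", clicked=True)

results = ["top 10 amazing join tricks", "inner join reference"]
print(sorted(results, key=score, reverse=True))
# ['inner join reference', 'top 10 amazing join tricks']
```

A real version would need far richer features than bag-of-words, but the feedback loop is the same idea: the filter lives with the user, not the ad seller.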

[-] Zeth0s@lemmy.world -2 points 1 year ago* (last edited 1 year ago)

If you aren't paying for ChatGPT, have a look at perplexity.ai; it's free.

You'll see that sources are referenced and linked.

Don't judge by the free version of ChatGPT.

Edit. Why the hell are you guys downvoting a legit suggestion of a new technology in the technology community? What do you expect to find here? Comments on steam engines?

[-] cybersandwich@lemmy.world -3 points 1 year ago

I dunno. There have been quite a few times where I am trying to do something on my computer and I could either spend 5 minutes searching, refining, digging through the results...or I can ask chatgpt and have a workable answer in 5 seconds. And that answer is precisely tailored to my specifics. I don't have to assume/research how to modify a similar answer to fit my situation.

Obviously it depends on the type of information you need, but for coding, bash scripting, the Linux CLI, or anything of that nature, LLMs have been great and much better than Google searches.

[-] Excrubulent@slrpnk.net 14 points 1 year ago* (last edited 1 year ago)

Okay but the problem with that is that LLMs not only don't have any fidelity at all, they can't. They are analogous to the language planning centre of your brain, which has to be filtered through your conscious mind to check if it's talking complete crap.

People don't realise this and think the bot is giving them real information, but it's actually just giving them spookily realistic word-salad, which is a big problem.

Of course you can fix this if you add some kind of context engine for them to truly grasp the deeper and wider meaning of your query. The problem with that is that if you do that, you've basically created an AGI. That may first of all be extremely difficult and far in the future, and second of all it has ethical implications that go beyond how effective of a search engine it is.

[-] cybersandwich@lemmy.world 2 points 1 year ago

Did you read my last little bit there? I said it depends on the information you're looking for. I can paste error output from my terminal into Google and try to find an answer, or I can paste it into ChatGPT and be, at the very least, pointed in the right direction almost immediately, or even given the answer right away, vs. getting a Stack Overflow link and parsing the responses and comments and following secondary and tertiary links.

I absolutely understand the stochastic parrot conundrum with LLMs. They have significant drawbacks and they are far from perfect, but then neither are Google search results. There is still a level of skepticism you have to apply.

One of the biggest mistakes people make is the idea that LLMs and web searching are a zero-sum affair. They don't replace each other; they complement each other. Imo, Google is messing up with their "AI" integration into Google search. It sets the expectation that it's an equivalent function.

[-] Touching_Grass@lemmy.world -3 points 1 year ago

I don't need perfect. I need good enough

[-] Excrubulent@slrpnk.net 7 points 1 year ago* (last edited 1 year ago)

Sure but if that becomes the norm then a huge segment of the population will believe the first thing the bot tells them. You might be okay, but we're talking about an entire society filtering its knowledge through an incredibly effective misinformation engine that will lie rather than say "I don't know", because that simple phrase requires a level of self-awareness that eludes a lot of actual people, much less a chatbot.

[-] Touching_Grass@lemmy.world -2 points 1 year ago

That's already a problem. The thing I think about is what will serve me better: Google or chat AI. The risk of bad information exists with both. But an AI-based search engine will be much better at finding context and returning results geared towards my goals, and I suspect less prone to fuckery because an AI must be trained as a whole.

[-] Excrubulent@slrpnk.net 3 points 1 year ago* (last edited 1 year ago)

Except we already know that LLMs lie and people in general are not aware of this. Children are using these. When you as a person have to sift through results you get a sense of what information is out there, how sparse it is, etc. When a chatbot word-vomits the first thing it can think of to satisfy your answer, you get none of that, and perhaps you should be aware of that yourself. You don't really seem to be, it's like you think the saved time is more important than context, which apparently I have to remind you - the bot doesn't know context.

When you say:

an AI based search engine is something that will be much better at finding context

It makes me think that you really don't understand how these bots work, and that's the real danger.

We're talking in this thread about this wider systemic issue, not just what suits you personally regardless of how much it gaslights you, but if that's all you care about then you do you I guess ¯\_(ツ)_/¯

[-] Touching_Grass@lemmy.world -1 points 1 year ago* (last edited 1 year ago)

Lying is a weird way to describe it. They give you an answer based on probabilities. When they're off base, they call it hallucinating. It's not lying, it's just lacking the data to give an accurate and correct answer, which will get better with more training and data. Everything else we have so far gets worse. Google isn't what it was 15 years ago.

I use ChatGPT every day to get answers instead of Google. It's better in almost every single way to get information from, and I can only imagine what it's capable of once it can interface with crawlers.

The language you're using to speak on this issue makes it seem like there's a personal vendetta against LLMs. Why people get so mad at a new tool is always fascinating.

this post was submitted on 15 Oct 2023