Microsoft and Alphabet results show Wall Street only cares about AI
(www.marketwatch.com)
Hallucinations can be heavily reduced today by providing the LLM with ground truth. People use naked LLMs as knowledge databases, which is indeed prone to hallucination. However, provide them with verified data on the side and they are very, very good at sticking to the truth. I know, because we deploy these with clients to great effect.
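The grounding setup described above is usually just retrieval plus a constrained prompt. A minimal sketch, with a toy keyword retriever standing in for a real search index (all names here are illustrative, not any specific vendor API):

```python
# Sketch of "grounding": retrieve verified snippets, then instruct
# the model to answer only from them. The retrieval step is a naive
# keyword-overlap ranking; a real deployment would use a proper
# search index or vector store.

def retrieve(query, documents, top_k=2):
    """Rank documents by keyword overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_grounded_prompt(query, documents):
    """Assemble a prompt that pins the model to verified context."""
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return (
        "Answer using ONLY the context below. "
        "If the context is insufficient, say you don't know.\n"
        f"Context:\n{context}\n"
        f"Question: {query}"
    )

docs = [
    "Kenya is the only African country whose English name starts with K.",
    "Photoshop's generative fill is powered by a diffusion model.",
]
prompt = build_grounded_prompt("Which African countries start with K?", docs)
```

The resulting prompt is what gets sent to the model; the "say you don't know" escape hatch is what keeps it from improvising when the verified data doesn't cover the question.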
Image, music, and video models are making great strides and are already part of various pipelines, all the way up to big-boy tools like Photoshop (generative fill, for example).
The tech is being incorporated at a large scale by a lot of companies, from SME to megacorp. I don't see it going away any time soon, even if it doesn't improve from here on out (which it undoubtedly will).
The issue is that from time to time they still confidently hallucinate, and there is no reliable way to detect whether they are right or not.
Hire 1 person to verify AI output instead of a dozen to make the content. If that one editor misses something, who cares when we live in a post-truth society where the media lies on purpose.
How many countries start with the letter K in Africa?
GPT-4:
Anecdotal evidence is useless because it can be contradicted with anecdotal evidence.