I'm not surprised that this feature (which was apparently introduced by Canva in 2019) is AI-based in some way. It was just never marketed as such, probably because in 2019, AI hadn't become a common buzzword yet. It was simply called “background remover” because that's what it does. What I find so irritating is that these guys on LinkedIn not only think this feature is new and believe it's only possible in the context of GenAI, but apparently also believe that this is basically just the final stepping stone to AI world domination.
HedyL
This somehow reminds me of a bunch of senior managers in corporate communications on LinkedIn who got all excited over the fact that with GenAI, you can replace the background of an image with something else! That's never been seen before, of course! I'm assuming that in the past, these guys could never be bothered to look into tools as widespread as Canva, where a similar feature had been present for many years (before the current GenAI hype, I believe, even if the feature may use some kind of AI technology - I honestly don't know). Such tools are only for the lowly peasants, I guess - and quite soon, AI is going to replace all the people who know where to click to access a feature like "background remover", anyway!
By the way, is there a DuckDuckGo bang yet for Google's "udm=14" ("Web" tab) view? I have been looking for something like this for a while, but with no success so far. It's very frustrating to receive these AI-generated answers even when using "!g".
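In the absence of a dedicated bang, the "Web" tab can be reached by building the search URL by hand. A minimal sketch, assuming the (undocumented) "udm=14" query parameter still selects the AI-free "Web" view (the helper name is my own):

```python
# Sketch: constructing a Google "Web" tab search URL manually.
# Assumption: the undocumented udm=14 parameter selects the plain
# "Web" results view, which currently omits the AI Overview box.
from urllib.parse import urlencode

def google_web_url(query: str) -> str:
    # urlencode handles spaces and special characters in the query.
    return "https://www.google.com/search?" + urlencode({"q": query, "udm": 14})

print(google_web_url("background remover"))
# e.g. https://www.google.com/search?q=background+remover&udm=14
```

A URL like this can also be registered as a custom search engine in most browsers (with "%s" in place of the query), which gives roughly the same one-keystroke convenience as a bang.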
Of course, it has long been known that some private investors will buy shares in any company just because its name contains a term like ".com" or "blockchain". However, if a firm invests half a billion in an ".ai" company, shouldn't it make sure that the business model is actually AI-based?
Maybe, if we really wanted to replace something with AI, we should start with the VC investors themselves. In this case, we might not actually see any changes for the worse.
Edit: Of course, investors only bear part of the blame if fraud was involved. But the company apparently received a large part of its funding in 2023, after reports of similar lies had surfaced as early as 2019. I find it hard to imagine that tech-savvy investors really wouldn't have had a chance to spot the problems earlier.
Edit No. 2: Of course, it is also conceivable that the investors didn't care at all because they were only interested in the baseless hype, which they themselves fueled. But with such large sums of money at stake, I still find it hard to imagine that there was apparently so little due diligence.
As all the book authors on the list were apparently real, I guess the "author" of this supplemental insert remembered to google their names and to remove all references to fake books by fake authors made up by AI, but couldn't be bothered to do the same with the book titles (too much work for too little money, I suppose?). And for an author to actually read these books before putting them on a list is probably too much to ask...
It's also funny how some people seem to justify this by saying that the article is just “filler material” around ads. I don't know, but I believe most people don't buy printed newspapers in order to read nonsensical “filler material” garnished with advertising. The use of AI is a big problem in this case, but not the only one.
Please help me understand this: It was supposedly fine because "only one minor was molested", and this confession made everyone more trustworthy? Am I missing something?
Reportedly, some corporate PR departments "successfully" use GenAI to increase the frequency of meaningless LinkedIn posts they push out. Does this count?
In my experience, if some "innovation" makes no sense and yet is continuously hyped up by people who should absolutely know better, it is usually because it allows them to circumvent some law or regulation they don't like. That was certainly true for cryptocurrencies and for a lot of complex financial products during the subprime crisis, and it appears to be true in this case again (this time, it's copyright law). If AI "rewords" existing content and adds fresh errors, the result is supposedly no longer copyrighted and can be used to sell more ads - mission accomplished.
For me, everything increasingly points to the fact that the main “innovation” here is the circumvention of copyright regulations. With possibly very erroneous results, but who cares?
It's also worth noting that your new variation of this “puzzle” may be the first one that describes a real-world use case. This kind of problem is probably being solved all over the world all the time (with boats, cars and many other means of transportation). Many people who don't know any logic puzzles at all would come up with the right answer straight away. Of course, AI also fails at this because it generates its answers from training data, where physical reality doesn't exist.
This is particularly remarkable because - as David pointed out - being a pilot is not even one of those jobs that nobody wants to do. There is probably still an oversupply of suitable people who would pass all the screening tests and really want to become pilots. Some of them would probably even work for a relatively average salary (as many did in the past outside the big airlines). The only problem for the airlines is probably that they can no longer count on enough people being willing (and able!) to take on the high training costs themselves. Therefore, airlines would have to hire somewhat less affluent candidates and pay for all their training. However, AI probably looks a lot more appealing to them...
Under the YouTube video, somebody just commented that they believe that in the end, the majority of people are going to accept AI slop anyway, because that's just how people are. Maybe they're right, but to me it seems that sometimes, the most privileged people are the ones most impressed by form over substance, and this seems to be the case with AI at the moment. I don't think this necessarily applies to the population as a whole, though. The possibility that oligopolistic providers such as Google might eventually leave them with no other choice by making reliable search results almost unreachable is another matter.