Good luck! I'm hoping I can get a worker's visa through my employer to move to the UK myself.
> This process is akin to how humans learn by reading widely and absorbing styles and techniques, rather than memorizing and reproducing exact passages.
Like fuck it is. An LLM "learns" by memorization and by breaking its training data down into its component tokens, then calculating the weights between those tokens. That lets it produce output that resembles (but may or may not perfectly replicate) its training dataset, but it produces no actual understanding or meaning--in other words, there's no actual intelligence, just really, really fancy fuzzy math.
Meanwhile, a human learns by memorizing training data, but also by parsing the underlying meaning, breaking it down into its component concepts, applying and testing those concepts, and mastering them through practice and repetition. Where an LLM would learn "2+2 = 4" by ingesting tens or hundreds of thousands of instances of the string "2+2 = 4" and calculating a strong relationship between the tokens "2+2," "=," and "4," a human child would learn that 2+2 = 4 by being given two apple slices, putting them next to another pair of apple slices, and counting the total to see that they now have 4 slices. (And then being given a treat of delicious apple slices.)
Similarly, a human learns to draw by starting with basic shapes, then moving on to anatomy, light and shadow, shading, and color theory, all the while applying each new concept to their work and developing the muscle memory to more easily draw the lines and shapes they combine into a whole picture. A human may learn from other people's drawings along the way, but at most they might study a few thousand images. Meanwhile, an LLM learns to "draw" by ingesting millions of images--without obtaining the permission of the people or organizations that created them--breaking those images down into their component tokens, and calculating weights between those tokens. There's about as much similarity between how an LLM "learns" and how a human learns as there is between my cat and my refrigerator.
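To make the "really fancy fuzzy math" point concrete, here's a deliberately crude toy sketch. To be clear, this is not how a real LLM is built--actual models learn continuous weights over token embeddings via gradient descent, not raw co-occurrence counts--so treat it purely as an illustration of the idea that you can generate plausible-looking output from nothing but token statistics, with zero understanding of what any token means:

```python
# Toy bigram "model": counts which token follows which in the training text,
# then generates by sampling from those counts. Illustration only -- a gross
# simplification of how real LLMs actually work.
from collections import Counter, defaultdict
import random

training_text = "2 + 2 = 4 . 2 + 2 = 4 . 2 + 2 = 4 ."  # stand-in for "thousands of instances"
tokens = training_text.split()

# The "weights" here are just how often each token follows another.
weights = defaultdict(Counter)
for current, following in zip(tokens, tokens[1:]):
    weights[current][following] += 1

def generate(start, length=6):
    """Emit tokens by repeatedly sampling a likely successor of the last token."""
    out = [start]
    for _ in range(length):
        successors = weights[out[-1]]
        if not successors:
            break
        choices, counts = zip(*successors.items())
        out.append(random.choices(choices, weights=counts)[0])
    return " ".join(out)

print(generate("2"))  # e.g. "2 + 2 = 4 . 2" -- pattern continuation, not arithmetic
```

Run it and you get continuations that look like the training text, but the program never "knows" that two plus two is four; it only knows that "4" tends to follow "=".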
And YET FUCKING AGAIN, here's the fucking Google Books argument. To repeat: Google Books displayed only minimal snippets of the copyrighted works, and it was not building a service to compete with book publishers. Generative AI is using the ENTIRE COPYRIGHTED WORK for its training set, and is building a service TO DIRECTLY COMPETE WITH THE ORGANIZATIONS WHOSE WORKS THEY ARE USING. The two have zero fucking relevance to one another as far as fair use claims go. I am sick and fucking tired of hearing about Google Books.
EDIT: I want to make another point: I've commissioned artists for work multiple times, featuring characters that I designed myself. And pretty much every time I have, the art they make for me comes with multiple restrictions: for example, they grant me a license to post it on my own art gallery, and they grant me permission to use portions of the art for non-commercial uses (e.g. cropping a portion out to use as a profile pic or avatar). But they all explicitly forbid me from using the work I commissioned for commercial purposes--in other words, I cannot slap the art I commissioned on a T-shirt and sell it at a convention, or make a mug out of it. If I did so, that artist would be well within their rights to sue the crap out of me, and artists charge several times as much to grant a license for commercial use.
In other words, there is already well-established precedent that even if something is publicly available on the Internet and free to download, there are acceptable and unacceptable use cases, and it's broadly accepted that using other people's work commercially without compensating them is not permitted, even when I directly paid someone to create that work for me.
~~assign everyone a government mandated fursona~~
Freak the fuck out.
Pull back from Ukraine, Crimea, and Georgia, and negotiate an immediate ceasefire.
Call as many political scientists and scholars as possible and get their advice on how the fuck I can design a reformed system of democratic governance that is robust enough to withstand the inevitable attempts to undermine and corrupt it.
Find the billions stashed away by the various oligarchs, seize them, and use the money to invest in overhauling Russian society--improving infrastructure and education, raising the standard of living, etc.
Yup, I delivered pizza for the Hut around the same time. Big ol' map of the area divided into sectors, and each order listed which sector the address was in. I'd write directions on the back of the order slip and go off into the night with nothing but a flashlight. On my first day the manager gave me a lecture on how to navigate by address and tell which side of the street a house was on; I learned more about navigating that day than in the entire rest of my life.
Sometimes I miss those days and wish I could be 19 and driving my tiny Honda Civic through the highlands again, listening to video game songs downloaded from OCRemix on my little MP3 player plugged into the car audio with a tape adapter.
Yeah, that's what happens when the LLM they use to summarize these articles strips all nuance and comedy.
In my experience, any time someone mentions how many decades of experience they have in IT, it means they either:
- Think that clicking the Facebook button on their desktop and finding their Downloads folder qualifies as experience in IT, or
- Have decades of actual IT experience, but think everything still works like it did in the 90s. Yeah, maybe you were an IT expert at one point, but you never bothered to keep your skills fresh, you geezer.
In either case, they think they know better than the lowly flunkie trying to help them, and trying to get them to actually listen to you and "please sir just upload debug logs, I beg you, no those aren't debug logs, I gave you the instructions to generate debug logs three times already, maybe things will be different after the fourth time, there's a literal KB article with step by step instructions to sync your photo library, no I won't call you to handhold you through this, I'd literally just be reading the steps in the article" is pure suffering.
Oh, I think they can articulate exactly what they're angry about if you let them, but they know that if they do it in public, it'll show just how crazy, hateful, ignorant, and bigoted they are. What they're struggling with is how to articulate their anger in a way that doesn't immediately expose them as a modern-day KKK with LGBT+ folks as their target.
Holy crap, what a garbage ragebait article.
Saving you a click: there's no new info here. It's just the same hullabaloo over the guy who made the accusations rescaling the models so they're the same size, and the author treating that as proof that the whole thing was faked.
Which, I don't personally have a strong opinion on whether it's faked (especially since it's been pointed out that models made in different programs and for different platforms can import at drastically different sizes), but it feels kind of disingenuous to call it faked just because of that, y'know? It's like if an artist takes a 1440p image, traces over it, and posts the traced image at 720p. I wouldn't consider blowing the traced 720p back up to 1440p as "faking" it or altering the traced image.
The public has the memory of a goldfish. We're less than 3 years out from the single worst administration in the history of this country, and we're seriously considering putting the man who ran it back in office.
> The lawsuit hinges on unwelcome public identification. Ironically, the parties here sue in their own names, filing in federal district court in Washington state and creating a public record of what the suit terms their “unpopular opinions.” By their own identification, they are:

[Names listed in the article]
They're not sending their best and brightest, folks
There's a pretty big difference between ChatGPT and the science/medicine AIs.
And keep in mind that for LLMs and other chatbots, it's not that they aren't useful at all, but that they aren't useful enough to justify their costs. Microsoft is struggling to get significant uptake for Copilot add-ons in Microsoft 365, and that's while AI companies are still in their "sell below cost and light VC money on fire to survive long enough to gain market share" phase. What happens when the VC money dries up and AI companies have to double their prices (or more) to bring in enough revenue to cover their costs?
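Back-of-the-envelope version of that last question, with entirely made-up numbers (real per-seat inference costs aren't public, so this is only meant to show the shape of the problem, not actual Copilot or OpenAI economics):

```python
# Entirely hypothetical figures -- illustration only, not real Copilot/OpenAI numbers.
subscription_price = 30.0   # assumed: what a customer pays per seat per month
serving_cost = 65.0         # assumed: what it costs to serve that seat per month

monthly_loss = serving_cost - subscription_price
required_multiplier = serving_cost / subscription_price  # price hike needed just to break even

print(f"Losing ${monthly_loss:.2f}/seat/month; price must rise ~{required_multiplier:.1f}x to break even.")
# -> Losing $35.00/seat/month; price must rise ~2.2x to break even.
```

And that's just break-even, before anyone sees a cent of profit or repays the VC money that got lit on fire along the way.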