It was some on-board GPU with my super amazing AMD K6-2; it couldn't even run Mega Man X without chugging. Then a friend gave me an S3 ViRGE with a glorious 4MB of VRAM.
One point that stands out to me is that when you ask it for code, it will give you an isolated block of code that does what you want.
In most real-world use cases, though, you are plugging code into larger codebases with design patterns and paradigms throughout that need to be followed.
An experienced dev can take an isolated code block that does X and refactor it into something that fits in with the current codebase; we already do this daily with Stack Overflow.
An inexperienced dev will just take the code block and try to ram it into the existing code in the easiest way possible, without thinking about whether the code could use existing dependencies, whether it's testable, etc.
So anyway, I don't see a problem with the tool; it's just like using Stack Overflow. But as we have seen, businesses and inexperienced devs seem to think it's more than this and can do their job for them.
A lot of the AI boom is like the dot-com boom of the web era. The bubble burst and a lot of companies lost money, but the technology is still very much important and relevant to us all.
AI feels a lot like that: it's here to stay, maybe not in the ways investors are touting, but for voice, image, and video synthesis/processing it's an amazing tool. It also has lots of applications in biotech, targeting systems, logistics, etc.
So I can see the bubble bursting and a lot of money being lost, but that is the point when actually useful applications of the technology will start becoming mainstream.
I just wish we had fewer ways to do things in Linux.
I get that's one of the main benefits of the ecosystem, but it adds too much of a burden on developers and users. A developer can release something for Windows easily, same for Mac, but for Linux, is it a Flatpak, a deb, a Snap, etc.?
Also, given how many shells and how much pluggable infrastructure there is, it's not like troubleshooting on Windows or Mac, where you can Google something and others will have the exact same problem. On Linux, some may have the same problem, but most of the time it's a slight variation, and there are fewer users in the pool to begin with.
So a lot of stuff is stacked against you. I would love for it to become more mainstream, but to do so I feel it needs to be a bit more like Android, where there is a single way to build/install packages, and to get more people onto a common shell/infrastructure so there are more people with the same setup to help each other. Even if it's not technically the best possible setup, if it's consistent and easy to build for, it's going to speed up adoption.
I don't think it's realistically possible but it would greatly help adoption from consumers and developers imo.
Most companies can't even give decent requirements for humans to understand and implement. An AI will just write any old stuff it thinks they want, and they won't have any way to really know if it's right.
They would have more luck trying to create an AI that takes whimsical ideas and turns them into quantified requirements with acceptance criteria. Once they can do that, they may stand a chance of replacing developers, but it's gonna take far more than the simpleton code generators they have at the moment, which at best are like bad SO answers you copy and paste then refactor.
This isn't even factoring in automation testers (who are programmers), build engineers, DevOps, etc. Can't wait for companies to cry even more about cloud costs when some AI is just lobbing everything into Lambdas.
I think part of the problem comes down to how a lot of games come out as "Early Access", which implies it's more bare-bones and will get fleshed out over time.
If a game releases as EA, then the expectation is you will get more content until release; if a game just comes out without EA, then it's assumed it has all its content and anything new is DLC/MTX/expansions.
I'm not gonna bother addressing live-service games; wish they would go in the bin with most other MTX.
I don't think it's quite as simple as "let's crack down on Steam like other monopolies", as what do you crack down on?
They engage in little to no anti-competitive behaviour; clutching at straws, you could point to them requiring price parity on Steam keys (except during sales).
All these other monopolies do lots of shady stuff to get and maintain their monopoly, so you generally want to stop them doing those things. Steam doesn't do anything shady to maintain its monopoly; it just carries on improving its platform and, ironically, improving the user experience even on platforms outside its own.
Like, what do you do to stop Steam being so popular, outside of just arbitrarily making it shitter so the other storefronts seem OK by comparison?
The 30% cut is often cited, and maybe that could be dropped slightly, but I'm happy for them to keep taking that cut if they continue to invest some of it back into the ecosystem.
Look at other platforms like Sony and MS, which take 30% to sell on their stores, THEN charge you like £5 a month if you want multiplayer and cloud saves. Steam just gives you all this as part of the same 30%.
Epic literally does anti-competitive things like exclusivity deals, taking games it has a stake in off other storefronts, or crippling their functionality.
Steam has improved how I play games: it has cloud saves, virtual controllers, streaming, game sharing, Remote Play Together, VR support, and mod support, all as part of their 30%. The other platforms take the same and do less, or take less but barely function as a platform.
Anti-monopoly action is great when a company is abusing its position, but I don't feel Valve is. They are just genuinely good for PC gaming and have single-handedly made PC gaming a mainstream platform.
It saddens me, as Windows 8 was absolutely awful and the first step towards the mess we have now. Windows 10 was better but still inconsistent in loads of areas and still felt faffy to use.
If you ignore the ads and bloatware in Windows 11, it's not that much better than 10; the UI feels more consistent but is still more painful to use than Windows 7.
We have no "good" versions of Windows to use; they are all bad and getting worse. I would love to jump to Linux, but that has its own raft of inconsistencies and issues, just different ones.
Stuff just works on Windows. I have a Proxmox box with some Linux VMs to run containers, and I've tried several times over the last 20 years to move to Linux on my main PC, but there are just too many faffy bits.
I really dislike what Windows has become; it's bloatware that's getting worse and worse, but I begrudgingly use it as I can be productive. The moment I can be as productive in Linux, I'm off Windows. But even simple things like drivers are often not as good, lots of commercial software has bare-bones or no Linux support, and there are many different package managers (on one hand great), but some have permission problems due to sandboxing when you need something like your IDE to have access to the dotnet package. Also, as a developer, building apps/libs for Linux is a nightmare.
For example, if I make an app for Windows, I build a single binary; same for macOS. For Linux it's the wild west: varying versions of glibc, various versions of GTK, and that's the simpler stuff.
Anyway I REALLY WANT to like Linux and move away from windows to it, but every time I try its hours/days of hoop jumping before I just end up going back to windows and waiting for windows to annoy me so much I try again.
(Just to be clear, the annoyances I have with Windows are its constant ads/bloatware, its segregation of settings and duplication of things, and its constant updates forcing you to turn off all their nonsense AGAIN.)
In isolation the automation of roles is a great thing, but the way society is currently run, your entire quality of existence is tied to your job, and retraining and getting a new job is harder than ever and costs a lot.
If society made it easier for people to retrain and get better jobs and slowly replaced all those bad jobs with an automated workforce it would be better for everyone.
Can't see it happening though...
Same as above: as a kid (80s), games were new and interesting; even the shovelware games you would get for free on C64 mags were interesting.
Over the years games have just become more and more streamlined and action-focused; it's basically like Hollywood now, where they just churn out nice-looking mediocre films to make money.
The 2nd point, though, is why I responded, as I really agree that something new is what makes games interesting now. They don't even have to be amazing, just offer a new experience.
For example, when DayZ came out, that was a nice breath of fresh air; every time I loaded up the game with friends, I never knew what was going to happen. Same sort of thing with Phasmophobia: it was genuinely amazing for the first week we played it, just nothing else like it. Now you can't move for DayZ-style games or Phasmo rip-offs.
I am bored of playing the same sort of stuff, like I'm bored of watching superhero movies; I want new experiences (VR has some good ones).
There have been some decent results historically with checkerboard and other split-frame reconstruction techniques. Nvidia was working on some new checkerboard approaches before they killed off SLI.
A decade or two ago most people I knew had dual GPUs; it was quite common for gamers, and while you were not getting 100% utilisation out of both cards, it was enough to be noticeable, and the prices were not mega bucks back then.
On your point of buying one card vs many, I get that, but it seems like we are reaching some limitations with monolithic dies. Shrinking chips seems to be getting far harder and more costly, so to keep performance moving forward we are now jacking up the power draw, etc.
Anyway, the point I'm trying to make is that it's going to become so costly to keep building these more powerful monolithic GPUs, and their power requirements will keep going up. So if it's two mid-range GPUs for $500 each or one high-end GPU for $1.5k with possibly higher power usage, I'm not sure it will be as much of a shoo-in as you say.
Also, if multi-chiplet designs are already having to solve the problem of multiple GPU cores communicating and acting like one big one, maybe some of that R&D could benefit higher-level multi-GPU setups.