Even if ChatGPT gets far more advanced at writing code than it is now, at the very least you're still going to need people to go over the code as a redundancy. Who is going to trust an AI so much that they're willing to risk it making coding errors? I think the job of at least understanding how code works will be safe for a very long time, and I don't think ChatGPT will get that advanced for a very long time either, if ever.
There's more to it than that, even. It takes a developer's level of knowledge to even begin to tell ChatGPT to make something sensible.
Sit an MBA down in front of a ChatGPT window and tell them to make an application. The application has to save state, it has to use the company's OAuth login system, it has to store data in a PostgreSQL database, and it has to have granular, roles-based access control.
Then watch the MBA struggle because they don't understand that...
The level of knowledge and detail required to make ChatGPT produce something useful on a large scale is beyond an MBA's skillset. They literally don't know what they don't know.
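Even one bullet from that list, the "granular, roles-based access control", expands into detail an MBA wouldn't know to ask for. Here's a minimal sketch of what that phrase means in practice (every name here is hypothetical, purely for illustration):

```python
from dataclasses import dataclass
from enum import IntEnum
from functools import wraps

class Role(IntEnum):
    VIEWER = 1
    EDITOR = 2
    ADMIN = 3

@dataclass
class User:
    name: str
    role: Role

def require_role(minimum: Role):
    """Reject callers whose role is below `minimum`."""
    def decorator(handler):
        @wraps(handler)
        def wrapper(user: User, *args, **kwargs):
            if user.role < minimum:
                raise PermissionError(f"{user.name} lacks {minimum.name} access")
            return handler(user, *args, **kwargs)
        return wrapper
    return decorator

@require_role(Role.EDITOR)
def update_record(user: User, record_id: int) -> str:
    # A real app would also check the OAuth session and write to PostgreSQL here.
    return f"record {record_id} updated by {user.name}"

print(update_record(User("alice", Role.ADMIN), 42))  # allowed

try:
    update_record(User("bob", Role.VIEWER), 42)
except PermissionError as err:
    print(err)  # bob lacks EDITOR access
```

And that's before you've decided how roles map to OAuth claims, or where they live in the database.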
I use an LLM in my job now, and it's helpful. I can tell it to produce snippets of code for a specific purpose that I know how to describe accurately, and it'll do it. Saves me time having to do it manually.
But if my company ever decided it didn't need developers anymore because ChatGPT can do it all, it would collapse inside six months, and everything would be broken due to bad pull requests from non-developers who don't know how badly they're fucking up. They'd have to rehire me... And I'd be asking for a lot more money to clean up after the poor MBA who'd been stuck trying to do my job.
Thank you, you explained all of that much better than I could.
You're welcome! And it occurs to me that the fact that it took a developer to explain all of that is an object lesson in why ChatGPT won't end software development as a career option - and believe me, I simplified it for a non-developer audience.
Sadly, too many executives will try it anyway.
Then their companies will go belly-up.
I don't believe it. If the AI's output is good enough, then they'll ship and make money, and those who put people on it will be so slow that they'll simply be outperformed by those who don't.
If your code doesn't work because you rely entirely on an AI to do it, you don't have a business you can run unless you want to go back to paper and pencil.
If your code doesn't work because you rely on humans understanding it, you don't have a business you can run either. We're already at the point where humans have no idea why the computer makes this or that decision, because it's so complex, especially with all the machine learning and sprawling training data. Let's not pretend it will get less complex with time.
So your argument is that people will rely entirely on AI without any redundancy, unlike now, where more than one human checks for these issues because humans make coding errors?
I kinda agree with them. Coding is already an abstraction: the average developer has very little idea what machine code their compiler actually produces, and for the most part they don't need to care. Feeding an AI a specification is just a higher level of abstraction.
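You can see that layer-skipping without even leaving Python: hardly anyone reads the bytecode the interpreter actually executes. A toy illustration, with Python bytecode standing in for machine code:

```python
import dis

def add(a, b):
    return a + b

# Most developers never look at what actually runs:
dis.dis(add)
# Prints something like (exact opcodes vary by CPython version):
#   LOAD_FAST    a
#   LOAD_FAST    b
#   BINARY_OP    + (BINARY_ADD on older versions)
#   RETURN_VALUE
```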
For now, we'll need people to check that AI-produced code does what we expect, but I believe at some point we'll mostly take it for granted that it does.
My argument is that already today no human is able to check it, or does check it, when it comes to decision-making models, for example whether the car should go left or right around an obstacle. And over time, fewer decisions will be made by straightforward classical programming and more by models fed hundreds or thousands of sensor inputs.
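To make that concrete, here's a toy sketch (nothing like a real autonomy stack; every name and number is made up): the "decision" is just arithmetic over learned weights, with no readable branch anyone could review the way they'd review an `if` statement:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained obstacle-avoidance model: the "logic" lives
# entirely in these learned weights, not in readable if/else branches.
W1 = rng.normal(size=(1000, 64))  # 1000 sensor inputs -> 64 hidden units
W2 = rng.normal(size=(64, 2))     # 64 hidden units -> [left, right] scores

def steer(sensor_readings: np.ndarray) -> str:
    hidden = np.maximum(sensor_readings @ W1, 0.0)  # ReLU layer
    left_score, right_score = hidden @ W2
    return "left" if left_score > right_score else "right"

print(steer(rng.normal(size=1000)))  # no single line "decided" this
```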
Except we already have fields (like pharma manufacturing) that deal with hundreds or thousands of inputs and variables and are automated, and we still manage to fully understand the stack and fully check everything.
Hint: when someone tells you they "can't" check or understand what their software is doing, it's a scam.
Normally they should be told to go back and figure it out before being allowed to ship any product. If you tried this in any other industry it would be laughable. Even in software it's outrageous: imagine getting accounting software, or even a simple file backup tool, that doesn't work some of the time and nobody can tell you how it works. Yet these companies get a pass putting cars like this on the road.
I'd hope so, but it already works for many of them.
That's a fuckin bleak outcome for a lot of people if the job transition goes from "writing the code" to "fixing the AI's code".
That's like being an artist and being told your job now is simply to fix the shitty hands Midjourney draws. And your job will only last as long as that remains a problem.
Hey, I didn't say the future would be bright, just that it will still need people familiar with code for the foreseeable future. At least until the Earth heats up so much that the lack of potable water and the unsurvivable high temperatures destroy civilization.
It isn't surprising that this is the way we conceptualize the potential impact of AI, but it's frustrating to see it tossed around as if AI disruption were a foregone conclusion.
AI will start redefining the problems that code is written to solve long before we get anywhere close to GPT models replacing human workers, and that's a big enough problem by itself.
It used to be that before code could even be employed to solve a problem, the problem had to be understood procedurally. That's increasingly not the case, given that ML is routinely employed to decode things previously thought too chaotic to be understood, like brain waves and image pixel data. I don't know why we're so sure of ourselves that machine learning is just a gimmick and poses no real threat, just because anthropomorphizing it seems silly.