I always need to laugh when I read "Agentic AI"
I am Jack's complete lack of surprise.
I'd be inclined to try using it if it were smart enough to write my unit tests properly, but it's great at inserting the same mock twice and producing zero working unit tests.
I might try using it to generate some Javadoc though... then when my org inevitably starts polling how much AI I use, I won't be in the gutter lol
I've seen it generate "working" unit tests plenty, in the sense that they pass.
...they just don't actually test the functionality. Of course that function returns what you're asserting - you overwrote its actual output and checked against that!
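For anyone who hasn't run into this failure mode, here's a minimal sketch of what those "passing" tests tend to look like. The function and values are invented for illustration; the point is that the test mocks the very function it claims to test, then asserts against the value it just injected, so it can never fail.

```python
import unittest
from unittest.mock import patch


def calculate_total(price, tax_rate):
    # Deliberately buggy "production" code: it forgets to add the tax.
    return price


class TestCalculateTotal(unittest.TestCase):
    # The anti-pattern: patch the function under test itself...
    @patch("__main__.calculate_total", return_value=110.0)
    def test_calculate_total(self, mock_calc):
        # ...so the real function never runs. The bug above goes
        # completely undetected, and the assertion just compares the
        # mock's return value against itself.
        result = calculate_total(price=100.0, tax_rate=0.10)
        self.assertEqual(result, 110.0)  # Always passes, tests nothing.


if __name__ == "__main__":
    unittest.main()
```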
So when the AI bubble bursts, will there be coding jobs available to clean up the mess?
There already are. People all over LinkedIn are changing their titles to "AI Code Cleanup Specialist".
About that "net slowdown": I think it's true, but only in specific cases. If the user already knows how to write code well, an LLM might be only marginally useful, or even useless.
However, there are ways to make it useful, though it requires specific circumstances. For example, if you can't be bothered to write a simple loop, you can use an LLM to do it. Hand the boring routine to an LLM, and you can focus on naming the variables in a fitting way or adjusting the finer details to your liking.
Can't be bothered to look up the exact syntax for a function you use only twice a year? Let an LLM handle that, and tweak the details. Now you didn't spend 15 minutes reading Stack Overflow posts that don't answer the exact question you had in mind. Instead, you spent 5 minutes on the whole thing, and that includes the tweaking and troubleshooting.
If you have zero programming experience, you can use an LLM to write some code for you, but prepare to spend the whole day troubleshooting something that is essentially a black box to you. Alternatively, you could ask a human to write the same thing in 5-15 minutes, depending on the method they choose.
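The "twice a year" syntax point rings true for me. A typical example (my own, not the parent's) is strftime/strptime format codes; this is exactly the kind of snippet I'd rather have a bot spit out so I can just check and tweak the codes:

```python
from datetime import datetime

# Rarely-used syntax I never remember: strptime/strftime format codes.
raw = "2024-03-07 14:05"
parsed = datetime.strptime(raw, "%Y-%m-%d %H:%M")   # string -> datetime
pretty = parsed.strftime("%A, %d %B %Y at %H:%M")   # datetime -> readable text

print(pretty)  # Thursday, 07 March 2024 at 14:05
```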
This is a sane way to use an LLM. Also, pick your poison: some bots are better than others for a specific task. It's kinda fascinating to see how other people solve coding problems, and that is essentially on tap with a bot; it will churn out as many examples as you want. It's a really useful tool for learning the syntax and libraries of unfamiliar languages.
At one extreme of the LLM debate there is insane hype, and at the other extreme great pessimism, but in the middle sits a nice labour-saving educational tool.
So is the profit it was foretold to generate, except it's actually costing more money than it's generating.
According to Deutsche Bank, the AI bubble is ~~a~~ the pillar of our economy now.
So when it pops, I guess that's kinda apocalyptic.
Edit - strikethrough
Only for taxpayers ☝️
No shit, Sherlock!
It's great for stupid boobs like me, but only to get you going. It regurgitates old code; it cannot come up with new stuff. Lately there have been fewer Python errors, but again, the stuff you can do is limited. At least for the free stuff you can get without signing up.
Yea, I use it for Home Assistant. It's amazingly powerful... and so incredibly dumb.
It will take my if/and statements and shrink them to a third of the length, while being twice as robust... while missing that one of the arguments is entirely in the wrong place.
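To make that concrete: sketched in plain Python rather than an actual Home Assistant automation, with invented condition names, the "shorter but subtly wrong" rewrite tends to look something like this:

```python
def should_close_blinds(lux, temperature, someone_home):
    # Original, verbose version: clunky nested ifs, but correct.
    if someone_home:
        if lux > 800:
            if temperature > 25:
                return True
    return False


def should_close_blinds_rewrite(lux, temperature, someone_home):
    # The condensed rewrite: a third of the length and reads nicely...
    # ...except lux and temperature ended up against each other's
    # thresholds, so the logic is quietly wrong.
    return someone_home and temperature > 800 and lux > 25


# The difference only shows up when you actually run the values:
print(should_close_blinds(lux=1000, temperature=30, someone_home=True))          # True
print(should_close_blinds_rewrite(lux=1000, temperature=30, someone_home=True))  # False
```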
The people talking about AI coding the most at my job are architects and it drives me insane.
Oh wow. No shit. Anyway!
I'm not a programmer in any sense. Recently, I made a project where I used Python and a Raspberry Pi and had to train some small models on the KITTI dataset. I used AI to write the broad structure of the code, but in the end it took me a lot of time going through the Python documentation, as well as the documentation of the specific tools/modules I used, to actually get the code working. Would an experienced programmer get the same work done in an afternoon? Probably. But the code the AI output still had a lot of flaws. Someone who knows more than me would probably write better prompts and better follow-up requirements, and probably get a better structure from the AI, but I doubt they'd get complete, working code. In the end, you have to know what you're doing to use AI efficiently, and you still have to polish its output into something that actually works.
AI companies and investors are absolutely overhyping its capabilities, but if you haven't tried it before I'd strongly recommend doing so. For simple bash scripts and Python it almost always gets something workable first try, genuinely saving time.
LLMs are pretty terrible for nearly every other task I've tried. I suspect it's because the same amount of quality training data just doesn't exist for other fields.
I'd much rather write my own bugs to have to waste hours fixing, thanks.
The good news is: AI is a lot less impressive than it seemed at first.
The bad news is: so are a lot of jobs.
I can't even get Copilot to write Vitest files for React without it making a mountain of junk code that describes drivel.