'3D-printing a screw' is a way to describe how AI integration is stupid most of the time
(lemmy.dbzer0.com)
Exactly. It's not true. Any company that fires all of its developers and sets up some poor intern to prompt-engineer updates to their codebase is going to fail spectacularly.
Source: I'm a software developer and use LLMs regularly. There are certain tasks they are very good at, but anyone who commits unexamined code generated by an LLM gets exactly what they deserve.
OK, I'm a hardware dev. They've tried to make us do software-style project management every time there's a new fad (agile last time). It usually doesn't fit.
What do you find them useful for in your role? Like a coding partner you can ask questions? Or linting? I'm at a loss in my role. I need to know the proprietary code base to write a single line of value. We aren't allowing anyone to train an AI on our code. That's a huge security problem if anyone does.
So there are a few very specific tasks that LLMs are good at from the perspective of a software developer:
And that's... pretty much it. I've experimented with building applications with "prompt engineering," and to be blunt, I think the concept is fundamentally flawed. The problem is that once the application exceeds the LLM's context window size, which is necessarily small, you're going to see it make a lot more mistakes than it already does, because - just as an example - by the time you're having it write the frontend for a new API endpoint, it's already forgotten how that endpoint works.
As the application approaches production size in features and functions, the number of lines of code becomes an insurmountable bottleneck for Copilot. It simply can't maintain a comprehensive understanding of what's already there.
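To put rough numbers on that bottleneck, here's a back-of-envelope sketch. All three figures are illustrative assumptions, not measurements of any particular model or codebase:

```python
# Illustrative arithmetic only: assumed codebase size, assumed tokens per
# line, and an assumed context window typical of current large models.

LINES_OF_CODE = 500_000    # assumed mid-sized production codebase
TOKENS_PER_LINE = 10       # assumed rough average for source code
CONTEXT_WINDOW = 128_000   # assumed context window for a large model

codebase_tokens = LINES_OF_CODE * TOKENS_PER_LINE
fraction_visible = CONTEXT_WINDOW / codebase_tokens

print(f"codebase is roughly {codebase_tokens:,} tokens")
print(f"model sees at most {fraction_visible:.1%} of it at once")
```

Under those assumptions the model can attend to only a few percent of the codebase at a time, which is why it "forgets" the API endpoint it wrote earlier.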
I use it to generate unit tests; it gets the bulk of the code written and does a pretty good job at coverage, usually hitting 100%. All I have to do, for the most part, is review the tests to make sure they're doing the right thing, and mock out some stuff it missed.
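A minimal sketch of that workflow. The function under test (`slugify`) is a stand-in I made up; the tests below it are the kind of output an LLM typically produces, which you still read and adjust by hand:

```python
# Hypothetical function under test (not from the thread).
def slugify(title: str) -> str:
    """Turn a title into a lowercase, hyphen-separated URL slug."""
    return "-".join(title.lower().split())


# --- LLM-generated tests, reviewed by a human before committing ---
def test_basic():
    assert slugify("Hello World") == "hello-world"

def test_already_lowercase():
    assert slugify("already lower") == "already-lower"

def test_empty_string():
    assert slugify("") == ""
```

The generation is the easy part; the review step is where you catch tests that pass but assert the wrong behavior.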
Legit. Do you need to feed it your code base at all? How does it know what needs to be tested otherwise?
You're right, unit tests are another area where they can be helpful, as long as you're very careful to check them over.
One other use case where they're helpful is 'translation'. Like, I have a docker compose file and want a Helm chart / Kubernetes YAML files for the same thing. It can get you like 80% there and save you a lot of YAML typing.
Won't work well if it's more than like 5 services, or if you wanted to translate a whole code base from one language to another. But converting one kind of file to another with a different language or technology can work OK. Anything to write less YAML…
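To make the 'translation' concrete, here's a toy example with a single-service setup I made up (an nginx container): the compose input on top, and the kind of roughly-equivalent Kubernetes Deployment an LLM would hand back for you to review:

```yaml
# docker-compose.yml (input)
services:
  web:
    image: nginx:1.25
    ports:
      - "8080:80"
---
# Roughly equivalent Kubernetes Deployment (LLM-style output, review before use)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
          ports:
            - containerPort: 80
```

Note the translation isn't complete: the host-port mapping (`8080:80`) needs a separate Service object, which is exactly the kind of last-20% gap you fix by hand.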
They are getting faster, having larger context windows, and becoming more accurate. It is only a matter of time until AI simply copy-cats 99.9% of the things humans do.
Actually, there's growing evidence that beyond a certain point, more context drastically reduces their performance and accuracy.
I'm of the opinion that LLMs will need a drastic rethink before they can reach the point you describe.
We have 100M-token context AI, we just need better attention mechanisms.
This sounds to me like saying you have enough feathers in the grocery bag you're holding. All you need now is a beak, and you'll make yourself a duck.
X doubt
Why does everyone believe we are oh-so special? We are just an accident. We just need to recreate that.
We just need to recreate abiogenesis and billions of years of evolution? Um, OK.