And on a separate, personal note, I've found I'm far from immune to the negative effects of using AI. Case in point: when I had to write unit tests for code that I didn't write myself.
On the one hand, it was awesome to be able to quickly crank out tests that provided >80% code coverage. On the other hand, once I was comfortable that the LLM's tests weren't producing false positives, I stopped reviewing them in detail, which meant I never really knew or understood the code (or the tests!) the way I would have if I had written them manually.
I find LLMs often end up generating more, and better, tests than I would by hand, so on the whole I consider them a net positive for that kind of work. The tests are the part I do pay attention to, and I treat them as a contract for what the LLM produces: if the tests make sense, they're comprehensive, and they pass, that tells me what the code is actually doing. I also find tests are inherently the kind of code that's easy to follow, since each test tends to do one thing and the surface area is pretty flat.
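To illustrate what I mean by a flat, contract-style suite, here's a minimal sketch. The `slugify` function is hypothetical (imagine its body is the LLM-generated code under test); the point is that each test pins down exactly one behavior:

```python
import re

def slugify(text: str) -> str:
    """Hypothetical function under test; imagine this is the LLM's code."""
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

# Each test asserts one behavior, so the suite reads as a flat contract.
def test_lowercases_input():
    assert slugify("Hello World") == "hello-world"

def test_strips_punctuation():
    assert slugify("hello, world!") == "hello-world"

def test_collapses_repeated_separators():
    assert slugify("hello   world") == "hello-world"

def test_empty_string_stays_empty():
    assert slugify("") == ""
```

If any of those assertions ever fails, the contract is broken, regardless of what the implementation looks like inside.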
My prediction is that the nature of programming is going to change in general. We'll probably see languages emerge that focus on defining contracts, so the human can concentrate on the semantics of what the program is supposed to do while the LLM handles implementation details. That's the direction we've already been moving in with high-level languages and declarative styles of programming.
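Property-based testing already gestures in that direction: the human states an invariant, and the tooling explores inputs for counterexamples. A rough sketch using the Hypothesis library, where `sort_records` is a hypothetical placeholder for whatever implementation the LLM produces:

```python
from hypothesis import given, strategies as st

def sort_records(xs):
    """Hypothetical implementation slot: imagine this body is LLM-generated."""
    return sorted(xs)

# The human writes the contract; how sort_records meets it is a detail.
@given(st.lists(st.integers()))
def test_sort_contract(xs):
    result = sort_records(xs)
    assert result == sorted(result)       # output is ordered
    assert sorted(result) == sorted(xs)   # and is a permutation of the input
```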
I thought this take was actually pretty interesting. Yegge is extrapolating a very plausible future where most of the implementation is handled by fleets of agents, with the human at the very top of the chain. The really big breakthrough of the past few months has been agents that can actually use tools and iterate on a solution. At this point it's only a matter of time until we start seeing things like genetic algorithms coupled with LLMs to iterate and converge on a solution. AlphaEvolve is a good example of this approach already being applied in the real world. I'm also expecting people will start to dust off other ideas, such as symbolic logic, and couple those with LLMs to build systems that can do actual reasoning. We're living through an event akin to the industrial revolution in the domain of software development.
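To make the genetic-algorithm idea concrete, the loop could be as simple as: an LLM proposes candidate programs, a test suite scores them, and the fittest candidates seed the next round. A very rough sketch, where `llm_propose` and `run_tests` are hypothetical stand-ins rather than any real API:

```python
import random

def llm_propose(spec, parent=None):
    """Hypothetical stand-in for an LLM call that drafts (or mutates) code."""
    raise NotImplementedError

def run_tests(candidate) -> float:
    """Hypothetical stand-in: fraction of the contract test suite that passes."""
    raise NotImplementedError

def evolve(spec: str, generations: int = 10, population: int = 8):
    # Seed the population with independent LLM drafts of the spec.
    candidates = [llm_propose(spec) for _ in range(population)]
    for _ in range(generations):
        # The test suite is the fitness function.
        candidates.sort(key=run_tests, reverse=True)
        if run_tests(candidates[0]) == 1.0:
            return candidates[0]  # full contract satisfied
        # Keep the fittest half; let the LLM mutate survivors to refill the pool.
        survivors = candidates[: population // 2]
        candidates = survivors + [
            llm_propose(spec, parent=random.choice(survivors))
            for _ in range(population - len(survivors))
        ]
    return candidates[0]
```

The selection pressure here comes entirely from the tests, which is exactly why the tests-as-contract framing above matters: they're the only part of the loop a human actually reads.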