this post was submitted on 19 Jun 2025
This will be interesting to watch for sure. While I'm generally skeptical of alarmist claims about LLMs, the impact these tools have on cognition isn't well understood. This is particularly concerning for kids, whose minds are still developing and who are still learning how to process information. We've already seen other technology, such as social media, affect attention span and learning. It stands to reason that LLMs will have a negative impact here as well.
And on a separate, personal note, I've found I'm far from immune to the negative effects of using AI. Case in point: when I had to write unit tests for code that I didn't write myself.
On the one hand, it was great being able to quickly crank out tests that provided >80% code coverage. On the other hand, once I was comfortable that the LLM's tests weren't producing false positives, I stopped reviewing them in detail, which meant I never really knew or understood the code (or the tests!) the way I would have if I'd written them by hand.
I find LLMs often end up generating more, and better, tests than I would by hand, so on the whole I do find them a net positive for this sort of work. The tests are the part I do pay attention to, and I treat them as a contract for what the LLM produces: if the tests make sense, are comprehensive, and pass, they effectively specify what the code is doing. Tests are also inherently the kind of code that should be easy to follow, since each test tends to do one thing and the surface area is pretty flat.
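The "tests as a contract" idea can be sketched with a small hypothetical example. The function `normalize_scores` and its behavior are invented for illustration; the point is that the tests pin down the observable behavior regardless of who, or what, wrote the implementation:

```python
def normalize_scores(scores):
    # Implementation detail (could be LLM-generated): scale values
    # so they sum to 1.0; an empty or all-zero list is returned as-is.
    total = sum(scores)
    if total == 0:
        return list(scores)
    return [s / total for s in scores]

# The contract: these tests hold no matter how the function is implemented.
def test_normalized_scores_sum_to_one():
    result = normalize_scores([2.0, 3.0, 5.0])
    assert abs(sum(result) - 1.0) < 1e-9

def test_preserves_relative_order():
    result = normalize_scores([1.0, 4.0, 2.0])
    assert result[0] < result[2] < result[1]

def test_empty_input_returns_empty():
    assert normalize_scores([]) == []
```

As long as a replacement implementation keeps these tests green, I don't have to re-review its internals to trust its behavior, which is exactly the trade-off described above.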
My prediction is that the nature of programming is going to change in general. We'll probably see languages emerge that focus on defining contracts, so the human can focus on the semantics of what the program is supposed to be doing while the LLM handles implementation details. That's roughly the direction we've already been moving with high-level languages and a declarative style of programming.
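One way to picture that "contract first" split today, sketched in plain Python with names I've made up for illustration: the human writes a declarative spec, and any implementation (human- or LLM-written) is checked against it rather than read line by line:

```python
def satisfies_sort_contract(impl, cases):
    """Declarative spec for a sort: for every test case, the output
    must be a permutation of the input, in nondecreasing order.
    How impl achieves that is an implementation detail."""
    for case in cases:
        out = impl(list(case))
        if sorted(case) != sorted(out):  # same elements (as a multiset)
            return False
        if any(out[i] > out[i + 1] for i in range(len(out) - 1)):
            return False
    return True

cases = [[3, 1, 2], [], [5, 5, 1], [1, 2]]
print(satisfies_sort_contract(sorted, cases))           # a correct implementation
print(satisfies_sort_contract(lambda xs: xs[::-1], cases))  # a broken one
```

A contract-oriented language would presumably make this kind of spec a first-class construct instead of an ad hoc checker, but the division of labor is the same.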
I thought this take was actually pretty interesting. Yegge is extrapolating a very plausible future where most of the implementation is handled by fleets of agents, with the human at the very top of the chain. The really big breakthrough from just a few months ago has been getting agents to actually use tools and iterate on a solution. At this point it's only a matter of time until we start seeing things like genetic algorithms coupled with LLMs, letting them iterate and converge on a solution. AlphaEvolve is a good example of this approach already being applied in the real world. I'm also expecting people will start to dust off other ideas, such as symbolic logic, and couple those with LLMs to build systems that can do actual reasoning. We're living through an event akin to the industrial revolution in the domain of software development.
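The propose-evaluate-select loop behind that "iterate and converge" idea can be sketched in a few lines. This is a toy evolutionary loop, not how AlphaEvolve actually works: the stubs below stand in for an LLM proposing candidates and a test suite scoring them, and all the names are hypothetical:

```python
import random

def propose_candidate(parent):
    # Stand-in for an LLM proposing a mutated solution; here the
    # "program" is just a number we perturb slightly.
    return parent + random.uniform(-1.0, 1.0)

def fitness(candidate):
    # Stand-in for running an evaluator or test suite:
    # higher is better, with the optimum at candidate == 3.0.
    return -abs(candidate - 3.0)

def evolve(generations=200, population=8):
    best = 0.0
    for _ in range(generations):
        candidates = [propose_candidate(best) for _ in range(population)]
        candidates.append(best)  # elitism: never discard the current best
        best = max(candidates, key=fitness)
    return best
```

Swap the stubs for a real generator and a real scoring harness and you have the skeleton of the approach: the LLM supplies variation, the evaluator supplies selection pressure.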
Yep, I'm worried US education isn't equipped to tackle the challenges posed by AI. One obvious mitigation is to require in-class writing assignments, along with education about the limitations of LLMs (they're not magical truth machines, however confident they may appear). But this would require smaller class sizes, something anathema to education in our neoliberal hellscape.
Indeed, and I'd argue the education system is overly focused on testing, where people just cram for the exams and then forget everything after. The point should be to help people develop reasoning skills, creativity, and critical thinking, but little of that really happens in practice. LLMs combined with a public that isn't able to think critically definitely seems like a recipe for disaster.