[-] gnus_migrate@programming.dev 2 points 1 year ago

I mean, if you're going to think of it that way, any Turing-complete language fits the bill. But what I mean by universal is a language you would reach for to solve any problem you have, and it would be better than any other language. It's not a computer science problem, it's a software engineering problem.

[-] gnus_migrate@programming.dev 5 points 1 year ago

There can be a universal language in theory, but it's borderline impossible to achieve. Every domain has a different set of problems that it needs to solve, and language design involves tradeoffs that may make sense for one domain but not another. That's why I think language wars are silly: without context it's impossible to say which language is "better", because the answer depends on what you're trying to do.

In the end you shouldn't be too concerned with it. There are lots of languages, but they all fall under two or three paradigms; if you learn one language from a paradigm, your skills mostly transfer to the others.

[-] gnus_migrate@programming.dev 6 points 1 year ago* (last edited 1 year ago)

DMD is the reference implementation as far as I know, so I don't think they have the same issue that C and C++ have with needing a standard that pleases everyone. I agree that it has an issue positioning itself relative to other languages, but to me D is the good kind of boring. It has most of what you need, very little in it is surprising, and if you find yourself needing to do something, D probably has an easy-ish way of doing it.

[-] gnus_migrate@programming.dev 2 points 1 year ago

With the FOSS model you get credited at least, so you are getting something out of it even if it's not monetary. With ChatGPT you don't even get that. You're feeding an AI that's being monetized by someone else, what possible incentive could people have to contribute anymore?

[-] gnus_migrate@programming.dev 6 points 1 year ago

> I can think of four aspects needed to emulate human response: basic knowledge on various topics, logical reasoning, contextual memory, and ability to communicate; and ChatGPT seems to possess all four to a certain degree.

LLMs cannot reason, nor can they communicate. They can give the illusion of doing so, and only if they have enough data in the domain you're prompting them about. Go into topics that aren't as popular on the internet and the illusion breaks down pretty quickly. This isn't a case of "we're not there yet"; it's a fundamental limitation of the technology. LLMs are designed to mimic the style of a human response; they don't have any logical capabilities.

> Regardless of what you think is or isn't intelligent, for programming help you just need something to go through tons of text and present the information most likely to help you, maybe modifying it a little to fit your context. That doesn't sound too far-fetched considering what we have today and how much information is available on the internet.

You're the one who brought up general intelligence, not me, but to respond to your point: people had an incentive to contribute that text, and it wasn't necessarily monetary. Whether it was internet points or building a reputation, people got something in return for their time. With LLMs, that incentive is gone, because whatever they contribute will be fed to a model that won't attribute those contributions back to them.

Today LLMs are impressive because they use information that was contributed by millions of people. The more people rely on ChatGPT, the less information will be available to train it on, and the less impressive these models are going to be over time.

[-] gnus_migrate@programming.dev 8 points 1 year ago

Hey, if people are going to go back to reading manuals like it's the 1980s again, is that such a bad thing? /s

It's insane how a single tool managed to completely destroy the value people collectively created over more than a decade.

[-] gnus_migrate@programming.dev 26 points 1 year ago

We're not able to properly define general intelligence, let alone build something that qualifies as intelligent.

[-] gnus_migrate@programming.dev 30 points 1 year ago

Also, being able to prove the relationships between different parts of the code enables a lot of productivity tooling, like IDEs. Simple things like renaming a class or a struct are a mechanical chore at worst in a statically typed language, whereas in dynamic languages a refactoring like that carries an element of risk.
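A tiny Python sketch of why that risk exists in dynamic languages (the `Invoice` class here is made up for illustration): an automated rename can update every statically known call site, but a call site that looks the method up by a runtime string is invisible to the tool.

```python
class Invoice:
    def total(self):
        return 42

inv = Invoice()

# A type-aware rename of total() can safely rewrite this call site:
print(inv.total())  # prints 42

# ...but this call site refers to the method only by a runtime string,
# so an automated rename of total -> grand_total would silently miss it
# and the call would fail with AttributeError at runtime:
method_name = "total"
print(getattr(inv, method_name)())  # prints 42
```

In a statically typed language the compiler can prove which call sites refer to the renamed symbol; in a dynamic one, string-based dispatch like this is only caught when the code actually runs.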

[-] gnus_migrate@programming.dev 2 points 1 year ago

It's useful for audit trails and the like; OS audit logs generally only tell you who accessed the machine, not what they did on the production database. Databases like Postgres also come with admin tooling that SQLite isn't really designed for. And as you said, backups are a problem as well.
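For what it's worth, one minimal way to get that kind of trail out of Postgres is plain statement logging (the pgaudit extension does this more thoroughly); this is just a sketch of `postgresql.conf` settings, not a complete audit setup:

```
log_statement = 'all'     # log every SQL statement clients run
log_connections = on      # record who connected and when
log_disconnections = on
```

That gives you a record of what was actually done on the database, per session, which file-level access to an SQLite database can't tell you.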

[-] gnus_migrate@programming.dev 3 points 1 year ago

Sorry about that, it seems I unintentionally stirred up some controversy and got a bit defensive.

[-] gnus_migrate@programming.dev 2 points 1 year ago

Regardless, I don't see it as the silver bullet people make it out to be. Being able to introspect the production database, query it, and generally have a set of tools to properly manage your data, as opposed to having everything in a file fully managed by your application, is something useful to me that you lose with SQLite.

[-] gnus_migrate@programming.dev 12 points 1 year ago

I don't know, SQLite is something that makes sense in theory, but I think it's easier for ops people to just use a proper database server. If you need to move the database to a separate machine, limit permissions, etc., it's just easier to do.

SQLite is great for local apps that need a structured way to store their data, but I'm not really comfortable using it for more than that.
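To make the tradeoff concrete, here's a small sketch using Python's stdlib `sqlite3` module (the table and path are made up): the entire database is one in-process file, so ops tasks like moving it or restricting access reduce to file operations rather than database administration.

```python
import os
import sqlite3
import tempfile

# With SQLite, the whole database is a single ordinary file,
# created and managed in-process by the application:
path = os.path.join(tempfile.mkdtemp(), "app.db")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice')")
conn.commit()
conn.close()

# "Move the database" means copying this file; "limit permissions"
# means file-system permissions -- there is no server to administer:
print(os.path.exists(path))  # prints True
os.chmod(path, 0o600)
```

That simplicity is exactly what makes it great for local apps, and exactly what's missing when you want per-role grants, live introspection, or a database on another machine.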
