this post was submitted on 02 Mar 2025
26 points (88.2% liked)
GenZedong
4451 readers
88 users here now
This is a Dengist community in favor of Bashar al-Assad with no information that can lead to the arrest of Hillary Clinton, our fellow liberal and queen. This community is not ironic. We are Marxists-Leninists.
This community is for posts about Marxism and geopolitics (including shitposts to some extent). Serious posts can be posted here or in /c/GenZhou. Reactionary or ultra-leftist cringe posts belong in /c/shitreactionariessay or /c/shitultrassay respectively.
We have a Matrix homeserver and a Matrix space. See this thread for more information. If you believe the server may be down, check the status on status.elara.ws.
Rules:
- No bigotry, anti-communism, pro-imperialism or ultra-leftism (anti-AES)
- We support indigenous liberation as the primary contradiction in settler colonies like the US, Canada, Australia, New Zealand and Israel
- If you post an archived link (excluding archive.org), include the URL of the original article as well
- Unless it's an obvious shitpost, include relevant sources
- For articles behind paywalls, try to include the text in the post
- Mark all posts containing NSFW images as NSFW (including things like Nazi imagery)
founded 4 years ago
you are viewing a single comment's thread
view the rest of the comments
No worries, and thanks for taking the time to read through it.
OK, first dumb question: was the block of code you had below this line an actual output from an LLM, or a hypothetical example that you wrote up? It's not quite clear to me. It's a lot of output, but I don't want to insult you if you wrote all that yourself.
I ask because I really want to nitpick the hell out of this design decision:
Adding the replies as full embedded items is going to absolutely murder performance. A better scheme would be for replies to be a list/array of IDs or URLs, or a URL to an API call that enumerates all the replies, instead of enumerating all the items and embedding them directly. Depending on the implementation, you could easily run into the classic N+1 query problem that a lot of web applications fall for. But then again, at this point I'm arguing with an LLM which is generating absolutely dogshit code.
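To make the nitpick concrete, here's a minimal sketch of the two response shapes and of how naive hydration turns into N+1 queries. Everything here (the field names, `queryById`, the in-memory `db`) is invented for illustration, not taken from the actual LLM output:

```javascript
// Scheme A (what the LLM generated): embedded replies -- every reply
// is serialized in full, so one post can drag in the whole tree.
const embedded = {
  id: 42,
  body: "original post",
  replies: [
    { id: 43, body: "first reply", replies: [] },
    { id: 44, body: "second reply", replies: [] },
  ],
};

// Scheme B (the suggested fix): replies by reference -- the post
// carries only IDs (or a URL), and clients fetch replies on demand.
const byReference = {
  id: 42,
  body: "original post",
  replies: [43, 44], // or a URL like "/api/posts/42/replies"
};

// Simulated data store so the query counting below is observable.
const db = new Map([
  [43, { id: 43, body: "first reply" }],
  [44, { id: 44, body: "second reply" }],
]);

let queryCount = 0;
function queryById(id) {   // simulates: SELECT ... WHERE id = ?
  queryCount += 1;
  return db.get(id);
}
function queryByIds(ids) { // simulates: SELECT ... WHERE id IN (...)
  queryCount += 1;
  return ids.map((id) => db.get(id));
}

// N+1: hydrating replies one at a time means one query per reply.
queryCount = 0;
const oneAtATime = byReference.replies.map((id) => queryById(id));
console.log(queryCount); // 2 queries for 2 replies (N queries)

// Batched: a single IN query fetches all replies at once.
queryCount = 0;
const batched = queryByIds(byReference.replies);
console.log(queryCount); // 1 query regardless of reply count
```

The point of Scheme B isn't just smaller payloads; it lets the server batch the lookup (one `IN` query) instead of recursively fetching each reply, which is where the N+1 pattern bites.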
That was copy pasted straight from the DeepSeek chat response.
Like I said earlier, you still have to understand how to code and what the code is doing. The thing is, you could literally paste what you just said into it, and it'll make the adjustments. Or you can just make the adjustments yourself. As a starting point, I find that sort of output useful.
Another example is that I have to use node for an application for work right now. I haven't touched js in over a decade, I'm not familiar with the ecosystem, and DeepSeek lets me quickly get things running. Things I would've spent hours looking up before and doing through trial and error just work out of the box. As I pointed out in an earlier reply, most apps aren't doing really complex or interesting things. Most of it is just shuffling data between different endpoints and massaging it in some way. LLMs can do a lot of this boring work quickly and efficiently.
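For what "shuffling and massaging data" means in practice, a hedged sketch (all field names invented, standing in for whatever the real upstream and downstream endpoints expect):

```javascript
// Records as they arrive from some imaginary upstream endpoint...
const upstream = [
  { user_id: 1, first_name: "Ada", last_name: "Lovelace", active: "Y" },
  { user_id: 2, first_name: "Alan", last_name: "Turing", active: "N" },
];

// ...reshaped into what an equally imaginary downstream API wants:
// drop inactive rows, rename the ID field, join the name parts.
function massage(records) {
  return records
    .filter((r) => r.active === "Y")
    .map((r) => ({
      id: r.user_id,
      name: `${r.first_name} ${r.last_name}`,
    }));
}

console.log(massage(upstream));
// [ { id: 1, name: 'Ada Lovelace' } ]
```

Most line-of-business endpoint code is variations on this filter/rename/join pattern, which is exactly the boring glue an LLM tends to get right.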
I want to say that you've piqued my interest, but honestly I'm not sure I can set aside my bias. I already deal with enough wrong code that my co-workers write as it is, so I don't know if adding yet another source of bad code suggestions gains me much. Still, I appreciate you putting in the work to show everything.