this post was submitted on 16 Sep 2025
290 points (97.1% liked)

Programmer Humor

[–] LeFantome@programming.dev 50 points 6 days ago* (last edited 6 days ago) (3 children)

GCC is adding cool new languages too!

They just recently added COBOL and Modula-2. Algol 68 is coming in GCC 16.

[–] dejected_warp_core@lemmy.world 1 points 20 hours ago

Honestly, now that I can see the "business productivity" through-line from COBOL to BASIC and, most recently, Python, I should probably just learn COBOL.

[–] parlaptie@feddit.org 66 points 6 days ago (2 children)
[–] LeFantome@programming.dev 17 points 5 days ago* (last edited 5 days ago) (3 children)

I guess I should have put a /s but I thought it was pretty obvious. The 68 in Algol 68 is 1968. COBOL is from 1959. Modula-2 is from 1977.

My point exactly was that all the hot new languages are built with LLVM while the “new” language options on GCC are languages from the 50’s, 60’s, and 70’s.

I am not even exaggerating. That is just what the projects look like right now.

[–] sukhmel@programming.dev 4 points 5 days ago (1 children)

I would guess those languages are added for preservation and compatibility reasons, which is also an important thing

[–] LeFantome@programming.dev 2 points 2 days ago

I think some are getting used actually, particularly COBOL. I think Modula-2 still gets used in some embedded contexts. But these languages are not exactly pushing the state-of-the-art.

Algol 68 is interesting. It is for sure just for academic and enthusiast purposes. Historical and educational value only, as you say.

[–] brotundspiele@sh.itjust.works 5 points 5 days ago

If Algol68 is from 1968, shouldn't Modula-2 be from 1898?

[–] parlaptie@feddit.org 1 points 5 days ago

I had my suspicions that that's what you were going for, I just thought I'd make it obvious.

[–] Skullgrid@lemmy.world 10 points 6 days ago

It's new to gcc!

[–] davidagain@lemmy.world 6 points 5 days ago
BEGIN    
    BEGIN
        Wow, 
        Modula 2! 
    END;    
    I remember Modula 2.
END.
[–] edinbruh@feddit.it 71 points 6 days ago (1 children)

That's, like... its purpose. Compilers always have a frontend and a backend. Even when the compiler is made entirely from scratch (like Java's or Go's), it is split between frontend and backend, that's just how they are made.

So it makes sense to invest in just a few highly advanced backends (LLVM, GCC, MSVC) and then just build frontends for those. Most projects choose LLVM because, unlike the others, it was purpose-built to be common ground, but it's not a rule. For example, there is an in-development Rust frontend for GCC.
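To sketch what that split looks like (toy grammar and made-up IR, nothing like LLVM's real internals): one frontend lowers source to a shared intermediate representation, and interchangeable backends consume it.

```python
def frontend(source):
    """Parse '1 + 2 + 3' style expressions into a tiny stack-machine IR."""
    ir = []
    for i, token in enumerate(source.split("+")):
        ir.append(("push", int(token)))
        if i > 0:
            ir.append(("add",))
    return ir

def interpreter_backend(ir):
    """One backend: execute the IR directly."""
    stack = []
    for op in ir:
        if op[0] == "push":
            stack.append(op[1])
        else:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    return stack.pop()

def asm_backend(ir):
    """Another backend: emit pseudo-assembly from the same IR."""
    return "\n".join(" ".join(str(part) for part in op) for op in ir)

ir = frontend("1 + 2 + 3")
print(interpreter_backend(ir))  # 6
print(asm_backend(ir))          # push 1 / push 2 / add / push 3 / add
```

The point is that both backends share one frontend; adding a new language means writing only a new `frontend`.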

[–] Kazumara@discuss.tchncs.de 20 points 6 days ago (2 children)

that’s just how they are made.

Can confirm, even the little training compiler we made at Uni for a subset of Java (Javali) had a backend and frontend.

I can't imagine trying to spit out machine code while parsing the input without an intermediary AST stage. It was complicated enough with the proper split.

[–] LeFantome@programming.dev 10 points 5 days ago (1 children)

I have built single pass compilers that do everything in one shot without an AST. You are not going to get great error messages or optimization though.
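A minimal sketch of what such a single-pass compiler can look like (toy grammar, invented opcode names): target code is emitted directly while parsing, with no AST ever built.

```python
def compile_single_pass(source):
    """Compile 'a + b - c' left to right, emitting code as we parse."""
    tokens = source.split()
    out = [f"PUSH {tokens[0]}"]                # first operand
    pos = 1
    while pos < len(tokens):
        op = "ADD" if tokens[pos] == "+" else "SUB"
        out.append(f"PUSH {tokens[pos + 1]}")  # emit the operand immediately...
        out.append(op)                         # ...then the operator: no tree kept
        pos += 2
    return out

print(compile_single_pass("4 + 5 - 2"))
# ['PUSH 4', 'PUSH 5', 'ADD', 'PUSH 2', 'SUB']
```

It works, but as the comment says: with no AST there is nothing to analyze for optimization, and error reporting can only point at the current token.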

[–] Kazumara@discuss.tchncs.de 3 points 5 days ago

Oh! Okay, that's interesting to me! What was the input language? I imagine it might be a little more doable if it's closer to hardware?

I don't remember that well, but I think the object oriented stuff with dynamic dispatch was hard to deal with.

[–] BuboScandiacus@mander.xyz 14 points 6 days ago

I can imagine;

[–] davidagain@lemmy.world 33 points 6 days ago (2 children)

Great optimisation, awwwful compile times.

[–] davidagain@lemmy.world 23 points 6 days ago (2 children)

New kid on the block, roc, has it right by splitting application code from "platform"/framework code, precompiling and optimising the platform, then using their fast surgical linker to sew the app code to the platform code.

Platforms are things like cli program, web server that kind of thing. Platforms provide an interface of domain specific IO primitives and handle all IO and memory management, and they also specify what functions app code must supply to complete the program.

It's pretty cool, and they're getting efficiency in the territory of systems programming languages like C and Rust, but with none of the footguns of manual memory management, no garbage collection pauses, and yet also no evil stepparent style borrow checker to be beaten by. They pay a lot of attention to preventing cache misses and branch prediction failures, which is how they get away with reference counting and still being fast.

A note of caution: I might sound like I know about it, but I know almost nothing.

[–] CanadaPlus@lemmy.sdf.org 10 points 6 days ago* (last edited 6 days ago) (1 children)

That sounds pretty great. My impression is that relatively little code actually runs that often.

but with none of the footguns of manual memory management, no garbage collection pauses, but yet also no evil stepparent style borrow checker to be beaten by.

That part sounds implausible, though. What kind of memory management are they doing?

[–] davidagain@lemmy.world 7 points 6 days ago* (last edited 6 days ago) (2 children)

Reference counting.

They pay a lot of attention to preventing cache misses and branch prediction failures, which is how they get away with reference counting and still being fast.

[–] CanadaPlus@lemmy.sdf.org 10 points 6 days ago (17 children)

Oh, you just mean it's a kind of garbage collection that's lighter on pauses. Sorry, I've had the "my pre-Rust pet language already does what Rust does" conversation on here too many times.

[–] BatmanAoD@programming.dev 8 points 6 days ago (3 children)

To be fair, the drop/dealloc "pause" is very different from what people usually mean when they say "garbage collection pause", i.e. stop-the-world (...or at least a slice of the world).

[–] firelizzard@programming.dev 3 points 5 days ago (2 children)

Garbage collection means analyzing the heap and figuring out what can be collected. Reference counting requires the code to increment or decrement a counter and frees memory when the counter hits zero. They're fundamentally different approaches. Also, reference counting isn't necessarily automatic; Objective-C has had manual reference counting since day one.
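The counter mechanics described above can be sketched in a few lines (a hypothetical class for illustration, not any real runtime's implementation):

```python
class RefCounted:
    """Minimal manual reference counting: free when the count hits zero."""

    def __init__(self, payload):
        self.payload = payload
        self.count = 1          # the creator holds the first reference
        self.freed = False

    def incref(self):
        self.count += 1

    def decref(self):
        self.count -= 1
        if self.count == 0:
            self.freed = True   # stand-in for actually freeing the memory

obj = RefCounted("data")
obj.incref()        # a second owner appears
obj.decref()        # the first owner is done; still one reference left
obj.decref()        # last owner done: freed immediately, no heap scan needed
print(obj.freed)    # True
```

Note that deallocation happens deterministically at the last `decref`, with no separate pass over the heap.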

[–] BatmanAoD@programming.dev 6 points 5 days ago

"Garbage collection" is ambiguous, actually; reference counting is traditionally considered a kind of "garbage collection". The type you're thinking of is called "tracing garbage collection," but the term "garbage collection" is often used to specifically mean "tracing garbage collection."

[–] CanadaPlus@lemmy.sdf.org 3 points 5 days ago (1 children)

It's still mentioned as one of the main approaches to garbage collection in the garbage collection Wikipedia article.

[–] firelizzard@programming.dev 1 points 4 days ago (1 children)

Ok, I concede the point, “garbage collection” technically includes reference counting. However the practical point remains - reference counting doesn’t come with the same performance penalties as ‘normal’ garbage collection. It has essentially the same performance characteristics of manual memory management because that’s essentially what it’s doing.

[–] CanadaPlus@lemmy.sdf.org 2 points 4 days ago* (last edited 4 days ago) (1 children)

That may well be. I'd say I understand the basic concepts, but people in this thread have more detail on the specifics and how they work out in practice than me.

It does make me wonder why everyone hasn't been doing it, if there's no drawbacks, though.

[–] firelizzard@programming.dev 1 points 4 days ago (1 children)

It is being used. Objective-C (used for macOS and iOS apps) has used reference counting since the language was created. Originally it was manual, but since 2011 it's been automatic by default. And Swift (which basically replaced Objective-C) only supports ARC; it does not support manual reference counting.

The downside is that it doesn't handle reference cycles, so the programmer has to be careful to prevent those. Also, the compiler has to insert reference increment and decrement calls, and that's a significant engineering challenge for the compiler designers. Rust tracks ownership instead of references, but that means its compiler is even more complicated. Rust's system is a little bit like compile-time reference counting, but that's not really accurate.

Apparently Python, Perl, and PHP use reference counting, plus tracing GC (aka 'normal' GC) in Python and PHP to handle cycles. So your implicit assumption that reference counting is not widely used is false. Based on what I can find online, Python and JavaScript are by far the most used languages today and are roughly equal, so in that respect reference-counting GC is equally or possibly more popular than pure tracing GC.
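The cycle problem mentioned above is easy to demonstrate in CPython, which does exactly what the comment describes: reference counting for most objects, plus a tracing collector just for cycles.

```python
import gc

class Node:
    pass

gc.collect()  # clear any pending garbage so the count below is ours

# Build a reference cycle: each object keeps the other's refcount above zero,
# so reference counting alone can never free this pair.
a, b = Node(), Node()
a.partner, b.partner = b, a
del a, b

# CPython's supplementary tracing collector finds and breaks the cycle.
collected = gc.collect()
print(collected)  # at least 2: the two Nodes (plus their attribute dicts)
```

Without the tracing pass, the two `Node` objects would leak even though nothing can reach them.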

[–] CanadaPlus@lemmy.sdf.org 1 points 3 days ago* (last edited 3 days ago) (1 children)

Everyone doing it was the critical distinction there. OP is making it sound like there are literally no drawbacks. If that were so, I'm pretty sure tracing would have long since died out. It has come up elsewhere in the thread that a lot of languages do use it.

Which is another reason I'm not so sure Roc is the answer we've all been waiting for. Then again, the first few Rust proponents would have sounded the same way.

[–] firelizzard@programming.dev 1 points 3 days ago* (last edited 3 days ago) (1 children)

Honestly I didn’t really follow OP’s meme or care enough to understand it, I’m just here to provide some context and nuance. I opened the comments to see if there was an explanation of the meme and saw something I felt like responding to.

Edit: Actually, I can’t see the meme. I was thinking of a different post. The image on this one doesn’t load for me.

“The answer we’ve all been waiting for” is a flawed premise. There will never be one language to rule them all. Even completely ignoring preferences, languages are targeted at different use cases. Data scientists and systems programmers have very different needs. And preferences are huge. Some people love the magic of Ruby and hate the simplicity of Go. I love the simplicity of Go and hate the magic of Ruby. Expecting the same language to satisfy both groups is unrealistic because we have fundamentally different views of what makes a good language.

[–] CanadaPlus@lemmy.sdf.org 1 points 3 days ago* (last edited 2 days ago)

I meant the person I was arguing with by OP. OOP's image won't load for me either, now, but it was basically just a list of things that compile to LLVM.

[–] frezik@lemmy.blahaj.zone 4 points 6 days ago (1 children)

I wish more languages used ref counting. Yes, it has problems with memory cycles, but it's also predictable and fast. Works really well with immutable data.

[–] davidagain@lemmy.world 3 points 6 days ago (1 children)

Roc uses immutable data by default. It performs opportunistic in-place mutation when the reference count will stay 1 (i.e., when static analysis shows the code would satisfy the borrow checker without cloning or copying if it were Rust).
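A dynamic toy model of that reuse decision (Roc's compiler proves uniqueness statically; here we just check the live refcount at runtime, and all names are made up for the example):

```python
import sys

def _refs(xs, value):
    return sys.getrefcount(xs)

# Calibrate what "uniquely owned" looks like from inside a function on this
# interpreter: getrefcount includes temporaries, and the exact number is a
# CPython implementation detail, so we measure it rather than hard-code it.
_UNIQUE = _refs([], None)

def append_opportunistic(xs, value):
    """Return xs with value appended, reusing xs only when nobody else holds it."""
    if sys.getrefcount(xs) <= _UNIQUE + 1:   # +1 for the caller's own variable
        xs.append(value)                     # no other owner: mutate in place
        return xs
    copy = list(xs)                          # shared: copy so other readers
    copy.append(value)                       # still see the old value
    return copy

unique = [1, 2]
print(append_opportunistic(unique, 3) is unique)  # True: reused in place

shared = [1, 2]
alias = shared
print(append_opportunistic(shared, 3) is shared)  # False: copied
print(shared)                                     # [1, 2], unchanged
```

Roc makes the same in-place/copy choice, but at compile time, so the check itself costs nothing at runtime.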

[–] frezik@lemmy.blahaj.zone 2 points 5 days ago* (last edited 5 days ago) (1 children)

Thanks, this looks really interesting. I've thought for a while that Rust's borrow checker wouldn't be such a pain in the ass if the APIs were developed with immutable data in mind. It's not something you can easily slap on, because the whole ecosystem fights against it. Looks like Roc is taking that idea and running with it.

[–] davidagain@lemmy.world 2 points 5 days ago

I think that roc and rust are both aiming for fast memory safety, but rust is aiming to be best at mutable data and roc best at immutable data.

I heard of someone trying to do exactly that - immutable functional programming in rust - but they gave up for the same reason you said: the whole ecosystem is working on the opposite assumption.

As far as I'm aware most of the roc platforms are currently written in rust or zig. Application-specific code is written in roc calling interface/io/effectful functions/api that the platform exposes and the platform calls into the roc code via the required interface.

I do think it's really interesting, and once they have a desktop gui app platform (which must compile for windows for me to be able to use it for work), I'll be giving it a good go. I think it's one of the most interesting new languages to arrive.

[–] lena@gregtech.eu 14 points 6 days ago (2 children)

Yeah, I think Go's compiler is so fast partially because it doesn't use LLVM

[–] firelizzard@programming.dev 5 points 5 days ago

TinyGo isn’t that much slower and it uses LLVM

[–] tatterdemalion@programming.dev 13 points 6 days ago (1 children)

Isn't Zig working on their own backend?

Also, pretty excited about the cranelift project.

[–] vpol@feddit.uk 8 points 6 days ago (1 children)

Yes, and it’s now default for x86_64

[–] brucethemoose@lemmy.world 14 points 5 days ago

I'll make my own LLVM, with blackjack and hookers.
