[-] brie@programming.dev -5 points 3 days ago

Large-context-window LLMs can do quite a bit more than filling gaps and completing lines: they can edit multiple files.

Yet they're unreliable, as they hallucinate all the time. Debugging LLM-generated code is a new skill, and it's up to you to decide whether to learn it. I see quite an even split among devs. I think it's worth it, though it once took me two hours to find a very obscure bug in LLM-generated code.

[-] brie@programming.dev 1 points 4 days ago

What about people who only have one device? Kids, the elderly, people with only a work computer.

[-] brie@programming.dev 1 points 4 days ago

Proton is already used for identity management: OTP via email. They'll implement OAuth if there's enough demand for it. A company's purpose is to be profitable; the ethics side is largely irrelevant.

Many countries already have digital government ID: Australia, Estonia, Russia.

[-] brie@programming.dev 4 points 5 days ago

SS7 will be retired or extended with access control. TOTP apps don't work for edge cases like a broken phone. Dedicated token devices get lost. SMS will continue being the main solution for 2FA.
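For context on the broken-phone edge case: TOTP codes are just RFC 6238 arithmetic over a shared secret, so what a broken phone loses is the secret, not the algorithm. A minimal sketch (HMAC-SHA1 variant, the common default):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, t=None, digits=6, step=30):
    """RFC 6238 TOTP: HMAC over a big-endian time counter, then dynamic truncation."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = (int(time.time()) if t is None else t) // step
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # RFC 4226 dynamic truncation
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)
```

Any device holding the same base32 secret produces the same codes, which is why backup codes or a second enrolled device cover the broken-phone case.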

[-] brie@programming.dev 1 points 5 days ago

Large gains came from scaling hardware and data. The training algorithms didn't change much; transformers mainly allowed for higher parallelization. There are no signs of the process becoming self-improving. Agentic performance is horrible, as you can see with Claude (15% of tasks successful).

What happens in the brain is a big mystery, and thus it cannot be mimicked. Biological neural networks do not exist, because the synaptic cleft is a preparation artifact: living neurons are round, and the axon shapes you see in micrographs are the result of dehydration with ethanol or xylene.

[-] brie@programming.dev 5 points 5 days ago

Not true. SMS is encrypted in 3G, LTE, and 5G; ciphers like KASUMI (the basis of A5/3) are used. SMS is reasonably secure, because it's hard to infiltrate telecom systems like SS7.

[-] brie@programming.dev 2 points 5 days ago

AGI, or human-level intelligence, has a hardware problem. Fabs are not going to be autonomous within 20 years. Novel lithography and cleaning methods are difficult even for large groups of humans. LLMs do not provide much assistance in semiconductor design. We are not even remotely close to manufacturing the infrastructure necessary to run human-level intelligence software.

[-] brie@programming.dev 1 points 5 days ago

LLMs are not programmed in a traditional way. The actual code is quite small: it mostly filters the data and runs backprop. That kind of code is already easily generated by LLMs.
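To illustrate how small that core logic is, here's a schematic stand-in (a numpy logistic-regression loop, not any real LLM pipeline) with the same shape: filter the data, then loop over forward pass, gradient, and optimizer step:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "dataset": labels come from a hidden linear rule, so the model can learn it.
X = rng.normal(size=(256, 8))
y = (X @ rng.normal(size=8) > 0).astype(float)

# "Data filtering" step: drop rows with extreme values.
keep = np.abs(X).max(axis=1) < 3.0
X, y = X[keep], y[keep]

# The "training loop" itself is a handful of lines.
w = np.zeros(8)
for _ in range(200):
    p = 1 / (1 + np.exp(-(X @ w)))   # forward pass
    grad = X.T @ (p - y) / len(y)    # gradient of the loss (the "backprop" step)
    w -= 0.5 * grad                  # optimizer step

acc = ((1 / (1 + np.exp(-(X @ w))) > 0.5) == y).mean()
```

Real LLM training adds distributed orchestration and checkpointing around this, but the trainable core really is a short loop like the one above.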

[-] brie@programming.dev 1 points 5 days ago

Because writing web apps is boring as fuck, and evaluating a switch gives them a reason to stop coding in PHP for a while, then write an article about how they still need to write PHP.

[-] brie@programming.dev 1 points 6 days ago

Can you buy it?

[-] brie@programming.dev 2 points 1 week ago

Broke back convolution


brie

joined 1 week ago