[-] AbelianGrape@beehaw.org 13 points 1 week ago

Because lots of people I talk to where I live (eastern Canada) don't seem to realize this: the forcible "transfer" (i.e. deportation) of children is an act of genocide according to international law.

[-] AbelianGrape@beehaw.org 4 points 1 month ago

Yeah, I like subleq.

  • compiler is extremely fast, faster even than tinycc
  • strongly statically typed: all values are ints. Since there's only one type, you don't even need to write it!
  • memory safe: the entire (virtual) address space is guaranteed to be accessible at all times so there's no way to leak any of it (can't release it anyway) or to segfault (it's all accessible).

Subleq is the obvious winner in my mind.
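
For anyone who hasn't run into it, subleq is a single-instruction architecture: subtract and branch if the result is less than or equal to zero. Here's a minimal interpreter sketch in Python; the halt-on-negative-branch-target convention is an assumption on my part, not part of any standard.

```python
# Minimal subleq interpreter: each instruction is three cells (a, b, c).
# Semantics: mem[b] -= mem[a]; if the result is <= 0, jump to c, else fall through.
# Halting when the program counter goes negative is an assumed convention.
def subleq(mem, pc=0):
    while 0 <= pc <= len(mem) - 3:
        a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
        mem[b] -= mem[a]
        pc = c if mem[b] <= 0 else pc + 3
    return mem

# Tiny example: the single instruction (3, 3, -1) zeroes cell 3 by subtracting
# it from itself, then halts by branching to the negative target.
print(subleq([3, 3, -1, 5]))  # -> [3, 3, -1, 0]
```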

[-] AbelianGrape@beehaw.org 10 points 2 months ago

Which, to be fair, is also derived from a word that would most accurately be pronounced (with English vowels) as mah-nuh. Although at this point "manna" is definitively an English word in its own right, and its correct pronunciation is with /æ/.

[-] AbelianGrape@beehaw.org 4 points 2 months ago* (last edited 2 months ago)

I've only ever seen "one-time" in cryptography to refer to One-Time Pads (OTP). They are literally uncrackable (because any given ciphertext could decrypt to every possible plaintext of the same length, depending on the key), but they achieve that by using a shared private key. The cipher becomes attackable if the key is re-used, hence the "one-time." (There's a toy sketch at the end of this comment.)

But that key has to be exchanged somehow, and that exchange can be attacked instead. Key exchange algorithms can't necessarily transfer every possible OTP, which means eavesdropping on the exchange would make an OTP attackable. So the best option we know of that doesn't require secret meetings to share OTPs* really is to use RSA encryption. Once we have efficient quantum-resistant schemes, they'll be the best option we know.

* and let's be honest, secret meetings can be eavesdropped on as well.
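
To make the one-time pad idea concrete, here's a toy sketch in Python (an illustration, not something to deploy): encryption is just XOR with a key as long as the message, and all of the security rests on that key being random, secret, and never reused.

```python
# Toy one-time pad: XOR the message with a uniformly random key of equal length.
# Any ciphertext could correspond to any plaintext of the same length, which is
# exactly why the scheme is unbreakable as long as the key is never reused.
import secrets

def otp_encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    key = secrets.token_bytes(len(plaintext))  # fresh key, used exactly once
    return key, bytes(p ^ k for p, k in zip(plaintext, key))

def otp_decrypt(key: bytes, ciphertext: bytes) -> bytes:
    return bytes(c ^ k for c, k in zip(ciphertext, key))

key, ct = otp_encrypt(b"attack at dawn")
assert otp_decrypt(key, ct) == b"attack at dawn"
# The hard part isn't this code; it's getting `key` to the recipient securely.
```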

[-] AbelianGrape@beehaw.org 6 points 3 months ago

Bril is the only compiler IL I know of that is specifically designed for education.

R. Kent Dybvig's compilers course has used roughly 15 "intermediate" representations designed for it since at least 2004, a consequence of teaching the course with the nanopass compiler framework for Scheme. You could broadly divide these into "representations that are restrictions of Scheme" and "representations that are increasingly annotated versions of UIL," where UIL is the underlying intermediate representation. As far as I know, UIL was also designed for this course.

[-] AbelianGrape@beehaw.org 5 points 7 months ago* (last edited 5 months ago)

How does this compare with GumTree? It's weird that the page doesn't even mention existing state-of-the-art tools for this task.

edit: I've compared GumTree and difftastic myself while working on a project this past week. Difftastic is harder to use programmatically (the JSON format is unstable and leaves something to be desired), but other than that it's miles and miles better.

[-] AbelianGrape@beehaw.org 4 points 8 months ago

I find there's a lot less variety in my Monster Train runs. Most classes have a distinctly best strategy, and the artifacts generally also funnel you towards that strategy. For example, I can't remember the last time I played an Umbra run that didn't set up a morsel engine behind a Warden or an Alloyed Construct; as far as I'm concerned, those are the same strategy, and it doesn't feel different. The only other build I think is viable is just "play Shadowsiege," which rarely shows up early enough to build around.

Every class in STS has at least three viable archetypes and almost every run within those archetypes still feels different to me.

[-] AbelianGrape@beehaw.org 11 points 8 months ago

I almost exclusively play for A20 heart kills. I play all 4 classes, but in a "whichever I feel like today" way. I tried rotating between the characters for a while and really didn't enjoy playing Silent or Watcher while in the wrong mood for those classes.

My favorite deck in recent memory was probably a Silent discard combo with Grand Finale as the only damage-dealing card in the deck. My favorite archetype in general is probably ice Defect. A good all-you-can-eat Ironclad run is great too.

I don't think I agree that STS is especially well balanced; some regular hallway combats do disproportionately more damage on average, even to players much better than me (for example, floor-one Jaw Worms or any Act 3 Darklings). In general, the game could be quite a bit harder on A20 and still be fun for players who want a challenge. It's also weird to me that A1 makes the game easier compared to A0. Between the classes, one is clearly stronger than the others. However, I also don't think this is a bad thing. Imbalances create more opportunities for new experiences, and for different kinds of players to have different kinds of fun. And that certainly agrees with "infinite replayability." I'm sure in 5 years' time I will still be seeing interactions I've never seen before.

[-] AbelianGrape@beehaw.org 10 points 1 year ago* (last edited 1 year ago)

Neither Spectre nor Meltdown is specific to Intel. They may have been discovered on Intel hardware, but the same attacks work against any system with branch prediction or load speculation. The security flaw is inherent to those techniques. We can mitigate them with better address space separation and address layout randomization. That is, we can prevent one process from reading another process's data (which was possible with the original attacks), but we can't guarantee a way to prevent a malicious browser tab from reading data from a different tab (for example), even if they are both sandboxed. We also have some pretty cool ways to detect these attacks using on-chip neural networks, which is a very fancy mitigation. Once an attack is detected, a countermeasure can start screwing with the side channel to prevent leakage, at a temporary performance cost.

Also, disabling hyperthreading won't cut your performance in half. If the programs that are running can keep the processor backend saturated, it won't make any noticeable difference. Most programs can only maintain about 70-80% saturation, and hyperthreading fills in the gaps. The flip side is that intensive, inherently parallelizable programs can actually be penalized by hyperthreading, which is why you occasionally see advice to disable it from people trying to squeeze performance out of gaming systems. For someone maintaining a server with critically sensitive data, disabling it was probably good advice. For your home PC, which is low risk... you're probably not worried about exposure in the first place. If you have a Linux computer, you can probably even disable the default mitigations if you want.
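
As a side note, on Linux you can see exactly which mitigations the kernel has applied by reading the files under /sys/devices/system/cpu/vulnerabilities (and the mitigations=off boot parameter turns them off wholesale). A quick Python sketch, assuming a kernel recent enough to expose those sysfs files:

```python
# Print the kernel's reported status for each known CPU vulnerability.
# Assumes Linux with the standard sysfs vulnerability reporting.
from pathlib import Path

vulns = Path("/sys/devices/system/cpu/vulnerabilities")
for entry in sorted(vulns.iterdir()):
    print(f"{entry.name:24s} {entry.read_text().strip()}")
```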

[-] AbelianGrape@beehaw.org 4 points 1 year ago

These would be performance regressions, not correctness errors. Specifically, they introduce some false dependencies between instructions. The result is that some instructions that could execute immediately may instead have to wait for a previous instruction to finish, even though they don't actually need its result. In the worst case, this can be really bad for performance, but it doesn't look like the affected instructions are too likely to be bottlenecks. I could definitely be wrong though; I'd want to see some actual data.

The Pentium FDIV bug, on the other hand, was a correctness bug and a catastrophic problem for some workloads.

[-] AbelianGrape@beehaw.org 7 points 1 year ago* (last edited 1 year ago)

I think the mitigations are acceptable, but for people who don't want to worry about that, yes, it could put them off choosing AMD.

To reiterate what Tavis Ormandy (who found the bug) and other hardware engineers/enthusiasts say, getting these things right is very hard. Modern CPUs apply tons of tricks and techniques to go fast, and some of them are so beneficial that we accept that they lead to security risks (see Spectre and Hertzbleed for example). We can fully disable those features if needed, but the performance cost can be extreme. In this case, the cost is not so huge.

Plus, even if someone were to attack your home computer specifically, they'd have to know how to interpret the garbage data they're reading. Sure, there might be an encryption key in there, but they'd have to know where (and when) to look*. Indeed, mitigations for attacks like Spectre and Hertzbleed typically include address space randomization, so that an attacker can't know exactly where to look.

With Zenbleed, the problem is caused by something relatively simple, which amounts to a use-after-free of an internal processor resource. The recommended mitigation at the moment is to set a "chicken bit," which makes the processor "chicken out" of the optimization that allocates that resource in the first place. I took a look at one of AMD's manuals, and I'd guess that for most code, setting the chicken bit will have almost no impact. For some floating-point-heavy code it could potentially be major, but not catastrophic. I'm simplifying by ignoring the specifics, but the concept is entirely accurate.

* If they are attacking a specific encrypted channel, they can just try every value they read, but this requires the attack to be targeted at you specifically. This is obviously more important for server maintainers than for someone buying a processor for their new gaming PC.

[-] AbelianGrape@beehaw.org 7 points 1 year ago* (last edited 1 year ago)

I'm not sure the median is what you want. The worst-case behavior is unbounded: there is no guarantee that such an algorithm ever actually terminates, and in fact (with vanishingly small probability) it may not! But that means there is no well-defined median; we can't enumerate the space.

So let's instead ask about the average, which is meaningful: increasingly high iteration counts are also decreasingly likely, in a way that we can compute without trying to enumerate all possible sequences of shuffles.

Consider the problem like this: at every iteration, remove the elements that are in the correct positions and continue sorting the shorter list that remains. As long as we keep getting shuffles where nothing is in the correct position, we can go on forever. Such shuffles are called derangements, and the probability of getting one is about 1/e. That is, the number of derangements of n items is the nearest integer to n!/e, so the probability of a derangement is 1/n! * [n!/e]. This converges to 1/e incredibly quickly as n grows; the error is on the order of 1/n!.

We're now interested in partial derangements D_{n,k}: the number of permutations of n elements with exactly k fixed points. D_{n,0} is the number of derangements, which as established is [n!/e]. Suppose k isn't 0. Then we can pick which k points are correctly sorted and multiply by the number of derangements of the remaining n-k, for a total of nCk * [(n-k)!/e]. Note that [1/e] is 0; indeed, it's not possible for exactly one element to be out of place.

So what's the probability of a particular partial derangement? Now we're asking for D_{n,k}/n!, which is nCk/n! * [(n-k)!/e]. Let's drop the nearest-integer brackets and call it an approximation: (nCk * (n-k)!)/(n! * e) = 1/(k! * e). Look familiar? That's a Poisson distribution with λ = 1!
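
Spelling that step out in standard notation (with [x] meaning the nearest integer to x):

```latex
P(\text{exactly } k \text{ fixed points})
  = \frac{D_{n,k}}{n!}
  = \frac{1}{n!}\binom{n}{k}\left[\frac{(n-k)!}{e}\right]
  \approx \frac{\binom{n}{k}(n-k)!}{n!\,e}
  = \frac{1}{k!\,e}
  = \frac{e^{-1}\,1^{k}}{k!}
```

which is exactly the Poisson(λ = 1) probability mass function.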

But a Poisson distribution with λ = 1 means that, on average, we expect one newly sorted element per shuffle, and hence we expect to take about n shuffles in total. I'll admit, I was not expecting that when I started working this out. I wrote a quick program to average some trials as a sanity check, and it seems to hold.
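
For anyone who wants to reproduce the sanity check, something along these lines (a sketch, not necessarily the original program) averages out very close to n:

```python
# Shuffle, lock in whatever landed in its correct slot, repeat on the rest.
# Counts how many shuffles it takes until everything is placed; the claim is
# that the average is about n.
import random

def shuffles_to_sort(n: int) -> int:
    remaining = n
    count = 0
    while remaining > 0:
        perm = list(range(remaining))
        random.shuffle(perm)                     # uniform random permutation
        fixed = sum(1 for i, v in enumerate(perm) if i == v)
        remaining -= fixed                       # remove correctly placed elements
        count += 1
    return count

n, trials = 20, 20_000
avg = sum(shuffles_to_sort(n) for _ in range(trials)) / trials
print(f"n = {n}: averaged {avg:.2f} shuffles over {trials} trials")
```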
