PartiallyApplied

joined 5 months ago

They both are beautiful and impressive artworks.

Personally, I find your latest render evokes a sense of warm cleanness. I take it as a sort of illumination of a dark but hopeful place.

Your earlier render feels more outdoorsy (but not necessarily more organic, if that makes any sense). To me it conveys more of a spirit of perseverance, in contrast to the first one.

That is to say, while everyone may have their own opinions, both pieces of art are, to me, incredibly complex works, and you’re clearly insanely talented, so you make it hard to have a preference!

Your latest work does seem to have an incredible amount of meticulous effort put into the lighting, while this one seems to focus more on the background, but either way, they are stunning!!

[–] PartiallyApplied@lemmy.world 1 points 1 week ago (1 children)

This GitHub page isn’t visible on my mobile device because the ads block the view.

The concept sounds truly interesting, but distribution is everything. The AdSense you have is probably not very profitable and is actively hurting your recognition. As politely but bluntly as possible: if you want appreciation and adoption, remove the advertisements. You’re selling us on your idea, not whatever bottom-barrel consumerism Google wants us to buy.

Behold: PRQL. I only know that it exists, not whether the errors are good; my SQL needs are simple, but perhaps for some complex data wrangling it could be nicer, idk.

[–] PartiallyApplied@lemmy.world 6 points 1 week ago* (last edited 1 week ago) (1 children)

Perhaps the textbook example is Simpson’s Paradox.

This article goes through a couple of cases where the naive, aggregate statistics support certain conclusions, but when you correctly separate the data, those conclusions reverse themselves.
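
To make the flip concrete, here’s a minimal Python sketch using the classic kidney-stone numbers (not data from the article): treatment A wins inside every subgroup, yet treatment B looks better once the subgroups are pooled.

```python
# Classic Simpson's paradox illustration (kidney-stone numbers):
# within each severity group, treatment A has the higher success rate,
# but pooling the groups makes treatment B look better overall.

groups = {
    "mild":   {"A": (81, 87),   "B": (234, 270)},  # (successes, patients)
    "severe": {"A": (192, 263), "B": (55, 80)},
}

def rate(successes, total):
    return successes / total

for name, arms in groups.items():
    ra, rb = rate(*arms["A"]), rate(*arms["B"])
    winner = "A" if ra > rb else "B"
    print(f"{name:>6}: A={ra:.0%}  B={rb:.0%}  -> {winner} wins")

# Pool the subgroups and the comparison flips.
pooled_a = [sum(v) for v in zip(*(arms["A"] for arms in groups.values()))]
pooled_b = [sum(v) for v in zip(*(arms["B"] for arms in groups.values()))]
print(f"pooled: A={rate(*pooled_a):.0%}  B={rate(*pooled_b):.0%}  -> B wins overall")
```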

Another relevant issue is Aggregation Bias. This article has an example where conclusions that hold for a population are inverted for the individuals within that population.

And the last one I can think of is the modifiable areal unit problem (MAUP), which deals with the fact that statistics are very sensitive to whatever process is used to divvy up a space. It’s most commonly referenced in spatial statistics, but I believe it has broader implications.
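
To make the MAUP point concrete, here’s a tiny hypothetical sketch (my own toy numbers, not from any of the linked articles): the same six units, partitioned into zones two different ways, yield zone-level correlations with opposite signs.

```python
# Toy MAUP illustration: aggregate the same (x, y) units under two
# different zonings and the zone-level correlation changes sign.
from statistics import correlation, mean  # correlation needs Python 3.10+

units = [(1, 5), (2, 1), (3, 6), (4, 2), (5, 7), (6, 3)]  # (x, y) per unit

def zone_means(zoning):
    """Average x and y within each zone of a partition (indices into units)."""
    return [(mean(units[i][0] for i in zone), mean(units[i][1] for i in zone))
            for zone in zoning]

zoning_a = [(0, 1), (2, 3), (4, 5)]  # adjacent pairs
zoning_b = [(0, 2), (1, 5), (3, 4)]  # a different, equally "reasonable" split

for name, zoning in (("A", zoning_a), ("B", zoning_b)):
    xs, ys = zip(*zone_means(zoning))
    print(f"zoning {name}: r = {correlation(xs, ys):+.2f}")

# zoning A reports a strong positive relationship (+1.00),
# zoning B reports a negative one (about -0.58) -- same underlying units.
```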


This is not to say that you can never generalize, and indeed, often a big goal of statistics is to answer questions about populations using only information from a subset of individuals in that population.

“All models are wrong, but some are useful”

  • George Box

The argument I was making is that the NYT will authoritatively draw conclusions without taking the individual into account, looking only at the population level, and not only is that oftentimes dubious, sometimes it’s actively detrimental. They don’t seem to me to demonstrate due diligence in mitigating the risk that comes with such dubious assumptions, hence the cynic in me left that Hozier quote.

“Wet sidewalks cause rain”

Pretty much. I never really thought about the causal link being entirely reversed, more so about the chain of reasoning being broken or mediated by some factor they missed, which yes, definitely happens, but now I can think of instances where it’s totally flipped.

Very interesting read, thanks for sharing!

[–] PartiallyApplied@lemmy.world 18 points 1 week ago* (last edited 1 week ago) (5 children)

I feel this hard with the New York Times.

99% of the time, I feel like it covers subjects adequately. It might be a bit further right than me, but for a general US source, I feel it’s rather representative.

Then they write a story about something happening to low-income US people, and it’s just a social and logical word salad. When they report, it appears as though they analytically look at data instead of talking to people. Statisticians will tell you, and this is subtle: conclusions made at one level of detail cannot be generalized to another level of detail. Looking at data without talking with people is fallacious for social issues. The NYT needs to understand this, but meanwhile they are horrifically insensitive, bordering on destructive, at times.

“The jackboot only jumps down on people standing up”

  • Hozier, “Jackboot Jump”

Then I read the next story and I take it as credible without much critical thought or evidence. Bias is strange.

[–] PartiallyApplied@lemmy.world 5 points 2 weeks ago* (last edited 2 weeks ago)

I think many people don’t like it conceptually because the advertising for Brave is:

Built-in Privacy + Crypto + Ad Blocking

Firefox + uBlock Origin works well enough for most people. It’s stable, suits the purpose, and separates them from a company entangled with crypto.

Everyone is just trying to do their best to balance convenience with the social impacts of their actions. People make change because they care, either altruistically or personally, but it always comes with some sort of personal cost. Sticking your neck out there and trying to make a change is more important than any specific browser choice.

[–] PartiallyApplied@lemmy.world 2 points 2 weeks ago* (last edited 2 weeks ago)

I’ve done a bit more searching, and it seems ltex-lsp-plus is the best out there for LSP grammar checking. It’s 1000x better than nothing, though the false-negative rate is a bit high for my taste :)

[–] PartiallyApplied@lemmy.world 4 points 2 weeks ago

I’m not sure what kind of diagram you’re after, but Typst has CeTZ, which covers graphing plus arbitrary drawing of shapes, paths, splines, etc.

Typst also has fletcher (“maker of arrows”) for diagrams, which is my personal fave for the work I do:

https://typst.app/universe/package/fletcher/

 

Really cool Nix idea that could improve incremental builds and replace IFD (import from derivation) in some instances.

The article poses it as an alternative to the lang2nix pattern, but some of the functions look rather challenging to understand. Do you think this might allow nixpkgs upstream to support more languages and build systems performantly out of the box, abstracting away the complexity from Nix users?
