3

I've kept playing with shader programming and managed to export a trained neural network's weights as GLSL variable definitions. The code is ugly as hell because I ran a lot of quick experiments with it, and I went all-in on macros where functions would probably have been better suited. I hope you still find it interesting.

Excluding neural network weights, the whole thing is ~300 lines of code and can run a few variations of a simple convolutional network.
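
To give an idea of what "weights as GLSL variable definitions" looks like in practice, here is a minimal hypothetical sketch of the export format (the names and values below are made up for illustration, not taken from the actual code):

// Hypothetical example: a 3x3 convolution kernel and its bias exported as GLSL constants
const float conv1_bias = 0.042;
const float conv1_kernel[9] = float[9](
     0.118, -0.034,  0.257,
    -0.441,  0.902, -0.127,
     0.063, -0.210,  0.305
);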

19

cross-posted from: https://lemmy.pierre-couy.fr/post/678825

Hi! I've been working on this article for the past few days. It would mean a lot to me if you could provide some feedback.

It is about implementing a physico-chemical simulation as my first attempt to write a shader. The code is surprisingly simple and short (less than 100 lines). The "Prerequisite" and "Update rules" sections, however, may need some adjustments to make them clearer.

Thanks for reading

36

Cross-posted from: https://lemmy.pierre-couy.fr/post/653426

This is a guide I wrote for Immich's documentation. It features some Immich-specific parts, but should be quite easy to adapt to other use cases.

It is also possible (and not technically hard) to self-host a Protomaps release, but this would require 100GB+ of disk space (which I can't spare right now). The main advantages of this guide over hosting a full tile server are:

  • it's a single nginx config file to deploy
  • it saves you some storage space since you're only hosting tiles you've previously viewed. You can also tweak the maximum cache size to your needs
  • it is easy to configure a trade-off between map freshness and privacy by tweaking the cache expiration delay

If you try to follow it, please send me some feedback on the content and the wording, so I can improve it

[-] pcouy@lemmy.pierre-couy.fr 29 points 2 months ago

In my experience, OnlyOffice has the best compatibility with M$ Office. You should try it if you haven't

[-] pcouy@lemmy.pierre-couy.fr 87 points 3 months ago

On this day, exactly 12 years ago (9:30 EDT, 1 Aug 2012), the most expensive software bug ever occurred, in terms of both dollars lost per second and total dollars lost. The company managed to pare its losses through the heroics of Goldman Sachs, and “only” lost $457 million (which still led to its dissolution).

Devs were tasked with porting their HFT bot to an upcoming NYSE API service that was announced to go live less than 33 days in the future. So they started a death-march sprint of 80-hour weeks. The HFT bot was written in C++. Because they didn't want to have to recompile anything, the lead architect decided to keep the exact same class and method signature for their PowerPeg::trade() method, which belonged to the automated testing bot they had been using since 2003. This also meant that they did not have to update the WSDL for the clients that used the bot, either.

They ripped out the old dead code and put in the new code. Code that actually called real logic, instead of the test code, which was designed, by default, to buy the highest offer given to it.

They tested it, they wrote unit tests, everything looked good. So they decided to deploy it at 8 AM EDT, 90 minutes before market open. QA testers tested it in prod, gave the all clear. Everyone was really happy. They'd done it. They'd made the tight deadline and deployed with just 90 minutes to spare...

They immediately went to a sprint standup and then sprint retro meeting. Per their office policy, they left their phones (on mute) at their desks.

During the retro, the markets opened at 9:30 EDT, and the new bot went WILD (!!) It just started buying the highest offer it was given for all of the stocks in its buy list. The markets didn’t react very abnormally, because it just looked like they were bullish. But it was buying about $5 million worth of shares per second… Within 2 minutes, the warning alarms were going off in their internal banking sector… a huge percentage of their $2.5 billion in operating cash was being depleted, and fast!

So many people tried to contact the devs, but they were in a remote office in Hoboken due to the high price of real estate in Manhattan. And their phones were off and no one was at their computer.

The CEO was seen getting people to run through the halls of the building, yelling, and finally the devs noticed. 11 minutes had gone by and the bots had bought over $3 billion of stock. The total cash reserves were depleted. The company was in SERIOUS trouble...

None of the devs could find the source of the bug. The CEO, desperate, asked for solutions. "KILL THE SERVERS!!" one of the devs shouted!!

They got techs at the datacenter next to the NYSE building to find all 8 servers that ran the bots and DESTROY them with fire axes, just ripping the wires out… And finally, after 37 minutes, the bots stopped trading. Total paper loss: $10.8 billion.

The SEC and NYSE refused to rewind the trades for all but 6 stocks, so the on-paper losses were still around $8 billion. There was no way they could pay. Goldman Sachs stepped in and offered to buy all the stocks at a for-profit price of $457 million, which they agreed to. All in all, the company lost close to $500 million, all of its corporate clients left, and it went out of business a few weeks later.

Now, what was the cause of the bug? Fat-fingered human error during the release.

The sysop had declined to implement CI/CD, which was still in its infancy, probably because that was his full-time job and he was making like $300,000 in 2012 dollars ($500k today). There were 8 servers that housed the bot and a few clients on the same servers.

The sysop had typed out and pasted the correct rsync commands to get the new C++ binary onto the servers, except for server 5 of 8: that command had an extra 5 in the server name. The rsync failed, but because he had pasted all of the commands at once, he didn't notice...

Because the code used the exact same method signature for the trade() method, server 5 was happy to buy up the most expensive offer it was given, because it was running the Sad Path test trading software. If they had changed the method signature, it wouldn't have run and the bug wouldn't have happened.

At 9:43 EDT, the devs collectively decided to do a "rollback" to the previous release. This was the worst possible mistake, because it put the Power Peg dead code back on the other 7 servers, causing the problems to grow exponentially. It then took about 3 minutes for anyone in Finance to actually inform them. At that point, nearly $5 million per second was being lost due to the bug.

It wasn't until 9:58 EDT, when the servers had all been destroyed, that the trading stopped.

Here is a description of the aftermath:

It was not until 9:58 a.m. that Knight engineers identified the root cause and shut down SMARS on all the servers; however, the damage had been done. Knight had executed over 4 million trades in 154 stocks totaling more than 397 million shares; it assumed a net long position in 80 stocks of approximately $3.5 billion as well as a net short position in 74 stocks of approximately $3.15 billion.

28 minutes. $8.65 billion inappropriately purchased. ~1680 seconds. $5.18 million/second.

But after the rollback at 9:43, about $4.4 billion was lost. ~900 seconds. ~$4.9 million/second.

That was the story of how a bad software decision and a fat-fingered manual production release destroyed the most profitable stock trading firm of the time, in what was the most expensive software bug in human history.

[-] pcouy@lemmy.pierre-couy.fr 27 points 3 months ago

I think they do get marked as dead once the Bodis subdomain stops acting as a Lemmy instance. But I was wondering if a large number of instances "waking up from the dead" and acting maliciously could cause some trouble. Or would such "undead" instances pose no more threat to the fediverse than the same number of newly created malicious instances? I'm mainly thinking about stuff like being in a privileged position to DoS most instances at once, or impersonation of accounts that used to actually exist on these "undead" instances.

78

Cross-posted from: https://lemmy.pierre-couy.fr/post/584644

While monitoring my Pi-Hole logs today, I noticed a bunch of queries for XXXXXX.bodis.com, where XXXXXX are numbers. I saw a few variations for the numbers, each one being queried several times.

Digging further, I found out these queries were caused by CNAME records on domains that look like they used to point to Lemmy/Kbin instances.

From what I understand, domain owners can register a CNAME record pointing to XXXXXX.bodis.com and earn some money from the traffic it receives. I guess that each number variation is a domain owner ID in Bodis' database. I saw between 5 and 10 different number variations, each one being pointed to by a bunch of old Lemmy domains.
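
If you want to check a domain yourself, a plain DNS lookup is enough; the domain below is a made-up placeholder:

# Check whether an old instance's domain now has a CNAME pointing at Bodis
dig +short old-lemmy-instance.example CNAME
# A parked domain would return something like: 123456.bodis.com.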

This probably means that among the actors who snatch expired domains, several have taken a specific interest in expired domains of old Lemmy instances. Another hypothesis is that a lot of domains were registered for hosting Lemmy during the Reddit API debacle (about 1 year ago) and started expiring recently.

Have any other instance admins noticed the same thing? Is either of my two hypotheses more plausible than the other? Should we worry about this trend?

Anyway, I hope this at least serves as a reminder to not let our domains expire ;)

24

Cross-posted from: https://lemmy.pierre-couy.fr/post/581642

Context: Immich's default map tile provider (which gets sent a bunch of PII every time you use the map feature) is a company that I see no reason to trust. This is a follow-up to this post, with the ~~permanent~~ temporary fix I came up with. I will also summarize the general opinion from the comments, as well as some interesting pieces of knowledge that commenters shared.

Hacky fix

This uses Nginx's proxy module to build a caching proxy in front of OpenStreetMap's tile server and to serve a custom style.json for the maps.

This works well for me, since I already proxy all my services behind a single Nginx instance. It is probably possible to achieve similar results with other reverse proxies, but this would obviously need to be adapted.

Caching proxy

Inside Nginx's http config block (usually in /etc/nginx/nginx.conf), create a cache zone (a directory that will hold cached responses from OSM) :

http {
     # You should not need to edit existing lines in the http block, only add the line below
    proxy_cache_path /var/cache/nginx/osm levels=1:2 keys_zone=osm:100m max_size=5g inactive=180d;
}

You may need to manually create the /var/cache/nginx/osm directory and set its owner to Nginx's user (typically www-data on Debian based distros).
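
For example, on a Debian-based system (adjust the user and group if your Nginx runs as a different one):

# Create the cache directory and hand it over to the Nginx worker user
sudo mkdir -p /var/cache/nginx/osm
sudo chown www-data:www-data /var/cache/nginx/osm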

Customize the max_size parameter to change the maximum amount of cached data you want to store on your server. The inactive parameter will cause Nginx to discard cached data that has not been accessed in this duration (180d ≈ 6 months).

Then, inside the server block that serves your Immich instance, create a new location block :

server {
    listen 443 ssl;
    server_name immich.your-domain.tld;

    # You should not need to change your existing config, only add the location block below

    location /map_proxy/ {
        proxy_pass https://tile.openstreetmap.org/;
        proxy_cache osm;
        proxy_cache_valid 180d;
        proxy_ignore_headers Cache-Control Expires;
        proxy_ssl_server_name on;
        proxy_ssl_name tile.openstreetmap.org;
        proxy_set_header Host tile.openstreetmap.org;
        proxy_set_header User-Agent "Nginx Caching Tile Proxy for self-hosters";
        proxy_set_header Cookie "";
        proxy_set_header Referer "";
    }
}

Reload Nginx (sudo systemctl reload nginx). Confirm this works by visiting https://immich.your-domain.tld/map_proxy/0/0/0.png, which should now return a world map PNG (the same one as https://tile.openstreetmap.org/0/0/0.png).
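
If you also want to verify that tiles are being served from the cache, one optional addition (not part of the config above) is to put add_header X-Cache-Status $upstream_cache_status; inside the location /map_proxy/ block. After reloading Nginx, the first request for a tile should report MISS and a repeated request should report HIT:

# Request the same tile twice and compare the cache status header
curl -sI https://immich.your-domain.tld/map_proxy/0/0/0.png | grep -i x-cache-status
curl -sI https://immich.your-domain.tld/map_proxy/0/0/0.png | grep -i x-cache-status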

This config ignores cache control headers from OSM and sets its own cache validity duration (the proxy_cache_valid parameter). After the specified duration, the proxy will re-fetch the tiles. 6 months seems reasonable to me for this use case, and it could probably be set to a few years without causing issues.

Besides being lighter on OSM's servers, the caching proxy improves privacy by only requesting tiles from upstream the first time they are loaded. This config also strips cookies and the referrer before forwarding queries to OSM, and sets a user agent for the proxy following the OSM Foundation's guidelines (according to these guidelines, you should add contact information to this user agent).

This can probably be made to work on a different domain than the one serving your Immich instance, but that would likely require adding the appropriate CORS headers.
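
As a rough sketch (untested, and only needed if you go down that route), the CORS part could look something like this inside the tile proxy's location block, on a hypothetical separate domain such as tiles.your-domain.tld:

# Hypothetical CORS headers for serving tiles to Immich from a different domain
add_header Access-Control-Allow-Origin "https://immich.your-domain.tld" always;
add_header Access-Control-Allow-Methods "GET, HEAD, OPTIONS" always;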

Custom style.json

I came up with the following mapstyle :

{
  "version": 8,
  "name": "Immich Map",
  "sources": {
    "immich-map": {
      "type": "raster",
      "tileSize": 256,
      "tiles": [
        "https://immich.your-domain.tld/map_proxy/{z}/{x}/{y}.png"
      ]
    }
  },
  "sprite": "https://maputnik.github.io/osm-liberty/sprites/osm-liberty",
  "glyphs": "https://fonts.openmaptiles.org/{fontstack}/{range}.pbf",
  "layers": [
    {
      "id": "raster-tiles",
      "type": "raster",
      "source": "immich-map",
      "minzoom": 0,
      "maxzoom": 22
    }
  ],
  "id": "immich-map-dark"
}

Replace immich.your-domain.tld with your actual Immich domain, and remember the absolute path you save this at.

One last update to nginx's config

Since Immich currently does not provide a way to manually edit style.json, we need to serve it over HTTP(S). Add one more location block below the previous one:

location /map_style.json {
    alias /srv/immich/mapstyle.json;
}

Replace the alias parameter with the location where you saved the JSON map style. After reloading Nginx, your style will be available at https://immich.your-domain.tld/map_style.json
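
A quick way to check that the file is served correctly and is still valid JSON (assuming python3 is available on the machine you test from):

# Fetch the style and make sure it parses as JSON
curl -fsS https://immich.your-domain.tld/map_style.json | python3 -m json.tool | head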

Configure Immich to use this

For this last part, follow steps 8, 9 and 10 from this guide (use the link to map_style.json for both light and dark themes). After clearing the browser's or app's cache, the map should now be loaded from your caching proxy. You can confirm this by tailing Nginx's logs while you zoom and move around the map in Immich.

Summary of comments from previous post

Self-hosting a tile server is not realistic in most cases

People who have previously worked with maps seem to confirm that there is no tile server solution lightweight enough to be self-hosted by hobbyists. There may be some hope in generating tiles on demand, but someone with deep knowledge of the file formats involved should confirm this.

Some interesting links were shared, which seem to confirm this is not realistically self-hostable with the available software :

General sentiment about this issue

For this whole part, I want to emphasize that while there seems to be a consensus, it is only based on the few comments from the previous post and may be biased by the fact that we're discussing it on a non-mainstream platform. If you disagree with anything below, please comment on this post and explain your point of view.

  • Nobody said they had noticed the requests to a third-party server before
  • A non-negligible fraction of Immich users are interested in the privacy benefits over other solutions such as Google Photos. These users do not like their self-hosted services sending requests to third-party servers without warning them first
  • The fix should consist of the following:
    • Clearly document the implications of enabling the map, and of any feature that sends requests to third parties
    • Disable by default any feature that sends requests to third parties (especially if it involves any form of geolocated data)
    • Provide a way to easily change the tile provider. A select menu with a few pre-configured style.json files would be nice, along with a way to manually edit style.json (or at least some of its fields) directly from the Immich config page
[-] pcouy@lemmy.pierre-couy.fr 25 points 3 months ago* (last edited 3 months ago)

At this point, I'll just assume you are trolling and stop replying after this comment.

This post is trying to provide a generic solution to the fact that there is no reasonable way to get map tiles without relying on a third-party provider.

I additionally included instructions on how to set it up with Immich, but I don't see why a caching proxy in front of OSM should be part of Immich, a piece of software focused on managing photo libraries.

[-] pcouy@lemmy.pierre-couy.fr 43 points 3 months ago

Blocking the DNS was the first thing I did. This is intended to restore the map feature without having to trust a random company I've never heard of.

What do you mean by "a diff of a code fix" that would be simpler?

[-] pcouy@lemmy.pierre-couy.fr 23 points 3 months ago

You can, but you would not be able to display the map. Might as well disable the map server-wide

[-] pcouy@lemmy.pierre-couy.fr 29 points 3 months ago

Quoting one dev from the conversation I had on Discord :

the one run by OSM is not intended for general purpose use because that results in way too much load on their system. We used to use theirs, but as Immich grew we decided that we should relieve them of that

I guess you (and they) are talking about raster tiles, since OSM does not seem to provide vector tiles

[-] pcouy@lemmy.pierre-couy.fr 37 points 3 months ago* (last edited 3 months ago)

When I mentioned that "I can confirm it is not realistic to self-host a tile provider", it's because I tried to run maptiler: it maxed out my CPU for 2 hours and then filled my disk while trying to generate the tiles from OSM data (and that was just for France)

Edit: Anyway, I don't think this should be in Immich's scope. Simply providing an easy option to switch tile providers would allow people motivated enough to host maptiler to use it

Second edit: More details on how hard it is to host your own tile provider are available on the official OSM wiki

[-] pcouy@lemmy.pierre-couy.fr 119 points 6 months ago

Downvoted for cropping out the reference to the original...

[-] pcouy@lemmy.pierre-couy.fr 32 points 7 months ago* (last edited 7 months ago)

The worst thing about Eclipse I've had to deal with is its Git integration. The conflict resolution tool is awful and half the terminology diverges from plain Git.

The fact that it has a "Push & Commit" button also drives me mad far more than it should

[-] pcouy@lemmy.pierre-couy.fr 32 points 8 months ago* (last edited 8 months ago)

What's up with all the shilling posts lately?

This has existed since at least 2018 according to their Twitter, and is related to cryptocurrencies through its Radworks DAO

Edit: I'm not saying OP themselves is a shill. Radicle did a pretty good job of hiding its cryptocurrency ties. They even renamed their token from Radicle to Radworks a few years ago. It seems like cryptobros are adapting to the fact that being related to cryptocurrencies hinders adoption among technical people.

[-] pcouy@lemmy.pierre-couy.fr 51 points 8 months ago

For anyone who wonders, this is related to cryptocurrencies
