[-] evenwicht@lemmy.sdf.org 0 points 16 hours ago* (last edited 16 hours ago)

Yeah I’ll have to deal with it at some point one way or another. I’m sure I will close the account at the first opportunity but it’s impossible to find a non-shitty bank or CU. It’s not something I can do at the drop of a hat. It seems not a single bank or CU targets the market of consumers who have some self-respect and a bit of street wisdom.

> Why are you so bothered by your bank sending you an email using extremely common informatics technology,

I don’t give a shit how popular tracker pixels are. It doesn’t justify them being in my comms, so I have a duty to not trigger them and I’m happy to treat pushers of these trackers as adversaries and threat actors. They are being dishonest and sneaky. The honest thing to do is to follow the RFC on return receipts, which is transparent and gives the customer appropriate control over their own disclosures.
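For contrast, the transparent mechanism is just a visible header: the sender asks for a read receipt (RFC 8098, which obsoletes RFC 2298) and the recipient's client asks permission before sending one. A minimal sketch, with hypothetical addresses:

```shell
# an honest read-receipt request is a header the recipient can see and refuse,
# not a hidden image; the bank address here is hypothetical
printf 'From: statements@bank.example\nDisposition-Notification-To: statements@bank.example\nSubject: Your statement\n\nbody\n' > /tmp/receipt_request.eml

# the recipient's client can detect the request and prompt before disclosing anything
grep -i '^disposition-notification-to:' /tmp/receipt_request.eml
```

That puts the customer in control of the disclosure, which is exactly what the pixel is designed to bypass.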

> especially after you already planned for this and literally aren’t sending them any of the data you’re concerned about?

I use a text mail client for other reasons but incidentally it’s good for avoiding tracker pixels. Actually I have to check on something.. I’m not 100% sure that SpamAssassin does not trigger tracker pixels. SA has some vulns, like the DNS leak vuln. But if SA does not trigger the tracker pixels, then indeed I’m secure enough.

[-] evenwicht@lemmy.sdf.org 3 points 23 hours ago* (last edited 22 hours ago)

I did not think of the marketing angle -- although even then, knowing the times that each individual opens their mail and their location has value for personalized marketing.

We are talking about banks in the case at hand. It’s unclear how many people have not come to the realization that bankers are now doing the job of cops. KYC/AML. In this particular sector, anonymization is unlikely. Banks have no limits on their snooping. They have a blank check and no consequences for overcollection. No restraint. When they get breached, they just sign people up for credit monitoring and any overcollection has the immunity of KYC law.

At best, perhaps a marketing division would choose some canned bulk mailing service which happens to give them low resolution on engagement. But even that’s a stretch because anyone in the marketing business also wants to market their own service as making the most of data collection.

[-] evenwicht@lemmy.sdf.org 3 points 1 day ago

No that’s not it. My address is unique to the bank, full headers & path match up with other mail from them, and the means to reach them back is correct (yes, I examine every character for imposters using od -c).

[-] evenwicht@lemmy.sdf.org 2 points 1 day ago

Can you explain why they would want to anonymise the tracker pixels? Doesn’t that defeat the purpose?

47
submitted 1 day ago* (last edited 1 day ago) by evenwicht@lemmy.sdf.org to c/email_required@lemmy.sdf.org

Got an email from a bank saying my account has been put in a restricted state because they have been unable to reach me. Their emails reach me fine. They rarely send paper mail but when they do I can see that they have the correct address on file.

Then I looked closer at their email, examined the HTML, and found that they insert a tracker pixel in their messages. So if I were to use a graphical mail client with default configs, they would surreptitiously get a signal telling them my IP (thus whereabouts) and time of day every time I open my email from them. I use a text client so the tracker pixels get ignored.
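The kind of thing I found can be spotted without ever rendering the message. A crude sketch, with a fabricated sample body and a hypothetical tracking URL:

```shell
# fabricated HTML email body containing a 1x1 tracker pixel (URL is hypothetical)
cat > /tmp/bank_mail.html <<'EOF'
<p>Please contact us about your account.</p>
<img src="https://track.bank.example/open?uid=12345" width="1" height="1" alt="">
EOF

# crude scan for likely tracker pixels: remote images declared 1 pixel wide
grep -Eio '<img[^>]*width="1"[^>]*>' /tmp/bank_mail.html
```

A real scan would also catch height-only and CSS-hidden variants, but even this naive pattern flags the common case.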

Would a bank conclude from the lack of tracker pixel signals that they are not reaching a customer, and then lock down their account?

I’m not going to call them and ask.. fuck them for interrupting my day and making me dance. I don’t lick boots like that. I just wonder if anyone else who does not trigger tracker pixels has encountered this situation.

1

Some email that was sent to me via a burnermail.io account never reached me. The sender got no bounce, so the message just went into a silent black hole. Of course whenever this happens there is no way to know whether the msgs were lost by the forwarding service or the email hosting provider. Hence why I am asking.. I would like to see if there is a pattern.

Worth noting that spamgourmet.com is very unreliable. Messages get dropped silently all the time. But until recently I was unaware of any possible issues with burnermail.

8
submitted 5 days ago* (last edited 4 days ago) by evenwicht@lemmy.sdf.org to c/linuxphones@lemmy.ml

My current rig:

  • old android phone with GPS disabled
  • external GPS device (NMEA over bluetooth)
  • OsmAnd from F-Droid for offline maps and navigation
  • BlueGPS to connect to the bluetooth GPS device, grab the NMEA signal, and feed it as a mock location
  • developer options » mock locations enabled

The idea is to save on phone battery so I can navigate more than an hour. The phone’s internal GPS is energy intensive because of all the GPS calculations. By offloading the GPS work to an external bluetooth GPS, the phone’s battery can be somewhat devoted to the screen because bluetooth uses much less energy than GPS. And NMEA carries lat/long so the phone need not do the calculations.
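The offloading works because an NMEA sentence already carries the computed fix, so the phone only parses text. A sketch with an illustrative $GPGGA sentence (coordinates made up):

```shell
# $GPGGA fields: 3-4 = latitude (ddmm.mmmm + hemisphere), 5-6 = longitude
nmea='$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47'

# extracting the fix is a trivial string split, no GPS math on the phone
echo "$nmea" | awk -F, '{print "lat " $3 $4 "  lon " $5 $6}'
# prints: lat 4807.038N  lon 01131.000E
```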

Not sure it actually works though.. been waiting for satellites for a while now. Anyway, I would like to know if this config can work on any FOSS platforms, like pmOS. Can OsmAnd run on pmOS or is there a better option? IIUC, Android apps are a huge CPU hog on pmOS because of emulation.

Ideally I would like to buy something 2nd-hand like a BQ Aquaris X5 and put pmOS on it. I’ll need a quite lean mapping and nav app that runs on pmOS, and also has the ability to use an external GPS.

For the first 15 minutes when satellites are taking forever to appear, I would like to use something like WiGLE WiFi Wardriving which makes use of wifi APs and cell towers the same way Google location does, but without feeding Google. Is there anything like that on pmOS, or any other FOSS phone platform?

Updates

Every mobile FOSS platform listed by the OSM project has been abandoned as far as I can tell. But perhaps OSM is just tracking this poorly, because osmin and Pure Maps apparently both run on postmarketOS:

There is a network-dependent nav app called Mepo, but that would not interest me.

There is also Organic Maps which comes as a flatpak for aarch64. It requires the whole KDE framework which is fat in terms of size but probably not relying on emulation so it could perform well enough.

[-] evenwicht@lemmy.sdf.org 1 points 6 days ago

In principle the ideal archive would contain the JavaScript for forensic (and similar) use cases, as there is both a document (HTML) and an app (JS) involved. But then we would want the choice whether to run the app (or at least inspect it), while also having the option to faithfully restore the original rendering offline. You seem to imply that saving JS is an option. I wonder: if you choose to save the JS, does it then save the stock HTML skeleton, or the rendered result?

[-] evenwicht@lemmy.sdf.org 1 points 6 days ago* (last edited 6 days ago)

wget has a --load-cookies file option. It wants the original Netscape cookie file format. Depending on your GUI browser you may have to convert it. I recall in one case I had to parse the session ID out of a cookie file then build the expected format around it. I don’t recall the circumstances.

Another problem: some anti-bot mechanisms crudely look at user-agent headers and block curl attempts on that basis alone.

(edit) when cookies are not an issue, wkhtmltopdf is a good way to get a PDF of a webpage. So you could have a script do a wget to get the HTML faithfully, and wkhtmltopdf to get a PDF, then pdfattach to put the HTML inside the PDF.
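That three-step idea could be sketched as a script like this (wget, wkhtmltopdf, and poppler’s pdfattach are real tools; the script is only syntax-checked here since actually running it needs the network):

```shell
# sketch: archive a page as a PDF with the raw HTML attached inside it
cat > /tmp/archive_page.sh <<'EOF'
#!/bin/sh
# usage: archive_page.sh <url> <basename>
url="$1"; out="$2"
wget -O "$out.html" "$url"            # faithful raw HTML copy
wkhtmltopdf "$url" "$out.pdf"         # rendered, fixed-layout PDF snapshot
pdfattach "$out.pdf" "$out.html" "$out.with_html.pdf"   # PDF as container
EOF
sh -n /tmp/archive_page.sh && echo "syntax OK"
```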

(edit2) It’s worth noting there is a project called curl-impersonate which makes curl look more like a GUI browser to get more equal treatment. I think they go as far as adding a javascript engine or something.

[-] evenwicht@lemmy.sdf.org 4 points 6 days ago

It’s perhaps the best way for someone who has a good handle on it. Docs say it “sets infinite recursion depth and keeps FTP directory listings. It is currently equivalent to -r -N -l inf --no-remove-listing.” So you would need to tune it so that it’s not grabbing objects that are irrelevant to the view, and probably exclude some file types like videos and audio. If you get a well-tuned command worked out, that would be quite useful. But I do see a couple shortcomings nonetheless:

  • If you’re on a page that required you to log in and do some interactive things to get there, then I think passing the cookie from the GUI browser to wget would be non-trivial.
  • If you’re on a capped internet connection, you might want to save from the browser’s cache rather than refetch everything.

But those issues aside I like the fact that wget does not rely on a plugin.
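One hedged starting point for that tuning (standard wget flags; the reject list and politeness delay are my guesses, and it’s only syntax-checked here since a real run needs the network):

```shell
cat > /tmp/mirror.sh <<'EOF'
#!/bin/sh
# usage: mirror.sh <url>
wget --mirror --page-requisites --convert-links --no-parent \
     --reject 'mp4,webm,mkv,mp3,ogg,flac' \
     --wait=1 "$1"
EOF
sh -n /tmp/mirror.sh && echo "syntax OK"
```

--page-requisites pulls the images/CSS a single page needs, while --no-parent and the --reject suffix list keep the recursion from hoovering up heavy media.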

[-] evenwicht@lemmy.sdf.org 2 points 6 days ago* (last edited 6 days ago)

The other thing is, what about JavaScript? JS changes the presentation.

Markdown is probably ideal when saving an article, like a news story. It might even be quite useful to get it into a Gemini-compatible language. But what if you are saving the receipt for a purchase? A tax auditor would suspect shenanigans. So the idea with archival is generally to closely (faithfully) preserve the doc.

[-] evenwicht@lemmy.sdf.org 3 points 6 days ago* (last edited 6 days ago)

IIUC you are referring to this extension, which is Firefox-only (~~like~~unlike the save page WE, which has a Chromium version).

Indeed the beauty of ZIP is stability. But the contents are not stable: HTML changes so rapidly, I bet if I unzip an old MAFF file it would not have stood the test of time well. That’s why I like the PDF wrapper. Nonetheless, this WebScrapBook could stand in place of the MHTML from the save page WE extension. In fact, save page WE usually fails to save all objects for some reason. So WebScrapBook is probably more complete.

(edit) Apparently webscrapbook gives a choice between htz and maff. I like that it timestamps the content, which is a good idea for archived docs.

(edit2) Do you know what happens with JavaScript? I think JS can be quite disruptive to archival. If webscrapbook saves the JS, it’s saving an app, in effect, and that language changes. The JS also may depend on being able to access the web, which makes a shitshow of archival because obviously you must be online and all the same external URLs must still be reachable. OTOH, saving the JS is probably desirable if doing the hybrid PDF save because the PDF version would always contain the static result, not the JS. Yet the JS could still be useful to have a copy of.

(edit3) I installed webscrapbook but it had no effect. Right-clicking does not give any new functions.

25
submitted 6 days ago* (last edited 6 days ago) by evenwicht@lemmy.sdf.org to c/sustainabletech@lemmy.sdf.org

MAFF (a shit-show, unsustained)

Firefox used to have an in-house format called MAFF (Mozilla Archive File Format), which boiled down to a zip file that had HTML and a tree of media. I saved several web pages that way. It worked well. Then Mozilla dropped the ball and completely abandoned their own format. WTF. Did not even give people a MAFF→MHTML conversion tool. Just abandoned people while failing to realize the meaning and purpose of archival. Now Firefox today has no replacement. No MHTML. Choices are:

  • HTML only
  • HTML complete (not as a single file, but a tree of files)

MHTML (shit-show due to non-portable browser-dependency)

Chromium-based browsers can save a whole web page to a single MHTML file. Seems like a good move, but if you open a Chromium-generated MHTML file in Firefox, you just get an ASCII text dump of the contents: what looks like an email header, MIME part boundaries, and base64-encoded payloads. So that’s a show-stopper.

Exceptionally portable approach: a plugin adds a right-click option called “Save page WE” (available in both Firefox and Chromium). That extension produces an MHTML file that both Chromium and Firefox can open.

PDF (lossy)

Saving or printing a web page to PDF mostly guarantees that the content and representation can reasonably be reproduced well into the future. The problem is that PDF inherently forces the content to be arranged on a fixed width that matches a physical paper geometry (A4, US letter, etc). So you lose some data. You lose information about how to re-render it on different devices with different widths. You might save on A4 paper then later need to print it to US letter paper, which is a bit sloppy and messy.

PDF+MHTML hybrid

First use Firefox with the “Save page WE” plugin to produce an MHTML file. But relying on this alone is foolish considering how unstable HTML specs are even still today in 2024 with a duopoly of browser makers doing whatever the fuck they want - abusing their power. So you should also print the webpage to a PDF file. The PDF will ensure you have a reliable way to reproduce the content in the future. Then embed the MHTML file in the PDF (because PDF is a container format). Use this command:

$ pdfattach webpage.pdf webpage.mhtml webpage_with_HTML.pdf

The PDF will just work as you expect a PDF to, but you also have the option to extract the MHTML file using pdfdetach webpage_with_HTML.pdf if the need arises to re-render the content on a different device.

The downside is duplication. Every image has one copy stored in the MHTML file and another copy stored separately in the PDF next to it. So it’s shitty from a storage space standpoint. The other downside is plugin dependency. Mozilla has proven browser extensions are unsustainable when they kicked some of them out of their protectionist official repository and made it painful for exiled projects to reach their users. And a plugin is simply less likely to be maintained than a browser’s built-in function.

We need to evolve

What we need is a way to save the webpage as a sprawled out tree of files the way Firefox does, then a way to stuff that whole tree of files into a PDF, while also producing a PDF vector graphic that references those other embedded images. I think it’s theoretically possible but no tool exists like this. PDF has no concept of directories AFAIK, so the HTML tree would likely have to be flattened before stuffing into the PDF.
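The flatten-then-stuff step could be sketched like this (pdfattach is poppler’s real tool and takes one attachment per invocation; the directory layout and filenames are illustrative, and the script is only syntax-checked here):

```shell
cat > /tmp/attach_tree.sh <<'EOF'
#!/bin/sh
# usage: attach_tree.sh <page.pdf> <page_files_dir>
pdf="$1"; dir="$2"
find "$dir" -type f | while read -r f; do
  # PDF attachments have no concept of directories, so flatten the path
  flat=$(printf '%s' "$f" | tr '/' '_')
  cp "$f" "/tmp/$flat"
  pdfattach "$pdf" "/tmp/$flat" "$pdf.new" && mv "$pdf.new" "$pdf"
done
EOF
sh -n /tmp/attach_tree.sh && echo "syntax OK"
```

This only gets the files in; the missing piece remains a vector-graphic PDF body that references those embedded images, which AFAIK no tool produces.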

Other approaches I have overlooked? I’m not up to speed on all the ereader formats but I think they are made for variable widths. So saving a webpage to an ereader format of some kind might be more sensible than PDF, if possible.

(update) The goals

  1. Capture the webpage as a static snapshot in time which requires no network to render. Must have a simple and stable format whereby future viewers are unlikely to change their treatment of the archive. PDF comes close to this.
  2. Record the raw original web content in a non-lossy way. This is to enable us to re-render the content on different devices with different widths. Future-proofness of the raw content is likely impossible because we cannot stop the unstable web standards from changing. But capturing a timestamp and web browser user-agent string would facilitate installation of the original browser. A snapshot of audio, video, and the code (JavaScript) which makes the page dynamic is also needed both for forensic purposes (suitable for court) and for being able to faithfully reproduce the dynamic elements if needed. This is to faithfully capture what’s more of an application than a document. wget -m possibly satisfies this. But perhaps tricky to capture 3rd party JS without recursing too far on other links.
  3. A raw code-free (thus partially lossy) snapshot for offline rendering is also needed if goal 1 leads to a width-constrained format. Save page WE and WebScrapBook apparently satisfy this.

PDF satisfies goal 1; wget satisfies goal 2; maff/mhtml satisfies goal 3. There is likely no single format that does all of the above, AFAIK. But I still need to explore these suggestions.

0
submitted 1 week ago* (last edited 1 week ago) by evenwicht@lemmy.sdf.org to c/bugs_in_services@sopuli.xyz

If you visit: https://12ft.io/$URL_to_pdf.pdf using a GUI browser, the raw PDF binary is dumped to the screen. There is no way to capture this. If you use wget it just gets an HTML wrapper. If you hit F12»inspect»element, you can derive a proper URL to a PDF and use wget on that. E.g.

wget 'https://12ft.io/api/proxy?q=https://mswista.files.wordpress.com/2015/04/typesofmemory_updated.pdf'

But the PDF is corrupt. There is no user-side hack here. The service is broken. Apparently the server is doing a character set conversion as if the PDF were ASCII text.

(BTW, that sample URL above works fine without 12ft.io. It’s just an example to demo the 12ft.io problem. Of course when a PDF is walled off and I am forced to use 12ft.io, then I’m hosed)
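A quick way to demonstrate that kind of corruption: every real PDF begins with the magic bytes %PDF-, so a transcoded or HTML-wrapped response fails a five-byte check. Sketch with fabricated files:

```shell
# fabricate a file with a valid PDF header and one without
printf '%%PDF-1.4\n...' > /tmp/good.pdf
printf '<html>not a pdf' > /tmp/mangled.pdf

head -c 5 /tmp/good.pdf; echo       # prints: %PDF-
head -c 5 /tmp/mangled.pdf; echo    # prints: <html
```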

The admin is only reachable on Twitter and Gmail, neither of which works for me. There is a Mastodon bot at @thmsmlr@bird.makeup but that’s only good for following him. No way to report the bug to him AFAIK. Hence why I am posting it here.

8
submitted 1 week ago* (last edited 1 week ago) by evenwicht@lemmy.sdf.org to c/beneficial_bots@lemmy.sdf.org

I would never use the typical kind of shared bike that you can just leave anywhere because AFAIK those are exclusively for Google pawns. But the kind that have stations do not need an app. So I scraped all the bicycle station locations into a db & used an OpenStreetMap API to grab the elevation of each station. If the destination station was a higher elevation than the source station, my lazy ass would take the tram. Hey, gimme a break.. these shared bikes are heavy as fuck because they’re made to take abuse from the general public.
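The decision rule is trivial once the elevations are scraped. A sketch with made-up station data (names and elevations are illustrative):

```shell
# fabricated station database, as scraped from the elevation API
cat > /tmp/stations.csv <<'EOF'
station,elev_m
central,120
riverside,85
EOF

src=central; dst=riverside
s=$(awk -F, -v n="$src" '$1==n{print $2}' /tmp/stations.csv)
d=$(awk -F, -v n="$dst" '$1==n{print $2}' /tmp/stations.csv)

# bike downhill, tram uphill
if [ "$d" -gt "$s" ]; then echo "take the tram"; else echo "take the bike"; fi
# prints: take the bike
```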

It was fun to just cruise these muscle bikes downhill. I was probably a big contributor to the high bicycle availability at low elevations and the shortages up high. The bike org then started a policy of giving people a bonus credit if they park at a high station, to try to incentivize more people to go uphill.

6
submitted 1 week ago* (last edited 1 week ago) by evenwicht@lemmy.sdf.org to c/beneficial_bots@lemmy.sdf.org

I recall an inspirational story where a woman tried many dating sites and they all lacked the filters and features she needed to find the right guy. So she wrote a scraper bot to harvest profiles, plus software that narrowed down the selection and proposed a candidate. She ended up marrying him.

It’s a great story. I have no link ATM, and search came up dry but I found this story:

https://www.ted.com/talks/amy_webb_how_i_hacked_online_dating/transcript?subtitle=en

I can’t watch videos right now. It could even be the right story but I can’t verify.

I wonder if she made a version 2.0 which would periodically scrape new profiles and check whether her husband re-appears on a dating site, which could then alert her about the anomaly.

Anyway, the point in this new community is to showcase beneficial bots and demonstrate that there is a need to get people off the flawed idea that all bots are malicious. We need more advocacy for beneficial bots.

[-] evenwicht@lemmy.sdf.org 6 points 1 week ago* (last edited 1 week ago)

Don’t Canadian insurance companies want to know where their customers are? Or are the Canadian privacy safeguards good on this?

In the US, Europe (despite the GDPR), and other places, banks and insurance companies snoop on their customers to track their whereabouts as a normal common way of doing business. They insert surreptitious tracker pixels in email to not only track the fact that you read their msg but also when you read the msg and your IP (which gives whereabouts). If they suspect you are not where they expect you to be, they take action. They modify your policy. It’s perfectly legal in the US to use sneaky underhanded tracking techniques rather than the transparent mechanism described in RFC 2298. If your suppliers are using RFC 2298 and not involuntary tracking mechanisms, lucky you.

[-] evenwicht@lemmy.sdf.org 14 points 1 week ago* (last edited 1 week ago)

> You’re kind of freaking out about nothing.

I highly recommend YouTube video l6eaiBIQH8k, if you can track it down. You seem to have no general idea about PDF security problems.

> And I’m not sure why an application would output a pdf this way. But there’s nothing harmful going on.

If you can’t explain it, then you don’t understand it. Thus you don’t have answers.

It’s a bad practice to just open a PDF you did not produce without safeguards. Shame on me for doing it.. I got sloppy but it won’t happen again.

50
submitted 1 week ago* (last edited 1 week ago) by evenwicht@lemmy.sdf.org to c/cybersecurity@infosec.pub

cross-posted from: https://lemmy.sdf.org/post/24645301

They emailed me a PDF. It opened fine with evince and looked like a simple doc at first. Then I clicked on a field in the form. Strangely, instead of simply populating the field with my text, a PDF note window popped up so my text entry went into a PDF note, which many viewers present as a sticky note icon.

If I were to fax this PDF, the PDF comments would just get lost. So to fill out the form I fed it to LaTeX and used the overpic pkg to write text wherever I choose. LaTeX rejected the file.. could not handle this PDF. Then I used the file command to see what I am dealing with:

$ file signature_page.pdf
signature_page.pdf: Java serialization data, version 5

WTF is that? I know PDF supports JavaScript (shitty indeed). Is that what this is? “Java” is not JavaScript, so I’m baffled. Why is java in a PDF? (edit: explainer on java serialization, and some analysis)
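The file(1) verdict comes down to four magic bytes: Java serialization streams start with 0xACED followed by stream version 0x0005, and that prefix alone is enough to trigger the classification. A sketch:

```shell
# write just the Java serialization magic (0xAC 0xED) and stream version (0x0005)
printf '\254\355\000\005' > /tmp/ser_magic.bin

# file(1) keys on exactly this prefix
file /tmp/ser_magic.bin
```

So a PDF that starts with those bytes instead of %PDF- was likely dumped straight out of some Java backend, not produced by a normal PDF writer.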

My workaround was to use evince to print the PDF to PDF (using a PDF-building printer driver or whatever evince uses), then feed that into LaTeX. That worked.

My question is, how common is this? Is it going to become a mechanism to embed a tracking pixel like corporate assholes do with HTML email?

I probably need to change my habits. I know PDF docs can serve as carriers of copious malware anyway. Some people go to the extreme of creating a one-time use virtual machine with PDF viewer which then prints a PDF to a PDF before destroying the VM which is assumed to be compromised.

My temptation is to take a less tedious approach. E.g. something like:

$ firejail --net=none evince untrusted.pdf

I should be able to improve on that by doing something non-interactive. My first guess:

$ firejail --net=none gs -sDEVICE=pdfwrite -q -dFIXEDMEDIA -dSCALE=1 -o is_this_output_safe.pdf -- /usr/share/ghostscript/*/lib/viewpbm.ps untrusted_input.pdf

output:

Error: /invalidfileaccess in --file--
Operand stack:
   (untrusted_input.pdf)   (r)
Execution stack:
   %interp_exit   .runexec2   --nostringval--   --nostringval--   --nostringval--   2   %stopped_push   --nostringval--   --nostringval--   --nostringval--   false   1   %stopped_push   1990   1   3   %oparray_pop   1989   1   3   %oparray_pop   1977   1   3   %oparray_pop   1833   1   3   %oparray_pop   --nostringval--   %errorexec_pop   .runexec2   --nostringval--   --nostringval--   --nostringval--   2   %stopped_push   --nostringval--   --nostringval--   --nostringval--   %array_continue   --nostringval--
Dictionary stack:
   --dict:769/1123(ro)(G)--   --dict:0/20(G)--   --dict:87/200(L)--   --dict:0/20(L)--
Current allocation mode is local
Last OS error: Permission denied
Current file position is 10479
GPL Ghostscript 10.00.0: Unrecoverable error, exit code 1

What’s my problem? Better ideas? I would love it if attempts to reach the cloud could be trapped and recorded to a log file in the course of neutering the PDF.
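One guess at the failure: /invalidfileaccess is what SAFER-mode Ghostscript raises when a PostScript program tries to open a file it wasn’t told it may read (and viewpbm.ps is a PBM viewer, so it may be the wrong tool for PDF input anyway). A hedged alternative is to skip the PostScript viewer, whitelist the input (Ghostscript 9.50+ has --permit-file-read), and let pdfwrite re-distill the PDF, which drops JavaScript and form actions. Only syntax-checked here since it needs a real untrusted PDF:

```shell
cat > /tmp/sanitize.sh <<'EOF'
#!/bin/sh
# usage: sanitize.sh <untrusted.pdf> <sanitized.pdf>
gs -dSAFER -dBATCH -dNOPAUSE --permit-file-read="$1" \
   -sDEVICE=pdfwrite -o "$2" "$1"
EOF
sh -n /tmp/sanitize.sh && echo "syntax OK"
```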

(note: I also wonder what happens when Firefox opens this PDF considering Mozilla is happy to blindly execute whatever code it receives no matter the context.)

9
submitted 1 week ago* (last edited 1 week ago) by evenwicht@lemmy.sdf.org to c/paperless@sopuli.xyz

They emailed me a PDF. It opened fine with evince and looked like a simple doc at first. Then I clicked on a field in the form. Strangely, instead of simply populating the field with my text, a PDF note window popped up so my text entry went into a PDF note, which many viewers present as a sticky note icon.

If I were to fax this PDF, the PDF comments would just get lost. So to fill out the form I fed it to LaTeX and used the overpic pkg to write text wherever I choose. LaTeX rejected the file.. could not handle this PDF. Then I used the file command to see what I am dealing with:

$ file signature_page.pdf
signature_page.pdf: Java serialization data, version 5

WTF is that? I know PDF supports JavaScript (shitty indeed). Is that what this is? “Java” is not JavaScript, so I’m baffled. Why is java in a PDF? (edit: explainer on java serialization, and some analysis)

My workaround was to use evince to print the PDF to PDF (using a PDF-building printer driver or whatever evince uses), then feed that into LaTeX. That worked.

My question is, how common is this? Is it going to become a mechanism to embed a tracking pixel like corporate assholes do with HTML email?

I probably need to change my habits. I know PDF docs can serve as carriers of copious malware anyway. Some people go to the extreme of creating a one-time use virtual machine with PDF viewer which then prints a PDF to a PDF before destroying the VM which is assumed to be compromised.

My temptation is to take a less tedious approach. E.g. something like:

$ firejail --net=none evince untrusted.pdf

I should be able to improve on that by doing something non-interactive. My first guess:

$ firejail --net=none gs -sDEVICE=pdfwrite -q -dFIXEDMEDIA -dSCALE=1 -o is_this_output_safe.pdf -- /usr/share/ghostscript/*/lib/viewpbm.ps untrusted_input.pdf

output:

Error: /invalidfileaccess in --file--
Operand stack:
   (untrusted_input.pdf)   (r)
Execution stack:
   %interp_exit   .runexec2   --nostringval--   --nostringval--   --nostringval--   2   %stopped_push   --nostringval--   --nostringval--   --nostringval--   false   1   %stopped_push   1990   1   3   %oparray_pop   1989   1   3   %oparray_pop   1977   1   3   %oparray_pop   1833   1   3   %oparray_pop   --nostringval--   %errorexec_pop   .runexec2   --nostringval--   --nostringval--   --nostringval--   2   %stopped_push   --nostringval--   --nostringval--   --nostringval--   %array_continue   --nostringval--
Dictionary stack:
   --dict:769/1123(ro)(G)--   --dict:0/20(G)--   --dict:87/200(L)--   --dict:0/20(L)--
Current allocation mode is local
Last OS error: Permission denied
Current file position is 10479
GPL Ghostscript 10.00.0: Unrecoverable error, exit code 1

What’s my problem? Better ideas? I would love it if attempts to reach the cloud could be trapped and recorded to a log file in the course of neutering the PDF.

(note: I also wonder what happens when Firefox opens this PDF, because Mozilla is happy to blindly execute whatever code it receives no matter the context.)

[-] evenwicht@lemmy.sdf.org 8 points 2 weeks ago

Asylum is a legal process. If they follow that process (which begins with claiming asylum), then of course they cease to be illegal immigrants throughout the process.

128
submitted 2 weeks ago by evenwicht@lemmy.sdf.org to c/texas

So here’s a repugnant move by right-wing assholes. Texans: you can counter that shit. If a hospital asks you whether you are in the country legally, instead of saying “yes” the right answer is “I decline to answer”. Don’t give the dicks their stats.

-2
submitted 2 weeks ago* (last edited 2 weeks ago) by evenwicht@lemmy.sdf.org to c/Finance@lemmy.sdf.org

According to BBC World News, the stocks in the US that are expected to do well under Trump are surging. I think those stocks are surely over-valued. Their value will be corrected after Trump loses.

~~In the US it’s illegal to bet on elections~~ (see update), but betting on the stock market is fair game. I would love it if some short-sellers would exploit this situation.

(update) It’s now legal to bet on elections in the US, as of a few weeks ago

3

I’ve noticed this problem on infosec.pub as well. If I edit a post and submit, the form is accepted but then the edits are simply scrapped. When I re-review my msg, the edits did not stick. This is a very old Lemmy bug, I think going back over a year, but it’s bizarre how non-reproducible it is. Some instances never have this problem but sdf and infosec trigger this bug unpredictably.

0.19.3 is currently the best Lemmy version but it still has this bug (just as 0.19.5 does). A good remedy would be to install an alternative front end, like alexandrite.
