Scrath

joined 2 years ago
[–] Scrath@lemmy.dbzer0.com 2 points 1 day ago (1 children)

I guess that might work. I'll have to try it on Monday, though it's probably more effort this way compared to just doing it manually, since the snippets I currently have to add are mostly single functions with fewer than 20 lines.

[–] Scrath@lemmy.dbzer0.com 2 points 1 day ago (3 children)

Unfortunately not, because the Word document is meant to be the "master" document. We aren't even supposed to export PDF versions, because in the future people might find the PDF in the folder and use it as a reference instead of the main Word document, even though the Word doc was updated and the PDF wasn't. I also tried pandoc's Markdown-to-docx conversion in the past for another document and it didn't go well: the formatting of the headers was all over the place, which made it impossible to generate the table of contents in Word.
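
If I ever revisit the pandoc route, my understanding is that the heading problem can usually be tamed by supplying a reference docx, so pandoc maps Markdown headings onto Word's built-in Heading styles (which is what the table of contents keys off). Roughly like this, untested on this particular document and with placeholder file names:

pandoc -o custom-reference.docx --print-default-data-file reference.docx
# open custom-reference.docx in Word once and adjust the Heading 1..9 styles, then:
pandoc notes.md -f markdown -t docx --reference-doc=custom-reference.docx -o notes.docx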

[–] Scrath@lemmy.dbzer0.com 2 points 1 day ago

I'm not very well informed on the specifics of the DLNA standard or how it differs from UPnP, so take what I say here with a grain of salt. My understanding is that there are three device types in DLNA:

  • A server which provides media
  • A client which can pull media from the server
  • A renderer which can play back media from the client

I'm not sure if the server is strictly necessary, or if my device is just using the underlying UPnP functionality, but I can run Macast, which is a DLNA renderer, on my desktop computer and then select it as a playback device in Symfonium on my Android phone, where it shows up as a UPnP device.
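
As far as I can tell, the discovery part is plain SSDP from UPnP: Symfonium multicasts a search for renderers on the LAN and Macast answers it, which is why it shows up as a UPnP device. The generic search message looks roughly like this (from my understanding of UPnP, not from an actual packet capture):

M-SEARCH * HTTP/1.1
HOST: 239.255.255.250:1900
MAN: "ssdp:discover"
MX: 2
ST: urn:schemas-upnp-org:device:MediaRenderer:1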

[–] Scrath@lemmy.dbzer0.com 1 points 1 day ago

That might work. I'll have to check it out. Thanks

[–] Scrath@lemmy.dbzer0.com 2 points 1 day ago (1 children)

I've used Macast in the past on my desktop, where it worked perfectly. Unfortunately I could not find a fitting Docker image for it. There is this one, but it has literally no information and only 70 image pulls. There is also a Dockerfile in the Macast GitHub repo, but since I don't see a Docker image mentioned anywhere in the documentation, I guess that one is only for building the application. I believe Macast is a GUI application anyway, so I'm not sure how well it would work on a headless server.

[–] Scrath@lemmy.dbzer0.com 4 points 1 day ago (1 children)

I was under the impression that minidlna is exclusively a DLNA server and not a renderer. Is that wrong?

[–] Scrath@lemmy.dbzer0.com 1 points 1 day ago (5 children)

The code snippets are the worst part. God forbid I ever have to update them, because then I have to manually re-indent every line in them correctly.

[–] Scrath@lemmy.dbzer0.com 2 points 1 day ago (7 children)

Oh yes, definitely. I currently have to write the technical documentation for a project I am working on in MS Word, because that's the format my supervisor wants (since everyone in the organisation already has Word installed by default and knows how to use it at least somewhat). Probably a quarter of the time I spend writing is lost to fighting Word's formatting. I have managed to make Word do things my coworkers have never seen before, like taking the content of all my text fields (which I use for pasting code snippets) and duplicating it inside each text field...

I wish I could use LaTeX for it, but I understand the argument that the people who work on the project after me may not know LaTeX.

 

Hello everyone, I am currently looking for a software solution that lets my home server act as a DLNA renderer and output audio to my stereo amplifier.

The only solution I found is called gmrender-resurrect, which seems like it would do exactly what I want, but I was unable to get a Docker container of it working. While I was able to find and connect to the DLNA renderer, playback failed every time and I could not get any information out of the logs about why.
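
For reference, the kind of setup I was attempting looked roughly like this (the image name is a placeholder since I tried a couple of different ones, and the command-line options are from memory):

services:
  gmrender:
    image: someuser/gmrender-resurrect    # placeholder image name
    network_mode: host                    # SSDP/UPnP discovery needs the host network
    devices:
      - /dev/snd:/dev/snd                 # ALSA device for output to the amplifier
    command: gmediarender -f "Stereo" --gstout-audiosink=alsasink
    restart: unless-stopped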

Do any of you know another solution to stream audio from my phone to my server (I am using Symfonium on the phone side)? Ideally it would be something I can deploy as a Docker container on my server.

Thanks.

[–] Scrath@lemmy.dbzer0.com 3 points 1 day ago (9 children)

I'm going to upvote the git + plain Markdown solution, simply because it is a very basic approach that does not depend on a lot of specific software, in case you want to switch in the future. I had a look at Obsidian in the past but discarded that idea because it required a license for commercial use back then; it seems they have either changed that, or I misread the terms at the time.

Still, I am a fan of going as low-tech as possible with note formats, so that I can easily hand my notes down to whoever comes after me and they won't need a special program to open anything.

Quarto looks nice and would be something I would look into if I did more data-heavy work. As it is, I only write technical notes and software documentation, for which plain Markdown is perfectly suitable.

[–] Scrath@lemmy.dbzer0.com 37 points 1 day ago (12 children)

I have talked to Microsoft Copilot three times for work-related reasons because I couldn't find something in the documentation. I was lied to all three times. It either made up how the thing I asked about works or invented entirely new configuration settings.

[–] Scrath@lemmy.dbzer0.com 7 points 1 week ago* (last edited 1 week ago) (1 children)

I have no idea about the SOCKS5 part, but ProtonVPN at least supports port forwarding.

[–] Scrath@lemmy.dbzer0.com 4 points 1 week ago

Not OP, but I'm also running GrapheneOS. After some initial difficulty with one of my banking apps, I got everything to work just fine.

What unfortunately does not work is paying via Google Wallet or Revolut.

 

Hello everyone, I am currently trying to set up a kmonad config file to replace the AutoHotkey script I used on Windows. My goal is simply to use the right Alt key in combination with a, o, u and so on to type German umlaut characters like ä, ö, ü, etc.

So far I am having trouble even getting kmonad to load the config, so I probably misunderstand significantly how this is supposed to work. My initial config file was generated by ChatGPT, since I had no idea where to even start.

This is my current config file:

(defcfg
  input  (device-file "/dev/input/by-path/platform-i8042-serio-0-event-kbd")
  output (uinput-sink "kmonad_keyboard")
  fallthrough true
  allow-cmd true
)

(defsrc
  ralt a o u s lsft
)

(deflayer german
  ralt-a "ä"
  ralt-o "ö"
  ralt-u "ü"
  ralt-s "ß"
  ralt-shift-a "Ä"
  ralt-shift-o "Ö"
  ralt-shift-u "Ü"
)
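
From what I have pieced together since posting, the per-combination syntax above is probably not valid kmonad at all; it seems to want ralt defined as a layer switch, with the umlauts as plain outputs in that layer (emitted as compose sequences). My current best guess looks like this, untested, and it assumes a compose key is set up in X (e.g. setxkbmap -option compose:ralt):

(defcfg
  input  (device-file "/dev/input/by-path/platform-i8042-serio-0-event-kbd")
  ;; the second argument runs once the sink exists; assumes an X session
  output (uinput-sink "kmonad_keyboard" "sleep 1 && setxkbmap -option compose:ralt")
  cmp-seq ralt       ;; key kmonad presses to start a compose sequence
  cmp-seq-delay 5
  fallthrough true
  allow-cmd true
)

(defsrc
  ralt a o u s lsft
)

(defalias
  de (layer-toggle german)   ;; hold ralt to switch to the german layer
)

(deflayer default
  @de _ _ _ _ _
)

;; while ralt is held: a/o/u/s become umlauts; shift is passed through,
;; which should (hopefully) give the capital variants
(deflayer german
  _ ä ö ü ß _
)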

Any help would be appreciated.

45
submitted 7 months ago* (last edited 7 months ago) by Scrath@lemmy.dbzer0.com to c/electronics@discuss.tchncs.de
 

Hello everyone, I recently built a small distribution board to supply 5 V to multiple components for a robotics project. I made each output switchable, with an individual switch and an LED to indicate the current state. When I went to test it using a lab power supply, I noticed that the LEDs would start flickering weirdly when I turned them off and on again.

https://imgur.com/a/zaSCUby

As it turns out, the LEDs, which I found among my dad's old parts in a bag labeled TLBO 5410, are apparently blinking LEDs. I found a datasheet for TLBR 5410 LEDs, which seem pretty much identical to the ones I accidentally used.

Apparently these LEDs are designed to operate directly from a 5 V supply without an additional current-limiting resistor (one is already built in) and to blink continuously at a frequency of 3 Hz.

Because I thought I was using standard LEDs, I added a series resistor, which starves them of voltage and causes the weird behaviour. For comparison, this is how they are supposed to act: https://imgur.com/a/fXlcEDs
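
As a rough illustration with made-up numbers (I don't remember the exact resistor value): with a 330 Ω series resistor and the integrated driver trying to draw around 10 mA, the resistor alone drops 0.01 A × 330 Ω ≈ 3.3 V, leaving only about 1.7 V for a part that is designed to sit directly across the full 5 V.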

 

Hello everyone, I have another question regarding reverse proxying, this time specifically for the linuxserver.io jellyfin image.

On the Docker Hub page for this image, four ports are listed that should be exposed:

  • 8096 for the HTTP Web UI
  • 8920 for the HTTPS Web UI
  • 7359/udp for autodiscovery of jellyfin from clients
  • 1900/udp for service discovery from DLNA and clients

Additionally, there is an environment variable JELLYFIN_PublishedServerUrl, which is for "Setting the autodiscovery response domain or IP address". I currently have it set to my subdomain https://jellyfin.mydomain.com, though I am not sure if that is correct.

I already have a reverse proxy set up that allows me to access the server's web interface under https://jellyfin.mydomain.com without exposing the HTTPS port on the container. What I am unsure about now, however, is what to do with the two UDP ports.

By my understanding, a reverse proxy only forwards traffic arriving on port 80 for HTTP and port 443 for HTTPS. Those are also the only ports my reverse-proxy container exposes, alongside the management interface. As such, the two UDP ports will not be reachable under my Jellyfin domain.
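
The obvious workaround I can think of is to publish the two UDP ports straight from the container to the host and leave only HTTP to the proxy, roughly like this (trimmed compose sketch; the network name is a placeholder for whatever the proxy actually uses):

services:
  jellyfin:
    image: lscr.io/linuxserver/jellyfin:latest
    environment:
      - JELLYFIN_PublishedServerUrl=https://jellyfin.mydomain.com   # unsure whether the scheme belongs here
    ports:
      - 7359:7359/udp   # client autodiscovery, LAN only
      - 1900:1900/udp   # DLNA/SSDP discovery, LAN only
    networks:
      - proxy           # 8096 is reached by the reverse proxy over this network; 8920 stays unexposed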

Is there a way to route those two ports through the proxy as well, is publishing them directly on the host (as in the sketch above) good enough, or is this not even an issue in the first place?

10
submitted 1 year ago* (last edited 1 year ago) by Scrath@lemmy.dbzer0.com to c/selfhosted@lemmy.world
 

Hello, I have a question regarding the use of a reverse proxy that is part of a Docker network.

I currently use Nginx Proxy Manager as a reverse proxy for all my services hosted in Docker. This works great, since I can simply forward to each container by name. However, I have some services (e.g. Home Assistant) that are hosted separately in a VM or with Docker on another device.

Is it possible to use the same reverse proxy for those services as well? I haven't found a way to forward to hosts outside of the proxy's Docker network (except for using the host network setting, which I would like to avoid).
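
For context, my current setup looks roughly like this (trimmed sketch; the proxy and the local services share a user-defined network, which is why forwarding by container name works, while the VM and the other device sit outside of it):

services:
  npm:
    image: jc21/nginx-proxy-manager:latest
    ports:
      - 80:80
      - 443:443
      - 81:81               # management UI
    networks:
      - proxy

  someservice:              # placeholder for one of the local containers
    image: example/image    # placeholder
    networks:
      - proxy               # reachable from NPM as http://someservice:<port>

networks:
  proxy: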
