isn't the argument after bash -c supposed to be one string of the command to be run?
e.g.
bash -c "echo hello"
Oh yeah, then that is how it really is. The script runs fine, the output is correctly piped, but it is just the signal handling that doesn't work.
Modify the python script to include the new behavior.
I've never created a custom docker container, but I'm pretty sure you should make the entry point python itself, too.
But I don't actually know what the new behavior is. I think it's that the script never receives a termination signal and is just killed instead, and if that's the case, how can I modify it to catch that?
What I intend to do tomorrow is rewrite all the output (which I had hoped to avoid for this) to write directly to a log file, instead of trying to capture the print statements of this initially "only-meant-for-me" piece of code. That way I won't have to do anything but run the Python script, and it should receive the termination signal as intended. But as I said, I would still like to understand what is going on.
I think you already decided what I would have recommended (just write to a log file in your python script) but I wanted to hopefully help with the rest of the question hah.
So the first thing to remember is that a pipe (|) in Linux is a unidirectional data channel that passes stdout from the left command to the right command's stdin, and this is its only function. Also notable: the exit status of a pipeline is the exit status of the last command in the pipeline (unless the pipefail option is enabled, but that isn't the behavior you wanted either), and that is what's available in $? immediately after the pipeline exits in a script. The problem with that is that tee can exit successfully, because it did its job of copying the output, even though the previous command failed.
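A quick sketch to see that exit-status behavior for yourself (assumes bash and tee are available; false stands in for a failing command):

```python
import subprocess

# Without pipefail: the pipeline's status is tee's status (0),
# even though `false` on the left failed
status_default = subprocess.run(
    ["bash", "-c", "false | tee /dev/null; echo $?"],
    capture_output=True, text=True,
).stdout.strip()
print(status_default)  # 0

# With pipefail: the pipeline reports the failure from `false`
status_pipefail = subprocess.run(
    ["bash", "-c", "set -o pipefail; false | tee /dev/null; echo $?"],
    capture_output=True, text=True,
).stdout.strip()
print(status_pipefail)  # 1
```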
To get the behavior you are after, you would probably need to write a wrapper script that does the signal handling, or it might work to use exec to wrap your python+tee command in your Dockerfile, because then the bash process gets replaced by python or tee. I'm not sure which, or how tee will interact with exec, without testing though.
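A rough sketch of the exec effect, using sleep as a stand-in for the real command: once bash execs, the bash process is replaced, so a signal sent to that PID reaches the command directly (how exec interacts with a pipeline like python | tee is exactly the untested part):

```python
import signal
import subprocess
import time

# bash execs into sleep, so the PID we hold *is* sleep, not a bash wrapper
p = subprocess.Popen(["bash", "-c", "exec sleep 30"])
time.sleep(0.5)            # give bash a moment to exec
p.terminate()              # SIGTERM goes straight to sleep
print(p.wait() == -signal.SIGTERM)  # True: sleep died from SIGTERM
```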
Anyway, hope that helps, here are the docs on pipe which are worth a read. In fact, when double-checking something just now, I learned I can do |& today instead of 2>&1 | which is neat hah!
Edit: I forgot to mention, signal handling in Docker is a whole other animal, so depending on how you are specifically running it, the behavior and signals might not be what is expected or the same as running the commands outside of Docker.
Great article about it: https://medium.com/@gchudnov/trapping-signals-in-docker-containers-7a57fdda7d86
Repost if you can’t read it on medium: https://www.cloudbees.com/blog/trapping-signals-in-docker-containers
Yeah, I mean writing to a file. Do that in python, don't wrap a script with more script.
You're probably right about the process handling being the cause, but I wouldn't worry about that and just do it right the first time.