this post was submitted on 15 Sep 2025
24 points (92.9% liked)


I made a small utility for listing the file names inside an archive file (tar, zip, etc.). It comes in handy when you download a software package on the command line but aren't sure whether to extract it into its own folder, because you don't know the file structure inside.
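For example (a sketch; I'm assuming the tool is invoked as pear with the archive path as its only argument):

# peek inside before extracting
pear somepackage.tar.gz

# if everything sits under one top-level directory, extracting in place
# is safe; otherwise give it a folder first:
mkdir somepackage && tar xf somepackage.tar.gz -C somepackage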

top 14 comments
[–] sxan@midwest.social 17 points 1 day ago

Huh. tar tf and unzip -l. I'm not sure I'd even bother to write a shell function to combine them, much less install software.

Zips just exploding into loose files is so common that if you mkdir unzpd ; unzip -d unzpd file.zip it's going to be right nearly all of the time. Same with tarballs always containing a single top-level directory; it's so common it's barely worth checking.

You write the tools you need, don't get me wrong. This seems like, at most, a 10-line bash function, and even that seems excessive.

function pear() {
  case "$1" in
    *.zip)
      unzip -l "$1"    # list zip entries without extracting
      ;;
    *.tar|*.tar.*|*.tgz)
      tar tf "$1"      # tar auto-detects the compression
      ;;
    *)
      echo "pear: unsupported archive: $1" >&2
      return 1
      ;;
  esac
}
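For what it's worth, usage is then just (archive names are placeholders):

pear release.tar.gz   # listed via tar tf
pear assets.zip       # listed via unzip -l
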
[–] SinTan1729@programming.dev 2 points 1 day ago
[–] RedSnt@feddit.dk 2 points 1 day ago

Nifty! I've recently begun using archivemount-ng, which can mount archives in the formats you mention (not rar, though I have rar2fs for that). In theory one can mount an archive that way and just run tree in the mount folder, but it seems that pear saves a few steps and makes that a lot easier.
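For comparison, the mount-and-tree route looks something like this (a sketch assuming archivemount-ng keeps the classic archivemount interface; paths are placeholders):

mkdir /tmp/arch
archivemount somepackage.tar.gz /tmp/arch   # FUSE-mount the archive
tree /tmp/arch                              # inspect the layout
fusermount -u /tmp/arch                     # unmount when done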

[–] Lembot_0004@discuss.online -1 points 1 day ago* (last edited 1 day ago) (2 children)

I haven't used archivers from the command line in decades, so I have a question: is it faster than using the appropriate tools? In GUIs, some archive listings open almost instantly, while others, like *.tar.zst, can take tens of minutes...

P.S. There is actually no need to commit ".gitignore"

[–] BrianTheeBiscuiteer@lemmy.world 2 points 1 day ago (1 children)

Disagree on the .gitignore file. If you're the only developer and you only work from one machine, then it doesn't need to be committed. In a team setting it's absolutely imperative to commit it.

[–] Lembot_0004@discuss.online 0 points 1 day ago (3 children)

it’s absolutely imperative

It isn't part of the project/code.

[–] vala@lemmy.dbzer0.com 1 points 1 day ago (1 children)

How is it not part of the project?

[–] Lembot_0004@discuss.online 0 points 1 day ago (1 children)

It just isn't. It has nothing to do with either the code or with compiling. The same tier of "partness" as /etc/fstab or something.

[–] vala@lemmy.dbzer0.com 2 points 1 day ago (1 children)

So everyone who contributes to the project should make their own gitignore on every development machine they use, to prevent committing build files, secrets, etc.?

I don't understand why you say it has nothing to do with the code when it literally has nothing to do with anything BUT the code.

What is the downside you see to committing the gitignore?

[–] Lembot_0004@discuss.online 1 points 1 day ago (1 children)

You commit with add -A? Well, ok.

[–] vala@lemmy.dbzer0.com 1 points 1 day ago

Usually git add .

It's much faster, easier, and less error-prone to go with the blacklist approach of a gitignore file, IMO.

How do you ensure your teammates don't start committing their own IDE settings or committing "secrets.json" files or helper scripts or log files?
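As a sketch of how the blacklist approach catches that (secrets.json is just an example name):

echo 'secrets.json' >> .gitignore
git add .                 # ignored files are silently skipped
git status --short        # secrets.json never shows up as staged
git add -f secrets.json   # committing it now takes an explicit override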

[–] nous@programming.dev 1 points 1 day ago* (last edited 1 day ago)

You never want build artifacts to be committed, and you don't want everyone working on your project to need to set up their own gitignore for every project. So it makes sense to have a common committed gitignore for files the project produces that should never be tracked by git.
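For example, a committed .gitignore that covers only what the project produces might be as small as:

# build artifacts generated by the project
target/
build/
dist/
*.o
*.log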

I dislike it when people put editor files in the gitignore, though. People should set up global ones for their local tooling.
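Setting that up is a one-time step per machine:

# point git at a personal, machine-wide ignore file
git config --global core.excludesFile ~/.gitignore_global
echo '.idea/' >> ~/.gitignore_global   # e.g. JetBrains project files
echo '*.swp' >> ~/.gitignore_global    # e.g. vim swap files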

[–] pipe01@programming.dev 2 points 1 day ago

It parses as little data as possible to get just the file names. For some formats like ZIP that's just the header, but for others like tar it has to walk through and seek to the start of each file. It should be pretty fast even on big files, though.
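You can see the format difference with the stock tools, too (a rough sketch; file names are placeholders):

# zip keeps an index (the central directory), so listing is near-instant
time unzip -l big.zip > /dev/null

# tar has no index: every entry header has to be read, and with zstd the
# whole stream gets decompressed along the way
time tar tf big.tar.zst > /dev/null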