CoderSupreme

joined 2 years ago
 

I often find myself browsing videos on different invidious instances or posts on various lemmy instances, and I would love to be able to create a "watch later" list or a "favorites" list that works across all of them. I don't want to manually import and export these lists between instances, either, the way lemmy, invidious, etc. currently require.

I'm currently using a single bookmarks folder to keep track of everything, but it's become a mess. I'd like to create two or three separate lists for different groups of websites, so that I can easily find what I'm looking for: for example, a favorites list for reddit, tumblr, etc.; a favorites list and a watch-later list for invidious instances; and other lists for other sites.

Is there any way to achieve this? I'm open to using browser extensions, third-party apps, or any other solutions that might be out there. I would prefer a free solution, but I'm willing to consider paid options as well.

A bookmark can only exist in one folder at a time, whereas I want to be able to add a single item to multiple lists (e.g., both "favorites" and "watch later").

I believe the closest to what I'm looking for are Raindrop.io, Pocket, Wallabag, Hoarder, etc.

https://github.com/hoarder-app/hoarder?tab=readme-ov-file#alternatives

I use Manjaro Linux and Firefox.

 

I want to create a collage of 20 screenshots from a video, arranged in a 5x4 grid, regardless of the video’s length. How can I do this efficiently on a Linux system?

Specifically, I’d like a way to automatically generate this collage of 20 thumbnails from the video, without having to manually select and arrange the screenshots. The number of thumbnails should always be 20, even if the video is longer or shorter.

Can you suggest a command-line tool or script that can handle this task efficiently on Linux? I’m looking for a solution that is automated and doesn’t require a lot of manual work.

Here's what I've tried but I only get 20 black boxes:

#!/bin/bash

# Check if input video exists
if [ ! -f "$1" ]; then
    echo "Error: Input video file not found."
    exit 1
fi

# Get video duration in seconds (ffprobe returns a float, e.g. "123.456")
duration=$(ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 "$1")

# Bash arithmetic is integer-only, so truncate the fractional part first
interval=$(( ${duration%.*} / 20 ))

# Extract 20 frames; -frames:v 1 is needed so each run writes exactly one image
for i in {1..20}; do
    ffmpeg -y -v error -ss $(( interval * (i - 1) )) -i "$1" -frames:v 1 \
        -vf scale=200:-1 -q:v 2 "${1%.*}_frame$(printf '%02d' "$i").jpg"
done

# Create collage; the glob must stay outside the quotes so it expands,
# and the zero-padded names keep the frames in chronological order
montage -mode concatenate -tile 5x4 -geometry +2+2 "${1%.*}"_frame*.jpg output_collage.jpg

# Clean up temporary files
rm "${1%.*}"_frame*.jpg

echo "Collage created: output_collage.jpg"
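I also read that ffmpeg's select and tile filters can apparently build the whole grid in one pass, with no temporary files and no ImageMagick. Here's a sketch I put together; the filenames are just placeholders, and I generate a short synthetic test clip up front so the commands can be run as-is:

```shell
# Generate a 2-second synthetic clip as a stand-in input (50 frames at 25 fps)
ffmpeg -v error -y -f lavfi -i testsrc=duration=2:size=320x240:rate=25 input.mp4

# Count the video frames, then keep every (total/20)th frame and tile them 5x4.
# -frames:v 1 writes the first tiled image, which consumes the 20 kept frames.
total=$(ffprobe -v error -select_streams v:0 -count_packets \
        -show_entries stream=nb_read_packets -of csv=p=0 input.mp4)
ffmpeg -v error -y -i input.mp4 \
       -vf "select='not(mod(n,floor($total/20)))',scale=200:-1,tile=5x4" \
       -frames:v 1 -q:v 2 output_collage.jpg
```

With each tile scaled to 200 px wide, the 5x4 grid comes out 1000 px wide. Not sure how robust the frame counting is across containers, though.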
... (github.com)
submitted 1 year ago* (last edited 1 year ago) by CoderSupreme@programming.dev to c/linux@programming.dev
submitted 2 years ago* (last edited 1 year ago) by CoderSupreme@programming.dev to c/auai@programming.dev
 

Permanently Deleted

... (www.omgubuntu.co.uk)
submitted 2 years ago* (last edited 1 year ago) by CoderSupreme@programming.dev to c/opensource@lemmy.ml
... (www.phind.com)
submitted 2 years ago* (last edited 1 year ago) by CoderSupreme@programming.dev to c/technology@lemmy.ml
... (programming.dev)
submitted 2 years ago* (last edited 1 year ago) by CoderSupreme@programming.dev to c/localllama@sh.itjust.works
 

CogVLM: Visual Expert for Pretrained Language Models

Presents CogVLM, a powerful open-source visual language foundation model that achieves state-of-the-art performance on 10 classic cross-modal benchmarks.

repo: https://github.com/THUDM/CogVLM
abs: https://arxiv.org/abs/2311.03079

... (github.com)
submitted 2 years ago* (last edited 1 year ago) by CoderSupreme@programming.dev to c/opensource@lemmy.ml
 

A self-hosted BitTorrent indexer, DHT crawler, content classifier and torrent search engine with web UI, GraphQL API and Servarr stack integration.

... (github.com)
submitted 2 years ago* (last edited 1 year ago) by CoderSupreme@programming.dev to c/opensource@lemmy.ml
 

A terminal workspace with batteries included

... (programming.dev)
submitted 2 years ago* (last edited 1 year ago) by CoderSupreme@programming.dev to c/localllama@sh.itjust.works
 

article: https://x.ai

xAI trained a prototype LLM (Grok-0) with 33 billion parameters. This early model approaches LLaMA 2 (70B) capabilities on standard LM benchmarks while using only half its training resources. In the last two months, they have made significant improvements in reasoning and coding capabilities, leading up to Grok-1, a state-of-the-art language model that is significantly more powerful, achieving 63.2% on the HumanEval coding task and 73% on MMLU.
