It's really simple: it's a container containing a virtual OS, which runs a browser and a webserver to run the app. The app connects to several external API services to do its thing.
It's like, really simple!
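For the curious, the innermost layer of that stack might look something like this minimal sketch: just a tiny webserver that fans out to a couple of hypothetical external APIs on every request. The container, the virtual OS, and the bundled browser wrapped around it are left as an exercise.

```python
# Minimal sketch of the innermost "app" layer only. The external API URLs
# are made up for illustration; nothing here reflects any real service.
import json
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical external services the app depends on.
EXTERNAL_APIS = [
    "https://api.example.com/auth/status",
    "https://api.example.com/feature-flags",
]

class AppHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Call every external API just to render one page.
        results = {}
        for url in EXTERNAL_APIS:
            with urllib.request.urlopen(url, timeout=5) as resp:
                results[url] = resp.status
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(results).encode())

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), AppHandler).serve_forever()
```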
I'm very scared that this might actually be the case in some apps out there.
It probably was very simple for the kid who wrote it, just import everything and write a couple of lines to use all this stuff that already exists!
Gotta love using a base container image that is far too overkill for what you're trying to run.
I get to witness the enterprise services flavor of that, where the company pays software architects who aren't actually coding, and coders who aren't allowed to make architectural decisions.
You have software that takes HTTP? You need to rewrite it so that you only speak RabbitMQ, and use it for every HTTP request or WebSocket message. Don't worry, we have a team that specializes in translating HTTP to RabbitMQ, so you only have to rewrite the server code; another team will handle the HTTP listener that translates for you.
What's that, you have a non-HTTP protocol? Well, the other team isn't scoped to handle that, so you'll need to convert your listener to RabbitMQ and create a whole separate container to actually receive the UDP packets and then translate them to RabbitMQ. No "processing" software is allowed to speak anything but RabbitMQ, and network listener containers are only allowed to dumbly receive and forward.
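For illustration, a minimal sketch of what such an "HTTP listener that only translates to RabbitMQ" might look like, assuming Python with the pika client and a broker on localhost; the queue name and port are made up, and the real enterprise version would no doubt involve several more teams:

```python
# HTTP-to-AMQP bridge sketch: every incoming HTTP request body is published
# to a RabbitMQ queue, and the "processing" service only ever sees AMQP.
import pika  # assumes `pip install pika` and a RabbitMQ broker on localhost
from http.server import BaseHTTPRequestHandler, HTTPServer

QUEUE = "incoming.http"  # hypothetical queue name

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue=QUEUE, durable=True)

class HttpToAmqpBridge(BaseHTTPRequestHandler):
    def do_POST(self):
        # Dumbly receive and forward: no processing here, just translation.
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        channel.basic_publish(
            exchange="",
            routing_key=QUEUE,
            body=body,
            properties=pika.BasicProperties(delivery_mode=2),  # persist message
        )
        self.send_response(202)  # accepted; any real reply comes back via MQ
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), HttpToAmqpBridge).serve_forever()
```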
Tech hipsters be like: you had me at container!
Some of those can be good if you want a single command to install on any OS.
And not have to wait for a maintainer to update the package to have the latest version.
You're thinking of distribution-packaged RPMs/debs, but the software developer can self-publish, and you'll see plenty of self-published packages in PPAs, COPR, Flathub, and even just loose websites, because it's not rocket science to make an apt or yum repository. The distribution versions may take a little more time, but they're more likely to work together as a cooperative whole. Flathub has a decent shot by allowing concurrent versions of dependencies to be installed, while preserving the concept of updating dependencies independently of the package maintainers.
However, as you go down his chart, it's less likely that you'll reasonably update after install. You may get the latest at the second you install, but 6 months later you'll likely be stale. You may neglect to update npm dependencies in each and every project, or they may be automatically locked (because the self-publish nature means developers have to vet dependency updates themselves, and devs are lazy about that).
Bash/Sh on Windows? And what's so bad about 2-3 separate commands anyway?
I assume he means npm/pip/cargo as the multi-OS option, not that the last one is obviously better across OSes. At least, that has to be it, because that's the only option that is OS-independent.
Of course it sucks, because the essentially uncurated dependency trees result in either instability on updates or missing updates. And the native OS updater won't help you out with pip/cargo/npm, but it will help with apt, yum, snap, and flatpak.
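For example, since pip-installed packages sit outside the OS updater, checking for staleness is a manual chore. A small sketch using pip's own `list --outdated` output:

```python
# List which pip-installed packages have drifted behind their latest release.
import json
import subprocess

def outdated_pip_packages() -> list[dict]:
    """Return pip's view of which installed packages have newer releases."""
    result = subprocess.run(
        ["pip", "list", "--outdated", "--format=json"],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)

if __name__ == "__main__":
    for pkg in outdated_pip_packages():
        print(f"{pkg['name']}: {pkg['version']} -> {pkg['latest_version']}")
```

And that only covers one environment; every project's virtualenv, every npm project, and every cargo workspace needs the same treatment separately.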
I was talking about the other ones, but since you mention it, yeah, many people use Bash on Windows via Git Bash, which is part of Git for Windows, which pretty much any developer forced to use Windows will install in order to use Git.
Developers often prefer to have fewer interfaces to maintain when possible.
Gets the job done, but shouldn't be and isn't intended for non-programmer end users.
I'm not mad at small programs or developers without much time to set up a distribution pipeline; they should be praised for their work on the program itself. But different OSes have different places to unpack a program, which allows simple updates, and we should respect that for consistency on the user's end. Except on Windows, which is an unspecified mess anyway, so let's go and unpack everything raw onto C:\ or into the user directory.
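For example, a minimal sketch of picking the conventional per-OS data directory instead of unpacking raw into C:\ or the home directory (the app name is hypothetical):

```python
# Resolve the platform-conventional data directory for an application.
import os
import sys
from pathlib import Path

APP_NAME = "myapp"  # hypothetical application name

def data_dir() -> Path:
    if sys.platform == "win32":
        # Windows convention: per-user local application data.
        return Path(os.environ["LOCALAPPDATA"]) / APP_NAME
    if sys.platform == "darwin":
        # macOS convention: Application Support.
        return Path.home() / "Library" / "Application Support" / APP_NAME
    # Linux/BSD: follow the XDG base directory spec.
    xdg = os.environ.get("XDG_DATA_HOME", Path.home() / ".local" / "share")
    return Path(xdg) / APP_NAME

if __name__ == "__main__":
    print(data_dir())
```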
How much you wanna bet the "dev" doesn't realise Chromium is a dependency in this scenario?
What do you mean you don't have to restart your terminal software every afternoon when the four windows consume six gigabytes of RAM?
I saw a terminal app a few weeks ago that had AI INTEGRATION of all things.
Warp.dev! It’s the best terminal I’ve used so far, and the best use of AI as well! It’s extremely useful to have some AI help for the thousands of small commands you know exist but rarely use. And it’s very well implemented.
I don't understand what the benefit is here over a terminal with a good non-LLM-based autocomplete. I understand that, theoretically, LLMs can produce better autocomplete, but I don't know if it's really that big of a difference with terminal commands. I guess it's a small shortcut to have the AI there to ask questions, too. It's good to hear it's well implemented, though.
There are two modes of AI integration. The first is a standard LLM in a side panel. It’s search and learning directly in the terminal, with the commands I need directly available to run where I need them. What you get is the same as if you used ChatGPT to answer your questions, then copied the part of the answer you needed into your terminal and ran it.
There is also AI Command Suggestion, where you start typing a command or search prefixed by # and get runnable commands directly back. It’s quite different from auto-complete (there is very good auto-complete and command suggestion as well; I’m just talking about the AI-specific features here).
It’s just a convenient placement of AI at your fingertips when working in the terminal.
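This isn’t how Warp actually does it (their implementation isn’t public), but the general "type a # question, get a command back" pattern can be sketched with any OpenAI-style chat API; the model name and prompt below are assumptions:

```python
# Sketch of turning a natural-language request into a single shell command
# via an LLM. Not affiliated with Warp; purely illustrative.
from openai import OpenAI  # pip install openai; needs OPENAI_API_KEY set

client = OpenAI()

def suggest_command(question: str) -> str:
    """Ask the model for exactly one shell command answering the question."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[
            {"role": "system",
             "content": "Reply with exactly one shell command, no explanation."},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    # e.g. typed in the terminal as "# list the 10 largest files here"
    print(suggest_command("list the 10 largest files in this directory"))
```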
closed source sadly :/
Alas. They have said they plan to open some of the source and potentially everything, but there’s been little progress.
They recently ported to Linux, which I think will earn them much more negative feedback here, so hopefully with more pressure they’ll find the correct copyleft license and open up their source to build trust.
Which one?
came to mind. It uses web technology to make a terminal. I've never used it, so I have no idea if it works well or not.
I stopped using iTerm because it was using too much power while I was on battery. Kitty is by far the best terminal.
Kitty is really popular. I'm using foot; as long as a terminal has the basic functionality I need, best latency is what I care about.
I used it a while back and it works fine. Probably not as efficient as other emulators, but it works well enough.
is-even