I wouldn't be so pessimistic. The Netherlands was also a car dependent place that bulldozed neighbourhoods for highways a few dozen years ago and look at where they are now. Change can happen, it just needs a critical mass of supporters and time, lots of time.
Journalism that has any teeth whatsoever would mostly fix this.
As long as no proper journalistic standards exist, populists can pour their BS down the media drain unquestioned and unchallenged. If that's all you hear about a topic, that's what you'll believe.
I consider Beat Saber to be one part of the essentials pack of modern VR gaming. As a rhythm game fan, it's what got me hooked on VR.
I'm not a rhythm game fan; Beat Saber is the only one I play and it's amazing. It's worth getting VR for this game alone.
!beatsaber@lemmy.ml btw.
> it seems to me Android devices are too important to just let them be abandoned if Google goes full-proprietary
I wish it were that way.
> It wouldn't just be volunteers. Many companies have a huge stake in this OS and would continue to contribute.
If they don't contribute now, I doubt they would then. They have no incentive to improve the AOSP publicly, because that also makes it better for their competitors.
I think all the OEMs would have individual contracts for source code access anyways. It's not like open source is the only possible model for industry-wide code collaboration.
> A majority of the code would/could be forked and maintained.
What makes you think that? If you've ever taken a look at the AOSP source code, you'll know that it's insanely huge. This isn't something a small community of volunteers could reasonably maintain; the same goes for a web browser.
> Or a project like GrapheneOS that's already based on Android code would be expanded to fill the void.
Again, who do you expect to take on that insane task?
GrapheneOS is regular-ass Android with some modifications on top to make it more secure. It's not "based on Android"; it *is* (mostly) Android. It makes some important modifications, but those are details, not basic functionality.
If Google were to cut updates to Android, GrapheneOS would (rightly) make a stink but ultimately have to cease because they cannot maintain the entire rest of the Android code to keep it secure. I suspect they'd rather (loudly) end the project than keep limping along without proper security patches.
> I also have several virtual machines which take up about 100 GiB.
This would be the first thing I'd look into getting rid of.
Could these just be containers instead? What are they storing?
> nix store (15 GiB)
How large is your (I assume home-manager) closure? If this is 2-3 generations worth, that sounds about right.
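If you want to check, something along these lines should print it (a sketch; the profile path is an assumption and differs between standalone home-manager and the NixOS module, and `nix path-info` needs the `nix-command` experimental feature):

```
# Closure size of the current home-manager generation (path may differ on your setup):
nix path-info -Sh ~/.local/state/nix/profiles/home-manager
# Same idea for the whole system closure on NixOS:
nix path-info -Sh /run/current-system
```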
> system libraries (`/usr` is 22.5 GiB).
That's extremely large. Like, 2x of what you'd expect a typical system to have.
You should have a look at what's using all that space using your system package manager.
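To get a first idea, I'd do something like this (a sketch; the second command assumes a Debian-alike, substitute your package manager's equivalent):

```
# Which top-level directories under /usr are the big ones?
sudo du -xh --max-depth=1 /usr | sort -h
# Debian/Ubuntu: the 20 largest installed packages (Installed-Size is in KiB):
dpkg-query -Wf '${Installed-Size}\t${Package}\n' | sort -n | tail -20
```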
> EDIT: `ncdu` says I've stored 129.1 TiB lol
If you're on btrfs and have a non-trivial subvolume setup, you can't just let `ncdu` loose on the root subvolume. You need to take a more principled approach.
For assessing your actual working set size, for instance, you need to ignore snapshots, as those mostly consist of the same extents as your "working set".
You need to keep in mind that snapshots do take up space of their own though, depending on how much you've deleted or overwritten since taking the snapshot.
`btdu` is a great tool for analysing the space usage of a non-trivial btrfs setup in a probabilistic fashion. It's not available in many distros, but you have Nix, and we have it of course ;)
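Roughly how I'd run it from Nix (a sketch; assumes flakes are enabled, the device path is made up, and btdu wants to see the toplevel, so mount subvolid=5 somewhere first):

```
# Mount the btrfs toplevel (subvolid=5) so btdu can see all subvolumes/snapshots:
sudo mkdir -p /mnt/btrfs-root
sudo mount -o subvolid=5 /dev/nvme0n1p2 /mnt/btrfs-root
# Build btdu from nixpkgs and run it as root on the toplevel:
nix build nixpkgs#btdu
sudo ./result/bin/btdu /mnt/btrfs-root
```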
Snapshots are the #1 most likely cause for your space usage woes. Any space usage that you cannot explain using your working set is probably caused by them.
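To see how much they pin down, something along these lines (a sketch; the /.snapshots layout is an assumption based on a snapper-style setup):

```
# List all subvolumes/snapshots on the filesystem:
sudo btrfs subvolume list /
# Per-snapshot usage; "Exclusive" is roughly what deleting that one snapshot would free:
sudo btrfs filesystem du -s /.snapshots/*/snapshot
```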
Also: Are you using transparent compression? IME it can reduce space usage of data that is similar to typical Nix store contents by about half.
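If you want to measure and enable it (a sketch; `compsize` is a separate tool, packaged as e.g. `btrfs-compsize` on Debian, and the zstd level is just my pick):

```
# How well does the Nix store currently compress?
sudo compsize /nix/store
# Enable compression for future writes via a mount option in fstab:
#   compress=zstd:3
# Re-compress existing data in place.
# Careful: defragmenting unshares reflinked/snapshotted extents and can *increase* usage!
sudo btrfs filesystem defragment -r -czstd /nix
```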
You should worry about that in any case. The writing has been on the wall for quite some time now.
Google is only going to "respond" by doing what it's explicitly ordered to do, and of course extremely reluctantly, doing only the bare minimum that could be seen as compliance.
They sure as hell aren't going to open up the google surveillance services unless explicitly and specifically forced to do so by a court.
This is entirely untrue.
> Any part that is already open source will eternally be open source.
Only in the state it's in right now. Google could at any point simply stop releasing the source code with no warning and make all further modifications proprietary.
> there are rules about using open source code in projects that requires them to also be open source.
That is only true for copyleft licenses. Licenses that are merely "open source" without copyleft (usually called "permissive"), such as the Apache License 2.0 that the AOSP is licensed under, don't give two hoots about what you do with the code as long as you give appropriate credit.
The only part of Android that has a copyleft license is the Linux kernel (GPLv2) and I wouldn't really consider it part of the AOSP in practice.
You can do it but I wouldn't recommend it for your use-case.
Caching is nice but only if the data that you need is actually cached. In the real world, this is unfortunately not always the case:
- Data that you haven't used for a while may be evicted. If you need something infrequently, accessing it will be extremely slow.
- The cache layer doesn't know which data is actually important to keep cached and cannot make smart decisions; all it sees is IO operations on blocks. Block-level caching solutions only store data in the cache where they (with their extremely limited view) think it's most beneficial. Bcache, for instance, skips the cache entirely if writing the data to the cache would be slower than the assumed speed of the backing storage, and only caches IO operations below a certain size (see the tunables below).
Having the data that must be fast always stored on fast storage is best.
Manually separating data that needs to be fast from data that doesn't will almost always beat dumb caching, which cannot know what data is most beneficial to put or keep in the cache.
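Bcache at least exposes that size heuristic as a tunable, for what it's worth (a sketch; assumes the cached device shows up as /dev/bcache0):

```
# Sequential IO larger than this bypasses the cache entirely (default 4 MiB):
cat /sys/block/bcache0/bcache/sequential_cutoff
# Setting it to 0 caches everything regardless of IO size:
echo 0 | sudo tee /sys/block/bcache0/bcache/sequential_cutoff
```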
This brings us to the question: what are those 900 GiB you store on your 1 TiB drive?
That would be quite a lot if you only used the machine for regular desktop purposes, so clearly you're storing something else too.
You should look at that data and see what of it actually needs fast access speeds. If you store multimedia files (video, music, pictures etc.), those would be good candidates to instead store on a slower, more cost efficient storage medium.
You mentioned games, which can be quite large these days. If you keep currently unplayed games around because you might play them again at some point and don't want to sit through a large download when that point comes, you could simply create a second games library on the secondary drive and move the not-currently-played but "cached" games into it. They'd remain immediately accessible (albeit with slower loading times), and you can simply move a game back should you actively play it again.
You could even employ a hybrid approach where you carve out a small portion of your (then much emptier) fast storage to cache the slow storage. Just a few dozen GiB of SSD cache can make a huge difference in general HDD usability (e.g. browsing it), and 100-200 GiB could accelerate a good bit of actual data too.
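If you ever go that route, a rough bcache sketch (device names are assumptions, and this destroys the data on both!):

```
# /dev/sdb = HDD to be cached, /dev/sda4 = spare ~150 GiB SSD partition.
# Creating backing and cache device in one command attaches them automatically:
sudo make-bcache -B /dev/sdb -C /dev/sda4
# The combined device shows up as /dev/bcache0; put a filesystem on it:
sudo mkfs.ext4 /dev/bcache0
```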
Basically, what I wanted to ask is whether they're taking this seriously and doing demanding stuff, or whether they're just starting out with basic things. Also, how important is gaming vs. Unreal to them: would they care if it took a bit longer to e.g. compile shaders, if that meant 20% more FPS?
That doesn't seem right; that's only ~18W. Each one of those systems alone will exceed that at idle running 24/7. I'd expect 1-2 orders of magnitude more.
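Back-of-the-envelope with my own assumed numbers: 18 W around the clock is only 0.018 kW × 8760 h ≈ 160 kWh/year, while a single desktop idling at ~50 W already burns ~440 kWh/year, and several machines under load land well into the thousands.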