Who will make these custom battery pouches for old phones, though? Think of my old Poco X3 Pro, not the iPhone 17. I'm sure there are tons of people willing to make batteries for the iPhone 17, but I feel like the interest wanes as the phone gets older?
For older phones, the batteries we buy are likely old leftover stock anyway, if we can even find any in stock, right?
Just to add, VictoriaMetrics covers all 3 signals:
- VictoriaMetrics for metrics. It supports the Prometheus API, so it integrates with Grafana via the Prometheus datasource; it also has its own Grafana datasource with extra functionality.
- VictoriaLogs for logs. Integrates natively with Grafana via the VictoriaLogs datasource.
- VictoriaTraces for traces. It supports the Jaeger API, so it integrates with Grafana via the Jaeger datasource.
All 3 solutions support alerting, are managed by the same team, are Apache-2.0 licensed, and are focused on resource efficiency and simplicity.
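To make the integration point concrete, here's a sketch of a Grafana datasource provisioning file for the first two components. The container hostnames are assumptions about your deployment; 8428 and 9428 are the default ports for single-node VictoriaMetrics and VictoriaLogs, and the logs datasource plugin has to be installed separately:

```yaml
apiVersion: 1
datasources:
  - name: VictoriaMetrics
    type: prometheus            # VictoriaMetrics speaks the Prometheus query API
    access: proxy
    url: http://victoriametrics:8428
  - name: VictoriaLogs
    type: victoriametrics-logs-datasource   # separate Grafana plugin
    access: proxy
    url: http://victorialogs:9428
```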
That hasn't been true for at least 15 years. I was a k-root DNS operator then, and we ran several software stacks on each cluster in case one had a bug.
>It's a bit wasteful to provision your computers so that all the cold data lives in expensive RAM.
But that's a job applications are already doing. They put the data that's being actively worked on in RAM and leave all the rest in storage. Why would you need swap once you can already fit the entire working set in RAM?
Because then you have more active working memory as infrequently used pages are moved to compressed swap and can be used for more page cache or just normal resident memory.
Swap ram by itself would be stupid but no one doing this isn’t also turning on compression.
> Swap ram by itself would be stupid but no one doing this isn’t also turning on compression.
I'm not sure what you mean here? Swapping out infrequently accessed pages to disk to make space for more disk cache makes sense with or without compression.
Swapping out to RAM without compression is stupid - then you’re just shuffling pages around in memory. Compression is key so that you free up space. Swap to disk is separate.
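For context, "compressed swap in RAM" on Linux usually means zram. A minimal setup sketch (the 4G size and zstd algorithm are arbitrary choices; requires root, and module availability depends on the distro):

```shell
# Load the zram module and grab a free device
modprobe zram
dev=$(zramctl --find --size 4G --algorithm zstd)

# Format it as swap and enable it with higher priority than any disk swap
mkswap "$dev"
swapon --priority 100 "$dev"

# Verify
zramctl
swapon --show
```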
>Because then you have more active working memory as infrequently used pages are moved to compressed swap and can be used for more page cache or just normal resident memory.
Uhh... A VMM that swaps out to disk an allocated page to make room for more disk cache would be braindead. The process has allocated that memory to use it. The kernel doesn't have enough information to deem disk cache a higher priority. The only thing that should cause it to be swapped out is either another process or the kernel requesting memory.
> A VMM that swaps out to disk an allocated page to make room for more disk cache would be braindead
Claiming any decision is “brain dead” in something as heuristic-heavy and impossible to compute optimally as resident-page management is quite a statement to make. This is a form of the knapsack problem (NP-complete at least), with the added wrinkle of time: the items are needed in some specific but indeterminate order in the future, and there's a whole bunch of different workloads and workload permutations that alter this.
To drive this point home in case you disagree: what's dumber to swap out to disk, an allocated page (from the kernel's perspective) that's just sitting in the free list of the process's userspace allocator, or a frequently accessed page of data?
Now, I agree that VMMs may not do this, because it's difficult to come up with these kinds of scenarios without penalizing the general case, and, more important than raw performance, the mechanism has to be explainable and understandable to others. But claiming it's a braindead option to even consider is IMHO a bridge too far.
You mean to tell me most applications you've ever used read the entire file system, loading every file into memory, and rely on the OS to move the unused stuff to swap?
A silly but realistic example: lots of applications leak a bit of memory here and there.
Almost by definition, that leaked memory is never accessed again, so it's very cold. But the applications don't put it on disk by themselves. (If the app's developers knew which specific bit was leaking, they'd rather fix the leak than write it to disk.)
That's just recognizing that there's a spectrum of hotness to data. But the question remains: if all the data that the application wants to keep in memory does fit in memory, why do you need swap?
> Running out of memory kills performance. It is better to kill the VM and restart it so that any active VM remains low latency.
Right, but you seem to be missing what I'm getting at.
Memory exhaustion is bad, regardless of swap or not.
Swap gets you a better performing machine because you can swap out shit to disk and use that ram for vfs cache.
The whole "low latency" and "I want my VM to die quicker" argument is tacitly saying that you haven't right-sized your instances, your programme is shit, and you don't have decent monitoring.
Like if you're hovering on 90% ram used, then your machine is too small, unless you have decent bounds/cgroups to enforce memory limits.
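Those bounds can be enforced with cgroups. A sketch using systemd (unit names and limit values are made up; assumes cgroup v2):

```shell
# Hard cap a transient process at 512M of RAM and 1G of swap;
# the kernel reclaims pages before resorting to the OOM killer
systemd-run --scope -p MemoryMax=512M -p MemorySwapMax=1G ./my-server

# Or apply limits to an existing unit
systemctl set-property myapp.service MemoryHigh=400M MemoryMax=512M
```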
If you mostly care about data integrity, then a plain RAID-1 mirror over three disks is better than RAIDZ. Correlated drive failures are not uncommon, especially if they are from the same batch.
I'd also recommend an offline backup, like a USB-connected drive you mostly leave disconnected. If your system is compromised, the attacker could encrypt everything, and they can probably reach an always-connected backup and encrypt that too.
With RAID 1 (across 3 disks), any two drives can fail without loss of data or availability. That's pretty cool.
With RAIDZ2 (whether across 3 disks or more than 3; it's flexible that way), any two drives can fail without loss of data or availability. At least superficially, that's ~equally cool.
That said: If something more like plain-Jane RAID 1 mirroring is desired, then ZFS can do that instead of RAIDZ (that's what the mirror command is for).
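The two layouts side by side, as a sketch (pool name and device paths are placeholders):

```shell
# 3-way mirror: any 2 of the 3 disks can fail; usable space = 1 disk
zpool create tank mirror /dev/sda /dev/sdb /dev/sdc

# RAIDZ2 over 4 disks: any 2 can fail; usable space is roughly 2 disks
zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd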
And it can do this while still providing efficient snapshots (accidentally deleted or otherwise ruined a file last week? no problem!), fast transparent data compression, efficient and correct incremental backups, and the whole rest of the gamut of stuff that ZFS just boringly (read: reliably, hands-off) does as built-in functions.
It's pretty good stuff.
All that good stuff works fine with single disks, too. Including redundancy: ZFS can use copies=2 to store multiple (in this case, 2) copies of everything, which can allow for reading good data from single disks that are currently exhibiting bitrot.
This property is carried with the dataset -- not the pool. In this way, a person can have their extra-important data [their personal writings, or system configs from /etc, or whatever probably relatively-small data] stored with extra copies, and their less-important (probably larger) stuff stored with just one copy...all on one single disk, and without making any lasting decisions like allocating partitions in advance (because ZFS simply doesn't operate using concepts like hard-defined partitions).
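A sketch of what that looks like in practice (the pool and dataset names are placeholders):

```shell
# Datasets are cheap, and properties apply per dataset, not per pool
zfs create backup/important
zfs set copies=2 backup/important   # store 2 copies of everything here
zfs get copies backup/important     # confirm the property
```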
I agree that keeping an offline backup is also good because it provides options for some other kinds of disasters -- in particular, deliberate and malicious ones. I'd like to add that this single normally-offline disk may as well be using ZFS, if for no other reason than bitrot detection.
It's great to have an offline backup even if it is just a manually-connected USB drive that sits on a shelf.
But we enter a new echelon of bad if that backup is trusted and presumed to be good even when it has suffered unreported bitrot:
Suppose a bad actor trashes a filesystem. A user stews about this for a bit (and maybe reconsiders some life choices, like not becoming an Amish leatherworker), and decides to restore from the single-disk backup that's sitting right there (it might be a few days old or whatever, but they decide it's OK).
And that's sounding pretty good, except: With most filesystems, we have no way to tell if that backup drive is suffering from bitrot. It contains only presumably good data. But that presumed-good data is soon to become the golden sample from which all future backups are made: When that backup has rotten data, then it silently poisons the present system and all future backups of that system.
If that offline disk instead uses ZFS, then the system detects and reports the rot condition automatically upon restoration -- just in the normal course of reading the disk, because that's how ZFS do. This allows the user to make informed decisions that are based on facts instead of blind trust.
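The user can even check the whole backup disk proactively before trusting it. A sketch, assuming the backup pool is named "backup":

```shell
# Attach the offline disk and verify every block end to end
zpool import backup
zpool scrub backup
zpool status -v backup   # lists checksum errors and any affected files
```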
GitHub Actions has some rough edges around caching, but all the packaging is totally unimportant and best avoided.