So what are these tiny portable ones? I always assumed they were digitally augmented or virtual even - is there a minimum size for it to be a "real" telescope?
This one is a bit of a joke with my telescope making friends, but it ticks all the boxes of what I consider a real telescope. You can actually buy 76mm entry-level telescopes, but they often have an unstable mount and bad optics. Starting at 150mm, you already have a lot of punch under dark skies. Visual use, live digitally enhanced viewing, and astrophotography are three different hobbies.
I've been on the 1M context window with Claude since 4.0 - it gets pretty expensive when you run 1M context on a long-running project (mostly using it in Cline for coding). I think they've realized that more context length = more $ with most agentic coding workflows on the API.
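To see why agentic sessions blow up in cost: each turn typically resends the whole conversation as input, so billed input tokens grow roughly quadratically with turn count. A back-of-envelope sketch (turn counts and token sizes are made up for illustration):

```python
# Rough model of cumulative billed input tokens in an agentic loop,
# assuming each turn resends the entire history so far.
def total_input_tokens(turns, tokens_per_turn):
    # turn k resends everything from turns 1..k
    return sum(k * tokens_per_turn for k in range(1, turns + 1))

short_session = total_input_tokens(10, 5_000)   # 10 turns
long_session = total_input_tokens(100, 5_000)   # 100 turns

print(short_session)  # 275000
print(long_session)   # 25250000
```

Ten turns bills ~275k input tokens; a hundred turns bills ~25M - roughly 100x the cost for 10x the turns, which is why long-context agentic work gets expensive fast.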
Dual 3090s (24GB each) on 8x+8x PCIe has been a really reliable setup for me (with NVLink bridge... even though it's relatively low bandwidth compared to Tesla NVLink, it's better than going over PCIe!)
48GB of VRAM and lots of CUDA cores - hard to beat that value atm.
If you want to go even further, you can get an 8x V100 32GB server complete with 512GB RAM and NVLink switching for $7000 USD from unixsurplus (ebay.com/itm/146589457908), which can run even bigger models with healthy throughput. You would need 240V power to run that in a home lab environment though.
Depends where you are plugging them in - but yes, they're older gen. Despite that, 8x V100s will outperform most of what you can buy at that price simply by way of memory capacity and NVLink bandwidth. If you want to practically run a local model that takes 200GB of memory (Devstral-2-123B-Instruct-2512 for example, or GPT-OSS-120B with a long context window) without resorting to aggressive ggufs or memory swapping, you don't have many cheaper options. You can also parallelize several models on one node to get some additional throughput for bulk jobs.
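The fit works out because tensor parallelism splits the weights evenly across GPUs. A back-of-envelope check (the 20% headroom figure for KV cache, activations, and CUDA context is my own rough assumption, not a measured number):

```python
# Sketch: does a ~200 GB model fit across 8x 32 GB GPUs with
# tensor parallelism? Illustrative arithmetic only.
def tp_fit(model_gb, n_gpus, gpu_gb, headroom_frac=0.2):
    # Weights are sharded evenly; reserve headroom for KV cache,
    # activations, and CUDA context (assumed 20% here).
    per_gpu = model_gb / n_gpus
    budget = gpu_gb * (1 - headroom_frac)
    return per_gpu, per_gpu <= budget

per_gpu, fits = tp_fit(200, 8, 32)
print(f"{per_gpu:.1f} GB of weights per GPU, fits: {fits}")
# 25.0 GB of weights per GPU, fits: True
```

25GB of weights per 32GB card leaves only a few GB for KV cache, so long contexts would still be tight - but it runs without offloading, which is the point.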
Came with receipts lol - hopefully they can repro and fix this, but the fact it went unnoticed for 8 months kind of hints at how little people are using it.
Yeah you can really see the resize comparing the before and after.
https://jpst.it/4KgSB
There is one parity bit per 8 data bits, which is decently resilient - plus recovery is pretty simple on the occasional bit flip. Combine that with the fact that you can reference other sources to make up for missing/corrupted files, and I think the chance this is recoverable is pretty high, provided the drive reading it is in good condition. Checksumming at the source was unfortunately not commonplace until Version 7 Unix, so it's unlikely there were any software-level integrity checks here. The tape looks like it was stored in a sealed container, which is a very good sign. Those older tapes are actually more resilient than later generations of tape, and don't usually degrade the same way even with exposure to humidity.
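The "one parity bit per 8 data bits" scheme maps to 9-track tape: each byte is striped across 8 data tracks plus a parity track. Parity alone only detects a flipped bit, but when you know which track is bad (an erasure - e.g. a dead head or a consistently failing track), that track can be rebuilt as the XOR of the other eight. A minimal sketch of that arithmetic (the track layout here is simplified for illustration):

```python
# Sketch of 9-track-style erasure recovery: 8 data tracks plus one
# even-parity track. XOR of all 9 tracks is zero, so any single
# known-bad track equals the XOR of the remaining eight.
def xor_fold(tracks):
    # bitwise XOR of several equal-length byte strings
    out = bytearray(len(tracks[0]))
    for t in tracks:
        for i, b in enumerate(t):
            out[i] ^= b
    return bytes(out)

def rebuild(tracks, bad_idx):
    # reconstruct the known-bad track from the good ones
    good = [t for i, t in enumerate(tracks) if i != bad_idx]
    return xor_fold(good)

# hypothetical data: 8 data tracks of 4 bytes each
data = [bytes([(i * 37 + 5) % 256] * 4) for i in range(8)]
tracks = data + [xor_fold(data)]  # append the parity track

assert rebuild(tracks, 2) == data[2]  # recover a "lost" data track
assert rebuild(tracks, 8) == tracks[8]  # or the parity track itself
```

This is also why a good-condition drive matters so much: the scheme tolerates one bad track per byte, but a misaligned or dirty head that corrupts two tracks at once defeats it.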