I think there’s a difference between doomsday framing and preparedness.
Offline access and local models aren’t about assuming collapse—they’re about treating knowledge as infrastructure instead of something implicitly guaranteed.
They sell Raspberry Pis under the names PrepperDisk and Doom Box. They probably thought it was funny.
The general idea of having knowledge backups / offline access is reasonable to me, but the doomsday framing seeds the notion that it's about collapse, and that takes away from something that is more generally useful.
On the front page there's a table with those descriptions and a "Price" column. Now that you point it out, I have no clue what "Price" means; it isn't explained either. I guess the whole communication on the page is rather confusing.
If I got anything wrong, I don't see it as _my_ mistake.
If current frontier online LLMs are made inaccessible by a local or global cataclysmic event, running models locally will be the least of your concerns.
This isn't prepping for anything; it's cosplaying as a vault dweller.
P.S. Having TED talks as part of the “educational” curriculum of this project is probably the biggest circle jerk imaginable.
It’s a tool, and now part of the culture—so people are naturally using it.
What it seems to reveal is less about the tool and more about us. The fragmentation was already there.
Maybe the response is to slow down a bit—revisit what matters, and use it with some sense of proportion and coherence.
That’s a generous way to think about downvotes. Seeing them as signal rather than rejection leaves room to reflect and adjust.
I’m new here and come more from a philosophical background than a technical one, so I’m still learning the norms. One thing I’m sensitive to in communities like this is who ends up informally deciding what counts as legitimate participation.
Hello and welcome. I appreciate your philosophical background; we need more of that around here imo. In a totally unrelated question /s, have you seen the movie Get Out by Jordan Peele? :P For philosophical discussions of AI, I much prefer the Alignment Forum. For thoughtful, critical, charitable discussion, I recommend LessWrong by leaps and bounds, as long as one doesn't demand brevity. Also, the bar for participation can feel higher over there. I'm ok with that because it encourages people to build up a lot of shared foundations for how we communicate with each other.
This resonates with me. Intent is hard to infer, so it seems better to engage with the content itself. Most ideas are recombinations of earlier ones anyway—the interesting part is the push and pull of refining thoughts together.
Humans already revise and refine their thinking. Tools just compress that process and help filter signal from noise. The meaning still originates with the person.
This assumes that the vast majority of people use the tools this way.
Consider a much more cynical view where people are strictly self-interested and use these tools for engagement farming and self-promotion. There's a good chance the meaning did not originate with the person, and now these people have tools to outsource their parasitic intentions.
Intent is hard to infer, so it seems better to assume good faith and judge the comment itself. Thinking aids might just lower the barrier for people to participate in technical discussions.
DeepSeek (reasoning) and Gemini (multimodal) have been useful for me — especially when I want stronger pushback or a different angle. What are you hoping to get that you’re not getting from the usual set?
I recently deleted my ChatGPT account and was just looking around at the other options. I kinda like fast responses, but Grok and Perplexity are behind a perpetual Cloudflare check for me. I'm looking for something that doesn't spend forever reasoning out the answer to some basic quick question.
It's interesting: as I type that out, it makes me wonder why I don't just go back to the search engine, since its AI summaries have been getting better.
Finally, I do also like the longer reasoning when I have a tough question; I usually copy-paste it into various models and compare their responses.
I’ve had similar friction experiences — especially when reasoning-heavy modes take longer or get retried. That repels me too.
On the search engine comparison: do you feel LLMs reduce cognitive load because they maintain context, whereas search requires more manual synthesis?
Also curious — do you think the frustration is mostly with the model itself, or with the serving/infrastructure layer (Cloudflare, routing, batching, etc.) around it? Both comments seem to point at that layer in different ways.
Do you customize or personalize these at all, or use them just out of the box? I use Gemini for image generation and as an assistant through my Android, although not frequently. As a chat assistant, the online Google UI is a turnoff for me personally.
For tone, I prefer efficient — higher signal-to-noise.
In custom instructions I’ll sometimes write something like “tell it like it is.” Interestingly, some models then start with “okay, I’ll tell you like it is…” which makes me wonder whether that’s psychological framing, operational behavior, or both.
Curious why you asked — have you found customization meaningfully changes performance?
Since I primarily used ChatGPT, I picked up a bunch of prompts people shared in order to optimize for better results. I don't know if this is relevant anymore; I think they've fixed quite a bit, and you can now customize tone and warmth directly.
My prompts were mostly about avoiding filler, being more direct, no emojis, and so on.
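Roughly along these lines, paraphrasing rather than quoting the exact text:

    Be direct and concise. Skip filler, apologies, and pleasantries.
    No emojis. If you are unsure about something, say so instead of
    padding the answer.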
Author here.
This proposes a structural refusal boundary based on the directional reduction of ambiguity toward an irreversible structural transition (PT→IST), rather than on intent inference.
Interested in critique — especially whether this collapses into existing reachability formulations.
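To make that concrete, here is a deliberately oversimplified sketch of the boundary in Python; the names (ambiguity, step, is_irreversible) are illustrative placeholders, not the paper's actual formalism:

    def should_refuse(state, action, step, ambiguity, is_irreversible):
        """Refuse when an action directionally reduces ambiguity toward
        an irreversible structural transition. Note there is no intent
        model anywhere; the check is purely structural."""
        next_state = step(state, action)  # simulate the candidate transition
        reduces_ambiguity = ambiguity(next_state) < ambiguity(state)
        return reduces_ambiguity and is_irreversible(next_state)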
Treating knowledge as infrastructure feels more like resilience than pessimism.