> It's like someone claiming they unlocked ultimate productivity by washing dishes in parallel with doing laundry and cleaning their house.
In this case you have to take a leap of faith and assume that Claude or Codex will get each task done correctly enough that your house won't burn down.
"While LLMs are amazing, they can't run your business by themselves... We ground AI in tight guardrails and deterministic frameworks, optimizing LLMs to deliver enterprise-grade reliability. Trusted. Reliable. Secure."
this sounds like it was copy-pasted straight from an LLM
have you built stuff with LLMs before? genuine question, because nondeterministic and deterministic workflows are leagues apart in what they can accomplish.
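To make that distinction concrete, here is a minimal sketch of what "tight guardrails" around a nondeterministic LLM can look like: the model's free-form output is forced through deterministic validation before anything downstream acts on it. `call_llm`, the schema keys, and the action whitelist are all hypothetical placeholders, not any vendor's real API.

```python
# Minimal sketch: a deterministic guardrail around a nondeterministic LLM call.
import json

SCHEMA_KEYS = {"action", "amount"}          # the only fields we accept
ALLOWED_ACTIONS = {"refund", "escalate"}    # deterministic whitelist

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in: plug in your actual LLM client here.
    raise NotImplementedError

def guarded_decision(prompt: str, retries: int = 3) -> dict:
    """Retry until the model's output passes deterministic validation."""
    for _ in range(retries):
        raw = call_llm(prompt)
        try:
            out = json.loads(raw)
        except json.JSONDecodeError:
            continue  # malformed output: retry, never pass it downstream
        if isinstance(out, dict) and set(out) == SCHEMA_KEYS \
                and out["action"] in ALLOWED_ACTIONS:
            return out  # only pre-approved actions ever escape
    raise ValueError("model never produced a valid decision")
```

The retry loop and the whitelist are deterministic even though the model is not; the model can only ever hand back one of the outcomes you approved in advance.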
Slack claims it does not train generative AI models on customer data, but this hinges on a narrow definition of “generative.” In practice, Slack does use customer data, including messages, files, and usage behavior, to train global machine learning models for features like search, emoji suggestions, and autocomplete. Your data is included by default unless you manually opt out.
To opt out, a Workspace or Org Owner must email [email protected] with the subject line “Slack Global model opt-out request” and include the workspace or org URL. The instructions are not prominently linked and are buried several clicks deep behind multiple marketing pages. The Privacy Principles page concedes this use and confirms the opt-out process.
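If you administer many workspaces, the opt-out email itself can be scripted. A minimal sketch using Python's standard smtplib; the sender, recipient, SMTP host, and workspace URLs below are all placeholders (the actual address is redacted above), and only the subject line comes from Slack's instructions:

```python
# Sketch: send the Slack opt-out email for each workspace you own.
# All addresses and the SMTP host are hypothetical placeholders.
import smtplib
from email.message import EmailMessage

def send_opt_out(workspace_url: str) -> None:
    msg = EmailMessage()
    msg["Subject"] = "Slack Global model opt-out request"  # per Slack's docs
    msg["From"] = "owner@example.com"        # placeholder sender
    msg["To"] = "feedback@example.invalid"   # real address redacted above
    msg.set_content(f"Please opt out our workspace: {workspace_url}")
    with smtplib.SMTP("smtp.example.com") as smtp:  # placeholder relay
        smtp.send_message(msg)

for url in ["https://acme.slack.com", "https://acme-dev.slack.com"]:
    send_opt_out(url)
```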
Chunking a codebase you entirely own into packages is like intentionally making your life miserable: you impose on yourself the same kind of volatility you would otherwise only find when building a Linux distribution. Calling them "packages" is a misnomer.
Yeah, legit interested in their toolchain. I tried Pants and had a bad experience. Bazel is too heavyweight imo and doesn't handle a variety of dependency types well.
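For context on what those tools actually buy you: in a monorepo build system, "packages" are just build targets with declared internal dependencies, so nothing is published or version-pinned. A minimal Bazel sketch in Starlark (Bazel's Python-like config dialect); the target names and file layout are invented for illustration:

```python
# BUILD.bazel sketch (Starlark). Targets and layout are hypothetical.
load("@rules_python//python:defs.bzl", "py_library")

# "core" is an internal target, not a published package: no version,
# no registry, just source files living in the same repo.
py_library(
    name = "core",
    srcs = glob(["core/**/*.py"]),
)

# "api" depends on "core" directly by label, so a change to core is
# picked up on the next build with none of the release/pin/upgrade churn.
py_library(
    name = "api",
    srcs = glob(["api/**/*.py"]),
    deps = [":core"],
)
```

Whether that's worth Bazel's weight is exactly the judgment call being made above.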
The docs contain a sentence saying they retain any chats they are legally required to retain. That is always the risk when doing business with law-abiding companies that store any data on you.
Agree. But it's worth noting that they already have a bit of a hedge in the description for private mode:
> This chat won't appear in history, use or update ChatGPT's memory, or be used to train our models. For safety purposes, we may keep a copy of this chat for up to 30 days.
The "may keep a copy" is doing a lot of work in that sentence.
"for up to 30 days" though. If they are being kept in perpetuity always they should update the copy to say "This chat won't appear in your history, we retain a copy indefinitely"