Hacker News | cheriot's comments

Does Atlassian still have the tech debt that led to extended outages? https://newsletter.pragmaticengineer.com/p/scoop-atlassian

This is very cool.

I wonder if there's a way to control routing client side and remove the list of MAC addresses. E.g., manage DNS for customers (upsell ad blocking!) and CNAME the unifi entry to a customer-specific vhost.


Thank you! DNS-based adoption works well for this. You point the unifi hostname at the tenant's subdomain and the Host header handles routing from there. We also have a DHCP Option 43 generator for setups where DNS isn't practical.
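For the client-side DNS idea upthread, a hypothetical dnsmasq entry on the customer's router could look like this (the hostnames are illustrative, not from any product docs):

```
# Hypothetical: alias the bare "unifi" adoption hostname to a
# tenant-specific vhost; the Host header then routes the tenant.
cname=unifi,acme.controller.example.com
```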

Is there any redeeming quality of MCP versus a skill with a CLI tool? Right now the latter looks like a clear winner.

Maybe MCP can help segregate auto-approve vs ask more cleanly, but I don't actually see that being done.


MCP defines a consistent authentication protocol. This is the real issue with CLIs: each CLI can (and will) have a different way of handling authentication (environment variables, config commands, JSON, YAML, etc.).

But tbh there's no reason agents can't abstract this away. As long as a CLI has a --help or similar (which 99% do) with a description of how to log in, the agent can figure it out for you. This takes context and tool calls, though, so it's not hugely efficient.


That's fair. I just really don't like the way MCP gives the tool author control of my context. It's worth setting up some env vars and config files to avoid them.

This is a general thing with agent orchestration. A good sandbox does something for your local environment, but nothing for remote machines/APIs.

I can't say this loudly enough, "an LLM with untrusted input produces untrusted output (especially tool calls)." Tracking sources of untrusted input with LLMs will be much harder than traditional [SQL] injection. Read the logs of something exposed to a malicious user and you're toast.


Given the "random" nature of language models even fully trusted input can produce untrusted output.

"Find emails that are okay to delete, and check with me before deleting them" can easily turn into "okay deleting all your emails", as so many examples posted online are showing.

I have found this myself with coding agents. I can put "don't auto-commit any changes" in the README, in model instruction files, and at the start of every prompt, but as soon as the context window gets large enough the directive is forgotten, and there's a high chance the agent will push a commit without my explicit permission.


Information flow control is a solid mindset, but it's operationally complex and doesn't actually safeguard you from the main problem.

Put an OpenClaw-like thing in your environment, and it'll paperclip your business-critical database without any malicious intent involved.


Even an LLM with trusted input produces untrusted output.


That sounds off. There are specific situations where the IRS will settle for less than the amount owed, and they're not pleasant.


Not true. It's really common for 1099 people.


Being broke is surprisingly common. I'm just saying it's not some cheat code.


Sandboxes are needed, but are only one piece of the puzzle. I think it's worth categorizing the trust issue into

1. An LLM given untrusted input produces untrusted output and should only be able to generate something for human review or that's verifiably safe.

2. Even an LLM without malicious input will occasionally do something insane and needs guardrails.

There's a gnarly orchestration problem I don't see anyone working on yet.


I think at least a few teams are working on information flow control systems for orchestrating secured agents with minimal permissions. It's a critical area to address if we really want agents out there doing arbitrary useful stuff for us, safely.


Page me when Codex can run the right version of Node. Are we all changing the system Node version to match the current project again?

    [shell_environment_policy]
    inherit = "all"
    experimental_use_profile = true

    [shell_environment_policy.set]
    NVM_DIR = "[redacted]"
    PATH = "[redacted]"


It worked for me after I configured mise. I needed the mise setup in both `.zprofile` and `.zshrc` for Codex to pick it up. I think mise sets up itself in one of those by default, but Codex uses the other. I expect the same problem would present itself with nvm.

I.e. `eval "$(/Users/max/.local/bin/mise activate zsh)"` in `.zprofile` and `.zshrc`

Then Codex will respect whatever node you've set as default, e.g.:

    mise install node@24
    mise use -g node@24
Codex might respect your project-local `.nvmrc` or `mise.toml` with this setup, but I'm not certain. I was just happy to get Codex to not use a version of node installed by brew (as a dependency of some other package).


Thanks! I moved my PATH setup to .zprofile and everything works now. Brew had added itself to .zprofile and everything else was in .zshrc.


Glad it worked out. And I agree it’s annoying that this doesn’t just work out of the box. It’s not like node/nvm are uncommon, so you’d think they would have run into the issue when using their own tool.


If you are already using Volta in your project, Codex will use the correct version, assuming you run it in the same directory as your package.json and that file has "volta": { "node": "xx.x.x", "npm": "xx.x.x" } configured. I personally use a Dockerfile to set up the container with Volta installed: set up Volta, configure at least one version of Node, then install Codex in the Docker image. One caveat: you need to update Codex with the initial version of Node if it's not the same as your project's. If you use one image per project you should never run into this, but I have been using one image and firing up a container for each project, so it was great to see Codex able to use the correct version configured for the project via Volta.
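The package.json entry referred to above looks roughly like this (version numbers are placeholders, not recommendations):

```json
{
  "volta": {
    "node": "20.11.0",
    "npm": "10.2.4"
  }
}
```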

From other comments it sounds like Codex using mise for its internal tools can cause issues, but I'm not sure that is 100% Codex's fault if the project doesn't already define the node/npm versions in the package.json "engines" entry. If it's ignoring that entry then I guess this is a valid complaint, but I'm not sure how Codex is supposed to guess which tool versions to use for different projects.

Would you mind adding more details as to the exact setup where Codex is using the wrong version?


Codex uses a login shell, so moving my PATH setup to .zprofile fixed it (previously it was in .zshrc). Now we just need to write this on the internet enough times that future Codex can suggest the fix :p
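For reference, a sketch of the zsh startup files involved. The key point is that a non-interactive login shell (like `zsh -lc ...`) reads `.zprofile` but skips `.zshrc`; the PATH line is illustrative:

```shell
# zsh startup files:
#   ~/.zshenv   - always sourced
#   ~/.zprofile - login shells, including non-interactive `zsh -lc ...`
#   ~/.zshrc    - interactive shells only, so `zsh -lc ...` skips it
# Hence PATH edits belong in ~/.zprofile if an agent uses a login shell:
export PATH="$HOME/.local/bin:$PATH"
```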


Both Claude and Gemini (the web variants, not CLI) tried to downgrade my .NET 10 projects to .NET 9 at least a few times.


> I don’t agree with the blanket advice of “just use Postgres.”

I take it as meaning use Postgres until there's a reason not to, i.e. build for the scale / growth rate you have, not "how will this handle the 100 million users I dream of?" A simpler tech stack is simpler to iterate on.


Yes. That's a good framing. PostgreSQL is a good default for online LOB-y things. There are all sorts of reasons to use something other than PostgreSQL, but raw performance at scale becomes such a reason later than you think.

Cloud providers will rent you enormous beasts of machines that, while expensive, will remain cheaper than a rewrite and migration for a long time.


Postgres on modern hardware can likely service 100 million users unless you are doing something data intensive with them.

You can get a few hundred TB of flash in one box these days. You need to average over 1 MB of database data per user to get over 100 TB with only 100 million users. Even then, you can mostly just shard your DB.
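As a quick sanity check on the arithmetic above (decimal units, numbers from the comment):

```python
# Back-of-envelope: how much data per user does it take for
# 100 million users to reach 100 TB? (decimal units)
users = 100_000_000
bytes_per_user = 1_000_000  # 1 MB average per user

total_tb = users * bytes_per_user / 1_000_000_000_000
print(total_tb)  # 100.0 TB at exactly 1 MB per user
```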


What about throughput? How many times can postgres commit per second on NVMe flash?


You can do about 100k commits per second, but this also partly depends on the CPU you attach to it. It also varies with how complicated the queries are.

With 100 million DAU, you're often going to have problems with this rate unless you batch your commits. With 100 million user accounts (or MAU), you may be fine.
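A rough sketch of why 100 million DAU strains that commit budget (the 100k commits/s figure is from this thread; the per-user result is an average, not a measurement):

```python
# If one box sustains ~100k commits/s, how many commits per user per
# day does that leave for 100 million daily active users?
commits_per_sec = 100_000
seconds_per_day = 86_400
dau = 100_000_000

daily_budget = commits_per_sec * seconds_per_day  # 8.64 billion commits/day
per_user = daily_budget / dau
print(per_user)  # ~86 commits per user per day, before any batching
```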


That should be enough for most apps.


I love CC, but there are so many bugs. Even the intended behavior is a mess: CC's VS Code UI bash tool stopped using my .zshrc, so now it runs the wrong version of everything.


This is the case for all AI tools right now. Sooo bad.

Cursor, Claude code, Claude in the browser, and don't even get me started on Gemini.


Codex is a bit better bug-wise but less enjoyable to use than CC. The larger context window and the superiority of GPT 5.2 over Opus make it mostly worth the switch.


The author's answers are toward the bottom of the README, https://github.com/jordanhubbard/nanolang?tab=readme-ov-file...


I understand the effort, and it seems like a nice little language, but wouldn't it make more sense to target an already existing C--, QBE, LLVM IR, or similar? There must be "simpler C" languages already, which sounds more useful given that LLMs must have been trained on them.

