Hacker News | kaydub's comments

At my job and for personal projects I pay per token with Claude and I've had no problems at all with it. No slowdowns, no "throttling", nothing.

I'm honestly surprised how many people have subscriptions and are expecting Anthropic to eat the cost lol


Have you been to Orcas Island or the San Juan islands in Washington?

I think Acadia is a great National Park, especially for the East coast. But I moved to Seattle a few years back and only more recently got out to Orcas Island. It's insane how similar it is to Acadia.


I live in New England and did a weekend sea kayaking trip to the San Juans for reasons I don't really recall when I was in the area anyway. My recollection was that it was fun but, as someone for whom Maine was just a few hour drive away, not something I'd make a point of doing again given how similar the two areas were.

Interesting perspective, thank you.

Acadia is a beautiful park and it definitely reaches National Park level for the East coast. I don't really agree with your assessment there, especially since it doesn't sound like you saw much of Acadia if you think it's only Cadillac mountain and a single hike.

The East coast just doesn't have as many untouched lands as the West. West coast parks, pretty much all the parks West of the Mississippi, are next level. If you're accustomed to that then none of the East coast parks are going to wow you.


Yeah, I've gotta use skills more. I didn't quite get it until this last week when I used a skill that I made. I didn't know the skill got pulled into context ONLY for the single command being run with the skill; I thought the skill got pulled into context and stayed there once it was called.

That does seem very powerful now that I've had some time to think about it.
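That behavior is easiest to see in the skill file itself: only the frontmatter (name and description) stays resident, while the body is loaded just for the invocation that triggers it. A minimal sketch, assuming Claude Code's `SKILL.md` layout (placed under `.claude/skills/jira-tickets/`); the skill name and instructions here are invented:

```markdown
---
name: jira-tickets
description: Formats Epics and Stories to our Jira conventions. Use when creating or updating Jira issues.
---

# Jira ticket formatting

- Epics: one-line summary, no implementation detail in the description.
- Stories: acceptance criteria as a checklist, linked back to the Epic.
```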


Or you could argue that if the assistant needs so much modular context your tools are defective.


I avoid most MCPs. They tend to take more context than just having the LLM write scripts and ingest their outputs. Trying to use the JIRA MCP was a mess; way better to have the LLM hit the API, figure out our custom schemas, then write a couple scripts to do exactly what I need to do. Now those scripts are reusable, with way less context used.
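The scripted approach can be sketched in a few lines. This assumes a Jira Cloud instance with basic-auth API tokens; the env var names, project key, and field list are invented for illustration:

```python
"""Sketch of the "skip the MCP, just script the API" approach.
Assumes Jira Cloud basic auth via JIRA_URL, JIRA_EMAIL, JIRA_TOKEN
env vars; project key and fields are placeholders."""
import base64
import json
import os
import urllib.parse
import urllib.request


def build_search_url(base_url, project, status="In Progress", max_results=50):
    """Build a URL for Jira's /rest/api/2/search endpoint from a JQL query."""
    jql = f'project = "{project}" AND status = "{status}" ORDER BY updated DESC'
    query = urllib.parse.urlencode({
        "jql": jql,
        "maxResults": max_results,
        "fields": "summary,status,assignee",
    })
    return f"{base_url}/rest/api/2/search?{query}"


def summarize_issues(payload):
    """Flatten a Jira search response into one line per issue."""
    return [f'{i["key"]}: {i["fields"]["summary"]}'
            for i in payload.get("issues", [])]


if __name__ == "__main__":
    url = build_search_url(os.environ["JIRA_URL"], "PROJ")
    token = base64.b64encode(
        f'{os.environ["JIRA_EMAIL"]}:{os.environ["JIRA_TOKEN"]}'.encode()
    ).decode()
    req = urllib.request.Request(url, headers={"Authorization": f"Basic {token}"})
    with urllib.request.urlopen(req, timeout=30) as resp:
        for line in summarize_issues(json.load(resp)):
            print(line)
```

Once the LLM has written something like this against your real schemas, reruns cost almost no context: the agent just executes the script and reads the output.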

I don't know, to me it seems like the LLM cli tools are the current pinnacle. All the LLM companies are throwing a ton of shit at the wall to see what else they can get to stick.


For Jira/Confluence, I also struggled with their MCPs. JIRA's MCP was hit or miss and Confluence's never worked for me.

We don’t use the cloud versions, so not sure if they work better with cloud.

On the other hand, I found some unofficial CLIs for both and they work great.

I wrote a small skill just to give enough detail about how to format Epics, Stories, etc., plus some guidance on formatting content, and I can get the agent to do anything I need with them.


I deal with a ton of different Atlassian instances, and the most infuriating thing to me about the MCP configuration is that Atlassian really thinks you should only have one Atlassian instance to auth against. Their MCP auth window takes you to a webpage where you can't see which instance you are authenticating against, forcing you to paste the login page URL into an incognito window. Pretty half-baked implementation.

I noticed that it's better for some things than others. It's pretty bad at working with Confluence (it just eats tokens), but if you lay out a roadmap you want created or updated in Jira, it's pretty good at that.


I have had some positive experiences using the Jira and Confluence MCPs. However, I use a third-party MCP because my company has a data centre deployment of Jira and Confluence, which the official Atlassian MCP does not support.

My use case was for using it as an advanced search tool rather than for creating tickets or documentation. Considering how poor the Confluence search function is, the results from Confluence via an MCP-powered search are remarkably good. I was able to solve one or two obscure, company-specific issues purely by using the MCP search, and I'm convinced that finding these pages would have been almost impossible without it.


Why would you use Grok at all? It's the one LLM whose owners are purposely steering it toward specific output (trying to make it "conservative"). I wouldn't want to use a project that I outright know is tainted by the owners trying to introduce bias.


Do you think this is a gotcha?

You just prompt the llm to change the plan.


Yes to all of these.

Here's the rub, I can spin up multiple agents in separate shells. One is prompted to build out <feature>, following the pattern the author/OP described. Another is prompted to review the plan/changes and keep an eye out for specific things (code smells, non-scalable architecture, duplicated code, etc. etc.). And then another agent is going to get fed that review and do their own analysis. Pass that back to the original agent once it finishes.

Less time, cleaner code, and the REALLY awesome thing is that I can do this across multiple features at the same time, even across different codebases or applications.
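A build/review/fix loop like that can be sketched around the Claude Code CLI's non-interactive mode (`claude -p`). This is just one possible orchestration; the prompts, file names, and feature are placeholders, not a definitive setup:

```python
"""Sketch of a builder -> reviewer -> fixer agent loop, assuming the
Claude Code CLI's non-interactive `claude -p` mode. Prompts and file
names are placeholders; run several of these scripts in separate
shells to work multiple features at once."""
import subprocess


def agent_cmd(prompt):
    """Command line for one fresh, non-interactive agent run."""
    return ["claude", "-p", prompt]


def pipeline(feature):
    """Ordered prompts for the builder, reviewer, and fixer agents."""
    return [
        f"Plan and implement {feature}, writing the plan to PLAN.md first.",
        "Review the latest changes for code smells, duplicated code, and "
        "non-scalable architecture. Write your findings to review.md.",
        "Read review.md, do your own analysis, and apply the fixes.",
    ]


if __name__ == "__main__":
    # Each step runs as its own agent with its own context window,
    # so the reviewer isn't anchored on the builder's reasoning.
    for prompt in pipeline("the CSV export feature"):
        subprocess.run(agent_cmd(prompt), check=True)
```

The separate-processes detail is the point: because each agent starts cold, the reviewer critiques the diff on its merits rather than rubber-stamping a plan it already committed to.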


There's comments like this because devs/"engineers" in tech are elitists that think they're special. They can't accept that a machine can do a part of their job that they thought made them special.

