Hacker News | LatticeAnimal's comments

I think that's the problem. I used to find it far superior to Google. Now there are a lot of queries where I'm unimpressed with the results and end up trying Google just to get better ones (like I used to do with DDG).

I've had a few experiences now where someone is standing over my shoulder asking me to look something up, and I search Kagi, find nothing, then search Google and find what they asked for. So when they ask "what was that other search engine you used first?" I don't feel compelled to vouch for Kagi :(.


Cool project! How do language servers work with this system? Suppose I'm developing PyTorch+CUDA code on a remote machine: do I need to have that same PyTorch version installed locally?

If you run the language server remotely, how do you sync the file before it has been saved so that the user gets autocomplete?


Good question. To answer quickly: no, you don't need it installed locally, but you will benefit from having the source available.

Just so we have a common reference, look at https://github.com/edaniels/graft/blob/main/pkg/local_client.... The main idea is that we always match the local current working directory to the corresponding synchronization directory. Building on that, we serve an LSP proxy locally that rewrites all JSON-RPC messages that use URIs (https://github.com/edaniels/graft/blob/main/pkg/local_client...) from local to remote, and back. The local client and the remote LSP we launch are none the wiser. Because of this proxy, when you go to definition, you load the local source definition; when you run an LSP format tool, it runs remotely and the file sync gets you the results locally.
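To picture the local-to-remote URI rewrite, here's a TypeScript sketch of the idea only (graft itself is written in Go, and every name here — `rewriteUris`, the two root paths — is made up for illustration, not graft's API):

```typescript
// Illustrative sketch of the local<->remote URI rewriting an LSP proxy
// like graft's performs. All names are hypothetical, not graft's API.
const LOCAL_ROOT = "file:///home/me/project";
const REMOTE_ROOT = "file:///srv/sync/project";

// Walk a JSON-RPC message and rewrite every string that starts with
// one root so it points under the other.
function rewriteUris(msg: unknown, from: string, to: string): unknown {
  if (typeof msg === "string") {
    return msg.startsWith(from) ? to + msg.slice(from.length) : msg;
  }
  if (Array.isArray(msg)) {
    return msg.map((m) => rewriteUris(m, from, to));
  }
  if (msg !== null && typeof msg === "object") {
    const out: Record<string, unknown> = {};
    for (const [k, v] of Object.entries(msg)) {
      out[k] = rewriteUris(v, from, to);
    }
    return out;
  }
  return msg;
}

// Editor -> remote server: URIs become remote paths...
const req = {
  method: "textDocument/definition",
  params: { textDocument: { uri: `${LOCAL_ROOT}/main.go` } },
};
const remoteReq = rewriteUris(req, LOCAL_ROOT, REMOTE_ROOT) as typeof req;
console.log(remoteReq.params.textDocument.uri);
// file:///srv/sync/project/main.go

// ...and server -> editor responses are mapped back remote -> local, so
// "go to definition" opens the locally synced copy of the file.
```

Because both directions pass through the same mapping, neither side ever sees the other's paths.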

The LSP functionality is pretty barebones but has been working for me in Sublime when I open one of my projects in a graft-connected directory. I've tested it on Go and TypeScript. I believe Python should work, but dependencies could be funky depending on how you synchronize them (uv, pip, etc.).

For Go, I used this in my LSP settings and it worked great. What doesn't work great is getting disconnected :(. Making the LSP very reliable is another story for another task for another day.

    {
        "log_debug": true,
        "clients":
        {
            "super-gopls":
            {
                "env": {
                    "GRAFT_LOG_LEVEL": "debug"
                },
                "command":
                [
                    "graft",
                    "lsp",
                    "gopls"
                ],
                "enabled": true,
                "initializationOptions":
                {
                    "analyses":
                    {
                        "composites": false
                    }
                },
                "selector": "source.go"
            }
        },
        "lsp_code_actions_on_save":
        {
            "source.fixAll": true,
            "source.organizeImports": true
        },
        "lsp_format_on_save": true
    }

IIRC, the ROS UR controller runs at 200 Hz, and we’ve had arms crash when they run much slower than that.

The website claims “30hz polling rate”, “2ms latency”. Not sure if that is a best case or just for that demo.


Crash? The software, or physically? 200 Hz as a minimum control-loop rate seems on the fast side as a general default, but it all depends on the control environment — though I may be biased, as I've done a lot more bare-silicon control than ROS.

I'm guessing running a 200 Hz command rate on an e-series UR which uses 1 kHz internally will give you a protective stop?

Physically crash. When we would block the control loop at all (even down to 100 Hz), we would get errors, and occasionally the arm would experience massive, erratic acceleration spikes and crash into its nearby surroundings before e-stopping.

Re: the other comment. Yes, this was with UR3e's, which by default have update rates of around 500 Hz.
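As a toy illustration of why blocking the loop is so bad (a deterministic sketch only — no ROS or UR APIs, and the durations are made up):

```typescript
// Toy model of a fixed-rate command loop. The controller expects a fresh
// command every `periodMs`; an iteration whose work runs longer than the
// period leaves later ticks without commands. Illustrative only.
const PERIOD_MS = 2; // ~500 Hz

function countMissedDeadlines(workDurationsMs: number[], periodMs: number): number {
  let missed = 0;
  for (const work of workDurationsMs) {
    if (work > periodMs) {
      // A 10 ms stall at a 2 ms period makes the next command 4 ticks late.
      missed += Math.ceil(work / periodMs) - 1;
    }
  }
  return missed;
}

// Nine fast iterations and one 10 ms stall (say, a blocking log write):
const durations = [1, 1, 1, 1, 10, 1, 1, 1, 1, 1];
console.log(countMissedDeadlines(durations, PERIOD_MS)); // 4
```

A controller that extrapolates through those missed ticks, rather than holding position, is one plausible route to the acceleration spikes described above.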


> The website claims “30hz polling rate”, “2ms latency”. Not sure if that is a best case or just for that demo.

These are just example figures; in fact it can go as high as you want, as long as your hardware allows it.


Beautiful app. Surprised to see jQuery in your frontend; brings back good old memories.


Ha, thanks! It’s actually AJQuery, just a two-line shim to get the $ sugar. Otherwise vanilla JS.
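Presumably something like this (my approximation in TypeScript, not AJQuery's verbatim source):

```typescript
// A two-line "$" shim: thin sugar over querySelector, vanilla JS under
// the hood. Approximation of what AJQuery does, not its exact source.
interface Queryable {
  querySelector(sel: string): unknown;
}

const $ = (sel: string, root: Queryable = (globalThis as any).document) =>
  root.querySelector(sel);
```

In a browser, `$("#app")` then behaves just like `document.querySelector("#app")`.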


I'd love to develop some MCP servers, but I just learned that Claude Desktop doesn't support Linux. Are there any good general-purpose MCP clients that I can test against? Do I have to write my own?

(Closest I can find is Zed/Cody, but those aren't really general purpose.)


In many ways this has already started happening. TS has enums, Svelte has runes, React has JSX. None of these features exist in JS; they are all compile-time syntactic sugar.
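TS enums are a good example: `enum` has no JavaScript equivalent, and tsc compiles it down to a plain object (with a reverse mapping for numeric enums):

```typescript
// A TS-only construct: `enum` does not exist in JavaScript.
enum Direction {
  Up = 1,
  Down,
}

// tsc erases the syntax and emits roughly:
//   var Direction;
//   (function (Direction) {
//     Direction[Direction["Up"] = 1] = "Up";
//     Direction[Direction["Down"] = 2] = "Down";
//   })(Direction || (Direction = {}));

console.log(Direction.Up); // 1
console.log(Direction[2]); // "Down" (numeric enums get a reverse mapping)
```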

While it is admittedly confusing to have all these different flavors of JS, I don’t think this proposal is actually as radical as it seems.


From the post:

> At the end of the cycle, we swap.

They swap teams every 2-4 weeks so nobody will always be on team defense.


404?


Recently gpt-4-turbo started refusing to write some tests because it 'knows' they would exceed the max context. (This frustrated me deeply -- it would not have exceeded the context.)


AI AI AI AI AI AI.

i.e. Apple Intelligence AI Agents create new ai.town agents based on apple intelligence

AI (Apple Intelligence (adjective))

AI (artificial intelligence agents)

AI (proposed verb meaning "create with generative AI")

AI (https://github.com/a16z-infra/ai-town shortening of AI Town)

AI (Apple Intelligence (adjective))

AI (artificial intelligence agents)


AI is just the hot "trend" tech is riding the wave on. Let's go back a bit:

- The Cloud

- Big Data

- Self Driving cars

- ML (same as AI but more about chatbots)

- AI -> I know that when Matthew McConaughey is wearing a cowboy hat shilling AI for Salesforce (do they even have any GPU clusters at all??), we have reached the peak of this wave, and it's all downhill until the next trend is found.


Tbf, ML and the cloud are wildly successful and arguably very useful.

We’ve basically solved image recognition with ML techniques (including deep learning ones, which I think are now called AI).

The cloud’s popularity and success are self-evident if you follow the web space at all.

Neither of those was “just” a trend. I think AI will follow in line with them: obviously something useful that we will make extensive use of, but with limitations that will become clear in time. (Likely the same limitations we ran into with big data and ML: not enough quality data, and the effort of curating that data may not be worth it.)


What's going to be the next one? Probably something involving AI with extremely high utility, perhaps something medical.


IoT is another one. And we are still in a SaaS wave.

