I think that's the problem. I used to find it far superior to google. Now, there are a lot of queries where I am unimpressed with the results and end up trying google just to get better results. (like I used to do with DDG)
I've had a few experiences now where someone is standing over my shoulder asking me to look something up, and I search kagi, find nothing, then search google and find what they asked me to look up. Then when they ask "what was that other search engine you used first?" I don't feel compelled to vouch for kagi :(.
Cool project! How do language servers work with this system? Suppose I'm developing PyTorch+CUDA code on a remote machine; do I need to have that same PyTorch version installed locally?
If you run the language server remotely, how do you sync the file before it has been saved so that the user gets autocomplete?
Good question. To quickly answer: no, you don't need it installed locally, but you will benefit from having the source available.
Just so we have a common reference, look at https://github.com/edaniels/graft/blob/main/pkg/local_client.... The main idea is that we are always matching the local current working directory to the corresponding synchronization directory. Using that idea, we serve an LSP locally that rewrites all JSON-RPC messages that use URIs (https://github.com/edaniels/graft/blob/main/pkg/local_client...) from local to remote, and back. The local LSP and the remote LSP we launch are none the wiser. Because of this proxy, when you go to definition, you load the local source definition; when you run an LSP format tool, it runs remotely and the file sync gets you the results locally.
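The URI-rewriting idea above can be sketched in a few lines. This is a toy illustration, not graft's actual implementation: the root paths are hypothetical, and the real mapping comes from graft's sync configuration. The point is that LSP carries file URIs in many places (textDocument.uri, location results, workspace edits), so a blanket recursive walk over the JSON-RPC payload is simpler than enumerating every message type.

```typescript
// Hypothetical local/remote roots; graft derives the real mapping
// from its synchronization directories.
const LOCAL_ROOT = "file:///home/me/project";
const REMOTE_ROOT = "file:///srv/sync/project";

// Recursively rewrite any string field that starts with `from` so it
// points under `to`, leaving everything else untouched.
function rewriteUris(value: unknown, from: string, to: string): unknown {
  if (typeof value === "string" && value.startsWith(from)) {
    return to + value.slice(from.length);
  }
  if (Array.isArray(value)) {
    return value.map((v) => rewriteUris(v, from, to));
  }
  if (value !== null && typeof value === "object") {
    return Object.fromEntries(
      Object.entries(value as Record<string, unknown>).map(([k, v]) => [
        k,
        rewriteUris(v, from, to),
      ]),
    );
  }
  return value;
}

// Outbound (editor -> remote server): local paths become remote paths.
// Responses get the inverse rewrite on the way back.
const request = {
  method: "textDocument/definition",
  params: { textDocument: { uri: `${LOCAL_ROOT}/main.go` } },
};
const forwarded = rewriteUris(request, LOCAL_ROOT, REMOTE_ROOT);
```

Applying the same function with the roots swapped on responses gives the round trip, so neither side ever sees the other's paths.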
The LSP functionality is pretty barebones but has been working for me in Sublime when I open one of my projects in a graft-connected directory. I've tested it on Go and TypeScript. I believe Python should work too, but dependencies could be funky depending on how you synchronize them (uv, pip, etc.).
For Go, I used this in my LSP settings and it worked great. What doesn't work great is if you get disconnected :(. Making the LSP proxy truly reliable is another story for another day.
Crash? The software, or physically? 200 Hz as a minimum control-loop rate seems on the fast side as a general default, but it all depends on the control environment - and I may be biased, as I've done a lot more bare-silicon control work than ROS.
Physically crash. When we would block the control loop at all (even down to 100 Hz), we would get errors, and then occasionally the arm would erratically experience massive acceleration spikes and crash into its nearby surroundings before e-stopping.
Re: the other comment. Yes, this was with UR3e's, which by default have update rates of around 500 Hz.
I'd love to develop some MCP servers, but I just learned that Claude Desktop doesn't support Linux. Are there any good general-purpose MCP clients that I can test against? Do I have to write my own?
(The closest I can find are Zed/Cody, but those aren't really general-purpose.)
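One low-effort option while waiting for a proper client: talk to an MCP server directly. This is a minimal, SDK-free sketch under two assumptions I believe match the spec but have hedged here: the stdio transport frames each JSON-RPC 2.0 message as one newline-delimited JSON line, and the handshake starts with an "initialize" request. The server command is a placeholder for whatever server you're developing.

```typescript
import { spawn } from "node:child_process";
import { createInterface } from "node:readline";

type JsonRpcRequest = {
  jsonrpc: "2.0";
  id: number;
  method: string;
  params?: Record<string, unknown>;
};

// Build a standard MCP initialize request. The protocolVersion string
// is the one from the spec revision I tested against.
function buildInitialize(id: number): JsonRpcRequest {
  return {
    jsonrpc: "2.0",
    id,
    method: "initialize",
    params: {
      protocolVersion: "2024-11-05",
      capabilities: {},
      clientInfo: { name: "scratch-client", version: "0.0.1" },
    },
  };
}

// Frame a message for the stdio transport: one JSON object per line.
function frame(msg: JsonRpcRequest): string {
  return JSON.stringify(msg) + "\n";
}

// Connect to a local server, e.g. connect("node", ["my-server.js"])
// (hypothetical command), print whatever it sends back.
function connect(command: string, args: string[]) {
  const child = spawn(command, args, { stdio: ["pipe", "pipe", "inherit"] });
  const rl = createInterface({ input: child.stdout! });
  rl.on("line", (line) => console.log("server:", JSON.parse(line)));
  child.stdin.write(frame(buildInitialize(1)));
  return child;
}
```

It's nowhere near a real client (no capability negotiation, no request/response correlation), but it's enough to smoke-test that a server starts up and answers the handshake.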
In many ways this has already started happening. TS has enums, Svelte has runes, React has JSX. None of these features exist in JS; they are all compile-time syntax sugar.
While it is admittedly confusing to have all these different flavors of JS, I don’t think this proposal is actually as radical as it seems.
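TS enums are the clearest example of the compile-time sugar mentioned above: the enum syntax does not survive compilation, and the emitted JS is just a plain object (for numeric enums, with a reverse mapping).

```typescript
enum Direction {
  Up,   // 0
  Down, // 1
}

// What tsc emits is roughly:
//
// var Direction;
// (function (Direction) {
//   Direction[Direction["Up"] = 0] = "Up";
//   Direction[Direction["Down"] = 1] = "Down";
// })(Direction || (Direction = {}));
//
// At runtime there is no enum type, just that object:
console.log(Direction.Up); // 0
console.log(Direction[0]); // "Up"
```

(Runes and JSX erase even more completely: they compile to ordinary function calls, with nothing enum-like left behind at runtime.)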
Recently gpt-4-turbo started refusing to write some tests because it 'knows' they would exceed the max context. (This frustrated me deeply -- it would not have exceeded the context.)
AI is just the hot "trend" tech is riding the wave on. Let's go back a bit:
- The Cloud
- Big Data
- Self Driving cars
- ML (same as AI but more about chatbots)
- AI -> When Matthew McConaughey is wearing a cowboy hat shilling AI for Salesforce (do they even have any GPU clusters at all??), I know we have reached the peak of this wave and it's all downhill until the next trend is found.
Tbf, ML and the cloud are wildly successful and arguably very useful.
We’ve basically solved image recognition with ML techniques (including deep-learning ones, which are now called AI, I think).
The cloud’s popularity and success are self-evident if you follow the web space at all.
Neither of those were “just” trends.
I think AI will follow in line with them.
Obviously it's something useful that we will make extensive use of, but with limitations that will become clear in time. (Likely the same limitations we ran into with big data and ML: not enough quality data, and the effort of curating that data may not be worth it.)