At the moment, clangd and ycm are very similar projects; they are very limited compared to cquery. clangd and ycm support code completion, diagnostics, fixits, and goto declaration (but not definition), whereas cquery supports references, derived types, callers, etc. Basically, if the feature requires knowledge across multiple translation units, clangd/ycm do not support it.
cquery is designed to support very large projects, so it makes very specific design decisions w.r.t. the data model, indexing pipeline, and multithreading model. I hope clangd can match the performance - but so far every project I've seen simply does not run nearly fast enough on a code-base the size of Chrome/ChromeOS.
Eventually this will be supported via lsp-mode. If you check https://gitter.im/cquery-project/Lobby, you'll see @topisani made quick progress here and already has cquery up and running.
> All in all, it sounds like it would provide most of the IDE experience to VS Code. The only annoying part would be that you'll have to extract the compile flags from your build system yourself, but that's not usually too big of a deal.
This can be done similarly to what rtags does (hooking into gcc/clang invocations). Alternatively, if you use ninja, this is relatively simple - just generate a compile_commands.json file:
ninja -C out/Release -t compdb cxx cc > compile_commands.json
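To illustrate, the file ninja emits is just a JSON array of per-translation-unit compile invocations. A minimal reader might look like this (the entry below is illustrative, not real Chrome flags):

```python
import json

# An illustrative compile_commands.json payload, as `ninja -t compdb` would emit.
sample = """[
  {
    "directory": "/src/chrome/out/Release",
    "command": "clang++ -std=c++14 -Iinclude -c ../../base/foo.cc -o obj/foo.o",
    "file": "../../base/foo.cc"
  }
]"""

def load_compile_commands(text):
    """Return a mapping from source file to its compile argument list."""
    entries = json.loads(text)
    return {e["file"]: e["command"].split() for e in entries}

commands = load_compile_commands(sample)
print(commands["../../base/foo.cc"][0])  # the compiler binary, "clang++"
```

A tool like cquery can then look up any file's flags before asking libclang to parse it.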
My eventual plan is to automate compile_commands.json generation when using ninja, so cquery is install-and-go.
I tried using rtags before developing cquery, but found it did not perform well enough for Chrome when doing a huge number of semantic operations (I was hacking in support for code lens). I spent some time trying to figure out if it could be fixed, but I believe it would have been too large of an architectural change.
- cquery interacts with an editor via the language server protocol, letting it work with any editor with relatively minimal work
- cquery handles larger repositories better (ie, indexing all of Chrome takes 20-30 minutes on a high-end workstation)
- cquery responds to semantic requests within 10ms or so
There are some other differences, but those relate to which features are implemented; for example, cquery supports code lens.
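For context on the first point: LSP messages are just JSON-RPC payloads with a simple Content-Length framing, which is what lets one server work with any editor. A minimal sketch of the framing:

```python
import json

def frame(payload: dict) -> bytes:
    """Wrap a JSON-RPC message in LSP's Content-Length framing.

    The wire format is a header line, a blank line, then the JSON body:
        Content-Length: <N>\r\n
        \r\n
        {...}
    """
    body = json.dumps(payload).encode("utf-8")
    return b"Content-Length: %d\r\n\r\n" % len(body) + body

# Example request an editor might send (method name is from the LSP spec):
msg = frame({"jsonrpc": "2.0", "id": 1, "method": "textDocument/definition"})
```

An editor plugin only needs to speak this framing over the server's stdin/stdout; everything else is standard JSON-RPC.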
I'm not sure how code completion is in rtags, but I've spent a fair amount of time making it work as fast as possible in cquery. There is quite a bit of caching built on top of the clang API, which often makes it feel instantaneous.
I've been a happy user of the ycmd+rtags tandem for a couple of years. A killer feature of rtags for me is its ability to run the server on a remote machine (of course the source code must be mirrored too). This allows me to develop on my weak 4-core laptop and offload indexing to a fast 32-core workstation.
Regarding indexing time, all of these tools seem to parse source code using either libclang (C API) or "native" C++ API (RecursiveASTVisitor etc.), so IMHO any difference in indexing time between rtags and cquery should come from such factors as number of parsing threads, database for storing tags, caching etc.
Anyway I'm really excited about cquery and even consider moving to VSCode just because of it (being a long-term VIM user). Reliable "Find references" feature is (IMHO) a must-have functionality for large codebases and currently (thanks to cquery and rtags) is supported much better in modern C++ than in other system languages (such as Go and Rust).
> A killer feature of rtags for me is its ability to run the server on a remote machine (of course the source code must be mirrored too).
I have a similar use case in mind, so I'm planning on trying to get this working by writing a simple script that proxies language server messages over SSH/TCP. Ideally it should work with any language server.
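A sketch of what such a proxy could look like, assuming the remote server is reachable over a plain TCP socket (e.g. via an SSH tunnel) - the host/port below are placeholders, not anything cquery defines:

```python
import shutil
import socket
import sys
import threading

def pump(src, dst):
    """Copy bytes from src to dst until EOF.

    LSP's Content-Length framing is self-delimiting, so the proxy can
    forward raw bytes without parsing messages at all.
    """
    shutil.copyfileobj(src, dst)
    dst.flush()

def proxy(host="localhost", port=9999):
    """Bridge this process's stdin/stdout to a remote language server.

    The editor launches this script in place of the language server;
    everything it writes is forwarded to the remote socket and vice versa.
    """
    sock = socket.create_connection((host, port))
    remote = sock.makefile("rwb")
    # Remote -> editor in a background thread; editor -> remote in the foreground.
    threading.Thread(target=pump, args=(remote, sys.stdout.buffer), daemon=True).start()
    pump(sys.stdin.buffer, remote)

# proxy() would be invoked by the editor's "server command" configuration.
```

Because nothing here is cquery-specific, the same proxy should work for any language server.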
> Regarding indexing time, all of these tools seem to parse source code using either libclang (C API) or "native" C++ API (RecursiveASTVisitor etc.), so IMHO any difference in indexing time between rtags and cquery should come from such factors as number of parsing threads, database for storing tags, caching etc.
Yea, it is amazing how big of a difference the architecture around indexing makes - any sort of global lock/shared state really hurts performance. I spent a significant amount of time finding the right architecture to make each index job as independent as possible. Most of the design decisions in cquery are oriented towards either latency or throughput at the cost of things like memory and total system load (I've since reduced memory usage, but at one point cquery used 30 GB after indexing Chrome - now it is around 5 GB).
> Anyway I'm really excited about cquery and even consider moving to VSCode just because of it (being a long-term VIM user). Reliable "Find references" feature is (IMHO) a must-have functionality for large codebases and currently (thanks to cquery and rtags) is supported much better in modern C++ than in other system languages (such as Go and Rust).
I'd like to see cquery support in vim as well using a vim LSP implementation :). But yes, I agree with you - cquery makes me want to continue using C++ over Rust simply because the tooling works a lot better.
> I have a similar use case in mind, so I'm planning on trying to get this working by writing a simple script that proxies language server messages over SSH/TCP. Ideally it should work with any language server.
I think I know of at least 2 efforts to do this by now. One is at Facebook with Nuclide, and I believe VS Live Share does something similar.
I think it's an idea whose time has come, so I think it's pretty cool that so many people are doing it.
Author here! Wasn't quite ready to post to HN yet since cquery is still in development, and I plan to eventually publish on the vscode marketplace so using cquery should be as simple as using the existing C/C++ extension.
Hi! I see you're using compile_commands.json. How are you handling header files? I've found that header files present problems for compile_commands.json, since a number of tools that use it (like bear) only look at the compiler commands for .c files and don't notice that the header files should be added as well.
I know that some tools, like YCM, attempt to intelligently map a header file to its associated .cpp/.cc/.c file to guess what the compiler commands are, but this doesn't always work.
What is the state of generating compile_commands.json for Chromium? I remember running into this issue a few years back.
> Hi! I see you're using compile_commands.json. How are you handling header files? I've found that header files present problems for compile_commands.json, since a number of tools that use it (like bear) only look at the compiler commands for .c files and don't notice that the header files should be added as well.
When a cc file is indexed, cquery will index the associated header files. There is some logic to deduplicate header file parsing so it only happens once, but that is fundamentally how it works. cquery then knows which header files are associated with which cc files.
> I know that some tools, like YCM, attempt to intelligently map a header file to its associated .cpp/.cc/.c file to guess what the compiler commands are, but this doesn't always work.
cquery does this as well, because you can, for example, create a new file that is not in compile_commands.json. cquery has sophisticated logic here, as it will also try to infer whether the file is test- or platform-specific (using general postfix matching), since those often have a very different set of arguments.
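The general idea can be sketched like this - first try the sibling source file with the same stem, then fall back to suffix matching so test/platform variants pair up. This is only an illustration of the heuristic described above, not cquery's actual code:

```python
import os

SOURCE_EXTS = [".cc", ".cpp", ".c"]

def guess_source_for_header(header_path, known_sources):
    """Guess which source file's compile flags to reuse for a header.

    First try a sibling with the same stem (foo.h -> foo.cc). Otherwise,
    pick the source whose filename shares the longest suffix with the
    header, so e.g. bar_unittest.h pairs with some *_unittest.cc and
    inherits its test-specific flags.
    """
    stem, _ = os.path.splitext(header_path)
    for ext in SOURCE_EXTS:
        if stem + ext in known_sources:
            return stem + ext
    # Fall back: longest shared filename suffix wins.
    name = os.path.basename(stem)
    best, best_len = None, 0
    for src in known_sources:
        src_name = os.path.splitext(os.path.basename(src))[0]
        n = 0
        while n < min(len(name), len(src_name)) and name[-1 - n] == src_name[-1 - n]:
            n += 1
        if n > best_len:
            best, best_len = src, n
    return best
```

For example, `guess_source_for_header("base/foo.h", ...)` prefers `base/foo.cc`, while a header named `bar_unittest.h` would match a `*_unittest.cc` file and pick up its test flags.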
> What is the state of generating compiler_commands.json for Chromium? I remember running into this issue a few years back.
Chrome compiles using ninja, which natively supports compile_commands.json, so generating the file works well and is easy to do. I have not run into any issue here.