Thanks for explaining the reasoning behind implementing your own version instead of using Al Dente. The straightforward UI is really appealing.
I paid for an Al Dente license long ago and have been very happy with it; sail mode and temperature-sensitive charging are great features (although I’m not sure how big an impact sail mode makes).
It’s great to see alternatives available though. Having command line access is really cool.
> although I’m not sure how big of an impact sail mode makes
Anecdotally: I've used sailing mode for the last year or so, and almost always use my laptop on AC power. Before I started using sailing mode, my cycle count was 120; now, a year later, it's 123.
They have an ultrawide mode available now. Personally I find it very uncomfortable. You have to move your whole head to see the sides, and the Vision Pro is heavy. Looking off to the side for any length of time is uncomfortable.
To add to this: I have a Vision Pro and a 34" curved ultrawide. The latter is much more usable in this regard, because the effective resolution per degree is higher, which means you can keep your head static and use your eyes to look around.
By contrast, you have to use a giant screen on the Vision Pro to get equivalent resolution, which means you have to move your head. It still has its advantages (you can take it wherever you go, and the resolution of the virtual screen can be higher), but it's not yet comparable to a physical monitor, to my chagrin.
I would really love this too. However I think that the only solution for that is to give it a Kagi search tool, in combination with a web scraping tool, and a loop while it figures out whether it's got the information it needs to answer the question.
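The loop described above could be sketched roughly like this. Note that `search`, `scrape`, and `llm_has_enough_info` are hypothetical stand-ins, not real Kagi or LLM APIs; they're stubbed here just to make the control flow concrete:

```python
def search(query):
    # Hypothetical stand-in for a Kagi search tool call.
    return [f"https://example.com/result-for-{query}"]

def scrape(url):
    # Hypothetical stand-in for a web scraping tool call.
    return f"page text from {url}"

def llm_has_enough_info(context):
    # Hypothetical stand-in for the model judging whether it can answer yet.
    return len(context) >= 1

def answer_with_tools(question, max_rounds=3):
    # Search, scrape results, and loop until the model decides it has
    # enough information (or gives up after max_rounds).
    context = []
    for _ in range(max_rounds):
        for url in search(question):
            context.append(scrape(url))
        if llm_has_enough_info(context):
            break
    return context

print(answer_with_tools("when was emoji invented"))
```

The `max_rounds` cap matters in practice: without it, a model that never decides it has enough information loops forever.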
If you find this topic interesting, David Imel put together a fantastic deep dive. He spent four months researching it, went to Japan to interview the creator of emoji, and put together a very good video on the topic[0].
It's a well-produced video; I watched it and commented on it last week. At that point my blog post was half written but my research was already complete.
I applaud David for going to Japan, great to see such dedication. My issue with the video is that it simply retreads the accepted timeline rather than doing any critical research. He was in Japan and could have done some real digging, you know? What was he doing for four months? Just editing the video? I don't know. So, I think the content of the video is not a deep dive at all, as it doesn't uncover anything new. None of the stuff I uncovered for my blog post is covered. And we can now see more clearly than ever that Shigetaka Kurita is not the creator of emoji, but rather the creator of the most well-known set of emoji, which was perhaps the first to use the sparkle emoji. That's a pretty big difference.
BTW my research was conducted in my free time over less than two weeks. It consisted of some googling, talking with Japanese friends, and reading Japanese Wikipedia (with browser translation, as I can't read it natively), all from the comfort of my own desk. So I would say anybody is capable of having done this.
Of course, I had the bonus of being clued in that earlier history existed thanks to the device I had in my hands. But my point is that we should always question sources and accepted history, because more often than not there's additional story to be told.
> I like to define my subroutines using a modern language like C++, which goes 47 gigaflops. This means C++ is three orders of a magnitude faster than Python. That's twenty years of progress per Moore's law.
This is great. I love the idea of measuring performance differences in “years of Moore’s law.”
Twenty years puts the delta in an easy to understand framework.
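The arithmetic behind that framing checks out, assuming the classic rough formulation of Moore's law (one doubling every ~2 years):

```python
import math

# "Three orders of magnitude" ~= a 1000x speedup.
speedup = 1000
# Assumed Moore's-law rate: one doubling roughly every 2 years.
doubling_period_years = 2

doublings = math.log2(speedup)             # ~9.97 doublings
years = doublings * doubling_period_years  # ~19.9 years
print(round(years))  # → 20
```

With an 18-month doubling period instead, the same speedup works out to about 15 years, so "twenty years" depends on which doubling rate you assume.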
Python on 2024 hardware vs C++ on 2004 hardware ... I don't think it's obvious that C++ always wins here, though it would depend on the use case, how much of the Python is underpinned by native libraries, and the specific hardware in question.
You can't not include native libraries, at least if you want your benchmark to be realistic. Almost every Python library where performance matters is written (at least partially) in a compiled language.
Yes, but many people like the sound of "X-times faster than Python" while conveniently forgetting that the same thing can be (and usually is) done in Python + numpy & co. even faster.
I have come to appreciate "slowness" of Python. It trades speed for legibility, which is a great compromise once you have really fast native libraries one import away. Best of both worlds.
C++ with well-optimized libraries should always outperform Python with well-optimized libraries, right? They should be ~identical in the highly optimized inner loops, but Python has more overhead. But naive hand-written C++ could easily perform worse than something like Numpy.
(I've only tested this once, and my naive hand-written C++ was still twice as fast as Numpy, but that was only on one specific task.)
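The gap being discussed is easy to demonstrate within Python itself. Here's a minimal sketch comparing a pure-Python loop against a NumPy-vectorized version of the same sum-of-squares computation; the absolute timings depend entirely on hardware, but the relative gap illustrates why "X-times faster than Python" usually means "faster than a pure-Python loop":

```python
import timeit
import numpy as np

n = 200_000
data = list(range(n))
arr = np.arange(n, dtype=np.float64)

def py_sum_sq(xs):
    # Pure-Python loop: every add and multiply goes through the interpreter.
    total = 0.0
    for x in xs:
        total += x * x
    return total

def np_sum_sq(a):
    # Vectorized: the inner loop runs in compiled code inside NumPy.
    return float(np.dot(a, a))

t_py = timeit.timeit(lambda: py_sum_sq(data), number=5)
t_np = timeit.timeit(lambda: np_sum_sq(arr), number=5)
print(f"pure Python: {t_py:.4f}s, NumPy: {t_np:.4f}s")
```

Both functions compute the same result; only the dispatch overhead differs, which is the "Python has more overhead" point above.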
I agree with your observation, although it seems unpopular at the moment. It’s clear the video is slowed down, and judging by how his hands move at the end of the action, he’s throwing the wheel very quickly.
The tires start moving immediately, so it looks more like a limit in how quickly they change direction. It appears laggy because they don’t finish the movement when he does. They do respond right away when he shifts his arms a little in the middle of the video.
Like others, wanted to say thank you for writing Aider.
I think you've done a fantastic job of covering chat and confirmation use cases with the current features. Comments on here may not reflect the high satisfaction levels of most of your software users :)
Aider helps put into practice the use cases that antirez refers to in their article, especially as someone gets better at "asking LLMs the right questions," as antirez puts it.
I completely agree. A huge business with a singular focus isn’t going to pivot into the music business (or any of the myriad use cases the general public throws at it). And if they did use someone’s info, it’s more likely an unethical employee than a genuine business tactic.
Besides, the parent program uses the API, which allows opting out of training or retaining that data.
Yes this makes perfect sense. As we know, businesses definitely do not treat data as a commodity and engage in selling/buying data sets on the open market as a "genuine business tactic". Therefore, since the company in question doesn't have a clear business case for data collection currently, we can be sure this data will never be used against our interests by any company.