I was in the early beta and, judging from how it's worked in my tests, there's definitely more going on under the hood than the most straightforward approach of "embed all your emails, find nearest neighbors, then ask the LLM to answer".
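For reference, that bog-standard pipeline looks roughly like this (a toy Go sketch: embed stands in for a real embedding model, and the emails are invented):

    package main

    import (
        "fmt"
        "math"
    )

    // embed is a stand-in for a real embedding model (an API call in
    // practice); a toy letter-frequency vector keeps the sketch runnable.
    func embed(text string) []float64 {
        v := make([]float64, 26)
        for _, r := range text {
            if r >= 'a' && r <= 'z' {
                v[r-'a']++
            }
        }
        return v
    }

    func cosine(a, b []float64) float64 {
        var dot, na, nb float64
        for i := range a {
            dot += a[i] * b[i]
            na += a[i] * a[i]
            nb += b[i] * b[i]
        }
        if na == 0 || nb == 0 {
            return 0
        }
        return dot / (math.Sqrt(na) * math.Sqrt(nb))
    }

    func main() {
        emails := []string{
            "lunch with the design team on friday",
            "quarterly budget numbers attached",
            "reminder: dentist appointment on tuesday",
        }
        query := "when is my dentist appointment?"
        q := embed(query)

        // Nearest neighbor: keep the email most similar to the query.
        best, bestScore := "", -1.0
        for _, e := range emails {
            if s := cosine(embed(e), q); s > bestScore {
                best, bestScore = e, s
            }
        }

        // The retrieved email becomes context for the LLM prompt.
        fmt.Printf("Context:\n%s\n\nQuestion: %s\n", best, query)
    }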
Kudos on innovating around applying LLMs to real-world problems and going beyond the bog-standard approach. It'd be interesting to see a more detailed blog on the technical approach you took!
Love the app and use it as a thought log all the time! It would be great if I didn't have to select the output type upfront and could instead just dictate free-form. I don't always know whether something will turn into a blog post or a short tweet at the time I'm dictating.
If you could summarize the content into a few words (similar to how ChatGPT names its chat threads), and I could generate a specific output type from that list in the history, that would be a game changer for me!
You can already do that! Just deselect all the outputs on the main view (it will just give you a transcript in the history). Then in the history section, after you select the recording, you can tap "Add Output" (or "Add Response", depending on the version of the app you have).
1. on the home screen, tap "Select Outputs"
2. de-select all the prompts: tap "select all" in the lower right twice and none of the prompts will be selected
3. record a new voice note
4. in the output view, tap "Add Outputs"
The Microsoft LoRA is actually the same concept as the one applied here; the original method was applied to LLMs and was only recently brought to diffusion models.
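For anyone who hasn't read the Microsoft paper: the core idea is to freeze the pretrained weights W_0 and train only a low-rank update, roughly

    W' = W_0 + \Delta W = W_0 + BA, \qquad B \in \mathbb{R}^{d \times r},\ A \in \mathbb{R}^{r \times k},\ r \ll \min(d, k)

Nothing in that decomposition is specific to LLMs, which is why it carries over to diffusion models essentially unchanged.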
I made ThinMusic so I could listen to Apple Music on my Linux desktop and also scrobble to last.fm reliably.
Between MusicKit JS, which made ThinMusic possible, and Spotify's SDK, it feels like there could be some cool music-related things built in the near future.
You never drained the longTimer channel, so when you say "We agree that longTimer has fired, right?", that's not quite true. After you call Reset(), you're still getting the value from the first firing, because that's the first time you've read from the channel at all.
The docs are quite clear on this behavior: "Timer will send the current time on its channel after at least duration d." The key words are "at least" -- the docs say nothing about when you choose to read from the channel.
It's even worse: longTimer has fired, and it sent a message on the channel exactly as it was supposed to. Calling Reset() then causes a second firing and a second message.
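A corrected sketch along those lines (the 2-second durations here are illustrative, not from the original post); the printed times come out exactly as one would expect:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        start := time.Now()
        longTimer := time.NewTimer(2 * time.Second)

        // Sleep past the deadline: the timer fires at ~2s and
        // buffers the firing time on its channel.
        time.Sleep(3 * time.Second)

        // First read drains the original firing. The value is the
        // time the timer fired (~2s), not the time we read it (~3s).
        t1 := <-longTimer.C
        fmt.Println("first firing: ", t1.Sub(start))

        // Reset arms the timer again, producing a second firing
        // and a second message at ~5s.
        longTimer.Reset(2 * time.Second)
        t2 := <-longTimer.C
        fmt.Println("second firing:", t2.Sub(start))
    }

The first read returns the time of the original firing (about 2s in) even though the read happens at 3s; the second read reflects the post-Reset firing at about 5s.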
Hey HN - I work at Jack Mobile, where we're reimagining search from the ground up. We combine state-of-the-art NLP, machine learning, and information retrieval to build a product that anyone with a smartphone will want to use every day. And we're just getting started!
We're a small team founded by serial entrepreneurs, based in the Bay Area. We're looking for engineers to help us solve challenging problems in data acquisition, knowledge graphs, search, and question answering. If you are interested in any of the following:
* Search / NLP
* Data Science / ML
* Systems / Infrastructure
* Full Stack / Mobile (iOS|Android)
Almost all of the author's complaints revolve around the Control Panel. Most users won't ever see the Control Panel, just the Settings app.
The Control Panel is a legacy piece they decided to keep around for the "enterprise", just as Windows 10 also still includes the old Internet Explorer.
Cool project! I was surprised to see Go has 167k libraries vs. 153k from Node. The Go libraries are being counted incorrectly -- each vendored copy of a library is counted too (vendoring is common practice for many Go apps) -- so the number is very inflated.
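One way to get a less inflated number would be to skip vendor trees while walking the corpus. A rough Go sketch of the idea (the path heuristics here are my assumption, not how the project actually counts):

    package main

    import (
        "fmt"
        "io/fs"
        "log"
        "path/filepath"
        "strings"
    )

    func main() {
        pkgs := map[string]bool{}
        err := filepath.WalkDir(".", func(path string, d fs.DirEntry, err error) error {
            if err != nil {
                return err
            }
            // Skip vendored trees so each library is counted once.
            if d.IsDir() && d.Name() == "vendor" {
                return filepath.SkipDir
            }
            // Count each directory containing Go source as one package.
            if !d.IsDir() && strings.HasSuffix(path, ".go") {
                pkgs[filepath.Dir(path)] = true
            }
            return nil
        })
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("non-vendored packages:", len(pkgs))
    }

Counting unique directories that contain .go files approximates packages; counting modules would be another reasonable unit.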