Meilisearch is really good for a corpus that rarely changes, in my experience so far. If the documents change frequently and you need those changes available in search results fairly quickly, you end up with tasks pending for hours.
I don't have a good solution for this use case other than maybe the good old RDBMS. I'm open to suggestions, or any way to tweak Meilisearch for documents that get updated every few seconds. We have about 7 million documents of about 5 KB each. What kind of instance do I need to handle this?
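One common way to relieve pressure on the task queue for rapidly changing documents is to coalesce updates client-side and ship them as periodic batches, so N per-document writes become one indexing task. This is a minimal sketch of that idea, not Meilisearch-specific code: `flush_fn` stands in for whatever actually sends the batch (e.g. your client library's document-update call), and the thresholds are illustrative.

```python
import time

class UpdateBuffer:
    """Coalesce frequent per-document updates into periodic batches.

    Sending one batch of N documents creates a single indexing task
    instead of N tasks, which keeps the task queue short. `flush_fn`
    is injected so the sketch stays self-contained.
    """

    def __init__(self, flush_fn, max_docs=10_000, max_age_s=5.0):
        self.flush_fn = flush_fn
        self.max_docs = max_docs
        self.max_age_s = max_age_s
        self.pending = {}      # doc id -> latest version wins
        self.oldest = None     # time the oldest buffered update arrived

    def add(self, doc):
        if not self.pending:
            self.oldest = time.monotonic()
        self.pending[doc["id"]] = doc   # later updates overwrite earlier ones
        if (len(self.pending) >= self.max_docs
                or time.monotonic() - self.oldest >= self.max_age_s):
            self.flush()

    def flush(self):
        if self.pending:
            self.flush_fn(list(self.pending.values()))
            self.pending.clear()

sent = []
buf = UpdateBuffer(sent.append, max_docs=3)
for i in range(7):
    buf.add({"id": i % 3, "v": i})   # the same 3 ids updated repeatedly
buf.flush()
print(len(sent), sum(len(b) for b in sent))   # → 3 7
```

A nice side effect of keying the buffer by document id is that several updates to the same document within one window collapse into a single write of the latest version.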
The best you can do is put Meilisearch on a very good NVMe drive. I am indexing large streams of content (Bsky posts + likes), and I assure you that I tested Meilisearch on a not-so-good NVMe and on a slow HDD, and oh boy, the SSD is so much faster.
I am sending hundreds of thousands of messages and changes (to the likes count) into Meilisearch, and so far, so good. It's been a month, and everything is working fine. We also shipped the new batches stats, which expose a lot of internal information about indexing step timings [1] to help us prioritize.
35 GiB is probably a third of the data I index into Meilisearch just for experimenting, and don't forget about the inverted indexes. You wouldn't want to use an O(n) algorithm to search your documents.
Also, every time you need to reboot the engine you would have to reindex everything from scratch. Not a good strategy, believe me.
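The inverted-index point above is why on-disk usage grows well beyond the raw corpus: the engine stores extra term-to-document structures so lookups don't scan every document. A toy sketch of the idea (illustrative only, not Meilisearch internals):

```python
from collections import defaultdict

docs = {
    1: "rust is fast",
    2: "python is fun",
    3: "rust and python",
}

# Build once: term -> set of document ids. This index is extra storage
# on top of the raw documents, which is why disk usage outgrows the
# corpus itself.
index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.split():
        index[term].add(doc_id)

# A query is a hash lookup, not an O(n) scan over all documents.
print(sorted(index["rust"]))   # → [1, 3]
```

Rebuilding that structure from scratch on every reboot is exactly the cost the comment above is warning about.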
Loki OSS is just a sales pitch for their managed service. It doesn't work well without dedicating significant time to tweaking and configuring it. The documentation is confusing at best if you want to do anything serious. You also have to be ready to handle support calls if you open it up for others to use, because it WILL have issues fairly regularly once you have a decent volume of logs and the query range is more than a day or two.
Unless you have the bank to go with their managed service, don't bother.
What's the best option to buy a Framework laptop outside of the US? I tried shipping to a parcel forwarder, but they also didn't accept non-US debit cards.
I shipped mine to a friend who forwarded the package to me. They figured out what I was doing, and I got an e-mail warning that they'd ship the laptop, but that I wouldn't get warranty coverage outside the country they were shipping to, so I'm not sure I'd recommend doing this.
From what I'm seeing, the upgrade path is essentially dump and restore. This was the case for moving from EdgeDB v1 to v2 as well. Since EdgeDB is a fast-moving target, this creates a lot of friction for older databases to upgrade.
Are there plans for replication support or a more robust upgrade path?
I'm a long-term vim user and now exclusively a neovim user. I adopted this the first time I came across it. It's pretty neat; it does break from time to time, but there's always a GitHub issue describing a workaround. It's pretty stable these days and breaks far less than it used to.
I've been using chezmoi too, and this is the only feature I miss. It'd be interesting to know what solutions folks are using. Chezmoi has some discussions around it, mostly recommending run scripts.
I didn't discover Chezmoi until seeing this thread (sigh). I developed a tool, filetailor, with an almost identical goal (dotfile management while accounting for differences across machines). It uses Python and YAML, but from what I can tell is similar in concept to Chezmoi.
One thing I like about filetailor that I didn't see in Chezmoi is the ability to surround code with a comment specifying which machines it should be commented or uncommented for. It's easier than templates in some situations.
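For readers who haven't seen the comment-marker approach, here is a rough sketch of the mechanic. The marker syntax (`# filetailor start host=NAME` ... `# filetailor end`) is invented for illustration; filetailor's real syntax may differ.

```python
def apply_markers(lines, host):
    """Comment/uncomment blocks delimited by per-machine markers.

    Lines inside a marked block are uncommented on the matching
    machine and commented out everywhere else. Marker syntax here
    is hypothetical, purely to show the idea.
    """
    out, block_host = [], None
    for line in lines:
        s = line.strip()
        if s.startswith("# filetailor start host="):
            block_host = s.split("host=", 1)[1]
            out.append(line)
        elif s == "# filetailor end":
            block_host = None
            out.append(line)
        elif block_host == host and s.startswith("# "):
            out.append(s[2:])          # uncomment on the matching machine
        elif block_host is not None and block_host != host and not s.startswith("#"):
            out.append("# " + line)    # comment out everywhere else
        else:
            out.append(line)
    return out

cfg = [
    "alias ls='ls --color'",
    "# filetailor start host=laptop",
    "# export SCALE=2",
    "# filetailor end",
]
print(apply_markers(cfg, "laptop")[2])   # → export SCALE=2
```

Compared with templating, the file stays valid shell on every machine; the tool only flips comment prefixes in place.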
It works great, but there are probably tons of bugs that only show up when it's used by someone other than me. I don't have a CS background, and this was my first big hobby project.
I've partially adopted `aconfmgr` for the other stuff, but haven't put in the effort to refine my config, partly because I only have one applicable (arch-based) system at the moment.
I mean, if that is the single thing stopping your company from using it, then just add support for it. The cost of the DD monthly bill more than justifies the week or less it would take to add.
SSO is about making a user account inside a service provider (e.g. TFA) which mirrors that same user account in the identity provider (e.g. Okta). A reverse proxy isn't able to write to the upstream application's user store or otherwise assert the identity of the current user to the upstream application, as far as I'm aware. It could do some kind of binary proxy-or-don't-proxy based on a valid assertion from the IdP, but the application would just attribute all traffic to a single user.
Or is there some kind of gateway standard that I'm unaware of?
This can be used to add whatever authn/authz you require to apps that don't even support authn/authz. I'm using Traefik ForwardAuth with Keycloak for Jaeger SSO in a couple of places.
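For context, this is roughly what the Traefik ForwardAuth setup looks like in a dynamic configuration file. Hostnames and the router/service names are placeholders; `16686` is Jaeger's default UI port, and the forward-auth `address` would point at whatever endpoint validates the session against Keycloak in your setup.

```yaml
# Traefik dynamic config (file provider) -- names and hosts illustrative
http:
  middlewares:
    keycloak-auth:
      forwardAuth:
        address: "https://auth.example.com/verify"  # your auth endpoint
        authResponseHeaders:
          - "X-Forwarded-User"   # identity header passed to the upstream

  routers:
    jaeger:
      rule: "Host(`jaeger.example.com`)"
      middlewares:
        - keycloak-auth
      service: jaeger-svc

  services:
    jaeger-svc:
      loadBalancer:
        servers:
          - url: "http://127.0.0.1:16686"   # Jaeger UI default port
```

Note this matches the limitation described above: unless the upstream app reads the identity header, all authenticated traffic still looks like one anonymous user to it; the proxy only gates access.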
That would be so nice. But if they're planning to 'add it to our enterprise plans', then I doubt your PR would be accepted, leaving you to manage a fork.
If they truly wanted to build an open source community around their product, the "enterprise" part would just be "we host it for you" and not "we gatekeep features that we think we can extort big companies into paying for".
That is: I wouldn't hold off on a PR just because they said they're going to get around to it. If the PR works and is merged, that's one less part they have to write. If they don't merge it, then the bad faith you're describing will be a concrete fact rather than speculation, and will serve as a warning to others not to bother submitting more PRs.