Hacker News | searchfaster's comments

Thank you!


If you are interested in trying out an alternative, please let me know.


I would. We run two 128 GB enterprise Algolia clusters, with 200 million documents, at significant cost.


Please take a look at https://searchera.io for a demo. My personal email is on my profile.


Mind if I reach out to you to chat about your experience running search at scale? How do I contact you?


You should maybe take a look at Coveo.


Yes, this is something our startup is working on fixing. How you sort, and which fields are prioritized in a search, are all configurable at query time.

If you pay for a million records, you should be able to store a million records.

https://searchera.io

We are currently in beta and our website is not fully finished, but the demos give an idea of what is possible.
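To make the "configurable at query time" idea concrete, here is a sketch of what such a query payload could look like. The parameter names (`sort`, `search_fields`, `weight`) are purely illustrative, not Searchera's actual API:

```python
# Illustrative only: sort order and field boosts chosen per query,
# instead of being baked into separate replica indexes at index time.
query = {
    "q": "wireless headphones",
    "sort": [{"field": "price", "order": "asc"}],
    "search_fields": [
        {"field": "title", "weight": 3},        # matches in title count more
        {"field": "description", "weight": 1},  # than matches in description
    ],
}
```

A different request could sort by rating descending with other field weights, without any re-indexing.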


Nice.

This brings up a related point.

How strong are the defensibility and separately the network effects in Algolia's business?

I guess once they have a customer, it might be annoying for that customer to switch. Is that accurate?

But is there any reason for the 101st customer to use Algolia other than the brand? My hunch says no, and that there aren't any network effects.


Thanks! Actually, 40% of our current beta users are existing Algolia customers. Maintaining separate indexes for every sorting/ranking option and intentionally restricting the application to stay under limits are drawbacks that many complain about.

Our on-premises option is something a few potential customers have been interested in.


So won't you have the same problem?

Limited customer lock in?


Of course. We are doing our best to build a rock-solid product that is fast, flexible, and cost-effective, with excellent support.

Finally, it is the customer's call. They are going to choose what works best for them :)


The response times are six times as long as with Algolia; I assume that's because of latency.

Will you offer distributed datacenters as well?


Yes, we are currently hosted only in NY. Can you please ping beta.searchera.net and check the latency from your location?

Once we are out of beta, we will offer distributed datacenters, starting with the US West Coast and Europe. Installing on your own servers / cloud provider is another option.

If you would like to try it out, I can always bring up a host quickly next to your location on DigitalOcean or AWS. Please send me an email at [email protected]


90ms from Europe

I signed up for your beta and will follow your progress.

Is your solution based on Solr/Lucene/Elasticsearch?


Ours is a custom index written mostly in C with a bit of x86 assembly. It is very lightweight and extremely fast, even without replica indexes for every sort order.
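For readers unfamiliar with how search indexes work, here is a toy sketch of the core data structure most engines build on, an inverted index mapping each term to the documents containing it. This is only an illustration of the concept, not the custom C index described above:

```python
# Toy inverted index: term -> sorted list of document ids.
# AND-queries are answered by intersecting posting lists.
from collections import defaultdict

def build_index(docs):
    index = defaultdict(list)
    for doc_id, text in enumerate(docs):
        for term in set(text.lower().split()):
            index[term].append(doc_id)
    return index

def search(index, *terms):
    postings = [set(index.get(t, [])) for t in terms]
    return sorted(set.intersection(*postings)) if postings else []

docs = ["fast search engine", "fast logging", "search at scale"]
idx = build_index(docs)
print(search(idx, "fast", "search"))  # doc 0 is the only one with both terms
```

A production engine layers compression, ranking, and sort orders on top of this; the comment above is saying those layers are handled without duplicating the index per sort order.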

Thanks for signing up! We'll get you started as soon as we have our additional servers up.


Really cool! Highlighting what is currently being played would make it even better.


Plenty of third-party devices support Google Voice, though. I have been using an "ooba" (not sure about the name) as my home phone for 4 years now.


Oh yeah there are plenty of hacks to make it work, but Google worked to remove native SIP support from their service, which was originally there in GC. It's too bad, because it was handy as hell.


*Ooma likely, e.g. Ooma Telo.

Didn't know GV integrated w/ it though, cool.


> I can find the code and repost it.

Here it is: https://github.com/yahoo/mdbm


So Yahoo took the code and did all sorts of stuff to it. I've got the code pre-Yahoo, and that's the stuff that is fast.

It's part of BitKeeper, which is open source.


I did a test of bare-metal Linux vs. containers on bare-metal Linux for our product. In this case it is just two processes: a "search" component and an "analytics and logging" component. Under heavy load, search uses a lot of disk reads, CPU, and network, while the logging module uses a lot of disk writes.

The comparison was done on:

1) Ubuntu Server 16.04, with both processes running as they usually do (search with higher priority).

2) CoreOS, with each process running in a separate rkt container (search with higher priority).

I saw no change in CPU / Network / Disk access metrics and my throughput remained the same.

Please note, though, that in my case I do not have very many microservices, as is typical elsewhere. I also use host networking, and I had no need for orchestration services like Kubernetes, Swarm, etc.

TL;DR: No change between running the product in containers vs. no containers, given host networking, minimal containers, and no orchestration.
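The comparison above boils down to running the same workload in each environment and comparing operations per second. A minimal sketch of such a harness (my own illustration, not the actual benchmark code used):

```python
# Run the same operation in each environment and compare throughput.
import time

def throughput(op, n=100_000):
    """Return how many times per second op() completes over n runs."""
    start = time.perf_counter()
    for _ in range(n):
        op()
    elapsed = time.perf_counter() - start
    return n / elapsed
```

Run this once on bare metal and once inside the container (same hardware, same workload); roughly equal numbers mean the container layer adds no measurable overhead.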


As a developer who just spent last night building a visualization tool to analyze Reddit data, I totally appreciate the value of a good designer.

It is very hard to balance simplicity, usability, and features.

I had to cut short many features and options because the interface was getting too complex.


Yes, it's left to the API provider.

Try https://learngraphql.com. It gives a very good idea of what GraphQL is, way better than reading the official documentation or spec.

Once I went through it, I made up my mind that our API-as-a-service product should support GraphQL.

Note: I am in no way affiliated with the site. It is free, and I finally understood what GraphQL is in 15 minutes.
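For anyone who hasn't seen it, the core idea of GraphQL in one example: the client asks for exactly the fields it needs in a single query, and the response JSON mirrors the query's shape. The schema here (`user`, `posts`) is made up for illustration:

```python
# A GraphQL query document (hypothetical schema), sent as a string to the API:
query = """
{
  user(id: "42") {
    name
    posts(limit: 2) { title }
  }
}
"""

# The server's response echoes the query's shape under a "data" key,
# returning only the requested fields -- no over- or under-fetching:
response = {
    "data": {
        "user": {
            "name": "Ada",
            "posts": [{"title": "Hello"}, {"title": "World"}],
        }
    }
}
```

Contrast this with REST, where the same data might take one call to `/users/42` plus another to `/users/42/posts`, each returning fields the client never uses.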


Thanks for sharing that link, looks really good. I'll be going through it after work.


Agreed. I am waiting for powerful yet cost-effective bare-metal servers.

