True, there are already some tools available for improving Kubernetes resource usage, and some teams use the Vertical Pod Autoscaler (VPA) that ships with K8s. With Optimize Live we focused on the developer experience and the precision of our recommendations.
Optimize Live looks at all historical data and trends and uses our machine learning algorithm to recommend improvements. From an engineering perspective, you don't need to write YAML files, write load tests, or work your way through complex configurations. The recommendations for more optimal configurations come directly as a YAML file, which you can either deploy manually after review or apply automatically, putting optimization on autopilot once you feel comfortable with the tool.
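For illustration, such a recommendation might land as a standard Kubernetes `resources` block like the one below. The values and layout here are made up for the example, not Optimize Live's exact output:

```yaml
# Hypothetical recommendation, applied to a Deployment's container spec;
# all values are illustrative.
resources:
  requests:
    cpu: "250m"      # lowered from an over-provisioned request
    memory: "512Mi"  # raised to avoid OOM kills
  limits:
    cpu: "500m"
    memory: "512Mi"
```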
My opinion is certainly biased, but at the moment we think we have the simplest solution to the problems of over-provisioning, CPU throttling, and OOM issues.
I'd say it comes down to how much risk you are willing to take.
If you use a logo without permission, especially from a larger corporation, you might run into legal issues or even jeopardize a deal that is in the making with your company.
A public record clearly stating that company X is using your product can help mitigate the risk (a public job posting, article, webinar, etc.).
And of course, always take down the logo if someone from the company complains. The best approach is to get permission from the company's marketing, business, legal, or sometimes even hiring department, depending on the market you are in. I'd recommend going for case studies that highlight the company and its solution, and then getting permission to use the logo that way.
You can see the exact pricing for your desired deployment configuration within Oasis when you create a deployment. In general, the price depends on the cloud provider and the amount of main memory and storage you select. Pricing is usage-based, so if you only need a deployment for one hour, you only pay for that hour. Hope this helps at least a bit.
Supporting full distributed ACID transactions is the holy grail for databases, especially if you want them with minimal performance impact. In single-instance settings we guarantee full ACID semantics, and in a cluster setting ArangoDB supports atomic operations. Getting full distributed transactions working while minimizing their performance impact on writes is something we are still investigating. The new streaming transactions API in v3.5 also improves the handling of transactions in a cluster, but these are not distributed transactions.
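As a rough sketch of what the streaming transactions API looks like from a driver's point of view, here is how you might begin, use, and commit a transaction with the python-arango driver. The server address, credentials, and the `accounts` collection are assumptions for the example:

```python
from arango import ArangoClient

# Connect to a local ArangoDB server (address/credentials are assumptions).
client = ArangoClient(hosts="http://localhost:8529")
db = client.db("_system", username="root", password="passwd")

# Begin a stream transaction that declares write access to "accounts".
txn_db = db.begin_transaction(write="accounts")
try:
    accounts = txn_db.collection("accounts")
    accounts.insert({"_key": "alice", "balance": 100})
    accounts.update({"_key": "alice", "balance": 58})
    txn_db.commit_transaction()  # make both operations visible atomically
except Exception:
    txn_db.abort_transaction()   # roll back everything on failure
    raise
```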
I think we have to be a bit more precise here. ArangoDB supports the document, key/value, and graph data models. It is not really optimized for large time-series use cases, which might need windowing or other features; InfluxDB or TimescaleDB might provide better characteristics there. However, for the supported data models we found a way to combine them quite efficiently, as the sketch below shows.
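To give a flavor of that combination, a single AQL query can traverse a graph and filter on ordinary document attributes at the same time. A minimal sketch with python-arango follows; the `social` graph, the `users/alice` vertex, and the attribute names are all hypothetical:

```python
from arango import ArangoClient

db = ArangoClient(hosts="http://localhost:8529").db(
    "_system", username="root", password="passwd"
)

# Traverse the hypothetical "social" graph one to two steps out from
# "users/alice", filtering on document attributes along the way.
cursor = db.aql.execute(
    """
    FOR friend IN 1..2 OUTBOUND 'users/alice' GRAPH 'social'
      FILTER friend.active == true
      RETURN { name: friend.name, city: friend.city }
    """
)
print(list(cursor))
```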
Many search engines operate on data stored in JSON format. Hence, integrating a search engine like ArangoSearch as an additional layer on top of the existing data models is no magic, but it makes a lot of sense. Letting users combine the data models with search was then a rather obvious step for us.
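For instance, an ArangoSearch query can be expressed in plain AQL alongside the other models. Here is a minimal sketch, again via python-arango; the `articlesView` view and the `text_en` analyzer are assumptions for the example:

```python
from arango import ArangoClient

db = ArangoClient(hosts="http://localhost:8529").db(
    "_system", username="root", password="passwd"
)

# Full-text search over a hypothetical ArangoSearch view "articlesView",
# ranked by BM25 relevance.
cursor = db.aql.execute(
    """
    FOR doc IN articlesView
      SEARCH ANALYZER(PHRASE(doc.body, "multi-model database"), "text_en")
      SORT BM25(doc) DESC
      LIMIT 10
      RETURN { title: doc.title }
    """
)
print(list(cursor))
```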