Hey! Full disclosure -- I am a co-founder at shadeform.ai. CoreWeave has pretty much gone offline for anyone who can't shell out a ton of cash for a multi-year commitment. Over the last couple of months they've stopped letting anyone onto their platform, though you're the first I've heard of being deactivated.
If you have a smaller scale use case and need GPUs, feel free to reach out ([email protected]), would love to help!
So, how do you plan to commercialize your product? I've noticed tons of cloud-based chatbot app providers built on top of the ChatGPT and Azure APIs (they ask users to provide their own API key). Enterprises will still be very wary of putting their data on these multi-tenant platforms, and I feel that even encryption won't be enough. This screams for virtual private LLM stacks for enterprises (the only way to fully isolate).
We have a cloud offering at https://trypromptly.com. We offer enterprises the ability to host their own vector database to maintain control of their data, and we also support interacting with open-source LLMs from the platform. Enterprises can bring up https://github.com/go-skynet/LocalAI, run Llama or other models, and connect to them from their Promptly LLM apps.
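To make the "bring up LocalAI and connect to it" flow concrete, here is a minimal sketch of calling a self-hosted LocalAI instance through its OpenAI-compatible chat completions endpoint. The URL and model name are assumptions (they depend on how LocalAI is deployed and which model is loaded); only the payload shape follows the OpenAI API convention that LocalAI mirrors.

```python
import json
import urllib.request

# Assumed address of a LocalAI instance running inside the enterprise VPC;
# adjust host/port to your deployment.
LOCALAI_URL = "http://localhost:8080/v1/chat/completions"


def build_chat_request(model, prompt):
    """Build an OpenAI-compatible chat completion payload for LocalAI."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }


def ask_local_llm(model, prompt, url=LOCALAI_URL):
    """POST the request to the LocalAI server and return the reply text."""
    payload = json.dumps(build_chat_request(model, prompt)).encode()
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


# Example (requires a running LocalAI instance with a model loaded):
#   reply = ask_local_llm("llama-2-7b-chat", "Summarize our Q3 report.")
```

Because the endpoint speaks the OpenAI wire format, an app built against the OpenAI API can be pointed at LocalAI by swapping the base URL, with no data leaving the VPC.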
We also provide support and some premium processors for enterprise on-prem deployments.
But, in order to generate the vectors, I understand that it's necessary to use OpenAI's Embeddings API, which would grant OpenAI access to all client data at the time of vector creation. Is this understanding correct? Or is there a solution for creating high-quality semantic embeddings, similar to OpenAI's, in a private cloud or on-premises environment?
Enterprises with Azure contracts are using the embeddings endpoint from Azure's OpenAI offering.
It is possible to use Llama or BERT models to generate embeddings using LocalAI (https://localai.io/features/embeddings/). This is something we are hoping to enable in LLMStack soon.
"Enterprises can bring up https://github.com/go-skynet/LocalAI, run Llama or others and connect to them from their Promptly LLM apps" - So they spin up GPU instances, host whatever model in their VPC, and it connects to your SaaS stack? What are they paying you for in this scenario?
We built a new framework to automate UI actions on macOS. This framework lets you write automation as behavior-driven steps without worrying about UI system inconsistencies like retries, timeouts, etc. Right now the framework works inside Anka macOS VMs; we plan to make it available on non-virtualized macOS systems as well.
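To illustrate the idea of steps that absorb retries and timeouts (not the framework's actual API, which isn't shown here; `ui_step` and the probe callable are hypothetical names), a behavior-driven step wrapper might look roughly like this:

```python
import time


def ui_step(description, probe, timeout=10.0, interval=0.5):
    """Retry `probe` until it returns a truthy value or `timeout` elapses.

    `probe` is a zero-argument callable checking the UI state (e.g. "is the
    login window visible?"). Folding the retry/timeout loop into the step
    lets test scripts read as plain behavior-driven statements.
    """
    deadline = time.monotonic() + timeout
    while True:
        result = probe()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError(f"step failed: {description}")
        time.sleep(interval)


# Hypothetical usage, assuming some find_window() UI query exists:
#   ui_step("the login window appears", lambda: find_window("Login"))
```

The point is that flaky UI timing lives in one place; the actual framework presumably does something similar under the hood for each step.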
Mac Scan detects known security vulnerabilities in real time in packages, libraries, and other downloads on macOS. If you are a developer on a Mac, you end up downloading all kinds of frameworks and libraries; Mac Scan checks all of these downloads against known vulnerabilities as they arrive.
I just tried downloading it from https://nixos.org/download.html#nix-install-macos and it looks like the scan doesn't detect it. I am asking our developer how quickly we can add it to the Mac Scan tool.