Kai | Engineering (Multiple) | Onsite (San Jose, CA) | Full-time
AI + Cybersecurity. Our AI team is well funded and growing. Although we have a dedicated engineering team, the AI team is hiring its own engineers to work closely with our applied AI scientists to build the platform and pipelines.
Microsoft used to do this with their Azure Stack Development Kit. The ASDK was a single-node "sandbox" meant to emulate the entire Azure cloud locally. They may have something similar now, but pared back.
Toyota restricted the sale of its hydrogen fuel cell vehicles to specific, qualified customers who lived or worked near existing, functional hydrogen refueling stations. I remember looking into them when they were first released, but I realized I wasn’t eligible, and the fact that Toyota restricted the sale meant there was a huge risk in buying one.
With all the recent outrage and lawsuits, I wonder how many buyers actually did their due diligence and weighed the risk before committing to them? Or maybe the huge fuel subsidy was seen as a win even if this event played out? Idk but I commend Toyota for taking the risk and going for it.
Approximately zero regular consumers purchased hydrogen cars. They were all fleet purchases designed primarily to burnish eco-friendly credentials, like this:
"This new initiative reinforces Air Liquide's commitment to decarbonizing transportation and accelerating the shift toward sustainable and low-carbon mobility solutions."
> and backed out when the tax credits disappeared...
As they should. If the terms of the deal change, you need to start over with the business case and financials.
If you want someone to be mad at, it’s the politicians making these bad tax credit decisions. Not the companies trying to respond to the tax credit incentives. Getting companies to build things they otherwise wouldn’t is the entire purpose of tax credits.
This video is pretty great. “The joke is this is not a joke” comment in there… how many of us understood everything that was said and then felt like maybe we need a different hobby…
This is such a great website. I have enjoyed reading the articles in the past. It was the final push that inspired me to build my own wheel set instead of buying a complete one when I was building my new mountain bike piece by piece. The art and zen (and frustration of trying to feed a shift/brake line through a frame), I tell ya.
If they have an Apple TV, you can just install the app and use it as an exit node. I would check out the devices that are on their network currently, chances are you can use one of those.
I don’t get it? Are they sharing a quote they liked or taking credit for it? Maybe they just saw the YouTube video and decided to turn it into the page?
Edit:
Seems like a way to show they’re looking for roles, I guess.
It sounds like you don’t need immediate LLM responses and can batch process your data nightly? Have you considered running a local LLM? You may not need to pay for API calls. Today’s local models are quite good. I started off with CPU-only inference, and even that was fine for my pipelines.
Though I haven't done any extensive testing, I personally could easily get by with current local models. The only reason I don't is that the hosted ones all have free tiers.
I started off using gpt-oss-120b on CPU. It uses about 60-65 GB of memory, but my workstation has 128 GB of RAM. If I had less RAM, I would start with the gpt-oss-20b model and go from there. Look for MoE models, as they are more efficient to run.
My old Threadripper Pro was seeing about 15 tokens/sec, which was quite acceptable for the background tasks I was running.
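A nightly batch pipeline against a local model can be sketched roughly as below, assuming an OpenAI-compatible local server such as llama.cpp's llama-server or Ollama; the URL, port, model name, and prompt are illustrative assumptions, not a specific setup anyone described:

```python
import json
import urllib.request

# Assumed local OpenAI-compatible endpoint (llama.cpp's llama-server and
# Ollama both expose one); adjust host, port, and model name for your setup.
API_URL = "http://localhost:8080/v1/chat/completions"
MODEL = "gpt-oss-120b"

def batch(records, size):
    """Split queued records into fixed-size chunks for the nightly run."""
    return [records[i:i + size] for i in range(0, len(records), size)]

def summarize(text, timeout=300):
    """One blocking completion call; at ~15 tokens/sec on CPU, budget generous timeouts."""
    payload = {
        "model": MODEL,
        "messages": [{"role": "user", "content": f"Summarize:\n{text}"}],
    }
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

if __name__ == "__main__":
    # Nightly driver: work through everything queued during the day.
    queued = ["doc one ...", "doc two ...", "doc three ..."]
    for chunk in batch(queued, 2):
        for doc in chunk:
            print(summarize(doc))
```

Since latency doesn't matter for an overnight run, a single blocking call per record is usually all the orchestration you need.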
10 tps, maybe, given the Spark's hobbled memory bandwidth. That's too slow, though. That thread is all about training, which is more compute-intensive.
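The bandwidth-bound guess can be sanity-checked with a simple roofline estimate: each decoded token has to stream the model's active weights through memory once, so tokens/sec is capped at bandwidth divided by bytes per token. A sketch, where the example numbers are illustrative assumptions and real throughput lands well below this ceiling:

```python
def est_decode_tps(mem_bw_gbps, active_params_billion, bytes_per_param):
    """Roofline ceiling for decode speed: tps <= memory bandwidth / bytes-per-token,
    since every generated token streams the active weights through memory once."""
    bytes_per_token = active_params_billion * 1e9 * bytes_per_param
    return mem_bw_gbps * 1e9 / bytes_per_token

# Illustrative only: ~270 GB/s bandwidth, ~5B active MoE parameters
# at 4-bit quantization (~0.5 bytes/param).
print(est_decode_tps(270, 5, 0.5))  # -> 108.0 tokens/sec, a theoretical ceiling
```

The gap between a ceiling like this and observed single-digit throughput is why low memory bandwidth, not compute, is usually the complaint for decode-heavy inference.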
A couple of DGX Stations are more likely to work well for what I have in mind. But at this point, I'd be pleasantly surprised if those ever ship. If they do, they will be more like $200K each than $100K.
Principal Software Engineer, AI Platform: https://ats.rippling.com/kai-cyber-inc/jobs/a6d62968-4831-42...
Senior Software Engineer, AI Platform: https://ats.rippling.com/kai-cyber-inc/jobs/d567048a-7b1b-4e...
We are also looking for a few more applied AI scientists, as well as filling many other roles. I invite you to take a look:
https://www.kai.security/careers#open