Hacker News | LTL_FTC's comments

Kai | Engineering (Multiple) | Onsite (San Jose, CA) | Full-time

AI + Cybersecurity. Our AI team is well funded and growing. Although we have a dedicated engineering team, the AI team is looking for dedicated engineers to work closely with the applied AI scientists to build the platform and pipelines.

Principal Software Engineer, AI Platform: https://ats.rippling.com/kai-cyber-inc/jobs/a6d62968-4831-42...

Senior Software Engineer, AI Platform: https://ats.rippling.com/kai-cyber-inc/jobs/d567048a-7b1b-4e...

We are also looking for a few more applied AI scientists, as well as many other roles. I invite you to take a look:

https://www.kai.security/careers#open


Microsoft used to with their Azure Stack Development Kit. The ASDK was a single-node "sandbox" meant to emulate the entire Azure cloud locally. They may have something similar now, but pared back.


Those keys are easily replaced, my friend.


Toyota restricted the sale of its hydrogen fuel-cell vehicles to specific, qualified customers who lived or worked near existing, functional hydrogen refueling stations. I remember looking into them when they were first released, but I realized I wasn't eligible, and the fact that Toyota restricted sales meant there was a huge risk in buying one.

With all the recent outrage and lawsuits, I wonder how many buyers actually did their due diligence and weighed the risk before committing to them? Or maybe the huge fuel subsidy was seen as a win even if this event played out? Idk but I commend Toyota for taking the risk and going for it.

Edit: typo


Approximately zero regular consumers purchased hydrogen cars. They were all fleet purchases designed primarily to burnish eco-friendly credentials, like this:

"This new initiative reinforces Air Liquide's commitment to decarbonizing transportation and accelerating the shift toward sustainable and low-carbon mobility solutions."

https://www.airliquide.com/group/press-releases-news/2025-11...

Of course, Air Liquide would also profit massively from building hydrogen infra if it did become commonplace.


Well… I did/do see many around the Bay Area. Especially during the morning commute. But I agree, overall it was a low volume car.


Funny thing, Air Products. They were going to build a massive green hydrogen plant in upstate NY and backed out when the tax credits disappeared...

https://www.airproducts.com/company/news-center/2025/02/0224...


> and backed out when the tax credits disappeared...

As they should. If the terms of the deal change, you need to start over with the business case and financials.

If you want someone to be mad at, it’s the politicians making these bad tax credit decisions. Not the companies trying to respond to the tax credit incentives. Getting companies to build things they otherwise wouldn’t is the entire purpose of tax credits.


Hydrogen systems just don't make sense. Neither do molecular-hydrogen fuel cells.

Now, green hydrogen for ammonia, and ammonia fuel cells? Yes.


This video is pretty great. “The joke is this is not a joke” comment in there… how many of us understood everything that was said and then felt like maybe we need a different hobby…


This is such a great website. I have enjoyed reading the articles in the past. It was the final push that inspired me to build my own wheelset instead of buying a complete one when I was building my new mountain bike piece by piece. The art and zen (and frustration of trying to feed a shift/brake line through a frame), I tell ya.


If they have an Apple TV, you can just install the app and use it as an exit node. I would check out the devices that are on their network currently, chances are you can use one of those.
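Assuming this refers to Tailscale (whose tvOS app can advertise the Apple TV as an exit node), routing your own traffic through their network is a couple of CLI commands. The hostname below is a placeholder:

```shell
# List exit nodes currently advertised on your tailnet
# (the Apple TV should appear here once the app is set up).
tailscale exit-node list

# Route all of this machine's traffic through the Apple TV.
tailscale set --exit-node=<apple-tv-hostname>

# Stop using the exit node later by clearing the setting.
tailscale set --exit-node=
```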


I don’t get it? Are they sharing a quote they liked or taking credit for it? Maybe they just saw the YouTube video and decided to turn it into the page?

Edit: Seems like a way to show they’re looking for roles, I guess.


It sounds like you don't need immediate LLM responses and can batch-process your data nightly? Have you considered running a local LLM? You may not need to pay for API calls. Today's local models are quite good. I started off with CPU-only inference, and even that was fine for my pipelines.


Though I haven't done any extensive testing, I personally could easily get by with current local models. The only reason I don't is that the hosted ones all have free tiers.


Agreed, I'm pretty amazed at what I'm able to do locally just with an AMD 6700XT and 32GB of RAM. It's slow, but if you've got all night...


I haven't thought about that, but really want to dig in more now. Any places you recommend starting?


I started off using gpt-oss-120b on CPU. It uses about 60-65 GB of memory or so, but my workstation has 128 GB of RAM. If I had less RAM, I would start off with the gpt-oss-20b model and go from there. Look for MoE models, as they are more efficient to run.

My old Threadripper Pro was seeing about 15 tps, which was quite acceptable for the background tasks I was running.
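Those memory figures line up with a rough rule of thumb: weight footprint is roughly parameter count times bits per weight. A quick sketch (my own back-of-envelope, using the published ~117B/~21B parameter counts and ~4.25-bit MXFP4 weights for the gpt-oss models — treat the numbers as approximations):

```python
def weight_memory_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate model weight footprint in GB (excludes KV cache
    and runtime buffers, which add several more GB)."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

# gpt-oss-120b: ~117B params at ~4.25 bits/weight lands right in
# the 60-65 GB range observed above.
print(round(weight_memory_gb(117, 4.25)))  # ~62 GB of weights

# gpt-oss-20b: ~21B params fits in a 16 GB machine.
print(round(weight_memory_gb(21, 4.25)))   # ~11 GB of weights
```

This is why MoE models are attractive on CPU: all the weights must fit in RAM, but only the active experts are streamed per token, so decode speed stays usable.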


Can you suggest any good LLMs for CPU?


Following.


Hey Olivaw, saw a comment of yours asking about planners. Wanted to reply, but it's expired. Check out bullet journaling.


Thanks for the reply!

Bullet journaling is neat, but I'm far too wacky with my notes to stick to that kind of structure.

I have various other structures I implement, but they're just hodgepodges of things.


I mean, you could put together a cluster of DGX Sparks (8 of them) and hit 100 tps with high concurrency:

https://forums.developer.nvidia.com/t/6x-spark-setup/354399/...

Or a single user at about 10 tps.

This is probably around $30k if you go with the 1 TB models.


I'd love for more people to try running local LLMs at the speeds they actually want, and face the music of the fans, the heat, and the power bills.

When people talk about the cost and requirements of AI, those who haven't tried it can't grasp what they're talking about.


10 tps, maybe, given the Spark's hobbled memory bandwidth. That's too slow, though. That thread is all about training, which is more compute-intensive.

A couple of DGX Stations are more likely to work well for what I have in mind. But at this point, I'd be pleasantly surprised if those ever ship. If they do, they will be more like $200K each than $100K.
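The "10 tps, maybe" intuition can be sanity-checked with a bandwidth ceiling: for a memory-bandwidth-bound MoE model, each decoded token must stream the active weights from memory once, so tps can't exceed bandwidth divided by active-weight bytes. A sketch using approximate public figures (~273 GB/s for the Spark's LPDDR5X, ~32B active parameters for Kimi K2, 4-bit weights — all assumptions, not measurements):

```python
def decode_tps_ceiling(bandwidth_gb_s: float,
                       active_params_billion: float,
                       bits_per_weight: float) -> float:
    """Upper bound on single-stream decode speed when generation is
    memory-bandwidth bound: every token streams the active weights once."""
    bytes_per_token = active_params_billion * 1e9 * bits_per_weight / 8
    return bandwidth_gb_s * 1e9 / bytes_per_token

# DGX Spark decoding a ~32B-active-parameter MoE at 4 bits/weight:
print(decode_tps_ceiling(273, 32, 4))  # ~17 tps theoretical ceiling
```

Real-world overhead (KV cache reads, inter-node traffic in an 8-way cluster) eats into that ceiling, which is consistent with the ~10 tps single-user figure in the linked thread. High concurrency helps because batched requests share each weight read.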


I linked results where the user ran Kimi K2 across his 8-node cluster. Inference results are listed for 1, 10, and 100 concurrent requests.

Edit to add:

Yeah, those Stations with the GB300 look more along the lines of what I would want as well, but I agree, they're probably way beyond my reach.

