Hacker News | theturtletalks's comments

Each store has its own Openfront setup, with its own database, checkout, and payment (Stripe & PayPal) accounts. The marketplace just connects to the store’s API in real time, fetching products, adding items to the cart, and handing off to checkout. When a customer pays, the funds go directly to the store’s Stripe account.

There’s no central database or shared backend. The marketplace is simply a discovery layer that sits on top of fully independent stores.


If I have 5 items in my marketplace basket that means my payment details need to go from the marketplace to five separate stores and then on to Stripe, and that means Stripe is going to see five transactions at about the same time using my card details. They'll flag that as fraud and decline them.

Yeah, you're right. Amazon isn't really about the software at this point. It's the lock-in. Sellers can't leave without losing all their reviews, rankings, and years of optimization. That's the moat.

I'm not building better software to compete directly with Amazon. I'm building infrastructure that sellers can truly own, so lock-in stops being such a powerful moat.

Traditional marketplaces charge 15-30% because they provide checkout, payments, and the customer database. But if stores already own that infrastructure, the only thing you really need is discovery. And discovery doesn't have to cost anything.

Our marketplace is essentially just a directory. Stores keep their own checkout and process their own payments. We query their API and render the results conversationally. And because the code is open source, if we ever became like Amazon, anyone could fork it and launch a competing directory.
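The discovery-layer idea above can be sketched in a few lines: the directory aggregates each independent store's product feed and tags every product with the store it came from, while checkout stays on the store itself. The type names and store URLs here are illustrative, not Openfront's actual API.

```typescript
// A product as returned by one store's API (hypothetical shape).
type Product = { id: string; title: string; price: number };

// One independent store's feed, fetched from its own Openfront instance.
type StoreFeed = { storeUrl: string; products: Product[] };

// The directory just flattens all feeds, attributing each product to its
// store. There is no shared database; checkout happens at storeUrl.
function buildDirectory(
  feeds: StoreFeed[],
): (Product & { storeUrl: string })[] {
  return feeds.flatMap((f) =>
    f.products.map((p) => ({ ...p, storeUrl: f.storeUrl })),
  );
}
```

Because the directory holds no payment or customer state, forking it really does mean forking the whole marketplace.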


Traditional marketplaces provide a range of services, from unified delivery to complete logistics management. They also provide KYC filtering and fraud screening, which considerably lowers merchant risk.

On top of that, many of them provide additional assurances, like vendor screening and easy dispute resolution on fraudulent operations.

The catalog and the checkout are the easy part.


The service could capture proof of a new seller’s Amazon rating to blunt the hit of that (0).


That is an old database used for development; I’ll remove it and add .claude to the .gitignore.

It’s using AISDK and MCP-UI, which is standard for chats. If you check the cart in the input, there are only two stores added for now.

Also, that chat is just one part of it. Those stores are running on Openfront, our Shopify alternative. Please check our ethos to get the full vision.

Also, I was working on this before AI. Openfront and the marketplace are part of an ecosystem. We built Openship, an e-commerce order management system, years ago.


Easy. Just tell the LLM to use the Linear CLI or hit their API directly. I’m only half-joking. Older models were terrible at doing that reliably, which is exactly why we created MCP.

Our SaaS has a built-in AI assistant that only performs actions for the user through our GraphQL API. We wrapped the API in simple MCP tools that give the model clean introspection and let us inject the user’s authenticated session cookie directly. The LLM never deals with login, tokens, or permissions. It can just act with the full rights of the logged-in user.
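A minimal sketch of that pattern, assuming a hypothetical endpoint and helper names (buildGraphQLRequest, runUserQuery are mine, not Openfront's actual API): the session cookie is injected server-side when the request is built, so credentials never pass through the model's context.

```typescript
// Shape of the HTTP request we hand to fetch for a GraphQL call.
type GraphQLHttpRequest = {
  url: string;
  init: { method: string; headers: Record<string, string>; body: string };
};

// Build the request for a GraphQL query. The session cookie is attached
// here, outside the LLM's context window.
function buildGraphQLRequest(
  endpoint: string,
  query: string,
  variables: Record<string, unknown>,
  sessionCookie: string,
): GraphQLHttpRequest {
  return {
    url: endpoint,
    init: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Cookie: sessionCookie, // injected by the server, invisible to the LLM
      },
      body: JSON.stringify({ query, variables }),
    },
  };
}

// The tool handler the model calls: it supplies only query + variables;
// auth is handled entirely outside the model.
async function runUserQuery(
  fetchFn: (url: string, init: object) => Promise<{ json(): Promise<unknown> }>,
  query: string,
  variables: Record<string, unknown>,
  sessionCookie: string,
): Promise<unknown> {
  const { url, init } = buildGraphQLRequest(
    "https://example.com/api/graphql", // hypothetical endpoint
    query,
    variables,
    sessionCookie,
  );
  const res = await fetchFn(url, init);
  return res.json();
}
```

The tool then acts with exactly the rights of the logged-in user, since the backend enforces the same session-based permissions it would for any other request.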

MCP still has value today, especially with models that can easily call tools but can’t stick to a prompt. From what I’ve seen of Claude’s roadmap, the future may shift toward loading “skills” that describe exactly how to call a GraphQL API (in my case), then letting the model write the code itself. That sounds good on paper, but an LLM generating and running API code on the fly is less consistent and more error-prone than calling pre-built tools.


Yes, let's have the stochastic parrot guessing machine run executables on the project manager's computer - that can only end well, right? =)

But you're right, Skills and hosted scripting environments are the future for agents.

Instead of Claude first getting everything from system A and then system B and then filtering them to feed into system C, it can do all that with a script inside a "virtual machine", which optimises the calls so that it doesn't need to waste context and bandwidth shoveling around unnecessary data.
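The batching idea can be sketched as a pure merge-and-filter step that runs entirely inside the sandbox, so only the small final payload ever touches the model or system C. The record shapes (orders, customers, VIP flag) are hypothetical, purely for illustration.

```typescript
// Hypothetical records fetched from two upstream systems.
type Order = { id: string; total: number; customerEmail: string };
type Customer = { email: string; vip: boolean };

// Join + filter inside the script: the raw order and customer lists never
// pass through the model's context, only this reduced summary does.
function vipOrderSummaries(
  orders: Order[],
  customers: Customer[],
): { id: string; total: number }[] {
  const vips = new Set(customers.filter((c) => c.vip).map((c) => c.email));
  return orders
    .filter((o) => vips.has(o.customerEmail))
    .map((o) => ({ id: o.id, total: o.total }));
}
```

With a tool-per-call design, every intermediate record would round-trip through the model; here the agent only sees the result.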


Easy if you ignore the security aspects. You want to hand over your tokens to your LLM so it can script up a tool that can access it? The value I see in MCP is that you can give an LLM access to services via socket without giving it access to the tokens/credentials required to access said service. It provides at least one level of security that way.

The point of the example seemed to be connecting easily to a scoped GraphQL API.

Apple originally planned to power Siri with ChatGPT under the hood. They quickly saw that other models, including open-source ones, were closing the gap fast.

A few months ago, MCP-style tool calling seemed like the clear standard. Now even Anthropic is shifting toward "code-mode" and reusable skills.

For Apple, reliable tool calling is critical because their AI needs to control apps and the whole device. My bet: Apple's AI will be able to create its own Shortcuts on the fly and call them as needed, with OSA Script support on Mac.


One of the reasons I'm heavily biased towards actual Mac native apps is that supporting callback URLs and Shortcuts unlocks so much of what I might ask of an AI tool already. Ironically I often ask AI assistants for line by line steps to create Shortcuts when I need them because actual Shortcut naming and properties can be quite obtuse.

Sadly, much as I love AppleScript, I think Apple giving it any love at this point in time is likely to be a pipe dream. Much more likely they're just going to try to beef up Shortcuts support across the board.

I'm convinced Shopify bought Remix only because so many Shopify shops started using Next.js as an external storefront. Shopify realized their moat was disappearing, and if users could roll their own storefront, they were one step closer to rolling their own backend.

Didn’t the transformer model come from AlphaFold? I feel like we wouldn’t have had the LLMs we use today if it wasn’t for AlphaFold.


The Transformer was invented at Google, but by a different team. AFAIK the original AlphaFold didn't use a transformer, but AlphaFold 2.0 and 3.0 do.


Kali Linux and Pineapple routers made me adamant about never connecting to public wi-fi without a VPN ever again.


Their website says fully open source, and since it’s a Stripe wrapper, this seems to be an open-source alternative to Stripe Elements. It does require a Flowglad account and key, so it would provide more value if other payment processors started using it.

Is there any benefit to using Flowglad’s React components and hooks over Stripe Elements directly?


I thought the whole post was about the benefits, namely that Stripe has too many webhooks and doesn’t handle enough of the tax computation.

Did I misunderstand something? Or is this about a different angle?


Ah, that is another benefit: using Stripe Elements and Stripe directly, you have to save the subscription ID and other fields in your database. Flowglad seems to handle this on their side.


The benefit is that our hooks get entitlement data directly to your frontend. Stripe Elements just focuses on payment forms. But what about how payment state impacts what features or usage credits you grant? Stripe doesn’t handle that; they leave it as a concern for your business logic to figure out. The result is glue code you need to write to keep your application database up to date with the ground truth in your processor, usually through brittle, messy webhook code.

With Flowglad’s hooks and server SDK, we get that data to your frontend and backend. So you can read it from us wherever you need it.

