Hacker News | kemyd's comments

Hey! Yes, we use leading AI providers for website generation. All license tiers include the same daily limits for AI requests. If you can't wait, you can purchase additional AI tokens.

We offer lifetime licenses, which we treat as a source of capital to continue developing Shuffle. Some people see that as a potential downside, but we see it differently. It allows us to maintain full control over Shuffle's direction.

The Shuffle Editor has been on the market for over six years, and the majority of our customers use subscription plans (monthly or annual).

AI Website Redesign is a supporting tool for Shuffle. You can use it to explore different design directions without ever opening the Shuffle Editor, but I don't expect that to be the main use case.

Most users start with Shuffle here: https://shuffle.dev/new


Each result includes a unique, shareable link with an open-graph preview. If you're proud of what you've built, feel free to share it with others.

Example:

https://shuffle.dev/ai-design/t7-wkZU73EOcHw


My guess is they had to sell to keep the lights on (similar to Windsurf).

They’re reportedly at ~$100M ARR, implying about $8.3–8.5M in monthly revenue (ARR = last month * 12).

At the same time, they claim to have processed 147T+ tokens. For context, pricing that volume on something like Sonnet 4.5 would come out to roughly $500M in API costs. They likely offset a chunk of that with open models, but for higher-quality outputs, they’re still paying meaningful amounts to Claude / OpenAI / Google.

Hard to make those numbers work without a lot of capital or an exit.
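A quick back-of-envelope check of the estimate above. The $3/$15 per-million-token Sonnet pricing and the 95/5 input/output split are my assumptions, not figures from the thread:

```python
# Implied monthly revenue from the reported ARR.
ARR = 100e6
monthly_revenue = ARR / 12  # ~ $8.3M

# Cost to process 147T tokens at assumed Sonnet-class pricing:
# $3 per million input tokens, $15 per million output tokens,
# with a rough 95/5 input/output split (my guess).
total_tokens = 147e12
input_cost = (total_tokens * 0.95) / 1e6 * 3
output_cost = (total_tokens * 0.05) / 1e6 * 15
api_cost = input_cost + output_cost  # ~ $529M

print(f"implied monthly revenue: ${monthly_revenue / 1e6:.1f}M")
print(f"estimated API cost: ${api_cost / 1e6:.0f}M")
```

Under those assumptions the API bill lands in the same ~$500M ballpark, several years of revenue at the implied run rate.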


Is that input or output tokens or both? That number sounds quite extreme. Maybe they include input tokens from deep research? That could be tens of thousands of input tokens into a cheap model per task, for example.


Hey! The components you’re seeing come from the libraries included in our tool. Each UI library typically has around 10 variations of a feature in a coherent style, such as hero sections or pricing, but with different UX. If you want to explore more, you can use CMD+F to quickly search for sets by name (try searching 'Zospace' as an example). This way, you’ll uncover more options to work with.


Thanks! The wall is one of the supporting tools for Shuffle. You can browse freely and find inspiration without an account. If you want to go further, Shuffle lets you modify components visually and export them in the CSS frameworks we support (Tailwind CSS, Bootstrap, etc.).


I don't get the hype. Tested it with the same prompts I used with Midjourney, and the results are worse than in Midjourney a year ago. What am I missing?


The hype is about image editing, not pure text-to-image. Upload an input image, say what you want changed, get the output. That's the idea. Much better preservation of characters and objects.


I tested it against Flux Pro Kontext (also an image-editing model), and while it has a very different style and approach, overall I like Flux better. More focus on image consistency: it adjusts the lighting correctly and fixes contradictions in the image.


I've been testing it against Flux Pro Kontext for several weeks. I would say it beats Flux in a majority of tests, but Flux still surprises from time to time. Banana definitely isn't the best 100% of the time -- it falls a bit short of that. Evolution, not revolution.


Agreed. I find myself alternating between Qwen Image Edit 20B, Kontext, and now Flash 2.5 depending on the situation and style. And of course, Flash isn't open-weights, so if you need more control / less censorship, you're SOL.


Has there been a sufficient indication to conclude these weights will not (now or ever) be released?


Are any of Google's generative models besides AlphaFold open-weight? (Veo, Imagen, etc.)

I don't think we can really answer the question of whether Flash's weights will ever be released.


It’s good but holy shit is it censored! Try generating any kind of scene on a beach…


Can it edit the photo at the original resolution?

Most of my photos these days are 48MP and I don't want to lose a ton of resolution just to edit them.


Great question. I really doubt it can support arbitrary resolutions. I'm fairly sure that behind the scenes it scales the image down to somewhere around 1 MP before processing, even if they decide to upscale the result back to the original resolution afterwards.
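For illustration, a hypothetical sketch of how much linear resolution that would cost. The 8000x6000 source size and the ~1 MP working resolution are assumptions, not confirmed behavior:

```python
import math

# A typical 48 MP photo (assumed 4:3 aspect ratio).
src_w, src_h = 8000, 6000

# Speculative ~1 MP internal working resolution.
target_pixels = 1_000_000

# Uniform scale factor that preserves aspect ratio.
scale = math.sqrt(target_pixels / (src_w * src_h))

print(round(src_w * scale), round(src_h * scale))  # ≈ 1155 x 866
```

In other words, each dimension would shrink by a factor of roughly 7, which is why a round trip through such a model can't replace editing at full resolution.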


So then this doesn't really replace traditional photoshop editing of my photos I guess.


I don't know. All the testing I've done has output the standard 1024x1024 that all these models are set to output. You might be able to alter the output params on the API or AI Studio.


No, it resizes them.


Thanks for clarifying this. That makes a lot more sense.


Midjourney hasn't been SOTA for over a year. Even the latest release, version 7, scores extremely low on prompt adherence, managing to get only 2 out of 12 prompts correct. Even Flux Dev running locally consistently outperforms it.

Here's a comparison of Flux Dev, MJ, Imagen, and Flash 2.5.

https://genai-showdown.specr.net/?models=FLUX_1D%2CMIDJOURNE...

That being said, if image fidelity is absolutely paramount and/or your prompts are relatively simple, Midjourney can still be fun to experiment with, particularly if you crank up the weirdness / chaos parameters.


Hmm, I think the hype is mainly for image editing, not generating. Although note I haven't used it! How are you testing it?


I tested it with two prompts:

// In this one, Gemini doesn't understand what "cinematic" is

"A cinematic underwater shot of a turtle gracefully swimming in crystal-clear water [...]"

// In this one, the reflection in the water in the background has different buildings

"A modern city where raindrops fall upward into the clouds instead of down, pedestrians calmly walking [...]"

Midjourney created both perfectly.


As others have said, this is an image editing model.

Editing models do not excel at aesthetics, but they can take your Midjourney image, adjust the composition, and make it perfect.

These types of models are the Adobe killer.


Noted! The editing capabilities are impressive. I was excited for image generation because of the API (Midjourney doesn't have one yet).


David Holz mentioned on Twitter that he was considering a Midjourney API. They're obviously providing it to Meta now, so it might become more broadly available after Midjourney becomes the default image gen for Meta products.

Midjourney wins on aesthetic for sure. Nothing else comes close. Midjourney images are just beautiful to behold.

David's ambition is to beat Google to building a world model you can play games in. He views the image and video business as a temporary intermediate to that end game.


It actually has impressive image generating ability, IMO. I think the two things go hand-in-hand. Its prompt adherence can be weaker than other models, though.


Not contributing much to the discussion, but thanks for explaining who still uses Tcl. It was my first programming language about 20 years ago (I used it to write scripts for Eggdrop - an IRC bot). Just stopped by the comments out of nostalgia.


The language core is still actively used, but not in the way most people assume.

Expanding TCL's C support shows it is not deprecated, as some have suggested (most Java VMs also run a dual stack with C/C++ native binary object support).

Automated remote host administration with TCL is one area where it still works extremely well... I guess it is not relevant if people like pseudo-repetitive typing... so much typing people actually know all the parameters to tar without the manual. lol

Have a great day =3


Nitpick: it is Tcl, not TCL, just like how Ada is not ADA.


Following the Wikipedia citation will point people here:

https://wiki.tcl-lang.org/page/Tcl+vs%2E+TCL

Which gets into the syntactical preference people developed over time.

The Tool Command Language acronym allusion is rather distinct from a popular "Tcl" colloquialism for the "tickle" project. From my perspective Tcl is a TCL, but not all TCLs are necessarily Tcl nor include a specific extension package.

Have a glorious day friend =3


When I say it out loud, I say T-C-L, too, but I write it as Tcl.

You have a nice day too! =3


https://shuffle.dev

For the last few weeks, we have been working on catching up on features for vibe coders (prompt -> project), but now we are back to our strengths (visual editor and new beautiful UI libraries for Tailwind CSS, Bootstrap, and more).

We realized there are just too many apps for vibe coders, and it would be better to work on something unique that we are really good at!


Great job! Can you share where the codebase prompts are listed? It might be a valuable thing to learn from.


I'm sorry, but this is not possible. UI Library Creator works with Tailwind CSS, and we will add support for Bootstrap later. We don't have a plan to support custom frameworks. We need to know the framework "structure" to make this tool work. :)


That makes sense! Thanks for the quick response!

