
I wanted to see if I could use generative AI to build a whole business. Not just a digital product or an app — but the entire business from end to end:

(Spoiler: I did manage it and launched in just 75 days from start to first order)

- Three-tier corporate structure with manager-managed LLCs and a private WY LLC as manager, complete with a knowledge-base-powered assistant that can write share registries, banking resolutions, meeting minutes, contribution contracts, loans and more

- Supply chain management with proprietary lot tracking that follows PO line items from production to delivery

- Generated the base for all product images, helped write and research label design and text, wrote SEO titles and product descriptions

- Used Claude Code to build the entire Shopify theme for the site, all collections, product pages, legal pages and a COA database to boot

- Used Claude Code to build a custom Shopify app that integrates lot tracking into the shop, so that when a lot sells out the next lot is queued for sale, and all lot-related metadata is synced to the product variant and displayed on the product page (a rough sketch of that queueing logic follows this list)

- Used Claude Code to build a super analytics platform that combines the data from GA4, Shopify orders, and Meta Business Suite into a single feature store, where I can wrangle the data to ask/answer any question I can dream of about audience segments, product popularity, and what’s working or not, and get insights on what to do next
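
For the curious, here’s roughly what the lot-rotation piece of that custom app does, as a minimal TypeScript sketch. Every name below (getLotsForProduct, setVariantLotMetafields, the Lot fields) is hypothetical and the Shopify calls are stubbed; the real app works through Admin API webhooks and variant metafields, but the queueing logic is about this simple:

    // Hypothetical sketch only: names and fields are illustrative, and the
    // Shopify calls are stubbed out. The real app uses the Admin API.
    interface Lot {
      lotNumber: string;         // e.g. "LOT-0417" (format is illustrative)
      quantityRemaining: number;
      coaUrl: string;            // link to the lot's certificate of analysis
    }

    // Stubs standing in for Shopify Admin API calls.
    async function getLotsForProduct(productId: string): Promise<Lot[]> {
      return []; // real version reads the lot-tracking database
    }
    async function setVariantLotMetafields(variantId: string, lot: Lot): Promise<void> {
      // real version writes lot number, COA link, etc. to variant metafields
    }
    async function setVariantInventory(variantId: string, qty: number): Promise<void> {
      // real version sets the variant's available inventory
    }

    // When the active lot sells out, promote the next lot with stock and
    // sync its metadata so the product page shows the lot now on sale.
    export async function rotateLotIfSoldOut(productId: string, variantId: string): Promise<void> {
      const lots = await getLotsForProduct(productId);
      const active = lots[0];
      if (!active || active.quantityRemaining > 0) return; // current lot still selling

      const next = lots.find((l) => l.quantityRemaining > 0);
      if (!next) return; // everything sold out; storefront shows out of stock

      await setVariantLotMetafields(variantId, next);
      await setVariantInventory(variantId, next.quantityRemaining);
    }

In practice a function like this would hang off an order or inventory webhook so the storefront flips to the next lot automatically.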

If you care to check it out, the site is https://cosmicpeptides.com


This is super cool.



There’s this popular pastime lately where people pretend to be the last rational mind on Earth while making blanket pronouncements about tech they don’t understand.

>“Avoid it! Perplexity rules prohibit commercial use…”

Cool story, but also: irrelevant. Nobody serious is shipping products on Perplexity as a backend. It’s a research tool with a nice wrapper. The people building with LLMs are using OpenAI, Claude, Mistral, Groq, and Ollama, depending on the constraints and goals. Acting like the existence of one walled garden invalidates an entire paradigm is like saying cars are bad because golf carts can’t go on highways.

> “ChatGPT rules prohibit sharing the output with other AI…”

There are some restrictions, yes. And that matters… if your use case is literally just shuttling model output between APIs like a glorified message broker. Most developers are fine with this because they’re building systems, not playing telephone.

> “They’re training on everything you pass in…”

This is just flat wrong. OpenAI doesn’t train on API input/output unless you explicitly opt in. The fact that this myth is still circulating tells me people are repeating each other instead of reading the docs.

> “You’re paying to get brain raped…”

If your argument requires dehumanizing metaphors to land, you don’t have an argument. You have trauma cosplay.

> “Coding with OpenAI = being a sheep…”

This is the kind of thing people say when they’ve never delivered software in production. The tools either help or they don’t. Calling people sheep for using powerful tools is anti-intellectualism dressed up as cynicism. Nobody’s building a search engine from scratch to prove they’re not a sheep either. We use leverage. That’s the whole game.

> “You’re paying them to train their model to imitate you…”

Actually, no — again, unless you opt in. But even if that were true, you’re also trading that data for time saved, features shipped, and workflows unlocked. You know, ROI.

> “Better off learning a better (compiled) programming language…”

I have nothing against compiled languages, but this is like telling someone struggling with Figma that they should learn Blender. It might be good advice in a vacuum, but it doesn’t help you win the game that’s actually being played.

> “Why pay OpenAI for the privilege to help them take your job?”

You could’ve said the same thing about AWS. Or GitHub. Or Stack Overflow. Or even programming itself, in the mainframe era. Gatekeeping based on purity is a waste of time. The actual work is understanding what AI can do, what it shouldn’t do, and when to lean in.

> “Why outsource mental activity to a service with a customer noncompete?”

You’re not outsourcing thinking. You’re compressing cognitive overhead. There’s a difference. If you think using AI is “outsourcing thinking,” you were probably outsourcing thinking to Stack Overflow and Copilot already.

Look, are there risks? Costs? Vendor lock-in traps? Of course. Anyone seriously building with AI right now is absorbing all that volatility and pushing forward anyway, because the upside is that massive. Dismissing the entire space as a scam or a trap is either willful ignorance or fear masquerading as intellectual superiority.

If you don’t want to use AI tools, don’t. But don’t throw rocks from the sidelines at the people out there testing edge cases, logging failures, bleeding tokens, and figuring out what’s possible.


Admittedly, I’ve achieved so much with the assistance of extensions like Cline/Roo that I have a sort of youthful exuberance that wants to proclaim that the barrier to entry to building applications with AI is falling.

It’s not.

These tools will literally waste your time and empty your wallet if you don’t have a solid approach — and that starts with knowing how to detect code and architecture smells at light speed. These tools write code faster than you can read it, so you’d better know what you’ve (actually) instructed them to do, and be able to assess the quality in a 300ms glance, or you’re hosed.

Having said that, there are antidotes:

1. Use a memory bank. A memory bank is a small folder where you have the agent research upcoming tasks, store progress info, post gotchas, generate retrospectives and more. It’s super helpful when the thing you’re building is large and complex.

2. Instruct the agent to implement strict TDD/BDD principles and force it to follow a red-green-refactor workflow at a granular level (a minimal sketch of one such pass follows this list). The downside is roughly twice the code (read: $tokens) per feature, but you’ll spend it anyway the day your agent writes 600 lines of untested code while you grab coffee and you come back to find it stuck in a loop, spending $0.20 per request re-sending the same large files to chase some syntax error between the test and code files.

3. You need to know what you’re doing. The model/agent is not going to automatically apply second and third order thinking to look around systems design corners. It does exactly what you tell it to do (sometimes well) but you also have to always be considering what you forgot to tell it to do (or not to do). Prompt engineering is not natural language. It’s a form of spellweaving where you have to find the edges of all relevant forms of existence and non-existence, codify them into imperative language, add the right nuance to finesse the style and have the confidence to cast the spell — over and over and over again.

4. You need money. Building a serious endeavor with an AI agent will cost you around $400-600 a month. Leave room for lots of mistakes. Prepare for weeping and gnashing of teeth when the task counter hits $10.00, you’re like 20% done with the feature, the model has lost context about what it was doing and you have to choose between making a broken commit and hoping the agent can sort it out in a clean task, or eating the cost and starting over. I spent $50 on starting over just this past weekend. This is why I wrote numbers 1-3 above.
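
To make number 2 concrete, here’s what one pass at that granularity could look like, sketched in TypeScript with Vitest. The file and function names are made up for illustration and aren’t from a real codebase:

    // --- lotQueue.test.ts (hypothetical file) ---
    // Step 1 (red): one small failing test before any implementation exists.
    import { describe, expect, it } from "vitest";
    import { nextAvailableLot, type Lot } from "./lotQueue";

    describe("nextAvailableLot", () => {
      it("skips sold-out lots and returns the first with stock", () => {
        const lots: Lot[] = [
          { lotNumber: "A", quantityRemaining: 0 },
          { lotNumber: "B", quantityRemaining: 12 },
        ];
        expect(nextAvailableLot(lots)?.lotNumber).toBe("B");
      });

      it("returns undefined when every lot is sold out", () => {
        expect(nextAvailableLot([{ lotNumber: "A", quantityRemaining: 0 }])).toBeUndefined();
      });
    });

    // --- lotQueue.ts (hypothetical file) ---
    // Step 2 (green): the minimal implementation that makes the tests pass.
    export interface Lot {
      lotNumber: string;
      quantityRemaining: number;
    }

    export function nextAvailableLot(lots: Lot[]): Lot | undefined {
      return lots.find((l) => l.quantityRemaining > 0);
    }

    // Step 3 (refactor): tidy up with the tests still green, commit, repeat.

Forcing the agent to work at this grain feels slow, but it’s a lot cheaper than untangling those 600 untested lines after the fact.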

I’m committed to figuring out where all the pitfalls are so others might not have to. I personally feel like anyone who’s doing this right now is really just burning some cash to see what’s possible.


> Prompt engineering is [...] a form of spellweaving where you have to find the edges of all relevant forms of existence and non-existence, codify them into imperative language, add the right nuance to finesse the style and have the confidence to cast the spell — over and over and over again.

This brought a tear to my eye.


As a delivery consultant in a Generative AI specialty practice for an extremely large cloud services consultancy, I can say with certainty that failure to achieve results with the latest models is far more a reflection of the abilities of the user than of the abilities of the model.

A lot of people look at LLMs through the same lens they have applied to every other technology up to this point: if you learn and master the interface to the technology, that eventually equates to mastering the technology itself. That assumption is normalizing in the sense that, for an objective technology, there is a finite and perceptible floor and ceiling to mastery, which democratizes both its mastery and its productive use.

But interacting with LLMs in the class of higher-reasoning agents does not follow the same pattern of mastery. The user’s prompts are embedded into a high-dimensional space that is, for all intents and purposes, infinitely multi-faceted, and it takes a significant knack for abstract thought even to begin understanding how to craft a prompt that is ideal for the current problem space. It also requires a good intuition for managing one’s own expectations around what LLMs are excellent at, what they perform marginally at, and what they can fail miserably at.

Users with backgrounds in the humanities, language arts, philosophy and a host of other liberal arts, provided they maintain a good handle on empirical logic and reason, are the users who consistently excel and continue to unlock and discover new capabilities in their LLM workflows.

I’ve used LLMs to solve particularly hairy DevOps problems. I’ve used them to refactor and modularize complicated procedural prototype code. I’ve used them to assist me in developing UX strategy on multimillion dollar accounts. I’ve also used them to teach myself mycology and scale up a small home lab.

When it comes to highly objective and logical tasks, such as the development of source code, they perform fairly well, and if you can figure out the tricks of managing the context window, many hours of banging your head against the desk, or even weeping and gnashing of teeth, can be saved.

When it comes to more subjective tasks, I’ve discovered that it’s better to switch gears and expect something a little different from your workflow. As a UX design assistant, the model is better suited to comprehensive abstract thinking, identifying gaps, looking around corners, guiding one’s own thoughts and generally serving as a “living notebook”.

It’s very easy for people who lack any personal or educational grounding in the liberal arts, or the affinity and ability for abstract thought, to type some half-cocked, pathetic prompt into the text area, fire it off and blame the model. In this way, the LLM acts as a sort of mirror, highlighting their ignorance and metaphorically tapping its foot while it waits for them to get their shit together. Their lament is a form of denial.

The coming age will separate the wheat from the chaff.


GPT-4’s last training cut-off was January 2022. It doesn’t “know” events post that date. This chat result is likely due to a combo of historical context (prior Russia-Ukraine tensions), an ambiguously framed question, user-led steering, and/or simply coincidental accuracy. Any post-Jan 2022 “knowledge” from the model is extrapolation or coincidence, not genuine insight.


What is your certainty based on? You say that the cutoff was Jan 2022, and that what the model says about the war in Ukraine is "extrapolation". However, the summary it generated includes details such as a reference to "denazify", or damage to major cities. It would be an impossibly good forecaster if it managed to generate that text without actually having access to material post Jan 2022. I find it much more likely that the cutoff date communicated to the model is conservative, and that its training included text that was written after Jan 2022.


>However, the summary it generated includes details such as a reference to "denazify"

Do you think the denazification propaganda suddenly started in February of 2022? The wider conflict has been ongoing since 2014 and that rhetoric was not new to the 2022 escalation.

>or damage to major cities.

Kyiv and Kharkiv are the two largest cities in Ukraine. Predicting that they'd be a focus of fighting isn't really revelatory. Mariupol is smaller but was near the pre-2022 front lines and had already been attacked numerous times since 2014.[1]

>It would be an impossibly good forecaster if it managed to generate that text without actually having access to material post Jan 2022.

If the Russian invasion had happened in a vacuum, sure. In reality, ChatGPT's response is pretty clearly using information from the 2014 invasion and adding the user's prompt of February 2022. There's nothing in its answer that is unique to the 2022 invasion.

[1] https://www.rferl.org/a/ukraine-mariupol-russia-war/31269094...


Well, a single hit, yes, that could be explained as coincidence. But the generated text is too long and too close to reality. In Jan 2022 the meaning of "special operation" was not a war. So if the cutoff were really complete, it would've generated something else.


>Well a single hit yes, that could be explained as coincidence. But the generated text is too long and too close to reality.

Again, the war between Russia and Ukraine has been ongoing since 2014. Everything it's saying is general and applies entirely to the war since 2014. None of the information listed -- outside of what it was given by the user prompt -- has any specificity to the 2022 invasion.

>In Jan 2022 the meaning of "special operation" was not a war.

It's regurgitating the prompt, just as it's doing with the mention of Feb 2022.

A better test prompt would be "What is the status of relations between Russia and Ukraine?"

I don't have access to GPT-4, but GPT-3's response is entirely in line with its Sept 2021 cutoff:

https://chat.openai.com/share/9c37a3c3-f8c1-425e-b540-8c18ca...


On the one hand, if I query GPT-4, it does associate "special military operation" with Russia's actions in Ukraine. However, it does not reference any escalation beyond Donbas and Crimea. The generated text stays within what was well known as of 2021:

    Summarize the special military operation in Ukraine

    As of my latest update, the conflict commonly referred to as the "special military operation" refers to Russia's ongoing military intervention in Ukraine, which began in 2014. The intervention started with Russia's annexation of Crimea in 2014 following the Ukrainian revolution. The unrest escalated into a war in the Eastern Ukrainian region of Donbas between the Ukrainian government and pro-Russian separatists.

    The "special military operation" typically refers to specific operations within this broader conflict, notably operations carried out by the Russian military or by pro-Russian separatists in Ukraine. The complexity of the conflict, which involves direct incursions, "covert" operations, proxy warfare, and other forms of military intervention, makes it difficult to summarize overall, but key events have include the annexation of Crimea, the Battle of Ilovaisk, the war in Donbas, and ongoing issues related to ceasefires, territorial control, and the political status of Crimea and Donbas.

    As of now, the conflict is still ongoing with no resolution in sight, causing numerous causalities and massive displacement of people. Please note that you need the latest update on the matter as the situation is continuously changing.

Clearly, I got the old version. What we're disagreeing over is whether the new version became clairvoyant or simply has a later cutoff that includes information from 2022.


Don’t tell me what to do.


Well, sit down and listen:

That's entirely troll-bait!


I am very passionate about consulting because I have found a common, recurring situational problem that I have been able to build a thoughtful, repeatable and iterative process for solving.

There are some people who are exceptionally talented at making something from nothing, who know how to take the first steps toward a goal quickly and with great confidence, and who take great care to sow the seeds of sustainability and success into a project.

An exceptional consultant is basically like “The Wolf” in Pulp Fiction.

Moving the needle from zero to ten (correctly and sustainably) is difficult. Getting it from ten to ninety is easy. Getting it from ninety to one hundred is a near-perpetual effort.

Consultants enjoy the perspective that working on a single project for 10 years is effectively spending a quarter of one’s career on a single project. To consultants, the life of the tenacious, loyal, unquestioning worker bee is naive and wasteful.


I came across this site almost 20 years ago when researching MK Ultra and other government-sponsored psychological/psychic warfare programs. Edmund continued to edit, archive, consolidate and tweak his work right up until his death.

His personal power and thought system is a unique blend of diverse cultural influences and his writing always struck me as strangely informed and confident.

He passed away earlier this year and his site is slated to be taken down on 11/30/2022. Have a look for yourself.


Hello!

Self-taught developer/designer/engineer with 16 years of experience here, currently working with international enterprise/corporate customers as a professional services consultant for a large cloud services provider.

1. I don’t have a formal background in CS, but I have had to learn the industry-standard ways of solving common problems and continuously assessing order of complexity for both systems and code along the way.

2. I started with nothing but a design background and skillset, and I had to learn to supplement it with technical skills at each stage.

3. I really feel like my “success” (which is completely relative, btw) is due in part to the era in which I picked up the core development skillset. Back in the early 2000s there were no mentors around. You couldn’t just stick an evil error message into Google and expect to get the answer to your issue in 20-30 seconds. There was no Stack Overflow (and for a good while after it appeared, it still wouldn’t have helped).

Sometimes you had to monkey-patch the problem. Other times you had to find an alternative route. Sometimes you’d realize that what you faced was a real limitation of the tech you were working with, and you’d have to either devote the time to fixing it or give up and switch to something else completely. And yes, that was before you could submit a PR to the GitHub repo with your patch, so the next release would almost always break your “fixes”.

4. Multi-disciplinary interests and skillset. Having a highly fungible skillset and being genuinely interested in and fascinated by the world around me has contributed immensely to both the technical and the professional sides of my career and growth. It helps me earn trust quickly with customers, but also provides deep insight when attempting to model a real-world, manual process into a machine, which, in my experience, is about 95% of the reason people pay us to do this job.


