I've been working on Lorcast (https://lorcast.com/), a Scryfall-like search engine for Lorcana. Scryfall offers a powerful search engine and query language for Magic: The Gathering cards. Lorcana is a new card game by Disney that also has a competitive scene. As the number of cards grows, it will become more important to have good search _and_ good APIs for the community to build with.
It's been a breath of fresh air to work on something that I have no plans to monetize. I just want to build something useful for the community, which has allowed me to be relaxed about building it.
Anyways, if you play Lorcana I encourage you to check it out! The game is lots of fun.
I write about building cloud multi-tenant SaaS products and run a niche newsletter that goes further into detail about building these. My goal is to help every software engineer (if they want to do this) turn their side project into a product they can monetize and live off of.
How do people on HN like Row Level Security? Is it a better way to handle multi-tenancy in a cloud SaaS app vs `WHERE` clauses in SQL? Worse? Nicer in theory but less maintainable in practice?
fwiw, Prisma has a guide on how to do RLS with its client. While the original issue[0] remains open, they have example code[1] for the client using client extensions[2]. I was going to try it out and see how it felt.
I use both for defence in depth. The SQL always includes the tenant ID, but I add RLS to ensure mistakes are not made. It can happen both ways: forget to include the tenant in the SQL, or disable RLS for the role used in some edge case. For multitenancy, I think it’s absolutely critical to have cross-tenancy tests with RLS disabled.
One of the things I think is important is making the RLS query super efficient: make the policy function STABLE, avoid database lookups, get the context from settings, etc.
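A minimal sketch of that advice (all names here are hypothetical, assuming a `tenant_id` column and an `app.tenant_id` session setting the application sets per connection):

```sql
-- STABLE lets the planner evaluate the helper once per query rather than
-- per row, and reading from a setting means the policy never touches
-- another table.
CREATE FUNCTION current_tenant_id() RETURNS uuid
LANGUAGE sql STABLE AS $$
  SELECT current_setting('app.tenant_id', true)::uuid
$$;

ALTER TABLE products ENABLE ROW LEVEL SECURITY;

CREATE POLICY tenant_isolation ON products
  FOR ALL USING (tenant_id = current_tenant_id());
```

With `missing_ok = true`, `current_setting` returns NULL when the setting is unset, so the policy simply matches no rows instead of erroring.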
RLS is pretty great as a backstop, but I found Supabase over-reliant on RLS for security, when other RBACs are available in regular PG. I can’t remember the details now.
I’ve found RLS is great with Postgraphile which uses a similar system to Supabase but is a bit more flexible.
I found RLS challenging to work with when I prototyped an app with it and postgraphile.
I had seemingly-simple authz rules that RLS made challenging to express. I needed some operations to honor the user's row-access privileges but with different column SELECT/UPDATE privileges. E.g., a user can only change a value after the backend validates and processes the input, or shouldn't be allowed to retrieve their own password hash.
Expressivity was challenging, but was compounded by security being implicit. I couldn't look at any given spot in my code and confirm what data it's allowed to access - that depends on the privileges of the current DB connection. Once you mix in connections with cross-user privileges, that's a risky situation to try to secure.
The main issue we've had with it is that it's just plain slow for a lot of use cases, because Postgres will check the security for all rows before filtering on the joins, doing anything with WHERE clauses, doing anything to even tentatively take LIMIT into account, etc.
Imagine a 1-million-row table and a query with `WHERE x=y` that should result in about 100 rows. Postgres will do RLS checks on the full 1 million rows before the WHERE clause is involved at all.
I'm having a hard time relating to this comment given our own experience.
We use RLS extensively with PostgREST implementing much of our API. It _absolutely_ uses WHERE clauses and those are evaluated / indexes consulted before RLS is applied. Anything else would be madness.
> because Postgres will check the security for all rows before filtering on the joins, doing anything with WHERE clauses, doing anything to even tentatively take LIMIT into account, etc.
Note that the above only happens for non-inlinable[1] functions used inside RLS policies.
Going from what you mentioned below, it seems your main problem is SECURITY DEFINER functions, which aren't inlinable.
It's possible to avoid using SECURITY DEFINER, but that's highly application-specific.
Try it with RLS policies that have any plain JOINs in them to reference other tables and you'll see execution times balloon massively (as in, orders of magnitude worse) for a lot of simple use cases, because it's then doing the RLS checks against every involved table to determine if your original RLS check is allowed to use them. The only way around that if you have multiple tables involved in determining access is to use cached subqueries with SECURITY DEFINER functions that aren't subject to the recursive RLS checking.
You can use that to inject your ACL/permissions into a setting - set_config('permissions', '{"allowed":true}', true). Then in your RLS rules you can pluck them out - current_setting('permissions')::jsonb.
This should make your RLS faster than most other options, in theory, because of data co-location.
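Concretely, the pattern might look roughly like this (a sketch only; the setting name and the shape of the JSON blob are whatever your application chooses):

```sql
-- The application injects the caller's permissions once, scoped to the
-- transaction (third argument 'true' = is_local):
SELECT set_config('app.permissions', '{"allowed": true}', true);

-- A policy can then read them back without any table lookup:
CREATE POLICY allowed_only ON products
  USING ((current_setting('app.permissions', true)::jsonb ->> 'allowed')::boolean);
```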
That seems deeply impractical for a lot of cases. If user A has access to 80,000 of those 1,000,000 rows in a way that's determined from another table rather than as part of in-row metadata, doing the lookups to JSONify 80,000 UUIDs as an array to pass along like that really isn't going to help beyond cutting down a 20-second query response to a still-unacceptable 7-second query response [1] just to get 100 rows back.
[1]: Both numbers from our own testing, where the 7 seconds is the best we've been able to make it by using a SECURITY DEFINER function in a `this_thing_id IN (SELECT allowed_thing_ids())` style, which should have basically the same result in performance terms as separately doing the lookup with pre-fetching, because it's still checking the IN clause for 1,000,000 rows before doing anything else.
You certainly wouldn't want to inject 80K UUIDs. I'm not sure I understand the structure you're using but if you want to send me some details (email is in my profile) I'd like to dig into it
One of the tenants views a page that does a simple `SELECT * FROM products ORDER BY updated_at LIMIT 100`. The RLS checks have to reference `products` -> `tenants` -> `tenant_users`, but because of how Postgres does it, every row in products will be checked no matter what you do. (Putting a WHERE clause on the initial query to limit based on tenant or user is pointless, because it'll do the RLS checks before the WHERE clause is applied.) Joins in RLS policies are awful for performance, so your best bet is an IN clause with the cached subquery function, in which case it's still then got the overhead of getting the big blob of IDs and then checking it against every row in `products`.
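The cached-subquery workaround described above might look roughly like this (a sketch under the thread's assumptions; `auth.uid()` is the Supabase helper, and the table and function names are illustrative):

```sql
-- SECURITY DEFINER means the function body is not subject to the caller's
-- RLS policies, so the tenant lookup runs once instead of recursively.
CREATE FUNCTION allowed_tenant_ids() RETURNS SETOF uuid
LANGUAGE sql STABLE SECURITY DEFINER AS $$
  SELECT tenant_id FROM tenant_users WHERE user_id = auth.uid()
$$;

CREATE POLICY product_access ON products
  USING (tenant_id IN (SELECT allowed_tenant_ids()));
```

Note that SECURITY DEFINER is also exactly what prevents the function from being inlined, which is the performance trade-off discussed above.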
Yes. That's also irrelevant to the cause of the performance issues, which all happen before the ORDER BY and LIMIT even come into the picture in Postgres' query optimization.
Edit: To give a better idea of the impact of RLS here, writing up an equivalent query outside of the RLS context [1] has an under-1-second response time, where RLS turns that into 10x the time even in the most optimized case.
[1]: This kind of thing, roughly:
SELECT *
FROM products
JOIN tenants ON products.tenant_id = tenants.id
JOIN tenant_users ON tenants.id = tenant_users.tenant_id
WHERE tenant_users.user_id = auth.uid()
ORDER BY updated_at
LIMIT 100
Hi - We're an analytics solution for a specific vertical, so this is probably not appropriate for everyone but - what we did was create partitioned data tables that are named using a hash of the user UUID and other context to create the partition table name upon provisioning data tables for the user. The parent table is never accessed directly. We're using Supabase, but we don't use Supabase's libraries to operate this.
It is highly appealing to have that defense in depth. However, when building a prototype or a product, not having experience in it causes me to worry that we will end up being stuck with a choice where it's very hard to pull ourselves out of.
So instead we've stuck to having that filtering logic on the application side. The main concern is how user auth etc. works in Postgres (a lack of knowledge, not a lack of trust).
Because we also have complex filtering like "let me see all the people on my team if I have this role, but if I'm a public user, only show this person," etc.
I was thinking of building this for Next.js in a very similar way. I love the fact that this exists for a different stack! I think your implementation of the project template is definitely the way to go. As a customer, when I buy something to jump-start me, I want it to be as clean and simple as possible. Generating the project from a configuration handles that nicely.
Outside of the generator itself, what have you found most beneficial for converting customers and convincing people to buy? Documentation? Videos? The community access?
Yeah, as I mention in the post, I started without the template option and just couldn't stomach how much extra cruft would be lying around...
Re: converting, it's a mixed bag and often hard to know. But some factors that customers have mentioned:
1. Positive comments (there have been a few threads about Pegasus/boilerplates on HN and Reddit where people have had good things to say).
2. Some trust in me. I have a blog, write Django guides, publish YouTube videos, am active on Twitter, have been on podcasts, etc. I think people realizing that the creator is a serious person who has at least some credibility goes a long way.
3. I think the high-quality docs and regular release history help a lot to convince people it's a solid product.
Those are the first ones that come to mind. But honestly, I'm lucky if I find out why a customer picked me! And I'm sure some just google "django saas boilerplate" and buy the first thing that comes up, which, thankfully, at the moment is Pegasus.
Thanks for the response! It sounds like (in a good way) there's no shortcut. You build something, make it better, make it great, and talk about it for a long time to establish credibility.
> More importantly, it gives me the courage to dream and attempt things beyond my current abilities.
This is one of the small things that GPT has pushed me to do. I'm an experienced programmer and grew up playing video games, but I am not a game developer. I've dabbled over the years but can't build anything outside of a tutorial.
The other day, using GPT I went through and made a Pixel Dungeon clone using Phaser (JavaScript game engine). It was such a delight. Together we got the game to a working state before I started to change some of the fundamental design, and then GPT started hallucinating pretty badly. When that happened, I stopped for the night, and then just started new focused conversations to add or change a feature, putting in the relevant code. It's turn-based, with simple AI, enemy monsters, fog of war, line of sight, and a victory/losing condition.
Being able to build this in a couple of days is an absolute game-changer, pun intended. I can only imagine the riches of experiences we will have as people are unblocked on skills and can pursue an idea.
I taught myself to code just to build a prototype of an app I wanted to create. My plan was to launch the prototype, get some market feedback, and then find a technical cofounder or hire someone.
It took me 8-10 months of learning to build a prototype. But then GPT-4 came along and I could suddenly ask it questions about features I wanted to add. I then realized that there were so many features I could build on my own.
It has made me far more ambitious as a noob programmer.
I am a software engineer and have certainly attempted to use GPT for coding, with mixed results.
But what I am really impressed by is using it for writing. I am writing the story of my abusive upbringing with a mentally ill mother. I am not a writer, but this thing is awesome: I can ask it to give me prompts, and it jogs my memory more and more. Then I can give it rough outlines and thoughts and it will organize them into more of a literary style, and I can go through and update the wording to my own liking or leave it as is. I am easily able to produce much more, and at better quality, than without it.
I use GPT-4 for coding and technical tasks every day; its ability to make ideas a reality that would otherwise take me days or weeks is astonishing.
I wanted to try to make real-world terrain in Unity - GPT-4 guided me to the USGS's LiDAR data, took me step-by-step through creating a mesh from a point cloud, and created several scripts to edit and filter the mesh programmatically.
There are some caveats and many dead-end conversation branches - GPT-4 seems to know 'about' more libraries than it actually knows how to use, so certain library choices tend to produce erroneous code.
GPT-4 sometimes picks a poor method, for example conflicting methods of moving an object in a single script, but can usually resolve the issue when notified.
Dozens of hours fiddling with technical details saved.
Given the token count limits on input and output, I've been wondering how folks work on these long-running projects? Is it just not a problem, or are there some tactics? I find in trying to build up lengthy outputs that it starts truncating or leaving out things from way before.
I think this is a good question. Screenshots are hard on HN, but I could write up a blog post about it all once I have a demo of the game.
I've been using ChatGPT and GPT-4 since they came out, so I'm pretty aware of their limitations. You can't build an entire project in a single conversation. GPT loses the context at some point.
With that in mind, break down the project into chunks and make each one a conversation. For me, that was:
1. Initial Setup
2. Add a randomly generated tilemap
3. Character Creation and Placement
4. Movement
5. Turn-based execution
6. Fog of War
7. Enemy Monsters
Each one of these is a separate thread. They start with:
> You are an expert game developer using Phaser 3 and TypeScript. You are going to help me create a game similar to Pixel Dungeon using those technologies.
I then copy in relevant code and design decisions we already made so it has the context. If it gets the answer totally wrong, I'll stop the generation and rephrase the question to keep things clean.
Lastly, I'll frequently ask it to talk about design first saying something like:
> We are going to start with the world map and dungeon generation to start. We'll add a character sprite and movement next. What's the best way to do tilemap generation and store the data? Don't write any code, just give me high level suggestions and your recommendation.
This lets me explore the problem space first, decide on an approach, and then execute it. I'll also ask for library suggestions to help with approaches, and I'm generally surprised by the suggestions. Like all software, once you know the design, the pattern, the goal, and have good tools to help you achieve it... the coding isn't all that hard.
I use the API and have a generic prompt prefix for each programming language and type of task to try to get as much bang for my buck with as small of a token count as possible, and then tossing in a single query with a bit of project-specific code fleshes out the rest of what I'm asking it. Prompt tokens are usually around 300-1000, leaving over 3000 for the response (which it rarely fully uses).
It's garbage at pulling in things from all over the codebase, so as a sort of co-evolution I write code that mostly only needs local context and only ask it questions that only need local context. That works great at home but doesn't suffice on the existing code at $WORK.
It's still helpful here and there at work (e.g., given this example string please output the appropriate python datetime formatting codes), but only because it's a wee bit faster than synthesizing the relevant docs and not because it's able to consistently do anything super meaningful.
You have to input just the pieces you want help with. It can't do a whole project, but you can tell it the tech stack, and maybe give it a few of your classes, and it can then write a new class for you.
ChatGPT benefits from how abundant languages like Javascript and Python are. When you start diving into the lesser used languages, the struggle becomes real. Even worse if there have been substantial breaking changes in the past ~2 years.
TypeScript happens to be my current strongest language. I picked Phaser since I wanted to learn the ins and outs of building the game and not need to worry about the language I was writing in. This was also helpful because GPT-4 would still get small things wrong with the TypeScript, but I was able to easily fix those without needing to ask it.
If I had tried this in Unity and C# (which I don't know at all), I'd still be figuring out the standard library.
I generally recommend learning one new thing at a time when building something new.
I’ve been using it a lot for programming, but I also have a security background and it will often generate insecure code. I’ll point it out and tell it to fix and it will.
So be careful if you don't want AI-generated SQL injections!
Different people write code of different quality, and we'd expect LLMs to imitate the median quality of code. So it's clear that if you have a background in something, then it will generate worse code than you would - however, if you compared this code with something written by a new junior hire, how would it compare? IMHO there are many people who are objectively below median, and for them I'd expect that an LLM can generate more secure code than they would write themselves.
The power of LLMs atm is that they allow a person to be "median" in a very wide range of topics. This is extremely powerful, even to folks that can attain an above average mastery. It also does this quickly.
There was a recent article on Slashdot titled something about how the metaverse may increase GDP by 2 or 3 percent. It was obviously laughed at. It's actually LLMs that will do this, and easily.
Yeah, a coworker tried an exercise like this, and it was basically (even with as-an-expert prompts) like trying to mentor a high school intern that's trolling you (except that it won't actually get better at anything in the process.)
Intent worries me less! It’s the PHP days all over again - lots of new coders (good thing!), copying insecure code (VERY bad thing!), and it won’t end well.
> So be careful if you don't want AI-generated SQL injections!
Maybe we should try including something like this in the system prompt:
"Remember that SQL is language with a grammar, not an unstructured string. Don't do stupid things. Obey the LangSec principles. Never ever glue code in strings together. Plaintext code is not code, it's a serialization format for code. Use appropriate parsed representation for all operations. Never work in plaintext space, when you should be working in AST-space."
Replace "SQL" with any other sub-language that's interacting with user input.
> More importantly, it gives me the courage to dream and attempt things beyond my current abilities.
I am approaching this from the opposite background ( started with an arts background and learnt coding after ), but this resonates with me massively as well.
It took me 2 years of tutorials before doing my own things became intuitive. Feel free to reach out if you want to discuss game design.
Heck yeah, cool! Similar experience here too. In only a few hours I got a basic working custom JavaScript space shooter going, and I've since started a Whac-a-Mole style game.
GPT explaining it along the way and allowing me to have a reasonable conversation with it when something isn't working as-expected has been a rewarding and fun learning experience.
I'm trying to visualize the process you're using for this, and it's just not coming to me. Can you give me a run-down on it, or point me somewhere that has one?
Only if you've gotten through the waitlist. I signed up at least a month ago, no word. There was a comment thread the other day about getting around the wait by selecting the option that you're looking to build plugins, which apparently gets you expedited.
"The server is overloaded right now, please try again later."
I guess all their compute power is going to power this. And even now the Bing generation is taking a long time. Definitely seems like they're having scaling issues at the moment.
1) Wait for some time and try again later: This error message usually indicates that the server is temporarily unable to process your request. You can try again later when the server is less busy.
2) Reduce the complexity of your image: The server may be overloaded due to the complexity of the image you are trying to generate. Try simplifying your image or reducing the resolution to reduce the load on the server.
3) Refresh the page: Sometimes, refreshing the page can help to clear any temporary issues with your browser or internet connection that might be causing the error.
4) Clear browser cache: Clearing your browser’s cache and cookies can help to fix various website-related issues in case the problem is occurring due to some conflict of your past history.
5) Use a different browser: If the issue persists, try using a different browser to access DALL-E-2. This can help to rule out any browser-specific issues.
6) Contact support: If the problem persists, you can contact the support team for DALL-E-2 to report the issue and get further assistance.
This is very similar to an excerpt from Derek Sivers' book about cdbaby.com. (CD Baby let independent artists sell their music CDs on the site long before iTunes.) In it, he writes:
At a conference in Los Angeles, someone in the audience asked me, "What if
every musician just set up their own store on their own website? Since that'd
be the death of CD Baby, how do you plan to stop that?"
I said, "Honestly, I don't care about CD Baby. I only care about the musicians.
If some day, musicians don't need CD Baby anymore, that's great! I'll just shut
it down and get back to making music."
He was shocked. He had never heard a business owner say he didn't care about
the survival of his company.
To me, it was just common sense. Of course you should care about your customers
more than you care about yourself! Isn't that rule #1 of providing a good
service? *It's all about them, not you.*
The original creator of the company will care about that because they know why they are trying to solve the problem. Everyone that comes after will not. This is natural. Everyone after sees the validation of the product as a validation of a money printing machine and will look to run just that - a money printing machine.
Hate to deify a Steve Jobs, as if he needs more god-like overtures, but that is what will always make him different than your Tim Cooks of the world (or your Sundar Pichais of the world as opposed to Larry or Sergey, your Activision-Blizzard management versus just the original Blizzard team).
Two completely different types of animals. In a sense, that’s why we usually describe this loss as a company’s soul leaving it. The why goes missing and the what, which is always money, is all that’s left.
I'm racking my brain trying to remember where I heard it, but I've heard it roughly summarized as: Companies are founded by people who are loyal to a vision, but tend to promote people who are loyal to the company. Eventually the people who are loyal to the company have all of the power. The people loyal to the vision are only left in the lower ranks, if at all.
At least people who are loyal to the company care about the long term viability of the company. I think Cook, Nadella and Jassy (my skip*7 manager) care about the company. I don’t know what the hell Pichai cares about.
Google has the focus and the vision of a crack addled flea.
Google was organized such that the founder didn't have to do much, but that also means that the CEO doesn't have much power to run things so Pichai would need to rebuild the whole company to start to change how things are run.
This is the reason why Google can launch so many products, but also why they can cancel so many products: there is no person holding the reins and instead middle managers just go off and do their own thing. Apple, Microsoft, Amazon and Facebook all have a clear power center created by the founders, Google doesn't have that which is why they are so much more unpredictable.
It seems like the best model of Sundar Pichai's behavior is someone who myopically cares about his own career. He puts most of his effort into looking smart and avoiding offending people.
Honestly, Ruth Porat seems to run most of the impactful decisions at the C-level, and that's a good thing.
The shareholders won't care about that, but a privately owned company can care about it. Even successors. Startups generally attempt to become publicly owned, so they use bait and switch. And most fail anyway (to become a unicorn). However, even a CEO can care about it; it's just that they ultimately fall under the authority of the shareholders/board.
Right. Well I think that’s one of the reasons Musk doesn’t take Spacex public. The why he operates under is very much not palatable for any reasonable shareholder. Shareholders would want to scale out space deployments of practical things (satellites) and focus on that. If they knew Musk’s why, which is to get to Mars, well … well I mean that’s just crazy. What board is going to go for that?
Out of sight? He talks about it publicly and constantly (at least he used to, before Tesla and then Twitter started occupying more of his attention).
Shareholders can be told that they are aiming at Mars, full stop. As long as they don't control the voting and were told that before they bought the shares, it's fine.
I don’t disagree with what you’re trying to say, but keep in mind that SpaceX still has shareholders, and a board. It’s a much more limited and vetted set of investors, but very much not a closely-held corporation or anything like it.
> a privately owned company can care about it. Even successors.
In theory, but rarely in practice. I've seen a lot of private, family-owned businesses go out of business or get sold after the son/successor takes over because of exactly the phenomenon jesuscript mentions. Even if the successor cares, they don't care about the same things as the founder.
I think it is extremely unlikely that Jobs would have said, well, then we just shut down Apple if there is no need for us anymore. From what I have read, he was a businessman first. He also very clearly had a very good sense of product quality and was good enough technically to understand not only what was possible, but what was going to be possible in 5 years. But that he was all about the users' needs and not himself is not something I think the facts bear out.
Seems to me like everyone was treating the non-founder Jobs as typical of a founder. If he had stayed at Apple instead of going off to do NeXT, he would have been more typical and probably would still have been bad at quality and focused on slimy sales tricks.
He's really more an example of someone who was forced to correct some pretty massive defects and lack of vision in important areas where most lottery winning founding CEOs will never get corrections.
I would be kind of curious, but would find it very funny, if he wasn't a black mark against NeXT hardware back then, given that workstation users were already going to see consumer hardware as cheap even if he didn't have a reputation for cheapening his line further.
Really, I think his behavior once he was successful was much more in line with Cook's than with his earlier self, and founder envy was more an external benefit than something tied to his actual behavior.
> The original creator of the company will care about that because they know why they are trying to solve the problem. Everyone that comes after will not.
A good follow up question is why not make CD Baby a non-profit instead of for-profit? Being a non-profit foundation would telegraph to everyone, including future leadership, where their priorities should lie.
A non-profit is largely motivated to stay alive, just like a for-profit, but mainly does it by asking for money from rich people rather than selling things.
Also, the extra accounting and regulation is intense for a small organization.
Much of the wealth a non-profit generates can get lost on large salaries and expense accounts. A charity might have more rules and send a stronger signal.
But having said that, the creator could make tons of money and then shut it down because it is not needed. That's perfectly logical.
It helps that CDBaby was handling a lot of stuff that musicians didn't want to do themselves. They were happy to outsource it. If the alternative were easy, people would already have done it.
That wasn't what CDBaby lost to. They lost to the fact that CDs disappeared. CDBaby had a great business, removing a pain point, and they could genuinely want that pain point to go away entirely.
Such are the best possible businesses, genuinely caring about your customers while having enough moat to avoid being undercut tomorrow. Not all businesses can do that, regardless of the best intentions of management.
This isn't intended to reflect on any particular business owner. But I have yet to hear any business owner announce "We are totally motivated by greed, and will screw over our customers and partners to make money". Elizabeth Holmes said a lot of touchy-feely stuff. So I wouldn't put much weight in what any business owner says about their motivation.
Lots of business owners end up treating their businesses as their kids in the sense that they see them as extensions of themselves primarily, rather than their own thing. This, of course, is deeply narcissistic and hurts everyone in the vicinity.
> What if every musician just set up their own store on their own website?
Can anyone imagine the hell that would be "every musician setting up their own website?" Centralization offers a lot of benefits to the users. I can buy CDs of multiple artists at the same time. I can find similar artists (there's a natural competition here). None of this one password per artist. There's a reduced security risk for me buying from a centralized service (just need to trust one site instead of hundreds, and specifically small artists that might not know things). I definitely would not have answered that question so well.
> Of course you should care about your customers more than you care about yourself! Isn't that rule #1 of providing a good service? It's all about them, not you.
If you good-service yourself into being unprofitable and eventually bankrupt, it's not good for yourself and eventually your customers as well.
I've seen this harm friends over and over in frustrating ways. Tech aside, non-competes in other industries are completely insane. My wife is an optometrist, and all local shops have draconian non-competes you are forced to sign. If you leave the shop you can't work within 30 miles (or more!) of that location.
I've had friends move entire cities just so they can get out of a terrible work situation. Worse, I've had friends stay in bad situations because their non-compete would force them to move or drive way too far for work.
And since everyone does it, they're resigned to "it's just the way it is" and nobody wants to risk being sued.
>If you leave the shop you can't work within 30 miles
I know one person who worked in sales and was banned from selling in an entire region of the country. She was completely open about this when being recruited by another employer. That new employer appeared completely willing to work around the non-compete clause to bring her onboard.
That was, until she was actually hired and she was almost immediately pressured to sell throughout the forbidden area. When she said she was willing to, but only if the new employer would sign an agreement to cover any of her associated legal fees, they eventually backed off on the demands.
These situations always seem to push the risk to the employee to the benefit of the employer.
In normal countries these are only valid when the previous employer pays for the time you are not allowed to work.
If I work at a bank as a developer, I can go straight to the next one unless the previous employer keeps paying me compensation for not working at the next bank.
Not being allowed to work in sales at all, as the sibling commenter describes, is just insane and should not hold up in court anywhere.
IMO the primary concern at an office like that is having access to patient data and then enticing them to go elsewhere. If the person wants to go work elsewhere, totally fine. The concern is trying to take customers or other staff with them.
I know somebody this happened to. The company opened a branch office, and one of the senior staff rented an office around the corner and took half the staff and patients, sticking the company with multi-year lease agreements after the business had already borne the entire startup cost of paying people while building up a patient load, marketing, etc.
I understand the opposition to draconian non-competes, but there's a valid flip side: protecting an investment.
There are usually non-solicitation clauses as well that prevent you from actively recruiting former co-workers and clients/customers. That's different than a non-compete.
I want to like this a lot, but it's not quite there for me.
An entry says "Project Billing" with a rate of $300. Is that $300 per hour? Is that $300 total? What does that mean?
I need to be able to filter by location. For me, comparable large US cities. My cost of living is vastly different than someone living in India.
For this to be most useful, I need to be able to put in my rough skillset and then find how much similar people charge. Just browsing rates around the world isn't that useful.
And that brings me to the last issue. Hourly billing is stupid [1]. As much as possible, you should avoid charging hourly. It incentivizes you to work slower, produce worse work, and have a worse experience with the client. But it's hard to list "I charged USD 100k for my project" on a site like this, because comparing project rates requires a complete and intimate understanding of each project.
For example, my latest consulting gig was fixing some Google Analytics and Facebook conversions for a small business. They were broken, and I fixed them. It took me ~15 minutes, and they paid me $1000. They got incredible value (their ads started working properly again), and I got paid for minimal effort. What do I put down on a site like this, $4000/hr?
Awesome feedback, thanks a lot. I don't disagree about project vs hourly billing either, but it's such a routine debate, and sadly it's very common that clients _want_ hourly. Of the last few contracts I've had, I've only been lucky enough to have one go project-based.
I'm doubtful every business can make it to $14M ARR, but I bet many companies could generate $1M ARR without needing employees. And that amount of money would be life-changing for many people (with, of course, the potential to sell what you've created to exit).
I've spent much of my life chasing the "make it big" startup dream. I'll be pretty content if I can generate meaningful recurring revenue by building things I love. Still trying to decouple hours worked from the paycheck.
It's been a few years now, but a co-founder and I took the 'solo' (almost) route: we refused to take money we didn't really need yet, and refused to hire employees we didn't really need yet.

We spent 8 months building our product, and when we finally decided we were at the point where we needed to raise money and hire to scale, we had both had significant life changes that meant we didn't really have the heart to see the idea through. We had the self-awareness to realize that. So we sold out completely, and because it was just us, and for a pretty short amount of time, we made out pretty well. All of this was totally off the usual SV radar, and definitely not considered a startup success story. But who cares? It worked out great for me personally and gave me the flexibility to do the things I do with my time now.

So many people quit the rigid, corporate culture only to join another rigid, corporate culture that just goes by a different name: "startup." Doing it on your own with no outside resources for as long as you can is totally valid, and only the VCs will try to tell you otherwise.
I've been making a little progress on the "building things I love that generate revenue" piece (in part thanks to the HN crowd!), but oh boy, do the goalposts always keep shifting.
I remember the first dollar I ever made on the internet felt so amazing and validating... but several adjustments later, the mentality is more like "okay, but I still can't do this full-time."
Goalposts aside though, I'd agree that if there's a scenario where you have the autonomy and freedom to work on just the things you love, that's a game-winning scenario.
For the past year and a half, I've been focused on just one side project: a privacy-friendly personal finance simulator that doesn't ask to link your accounts (projectionlab.com), can run Monte Carlo simulations in your browser, and keeps your data client-side unless you choose otherwise.
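For anyone curious what "Monte Carlo simulations in your browser" means in practice: the core idea is to rerun a portfolio projection many times with randomized yearly returns and look at the distribution of outcomes. This is not ProjectionLab's actual engine (which presumably runs in JavaScript client-side); it's a toy sketch of the general technique with made-up return assumptions (7% mean, 15% volatility).

```python
import random

def simulate_portfolio(start_balance, annual_contribution, years,
                       mean_return=0.07, stdev=0.15, trials=10_000):
    """Toy Monte Carlo: draw a random normal return each year,
    compound the balance, and collect the ending balances."""
    endings = []
    for _ in range(trials):
        balance = start_balance
        for _ in range(years):
            r = random.gauss(mean_return, stdev)
            balance = balance * (1 + r) + annual_contribution
        endings.append(balance)
    endings.sort()
    # Median outcome and a pessimistic 10th-percentile outcome
    return endings[len(endings) // 2], endings[len(endings) // 10]

random.seed(0)  # reproducible runs for this sketch
median, p10 = simulate_portfolio(100_000, 10_000, 30)
```

The interesting output isn't a single number but the spread: the gap between the median and the 10th-percentile path is what tells you how fragile a plan is.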
Randomly came across your comment on this thread and took a few minutes to check out your sandbox env. Could totally see something like this taking off or getting bought out by a bank/Intuit/etc.
I may be a bit biased since I'm huge into personal finance, but hopefully you can target that segment and carve out a solid set of paying customers
You could do that for low-skill tasks. Fiverr might take the skill level up a notch while still being self-serve, pay-as-you-go. You can also use Heroku over AWS and so on, footing a bigger bill to have ops taken care of.
I realized recently there’s tax pressure to make money from equity and exits over income and revenue, since the cap gains rates are so much lower than income tax rates. Depends on your jurisdiction but alas.
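To make the pressure concrete, here's a back-of-the-envelope comparison using hypothetical top US federal rates (illustrative only; real brackets, state taxes, and jurisdictions vary a lot):

```python
# Hypothetical rates for illustration: 37% top ordinary-income rate,
# 20% top long-term capital-gains rate. Not tax advice.
income_tax_rate = 0.37
ltcg_rate = 0.20

gain = 1_000_000
kept_as_income = gain * (1 - income_tax_rate)  # taxed as ordinary income/revenue
kept_as_ltcg = gain * (1 - ltcg_rate)          # taxed as long-term capital gains

advantage = kept_as_ltcg - kept_as_income
print(advantage)  # → 170000.0
```

On a $1M outcome, the equity/exit route keeps an extra $170k under these assumed rates, which is a meaningful thumb on the scale against plain revenue.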
That is my perfect scenario as well. I don't want the startup investment hustle, I just want to run a comfortable software business for people in my community.