Forgive me, but I don't understand. Perhaps you can elaborate a bit (I did read the website.)

So, it seems to me like your primary target here is the suppliers' marketing budget. The line of thought is that if marketing goes away, then stuff gets cheaper.

So your plan is to alert buyers directly when a product is for sale. So if 1 million products came up for sale today, I'd get a million emails? That doesn't seem workable, but maybe I've missed something?

No cost to buyers? Means that sellers will pay to support this service? Which presumably comes out of their marketing budget? But (I'm guessing) they only pay for goods actually sold (so should relist their entire inventory every day?)

Edit - I reread - no cost to sellers. So (small?) cost to buyers, like um, an auction premium. Which the buyer should factor in...? Seller still going to relist though...

If I have an innovative product, how will I reach new people? People like my mom who don't even know that an Automated Back Scratcher 2000 exists? Perhaps by Marketing it?

All of this is predicated on suppliers using a "cost plus" pricing approach? As distinct from a "market price" approach? If my competitors are selling at $100, including a $75 marketing cost, then I should sell "direct" (well, not really direct, but through your site with presumably lower charges) at $25? Why would I do that? Why wouldn't I sell at say $99?

How is your site not just eBay?

I'm sure you've thought this through, and can enlighten me. I suspect an FAQ on your web page might help answer this too...


I have come across systems that use GET but with a payload like POST.

This allows the GET to bypass the 4k URL limit.

It's not a common pattern, and QUERY is a nice way to differentiate it (and, I suspect, will be more compatible with middleware).

I have a suspicion that quite a few servers support this pattern (as does my own) but not many programmers are aware of it, so it's very infrequently used.
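
To make this concrete, here's a rough sketch of what the pattern looks like from the client side, in Python with the requests library (the endpoint and payload are invented for illustration):

    import requests

    # A GET request that carries a JSON body. This only works against servers
    # that explicitly support the pattern; many proxies and frameworks will
    # drop or reject the body.
    resp = requests.get(
        "https://example.com/search",  # hypothetical endpoint
        json={"filters": {"category": "books", "max_price": 20}},
    )
    print(resp.status_code, resp.text)

The proposed QUERY method would look the same apart from the verb, e.g. requests.request("QUERY", url, json=payload).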


Sending a GET request with a body is just asking for all sorts of weird caching and processing issues.

I get that the GP's suggestion is unconventional, but I don’t see why it would cause caching issues.

If you’re sending over TLS (and there’s little reason why you shouldn’t these days) then you can limit these caching issues to the user agent and the infra you host.

Caching is also generally managed via HTTP headers, which you have control over too.

Processing might be a bigger issue, but again, it’s only your own hosting infrastructure you need to be concerned about, and you have ownership over that.

I’d imagine using this hack would make debugging harder. Likewise for using any off-the-shelf frameworks that expect things to conform to a Swagger/OpenAPI definition.

Supplementing query strings with HTTP headers might be a more reliable interim hack. But there’s definitely not a perfect solution here.
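
As a sketch of that header idea (the header name and endpoint are made up, and note that servers impose their own header size limits too):

    import json
    import requests

    # Hypothetical: ship the oversized filter object in a custom header
    # instead of the GET body or the query string.
    filters = {"category": "books", "max_price": 20}
    resp = requests.get(
        "https://example.com/search",
        headers={"X-Search-Filters": json.dumps(filters)},
    )
    print(resp.status_code)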


To be clear, it's less of a "suggestion" and more of a report of something I've come across in the wild.

And as much as it may disregard the RFC, that's not a convincing argument for the customer who is looking to interact with a specific server that requires it.


Caches in web middleware like Apache or nginx ignore the GET request body by default, which may lead to bugs and security vulnerabilities.

But as I said, you control that infra.

I don’t think it’s unreasonable to expect your sysadmins, devops, platform engineers, or whatever title you choose to give them, to set up these services correctly, given it’s their job to do so and there’s a plethora of security risks involved.

If you can’t trust them to do that little, then you’re fucked regardless of whether you decide to send payloads as GET bodies.

And there isn’t any good reason not to contract pen testers to check over everything afterwards.


> I don’t think it’s unreasonable to expect your sysadmins, devops, platform engineers, or whatever title you choose to give them, to set up these services correctly, given it’s their job to do so and there’s a plethora of security risks involved.

Exactly, and the correct way to set up GET requests is to ignore their bodies for caching purposes because they aren't expected to exist: "content received in a GET request has no generally defined semantics, cannot alter the meaning or target of the request, and might lead some implementations to reject the request and close the connection because of its potential as a request smuggling attack" (RFC 9110)

> And there isn’t any good reason not to contract pen testers to check over everything afterwards.

I am pretty sure our SecOps and Infra Ops and code standards committee will check it and declare that GET bodies are a hard no.


> Exactly, and the correct way to set up GET requests is to ignore their bodies for caching purposes because they aren't expected to exist

No. The correct way to set up this infra is the way that works for a particular problem while still being secure.

If you’re so inflexible as an engineer that you cannot set up caching correctly for a specific edge case because it breaks your preferred assumptions, then you’re not a very good engineer.

> and might lead some implementations to reject the request and close the connection because of its potential as a request smuggling attack"

Once again, you have control over the implementations you use in your infra.

Also, it’s not a request smuggling attack if the request is supposed to contain a payload in the body.

> I am pretty sure our SecOps and Infra Ops and code standards committee will check it and declare that GET bodies is a hard no.

I wouldn’t be so sure. I’ve worked with a plethora of different infosec folk: from those who mandate that PostgreSQL use non-standard ports out of strict compliance with NIST, even for low-risk reports, to others who have been fine with some pretty massive deviations from traditionally recommended best practices.

The good infosec guys, and good platform engineers too, don’t look at things in black and white like you are. They build up a risk assessment and judge each deviation on its own merit. Thus GET body payloads might make sense in some specific scenarios.

This doesn’t mean that everyone should do it, nor that it’s a good idea outside of those niche circumstances. But it does mean that you shouldn’t hold on to these rigid rules like gospel truths. Sometimes the most pragmatic solution is the unconventional one.

That all said, I can’t think of any specific circumstance where you’d want to do this kind of hack. But that doesn’t mean that reasonable circumstances would never exist.


I work as an SRE and would fight tooth and nail against this. Not because I can’t do it, but because it’s a terrible idea.

For one, you’re wrong about TLS meaning only your infra and the client matter. Some big corps install a root CA on everyone’s laptop and MITM all HTTP/S traffic. The one I saw was Bluecoat, no idea if it follows your expected out-of-spec behavior or not.

For two, this is likely to become incredibly limiting at some point and require a bunch of work to re-validate or re-implement. If you move to AWS, do ELBs support this? If security wants you to use Envoy for a service mesh, is it going to support this? I don’t pick all the tools we use, so there’s a good chance corporate mandates something incompatible with this.

You would need very good answers to why this is the only solution and is a mandatory feature. Why can’t we cache server side, or implement our own caching in the front end for POST requests? I can’t think of any situations where I would rather maintain what is practically a very similar fork of HTTP than implement my own caching.


> Not because I can’t do it, but because it’s a terrible idea.

To be clear, I'm not advocating it as a solution either. I'm just saying all the arguments being made for why this wouldn't work are solvable. Just like you've said there that it's doable.

> Some big corps install a root CA on everyone’s laptop and MITM all HTTP/S traffic.

I did actually consider this problem too but I've not seen this practice in a number of years now. Though that might be more luck on my part than a change in industry trends.

> this is likely to become incredibly limiting at some point and require a bunch of work to re-validate or re-implement.

I would imagine if you were forced into a position where you'd need to do this, you'd be able to address those underlying limitations when you come to the stage that you're re-implementing parts of the wider application.

> If you move to AWS, do ELBs support this?

Yes they do. I've actually had to solve similar problems quite a few times in AWS over the years when working on broadcast systems, and later, medical systems: UDP protocols, non-standard HTTP traffic, client certificates, etc. Usually, the answer is an NLB rather than ALB.

> You would need very good answers to why this is the only solution and is a mandatory feature.

Indeed


There is no secure notion of "correctly" that goes directly against both specs and de facto standards. I am struggling to even imagine how one could make an L7 balancer that takes into account the possibility that someone would go against the HTTP spec and still get their request served securely and in a timely manner. I personally don't even know which L7 balancer my company uses or how it would cache GET requests with bodies, because I don't have to waste time on such things.

Running PostgreSQL on any non-privileged port is a feature, not a deviation from a standard. If you wanted to run PostgreSQL on a port under 1024, now that would be a security vulnerability, as it requires root access.

There is no reason to "build up a risk assessment and judge each deviation on its own merit" unless it is an unavoidable technical limitation. Just don't make your life harder for no reason.


> There is no secure notion of "correctly" that goes directly against both specs and de facto standards.

That's clearly not true. You're now just exaggerating to make a point.

> I am struggling to even imagine how one could make an L7 balancer that should take into account possibilities that someone would go against HTTP spec and still get their request served securely and timely

You don't need application-layer support to load balance HTTP traffic securely and timely.

> Running PostgreSQL on any non-privileged port is a feature, not a deviation from a standard. If you want to run PostgreSQL on a port under 1024 now that would be security vulnerability as it requires root access.

I didn't say PostgreSQL's listening port was a standard. I was using that as an example to show the range of different risk appetites I've worked to.

Though I would argue that PostgreSQL has a de facto standard port number. And thus by your own reasoning, running that on a different port would be "insecure" - which clearly is BS (as in this rebuttal). Hence why I called your "de facto" comment an exaggeration.

> There is no reason to "build up a risk assessment and judge each deviation on its own merit" unless it is an unavoidable technical limitation.

...but we are talking about this as a theoretical, unavoidable technical limitation. At no point was anyone suggesting this should be the normal way to send GET requests.

Hence why I said you're looking at this far too black and white.

My point was just that it is technically possible when you said it wasn't. But "technically possible" != "sensible idea under normal conditions"


> I have come across systems that use GET but with a payload like POST.

I think that violates the HTTP spec. RFC 9110 is very clear that content sent in a GET request cannot be used.

Even if both clients and servers are somehow implemented to ignore HTTP specs and still send and receive content in GET requests, the RFC specs are very clear that participants in HTTP connections, such as proxies, are not aware of this abuse and can and often do strip request bodies. These are not hypotheticals.


Elasticsearch comes to mind.[0]

The docs state that if the query is in the URL parameters, that will be used. I remember that a few years back it wasn't as easy - you HAD to send the query in the GET request's body. (Or it could have been that I had monster queries that didn't fit within the URL character limits.)

0: https://www.elastic.co/docs/api/doc/elasticsearch/operation/...
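
For reference, this is roughly what it looks like (host, index name and query are made up); the _search endpoint accepts the query DSL in the body of a GET:

    import requests

    # Elasticsearch _search with the query DSL in a GET body.
    # Host, index name and query are made up for illustration.
    resp = requests.get(
        "http://localhost:9200/my-index/_search",
        json={"query": {"match": {"title": "back scratcher"}}},
    )
    print(resp.json()["hits"]["total"])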


> you HAD to send the query in the GET requests body.

I remember this pain, circa 2021 perhaps?


Probably closer to 2019. Maybe the optionality is a relatively new feature then.

I think GraphQL is a byproduct of some serious shenanigans.

"Your GraphQL HTTP server must handle the HTTP POST method for query and mutation operations, and may also accept the GET method for query operations."

Supporting a body in the GET request was an odd requirement for something I had to code up with another engineer.


And the whole GET/POST difference matters for GraphQL at scale: we saved a truckload of money by switching our main GraphQL gateway requests to GET wherever possible.
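
For anyone curious what the GET form looks like: per the GraphQL-over-HTTP convention the query document goes into the query string (the endpoint and schema below are made up), which is what lets CDNs and other caches treat it like any other GET:

    import json
    import requests

    # GraphQL query operation sent via GET: the document in the "query"
    # parameter, variables JSON-encoded. Endpoint and schema are hypothetical.
    query = "query Product($id: ID!) { product(id: $id) { name price } }"
    resp = requests.get(
        "https://example.com/graphql",
        params={"query": query, "variables": json.dumps({"id": "123"})},
    )
    print(resp.json())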

Servers can support it but not browsers.

I guess I'd like a bit more explanation on what you envisage. I'm thinking back to presentations I've done, even ones where entertainment was an ancillary goal, but I'm not seeing how this would help?

Obviously there are a bunch of presentations which are dry, and can't really be tweaked. I'm thinking of financial things, income statements etc. They're primarily informational, and we need to see all the slides. Having the audience vote on whether to look at the cash-flow slide, or debtors analysis next seems contrived.

Then I've done presentations to inform. Like say introducing developers to Unicode. That tends to follow a path of knowledge, where one fact or concept builds on previous facts. We can't really discuss string normalization before covering code points and characters etc.

Sales presentations are a bit hit or miss. They can be wildly entertaining, or dreadfully dull. They can certainly be too focused on the product and too unfocused on the specific customer need. But a good one also leads the customer through specific steps (while keeping attention.)

So I'm a bit curious as to the presentations you've experienced where you feel this would have improved things? (Other than the very common "please can we end this presentation already" sentiment which is alas all too common.)


I'm really not supposed to be sarcastic here, but you're milking it hard. So let me get that out of the way first...

You looked at the state of the world, you experienced climate change, witnessed the dismantling of the US govt, the destruction caused by killing USAID, the hot war between a nuclear state and a quasi-European nation, the genocide in the Middle East, plus a million other things this year, but the thing that really concerns you is AI?

On a more positive note, I know the world can be a scary place. I know change is unsettling. I know AI and the impact it will have is uncertain. But I have lived through multiple world-shattering changes. I grew up before we had TV, never mind computers or phones or smart phones. AI is just more of the same. And the future will bring more change. Self-driving cars will change everything.

The solution to change is not to try and ban it. The solution is to embrace it. Your life will be full of constant change. Don't worry about the future, the future will look after itself. Live in the moment that is now.


This is a patronising comment - first telling us what we should be concerned about, then telling us there’s actually no point in being concerned about anything anyway; change just happens and we should get on with it.

What if AI isn’t like the other changes you’ve experienced? What if a superintelligence is developed (or emerges)? What are the implications of that? Can it be controlled?

The people running the biggest AI companies in the world are themselves worried about these questions, so we should be too.

HN is exactly the type of place that should be interested in discussing issues like this and it’s unfair to just dismiss the concerns.


Any comment from an experienced person to an inexperienced one can be categorized as patronizing. (Including this one.)

So sure, if you want to fret about the future, if you want to be anxious about hypothetical things like singularities or super-AIs or whatever, then go for it. Whatever makes you happy.

All I'm suggesting is that this path is not particularly unique. I've lived through the cold war. Through the moral panic of the 80s and 90s with the emergence of personal computers. Through cell phones. Through Y2K.

Yes, this is different. But it's also exactly the same. Forgive me for being sanguine. The future, whatever it is is coming. Will worrying about it add a single day to your life?

If you want to change the trajectory of the future, then I recommend running for public office. Perhaps others will follow you.

The genie though is out of the bottle. It cannot be put back.

(For what it's worth, I don't think a big statistics machine will end civilization.)


"Disclose" is kinda ambiguous here.

One of the hindrances to getting a patent is prior art. Meaning that the thing you're trying to patent already exists.

After a patent is issued lots of people typically say "but that's obvious". Ok, so let's capture the obvious before someone patents it.

Equally, I might be doing something which is patentable, but I have no interest in patenting it. By registering it somewhere I make it easy for the patent office to search, thus turning down a patent application. Thus preventing me being sued by some new patent holder.

But to answer the question, no I wouldn't use a service like this. Partly because I wouldn't spend the money and partly because (in isolation) the service doesn't work. Unless the patent office is involved and actively using it, bad patents still get granted.

Yes, it might add some weight to a court case where I'm being sued, but honestly if I get there I've already lost. And my code backups show the same thing anyway.

Plus, which of the thousands of things I am doing should I register? Am I supposed to trawl through my code deciding on each line if it's novel? If I do all this work I might as well just get a patent. That can at least be used in cross-licensing agreements.

Yes, software patents are a blight, but I'm not sure this is really a solution.


The principle is fine, and certainly applies in many countries.

$100 extra per entry seems steep to me, but I guess that boils down to your currency exchange rate.


Yeah, at first I was thinking it was an equivalent to the America the Beautiful passes which are $80/year for US citizens and residents for access across multiple parks. And you only need one of those per vehicle. But no, this is $100 per person per park entry which does seem quite excessive. I had an amazing time on a road trip with friends about a decade ago in the US South West. We visited Bryce Canyon National Park, Grand Canyon National Park, Rocky Mountain National Park, Yosemite National Park, and Zion National Park. Some across multiple days. If our group had been "foreigners" with this fee structure, we would have had to pay an additional $800/person or so. That would have almost doubled the cost of our trip.


>> Tools you pay for that feel overpriced

I get your thinking here - free market spots a gap and competes on price. But personally I think this is a "poor" business strategy.

Firstly, competing on price is a race to the bottom. There's always someone who's prepared to offer it for less. And that always means the offering ultimately gets worse over time.

Secondly, if you go this route you attract customers who are "price sensitive". They'll leave you as quickly as they joined for someone who offers a lower price. In other words you are self-selecting the worst possible kind of customer to have.

Thirdly you send a signal regarding quality. A $5 product is obviously worse than a $100 product, because if the $5 -could- charge more they -would-.

(Hint: for VC-funded companies the customer IS the VC, NOT the person paying a subscription. So be careful comparing your pricing to VC company pricing if you are not VC funded.)

Yes, look for pain. If you make something good, charge more, not less, than incumbents. Build a customer base of people who are quality sensitive, not price sensitive.

30 years ago I had a product on the market at $199. It sold well, validating the market space. A not-quite-as-good competitor appeared for $99. He made some sales. I was tempted to reduce my price. An old head told me otherwise. Sales-wise we'd say things like "you get what you pay for". We outsold the new guy. Which meant we had lots more revenue for support, docs, polish and so on. Today we charge $399. The competitor folded after a couple of years.

We aimed high, charged a lot, and focused on delivering quality. We attracted loyal customers who appreciated quality more than a few $ saving. 30 years, and 50 products later, we continue to dominate our space.


I agree that driving down pricing gets you customers you don't want, but LLM-written software changes the game. If it cost $100,000 to build a thing before, and now it costs $10,000 to build it with AI, your upstart competitor can charge $50 for something good enough to be competitive to your $199.


Maybe. But maybe not.

Firstly, writing the code is perhaps only 10% of the product cost (over its lifetime.) Support, marketing, adjusting, debugging, handling edge cases and so on are where most of the costs lie.

In my case brighter folks than me wrote very good code in the same space. But taking working code and turning it into a generic product is a lot of work.

Think of it like building a landing strip for your plane. If you're the only one landing there, then the specs are simple, and tight. You need little more than a grader and a building big enough for your plane. But if you're building an airport (think, say, JFK) you have to cater for everything from a Cessna to a Dreamlifter.

A LLM can certainly speed things along. But I wouldn't want to build based on LLM foundations. Long term support and maintainability would scare me.


You're sensitive to your passport information being stolen. You don't trust their security. That's all perfectly OK.

Fortunately they offer other option(s), which it seems you made use of. So you're all good.

A different user may have different priorities, and may choose a different option.

Which is fine. Options are good. There's no requirement that you have to like the ones you don't use.


It's not requiring you to submit a passport. That is just one of the options.

Feel free to use a different option.


There are two options:

1. PayPal
2. Passport

I do not have PayPal, and would not do passport.

That's fine - Hetzner's business belongs to them. I did not claim that none of the options could be viable; only that a passport is an option that should not be in use.


I agree, it's not as simple as "$500 per year". In some cases it can be, but mostly it's not.

Firstly, you need to clearly define what is included, and even more so, what is not included. How many hours is $500? Who decides what is a bug? Can they get new features because they have support? How many installs does the support cover? And so on.

And if they start with things like "supplier agreements" etc, just walk away.

Yes, some companies have a threshold where managers can just "spend money". Some managers may even use that to support you. But taking any money changes the relationship you have with the user.

Right now, it's completely inside your control. Direction, priorities, scope, pace, levels of effort, etc. I'm a huge fan of getting paid, I write software for money, but make no mistake - taking money changes things.

