Hacker News | strgcmc's comments

For anyone who doesn't know what you mean, here's an archived copy of Steve Yegge's post about this directive + other musings comparing Amazon vs Google (which is how a lot of us came to find out about this, via Yegge's write-up): https://news.ycombinator.com/item?id=3102800

Copied the most relevant snippet below

---

So one day Jeff Bezos issued a mandate. He's doing that all the time, of course, and people scramble like ants being pounded with a rubber mallet whenever it happens. But on one occasion -- back around 2002 I think, plus or minus a year -- he issued a mandate that was so out there, so huge and eye-bulgingly ponderous, that it made all of his other mandates look like unsolicited peer bonuses.

His Big Mandate went something along these lines:

1) All teams will henceforth expose their data and functionality through service interfaces.

2) Teams must communicate with each other through these interfaces.

3) There will be no other form of interprocess communication allowed: no direct linking, no direct reads of another team's data store, no shared-memory model, no back-doors whatsoever. The only communication allowed is via service interface calls over the network.

4) It doesn't matter what technology they use. HTTP, Corba, Pubsub, custom protocols -- doesn't matter. Bezos doesn't care.

5) All service interfaces, without exception, must be designed from the ground up to be externalizable. That is to say, the team must plan and design to be able to expose the interface to developers in the outside world. No exceptions.

6) Anyone who doesn't do this will be fired.

7) Thank you; have a nice day!

Ha, ha! You 150-odd ex-Amazon folks here will of course realize immediately that #7 was a little joke I threw in, because Bezos most definitely does not give a shit about your day.

#6, however, was quite real, so people went to work. Bezos assigned a couple of Chief Bulldogs to oversee the effort and ensure forward progress, headed up by Uber-Chief Bear Bulldog Rick Dalzell. Rick is an ex-Army Ranger, West Point Academy graduate, ex-boxer, ex-Chief Torturer slash CIO at Wal*Mart, and is a big genial scary man who used the word "hardened interface" a lot. Rick was a walking, talking hardened interface himself, so needless to say, everyone made LOTS of forward progress and made sure Rick knew about it.

Over the next couple of years, Amazon transformed internally into a service-oriented architecture. They learned a tremendous amount while effecting this transformation. There was lots of existing documentation and lore about SOAs, but at Amazon's vast scale it was about as useful as telling Indiana Jones to look both ways before crossing the street. Amazon's dev staff made a lot of discoveries along the way. A teeny tiny sampling of these discoveries included:

- pager escalation gets way harder, because a ticket might bounce through 20 service calls before the real owner is identified. If each bounce goes through a team with a 15-minute response time, it can be hours before the right team finally finds out, unless you build a lot of scaffolding and metrics and reporting.

- every single one of your peer teams suddenly becomes a potential DOS attacker. Nobody can make any real forward progress until very serious quotas and throttling are put in place in every single service.

- monitoring and QA are the same thing. You'd never think so until you try doing a big SOA. But when your service says "oh yes, I'm fine", it may well be the case that the only thing still functioning in the server is the little component that knows how to say "I'm fine, roger roger, over and out" in a cheery droid voice. In order to tell whether the service is actually responding, you have to make individual calls. The problem continues recursively until your monitoring is doing comprehensive semantics checking of your entire range of services and data, at which point it's indistinguishable from automated QA. So they're a continuum.

- if you have hundreds of services, and your code MUST communicate with other groups' code via these services, then you won't be able to find any of them without a service-discovery mechanism. And you can't have that without a service registration mechanism, which itself is another service. So Amazon has a universal service registry where you can find out reflectively (programmatically) about every service, what its APIs are, and also whether it is currently up, and where.

- debugging problems with someone else's code gets a LOT harder, and is basically impossible unless there is a universal standard way to run every service in a debuggable sandbox.
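The service-registration point above can be made concrete with a sketch. Yegge doesn't describe Amazon's actual registry beyond "find out reflectively about every service, what its APIs are, whether it is currently up, and where," so everything below (class, method, and field names) is an invented illustration of that idea, not Amazon's implementation:

```python
# Hypothetical minimal service registry, illustrating the discovery point
# above. All names and fields are invented for illustration.

class ServiceRegistry:
    def __init__(self):
        self._services = {}  # service name -> metadata dict

    def register(self, name, endpoint, api_spec):
        """A team registers its service so other teams can discover it."""
        self._services[name] = {
            "endpoint": endpoint,
            "api": api_spec,     # reflective description of the service's APIs
            "healthy": True,     # whether the service is currently up
        }

    def lookup(self, name):
        """Consumers discover where a service lives and what it exposes."""
        return self._services.get(name)

    def set_health(self, name, healthy):
        """Monitoring updates whether the service is currently up."""
        if name in self._services:
            self._services[name]["healthy"] = healthy


registry = ServiceRegistry()
registry.register("orders", "https://orders.internal:8443",
                  api_spec=["get_order", "place_order"])
info = registry.lookup("orders")
print(info["endpoint"], info["healthy"])
```

The key design point Yegge describes is that the registry is itself just another service, so the same discovery, quota, and monitoring lessons apply to it recursively.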

That's just a very small sample. There are dozens, maybe hundreds of individual learnings like these that Amazon had to discover organically. There were a lot of wacky ones around externalizing services, but not as many as you might think. Organizing into services taught teams not to trust each other in most of the same ways they're not supposed to trust external developers.

This effort was still underway when I left to join Google in mid-2005, but it was pretty far advanced. From the time Bezos issued his edict through the time I left, Amazon had transformed culturally into a company that thinks about everything in a services-first fashion. It is now fundamental to how they approach all designs, including internal designs for stuff that might never see the light of day externally.

At this point they don't even do it out of fear of being fired. I mean, they're still afraid of that; it's pretty much part of daily life there, working for the Dread Pirate Bezos and all. But they do services because they've come to understand that it's the Right Thing. There are without question pros and cons to the SOA approach, and some of the cons are pretty long. But overall it's the right thing because SOA-driven design enables Platforms.

That's what Bezos was up to with his edict, of course. He didn't (and doesn't) care even a tiny bit about the well-being of the teams, nor about what technologies they use, nor in fact any detail whatsoever about how they go about their business unless they happen to be screwing up. But Bezos realized long before the vast majority of Amazonians that Amazon needs to be a platform.

You wouldn't really think that an online bookstore needs to be an extensible, programmable platform. Would you?


> You wouldn't really think that an online bookstore needs to be an extensible, programmable platform. Would you?

Well, we were making it a platform in small ways long before that edict from Bezos. But because it used to be only an online bookstore, the footprint was a lot smaller.

1. the external interface was ... HTTP

2. the pages were designed to be easily machine parsable

3. you could queue up search queries that amzn would run on its own hardware, and notify you of the results asynchronously.

Sure, this didn't look anything like the things Yegge is describing, but the idea that "it's a platform, dummies" was some new revelation is misleading.


I haven’t read this in years and it was delightful to see it posted here.

I think you might be over-fixated on a very prediction-market-esque framing of this plot device... if you broaden it slightly, the idea of someone in a fictional world manipulating the news reporting of an act or set of acts, rather than caring so much about the root act itself, is as stated before, quite common.

For example, this from House of Cards: https://www.nytimes.com/2016/03/07/arts/television/house-of-...

> Pollyhop is a fictional, Google-esque search engine that according to Leann’s polling expert is being exploited by the Republican candidate Will Conway in ways that suggest Underwood can’t possibly beat him in the general election. The explanation of how Pollyhop works is convoluted at best, but the gist is that Conway and his people are manipulating search engine results so that only positive coverage of their side appears.

Or more recent examples of what essentially boils down to the plot device of "media manipulation" aka manipulating the "news reporting of the act":

- See the most recent season of Industry, which included several plot points about manipulating news coverage as a short-seller and the company being targeted fought back and forth (including specific focus on the individual journalists involved)

- See Andor, everything about how the Empire twists perception of what's happening on Ghorman, leading up to the Ghorman massacre itself, and then culminating in Mon Mothma's speech in the Senate denouncing "the death of truth is the ultimate victory of evil"

- See The Orville, a particular episode: https://orville.fandom.com/wiki/Majority_Rule which includes the plot point of hacking that society's "master feed" to plant false manipulative stories to curry public favor and save a character from being punished

- See The Boys, how Vought manipulates the media to twist coverage of their "heroes" even when they commit atrocities

- See other House of Cards plotlines involving Zoe Barnes and being a direct mouthpiece for Frank Underwood

I think the only real difference, if any, is that in the most common form of portrayal, less attention is paid to the journalist as the point of leverage and how they deal with threats or bribes or whatever. The fact that such manipulation occurs is commonly accepted as a trope, without requiring too much of a deep dive. Whether a story chooses to focus on the "reporter's perspective" is perhaps less common, but not uncommon IMO.


This is why (flawed though the process may be in other ways) a company like Amazon asks "customer obsession" questions in engineering interviews: to gather data about whether the candidate appreciates this point about needing to understand user problems, and about what steps the candidate takes to learn the users' POV, to walk a mile in their shoes so to speak.

Of course interview processes can be gamed, and signal to noise ratio deserves skepticism, so nothing is perfect, but the core principle of WHY that exists as part of the interview process (at Amazon and many many other companies too) is exactly for the same reason you say it's your "favorite".

Also IIRC, there was some internal research done in the late 2010s or so, that out of the hiring assessment data gathered across thousands of interviews, the single best predictor of positive on-the-job performance for software engineers, was NOT how well candidates did on coding rounds or system design but rather how well they did at the Customer Obsession round.


I think it comes down to having some insight about the customer need and how you would solve it. Prior experience in the same domain is helpful but is neither a guarantee of, nor a blocker to, having a customer insight (lots of people work in a domain but have no idea how to improve it; alternatively, an outsider might see something that the "domain experts" have been overlooking).

I just randomly happened to read the story of some surgeons asking a Formula 1 team to help improve their surgical processes, with spectacular results in the long term... The F1 team had zero medical background, but they assessed the surgical processes and found huge issues with communication and lack of clarity, people reaching over each other to get to tools, or too many people jumping to fix something like a hose coming loose (when you just need 1 person to do that 1 thing). F1 teams were very good at designing hyper-efficient and reliable processes to get complex pit stops done extremely quickly, and the surgeons benefitted a lot from those process engineering insights, even though it had nothing specifically to do with medical/surgical domain knowledge.

Reference: https://www.thetimes.com/sport/formula-one/article/professor...

Anyways, back to your main question -- I find that it helps to start small. Are you someone who is good at using analogies to explain concepts in one domain to a layperson outside that domain? Or even better, at using analogies that help a domain expert from domain A instantly recognize an analogous situation or opportunity in domain B (where they are not an expert)? I personally have gotten a lot of benefit from being naturally curious about learning/teaching through analogies, treating analogy-making as a fun hobby in its own right, and honing it professionally so I can be useful in cross-domain contexts. You don't need to build this up in your head as some grand mystery with a secret cheat code for becoming a founder in an unfamiliar domain -- start very small, practice making analogies with your friends or peers, and see if you can find fun ways of explaining things across domains together (either you explain something to them with an analogy, or they explain something to you and you try to analogize it from your POV).


I got curious and validated your source [1], to pull the exact quote:

"The proportion of Connecticut gambling revenue from the 1.8% of people with gambling problems ranges from 12.4% for lottery products to 51.0% for sports betting, and is 21.5% for all legalized gambling."

Without going into details, I do have some ability to check if these numbers actually "make sense" against real operator data. Will try to sense-check whether the data I have access to roughly aligns with this or not.

- the "1.8% of people" being problem gamblers does seem roughly correct, per my own experience

- but those same 1.8% being responsible for 51% of sportsbook revenue, does not align with my intuition (which could be wrong! hence why I want to check further...)

- it is absolutely true that sportsbooks have whales/VIPs/whatever-you-call-them, and the general business model is indeed one of those shapes where <10% of the customers account for >50% of the revenue (using very round imprecise numbers), but I still don't think you can attribute 51% to purely the "problem gamblers" (unless you're using a non-standard definition of problem-gambler maybe?)


I'm sure nobody cares, but the data I can check shows a couple interesting observations (won't call them conclusions, that's too strong):

- Yes, you can find certain slices of 1.8% of customers, that would represent 50%+ of revenue... But this is usually pretty close to simply listing out the top 1.8% of all accounts by spend

- Therefore, to support the original claim, one would essentially have to accept, by definition, that nearly all of the top revenue accounts are "problem gamblers" and almost no one else is. But this doesn't pass a basic smell test: population-wise there are more "poor" problem gamblers than "rich" ones (because there are a lot more poor people in general), so it's very unlikely that the 1.8% of the total population who are problem gamblers overlap so heavily with the top 1.8% of customer accounts by revenue.
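To make that smell test concrete, here's a toy back-of-envelope calculation. All numbers are invented for illustration (in reality problem-gambling status and spend are surely correlated, not independent), but it shows just how extreme that correlation would have to be for the claim to hold:

```python
# Illustrative independence check -- all numbers invented for illustration.
# Suppose problem gamblers are 1.8% of players, and we compare them to the
# top 1.8% of accounts by spend. If the two traits were independent, the
# expected overlap between the two groups would be tiny.
population = 1_000_000
problem_rate = 0.018
top_share = 0.018

problem_gamblers = population * problem_rate   # ~18,000 people
top_spenders = population * top_share          # ~18,000 accounts

# Under independence, expected overlap = population * p1 * p2
expected_overlap = population * problem_rate * top_share  # ~324 people

# i.e. only ~1.8% of problem gamblers would land in the top-revenue tier.
# Attributing 51% of revenue to them requires a near-perfect correlation
# between "problem gambler" and "top spender", which is the claim at issue.
print(expected_overlap)
```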


In such scenarios (data engineering / DS / analytics is my personal background), I have learned not to underestimate the value of explicitly declaring within Team X that person X1 is dedicated to line L1, person X2 is dedicated to line L2, etc. (akin to your last line about embedding a person with that line of business).

In theory, it doesn't actually "change" anything, because Team X is still stuck supporting exactly the same number of dependencies + the same volume and types of requests.

But the benefit of explicit >>> implicit, the clarity/certainty of knowing who-to-go-to-for-what, the avoidance of context switching + the ability to develop expertise/comfort in a particular domain (as opposed to the team trying to uphold a fantasy of fungibility or that anyone can take up any piece of work at any time...), and also the specificity by which you can eventually say, "hey I need to hire more people on Team X, because you need my team for 4 projects but I only have 3 people..." -- all of that has turned out to be surprisingly valuable.

Another way to say it is: for Team X to be stretched like that initial state is probably dysfunctional, even terminally so, but it's a slow kind of decay/death. Rather than pretending it can work, pretending you can virtualize the work across people (as if people were hyper-threads in a CPU core, effortlessly switching tasks), making the assignments discrete/concrete/explicit, nominating who-is-going-to-work-on-what-for-whom, is actually a form of escalation. It forces the dysfunction to the surface, and forces the organization to confront a sink-or-swim moment sooner than it otherwise would (versus just limping on, trying to stay on top of the muddled mess of requests that keep coming in, stuck treading water and drowning slowly).

---

Of course, taking an accelerationist stance is itself risky, and those risks need to be managed. But for example, if the reaction to such a plan is something like, "okay, you've created clarity, but what happens if person X1 goes on vacation/gets-hit-by-bus, then L1 will get no support, right?"... That is the entire purpose/benefit of escalating/accelerating!

In other words, Team X always had problems, but they were hidden beneath a layer of obfuscation due to the way work was being spread around implicitly... it's actually a huge improvement if you've transformed a murky/unnameable problem into something as crisp and quantifiable as a bus-factor=1 problem (which almost everyone understands more easily/intuitively).

---

Maybe someday Team X could turn itself into a self-service platform, or a "X-as-a-service" offering, where the dependent teams do not need to have you work with or for them, but rather just consume your outputs, your service(s)/product(s), etc. at arms-length. So you probably don't always want to stay in this embedded or explicit "allocation" model.


The most apt framing I've read somewhere for reasoning about AI is to treat it like an extremely foreign, totally alien form of intelligence. Not necessarily that the models of today behave like this, but we're talking about the future, aren't we?

Just framing your question against a backdrop of "human benevolence", as well as implying this is a single dimension (that it's just a scalar value that could be higher or lower), is already too biased. You assume that logic which applies to humans, can be extrapolated to AI. There is not much basis for this assumption, in much the same way that there is not much basis to assume an alien sentient gas cloud from Andromeda would operate on the same morals or concept of benevolence as us.


A purely technology-minded compromise to this question (aka how to support both the "good" and "bad" kinds of recording) is probably something along the lines of expiry, enforcing a lack of permanence as the default (kind of like the digital-age, recording-centric version of "innocent until proven guilty", which honestly is one of the greatest inventions in the history of human legal systems). Of course, one should never make societal decisions purely from a technological practicality standpoint.

Since you can't be sure what is "bad"/illegal, and people will just record many things anyways without thinking too much about it --> then the default should be auto-expiring/auto-deletion after X hours/days, unless some reason or some confirmation is provided to justify its persistence.
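The expiry-by-default idea above can be sketched in a few lines. This is purely my illustrative sketch of the commenter's proposal; the 72-hour TTL, class names, and the `persisted` flag (set only via consent, warrant, or review in their scheme) are all invented:

```python
# Hypothetical sketch of "auto-deletion by default": a recording survives a
# sweep only if it is within its TTL or someone has justified persistence.
# The 72-hour default and all names are invented for illustration.
import time

DEFAULT_TTL_SECONDS = 72 * 3600  # auto-delete after 72 hours by default

class Recording:
    def __init__(self, created_at, persisted=False):
        self.created_at = created_at
        # persisted=True only via an explicit justification (consent,
        # "persistence warrant", or a review body's injunction)
        self.persisted = persisted

def sweep(recordings, now):
    """Keep only recordings that are inside the TTL or explicitly persisted."""
    return [r for r in recordings
            if r.persisted or (now - r.created_at) < DEFAULT_TTL_SECONDS]

now = time.time()
expired = Recording(created_at=now - 100 * 3600)                 # past TTL, dropped
warranted = Recording(created_at=now - 100 * 3600, persisted=True)  # past TTL, kept
fresh = Recording(created_at=now - 1 * 3600)                     # inside TTL, kept
remaining = sweep([expired, warranted, fresh], now)
print(len(remaining))  # -> 2
```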

For example, imagine we lived in a near future where AI assistants were commonplace. Imagine that recording was ubiquitous but legally mandated to default to "disappearing videos" like Snapchat, across all the major platforms (YouTube, TikTok, X, Twitch, Kick, etc.). Imagine that every day, you as a regular person doing regular things get maybe 10000 notifications of "you have been recorded in video X on platform Y, do you consent to this being persisted?" Law enforcement would have to go through a judge (kind of like a search warrant) to file "persistence warrants", and maybe there is another channel for concerned citizens who want to persist video of a "bad guy" doing "bad things" to request persistence (maybe like an injunction against auto-deletion until a review body can look at the request).

Obviously this would be a ton of administrative overhead, a ton of micro-decisions to be made -- which is why I mentioned the AI-assistant angle: I could tell my personal AI helper, "here are my preferences, here is when I consent to recording and here is when I don't... knowing my personal rules, please go and deal with the 10000 notifications I get every day, thanks."

If there's disagreement or lack of consensus, some rules have to be developed for combining different parties' wishes. Take a recording of a child's soccer game where maybe 8 parents consent to persistence and 3 parents don't: perhaps it's majority rule, so the persistence side wins, but then the majority has to pay the cost of API tokens to a blurring/anonymization service that protects the 3 who didn't want to be persisted. That could be a framework for handling disputed outcomes.
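The soccer-game dispute rule could be sketched as follows. This is entirely hypothetical, per the comment's own "that could be a framework" framing; the function and its outputs are invented illustration:

```python
# Hypothetical sketch of the majority-rule-plus-blurring framework from the
# soccer-game example. Everything here is invented illustration.

def resolve_persistence(consents):
    """consents: list of booleans, one per recorded party.
    Majority decides persistence; if the video is persisted, dissenters get
    anonymized (e.g. blurred), with the cost borne by the consenting majority."""
    yes = sum(1 for c in consents if c)
    no = len(consents) - yes
    persist = yes > no
    return {
        "persist": persist,
        "anonymize_count": no if persist else 0,  # dissenters to blur
    }

# 8 parents consent, 3 don't: video is persisted, with 3 people blurred.
outcome = resolve_persistence([True] * 8 + [False] * 3)
print(outcome)
```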

I'm also purposefully ignoring the edge-case problem of, what if a bad actor wants to persist the videos anyways, but in short I think the best we can do is impose some civil legal penalties if an unwilling participant later finds out you kept their videos without permission.

Anyways, I know that's all super fanciful and unrealistic in many ways, but I think that's a compromise sort of world-building I can imagine, that retains some familiar elements of how people think about consent and legal processes, while acknowledging the reality that recording is ubiquitous and that we need sane defaults + follow-up processes to review or adjudicate disputes later (and disputes might arise for trivial things, or serious criminal matters -- a criminal won't consent to their recording being persisted, but then society needs a sane way to override that, which is what judges and warrants are meant to do in protecting rights by requiring a bar of justification to be cleared).


True of course that dollars is the end goal, but frankly it'd be better if they just took the dollars out of my pocket directly, instead of poisoning my brain first so that they can trick me into giving some dollars...

Obviously I'm being hyperbolic, but I think eventually, if society survives past this phase, our descendants will look back and judge us for letting psychological manipulation be a valid economic process for generating dollars, in much the same way we might judge our ancestors for ever building a whole industry to hunt whales for oil to use as fuel (meaning, they might acknowledge that fuel is important and necessary to power an industrializing society, but they would mock us for not understanding how to refine petroleum sooner, and how silly it is to go through the tech tree of fucking whale hunting just to get some fuel).

It is fucking silly/absurd/dangerous, that we go through the tech tree branch of psychological manipulation, just to be able to sell some ads or whatever.


I think you're veering too far into politics on what was originally not a very political OP/thread, but I'll indulge you a tiny bit and also try to bring the thread back to the original theme.

You said a lot of words that I basically boil down to a thesis of: the value of "truth" is being diluted in real-time across our society (with flood-the-zone kinds of strategies), and there are powerful vested interests who benefit from such a dilution. When I say powerful interests, I don't mean to imply Illuminati and Freemasons and massive conspiracies -- Trump is just some angry senile fool with a nuclear football, who as you said has learned to reflexively use "AI" as the new "fake news" retort to information he doesn't like / wishes weren't true. But corporations also benefit.

Google benefited tremendously from inserting itself into everyone's search habits, and squeezed some (a lot of) ad money out of being your gatekeeper to information. The new crop of AI companies (and Google and Meta and the old generation too) want to do the same thing again, but this time there's a twist -- whereas before, the search+ads business could spam you with low-quality results (in proto-form, starting as the popup ads of yesteryear), it didn't necessarily try to directly attack your view of "truth". In the future, you may search for a product you want to buy, and instead of being served ads related to that product, you may be served disinformation to sway your view of what is "true".

And sure, negative advertising always existed (one company bad-mouthing a competitor's products), but those things took time and effort/resources, and once upon a time we had such things as truth-in-advertising laws and libel laws, but those concepts seem quaint and unlikely to be enforced/supported by this administration in the US. What AI enables is "zero marginal cost" scaling of disinformation and reality distortion. In a world where "truth" erodes, instead of there being a market incentive for someone to profit off of being more truth-y than other market participants, on the contrary I would expect that the oligopolistic world we live in would conclude that devaluing truth is more profitable for all parties (a sort of implicit collusion or cartel-like effect, with companies controlling the flow of truth, like OPEC controlling their flow of oil).


Why would you think it matters what you think? Keep your pretentious, supremacist narcissism to yourself and tell those you abuse what to do, because that is not going to matter here.


This is a really strange reply.


I think they just read my first sentence and decided to take offense immediately. Shrug.

All I meant was, I didn't want to go down a path of talking about Trump... that's a very, very dead horse to beat. I thought there were interesting elements to this person's ideas that were worth further discussion, that could be divorced/split off from the Trump lightning rod, so I tried to do that. I generally agreed with their original ideas and wanted to build on them or respond to them, without getting sucked into wasting breath on Trump (nobody benefits, regardless of whether you have left- or right-leaning views).

I'm sure I could fix some gaps in the way I explained myself, but oh well, just another day on the internet.

