Hacker News | pwatsonwailes's comments

Good segmentation is mostly not by demography nowadays. At best, demographics are a correlative element to something more fundamental, usually economic, behavioural or psychographic.

The problem I tend to see is that companies say they're doing JTBD research, but they're actually just running attribute-preference surveys: asking customers to rank features from a list of things the company would like to build, rather than starting off by assuming you don't know what customers require.

Listening to what people say they want (feature preferences) almost always diverges from what they actually want the product to do (a functional, emotional, or social outcome). That gets more complex when we consider that there are different levels at which you can evaluate what someone wants, which in the JTBD world are thought of as jobs as progress (why they're doing the thing) and jobs as outcomes (how they're doing the thing). There's another famous example, from Bosch's circular saw evolution. Professionals said they wanted lighter tools (and that's true), but the constraint they experienced wasn't the weight itself, it was the impact the weight had. So you can solve for weight, or you can solve for improved usability. Symptoms vs causes, sort of thing.

This is also why product teams should involve marketers, and why marketers should understand research design. The teams I've seen do well at this aren't running quick preference tests and A/B tests on features most of the time. They're generally more focused on running continuous feedback loops: they conduct broader research, engage in grounded-theory-style interpretation to understand what they can do, look at field validation to figure out what they should do, and then iterate.

For B2B especially as a side note, if your value proposition is something like accountability or proof of value, but your product's workflows don't make accountability or proving value effortless, fixing that workflow will do more for brand perception than any campaign, because nothing nukes good comms like a poor experience.


> Professionals said they wanted lighter tools (and that's true), but the constraint they experienced wasn't the weight itself, it was the impact the weight had. So you can solve for weight, or you can solve for improved usability.

"I want a lighter hammer, smaller bags of concrete, etc" Drake turning away --> "I want a heavy hammer with a long handle, and big bags plus a dolly/block & tackle" Drake nodding in approval


This is all very true. I always try to push for customer interviews to be as much about formulating new hypotheses as they are about validating existing hypotheses.

I want to scream [1] every. single. time. a business wants to talk to me. Every second of the conversation is just the same thing, over and over: here are my boxes. Please put yourself in one of my boxes.

There is no room for my feelings or free expression. It is inconvenient to the agenda of whoever created the survey.

If companies are so desperate to know what customers want, how come no one, in the past 25 years of my life as a consumer, has ever had any time to ask me a single real question?

[1] At least that's how I would feel if I hadn't numbed myself to this decades ago. Now I just escape. I hang up, I click the X, I think of nothing but getting away from this thing that, no matter what its motivation, no matter what product or cause it's selling, has exactly the same agenda: "would you like to dehumanize yourself for five minutes, so that I can make a data table that I was paid to make by someone who doesn't know what they're doing or why?"


Too many layers and competing interests. The person who ends up creating the survey realistically cares much more about going home 1 minute earlier or with 1% less neural energy spent. They've got their kids to manage when they get home.

Do you use any products from a one-person shop? They will be so much more likely to ask you real questions. Not guaranteed, of course, as some people are just clueless. Those usually don't last long as business owners though.


That's precisely what I'm wondering about now.

Is it possible that all real product ideas come from the mind of a single person, understanding and interacting with other people as actual people? And corporations are some sort of Thing that takes humanity and squishes it into a Box, a Box with one very narrow goal: to freeze the product idea from the single person and mass-produce it, "scale it up" so that millions of people are aware of its existence and are able to use it?

I mean, Google didn't even invent Google Docs. Some random little startup did. Google just bought it, then made it discoverable and usable to millions.

Which is not a small thing, I guess. But I don't know what the marketing department is for. Other than putting Google Docs on billboards, maybe, or their digital equivalent.


I think the problem is that it's rare for people to be capable of empathizing at that level and be capable of doing all the types of things that the leader of a large company needs to be able to do. It's also like a game of telephone. It's not enough to just empathize with the customer, it actually has to be turned into a product or experience. In big companies this means it gets communicated through multiple layers and each of those layers will warp and shift the actual empathy for the customer.

But honestly, I think the biggest issue is that leaders don't even realize they have this problem. I don't think it's the lower level employee being lazy. I think it's the leader not realizing how important customer and market empathy is, and not baking it into the company culture and processes. I agree wholeheartedly with the op comment at the top of this thread.

It's one of the reasons monopolies lead to such bad experiences and products: there's no competitive pressure to empathize with the customer.


Doesn't need to be a single person, but yes, all good products come from someone or a (generally) small team, spotting something, and fixing it.

As for the marketing department - depends on definition, but assuming you mean comms (as it mostly is nowadays), it's two things:

1. Making sure that potential future customers know you exist, so when they enter the market to buy, they know you're relevant and can purchase from you.

2. Making sure that when someone is in the market to buy now, they're more likely to buy from you, because you have a compelling reason for people to pick you over the other brands in the space that they can think of.

Mostly 1, some of 2, if the marketing team is good. Ratios vary as to how much though by more than you'd think, depending on industry, customer lifecycle, brand maturity and so on.


I think it's very often repeated that the hardest thing about growing a business is keeping the culture, and it's simply impossible past a certain size unless you literally develop a cult of personality like Jobs or to a lesser extent Musk. Extremely few manage to do it. And when the culture goes, that's what you get: least effort. And the least effort is to do what you're saying, "just do more of the same, but bigger, or with a hint of chocolate".

At least in most of the modern world. I think this effect is a _little_ weaker in China. It definitely used to be much weaker in the West too, at some point in time. But individualism, cultural capture by big capital, yaddah yaddah.


"how come no one, in the past 25 years of my life as a consumer, has ever had any time to ask me a single real question"

There's a lot of shit marketers, is my short answer.

Like, a lot.


Hahaha. I'm sorry but I find this funny. I discovered jobs to be done in 2017, then dove deep into outcome-driven innovation and spent years consulting as a practitioner who interviewed people on behalf of my clients. My clients would push for their hypotheses to validate, but instead I would approach the interviews as open-ended exploratory empathy building exercises.

I would start out my interviews with things like "In your own words, what is the purpose of what you do?" and then engage in an open conversation, stopping at questions like: what do you enjoy the most about what you do, what do you find the most difficult, what do you find the most tedious, and what's the most important thing you need to get done that you're the least satisfied with?

Most of my questions would be follow-up questions. I would also do a lot of active listening.

What I find funny is that people would often be a bit resistant to doing these interviews, but almost every single interview went long because the interviewee was clearly enjoying it so much. I tended not to ask whether they had a hard stop at the start of the interview, but instead would ask if they had a hard stop about 15 minutes before the scheduled end time. Of course, when interviewees truly did have a hard stop we would end on time, but I swear, about 80% of the time interviews went long, sometimes 2 hours long. I started to realize that part of what I was doing was providing therapy.

Often the clients' hypotheses wouldn't be validated, but I could almost always point to what the interviewees would value and what opportunities were there.

I don't do this anymore because most of the clients I worked with couldn't see the value in it, and I'm sure for very similar reasons that you were lamenting in your comment.

I'm grateful for the experience. I performed over 500 interviews like this across many different industries, from frontline workers to the C-suite of Fortune 500 companies. The experience was so valuable. Now I build product firsthand and exercise the skills I developed with my own customers and the market I serve, and being in a tight feedback loop is working really well so far. It's fun to build things people want.


Which reading material would you recommend to get up to speed with outcome-driven innovation and JTBD?

And why didn't your clients see value in this? Was it because these insights didn't make their way into their service or product?

What's the current demand for JTBD and ODI consulting?


It's what always happens when there's a disconnect between the product built and the actual thing people want to do. In marketing, we differentiate between Jobs-As-Activities (the task of "changing a background") and Jobs-As-Progress (the user trying to go from something being unsatisfactory to something better).

When UI feels dumbed down to that level, or hidden behind advanced settings, it’s often because the product team ends up treating users as a gestalt persona, rather than thinking about their constraints around time and attention. The most meaningful innovations occur when customer insights influence development before launch; sadly, that frequently doesn't happen. People launch the thing, come up with features they could add, ask what people want from that list (and potentially don't even do that) and then add stuff like barnacles accumulating on a ship.


People's actual measured experience and people's experience of the experience are rarely the same thing when they have prior knowledge of one option and low knowledge of the alternative. They prefer the thing they know, even when it's worse.

There's loads of this in the UX space. To oversimplify, people's brains use expected ideas about what things are like in order to interact with the world. We build models of what things are like, and then over-weight things that look like what we expect, treating them as things we understand.

So when people are presented with something which is visually appealing, we think it's easy to use, even when it isn't. And people will then default to blaming themselves, not the pretty, elegant thing, because clearly the pretty elegant thing isn't the issue.

We call this the aesthetic-usability effect. Perception of the expected experience, and attribution of the actual experience, matter more than the actual experience itself.

It's one of the many ways in which engineers, economists and analysts (in my experience) tend to run into issues. They want people to behave rationally, based on their KPIs, not as people actually experience and interact with the world.

There's all sorts of research that then comes off the back of this. One quick example: people enjoy wine they've been told is more expensive over wine they've been told is cheaper, and the physiological response, as measured with fMRI, confirms their reported divergence in experience, despite the wines being the same.

Low contextuality evaluations (my term for where you ask someone to state things about something where they lack enough experience with enough breadth and depth to answer reliably) are always wonky. People can't comment on wine, because they don't know enough about wine, so they seek other clues to tell them about what they're experiencing. Similarly, people don't know about things that are new to them (by default) or that look different to what they expect, so their experience is always reported as being worse than it probably actually is, because their brain doesn't like expending energy learning about something new. They'd rather something they understood. It's where contextualisation and mimicry come in really useful from a design of experience standpoint.


Not quite. The idea that corporate employees are fundamentally "not average" and therefore more prone to unethical behaviour than the general population relies on a dispositional explanation (it's about the person's character).

However, the vast majority of psychological research over the last 80 years heavily favours a situational explanation (it's about the environment/system). Everyone (in the field) got really interested in this after WW2, basically, trying to understand how the heck Nazi Germany happened.

TL;DR: research dismantled this idea decades ago.

The Milgram and Stanford Prison experiments are the most obvious examples. If you're not familiar:

Milgram showed that 65% of ordinary volunteers were willing to administer potentially lethal electric shocks to a stranger because an authority figure in a lab coat told them to. In the Stanford Prison experiment, Zimbardo took healthy, average college students and assigned them roles as guards and prisoners. Within days, the roles and systems set in place overrode individual personality.

The other relevant bit would be Asch's conformity experiments; to wit, that people will deny the evidence of their own eyes (e.g., the length of a line) to fit in with a group.

In a corporate setting, if the group norm is to prioritise KPIs over ethics, the average human will conform to that norm to avoid social friction or losing their job, or other realistic perceived fears.

Bazerman and Tenbrunsel's research is relevant too. Broadly, people like to think that we are rational moral agents, but it's more accurate to say that we are boundedly ethical. There's this idea of ethical fading that happens: when you introduce a goal, people's ability to frame falls apart, including with a view to the ethical implications. This is also related to why people under pressure default to less creative approaches to problem solving. Our brains tunnel-vision on the goal, to the failure of everything else.

Regarding how all that relates to modern politics, I'll leave that up to your imagination.


I find this framing of corporates a bit unsatisfying because it doesn't address hierarchy. By your reckoning, the employees just follow the group norm over their own ethics. Sure, but those norms are handed down by the people in charge (and, with decent overlap, those that have been around longest and have shaped the work culture).

What type of person seeks to be in charge in the corporate world? YMMV but I tend to see the ones who value ethics (e.g. their employees' wellbeing) over results and KPIs tend to burn out, or decide management isn't for them, or avoid seeking out positions of power.


Responded on this line of thinking a bit further down, so I'll be brief on this. Yes, there's selection bias in organisations as you go up the ladder of power and influence, which selects for various traits (psychopathy being an obvious one).

That being said, there's a side view on this from interactionism that it's not just the traits of the person's modes of behaviour, but their belief in the goal, and their view of the framing of it, which also feeds into this. Research on cult behaviours has a lot of overlap with that.

The culture and the environment, what the mission is seen as, how contextually broad that is and so on all get in to that.

I do a workshop on KPI setting which has overlap here too. In short: choose mutually conflicting KPIs which narrow the state space for success, such that attempting to cheat one causes another to fail. Ideally, you want goals for an organisation that push for high levels of upside, with limited downside, and counteracting merits, such that only by meeting all of them do you get to where you want to be. Otherwise it's like drawing a line on a piece of paper, asking someone to place a dot on one side of the line, and being upset that they didn't put it where you wanted it. More lines narrow the field to just the areas where you're prepared to accept success.
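The "mutually conflicting KPIs" idea can be sketched as a toy model. Everything here is invented for illustration (the metric names and thresholds are assumptions, not from any real workshop): gaming one metric tends to trip another gate, so only balanced behaviour lands in the accepted region.

```python
# Toy sketch of mutually conflicting KPIs: success requires passing
# every gate at once, and the gates are chosen so that the obvious
# ways of gaming any single metric fail one of the others.
# All names and thresholds are hypothetical.

def passes_kpis(revenue_growth: float, churn_rate: float, support_csat: float) -> bool:
    """Return True only if all mutually constraining gates pass."""
    return (
        revenue_growth >= 0.10   # pushing growth via aggressive overselling...
        and churn_rate <= 0.05   # ...tends to raise churn, failing this gate
        and support_csat >= 4.0  # ...and cutting support cost fails this one
    )

# Gaming growth alone trips the churn gate:
print(passes_kpis(revenue_growth=0.25, churn_rate=0.12, support_csat=4.2))  # False
# Balanced performance sits inside the narrowed state space:
print(passes_kpis(revenue_growth=0.12, churn_rate=0.04, support_csat=4.3))  # True
```

Each extra gate is one of the "more lines" in the analogy above: it shrinks the region of behaviour the organisation is prepared to call success.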

That division can also then be used to narrow what you're willing to accept (for good or ill) of people in meeting those goals, but the challenge is that they tend to see meeting all the goals as the goal, not acting in a moral way, because the goals become the target, and decontextualise the importance of everything else.

TL;DR: value setting for positive behaviour and corporate performance is hard.

EDIT: actually this wasn't that short as an answer really. Sorry for that.


> That division can also then be used to narrow what you're willing to accept (for good or ill) of people in meeting those goals, but the challenge is that they tend to see meeting all the goals as the goal, not acting in a moral way, because the goals become the target, and decontextualise the importance of everything else.

I would imagine that your "more lines" approach does manage to select for those who meet targets for the right reasons over those who decontextualise everything and "just" meet the targets? The people in the latter camp would be inclined to (try to) move goalposts once they've established themselves - made harder by having the conflicting success criteria with the narrow runway to success.

In other words, good ideas and thanks for the reply (length is no problem!). I do however think that this is all idealised and not happening enough in the real world - much agreed re: psychopathy etc.

If you wouldn't mind running some training courses in a few key megacorporations, that might make a really big difference to the world!


You're not wrong strictly speaking - the challenge comes in getting KPIs for ethical and moral behaviour to be things that the company signs up for. Some are geared that way inherently (Patagonia is the cliché example), but most aren't.

People will always find other goalposts to move. The trick is making sure the KPIs you set define the goalposts you care about staying in place.

Side note: Jordan Peterson is pretty much an example of inventing goalposts to move. Everything he argues about is about setting a goalpost, and then inventing others to move around to avoid being pinned down. Motte-and-bailey fallacy happens with KPIs as much as it does with debates.


Idk where you're at, but it's been the complete opposite in my experience

My favourite part about the Milgram experiments is that he originally wanted to prove that obedience was a German trait, and that freedom loving Americans wouldn't obey, which he completely disproved. The results annoyed him so much that he repeated it dozens of times, getting roughly the same result.

The Stanford prison experiment has been debunked many times: https://pubmed.ncbi.nlm.nih.gov/31380664/

- guards received instructions to be cruel from experimenters

- guards were not told they were subjects while prisoners were

- participants were not immersed in the simulation

- experimenters lied about reports from subjects.

Basically it is bad science and we can't conclude anything from it. I wouldn't rule out the possibility that top fortune-500 management have personality traits that make them more likely to engage in unethical behaviour, if only by selection through promotion by crushing others.


- participants all self-selected into the study

They put an ad in a newspaper in San Francisco and then selected for apparent neurotypicality:

ZPE: https://en.wikipedia.org/wiki/Stanford_prison_experiment :

> Participants were recruited from the local community through an advertisement in the newspapers offering $15 per day ($119.41 in 2025) to male students who wanted to participate in a "psychological study of prison life".

Here's that newspaper ad: https://exhibits.stanford.edu/spe/catalog/cj859hr0956 :

> Steady P-Time Job

> [...]

> Male college students needed for psychological study of prison life. $15/day for 1-2 weeks beg Aug. 14. For further information & applications, come to Room 248, Jordan Hall, Stanford U.


It's instructive though, despite the flaws, and at this point has been replicated enough in different ways that we know it's got some basis in reality. There's a whole bunch of constructivist research around interactionism, that shows that whilst it's not just the person's default ways of behaving or just the situation that matters, the situational context definitely influences what people are likely to do in any given scenario.

Reicher & Haslam's research around engaged followership gives a pretty good insight into why Zimbardo got the results he did, because he wasn't just observing what went on. That gets into all sorts of things around good study design, constructivist vs positivist analysis etc, but that's a whole different thing.

I suspect, particularly with regards to different levels, there's an element of selection bias going on (if for no other reason that what we see in terms of levels of psychopathy in higher levels of management), but I'd guess (and it's a guess), that culture convincing people that achieving the KPI is the moral good is more of a factor.

That gets into a whole separate thing around what happens in more cultlike corporations and the dynamics with the VC world (WeWork is an obvious example) as to why organisations can end up with workforces which will do things of questionable purpose, because the organisation has a visible, fearless leader who has to be pleased/obeyed (Musk, Jobs etc), or, more insidiously, a valuable goal that must be pursued regardless of cost (weaponised effective altruism, sort of).

That then gets into a whole thing about what happens with something like the UK civil service, where you're asked to implement things and obviously you can't care about the politics, because you'll serve lots of governments that believe lots of different things, and you can't just quit and get rehired every time a party you disagree with personally gets into power, but again, that diverges into other things.

At the risk of narrative fallacy - https://www.youtube.com/watch?v=wKDdLWAdcbM


> The Milgram and Stanford Prison experiments are the most obvious examples.

BOTH are now considered bad science. BOTH are now used as examples of "how not to do the science".

> The idea that corporate employees are fundamentally "not average" and therefore more prone to unethical behaviour than the general population relies on a dispositional explanation (it's about the person's character).

I did not say nor imply that. Corporate employees in general and the Fortune 500 are not the same thing. Corporate employees, as in cooks, cleaners, bureaucracy, testers and whoever, are the general population.

Whether a company ends up in the Fortune 500 is not influenced by general corporate employees. It is influenced by higher management, a separate social class. Who gets in is very much selected.

And second, companies compete against each other. A company run by ethical management is less likely to reach the Fortune 500. Not doing unethical things is a disadvantage in current business. It could have been different if there were law enforcement for rich people and companies, and if there were political willingness to regulate them. None of that exists.

Third, look at the issues around Epstein. It is not that everyone was cool with his misogyny, sexism and abuse. The people who were not cool with it saw red flags long before underage kids entered the room. Those people did not associate with Epstein. The people who did associate with him were rewarded with additional money and success, but they were also much more unethical than the guy who said "this feels bad" and walked away.


Not sure where you get that for Milgram. That's been replicated lots of times, in different countries, with different compositions of people, and found to be broadly replicable. Burger in '09, Sheridan & King in '72, Dolinski and co in '17, Caspar in '16, Haslam & Reicher which I referenced somewhere else in the thread...

To expand on this a little for those interested, time has properties space doesn't. For example, you can turn left to swap your forward direction for sideways in space. You cannot turn though, in a way that swaps your forward (as it were) direction in space for a backward direction in time.

Equally, cause always precedes effect. If time were exactly like space, you could bypass a cause to get to an effect, which would break the fundamental laws of physics as we know them.

There's obviously a lot more, but that's a couple of examples to hopefully help someone.



I had always thought that the fundamental forces were largely the same regardless of whether time was reversed or not.


Not really. Even the electric force is not purely time symmetric - you have to flip the sign of the charge if you want to flip the direction between forwards vs backward in time.

Even worse, the weak force breaks another symmetry as well, parity symmetry (which basically means that moving backward in time, weak force particles "look" like their mirror image, instead of looking the same).
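As a hedged sketch of the first point (a standard textbook illustration, not something stated in the comment above): under time reversal velocities flip, and the Lorentz force law keeps its form only if the magnetic field also changes sign.

```latex
% Under time reversal T: t \to -t, hence \mathbf{v} \to -\mathbf{v}.
% Standard field transformations: \mathbf{E} \to \mathbf{E}, \quad \mathbf{B} \to -\mathbf{B}.
\mathbf{F} = q\,(\mathbf{E} + \mathbf{v}\times\mathbf{B})
\;\xrightarrow{\,T\,}\;
q\,(\mathbf{E} + (-\mathbf{v})\times(-\mathbf{B}))
= q\,(\mathbf{E} + \mathbf{v}\times\mathbf{B})
```

Reading "flip the charge" instead of "flip B" is the Feynman-Stueckelberg picture, where a backward-in-time solution is reinterpreted as a forward-in-time antiparticle; the weak interaction is where these discrete symmetries genuinely fail (parity maximally, CP slightly).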


Theoretically this holds true, but in practice it never happens.

Why is a major question, but any understanding of our universe must assume this fact.


How do you test that behavior if you can't make time go backwards?


None of it is them, all of it is LLMs.


Blindsight by Peter Watts

Pushing Ice by Alastair Reynolds

Startide Rising by David Brin

A Fire Upon the Deep by Vernor Vinge

The Algebraist by Iain M. Banks

Semiosis by Sue Burke

Project Hail Mary by Andy Weir

Dragon's Egg by Robert L. Forward

Solaris by Stanisław Lem

The Mountain in the Sea by Ray Nayler

The Long Way to a Small, Angry Planet by Becky Chambers

Translation State by Ann Leckie

Pandora's Star by Peter F. Hamilton

The Mote in God's Eye by Larry Niven & Jerry Pournelle

House of Suns by Alastair Reynolds

This is what sprang to mind as answering that sort of brief off the top of my head.


I loved Blindsight, Pandora's star and Solaris. Might check out the rest on your list.


That's a stellar observation right there.


To be clear, this is a pun on A*

