Hacker News

Somebody asks for a feature, you build it, they pay you — that’s data.

Data isn’t just spying on your users to see how they behave.



A big part of product management's function is buffering all the feature requests so that they can step back and design a solution that isn't just 100 new features. The feature-factory process is mostly a symptom of product being bad at its job.


Sure, but my point was that this article seems to be focused on data-driven product decision-making. Creating good, far-reaching designs is, in my view, a separate matter. I wanted to counter the narrative that unless you are spying on your users and using that data to 'prove' your features are successful, you are doing a bad job at product development.

It's certainly possible in my mind to do requirements gathering, feature design, implementation, deployment, and iteration without automated data collection from user behavior. You can, you know, talk to people. Nowadays when someone says "data-driven" they typically mean "instrument the hell out of your product and observe user behavior." Perhaps that's not what the author is getting at. But if they are, I think it's important to tell people to not feel guilty for not spying on their users. If you are doing the follow-up work to actually communicate with customers to understand their needs, you should feel confident that you're doing your job well. And you should be proud that you are doing so without having to spy on people.


Totally agree. It's also possible to have built a feature that users have to use but aren't happy with; in that case the data can be misleading.


Sure it is. "Spying" is just a deliberately loaded term to make it seem bad. What would you think of an office manager who set things up only according to requests, never trying to observe and anticipate employee needs?


What would you call it? Your analogy fails on several levels — is there a more direct counter argument you can make?


If you care about your user experience, you want as much data as you can get about what exactly your users are doing. This generalizes across all companies in all industries, and I would call it "data collection". Saying "we satisfied the requirements so it succeeded" is pretty universally recognized as a cop-out, which you resort to when you don't really care or have no other choice.

There exist strategies that step over the line to spying (you shouldn't, like, put tracking devices on your users), but the article doesn't suggest that you should do those.


Yours is the consensus view; I'm contrarian on it. Appealing to its being "universally recognized" as an argument for its validity is a logical fallacy (an appeal to popularity).

The prevailing methodology of recent years has been to instrument everything and then analyze that data. I'm not ignorant of it; I've done this myself. It's an effective way of not fooling yourself. However, the downsides of such data collection have mostly been ignored when deciding what to instrument, how the data is stored, and what is done with it. This is an industry-scale oversight, and it is going to change rapidly, especially in light of new regulations.

Beyond the fact that such data is a liability (and usually can be de-anonymized) and that the legal requirements of holding it are burdensome, organizations need to ask whether their data collection policy is ethical. Your "universal recognition" is, in my mind, more of a universal blind spot. Rather than saying that the lack of a data collection policy is a cop-out born of laziness or indifference, I'd argue that falling back on collecting data shows a similar kind of laziness: you've failed to find a better way of understanding how well your product is performing, one that doesn't require violating the privacy of your customers. (Even if you aren't violating it per se, you're only one data breach away from someone else doing so, and that would be entirely your fault.)

The correct methodology is to try to solve these problems with minimal data collection, ideally none. It also means that sometimes you will not collect data, knowingly risking missed insights, because the trade-off doesn't make sense.

That doesn't mean you never do it, but it does mean that when you do, you should recognize it as a liability: a deep trade-off with many negatives, one worth making only when the benefits outweigh them. If it's not actionable, don't collect it. If you don't need it anymore, permanently delete it.
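To make the "collect minimally, delete on schedule" idea concrete, here's a hypothetical sketch (not anything the article proposes) of analytics that stores only aggregate daily counts per event, never user identifiers, and permanently deletes counts past a retention window:

```python
from collections import Counter
from datetime import datetime, timedelta, timezone

# Hypothetical example: aggregate-only analytics with a retention window.
# No user identifiers are ever stored -- only daily counts per event name.

RETENTION_DAYS = 30  # illustrative retention policy, not a recommendation

class MinimalAnalytics:
    def __init__(self):
        # Maps a calendar date to a Counter of {event_name: count}.
        self._daily_counts = {}

    def record(self, event_name, when=None):
        """Increment an aggregate counter; nothing user-specific is kept."""
        day = (when or datetime.now(timezone.utc)).date()
        self._daily_counts.setdefault(day, Counter())[event_name] += 1

    def prune(self, today=None):
        """Permanently delete counts older than the retention window."""
        today = today or datetime.now(timezone.utc).date()
        cutoff = today - timedelta(days=RETENTION_DAYS)
        self._daily_counts = {
            day: counts
            for day, counts in self._daily_counts.items()
            if day >= cutoff
        }

    def total(self, event_name):
        """Sum an event's count across all retained days."""
        return sum(counts[event_name] for counts in self._daily_counts.values())
```

The point of the sketch is the shape of the trade-off: you can still answer "is this feature being used?" while holding data that is useless to an attacker and trivially deletable.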

Most organizations don't see it that way, but I predict many will in the coming years.



