You can summarize things based on research such as focus groups, but that's less concrete than hearing about specific customer situations.
We dogfooded it, certainly, but Riak is designed for massive data sets that we simply didn't have. So while we tested it ourselves, we had to rely heavily on our customers to learn which use cases were working well and which weren't.
To help alleviate that problem, our developers were frequently engaged with the customer support team; I think it was fairly standard that new engineers would start out on customer support.
I really envied companies that could properly exercise their products internally.
I suspect that if engineers truly knew the competition, they would be motivated to build a better product, which would also take more effort and more time. So product management has to shield them from that knowledge.
As an engineer, I find this "organizational laziness" (doing just the bare minimum) very demotivating. Many good things have been built because people were irrational and pursued them for their own sake.
You'd have loved trying it pre-Web :-) I was a product manager in the late 80s/90s and I needed to do competitive analysis for pricing and feature prioritization.
One of our sources of information was an analyst firm that collected faxes of product briefs from us and everyone else, then charged us large sums of money to send back copies of the briefs for relevant products, because we couldn't ask for them directly.
There was another firm that shipped us their own datasheets, in a standard format, for the products in a space (computer systems); these were often considerably outdated or inaccurate.
I sometimes say that if I had to go back to those days, I'd quit within a week, as I simply wouldn't have the information to do my job.
I've only worked at one company that really went all-out on this. We bought samples of competitive products, did tear-downs with estimates of manufacturing cost, and used in-circuit emulators (ICE machines, where practical) to understand the underlying software to some extent. All in addition to studying usage, manuals, etc. This was the mid-1980s.
I think it was really useful, but then they also had a QA department that was a peer to engineering and practically as large.
Certainly there were computer magazine reviews of products, and I'm honestly somewhat split on the transition from "expert," theoretically unbiased reviews to a much more heavily crowdsourced set of reviewers. Not sure I'd call modern online crowdsourced reviews "magical."
Thanks for reading!