There are many types of measurement that fit under the “evaluating for iteration” heading but are outside the scope of this book to review in detail. These include desirability or usefulness research, which investigates whether a product appeals to users and helps address their needs; usability research, which focuses on whether people can accomplish tasks within the product; and A/B testing, which compares versions of a design to see which is more effective with users. Whatever research methods are in your toolkit, they can probably be paired with an outcomes logic map to continually improve your product.
Mind Your Research P’s and Q’s
How do you make sure that your research is done well and ethically? Generally speaking, academic research and product research follow two different but parallel lanes on the highway of review and oversight processes. You’ll want to be clear which of those lanes to occupy with the research you’re doing as part of your product development process. The checkpoints you’ll hit along the way will help ensure that your research is done ethically and correctly, and will reduce the chances that you lose users’ trust through a misstep.
I’m using the phrase “academic research” to describe any research done for the purpose of increasing scientific knowledge without direct product implications. Often, this is the type of investigation done by people at universities or research institutions, but sometimes companies will do it, too. Academic research often makes its way into the world through peer-reviewed journal publications that are theoretically available for anyone to read.4 Your outcomes research may fall into this category; your product iteration research probably won’t.
TIP ASK AND YE SHALL RECEIVE
If you’re interested in an academic research paper but can’t find the full text online, reach out to the author(s) directly. Twitter is a great way to do this, or you may be able to find their email addresses on the websites of the organizations where they work. Most researchers are happy to share their papers if asked.
Importantly, when a team starts the process of planning an academic-type study, they’ll have a group called an Institutional Review Board (IRB) look over their protocols and materials. Most research institutions, including universities, have their own IRBs that are free for affiliates to submit to. There are also independent IRBs, which charge small fees to review study proposals. The IRB’s purpose is to make sure that any people who participate in the study are treated ethically. IRBs pay attention to details like whether people are compensated fairly for their time in the study, whether participants receive the information they need to understand what’s being asked of them and make an informed decision to take part or not, and whether they get the information they need to ask questions later if they want to.
Here’s an example of something I’ve been asked to fix as part of an IRB review for a study. I was giving people feedback on a puzzle task they’d just finished. Half of the participants were told they’d kicked the puzzle’s ass, while the other half were told they’d just proven themselves to be the world’s worst puzzlers. The feedback had no relationship to their actual performance. The IRB pointed out that the people getting the negative feedback might be in a bad mood afterward. They asked me to include something in the study to make them feel better. Their proposed solution? After getting the fake feedback, everyone watched a video of puppies and kittens. Participants were confused by the abrupt segue, but delighted by the cuteness.
Product research, in contrast to academic research, is done specifically to improve a product or service. It is unusual for this type of research to be published somewhere a general audience could read it. Its audience is usually product teams or other organizational decision-makers who will use the information to make decisions about a product feature, roadmap, or investment.
Usually, there is no IRB oversight of a product research study. There may be an internal team that reviews the proposed study protocol to make sure that it meets the needs of the organization and follows ethical processes, but that doesn’t always happen.5 Even without a formal review process in place, it’s still a best practice for an internal team to consider potential pitfalls. Specifically, the team should ask whether users who take part in the research will be put at risk by participating. How will their privacy be handled? Will they experience anything that might upset them or make them feel taken advantage of?
If a product team decides to do research that they might also try to publish in a journal, the best practice is for them to go through both the academic and the product research paths prior to launching the study. Their internal teams will still do their review of the research protocol to make sure that it supports their goals, but an IRB will do an additional review with an eye toward ethical issues.
So, if you’re planning research that will add to the general body of knowledge beyond your specific product, have an IRB review your protocol before you begin collecting data. If your research is purely for product development purposes, you probably don’t need an IRB. If you’re thinking about publishing the results of your study in a journal, that’s a strong signal that you should be talking to an IRB.
The Upshot: Metrics Tell Your Story
Having a measurement plan is crucial to the success of your product. You want to tell a compelling story about why your product is great, and the data you gather with your measurement plan will help you tell that story with conviction. Metrics let you determine whether your product works, how much people like it, and what the most effective ways to improve it would be. If you have a B2B model for your product, metrics help you sell yourself to companies that want the positive outcomes you can provide for their people. And whether you’re distributing through a B2B or B2C model, success stories spark people’s interest in becoming your users.
Perhaps counterintuitively, the most effective metrics are planned at the very start of product development. Doing this ensures that you can build the right hooks into your product to collect the data you’ll need, and that you include the right content and features to achieve the results you want. A tool like an outcomes logic map can help you plot out all of the steps that will need to happen to make your product effective. It will guide you in effectiveness research to determine how your product works, as well as research to investigate what your next iteration should look like. And the upfront planning will help you work effectively with IRBs or other reviewing bodies to ensure that all of your research is done in a way that respects users and maintains their trust.
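To make the idea of measurement hooks more concrete, here is a minimal sketch in Python of what outcome-aligned event logging might look like. The log_event helper and the event names are hypothetical illustrations, not the API of any particular analytics tool; the point is that each logged event corresponds to a step in your outcomes logic map, so the data you collect can later tell your product’s story.

import json
import time

def log_event(user_id, event_name, properties=None):
    """Record one measurement event; a real product would send
    this to an analytics store rather than print it."""
    record = {
        "user_id": user_id,
        "event": event_name,
        "properties": properties or {},
        "timestamp": time.time(),
    }
    print(json.dumps(record))  # stand-in for a real analytics call

# Hypothetical events, each tied to a step in an outcomes logic map:
log_event("u123", "lesson_completed", {"lesson": "intro"})    # engagement step
log_event("u123", "behavior_logged", {"weeks_active": 4})     # behavior-change step
log_event("u123", "outcome_survey_submitted", {"score": 7})   # outcome step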
PERSPECTIVE Cynthia Castro Sweet and Pragmatic Scientific Rigor
In the digital health world, one company stands apart for the strength of its outcomes story: Omada Health. As of this writing, they’ve published 11 studies in peer-reviewed journals (that’s a lot), and have launched a large randomized controlled trial of their diabetes prevention product. Their work on creating an outcomes story has translated to business success, with Omada boasting one of the highest venture capital fundraising totals in digital health and a top-notch list of clients. A driving force behind Omada’s ongoing research program is Dr. Cynthia Castro Sweet. I was interested in talking to Cynthia to learn how Omada has become a leader in telling their story with data. What can other teams learn from Cynthia’s experience at Omada?
How can you leverage existing evidence?
There’s a big body of literature out there around diabetes prevention programs, but we need to show what our specific product does. You can draw a line from the original Diabetes Prevention Program (DPP) format to the way Omada has implemented it and show apples to apples. By producing our own evidence, we’re reassuring people that we’ve maintained the integrity of, and stayed faithful to, the essential elements that made that product or service work in its older, traditional format.
Well-designed, well-conducted scientific