In this example, one could use the wrong question as a supplementary question or as input to the algorithm, in support of the correct question: “What do readers want to read?” But ultimately this formula falls short as well because the answer is based on the question instead of the destination. How so? Well, consider what happens if I pose this same query and the answer is that readers don’t want to read any of the tech- or science-related topics I write about — they want to read about the newest reality show instead. That’s simply an output — an answer — I can’t use, and neither can any science or tech publication. Magazines that focus on TV entertainment would give this answer a collective shrug, as in, “Tell us something we didn’t already know.” It’s just not that useful an insight given the work and cost involved, for either genre.
Examples of the right questions for this scenario might include:
Which descriptive words appear most frequently across topics in the most-read articles over the past year, and how do they correlate with the number of likes and shares on this publication’s articles across social media? (What I’m looking for here are reader triggers and themes of recurring interest.)
What are the top ten shared memes or social media post issues in my audience demographic and how do they correlate with current or breaking science or tech news? (What I’m looking for here are emerging or sustained interests that I can tap into as popular culture or high-interest angles for articles.)
How much did writer style and word choices vary among the top-performing articles (in terms of eyeballs, clicks, or social media shares and likes), and where are the commonalities? (I’m looking for the kinds of storytelling readers prefer so I can change writer guidelines to improve readability of articles across the board.)
What is the impact of SEO keywords on article readership? (Here I’m looking to see if incorporating SEO keywords in the text and headline actually helped or hurt readership and to what extent, so I can adjust how stories are written accordingly. See the rough sketch after this list for one way this question might be turned into a quick analysis.)
What is the overall pattern across all top-performing articles over the past six months? (Here I’m looking to see what bells and whistles readers may be responding to, even if subconsciously.)
What are my competitors’ top-performing articles according to readership numbers and social media shares, and what are their commonalities? (Here I’m looking to see whether my reader patterns match my competitors’ and where they diverge, so I can consider topic options based on patterns my publication may not have previously considered.)
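To make the SEO-keyword question a bit more concrete, here’s a minimal sketch of how it might be translated into a first-pass analysis. It assumes a hypothetical spreadsheet of article performance data; the file name and column names (pageviews, shares, seo_keywords_in_headline, seo_keyword_count) are placeholders for whatever your publication actually tracks, and the numbers it prints are a starting signal, not proof that keywords helped or hurt.

```python
# A minimal sketch, not a prescription: one way the SEO-keyword question
# above might become a quick first-pass analysis. The file name and the
# column names are hypothetical stand-ins for your own data.
import pandas as pd

articles = pd.read_csv("article_performance.csv")

# Did articles with SEO keywords in the headline draw more readers?
median_views = articles.groupby("seo_keywords_in_headline")["pageviews"].median()
print("Median pageviews, with vs. without SEO keywords in the headline:")
print(median_views)

# And to what extent? A simple correlation between how many SEO keywords
# appear in the body text and how widely the article was read and shared.
views_corr = articles["seo_keyword_count"].corr(articles["pageviews"])
shares_corr = articles["seo_keyword_count"].corr(articles["shares"])
print("Keyword count vs. pageviews correlation:", round(views_corr, 2))
print("Keyword count vs. social shares correlation:", round(shares_corr, 2))
```

Notice that the question, and the destination behind it (adjusting how stories are written), dictates what the sketch looks for, not the other way around.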
Now it’s your turn. What do you think the right questions would be for a publication to increase its readership by taking the lead instead of following the crowd?
In decision intelligence, you decide first where you want to go or what you want to achieve and then figure out which tools, queries, data, and other resources you need to get there. Think of it as marking a destination and mapping the course to get there before you take the trip or take an action. In other words, decision intelligence asks you to regroup your decisioning processes so that they focus on specific goals — rather than formulate queries that may prove of little business consequence.
The problem doesn’t lie in the math or the data queries. Rather, organizations have a problem because they lack a clear definition of the desired business outcome, resulting in a lack of direction at the outset of the decision-making process.
Let the business outcome you seek define the queries you ask of data to ensure that your decisions lead you to where you meant to be.
Why data scientists and statisticians often make bad question-makers
Not so long ago, data scientist was the hottest job on the market. Everyone was in pursuit of these data gurus to unleash the value of data and help drive companies forward. And data scientists did deliver what was asked of them. Unfortunately, many of their projects still failed because what they delivered wasn’t a match for expectations, although it usually was exactly as ordered. Organizations were and are notorious for not having a business plan in place for these initiatives from the start, and for not being precise in what they are asking data scientists to do.
In short, typically the data scientists didn’t fail. Ill-defined expectations and the lack of business planning rendered their work moot. But that’s not to say that data scientists’ work is always perfect either.
At first, data scientists had free rein, for no one else in the business could quite wrap their minds around this big data tsunami. They experimented with new big data tools to explore possibilities and to educate their businesses on how useful data analytics can be. Then they included projects to answer their business analysts’ and business users’ most frequently asked questions. They built dashboards and visualizations, automated them, scheduled regular releases of updated insights, and eventually advocated for self-service business intelligence solutions to provide some user autonomy (within carefully structured limits, of course).
But the further this work progressed, the larger the gap typically became between the data scientists/data analysts crowd and the business managers/business executives crowd. That happens when data scientists have too little an understanding of the business and when business leaders have too little an understanding of data science.
As the data analytics industry has matured, businesses are finding that they have little appetite or budget for data projects that fall short of producing business value. The definition of a data-driven company has also changed — now it means that data has moved out of the driver’s seat and is riding shotgun. Data is an augmenter rather than a usurper.
By and large, data scientists are builders, and statisticians are data assemblers and interpreters. Data scientists and statisticians may still be building, assembling, and interpreting, but the problem is that almost everyone now has access to plenty of data tools — visualization tools and templates, model stores, sharable algorithms, specialized automation tools, AI in a box, and so on — to do those things in a more decentralized way. In addition, many of the queries data scientists and statisticians would come up with to ask of data now come prepackaged in modern, self-service business intelligence (BI) tools, complete with AI-generated narratives in case the user has trouble interpreting the visualization correctly.
If you’re in one of these professions, no worries. There’s still plenty of work for data scientists and statisticians to do. But it does mean that the demand for new kinds of talent is rising. To borrow from Cassie Kozyrkov, Google’s chief decision scientist, if you were to think of data scientists as microwave builders, you’d realize that the world no longer needs any more microwaves — what it needs now are better microwave chefs.
In general, data scientists are tool and model builders, while statisticians are data wranglers and interpreters. Neither is a business decision maker. That’s not a slam on either profession but rather a clear delineation of job roles. It’s not entirely fair to blame either profession for failed projects if there was never a business plan to use their work anyway.
It’s time to focus on the science as well as the art of making decisions. Decision intelligence is about leveraging both hard and soft skills.
Identifying Patterns and Missing the Big Picture
Data analytics, especially those powered by AI, are incredibly good at detecting patterns in data. They can not only find patterns in megasized