and find value in only 10 percent of data, that's still an awful lot of data to parse and analyze. AI is also faster than people at, well, everything. That's an advantage that organizations want to maintain. Then there's the need to automate tasks so that work gets done faster, more efficiently, and without interrupting functions, features, or the customer experience.

      Where do all these factors come together in the decision intelligence effort? Well, that's what you and your organization have to figure out for yourselves. Certainly, there are guidelines on how to do it as well as tools at your disposal that I present in greater detail later in this book. However, remember that the first step in bringing humans more fully into decisioning roles is deciding on the particular blend of human and machine processes that's needed.

      Or, to put it another way — and in keeping with Kozyrkov’s earlier analogy — this is the part for the microwave chefs to work their magic in making the recipe. The job of the microwave builders is largely finished. You know you have the right recipe when you see the proof in the pudding, so to speak.

      Cryptic Patterns and Wild Guesses

      IN THIS CHAPTER

Seeing why data analytics are better assistants than usurpers

Leveraging humans and machines to achieve better business value

Recognizing that pattern detection can miss the big picture

      Yahoo! put the first Hadoop cluster — arguably, the first truly successful distributed computing environment designed specifically for storing and analyzing huge amounts of unstructured data — into production back in 2006. It’s that date which, for most practical purposes, marks the onset of the big data gold rush and the hunt to discover unknown information buried in known data sets.

      The results were largely perceived to be worth the effort and generally enlightening — even though most big data initiatives fail to this very day. Even so, the call for data-driven businesses, to the chagrin of business leaders and managers everywhere, became the mantra in business and investment circles worldwide. Organizations were soon convinced that using data analytics meant the same thing as harvesting answers. The thinking was that the answers generated were perfect right out of the box and were produced by means far beyond mere human abilities. Gut instinct and human talent were summarily discounted and dismissed as little more than wild guesses. The reality, however, was and is quite different: analytics have limits, big data and AI projects have high failure rates, and business executives very often let their gut instincts override algorithm outcomes.

      Fear of AI began to soar as people expected machine masters to leap from science fiction and rule the real world. But that’s a far cry from what has happened so far.

      It turns out that machines aren’t the new masters of the human race, after all. And they don’t provide the final answers humans seek. But that’s more the fault of humans than the machines. People were so busy asking questions of the data that they forgot to look where the work was headed. Organizations often found themselves working in circles or solving problems that yielded no tangible benefits for the questioner.

What organizations really seek is not so much an answer as a path to a specific destination. In this chapter, you find out why that distinction matters and how it changes the way you make decisions.

      People commonly believe that machines are unbiased and more perfect than humans. Data analytics, automation, and machine learning (referred to as AI by marketers everywhere) are often presented as though the machines are capable of sorting out the data and reaching a perfect and fair conclusion on their own.

      This simply isn’t true. It’s imperfect humans — not perfect machines — who make the technologies, set the rules, design the models, and select the training data. That means subconscious or intentional human influence can seep into every step: the rules, the programming and models, and the data selection. In short, the creation mirrors the creator. Machines are influenced by the humans who build them and therefore frequently make many of the same mistakes humans make. Examples are numerous and varied. They include institutional biases, such as the infamous and continued use of redlining, a discriminatory practice in bank lending and other financial services that draws a figurative red line around minority neighborhoods so that those residents either can’t get loans approved or can’t get them on fair terms.

      Such biases in computing are insidious and not entirely new. For example, a computer algorithm used in 1988 to select medical school applicants for admission interviews discriminated against women and against students with non-European names. Similarly, Amazon scrapped a recruiting algorithm in 2018 after it proved to be biased against women.

      AI can ease such problems or make them worse. In any case, reverting to human-only decision making obviously isn’t the answer.

Whether you use traditional data analytics, decision intelligence, or a combination of the two, you need to guard against accidental or intentional biases, errors, and reasoning flaws. Here are a few important steps you can take to help ensure fairness in machine decision-making:

       Be proactive: Use AI specifically to seek and measure discrimination and other known decision flaws throughout an entire decision-making process. Yes, this is using AI to make other AI and humans transparent and accountable.

       Recognize the problem: Use algorithms to identify the patterns, triggers, and pointers in actions, language, or rules that lead toward discrimination, and then set safeguards against those discriminatory precursors in both machine and human behaviors.

       Check the outcomes: AI operates in a sort of black box, where humans can’t quite see what it’s doing. And AI can’t yet explain itself or its actions, either. But that’s okay — you can still check and grade its homework. For example, when checking for fairness in data-based or automated recruitment and hiring, look to see whether the outputs meet current legal standards such as the 80 percent rule — the rule stating that companies should be hiring protected groups at a rate that’s at least 80 percent of that of white men (see the sketch after this list). Software developers should also perform disparate impact analyses — testing to see whether neutral-appearing functions have an adverse effect on a legally protected class — before any algorithm is used by anyone. If your software comes from a third party, ask to see the results of that analysis and a detailed explanation of how the product works.

       Do the math: Statistical analysis has been around for a long time. You can perform an old-fashioned, routine statistical test to reveal disparities arising from unintentional biases based on gender, race, religion, and other factors (the sketch after this list shows one such test). Be sure, however, to automate the math rather than do it manually, because an automated process scales better, speeds up results, and is likely more accurate.
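
To make the last two items concrete, here's a minimal sketch in Python of what an automated fairness check might look like. It assumes hypothetical hiring counts and the scipy library for the statistical test; the numbers, group labels, and helper functions are illustrative assumptions, not the book's method or real audit data.

# A minimal sketch of the 80 percent (four-fifths) rule check plus a
# routine statistical test. All counts and labels below are hypothetical.
from scipy.stats import chi2_contingency

def selection_rate(hired, applied):
    """Fraction of a group's applicants who were hired."""
    return hired / applied

# Hypothetical outcomes from an automated screening tool
reference = {"applied": 400, "hired": 100}   # highest-selection-rate group
protected = {"applied": 300, "hired": 45}    # legally protected group

ref_rate = selection_rate(reference["hired"], reference["applied"])    # 0.25
prot_rate = selection_rate(protected["hired"], protected["applied"])   # 0.15

# 80 percent rule: the protected group's selection rate should be at
# least 80 percent of the reference group's selection rate.
impact_ratio = prot_rate / ref_rate
print(f"Adverse impact ratio: {impact_ratio:.2f}")          # 0.60
print("Passes the 80 percent rule:", impact_ratio >= 0.8)   # False

# Old-fashioned statistical check: a chi-squared test of independence
# on the hired/not-hired counts. A small p-value means the disparity is
# unlikely to be explained by chance alone.
table = [
    [reference["hired"], reference["applied"] - reference["hired"]],
    [protected["hired"], protected["applied"] - protected["hired"]],
]
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"Chi-squared p-value: {p_value:.4f}")

In a real pipeline, you'd run a check like this automatically on every batch of decisions and flag a human reviewer whenever the ratio dips below 0.8 or the p-value falls below your chosen threshold.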

Be sure to compare your outcomes with the reality of the environment. Context is everything. For example, a low number of female members in the Boy Scouts of America is not indicative of a bias against females but is instead a reflection of membership rules that historically limited the organization to boys.