It is this last goal that the authors predict will prompt a surge in the adoption of data analytics tooling across medium-to-large enterprises over the next five years. Managers are looking to structure their spreadsheet-based processes in a more mature and robust way. By reducing the amount of unstructured processing performed manually in Excel, managers can stabilize and lock down spreadsheet-driven processes into more automated, repeatable, structured, and time-efficient processing steps. By minimizing time spent performing routinized processing steps, and by minimizing process variance through emulating system processing, the spreadsheet-based jobs of the past will evolve to shed the least value-added steps in the processing chain. While this book cannot but acknowledge many of these data analytics capabilities and technologies, its focus will be on one subset of the wide body of data analytics disciplines – self-service data analytics.
See Exhibit 1-1, which illustrates how productivity can be built through the reduction of time spent performing routinized processes. This can be achieved by enlisting self-service analytics capabilities to address many of the routine steps performed daily, which are highlighted in the list at the left of the diagram. Remember that it is realistic to assume that there will always be some measure of a manual processing tail that is expensive or even impossible to eliminate, but the idea is to move as far to the right along the continuum as possible. The end result is to recapture processing time spent on low value-added steps, freeing a greater proportion of the day for value creation.
EXHIBIT 1-1 Building Daily Productivity
Self-Service Data Analytics
The self-service data analytics toolset is an important and growing subset of the suite of data analytics tools emerging as a focal point of digital transformations across large companies. It is distinct from the other sets of tools in the analytics toolkit in important ways. Self-service tools are typically off-the-shelf vendor products with which individual operators, not technologists, can interact and configure directly, due to their ease of use. Process owners who have no prior technology background, and who may have never seen a piece of code, are well equipped to lay out a customized, automated process, armed only with their knowledge of the raw data and the processing steps they previously performed in spreadsheets. Intelligent source data parsing and drag-and-drop operations replace SQL and Visual Basic commands, enabling the most inexperienced, inexpert, if not maladroit and bungling of us to quickly roll up our sleeves, forge and test processing steps, and implement a processing workflow, all in an afternoon (“small” automation).
The benefits of self-service data analytics tools include a reduced dependency on core technology teams, over whose development queues individual users typically have little influence; an improved time-to-market, with less “wait” in the core technology backlog; and, importantly, the ability to realize the benefits of time-savings through rapid process automation. Removing technology from the critical path is an important end in itself, and this goal has led to a raft of self-service and user-configurable tools spanning processing and reporting. The trend of data democratization throughout the organization is one of the main drivers behind the growth of data analytics, as the operators sitting directly on top of business processes are best placed to unlock data value. Perhaps chiefly, work that was previously unstructured, risky, and manual-intensive is now captured in a tool, emulating a system-driven process. Laborious and time-consuming spreadsheet processing has been replaced with nearly instantaneous computer-driven processing, leading to time savings and efficiency. Of course, there are drawbacks to these tools as well. A significant portion of the pages ahead will be focused on assessing, managing, and mitigating the risks introduced by widespread proliferation of this tooling, through the prescription of a foundational governance framework.
More immediately in this chapter, we will discuss a day in the life of operators, highlighting that much of the work performed by operators and analysts is not, in fact, analysis, but rather low-value-added data staging, enrichment, and processing activities. We will also look at the processing landscape from the perspective of managers, who are increasingly under pressure to cut costs, produce more, and overall to do more with fewer hands. Then we will take a top-down strategic view from the perspective of executives who are motivated to uncover opportunities to drive efficiency across functions and silos, who share an interest in minimizing unstructured spreadsheet work across the plant, and who may be more directly accountable to internal auditors, external auditors, and regulators. They may also influence the approach, the course, and the speed of the organization's digital transformation. We will discuss the levers they can pull to increase control and to drive efficiency, and the decisions they can make to adapt the organization to expectations that routine processes must be structured and accelerated, that the focus of people resources must extend beyond low value-added mundane processing steps, and that higher-order pursuits such as unlocking data value and the enhancement of decision-making are of prime importance in the new age. Last, we will introduce one of the key topics of this book: the need to fill a noted governance gap, as data analytics builds saturate our respective organizations.
There are any number of relevant and overlapping frameworks that cover portions of IT governance and even portions of data analytics governance in the finance and accounting environment. However, no single framework exists that is fit for the universe of self-service data analytics builds. We will draw from mature system governance, model governance, data governance, process governance, SOX 404, COSO IC (internal control framework for the financial reporting process), COSO ERM, and COBIT 2019 (ISACA) frameworks, and even the AICPA's Statements on Auditing Standards – to sketch a foundational governance model that your organization can implement and build upon as necessary. This must be done early and determinedly, so it is in place and can play a formative role in safeguarding your organization, as it embarks on its inevitable digital journey.
Let's look at the environment from the perspective of the employee.
Employee/Analyst/Operator Perspective
Generically, these operators are analysts, though very often, actual analysis is only a sliver of their day, compared to the time spent on the raw processing steps they are expected to perform before generating output for evaluative analysis. Such processing steps likely include capturing information from a number of sources and enriching the data to assemble suitably rich datasets, before completing further processing and transformation steps to yield final outputs in the form of information and reports. It is really only at this point that the operator can embark on true analysis in earnest.
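The capture, enrich, and transform chain described above can be sketched in a few lines of code. The following is a minimal illustration, not drawn from the book, using hypothetical trade data and hypothetical field names: records are captured from two upstream sources, enriched by joining against reference data, and transformed into a summary output – the same sequence of steps an operator might otherwise perform manually in a spreadsheet.

```python
# Hypothetical sketch of the capture -> enrich -> transform chain that an
# operator might otherwise perform manually in spreadsheets.

# Step 1: capture raw records from two (hypothetical) upstream sources.
trades = [
    {"trade_id": 1, "desk": "FX", "notional": 1_000_000},
    {"trade_id": 2, "desk": "Rates", "notional": 250_000},
    {"trade_id": 3, "desk": "FX", "notional": 500_000},
]
desk_reference = {"FX": "Foreign Exchange", "Rates": "Interest Rates"}

# Step 2: enrich each trade with reference data to assemble a richer dataset.
enriched = [{**t, "desk_name": desk_reference[t["desk"]]} for t in trades]

# Step 3: transform -- aggregate notional by desk to yield the final output.
summary = {}
for row in enriched:
    summary[row["desk_name"]] = summary.get(row["desk_name"], 0) + row["notional"]

print(summary)  # aggregated notional per desk
```

In a self-service tool, each of these steps would instead be configured through drag-and-drop operations, but the underlying logic is the same.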
Such outputs are often validated against prior periods to attempt to identify any abnormalities or errors. There may be key ratios that are calculated, observed, and compared to get comfort that the output is correct. There may be other sanity checks and detective controls performed to ensure process effectiveness and the integrity of deliverables. We will refer to these broadly as analytical review procedures, and we will assume that these procedures are partially about quality control and error detection, but also partially about understanding the business better, so that value can be added as a true business partner. It is these latter analytical processes that lead to actualization – ensuring high-quality outputs, owning your numbers and outputs, and gaining insights into the business through analysis.
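Analytical review procedures of this kind are straightforward to express in code. The sketch below, using hypothetical figures and an illustrative tolerance threshold not taken from the book, compares the current period's output to the prior period and flags any line item whose movement exceeds the tolerance – a simple example of the detective controls described above.

```python
# Hypothetical period-over-period variance check, illustrating a simple
# analytical review / detective control on reported outputs.

prior = {"revenue": 120_000, "expenses": 80_000, "headcount_cost": 40_000}
current = {"revenue": 150_000, "expenses": 82_000, "headcount_cost": 61_000}

TOLERANCE = 0.25  # flag movements greater than 25% (illustrative threshold)

flags = []
for item, prior_value in prior.items():
    change = (current[item] - prior_value) / prior_value
    if abs(change) > TOLERANCE:
        # record the line item and its fractional period-over-period change
        flags.append((item, round(change, 3)))

print(flags)  # line items requiring investigation before sign-off
```

Flagged items would then be investigated and explained before the output is signed off, which is where genuine business understanding – the "true business partner" role – comes into play.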
If an organization is large enough to be layered, a pecking order emerges. More junior, if not entry-level, staff will be buried in the assembly of information and information processing. Over time, if they are