Cloud Storage and Cloud Computing
We have already described that data is being captured at a rate never before seen. Some say that today, companies like Amazon, Google, and others know what we need before we do. They capture data surrounding how we shop, what we buy, our online browsing patterns, our spending patterns, and the likely order of our transactions. One consequence of harnessing the enterprise utility of customer data is that data volumes have exploded over the last 10 to 15 years. In many cases, enterprises require data storage that far exceeds what can be accommodated with their own hardware in their own facilities. Further, the number of operations performed on the data has increased commensurately. However, with advances in connectivity, the availability of high-capacity networks, increased speed of information transmission, and advances in data security, companies may elect to upload their data to data centers outside of their organization, in the cloud, to be administered by cloud service providers (CSPs).
Some of the largest CSPs, such as Amazon Web Services, Microsoft Azure, IBM, Google Cloud Platform, Salesforce, Alibaba, and Oracle, offer not only storage but also computing, security, and enterprise software services. It is easy to see that there is a virtual ecosystem to be managed, including not only Big Data but also the hardware, specialty software, and analytical methods required to unlock its value. This ecosystem includes other ancillary components, including security and encryption, computer processing, and a host of tools and solutions to transform supply chains and to enhance customer experience. Companies face the decision of whether to continue to build and grow their own technological capabilities in-house, to subscribe to one or several purpose-built and ready-made cloud services available in the marketplace, or perhaps to consume services from several CSPs, which is an increasingly likely choice.
Since the advent of cloud computing, many companies no longer deem it necessary to purchase licenses and install software for dozens of required programs on every individual connected machine across an enterprise. Instead, every computer on a connected network can subscribe to and run software that is housed in the cloud to process data that is stored in the cloud, assisted by the expertise of CSPs. A user may only pull down fully processed information and outputs, as required for local consumption. It is important to introduce the cloud, given that some of the largest service providers are packaging up tools and expertise surrounding some of the subject technologies of this book – artificial intelligence, machine learning, and analytics, to name a few. Let's begin by providing an introductory overview of artificial intelligence.
Artificial Intelligence
Artificial intelligence (AI) is one of the broadest and most all-encompassing terms the reader will encounter in data analytics. It is the over-arching theory and science of developing computer systems and processes that can consider facts and variables to perform tasks that typically require human intelligence and the uniquely human capability of learning new things and applying them. Any number of sciences and disciplines contribute to AI, such as mathematics, computer science, psychology, and linguistics, among many others. One need only picture the ways that humans think, interact, and understand one another to perform daily tasks to see the breadth of fields, disciplines, and specialty branches of learning that must be brought to bear.
At one time, this term included virtually all of the individual technologies that will be introduced in this chapter, in one form or another. However, once the loosely organized science matures into a proven discipline, the emergent capability is carved out of the definition of AI and stands alone. Therefore, AI is by definition the nebulous and nondescript set of potential technologies that may ultimately emerge to emulate human thinking, capabilities, and interactions. Prior to building critical mass and emerging successfully as individual disciplines, optical character recognition (OCR), intelligent character recognition (ICR), speech recognition, compound decision making across combinations or sequences of variables, language translation, robotic process automation (RPA), neural networks, and machine learning all lived in the vague, blurred, and ambiguous land of potential to emulate human-like capabilities – artificial intelligence.
Blockchain and Distributed Ledger Technology
The next technology we will introduce in this chapter is distributed ledger technology (DLT), upon which blockchain is based. In order to transact digitally and with confidence, the ownership chain of assets of value must be trackable and auditable. If we think about all the transactions our companies engage in, one activity that often represents manual work and a break in straight-through processing (STP) is verifying transactions when questions arise after the fact. Think of the number of reconciliations performed across accounting, finance, and operations functions in business today. Often, reconciliations are aimed at comparing and agreeing things like transactions, assets, securities, and account balances to confirm the true state of a ledger. A reconciliation is essentially the comparison of two datasets to either confirm their agreement or to identify any exceptions or breaks. Once exceptions are identified, countless hours of investigation can follow, tracing the exceptions back to transactional source data to confirm which of the two datasets under comparison is correct, and to take the necessary resolution steps to correct the faulty dataset. What if this could be solved in a different way?
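The comparison just described can be sketched in a few lines of code. The following is a minimal, illustrative reconciliation of two transaction ledgers: records both sides agree on are matched, and anything present on only one side surfaces as an exception, or break. The ledger contents and field layout here are assumptions for illustration, not drawn from any particular accounting system.

```python
def reconcile(ledger_a, ledger_b):
    """Compare two sets of records; return (matched, only_in_a, only_in_b)."""
    set_a, set_b = set(ledger_a), set(ledger_b)
    matched = set_a & set_b        # records both sides agree on
    only_in_a = set_a - set_b      # breaks: present in A, missing from B
    only_in_b = set_b - set_a      # breaks: present in B, missing from A
    return matched, only_in_a, only_in_b

# Each record is a (transaction id, amount) pair -- hypothetical sample data.
bank_statement = [("T1", 100.00), ("T2", 250.50), ("T3", 75.25)]
general_ledger = [("T1", 100.00), ("T2", 250.50), ("T4", 40.00)]

matched, bank_breaks, ledger_breaks = reconcile(bank_statement, general_ledger)
print(sorted(matched))        # records in agreement
print(sorted(bank_breaks))    # exceptions on the bank side
print(sorted(ledger_breaks))  # exceptions on the books
```

Every break reported here is a candidate for exactly the kind of manual investigation the text describes: tracing back to source data to determine which side is correct.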
Distributed ledgers contain different types of shared data, such as transaction records, attributes of transactions, credentials, or other pieces of information worthy of retention and validation. Blockchain technology allows a network of computers to agree at regular intervals on the “true” state of a distributed ledger. On a blockchain, transactions are recorded chronologically, forming an immutable chain, and can be made private or public, depending on how the technology is implemented. The ledger is distributed across many participants in the network; it does not exist in only one place. Instead, copies exist and are simultaneously updated at every fully participating node in the ecosystem. Therefore, a blockchain emerges as a single validated source of truth. Suddenly, a decentralized network can achieve broad consensus about the state and authenticity of a block's contents. Each participant in the network can verify the true state of the ledger, contribute to maintaining its accuracy and authenticity, and subscribe to the resulting dataset as a golden source of truth. This technology can be used to transact at low cost, or to reduce reconciliation efforts and minimize the costs of resolution steps.
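The "immutable chain" idea can be made concrete with a small sketch. Each block below stores a cryptographic hash of its predecessor, so altering any earlier block invalidates every link that follows. This is a deliberately simplified illustration of the chaining mechanism only; a real blockchain adds consensus protocols, networking, and digital signatures, all of which are omitted here.

```python
import hashlib
import json

def block_hash(block):
    # Hash a block's contents deterministically (sorted keys for stability).
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, data):
    # Link each new block to its predecessor by storing the predecessor's hash.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "data": data, "prev_hash": prev})

def is_valid(chain):
    # Verify every block's stored prev_hash matches its actual predecessor.
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain = []
add_block(chain, "Alice pays Bob 10")
add_block(chain, "Bob pays Carol 4")
print(is_valid(chain))                    # True: links are intact
chain[0]["data"] = "Alice pays Bob 1000"  # tamper with recorded history
print(is_valid(chain))                    # False: tampering breaks the chain
```

Because each participant can rerun this kind of validation independently against its own copy of the ledger, the network as a whole can detect tampering and converge on a single validated source of truth.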
What if counterparties to transactions were both (or all) participants on the same distributed ledger? If they each (or all, respectively) agreed on the validity of ownership or asset movements, and each subscribed to the resulting golden source of truth, would there be a need for the vast numbers of after-the-fact reconciliations or the audits that are undertaken to resolve exceptions? Would there be an opportunity for exceptions to emerge at all? So goes the theoretical benefits case for distributed ledger technology to the accounting, finance, and operations functions in large organizations.
Use cases abound for distributed ledgers and blockchain. In accounting and finance functions, there are a number of opportunities to harness this technology to settle and reconcile transactions more efficiently than is done currently. Think of the processes undertaken by your own organizations to research the completeness of transactions, the accuracy of balances, or the true state of ownership. Consider the number of reconciliations and comparisons performed in your own office to get a sense of whether there are opportunities to gain efficiencies by consuming a single source of truth that has been validated through the consensus of participants. In logistics, benefits can accrue from leveraging an immutable audit trail of goods as they move through the economy – as supplies move to manufacturers, through the goods production process, as finished goods move out the door to distributors, and as they pass logistically through shipment and delivery to consumers. Other use cases arise in identity and authentication. It is clear that distributed ledger technology will change the way we do business in the future. This technology is in no way the focus of this book, but we want readers to be familiar with key disruptors that are shifting the landscape in the new digital age.
Robotic Process Automation
One