This closed‐off system poses a problem for AI cybersecurity. Gideon explains that for AI systems to function as designed, they need access to steady streams of data [5]. Returning to the letter analogy, bringing AI into the mix means the AI algorithms would need a special way to get inside the box, read the letter, and extract information from it. If someone were to break in and learn about these vulnerabilities, they could exploit them by altering the data or causing the algorithm to behave in unintended ways.
Whenever an innovation in tech disrupts the status quo, a similar pattern follows. Moisejevs explains that after the PC, Mac, and smartphone were introduced, usage rose rapidly as each became more popular and more uses were found for it; shortly afterward, however, malware for these systems grew at a similar rate [6]. ML and AI are transitioning from their infancy stage to their growth phase, which means that over the next few years we will continue to see more applications of AI in healthcare products, both in sheer numbers and in the depth of the role they play. However, as AI deployments increase, so too will malware, and in a healthcare setting this could have devastating effects on patients.
When these concerns about malware are applied to healthcare, it is best to view them in two categories, mirroring the division within AI itself. There is the digital side, which deals with data, patterns, and ML, and there is the physical side. On the digital side, the primary concern is the protection of data, because every decision made via AI stems from having reliable data. For example, many HCPs use AI to help diagnose patients. If a patient were to come in and have various scans and tests performed, unreliable data could cause the patient to be misdiagnosed, or possibly not diagnosed at all. Another example comes from the EMR. If a patient takes a chronic medication and the underlying data is corrupted, the algorithm might misinterpret the refill pattern and conclude that the patient recently picked up their medication from the pharmacy when in fact they are due for a refill. If this happens, it may cause problems for the patient, because insurance will not pay for another refill when, according to the EMR, the patient still has plenty of medication.
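The refill scenario above can be made concrete with a small sketch. The code below is a minimal, hypothetical illustration (the record fields, digest scheme, and 30-day supply are all assumptions, not a real EMR design): a trusted digest is stored alongside each record, and the refill-eligibility check refuses to act on a record whose digest no longer matches, so tampered fill dates are flagged instead of silently driving a wrong decision.

```python
import hashlib
import json
from datetime import date


def record_digest(record: dict) -> str:
    """Digest over a canonical serialization of the record, captured at write time."""
    canonical = json.dumps(record, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()


def refill_due(record: dict, stored_digest: str, today: date,
               days_supply: int = 30) -> bool:
    """Return True if the patient is due for a refill.

    Raises ValueError if the record fails its integrity check, so corrupted
    or tampered data is surfaced rather than misread as a recent fill.
    """
    if record_digest(record) != stored_digest:
        raise ValueError("EMR record failed integrity check; fill dates untrusted")
    last_fill = date.fromisoformat(record["last_fill_date"])
    return (today - last_fill).days >= days_supply


# Trusted record and its digest, captured when the record was written.
record = {"patient_id": "demo-001", "drug": "example-drug",
          "last_fill_date": "2024-01-01"}
digest = record_digest(record)

print(refill_due(record, digest, date(2024, 2, 15)))  # 45 days elapsed -> True

# An attacker alters the fill date; the digest no longer matches.
record["last_fill_date"] = "2024-02-10"
try:
    refill_due(record, digest, date(2024, 2, 15))
except ValueError as err:
    print("rejected:", err)
```

A plain digest only detects tampering if the attacker cannot also rewrite the stored digest; a production system would use a keyed MAC or signature for that reason, but the sketch captures the core point: decisions should be gated on data integrity, not taken on faith.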
There are just as many problems for physical AI in healthcare. Earlier, we discussed surgical robots operating autonomously. Depending on the procedure being performed, a compromise could have immediate, catastrophic consequences for the patient. The same applies to carebots. Imagine that a carebot responsible for monitoring a patient's well-being is hit with ransomware: the carebot is suddenly held hostage, and unless someone pays the demanded sum, it will cease to function.
The key takeaway is that cybersecurity is a critical aspect of this growing AI field. Unsecured AI puts users, patients, and HCPs at unnecessarily high risk. In healthcare, the path from a cyber threat to lives being at stake is shorter than in almost any other field. While other fields may experience this as well, healthcare, by its nature, will experience it more often and more immediately. It is imperative that as new designs and innovations emerge in this field, they are built with cybersecurity from day one rather than having it added on later.
1.6 Future of AI and Healthcare
So, what is next? Based on Moisejevs's graphs, we can predict that AI will continue to grow exponentially over the next several years [6]. However, the path it takes will be determined by the innovators behind the technology. The US government has long considered America to be at the forefront of innovation, driving technological advances [4]. Yet other countries are working hard and in some areas are surpassing American innovation. To keep American innovation competitive, the US government has laid out a framework that it believes is necessary for the future.
In 2019, the National AI R&D Strategy received an update that did not exist when it was first published in 2016. This update pertained to partnerships between the US federal government and outside sources. These partnerships fall into four main categories: individual project‐based collaborations; joint programs to advance open, precompetitive, fundamental research; collaborations to deploy and enhance research infrastructure; and collaborations to enhance workforce development, including broadening participation [4]. All of these areas strive to advance AI by linking universities and students with industry partners to yield real results.
It makes logical sense to establish these private–public partnerships and encourage university students to study and advance AI. The bulk of research on the subject is done by universities; industry then takes that research and develops products based on it. If the United States aims to remain one of the top innovators in AI, it must continue to research deeper uses of AI. With funding in place, students can delve deeper into the subject matter and advance the field.
Several US federal agencies have already embraced these partnerships. These include, but are not limited to, the Defense Innovation Unit (DIU), the National Science Foundation (NSF), the Department of Homeland Security (DHS) through its Silicon Valley Innovation Program (SVIP), and the Department of Health and Human Services (HHS) [4]. The fact that HHS is already working to establish partnerships makes clear that AI and healthcare will continue to grow together and remain of great importance. The main goal of the HHS partnerships is to develop new AI pilot products and to establish research into AI and deep neural networks that furthers AI's uses in the healthcare field.
To some degree, it is possible to predict what is coming for AI by looking at the current trajectory and extrapolating what comes next. However, this extrapolation is extremely limited. Deep neural networks existed as a basic structure for AI back in the 1980s; it was not until recently that there were enough data and technological capability to make them a practical reality [4]. Without knowing which technological advances will disrupt the status quo or become available, it is impossible to predict the far future of AI. Hence the underlying importance of being viewed as a top innovator and researcher in the field, so that the United States may be first with the latest and greatest AI applications.
1.7 Conclusion
When considering the intersection of AI, cybersecurity, and the healthcare industry, we have seen that a myriad of problems exist today, with more coming down the pipeline. To be prepared, several issues must be addressed. The first issue is also the hardest to resolve: the morality of AI. Morality is a fluid topic that changes not only over time but also with who is viewing the moral issue. It is therefore recommended that an international organization preside over AI and over the morals implemented into it, now and in the future. An international organization would allow voices from all nations to be heard so that the best possible options can be decided. The reason this issue is the hardest to resolve is the tension between morality and legislation: an international ruling body risks impeding growth and may be unable to keep pace with AI's expansion in both numbers and advancements.
The second issue that must be addressed for a sustainable AI future in healthcare is the lack of cybersecurity within devices. Currently, there is an incentive to be first to market with new products and to make those products work, so the focus is on R&D. Ensuring that these devices are protected from outside attacks, however, is costly and time consuming, and it is often done after the fact. In a similar way that the federal government provides funding to schools and other research