HealthCare Security Paper p.pdf (479.9 KB)
Hi, can you please also post the full citation? (I don’t like clicking on random downloads on my work computer, but am interested in looking up whatever this article is. Thanks.)
Healthcare Data and Device Security
March 8, 2017
Table of Contents
Value and risk associated with Personally identifiable information (PII)
The Swiss Cheese Model of error management
Hospital data protection woes
How might risks have been mitigated for UHG through better IT policy?
Personal Medical Device Security
This paper makes a broad, if not brief enough, analysis of the state of medical system cybersecurity. It addresses threats to hospital infrastructure, individualized medical devices, and healthcare data records. In September 2014, Reuters published an article titled “Your Medical Record Is Worth More to Hackers than Your Credit Card.” This paper will explore why. Several questions will be raised, but not satisfactorily answered. For instance: “Who legally owns medical data?” and “To what degree do IT industry standards uphold healthcare business policies and ethical responsibilities?” To explore these questions, this paper identifies specific examples of how the Confidentiality, Integrity, and Availability (CIA triad) of data is upheld and compromised within the healthcare field. A broad definition of medical device is used here, taken from the IEEE article Building Code for Medical Device Software Security: the term “medical device” may “…refer to an implant, a wearable device, a bedside device in a hospital, a large-scale diagnostic device such as an MRI system, or even an electronic health record system.” (Haigh p7)
The article that I chose as a starting point for this paper is titled Cooperation on Cybersecurity is Essential, FDA Says. It begins, “If medical devices have cybersecurity problems, the U.S. Food and Drug Administration doesn’t want to stand in the way of companies moving quickly to fix them.” In the past, it was true that Implantable Medical Devices (IMDs) like cardiac pacemakers “…successfully complete validation only once before release to the market, and software changes after validation potentially carry legal and reliability ramifications.” (Burleson p162) Federal policies effectively dissuaded postmarket security life cycle development. Recent changes in federal law are meant to address this impediment. In 2016, a temporary exemption to the Digital Millennium Copyright Act (DMCA) was created to encourage researchers to find vulnerabilities in medical devices by allowing them access to proprietary software. The FDA also released nonbinding recommendations to medical device manufacturers, encouraging them to manage “postmarket cybersecurity.” (FDA p4) Significant breaches in hospital records now require reporting and public disclosure. However, the tension between security and functionality is pushing embedded devices in the healthcare field increasingly toward the Internet of Things, as patients and providers increase their demand for network connectivity. Many of the resulting compromises in security have yet to be addressed by medical manufacturers and hospitals; patient demand for functionality far outweighs that for security.
The article that I chose to analyze was from a local newspaper. I chose to use local sources, where relevant, for several reasons. First, one needn’t look far for sources citing examples where the CIA triad was compromised in healthcare; where there are problems, there is work to be done. Minnesota Department of Labor statistics (Figure 1) project that healthcare hiring will be among the highest of any local industry through 2022. (Macht slide 22) Surrounding this ever-growing industry is some of the most critical need for development of secure systems. Our local hospitals, insurers, and medical device manufacturers are the community that we, as software professionals, will work with in addressing security challenges that have the potential to affect ‘life and limb.’ Second, since our fundamental, sometimes life-altering, interactions with healthcare occur as patients within our local hospital system, an optimally functioning local healthcare system directly benefits us and our loved ones.
Value and risk associated with Personally identifiable information (PII)
The black market may pay ten times as much for healthcare data as it pays for credit card data. (Cylance p2) Why? Unlike a stolen credit card, which can simply be cancelled and reissued, compromises to the integrity and confidentiality of Personal Protected Information (PPI) are difficult and time-consuming to correct. Healthcare records open the door to many opportunities for fraud and financial gain. “Health information is believed to be among the most sensitive and confidential personal data, with the result that confidential data hemorrhage is exposing healthcare providers to unprecedented legal and financial risks.” (Kamoun p43) Barriers to entry into the black market are low, as illustrated in Figure 2: valuable information can be accessed with a relatively low investment of resources. (Ablon p3) “Vulnerabilities are numerous enough that novice hackers can easily obtain the tools to compromise a hospital network or a medical infusion pump.” (Hagestad p35)
While the relatively high market value of PPI may seem counterintuitive, it is supported by market demand, which prompts a large number of cyberattacks against hospital systems each year. Healthcare accounts for almost 20% of U.S. GDP. This helps explain why “…Health services was the most commonly breached sector in 2015.” (Georgia Tech p12) A Ponemon Institute study from 2011 estimated that “…60% of healthcare organizations have experienced at least one breach,” and that “…The average per capita loss cost of a healthcare data breach was $250 for each stolen or lost record.” The Identity Theft Resource Center reports that 2,190,845 medical records were ‘divulged’ in 2012. (Kamoun p45) Hospitals, such as Hollywood Presbyterian Medical Center in California, have been hit by ransomware attacks that hold sensitive information hostage. Significant attacks against VA hospitals have also occurred.
The Swiss Cheese Model (SCM) of error management
The Certified Information Systems Security Professional (CISSP) Exam Guide states that “Some techies like to joke that all computer problems reside at layer 8” of the Open Systems Interconnection (OSI) Reference Model. Since the OSI model lacks an eighth layer, the joke implies that “The user is the problem.” (Harris p479) In the article Human and Organizational Factors in Healthcare Data Breaches, the authors confront the fact that “…Over 78 percent of the 709 surveyed IT practitioners blamed employee behavior, both malicious and negligent, as the major cause of data breaches.” (Kamoun p43) However, the article’s authors assign blame more widely than the majority of IT professionals, stating that “…latent organizational factors may play an equally important role in healthcare data breaches.” (p44) A differentiation is drawn between “Design Versus Human-Induced Errors” and “Variable Versus Constant Errors.” (p48) Risk analysts are encouraged to accept the premise that “…there are many stages where errors can occur, and hence various stages where defenses can be built to prevent them.” In the SCM, each layer of defense is a slice of cheese whose holes are weaknesses; an error, accidental or malicious, passes through the system only when the holes in successive slices line up. The SCM is shown in Figure 3 of the appendix.
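The SCM’s core intuition can be sketched numerically: if each defensive layer is treated as an independent “slice” with some probability of a hole, a breach requires the holes to align across every layer, so each added layer multiplies the overall breach probability down. The layer names and probabilities below are invented for illustration; real layer failures are often correlated, so this is a toy model, not a claim about actual failure rates.

```python
# Toy Swiss Cheese Model: assuming independent layers, a breach happens
# only when a hole exists in every slice, so the overall probability is
# the product of the per-layer hole probabilities.

def breach_probability(hole_probs):
    """Probability that an error passes through every defensive layer."""
    p = 1.0
    for hole in hole_probs:
        p *= hole
    return p

# Illustrative (made-up) hole probabilities per layer:
layers = {
    "user training":    0.20,  # chance a phishing email fools a user
    "email filtering":  0.10,  # chance the email evades the filter
    "endpoint control": 0.05,  # chance the payload runs anyway
}

print(round(breach_probability(layers.values()), 6))  # 0.001
```

Note how removing any one slice (say, endpoint control) raises the breach probability twentyfold, which is the SCM’s argument for defenses at many stages rather than blaming “layer 8” alone.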
Hospital data protection woes
Healthcare network demand for immediate and far-reaching access to digital data may have negative implications for Personal Health Record (PHR) confidentiality and integrity. Securing electronic health records is complex, in part, due to the innate complexity of the healthcare system. Multiple providers require real-time record access in the event of an emergency. Broadly distributed access, encompassing many care providers, insurance companies, and subsidiaries, is expected for facilitating care and processing payment. (Georgia Tech p13) Broad access to data has been identified as a security risk in that “…more than half (57%) of all PHR breaches in the U.S. involved a business associate.” (Kamoun p52)
The hospital units that serve the most serious patients require the most computer-interactive care. Bedside hospital infusion pumps deliver drugs at very small, very precise dosages over regular time intervals. Continuous medication administration requires continuous monitoring, which increases the amount of data that must remain confidential, genuine, and accessible. Medical mistakes have decreased due to technological checks. However, an increasing number and variety of devices must be networked together, increasing the magnitude of security risk. In response, research into secure medical sensor networks is currently being conducted. Partitioning the networks that support IoT devices from the rest of the hospital network may also help decrease risk.
Broad access to health record data, absent proper controls, impacts more than data confidentiality and integrity. It can call into question the quality of patient care, as was demonstrated in a recent federal lawsuit brought against UnitedHealth. (Snowbeck pA1) The lawsuit was publicized in the Star Tribune article Feds Join UnitedHealth Fraud Suit, from February 19, 2017. It states that security concerns at UnitedHealth Group (UHG) have deteriorated into allegations of collusion. The lawsuit alleges that patients were treated for more serious conditions than they actually had, implying that diagnoses were compromised by the potential for financial gain. Those allegations are damning: by calling medical ethics into question, they imply that providers are unfit to provide care. UnitedHealth Group is accused of having “…defrauded the United States of hundreds of millions – and likely billions – of dollars.” (Snowbeck pA10)
How might risks have been mitigated for UHG through better IT policy?
Risk management at the business tier manages “…risk to the major functions of the organization, such as defining the criticality of the information flows between organization and its partners or customers.” (Harris p95) I will assert that numerous risks to UHG could have been mitigated through: 1.) risk analysis that addressed the risks to fundamental healthcare business objectives, like protecting PPI and adhering to medical ethics; and 2.) better data protection. The Swiss Cheese Model of error management would support the perspective that organizational failures exposed UHG to the inevitable realization of this worst-case outcome. It is also possible that one overlooked source of risk stems from the level of complexity inherent to the healthcare system.
Efforts to align healthcare business objectives with IT security policy have been initiated by the healthcare advocacy group ‘I Am the Cavalry,’ which formed at the DEF CON 2013 conference in Las Vegas. The group published The Hippocratic Oath for Connected Medical Devices (Figure 4), which attempts to bridge the gap between the paradigms of the medical community and IT, setting the stage for the two fields to align their standards, goals, and policies. (Constantin)
Good security requires “…a balance between effective security and functionality.” (Harris p326) The allegations against UHG suggest that this relationship may be out of balance. Separation of duties, including split knowledge and dual control, is a “…preventative administrative control put in place to reduce the potential of fraud.” (Harris p155) Separation of duties, rotation of duties, and mandatory vacations are several strategies that could have been initiated by the organization in order to reduce its risk of fraud. (Harris p155) Database partitioning and other means of segregating sections of health and billing information, like stricter access controls, might have been implemented in order to limit opportunities for fraud to occur. Organizational restructuring could require billing and patient care providers to rely on a more limited data set that still allows them to achieve their business objectives.
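As a concrete sketch of how dual control might be enforced in software, the hypothetical check below approves a sensitive change (say, an edit to a billing record) only when two distinct users holding two different duties sign off. The roles and user IDs are illustrative inventions, not drawn from any real UHG system.

```python
# Hypothetical separation-of-duties gate with dual control: a sensitive
# action requires sign-off from two *different* people holding two
# *different* roles. An approver is modeled as (user_id, role).

def dual_control_ok(approver_a, approver_b):
    """Return True only if the two approvals satisfy dual control."""
    id_a, role_a = approver_a
    id_b, role_b = approver_b
    if id_a == id_b:
        return False      # same person approving twice: no dual control
    if role_a == role_b:
        return False      # same duty on both sides: no separation of duties
    return True

print(dual_control_ok((101, "billing"), (202, "clinical")))  # True
print(dual_control_ok((101, "billing"), (103, "billing")))   # False
```

The design point is that fraud then requires collusion across organizational boundaries rather than a single compromised insider, which is exactly the risk reduction Harris attributes to this control.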
Where organizations fall short, the Federal government has provided financial reimbursement to motivate protection of healthcare data. The Health Information Technology for Economic and Clinical Health (HITECH) Act was intended to promote data protection: “The incentive payments under HITECH are substantial: eligible professionals who demonstrate the meaningful use of an (security certified) EHR (Electronic Health Record) in 2011 or 2012 will be entitled to incentive payments of $18,000 in the first year.” (Burde p1) Section 13402 requires mandatory reporting of any breaches impacting over 500 individuals. Those breaches are listed by the U.S. Department of Health and Human Services Office for Civil Rights.
Models to detect fraud are being developed. “Recent Georgia Tech graduate Musheer Ahmed and his advisor, Prof. Mustaque Ahamad, identified a computer algorithm that can accurately predict who is most likely to commit healthcare fraud.” The two have received $400,000 to continue development on their algorithm, called FraudScope. “What I figured out was, to successfully find abuse, you need to look at the noise, not the signal,” Ahmed says, “But, if you find a pattern within the noise, itself, that means that someone is trying to game the system.” (Georgia Tech p13)
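The details of FraudScope are not public in this paper’s sources, but the “pattern within the noise” idea can be illustrated with a generic statistical sketch: flag providers whose claim volumes sit far outside the peer distribution. This z-score toy is my own assumption-laden illustration, not the actual FraudScope algorithm.

```python
# Toy outlier detection in the spirit of "look at the noise": flag any
# provider whose monthly claim count deviates from the peer mean by more
# than z_threshold standard deviations. Provider names and counts are
# fabricated for illustration.
import statistics

def flag_outliers(claims_by_provider, z_threshold=3.0):
    counts = list(claims_by_provider.values())
    mean = statistics.mean(counts)
    stdev = statistics.pstdev(counts)       # population std dev of peers
    if stdev == 0:
        return []                           # everyone identical: no signal
    return [provider for provider, n in claims_by_provider.items()
            if abs(n - mean) / stdev > z_threshold]

# Provider "E" bills roughly nine times its peers; with a sample this
# small, a lowered threshold is needed for the deviation to register.
claims = {"A": 100, "B": 110, "C": 95, "D": 105, "E": 900}
print(flag_outliers(claims, z_threshold=1.5))  # ['E']
```

Real fraud detection must of course model far richer structure (procedure mixes, referral graphs, temporal patterns), but the sketch captures the core idea that abuse shows up as a repeatable pattern in what otherwise looks like noise.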
Personal Medical Device (PMD) Security
Personal medical devices have domain specific security challenges. This paper attempts to highlight several of these. It evaluates security concerns unique to: 1.) Diabetic systems - blood glucose continuous data monitors and insulin infusion pumps; 2.) Implantable cardiac defibrillators and pacemakers; and, 3.) Bedside hospital infusion pumps.
Medical devices may have vulnerabilities linked to the languages that they are programmed in. Since critical processes must not be interrupted by garbage collection, they tend to be written in C or C++. Those languages “…provide programmers with flexibility, perceived efficiency, and compatibility with large bodies of legacy software.” However, language-specific risks include: “…buffer overflow, null pointer dereference, use after free, uninitialized memory use, and illegal free (of an already freed pointer, or a not malloced pointer).” (Haigh p9)
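To make the first of those risks concrete, the sketch below models a buffer overflow conceptually. The bug class belongs to C and C++; Python is used here only to model device memory as a flat byte region, and the adjacent “dose rate” field and buffer sizes are invented for illustration.

```python
# Conceptual buffer overflow: an 8-byte packet buffer sits directly
# before a 1-byte control field, mimicking adjacent allocations on an
# embedded device. An unchecked copy (like an unbounded memcpy/strcpy)
# silently overwrites whatever lies past the buffer.

BUF_SIZE = 8
memory = bytearray(BUF_SIZE + 1)   # packet buffer + adjacent control byte
memory[BUF_SIZE] = 5               # hypothetical dose rate: 5 units/hour

def unchecked_write(data):
    """Trusts the sender's length, like an unbounded copy in C."""
    memory[0:len(data)] = data

def checked_write(data):
    """Bounds-checked alternative: reject oversized input outright."""
    if len(data) > BUF_SIZE:
        raise ValueError("packet exceeds buffer size")
    memory[0:len(data)] = data

unchecked_write(b"AAAAAAAA\xff")   # 9 bytes: spills into the control field
print(memory[BUF_SIZE])            # 255 -- dose rate silently corrupted
```

The same one-line length check is what separates the “buffer overflow” entry on Haigh’s list from safe code, which is why building codes for device software emphasize bounds discipline (or memory-safe languages) so heavily.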
An article by Cylance listed other obstacles to overcoming security failures: FDA regulations, a network of older devices currently in use, and a growing threat from metamorphic and polymorphic malware. Poor software practices, hardware limitations, spearphishing, botnet activity, and ransomware have also been identified as contributing technological threats. Most medical device attacks demonstrated by researchers have involved wireless telemetry; proposed countermeasures include “radio-based communication channels, biometric authentication, distance-bounding authentication, out-of-band authentication, external devices, and anomaly detection.” (Burns p72)
Historically, implanted cardiac pacemakers have been difficult to patch, not only because of regulatory obstacles, but because these devices required surgical removal from the body before software patches or updates could be applied. In the past it was true that “Significant updates to devices already on the market require new clearance or declaration of substantial equivalence to an existing device before being released to market. A tempting conclusion for manufacturers is that the liabilities and complications inherent in this process justify a ‘no security updates’ stance.” (Burns p69) Between 1990 and 2000, there were 114,645 FDA recalls of Implanted Cardiac Defibrillators (ICDs). Burleson advocates applying threat modeling techniques at design time in order to maximize preparation for security shortcomings. (p162) FDA regulations have also attempted to adapt to an environment where medical devices are increasingly reliant on software.
On October 28, 2016, a temporary exemption to the Digital Millennium Copyright Act (DMCA) was extended to allow security researchers to “…conduct controlled research on consumer devices so long as the research does not violate other laws such as the Computer Fraud and Abuse Act (CFAA).” (Alva p1) The exemption includes medical devices, provided that they are not connected to a human being during testing. Research must be conducted in “good faith,” which assumes that researchers will report discovered vulnerabilities, although they are not specifically mandated to do so. (Alva p1)
In December 2016, the FDA published its nonbinding recommendations to medical device manufacturers in a report titled Postmarket Management of Cybersecurity in Medical Devices: Guidance for Industry and Food and Drug Administration Staff. Although the FDA “…does not intend to enforce reporting requirements,” the new recommendations request that manufacturers “…monitor, identify, and address cybersecurity vulnerabilities and exploits as part of their postmarket management of medical devices.” (p4) The report outlines threat modeling best practices, methods for assessing risk and severity of patient harm, and the criteria for manufacturers to meet Information Sharing and Analysis Organization (ISAO) standards.
Today’s implanted cardiac defibrillators (ICDs), successors to the pacemaker, were among the first medical devices to allow for wireless updates. Network connectivity was deemed essential after a patient death in 2000. (Burns p69) Methods for wirelessly recharging ICD batteries can postpone the need for surgery. FDA regulations make allowances for software patches and updates because they are seen as increasingly valuable to protecting patient health and safety.
In summary, I would like to emphasize the point that there have been no known black hat attacks against personal medical devices with hardware attached to live people. Attacks have been focused on electronic medical records and likely motivated by financial gain, as opposed to doing harm to patients. However, it is important to recognize that threats exist.
Dick Cheney reported in a 2013 interview with 60 Minutes that his doctors had disabled the wireless connectivity in his implanted defibrillator. This illustrates the variability in security concerns between individuals: Cheney’s former position as a “high value” target for assassination prompted particular concern about network connectivity. It also illustrates the amount of individual customization that is necessary for PMDs to function well for their users.
Definitions of “acceptable risk” are facing more scrutiny within the medical field as the FDA invites hacking of medical devices. Local companies like Smiths Medical, Boston Scientific, and Medtronic are playing war games to train for scenarios where cyberattacks may come from threat actors like nation-states and hacktivists, or arise from improper security controls and information leaked through residual risks. A “low risk” of malicious attacks sounds acceptable, until the occasional malicious attack occurs. For many people not under threat of assassination, it may be easy to discount the risk of black hat hacks against PMDs. However, malicious attacks with immediate consequences for health and safety do occur. For instance, in a 2008 attack, “…hackers defaced a Web page run by the nonprofit Epilepsy Foundation, replacing the page’s content with flashing animations that induced migraines or seizures for some unsuspecting visitors.” (Burleson p162) In 2012, the Surgeons of Lake County in Libertyville, Illinois were “…kept hostage by hackers who penetrated the computer network, encrypted the data server and sent a ransom note, requesting payment in return for access to the ePHI.” (Kamoun p53)
Where protective efforts have kept embedded devices from becoming network-connected IoT devices, there has been pushback. Diabetic patients and care providers have demanded increased access to data. Parents of children with diabetes, desperate for real-time data exported to their mobile phones for continuous monitoring, supported the NightScout Project, which led white hat hacks against Dexcom to achieve mobile phone access to its continuous blood glucose sensor readings. The newest version of Dexcom’s device has incorporated NightScout functionality. It allows IoT functionality, but simultaneously increases security vulnerabilities.
The tension that exists between the drive for functionality and the drive for secure systems is best illustrated in the realm of personal diabetes systems, where many high-profile attack methodologies have been publicized. (Radcliffe) PMDs run from low-power batteries that make standard encryption algorithms impractical. Figure 5 shows the unencrypted communication packet for a common medication infusion pump used by diabetes systems. (Burleson p181) Device manufacturers are currently using “Proximity-based access control…to verify the distance of the communicating peer before initiating wireless communication, thereby limiting attackers to a certain physical range.” (Li p152) Several lightweight approaches have been proposed, including body-coupled communication (see Figure 6), and research on secure medical sensor networks is ongoing. (Li p151 and Burleson p168) Device developers struggle to secure devices while simultaneously satisfying market demand for data accessibility, as illustrated by the NightScout Project. Risks are inflated by market demand for personal healthcare mobile phone applications.
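The proximity-based control Li describes can be sketched as a gate applied before any session is established: estimate the peer’s distance from received signal strength and refuse communication beyond a cutoff. The path-loss model and all constants below are simplifying assumptions of mine, not parameters of any real device, and a known limitation of the approach is that signal strength can be spoofed by an attacker with a high-gain antenna.

```python
# Simplified proximity-based access control for a wireless PMD: estimate
# the peer's range from received signal strength (RSSI) with a
# log-distance path-loss model, and refuse sessions beyond a cutoff.
# All constants are illustrative assumptions.

RSSI_AT_1M_DBM = -40.0       # assumed reference reading at 1 meter
PATH_LOSS_EXPONENT = 2.0     # free-space propagation assumption

def estimated_distance_m(rssi_dbm):
    """Invert the path-loss model to estimate peer distance in meters."""
    return 10 ** ((RSSI_AT_1M_DBM - rssi_dbm) / (10 * PATH_LOSS_EXPONENT))

def allow_session(rssi_dbm, max_distance_m=1.0):
    """Gate wireless communication on estimated physical proximity."""
    return estimated_distance_m(rssi_dbm) <= max_distance_m

print(allow_session(-35.0))  # True: stronger than the 1 m reference
print(allow_session(-70.0))  # False: signal consistent with ~30 m away
```

The appeal for battery-constrained devices is that this check costs almost nothing at runtime, trading cryptographic strength for a physical-range argument, which is exactly the functionality-versus-security compromise the paragraph above describes.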
Very little work has yet been done to address the issue of healthcare data ownership. Kamoun cites the idea that “…Individuals have the right to control all matters related to their own body, including their personal health information.” (p43) However, this is not yet a realistic or enforceable standard. The best summary of the current state of medical data security can be drawn from the idea that “possession is nine-tenths of the law.” Diabetics who use proprietary (closed) medical systems for data collection may forfeit some access and, thereby, some ownership of that data. Data largely belongs to those who collect it, and those who collect it determine how to secure it. Healthcare data is highly valuable and is described by Kamoun as “hemorrhaging” from the system. (p43) Its misuse can result in “…serious reputational harm such as discrimination, stigmatization, loss of insurance and/or employment” for individuals, in addition to threats of financial and bodily harm. (Kamoun p43) For these reasons, increased enforcement and regulation by the FDA may assist in deterring, but will not prevent, attacks against medical devices.
Figure 1. Workforce changes, from the Minnesota Department of Labor
Figure 2. Different levels of participation in the underground market, from “Markets for Cybercrime”
Figure 3. Swiss Cheese Model of Error Management
Figure 4. Hippocratic Oath for Connected Medical Devices
Figure 5. Communication packet sent from an insulin delivery system (Burleson p181)
Figure 6. SFDR distances over air and body coupled communication channel (Burleson p188)
The first reference is one of the coolest because I think that Medtronic will be hosting another medical device manufacturer cyber war games event this summer. I’ll post when I have more information…
For future reference, it is TuDiabetes policy to discourage posting “blind” links to other sites. Not only does a bare link pose security issues, it is also a technique often used by internet trolls to covertly “steer” traffic to their sites. Since looking at a link gives no clue to what’s really behind it, they make people nervous—understandably.
All that being so, the diabetes online community is based on—in fact depends on—free exchange of information and ideas, so it’s perfectly okay to refer members to other useful pages. There are two preferred ways of doing that:
Copy the text of the article and post it here in its entirety, giving proper credit to the source, OR
Copy the first few paragraphs and then add a link to the original article. This is actually the best way. Someone can get the sense of the article that way and then decide whether they are interested enough to go read the entire story.
I don’t believe that I posted a ‘blind link,’ but I did upload a document. Is that what’s causing nervousness?
Our comment was directed at the original post that began the discussion. You subsequently copied the entire article, which is great. Just as a housekeeping detail, it is considered best practice to credit the source when doing so, however.
I see. Shouldn’t just upload a doc. It looked a little funny, but I erred on the side of brevity. I’m the author, so no worries. Didn’t want to post my name.
Well, if you’re the author . . . then you probably won’t complain that you failed to give yourself credit. No harm, no foul.
Mohe. Nice paper !!!