Originally published on January 10, 2020 by Faces of Digital Health
If the critical issue of data security and privacy protection in the past was how to archive data and prevent unauthorized access to archives, the cloud brought a whole new set of challenges. “AI and machine learning are improving safety, but the bad guys are using these technologies as well,” says Chris Bowen, Founder of ClearDATA.
According to the Clearwater CyberIntelligence Institute, one of the key issues behind data breaches is user authentication deficiencies: weak password strength requirements, missing single sign-on controls, and accounts that are not locked after too many failed login attempts. The three primary user-level risks around authentication are generic password use, physically posting passwords in the workspace, and unencrypted emailing of credentials over external networks.
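The account-lockout control mentioned above can be sketched in a few lines. This is a minimal, illustrative Python sketch, not production code; the threshold, class name, and return values are assumptions, not something from the article:

```python
# Minimal sketch of an account-lockout control: lock an account
# after too many consecutive failed login attempts.

MAX_FAILED_ATTEMPTS = 5  # assumed threshold; real policies vary


class AccountLockout:
    def __init__(self):
        self.failed = {}      # username -> consecutive failed attempts
        self.locked = set()   # usernames currently locked out

    def record_attempt(self, username, success):
        if username in self.locked:
            return "locked"
        if success:
            self.failed[username] = 0   # reset the counter on success
            return "ok"
        self.failed[username] = self.failed.get(username, 0) + 1
        if self.failed[username] >= MAX_FAILED_ATTEMPTS:
            self.locked.add(username)   # threshold reached: lock the account
            return "locked"
        return "failed"
```

A real implementation would typically add a time-based unlock and persist state outside the process, but the core control is simply a counter reset on success and checked against a policy threshold.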
Security measures required of personnel are getting increasingly complex. Additionally, while several advances have been made at the technological level of data protection – from different methods of encryption to high hopes pinned on AI and quantum computing – the bad guys are using these technologies as well, says Chris Bowen, Founder and Chief Privacy & Security Officer of ClearDATA, a US-based company offering technology and services to help organizations with their healthcare cloud security needs.
According to Bowen, cybersecurity is top of mind for healthcare executives, as demand for timely access to medical data keeps rising alongside concerns about privacy. In the last quarter of 2019, the healthcare records of 1.3 million people were exposed – an increase in the number of victims of 563%, states Chris Bowen.
Hacks most often happen through email, when people share patient records or are tricked into disclosing information in phishing attacks. “An example of a consequence of healthcare data theft is the case of a female patient who lost her wallet at a gas station. Two months later, she was arrested for opioid trafficking and had to go through the process of clearing her name. If you think about medical records for a second – one transaction goes to over 100 systems, and the shelf life of a record can be over ten years,” explains Chris Bowen.
In cases of identity theft in healthcare, an individual’s medical identity can be compromised, resulting in problematic changes such as alterations to allergies previously recorded in the EHR.
How to approach cybersecurity in healthcare?
1. Assess what to outsource
The challenge for healthcare institutions is how to manage healthcare data at an organizational level – how many services should be outsourced versus built in-house. Healthcare executives often operate under budget constraints that limit investment in healthcare IT.
Given the rapid advancements in cloud and platform technology, Chris Bowen believes it is impossible to work without cloud service partners today. The question is no longer whether to use the cloud, but when and for which purposes.
2. Educate staff about data protection
One of the things in-house healthcare IT personnel can help with is understanding the cloud and data protection. “Security by obscurity doesn’t always work. I remember during one of the risk assessments we did, a doctor proudly showed us a spot on the wall where he had written his password in letters so small that you wouldn’t even notice the spot unless you were actively looking for it. Password management is important, and if you don’t manage your passwords effectively, you quickly fall back on reusing similar ones,” describes Bowen, who keeps over 600 passwords in his password manager.
Among the data security risks in large organizations are new hires and departing workers. As Bowen writes in one of his articles, a sample administrative threat that might be included in a CEO’s IT security briefing is: “We anticipate 50-plus new hires over the next two months.” Or the inverse: “We are reducing staff over the next two months.” Each scenario poses its own risks. New hires need to be trained on policies concerning Internet and email usage to avoid falling for “phishing” scams that can unlock the organization’s network to cybercriminals; even longstanding employees can fall for phishing or access information they aren’t authorized to see. Departing employees may be disgruntled – and want to copy valuable patient files before they leave.
“Certainly, if you bring someone on board who is not trained in dealing with personal health information (PHI), don’t let them near PHI until they know the basics. On the control side, you can implement proper system controls through centralized systems,” comments Bowen.
3. Be mindful of compliance drift
CIOs also need to make sure that architectural solutions are aligned with the latest regulations and keep monitoring compliance to prevent compliance drift.
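One simple way to monitor for compliance drift is to continuously compare a system’s current security settings against an approved baseline and flag every deviation. The sketch below is illustrative only; the setting names and baseline values are hypothetical, not drawn from any particular regulation:

```python
# Illustrative drift detection: compare current settings to an
# approved baseline and report every setting that has deviated.

APPROVED_BASELINE = {
    "encryption_at_rest": True,
    "audit_logging": True,
    "min_password_length": 12,
    "public_network_access": False,
}


def detect_drift(current_settings):
    """Return {setting: (expected, actual)} for every drifted setting."""
    return {
        key: (expected, current_settings.get(key))
        for key, expected in APPROVED_BASELINE.items()
        if current_settings.get(key) != expected
    }


# Example: a system where public network access was accidentally enabled.
drift = detect_drift({
    "encryption_at_rest": True,
    "audit_logging": True,
    "min_password_length": 12,
    "public_network_access": True,
})
# drift == {"public_network_access": (False, True)}
```

In practice this kind of check runs on a schedule (or on every configuration change) and feeds alerts into the compliance team’s workflow, rather than being a one-off script.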
The future is in collaboration
With new digital medical devices and the IoT, new challenges are on the horizon. Bowen believes that combating vulnerabilities will require shared responsibility across the whole community – HHS, the FDA, providers, insurance companies, and others. Every medical device needs to be updatable.
Another big challenge is how to secure legacy systems that can’t be secured with patches. AI and machine learning are improving safety, but the bad guys are using these technologies as well, so new versions of software should be released continuously. Bowen also sees blockchain as an upcoming component of data security, though other methodologies could be nearly as effective.
Bowen also commented on Project Nightingale, which gave Google access to the medical records of over 50 million people in 21 states through Ascension, the largest nonprofit health system in the United States. While he holds many opinions on the project, he questions the need for all the engineers to have access to healthcare data and believes patients should have been notified about the project. Project Nightingale was a good lesson in the transparency of sharing information.
Listen to the full discussion:
Have you ever been told cybersecurity is boring?
According to the Clearwater CyberIntelligence Institute, one of the key issues behind data breaches is user authentication deficiencies – weak password strength requirements, missing single sign-on controls, and accounts that are not locked after too many failed login attempts – alongside risks such as generic password use, physically posting passwords in the workspace, and unencrypted emailing of credentials over external networks, among others. (https://healthitsecurity.com/news/user-authentication-most-common-cyber-risk-for-hospitals-health-systems) What kind of security issues have you been noticing in your career?
Over 400 data breaches affecting millions of people happened in 2019 in the US. What trends are you noticing in the cybersecurity field? Is security improving or getting increasingly difficult?
Privacy and compliance: laws and regulations may exist, but complying with them doesn’t necessarily suffice for the safety of data. How often are audits executed? Why do data breaches and leaks happen so often?
As a patient data privacy expert, how did you see the news about Project Nightingale, which gave Google access to the medical records of over 50 million people in 21 states through Ascension, the largest nonprofit health system in the United States? Patients did fear discrimination.
In many cases, the reason for the gaps in data security in healthcare is the budget. What potential solutions do you see?
Healthcare IT companies can sometimes install safeguards to prevent data breaches, but systems then lack user-friendliness. For example, two-factor authentication can be effective but impractical if a doctor had to complete 2FA for every patient during the day. Are fingerprint readers and voice recognition solutions going to be the answer? How many of these technologies are already in use?
Do you have any examples of institutions taking exemplary care of data privacy? What are they doing differently compared to others?
What are the most common gaps in data security on organizational levels?
What are the administrative threats?
How diligent are institutions in your experience with preparing security briefings and analyzing them?
Two things enabling a high level of data security are homomorphic encryption (you’re able to run computations on encrypted data, while the data itself is never exposed) and zero-knowledge proofs. To what extent are you noticing them being used in practice? What kind of future in data security do you envision?
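As a toy illustration of the homomorphic property mentioned in the last question: unpadded (“textbook”) RSA is multiplicatively homomorphic – multiplying two ciphertexts yields an encryption of the product of the plaintexts, so a server can compute on data it cannot read. The tiny primes below are for readability only; textbook RSA with parameters like these is not secure and is purely a sketch of the principle:

```python
# Toy demonstration of a homomorphic property using textbook RSA:
# Enc(a) * Enc(b) mod n decrypts to a * b. NOT secure in practice.

p, q = 61, 53
n = p * q                 # public modulus (3233)
phi = (p - 1) * (q - 1)   # Euler's totient of n
e = 17                    # public exponent, coprime with phi
d = pow(e, -1, phi)       # private exponent (modular inverse, Python 3.8+)


def encrypt(m):
    return pow(m, e, n)   # c = m^e mod n


def decrypt(c):
    return pow(c, d, n)   # m = c^d mod n


a, b = 7, 6
# A server holding only ciphertexts can multiply them...
product_of_ciphertexts = (encrypt(a) * encrypt(b)) % n
# ...and the key holder decrypts the result to a * b, i.e. 42.
assert decrypt(product_of_ciphertexts) == a * b
```

Fully homomorphic schemes (which support both addition and multiplication on encrypted data) are far more involved, but this captures the basic idea of computing on data that is never decrypted on the server side.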