by Chris Bowen
Chief Privacy & Security Officer and Founder
One of many services ClearDATA provides its customers as part of its Cloud Workload Protection Platform (CWPP) is hardened images. We do this according to the Center for Internet Security (CIS) Standards as part of our compliance solution for the healthcare cloud, ClearDATA Comply™. We wrote about our hardened images here on our blog.
For ClearDATA customers using Comply, that post offers a higher-level overview of the critical importance of “hardening.” Hardening ensures the virtual server’s operating system is devoid of anything it doesn’t require to operate, reducing the attack surface. For ClearDATA customers, that article is probably all you need, since we’re already doing this for you. The result is greater efficiency for your developers and lower costs for your company.
For the rest of you who are taking a more DIY approach to the cloud, you likely already feel the pain of applying robust hardening protocols to your operating environment. This post may be just for you.
What follows is a more in-depth dive into the best practices and actions you should take in hardening images and the system as a whole. I’ll also throw in some best practices for configuration and vulnerability management, because you’ll need to do these in tandem with system hardening.
As a healthcare organization, you should adequately harden an application and the infrastructure that supports it to prevent misuse and keep either from being used as an attack vector. Your team should also pay close attention to configuration changes and integrate best-practice vulnerability management for each delivery model. A system breach can subject an organization to lawsuits, fines, and reputational damage, or worse, ruin the lives of the patients it serves.
Your organization should adopt a hardened configuration standard for all systems and network components. Developers should leverage the guidance of organizations like OWASP, (ISC)2, or CIS to harden both the application and the compute, network, and storage infrastructure that supports it.
Hardening strategies, however, will differ depending on the cloud delivery model used to support the application.
Hardening with Infrastructure as a Service Best Practices
The most common cloud delivery model currently used by healthcare organizations is Infrastructure as a Service (IaaS). This delivery model typically leverages virtual machines, virtual storage volumes or services, host- and edge-based network protection, and network segmentation. Hardening an application running in this model involves disabling unneeded services, closing unused ports, and adding security controls to each component, including web servers, application servers, storage, network configurations, and identity and access management solutions.
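As an illustration only, auditing a host against such a baseline can be sketched in a few lines. The allowed ports and services below are hypothetical examples for the sketch, not a CIS-prescribed list:

```python
# Sketch of a baseline audit for an IaaS host (illustrative only).
# The baseline values below are hypothetical, not CIS-mandated lists.

ALLOWED_PORTS = {22, 443}          # e.g., SSH and HTTPS only
ALLOWED_SERVICES = {"sshd", "nginx", "auditd"}

def audit_host(open_ports, running_services):
    """Return findings for anything outside the hardened baseline."""
    findings = []
    for port in sorted(open_ports - ALLOWED_PORTS):
        findings.append(f"close unexpected port {port}")
    for svc in sorted(running_services - ALLOWED_SERVICES):
        findings.append(f"disable unneeded service {svc}")
    return findings

# Example: a host running telnet with an extra listener fails the audit.
print(audit_host({22, 23, 443}, {"sshd", "nginx", "telnetd"}))
```

In practice the open ports and running services would be collected by tooling on the host; the point is that the baseline is explicit and everything outside it is a finding.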
Hardening with Infrastructure as Code Best Practices
Some healthcare organizations have evolved beyond the IaaS model and deploy containers using Infrastructure as Code (IaC). This approach enables these organizations to accelerate innovation, use cloud services more efficiently, and minimize operational overhead.
Containers are single-purpose virtual compute instances that support workloads in the cloud. A container is similar to a virtual machine (VM), but with a few important distinctions. Virtual machines leverage hardware virtualization to carve a physical host into multiple VMs. Containers, on the other hand, virtualize the operating system (OS), carving it into multiple lightweight slices. These slices share the host OS’s kernel and binaries, allowing an application to bring only its own data and libraries.
Think of containers as five layers:
- Code: Code is the heart of the application and tells the technology what to do. It may be proprietary or built on open-source components written by others.
- Container: A container packages the code together with the data and libraries it needs, running as a single-purpose virtual compute instance.
- Cluster: A cluster is a dynamic system that places and manages containers grouped into pods. Pods run on nodes and supply the necessary interconnections and communication channels.
- Node: A node is a worker machine, virtual or physical, managed by the Master; it hosts the components necessary to run pods.
- Cloud service provider: The provider supplies the services needed to run applications and is generally considered the trusted computing base of a cluster of containers.
Hardening Infrastructure as Code (IaC) differs at each stage of the container application lifecycle.
Because containers share the physical server’s memory, CPU, disk (requiring block-level I/O), and host operating system, the security posture of a container runtime/worker host varies with how it is architected. Commonly used orchestration engines like Kubernetes manage the worker nodes that run containers and provide updated worker hosts. Therefore, hardening should be performed at both the host and cluster level, using custom images and tooling aligned with guidance like the CIS hardening benchmarks.
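As a rough sketch of what a CIS-style node check looks like, the snippet below flags a few kubelet settings that the CIS Kubernetes Benchmark commonly calls out (anonymous auth, the read-only port, AlwaysAllow authorization). It assumes the settings have already been collected into a dictionary; it is not a substitute for running the full benchmark:

```python
# Minimal sketch of a CIS-style node check, assuming kubelet settings have
# already been collected into a dict (e.g., parsed from the kubelet config).

def check_kubelet(settings):
    """Flag settings that deviate from common CIS Kubernetes recommendations."""
    findings = []
    if settings.get("anonymous_auth", True):
        findings.append("disable anonymous kubelet auth")
    if settings.get("read_only_port", 10255) != 0:
        findings.append("set the kubelet read-only port to 0")
    if settings.get("authorization_mode") == "AlwaysAllow":
        findings.append("do not use AlwaysAllow authorization")
    return findings

print(check_kubelet({"anonymous_auth": True, "read_only_port": 10255,
                     "authorization_mode": "AlwaysAllow"}))
```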
In general, hardening IaC should address:
- Your development pipeline and corresponding applications:
  - Static scans as developers create and integrate open-source code
  - Hardened and secure code repositories
  - Well-documented and automated build processes
  - Automated QA processes and reporting
- Your container deployment environment(s) and corresponding infrastructure:
  - Host hardening
  - Dynamic runtime scanning
  - Auto-remediation and reporting
- Integrating with security tools and enhancing existing security policies and procedures:
  - Container vulnerability scanning
  - Host machine scanning
  - Network scanning
  - System configuration change monitoring and alerting
  - Base image patching and change management
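To make the pipeline items concrete, here is a toy build gate in the spirit of a static dependency scan. The vulnerable-package list is invented for illustration; a real scanner would draw on a CVE feed:

```python
# Illustrative CI gate: fail the build if a pinned dependency appears on a
# known-vulnerable list. The vulnerable-package data here is made up; in
# practice a scanner fed by a CVE database supplies it.

VULNERABLE = {("examplelib", "1.0.2"), ("oldparser", "0.9.1")}  # hypothetical

def gate_build(dependencies):
    """Return (passed, offending packages) for a list of (name, version) pins."""
    bad = sorted(set(dependencies) & VULNERABLE)
    return (len(bad) == 0, bad)

ok, bad = gate_build([("examplelib", "1.0.2"), ("requests", "2.31.0")])
print(ok, bad)
```

Wired into the build process, a failing gate blocks the artifact from ever reaching the deployment environment, which is cheaper than remediating at runtime.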
Ultimately, the IaC hardening approach should enable your organization to secure the key components of a cluster:
- Transport layer security (TLS) for all API traffic
- API authentication and authorization
- Cluster resource usage limitation
- User permission hardening
- Network access restrictions
- Cloud metadata access restrictions
- Node to Pod access restrictions
- Etcd access restrictions
- Cluster audit logging
- Infrastructure credential rotation
- Third-party library security and permission reviews
- Secrets encryption
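A few of these checklist items can be spot-checked mechanically. The sketch below assumes the kube-apiserver flags have already been collected (e.g., from its static pod manifest); it maps three real apiserver flags to checklist items and reports which items lack a supporting flag:

```python
# Sketch: verify a few checklist items against kube-apiserver flags, assuming
# the flags have been collected from the apiserver manifest.

REQUIRED_FLAGS = {
    "--tls-cert-file": "TLS for all API traffic",
    "--audit-log-path": "cluster audit logging",
    "--encryption-provider-config": "secrets encryption at rest",
}

def check_apiserver(flags):
    """Return the checklist items whose supporting flag is missing."""
    present = {f.split("=", 1)[0] for f in flags}
    return [item for flag, item in REQUIRED_FLAGS.items() if flag not in present]

print(check_apiserver(["--tls-cert-file=/etc/kubernetes/pki/apiserver.crt",
                       "--audit-log-path=/var/log/kubernetes/audit.log"]))
```

A flag being present does not prove the control is effective, so checks like this complement, rather than replace, a full benchmark run.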
Configuration management is more than just defining a setting and moving on to the next project. An effective configuration management strategy marries compliance with change management and controls on operational software to enable your organization to maintain a baseline for its critical and non-critical systems. Here are elements to consider in your configuration management strategy:
Compliance with security policies and standards:
- Your organization should conduct annual compliance reviews and evaluate the findings with asset owners. Ultimately, the results should be approved by leadership.
- Where possible, use automated compliance tools; they yield far more accurate results than manual review. For example, the ClearDATA Comply™ product performs millions of evaluations each month on over 70 cloud services across thousands of assets.
- Automation should also enable technical compliance checking with the assistance of automated tools and auto-remediation technology.
- Your organization should implement a continuous monitoring program managed by people with appropriate independence. In other words, the IT department should not police itself.
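At its core, a continuous monitoring loop reduces to evaluating resources against rules and reporting drift. The toy example below illustrates the idea; the resource shapes and rule names are invented and do not reflect any particular cloud API or product:

```python
# Toy continuous-compliance check: evaluate cloud resources against rules and
# report drift. Resource shapes and rules are illustrative, not a real API.

RULES = {
    "storage_encrypted": lambda r: r.get("encrypted", False),
    "no_public_access": lambda r: not r.get("public", False),
}

def evaluate(resources):
    """Return a list of (resource_id, failed_rule) pairs."""
    failures = []
    for res in resources:
        for name, rule in RULES.items():
            if not rule(res):
                failures.append((res["id"], name))
    return failures

print(evaluate([{"id": "bucket-1", "encrypted": True, "public": True},
                {"id": "vol-2", "encrypted": False}]))
```

Run on a schedule and routed to an independent reviewer, the same loop also satisfies the “IT should not police itself” requirement above.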
Change management programs in place that:
- Control changes to assets, applications, systems, networks, policies, or procedures
- Control changes to equipment and software
- Define and implement fallback procedures in case a change goes awry
Operational software controls that:
- Restrict who can upgrade software and systems
- Include application allowlists that permit only approved programs or code
- Restrict software that is out of support or end of life
- Test applications for usability, security, and impact before production
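An application allowlist is often implemented by approving binaries by cryptographic digest rather than by name. A minimal sketch, with made-up binary contents standing in for real executables:

```python
# Sketch of a hash-based application allowlist: only binaries whose SHA-256
# digest is pre-approved may run. The approved contents here are examples.
import hashlib

APPROVED = {hashlib.sha256(b"approved-binary-contents").hexdigest()}

def may_execute(binary_bytes):
    """Allow execution only if the binary's digest is on the allowlist."""
    return hashlib.sha256(binary_bytes).hexdigest() in APPROVED

print(may_execute(b"approved-binary-contents"))   # True
print(may_execute(b"tampered-binary-contents"))   # False
```

Hashing defeats renaming tricks: a tampered binary produces a different digest even if its filename matches an approved program.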
Your organization should never take vulnerability management for granted. Without it, bad actors can literally make your system their own. While hardening your system limits destructive influences, a robust vulnerability management program can stop attackers before they inflict damage.
A vulnerability management program should be evaluated for effectiveness at least annually and include, at a minimum:
- An inventory of assets and services
- Processes for monitoring, assessing, ranking, and remediating vulnerabilities
- Internal and external vulnerability assessments of systems, virtualized environments, and networked environments, including network and application-layer tests, using:
  - Input data validation for applications
  - Intrusion prevention or detection technology
  - Anti-malware software that provides protection using signature and behavioral change response
- A patching process that addresses production and disaster recovery environments and includes patch testing and evaluation before installation
- A penetration testing program performed by an independent team or outside firm on an annual basis
- Log and system activity reviews
- Vulnerability scanning, including remediation scans to determine whether flaws remain after the fix
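The ranking and remediation steps can be driven by a simple severity policy. The CVSS thresholds and deadlines below are an example policy for illustration, not a standard:

```python
# Illustrative ranking step: order open vulnerabilities by CVSS score and
# attach a remediation deadline in days. Thresholds are example policy only.

SLA_DAYS = [(9.0, 7), (7.0, 30), (4.0, 90), (0.0, 180)]  # (min CVSS, days)

def prioritize(vulns):
    """Return vulns sorted worst-first, each with a due_days field attached."""
    ranked = sorted(vulns, key=lambda v: v["cvss"], reverse=True)
    for v in ranked:
        v["due_days"] = next(d for floor, d in SLA_DAYS if v["cvss"] >= floor)
    return ranked

print(prioritize([{"id": "CVE-A", "cvss": 5.3}, {"id": "CVE-B", "cvss": 9.8}]))
```

Remediation scans then verify that each item actually drops off this list by its deadline, closing the loop described above.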
Hardening is…hard. I get it. And so much depends on the cloud delivery model and the compute model. I probably should have posted this article earlier in your budget cycle, because hardening can be resource-intensive without the right approach or automation. Let me know if you want to explore a better way. ClearDATA’s Cloud Workload Protection programs may be able to let you enjoy your weekends again.
For a closer look at Comply, request a demo.