November 16, 2022
To understand container security, we first need to understand what containers are. A container setup usually consists of an engine and an image: the engine runs applications, and all the information required to run an application is stored in the image.
They are packaged together, as a whole, so that the application can run within the container, independently of the host infrastructure. This makes the container a portable computing environment that can run on many different hosts or hardware types. Bear in mind that production applications may span multiple containers, running parts of the app independently, an approach known as ‘microservices’, which only adds to the complexity of managing containers.
All of this is managed by an orchestrator, which spins up new containers and kills off old ones, monitors container health, and handles load balancing, traffic routing and more, as these functions are required. Well-known examples of orchestration tools include Kubernetes (K8s), VMware Tanzu and Docker Swarm.
Now that we understand what we are dealing with, here are five steps to help you secure your containers.
Good container security must begin with the base image, as the image is critical to the container. It is vital that the image conforms to best practices, is free of any known exploits or vulnerabilities (or at least patched so that they are mitigated) and meets the required security standards of the operating environment. Securing the base image should be your first priority.
If the image is not being created by your teams, ensuring that third-party images are thoroughly checked before use is essential. Any container misconfigurations, or areas in which images do not adhere to best practices, should be remediated immediately, before they are used in a live environment. Repository controls are available to developers and administrators to ensure that container images are only pulled from trusted repositories. Proactive image analysis and vulnerability management are necessary to ensure that the container image begins in the best possible condition.
Beyond that, the image must remain in this ‘golden’ state: no drift or change in its configuration, no outdated libraries, no insecure ports, and so on. Being able to identify configuration drift as soon as it happens, down to which account triggered the change, is a shortcut to correcting the drift and returning your image to its ideal state.
As with the golden image, the engine and orchestrator must be secured. Container infrastructure is not like other IT infrastructure, and with changes in technology come new opportunities for nefarious actors to exploit weaknesses – ones that may not even have been present in previous technologies.
Containers, housed in pods in Kubernetes, can be vulnerable to rogue processes: portions of code that can gain access to multiple containers or images. Your teams should have the ability to scan at any time to ensure that pods, nodes and namespaces are as expected and, more importantly, are free of any vulnerabilities.
Combined with this, developers should consider using containers that can run applications without root permissions. Many container services run as the privileged root user by default and execute applications this way, regardless of whether they need privileged access or not. This is bad practice, and any compromised container could then hand root access to an attacker. There are ways to run rootless containers, or to deploy containers with non-privileged users, which mitigates this weakness. This can be a large change for developers and it should not be undertaken lightly or haphazardly.
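As a minimal sketch of what this looks like in a Kubernetes pod spec (the pod name and image below are placeholders), the securityContext fields can require a non-root user and drop privileges:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-nonroot                         # hypothetical example pod
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0   # placeholder image reference
      securityContext:
        runAsNonRoot: true                  # refuse to start if the image would run as root
        runAsUser: 10001                    # arbitrary unprivileged UID
        allowPrivilegeEscalation: false     # block setuid-style escalation
        capabilities:
          drop: ["ALL"]                     # drop all Linux capabilities
```

With `runAsNonRoot: true`, the kubelet rejects the container at startup if it would otherwise run as UID 0, so the policy is enforced even if the image itself defaults to root.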
Added to this, credentials and other sensitive information should not be stored in config files, but rather encrypted and stored elsewhere. K8s has Kubernetes Secrets, an object built for the express purpose of removing confidential data from application code. But even Secrets are stored unencrypted by default (only base64-encoded) and need encryption at rest enabled to prevent unintended access.
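Encryption at rest for Secrets is enabled on the Kubernetes API server via an EncryptionConfiguration file, sketched below (the base64 key is a placeholder, not a real key):

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets                    # apply encryption to Secret objects
    providers:
      - aescbc:                    # encrypt new and updated Secrets with AES-CBC
          keys:
            - name: key1
              secret: MDEyMzQ1Njc4OWFiY2RlZjAxMjM0NTY3ODlhYmNkZWY=   # placeholder 32-byte key, base64-encoded
      - identity: {}               # fall back to plaintext for reading pre-existing data
```

The API server is then started with the `--encryption-provider-config` flag pointing at this file; existing Secrets are only re-encrypted once they are rewritten.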
While orchestrators like K8s have certain security features and can be improved by developing in the right way, they are not wide ranging enough to cover all of the vulnerabilities and attack surfaces that are present. Extending your container security with a platform that can cover the gaps in native orchestrator security is essential. This is where clear and defined Kubernetes Security Posture Management (KSPM) combined with a strong Cloud Native Application Protection Platform (CNAPP) is necessary.
KSPM is focused on delivering the specialised insight required to secure Kubernetes to the highest level, and CNAPP extends that protection from the container orchestrator down to the infrastructure upon which it runs. At the same time, CNAPP enables the same best practices, vulnerability assessment and management to be used across environments, and through the phases of development, testing, production, backup and so on.
KSPM can tell you whether your namespaces or pods are at risk, and whether they and your network security follow best practices. CNAPP ensures that vulnerabilities and exploits are discovered and remediated as soon as possible, regardless of whether they’re found in your containerised development environment or your production vCenter or AWS apps.
At this point the development pipeline is almost complete and the container is ready for deployment. But before container images are pushed into production there is another opportunity to check that the workload is secure and free of vulnerabilities.
Once an image is deployed into production, any vulnerabilities it contains are exposed. New code, repositories, dependencies or additional tweaks during the development process may have introduced new vulnerabilities. The original image may have been clean, but can that be fully guaranteed, especially given the speed at which new vulnerabilities are discovered?
The ability to perform lightning-fast checks for each image, as part of the deployment process, is a huge boon to the security of your containerised workloads. This scanning must also be flexible enough to ignore any false positives, or issues that have already been signed off by the security team. For a step-by-step guide on how to do this using Runecast, see this article. These checks can be used to block vulnerable images from deploying. It is much easier to (proactively) block an image and fix the vulnerabilities than to allow the image to go into production and (reactively) deal with the ramifications on your live systems.
Automating this process, while keeping the full scan reports available for further investigation, means it takes place as quickly as possible yet retains the option for deep interrogation of the vulnerabilities found inside each image.
Successful deployment of a ‘clean’ container is not the end of container security. Containers are useful because they are scalable and non-persistent, quickly replaced or added to when required. Because of this, the underlying infrastructure, network and storage must be configured to work most efficiently with containers, and in the most secure way.
It may seem counterintuitive to separate and segment the network your containers are running on when one of the advantages of containers is that they are so portable. But by limiting crossover between control plane components and nodes, by putting them on a separate network, you limit the potential for damage a malicious actor could do.
Some best practices for container networks carry over from general networking principles: using explicit deny rules in network policy, for example, and using role-based access control (RBAC) so that only certain privileged users have access to control plane nodes in Kubernetes.
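As a minimal sketch of the ‘explicit deny’ practice in Kubernetes (the namespace name is a placeholder), a NetworkPolicy that selects every pod and allows nothing blocks all traffic by default:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: app-namespace     # placeholder namespace
spec:
  podSelector: {}              # empty selector matches all pods in the namespace
  policyTypes:
    - Ingress
    - Egress                   # no ingress or egress rules listed, so all traffic is denied
```

Specific allow rules for known, required flows are then layered on top of this baseline, so anything not explicitly permitted stays blocked.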
In the same vein as encrypting Kubernetes Secrets, access to the Kubernetes etcd server should be limited to only what is necessary, not granted to all users and accounts.
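At the API level, the same least-privilege idea can be sketched with an RBAC Role that grants read-only access to Secrets to a single service account (all names here are placeholders):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: secret-reader          # placeholder role name
  namespace: app-namespace     # placeholder namespace
rules:
  - apiGroups: [""]            # "" is the core API group, where Secrets live
    resources: ["secrets"]
    verbs: ["get", "list"]     # read-only; no create, update or delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-secrets
  namespace: app-namespace
subjects:
  - kind: ServiceAccount
    name: app-sa               # placeholder service account
    namespace: app-namespace
roleRef:
  kind: Role
  name: secret-reader
  apiGroup: rbac.authorization.k8s.io
```

Every other account in the namespace is denied access to Secrets by default, since Kubernetes RBAC is deny-by-default and only the bound subject receives the role's permissions.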
There are many monitoring and alerting systems for container security, some more capable than others. As part of a ‘shift-left’ approach, container security must be practised throughout the development process, especially in a DevOps environment, where the process is continuous and each additional piece of scanning or monitoring software must not slow it down.
A CNAPP solution goes beyond a monitoring and alerting system, providing the ability to audit, anticipate and mitigate security risks, whether on-prem, hybrid or multi-cloud, on virtualised or containerised workloads.
These systems are essential, as they automate key tasks, removing workload from developers and administrators. AI and automation can ensure that containers always conform to strict best practices and that whenever those best practices are updated the corresponding checks and scans are updated too.
A CNAPP gives the administrator a holistic view of container security, enabling them to scan images, engines and infrastructure in one view. CNAPPs combine the abilities to scan and alert for vulnerabilities with the shift-left philosophy of testing early and often.
Runecast is a cloud native application protection platform that provides Best Practice Analysis, Configuration Drift Management, Hardware Compatibility checks, Proactive Issue Prevention and Remediation, Security Compliance Checks and much more. Runecast covers all your workloads, from the containers running on Kubernetes, to Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), VMware, operating systems and other commonly integrated technologies.
Our customers report time savings of 75-90% with almost instant ROI – and almost no learning curve (ideal for dealing with skills shortages). Runecast enables teams to go from reactive to proactive, and get back to the value-adding activities that they want to be doing.
Take your container security to the next level with Runecast