Tomas Odehnal
Kubernetes

Runecast’s latest version, 6.2, brings deeper protection and integration with Kubernetes, enabling scanning of not just clusters but nodes as well. If you’d like to read more about our KSPM capabilities, see this article on the improvements brought by version 6.2.

In this deep-dive article we’ll talk specifically about using Runecast as an admission controller. We leverage the admission controller functionality of Kubernetes and let you use Runecast as a validating admission webhook. This means that your workloads are scanned before they reach the cluster, ensuring they are free from critical vulnerabilities or unpatched security risks.
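Under the hood, the API server POSTs an AdmissionReview object to the webhook for each matching request, and the webhook answers with an allow/deny verdict. A minimal sketch of a deny response is shown below; the message text is illustrative, not Runecast’s exact output:

```json
{
  "apiVersion": "admission.k8s.io/v1",
  "kind": "AdmissionReview",
  "response": {
    "uid": "<uid copied from the incoming AdmissionReview request>",
    "allowed": false,
    "status": {
      "code": 403,
      "message": "Image scan found policy violations"
    }
  }
}
```

When `allowed` is false, the API server rejects the request and surfaces `status.message` back to the user who submitted the workload.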

Our Systems Engineer, Tomas Odehnal, walks you through all the steps below, including how to:

  • deploy Runecast to your cluster,
  • connect the cluster and generate a token,
  • configure the K8s API server,
  • create the webhook configuration,
  • test your setup and ensure it’s all working correctly.

Initial setup

To demonstrate the functionality, we have prepared a cluster with one node hosting the control plane and one node to run the workload.


$ kubectl get nodes
NAME             STATUS   ROLES           AGE    VERSION
k8s-mst01-test   Ready    control-plane   110m   v1.24.3
k8s-wrk01-test   Ready    <none>          109m   v1.24.3

Deploy Runecast into the K8s cluster

We will deploy an instance of Runecast to the cluster using Helm. First, we add the Runecast Helm repository:


$ helm repo add runecast https://helm.runecast.com/charts
"runecast" has been added to your repositories

After that we can deploy Runecast to the new namespace:


$ helm install runecast-analyzer runecast/runecast-analyzer --namespace runecast --create-namespace --set nginx.service.tls.enabled=true
NAME: runecast-analyzer
LAST DEPLOYED: Mon Aug  1 15:23:13 2022
NAMESPACE: runecast
STATUS: deployed
REVISION: 1
NOTES:
To access the Runecast Analyzer application, follow the instructions:
  Run the following command and visit https://127.0.0.1:9080/ to use the application:
     kubectl --namespace runecast port-forward service/runecast-analyzer-nginx 9080

We enabled the secure connection on the service, as the Kubernetes API server requires a secure (TLS) connection to the admission webhook.

Once all the pods are ready, we can move on to the next step.


$ kubectl --namespace runecast get pods
NAME                                               READY   STATUS    RESTARTS   AGE
runecast-analyzer-6fb4b5d7f-w7g6l                  1/1     Running   0          2m20s
runecast-analyzer-imagescanning-5c946df9b7-4lgtf   1/1     Running   0          2m20s
runecast-analyzer-nginx-dcdd9bbfd-r5md8            1/1     Running   0          2m20s
runecast-analyzer-postgresql-6cfbdd49d8-znm67      1/1     Running   0          2m20s

Connect the cluster and generate a token

To access the Runecast UI, we will start the port-forwarding as shown in the helm output. In production you would probably use an Ingress to reach the service from outside the cluster, but for this example let’s settle for localhost access.


$ kubectl --namespace runecast port-forward service/runecast-analyzer-nginx 9080
Forwarding from 127.0.0.1:9080 -> 9443
Forwarding from [::1]:9080 -> 9443

We can now log in to the Runecast UI at https://localhost:9080 with the default credentials:


rcuser : Runecast!

In the initial wizard we select Kubernetes:

We need to create a service account token, a ClusterRole, and a ClusterRoleBinding that grant Runecast sufficient permissions. You can find more details in the user guide: https://docs.runecast.com/system_requirements.html#kubernetes_1.


$ NAMESPACE="kube-system"
$ TOKEN_VALIDITY="8760h"
$ kubectl create serviceaccount runecast-analyzer-scan -n ${NAMESPACE}
serviceaccount/runecast-analyzer-scan created

$ kubectl create clusterrole runecast-analyzer-scan --verb=get,list,watch --resource=nodes,namespaces,pods,replicationcontrollers,serviceaccounts,services,daemonsets.apps,deployments.apps,replicasets.apps,statefulsets.apps,cronjobs.batch,jobs.batch,networkpolicies.networking.k8s.io,podsecuritypolicies.policy,clusterrolebindings.rbac.authorization.k8s.io,clusterroles.rbac.authorization.k8s.io,rolebindings.rbac.authorization.k8s.io,roles.rbac.authorization.k8s.io
clusterrole.rbac.authorization.k8s.io/runecast-analyzer-scan created

$ kubectl create clusterrolebinding runecast-analyzer-scan --clusterrole=runecast-analyzer-scan --serviceaccount=${NAMESPACE}:runecast-analyzer-scan
clusterrolebinding.rbac.authorization.k8s.io/runecast-analyzer-scan created

$ kubectl create token runecast-analyzer-scan --duration=${TOKEN_VALIDITY} -n ${NAMESPACE}
eyJhbGciOiJSUzI1NiIsImtpZCI6Ikt0eklHYXRYbXpxdGdKNXlRQThFcDRHX3JEWEdkbURBZnNtajJsT1NPdFEifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNjkwODk3MTYzLCJpYXQiOjE2NTkzNjExNjMsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJydW5lY2FzdC1hbmFseXplci1zY2FuIiwidWlkIjoiM2U4YzFmZTYtYjgyMi00NTRjLWI5ZGQtYzc1MTMxMzFlMzkzIn19LCJuYmYiOjE2NTkzNjExNjMsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpydW5lY2FzdC1hbmFseXplci1zY2FuIn0.Y8WigT0HgXypwjng1cqO40uIpt-FfeHDy3rsZFaxQ_gNLaj7X7rWrntk_S9RqwSTWPffE7BzHtztlbP3eGIUSq8VLgYiHXKCPxkq7HB3FHRIO1JKusJnUvAxbcIE_CnP8paIVNKJnQF-Y8exTlvyx_uhGmxc25m1haf-tFR0qk68yAdUw5OaJPUZ6oIWElZndkh4MfZBMwMVjdmYHMl-XLRk9o9xz31PeW6wEZJgAaen5fuDSZ_h0zCkwCVtPjYtSCNDkuNKBxJQttpe_-2Q_-uXtw7ORlqfyOvUFQ2JUiktwbAicqItTCnjLIB5L0JtBJqsXn1o1dg4_DP_RapyFg

We will skip selecting a security profile:

We can confirm the default schedule:

The last step of the wizard is to confirm all the settings and run the analysis.

In the UI, navigate to Settings > API Access Tokens and click on Generate API access token.

Copy the token and save it for later.

We can now move on to the next step and configure the webhook authentication on Kubernetes API server.

Configure the webhook authentication on the Kubernetes API server

To set up the control plane to authenticate to the Runecast API, we need to:

  • create a kubeconfig file with the credentials for the Runecast API
  • create an admission configuration file pointing to the kubeconfig file
  • add the --admission-control-config-file parameter to the kube-apiserver manifest, pointing to the admission configuration file, and make sure the kube-apiserver pod can see both configuration files

On the master node, we will create a new directory and later map it as a volume into the kube-apiserver pod. While you could store the files in one of the directories already accessible to kube-apiserver in the default configuration, it’s better to create a separate directory, which can also hold additional files, such as an audit configuration.


$ sudo mkdir /etc/kubernetes/config

Create the kubeconfig file that contains the authentication token. We will use the API access token we generated in the previous chapter and the name of the Runecast nginx frontend service:


$ RUNECAST_TOKEN="b804386f-12e6-491c-ae1a-a64c5fa35f0c"
$ RUNECAST_ADDRESS="runecast-analyzer-nginx.runecast.svc:9080"
$ sudo tee /etc/kubernetes/config/runecast-validating-webhook-kubeconfig.yaml > /dev/null << EOF
apiVersion: v1
kind: Config
users:
- name: '${RUNECAST_ADDRESS}'
  user:
    token: '${RUNECAST_TOKEN}'
EOF

Create the admission configuration file that makes sure the admission controller is able to authenticate using the provided kubeconfig file:


$ sudo tee /etc/kubernetes/config/runecast-admission-configuration.yaml > /dev/null << EOF
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: ValidatingAdmissionWebhook
  configuration:
    apiVersion: apiserver.config.k8s.io/v1
    kind: WebhookAdmissionConfiguration
    kubeConfigFile: "/etc/kubernetes/config/runecast-validating-webhook-kubeconfig.yaml"
EOF

Now modify the kube-apiserver manifest file (on kubeadm-based clusters typically /etc/kubernetes/manifests/kube-apiserver.yaml) to use the admission configuration file. Under the - kube-apiserver line located in .spec.containers.command, we will add the --admission-control-config-file parameter:


...
spec:
  containers:
  - command:
    - kube-apiserver
    - --admission-control-config-file=/etc/kubernetes/config/runecast-admission-configuration.yaml
...

We will also map the new directory into the kube-apiserver pod. In the manifest file, find the .spec.volumes section and add a new mapping of the host directory to the pod:


...
volumes:
- hostPath:
    path: /etc/kubernetes/config
    type: DirectoryOrCreate
  name: etc-kubernetes-config
...

Under the .spec.containers.volumeMounts section, add a new entry to mount the above volume into the kube-apiserver container:


...
volumeMounts:
- mountPath: /etc/kubernetes/config
  name: etc-kubernetes-config
  readOnly: true
...

Lastly, we save the kube-apiserver manifest file and wait for the pod to restart. To verify the new settings are in place, we will grep the running kube-apiserver pod manifest. Once we find the admission-control-config-file parameter in the output, we can continue.


$ sudo kubectl -n kube-system get pods -l component=kube-apiserver -oyaml | grep 'admission-control-config-file'
    - --admission-control-config-file=/etc/kubernetes/config/runecast-admission-configuration.yaml

For more information about the webhook authentication settings, please see the official Kubernetes documentation.

Create the validating webhook configuration

Once the authentication is configured on the control plane, you can create the admission webhook configuration.

The configuration tells the API server for which objects and operations to call a specific validating webhook. The objects can be selected using multiple options. To see the complete list, please check the official documentation.


apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
...
webhooks:
- name: my-webhook.example.com
  rules:
  ...
  objectSelector:
  ...
  namespaceSelector:
  ...
  clientConfig:
  ...
- name: another-webhook.example.com
...

In this example, we will create two webhook configurations:

  • prevent running any workload, created or modified in namespaces labeled runecast-admission-policy=1, that contains critical severity vulnerabilities (policy id 1)
  • prevent running any workload, created or modified in namespaces labeled runecast-admission-policy=2, that contains critical or high severity vulnerabilities with a fix available (policy id 2)

First we will define the rule to select the objects and operations:


rules:
- apiGroups:   ["*"]
  apiVersions: ["v1"]
  operations:  ["CREATE","UPDATE"]
  resources:   ["pods","daemonsets","deployments","replicasets","statefulsets", "replicationcontrollers","cronjobs","jobs"]
  scope:       "Namespaced"

Next, we define the namespaceSelector that matches the labeled namespaces (shown here for the first webhook; the second webhook uses the value '2'):


namespaceSelector:
  matchExpressions:
  - values:
    - '1'
    operator: In
    key: runecast-admission-policy

Now any namespace labeled runecast-admission-policy=1 will match the first webhook, and any namespace labeled runecast-admission-policy=2 will match the second.
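Note that the In operator accepts a list of values, so if you wanted a single webhook to apply to namespaces carrying either label, you could match both values with one selector. This is an illustrative variant, not part of the setup in this article:

```yaml
namespaceSelector:
  matchExpressions:
  - key: runecast-admission-policy
    operator: In
    values:
    - '1'
    - '2'
```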

Lastly, we need to define the webhook endpoint in the clientConfig field. We are running Runecast in the same K8s cluster and will refer to it using the service definition. If Runecast were running outside the configured cluster, we would use the url definition to link to the webhook, but that is out of the scope of this article; please check the integration examples in the Runecast documentation.

First webhook:


...
clientConfig:
  service: 
    namespace: runecast
    name: runecast-analyzer-nginx
    path: /rc2/api/v2/k8s-admission-policy-review/policy/1
    port: 9080
...

Second webhook (note the path):


...
clientConfig:
  service: 
    namespace: runecast
    name: runecast-analyzer-nginx
    path: /rc2/api/v2/k8s-admission-policy-review/policy/2
    port: 9080
...

Additionally, the Kubernetes API server verifies the trust of the webhook certificate. In production, your K8s cluster (and thus the kube-apiserver pod) might already trust the Runecast certificate, either because it was issued by a public CA or by an internal enterprise CA whose trust is already established. In our case we issued a self-signed certificate for Runecast and need to make kube-apiserver trust it. The issuer certificate (which, for a self-signed certificate, is the certificate itself) needs to be set as a base64-encoded string in the caBundle field under clientConfig:


...
clientConfig:
  caBundle: LS0tLS1...
...

You can obtain the string from the secret by running the following command:


$ kubectl --namespace runecast get secret runecast-analyzer-nginx-cert -o jsonpath='{.data.tls\.crt}'
LS0tLS1...

Finally, we have the complete validating webhook configuration and we can apply it to the cluster:


$ cat << EOF | kubectl apply -f -
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: "runecast-validating-webhook"
webhooks:
- name: "deny-critical.runecast.com"
  rules:
  - apiGroups:   ["*"]
    apiVersions: ["v1"]
    operations:  ["CREATE","UPDATE"]
    resources:   ["pods","daemonsets","deployments","replicasets","statefulsets", "replicationcontrollers","cronjobs","jobs"]
    scope:       "Namespaced"
  namespaceSelector:
    matchExpressions:
    - values:
      - '1'
      operator: In
      key: runecast-admission-policy
  clientConfig:
    service: 
      namespace: runecast
      name: runecast-analyzer-nginx
      path: /rc2/api/v2/k8s-admission-policy-review/policy/1
      port: 9080
    caBundle: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURsVENDQW4yZ0F3SUJBZ0lRQm5SUk9PalJ5ek9Wa1FaOWNHU05rREFOQmdrcWhraUc5dzBCQVFzRkFEQXYKTVMwd0t3WURWUVFERXlSeWRXNWxZMkZ6ZEMxaGJtRnNlWHBsY2kxdVoybHVlQzV5ZFc1bFkyRnpkQzV6ZG1NdwpIaGNOTWpJd09EQXhNVE15TXpFeldoY05NalF4TVRFek1UTXlNekV6V2pBdk1TMHdLd1lEVlFRREV5UnlkVzVsClkyRnpkQzFoYm1Gc2VYcGxjaTF1WjJsdWVDNXlkVzVsWTJGemRDNXpkbU13Z2dFaU1BMEdDU3FHU0liM0RRRUIKQVFVQUE0SUJEd0F3Z2dFS0FvSUJBUURrRFNFd1k2ak90SnBEZjR1c3paTWVQMThaWjBtdzRsWmUzK1Y2Rm9uNgpoNnd4SU1BM1NsTTU5NW9IM2hIUU1JWE5uZGhBeW5lOUdxZmVnQk5hVWZuSGw3andnSFl5Q05YaXNON1pld3dTClJLYlZuZlQ4MTgva2YreGlMZVB6dkM4Ulc3OGJGWmVnMGxpeG83ZmdNd25YTG5iVmRYR0V4dVFpWXNmUUQ3bXIKZlFaOG1XWmxpQmVETkJuZndVZlk3bWora0VZYWNnR1I5SVRrRVZuZnRaUEN3T3RJOTl1QzdTbGpYdFNkZ1ZzTAo3QWl0a1A2cTAvRG9zaVN4cThxSjkvTHFUdGEwbUI5ZFZnNk12MWN0SmpvakJkckMvL0ErcTFnS0xPNi9GSXVyCkxoaXcwcFU4QXdSK0dGR1hELy9SQjFEK2ZuOXhZTlpJU3dOcmowOENCUlZKQWdNQkFBR2pnYXd3Z2Frd0RnWUQKVlIwUEFRSC9CQVFEQWdXZ01CMEdBMVVkSlFRV01CUUdDQ3NHQVFVRkJ3TUJCZ2dyQmdFRkJRY0RBakFNQmdOVgpIUk1CQWY4RUFqQUFNR29HQTFVZEVRUmpNR0dDRjNKMWJtVmpZWE4wTFdGdVlXeDVlbVZ5TFc1bmFXNTRnaUJ5CmRXNWxZMkZ6ZEMxaGJtRnNlWHBsY2kxdVoybHVlQzV5ZFc1bFkyRnpkSUlrY25WdVpXTmhjM1F0WVc1aGJIbDYKWlhJdGJtZHBibmd1Y25WdVpXTmhjM1F1YzNaak1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQmhFSDBsRm96SwpFQnU1aHQ2ejd5VkRpKzQ1YkZlR3EyeVlOVUZSWTl5ZzdaVVVyTkcyOTFEeFduTnNJL3NIUTFBckczTlljeHZKCkNGRTBhUU9jZ0RPN1JUdHYwYm1VMkM4NHMya1pyRWZwcksyYTFVU3ZWYzNZRGRPTmVqbFYwcU1ZVS9IdE1ZR20KelBLSkR3VXlWWlJXTHpWSXRrQXJ2QTlvZStXdXptcTFPc1hwUEpEcG5vMjY0a1Ywb3FvYkpoMlhPLzlWcnlsUQpTTzZ4NXp6eG1jdE5WWWM4K1hVajl6dVJqSExNK2FOV3h6T3FJaE81azdmWFNFNkxpL1BreFpYMksyZkpKOUZFCnZNenMxVjdBMWFUOWhuSGtIUTMvNmlsWUU0OGhLWEpQVzRJQkZwMkJhRVZDa0JIeU4xSFhLWEliK3diKzA3Y3AKMUJrRG1ObU4zZmlvCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
  admissionReviewVersions: ["v1", "v1beta1"]
  sideEffects: None
- name: "deny-fixed-critical-and-medium.runecast.com"
  rules:
  - apiGroups:   ["*"]
    apiVersions: ["v1"]
    operations:  ["CREATE","UPDATE"]
    resources:   ["pods","daemonsets","deployments","replicasets","statefulsets", "replicationcontrollers","cronjobs","jobs"]
    scope:       "Namespaced"
  namespaceSelector:
    matchExpressions:
    - values:
      - '2'
      operator: In
      key: runecast-admission-policy
  clientConfig:
    service: 
      namespace: runecast
      name: runecast-analyzer-nginx
      path: /rc2/api/v2/k8s-admission-policy-review/policy/2
      port: 9080
    caBundle: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURsVENDQW4yZ0F3SUJBZ0lRQm5SUk9PalJ5ek9Wa1FaOWNHU05rREFOQmdrcWhraUc5dzBCQVFzRkFEQXYKTVMwd0t3WURWUVFERXlSeWRXNWxZMkZ6ZEMxaGJtRnNlWHBsY2kxdVoybHVlQzV5ZFc1bFkyRnpkQzV6ZG1NdwpIaGNOTWpJd09EQXhNVE15TXpFeldoY05NalF4TVRFek1UTXlNekV6V2pBdk1TMHdLd1lEVlFRREV5UnlkVzVsClkyRnpkQzFoYm1Gc2VYcGxjaTF1WjJsdWVDNXlkVzVsWTJGemRDNXpkbU13Z2dFaU1BMEdDU3FHU0liM0RRRUIKQVFVQUE0SUJEd0F3Z2dFS0FvSUJBUURrRFNFd1k2ak90SnBEZjR1c3paTWVQMThaWjBtdzRsWmUzK1Y2Rm9uNgpoNnd4SU1BM1NsTTU5NW9IM2hIUU1JWE5uZGhBeW5lOUdxZmVnQk5hVWZuSGw3andnSFl5Q05YaXNON1pld3dTClJLYlZuZlQ4MTgva2YreGlMZVB6dkM4Ulc3OGJGWmVnMGxpeG83ZmdNd25YTG5iVmRYR0V4dVFpWXNmUUQ3bXIKZlFaOG1XWmxpQmVETkJuZndVZlk3bWora0VZYWNnR1I5SVRrRVZuZnRaUEN3T3RJOTl1QzdTbGpYdFNkZ1ZzTAo3QWl0a1A2cTAvRG9zaVN4cThxSjkvTHFUdGEwbUI5ZFZnNk12MWN0SmpvakJkckMvL0ErcTFnS0xPNi9GSXVyCkxoaXcwcFU4QXdSK0dGR1hELy9SQjFEK2ZuOXhZTlpJU3dOcmowOENCUlZKQWdNQkFBR2pnYXd3Z2Frd0RnWUQKVlIwUEFRSC9CQVFEQWdXZ01CMEdBMVVkSlFRV01CUUdDQ3NHQVFVRkJ3TUJCZ2dyQmdFRkJRY0RBakFNQmdOVgpIUk1CQWY4RUFqQUFNR29HQTFVZEVRUmpNR0dDRjNKMWJtVmpZWE4wTFdGdVlXeDVlbVZ5TFc1bmFXNTRnaUJ5CmRXNWxZMkZ6ZEMxaGJtRnNlWHBsY2kxdVoybHVlQzV5ZFc1bFkyRnpkSUlrY25WdVpXTmhjM1F0WVc1aGJIbDYKWlhJdGJtZHBibmd1Y25WdVpXTmhjM1F1YzNaak1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQmhFSDBsRm96SwpFQnU1aHQ2ejd5VkRpKzQ1YkZlR3EyeVlOVUZSWTl5ZzdaVVVyTkcyOTFEeFduTnNJL3NIUTFBckczTlljeHZKCkNGRTBhUU9jZ0RPN1JUdHYwYm1VMkM4NHMya1pyRWZwcksyYTFVU3ZWYzNZRGRPTmVqbFYwcU1ZVS9IdE1ZR20KelBLSkR3VXlWWlJXTHpWSXRrQXJ2QTlvZStXdXptcTFPc1hwUEpEcG5vMjY0a1Ywb3FvYkpoMlhPLzlWcnlsUQpTTzZ4NXp6eG1jdE5WWWM4K1hVajl6dVJqSExNK2FOV3h6T3FJaE81azdmWFNFNkxpL1BreFpYMksyZkpKOUZFCnZNenMxVjdBMWFUOWhuSGtIUTMvNmlsWUU0OGhLWEpQVzRJQkZwMkJhRVZDa0JIeU4xSFhLWEliK3diKzA3Y3AKMUJrRG1ObU4zZmlvCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
  admissionReviewVersions: ["v1", "v1beta1"]
  sideEffects: None
EOF

The output of the command will confirm the configuration was applied to the cluster:


validatingwebhookconfiguration.admissionregistration.k8s.io/runecast-validating-webhook created

Once done, we can immediately see it in action.

Verifying the webhook

We create two namespaces and label them accordingly:


$ kubectl create namespace runecast-policy1
namespace/runecast-policy1 created
$ kubectl create namespace runecast-policy2
namespace/runecast-policy2 created
$ kubectl label namespace runecast-policy1 runecast-admission-policy=1
namespace/runecast-policy1 labeled
$ kubectl label namespace runecast-policy2 runecast-admission-policy=2
namespace/runecast-policy2 labeled

Now, in the first namespace, we try to create a pod running the latest nginx image. The image has a number of critical severity vulnerabilities for which no fix is available.


$ kubectl -n runecast-policy1 run nginx-latest --image=nginx:latest
Error from server: admission webhook "deny-critical.runecast.com" denied the request: Image scan in Runecast Analyzer found policy violations: (Rejected by policy 1: 'No critical vulnerabilities').

We were prevented from running the pod. Now we will try to create the same pod in the other namespace, where critical vulnerabilities without an available fix are allowed:


$ kubectl -n runecast-policy2 run nginx-latest --image=nginx:latest
pod/nginx-latest created

The pod was created. We can now try to create an nginx pod with an older image, which violates policy id 2:


$ kubectl -n runecast-policy2 run nginx-1-19 --image=nginx:1.19
Error from server: admission webhook "deny-fixed-critical-and-medium.runecast.com" denied the request: Image scan in Runecast Analyzer found policy violations: (Rejected by policy 2: 'No critical or high severity vulnerabilities with available fix').

Again, we were prevented from running the pod.

To find out more about the image scan results, we return to the Runecast UI and navigate to Image Scanning. The result of each admission is presented as a line in the list, indicated by the Trigger type of K8s admission controller.

When we select a specific scan, a pop-up window opens with the scan details: evaluation result, policy ID, number of vulnerabilities, the list of vulnerabilities, and other useful info.

Clicking on a specific vulnerability reveals its details.

Summary

Using Runecast as an admission controller lets you be confident that your workloads are secure from the very start. This, in turn, lowers the operational overhead for your Security and Operations teams.

Runecast is constantly innovating and adding new features to our platform. To keep up to date with what’s possible, follow us on Twitter, or schedule a demo today.
