In this shot, we will learn how to use Kyverno (https://kyverno.io/) to enforce some best practices for your EKS (https://aws.amazon.com/eks/) cluster.
For those not familiar, Kyverno is a Kubernetes native policy engine that aims to make your life easier when managing clusters.
To learn more, you can read my previous answer on Kyverno, where we discuss the project and its internals in detail.
EKS best practices recommend using separate IAM roles for different use cases. For example, you should have separate IAM roles for configuring objects in the dev and prod environments.
The problem with this is: how do you make sure that the IAM role that has permission for the dev environment doesn’t accidentally create objects in the production environment?
If you have the roles configured properly, it would obviously not allow this to happen, but with Kyverno, you can fool-proof this and make sure that if someone does try this, it gets reported.
We are going to write a policy for Kyverno to check if the object created in a namespace (dev or prod) has the right IAM role specified in its annotations. Let’s begin!
The simplest way to install Kyverno on your cluster is by running:
kubectl create -f https://raw.githubusercontent.com/kyverno/kyverno/main/definitions/release/install.yaml
If you visit the Kyverno documentation, you’ll see ways to install it using Helm, as well as ways to customize your installation of Kyverno.
Before we create the actual Kyverno policy, let’s first create a ConfigMap that will store the IAM roles for the dev and prod environments. This can be done very simply, using:
kind: ConfigMap
apiVersion: v1
metadata:
  name: ns-roles-dictionary
  namespace: kyverno
data:
  prod: "arn:aws:iam::123456789012:role/prod"
  dev: "arn:aws:iam::123456789012:role/dev"
Now, to the fun part: creating our Kyverno policy. Let’s first look at the policy YAML file. If this is your first time seeing a Kyverno policy, don’t worry at all, because one of the most beautiful things about Kyverno is how intuitive it is – it follows a very similar structure to the YAML files for Kubernetes objects.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: deployment-valid-role
  annotations:
    policies.kyverno.io/category: Security
    policies.kyverno.io/description: Rules to enforce valid roles, based on namespace-role dictionary
spec:
  validationFailureAction: enforce
  rules:
  - name: validate-role-annotation
    context:
    - name: ns-roles-dictionary
      configMap:
        name: ns-roles-dictionary
        namespace: kyverno
    match:
      resources:
        kinds:
        - Deployment
    preconditions:
    - key: "{{ request.object.metadata.namespace }}"
      operator: In
      value: ["prod", "dev"]
    - key: "{{ request.object.spec.template.metadata.annotations.\"iam.amazonaws.com/role\" }}"
      operator: NotEquals
      value: ""
    validate:
      message: "Annotation iam.amazonaws.com/role \"{{ request.object.spec.template.metadata.annotations.\"iam.amazonaws.com/role\" }}\" is not allowed for the \"{{ request.object.metadata.namespace }}\" namespace."
      deny:
        conditions:
        - key: "{{ request.object.spec.template.metadata.annotations.\"iam.amazonaws.com/role\" }}"
          operator: NotIn
          value: "{{ \"ns-roles-dictionary\".data.\"{{ request.object.metadata.namespace }}\" }}"
Not as scary as you expected, right? Now, let’s go through it line by line.
Let’s begin from the spec section, since everything before that is standard boilerplate. The first thing in the spec section is validationFailureAction, which we have set to enforce. This specifies what should happen if an object violates the policy. Setting it to enforce ensures that the violating object does not get created, whereas setting it to audit allows the object to be created but reports the violation.
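For example, while first rolling a policy out, you might run it in audit mode and switch to enforce only once you are confident it behaves as expected. A minimal sketch of the relevant fragment (same policy, different action):

```yaml
spec:
  # Report violations in policy reports instead of blocking admission.
  validationFailureAction: audit
```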
Each Kyverno policy must have at least one rule, which is what we define next. After specifying the name of the rule, we reference the ConfigMap we created earlier using the context key. This makes it easy to refer to values from the ConfigMap in our Kyverno policy, as you will see later.
Next up, we specify the Kubernetes objects we want this policy to act on under the resources key. Kyverno offers a lot of control here. For example, you can match Deployments, but add an exclude block that tells Kyverno to skip some specific Deployments. Pretty cool, right?
match:
  resources:
    kinds:
    - Deployment
exclude:
  clusterRoles:
  - cluster-admin
This example matches all Deployments, excluding those created using the cluster-admin ClusterRole.
After specifying what resources we want the policy to act on, we specify some preconditions. Think of these as custom filters that give you more control over when the policy should be applied on the Kubernetes objects you select using the Match and Exclude blocks.
The first precondition states that the policy should not be applied if the Deployment is created in a namespace other than dev or prod. The second precondition says that if the Deployment object has no annotation specifying an IAM role, the policy must not be applied. You may want to remove this second precondition, depending on your particular use case.
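To make the preconditions concrete, here is a hypothetical Deployment (the name and image are illustrative, not from the policy itself) that would pass both preconditions and then be validated against the dictionary:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-app          # hypothetical name, for illustration only
  namespace: dev            # precondition 1: namespace is dev or prod
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sample-app
  template:
    metadata:
      labels:
        app: sample-app
      annotations:
        # precondition 2: the role annotation is present and non-empty;
        # this value matches the "dev" entry in ns-roles-dictionary,
        # so the Deployment would be admitted.
        iam.amazonaws.com/role: "arn:aws:iam::123456789012:role/dev"
    spec:
      containers:
      - name: app
        image: nginx:1.25
```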
Once we have fine-grained control over which objects this policy applies to, we specify what the policy should do. Kyverno policies can mutate, validate, or generate Kubernetes objects. This policy is going to validate the IAM roles, so we specify the validate block. In the message key, we specify the message shown when an object violates the policy. The next lines of the policy say that the request should be denied if the key is not present in the value, that is, if the value of the iam.amazonaws.com/role annotation in the selected Deployment does not correspond to the correct value for that namespace in our ConfigMap.
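If it helps to see the whole evaluation in one place, the rule’s logic is roughly equivalent to this plain-Python sketch (the function and variable names are illustrative, not Kyverno APIs):

```python
# The ConfigMap data from earlier in the post.
ns_roles_dictionary = {
    "prod": "arn:aws:iam::123456789012:role/prod",
    "dev": "arn:aws:iam::123456789012:role/dev",
}

def is_allowed(namespace: str, role_annotation: str) -> bool:
    # Precondition 1: the policy only applies to the dev and prod namespaces.
    if namespace not in ("prod", "dev"):
        return True  # policy does not apply; the object is admitted
    # Precondition 2: skip Deployments that carry no role annotation at all.
    if role_annotation == "":
        return True
    # Deny condition: the annotation must match the namespace's role
    # in the dictionary, otherwise the request is denied.
    return role_annotation == ns_roles_dictionary[namespace]
```

So a Deployment in the dev namespace annotated with the prod role would be rejected, which is exactly the mix-up the policy is designed to catch.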
And that is it. Yes, it is this simple to create policies in Kyverno! I hope this post was able to show you how powerful, yet easy, it is to use Kyverno. If you’re interested in knowing more, do check out the Kyverno documentation. If you have any doubts or feedback, feel free to join the #kyverno channel on the Kubernetes Slack to talk to the maintainers and other community members!