Kubernetes is a powerful container orchestration platform for automating the deployment, scaling and management of applications. Kubernetes environments, however, are still built around long-running servers. Although some consider it a great server-decoupled tool, Kubernetes does not integrate naturally with Serverless applications. In Serverless applications, events are the main communication mechanism: sources such as API Gateways, databases or data streams emit events, while other infrastructure components, such as functions, react to them to fulfill a given business need. And yet, we can leverage the built-in power of Kubernetes to integrate with Serverless tools too.
In this blog I will introduce a simple yet powerful, effective and secure way to integrate Lambda with an existing Kubernetes environment without any code changes at all. With such an integration, we can easily incorporate many cloud products into our Kubernetes environment.
[NOTE: This blog assumes you have a working knowledge of Python, AWS Lambdas and Kubernetes.]
It All Starts with a Service
Imagine you have a server application running in your cluster. The service once received requests directly from the outside world, but now needs to be integrated with other cloud data streams (SQS, for example). We wish to keep the service as it is, because it works well in its current form, and adding a new data stream consumer layer (an SQS listener) would increase the code's complexity.
A possible solution could be adding a separate microservice to consume the data stream. This is a suitable solution, especially in a Kubernetes environment. However, it complicates the request processing flow and requires yet another service to maintain. In addition, we do not want our service to constantly poll the data stream. Existing cloud solutions already let us work in an on-push fashion by triggering a Lambda function.
In our case, we have a simple service inside an EKS cluster that handles HTTP requests. It is managed by a K8S Deployment and exposed to the user through a Service (see diagram below).
However, we now have a new source of requests in our cloud environment (SNS, SQS, etc.). We can handle these requests with a Lambda function that forwards them to our service; this Lambda function can be triggered by any event source AWS supports. Our service now needs to receive incoming events from this event handler unit, the Lambda.
Our Lambda should be able to perform Kubernetes actions inside the cluster so we can pass messages to the service. In our case, the Lambda needs to be able to send HTTP requests to our Kubernetes service. The following solution describes how to pass Lambda events as an HTTP request to the Kubernetes service without introducing any external endpoints.
Let’s get to work!
Our simple service exposes a POST /hello endpoint which receives events. It responds with a greeting to the user:

curl -X POST simple-http-service.default:8080/hello \
  -H "Content-Type: application/json" \
  --data '{"name": "Liav", "age": 24}'

{"message":"Hello world from Liav","event":{"name":"Liav","age":24}}
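For reference, here is a minimal sketch of such a /hello handler, using only the Python standard library (this is an assumption for illustration; the actual implementation of the original service is not shown in this post):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class HelloHandler(BaseHTTPRequestHandler):
    """Toy stand-in for simple-http-service's POST /hello endpoint."""

    def do_POST(self):
        if self.path != "/hello":
            self.send_error(404)
            return
        # Read the JSON event from the request body
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length))
        # Greet the user and echo the event back, as in the curl example
        body = json.dumps({
            "message": f"Hello world from {event['name']}",
            "event": event,
        }).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To serve on the cluster port used in this post:
# HTTPServer(("0.0.0.0", 8080), HelloHandler).serve_forever()
```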
Now we want to control access to our service and limit our service to be accessible solely to the Lambda. On one hand, we do not want to add authentication and authorization layers or manage certificates. On the other hand, we still require a secure communication channel.
Access Control with Kubernetes API
Kubernetes services and pods have a proxy API, which enables us to send HTTP requests to them through the raw Kubernetes API, without exposing an external endpoint. To achieve this, we are going to set our Kubernetes service's type to ClusterIP, which does not expose an external endpoint. The Lambda will then use the proxy API to pass its events to the service through the REST API.
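Concretely, the API server proxies requests to an in-cluster service under a well-known path. A small sketch of how that path is built for our service:

```python
# The Kubernetes API server proxies requests to an in-cluster service under
# /api/v1/namespaces/<namespace>/services/<name>:<port>/proxy/<path>,
# so the service itself never needs an external endpoint.
def service_proxy_path(namespace: str, service: str, port: int, path: str) -> str:
    return f"/api/v1/namespaces/{namespace}/services/{service}:{port}/proxy/{path}"

print(service_proxy_path("default", "simple-http-service", 8080, "hello"))
# -> /api/v1/namespaces/default/services/simple-http-service:8080/proxy/hello
```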
---
apiVersion: v1
kind: Service
metadata:
  name: simple-http-service
  namespace: default
  labels: &ServicesLabels
    app: simple-http-service
spec:
  type: ClusterIP
  ports:
    - port: 8080
      targetPort: 8080
      protocol: TCP
  selector: *ServicesLabels
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: simple-http-service
  namespace: default
  labels: &ServicesLabels
    app: simple-http-service
spec:
  replicas: 1
  selector:
    matchLabels: *ServicesLabels
  template:
    metadata:
      labels: *ServicesLabels
    spec:
      containers:
        - name: simple-service
          image: "liavyona09/simple-service:latest"
          imagePullPolicy: Always
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP
The service is now an internal endpoint only available within the cluster.
And yet, our service has no authentication and authorization set up. We are now going to leverage the cluster's native authorization layer for our service and our Lambda.
EKS Authentication and Authorization Mechanism
To be able to send proxy requests to the service, our Kubernetes identity needs the services/proxy permission. But before we can grant our Lambda's Kubernetes identity this kind of permission, we need to create that identity.
When working with EKS, the aws-auth ConfigMap in the kube-system namespace is in charge of mapping AWS users and Identity and Access Management (IAM) roles to Kubernetes identities.
By editing this ConfigMap, we can map our new Lambda's IAM role to a Kubernetes user.
First, we need to create an IAM role for our Lambda with the following policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "sts:GetCallerIdentity",
        "eks:DescribeCluster",
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "*"
    }
  ]
}
We then add the new role to the aws-auth ConfigMap under mapRoles:

mapRoles: |
  - rolearn: arn:aws:iam::ACCOUNT:role/ROLE
    username: lambda-user
Now, we’d like our Lambda to be able to send proxy requests to each service in the cluster:
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: lambda-user-cluster-role
rules:
  - apiGroups: [""]
    resources: ["services", "services/proxy"]
    verbs: ["get", "watch", "list", "create"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: lambda-user-cluster-role-binding
subjects:
  - kind: User
    name: lambda-user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: lambda-user-cluster-role
  apiGroup: rbac.authorization.k8s.io
We gave our lambda-user the required permission to send proxy HTTP requests to any service in our Kubernetes cluster.
Finally, by assuming the IAM role of our Lambda we can perform proxy requests to our service and pass events to it for processing.
Voilà.
Lambda’s Magic — Forwarding Events to a Kubernetes Service
OK, you are not here to copy/paste the solution. You want to understand what our Lambda actually does.
The steps are as follows:
1. Use boto3's describe_cluster API to get the cluster's endpoint and CA certificate
2. Generate an EKS token using the STS client (see AWS IAM Authenticator for Kubernetes)
3. Create a Kubernetes configuration from the endpoint, CA certificate and token, using the Kubernetes Python library
4. Send a proxy HTTP request to the configured service
And that’s it. Our handler function just needs to implement this flow.
[NOTE: Be mindful to configure this Lambda's environment variables, because some have no default value. This Lambda is configurable in such a way that it can perform any type of request against any Kubernetes service in any EKS cluster.]
Kubernetes and AWS Lambda — A Serverless Love Story
Our architecture now looks as follows:
All we need to do now is pack our Lambda's code and create a new Lambda function. After we do that, we will have something like this:
When we trigger the Lambda with the following event we get:
{"name": "Liav", "age": 24}
It worked! We can also see our service’s logs:
In conclusion, I want to raise a few points:
- With Kubernetes authentication and authorization, we managed to connect our service to a new data stream source without any additional security effort in our service development process. That said, we can further raise the solution's security level, for example by encrypting requests with a KMS key.
- We can place our Lambda in the same VPC as the EKS cluster. Doing so reduces latency between the Lambda and the service and enhances security by keeping traffic in private subnets.
- Even though this blog focuses on HTTP proxying, we are not limited to proxy requests: the Lambda can easily be adapted to perform any other Kubernetes action, as long as its IAM role and Kubernetes user are authorized to do so.