How to Run Serverless Functions on Kubernetes with Knative

Updated: January 30, 2024 By: Guest Contributor

Introduction

Serverless architecture has emerged as a powerful way to deploy applications without managing servers. Kubernetes has become the go-to solution for container orchestration, but it does not provide serverless capabilities out of the box. Enter Knative, a Kubernetes-based platform that provides a set of components to build, deploy, and manage modern serverless workloads. In this tutorial, we’ll walk through setting up Knative on Kubernetes and deploying serverless functions.

Setting Up Your Kubernetes Cluster

Before installing Knative, make sure you have a Kubernetes cluster running. You can set one up on a cloud provider or run a local cluster with tools like Minikube or kind. Once your cluster is ready, check that it’s operational with:

kubectl get nodes

This should return a list of the available nodes in your cluster.

Installing Knative

Knative provides two main components: Serving and Eventing. Before installing them, install Istio, the service mesh that Knative uses here for routing and managing traffic:

istioctl install --set profile=demo

Monitor the installation progress with:

kubectl get pods --namespace istio-system

Once Istio is up and running, apply the Knative Serving and Eventing CRDs and the core manifests from the Knative releases:

kubectl apply -f https://github.com/knative/serving/releases/download/v0.26.0/serving-crds.yaml
kubectl apply -f https://github.com/knative/serving/releases/download/v0.26.0/serving-core.yaml
kubectl apply -f https://github.com/knative/eventing/releases/download/v0.26.0/eventing-crds.yaml
kubectl apply -f https://github.com/knative/eventing/releases/download/v0.26.0/eventing-core.yaml

Check that the Knative components are running with:

kubectl get pods --namespace knative-serving
kubectl get pods --namespace knative-eventing

All pods should eventually reach the Running state.

Writing Your First Serverless Function

With Knative installed, you’re ready to deploy serverless functions. Here’s a simple Hello World example in Python:

import os

from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello World!'

if __name__ == '__main__':
    # Knative tells the container which port to listen on via the PORT
    # environment variable (default 8080); bind to all interfaces so the
    # service is reachable from inside the cluster.
    port = int(os.environ.get('PORT', 8080))
    app.run(host='0.0.0.0', port=port)

Save this file as helloworld.py. Create a Dockerfile to containerize this application:

FROM python:3.7-slim
WORKDIR /app
COPY . /app
RUN pip install Flask==1.1.2
CMD ["python", "helloworld.py"]

Build and push the image to a registry that your Kubernetes cluster can access, for example:

docker build -t <image-registry>/helloworld-python:v1 .
docker push <image-registry>/helloworld-python:v1
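You can also sanity-check the handler locally, without any container, using Flask’s built-in test client. The snippet below inlines the same app from helloworld.py so it is self-contained:

```python
# Sanity-check the Hello World handler with Flask's test client
# (no real server or container needed). Mirrors helloworld.py.
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello World!'

with app.test_client() as client:
    resp = client.get('/')
    print(resp.status_code, resp.get_data(as_text=True))  # 200 Hello World!
```

If this prints a 200 response with the expected body, the application logic is sound before you spend time on the image build.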

Deploying the Function with KNative

Next, create a Knative Service definition in a file named helloworld-service.yaml:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld-python
  namespace: default
spec:
  template:
    spec:
      containers:
        - image: <image-registry>/helloworld-python:v1

Replace <image-registry> with the registry path of your Docker image. Use kubectl to deploy the service:

kubectl apply -f helloworld-service.yaml

In a few moments, Knative will create a route, a configuration, and a new revision for your service. You can check the status with:

kubectl get ksvc

The READY column should show True, and the URL column shows the address at which the service is reachable.

Scaling and Managing Workloads

One of Knative’s key features is its ability to automatically scale workloads, including scaling down to zero when idle. Explore the autoscaling behavior by simulating traffic to your service (substitute the URL reported by kubectl get ksvc):

for i in {1..100}; do curl http://helloworld-python.default.example.com; done

Observe how Knative scales the number of pods up and down based on demand, thanks to its built-in autoscaler. Use the following command to see the number of replicas:

kubectl get pods
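The autoscaler’s behavior can be tuned per revision through annotations on the Service template. Here is a sketch (the annotation keys are Knative’s standard autoscaling settings; the particular values are illustrative):

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld-python
spec:
  template:
    metadata:
      annotations:
        # Keep at least one pod around (disables scale-to-zero) and cap at five
        autoscaling.knative.dev/min-scale: "1"
        autoscaling.knative.dev/max-scale: "5"
        # Target roughly 10 concurrent requests per pod before scaling out
        autoscaling.knative.dev/target: "10"
    spec:
      containers:
        - image: <image-registry>/helloworld-python:v1
```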

Advanced Usage: Event-Driven Architecture

Knative Eventing enables you to build an event-driven architecture. Here’s how to set up a simple event source that emits events to your service.

Create a PingSource that sends an event to the service every minute:

apiVersion: sources.knative.dev/v1beta2
kind: PingSource
metadata:
  name: hello-events
spec:
  schedule: "*/1 * * * *"
  jsonData: '{"message": "Hello world!"}'
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: helloworld-python

Save this as ping-source.yaml and apply it with kubectl to create the PingSource:

kubectl apply -f ping-source.yaml
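The PingSource delivers each event to the service as an HTTP POST with a JSON body and CloudEvents attributes in Ce-* headers. The Hello World app above only answers GET requests, so a sketch of a handler that also accepts these POSTs might look like the following (the Ce-Id header name follows the CloudEvents HTTP binding; the rest is illustrative):

```python
# Sketch: a handler that also accepts event deliveries from PingSource.
# Events arrive as HTTP POSTs with a JSON body; CloudEvents attributes
# such as the event id are carried in Ce-* headers.
from flask import Flask, request

app = Flask(__name__)

@app.route('/', methods=['GET', 'POST'])
def handler():
    if request.method == 'POST':
        event_id = request.headers.get('Ce-Id', 'unknown')
        payload = request.get_json(silent=True) or {}
        # In the cluster this shows up in the pod logs
        print(f"Received event {event_id}: {payload}")
        return '', 202
    return 'Hello World!'

# Local smoke test using Flask's test client
with app.test_client() as client:
    resp = client.post('/', json={'message': 'Hello world!'},
                       headers={'Ce-Id': '123'})
    print(resp.status_code)  # 202
```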

Debugging and Monitoring

Monitor and troubleshoot your Knative services by checking the application container’s logs with:

kubectl logs --selector=serving.knative.dev/service=helloworld-python -c user-container --tail=50

Conclusion

This tutorial has outlined the basic steps to get started with Knative on Kubernetes. We’ve demonstrated how to deploy and manage serverless functions, from a simple HTTP Hello World service to a periodic, event-driven workload with Knative Eventing. From here, you can explore features such as traffic splitting between revisions, custom autoscaling settings, and additional event sources.