Kubernetes - Is it possible to deploy containers serverless?

2/15/2019

While architecting an application, I have two constraints:

  1. I have to use Microservice architecture
  2. I have to deploy using Kubernetes

I was thinking of a serverless deployment because scalability and availability are the main drivers for my application. As far as I know, a serverless deployment usually means purchasing "Functions as a Service" (FaaS) from a service provider, with no way to manage the internals of the deployment. I wonder if I can use Kubernetes to control the deployment even when I deploy serverless.

I am a newbie in this area. Please point out anything I am missing.

-- Sazzad Hissain Khan
architecture
containers
kubernetes
microservices
serverless-architecture

2 Answers

2/15/2019

Disclaimer: I work on that project

Have you taken a look at Knative?

Serverless on k8s is very much what Knative does. It extends Kubernetes through CRDs and provides a more app/service-developer-friendly interface, with autoscaling, config/route management, and a growing list of event sources. Take a look.
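To give a rough idea of what that looks like in practice, here is a minimal sketch of a Knative Service manifest. The service name, annotation values, and sample image are placeholders, and the exact apiVersion depends on the Knative release you install:

```yaml
# Sketch of a Knative Service: a single CRD that handles deployment,
# routing, and request-driven autoscaling (including scale to zero).
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello                                   # placeholder name
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/minScale: "0"   # allow scale to zero when idle
        autoscaling.knative.dev/maxScale: "10"  # cap on replicas
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go  # sample image
          env:
            - name: TARGET
              value: "World"
```

Applying a manifest like this with kubectl gives you a URL-addressable service whose replica count follows request load, which is the serverless-on-Kubernetes experience described above.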

-- mchmarny
Source: StackOverflow

2/15/2019

They are different concepts, and you need to be aware of a few things.

Kubernetes

Kubernetes is a container orchestrator, meaning you can manage your containers at large scale: deployment, rollback, load balancing, and so on. With Kubernetes you are bound to the limits of your cluster (the VMs/nodes it runs on). Say you have 10 nodes; Kubernetes will manage all your containers within that cluster. You can scale the nodes based on your requirements, and Kubernetes will manage workloads within those nodes. This is the most common and proven approach to microservices.
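As a minimal illustration of that model, a plain Kubernetes Deployment looks roughly like this; the service name and image are placeholders, and the replicas run on whatever nodes your cluster already has:

```yaml
# Sketch of a standard Deployment: Kubernetes schedules the replicas
# onto the existing cluster nodes and handles rollout/rollback for you.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-service                  # placeholder microservice name
spec:
  replicas: 3                           # fixed replica count within the cluster
  selector:
    matchLabels:
      app: orders-service
  template:
    metadata:
      labels:
        app: orders-service
    spec:
      containers:
        - name: orders-service
          image: example.com/orders-service:1.0   # placeholder image
          ports:
            - containerPort: 8080
```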

Serverless - Functions as a service

This is a relatively new concept, and building microservices solely on it is not recommended; it has many limitations. Functions as a Service (serverless) is generally used to complement a microservices architecture. Functions are meant to be task-based, e.g. sending an email or processing a file, where you don't need a service up and running all the time.

Serverless and Kubernetes

If you want to run your own serverless functions in your own environment, another option is the OpenFaaS framework, using Kubernetes as the functions' runtime. This approach is quite different and fairly complex; you may not need it.
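To make that a bit more concrete, an OpenFaaS function is typically described in a small stack file and deployed onto the Kubernetes cluster with the faas-cli. The function name, handler path, gateway address, and image below are hypothetical:

```yaml
# Sketch of an OpenFaaS stack.yml; deploy with `faas-cli up -f stack.yml`
# once the OpenFaaS gateway is running in your Kubernetes cluster.
provider:
  name: openfaas
  gateway: http://127.0.0.1:8080          # placeholder gateway address
functions:
  send-email:                             # hypothetical task-based function
    lang: python3                         # language template for the handler
    handler: ./send-email                 # local folder containing the handler code
    image: example.com/send-email:latest  # placeholder image
```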

Scaling and Kubernetes

There is no silver bullet; there are trade-offs. Kubernetes is the best choice for microservices, and to handle large traffic or spikes you have to size the cluster nodes so that they can carry your load. It also depends on your cloud provider. For example, Microsoft Azure recently introduced Virtual Kubelet: you define your cluster initially (say 5 nodes), and whenever your system gets a spike, virtual nodes (Azure Container Instances) are created for that moment; when traffic is back to normal, those virtual nodes go away and you are back to your normal cluster nodes (5 in this case).
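With AKS virtual nodes (the Virtual Kubelet/ACI integration mentioned above), bursting onto the virtual node is usually opted into per pod via a node selector and a toleration, roughly like this; the pod name and image are placeholders, and the exact labels and taints can differ between setups:

```yaml
# Sketch of a pod that is allowed to burst onto an Azure Container
# Instances virtual node when the regular cluster nodes are full.
apiVersion: v1
kind: Pod
metadata:
  name: burst-worker                      # placeholder name
spec:
  containers:
    - name: worker
      image: example.com/worker:1.0       # placeholder image
  nodeSelector:
    kubernetes.io/role: agent
    type: virtual-kubelet                 # target the virtual node
  tolerations:
    - key: virtual-kubelet.io/provider    # virtual nodes are tainted by default
      operator: Exists
      effect: NoSchedule
```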

Again, you have to assess what you are trying to achieve and architect your solution accordingly.

Hope that helps!

Edit based on the other answers

There is a difference between serverless infrastructure and running your code (FaaS) in a serverless environment.

FaaS

With FaaS (Functions as a Service) you are abstracted from the server and run your code on a serverless runtime. You can host your functions with cloud providers like AWS or Azure; in that case you don't have to worry about any servers underneath, and spikes are handled by the cloud provider. However, if you want to do serverless on Kubernetes (managed by you), you do it with a functions runtime (a FaaS framework). You don't have to worry about runtimes or frameworks, just package your code and run it, but you are still bound to the node limit, so to handle spikes you have to manage the Kubernetes nodes.
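Within that node limit, pod-level spikes are typically absorbed with a HorizontalPodAutoscaler. A sketch follows; the names and thresholds are placeholders, and the autoscaling API version depends on your cluster:

```yaml
# Sketch of an HPA: scales a Deployment's replicas with CPU load,
# but only within the capacity of the nodes you manage yourself.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: orders-service-hpa                # placeholder name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: orders-service                  # placeholder target Deployment
  minReplicas: 2
  maxReplicas: 20
  targetCPUUtilizationPercentage: 70      # scale out above 70% average CPU
```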

Kubernetes Serverless as Infrastructure

This is actually Kubernetes serverless infrastructure, where your Kubernetes cluster is extended by attaching virtual nodes. Now if you get a spike or unexpected traffic, you don't have to worry about your nodes: Kubernetes is intelligent enough to expand onto the virtual nodes for the duration of the spike and shrink back afterwards. You can run fully managed applications or FaaS on this infrastructure. The Virtual Kubelet project is being worked on by Microsoft and AWS to handle this kind of scenario, where you actually get serverless nodes with Kubernetes.

So wherever you are responsible for managing nodes, that is practically not serverless in terms of infrastructure. At the same time, you can use those nodes to run your own FaaS runtime and run multiple instances of different functions. The following 6-minute video explains the difference better than I could:

https://www.youtube.com/watch?v=_GOuP9Q3BqE&list=LLxfaEBq0Fa7eiKokf98ojxA&index=5&t=0s

-- Imran Arshad
Source: StackOverflow