Knative on Kubernetes: Serverless without Vendor Lock-in

Previously I shared an article about installing Minikube and deploying apps. You can also use Ubuntu's MicroK8s package, which is easy to work with.

In this article I want to share a serverless approach you can run on top of Kubernetes: Knative.

Pre-reqs:

If your Ubuntu machine runs as a VM, add a port-forwarding rule like the one below in your hypervisor so you can SSH into it:

Protocol: TCP
Host IP: 127.0.1.1
Host Port: 22
Guest IP: 10.0.2.1
Guest Port: 22

Then install the SSH server on the guest and connect from the host:

sudo apt install ssh
ssh -p 22 yourusername@127.0.1.1
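If the Ubuntu box happens to be a VirtualBox VM, the same rule can be added from the host's command line. A sketch (the VM name "ubuntu-vm" and the helper function are hypothetical; the rule mirrors the table above):

```shell
# Hypothetical helper: add a NAT port-forwarding rule to a VirtualBox VM.
# Rule format is name,protocol,host-ip,host-port,guest-ip,guest-port.
add_ssh_forward() {
  vm="$1"
  ${VBOXMANAGE:-VBoxManage} modifyvm "$vm" --natpf1 "ssh,tcp,127.0.1.1,22,10.0.2.1,22"
}

# add_ssh_forward ubuntu-vm
```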

Let's start with Knative:

1- If you enabled MicroK8s while installing Ubuntu you do not need to install it again; otherwise, install it (or verify the installation) with the command below:

sudo snap install microk8s --classic

2- Check microk8s status :

sudo microk8s.status --wait-ready

3- Enable Knative on MicroK8s:

echo 'N;' | microk8s.enable knative

In my case, I first needed to add my account to the microk8s group:

sudo usermod -a -G microk8s mc

or

sudo chown -f -R mc ~/.kube

Log out of the SSH session and reconnect so the new group membership takes effect; then the command below will work:

echo 'N;' | microk8s.enable knative

Installing the relevant components takes quite a while; the output looks like:

istio-1.5.1/
.
.
.
namespace/istio-system created
.
.
.

When the installation finishes:

4- Check out the Knative Serving pods to verify:

microk8s kubectl get pods -n knative-serving

(MicroK8s ships its own kubectl; if you run plain "kubectl get pods -n knative-serving" you will get an error like: "The connection to the server localhost:8080 was refused - did you specify the right host or port?")

5- Check out the Knative Eventing pods to verify:

microk8s kubectl get pods -n knative-eventing

Check out the Istio pods to verify:

microk8s kubectl get pods -n istio-system
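If you script these checks, a small polling helper can wait until every pod in a namespace reports Running. This is my own convenience sketch, not part of MicroK8s or Knative:

```shell
# Poll a namespace until all pods are Running, or give up after N tries.
# Uses microk8s kubectl unless KUBECTL is overridden.
wait_for_pods() {
  ns="$1"
  tries="${2:-30}"            # poll up to 30 times by default
  while [ "$tries" -gt 0 ]; do
    # count pods whose STATUS column is not "Running"
    pending=$(${KUBECTL:-microk8s kubectl} get pods -n "$ns" --no-headers 2>/dev/null \
      | grep -cv ' Running ' || true)
    [ "$pending" -eq 0 ] && return 0
    tries=$((tries - 1))
    sleep 5
  done
  return 1
}

# wait_for_pods knative-serving && echo "knative-serving is up"
```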

6- Install the Knative Operator and related items:

microk8s kubectl apply -f https://github.com/knative/operator/releases/download/v0.20.0/operator.yaml

Verify operator installation:

microk8s kubectl get deployment knative-operator

7- Install Go and the hey load-generator tool:

sudo snap install go --classic
sudo snap install hey

8- Install the AutoScaler HPA add-on:

microk8s kubectl apply -f https://github.com/knative/serving/releases/download/v0.20.0/serving-hpa.yaml

9- Download samples:

git clone https://github.com/knative/docs knativedocs

You can run these samples to understand how to deploy and use Knative. Autoscale sample:

cd knativedocs
microk8s kubectl apply --filename docs/serving/autoscaling/autoscale-go/service.yaml
microk8s kubectl get ksvc autoscale-go
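For context, service.yaml in that sample defines a Knative Service whose annotation sets a concurrency target for the autoscaler. Roughly, it looks like the sketch below (a paraphrase from memory; check the file in the cloned repo for the authoritative version):

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: autoscale-go
  namespace: default
spec:
  template:
    metadata:
      annotations:
        # target ~10 concurrent in-flight requests per pod before scaling out
        autoscaling.knative.dev/target: "10"
    spec:
      containers:
        - image: gcr.io/knative-samples/autoscale-go:0.1
```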

Sample command that sends 30 seconds of traffic while maintaining 50 in-flight requests:

hey -z 30s -c 50 \
"http://autoscale-go.default.1.2.3.4.xip.io?sleep=100&prime=10000&bloat=5" \
&& microk8s kubectl get pods

Sample result:

Summary:
Total: 30.3379 secs
Slowest: 0.7433 secs
Fastest: 0.1672 secs
Average: 0.2778 secs
Requests/sec: 178.7861

Total data: 542038 bytes
Size/request: 99 bytes

Response time histogram:
0.167 [1] |
0.225 [1462] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
0.282 [1303] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■
0.340 [1894] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
0.398 [471] |■■■■■■■■■■
0.455 [159] |■■■
0.513 [68] |
0.570 [18] |
0.628 [14] |
0.686 [21] |
0.743 [13] |

Latency distribution:
10% in 0.1805 secs
25% in 0.2197 secs
50% in 0.2801 secs
75% in 0.3129 secs
90% in 0.3596 secs
95% in 0.4020 secs
99% in 0.5457 secs

Details (average, fastest, slowest):
DNS+dialup: 0.0007 secs, 0.1672 secs, 0.7433 secs
DNS-lookup: 0.0000 secs, 0.0000 secs, 0.0000 secs
req write: 0.0001 secs, 0.0000 secs, 0.0045 secs
resp wait: 0.2766 secs, 0.1669 secs, 0.6633 secs
resp read: 0.0002 secs, 0.0000 secs, 0.0065 secs

Status code distribution:
[200] 5424 responses
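As a quick sanity check on the report, the requests-per-second figure follows from the totals above (5424 responses over 30.3379 seconds):

```shell
# 5424 responses / 30.3379 secs ~= the 178.7861 req/s hey reported
awk 'BEGIN { printf "%.1f\n", 5424 / 30.3379 }'
# prints 178.8
```

(hey's own number differs slightly in the lower decimal places because it computes the rate from its internal timings.)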

Conclusion

It is quite neat to have your own local serverless architecture. Needless to say, you must choose what is right for your business requirements, since software is just another tool, a means to an end. If it does not deliver the business value you seek, decrease your costs, or increase your revenue, nothing about it will make sense.

Pros:

However, of course, there are possible Cons too:
