
Language Flashcards from Songs and Movies - Part 5: Helm & Istio

  • Writer: Derek Ferguson
  • Nov 8, 2019
  • 9 min read

Since I started this series, all of the major flashcard sites seem to have discontinued their API services. So, I'd like to spend the next couple of installments working from the opposite direction. First, let's set up an API that lets us submit our Russian text and get back JSON for our flashcards. Then, we can put a UI in front of that. Once we have that, we've got some options for how to connect to a flashcard site at the end - it could be as simple as a spreadsheet download from the UI, if we want.

I want to use this bit of the series to get into a little hybrid cloud - both between private (my home) and public (AWS) and between container (Kubernetes) and serverless (Lambda). So, let's begin by setting up Istio on my home Kubernetes cluster, because if I'm doing any development like this on K8S, I want a service mesh providing all the "dull but needed" services, rather than my own code. We can debate whether Istio is the best mesh out there or not -- I'm choosing it just because it is the one used by the MiniKF distribution of Kubeflow and by Seldon, and I'm a big proponent of both of those technologies, so I'll stick with what I know.

Following these instructions, I'm a little worried already, because I'm on K8S 1.16 and this says it has only been tested up to 1.15. Fingers crossed! It runs through without error, so -- let's try the sample and see if that works, too.
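For reference, those instructions boil down to downloading the Istio release and putting istioctl on the path - something like the following, assuming the 1.3.4 release I reference later in this post (the exact download URL is whatever the Istio docs of the moment point at):

curl -L https://git.io/getLatestIstio | ISTIO_VERSION=1.3.4 sh -

cd istio-1.3.4

export PATH=$PWD/bin:$PATH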

The sample is at this location. Running through all of it would have been completely straightforward, if I had just remembered to wait for all the pods to come up before the final instruction, where one tries to get back some sample output. Killing that final step as it hung on the un-started pods, and then restarting it, produced the desired output. I'm comfortable that my Istio installation is now running properly on top of Kubernetes!
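For anyone following along, the fix is simply to wait until every pod in the sample shows Ready before running that final step - something along these lines (a sketch; the namespace and timeout are whatever suits your setup):

kubectl get pods

kubectl wait --for=condition=ready pod --all --timeout=300s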

I want to use Node for this part of the build, so I do a quick Google to find the smallest Docker image for Node and I'm referred to mhart/alpine-node:latest. So, we'll start with a Dockerfile for that.
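The Dockerfile is nothing fancy - roughly this, assuming the usual package.json/index.js layout (the file names here are illustrative):

FROM mhart/alpine-node:latest
WORKDIR /app
COPY package.json ./
RUN npm install --production
COPY index.js ./
EXPOSE 80
CMD ["node", "index.js"]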

I pair this with a similarly-simple index.js file...
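Again, nothing fancy - a hello-world Express server roughly like this (the greeting text itself is beside the point):

// index.js - minimal Express app answering on the root path
const express = require('express');
const app = express();

app.get('/', (req, res) => {
  res.send('Flashcards UI service is up!');
});

app.listen(80, () => console.log('Listening on port 80'));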

Bundling that up into a Docker image and running it with the appropriate switches to expose port 80 allows me to easily consume this from a local browser. So - no news flash here: it's very easy to get a Node app using Express up and running quickly with Docker.
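For reference, the build-and-run cycle is just the standard Docker one - something like this, with an image tag of my own choosing:

docker build -t flashcards-node:v1 .

docker run -d -p 80:80 flashcards-node:v1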

Now the question becomes: how to deploy this onto Kubernetes in a way that takes optimal advantage of my Istio service mesh. In following the tutorial here, I become aware of YAML-less deployments, with which I confess I was not previously familiar. So, apparently I *could* (if I chose) deploy a pod with a simple command like...

kubectl run hellodemo --image=hellodemo:v1 --port=9095 --image-pull-policy=IfNotPresent

And then expose a service for it with a simple command like...

kubectl expose deployment hellodemo --type=NodePort

Note that the "hellodemo" in the two commands above appears to be how the manually-exposed service is connected to the manually-deployed pod.

Reading to the end of the tutorial above, it looks like I will need to create a VirtualService and point it at my standard Service above in order to get cluster-based access running through Istio. Then I will have to create a Gateway and bind my VirtualService to it in order to get access to my service from outside the cluster. This seems to be the way I'd want to go, since I ultimately want to expose this as an API and consume it from my UI (in addition to whatever might want to consume the API directly).

I don't like one-off command lines, since I want to be able to re-apply and change whatever I wind up doing, so I decide to create four YAML files for this. Rather than applying them individually, I decide to use Helm to orchestrate them all together.

The official Helm installation instructions here aren't useful at the moment, as the link for the installation instructions points back to a paragraph of text saying "we assume you've already installed Helm". I try running "helm init" as suggested, but of course this doesn't exist on my machine. I proceed to try installing it via snap on my K8S master, as the Ubuntu help suggests. This gives me some odd warnings I don't like, but this article says it is OK, so I proceed.
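For the record, the snap route is a one-liner:

sudo snap install helm --classic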

After this, I re-run the "helm init" command and - thankfully - it says a bunch of good stuff, one of which is that tiller (the server-side piece of helm) has now been installed on my cluster. Rock on!

I have not yet created a proper Bitbucket repo for this, but, if I'm going to start creating a Helm chart, I'm going to want to be able to share that between my K8S master where I started this Helm stuff and my actual development laptop. So, I create a new repo called cards-ui and clone it down to my K8S server.

A couple of words about my thinking here vis-a-vis that name. I'm thinking I want this whole thing ultimately to be a Micro UI via Web Component technology. So, I'll have one endpoint here that serves up my Vue.js Web Component and the rest of the endpoints will be the data services for that component. I'll set up another service that provides the overall shell for hosting that Web Component.

I run "helm create cards-ui-service" to create the basic structure of my chart, then push it up to Bitbucket and pull it back to my main development machine, so I can work on it in Visual Studio Code.

Looking at the default templates, I see that what we get out of the box appears to be set up to deploy nginx running on port 80, with a service in front of it. I don't really want any of that, so I elect to replace it with my own Node app and its internal service - basically the non-Istio half of the formula above.

I see that the tutorial likes to run its initial app on port 9095, so I switch my code and Dockerfile to use port 9095 as well.

There are so many good templates in the default distribution, with so many great variable substitutions, that I hate to lose them - but this is rather jumping in at the deep end, and I can always recreate them with a "helm create", so I decide to delete everything from the templates directory and just start with four hard-coded files.
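What I'm left with is a deliberately bare-bones chart - roughly this layout, where the service and virtualservice file names are just my own placeholders:

cards-ui-service/
  Chart.yaml
  values.yaml
  templates/
    app.yaml             (Deployment for the Node container)
    service.yaml         (plain Kubernetes Service in front of the pod)
    gateway.yaml         (Istio Gateway at the edge of the cluster)
    virtualservice.yaml  (Istio VirtualService tying Gateway and Service together)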

I push my Docker image up to Docker Hub and create an app.yaml file to reference it, then try deploying to my Kubernetes cluster using "helm install ./cards-ui-service". It returns with the error "Error: no available release name found." I take this to mean that the name of the folder needs to match some internal label, so I change every name in app.yaml to "cards-ui-service" and try again.
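For context, app.yaml is just a bare Deployment along these lines - the Docker Hub repository name below is a placeholder:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: cards-ui-service
  labels:
    app: cards-ui-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cards-ui-service
  template:
    metadata:
      labels:
        app: cards-ui-service
    spec:
      containers:
      - name: cards-ui-service
        image: my-dockerhub-user/flashcards-node:v1   # placeholder Docker Hub repository
        ports:
        - containerPort: 9095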

I receive several more errors, so I find an article that indicates I may need to create a different service account using the following commands...

kubectl create serviceaccount -n kube-system tiller

kubectl create clusterrolebinding tiller-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:tiller

kubectl --namespace kube-system patch deploy tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'

I decide to focus my efforts on getting a pre-existing package deployed first - like TensorFlow Serving.

It had been failing before the service account steps above, with a message about not being able to download the package. Now, it seems to install easily.

I rerun "helm install ./cards-ui-service" and - now it works! I change the names back that I reset above and it still works. So, apparently it all had to do with the lack of a proper service account above.

Having gotten our flashcards UI pod up-and-running with our Node process, let's start from the other end now and try to get our Istio Gateway up-and-running. This is the piece that will sit right at the edge of our cluster, facilitating ingress to our Node app. We add a YAML called "gateway.yaml" and try re-deploying our chart with the same command.
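gateway.yaml itself is short - roughly this, with the resource name being my own choice:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: cards-ui-gateway
spec:
  selector:
    istio: ingressgateway   # bind to Istio's default ingress gateway pods
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"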

I get an error indicating that there is no Gateway resource in the Istio installation.

I suspect that my installation of Helm and Tiller interfered with my installation of Istio. Note to self - if you're going to run Helm and Istio on a K8S cluster - looks like it is best to do Helm/Tiller first, then use that to install Istio. Oh well! I am going to follow the instructions at the bottom of this page to see if I can fix things up.

helm install install/kubernetes/helm/istio-init --name istio-init --namespace istio-system --debug

The above command was run from the istio-1.3.4 directory where I downloaded Istio earlier. Rerunning the chart yields an error about my pod already existing, but the Gateway is still deployed. This raises a question in my mind: when I change the code in my pods, am I going to have to update the version and/or delete before recreating, or will it "just work"? I suspect that this is part of the work that would have been done for me if I hadn't deleted all the more sophisticated YAML a few steps ago.

I make a trivial update to my Node app and re-push to Docker, just to see what the behavior is when I push my next piece.

I add a YAML to attach a Service running on port 9094 to the Node app and try to redeploy. As expected, the release fails because of the presence of the Node app pod from previous releases. So, I use "helm list" to list all the releases and "helm delete" to remove everything other than Istio. Then I rerun the helm install and everything looks great - the Service goes up easily. I'll have to put back the more sophisticated pieces later on to avoid having to do these deletes prior to every new version of my software.
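The Service YAML itself is the usual boilerplate - roughly this, mapping the Service's port 9094 onto the container's 9095:

apiVersion: v1
kind: Service
metadata:
  name: cards-ui-service
  labels:
    app: cards-ui-service
spec:
  selector:
    app: cards-ui-service
  ports:
  - name: http
    port: 9094        # port the Service exposes inside the cluster
    targetPort: 9095  # port the Node container listens on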

Last but not least - the VirtualService, which connects the Service above with the Gateway above. Once this goes up, I will have all four pieces and should be able to hit my service from off the cluster (like from my development machine, I hope). The last piece is easy to write, and I repeat my pattern of deleting the previous release before promoting, but it blows up because the delete doesn't happen right away. So, I delete, pause for a bit, and then try the install. This time it shows as successful on the console.
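For reference, the VirtualService looks roughly like this - it lists the Gateway in its gateways section and routes matching traffic on to the Service (the /flashcardsui prefix is the path I try first below):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: cards-ui-virtualservice
spec:
  hosts:
  - "*"
  gateways:
  - cards-ui-gateway
  http:
  - match:
    - uri:
        prefix: /flashcardsui
    route:
    - destination:
        host: cards-ui-service
        port:
          number: 9094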

However, at this point, I make a rather interesting discovery. Istio is completely gone from the istio-system namespace. Literally -- not a single pod still running. So, I run the remainder of the instructions on this page and hope for the best.

export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')

curl http://localhost:$INGRESS_PORT/flashcardsui

This yields an HTML response. The response says that the URL declared does not support the GET operation. I mess around with different variations of the URL stem, and all of the other variations produce no response at all, so I am confident that I am getting somewhere - there must just be a broken link somewhere along the four pieces in the chain.

After a bit of consideration, I realize that my Node app is only written to serve traffic on the root path / . So, I update the path in the VirtualService to connect with the root path, delete and reinstall everything via Helm, browse to the root path and... success!
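Concretely, the only change is in the VirtualService's match block - something like:

  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        host: cards-ui-service
        port:
          number: 9094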

The page itself is not exactly stunning, but it is delivered via Istio, so let's take a quick look at the various monitoring tools available, and then we'll be done.

Unfortunately, as soon as I try this, I realize that most of the pieces I want haven't been installed. Grafana, for example - isn't there. I look back at the Istio installation instructions and realize that I chose the most minimal installation, so I need to revisit that.

Of course, simply running the Istio install command with the proper attributes to install the demo packaging fails, because the whole thing already exists. I use "helm delete" to delete every single application on the cluster and start over. Upon trying to reinstall Istio, it tells me I need to use a special purge attribute, so I run "helm del --purge istio" and try again. There is still stuff there, so I get brutal: "kubectl delete namespace istio-system". Why do I feel a strong likelihood that I'm about to trash my cluster? :-/

helm install install/kubernetes/helm/istio --name istio --namespace istio-system --values install/kubernetes/helm/istio/values-istio-demo.yaml

helm install ./cards-ui-service

kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=grafana -o jsonpath='{.items[0].metadata.name}') 3001:3000 --address 0.0.0.0 &

kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=jaeger -o jsonpath='{.items[0].metadata.name}') 15032:16686 --address 0.0.0.0 &

kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=kiali -o jsonpath='{.items[0].metadata.name}') 20001:20001 --address 0.0.0.0 &

The first command above paused for about 3 minutes before finally returning with what it claimed to be success information. So I proceeded to the second command, and it also returned the success message I had seen when running this before.

My simple Node app responds correctly in the browser - so this looks promising. I go ahead and try to access the Grafana dashboard...

Very nice! And Jaeger?

Kiali turns out to be a little trickier. It was installed as a part of the demo configuration we used above, but there was no secret created for the default user's username and password. So, we have to follow the instructions at the top of this article to create a secret with our desired username and password.
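Those instructions boil down to creating a secret named kiali in the istio-system namespace, roughly like this - Kiali reads the password from a key named "passphrase":

kubectl create secret generic kiali -n istio-system --from-literal=username=admin --from-literal=passphrase=mysecretpassword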

Those instructions actually don't seem to help, but I manage to find elsewhere that the default credentials are admin/admin. Trying this gets me in...

And with that, I have put the infrastructure in place on my local K8S cluster to support the UI service I want to build - I'll do the actual coding in the next episode. :-)

 
 
 
