When Mr. Top 10 vBlogger mentions you and your VMworld session, it is appropriate to always say thank you. If you are interested in what is going on with Pure Storage at VMworld, be sure to read through Cody’s post to see all of our sessions. I will have some demos in the booth of Kubernetes on VMware vSphere with PKS (and more), so please be sure to come by and check them out.
Get going with MicroK8s
Last week I was getting stickers from the Ubuntu booth during the Open Infrastructure Conference in Denver. This was my very first Open Infra Conference (formerly OpenStack Summit) and it was all pretty new to me, so I was asking a lot of questions, some of them sorta dumb.
I saw a sticker for MicroK8s (Micro-KATES).
Me: What is that?
Person in Booth: Do you know what MiniKube is?
Me: Yes.
Person in Booth: It is like that, but the Ubuntu opinionated version.
Me: Ok, cool, my whole lab is Ubuntu, except when it isn’t. So I’ll try it out.
Ten minutes later? Kubernetes is running on my Ubuntu 16.04 VM.
Go over to https://microk8s.io/ to get the full docs.
Want a quick lab?
sudo snap install microk8s --classic
microk8s.kubectl get nodes
microk8s.kubectl get services
Done. What? What!
Typing microk8s.blah for everything was slightly annoying to me. So alias that if you don’t already have kubectl. I didn’t; this was a fresh VM.
sudo snap alias microk8s.kubectl kubectl
You can run this command to push the config into a file to be used elsewhere.
microk8s.kubectl config view --raw > $HOME/.kube/config
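With the config exported, a standalone kubectl (installed separately; it reads $HOME/.kube/config by default) can drive the cluster. A quick sanity check, assuming kubectl is on your path:
# same answers as microk8s.kubectl, just via the exported config
kubectl get nodes
kubectl get pods --all-namespaces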
Want the Dashboard? Run this:
microk8s.enable dns dashboard
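To find where the dashboard landed, nothing MicroK8s-specific is needed; just list the services in kube-system and look for the kubernetes-dashboard ClusterIP:
microk8s.kubectl get services --namespace kube-system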
It took me 5 minutes to get to this point. Now I am like, OK, let’s connect to some Pure FlashArrays.
First we need to enable privileged containers in MicroK8s. Add this line to the following two config files.
--allow-privileged=true
# kubelet config
sudo vim /var/snap/microk8s/current/args/kubelet
#kube-apiserver config
sudo vim /var/snap/microk8s/current/args/kube-apiserver
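A quick way to confirm both files actually picked up the flag before restarting anything:
# should print the new line once from each file
grep -- '--allow-privileged' /var/snap/microk8s/current/args/kubelet /var/snap/microk8s/current/args/kube-apiserver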
Restart services to pick up the new config:
sudo systemctl restart snap.microk8s.daemon-kubelet.service
sudo systemctl restart snap.microk8s.daemon-apiserver.service
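And a quick check that both daemons came back up cleanly:
sudo systemctl status snap.microk8s.daemon-kubelet.service --no-pager
sudo systemctl status snap.microk8s.daemon-apiserver.service --no-pager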
Now you can install Helm and run the Pure Service Orchestrator Helm chart.
More info on that here:
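For reference, a minimal sketch of a Helm 2-era install; the repo URL and chart name are my assumptions based on Pure’s public helm-charts repo, so check the PSO docs for the current ones:
# add Pure's chart repo, then install PSO with your own values.yaml
helm repo add pure https://purestorage.github.io/helm-charts
helm repo update
helm install --name pso pure/pure-k8s-plugin --namespace pso -f myvalues.yaml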
Namespace Issues when Removing CRD/Operators
With the latest release of Pure Service Orchestrator, we added support for a non-Helm installation for environments that do not allow Helm. This new method uses an Operator to set up and install PSO. The result is the exact same functionality, but with a security model more agreeable to some K8s distro vendors.
I do live demos of PSO a handful of times a day. Even though I use Terraform and Ansible to automate the creation of my lab K8s clusters, I don’t want to rebuild them that many times a day. I usually just tear down PSO and leave my cluster ready for the next demo.
Removing the CRD and the Namespace created when installing the Operator has a couple of issues. One small issue is that the Operator method creates a new namespace, “pso-operator” by default; you can choose your own namespace name at install time, and I often choose “pso” for simplicity. As we have discovered, deleting a namespace that had a CRD installed into it hangs in the status “Terminating”, for like, forever. FOR-EV-ER. This seems to be an issue dating back quite a ways in K8s land.
https://github.com/kubernetes/kubernetes/issues/60807#issuecomment-448120772
From a couple of GitHub issues and the help of Simon “I don’t do the twitter” Dodsley, this is the process for deleting the CRD first and then the Namespace. This method keeps the namespace from hanging in the “Terminating” state.
# Removing the pso-operator
kubectl delete all --all -n pso-operator
# If you haven't done it already, don't delete the namespace yet.
kubectl get ns
NAME           STATUS   AGE
default        Active   2d21h
kube-public    Active   2d21h
kube-system    Active   2d21h
pso-operator   Active   14h
kubectl get crd
NAME                         CREATED AT
psoplugins.purestorage.com   2019-04-17T01:37:31Z
# ok so...
kubectl delete crd psoplugins.purestorage.com
customresourcedefinition.apiextensions.k8s.io "psoplugins.purestorage.com" deleted
# does it hang? yeah it does
^C
# stuck terminating?
kubectl describe crd psoplugins.purestorage.com
# snipping non-relevant output
...
Conditions:
  Last Transition Time:  2019-04-17T01:37:31Z
  Message:               no conflicts found
  Reason:                NoConflicts
  Status:                True
  Type:                  NamesAccepted
  Last Transition Time:  <nil>
  Message:               the initial names have been accepted
  Reason:                InitialNamesAccepted
  Status:                True
  Type:                  Established
  Last Transition Time:  2019-04-18T13:54:36Z
  Message:               CustomResource deletion is in progress
  Reason:                InstanceDeletionInProgress
  Status:                True
  Type:                  Terminating
Stored Versions:
  v1
# Run this command to allow it to delete
kubectl patch crd/psoplugins.purestorage.com -p '{"metadata":{"finalizers":[]}}' --type=merge
customresourcedefinition.apiextensions.k8s.io/psoplugins.purestorage.com patched
# Re-run the crd delete
kubectl delete crd psoplugins.purestorage.com
# Confirm it is gone
kubectl get crd
No resources found.
# Remove the Namespace
kubectl delete ns pso-operator
namespace "pso-operator" deleted
# Verify removal
kubectl get ns
NAME          STATUS   AGE
default       Active   2d21h
kube-public   Active   2d21h
kube-system   Active   2d21h
If you sort of ignored my warning above and tried to remove the namespace BEFORE successfully removing the CRD, follow this procedure.
Namespace Removal
# Find that pesky 'Terminating' namespace
kubectl get ns
NAME           STATUS        AGE
default        Active        2d20h
kube-public    Active        2d20h
kube-system    Active        2d20h
pso            Active        13h
pso-operator   Terminating   35h
kubectl cluster-info
# run kubectl proxy in the background
kubectl proxy &
# output the namespace to json
kubectl get namespace pso-operator -o json >tmp.json
# Edit tmp.json to remove the finalizers; the spec: section should look like this:
"spec": {
"finalizers": [
]
},
# Now send that tmp.json to the API server
curl -k -H "Content-Type: application/json" -X PUT --data-binary @tmp.json http://127.0.0.1:8001/api/v1/namespaces/pso-operator/finalize
# Check your namespaces
kubectl get ns
NAME          STATUS   AGE
default       Active   2d20h
kube-public   Active   2d20h
kube-system   Active   2d20h
pso           Active   13h
# stop kubectl proxy: bring it back to the foreground and ctrl-C
fg
^C
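If you have jq installed, that whole proxy-edit-PUT dance collapses into one pipeline; a sketch against the same stuck pso-operator namespace:
# strip the finalizers in flight instead of hand-editing tmp.json
kubectl proxy &
kubectl get namespace pso-operator -o json \
  | jq '.spec.finalizers = []' \
  | curl -k -H "Content-Type: application/json" -X PUT --data-binary @- \
      http://127.0.0.1:8001/api/v1/namespaces/pso-operator/finalize
fg   # then ctrl-C to stop the proxy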
What’s New in Pure Service Orchestrator?
This week (April 16, 2019), Pure released version 2.4.0 of Pure Service Orchestrator. Highlights from the release notes:
- PSO Operator is now the preferred install method for PSO on OpenShift 3.11 and higher versions.
The PSO Operator packages and deploys the Pure Service Orchestrator (PSO) on OpenShift for dynamic provisioning of persistent volumes on FlashArrays and FlashBlades. The minimum supported version is OpenShift 3.11.
This Operator is created as a Custom Resource Definition from the pure-k8s-plugin Helm chart using the Operator-SDK.
This installation process does not require a Helm installation.
- Added the flasharray.iSCSILoginTimeout parameter, with a default value of 20 seconds.
- Added flasharray.iSCSIAllowedCIDR parameter to list CIDR blocks allowed as iSCSI targets. The default value allows all addresses.
- The flexPath config parameter location in values.yaml has moved; in version 2.2.1 it lived under the orchestrator field. If you are upgrading from a version earlier than 2.3.0, you need to change values.yaml to use the new location of flexPath for PSO to work.
Some Highlights
The Operator is a big change for the install process. We are not leaving or abandoning Helm. I love Helm. Really. This was for our customers that do not allow Helm to run in their environments; mainly, the Tiller pod ran with more permissions than many security teams were comfortable with. Tillerless Helm is coming, if that is what worries you. The Operator will be the preferred install method on OpenShift going forward.
The flexPath setting changing places in values.yaml is good to know about. We wanted to make that setting a top-level parameter instead of leaving it buried under the orchestrator field.
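To make that concrete, here is a before-and-after sketch; the new top-level placement is my reading of the release note above, and the path shown is just the kubeadm default flexvolume directory, so substitute your distro’s path:
# values.yaml before 2.3.0: nested under orchestrator
orchestrator:
  flexPath: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
# values.yaml in 2.3.0 and later: top level
flexPath: /usr/libexec/kubernetes/kubelet-plugins/volume/exec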
Last but not least, the iSCSIAllowedCIDR setting limits which iSCSI targets PSO will have the worker node log into during the Persistent Volume mount process. This is important in environments that serve many different clusters, each with their own iSCSI networks. The iSCSI interfaces on a FlashArray can be divided with VLANs, so this setting keeps each cluster logging into only the targets meant for it.
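Both of the new iSCSI parameters from 2.4.0 go under the flasharray section of values.yaml; a minimal sketch, with a CIDR block I made up for illustration:
flasharray:
  iSCSILoginTimeout: 20                 # seconds; 20 is the default
  iSCSIAllowedCIDR: "192.168.10.0/24"   # only log into iSCSI targets in this block; leave unset to allow all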
It is “NFSEndPoint”
I think I have updated my blog post and PSO guide to reflect this change. In case you are using Pure Service Orchestrator with FlashBlade, double-check that the key in your values.yaml is spelled NFSEndPoint, as in the sample below.
Sample values.yaml
arrays:
  FlashArrays:
    - MgmtEndPoint: "1.2.3.4"
      APIToken: "a526a4c6-18b0-a8c9-1afa-3499293574bb"
      Labels:
        rack: "22"
        env: "prod"
    - MgmtEndPoint: "1.2.3.5"
      APIToken: "b526a4c6-18b0-a8c9-1afa-3499293574bb"
  FlashBlades:
    - MgmtEndPoint: "1.2.3.6"
      APIToken: "T-c4925090-c9bf-4033-8537-d24ee5669135"
      NFSEndPoint: "1.2.3.7"
      Labels:
        rack: "7b"
        env: "dev"
    - MgmtEndPoint: "1.2.3.8"
      APIToken: "T-d4925090-c9bf-4033-8537-d24ee5669135"
      NFSEndPoint: "1.2.3.9"
      Labels:
        rack: "6a"
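To hand an edited values.yaml to an existing install, a Helm 2-style upgrade does the trick; the release and chart names here are the same assumptions as in the install sketch earlier:
helm upgrade pso pure/pure-k8s-plugin -f myvalues.yaml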
Another Kickoff and a New Year
November 2018 marked the finish of my 5th year at Pure. I really meant to write up a recap, but let’s just say November and December were super busy.
I was in Barcelona for VMworld EMEA at the beginning of November, then came home to visit more customers around the US and tell them about using PSO with Kubernetes and Docker. Then my amazing oldest daughter had a soccer tournament in Orlando, FL. It was a great time with the family, and a reminder of why I do what I do.
Then back out to AWS
January was about building out some content for our sales and company kickoff, but also helping customers with their projects on K8s and Docker. That brings me to yet another kickoff, what I call the Orangest Show on Earth: a chance for me to see so many great friends and hear how successful their last year was. It was very satisfying to see sales reps and SEs that I worked with throughout the year get recognized for the growth they brought to the company, and it was very nice to be recognized by my leadership and peers with an award. When you work with such a wide range of regions and teams, it sometimes gets hard to see if you are making a difference, especially when you are remote like I am. At the beginning of 2018, almost no one at Pure knew what I was working on. Slowly but surely the excitement around K8s is growing, so I am looking forward to an even more exciting year here at Pure.
Some things I would like to do in 2019
- Share more on the blog. The transition from VMware (I still do VMware stuff!) to Kubernetes has provided many learning opportunities for me to share.
- Work on Clusters as Cattle with Persistent data. Data is important and the app/cluster can or should move around it. Seamlessly.
- Finish some cloud/dev online classes I have started. Finding time with no distractions is key here.
New Pure Service Orchestrator Demo
You may want to make this full screen to see all the CLI glory.
What you will see in this demo is the initial install of Pure Service Orchestrator on a Kubernetes cluster. I would love to hear what you think of it, and about any other ways I can show this off.
Kubecon 2018 Seattle Pure Storage – also We are hiring
I will be at the Pure Storage booth at Kubecon next week, December 11-13, Booth G7. Come see us to learn about Pure Service Orchestrator and Cloud Block Store for AWS, and find out how our customers are leveraging K8s to transform their applications and Pure Storage for their persistent storage needs.
It has been a fun time at Pure (nearly 2 years) working with customers that already love Pure Storage for things like Oracle, SQL, and VMware as they move into the world of K8s and containers, and helping customers that never used Pure before move from complicated or underperforming persistent storage solutions to FlashArray or FlashBlade. With Cloud Block Store entering beta, and GA coming later next year, even more customers will want to see how to automate storage persistence on premises, in the public cloud, or in a hybrid model. All of that to say: if you are an architect looking to grow on our team, please find me at Kubecon. I want to meet you and learn why you love cloud, containers, Kubernetes, and automating all the things in between.
- Send me a message on twitter @jon_2vcps
- Find me at the Pure Booth
- Stop me in the hall between sessions.
I look just like one of the following people:
Pure Service Orchestrator Guide
Over the last few months I have been compiling information that I have used to help customers when it comes to PSO. Using Helm and PSO is very simple, but with so many different ways to set up K8s right now, it can require a broad knowledge of how volume plugins work. I will add new samples and workarounds to this GitHub repo as I come across them. For now, enjoy. I have the volume plugin paths for the Kubespray, Kubeadm, OpenShift, and Rancher versions of Kubernetes, plus some quota samples and even some PSO FlashArray snapshot and clone examples.
https://github.com/2vcps/PSO-Guide
A nice picture of some containers, because it annoys some people, which makes me think it is funny.
Kubernetes on VMware vSphere Demo and more
This post is a recap of my session at VMworld last week in Las Vegas. First, due to the lighting, the demo was not very easily viewable. I am really disappointed this happened. I posted the full demo here on YouTube:
All of the scripts and instructions are available here on my github repo.
https://github.com/2vcps/vmworld2018_vin3762bus
Coming up next is some work around kubespray and terraform.