This morning I needed to upgrade one of my dev clusters to 1.17.4, and I decided to capture the experience. Don't worry, I sped up the Ansible output flying by.
I use Kubespray to deploy and upgrade my clusters. I didn't really do anything to prepare; I can rebuild all of my clusters pretty easily from Terraform if anything breaks.
git clone git@github.com:kubernetes-sigs/kubespray.git
cd kubespray
## Make sure you copy your actual inventory. For more information see the Kubespray GitHub repo.
ansible-playbook -i inventory/dev/inventory.ini -b -v upgrade-cluster.yml
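If you want to be explicit about the target version rather than relying on the defaults of the Kubespray release you checked out, you can pass it as an extra var. A minimal sketch, assuming your checked-out release supports 1.17.4:

ansible-playbook -i inventory/dev/inventory.ini -b -v upgrade-cluster.yml -e kube_version=v1.17.4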
Take some time and upgrade
Watch it go, for about 40 minutes in my case. Remember, this is a dev cluster and the pods I have running can restart all they want; I don't care. Everything upgrades through the first part of the video. Now let's upgrade Pure Service Orchestrator.
Now, if you watch the video you will notice I had to add the Pure Storage Helm repo. This was a new jump box in the lab, so I had PSO installed, just not from this actual host. It is easy to add; more details are in the Pure Helm Chart README.
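For reference, adding the repo and upgrading PSO looks roughly like this. This is a sketch only; the release name, namespace, and values file below are placeholders, so match them to your existing install per the README:

helm repo add pure https://purestorage.github.io/helm-charts
helm repo update
## release name, namespace, and values file are assumptions; check your install
helm upgrade pure-storage-driver pure/pure-csi --namespace pso --values pso-values.yaml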
I am excited to be at KubeCon yet again. I think this is my third time. Pure Storage will be in booth S92; come by and see some demos of our CSI plugin. Automating persistent storage is still a big need for many K8s clusters. Pure can make it simple, scalable, and highly available.
I will be at the booth and around a few sessions so please come and say hello.
Also, ask me all about how Pure will support K8s on VMware in all its various forms.
Come by the booth, see a demo, get a turtle approved reusable straw. Enter to win some cool things.
There was a question on Twitter, and I thought I would write down my process for others to learn from. First, a little background. Kubernetes is managed mostly using a tool called kubectl (kube-control, kube-cuddle, kube-C-T-L, whatever). This tool looks for the configuration it needs to talk to the Kubernetes management API. A sanitized sample can be seen by running:
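kubectl config view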
You can see there are Clusters, Contexts, and Users. The kubectl config get-contexts and use-context commands let you see and switch contexts. In my use case I have a single context per cluster.
kubectl config get-contexts
CURRENT   NAME                          CLUSTER      AUTHINFO           NAMESPACE
*         I-AM-GROOT@k8s-ubt18          k8s-ubt18    I-AM-GROOT
          k8s-dev-1-admin@k8s-dev-1     k8s-dev-1    k8s-dev-1-admin
          k8s-lab-1-admin@k8s-lab-1     k8s-lab-1    k8s-lab-1-admin
          k8s-prod-1-admin@k8s-prod-1   k8s-prod-1   k8s-prod-1-admin
kubectl config use-context k8s-dev-1-admin@k8s-dev-1
Switched to context "k8s-dev-1-admin@k8s-dev-1".
Switching this way became cumbersome, so I now use a tool called kubectx, and with it kubens: https://github.com/ahmetb/kubectx. Below you can see my prompt shows my cluster plus the namespace: "k8s-dev-1-admin@k8s-dev-1:default". Pretty sweet to see that, and it has saved me from removing deployments from the wrong cluster.
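With those installed, switching clusters or namespaces is one short command each (using my context and a namespace as examples):

kubectx k8s-dev-1-admin@k8s-dev-1
kubens kube-system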
Now, the kubectl tool will look in your environment for a variable named KUBECONFIG. Many times this will be set to KUBECONFIG=~/.kube/config. If you modify your .bash_profile on macOS or .bashrc on Ubuntu (and others), you can point that variable anywhere. I formerly had it pointed to a single file for each cluster. For example:
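## hypothetical per-cluster file names; use whatever you saved yours as
export KUBECONFIG=~/.kube/dev-1.config:~/.kube/lab-1.config:~/.kube/prod-1.config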
This worked great, but a few third-party management tools had issues switching between multiple files. For me the big one was the Kubernetes module for Python. So I moved to a single combined config file at ~/.kube/config.
So what do I do now?
so many configs
Here is my basic workflow. I don't automate it yet, as I don't want to overwrite something carelessly.
1. Run an Ansible playbook that grabs the admin.conf file from /etc/kubernetes on the masters of the cluster.
2. Manually modify the KUBECONFIG environment variable to be KUBECONFIG=~/.kube/config:~/latestconfig/new.config
3. Run kubectl config view --raw to make sure it is all there; the --raw flag unhides the keys and such.
4. Copy ~/.kube/config to ~/.kube/config.something.
5. Run kubectl config view --raw > ~/.kube/config
6. Open a new terminal using my original KUBECONFIG variable and make sure all the clusters show up.
7. Clean up the old configs if I am feeling extra clean.
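One gotcha in step 5: the shell truncates ~/.kube/config before kubectl ever reads it, so if ~/.kube/config is part of KUBECONFIG you end up merging against an empty file. Merging against the backup copy from step 4 sidesteps that. A sketch using the paths above:

cp ~/.kube/config ~/.kube/config.backup
## read the backup plus the new config, then write the merge to the real path
KUBECONFIG=~/.kube/config.backup:~/latestconfig/new.config kubectl config view --raw > ~/.kube/config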
Not really hard or too complicated. I destroy clusters pretty often so sometimes I will blow away the config and then remerge my current clusters into a new config file.
When Mr. Top 10 vBlogger mentions you and your VMworld session, it is appropriate to always say thank you. If you are interested in what is going on with Pure Storage at VMworld, be sure to read through Cody's post to see all of our sessions. I will have some demos in the booth of Kubernetes on VMware vSphere with PKS (and more), so please be sure to come by and check them out.
The sessions are filling up, so it is a good idea to register and get there early. I am very excited to talk about Kubernetes on vSphere. It will follow my journey of learning containers and Kubernetes over the last two years or so. I hope everyone learns something.
Last year, here I am talking about containers in front of a container. Boom!
In this paper, Simon uses the Pure Docker Volume Plugin to create persistent storage for CockroachDB. That is all well and good if you want to play with CockroachDB, but it also shows the foundation for using the plugin to create persistent data for your own app.
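If you want to kick the tires yourself, provisioning a persistent volume with the plugin is a one-liner. This is a sketch, not from the paper; the volume name and size are made up, and the driver options may differ by plugin version:

docker volume create --driver=pure -o size=100GB crdb-data
## mount the volume at CockroachDB's data directory
docker run -d -v crdb-data:/cockroach/cockroach-data cockroachdb/cockroach start --insecure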
What applications are you using with containers that require persistent (and reliable) data storage? I would be very interested in seeing how this works for everyone else with their own apps.
Look for a post about going to In-n-Out some time soon, it is my tradition.
Be sure to check out what we will be doing at VMworld at the end of the month. Click the banner below once you are done being mesmerized by Chappy. Sign up for a 1:1 demo or meeting; I'll be there and would love to meet with you. See how focused a demo I give.
Sessions to be sure to see, featuring the amazing Cody Hosterman
SDDC9456-SPO: Implementing Self-Service Storage Provisioning with vRealize Automation Xaas
VMware vCenter is no longer meant to be the end-user interface for requesting and managing virtual machines and related resources. Storage is no exception. Join Cody Hosterman as he discusses how vRealize Automation Anything-as-a-Service (Xaas) provides the ability to easily import vRealize Orchestrator workflows to control, manage and provision storage via the self-service catalog offering vRealize Automation.
Wednesday, Aug 31, 2:00 PM – 3:00 PM
NF9455-SPO: Best Practices for All-Flash Data Reduction Arrays with VMware vSphere
As All-Flash Data Reduction arrays become commonplace in VMware environments due to their performance, flexibility, and ease of use, it is important to understand how to best implement and manage them with ESXi. Data reduction and flash change how an administrator should think about various configuration options within VMware, and those will be discussed in detail. VAAI, space reclamation, virtual disks, SIOC, SDRS, queue depths, multipathing, and other points will be highlighted.
So let's say the power goes out and half of the VMs on your "lab storage that uses local disks" go into an infinite BSOD loop. I was lucky, as one of the servers that still worked was an AD Domain Controller with DNS. Since I usually don't try to fight BSODs and just rebuild, I did so. One very helpful page for moving the AD roles was this article on seizing the roles, which I had to do since the server holding the roles was DOA.
You may or may not have heard about Pure Storage and Cisco partnering to provide solutions together to help our current and prospective customers using UCS, Pure Storage, and VMware. These predesigned and tested architectures provide a full solution for compute, network and storage. Read more here:
There are more coming for SQL, Exchange, SAP, and general virtual machines (I call it JBOVMs, Just a Bunch of VMs).
Turnkey-like solution for compute, network, and storage
Know how much and what to purchase when it comes to compute, network, and storage, as we have worked with Cisco to validate against actual, real workloads. Many times mixed workloads, because who runs just SQL or just Active Directory? It is proven and it works, up and running in a couple of days. If a couple of months (the legacy way) was not good, and 2-4 weeks (the newer way with legacy hardware) wasn't good enough, how about 1-2 days? For real, a next-generation datacenter. Also, you can scale compute, network, and storage independently. Why buy extra hypervisor licenses when you just need 5 TB of space?
Ability to connect workloads from/to the public clouds (AWS, Azure)
I don’t think as many people know this as they should, but Rob Barker “Barkz” is awesome. He worked hard to prove out the ability to use Pure FlashArray with Azure compute. Great read and more details here:
It's no secret that we are working hard to integrate with backup software vendors. Some have been slow, and others have been willing to work with our API to make seamless backup and snapshot management integration with Pure an amazing thing.
Just one example of how Commvault is enabling backup to Azure:
Check out how easy it is to set up Commvault and Pure Storage.
https://www.youtube.com/watch?v=af-dxbYYo2g
Ease of storage allocation without the need for a storage specialist
If I have ever talked to you about Pure Storage and didn't say how simple it is to use, or didn't mention my own customers who are not "Storage Peeps" yet manage it quite easily, then I failed. Take away my orange sunglasses.
If you are looking at FlashStack, or are just now finding out how easy it is, remember: no storage Ph.D. required. We even have nearly everything you need built into our free vSphere Plugin. More info from Cody Hosterman here.