Before It Is Too Late to Talk About 2024

As we embrace 2024, the landscape of application development, delivery, and management continues to evolve rapidly. The journey of Cloud Native at Pure Storage has been pivotal in shaping this transformation. Let’s revisit our milestones and gear up for an exciting future.

Journey So Far: A Recap

  • 2017-2020: The Advent of DevOps Integration
    Focus: Connecting our array with DevOps practices.
    Achievement: Streamlining operations and enhancing efficiency.
  • 2021: Kubernetes and Disaster Recovery
    Focus: Providing robust storage and disaster recovery solutions for Kubernetes environments.
    Achievement: Enabling more resilient and scalable applications.
  • 2022: Embracing Stateful Databases in Kubernetes
    Focus: Integrating stateful databases with Kubernetes.
    Achievement: Offering more dynamic and versatile database solutions.
  • 2023: Kubernetes as a Landing Zone
    Focus: Planning for Kubernetes at scale and exploring bare metal solutions.
    Achievement: Facilitating growth and scalability in dynamic environments. Explored paths to lower spend on virtualization.
  • 2024 and Beyond: The Vision Unfolds
    As we step into 2024, the landscape is ripe with opportunities and challenges:
    Broadcom’s Acquisition of VMware: This major industry move has prompted a reevaluation of workload management strategies.
    Emerging Paradigms: Platform Engineering, DevOps 2.0, Agile methodologies, and Infrastructure as Code are now the norm for competitive businesses.
    Rise of Gen AI/LLMs: A transformative shift in data leverage, code delivery, and customer experience.

Kubernetes remains the orchestrator of choice, offering unmatched orchestration capabilities on VMs, bare metal, and beyond. It is pivotal for automation, observability, and agility independent of the physical infrastructure, and it is increasingly crucial for building AI into the business.

Three Essential Questions for Leaders


VMware Escape Strategy: What’s our five-year plan post-VMware? Let’s not beat around the bush: every VMware by Broadcom customer must consider this question. Deciding to keep things the same is also a valid choice.

Evolving DevOps to Platform Engineering: How do we enhance our practices for faster, better outcomes? Continual learning, improvement, and change are key indicators of the leaders in every industry.

Integrating AI: What are our strategies for embedding AI into our solutions securely and swiftly? Will you rebuild your own ChatGPT? Probably not, but a strategy for building LLMs into your workflow, customer experience, and overall roadmap is table stakes in 2024.

Pure Storage in 2024: Your Strategy Partner

In 2024, Pure Storage is not just a solution provider but a partner in aligning your strategies with clear, achievable goals. We are committed to positioning your business for success now and in the future.

Upgrading K8s to 1.17.4 and PSO to 5.1.0

This morning I needed to upgrade one of my dev clusters to 1.17.4, and I decided to capture the experience. Don’t worry, I sped up the Ansible output flying by.

I use Kubespray to deploy and upgrade my clusters. I didn’t really do anything to prepare; I can rebuild all of my clusters pretty easily from Terraform if anything breaks.

git clone git@github.com:kubernetes-sigs/kubespray.git
cd kubespray
## Make sure you copy your actual inventory. For more information see the kubespray github repo
ansible-playbook -i inventory/dev/inventory.ini -b -v upgrade-cluster.yml
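
One note before you kick that off: Kubespray pins the Kubernetes version it deploys, so make sure your checkout supports 1.17.4 and that kube_version matches the target. A rough sketch (the tag and group_vars path are placeholders; check the Kubespray README for the release that covers 1.17.x):

git checkout <kubespray-release-tag>            ## pick a tag whose README lists 1.17.x support
grep -r kube_version inventory/dev/group_vars/  ## find where the version is pinned
## set kube_version: v1.17.4 before running the upgrade playbook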
Take some time and upgrade

Watch it go, about 40 minutes in my case. Remember, this is a dev cluster and the pods I have running can restart all they want; I don’t care. Everything upgrades through the first part of the video.
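
Once the playbook finishes, a quick check (not shown in the video) confirms every node picked up the new release:

kubectl get nodes    ## every node should now report v1.17.4 in the VERSION column

Now let’s upgrade Pure Service Orchestrator.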

helm upgrade -n pure-csi pso pure/pure-csi -f dev-values.yaml

Done.

Now, if you watch the video you will notice I had to add the Pure Storage Helm repo. This was a new jump box in the lab, so I had PSO installed, just not from this particular host. It is easy to add; more details are in the Pure Helm Chart README.
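
For reference, it is only a couple of commands (the repo URL below is the one from the README at the time of writing; double-check there in case it moves):

helm repo add pure https://purestorage.github.io/helm-charts
helm repo update
helm search repo pure-csi    ## confirm the chart shows up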

Look out San Diego! Here we come.

I am excited to be at KubeCon yet again. I think this is my third time. Pure Storage will be in booth S92; come by and see some demos of our CSI plugin. Automating persistent storage is still a big need for many K8s clusters. Pure can make it simple, scalable, and highly available.

I will be at the booth and around a few sessions so please come and say hello.

Also, ask me all about how Pure will support K8s on VMware in all its various forms.

Come by the booth, see a demo, get a turtle approved reusable straw. Enter to win some cool things.

Managing Multiple Kubernetes Clusters

There was a question on Twitter, and I thought I would write down my process for others to learn from. First, a little background. Kubernetes is managed mostly using a tool called kubectl (kube-control, kube-cuddle, kube-C-T-L, whatever). This tool looks for the configuration it needs to talk to the Kubernetes management API. A sanitized sample can be seen by running:

kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://10.21.142.140:6443
  name: k8s-dev-1
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://10.21.142.130:6443
  name: k8s-lab-1
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://10.21.142.150:6443
  name: k8s-prod-1
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://10.21.142.160:6443
  name: k8s-ubt18
contexts:
- context:
    cluster: k8s-ubt18
    user: I-AM-GROOT
  name: I-AM-GROOT@k8s-ubt18
- context:
    cluster: k8s-dev-1
    user: k8s-dev-1-admin
  name: k8s-dev-1-admin@k8s-dev-1
- context:
    cluster: k8s-lab-1
    user: k8s-lab-1-admin
  name: k8s-lab-1-admin@k8s-lab-1
- context:
    cluster: k8s-prod-1
    user: k8s-prod-1-admin
  name: k8s-prod-1-admin@k8s-prod-1
current-context: I-AM-GROOT@k8s-ubt18
kind: Config
preferences: {}
users:
- name: I-AM-GROOT
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
- name: k8s-dev-1-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
- name: k8s-lab-1-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
- name: k8s-prod-1-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED

You can see there are Clusters, Contexts, and Users. The kubectl config get-contexts and use-context commands let you list and switch contexts. In my use case I have a single context per cluster.

kubectl config get-context
CURRENT   NAME                          CLUSTER      AUTHINFO           NAMESPACE
*         I-AM-GROOT@k8s-ubt18          k8s-ubt18    I-AM-GROOT         
          k8s-dev-1-admin@k8s-dev-1     k8s-dev-1    k8s-dev-1-admin    
          k8s-lab-1-admin@k8s-lab-1     k8s-lab-1    k8s-lab-1-admin    
          k8s-prod-1-admin@k8s-prod-1   k8s-prod-1   k8s-prod-1-admin
kubectl config use-context k8s-dev-1-admin@k8s-dev-1
Switched to context "k8s-dev-1-admin@k8s-dev-1".

Switching this way became cumbersome, so I now use a tool called kubectx along with kubens: https://github.com/ahmetb/kubectx. Below you can see my prompt shows my cluster plus the namespace, “k8s-dev-1-admin@k8s-dev-1:default”. Pretty sweet to see that, and it has saved me from removing deployments from the wrong cluster.

(base) (⎈ |k8s-dev-1-admin@k8s-dev-1:default)owings@owings--MacBookPro15:~/Dropbox/gitproj/cattle-clusters$ 
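
Getting started with kubectx and kubens is quick. On my Mac it is a single Homebrew install (per the kubectx README; other platforms have their own install paths):

brew install kubectx    ## installs both kubectx and kubens
kubectx                 ## list contexts
kubens kube-system      ## switch the default namespace for the current context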

Now, the kubectl tool will look in your environment for a variable named KUBECONFIG. Many times this will be set to KUBECONFIG=~/.kube/config. If you modify your .bash_profile on OSX or .bashrc on Ubuntu (and others), you can point that variable anywhere. I formerly had it pointed to a single file per cluster. For example:

KUBECONFIG=~/.kube/config.prod:~/.kube/config.dev:~/.kube/config.lab

This worked great, but a few third-party management tools had issues switching between multiple files; the big one for me was the Kubernetes module for Python. So I moved to a single combined config file at ~/.kube/config.

So what do I do now?

so many configs

Here is my basic workflow. I don’t automate it yet, as I don’t want to carelessly overwrite something.
1. Run an Ansible playbook that grabs the admin.conf file from /etc/kubernetes on the masters of the cluster.
2. Manually modify the KUBECONFIG environment variable to be KUBECONFIG=~/.kube/config:~/latestconfig/new.config
3. Run kubectl config view --raw to make sure it is all there; the --raw flag unhides the keys and such.
4. Copy ~/.kube/config to ~/.kube/config.something as a backup.
5. Run kubectl config view --raw > ~/.kube/config
6. Open a new terminal that uses my original KUBECONFIG environment variable and make sure all the clusters show up.
7. Clean up the old config if I am feeling extra clean.

Example on my Ubuntu clusters:

ansible-playbook -i inventory.ini -b -v get-me-some-key.yml -u ubuntu
export KUBECONFIG=~/.kube/config:~/latestconfig/config.prod01
kubectl config view --raw
cp ~/.kube/config ~/.kube/config.10.02.2019
kubectl config view --raw > ~/.kube/config

# In a new window
kubectl config view
kubectl get nodes
kubectl get pods

# Using kubectx, showing output
kubectx
I-AM-GROOT@k8s-ubt18
k8s-dev-1-admin@k8s-dev-1
k8s-lab-1-admin@k8s-lab-1
k8s-prod-1-admin@k8s-prod-1

kubectx I-AM-GROOT@k8s-ubt18
Switched to context "I-AM-GROOT@k8s-ubt18".

Not really hard or too complicated. I destroy clusters pretty often so sometimes I will blow away the config and then remerge my current clusters into a new config file.
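
If you ever want to script the merge, kubectl can do most of it in one shot. Here is a sketch of the same workflow condensed; note --flatten, which inlines the certificate data so the merged file is self-contained (back up first, as always):

cp ~/.kube/config ~/.kube/config.backup
KUBECONFIG=~/.kube/config:~/latestconfig/new.config kubectl config view --flatten > /tmp/merged
mv /tmp/merged ~/.kube/config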

https://giphy.com/gifs/guardians-of-the-galaxy-groot-baby-xUySTXEp8EhA9PG8j6

Thanks, @CodyHosterman. I am Incorrigible.

When Mr. Top 10 vBlogger mentions you and your VMworld session, it is appropriate to always say thank you. If you are interested in what is going on with Pure Storage at VMworld, be sure to read through Cody’s post to see all of our sessions. I will have some demos in the booth of Kubernetes on VMware vSphere with PKS (and more), so please be sure to come by and check them out.

Unsure what Cody means…

VMworld 2018 in Las Vegas

I was going to write my own post, but Cody Hosterman already did a great one.

Cody’s VMworld 2018 and Pure Storage Blog

The sessions are filling up, so it is a good idea to register and get there early. I am very excited to talk about Kubernetes on vSphere. The session will follow my journey of learning containers and Kubernetes over the last two years or so. I hope everyone learns something.

Last year: here I am talking about containers in front of a container. Boom!

Deploying Persistent Storage in Docker Swarm using Pure Storage Whitepaper

Spreading the word about a new paper published by Simon Dodsley on Deploying Persistent Storage in Docker Swarm.

In this paper, Simon uses the Pure Docker Volume Plugin to create persistent storage for CockroachDB. That is all well and good if you want to play with CockroachDB, but it also lays the foundation for using the plugin to create persistent data for your own app.
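
If you want a feel for it before reading the paper, creating a volume through the plugin is a one-liner, and mounting it into a container is standard Docker. A quick sketch (the driver name and size option follow the plugin docs; the volume name and CockroachDB flags here are just illustrative):

docker volume create --driver=pure -o size=100GB crdb-data
docker run -d -v crdb-data:/cockroach/cockroach-data cockroachdb/cockroach start --insecure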

What applications are you using with containers that require persistent (and reliable) data storage? I would be very interested in seeing how this works for everyone else with their own apps.

Come see @CodyHosterman at VMworld, and if he is too busy you can see me

Look for a post about going to In-N-Out some time soon; it is my tradition.

Be sure to check out what we will be doing at VMworld at the end of the month. Click the banner below once you are done being mesmerized by Chappy. Sign up for a 1:1 demo or meeting; I’ll be there and would love to meet with you. See how focused a demo I give.

 


Sessions to be sure to see featuring the Amazing Cody Hosterman

SDDC9456-SPO: Implementing Self-Service Storage Provisioning with vRealize Automation Xaas

VMware vCenter is no longer meant to be the end-user interface for requesting and managing virtual machines and related resources, and storage is no exception. Join Cody Hosterman as he discusses how vRealize Automation Anything-as-a-Service (XaaS) provides the ability to easily import vRealize Orchestrator workflows to control, manage, and provision storage via the self-service catalog offering of vRealize Automation.

Wednesday, Aug 31, 2:00 PM – 3:00 PM

NF9455-SPO: Best Practices for All-Flash Data Reduction Arrays with VMware vSphere

As All-Flash Data Reduction arrays are becoming commonplace in VMware environments due to their performance, flexibility, and ease of use, it is important to understand how to best implement and manage them with ESXi. Data reduction and flash change how an administrator should think about various configuration options within VMware, and those will be discussed in detail. VAAI, space reclamation, virtual disks, SIOC, SDRS, queue depths, multipathing, and other points will be highlighted.

Monday, Aug 29, 2:30 PM – 3:30 PM

Seizing AD Roles – File under Good to know

So let’s say the power goes out and half of the VMs on your “lab storage that uses local disks” go into an infinite BSOD loop. I was lucky, as one of the servers that still worked was an AD Domain Controller with DNS. Since I usually don’t try to fight BSODs, I just rebuilt. One very helpful page for moving the AD roles was this article on seizing them, which I had to do since the server holding the roles was DOA.

 

https://technet.microsoft.com/en-us/library/cc816779(v=ws.10).aspx
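
The short version from that article, filed here for future me: you seize the roles with ntdsutil from a surviving DC (replace <server> with your DC, and only seize the roles the dead server actually held):

ntdsutil
roles
connections
connect to server <server>
quit
seize schema master
seize naming master
seize infrastructure master
seize rid master
seize pdc
quit
quit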

 

Enjoy and file this under Good to Know
