Migrating K8s Stateful Apps with Pure Storage

I have to move my harbor instance to a new cluster.

  1. On the old cluster, find all the PVCs:
kubectl -n harbor get pvc
NAME                                     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-harbor-harbor-redis-0               Bound    pvc-aebe5589-f484-4664-9326-03ff1ffb2fdf   5Gi        RWO            pure-block     24m
database-data-harbor-harbor-database-0   Bound    pvc-b506a2d4-8a65-4f17-96e3-f3ed1c25c56e   5Gi        RWO            pure-block     24m
harbor-harbor-chartmuseum                Bound    pvc-e50b2487-2a88-4032-903d-80df15483c37   100Gi      RWO            pure-block     24m
harbor-harbor-registry                   Bound    pvc-923fa069-21c8-4920-a959-13f7220f5d90   200Gi      RWO            pure-block     24m
  2. Clone in the FlashArray. For each volume backing the PVCs listed above, create either a snapshot or a full clone on the array.
  3. Bring up the new app with the same-sized PVCs on your new cluster (see the PVC sketch at the end of this section).
kubectl -n harbor scale deployment --replicas 0 -l app=harbor
  4. Scale the app to 0 replicas on the new k8s cluster (example above).
  5. On the FlashArray, clone each old volume and overwrite the corresponding new volume, using the PVC volume names from the new cluster:
kubectl -n harbor get pvc

  6. Scale the app back to the required replicas and verify it works.
kubectl -n harbor scale deployment --replicas 1 -l app=harbor
  7. Point DNS to the new load balancer/ingress:
kubectl -n harbor get ingresses
NAME                    HOSTS                                         ADDRESS         PORTS     AGE
harbor-harbor-ingress   harbor.newstack.local,notary.newstack.local   10.xx.xx.xx  80, 443   32m

Change DNS to the new cluster.
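If Harbor is deployed from its helm chart, the claims on the new cluster are normally created for you, but if you need to pre-create or double-check one by hand, a claim matching the registry volume from the listing above would look roughly like this (namespace, size, and StorageClass taken from that output):

# Illustrative only: a claim matching harbor-harbor-registry, with the same
# size and StorageClass so the array volume behind it can be overwritten
# with the clone of the old volume.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: harbor-harbor-registry
  namespace: harbor
spec:
  storageClassName: pure-block
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 200Gi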

All my data is now migrated.

Kubespray and vSphere VMs

I build and destroy Kubernetes clusters nearly weekly. Doing it on VMs makes this super easy. I also need to demo Pure Service Orchestrator, so having in-guest iSCSI is a must. Following this repo should give any vSphere admin an easy way to learn kubectl, Helm and PSO (of course PSO works with Pure FlashArray and FlashBlade). This uses Terraform to create the VMs and Kubespray to install k8s. Ansible can also be used for a few automations of package installs and updates.

I am going to try something new: rather than recreate the GitHub readme here, I will just share the repo link.

https://github.com/2vcps/tf4vsphere

Pure Storage and Weaveworks Webinar – March 17

I am pretty excited to be doing a webinar with Weaveworks on Weave Kubernetes Platform and Pure Storage. I met Damani at Kubecon and Re:Invent and we have been talking about doing this for months. I am excited to integrate Pure Service Orchestrator and Pure Storage into a platform providing a full collection of what you need to run k8s. Some things we will cover:

  • How the Weave Kubernetes Platform and its GitOps workflows unify deployment, management, and monitoring for clusters and apps
  • How Pure Service Orchestrator accelerates application build and delivery with six nines of storage uptime, on prem and in the public cloud
  • Live Demo – I am going to show some CSI goodness. Promise.
  • How does Pure make stateful apps a no-brainer?

Use this link to register now!

Some other important questions you might have from this pic:

When did JO’s beard explode into this?

JO from Pure Storage’s North Georgia foothills HQ

Py-bot in a Container

So during Pure kickoff last week I did several sessions on Pure Storage and Kubernetes for our yearly Tech Summit. It was very fun to prepare for. I wanted to do something different, so I decided to take the py-bot I was running on my Raspberry Pi and up-level it with integration into K8s and the FlashBlade with PVCs. This is the second post, and it covers how to build the Docker container and deploy it to k8s.


Check out the repo on github: https://github.com/2vcps/python-twitter-bot

Take a look at the code in ./bots

  • autoreply.py – code to reply to mentions
  • config.py – sets the API connection
  • followFollowers_data.py – Follows anyone that follows you, then writes some of their recent tweets to a CSV on a pure-file FlashBlade filesystem
  • followFollowers.py – All the followback with no data collection
  • tweetgamescore.py – future
  • tweetgamesetup.py – future

Py-bot In Kubernetes

Prereqs

Step 1

Build the docker image and push to your own repo. Make sure you are authenticated to your internal repo.

$ docker build -t yourrepo/you/py-bot:v1 .
$ docker push yourrepo/you/py-bot:v1

Step 2

Create a secret in your k8s environment with the keys as variables. Side note: this is the only method I found that does not break the keys when storing them in K8s. If you have a better working way to do it, let me know.

Edit env-secret.yaml with your keys from Twitter and the search terms.
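As a rough sketch, env-secret.yaml is just an Opaque Secret whose keys become the environment variables the bots read (the exact key names, including the two search keys, must match what config.py and the deployments expect; the values here are placeholders):

apiVersion: v1
kind: Secret
metadata:
  name: twitter-api-secret
type: Opaque
stringData:                               # stringData avoids base64-encoding the keys by hand
  CONSUMER_KEY: "your consumer key"
  CONSUMER_SECRET: "your consumer secret"
  ACCESS_TOKEN: "your access token"
  ACCESS_TOKEN_SECRET: "your access token secret"
  searchkey1: "purestorage kubernetes"    # search term used by favretweet
  searchkey2: "some other search term"    # search term used by autoreply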

kubectl apply -f env-secret.yaml

Verify the keys are in your cluster.

kubectl describe secret twitter-api-secret

Step 3

Edit deployment.yaml and deploy the app. In my example I have 3 different deployments and one PVC. If you plan to not capture data, make sure to change the followback deployment to launch followFollowers.py and not followFollowers_data.py. Additionally, remove the PVC information if you are not using it.

Be sure to change the image for each deployment to your local repository path.
Notice that the autoreply deployment uses the env variable searchkey2 and the favretweet deployment uses searchkey1. This allows each app to search on different terms.

Be careful: if you are testing the favretweet.py program and use a common word for the search, you will see many, many likes and retweets.
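For reference, here is a rough sketch of what one of the three deployments (the followback bot with data capture) plus its PVC can look like; the PVC name, mount path, and script path are illustrative, so match them to what your deployment.yaml and followFollowers_data.py actually use:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: twitter-data
spec:
  storageClassName: pure-file        # RWX filesystem on FlashBlade via PSO
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: followback
spec:
  replicas: 1
  selector:
    matchLabels:
      app: followback
  template:
    metadata:
      labels:
        app: followback
    spec:
      containers:
      - name: followback
        image: yourrepo/you/py-bot:v1            # the image pushed in Step 1
        command: ["python", "bots/followFollowers_data.py"]
        envFrom:
        - secretRef:
            name: twitter-api-secret             # keys become env variables
        volumeMounts:
        - name: data
          mountPath: /data                       # where the CSV gets written
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: twitter-data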

Now deploy

kubectl apply -f deployment.yaml

kubectl get pod

NAME                          READY   STATUS    RESTARTS   AGE
autoreply-df85944d5-b9gs9     1/1     Running   0          47h
favretweet-7758fb86c7-56b9q   1/1     Running   0          47h
followback-75bd88dbd8-hqmlr   1/1     Running   0          47h

kubectl logs favretweet-7758fb86c7-56b9q

INFO:root:API created
INFO:root:Processing tweet id 1229439090803847168
INFO:root:Favoriting and RT tweet Day off. No pure service orchestrator today. Close slack Jon, do it now.
INFO:root:Processing tweet id 1229439112966311936
INFO:root:Processing tweet id 1229855750702424066
INFO:root:Favoriting and RT tweet In Pittsburgh. Taking about... Pure Service Orchestrator. No surprise there.  #PSO #PureStorage
INFO:root:Processing tweet id 1229855772789460992
INFO:root:Processing tweet id 1230121679881371648
INFO:root:Favoriting and RT tweet I nearly never repost press releases, but until I can blog on it.  @PureStorage and Pure Service Orchestrator join… https://t.co/A6wxvFUUY7
INFO:root:Processing tweet id 1230121702509531137

kubectl logs followback-75bd88dbd8-hqmlr

INFO:root:Waiting... 300s
INFO:root:Retrieving and following followers
INFO:root:purelyDB
INFO:root:PreetamZare
INFO:root:josephbreynolds
INFO:root:PureBob
INFO:root:MercerRowe
INFO:root:will_weeams
INFO:root:JeanCarlos237
INFO:root:dataemilyw
INFO:root:8arkz

More info

My Blog 2vcps.io

Follow me @jon_2vcps

Python Twitter Bot

So during Pure kickoff last week I did several sessions on Pure Storage and Kubernetes for our yearly Tech Summit. It was very fun to prepare for. I wanted to do something different, so I decided to take the py-bot I was running on my Raspberry Pi and up-level it with integration into K8s and the FlashBlade with PVCs. This first post just goes over the Python code, how it works, and what you need to do to get it working for yourself.

Check out the repo on github: https://github.com/2vcps/python-twitter-bot

Py-Bot

This is a Twitter bot, built to run on Kubernetes, that uses Pure Service Orchestrator for persistent data.
Take a look at the code in ./bots

  • autoreply.py – code to reply to mentions
  • config.py – sets the API connection
  • followFollowers_data.py – Follows anyone that follows you, then writes some of their recent tweets to a CSV on a pure-file FlashBlade filesystem
  • followFollowers.py – All the followback with no data collection
  • tweet_game_score.py – future
  • tweet_game_setup.py – future

Testing the code on your machine

Prereqs

  • python3
  • twitter account with API keys
  • Pure Service Orchestrator and working Kubernetes

Step 1
$ pip install -r requirements.txt

Step 2
Create env variables for each key. config.py will pull them from the local OS, in this case your local machine.

export CONSUMER_KEY='some key'
export CONSUMER_SECRET='some secret'
export ACCESS_TOKEN='some token'
export ACCESS_TOKEN_SECRET='some token secret'

For the autoreply.py and favretweet.py you need a search key too.

export SEARCH_KEY='the thing I search for'

Be careful: if you are testing the favretweet.py program and use a common word for the search, you will see many, many likes and retweets.

Step 3
Run the code. If all is working you will see logs and action on Twitter.

$ python ./bots/autoreply.py 
Example output:

INFO:root:API created
INFO:root:Retrieving mentions
INFO:root:1222564855921758209
INFO:root:1222564855921758209
INFO:root:1222564855921758209
INFO:root:1222564855921758209
INFO:root:1222564855921758209
INFO:root:1222564855921758209
INFO:root:1222564855921758209
INFO:root:1222564855921758209
INFO:root:1222564855921758209
INFO:root:1222564855921758209
INFO:root:1222564855921758209
INFO:root:1222564855921758209
INFO:root:1222564855921758209
INFO:root:1222564855921758209
INFO:root:1222564855921758209
INFO:root:1222564855921758209
INFO:root:Searching for purestorage kubernetes
INFO:root:Retrieving mentions
INFO:root:1222564855921758209
INFO:root:Waiting...


It will continue to run so hit control-C to exit.

Now it is time to impress your boss and use big words like kubernetes. Read on in the next post below about how to run this bot as a deployment in kubernetes.


Go to the next level:
Run py-bot in Kubernetes

Pure Service Orchestrator is Validated for Enterprise PKS

The Pure Service Orchestrator Team is excited to announce that PSO is now validated with PKS Enterprise 1.4. PSO is PKS Partner Ready.

Note: Most of this was written as the in-tree vSphere Cloud Provider in K8s was transitioning to the Cloud Native Storage (CNS) CSI driver. Use CNS and CSI when you can; the more stable versions of PKS don't support CSI by default because their K8s version is older. Use PSO for ReadWriteMany on Pure FlashBlade and CNS for block on FlashArray with VMFS datastores (vVols coming).

You can find more information on the VMware Marketplace https://marketplace.vmware.com/vsx/solutions/pure-service-orchestrator-2-5-2?ref=search

Learn more about PKS and Pure Storage with these posts:

Installing PSO in PKS with Helm
Installing PSO in PKS with the Operator
Use PKS + VMware SDDC + Pure Storage
Migrating PSO Volumes into vVols and PKS

Why PSO and PKS?

I think it is crucial to understand the options for Storage in Kubernetes first. If you have seen me present in the last 12 months you may have already seen this graphic. When I talk about Hypervisor options I am referring to the vSphere Cloud Provider (and now Cloud Native Storage). At the time you deploy the cluster you provide credentials to contact vCenter and create/manage/destroy persistent volumes on a vSphere Datastore. This option is built into PKS Enterprise. This allows you to create a custom Storage Class per datastore or even use SPBM for provisioning. Yes, that means VMFS and vVols are supported today. There is no validation or certification needed to use this plugin (VCP or CNS) with Pure Storage. Customers of PKS are already doing this today. VCP works very well with vVols. I have advocated at VMworld during my session that this unlocks data mobility options between DIY K8s clusters and PKS (or even between PKS clusters). 
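As a rough sketch (the policy and datastore names are placeholders for whatever exists in your vCenter), an SPBM-backed StorageClass for the in-tree vSphere provisioner looks something like this:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: vsphere-gold
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin
  storagePolicyName: "Gold-vVols-Policy"     # SPBM policy defined in vCenter
  # datastore: "FlashArray-Datastore-01"     # optionally pin to a specific datastore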

How does PSO fit in with PKS?
PSO fits in when you require "ReadWriteMany", aka RWX: many scalable containers all attach to and use a single Persistent Volume Claim, and the Pure Storage FlashBlade handles this use case. If you need simplicity in storage management across multiple devices, both file and block, PSO consolidates them into a single orchestration layer deployed to any cluster with a single command. PSO will also scale to new devices with a single command, simplifying what was traditionally a very complex part of a K8s environment.

Use vSphere Cloud Provider and Pure Service Orchestrator Side-by-Side

CNS + Pure Service Orchestrator

Because the Storage Classes created by PSO and the ones you manually create for VCP/CNS can coexist, you can offer choice to the end users of PKS with only an initial install effort for the Ops team and nearly zero effort from Day 2 onward. For more information, take a look at my posts on "Getting Started with PKS" and "How to install PSO in a PKS Cluster".

The thing that happened was…

So if you follow k8s development at all, you know that CSI became GA at the beginning of 2019. The in-tree vSphere Cloud Provider "driver" is now deprecated, and VMware has released Cloud Native Storage, the CSI driver for Kubernetes clusters running on vSphere. This does not change the need for Pure Service Orchestrator for RWX volumes, or for in-guest iSCSI where that is preferred. Officially, for RWO (ReadWriteOnce) volumes you should be using CNS.

Learn more about PKS and Pure Storage with these posts:
Getting started with Persistent Storage and PKS

Installing PSO in PKS with Helm
Installing PSO in PKS with the Operator
Use PKS + VMware SDDC + Pure Storage
Migrating PSO Volumes into vVols and PKS

New Release: Pure Service Orchestrator 5.0.2

The latest version of the CSI-enabled Pure Service Orchestrator is now available. Snapshots and clones for Persistent Volume Claims enable K8s clusters to move data between apps and environments. Need to make instant database copies for dev or test? Super easy now.

Since this feature leverages the capabilities of the FlashArray, the clones and snapshots have zero performance penalty and only consume globally-new blocks on the underlying array (which saves a ton of space when you make a lot of copies).

Make sure to read more on the Pure Service Orchestrator github repo on what needs to be done to enable these features in your k8s cluster. See below for more information.

CSI Snapshot and Clone features for Kubernetes

For details, see the CSI volume snapshot and CSI volume clone documentation.

  1. For the snapshot feature, ensure you have Kubernetes 1.13+ and that the feature gate is enabled via the following Kubernetes feature flag: --feature-gates=VolumeSnapshotDataSource=true
  2. For the clone feature, ensure you have Kubernetes 1.15+ and that the feature gate is enabled via the following Kubernetes feature flag: --feature-gates=VolumePVCDataSource=true
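As a quick sketch of what the clone feature looks like in use (the claim names and size here are made up; see the linked repo for the authoritative examples), a new PVC simply references an existing PVC in the same namespace as its dataSource. Restoring from a snapshot works the same way, with dataSource pointing at a VolumeSnapshot object instead.

# Requires K8s 1.15+ with --feature-gates=VolumePVCDataSource=true (see item 2 above).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-data-clone
spec:
  storageClassName: pure-block
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi          # must be at least as large as the source claim
  dataSource:
    kind: PersistentVolumeClaim
    name: postgres-data      # the existing claim to clone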


https://github.com/purestorage/helm-charts/tree/master/pure-csi#csi-snapshot-and-clone-features-for-kubernetes

More on installing the CSI Operator 5.0.2:
https://github.com/purestorage/helm-charts/tree/master/operator-csi-plugin

More on installing the CSI Helm Chart 5.0.2:
https://github.com/purestorage/helm-charts/tree/master/pure-csi

 

I ❤️ Tacos

Installing PSO in a PKS Cluster using the Operator

Learn more about PKS and Pure Storage with these posts:
Getting started with Persistent Storage and PKS

Installing PSO in PKS with Helm
Installing PSO in PKS with the Operator
Use PKS + VMware SDDC + Pure Storage
Migrating PSO Volumes into vVols and PKS

Remember to have the K8s cluster created within PKS, and think about how those PKS VMs will communicate with the FlashArray and FlashBlade.

More information and detail:
https://github.com/purestorage/helm-charts/tree/master/operator-k8s-plugin
First we must download the git repo with the installer for the Operator.

$ git clone --branch <version> https://github.com/purestorage/helm-charts.git
$ cd helm-charts/operator-k8s-plugin
$ ./install.sh --namespace=pso --orchestrator=k8s -f values.yaml
$ kubectl get all -n pso
NAME                                    READY   STATUS    RESTARTS   AGE
pod/pso-operator-b96cfcfbb-zbwwd        1/1     Running   0          27s
pod/pure-flex-dzpwm                     1/1     Running   0          17s
pod/pure-flex-ln6fh                     1/1     Running   0          17s
pod/pure-flex-qgb46                     1/1     Running   0          17s
pod/pure-flex-s947c                     1/1     Running   0          17s
pod/pure-flex-tzfn7                     1/1     Running   0          17s
pod/pure-provisioner-6c9f69dcdc-829zq   1/1     Running   0          17s
NAME                       DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/pure-flex   5         5         5       5            5           <none>          17s
NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/pso-operator       1/1     1            1           27s
deployment.apps/pure-provisioner   1/1     1            1           17s
NAME                                          DESIRED   CURRENT   READY   AGE
replicaset.apps/pso-operator-b96cfcfbb        1         1         1       27s
replicaset.apps/pure-provisioner-6c9f69dcdc   1         1         1       17s
 

Here is a sample deployment; you can copy all of this into a file called deployment.yaml:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: minio-pv-claim-rwx
  labels:
    app: minio
spec:
  storageClassName: pure-file
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 101Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  # This name uniquely identifies the Deployment
  name: minio-deployment
spec:
  selector:
    matchLabels:
      app: minio
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        # Label is used as selector in the service.
        app: minio
    spec:
      # Refer to the PVC created earlier
      volumes:
      - name: storage
        persistentVolumeClaim:
          # Name of the PVC created earlier
          claimName: minio-pv-claim-rwx
      containers:
      - name: minio
        # Pulls the default Minio image from Docker Hub
        image: minio/minio:latest
        args:
        - server
        - /storage
        env:
        # Minio access key and secret key
        - name: MINIO_ACCESS_KEY
          value: "minio"
        - name: MINIO_SECRET_KEY
          value: "minio123"
        ports:
        - containerPort: 9000
          hostPort: 9000
        # Mount the volume into the pod
        volumeMounts:
        - name: storage
          mountPath: "/storage"
---
apiVersion: v1
kind: Service
metadata:
  name: minio-service
spec:
  type: LoadBalancer
  ports:
    - port: 9000
      targetPort: 9000
      protocol: TCP
  selector:
    app: minio

Now apply the file to the cluster

# kubectl apply -f deployment.yaml

Check the pod status

$ kubectl get pod
NAME                               READY   STATUS    RESTARTS   AGE
minio-deployment-95b9d8474-xmtk2   1/1     Running   0          4h19m
pure-flex-9hbfj                    1/1     Running   2          3d4h
pure-flex-w4fvq                    1/1     Running   1          3d23h
pure-flex-zbqvz                    1/1     Running   1          3d23h
pure-provisioner-dd4c4ccb7-dp76c   1/1     Running   7          3d23h

Check the PVC status

$ kubectl get pvc
NAME                 STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
minio-pv-claim-rwx   Bound    pvc-04817b75-f98b-11e9-8402-005056a975c2   101Gi      RWX            pure-file      4h19m

Learn more about PKS and Pure Storage with these posts:
Getting started with Persistent Storage and PKS

Installing PSO in PKS with Helm
Installing PSO in PKS with the Operator
Use PKS + VMware SDDC + Pure Storage
Migrating PSO Volumes into vVols and PKS

Installing PSO in PKS with Helm

Learn more about PKS and Pure Storage with these posts:
Getting started with Persistent Storage and PKS

Installing PSO in PKS with Helm
Installing PSO in PKS with the Operator
Use PKS + VMware SDDC + Pure Storage
Migrating PSO Volumes into vVols and PKS

To get started installing PSO in your PKS cluster using Helm, follow these instructions.
Before installing PSO, the Plan in Enterprise PKS must have the "allow privileged" box checked. This setting allows the access needed to mount storage.

Scroll way down…

Apply the settings in the Installation Dashboard and wait for them to finish applying.

Create a cluster. Go get a Chick-fil-a Biscuit. 

# pks create-cluster testcluster -e test.domain.local -p small

Quick install for FlashBlade and NFS

Install Helm (more info: https://helm.sh/docs/using_helm/#role-based-access-control).

  1. Set up the RBAC role for Tiller:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
  2. # helm init --service-account tiller

Install PSO

  1. # helm install -n pso pure/pure-k8s-plugin -f values.yaml
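The values.yaml referenced above holds your array endpoints and API tokens; a minimal sketch for FlashBlade/NFS might look like the following (addresses and tokens are placeholders, and the chart's own values.yaml documents the full set of options):

arrays:
  FlashBlades:
    - MgmtEndPoint: "10.0.0.10"        # FlashBlade management VIP
      APIToken: "T-xxxxxxxx-xxxx"      # API token created on the FlashBlade
      NFSEndPoint: "10.0.0.11"         # data VIP used for NFS mounts
  # FlashArrays can be added here later, once the iSCSI prereqs below are in place:
  # FlashArrays:
  #   - MgmtEndPoint: "10.0.0.20"
  #     APIToken: "xxxxxxxx-xxxx"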

This is the quickest method to get PSO up and running. We are not adding any packages to the PKS stemcell; NFS is built in and therefore supported out of the box by PKS.

Installing PSO for FlashArray

Before deploying the PKS cluster you must tell the BOSH Director to install a few things at runtime.

Details and the packages are on my github page:

https://github.com/2vcps/pso_prereqs

This is the same method used by other vendors to add agents and drivers to PKS or CloudFoundry. 

Once you finish with the instructions you will have PSO able to mount both FlashArray and FlashBlade using their respective StorageClasses, pure-block and pure-file.
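For example, a FlashArray-backed block volume just references the pure-block StorageClass (the claim name and size here are illustrative); the minio example further down shows the pure-file / RWX equivalent:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pgdata
spec:
  storageClassName: pure-block       # block volume on FlashArray via iSCSI
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi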

Please pay attention to networking

PKS does not allow the deployment to add another NIC to the VMs it deploys. With PKS and NSX-T this is also all kept behind logical routers, so be sure the VMs have access to the storage. I would prefer no firewall and no routing between a VM and its storage, though this may not be possible. You may be able to use VLANs to keep this routing to a minimum. Just be sure to document your full network path from VM to storage for future reference.

Using PSO

Here is a sample deployment; you can copy all of this into a file called deployment.yaml:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: minio-pv-claim-rwx
  labels:
    app: minio
spec:
  storageClassName: pure-file
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 101Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  # This name uniquely identifies the Deployment
  name: minio-deployment
spec:
  selector:
    matchLabels:
      app: minio
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        # Label is used as selector in the service.
        app: minio
    spec:
      # Refer to the PVC created earlier
      volumes:
      - name: storage
        persistentVolumeClaim:
          # Name of the PVC created earlier
          claimName: minio-pv-claim-rwx
      containers:
      - name: minio
        # Pulls the default Minio image from Docker Hub
        image: minio/minio:latest
        args:
        - server
        - /storage
        env:
        # Minio access key and secret key
        - name: MINIO_ACCESS_KEY
          value: "minio"
        - name: MINIO_SECRET_KEY
          value: "minio123"
        ports:
        - containerPort: 9000
          hostPort: 9000
        # Mount the volume into the pod
        volumeMounts:
        - name: storage
          mountPath: "/storage"
---
apiVersion: v1
kind: Service
metadata:
  name: minio-service
spec:
  type: LoadBalancer
  ports:
    - port: 9000
      targetPort: 9000
      protocol: TCP
  selector:
    app: minio

Now apply the file to the cluster

# kubectl apply -f deployment.yaml

Check the pod status

$ kubectl get pod
NAME                               READY   STATUS    RESTARTS   AGE
minio-deployment-95b9d8474-xmtk2   1/1     Running   0          4h19m
pure-flex-9hbfj                    1/1     Running   2          3d4h
pure-flex-w4fvq                    1/1     Running   1          3d23h
pure-flex-zbqvz                    1/1     Running   1          3d23h
pure-provisioner-dd4c4ccb7-dp76c   1/1     Running   7          3d23h

Check the PVC status

$ kubectl get pvc
NAME                 STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
minio-pv-claim-rwx   Bound    pvc-04817b75-f98b-11e9-8402-005056a975c2   101Gi      RWX            pure-file      4h19m

Learn more about PKS and Pure Storage with these posts:
Getting started with Persistent Storage and PKS

Installing PSO in PKS with Helm
Installing PSO in PKS with the Operator
Use PKS + VMware SDDC + Pure Storage
Migrating PSO Volumes into vVols and PKS

Kubernetes on AWS with Cloud Block Store

Only a slight nudge from @CodyHosterman to put this post together.

Kubernetes deployed into AWS is a method many organizations are using to get into using K8s. Whether you deploy K8s with kubeadm, kops, Kubespray, Rancher, Weaveworks, OpenShift, etc., the next big question is: how do I do persistent volumes? While EBS has StorageClass integrations, you may be interested in getting better efficiency and reliability than traditional block in the cloud. That is one of the great uses of Cloud Block Store: highly efficient and highly reliable storage built for AWS, with the same experience as the on-prem FlashArray. By utilizing Pure Service Orchestrator's Helm chart or operator you can now take advantage of Container Storage as a Service in the cloud. Are you using Kubernetes in AWS on EC2 and have questions about how to take advantage of Cloud Block Store? Please ask me here in the comments or @jon_2vcps on Twitter.

  1. Persistent Volume Claims will not always be 100% full. Cloud Block Store is deduped, compressed and thin. Don't pay for 100% of a TB if it is only 1% full. I do not want to be in the business of keeping developers from getting the resources they need, but I also do not want to be paying for it when they over-estimate.
  2. Migrate data from on-prem volumes such as K8s PVCs, VMware vVols, and native physical volumes into the cloud and attach them to your Kubernetes environment. See the YouTube demo below for an example. What the demo shows is creating an app in Kubernetes on prem, loading it with some data (photos), replicating that application to the AWS cloud, and using Pure Service Orchestrator to attach the data to the K8s-orchestrated application using Cloud Block Store. This is my re-working of Simon's tech preview demo from the original launch of Cloud Block Store last November.

  3. Simple. Make storage simple. One common tweet I see from the Kubernetes detractors is how complicated Kubernetes can be. Pure Service Orchestrator makes the storage layer amazingly simple: a single command line to install or upgrade, and pooling across multiple devices.

Get Started today:
Below I will include some links on the different installs of PSO. Now don't let the choices scare you. The Container Storage Interface, or CSI, is the newest API for common interaction with all storage providers. While FlexVolume was the original storage solution, it makes sense to move forward with CSI, and this is especially true for newer versions of Kubernetes that include CSI by default. So whether you are starting to use K8s for the first time today or your cluster is on K8s 1.11, we have you covered. Use the links below to see the install process and prerequisites for PSO.

FlexVol Driver:
Pure Service Orchestrator Helm Chart
Pure Service Orchestrator Operator

CSI Driver:
Pure Service Orchestrator CSI Helm
Pure Service Orchestrator CSI Operator

Talking Pure and K8s on the Virtually Speaking Podcast at #PureAccelerate