Hey, Don’t Break EBS

TL;DR – EBS volumes fail to mount when multipathd is running on EKS worker nodes without a blacklist configured in multipath.conf.

EKS and PSO Go Great Together!

Amazon Elastic Kubernetes Service (EKS) is a great way to dive in with managed Kubernetes in the cloud. Pure Service Orchestrator (PSO) integrates EKS worker nodes with Cloud Block Store on AWS. I created this Ansible playbook to make sure the right packages are installed and the right services are started on my worker nodes.

---
- hosts: all
  become: yes
  tasks:
  - name: Install prerequisites
    yum:
      name: ['iscsi-initiator-utils', 'device-mapper-multipath']
      update_cache: yes
  - name: Create directories
    file:
      path: "{{ item }}"
      state: directory
      mode: '0755'
    with_items:
      - /etc/multipath
  - name: Copy multipath.conf with owner and permissions
    copy:
      src: ./multipath.conf
      dest: /etc/multipath.conf
      owner: root
      group: root
      mode: '0644'
  - name: Restart iscsid
    service:
      name: iscsid
      state: restarted
  - name: Restart multipathd
    service:
      name: multipathd
      state: restarted

In my previous testing with PSO and EKS I was focused on using PSO only. Recently, migrating from EBS to Cloud Block Store (CBS) has proven pretty valuable to our customers in the cloud. To create the demo I used an app I often use for demoing PSO: two web server containers attached to a MySQL container with a persistent volume. Very easy. I noticed, though, that the built-in gp2 StorageClass started behaving very oddly after I installed PSO. I installed the AWS EBS CSI driver; same thing. It could not mount volumes or snapshot them in EBS, while PSO volumes on CBS worked just fine. I figure most customers don’t want me to break EBS.
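If you want to reproduce the symptom yourself, a minimal claim against the built-in gp2 StorageClass is enough. The name gp2-test below is just a placeholder I made up:

# Hypothetical repro: request an EBS-backed volume from the default gp2 StorageClass
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gp2-test
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gp2
  resources:
    requests:
      storage: 1Gi
EOF

With multipathd misbehaving, the claim may provision, but any pod that mounts it sits in ContainerCreating with mount timeouts in its events.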

After digging around the internet and random old GitHub issues, I found no one with exactly the same problem; people were hitting maybe one of the four symptoms I was seeing. So I retested each step of my process to find where it broke, and it broke after I enabled the device-mapper-multipath package. It wasn’t PSO so much as a very important prerequisite of PSO causing the issue. What it came down to is that the EBS volumes were getting grabbed by multipathd, and the StorageClass didn’t know how to handle the different device names.
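You can see this directly on a worker node with the standard multipath-tools and util-linux commands (this is a sketch of what to look for, not my exact output):

multipath -ll   # the EBS volumes show up here as multipath maps (dm devices)
lsblk           # the EBS disk now has a dm-* child, so it is no longer
                # available under the device name the StorageClass expects

So I had to find how to use multipathd for just the Pure volumes. The right settings in multipath.conf solved this. This is what I used as an example: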

blacklist {
    device {
        vendor "*"
    }
}
blacklist_exceptions {
    device {
        vendor "PURE"
        product "*"
    }
}

I am telling multipathd to ignore everything BUT Pure. This solved my issue. So I saved this as multipath.conf in my local directory and added the copy task in the Ansible playbook above to push that file to each worker node in EKS.
1. Copy the Ansible playbook above to a file named prereqs.yaml.
2. Copy the multipath blacklist settings above to multipath.conf and save it in the same directory as prereqs.yaml.
3. Run the Ansible playbook as shown below. (Make sure inventory.ini has the IPs of the worker nodes and that you have the SSH key to log in to each one.)

# Make sure inventory.ini has the SSH IPs of each node.
# prereqs.yaml includes the content from above

ansible-playbook -i inventory.ini -b -v prereqs.yaml -u ec2-user

This will install the packages, copy multipath.conf to /etc, and restart the services to make sure they pick up the new config.
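To confirm the fix, I check a node and then the cluster. Assuming the hypothetical gp2-test claim from earlier, something like this is enough:

# On a worker node: only PURE devices should show up as multipath maps now
sudo multipath -ll

# Back on the cluster: the EBS-backed claim from earlier should mount into
# pods again, and snapshots should work
kubectl get pvc gp2-test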
