Storage Caching vs Tiering Part 2

Recently I had the privilege of being a Tech Field Day delegate. Tech Field Day is organized by Gestalt IT; if you want more detail on the event, visit their site. In the interest of full disclosure, the vendors we visit sponsor the event, but the delegates are under no obligation to review the sponsoring companies favorably or unfavorably.

After last week's post on tierless caching, I wanted to follow up with my thoughts on a second Tech Field Day vendor. Avere gave a very interesting and technical presentation. I appreciated being engaged on an engineering level rather than with a marketing pitch.

Avere tiers everything. It is essentially a scale-out NAS solution (they call it the FXT Appliance) that can front-end any existing NFS storage; someone else described it to me as file acceleration. The Avere cluster holds active data internally across its NAS units. The “paranoia meter” lets you set how often the mass storage device behind it is updated. If you need more availability or speed, you add Avere units; if you need more disk space, you add to your mass storage. In their benchmarking tests they basically used some drives connected to a CentOS machine running NFS, front-ended by Avere’s NAS units, and were able to get the required IOPS at a fraction of the cost of NetApp or EMC.
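
To make that write-back idea concrete, here is a minimal sketch of a fast tier with a configurable flush interval, which is how I understand the “paranoia meter.” This is my own illustration, not Avere's code; the names and defaults are made up.

```python
# Hypothetical sketch of a write-back fast tier with a configurable
# flush interval (my reading of the "paranoia meter"); not Avere's code.
import time

class WriteBackTier:
    def __init__(self, backing_store, flush_interval_s=30.0):
        self.fast = {}               # fast tier: block id -> data
        self.dirty = set()           # blocks not yet on mass storage
        self.backing = backing_store
        self.flush_interval = flush_interval_s   # the "paranoia meter"
        self.last_flush = time.monotonic()

    def write(self, block, data):
        self.fast[block] = data      # ack once the fast tier has it
        self.dirty.add(block)
        self.maybe_flush()

    def read(self, block):
        if block not in self.fast:   # miss: promote from mass storage
            self.fast[block] = self.backing.read(block)
        return self.fast[block]

    def maybe_flush(self):
        now = time.monotonic()
        if now - self.last_flush >= self.flush_interval:
            for block in list(self.dirty):   # push dirty blocks down
                self.backing.write(block, self.fast[block])
            self.dirty.clear()
            self.last_flush = now
```

Turning the interval down buys a smaller window of unflushed data at the cost of more traffic to the mass storage behind it.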

The Avere Systems blog provides some good questions on tiering.

The really good part of the presentation is how they write between the tiers. Everything is optimized for the particular type of media: SSD, SAS, or SATA.
When I asked about NetApp’s statements on tiering (funny, both presentations were on the same day), Ron Bianchini responded that “when you sell hammers, everything is a nail.” I believe him.
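
Avere didn't share the details of those per-media optimizations, but the general idea can be sketched: size and batch writes to suit each device type. The numbers below are invented for illustration, not Avere's values.

```python
# Invented, illustrative media-aware flush policies; not Avere's numbers.
# SSDs handle small random writes well, while SAS and especially SATA
# drives prefer fewer, larger, more sequential writes.
MEDIA_POLICY = {
    "ssd":  {"write_kib": 4,    "coalesce": False},
    "sas":  {"write_kib": 256,  "coalesce": True},
    "sata": {"write_kib": 1024, "coalesce": True},
}

def plan_flush(media: str, dirty: bytes):
    """Split a dirty buffer into chunks sized for the target media."""
    size = MEDIA_POLICY[media]["write_kib"] * 1024
    return [dirty[i:i + size] for i in range(0, len(dirty), size)]
```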

So how do we move past all the marketing speak and get to the truth when it comes to caching and tiering? I am leaning toward treating any location where data lives for any period of time as a tier. I think a cache is a tier. A really fast cache for reads and writes is certainly a tier. Different kinds of disks are tiers. So I would say everyone has tiers. The value comes in when the storage vendor innovates and automates the movement and management of that data.

My questions/comments about Avere:

1. Slick technology. I would like to see it prove itself in the enterprise over time. People might be scared off because it is not one of the “big names”.
2. Having come from Spinnaker, is the plan to go long term with Avere, or to build something to be purchased by one of the big guys?
3. I would like to see how the methods used by the Avere FXT appliance could be applied to block storage. There are plenty of slow, inexpensive iSCSI products that would benefit from a device like this on the front end.

4 thoughts on “Storage Caching vs Tiering Part 2”

  1. Hi Jon,

    More great insights. I agree that much of the “marketing speak” around caching and tiering confuses the issue. But I do see one big difference between the two approaches that has implications for companies considering these technologies. Most caching solutions keep an extra copy of data in the cache, while most tiered solutions keep only one copy of the data in any given layer. The downside of caching, then, is the cost of keeping an extra copy, but if the data lives on much cheaper media (say, SATA disk versus a flash cache), this extra cost is minimal. And caching brings two very important benefits: (1) cached data does not need to be RAID protected, saving up to 50% in capacity, and (2) cached data can be overwritten at any time without having to copy it down to a slower tier, making it possible to build a system that is much more responsive to workload changes. (A sketch of this difference in eviction behavior follows after this comment.)

    Ron and his team have built a great solution for companies with high performance NAS needs. As for your wish for accelerated iSCSI solutions, that’s exactly what we’ve built at Nimble Storage! If you are interested, we would love to tell you more about what we are up to.

    Dan Leary
    Nimble Storage
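
    A minimal sketch of the distinction Dan describes, assuming a write-through cache so cached lines are always clean; this is a toy model of my own, not Nimble's or Avere's design:

    ```python
    # Toy model: cache eviction vs. tier demotion.
    class Cache:
        """Keeps an extra copy; the authoritative copy lives below."""
        def __init__(self, backing):
            self.backing = backing   # authoritative store (e.g. SATA)
            self.lines = {}          # cached copies (e.g. flash)

        def write(self, block, data):
            self.lines[block] = data
            self.backing[block] = data   # write-through keeps lines clean

        def evict(self, block):
            self.lines.pop(block, None)  # clean copy: just drop it

    class Tier:
        """Holds the only copy, so demotion must copy data down first."""
        def __init__(self, lower):
            self.lower = lower           # slower tier below
            self.blocks = {}

        def demote(self, block):
            self.lower[block] = self.blocks.pop(block)  # copy-down required
    ```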

  2. Driving an iSCSI back end is a “trivial” add-on to the logic Avere has. They already do block-level tracking and migration (a toy sketch of that idea follows below). Just because they wrapped a NAS interface around it doesn’t change the block internals. It would be just as easy to present an iSCSI front end as it is to talk to an iSCSI back end. Frankly, put an FC card in there and you can do all of it.

    I’d like to put a Linux project together to create an MD module that does the same thing. I already do a very poor man’s version by putting EXT3 journals on a RAID-1 between a RAM disk and an SSD, with the latter on lazy write (sketched below).

    “Tiering” is just “caching” with some notion of persistence and more than one level.
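
    A toy version of the block-level heat tracking alluded to above; the block size and promotion threshold are invented numbers:

    ```python
    # Toy block-level heat tracking; block size and threshold are invented.
    from collections import Counter

    class HeatTracker:
        def __init__(self, hot_threshold=16, block_size=4096):
            self.hits = Counter()        # access count per block
            self.hot_threshold = hot_threshold
            self.block_size = block_size

        def record(self, offset):
            self.hits[offset // self.block_size] += 1

        def hot_blocks(self):
            """Blocks that have earned promotion to the fast tier."""
            return [b for b, n in self.hits.items()
                    if n >= self.hot_threshold]
    ```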
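
    And a rough sketch of the “poor man’s version” above: an md RAID-1 whose fast leg is a RAM disk and whose slow leg is an SSD marked write-mostly with write-behind (the lazy write), used as an external EXT3 journal. The device names are placeholders, it needs root, and it destroys data on those devices, so treat it purely as illustration:

    ```python
    # Illustration only: device names are placeholders and these commands
    # are destructive. Requires root.
    import subprocess

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Mirror a RAM disk and an SSD partition. --write-mostly steers reads
    # to the RAM disk; --write-behind (which needs a bitmap) acknowledges
    # writes once the RAM disk has them and trickles them to the SSD.
    run(["mdadm", "--create", "/dev/md0", "--level=1", "--raid-devices=2",
         "--bitmap=internal", "--write-behind=256",
         "/dev/ram0", "--write-mostly", "/dev/sdb1"])

    # Format the mirror as an external journal device, then point an ext3
    # filesystem on another disk at it.
    run(["mke2fs", "-O", "journal_dev", "/dev/md0"])
    run(["mkfs.ext3", "-J", "device=/dev/md0", "/dev/sdc1"])
    ```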
