Pearls of wisdom from Storage Mojo
Last night I got a chance to speak with Robin Harris, and I asked permission to cross-post his blog entries on mine. We had a great discussion, and I encourage folks to follow his blog, because he is looking out for the storage user.

Below are some excerpts from his blog.

NetApp’s Battle Shots

June 12th, 2006 by Robin Harris in Enterprise, NAS, IP, iSCSI

NetApp’s announcement of multi-petabyte namespace support in its Data ONTAP GX 7G storage operating system (my, doesn’t that just roll off the tongue!) should allow it to corner several shrinking HPC markets. Industrial Light & Magic used the Spinnaker version to store 6 petabytes of Star Wars Episode III special effects, including 300 TB of the opening battle shots. If only they’d stopped there.

Raquel Welch in One Million BC vs NetApp in One Million IOPS
If the block storage people are wondering about NetApp’s intentions, wonder no more. They are gunning for the high-end data center storage market now dominated by EMC, IBM and Hitachi. One clue: the 1,000,000 I/Os per second SPEC mark. True, it was mostly a stunt that flaunted the ripped abs of their 768 GB cache, and performance started degrading fast around 900k, but so what? This is about bragging rights, not real life.

As Greg Schulz points out, NAS is an increasingly popular data center option for ease of management and scalability. Block storage isn’t going away anytime soon, but as the divergent stock prices of NTAP and EMC proclaim, Wall St. is more interested in your future growth rate than your current market share.

NetApp Goes Hollywood?
As Silicon Graphics can attest, Hollywood has zero brand loyalty. So the ILM endorsement means only that they haven’t found anything cheaper that will do the job. That will change when the cluster version of ZFS rolls out. Nor is geophysical modeling for finding oil a growth industry. HPC is traditionally a graveyard for companies that focus on it: too few customers who are too demanding and unreliable. Ask Cray Research.

Yet the six petabyte array is a significant technical achievement. I hope their marketing does a good job of selling the benefits of large namespaces and storage pools, because right now people are still caught up in the whole disks and volumes mindset. NetApp can legitimize the storage pool concept in data centers, paving the way for software solutions like ZFS to grow.

NetApp’s web site notes that Yahoo Mail uses NetApp equipment. They also claim in one of their 10-K reports:

NetApp’s success to date has been in delivering cost-effective enterprise storage solutions that reduce the complexity associated with managing conventional storage systems. Our goal is to deliver exceptional value to our customers by providing products and services that set the standard for simplicity and ease of operation.

Uh-huh. Like those 520-byte-sector disk drives with the Advanced Margin Enhancement Technology?

Second, the problem of read failures. As this note in NetApp’s Dave’s Blog explains, complete disk failures are not the only issue. The other is when the drive is unable to read a chunk of data. The drive is working, but for some reason that chunk on the drive is unreadable (& yes, drives automatically try and try again). It may be an unimportant or even vacant chunk, but then again, it may not be. According to Dave’s calculations, if you have a six-drive RAID 5 group of 400GB drives, there is about a 10% chance that you will lose a chunk of data as the data is recovered onto the replacement drive. As Dave notes, even a 1% chance seems high.
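To put a number on that, here is a back-of-the-envelope sketch in Python. The unrecoverable read error (URE) rate of one bad bit per 10^14 bits read is my assumption (a commonly quoted spec for SATA-class drives of that era), not Dave’s published figure, so treat the output as a ballpark.

```python
import math

# Assumed unrecoverable read error (URE) rate: one bad bit per 1e14
# bits read -- a commonly quoted spec, not Dave's exact number.
URE_RATE = 1e-14
DRIVE_GB = 400    # capacity of each drive, in GB
DRIVES = 6        # drives in the RAID 5 group

# Rebuilding the failed drive means reading every bit on the survivors.
bits_read = (DRIVES - 1) * DRIVE_GB * 1e9 * 8

# P(at least one URE) = 1 - P(no URE on any bit read)
p_loss = 1 - math.exp(bits_read * math.log1p(-URE_RATE))

print(f"bits read during rebuild: {bits_read:.2e}")
print(f"chance of an unreadable chunk: {p_loss:.1%}")
```

With these inputs the sketch prints roughly a 15% chance, the same ballpark as the ~10% cited above; the exact number swings with the assumed URE rate and the size of the RAID group.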

Where Dave and I part company is in our response to this problem. Dave suggests insisting on something called RAID 6, which maintains TWO sets of parity data instead of one. Compared to our six-drive RAID 5 example above, this means that instead of having 2000GB of usable capacity, you would have 1600GB. And at that point RAID 1 would have only 25% less capacity than RAID 6. I say drop RAID 5 and 6 and go to RAID 1+0, which is both faster and more reliable.
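For the record, here is the capacity arithmetic behind those numbers as a quick Python sketch. These are the standard usable-capacity formulas for each RAID level, applied to the six-drive, 400GB example; nothing here is NetApp-specific.

```python
DRIVE_GB = 400
DRIVES = 6

raid5  = (DRIVES - 1) * DRIVE_GB    # one drive's worth of parity
raid6  = (DRIVES - 2) * DRIVE_GB    # two drives' worth of parity
raid10 = (DRIVES // 2) * DRIVE_GB   # every drive has a mirror

print(f"RAID 5:   {raid5}GB usable")    # 2000GB
print(f"RAID 6:   {raid6}GB usable")    # 1600GB
print(f"RAID 1+0: {raid10}GB usable")   # 1200GB, 25% less than RAID 6
```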

