Large Margin Technology

Robin Harris at StorageMojo.com uses the term Enhanced Margin Technology to describe the markups that storage vendors make on reselling their commodity-based hardware. But perhaps we should call it Venture Capital Margin Technology (VCMT), because many of these companies were founded with venture capital and need a healthy return on investment to justify their initial investors' interest. Mining profits as the first viable enterprise in a newly identified market and technology niche depends on attracting attention and gaining quick market acceptance; it is not easy, but it can be quite profitable. Many of these companies find it difficult, if not impossible, to maintain a healthy margin once their technology commoditizes. I don't want to say it matures, since storage technologies are evolving so quickly.

Looking over the storage landscape today, I see some companies that identified storage market niches and are now abandoning those niches to go up market or down market. This might be a good strategy for corporate egos, similar to VW's attempt to market its $100,000 Phaeton. But when customers don't perceive the added value, there is little chance of gaining new customers and a big chance of losing old customer relationships.

The downward spiral starts with a trickle as customers begin to break the vendor lock-in by seeking other sources to maintain or replace their technology, but the early adopters and influencers in the marketplace can be quite effective in eroding customer allegiance to proprietary technologies.

Robin Harris understands finance much better than I do. In a recent conversation, he said to watch what the insiders are doing with their stock purchases and sales. According to Robin, the big shots at the large storage vendors are currently sellers of stock. What does this mean for customers who are seeking a five-year duty cycle out of their storage arrays? Perhaps it is time to be much more aggressive in negotiating price and service concessions with their storage vendors.


Pearls of wisdom from Storage Mojo
Last night I got a chance to speak to Robin Harris, and I asked permission to cross-post his blog entries on mine. We had a great discussion, and I encourage folks to watch his blog entries, because he is looking out for the storage user.

Below are some excerpts from his blog.

NetApp’s Battle Shots

June 12th, 2006 by Robin Harris in Enterprise, NAS, IP, iSCSI

NetApp's announcement of multi-petabyte namespace support in its Data ONTAP GX 7G storage operating system – my, doesn't that just roll off the tongue! – should allow it to corner several shrinking HPC markets. Industrial Light & Magic used the Spinnaker version to store 6 petabytes of Star Wars v3 special effects, including 300 TB of the opening battle shots. If only they'd stopped there.

Raquel Welch in One Million BC vs NetApp In One Million IOPS
If the block storage people are wondering about NetApp’s intentions, wonder no more. They are gunning for the high-end data center storage market now dominated by EMC, IBM and Hitachi. One clue: the 1,000,000 I/O per second SPEC mark. True, it was mostly a stunt that flaunted the ripped abs of their 768 GB cache, and performance started degrading fast around 900k, but so what? This is about bragging rights, not real life.

As Greg Schulz points out, NAS is an increasingly popular data center option for ease of management and scalability. Block storage isn't going away anytime soon, but as the divergent stock prices of NTAP and EMC proclaim, Wall St. is more interested in your future growth rate than your current market share.

NetApp Goes Hollywood?
As Silicon Graphics can attest, Hollywood has zero brand loyalty. So the ILM endorsement means only that they haven't found anything cheaper that will do the job. That will change when the cluster version of ZFS rolls out. Nor is geophysical modeling for finding oil a growth industry. HPC is traditionally a graveyard for companies that focus on it: too few customers, who are too demanding and unreliable. Ask Cray Research.

Yet the six-petabyte array is a significant technical achievement. I hope their marketing does a good job of selling the benefits of large namespaces and storage pools, because right now people are still caught up in the whole disks-and-volumes mindset. NetApp can legitimize the storage pool concept in data centers, paving the way for software solutions like ZFS to grow.

NetApp’s web site notes that Yahoo Mail uses NetApp equipment. They also claim in one of their 10-K reports:

NetApp's success to date has been in delivering cost-effective enterprise storage solutions that reduce the complexity associated with managing conventional storage systems. Our goal is to deliver exceptional value to our customers by providing products and services that set the standard for simplicity and ease of operation.

Uh-huh. Like those 520-byte sector disk drives with the Advanced Margin Enhancement Technology?

Second, the problem of read failures. As this note on NetApp's Dave's Blog explains, complete disk failures are not the only issue. The other is when the drive is unable to read a chunk of data. The drive is working, but for some reason that chunk on the drive is unreadable (and yes, drives automatically try and try again). It may be an unimportant or even vacant chunk, but then again, it may not be. According to Dave's calculations, if you have a RAID 5 group of four 400GB drives, there is about a 10% chance that you will lose a chunk of data as the data is recovered onto the replacement drive. As Dave notes, even a 1% chance seems high.
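Dave's arithmetic is easy to reproduce. Below is a rough back-of-the-envelope sketch; it assumes the commonly quoted unrecoverable read error rate of 1 in 10^14 bits for drives of this class, since Dave's exact inputs are not given in the excerpt:

```python
# Back-of-the-envelope estimate: probability of hitting at least one
# unrecoverable read error (URE) while reading every surviving drive
# during a RAID 5 rebuild. The URE rate of 1 in 1e14 bits is a typical
# spec-sheet figure and an assumption here, not Dave's stated input.

URE_PER_BIT = 1e-14

def rebuild_failure_probability(drives: int, drive_gb: int) -> float:
    surviving = drives - 1                      # one drive has already failed
    bits_read = surviving * drive_gb * 1e9 * 8  # GB -> bits
    # P(at least one URE) = 1 - P(every bit reads cleanly)
    return 1 - (1 - URE_PER_BIT) ** bits_read

# Four 400GB drives: the rebuild must read 3 x 400GB from the survivors.
print(f"{rebuild_failure_probability(4, 400):.1%}")  # ~9%, Dave's "about 10%"
```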

Where Dave and I part company is in our response to this problem. Dave suggests insisting on something called RAID 6, which maintains TWO copies of the recovery data. For a six-drive group of the same 400GB drives, this means that instead of having 2000GB of usable capacity under RAID 5, you would have 1600GB, and RAID 1 would then have only 25% less capacity than RAID 6. I say drop RAID 5 and 6 and go to RAID 1+0, which is both faster and more reliable.
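To make the capacity trade-off concrete, here is the same arithmetic as a short sketch. The 2000GB and 1600GB figures imply a six-drive group of 400GB disks (five data plus one parity under RAID 5), and that is the group size assumed below:

```python
# Usable capacity of a six-drive group of 400GB disks under each layout.
# The six-drive group size is inferred from the 2000GB/1600GB figures above.

DRIVES, DRIVE_GB = 6, 400

raid5  = (DRIVES - 1) * DRIVE_GB    # one drive of parity  -> 2000 GB
raid6  = (DRIVES - 2) * DRIVE_GB    # two drives of parity -> 1600 GB
raid10 = (DRIVES // 2) * DRIVE_GB   # mirrored pairs       -> 1200 GB

print(raid5, raid6, raid10)                                # 2000 1600 1200
print(f"RAID 1+0 holds {1 - raid10 / raid6:.0%} less than RAID 6")  # 25% less
```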



When NetApp purchased Spinnaker I was startled, as I could not understand the reason behind the purchase. Spinnaker, like Panasas, could have been a viable company, but both were late to the marketplace and could not get the market acceptance that the Internet boom provided EMC and NetApp. Still, Spinnaker had identified a few niche markets, as has Panasas. Additionally, the Spinnaker technology was based on the Andrew File System while the NetApp system is based on BSD, so the two could not be easily integrated. In my humble opinion, Panasas would have made more sense for NetApp to purchase from a technology point of view. So I was very interested in reading this article by Chris Mellor over the weekend.

Gigabit Ethernet clustering just doesn’t give you the performance and future headroom that Infiniband does.

Anderson says: "NetApp uses Infiniband to cluster two nodes. When NetApp bought Spinnaker it then made a mistake. It tried to add features out of the Spinnaker product into ONTAP. But clustering can't be done that way; it has to be in the DNA of the system. NetApp's approach didn't work. Two years ago NetApp reversed direction. Dave Hitz (NetApp co-founder) announced that Data ONTAP GX is a Spinnaker foundation with NetApp features added to it."

Anderson added this comment: "(Data ONTAP GX) is namespace organisation. It's not clustering. It's RAID behind the veil and can still take eight hours to rebuild a disk. There'll be performance problems downstream. It's a bandaid. It's a total kluge."

With Isilon, file data and parity data are striped across up to nine nodes. A failed disk can be rebuilt in 30 minutes to an hour. In effect, Isilon's striping technology renders RAID redundant.
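The rebuild-time gap is mostly a question of how many spindles share the work: a traditional RAID group funnels the whole rebuild through one replacement drive, while a striped cluster reconstructs the lost data in parallel across its nodes. A rough model (the throughput figures are illustrative assumptions, not vendor specifications):

```python
# Rough model: rebuild time is the failed drive's data divided by the
# aggregate rebuild bandwidth. Throughput numbers below are illustrative
# guesses for arrays that are also serving production I/O, not vendor specs.

def rebuild_hours(data_gb: float, writers: int, mb_per_sec_each: float) -> float:
    total_mb_per_sec = writers * mb_per_sec_each
    return data_gb * 1024 / total_mb_per_sec / 3600

# One 400GB drive rebuilt through a single spare at ~15 MB/s:
print(f"{rebuild_hours(400, 1, 15):.1f} h")  # ~7.6 h, the "eight hours" case

# The same 400GB re-striped in parallel across 8 surviving nodes:
print(f"{rebuild_hours(400, 8, 15):.1f} h")  # ~0.9 h, i.e. under an hour
```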

Anderson says suppliers like Acopia 'do it in the switch layer. It's not rich, it's lightweight.' Again, there will be performance problems downstream.

A virtualised pool of NAS resources requires the NAS nodes to be clustered for smooth performance scaling. It also requires N+2 protection so that the system can recover from two failed disks, not just one. (NetApp's RAID-DP provides protection against two disk failures.)




The Disk Cleaning & Sanitization issue:

Recently a growing number of customers have been asking us to help them SANITIZE their disks after they retire their storage equipment. We do this with our proprietary solution at a daily on-site rate, because we can never tell going in how many hours it will take to cleanse all the disks in the FC arrays. I hope to have a forum on this during the next Disaster Recovery conference, because I am not the only one who considers the possibility of private data getting into the wrong hands a disaster!

Wikipedia actually has a very good summary of the problem, and this piece is really interesting to many of the customers we speak with:

The bad track problem

A compromise of sensitive data may occur if media is released when an addressable segment of a storage device (such as unusable or “bad” tracks in a disk drive or inter-record gaps in tapes) is not receptive to an overwrite. As an example, a disk platter may develop unusable tracks or sectors; however, sensitive data may have been previously recorded in these areas. It may be difficult to overwrite these unusable tracks. Before sensitive information is written to a disk, all unusable tracks, sectors, or blocks should be identified (mapped). During the life cycle of a disk, additional unusable areas may be identified. If this occurs and these tracks cannot be overwritten, then sensitive information may remain on these tracks. In this case, overwriting is not an acceptable purging method and the media should be degaussed or destroyed.
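For readers who want to see what a basic overwrite pass looks like in practice, here is a minimal sketch; the disk.img path is a hypothetical stand-in for media you are authorized to destroy. As the passage above explains, a software overwrite cannot reach sectors the drive has internally remapped, so for media with grown defects the safe options remain degaussing or destruction; see also the links below:

```python
import os

# Minimal multi-pass overwrite sketch. Works on a plain file; a real block
# device needs its size discovered via ioctl rather than os.path.getsize.
# Crucially, this cannot touch sectors the drive firmware has remapped as
# "bad", which is exactly the gap the bad-track problem above describes.

CHUNK = 1024 * 1024  # write in 1 MB chunks

def overwrite(path: str, passes: int = 3) -> None:
    size = os.path.getsize(path)
    for _ in range(passes):
        with open(path, "r+b") as f:
            written = 0
            while written < size:
                n = min(CHUNK, size - written)
                f.write(os.urandom(n))   # fresh random data each pass
                written += n
            f.flush()
            os.fsync(f.fileno())         # push the writes past the page cache

overwrite("disk.img")  # hypothetical image file standing in for a retired disk
```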

Here are two links that address the issues :
http://www.hipaadvisory.com/tech/disksan.htm
http://en.wikipedia.org/wiki/Data_remanence

What is your corporate policy on excess equipment and disk sanitization?


The Small Business perspective and a Goliath’s

Last night I was reading the Wall Street Journal of June 6th, 2006, and on page B5 there is an article by Gwendolyn Bounds about how an independent radio station in Philadelphia maintains its market leadership under private ownership. The article ends with this statement by the station's owner: "Whenever there's a decision to be made, I ask myself two questions: 'Will I make money in the next 12 months?' and 'Will I make money in 5 years?'" In the spirit of someone without public shareholders to consider, Mr. Lee adds, "The five-year one is the only one that matters." 🙂

While I was in Tampa last week I got into a discussion about how a small business like Zerowait can compete against a leviathan like NetApp for service and support of legacy NetApp equipment. I tried to explain that being small means we can discuss tactical and strategic ideas between our departments and make decisions quickly based on our customers' requests and emerging requirements. But the reality is much simpler: ask your NetApp, Hitachi, or EMC salesman and management what their five-year expectation is for their employment and the duty cycle of their products. Then call Zerowait and compare the answers. Zerowait is always working toward the strategic five-year time frame, while most enterprise storage manufacturers are looking at the quarterly sales figures.

Mr. Lee summed up my thoughts on our business beautifully.


Strategic and tactical thinking about your storage infrastructure.

When you consider your enterprise storage strategic plan, do you assume an ROI over three years, or five? What does your vendor consider the term of your strategic alliance to be? What happens to your service and support budget and QOS if your chosen vendor's agreement with their suddenly not-so-strategic partner falls apart? What happens if a vendor cancels support 18 months after you purchased the product?

Many enterprise storage customers face this problem, and when their vendors say that the only choice is to upgrade, spend even more money, and accept an even more proprietary solution, customers often agree because they see no alternative. But there are other viable solutions, and avoiding vendor lock-in is an enterprise customer's best defense.

I was recently at a conference in Tampa, speaking with some managers of big data centers, and every one of them was interested in how to avoid vendor lock-in. Each of them had a nightmare story about their storage vendor and hidden lock-in costs. We had a great discussion about the different tactics for fighting vendor lock-in. Fighting vendor lock-in starts at the negotiation stage: it is very important to add addendums to the vendor's RTU (Right To Use) license agreement and to make certain your PO reflects the special changes you want, starting with the right to a transferable license and the right to use third-party support without any changes to your warranty. Tactically you can save tens of thousands of dollars at purchase, but strategically you can save hundreds of thousands by negotiating aggressively with your storage manufacturer.


NetApp & IBM agreements

We get a lot of calls from customers trying to figure out how to get the best deal on NetApp equipment, and we advise them to shop around. The agreement between NetApp and IBM provides a way to negotiate a better deal on your new NetApp equipment, because now you have two competing manufacturer sales forces offering identical equipment. Since both sales forces have to meet their quotas, a savvy customer can play them against each other. Additionally, each company has a reseller channel that can also be contacted for quotes.

Since both NetApp and IBM are offering NetApp software and support services, there is no differentiation in product or service other than price. It is very similar to negotiating between different car dealers.

I recommend some caution in purchasing the IBM-branded equipment, because NetApp had a similar OEM agreement with Dell in the past, and when it fell apart the Dell customers were left without support. As the article shows, IBM just discontinued its last NAS head; how long will it be before they discontinue support for the NetApp brand?


During last week's Disaster Recovery Summit in Tampa I was surprised to see so few storage vendors and storage resellers. I would have expected a whole bunch of storage resellers at the conference, since recovering data is such an important part of business continuity. It seemed very odd, although the attendees were probably very happy not to see them. Zerowait was the only storage support and services company in attendance, and I was the only non-manufacturer on the panel discussions.

In a strange coincidence, a press release issued today by Tech Data announcing their NetApp distribution agreement says: "Whether it's ensuring critical data is accessible in case of disaster or complying with regulations that mandate increased digital document retention, businesses of all sizes are turning to IT resellers to develop innovative, cost-effective storage solutions," said Pete Peterson, Tech Data's vice president, Systems Product Marketing.

Tech Data is in Clearwater and the conference was in Tampa, at the Airport Marriott. If data availability in the face of a disaster is so important to Tech Data and NetApp, I wonder why they did not drive over the bridge to the conference. It would have been a great opportunity for them to introduce their new product.


The 2006 Disaster Recovery Summit is history now, and it was the best conference I have ever attended. Here is why:
1) Attendees were really interested in the topics.
2) End-user experiences were clear, enlightening, and well presented.
3) Vendors were not allowed to give their standard PowerPoint commercials.
4) When vendors made unverifiable claims, Toigo questioned them on their statements.
5) The after-conference dinner party was outstanding.

And for Zerowait, we found a lot more customers who are interested in our service and support offerings.

I wish other conferences were as well run, focused and informative.


Disaster recovery or disaster prevention?

This week there is a conference in Tampa about preventing catastrophic loss of data. Although the conference's focus is on FEMA-type events, every enterprise needs to be aware of the costs of lost data.

Companies in this market niche break the possibility of disaster and the recovery of data into as many facets as there are in a prism. The daily concerns of delivering data to your customers' or clients' desktops, and of securing that data so it does not end up on a competitor's desktop, break down into a few specific areas:

Network security and vulnerability – Can your users access data easily while preventing unauthorized viewers from seeing your data?

Data tape storage vulnerability – Is your off-site tape vault secure, or are there vulnerabilities to tape loss and theft in the process?

End of life of disks and subsystems – How does your company dispose of disks at the end of life or end of lease of your storage subsystems? Some of our customers keep all of their disks at the end of a lease, but this is very costly, and many are uncertain how to clean disks before returning or disposing of them.

At Zerowait we are recognized for providing high-availability networking and storage services to our customers, and many of them have adopted our thinking on disaster prevention instead of disaster recovery. Using a combination of load-balancing switches, VPNs, and data mirroring, we keep our data in two separate locations, and many of our customers now do the same. It really does not cost any more than implementing a D/R site and strategy, and the advantages during data migrations and network changes are numerous. But a secure VPN between multiple locations introduces a whole new set of issues about virtual site location security.

Some of our customers have been targets of the tape-loss scandals recently covered in the media, and it should come as no surprise that these losses occur: in a competitive environment the low-cost provider will win some business, but to lower their costs they must forgo some security. You get what you pay for. Implementing a disaster prevention site strategy could have kept these data loss stories out of the media.

Recently many customers have been asking us to help them clean their disks. When using a subsystem like NetApp there are a whole bunch of challenges to doing this, and it has become a growing part of our business. At the lowest common denominator, you want to be certain that there is no recoverable proprietary data on your disks when you are done with them.

I hope to cover some of this during my time on the panel discussion at the conference, and I hope to see you there.
