NetApp StoreVault

Lately it seems like everyone wants my comments on StoreVault by NetApp. All I know about the product is what I have read in articles on the Internet and heard from other folks. About a month ago we received a mailing inviting us to become a StoreVault reseller, but we did not respond. I have read Dave Hitz's blog on the product, but I don't really understand the product's positioning at all.

Doing the math:
It seems like an expensive unit to me for what it delivers. 1TB raw from (4) 250GB drives for $5,000.00 seems a bit high. If it is using RAID-DP, then two of the four disks are needed for parity, and with one disk as a spare, that does not leave much room for data. I have no idea what the yearly software license and support costs will be. I have heard that the product is going to be supported entirely out of India, but I have no confirmation of this. The last article I read about the product seems to confirm the 1TB raw unit price, but does not provide a cost for 1TB usable.

The StoreVault S500 starts at about $5,000 for the base model with 1TB, consisting of four 250GB drives. It is available immediately through NetApp's resellers.

Here is the math as it works out from the articles and blogs I have read (see Toigo here, Hitz here, Strange here):
Step 1) 4 disks of 250GB = 1TB (1000GB) raw.
Step 2) Rightsizing of disks typically costs about 10%, so the net disk space available after rightsizing is 900GB.
Step 3) RAID-DP requires two disks for parity: 900GB – 2 × 225GB = 450GB available.
Step 4) NetApp suggests that users keep a spare disk: 450GB – 225GB = 225GB available.
Step 5) NetApp typically recommends a 20% snap reserve for snapshots: 225GB × 0.8 = 180GB available.
Step 6) File system overhead: I don't know how big the Ontap Lite OS is, but let's assume it takes 5GB: 180GB – 5GB = 175GB available.
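For anyone who wants to check my arithmetic, the six steps above can be sketched in a few lines. This is a back-of-the-envelope sketch using the post's own assumptions (10% rightsizing, 20% snap reserve, 5GB for the OS), not NetApp's published figures:

```python
# Illustrative sketch of the usable-capacity math above.
# All figures are this post's assumptions, not NetApp specifications.

raw_gb = 4 * 250                           # four 250GB drives = 1000GB raw
rightsized = raw_gb * 0.90                 # ~10% lost to rightsizing -> 900GB
per_disk = rightsized / 4                  # 225GB usable per disk
after_parity = rightsized - 2 * per_disk   # RAID-DP: two parity disks -> 450GB
after_spare = after_parity - per_disk      # one hot spare -> 225GB
after_snap = after_spare * 0.80            # 20% snap reserve -> 180GB
usable = after_snap - 5                    # assume ~5GB for the OS -> 175GB

print(f"Usable: {usable:.0f} GB of {raw_gb} GB raw")       # Usable: 175 GB of 1000 GB raw
print(f"Cost per usable GB at $5000: ${5000 / usable:.2f}")  # $28.57
```

Loosening the spare or snap reserve assumptions only moves the answer between roughly 175GB and 225GB usable; the conclusion does not change much.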

$5,000.00 for somewhere between 175GB and 225GB usable out of 1TB raw does not seem like a very good deal to me. At CDW, a 1.6TB Snap Server costs $4,859.99.

I don't know what the street price will be for a unit with 1 or 2TB of usable capacity and three years of software and hardware support. If the unit can provide 3TB of usable storage for under $10,000.00, it might be a viable product.

Most of the resellers with feet on the street that I speak to need a minimum of 25% margin before they can make any money on a product. It costs a lot of money to keep a salesman on the street and a Sales Engineer on staff. Will resellers be able to sell enough of these units at the $5,000.00 to $10,000.00 price point to cover their costs of sales?

Perhaps a company like CDW can make a go of the product. But when it costs about $140,000 in salary alone for a reseller to keep a sales team on the street, it might be awfully difficult to make that team concentrate on an unproven product. If the average street price is $10,000.00 and the sales team can sell two units a week, they may be able to sell $1,000,000 a year. Will there be anything left after the cost of sales and taxes to provide a profit for the owner? It is awfully expensive to do missionary sales work on a new product; it will be interesting to see what happens. See TMC here.
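For the curious, the reseller arithmetic works out roughly like this. The margin, salary, and volume figures are the assumptions stated above, and the 50-week selling year is mine:

```python
# Back-of-the-envelope reseller economics, using the figures in the post.

street_price = 10_000
units_per_week = 2
weeks = 50                      # roughly one selling year (my assumption)
revenue = street_price * units_per_week * weeks   # $1,000,000

margin = 0.25                   # minimum reseller margin quoted above
gross_profit = revenue * margin                   # $250,000
team_salary = 140_000           # sales team salary alone, per the post

left_over = gross_profit - team_salary            # before overhead and taxes
print(f"Revenue ${revenue:,}, gross ${gross_profit:,.0f}, "
      f"left after salaries ${left_over:,.0f}")
# Revenue $1,000,000, gross $250,000, left after salaries $110,000
```

$110,000 before overhead, benefits, rent, and taxes is not a lot of cushion, which is the point.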

Please email me if you think I have made a mistake in my math. Bringing the benefits of Ontap to small business would be a great thing to do. I hope NetApp succeeds in its efforts, but there is enormous competition in the small business sector.

Happy Fourth of July!

Posted in Uncategorized | 2 Comments

Senator Ted Stevens of Alaska has some interesting views on the Internet, VOIP, and email that will give us all confidence in our legislative branch's understanding of technology. Enjoy this over the holiday…

Your Own Personal Internet

The Senate Commerce Committee deadlocked 11 to 11 on an amendment inserting some very basic net neutrality provisions into a moving telecommunications bill. The provisions didn’t prohibit an ISP from handling VOIP faster than emails, but would have made it illegal to handle its own VOIP packets faster than a competitor’s.

Senator Ted Stevens (R-Alaska) explained why he voted against the amendment and gave an amazing primer on how the internet works.


There’s one company now you can sign up and you can get a movie delivered to your house daily by delivery service. Okay. And currently it comes to your house, it gets put in the mail box when you get home and you change your order but you pay for that, right.

But this service isn’t going to go through the internet and what you do is you just go to a place on the internet and you order your movie and guess what you can order ten of them delivered to you and the delivery charge is free.

Ten of them streaming across that internet and what happens to your own personal internet?

I just the other day got, an internet was sent by my staff at 10 o’clock in the morning on Friday and I just got it yesterday. Why?

Because it got tangled up with all these things going on the internet commercially.

So you want to talk about the consumer? Let’s talk about you and me. We use this internet to communicate and we aren’t using it for commercial purposes.

We aren’t earning anything by going on that internet. Now I’m not saying you have to or you want to discriminate against those people […]

The regulatory approach is wrong. Your approach is regulatory in the sense that it says “No one can charge anyone for massively invading this world of the internet”. No, I’m not finished. I want people to understand my position, I’m not going to take a lot of time. [?]

They want to deliver vast amounts of information over the internet. And again, the internet is not something you just dump something on. It’s not a truck.

It’s a series of tubes.

And if you don’t understand those tubes can be filled and if they are filled, when you put your message in, it gets in line and it’s going to be delayed by anyone that puts into that tube enormous amounts of material, enormous amounts of material.

Now we have a separate Department of Defense internet now, did you know that?

Do you know why?

Because they have to have theirs delivered immediately. They can’t afford getting delayed by other people.

[…]

Now I think these people are arguing whether they should be able to dump all that stuff on the internet ought to consider if they should develop a system themselves.

Maybe there is a place for a commercial net but it’s not using what consumers use every day.

It’s not using the messaging service that is essential to small businesses, to our operation of families.

The whole concept is that we should not go into this until someone shows that there is something that has been done that really is a violation of net neutrality that hits you and me.

What can I add to that statement?

Posted in Uncategorized | Comments Off

StorageWiki – A very cool resource

I like the idea of a free NAS solution, especially for the SMB market. Buy some cheap drives, put free software on a cheap server, and invest the savings in corporate growth! I know that logic makes sense for my dentist and for my buddies with small businesses around the country. They love these projects. Storage commoditization is coming; are you ready? Here are some interesting projects to keep an eye on.

First is a project that might help demonstrate that not all storage devices have to be proprietary. The Openfiler efforts are quite fascinating, and I wonder if they had any reason to name it the way they did?

If you are a systems administrator looking for a way to take control of your storage resources without having to pull off the modern equivalent of The Great Train Robbery in order to afford it, Openfiler is the answer to your prayers. Openfiler is a serious tool meant for professional systems administrators with a keen desire for the ability to manage network storage in an efficient and cost-effective manner.

Second is a project for a free NAS solution based on FreeBSD. I seem to remember hearing that Network Appliance's Dave Hitz started with a BSD kernel when he and his colleagues began working on their software after leaving Auspex.

Everything old is new again!

By the way, Dave Hitz wrote to me a while ago, but I guess I mis-moderated his comment. Here it is:

People who are interested in verifying these TCO (Total Cost of Ownership) claims might be interested in looking at the detailed report from Mercer, which is the management consulting company that did the study for us. (See http://www.netapp.com/library/ar/ar1038.pdf)

They identified three categories of cost:

(1) Product Acquisition & Ongoing Vendor Costs (hardware, software, implementation, training, service, support)

(2) Internal Operational Costs (labor, facilities, environmental)

(3) Quantifiable Business Cost of Downtime

So it looks like they were trying to be pretty thorough when it comes to capturing all the costs you have.

In their summary of why NetApp is lower, they point to several factors. For the same size DB, people tend to use less storage with NetApp, because of features like snapshots and cloning. NetApp tends to take fewer people to manage. And snapshots let you recover from errors faster.
Posted by Dave Hitz to Zerowait High Availability at 5/21/2006 08:16:55 PM




Posted in Uncategorized | 1 Comment

The coming Data Explosion – will there be a need for short term Caching?

I have been reading a lot more about the coming of RFID. It seems everyone agrees that this will cause a data explosion in manufacturing, distribution, and supply chain applications. But no one seems to be addressing the requirements of data scanning and accumulation within the networks. For low-volume, high-value merchandise the data might be small; however, for high-volume, middle-value products, I can see where some bottlenecks might be created within our current network and data infrastructures.

Will companies build store-and-forward caching networks to capture information before sending production information in batches to the accounting and inventory systems, or will they try to build real-time inventory systems to capture all of the RFID information? I imagine at first companies will try to cobble together solutions using existing technology, but most of the information used to track an RFID tag's progress through a production or transport system will be transitory. Will the government step in with impossible-to-enforce HIPAA and Sarbanes-Oxley-like regulations about RFID data tracking and retention?
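To illustrate what the store-and-forward idea might look like, here is a minimal sketch: buffer the transitory tag reads at the edge and forward them downstream in batches. The class name, batch size, and tag IDs are all hypothetical, purely for illustration:

```python
# A minimal store-and-forward buffer for RFID reads (illustrative only).
from collections import deque

class RfidBuffer:
    def __init__(self, batch_size=100):
        self.batch_size = batch_size
        self.pending = deque()
        self.forwarded = []          # stands in for the inventory system

    def scan(self, tag_id):
        """Capture a tag read at the edge; flush when the batch fills."""
        self.pending.append(tag_id)
        if len(self.pending) >= self.batch_size:
            self.flush()

    def flush(self):
        """Send one batch downstream; the raw reads are transitory."""
        batch = [self.pending.popleft() for _ in range(len(self.pending))]
        if batch:
            self.forwarded.append(batch)

buf = RfidBuffer(batch_size=3)
for tag in ["A1", "A2", "A3", "B1"]:
    buf.scan(tag)
buf.flush()                          # push the partial final batch
print(buf.forwarded)                 # [['A1', 'A2', 'A3'], ['B1']]
```

The interesting design question is exactly the one above: how long the transitory reads must be retained at the edge before they can be safely discarded.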

Technology has come up with a short-term answer in new perpendicular drive technology. RFID may cause a whole new wave of technological evolution. For one thing, it should certainly change the bar code readers in tape libraries. RFID will also probably cause an evolution in the addressing and classification of storage. Interesting stuff!

Posted in Uncategorized | Comments Off

Zerowait High Availability
Over the last 17 years in business our company has earned a reputation for High Availability in networking and storage design. There have been many technology changes in the marketplace and in the equipment that we use to build and maintain our customers' High Availability infrastructure, but there has always been one constant: Zerowait's High Availability commitment to its customers. We don't just talk about it; we provide it 24/7, every day of the year, and have for many years.

In the mid-1990s, Zerowait's customers needed to deploy vast web server networks on the Internet, and so Zerowait began to install Radware's load balancing products. After we installed these products we found that our customers could not quickly deploy their storage, and so we went to NetApp's offices on San Tomas and spoke to Dave Hitz about becoming a dealer for them in our High Availability niche. The mix of load balancing and NetApp storage was a lucrative one, and Zerowait was very successful in installing a lot of equipment for both vendors, as can be seen in this link.

The Radware equipment was unable to keep up with our customers' high traffic loads, and the folks from Arrowpoint came to us to help them introduce their ASIC-based switch product to the marketplace. We became an OEM and private-labeled the products; they told us that we represented 10% of all their sales prior to the Cisco purchase. After Cisco purchased Arrowpoint, we had several customers who were left without hardware support for their load balancers, and so we developed a High Availability service and support program for these customers.

During that same period NetApp canceled our reseller agreement, and we started our third party parts, service, and support business for our NetApp customer base, which has become the largest part of our business. Now that NetApp has canceled support for the F760 line, this business is growing quickly. Over the next few weeks we plan to open our European service operation, which will help our European customers maintain their legacy NetApp equipment more affordably. Exciting things are happening at Zerowait!

Posted in Uncategorized | Comments Off

Pittsburgh, Spinnaker and roads not taken.

I have to go to Pittsburgh today to visit some customers. I used to go to Pittsburgh quite often before NetApp purchased Spinnaker Networks. A few years ago Zerowait helped Spinnaker with its early ideas about marketing its products to certain niche markets. Our engineers traveled to Pittsburgh to meet with Mike Kazar's team, and the Spinnaker folks visited our offices; they were very interested in gaining entry into our larger accounts to talk to our customers about their products.

This was all occurring near the time that NetApp was taking all of Zerowait's largest and best Filer accounts and making them into direct accounts for NetApp's sales department. I recall that our NetApp reseller manager told us it was too expensive to have a reseller on the bigger accounts; once we had established the accounts, NetApp figured it could make more money without us.

In hindsight, it was an interesting twist of events when Spinnaker was sold to NetApp, because we really thought Spinnaker was going to be a viable alternative to NetApp for the Zerowait customers that NetApp had taken away from us. But it did not turn out that way. In another interesting turn of events, the customers that NetApp took direct had their service and support prices raised almost immediately. These were among the customers who asked us to create an affordable third party support organization for their filers once our affiliation with NetApp had ended. They became our first independent service and support customers for Network Appliance products, and many of them are still our customers.

The Spinnaker technology was way ahead of its time, in a similar way to the Pick Operating System back in the 1980s. Being absorbed by NetApp was probably good for the VCs involved in Spinnaker, but it left a void in the marketplace that is still unfilled. Our engineers felt certain that the AFS-based Spinnaker was a superior platform for 'grid storage' when compared to the BSD-based Ontap platform. Whether NetApp was able to shoehorn the AFS capabilities into Ontap will be seen as the market absorbs the technology. Can the Spinnaker technology turbocharge Ontap? I doubt that the new NetApp OS will provide all of the possibilities that the Spinnaker technology would have, but time, experience, and marketing dollars will ultimately answer these questions.

Zerowait was forced to take the road toward independent legacy service, support and maintenance of NetApp equipment. At the time there was not the option of traveling both the road of third party support and new NetApp sales. There was a divergence in the paths, and Zerowait took the road less traveled, and it has made all the difference.

With homage to Robert Frost….

Posted in Uncategorized | Comments Off

Myopia and SMB Storage

I went to the local eye doctor last week to get my new eyeglasses. The doctor knows that Zerowait has something to do with computers and storage, and he started peppering me with questions. First he wanted to know if I could help him with his backups. I asked what he was doing now for networking and storage infrastructure, and explained that we specialize in high-end storage and high availability networks but would hear him out. I learned that he has three networks in his office because he is afraid of hackers. Two of the networks can't get to the Internet, the three networks can't communicate with each other, and all are backed up to tape. I asked if he had ever tested his data with a restore from tape. He said that he had not, but that he had five copies of his tapes. I asked how he could tell whether he was backing up anything useful if he had never tested a restore. He paused and said that he did not know how reliable his backups were.

As we proceeded with my eyeglass examination, other questions surfaced. I tried to explain why he might want to install a firewall and centralize his storage onto a single system, which would also be simpler to back up or mirror. He wanted to know what it would cost. I explained that the cost would be in the thousands for a firewall and centralized storage and backup to meet his data storage requirements. He looked at me and assured me that he could not spend more than $1,000.00 on his backup system, and that is why he does not want any firewalls or a centralized storage solution.

The doctor is a nice guy, and he has at least as many employees as we do at Zerowait. I imagine his gross sales are similar to mine on a yearly basis, although I don't know. But he has the problem that I see so many times when I discuss computer storage with my friends who are the fabled 'SMB customers': they don't see the value of secure backups and centralized storage. I have a close friend who runs a land planning and engineering company with 30 people. He does not understand the value of centralized storage either, but we have at least finally convinced him that he should have a firewall.

After more than 20 years in the business of High Availability networking and storage, I have seen only a very small percentage of small and medium businesses that see the long-term ROI of investing in centralized storage, backups, and coordinated networks. Perhaps my definition of small and medium sized business is wrong. But most of my friends work at companies with fewer than 50 people; the rarity among our friends is someone working for a company with more than 500 people.

When I read articles about the huge SMB marketplace for High Availability storage, I am left wondering where it is.

Posted in Uncategorized | Comments Off

Large Margin Technology

Robin Harris at StorageMojo.com uses the term Enhanced Margin Technology to describe the markups that storage vendors make on reselling their commodity-based hardware. But perhaps we should call it Venture Capital Margin Technology (VCMT), because many of these companies were founded with venture capital and need a healthy return on investment to justify the initial investors' interest. Mining profits when you are the first viable enterprise in a newly identified market and technology niche depends on attracting attention and gaining quick market acceptance; it is not easy, but it can be quite profitable. Many of these companies find it very difficult, if not impossible, to maintain a healthy margin when their technology commoditizes. I don't want to say 'matures,' since storage technologies are evolving so quickly.

Looking over the storage landscape today, I see some companies that identified storage market niches and are now abandoning those niches, trying to go up market or down market. This might be a good strategy for corporate egos, similar to VW's attempt to market its $100,000.00 Phaeton. But when the customers don't perceive the value add, there is little chance of gaining new customers and a big chance of losing old customer relationships.

The downward spiral starts with a trickle, as customers break the vendor lock-in by seeking other sources to maintain or replace their technology; and the early adopters and influencers in the marketplace can be quite effective in eroding customer allegiance to proprietary technologies.

Robin Harris understands finance much better than I do. In a recent conversation, he said to watch what the insiders are doing with their stock purchases and sales. According to Robin, the big shots at the large storage vendors are currently sellers of their stock. What does this mean for customers who are seeking a five-year duty cycle out of their storage arrays? Perhaps it is time to be much more aggressive in negotiating price and service concessions with their storage vendors.

Posted in Uncategorized | Comments Off

Pearls of wisdom from Storage Mojo
Last night I got a chance to speak to Robin Harris, and I asked permission to cross-post his blog entries on mine. We had a great discussion, and I encourage folks to watch his blog entries, because he is looking out for the storage user.

Below are some excerpts from his blog.

NetApp’s Battle Shots

June 12th, 2006 by Robin Harris in Enterprise, NAS, IP, iSCSI

NetApp’s announcement of multi-petabyte namespace support in its Data Ontap GX 7G storage operating system – my, doesn’t that just roll off the tongue! – should allow it to corner several shrinking HPC markets. Industrial Light & Magic used the Spinnaker version to store 6 petabytes of Star Wars v3 special effects, including 300 TB of the opening battle shots. If only they’d stopped there.

Raquel Welch in One Million BC vs NetApp In One Million IOPS
If the block storage people are wondering about NetApp’s intentions, wonder no more. They are gunning for the high-end data center storage market now dominated by EMC, IBM and Hitachi. One clue: the 1,000,000 I/O per second SPEC mark. True, it was mostly a stunt that flaunted the ripped abs of their 768 GB cache, and performance started degrading fast around 900k, but so what? This is about bragging rights, not real life.

As Greg Schulz points out, NAS is an increasingly popular data center option for ease of management and scalability. Block storage isn’t going away anytime soon, but as the divergent stock prices of NTAP and EMC proclaim, Wall St. is more interested in your future growth rate than your current market share.

NetApp Goes Hollywood?
As Silicon Graphics can attest, Hollywood has zero brand loyalty. So the ILM endorsement means only that they haven’t found anything cheaper that will do the job. That will change when the cluster version of ZFS rolls out. Nor is geophysical modeling for finding oil a growth industry. HPC is traditionally a graveyard for companies that focus on it: too few customers who are too demanding and unreliable. Ask Cray Research.

Yet the six petabyte array is a significant technical achievement. I hope their marketing does a good job of selling the benefits of large namespaces and storage pools, because right now people are still caught up in the whole disks and volumes mindset. NetApp can legitimize the storage pool concept in data centers, paving the way for software solutions like ZFS to grow.

NetApp’s web site notes that Yahoo Mail uses NetApp equipment. They also claim in one of their 10-K reports:

NetApp's success to date has been in delivering cost-effective enterprise storage solutions that reduce the complexity associated with managing conventional storage systems. Our goal is to deliver exceptional value to our customers by providing products and services that set the standard for simplicity and ease of operation.

Uh-huh. Like those 520 byte sector disk drives with the Advanced Margin Enhancement Technology?

Second, the problem of read failures. As this note in NetApp’s Dave’s Blog explains, complete disk failures are not the only issue. The other is when the drive is unable to read a chunk of data. The drive is working, but for some reason that chunk on the drive is unreadable (& yes, drives automatically try and try again). It may be an unimportant or even vacant chunk, but then again, it may not be. According to Dave’s calculations, if you have a four 400GB drive RAID 5 group, there is about a 10% chance that you will lose a chunk of data as the data is recovered onto the replacement drive. As Dave notes, even a 1% chance seems high.

Where Dave and I part company is in our response to this problem. Dave suggests insisting on something called RAID 6, which maintains TWO copies of the recovery data. Compared to our RAID 5 example above, this means that instead of having 2000GB of usable capacity, you would have 1600GB. And now RAID 1 would have only 25% less capacity than RAID 6. I say drop RAID 5 and 6 and go to RAID 1+0, which is both faster and more reliable.
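For what it's worth, Dave's roughly 10% figure can be reconstructed with a quick calculation, if we assume the commonly quoted unrecoverable-read-error rate of one bit in 10^14 for desktop-class drives (an assumption on my part; the excerpt does not state the rate):

```python
import math

# Rebuilding a failed disk in a 4-drive RAID 5 group of 400GB drives
# means reading every bit on the three surviving disks.
ure_per_bit = 1e-14            # assumed unrecoverable-read-error rate
drive_gb = 400
surviving_drives = 3

bits_to_read = surviving_drives * drive_gb * 1e9 * 8   # about 9.6e12 bits
# Probability that at least one of those bits is unreadable:
p_failure = -math.expm1(-ure_per_bit * bits_to_read)

print(f"Chance of losing a chunk during rebuild: {p_failure:.0%}")
```

That lands at roughly 9%, in the same ballpark as Dave's ~10%. An enterprise-class drive rated at one error in 10^15 bits would bring the same calculation under 1%, which is part of why vendors charge what they do for those drives.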


Posted in Uncategorized | Comments Off

When NetApp purchased Spinnaker I was startled, as I could not understand the reason behind the purchase. Spinnaker, like Panasas, could have been a viable company, but both were late to the marketplace and could not get the market acceptance that the Internet boom provided EMC and NetApp. But Spinnaker had identified a few niche markets, as has Panasas. Additionally, the Spinnaker technology was based on the Andrew File System, and the NetApp system is based on BSD, so they really could not be easily integrated. In my humble opinion, Panasas would have made more sense for NetApp to purchase from a technology point of view. So I was very interested in reading this article by Chris Mellor over the weekend.

Gigabit Ethernet clustering just doesn’t give you the performance and future headroom that Infiniband does.

Anderson says: “NetApp uses Infiniband to cluster two nodes. When NetApp bought Spinnaker it then made a mistake. It tried to add features out of the Spinnaker product into ONTAP. But clustering can’t be done that way; it has to be in the DNA of the system. NetApp’s approach didn’t work. Two years ago NetApp reversed direction. Dave Hitz (NetApp CEO) announced that Data ONTAP GX is a Spinnaker foundation with NetApp features added to it.”

Anderson added this comment: “(Data ONTAP GX) is namespace organisation. It’s not clustering. It’s RAID behind the veil and can still take eight hours to rebuild a disk. There’ll be performance problems downstream. It’s a bandaid. It’s a total kluge.”

With Isilon, file data and parity data are striped across up to nine nodes. A failed disk can be rebuilt in 30 minutes to an hour. In effect, Isilon's striping technology renders RAID redundant.

Anderson says suppliers like Acopia ‘do it in the switch layer. It’s not rich, it’s lightweight.’ Again, there will be performance problems downstream.

A virtualised pool of NAS resource requires the NAS nodes to be clustered for smooth performance scaling. It also requires N + 2 protection so that the system can recover from two failed disks and not just one. (NetApp’s RAID DP provides protection against two disk failures.)



Posted in Uncategorized | Comments Off