Trust but Verify week – Part 3
Hey NetApp – what does Commercial Off The Shelf (COTS) mean to you? Please define…

Read the complete interview in the Australian from which the excerpts below are taken.
(Interviewer) Over time do you see NetApp growing more into a software company than a hardware company?

(NetApp) We think of ourselves in the hardware sense as using standard off-the-shelf components. We’ve never done an integrated circuit design or an ASIC or whatever.

Question 1) If this statement is true, why would Xyratex, which makes NetApp’s shelves, tell folks who try to order a DS14 mk2 shelf or an ESH2 that these parts are proprietary NetApp designs? Aren’t standard off-the-shelf components available to the general public? According to the Wikipedia definition of the term, they should be.

Question 2) If this statement is true, how come folks who call Jabil Circuit to buy NVRAM cards are told they are NetApp proprietary designs? Aren’t standard off-the-shelf components available to the general public? According to the Wikipedia definition of the term, they should be.


From Wikipedia

Commercial off-the-shelf (COTS) is a term for software or hardware products that are ready-made and available for sale, lease, or license to the general public. They are often used as alternatives to in-house developments or one-off government-funded developments. The use of COTS is being mandated across many government and business programs, as they may offer significant savings in procurement and maintenance.
Posted in Uncategorized | Comments Off

Trust but Verify week – Part 2

Partnerships for the common good are great for everyone involved, but long-term partnerships require trust and verification. NetApp has a history of making statements supporting strategic relationships with manufacturing and reseller partners, yet it seems to have a problem keeping relationships for the long term. There are many examples of this over the years, and NetApp’s lengthy trail of broken relationships makes it worth asking how long the company will support its systems and customers. Does NetApp consider its customer sales techniques tactical or strategic? What length of time does NetApp consider to be strategic?

Question 1)
A) How long did the Dell / NetApp relationship last?
B) What happened to cause the Dell /NetApp relationship to fall apart?
C) What was NetApp’s strategic support strategy for the customers with the Dell branded filers?
D) How does the current Dell/EMC relationship affect NetApp’s long term market penetration?
E) Did anything NetApp do cause Dell to begin a relationship with EMC?

Question 2)
A) How long did the Hitachi / NetApp relationship last?
B) What happened to cause the Hitachi/NetApp relationship to fall apart?
C) What is NetApp’s strategic support strategy for the customers with the Hitachi branded filers?
D) How does the current Blue Arc/Hitachi relationship affect NetApp’s long term market penetration?
E) Did anything NetApp do cause Hitachi to begin a relationship with Blue Arc?

Question 3)
A) Based on past experience, how long can customers expect the IBM / NetApp relationship to last?
B) What will be the support strategy for customers if the IBM/NetApp relationship falls apart?
C) What volume of IBM sales would move IBM from the “incremental” sales volumes referred to by NetApp management to respectable sales volumes?
D) What is IBM’s strategic support strategy for customers with NetApp product sold through IBM’s channels if the relationship falls apart?

“If you fail to get it right at the start, it may cost you dearly to fix it later – that is if you are even permitted the opportunity to fix it.”
Curtis E. Sahakian

Posted in Uncategorized | Comments Off

Trust but Verify week – Part 1

Last week while driving between appointments I got some time to think about a statement by NetApp which seems to raise a lot of questions.

“With revenues up 36% over Q3 of last year, and 100 petabytes of storage shipped during the quarter, NetApp is quickly becoming the vendor of choice for enterprise customers’ storage and data management needs.”

Question 1) Can this 100 petabytes be verified by NetApp? Providing a listing of the disk types and quantities sold would be the easiest way to verify this claim. Currently there is a question as to whether this is raw disk capacity, or the capacity of the filer heads sold. This can vary a lot, since most systems are not sold with the full capacity of drives and shelves.

Question 2) NetApp right-sizes disks quite aggressively. What is the capacity of the disks once they are right-sized? NetApp should provide a real usable storage number after right-sizing, so customers can see what the actual capacity of the storage purchased is, instead of the marketing capacity. If right-sizing consumes 10% of disk capacity, for example, NetApp only sold 90 PB.

Question 3) What is the system overhead on the systems sold? Systems sold with large numbers of disks may pay a substantial penalty for the operating system’s overhead and the spares allocated. Additionally, RAID-DP takes an additional fraction of the raw disk space. If system overhead and RAID-DP together take 30% of disk space, then 90 PB * (1 - 0.3) = 63 PB. That is still a lot of disk. So why not just talk about usable storage in press releases?
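The arithmetic above can be sketched in a few lines. The percentages are the hypothetical figures used in this post (10% right-sizing, 30% combined overhead and RAID-DP), not numbers NetApp has published:

```python
# Usable capacity after right-sizing and system overhead, using the
# hypothetical percentages from the post (not NetApp-published figures).

def usable_petabytes(raw_pb, right_size_frac, overhead_frac):
    """Return usable capacity after right-sizing, then OS/RAID overhead."""
    after_right_sizing = raw_pb * (1 - right_size_frac)
    return after_right_sizing * (1 - overhead_frac)

# 100 PB raw, 10% lost to right-sizing, 30% to overhead and RAID-DP:
print(f"{usable_petabytes(100, 0.10, 0.30):.0f} PB")  # 63 PB
```

Plug in whatever right-sizing and overhead fractions you believe apply, and the “marketing petabytes” shrink accordingly.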

Providing customers and analysts with a verifiable count of disks by capacity and disk type would allow them to check what the actual petabytes sold by NetApp were in the quarter. The numbers in the article can mean almost anything and are therefore crying out for verification.

Posted in Uncategorized | Comments Off

I spent the last few days in Dallas and Austin. It was great getting together with our customers and hearing how much they like our customer service and engineering support for their filers. Almost every customer was interested in the Google and CMU articles, which you can download from our website at http://www.zerowait.com/ (you will see them at the top). It was interesting to see that the mainstream computer media picked up on these articles this week; you can read Computerworld’s take here:

Disk drive failures 15 times what vendors say, study says
Drive vendors declined to be interviewed

In April I will be at a conference with Jon Toigo and this subject will be discussed in detail.

UPDATE – A few of the customers I visited in TX were interested in my new truck, and asked me to post a picture of it. It is a 1951 M37.

Posted in Uncategorized | Comments Off



Last week we learned that NetApp’s executives consider IBM sales efforts incremental. If that is the case, then I suspect, after looking up IBM’s sales numbers, that IBM management must consider their rebranded NetApp sales insignificant to their bottom line.

“IBM is very focused on what I call white space, which is where we are not covered,” Mendoza said. “For example, state and local government, and retail. So it’s largely incremental for us.”

IBM Numbers for 2005

Revenue $91,134,000,000.00
Cost of Goods Sold $54,602,000,000.00
Gross Profit $36,532,000,000.00
Gross Profit Margin 40.1%
NetApp’s number for IBM sales = about $60,000,000
That means that IBM-branded NetApp unit sales
are much less than 1% of IBM’s sales. By the way, NetApp is reporting sales of $2,066,000,000.00 for 2006.
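A quick back-of-the-envelope check of the arithmetic above, as a sketch using the revenue figures quoted in this post:

```python
# Rough check of the figures quoted above: IBM FY2005 revenue of about
# $91.134 billion versus roughly $60 million of IBM-branded NetApp sales
# (about 3% of NetApp's roughly $2 billion in revenue).

ibm_revenue = 91_134_000_000
netapp_revenue = 2_066_000_000
ibm_branded_netapp = 0.03 * netapp_revenue  # the "about 3%" figure

share_of_ibm = ibm_branded_netapp / ibm_revenue
print(f"IBM-branded NetApp sales: ${ibm_branded_netapp:,.0f}")
print(f"Share of IBM revenue: {share_of_ibm:.3%}")
```

Run with these inputs, the share works out to well under a tenth of one percent of IBM’s revenue, which is the point of the post.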

Would you consider that insignificant to IBM, or incremental as NetApp’s Mendoza states? It could be both. But I think the sales are probably much more important to NetApp than to IBM. I wonder how it makes IBM sales folks feel to know their efforts are merely incremental? Does IBM have a bonus plan and sales contest to help determine who is their most insignificant and incremental NetApp sales person of the year? : )

Posted in Uncategorized | 1 Comment

This could be significant.
After all the speculation, Dell has announced it will start selling computers with pre-installed Linux distributions instead of Microsoft Windows.

Posted in Uncategorized | Comments Off

NetApp says that IBM’s branded sales are just incremental.

Tom Mendoza, president of NetApp, said that improving sales through IBM is not affecting NetApp’s solution providers. “IBM is very focused on what I call white space, which is where we are not covered,” Mendoza said. “For example, state and local government, and retail. So it’s largely incremental for us. They’ve done an excellent job of minimizing conflicts by focusing on where we are not.”

The article linked below says that IBM is 3% of NetApp sales. 3% of sales seems like a big increment to me… Do the math: $2,000,000,000.00 in NetApp sales * 3% = $60,000,000.00.

The OEM deal with IBM accounted for almost 3% of NetApp’s revenue during the quarter, reflecting a relationship that NetApp claimed is going better than expected.

I have to wonder how much NetApp’s big resellers are pushing out the door if IBM sales are only incremental? Could there be some perspective or marketing spin issues in play? How much longer will NetApp support this small incremental sales channel?

The word on the street since the announcement last spring by IBM that it would be reselling a portion of NetApp’s product line has been at best hushed and at worst that it’s a disaster. I heard that there were channel conflicts all over the place and that field compensation hadn’t been worked out to a point where the two could work together.

I’ll bet Larry King can find out the answers…

Talk-show host Larry King and his wife, Shawn Southwick, have agreed to buy a Beverly Hills home listed for just under $12 million.

The five-bedroom, Tuscan-style home was on the market less than two weeks before the couple agreed to buy it. The almost 10,000-square-foot house, built in 1989, has a skylit foyer and a master suite with a sitting room and twin bathrooms. There is also a two-bedroom guest house and a pool. Local agents identified Mr. King as the buyer. Listing agent Stephen Resnick of Westside Estate Agency confirmed that a contract has been signed, with the deal set to close later this month.

Mr. Resnick wouldn’t name the sellers but said they spent two years renovating the house. Records show the sellers are Thomas and Kathy Mendoza, who bought the home three years ago. Thomas Mendoza is president of Network Appliance, a Sunnyvale, Calif., data-storage company.

The King and Mendoza families declined to comment. Mr. King, 73 years old, is the longtime host of the CNN talk show “Larry King Live.” Ms. Southwick, 47, is a former host of the television show “Hollywood Insider.”

You just can’t make this stuff up !

Posted in Uncategorized | Comments Off

It is always nice when we get a letter like this…

Hi Mike,

Thank you for your letter following up on the order we placed. I am
impressed by the quality of customer service that your company has
provided, and fully satisfied both with the order and the helpfulness of
your staff.


BXXXXX NXXX (name x’d out for privacy)
Manager

Posted in Uncategorized | Comments Off

One of our customers read the Google study in the previous post and sent along this link, and the conclusion is fascinating. I certainly hope that all our customers will read it. You can download it here:

Disk failures in the real world:
What does an MTTF of 1,000,000 hours mean to you?

Bianca Schroeder Garth A. Gibson
Computer Science Department
Carnegie Mellon University
{bianca, garth}@cs.cmu.edu


7 Conclusion

Many have pointed out the need for a better understanding of what disk failures look like in the field. Yet hardly any published work exists that provides a large-scale study of disk failures in production systems. As a first step towards closing this gap, we have analyzed disk replacement data from a number of large production systems, spanning more than 100,000 drives from at least four different vendors, including drives with SCSI, FC and SATA interfaces. Below is a summary of a few of our results.

* Large-scale installation field usage appears to differ widely from nominal datasheet MTTF conditions. The field replacement rates of systems were significantly larger than we expected based on datasheet MTTFs.

* For drives less than five years old, field replacement rates were larger than what the datasheet MTTF suggested by a factor of 2-10. For five to eight year old drives, field replacement rates were a factor of 30 higher than what the datasheet MTTF suggested.

* Changes in disk replacement rates during the first five years of the lifecycle were more dramatic than often assumed. While replacement rates are often expected to be in steady state in year 2-5 of operation (bottom of the “bathtub curve”), we observed a continuous increase in replacement rates, starting as early as in the second year of operation.

* In our data sets, the replacement rates of SATA disks are not worse than the replacement rates of SCSI or FC disks. This may indicate that disk-independent factors, such as operating conditions, usage and environmental factors, affect replacement rates more than component specific factors. However, the only evidence we have of a bad batch of disks was found in a collection of SATA disks experiencing high media error rates. We have too little data on bad batches to estimate the relative frequency of bad batches by type of disk, although there is plenty of anecdotal evidence that bad batches are not unique to SATA disks.

* The common concern that MTTFs underrepresent infant mortality has led to the proposal of new standards that incorporate infant mortality [33]. Our findings suggest that the underrepresentation of the early onset of wear-out is a much more serious factor than underrepresentation of infant mortality and recommend to include this in new standards.

* While many have suspected that the commonly made assumption of exponentially distributed time between failures/replacements is not realistic, previous studies have not found enough evidence to prove this assumption wrong with significant statistical confidence [8]. Based on our data analysis, we are able to reject the hypothesis of exponentially distributed time between disk replacements with high confidence. We suggest that researchers and designers use field replacement data, when possible, or two parameter distributions, such as the Weibull distribution.

* We identify as the key features that distinguish the empirical distribution of time between disk replacements from the exponential distribution, higher levels of variability and decreasing hazard rates. We find that the empirical distributions are fit well by a Weibull distribution with a shape parameter between 0.7 and 0.8.

* We also present strong evidence for the existence of correlations between disk replacement interarrivals. In particular, the empirical data exhibits significant levels of autocorrelation and long-range dependence.
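To put the paper’s title question in concrete terms, a datasheet MTTF implies a nominal annualized failure rate (AFR). The sketch below converts the 1,000,000-hour figure and applies the 2-10x field multipliers the study reports:

```python
# Convert a datasheet MTTF (hours) into a nominal annualized failure rate,
# then scale by the 2-10x field multipliers reported in the CMU study.

HOURS_PER_YEAR = 8760

def nominal_afr(mttf_hours):
    """Approximate AFR implied by a datasheet MTTF (small-rate approximation)."""
    return HOURS_PER_YEAR / mttf_hours

afr = nominal_afr(1_000_000)
print(f"Datasheet AFR: {afr:.2%}")  # under 1% of drives per year
print(f"Field range (2-10x): {2 * afr:.2%} to {10 * afr:.2%}")
```

In other words, a 1,000,000-hour MTTF promises under 1% of drives failing per year, while the field data suggests the real number can be several times that.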


I wonder why these folks are not invited to speak at storage conferences like SNW? Could it be that the vendor community has something to hide?

Posted in Uncategorized | Comments Off

Google releases significant research paper on Disk Failure.

Download it here

I wonder why NetApp has not done this research, or if they have, why didn’t they release it? Perhaps they can do a follow-up study on FC disks?

Posted in Uncategorized | Comments Off