Jon Toigo Conversations continue

Jon Toigo initiated another dialogue with me which you can read below.

Conversations with Mike 2

Once again, I exchanged some emails with a guy who is in the trenches with customers as much as I am: Mike Linett, CEO of Zerowait. This is from a continuing email exchange that I thought was worth sharing.

Hi Mike,

Several inquiries have been coming my way to perform storage assessments as companies seek to drive cost out of infrastructure. One, in particular, is looking for a better management plan to reduce the labor costs associated with storage administration, but the majority of the others are seeking tactical cost reductions through the use of less brand-name gear and the replacement of OEM service contracts on the gear they already have with qualified third-party maintenance and support. Many have moved older gear into a secondary storage role, which I take to mean that they are not taking it off the line, but they are using it to store less frequently accessed data.

Are you seeing similar trends? What, honestly, are the benefits and the potential foibles of such a strategy?

As I see it, going to off-brand storage arrays that do a yeoman’s job of primary storage but without the brand-name OEM pricing is a tactic that is well worth a look in this economy. The problem is that some of my clients are already hurting for staff and seem to be buying into the claims of brand-name vendors that wrap-around service and support contracts are an offset to being short-handed. True?

Second, brand name or not, the real cost drivers in storage are mismanagement of data and a lack of visibility into infrastructure that would enable proactive redress of problems as they build up. There is also a huge disconnect between the storage folks (most of whom are server folks) making infrastructure changes and whoever is doing disaster recovery planning. Data volumes are being added or moved, and mirrors/backup schemes are being broken unnoticed. That’s scary.

Third, I have always been a proponent of qualified third party maintenance as an alternative to overpriced maintenance agreements from OEMs, as you know. I mean, why should you pay for maintenance from the OEM, which tends to increase in cost as the gear gets older, if you have migrated the gear itself into a secondary storage role?

Finally, some customers tell me that they are being wooed to outsource – I’m sorry, cloud-ify – their secondary storage at 20 cents per GB. As an aside, I wonder if anyone is doing the math on drive technology advances, like drives based on Toshiba’s 5 Tbit-per-square-inch recording technology, which are coming to market within the next 36 months? Assuming that the drives are enterprise quality and follow the same cost pattern as previous units, it seems to me that we are looking at pennies per GB on massive drive capacities in the near future. So, why use a cloud for capacity?
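The back-of-the-envelope math behind that question can be sketched in a few lines. All of the figures below (drive price, capacity, replacement cycle, mirroring overhead) are illustrative assumptions, not vendor quotes; only the 20-cents-per-GB cloud rate comes from the text above, read here as a monthly charge:

```python
# Rough cost comparison: renting cloud capacity vs. buying commodity drives.
# Every number here is an illustrative assumption, not a real quote.

CLOUD_PRICE_PER_GB_MONTH = 0.20   # the $0.20/GB figure, assumed to be monthly
DRIVE_CAPACITY_GB = 2000          # assumed: a 2 TB enterprise-class drive
DRIVE_PRICE = 300.0               # assumed street price per drive
DRIVE_LIFETIME_MONTHS = 36        # assumed replacement cycle
MIRROR_OVERHEAD = 2.0             # assumed: mirrored pairs for redundancy

def cloud_cost(gb: float, months: int) -> float:
    """Total spend on rented cloud capacity over the period."""
    return gb * CLOUD_PRICE_PER_GB_MONTH * months

def owned_cost(gb: float, months: int) -> float:
    """Amortized cost of owning mirrored drives for the same capacity.

    Deliberately ignores power, space, and admin labor to keep the
    sketch simple; those would narrow the gap but not close it at
    these assumed prices.
    """
    drives_needed = MIRROR_OVERHEAD * gb / DRIVE_CAPACITY_GB
    return drives_needed * DRIVE_PRICE * (months / DRIVE_LIFETIME_MONTHS)

if __name__ == "__main__":
    gb, months = 10_000, 36  # 10 TB of secondary storage, held for 3 years
    print(f"cloud: ${cloud_cost(gb, months):,.0f}")
    print(f"owned: ${owned_cost(gb, months):,.0f}")
```

Under these assumptions, three years of 10 TB in the cloud costs an order of magnitude more than buying and mirroring the drives outright, which is the gap Jon is gesturing at.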

Looking forward to your response.

Hi Jon:

Storage assessments and a reappraisal of the way organizations structure their data have become a large part of our engineering team’s daily discussions with customers. Even the folks in the executive offices understand that a Seagate drive should not cost 400% more when purchased through their storage OEM than from Newegg. The perceived value of enterprise storage vendors’ wares has declined as the effects of the financial panic of 2008 and 2009 have trickled down and curtailed the enterprise storage acquisition budgets of most organizations.

It used to be that when we would offer a customer a quote for support, they would ask us, “How can you charge so little?” In 2010, savvy customers ask us, “Why are OEMs charging so much to support their equipment?” The answer remains that for some organizations there is a perceived value in a different vinyl sticker or bezel color that makes the high price seem rational.

Often, I go to see a customer and we walk through their data center and assess the equipment within it. Most customers recognize that their equipment is made up of component parts from the same manufacturers, and that the vast differences in price come from the OEMs’ sales and marketing overhead. IT managers understand that they don’t have the staff or resources required to integrate and debug hardware and firmware, but they can certainly see that a Seagate drive or an LSI card is the same in their Dell server as in their high-priced array. So they have to question the pricing models the manufacturers are using, and they do not see the vendors’ proprietary software offsetting their engineers’ costs. In fact, just the opposite is occurring: customers are quickly recognizing that their engineering generalists can manage more open-source equipment than their specialists can manage proprietary arrays. The higher-priced array’s specialization therefore costs more to manage than a simple open-source storage solution.

For 80% of an IT infrastructure, this model creates more value out of best-of-breed commodity components than piecing together proprietary solutions from a mix of vendors and paying premium support on equipment whose duty is no more demanding than an F-150 pickup’s.

JT – Second, brand name or not, the real cost drivers in storage are mismanagement of data and a lack of visibility into infrastructure that would enable proactive redress of problems as they build up. There is also a huge disconnect between the storage folks (most of whom are server folks) making infrastructure changes and whoever is doing disaster recovery planning. Data volumes are being added or moved, and mirrors/backup schemes are being broken unnoticed. That’s scary.

IT today is very similar to business in general, and the 80/20 rule applies: 80 percent of most businesses’ revenue comes from 20% of the customer list. In IT, 80% of most organizations’ IT assets handle 20% of the traffic, while 20% handle 80%. What is great about managing open-source solutions is that it is easy to create a POSIX-like standard build for your own architecture and environment. Once your organization determines a “good enough” server and storage configuration, your favorite integrator can build all types of hardware, with minor changes in cards, memory, and processor, that use homogeneous components. With open-source software, maintenance and support costs are lower, and since there is an easily tapped community of experts to answer questions, a good staff of IT generalists can provide a larger portion of an organization’s IT infrastructure without specialized training.
Don’t panic. Embrace the wisdom of the marketplace. Slowly migrate to a solution of best-of-breed commodity hardware and a staff of IT engineering generalists who know how to find answers.

JT – Third, I have always been a proponent of qualified third party maintenance as an alternative to overpriced maintenance agreements from OEMs, as you know. I mean, why should you pay for maintenance from the OEM, which tends to increase in cost as the gear gets older, if you have migrated the gear itself into a secondary storage role?

I touched on this earlier in my response: people and organizations place different perceived values on the prices that OEMs charge. Hard times demand that more attention be paid to these costs, and this inevitably drives more companies to embrace third-party support, as well as other self-support models. Essentially, a movement toward commoditization and rationalization of computer hardware is occurring because of tightened IT budgets.

JT – Finally, some customers tell me that they are being wooed to outsource – I’m sorry, cloud-ify – their secondary storage at 20 cents per GB. As an aside, I wonder if anyone is doing the math on drive technology advances, like drives based on Toshiba’s 5 Tbit-per-square-inch recording technology, which are coming to market within the next 36 months? Assuming that the drives are enterprise quality and follow the same cost pattern as previous units, it seems to me that we are looking at pennies per GB on massive drive capacities in the near future. So, why use a cloud for capacity?

I have been trying to understand the foggy logic of cloud computing for a while, and although the marketing and sales pitches are really nice, the dollars-and-cents logic has escaped me. I can understand why certain applications make sense as an outsourced solution, but I don’t see the benefits of cloud computing for most organizations that need data security as well as uninterrupted connectivity. I think some good technologies will be developed through the cloud computing efforts being marketed today, but I don’t see cloud computing taking away the corporate responsibility of providing a reliable network of systems that delivers content and data to users so the enterprise can thrive.

Mike

Jon and I would like your comments.
