When Jon Toigo writes, I don't like to mess with it. The best thing I can do is put his own words here for you all to read!
What follows is from Toigo's column, and it is very important:
Some Out of the Box Thinking from Zetera
I just had a conference call, with a slide-deck walkthrough, with some folks from a startup called Zetera. I have to tell you that their stuff knocked my socks off.
Forget what you think you know about networked storage. There isn’t any out there right now.
We all agree that server-attached storage is not networked storage: the storage is treated as a peripheral of the server. It might come as a surprise that neither contemporary NAS (so-called network attached storage) nor SAN (Fibre Channel fabric-attached storage) are networked storage, either.
NAS is a thin server OS bolted to the side of an array: server-attached storage any way you cut it.
FC SANs are just server-attached storage with a switch in the middle that makes and breaks point-to-point connections at high speed. It is still direct-attached storage for all intents and purposes.
Control of a SAN requires an additional connection to every device (usually an IP network connection) because Fibre Channel is, as the name says, a channel protocol and not a network protocol. The guys who wrote FCP say that they weren’t setting out to create a network and deliberately excluded all IP stack-like functions from the protocol. They were trying to come up with a serial implementation of SCSI that could run over a thin wire so they wouldn’t keep tripping over the big fat SCSI cable every time they walked around their rack.
iSCSI moves us a bit closer to real networked storage, but it still follows the conventions of a channel architecture. The only advantage of iSCSI from an architect’s perspective is that it combines control and data paths into the same wire — something you will also be able to do with FC using a 10GbE network wiring infrastructure very soon.
What Zetera told me about is very different. I’m planning a column covering it in more detail at ESJ.com in a week or two. Basically, disk drives are connected directly to an IP net. UDP and multicasting are used to provide transport layer functions and to replace RAID. Gone is the need for an HBA, a RAID array controller, and an FC switch (if you have one of those). Just plug the drive into a “Tailgate” that connects it to the network, load some driver software and start building storage infrastructure directly on the network.
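The striping idea above can be sketched in a few lines. This is purely illustrative: the multicast groups, block size, and datagram layout below are invented assumptions, not Zetera's actual protocol. The point is that if each drive listens on its own IP multicast group, the host can stripe blocks across drives itself, doing in software what a RAID controller would otherwise do in hardware:

```python
import socket
import struct

# Hypothetical sketch only: group addresses, port, and block size are
# made up for illustration; Zetera's real on-wire format is not shown here.
BLOCK_SIZE = 512
DRIVE_PORT = 6000
DRIVE_GROUPS = ["239.1.1.1", "239.1.1.2", "239.1.1.3"]  # one multicast group per drive

def stripe(data: bytes, block_size: int = BLOCK_SIZE):
    """Split data into fixed-size blocks and assign each block to a drive's
    multicast group round-robin -- the job a RAID controller would do."""
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    return [(DRIVE_GROUPS[n % len(DRIVE_GROUPS)], n, blk)
            for n, blk in enumerate(blocks)]

def send_block(sock, group, block_no, payload):
    """Datagram = 4-byte block number + payload, sent as plain UDP
    to the drive's multicast group (no HBA, no FC switch in the path)."""
    sock.sendto(struct.pack("!I", block_no) + payload, (group, DRIVE_PORT))

if __name__ == "__main__":
    plan = stripe(bytes(1600))  # 1600 bytes -> three full blocks plus a 64-byte tail
    for group, n, blk in plan:
        print(f"block {n} ({len(blk)} bytes) -> {group}")
```

Because the blocks travel as independent UDP datagrams, redundancy schemes (mirroring to two groups, parity across groups) become addressing decisions rather than controller features, which is what makes the "replace RAID" claim plausible.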
That’s network storage, in my book. I won’t endorse the product until we have had a chance to kick the tires in our labs. But I’ll report what we learn.