Notes on Various Storage-related ISVs
I'm reading the material on the websites of various storage software startups, trying to get past the grand claims that their widget will solve all your storage and data management problems and dig out the hidden clues to what their products can, and can't, do. Here they are, in no particular order:
Clearpace
Actually not a bad website. They provide an algorithm for compressing structured data records. That, in itself, is nothing new, but what's unique is that the data remains searchable by keyword (using SQL) while compressed, which helps with compliance with data-retention laws. So, for example, you could keep all your OLTP transaction records for the last year on nearline VTLs and the finance or legal department could query them at any time.
The amount of compression varies. According to their website, the algorithm de-duplicates fields in records, keeping only one copy and replacing the other copies with references to that single retained copy of the field. The presumption is that most databases store many copies of the same data in different records. I don't know how to verify that, but they claim you can achieve up to 10:1 compression.
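To make the idea concrete, here's a minimal sketch of field-level de-duplication (my own illustration, not Clearpace's code): each distinct field value is stored once, and records just hold references into a shared value table, so repeated values across records cost only a reference.

```python
# Hypothetical sketch of field-level de-duplication. Each distinct field value
# is stored once; records hold references (indexes) into a shared value table.

class DedupStore:
    def __init__(self):
        self.values = []      # one copy of each distinct field value
        self.index = {}       # value -> position in self.values
        self.records = []     # each record is a tuple of value references

    def add_record(self, fields):
        refs = []
        for value in fields:
            if value not in self.index:
                self.index[value] = len(self.values)
                self.values.append(value)
            refs.append(self.index[value])
        self.records.append(tuple(refs))

    def get_record(self, n):
        return [self.values[ref] for ref in self.records[n]]

store = DedupStore()
store.add_record(["2007-03-01", "ACME Corp", "USD", "42.00"])
store.add_record(["2007-03-01", "ACME Corp", "USD", "17.50"])
print(store.get_record(1))   # ['2007-03-01', 'ACME Corp', 'USD', '17.50']
```

The more records share dates, customer names, currencies, and the like, the better this pays off, which is presumably where the 10:1 claim comes from.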
Seems like useful technology that meets a real need. What I don't have a good feel for is how to operationalize it. Do you create a snapshot every day, then run it through this application as you copy to the archive? Do you run the daily incremental backup through it, and can the application reconcile that with previous incremental backups? I'm curious. I'm also curious whether most databases really duplicate that much data and, if they do, how long it will be before the database vendors add a built-in compression feature that creates a similar searchable nearline archive.
Continuity Software
Software that analyzes your SAN topology and identifies data protection and availability risks such as configuration errors, inconsistent LUN mapping, unprotected data volumes, etc. It includes a knowledge base with best-practice suggestions for disaster recovery and data recovery.
What's not clear is how the application gets the data to analyze and how it gets updates when changes are made. The software is 'agent-less', but they claim it has automation to detect the configuration on its own. They also sell a 'service offering' (translation: you pay for the labor for them to come into your datacenter and do the work); they collect the configuration and enter it into the tool, which in turn shows you the risks.
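Conceptually, the analysis boils down to running rules over a collected configuration snapshot. A toy illustration of the kind of rule I imagine is involved (my guess, not Continuity's engine):

```python
# Toy illustration of SAN-configuration risk rules -- my own sketch, not
# Continuity's implementation. Each rule inspects a collected config snapshot
# and flags anything that looks like a data-protection or availability risk.

config = {
    "volumes": [
        {"name": "oltp_data", "replicated": True,  "paths": 2},
        {"name": "finance",   "replicated": False, "paths": 1},
    ],
}

def find_risks(config):
    risks = []
    for vol in config["volumes"]:
        if not vol["replicated"]:
            risks.append(f"{vol['name']}: no replica -- unprotected at the DR site")
        if vol["paths"] < 2:
            risks.append(f"{vol['name']}: single path -- no failover if an HBA or switch dies")
    return risks

for risk in find_risks(config):
    print(risk)
```

The hard part, of course, is collecting an accurate configuration in the first place and keeping it current as the SAN changes, which is exactly the part their site doesn't explain.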
Scalant
Scalant produces a layer of system software that turns a compute grid into a 'super cluster' providing HA and load balancing. This is something Sun claimed to be doing a few years ago (anyone remember N1?). It includes three components. Like traditional clusters, there is a layer on each compute node that monitors the health of that node and provides a heartbeat so others know it's alive. The second component, unlike traditional clusters, is a monitor (which itself runs on an HA-clustered pair of servers) that watches the overall health of each compute node and receives the heartbeats. It also stores the 'context' of the various applications running on the grid, so when it detects the failure of an application on one compute node it can restart it on another. The third component is the configuration and monitoring GUI.
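The heartbeat-and-restart loop is the heart of it. Here's a rough sketch of that idea (entirely hypothetical, not Scalant's code): nodes report heartbeats, the monitor notices a stale one, and it restarts that node's applications elsewhere using the saved context.

```python
# Sketch of the heartbeat-and-restart idea (hypothetical, not Scalant's code).

import time

HEARTBEAT_TIMEOUT = 10   # seconds without a heartbeat before a node is declared dead

last_heartbeat = {}       # node -> timestamp of last heartbeat received
app_context = {           # application -> node it runs on, plus what it needs to restart
    "billing": {"node": "node-3", "volumes": ["billing_db"]},
}

def receive_heartbeat(node):
    last_heartbeat[node] = time.time()

def check_nodes(spare_nodes):
    now = time.time()
    for app, ctx in app_context.items():
        node = ctx["node"]
        if now - last_heartbeat.get(node, 0) > HEARTBEAT_TIMEOUT:
            new_node = spare_nodes.pop()
            print(f"{node} missed heartbeats; restarting {app} on {new_node}")
            ctx["node"] = new_node   # in reality: mount ctx["volumes"], then start the app

receive_heartbeat("node-3")
# ...later, once node-3 goes quiet:
# check_nodes(spare_nodes=["node-7", "node-8"])
```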
What's interesting to me is the implications for the storage network. FC is not a good choice for this type of compute grid. For one, it's too expensive and not available onboard these types of scalable compute nodes. More importantly, it doesn't have good support for the sharing and dynamic reconfiguration you really want in order to migrate applications automatically around a large compute grid. You really want a SAN like the one I described in My Ideal SAN.
First, you want sub-pools of compute nodes running the same OS configs, and you want easy scalability, so no internal boot disks to manage. You want IP-based boot with a central DHCP server to route each blade to the right boot LUN, and data services in the array so all the nodes in a sub-pool can boot from the same volume. Then, you would like the application contexts managed by the Scalant cluster monitor to include references, by name, to the data volumes each application needs, so when the monitor instructs a compute node to start an app, the node knows how to find and mount those volumes. Finally, you would like some form of object-based storage that can share data between multiple nodes to support parallel processing as well as HA failover clusters.
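Here's roughly what I have in mind for such an application context, sketched as data (entirely my own invention, not anything Scalant documents): the context names the boot LUN and data volumes, and whichever node the monitor picks resolves and mounts them before starting the app.

```python
# Hypothetical application-context record -- volumes referenced by name and
# resolved at start-up time, so the app can land on any node in the sub-pool.

app_context = {
    "app": "order_processing",
    "boot_lun": "rhel_web_pool",            # shared boot volume for this sub-pool
    "data_volumes": ["orders_db", "order_logs"],
}

# The name -> address mapping lives in the SAN/array layer, not on any one node.
volume_directory = {
    "orders_db":  "iscsi://array1/vol/orders_db",
    "order_logs": "iscsi://array1/vol/order_logs",
}

def start_app_on(node, ctx):
    # Resolve each named volume, then (in real life) mount it and launch the app.
    for name in ctx["data_volumes"]:
        target = volume_directory[name]
        print(f"{node}: mounting {name} from {target}")
    print(f"{node}: starting {ctx['app']}")

start_app_on("blade-17", app_context)
```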
CopperEye
OK, this is the first company I've researched today that doesn't have a smiling person on the homepage. I like them already.
CopperEye is addressing the same problem as Clearpace above: the need to quickly search large transaction histories based on content/keywords. Unlike Clearpace, CopperEye indexes data in place and builds a set of index tables that fit in a relatively small amount of additional storage. They claim their algorithms and table structure allow flexible, high-speed searches. Although they never explicitly mention structured vs. unstructured data, their site usually talks in the context of searching transactions, so I think the focus is structured data. I didn't see a mention of SQL, but they do have a graphical UI. Here's their description:
CopperEye Search™ is a specialized search solution that allows business users to quickly find and retrieve specific transactions that may be buried within months or years of saved transaction history. Unlike enterprise search solutions, CopperEye Search is specifically targeted at retrieving records such as credit card transactions, stock trades, or phone call records that would otherwise require a database.
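I read "indexes data in place" as something like the following (a guess on my part, not CopperEye's actual structure): leave the transaction files where they are and build a compact side index that maps a key to byte offsets, so a lookup reads only the matching records.

```python
# Rough sketch of indexing transaction records in place -- my assumption about
# the general approach, not CopperEye's algorithm. The raw files are untouched;
# a small side table maps each key to the byte offsets of matching records.

from collections import defaultdict

def build_index(path, key_field=0, delimiter=","):
    index = defaultdict(list)          # key -> list of byte offsets in the file
    with open(path, "rb") as f:
        offset = 0
        for line in f:
            key = line.decode().split(delimiter)[key_field]
            index[key].append(offset)
            offset += len(line)
    return index

def lookup(path, index, key):
    with open(path, "rb") as f:
        for offset in index.get(key, []):
            f.seek(offset)
            yield f.readline().decode().rstrip()

# Usage (assuming a CSV of transactions keyed by account number):
# idx = build_index("trades_2006.csv")
# print(list(lookup("trades_2006.csv", idx, "ACCT-1042")))
```

The index is small relative to the data because it stores only keys and offsets, which fits their claim of needing little additional storage.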
Datacore
Datacore is not new and is not a startup, although they are a private company. They provide a software suite that turns a standard x86 platform into an in-band virtualization device. A key feature is the ability to under-provision volumes and keep a spare pool of capacity that is dynamically added to volumes as necessary. Other features include intelligent caching, data mirroring, snapshots, virtual LUNs, LUN masking, etc. It runs on top of Windows on the virtualization device, so it can use the FC or iSCSI drivers in Windows, including running target mode on top of either.
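That under-provisioning feature is what we'd now call thin provisioning. A minimal sketch of the allocate-on-first-write idea (mine, not Datacore's implementation):

```python
# Minimal sketch of under-provisioned (thin) volumes -- my illustration, not
# Datacore's implementation. A volume advertises a large size, but physical
# chunks come out of the shared spare pool only when first written.

CHUNK = 1024 * 1024            # 1 MB allocation unit (arbitrary for the example)

class ThinVolume:
    def __init__(self, name, advertised_size, pool):
        self.name = name
        self.advertised_size = advertised_size
        self.pool = pool        # shared list of free physical chunk ids
        self.map = {}           # logical chunk number -> physical chunk id

    def write(self, offset, data):
        chunk_no = offset // CHUNK
        if chunk_no not in self.map:
            if not self.pool:
                raise RuntimeError("spare pool exhausted -- time to add disks")
            self.map[chunk_no] = self.pool.pop()
        # real code would now write `data` to the physical chunk
        print(f"{self.name}: logical chunk {chunk_no} -> physical chunk {self.map[chunk_no]}")

spare_pool = list(range(100))                               # 100 MB of real storage
vol = ThinVolume("mail_store", 10 * 1024**3, spare_pool)    # advertised as 10 GB
vol.write(0, b"hello")
vol.write(5 * CHUNK, b"world")
```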
This looks like a nice product. It's been shipping since 2000 and is up to rev 5, so it ought to be pretty robust and stable. It runs on commodity hardware and can use JBODs for the back-end storage. Provided the SW license is reasonable, this can be a nice way to get enterprise-class data management on some very low-cost hardware.
Avail
Avail has developed a SW product that works with the Windows file system to synchronously replicate files among any number of Windows systems. They call it the Wide Area File System (WAFS). It replicates only the changed bytes to minimize data traffic, and it works over HTTP, so it can pass through any firewall that allows HTTP traffic and genuinely works over WANs. It can replicate from a desktop to a server or between desktops. Users always open a local copy of the file, but the local agent gets notified if a change has been made to one of the remote copies and makes sure any reads return the most recent copy of the data. It does this with a lightweight protocol: as soon as a file or directory (folder) is updated, all mirrors get notified quickly, although the actual data movement may happen in the background.
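As I understand it, the trick is separating the cheap notification from the expensive byte transfer. Here's a sketch of that notify-then-sync idea (my own reading, not Avail's protocol): when a file changes, peers are immediately told which blocks are stale; the changed bytes can be fetched in the background before the next read.

```python
# Sketch of notify-then-sync replication as I understand the idea
# (not Avail's protocol): notifications are tiny and immediate,
# while the changed bytes move later.

BLOCK = 4096

def changed_blocks(old: bytes, new: bytes):
    """Return the block numbers whose contents differ between two versions."""
    changed = []
    for i in range(0, max(len(old), len(new)), BLOCK):
        if old[i:i + BLOCK] != new[i:i + BLOCK]:
            changed.append(i // BLOCK)
    return changed

class Mirror:
    def __init__(self, name):
        self.name = name
        self.version = 0
        self.stale_blocks = set()   # blocks to fetch before serving a read

    def notify(self, version, blocks):
        # Lightweight notification: record what is out of date, fetch bytes later.
        self.version = version
        self.stale_blocks.update(blocks)
        print(f"{self.name}: version {version}, blocks {sorted(self.stale_blocks)} pending")

old, new = b"a" * 8192, b"a" * 4096 + b"b" * 4096
for m in [Mirror("laptop"), Mirror("branch-server")]:
    m.notify(version=1, blocks=changed_blocks(old, new))
```

Sending only the changed blocks (ultimately, only the changed bytes) is what keeps the WAN traffic small.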
Provided this is robust, it's kind of cool technology. It allows both peer-to-peer file sharing as well as backup/replication.
That's all for today. I'll follow up with a Part II in a few days.