Coldago Research identified Hammerspace as a Gem for 2020, and I invite you to read the dedicated page. We also positioned Hammerspace as a specialist in the Map 2021 for File Storage. What we expect now is to see market adoption, a variety of deployments and an effective execution of the roadmap.
My first comment is really about the vision David Flynn, CEO and founder, delivered as a real ambassador of Hammerspace. He has truly believed in his idea for many years, mixing present and future product capabilities. His role is fundamental as nobody in the company can tell the story as he does; he IS Hammerspace.
As I have already written several times, the genesis of Hammerspace came from Primary Data and, before that, from Tonian Systems and its pNFS universal metadata model. This is key as it gives the product its parallel approach. For readers' reference, here are 2 articles on Tonian I posted in 2011 and 2012, more than 10 years ago. For Primary Data, I let you search in the related field on the right; you should find at least 7 articles from me, as one of the most prolific authors in that space for a few decades.
It's not always easy to understand market directions but clearly the team has anticipated users' needs and how to solve several IT challenges, especially in some vertical segments.
Hammerspace has repositioned the product under a new name, Global Data Environment aka GDE, that creates some confusion. Global and Data are good words but Environment is quite bizarre; an environment is more a configuration enabled by the company's product. And this is fuzzy: what do users buy? GDE? No, they buy and subscribe to Hammerspace technology. Other companies playing in this category use the words services or systems, but we get the point: everyone tries to create some differentiators in their favor, and here clearly this is the goal.
During that session I was surprised to read that the problem Hammerspace tries to solve is "Enabling local data access and data services across multiple local, cloud, and remote storage resources", according to the agenda slide. In fact this is not the problem they solve, and it is not what users need. They solve the problem by providing global access to dispersed data - local, cloud and remote, as they list - from any location, for any application and any user; local access is just a differentiator in their capabilities, giving a key feature and a great user experience. The slide below summarizes what they do pretty well.
It requires a multi-dimensional approach, at the user, application, access method and storage levels, and the idea is to mask complexity, distance, latency... and give an Any-to-Any model via a global NAS philosophy.
The difficulty resides in several aspects, among them the discovery and merge of metadata, the scale of the metadata service and how to make it resilient, the global presentation of various network file systems and other object storage back-ends, the data access and data propagation mechanisms, the locking and consistency model and, last but not least, the need to maintain corporate security rules...
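To make the metadata challenge concrete, here is a minimal Python sketch of what discovering and merging metadata from several back-ends into one namespace could look like. It is purely illustrative, not Hammerspace code; the back-end names and paths are my own assumptions.

```python
import os

# Hypothetical back-end mount points, already mounted on the machine
# running the metadata service; names are assumptions, not product paths.
BACKENDS = {
    "nas1": "/mnt/backends/nas1",
    "nas2": "/mnt/backends/nas2",
    "s3gw": "/mnt/backends/s3gw",
}

def build_global_index(backends):
    """Walk every back-end and merge file metadata under one virtual root.

    Returns a dict mapping a global path like /global/nas1/dir/file
    to basic metadata (owning back-end, size, mtime).
    """
    index = {}
    for name, root in backends.items():
        for dirpath, _dirs, files in os.walk(root):
            for f in files:
                local = os.path.join(dirpath, f)
                rel = os.path.relpath(local, root)
                global_path = os.path.join("/global", name, rel)
                st = os.stat(local)
                index[global_path] = {
                    "backend": name,
                    "size": st.st_size,
                    "mtime": st.st_mtime,
                }
    return index
```

Of course a real service has to keep such an index consistent and resilient at very large scale, which is exactly why a dedicated metadata layer is needed.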
The slide below makes things a bit clearer as the team uses the term Multi Data Center NAS, which is also a good qualification of the service instantiated by Hammerspace.
And by multi data center, it implicitly means on-premises corporate DCs but also public ones. The core idea is to establish a global file system layered on top of each independent one. One way to see it is the following: all file systems are mounted by the Hammerspace engine and re-exposed to clients as new mount points or sub-directories of a global virtual root. It immediately means that machines that wish to use Hammerspace must stop using direct mount points and subsequently mount what is exposed by GDE. And it has to be detailed even further, as pNFS or NFSv4.2 mounts are managed differently from NFSv3 or SMB, which require DSX nodes. Interestingly, Hammerspace continues to use Primary Data product component names, as DSX was one of them.
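As a thought experiment, the remapping and the protocol split could be modeled as below, reusing the index from the previous sketch: a global path is resolved to its back-end, pNFS/NFSv4.2 clients get a direct data path while NFSv3 and SMB clients are served through a DSX node. All names (resolve, DSX_NODES...) are assumptions for the example, not product interfaces.

```python
DSX_NODES = ["dsx1.example.com", "dsx2.example.com"]  # hypothetical nodes

def resolve(global_path, index):
    """Map a path under the global virtual root to its back-end entry."""
    entry = index.get(global_path)
    if entry is None:
        raise FileNotFoundError(global_path)
    return entry

def data_path(global_path, protocol, index):
    """pNFS/NFSv4.2 clients receive a direct layout to the storage node;
    NFSv3 and SMB clients go through a DSX node acting as a proxy."""
    entry = resolve(global_path, index)
    if protocol in ("pnfs", "nfs4.2"):
        return {"mode": "direct", "backend": entry["backend"]}
    if protocol in ("nfs3", "smb"):
        node = DSX_NODES[hash(global_path) % len(DSX_NODES)]
        return {"mode": "proxied", "via": node, "backend": entry["backend"]}
    raise ValueError(f"unsupported protocol: {protocol}")
```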
For cross-platform data access between Linux and Windows, the team has implemented RFC2307bis. For object storage, objects and buckets are assimilated by GDE and represented as file system shares in the client view, thus providing a global unified view of all dispersed data.
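RFC2307bis essentially stores POSIX attributes (uidNumber, gidNumber...) in the directory so the same identity can be honored on both the SMB and NFS sides. A minimal sketch of the idea, with a hard-coded dict standing in for a real LDAP/Active Directory lookup; the entry and SID are made up:

```python
# Toy directory entries using RFC2307bis-style attributes; in reality
# these would come from an LDAP/Active Directory query, not a dict.
DIRECTORY = {
    "jdoe": {"uidNumber": 10001, "gidNumber": 5000,
             "sid": "S-1-5-21-111-222-333-1108"},  # fictitious SID
}

def posix_identity(account):
    """Resolve a Windows account name to POSIX uid/gid so the same
    user gets consistent file ownership over SMB and NFS."""
    entry = DIRECTORY.get(account)
    if entry is None:
        raise KeyError(f"unknown account: {account}")
    return entry["uidNumber"], entry["gidNumber"]

uid, gid = posix_identity("jdoe")
print(f"jdoe maps to uid={uid} gid={gid}")
```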
Let's dig a bit into GDE. First, there are 2 entities, the Metadata Server aka MDS and the Data Services, the famous DSX. The latter serves NFSv3 and SMB clients via the Portal, which also maintains locks for SMB shares; hosts the mover instance that propagates data within the environment across file servers, NAS and object storage; offers the connection to on-premises and public object storage; and finally manages the local block storage, formatted with xfs and exposed via NFSv3.
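The mover role can be pictured with a few lines of Python, again a sketch under my own assumptions rather than the actual DSX implementation: data is copied from one store to another and only the location recorded in the metadata changes, so the global path seen by clients stays the same.

```python
import shutil

# Hypothetical metadata record for one file in the global namespace:
# the client-visible path is stable, the physical location is not.
record = {
    "global_path": "/global/projects/report.dat",
    "backend": "nas1",
    "rel_path": "projects/report.dat",
}

ROOTS = {"nas1": "/mnt/backends/nas1", "cloud": "/mnt/backends/cloud"}

def move_file(rec, dst_backend):
    """Copy the data to the destination back-end, then flip the
    location in metadata; clients never see the path change."""
    src = f"{ROOTS[rec['backend']]}/{rec['rel_path']}"
    dst = f"{ROOTS[dst_backend]}/{rec['rel_path']}"
    shutil.copy2(src, dst)        # data propagation
    rec["backend"] = dst_backend  # metadata update, path unchanged

# move_file(record, "cloud")  # would copy the data then retarget metadata
```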
We see 4 key software core functions: Universal Data Access Layer, Flexible Data Orchestration, Automated Data Services and Expansive Storage Options.
- Universal Data Access Layer: A single virtual file system is instantiated to present files to clients via industry-standard file sharing protocols such as NFS and SMB, as well as CSI. pNFS belongs to the NFS standard, no need to mention it here. Obviously the data format is not changed; imagine the opposite, things have to be reversible, if not you're locked in. This layer extends the data sharing capabilities of network file shares to a global scale.
- Flexible Data Orchestration: With metadata at the core, stored and operated by RocksDB, Hammerspace exposes its virtual file layer to any client. The data orchestration works at the file level and makes things transparent for users, giving "a local experience". This metadata control plane leverages the philosophy of pNFS with dedicated metadata servers; in fact, the intelligence here exposes the file system to users with metadata info only (see the sketch after this list).
- Automated Data Services: This is one of the aspects that can create some differentiators with the competition: replication, tiering, reduction, snapshot & clone, file versioning, audit, WORM, enriched metadata or anti-virus...
- Expansive Storage Options: With the integration of any kind of storage: file, object and block. As said above, block storage array support is covered by DSX nodes that create an xfs file system on the devices and expose it via NFSv3. That confirms once again that block is not supported natively; file and object are. Block support means a raw device is integrated, and here you need xfs + NFS for it to be managed by Hammerspace.
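To illustrate the file-granular orchestration mentioned in the Flexible Data Orchestration bullet, here is a toy placement policy in Python; the rules, thresholds and back-end names are all invented for the example and are not Hammerspace's actual policy engine.

```python
import time

RULES = [
    # (predicate on file metadata, target back-end) - invented rules:
    # recently modified files on fast NAS, big files on object storage,
    # everything else on capacity NAS.
    (lambda m: m["mtime"] > time.time() - 7 * 86400, "fast-nas"),
    (lambda m: m["size"] > 100 * 2**20,              "object-store"),
    (lambda m: True,                                  "capacity-nas"),
]

def place(meta):
    """Return the back-end where this file should live right now;
    re-evaluating on metadata changes drives background data movement."""
    for predicate, target in RULES:
        if predicate(meta):
            return target

print(place({"mtime": time.time(), "size": 4096}))  # -> fast-nas
```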
Hammerspace goes beyond traditional LAN barriers as it extends NFS and SMB geographically to remote users.
I also wish to mention famous articles about Network File Management and Network File Virtualization posted on this blog and on StorageNewsletter. It's worth listing players such as Attune Systems, AutoVirt, Acopia Networks, NeoPath, NuView or Rainfinity. Many of them got acquired or even disappeared.
Among competitors, depending on use cases, we can list today Data Dynamics, Avere, Panzura, Nasuni, CTera Networks, Morro Data, StrongBox, Peer Software, JuiceData, LucidLink or Komprise.
I also introduced, several years ago, the notion of U3 storage for Universal, Unified and Ubiquitous storage, and I have to say that Hammerspace is a typical example of a U3 product.
2022 will be a pivotal year for the company as the project has existed for many years now, and we all expect an acceleration of the business and its market adoption.