Sep 26, 2017

Veritas Vision 2017 Recap

Veritas Vision 2017, the annual user conference, just took place at the Aria complex in Las Vegas; we counted 2,000 attendees representing 52 countries, and at least 33 official sponsors and exhibitors.

A true cloud believer with a clear multi-cloud strategy, Veritas selected four top sponsors – Microsoft, Google, IBM and Oracle – all leaders in the cloud computing industry. AWS, very present last year, was absent from this edition.

Among the other sponsors, we noticed the presence of Aptare, Cloudian, Infinidat, NEC, NetApp and Seagate, as well as Komprise and StorReduce, both with small pods in the Google Cloud booth, which gave them some visibility.

The company articulated the conference around four key topics that describe its products and solutions: Data Protection, Software-Defined Storage (SDS), Cloud, and Risk & Compliance, all built around the 360 Data Management platform (360DM).

Data Protection
NetBackup, the unified enterprise data protection offering for multi-cloud, physical and virtual environments, will be available in its 8.1 release on September 26. Still available as software or as an appliance, the product adds several interesting features:
  • CloudCatalyst, available as software or appliance, offers a new deduplication mechanism for connected clouds,
  • Parallel Streaming to support NoSQL, Hadoop and Cassandra environments, crystalizing new user needs already validated by players such as Datos IO or Imanis Data,
  • and comprehensive, extended support for Microsoft Azure, OneDrive for Business, SQL Server, Office 365 and Exchange, but also Google Drive and Cloud Storage, Oracle databases, Box…
SDS
A long-time leader in SDS, Veritas continues to expand the product family and prefers to qualify its offerings as SDS for Block, File or Object. Fortunately, Veritas doesn't use the ridiculous term “Server SAN”, which only creates confusion and complexity in users' minds. Veritas unveiled three key announcements:
  • an object storage (SDS for Object) represented by Veritas Cloud Storage,
  • a new iteration of scale-out NAS (SDS for File) named Access Appliance, powered by Seagate and available early 2018. Access is also available as pure software,
  • and HyperScale for Container (SDS for Block) as the new companion of HyperScale for OpenStack.
Cloud
Information Map receives 23 new connectors for data sources across on-premises and cloud-based environments. It is also available as a standalone product. The company also gave some previews of the next big thing at Veritas, the Multi-Cloud Data Management platform, which will impact and weaken many attempts from smaller vendors.

Risk & Compliance
Among other topics, GDPR was covered in various places and several extensions and new features for Enterprise Vault were introduced.

Microsoft
Microsoft, and especially its Azure business, was also well covered during the show. Veritas announced 360DM for Azure, but also Veritas Resiliency Platform (VRP), SDS Access, Information Map again, NetBackup (see above), Backup Exec and Enterprise Vault. The company introduced two new products:
  • CloudPoint, to manage snapshots on-prem and make quick recovery a reality in the cloud. The product is available via a freemium model.
  • CloudMobility, to migrate workloads to and from the cloud.
All these Veritas announcements are summarized in the three following PRs (1, 2 and 3).

This second edition of Veritas Vision as an independent company has confirmed that the company is back, leading the data management market with several product lines.

Sep 25, 2017

Veritas Cloud Storage will shake the industry

Last year, same place, same Vision conference, I spoke with Mike Palmer, former SVP and GM for datacenter products and now EVP and chief product officer of Veritas Technologies LLC, about object storage, the huge lack of a solution from the leader in data management, and the need to build something.

Software-Defined Storage is almost synonymous with Veritas, as the company has been recognized for its leading platform-independent open systems file system and volume manager since the early '90s.

But even when the pioneers of object storage, I mean here Caringo and Cleversafe, launched their companies around 2005, Veritas didn't investigate in that direction. A real lack of anticipation and vision? Not sure, as Symantec owned the company at that time, for more than a decade, and finally froze some developments and initiatives.

Announcement and context
One year later, and just 16 months after the split between Symantec and Veritas, Mike Palmer announced on stage Veritas Cloud Storage (VCS), the company's answer to the need for high capacity and high resiliency dedicated to unstructured data: the object storage product from the software giant that we have expected for several years.

It is interesting to notice that the term “object storage” is not mentioned in the press release announcing the product, as the term very often generates mixed feelings.

Product management did a great job except for one thing: the name of the product. We understand that object and cloud could be interchanged in this case, but as a common practice many people refer to products by acronym. NetBackup is known as NBU, Backup Exec as BE, and Veritas Cluster Server as VCS, and this last one clashes with Veritas Cloud Storage. I have to admit that the InfoScale product line introduced a new name, InfoScale Availability, to replace Cluster Server. So let's consider this a detail; we will have to adapt ourselves as well.

The new product manager for the product is Chad Thibodeau, who spent a few years at Cleversafe, by far the object storage leader.

Product detail
Also named on stage the “ZetaScale” product, VCS is a pure Veritas development with, fortunately, several concepts and ideas similar to those found in other object storage products, but also many differentiators.

It runs on commodity hardware, on bare metal, virtualized or even in containers, leveraging standard components and a Linux OS. Every node is deployed with a local file system (ext2, XFS or VxFS, among others) for its internal drives, and every drive is independent.

The philosophy chosen by the engineering team is based on consistent hashing with a columnar data store backend, and the data pivot size is 64MB.
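As an aside, the placement principle behind consistent hashing can be sketched in a few lines of Python. This is a hypothetical illustration of the general technique, not Veritas code: the node names, virtual-node count and chunk naming are all assumptions, and the columnar backend is not modeled.

```python
# Toy consistent-hashing ring. Objects are cut into fixed-size chunks
# (64MB pivots in VCS; tiny here) and each chunk maps to the first node
# clockwise on the ring, so adding or removing a node only remaps a small
# fraction of the chunks.
import hashlib
from bisect import bisect

def h(key: str) -> int:
    return int.from_bytes(hashlib.sha256(key.encode()).digest()[:8], "big")

class Ring:
    def __init__(self, nodes, vnodes=64):
        # Virtual nodes smooth out the distribution across physical nodes.
        self.points = sorted((h(f"{n}#{v}"), n)
                             for n in nodes for v in range(vnodes))
        self.keys = [p for p, _ in self.points]

    def node_for(self, chunk_id: str) -> str:
        i = bisect(self.keys, h(chunk_id)) % len(self.points)
        return self.points[i][1]

ring = Ring(["node-a", "node-b", "node-c"])
# Place the chunks of one object; with 64MB pivots a 1GB object has 16 chunks.
placement = {f"obj1/chunk{i}": ring.node_for(f"obj1/chunk{i}")
             for i in range(16)}
```

The placement is deterministic: any client rebuilding the same ring computes the same chunk-to-node mapping without a central directory.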


In terms of data protection, metadata is protected by replication, and data by replication or erasure coding; in the latter case, it is based on Reed-Solomon implemented with the Jerasure library.
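To show the principle, and only the principle: full Reed-Solomon over a Galois field, as Jerasure implements it, is beyond a short sketch, but the degenerate case of k data fragments plus a single XOR parity illustrates how a lost fragment is rebuilt from the survivors. The function names and the k=4 choice are illustrative.

```python
# Erasure coding in its simplest form: k data fragments plus one XOR parity
# fragment (the m=1 case; Jerasure's Reed-Solomon generalizes this to any m
# over a Galois field). Any single lost fragment can be rebuilt.
def encode(data: bytes, k: int):
    size = -(-len(data) // k)                  # ceil(len/k)
    frags = [data[i * size:(i + 1) * size].ljust(size, b"\0")
             for i in range(k)]
    parity = frags[0]
    for f in frags[1:]:
        parity = bytes(a ^ b for a, b in zip(parity, f))
    return frags, parity

def rebuild(frags, parity, lost: int):
    # XOR the parity with every surviving fragment to recover the lost one.
    out = parity
    for i, f in enumerate(frags):
        if i != lost:
            out = bytes(a ^ b for a, b in zip(out, f))
    return out

frags, parity = encode(b"metadata replicated, data erasure coded", k=4)
recovered = rebuild(frags, parity, lost=2)
```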

For access methods, the product is very rich, probably one of the richest on the market, with NFS and SMB, S3, Swift plus an HTTP REST API, MQTT and CoAP (for IoT), a JDBC connector for BI, and plugins for Apache projects such as HDFS, Kafka and Storm.

The design offers a choice between two data consistency models when you configure the product: eventually consistent or strictly consistent.
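The trade-off between the two modes can be modeled in a few lines. This is a toy model of replicated writes, not VCS internals; the class and method names are invented for the illustration.

```python
# Toy model of the two consistency modes: strict mode acknowledges a write
# only after all replicas apply it; eventual mode acknowledges after one
# replica and lets a background anti-entropy pass make the others converge.
class Replicas:
    def __init__(self, n):
        self.stores = [dict() for _ in range(n)]
        self.pending = []                      # background propagation queue

    def write(self, key, value, mode="strict"):
        if mode == "strict":
            for s in self.stores:              # synchronous fan-out
                s[key] = value
        else:                                  # eventual
            self.stores[0][key] = value        # ack after one replica
            self.pending.append((key, value))

    def anti_entropy(self):
        # Background sync that makes eventual writes converge.
        for key, value in self.pending:
            for s in self.stores:
                s[key] = value
        self.pending.clear()

    def read(self, key, i):
        return self.stores[i].get(key)

r = Replicas(3)
r.write("a", 1, mode="strict")     # visible on every replica immediately
r.write("b", 2, mode="eventual")   # replica 2 may still miss it
stale = r.read("b", 2)             # None until anti-entropy runs
r.anti_entropy()
```

Strict mode trades write latency for read-anywhere correctness; eventual mode acknowledges faster but tolerates the stale read shown above until the sync pass runs.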

VCS also provides a workflow engine and implements several features such as data indexing, classification, analytics and categorization, without the need to add any third-party product on top of it.

Market climate
With more than 20 vendors trying to find a niche and a position for their products, the object storage market segment has real difficulties growing and existing. Recently, even backed by famous investment firms such as A16Z, Dell Ventures and the Mayfield Fund, among others, two companies died, Coho Data and Formation Data Systems, an event that confirms there are probably too many actors for a limited number of business opportunities.

It's probably not even a market but more a technology that helps address the limits of “classic” architectures and significantly reduce the cost of storing data. It's not a product either, like a NAS, even if some companies were built on that idea and try to push that approach. Remember Data Domain: as the first company in enterprise-class data deduplication, it was able to build a product from a feature. And it's not an architecture either, like SANs are, where you select components, design a network and finally deliver an orchestrated, connected IT environment.

Object storage vendors have also created their own difficulties by surfing on and leveraging the SDS and commodity approach. The result is that all products are very similar and suffer from a lack of differentiators. Imagine a blind test, like we do for yogurts, and you will be surprised that users can't tell the offerings apart. It's true today: when you ask for S3 access, erasure coding, geo-awareness or a commodity-based design, lots of them can do the work. Is this what we call a maturity plateau? Or another sign of tough times?

In that respect, open source plays its role and has put pressure on commercial offerings. In two words: why pay for software you can download? So differentiators and ecosystem are key. Use cases are probably the last opportunity to survive, potentially OEM deals as well if you find the right partner, as large vendors now offer object storage, which was not the case a few years ago. I invite you to read two articles about that (1 and 2).

Finally, as Veritas is already a leader in several data management segments, market penetration for this product should be pretty straightforward, with the natural extension as one of the messages: up-sell, cross-sell... For sure, for new projects at existing Veritas customers, VCS will be the de facto, default choice, with no need to check elsewhere. For new customers, the battle is on, but as the leader in data management with a huge portfolio effect, the game should be over pretty soon as well; we have already seen that with other vendors. The product should soon skyrocket. Independent small object storage vendors will have some difficulties surviving, especially those with just one product and no real differentiators. Let's start counting…

Sep 22, 2017

DataCore will unveil MaxParallel in a few days

DataCore Software (www.datacore.com), a leader in storage virtualization for... a long time, has discreetly developed MaxParallel™, and the company plans to announce it very soon, I should say during Microsoft Ignite next week, as we're just a few days before the event.

The product is not listed on the DataCore web site, but if you search the Internet you should find a dedicated page about it. Oops, someone probably forgot to hide it. You can even find a datasheet and a solution paper about it on that microsite.

For several quarters, the company has promoted its Parallel I/O approach, illustrating the gains with some famous benchmark results. Lots of details were published on this page.

Now the company opens a new direction not directly related to storage, as it targets applications, I/O scheduling and memory. The gains are pretty good:
  • from 3x to 8x quicker response times,
  • up to 3x more transactions,
  • and from 2x to 10x faster reports, according to users' tests and real-world usage.
The main idea is still “Divide and Conquer”, applied here to SQL queries. In fact, multiple independent queries and updates that run on distinct cores no longer need to wait in a single queue. These queries access data in parallel and reduce access times significantly, while the product maintains the order of arrival for dependent writes and updates. Based on this design, the product can deliver interesting results for workloads that suffer from I/O latencies.
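As a rough analogy, the scheduling idea, independent queries fanned out in parallel while dependent writes keep their arrival order, can be sketched with SQLite and a thread pool. This is a hypothetical illustration of the principle, not DataCore's implementation, which works transparently below the application; the table and query names are invented.

```python
# Sketch of "divide and conquer" query scheduling: independent reads run in
# parallel on a thread pool, dependent writes stay serialized in arrival
# order on a single connection (illustrative only).
import sqlite3
from concurrent.futures import ThreadPoolExecutor

DB = "file:maxpar_demo?mode=memory&cache=shared"
keeper = sqlite3.connect(DB, uri=True)          # keeps the shared DB alive
keeper.execute("CREATE TABLE orders (id INTEGER, amount INTEGER)")
keeper.executemany("INSERT INTO orders VALUES (?, ?)",
                   [(i, i * 10) for i in range(100)])
keeper.commit()

def read_query(sql):
    # One connection per worker: independent reads don't share a queue.
    conn = sqlite3.connect(DB, uri=True)
    try:
        return conn.execute(sql).fetchone()[0]
    finally:
        conn.close()

independent = ["SELECT COUNT(*) FROM orders",
               "SELECT SUM(amount) FROM orders",
               "SELECT MAX(amount) FROM orders"]
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(read_query, independent))

# Dependent writes keep their order of arrival on a single connection.
for sql in ["UPDATE orders SET amount = amount + 1 WHERE id = 0",
            "UPDATE orders SET amount = amount * 2 WHERE id = 0"]:
    keeper.execute(sql)
keeper.commit()
```

Reordering the two dependent updates would change the final value, which is exactly why only independent work is parallelized.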


This new product is a pure software approach that doesn't require SANsymphony, as it resides on the application server. The administrator doesn't need to change anything on the server, and no special configuration is required.

DataCore MaxParallel targets two deployment models: on-premises and cloud. The first supported environment is Microsoft SQL Server, specifically SQL Server 2016, 2014, 2012 and 2008 running on Microsoft Windows Server. It is also available on the Microsoft Azure Marketplace, with prices starting at $0.06/hour as listed on the Azure portal. A super announcement and a great technology approach; we expect to learn even more details next week.



Sep 21, 2017

Atavium, new player in data management

Atavium Inc. is a pretty recent company, founded in 2016 and operating from Minneapolis, Minnesota. The firm was founded by a group of storage veterans with a proven track record, especially some famous stories and exits with NuSpeed, Isilon and Compellent, respectively acquired by Cisco, EMC and Dell:
  • Ed Fiore, CEO, was VP of Storage at Dell and spent time at Compellent, Isilon and NuSpeed.
  • Mark Bakke, COO & VP Engineering, came from Dell, Cisco and NuSpeed.
  • Mike Klemm, CTO, was also at Dell and Compellent.
  • Marc Olin, Chief Architect, also at Dell and Compellent.
We found 12 employees on LinkedIn.

The company raised an $8.65 million Series A in May 2017 from Rally Ventures and Grotech Ventures, with participation from Origin Ventures, Correlation Ventures, Brightstone Venture Capital and G-Bar Ventures.

The idea of the Atavium solution is to provide a platform to control the lifecycle of unstructured data, with organization, classification, automation, management and storage functions wherever data resides, in the cloud or on-premises. The domain and web site exist but, except for the global idea and the team, nothing is really developed on the site.

Sep 20, 2017

Arxscan, old player for a new need

Arxscan (www.arxscan.com), an old player in storage infrastructure management, offers an interesting solution that we categorize as storage resource management (SRM), even if the company claims to provide more than that, with a business-oriented approach.

The company was founded in 2007 by Mark Fitzsimmons and Michael Bowers and is based in Asbury, New Jersey, with less than ten employees.

Arxscan developed the Arxview Datacenter Analytics Engine (DCAE), standalone software that is fully hardware agnostic, supporting all major product lines including Dell EMC, NetApp, IBM, HPE, Hitachi, Oracle, Pure Storage, Violin Memory, VMware, Brocade and Cisco in various configurations of SAN, NAS, DAS, virtual storage and Fibre Channel switching.

The solution shows capacity utilization, trending, reporting, analysis, performance, chargeback, storage/device mapping, power use, line-of-business dependencies and network topologies for end-to-end clarity. Arxview doesn't require any agents or in-band devices and can be deployed in less than an hour.

The product is listed on the HPE and IBM partner portals and is distributed by many resellers, VARs and integrators.

It reminds me of products such as Veritas SANPoint Control or CommandCentral Storage, AppIQ, Onaro SANscreen, Tek-Tools, HighGround, Astrum, WQuinn and some other logical and physical SRM tools. More recent competition seems to be Aptare, EMC Storage Analytics, IBM Spectrum Control Storage Insights or CloudPhysics. I hope to learn more about it soon.

Sep 18, 2017

AeroFS now a Redbooth product

Air Computing Inc., producer of AeroFS (www.aerofs.com), recently merged with Redbooth. The move was announced silently on the companies' blogs with the same post (1, 2), and only Redbooth published a PR about it. We usually collapse the names AeroFS and Air Computing to retain only AeroFS, ultimately the only known and recognized brand.

The new name of the merged company is Redbooth and Yuri Sagalov, co-founder and CEO of AeroFS, is the new CEO of Redbooth, based in Palo Alto, California.

The global idea is pretty simple but ambitious: deliver a leading solution dedicated to project and task management with file-based collaboration. Surprisingly, the AeroFS product is not yet listed on the Redbooth web site; you still have to go to AeroFS.com, a strong name on the market. AeroFS is already under the Redbooth umbrella, showing a new logo with “by Redbooth” added below the classic AeroFS image.

Finally, for respective customers, it’s important to understand the future of product from each company:
  • AeroFS and Redbooth will continue to be supported, developed and maintained.
  • Amium will be shut down on December 15th, 2017, and various functionalities will be merged into Redbooth.
Surprise or not, AeroFS is the latest victim of the pretty tough business climate for such storage approaches when the product is limited to data storage and protection. In fact, it confirms the difficulty for such a business to sustain viable activity without vertical or business integration; almost all the “old” players (Kerstor, Ubistorage, Space Monkey, Symform, Transporter, Tudzu, Wuala) were acquired or ceased activity, and only recent ones still exist. The only exception is AetherStore. We'll see if the newcomers, among them Blockade, Cloudplan, Ugloo, Sia.tech or Storj to name a few, finally meet the same destiny in a few quarters.

Sep 15, 2017

Phazr.IO, Information Dispersal Algorithm for the masses

Phazr.IO (www.phazr.io) develops Information Dispersal Algorithm (IDA) technology for storage, an advanced method to protect and secure data.

Founded in 2016, the company is based in Los Angeles, CA, and was created by four co-founders: Gary Jin, Dr. Donald Chang, Steve K. Chen and Jim Cheung. It currently has 6 people, but only Gary Jin is listed on LinkedIn. The company is in a pre-Series A phase and plans to announce its product around Q1 or Q2 2018.

The web site is pretty limited, but the team has chosen to demonstrate its IDA capability with a cloud storage product named PhazrBits, currently at release 1.1 and available on the Apple App Store for iOS devices. It works as a multi-cloud aggregation gateway that splits, encodes and distributes data such as photos or documents among multiple public cloud storage targets. The product is free, and the supported cloud storage back-ends are Google Drive, Microsoft OneDrive, Dropbox, Box and Apple iCloud.
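The split-encode-distribute flow of such a gateway can be sketched with a minimal n-of-n XOR split, where all n shares are needed to rebuild the data and any smaller subset reveals nothing. This is a deliberately simplified, hypothetical illustration, not Phazr.IO's algorithm: a real IDA uses a k-of-n threshold code, so any k of the n shares are sufficient to reconstruct.

```python
# Toy information dispersal: split a file into n random-looking shares, one
# per cloud back-end, such that all n are needed to rebuild it (n-of-n XOR
# split; a real IDA uses a k-of-n threshold code over a Galois field).
import os

def split(data: bytes, n: int) -> list[bytes]:
    shares = [os.urandom(len(data)) for _ in range(n - 1)]
    last = data
    for s in shares:                   # last = data XOR share1 XOR ... XOR share(n-1)
        last = bytes(a ^ b for a, b in zip(last, s))
    return shares + [last]

def combine(shares: list[bytes]) -> bytes:
    out = shares[0]
    for s in shares[1:]:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out

# One share per back-end (Drive, OneDrive, Dropbox, Box, iCloud): no single
# provider ever holds readable data.
photo = b"holiday photo bytes"
shares = split(photo, n=5)
restored = combine(shares)
```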

They have also started to collaborate with the OpenStack Swift community, and pointers exist on openstack.org. Following this phase, the future product will probably be implemented within network adapters or controller boards to provide offload capabilities, so OEM should be a good go-to-market model.

The IDA approach is a non-systematic erasure code, like the Mojette Transform promoted by Rozo Systems. You can find some entries for libphazr on GitHub, especially around PyEClib and liberasurecode. Phazr.IO is not open source at all, whereas Rozo offers a standard version on GitHub. Tests made by the Phazr.IO team have shown performance 3x to 10x better than Jerasure and 2x to 3x faster than Intel ISA-L, so we expect a pretty innovative approach. The first use case targets high-bandwidth environments with video activity, a perfect candidate with large files. We expect to see more iterations soon.

Sep 14, 2017

Rozo Systems adds DataFrameworks as partner and supports iRods

Rozo Systems (www.rozosystems.com), one of the fastest scale-out NAS products in existence, continues its market penetration with an extension of its ecosystem. DataFrameworks is now officially a partner, and Rozo supports ClarityNow; the company also adds support for iRODS, the pretty famous open source data management software used in various industries and perfectly aligned with the company's strategy.

Aug 21, 2017

Vexata hired Rick Walsworth

Vexata (www.vexata.com), a new player in NVMe flash storage, has been hiring silently; the latest executive recruited is Rick Walsworth, coming from Formation Data Systems where he drove the marketing effort. We also find Ashish Gupta, whom we had met at SpringPath, now in the process of being acquired by Cisco for $320M. Vexata will be one of the interesting new sponsors of the 25th edition of The IT Press Tour in December, and the whole team is looking forward to that session. The company recently published an interesting paper about IBM Spectrum Scale deployed on top of the Vexata 100F NVMe flash array.