Friday, October 29, 2021

Talk with Renen Hallak, Founder and CEO, VAST Data

I recently had the privilege to speak with and interview Renen Hallak, founder and CEO of VAST Data, a young company disrupting the file storage market segment by leveraging SCM, QLC flash and NVMe-oF. This is the 28th interview of the series and, globally, the 77th episode of the French Storage Podcast. We invite you to listen to it below, enjoy.


Wednesday, October 13, 2021

iXsystems and Nextcloud collaborate

iXsystems and Nextcloud, two open source champions, have signed a partnership that formalizes an already converged market reality: thousands of systems are already deployed running the Nextcloud content collaboration platform on TrueNAS.

With this announcement, iXsystems will offer Nextcloud Hub with Files, Groupware, Talk and Collabora Online, and the product can be installed very rapidly thanks to a plug-in available in the TrueNAS GUI. The integration is based on Nextcloud 22 and TrueNAS CORE 12.0-U6. The solution will be fully supported by both companies, and we expect the collaboration suite to be deployed in HA and scale-out configurations.




Thursday, October 07, 2021

AuriStor promotes a new generation of OpenAFS

We met AuriStor for the second time during a recent session of The IT Press Tour, and we measured all the progress made since our first meeting seven years ago, when the company operated under the name Your File System. The team has accomplished and delivered a lot.

Let us refresh our audience on AFS and its derivatives. Historically, the project came from Carnegie Mellon University in Pittsburgh, Pennsylvania, with the idea of sharing data spread across multiple servers with thousands of client machines, say professors' and students' systems. The underlying goal was to avoid a single large file server holding everything, with everyone connected to that central point. Transarc Corporation was rapidly established as the commercial force to promote and sell AFS. IBM then landed on the project, acquiring Transarc in 1994, and finally donated AFS to the open source community in 2000 as OpenAFS.

The AFS philosophy is to share files between producers (data servers) and consumers (client systems) within a campus, or geographically dispersed across a country or even wider, in such a way that data location is completely masked behind a superior or master root directory named /afs. This directory acts as a global virtual namespace, with strong authentication based on Kerberos and data proliferation handled by file replication across systems. A client receives a copy of the requested file on its local storage, which works as a cache copy that is later synchronized back to the server thanks to a subtle callback mechanism. The cache plays its role fully, as subsequent accesses are served from the local copy of the file.
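To illustrate the callback idea, here is a toy model, not AFS code, with all class and method names invented: the server records which clients hold a cached copy of a file and "breaks the callback" when the file changes, so a client can safely serve repeated reads from its local cache until notified.

```python
# Toy model of AFS-style callbacks: the server promises to notify
# clients when a file changes; until then, local cache hits are safe.
# All names here are invented for illustration, not AFS internals.

class FileServer:
    def __init__(self):
        self.files = {}        # path -> content
        self.callbacks = {}    # path -> set of clients holding a promise

    def fetch(self, client, path):
        # The client gets the data plus a callback promise.
        self.callbacks.setdefault(path, set()).add(client)
        return self.files[path]

    def store(self, path, content):
        self.files[path] = content
        # Break every outstanding callback so caches revalidate.
        for client in self.callbacks.pop(path, set()):
            client.break_callback(path)

class Client:
    def __init__(self, server):
        self.server = server
        self.cache = {}        # path -> content, valid while callback holds

    def read(self, path):
        if path not in self.cache:           # miss, or callback was broken
            self.cache[path] = self.server.fetch(self, path)
        return self.cache[path]              # hit: no network round trip

    def break_callback(self, path):
        self.cache.pop(path, None)

server = FileServer()
server.store("/afs/cell/doc.txt", "v1")
c = Client(server)
assert c.read("/afs/cell/doc.txt") == "v1"   # fetched, callback registered
server.store("/afs/cell/doc.txt", "v2")      # server breaks the callback
assert c.read("/afs/cell/doc.txt") == "v2"   # cache refetched transparently
```

The key property this sketch captures is that consistency work happens on writes, which are rare, so the common read path stays entirely local.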


AuriStor extends OpenAFS and developed AuriStorFS as a richer flavor of the product in terms of performance, scalability, resiliency and platform support. They recently signed a partnership with Red Hat. Performance is boosted in several dimensions: on the network side, they reach 8.2GB/s per listener thread while significantly reducing the number of packets exchanged; on the file server side, one AuriStorFS server can replace 60 OpenAFS 1.6 file servers; they also offer better UBIK management and locking mechanisms and, as said, broader platform support including macOS, Linux and even iOS. For more detailed information on AuriStor's improvements, the vendor created a web page you can access here.


AFS globally, and its various distributions OpenAFS and AuriStorFS, target very large, wide environments where other techniques can reach their limits. The pricing philosophy and adoption model are also a major difference. We invite our readers to try the AuriStor flavor to feel the full power of this approach.

You can access below the presentation the team used during the press session.


Tuesday, October 05, 2021

ScaleMP doesn't respond, swallowed by SAP

Navigating the web recently, I discovered that ScaleMP has disappeared: its web site no longer responds, see the image below. So I decided to investigate and found that the company has landed in Germany, swallowed by SAP before the summer, as this Israeli article unveiled.


Also, its founder and long-time leader, Shai Fultheim, now works at Huawei as CTO & Chief Expert, Future Computing Platforms.




Monday, October 04, 2021

Fungible is shaking the data center storage landscape

Fungible, founded in 2015 by Pradeep Sindhu and Bertrand Serlet with more than $310M raised, has a mission: to change the data center, and in particular how storage is deployed and managed at scale. Considering that these environments have hit a wall, the company radically changed the data center approach with a new processor-centric model based on the Data Processing Unit (DPU) it invented. Adding this third element to the infrastructure, beyond the classic CPU and, for some time now, the GPU, brings a new level of performance, reliability and scalability to large data centers. We had the opportunity to meet Fungible during a recent IT Press Tour, where the team shared its vision, product details and strategy.

The vision of the company is summarized by two keywords: disaggregation and composability. Disaggregation means the capability to manage devices and elements at a very fine-grained level and to decouple storage, memory, processors and other components from servers. Another way to picture it is an exploded model where each element is visible individually. Having that list, map, directory and topology of all the entities, a logic is then required to associate, group, unify or pool these elements together. You can build a virtual storage pool if the logic is limited to storage, or virtual machines if the composability embraces CPU, memory, network cards and storage. NVMe-oF is a fundamental method to achieve this.
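Composition over a disaggregated inventory can be sketched in a few lines. This is purely illustrative, not Fungible's API: a free-list of elements per resource kind, and a function that reserves and binds them into one logical server.

```python
# Sketch of composability over a disaggregated inventory: pick free
# CPU, memory, NIC and storage elements and bind them into one
# logical server. Illustrative only; not Fungible's actual API.

inventory = {
    "cpu":     [{"id": f"cpu{i}",  "free": True} for i in range(4)],
    "memory":  [{"id": f"mem{i}",  "free": True} for i in range(8)],
    "nic":     [{"id": f"nic{i}",  "free": True} for i in range(2)],
    "storage": [{"id": f"nvme{i}", "free": True} for i in range(24)],
}

def compose(requirements):
    """Reserve the requested number of free elements per kind
    and return the resulting binding (the 'composed' server)."""
    binding = {}
    for kind, count in requirements.items():
        free = [e for e in inventory[kind] if e["free"]][:count]
        if len(free) < count:
            raise RuntimeError(f"not enough free {kind}")
        for e in free:
            e["free"] = False          # mark as allocated
        binding[kind] = [e["id"] for e in free]
    return binding

vm = compose({"cpu": 2, "memory": 4, "storage": 3})
print(vm["storage"])   # ['nvme0', 'nvme1', 'nvme2']
```

The real system of course adds the data path (NVMe-oF), failure handling and policy, but the core idea is exactly this: a global inventory plus a binding logic instead of fixed per-server resources.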

For Fungible, this new data center model is driven by the DPU and its capability to mask complexity and boost manageability, performance and reliability, fueled by NVMe over TCP.

The first product released by Fungible was the storage cluster named FS1600, announced a few months ago. It is a 2U chassis split into two internal zones, each with its own DPU (a Fungible F1), Ethernet ports, memory, SSDs and power supply. It embeds 24 hot-pluggable NVMe SSDs for a total capacity of 70TB and delivers 13M IOPS, 120𝝻s of latency and 75GB/s. In terms of data management functions embedded in the DPU, we find thin provisioning, erasure coding, replication, snapshots and clones, encryption, compression and QoS. All deployed FS1600s are controlled and managed by the Fungible Composer, acting as the control plane.

This storage array leverages the enterprise network already widely deployed, i.e. TCP/IP over Ethernet. The gain is immediate: a reliable, fast network is already in place, without the need for a dedicated network and technology such as InfiniBand or Fibre Channel. Using TCP to exchange NVMe commands makes the topology easier, presenting network drives as local ones thanks to NVMe.
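To give an idea of how simple the client side is with NVMe/TCP: on a stock Linux host, the standard nvme-cli tooling is enough to attach a remote volume as a local block device. The IP address and NQN below are placeholders, and the actual FS1600 target details will differ.

```shell
# Load the NVMe/TCP initiator module (in-box in recent Linux kernels).
modprobe nvme-tcp

# Discover the subsystems exposed by the target (address is a placeholder).
nvme discover -t tcp -a 192.0.2.10 -s 4420

# Connect to one subsystem; this NQN is illustrative only.
nvme connect -t tcp -a 192.0.2.10 -s 4420 -n nqn.2021-10.io.example:vol0

# The remote volume now appears as a local /dev/nvmeXnY block device.
nvme list
```

No HBA, no zoning, no dedicated fabric: the same Ethernet that carries the rest of the traffic carries the storage path.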

Beyond the array, Fungible is accelerating its strategy and just announced a new component in the infrastructure: a storage initiator card equipped with a Fungible S1 DPU. This PCIe Gen 4 card is available in 50, 100 or 200GbE versions, delivering 2.3M IOPS per client. It offloads demanding tasks from the CPU, such as compression and encryption, performed in-line by the card.

In this domain, Fungible clearly wishes to drive the market initiative, promoting a real data center model for the future, starting today.


Friday, October 01, 2021

Nexustorage brings an innovative SDS approach

We had the privilege to meet Nexustorage during the 40th IT Press Tour and discovered a very interesting Software-Defined Storage approach. I discovered this company several months ago when I spoke with its CEO and founder, Glen Olsen, and we waited together for the best moment to meet. We reached that point, and we're honored that Glen picked the tour to unveil his company, its mission, product and technology to the world.

The company, officially launched by Glen in February 2021, is located in Paraparaumu, a bit north of Wellington in New Zealand. Before that adventure, he spent some time at Caringo, PSInet, Logica CMG and EDS to name a few.

Nexustorage develops a universal data lifecycle management solution based on an intelligent storage layer named Nexfs that spans file, block and object storage. The team is finally ready to unveil the third publicly available release of Nexfs.


The central concept is the storage pool, which glues together diverse storage entities deployed on commodity hardware and organizes them hierarchically to receive data based on access and data placement rules. Nexfs re-exposes this combined layer via iSCSI, NFS, SMB and S3, and can therefore be qualified as a unified storage product.


The Nexfs layer relies on a multi-tier storage pool that hides data movement and data location within the pool. The solution runs on existing hardware and can be coupled with new hardware; globally, it represents a way to protect investments and bring new life to existing servers and storage units.

This new scalable file system, a software-only solution, is available free of charge as a community edition and is validated on CentOS, Ubuntu and Debian Linux. Tier-2 file systems require xfs or ext4 with extended attributes.

The team has developed several key technology elements - SmartProtect, SmartTier, SmartClone and Nexassert - to enable this transparent optimized data placement logic:
  • SmartProtect is a method to provide data protection by replicating from primary to secondary S3-compatible storage. The feature offers fine granularity, generating a file copy each time a data chunk changes.
  • SmartTier is the perfect companion to SmartProtect but can also run independently. This feature enables the movement of data chunks to secondary storage within a pool built and controlled by Nexfs. Supporting three tiers in this first release (SSD, SATA and object storage), the data placement engine moves data back and forth between tiers. This data management layer masks the physical location of data blocks, with the beauty of having some of them residing on fast storage and others on slow ones, i.e. infrequently accessed data on SATA and cold data on object or cloud storage. In other words, the granularity is at sub-file level, what the Nexustorage team calls a chunk, and chunks can have different sizes. Frequently accessed chunks are kept on fast storage, delivering the needed QoS with the right cost alignment across tiers. When a migrated file is accessed, the entire file is not copied back, only the needed chunk.
  • SmartClone is the capability to create redundant child gold-image files on tier 3.
  • Nexassert, under patent review, is the name of the technology, also developed by Nexustorage, to access data chunks where they reside. Thus, once the solution is in use, normal operation shows that files are no longer homogeneously stored on a single storage technology: their data chunks are distributed among SSD, HDD and object/cloud storage based on access needs and frequency.
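The sub-file tiering described above can be sketched as a simple placement policy. The chunk counts, tier names and thresholds below are invented for illustration; this is not Nexustorage code, just the shape of the idea.

```python
# Minimal sketch of sub-file (chunk) tiering in the spirit of SmartTier.
# Tier names and thresholds are invented for illustration only.

TIERS = ["ssd", "sata", "object"]   # tier 1..3, fastest to slowest

def place(chunk_access_count, hot=100, warm=10):
    """Pick a tier based on how often a chunk was accessed recently."""
    if chunk_access_count >= hot:
        return "ssd"
    if chunk_access_count >= warm:
        return "sata"
    return "object"

# A file is split into chunks; each chunk migrates independently, so a
# read of one hot chunk never recalls the whole file from cold storage.
file_chunks = {0: 250, 1: 42, 2: 3, 3: 0}   # chunk index -> access count
layout = {idx: place(count) for idx, count in file_chunks.items()}
print(layout)   # {0: 'ssd', 1: 'sata', 2: 'object', 3: 'object'}
```

The point the sketch makes is the one the article stresses: a single file ends up spread across all three tiers, with only its hot chunks consuming fast, expensive storage.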
Exposing file and object interfaces is now pretty common, but exposing a block interface, here iSCSI, built on object storage creates some huge constraints on the storage path. The team has done great development work here and supports Amazon S3, Cloudian and MinIO as back-ends. You can boot and run Microsoft Windows and Linux servers that reside in cloud and object storage; Glen did a demo and booted a Windows VM from a VMDK stored on Nexfs.

For the future, Glen told us that his team plans to release SmartBackup, which should work a bit like NetApp SnapDiff; SmartGuard, to address the ransomware challenge (I anticipate some immutability here); and SmartIntegrityAssure. Nexustorage will also introduce a horizontal model to support multi-node clusters.

On the file system layer, Nexfs reminds us of what Veritas can do with VxFS and its Dynamic Storage Tiering feature, layered on multiple volumes, each with its own characteristics. VxVM creates a meta-volume called a volume set that encapsulates each volume. In that case, the file path doesn't change at all, only the block map changes, but the entire file is managed, i.e. the granularity is the file itself, not sub-file as with Nexfs. We also remember Enmotus, which developed block-based tiering, but it seems the company has now ceased operations.

Nexfs is available via 2 components:
Finally, I invite you to check out below the presentation Glen used during the session.
