
Wednesday, October 13, 2021

iXsystems and Nextcloud collaborate

iXsystems and Nextcloud, two open source champions, have signed a partnership that formalizes an already converged market: thousands of systems are in fact already deployed with the Nextcloud content collaboration platform running on TrueNAS.

With this announcement, iXsystems will offer Nextcloud Hub with Files, Groupware, Talk and Collabora Online, and the product can be installed very rapidly thanks to a plug-in available in the TrueNAS GUI. This plug-in is based on Nextcloud 22 and TrueNAS CORE 12.0-U6. The solution will be fully supported by both companies, and we expect the collaboration suite to be deployed on HA and scale-out configurations.




Thursday, October 07, 2021

AuriStor promotes a new generation of OpenAFS

We met AuriStor for the second time during a recent session of The IT Press Tour and could measure all the progress the company has made since our first meeting 7 years ago, when it operated under the name Your File System. The team has accomplished and delivered a lot.

Let us refresh our audience once again about AFS and its derivatives. Historically, the project came from Carnegie Mellon University in Pittsburgh, Pennsylvania, to address the need to share data residing on multiple servers with thousands of client machines, say professors' and students' systems. The underlying idea was to avoid one large central file server holding everything, with all users connected to that single point. Transarc Corporation was rapidly established as the commercial force to promote and sell AFS. Finally, IBM landed on the project, acquiring Transarc in 1994, and donated AFS to the open source community in 2000 as OpenAFS.

The AFS philosophy is to share files between producers - data servers - and consumers - client systems - within a campus, or dispersed across a country or even wider, in a way that completely masks data location behind a superior or master root directory named /afs. This directory works as a global virtual namespace, with strong authentication based on Kerberos and data proliferation handled by file replication across systems. Clients receive a copy of the requested file on their local storage, working as a cache copy that is later synchronized back with the server thanks to a subtle callback mechanism. The cache plays its role as subsequent accesses are simply honored via the local presence of the file.
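To make the callback idea concrete, here is a minimal, purely illustrative Python sketch of a client cache invalidated by server callbacks; all names are invented for the example, and this is in no way OpenAFS or AuriStorFS code.

```python
# Toy sketch of the AFS-style callback idea: the server hands out a
# "callback promise" with each cached copy and breaks it when the file
# changes, so clients know their local copy is stale.

class FileServer:
    def __init__(self):
        self.files = {}        # path -> content
        self.callbacks = {}    # path -> set of clients holding a promise

    def fetch(self, client, path):
        self.callbacks.setdefault(path, set()).add(client)
        return self.files[path]

    def store(self, path, content):
        self.files[path] = content
        # Break callbacks: every caching client is told its copy is invalid.
        for client in self.callbacks.pop(path, set()):
            client.break_callback(path)

class Client:
    def __init__(self, server):
        self.server, self.cache = server, {}

    def read(self, path):
        if path not in self.cache:      # cache miss or broken callback
            self.cache[path] = self.server.fetch(self, path)
        return self.cache[path]         # subsequent reads stay local

    def break_callback(self, path):
        self.cache.pop(path, None)

server = FileServer()
server.files["/afs/example.org/doc.txt"] = "v1"
c = Client(server)
print(c.read("/afs/example.org/doc.txt"))   # fetched and cached
server.store("/afs/example.org/doc.txt", "v2")
print(c.read("/afs/example.org/doc.txt"))   # callback broken, re-fetched
```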


AuriStor extends OpenAFS and has developed AuriStorFS as a richer flavor of the product in performance, scalability, resiliency and platform support. The company recently signed with Red Hat. Performance is boosted in several dimensions: on the network side, throughput reaches 8.2GB/s per listener thread while significantly reducing the number of packets exchanged; on the file server side, 1 AuriStorFS server can replace 60 OpenAFS 1.6 file servers; add to that better UBIK management, an improved locking mechanism and, as said, broader platform support such as macOS, Linux and even iOS. For more detail on what AuriStor has improved, the vendor maintains a web page you can access here.


AFS globally, in its various distributions, OpenAFS and AuriStorFS, targets very large environments where other techniques can reach their limits. The pricing philosophy and adoption model are also a major difference. We invite our readers to try the AuriStor flavor to feel the full power of this approach.

You can access below the presentation the team shared with the press.


Tuesday, October 05, 2021

ScaleMP doesn't respond, swallowed by SAP

Navigating the web recently, I discovered that ScaleMP has disappeared: its web site no longer responds, see the image below. So I investigated and found that the company has landed in Germany, swallowed by SAP before the summer, as this Israeli article unveiled.


Also, its founder and long-time leader, Shai Fultheim, now works at Huawei as CTO & Chief Expert, Future Computing Platforms.




Monday, October 04, 2021

Fungible is shaking up the data center storage landscape

Fungible, founded in 2015 by Pradeep Sindhu and Bertrand Serlet with more than $310M raised, has a mission: to change the data center and in particular how storage is deployed and managed at scale. Considering that these environments have hit a wall, the company radically changed the data center approach with a new processor-centric model based on the Data Processing Unit (aka DPU) it invented. Having this third element in the infrastructure, beyond the classic CPU and, for some time now, the GPU, brings a new level of performance, reliability and scalability to large data centers. We had the opportunity to meet Fungible during a recent IT Press Tour where the team shared its vision, product details and strategy.

The vision of the company is summarized by 2 keywords: disaggregation and composability. Disaggregation means the capability to manage devices and elements at a very fine-grained level and to decouple storage, memory, processors and other components from servers. Another way to present it is to think of an exploded model where every element is visible. With that list, map, directory and topology of all the entities, a logic is then required to associate, group, unify or pool these elements together. You can build a virtual storage pool if the logic is limited to storage, or virtual machines if the composability embraces CPU, memory, network cards and storage. For such an approach, NVMe-oF is a fundamental method to achieve this.
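To picture the composability side, here is a minimal Python sketch under invented names: an inventory of disaggregated resources and a function that groups some of them into a logical server, with NVMe drives assumed to be attached remotely over NVMe-oF. This is not the Fungible Composer API, just an illustration of the concept.

```python
# Conceptual sketch: disaggregated inventory + composition of a logical server.
from dataclasses import dataclass, field

@dataclass
class Resource:
    kind: str        # "cpu", "memory", "nvme", "nic"
    location: str    # which chassis exposes it, e.g. reachable over NVMe-oF
    free: bool = True

@dataclass
class Inventory:
    resources: list = field(default_factory=list)

    def claim(self, kind, count):
        picked = [r for r in self.resources if r.kind == kind and r.free][:count]
        if len(picked) < count:
            raise RuntimeError(f"not enough free {kind}")
        for r in picked:
            r.free = False
        return picked

def compose_virtual_server(inv, cpus=2, nvme_drives=4):
    """Group disaggregated elements into one logical system."""
    return {
        "cpu": inv.claim("cpu", cpus),
        "nvme": inv.claim("nvme", nvme_drives),  # attached remotely via NVMe-oF
        "nic": inv.claim("nic", 1),
    }

inv = Inventory(
    [Resource("cpu", f"node-{i}") for i in range(8)]
    + [Resource("nvme", f"jbof-{i // 12}") for i in range(24)]
    + [Resource("nic", f"node-{i}") for i in range(8)]
)
vm = compose_virtual_server(inv)
print({k: len(v) for k, v in vm.items()})  # {'cpu': 2, 'nvme': 4, 'nic': 1}
```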

For Fungible, this new data center model is driven by the DPU and its capability to mask complexity and boost manageability, performance and reliability, fueled by NVMe associated with TCP.

The first product released by Fungible was the storage cluster named FS1600, announced a few months ago. It is a 2U chassis split into 2 internal zones, each with its own DPU - a Fungible F1 -, Ethernet ports, memory, SSDs and power supply. It embeds 24 hot-pluggable NVMe SSDs for a total capacity of 70TB and delivers 13M IOPS, 120µs of latency and 75GB/s. In terms of data management functions embedded in the DPU, we find thin provisioning, erasure coding, replication, snapshots and clones, encryption, compression and QoS. All deployed FS1600s are controlled and managed by Fungible Composer acting as the control plane.
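As a side note on one of those functions, here is a toy single-parity example in Python that only illustrates the general idea behind erasure coding (lose one chunk, rebuild it from the survivors); it is in no way Fungible's implementation, which runs inside the DPU.

```python
# Toy single-parity illustration of the erasure coding idea.
from functools import reduce

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def make_parity(chunks):
    """Compute one parity chunk over equally sized data chunks."""
    return reduce(xor_bytes, chunks)

def recover(chunks, parity):
    """Rebuild at most one missing chunk (None) from survivors + parity."""
    missing = [i for i, c in enumerate(chunks) if c is None]
    assert len(missing) <= 1, "single parity tolerates only one lost chunk"
    if missing:
        survivors = [c for c in chunks if c is not None]
        chunks[missing[0]] = reduce(xor_bytes, survivors, parity)
    return chunks

data = [b"AAAA", b"BBBB", b"CCCC"]
parity = make_parity(data)
print(recover([b"AAAA", None, b"CCCC"], parity))  # the lost b"BBBB" is rebuilt
```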

This storage array leverages the enterprise network already widely deployed, i.e. TCP/IP over Ethernet. The gain is immediate: a reliable, fast network is already in place, with no need for a dedicated network and technology such as InfiniBand or Fibre Channel. Using TCP to exchange NVMe messages simplifies the topology, presenting network drives as if they were local ones thanks to NVMe.

Beyond the array, Fungible accelerates its strategy and has just announced a new infrastructure component: a storage initiator card equipped with a Fungible S1 DPU. This PCIe Gen 4 card is available in 50, 100 or 200GbE versions and delivers 2.3M IOPS per client. It offloads demanding tasks from the CPU, such as compression and encryption, which are performed in-line on the card.

In this domain, Fungible clearly wishes to drive the market initiative, promoting a real data center model for the future, starting today.


Friday, October 01, 2021

Nexustorage brings an innovative SDS approach

We had the privilege of meeting Nexustorage during the 40th IT Press Tour and discovered a very interesting Software-Defined Storage approach. I first came across the company several months ago when I spoke with its CEO and founder, Glen Olsen, and together we waited for the best moment to meet. We have now reached that point, and we're honored that Glen picked the tour to unveil his company, mission, product and technology to the world.

The company, officially launched by Glen in February 2021, is located in Paraparaumu, a bit north of Wellington in New Zealand. Before that adventure, he spent some time at Caringo, PSInet, Logica CMG and EDS to name a few.

Nexustorage develops a universal data lifecycle management solution based on an intelligent storage layer named Nexfs that spans file, block and object storage. The team is finally ready to unveil its third publicly available release of Nexfs.


The central concept is the storage pool, which glues together diverse storage entities deployed on commodity hardware and organizes them hierarchically to receive data based on access and data placement rules. Nexfs re-exposes this combined layer via iSCSI, NFS, SMB and S3 and can therefore be qualified as a unified storage product.


The Nexfs layer relies on a multi-tier storage pool that hides data movement and data location within the pool. The solution runs on existing hardware and can be coupled with new hardware; globally, it represents a way to protect investment and bring new life to existing servers and storage units.

This new scalable file system, a software-only solution, is available free of charge in a community edition and is validated on CentOS, Ubuntu and Debian Linux. XFS and ext4 disk filesystems with extended attributes are required for tier-2 file systems.

The team has developed several key technology elements - SmartProtect, SmartTier, SmartClone and Nexassert - to enable this transparent optimized data placement logic:
  • SmartProtect provides data protection by replicating data from the primary to the secondary S3-compatible storage. The feature is granular enough to refresh the file copy each time a data chunk changes.
  • SmartTier is the perfect companion to SmartProtect but can also run independently. This feature moves data chunks to secondary storage within a pool built and controlled by Nexfs. Supporting 3 tiers in this first release - SSD, SATA and object storage - the data placement engine moves data back and forth between tiers. This data management layer masks the physical location of data blocks, with the beauty that some of them reside on fast storage and others on slower ones, i.e. infrequently accessed data on SATA and cold data on object or cloud storage. In other words, the granularity is at sub-file level, what the Nexustorage team calls a chunk, and a chunk can have different sizes. Frequently accessed chunks are kept on fast storage, delivering the needed QoS with the right cost alignment across all tiers. When an access to a migrated file is initiated, the entire file is not copied back, only the needed chunk (see the sketch after this list).
  • SmartClone is the capability to create redundant child file gold images on tier 3.
  • Nexassert, under patent review, is the name of the technology, also developed by Nexustorage, that accesses data chunks where they reside. Thus, in normal operation, files are not homogeneously stored on a single storage technology; their data chunks are distributed among SSD, HDD and object/cloud storage based on their access patterns and frequency needs.
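To illustrate the sub-file granularity described above, here is a small Python sketch of chunk-level tiering: chunks are promoted on access and demoted as they cool off, and only the requested chunk is ever recalled. All names, tier labels and thresholds are invented; this is not Nexfs code.

```python
# Illustrative sketch of sub-file chunk tiering in the spirit of SmartTier.
import time

TIERS = ["ssd", "sata", "object"]          # tier 1, 2, 3

class Chunk:
    def __init__(self, offset, size):
        self.offset, self.size = offset, size
        self.tier = "ssd"                  # new data lands on the fast tier
        self.last_access = time.time()

    def read(self):
        # Only this chunk is recalled, never the whole file.
        if self.tier != "ssd":
            self.tier = "ssd"              # promote on access
        self.last_access = time.time()

class TieredFile:
    def __init__(self, size, chunk_size=4 * 1024 * 1024):
        self.chunks = [Chunk(o, min(chunk_size, size - o))
                       for o in range(0, size, chunk_size)]

    def read(self, offset, length):
        for c in self.chunks:
            if c.offset < offset + length and offset < c.offset + c.size:
                c.read()

    def demote_cold(self, ssd_ttl=3600, sata_ttl=86400):
        """Move chunks down the tiers as they cool off."""
        now = time.time()
        for c in self.chunks:
            age = now - c.last_access
            if c.tier == "ssd" and age > ssd_ttl:
                c.tier = "sata"
            elif c.tier == "sata" and age > sata_ttl:
                c.tier = "object"
```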
Exposing file and object interfaces is now pretty common, but exposing a block interface, here iSCSI, built on object storage creates some serious constraints on the storage path. The team has done great development work here and supports Amazon S3, Cloudian and MinIO as backends. You can boot and run Microsoft Windows and Linux servers that reside in cloud and object storage; Glen did a demo and booted a Windows VM from a VMDK stored on Nexfs.

For the future, Glen told us that his team plans to release SmartBackup, which should work a bit like NetApp SnapDiff, SmartGuard to address the ransomware challenge (I anticipate some immutability here), and SmartIntegrityAssure. Nexustorage will also introduce a horizontal model to support multi-node clusters.

On the file system layer, Nexfs is reminiscent of what Veritas can do with VxFS and its Dynamic Storage Tiering feature layered on multiple volumes, each with its own characteristics. VxVM creates a meta-volume called a volume set that encapsulates each volume. In that case, the file path doesn't change at all, only the block map changes, but the entire file is managed, i.e. the granularity is the file itself, not a sub-file as it is with Nexfs. We also remember Enmotus, which developed block-based tiering, but it seems that the company has now ceased operations.

Nexfs is available via 2 components:
Finally, I invite you to check below the presentation used by Glen during that session.


Thursday, August 26, 2021

Panasas announced PanFS 9

Panasas, a historic player in HPC Storage with its parallel file system, today unveils the 9th major release of PanFS.

With its ActiveStor Ultra product line, the company continues to address adjacent needs beyond HPC, notably AI/ML, another very demanding environment.

In this release, the team has added new security features such as file labeling for Security-Enhanced Linux and support for hardware-based encryption. The approach works at 2 levels: in the logical space, access control uses the file-labeling capabilities, and at the physical layer, the storage engine leverages AES-256 to guarantee automatic, instant protection of data at rest. On the key management side, the firm partners with leading solutions.
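As a purely conceptual illustration of AES-256 protection of data at rest, here is a minimal Python sketch using the cryptography package with AES-256-GCM; Panasas's implementation is hardware-based and tied to external key management, so this is not their code path, just the general idea.

```python
# Minimal sketch of AES-256-GCM encryption of a storage block at rest.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # in practice, comes from a KMS
aesgcm = AESGCM(key)

def encrypt_block(plaintext: bytes, block_id: bytes):
    """Encrypt one block; the block id is bound as associated data."""
    nonce = os.urandom(12)                  # unique nonce per encryption
    return nonce, aesgcm.encrypt(nonce, plaintext, block_id)

def decrypt_block(nonce: bytes, ciphertext: bytes, block_id: bytes) -> bytes:
    return aesgcm.decrypt(nonce, ciphertext, block_id)

nonce, ct = encrypt_block(b"sensitive payload", b"block-0001")
assert decrypt_block(nonce, ct, b"block-0001") == b"sensitive payload"
```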

Despite real technology expertise and industry recognition, the company continues to suffer from a lack of visibility outside expert circles and has lost some opportunities against new file storage players, both more classic ones and those in the same category. We expect to meet Panasas again during a future edition of The IT Press Tour and look forward to seeing the company at SC in St. Louis in November.


Friday, August 20, 2021

Kalray pre-announced its Flashbox

Kalray, a French leader in a new generation of processors for intensive data processing, seeded the industry landscape with a joint announcement with Viking Enterprise Solutions, a division of Sanmina. Together they design, develop, market and promote FLASHBOX, a new generation of disaggregated NVMe flash array. The solution is based on the Viking VDS2249R, announced before the summer, a 2U chassis with 24 x 2.5" NVMe drives coupled to 2 or 6 100GbE ports and operating Kalray's Smart Storage Acceleration Card based on its MPPA DPU. The solution is presented as a super fast storage array delivering 15M IOPS at less than 10µs of latency.

This system is directly in competition with the Fungible FS1600s.

As the press release stated, the official launch is planned for end of September.


Wednesday, August 18, 2021

DDN launches Exa6

DDN, the #1 private storage company, recently unveiled EXAScaler 6, the new major release of EXAScaler.

EXA6 runs on the new EXAScaler Management Framework aka EMF.

This is a new iteration of the Lustre-based parallel file system with several new features and developments. Among them, the engineering team adds full support for NVIDIA Magnum IO GPUDirect Storage, online upgrades (a must, since restarting such a system is not an option), new tiering capabilities, API integration for external partner tools, and Hot Nodes for client-side persistence to boost local access and processing. With AI and its multiple read phases, keeping data locally on the GPU-based nodes and their NVMe storage generates a significant performance boost for the application, drastically reducing time to result. At every release iteration, EXAScaler makes serious progress with key enhancements and new features, especially in data management, confirming that the product is a real foundation for AI, Analytics/Big Data and of course HPC.

As I wrote again recently, AI represents the new battlefield for file systems and one of the most stressful workloads, distinct from transactional, media and HPC.

Coldago Research confirms the positioning of DDN in its recent 2020 File and Object Storage Maps and in its June 2021 Storage Unicorn Note.

We'll learn more about EXA6 and other key DDN developments and directions during the next IT Press Tour in October.


Friday, August 13, 2021

AI, the new battlefield for file systems

I originally wrote this article for StorageNewsletter, published April 16th, 2021.

The recent Nvidia GTC 2021 conference was once again an opportunity for storage vendors to refresh, update and promote their storage offerings for AI, aligned with new product announcements from the GPU giant.

Historically, HPC was essentially deployed at research centers, universities and some scientific/technical sites for very specific needs. Some vendors have tried to promote HPC into enterprises, and some storage players followed that direction, polishing the space and their products with new designs. Essentially covered by parallel file systems, this effort anticipated a much larger adoption of systems dedicated to AI, ML and deep learning specifically. I notice a sort of convergence between HPC and AI in terms of needs, requirements and vendors' solutions.

As said, AI brings and extends HPC to the enterprise with some similarities but of course differences, and really shakes up some historical storage approaches as applications are highly demanding. AI presents new IO patterns with a need for high bandwidth and low latency, with a mix of file sizes and file access patterns (small and large, random and sequential), but read operations clearly dominate the IO interaction with storage. These operations have a direct impact on the training phase, which can be limited by the data reading rate and of course by multiple re-reads. Memory and storage hierarchy play a fundamental role here. As a general idea, bringing parallelism at every stage is an answer, illustrated by the famous "Divide and Conquer" mantra, to deliver linear scalability. In other words, performance is the dominant factor and a must-have requirement.
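As a trivial illustration of that "Divide and Conquer" idea on the storage side, here is a minimal Python sketch that shards a read-only dataset across a thread pool; the paths and worker counts are invented for the example, and real training pipelines obviously do much more.

```python
# Sketch of a read-dominated, parallel input pipeline: shard the file list
# and read shards concurrently; the same dataset is re-read at every epoch.
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def read_sample(path: Path) -> bytes:
    return path.read_bytes()            # each read is independent

def parallel_epoch(dataset_dir: str, workers: int = 16) -> int:
    files = sorted(Path(dataset_dir).glob("*.npy"))
    total = 0
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for blob in pool.map(read_sample, files):
            total += len(blob)          # stand-in for feeding the GPU
    return total

if __name__ == "__main__":
    print(parallel_epoch("/mnt/dataset"))  # hypothetical mount point
```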

On the network side, IB, RoCE and Ethernet (multiple 100Gb/s or 200Gb/s ports are pretty classic here), configured as non-blocking networks, are widely deployed, with some vendors offering the capability to group interfaces.


I tried to summarize the file storage solutions I see coupled with Nvidia DGX systems, POD and SuperPOD. I see essentially 2 file storage families here, NAS and parallel file systems, all based on NVMe to satisfy performance requirements. At the limit, HDD could be considered as a tier-2 behind these, but tiering has to be managed carefully to avoid any impact on data access.

To boost IOs, Nvidia introduced GPUDirect Storage to bypass the CPU path in data exchanges and deliver data to the GPU faster. This feature is exposed via the Magnum IO API superset.

For NAS, I don't mean a classic file server but rather a super fast scalable NAS such as Pure Storage FlashBlade with AIRI, or VAST Data Universal Storage and its specific LightSpeed flavor leveraging NFS over RDMA. Some of them develop their own internal architecture, like VAST Data with a shared-everything model or Pure Storage with a specific hardware-based shared-nothing approach, all of these playing in the scale-out NAS area. In some NFS-based configurations I also see NetApp AFF A-Series and Dell PowerScale, also a shared-nothing model, and all these players use the nconnect (maximum is 16) NFS mount option to gain some parallelism from the NFS farm.

In the other category, parallel file systems are present as they're aligned by design with the parallelism needs of AI workloads. It's important to keep in mind that this model is intrusive, with an agent or a piece of software installed on the DGX side. These offerings are represented by the DDN AI400X embedding EXAScaler based on Lustre, WekaIO with Weka AI based on WekaFS, IBM Spectrum Scale, and BeeGFS promoted by NetApp, among others. As I wrote recently, I'm still surprised that HPE, with Cray coupled with Lustre and WekaIO also in its catalog for AI, picked IBM Spectrum Scale claiming it needs it for AI. And if you check the Weka AI reference architecture below, you will see some HPE ProLiant servers in the configuration, and this HPE white paper continues to illustrate the fuzzy HPE strategy. Also, as said, DDN leverages Lustre with EXAScaler within the AI400X especially for AI. As an example, an AI400X delivers 50GB/s in read and 3 million IOPS, and DDN recommends one appliance for every 4 DGX. Linear scalability was demonstrated with 10 AI400X coupled to 16 DGX, delivering 500GB/s in read. But it's almost a philosophical or even religious decision; at least what is good is the wide choice users can consider.


Clearly, AI sets a new level of reference architecture (RA), and all the vendors listed above have published RAs for DGX, some with POD, and it seems that DDN is the only one validated for SuperPOD. You can visit this Nvidia page listing DDN, Dell, IBM, NetApp, Pure Storage, VAST Data and WekaIO:
Of course, other vendors outside of this official Nvidia list have published RAs as well; I can list Pavilion Data or HPE with WekaIO, for instance.

I also see the multi-protocol aspect of these storage systems as an attractive attribute, like the possibility to expose the same namespace via S3 and NFS without the need to copy or duplicate content. It could be used, for instance, for remote data ingest via S3 from dispersed IoT sensors, with the accumulated data then processed "locally" via NFS, as the small sketch below illustrates.
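A minimal sketch of that ingest-then-process pattern, where the bucket name, endpoint and mount point are assumptions for the example and not tied to any specific product API:

```python
# S3 ingest on one side, NFS processing of the same namespace on the other.
import boto3
from pathlib import Path

# Remote side: an IoT sensor ingests a reading via the S3 endpoint.
s3 = boto3.client("s3", endpoint_url="https://storage.example.com")  # hypothetical endpoint
s3.put_object(Bucket="sensors", Key="site-42/2021-10-01.csv",
              Body=b"timestamp,temp\n1633046400,21.3\n")

# Local side: the same object appears in the NFS view of the namespace,
# no copy or duplication needed (the path layout is an assumption).
local = Path("/mnt/unified/sensors/site-42/2021-10-01.csv")
print(local.read_text())
```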

The term U3 (Unified, Universal and Ubiquitous) that I introduced some time ago could also be used to qualify these offerings.

Definitely AI is the new battlefield for file systems and innovation runs fast in that domain.