Friday, June 07, 2019

WekaIO breaks a new record

Benchmark results are key in technical computing, and especially in HPC. Essentially two flavors of benchmark exist: those designed to break records, usually at a very high price that only a few entities can afford, and those run on users' hardware to validate performance thresholds on more common deployments.

We met WekaIO at their new HQ three days ago with The IT Press Tour and, as usual, got a very good session. Since our visit to their R&D center in Tel Aviv in November 2016, almost three years ago, the company has made lots of progress.

WekaIO has chosen to regularly publish various performance benchmarks on different hardware platforms, such as SPECsfs, and more recently ran the STAC-M3 Antuco and Kanaga tests on FrostByte and Relion systems made by Penguin Computing.

This test uses the Kx time-series distributed database kdb+ 3.6 deployed across 7 Relion servers, with data stored on FrostByte storage (9 NVMe SSDs per 1U server), a Mellanox switch and adapters, and WekaIO MatrixFS 3.2.2. The result is impressive: 37.5GB/s and 2.5M 4K IOPS.
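As a quick sanity check on those figures, here is a back-of-the-envelope sketch in Python. The per-client split assumes the load is spread evenly across the 7 kdb+ nodes, which is our assumption and not a published per-node number.

```python
# Back-of-the-envelope math on the published STAC-M3 figures.
# Assumption: load evenly spread across the 7 kdb+ clients.

aggregate_bw_gbps = 37.5   # GB/s reported across the cluster
iops_4k = 2.5e6            # 4K random IOPS reported
kdb_clients = 7            # Relion servers running kdb+ 3.6

# Per-client large-block bandwidth under the even-spread assumption.
per_client_bw = aggregate_bw_gbps / kdb_clients
print(f"~{per_client_bw:.1f} GB/s per kdb+ client")

# Bandwidth implied by the small-block result: 2.5M x 4KiB reads per second.
small_block_bw = iops_4k * 4096 / 1e9
print(f"~{small_block_bw:.1f} GB/s equivalent at 4KiB")
```

That works out to roughly 5.4GB/s per kdb+ node and about 10GB/s of equivalent small-block bandwidth.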

MatrixFS has a new design addressing limitations of "classic" distributed file systems such as Lustre and Spectrum Scale. It is what we call a Symmetric Parallel Distributed File System: Symmetric means that the metadata service can be delivered by any server in the cluster, Parallel means that clients chunk and stripe file data across multiple data servers thanks to the agent loaded on each client system, and Distributed refers to the file system layout spanning all data servers.
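To make the "Parallel" part concrete, here is a minimal Python sketch of how a client-side agent could chunk a file and spread it round-robin across data servers. The chunk size, server names and round-robin placement are illustrative assumptions only, not WekaIO's actual layout, protection scheme or API.

```python
# Conceptual sketch of client-side striping: split a file into fixed-size
# chunks and distribute them round-robin across data servers.
# Not WekaIO's actual placement logic; chunk size and servers are invented.

CHUNK_SIZE = 1 << 20  # 1 MiB chunks, an arbitrary choice for the example

def stripe(data: bytes, servers: list[str]) -> dict[str, list[tuple[int, bytes]]]:
    """Return a mapping server -> list of (offset, chunk) it would receive."""
    placement: dict[str, list[tuple[int, bytes]]] = {s: [] for s in servers}
    for offset in range(0, len(data), CHUNK_SIZE):
        chunk = data[offset:offset + CHUNK_SIZE]
        target = servers[(offset // CHUNK_SIZE) % len(servers)]  # round-robin
        placement[target].append((offset, chunk))
    return placement

if __name__ == "__main__":
    demo = bytes(5 * CHUNK_SIZE + 1234)           # a ~5 MiB dummy file
    layout = stripe(demo, ["ds1", "ds2", "ds3"])  # three hypothetical data servers
    for server, chunks in layout.items():
        print(server, [off for off, _ in chunks])
```

The point of the sketch is simply that the client, not a gateway node, decides where each piece of a file goes, which is what lets every added data server contribute bandwidth.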

As demanding financial applications require high performance, financial institutions take such results seriously, and they serve as a very good vehicle to boost WekaIO's customer adoption.


But there is a mystery, or perhaps just a miss: WekaIO is listed on the Penguin Computing partners page, but the FrostByte product page continues to list BeeGFS, Red Hat Gluster, Ceph and Lustre, and still no WekaIO, as I wrote last year. Bizarre.
