May 15, 2018

LucidLink introduces a new cloud file system approach

LucidLink, a recent cloud storage startup, was founded in 2016 by two former DataCore technical leaders, Peter Thompson and George Dochev.

The team develops a cloud file system, fairly unique in its features, that exposes local file storage connected to an S3 object storage back-end located on-premises or in private or public clouds. The file storage space is POSIX compliant and appears to be local, so it's fully transparent to applications. There is no size limit for that file storage space and it can grow dynamically. Beyond that, applications can work in offline mode, with no internet connection to the data repository.

The product combines a few key ideas to address network and internet latency. First, data is stored and streamed over multiple parallel S3 connections and delivered to the edge compute systems natively in file mode. We can even imagine deployments on AWS EC2 connected to S3 within the cloud. No full download or synchronization is needed, so caching and prefetching are essential to boost the user experience.
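To make the streaming idea concrete, here is a minimal Python sketch of the general technique: ranged reads issued in parallel against an object back-end, with simple read-ahead. Everything here is illustrative - the in-memory `get_range` stands in for S3 GET requests with a Range header, and LucidLink's actual protocol is of course proprietary.

```python
from concurrent.futures import ThreadPoolExecutor

# Toy stand-in for an S3 back-end: one "object" addressed by byte ranges.
OBJECT = bytes(range(256)) * 4096          # ~1 MB of sample data
CHUNK = 64 * 1024                          # 64 KB range requests

def get_range(offset, length):
    """Simulates an S3 GET with a Range header."""
    return OBJECT[offset:offset + length]

class StreamingReader:
    """Reads a remote object as if it were a local file, fetching
    several ranges in parallel and prefetching the next chunks."""
    def __init__(self, size, workers=4, readahead=2):
        self.size, self.readahead = size, readahead
        self.pool = ThreadPoolExecutor(max_workers=workers)
        self.cache = {}                    # chunk index -> Future

    def _fetch(self, idx):
        if idx not in self.cache and idx * CHUNK < self.size:
            self.cache[idx] = self.pool.submit(get_range, idx * CHUNK, CHUNK)

    def read(self, offset, length):
        first, last = offset // CHUNK, (offset + length - 1) // CHUNK
        for idx in range(first, last + 1 + self.readahead):
            self._fetch(idx)               # parallel GETs plus read-ahead
        data = b"".join(self.cache[i].result() for i in range(first, last + 1))
        skip = offset - first * CHUNK
        return data[skip:skip + length]

reader = StreamingReader(len(OBJECT))
assert reader.read(100, 10) == OBJECT[100:110]
```

The point of the sketch: the application sees an ordinary `read(offset, length)`, while behind it several range requests are in flight at once and the next chunks are already being prefetched.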

The client could be a server, a desktop or a laptop, and we'll use the term client from now on. The client software, or agent, is available for Linux (32- or 64-bit), macOS and Windows (32- and 64-bit), representing the compute side of the solution. On Linux the mount point is a simple directory "as usual"; on macOS and Windows, a special Lucid folder is created and all files stored and accessed by Lucid belong to this structure. Metadata is stored on the clients and on the back-end, with an intelligent coordination service, for security, integrity, protection, ubiquity, mobility and scalability reasons.

Of course, data is encrypted on the client with AES-256 and remains encrypted in transit and at rest, securing it end-to-end from creation to the final storage endpoint. The client itself can act as an NFS or SMB server for the rest of the machines within an enterprise.
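The end-to-end model is easy to picture: encrypt with a key that never leaves the client, and upload only ciphertext. A minimal Python sketch of that flow follows - note the toy SHA-256 keystream is a deliberately insecure stand-in for AES-256, used only to keep the example dependency-free; real products use vetted AES implementations.

```python
import hashlib, secrets

def keystream(key, nonce, length):
    """Toy keystream derived from SHA-256; a stand-in for AES-256, NOT secure."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key, plaintext):
    nonce = secrets.token_bytes(16)        # fresh nonce per object
    ks = keystream(key, nonce, len(plaintext))
    return nonce + bytes(a ^ b for a, b in zip(plaintext, ks))

def decrypt(key, blob):
    nonce, ct = blob[:16], blob[16:]
    ks = keystream(key, nonce, len(ct))
    return bytes(a ^ b for a, b in zip(ct, ks))

key = secrets.token_bytes(32)              # 256-bit key stays on the client
blob = encrypt(key, b"report.xlsx contents")
# The S3 back-end only ever sees `blob`, never the plaintext or the key.
assert decrypt(key, blob) == b"report.xlsx contents"
```

The key property illustrated is that decryption only happens on a client holding the key, so the storage provider never handles plaintext.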

The back-end must be an S3-compatible object store, residing on-premises or offered by cloud service providers: AWS S3 of course, Minio S3 on GCP and Azure, AliCloud and Wasabi, plus on-premises solutions such as Cloudian or Caringo, for instance.

The solution is super flexible and OS agnostic - Linux, macOS and Windows are the three major environments - and S3 is the de-facto cloud storage standard, representing a new Software-Defined Storage approach with no appliance or special gateway logic. To try the product, you can register on this page.

In terms of pricing, LucidLink is charged on a pay-as-you-go model.

I think you get the idea: LucidLink is more than just a mount point on top of an S3 back-end - you can find plenty of such products - but this one is about streaming data and serving it fast, with very limited latency, thanks to a parallel design that invites you to consider this approach versus some local ones.

Recent news illustrates the interest in the solution, with the extension of the LucidLink team with three famous advisors: Mark Templeton, former CEO of Citrix for 14 years, Umesh Maheshwari, co-founder and CTO of Nimble Storage, and Peter Ziebelman, lecturer at Stanford and founding partner of Palo Alto Venture Partners.

And we'll have the privilege to meet the team during the next IT Press Tour in June in California.

May 14, 2018

Cask joins Google

A few days after Google's announced intent to acquire Velostrata, Cask Data is announcing a similar destiny within Google Cloud Platform (GCP), and we understand that Google is accelerating the movement and migration of data and workloads toward GCP.

Founded in 2011 in Palo Alto, with a total of $37.5M raised in 5 VC rounds, Cask develops an open source unified platform for Hadoop and Spark named Cask Data Application Platform (CDAP), released under the Apache 2.0 license. The idea behind the platform is to simplify, standardize and provide portability between environments, which originally started as on-premises deployments.

At the same time, Twitter announced a partial move of its giant Hadoop environments and cold data storage to GCP. Already an AWS user, Twitter doesn't replace AWS. We also saw that Atos and Google recently signed a strategic partnership for hybrid cloud. All this news clearly shows a strong offensive in favor of GCP.

May 8, 2018

UpBound, a new player around containers and Rook

UpBound, a recent company started by Bassam Tabbara, creator of Rook and former CTO of Quantum following the Symform acquisition a few years ago, has just emerged with $9M raised from GV.

Based in Seattle, the company develops a cloud-native computing approach on Kubernetes that enables organizations to run, scale and optimize their services across multiple clusters, clouds and hybrid deployments. As a reminder, Rook is an open source storage framework for Kubernetes, initiated by Mr. Tabbara and dedicated to stateful workloads.

May 7, 2018

Dell Technologies World 2018 Recap


A new edition of the famous conference, with a different experience. The conference name has changed: the EMC name was dropped and only the name of the acquirer, Dell, survives, with the word Technologies added to reflect the holding structure on top of seven entities: Dell, DellEMC, Pivotal, RSA, Secureworks, Virtustream and VMware.

For several years, until 2016, it was EMC World, then last year Dell EMC World, and starting now Dell Technologies World to promote the global approach. 14,000 attendees from all over the planet, with 300 press, analysts and influencers, 3,500 partners and more than 110 exhibitors and sponsors, among them Aptare, Axway for Syncplicity, Carbonite, Datrium, DefendX (formerly NTP Software), DriveScale, Komprise, Liqid, Nexenta, Nutanix, Peer Software and Varonis. It was again a huge success.

The difference resides in the nature of the two companies: EMC World was a storage-centric event, while Dell Technologies World is an IT conference covering servers, storage, networking, cloud and more to illustrate Dell's business. In other words, EMC World was the biggest storage show, even coming from a single vendor; we don't really have one anymore, except probably NetApp Insight, whose size is quite different, representing about a third of the EMC event.

The conference theme was Transformation, and Dell leaders articulated that journey around four pillars - IT, Digital, Workforce and Security Transformation - supporting all Dell developments and initiatives in all directions.

Michael Dell was a bit enigmatic about the future options for Dell as a company and declined to answer a journalist's question on the topic, redirecting him to the SEC filing about it.

Dell unveils PowerMax, the new EMC high-end storage array

The key storage announcement was PowerMax, the "new" VMAX, also inviting users to forget the DSSD acquisition and the failure associated with it. PowerMax represents the new high-end storage array with full NVMe support and dual-ported drives - NVMe-oF should arrive later - and 5:1 inline data reduction. Currently DellEMC uses WDC flash drives but plans to add a second manufacturer soon; as Intel is a key strategic partner, we can guess it will be Intel - just an obvious comment.

DellEMC continues to promote an architecture with dual active controllers, the famous Brick, supporting two NVMe SSD trays of 24 2.5" drives each. Intel Optane and other Storage Class Memory (SCM) media will arrive in the future. Starting now, PowerMax exists in two models: the PowerMax 2000, with 1 or 2 Bricks and up to 96 drives of 1.92, 3.84 or 7.68TB, supporting open environments; and the PowerMax 8000, with 1 to 8 Bricks and the same drives, supporting open systems and mainframe.

Internal connectivity is InfiniBand on both models, and the external interface is FC or GbE, plus FICON for the 8000 model. Both also expose NAS protocols if users wish to consolidate workloads on the same big iron.

Data protection is RAID only, with RAID 5 (7+1) the default for both models and RAID 6 (6+2) available on both; the PowerMax 2000 adds RAID 5 (3+1) as it starts smaller. Effective capacity is 1PB for the PowerMax 2000, delivering 1.7M IOPS (8KB random read hit), and up to 4PB for the PowerMax 8000, reaching up to 10M IOPS.
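The usable fraction behind these RAID layouts is simple arithmetic - data drives divided by total drives in the group. A quick illustrative calculation (not DellEMC's sizing tool, which also factors in data reduction and spares):

```python
def raid_efficiency(data_drives, parity_drives):
    """Fraction of raw capacity usable for data in one RAID group."""
    return data_drives / (data_drives + parity_drives)

# The three PowerMax layouts mentioned above:
for name, (d, p) in {"RAID 5 (7+1)": (7, 1),
                     "RAID 6 (6+2)": (6, 2),
                     "RAID 5 (3+1)": (3, 1)}.items():
    print(f"{name}: {raid_efficiency(d, p):.1%} of raw capacity usable")
# RAID 5 (7+1) yields 87.5%; RAID 6 (6+2) and RAID 5 (3+1) both yield 75.0%.
```

This explains the choice of 7+1 as the default: it gives the smallest parity overhead, while 6+2 trades some capacity for tolerance of a second drive failure.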

Each Brick includes software, available in two editions:
  • Essentials: SnapVX, Deduplication, Compression, Non-disruptive Migration, QoS, and iCDM Basic
  • Pro: Essentials + SRDF, D@RE, eNAS, PowerPath, SRM, and iCDM Advanced.
The software leverages machine learning for data placement, pattern recognition and predictive analytics to sustain high performance, processing 40 million data sets in real time per array and driving six billion decisions per day. PowerMax is available today, May 7.

XtremIO now offers a new, efficient WAN replication mechanism and is also available in an entry-level iteration with a significant cost reduction.

We also learnt that Isilon will be available very soon on Google Cloud Platform and we invite you to read the dedicated post about this.

On the Hyper-Converged Infrastructure (HCI) side, Dell unveils new VxRail Appliances and VxRack SDDC Systems. The former arrives with NVMe, Intel Xeon Scalable processors, Nvidia Tesla P40 GPUs and 25Gbps networking, while the latter leverages the 14th generation of PowerEdge servers.

Dell continues to enhance CloudIQ, the DellEMC answer to Nimble Storage InfoSight, now an HPE product. It will support PowerMax this summer.

The event concluded with a concert by Sting, who played several famous songs by The Police - amazing again...

Next year the event will be at the same venue at approximately the same date.

May 4, 2018

Isilon available on Google Cloud Platform

It was one of the storage news items of Dell Technologies World 2018, but it was almost hidden by the team, with very low press coverage as well. We didn't get a lot of details, but we understood that Google will offer a NAS service powered by Isilon technology, based on colocated Dell systems.

In other words, it won't be the software flavor, IsilonSD, deployed on Google Cloud Platform (GCP), but real physical systems. The service is named Isilon Cloud for GCP and will be GA very soon; it is now in a sort of controlled release or early access phase. This dedicated file service infrastructure - per user, account or company - will offer sub-millisecond latency to GCP clusters and will be fully supported and monitored by Dell.


On GCP, file services are already represented: a single-node file server via the launcher, GlusterFS, Avere and, more recently, Elastifile, also via the launcher.

The battle for enterprise file services in the cloud is on, as this iteration follows a few others already engaged by the two other cloud giants, Microsoft with Azure and of course AWS.

For Microsoft, the company acquired Avere Systems in early 2018, one of the most advanced and powerful file storage offerings, well known for its NAS quality. This complements what Azure announced during NetApp Insight in October: an Enterprise NFS service based on the NetApp NFS stack. Microsoft also offers SMB with Azure File Services (AFS). AFS is not Windows SMB running on Azure nodes but a completely new SMB server implementation with Azure Tables and Blobs as the backing store - Tables for metadata and Blobs for file data.
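That Tables-for-metadata, Blobs-for-data split can be modeled in a few lines of Python. This is a conceptual sketch with invented names, not the actual AFS implementation:

```python
import time, uuid

table = {}   # metadata store: path -> attributes (models Azure Tables)
blobs = {}   # data store: blob id -> bytes      (models Azure Blobs)

def write_file(path, data):
    """Store contents in the blob store, attributes in the table store."""
    blob_id = str(uuid.uuid4())
    blobs[blob_id] = data
    table[path] = {"size": len(data), "mtime": time.time(), "blob": blob_id}

def read_file(path):
    """Look up the blob id in the table, then fetch the bytes."""
    return blobs[table[path]["blob"]]

def stat(path):
    """Metadata-only operation: served entirely from the table store."""
    return {k: v for k, v in table[path].items() if k != "blob"}

write_file("share/doc.txt", b"hello SMB")
assert read_file("share/doc.txt") == b"hello SMB"
assert stat("share/doc.txt")["size"] == 9
```

The design point is that directory listings and attribute queries never touch the bulk data path, which is exactly why splitting metadata and data across two different stores scales well.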

For AWS, the service is essentially covered by Elastic File System, aka EFS, plus the Storage Gateway bridge to on-premises instances, which can expose block and file protocols.

May 3, 2018

Qumulo recruits its CMO

Qumulo, a leader in file storage, just announced the recruitment of its CMO, Peter Zaballos, a veteran marketing executive with a proven track record. He sits on the board of directors at QuickPivot, is chairman at Insignia Systems, and left his previous CMO position at SPS Commerce last January. This is a normal addition in the development of a company that needs to strengthen its leadership layer.

Interestingly, he has never worked for a storage company, except LSI Logic 30 years ago, but we notice his period at RealNetworks, like many Qumulo and Isilon people in the past. It's an indicator that the company is looking for something different, with a unique market approach.

The arrival of Peter Zaballos also means that Jay Wampold, VP of Marketing, who arrived in February 2017, leaves the adventure.

The company is now ready for a next stage of growth, with potential extensions of its file storage leadership in Media and Entertainment to other market segments, a new funding round or, more seriously, an IPO. But will the market let Qumulo play as an independent disruptor? Not sure, as the company is very attractive to a few gorillas already in partnership with it, such as Cisco and HPE.

April 29, 2018

A report from NAB 2018

The 2018 edition of the NAB (National Association of Broadcasters) Show confirms the trends and audiences of past editions, with more than 100,000 attendees from all around the planet and a total of 1,700 exhibitors, including 819 from outside the United States. Among these, 244 were first-time exhibitors.

This edition confirms, if needed, that storage, in its various forms, has a strong footprint within the media and entertainment (M&E) industry, with 4K becoming the working norm, 8K around the corner and 16K on the horizon, complex media workflows, live streaming at large resolutions, a new generation of drones, special effects, VR and cloud-connected operations. This sector is very competitive. This year, machine learning and analytics made a new iteration within vendors' solutions.

We noticed the absence of Oracle and NetApp: they didn't have booths, only meeting rooms, which is not the same indicator of presence for attendees looking for a real booth with products open to the public. This illustrates that M&E is not a priority for them, or at least not a segment they address with serious solutions today.

Another big company, HPE, had a pretty small booth, while IBM had a significant one.

For Oracle it's a real surprise and bad news for users. As a reminder, the software giant acquired Front Porch Digital, one of the market references in rich media content storage management, in 2014, and has finally seriously reduced its M&E footprint. Surfing on that market reality, several companies, such as Masstech (who acquired SGL last year), StorageDNA, XenData and Atempo, have started to develop DIVA archive migration services.

Object storage had limited visibility, as today's M&E workflows rely on file storage. We didn't notice a significant presence except for the S3 market leader Cloudian, DDN, WDC/HGST and of course Object Matrix, who won the storage category in IABM's new Broadcast and Media awards. Cisco showed object storage solutions from partners, promoting all of them without favoring any; the company clearly cares less about these products than about gaining market share for its UCS server series. It reminds me of what HPE did with four solutions three years ago: they just wished to sell their hardware platform with any software. Other vendors were anecdotal.

WDC/HGST confirms that "basic" object storage suffers in competition from its lack of data services; this is perfectly illustrated by the recent agreement with StorReduce and explains the difficulties met by several object storage vendors.

Two Ceph-based players exhibited as well: SoftIron and Concurrent (owned by Vecima Networks), who develop Ceph appliances respectively named HyperDrive and Aquari.

Building a strong file access and storage solution on an object storage layer is a real challenge, and for several years we haven't identified any serious product in that space that can compete with players offering native, strong file storage products, represented by Avere, Dell EMC Isilon, Panasas, Pure Storage, Quantum, Qumulo or Rozo Systems, all present at NAB this year.

In addition to this list, the expo was the opportunity to meet specialized storage vendors like Active Storage, Avid, DDP, EditShare, Facilis, Harmonic, Scale Logic, SNS and Tiger Technology who develop dedicated file storage solutions.

All these file storage players approach the market with different solutions: NAS in Scale-Up or Scale-Out flavors, or a dedicated approach based on their shared file system - in the past named SAN file system or SAN file sharing system, renamed due to the emergence and now-huge presence of IP/Ethernet storage networks. Some products can expose both access methods, NAS and proprietary mode.

Content tiering is also key in that market sector, and companies like Komprise and StrongBox Data Solutions illustrated how their data migration approaches fit.

We discovered a recent company, Storbyte, exhibiting for the very first time with an all-flash disk array.

Of course, cloud was everywhere and no industry is ignored; AWS, Google and Microsoft had a strong presence. AWS has Elastic File System, Storage Gateway and partner solutions. GCP, as mentioned on its web site, "doesn't currently provide a native filer solution as a service, but you can run a filer on Cloud Platform in a variety of ways", like a single-node filer via Google Cloud Launcher, GlusterFS or Avere. A recent example is Elastifile, who announced its file system, ECFS, available via the Google Cloud Launcher to provide an NFS service. Microsoft signed a partnership with NetApp to provide NFS on Azure, named Azure Enterprise NFS Service, and acquired Avere early this year.

The battle is on between big cloud providers for migrating on-premises enterprise file storage and services to the cloud. The battle is also on between alternative cloud storage providers, especially Wasabi and Backblaze B2, who are rapidly developing their ecosystems and providing cloud storage at a very attractive price.

We also saw several booths displaying LTO solutions - of course the LTO consortium booth, but also XenData, StorageDNA, IBM, Qualstar and Spectra Logic, displaying LTFS solutions.

We found some interesting NVMe integrations, like the Quantum Xcellis Scale-out NAS configured with NVMe devices and associated software, illustrating a full IP configuration. Moving to all-IP workflows is a reality today, simplifying integration, scalability and flexibility like the file-based approach did a few years ago. And finally, we wish to mention that Samsung and Intel, with impressive booths, showed interesting products, especially NVMe devices.

April 19, 2018

Interview with Mike Lauth, President, CEO and Co-Founder, iXsystems

Philippe Nicolas (PN): Mike, recently The IT Press Tour met up with iXsystems. Could you share with us the genesis of the company, as your products are much more known than the company behind them?
Mike Lauth (ML): My partner Matt Olander and I acquired the hardware business of Unix vendor BSDi in the early 2000s when the company was being divided up and sold off. The software side of the business went to embedded software vendor Wind River Systems. We think we got the better end of the deal because the hardware manufacturing operation not only had strong capabilities but also a valuable portfolio of Silicon Valley customers. We knew that if we treated them right and watched our bottom line, we could ride every wave and weather every storm that tore through the Valley. And we have!

PN: You're a true believer of Open Source, what are the advantages you see in such philosophy for you as a builder and for users?
ML: We have always been in the server hardware business and have never believed in putting artificial constraints on the hardware we sold to limit or control how it is used. That may sound obvious for hardware but the same philosophy can also apply to software. Open Source software should never have any artificial restrictions placed on it that might limit what you can do with it. In practice, this makes open source software an equally-solid building block as the hardware to a business that chooses to embrace it. While permissively-licensed software like FreeBSD has been criticized for not forcing companies to “give back”, few companies actually survive the closing of software that is otherwise open to everyone. We follow the FreeBSD model with FreeNAS and at over ten million downloads, we’ve learned that giving away our storage software with no strings attached can win you millions of users, thousands of quality assurance partners, and many thousands of hardware customers. With FreeNAS we leverage the community as part of the QA process; users get a product that has been exposed to thousands of QA partners in exchange for meeting their storage needs. The availability of the FreeNAS source code allows members of the community to change the software to meet specific needs and this transparency also allows a business to ensure the security of their data.

PN: What are the open source projects you drive or ones in which you are a key participator?
ML: We develop the TrueOS desktop/server operating system based on FreeBSD but our runaway hit is FreeNAS, the Open Source file, block and object enterprise-grade Open Source Software Defined Storage. The FreeNAS project was facing an uncertain future back in 2009 and as FreeNAS users and community members ourselves, we didn’t want to see the project fade away. We negotiated to acquire the FreeNAS project in 2010 and set about rewriting it, modernizing it, and incorporating the ZFS file system. Multi-terabyte hard drives had recently gained adoption in the market, and we instantly recognized the potential of FreeNAS to deliver enterprise-class storage services, coupled with our hardware, to a market that was desperate to leverage Open Source economics to address rapid data growth.

PN: You produce and build your own products with your assembly entirely in San Jose? Please explain to us the production chain, what do you source outside…? From whom…?
ML: We are definitely not a VAR; we are a System Builder. We don’t resell off-the-shelf systems like many other vendors do. Instead, we source individual components and platforms from leading distributors and ODMs for assembly, integration, and testing at our San Jose facility. Even though we have a standard set of products, most systems we produce can be custom-configured. Our goal is to be flexible and build our products around our customers’ requirements instead of requiring our customers to squeeze their requirements into a narrow product line, as most of our competitors do. You name a manufacturer or distributor and we buy from them, giving us a global footprint that rivals our largest competitors.

PN: Also, do you serve other HW vendors? If yes, who are they and what are the reasons they select iXsystems?
ML: In addition to creating custom internal SKUs, as well as our own TrueNAS and FreeNAS storage appliances, we do build server appliances for partners who brand them to meet their needs. There is a huge gap between buying off-the-shelf servers and ordering container loads of custom systems from Asia. We fill that gap well by combining flexibility with raw on-site assembly capacity. Of the partner relationships we can discuss openly, we build the ScaleEngine media streaming appliance which, like Netflix’s Open Connect appliance, gets positioned at strategic data centers around the world to service regional loads. Keeping their server and storage configurations standard is critical for ScaleEngine to manage and scale their operation.

PN: How many customers do you have? How many systems do you produce a month? May we have the split between servers and storage? How many PB did you ship in 2017?
ML: We have thousands of customers around the world, many of which have benefited from a multi-decade relationship with iXsystems. Our customer base runs the gamut from hyperscale data centers and customers building public clouds to individuals deploying our FreeNAS Mini storage system in small offices. We called 2016 the “Year of the Petabyte” due to the sheer number of customers we earned, such as McGill University, that were using TrueNAS to manage multiple petabytes of storage. By the end of 2017 we had nearly 5,000 customers and over 1 exabyte shipped. Both our server and storage businesses continue to grow year over year, but we do see faster acceleration in our storage growth. This is due to the increased reach of companies using FreeNAS. It’s pretty common for us to talk to one of our server customers and end up finding out that they’re also using FreeNAS in some capacity for their storage needs. Those customers are pretty excited to learn that there is an enterprise version of FreeNAS, which makes the transition to TrueNAS seamless for their mission-critical use cases. Those customers typically have experience using FreeNAS and love it but need a supported storage solution for business operations, which makes for an easy sale and migration. We’re always going to continue to serve the enterprise server market, but it is really exciting that we have made a name for ourselves as an enterprise storage company over the past seven years.

PN: I understand you’re profitable, what is your current growth rate? In Servers and Storage?
ML: Server sales have been solid for decades and regularly experience surges with new data center deployments. Storage sales have consistently seen over 50% annual revenue growth since launching TrueNAS in 2011 and are taking on a life of their own.

PN: iXsystems acquired and then rewrote FreeNAS. What were the reasons behind this enormous task?
ML: When we acquired FreeNAS, its GUI was rather dated and it was lacking important enterprise features. We knew that ZFS was a natural addition to bring those features into FreeNAS and so we concluded that a ground-up rewrite was due. The original FreeNAS PHP-based GUI was limiting, so we moved first to a Django-based web framework which we are now retiring in favor of an AngularJS-based one later this year. The initial rewrite was indeed a monumental task but we’ve found that it addressed many customer needs. The rewrite also allowed us to move to a more evolutionary development model thanks to our separation of the FreeNAS middleware from its user interface. This also allows us to provide a “REST” API for automated management of FreeNAS systems.

PN: FreeNAS is clearly a best-seller if I can use that term. Could you tell us which partners use it and how many installations you have so far? What is the total capacity FreeNAS has under control?
ML: At over ten million FreeNAS downloads we have put OpenZFS in more hands than anyone and we believe this is true to the vision of the original ZFS team at Sun. We have indeed seen other vendors around the world either ship hardware tailored to FreeNAS or offer a turn-key solution with it including video editing solutions in Japan, Scandinavia, Canada and the USA. These are not official partners but we still see their efforts as extending our marketing reach by increasing our installed base. In addition to countless FreeNAS systems hiding behind corporate firewalls, we know that over 200,000 FreeNAS systems check for updates on a regular basis. The exact number of FreeNAS systems will never be known but we conservatively estimate it to be over half a million. As for the total capacity under control of FreeNAS, your guess is as good as ours given that we do not spy on our users. If every one of our more than 200K FreeNAS users has 4TB under management -- again a very conservative estimate -- this would mean that FreeNAS users around the world manage in excess of 800 petabytes.

PN: What was the motivation also to incorporate OpenZFS?
ML: OpenZFS truly is as good as people say it is. It is a next generation file system without equal and nothing can touch it at any price, nor will they for some time. It encompasses the functionality of traditional file systems, volume managers, RAID controllers, and more, with consistent reliability, functionality and performance. From its continuous data validation to its unlimited snapshots, OpenZFS truly puts the competition to shame.

PN: Now about TrueNAS, the enterprise flavor of FreeNAS, what are the differences? What do you bring to the table? What about scale-out storage?
ML: It’s funny but we give away the number one Software-Defined Storage solution [FreeNAS] while others struggle to make a profit from SDS. We learned the hard way that enterprise hardware and software must be tightly-integrated to provide enterprise-class performance, reliability, and a consistent customer experience, and TrueNAS is exactly that: a custom, user-serviceable hardware platform with enclosure management, high-availability, performance optimizations, 3rd party certifications, and up to 24/7 global support. I look at an Android phone as a perfect example of how the availability of the software does little to provide you a meaningful experience as a user until you seamlessly match it with custom hardware. And, who makes the best Android phones? Arguably, Google. Why? Because they can design the software and hardware in concert. As for scale-out, we are very pleased with OpenZFS’ scale-up architecture because it can not only support multiple petabytes with today’s hardware but also provides the building block of a yet-larger scale-out infrastructure. Unless it’s part of a truly replicated infrastructure, scale-out is more often a high-latency crutch for limited-capacity storage systems than truly scale-out storage. Scale-out storage has its place, but when you can build scale-up architectures in double-digit petabyte scale that can outperform most scale-out storage for fractions of the cost, you start to question its importance.

PN: I understand that the TrueNAS M-Series represents the third generation of TrueNAS. What features distinguish it from previous generations of TrueNAS and competing products?
ML: While some upgrades are no-brainers like upgrading from SAS2 to SAS3, doubling max capacity to 10.5PB while decreasing the physical footprint, and enhancing throughput to 100Gb Ethernet and 32Gb Fibre Channel, we have really embraced flash technologies like NVMe and NVDIMM with the "M" Series platform to achieve the best price/performance ratio possible. Any vendor can replace their HDDs with SSDs for a performance boost but integrating NVDIMM technology requires careful coordination between software and hardware engineering. Technologies like NVDIMM truly separate even the highest-end DIY FreeNAS systems from TrueNAS.

PN: You offer block and file storage, what about object storage such as S3? Any plan and what is/who is the one you have selected?
ML: Object has indeed joined file and block as the third essential network storage protocol and we are meeting demand for it by both serving S3-compatible storage and by supporting replication to Amazon S3, Azure Blob Storage, Backblaze B2, and Google Cloud Storage. True to our open architecture, we would like to support every popular cloud service available.

PN: Is there a lot of upsell between people who start with FreeNAS and then adopt TrueNAS or do you sell TrueNAS directly? In that case is it to replace some brands? And what are the top 3 brands you replace?
ML: Given that we have never met the majority of FreeNAS users, most of them work their way up our storage product line on their own, driven mostly by word of mouth. FreeNAS isn’t a household name, but it is a household name in IT. So, we focus our marketing efforts on TrueNAS, but we cannot ignore the steady flow of users who discover FreeNAS at home or in their dorm room and bring their storage management skills to work. I’d say that Dell/EMC, NetApp, and HPE are our top competitors, all of which we aggressively challenge on value -- measured by features, performance, and capacity per dollar -- and with our streamlined storage product line.

PN: Can a general-purpose hybrid and AFA solution take on niche AFA solutions?
ML: I would argue that they do every day but it comes down to the individual workload that a user is throwing at their NAS system. While an AFA may be needed for a specific workload, it is a very expensive “easy button” for a mixed “unified” workload. We encourage users to carefully analyze the characteristics of their real-world workloads and understand when AFA or hybrid are the way to go, or even have a clean division of the load between two systems. Your database may demand AFA while its backup is obviously fine with only rotational media. Niche solutions are exciting but we’re still focusing on the careful pairing of a general-purpose system to a workload because a lot can be achieved by understanding the requirements and abilities of each.

PN: What storage trends do you think will drive the next five years?
ML: There is no question that persistent memory -- like NVDIMM -- is a perfect glimpse at the future of many aspects of the storage stack: silicon sits adjacent to silicon with few buses or abstractions to slow it down. “All NVDIMM” storage however will only serve the smallest of workloads for years to come, leaving NVDIMMs as useful components in a larger storage stack rather than as a platform for primary storage. With the possible end of Moore’s Law rapidly approaching, the optimization of the storage technologies we have today will play a key role in the years to come. Brute-force solutions like AFA-everywhere will not be economically viable for some time. I also feel that as exponential data growth continues, the need for Open Source economics becomes all the more pressing. Addressing that need just isn’t sustainable for most companies with the “traditional” vendors or the public cloud.

PN: Do you have a hyperconvergence play and if so, how will you distinguish yourselves from niche players like Nutanix?
ML: The recent Meltdown and Spectre CPU-level vulnerabilities kindly reminded us that as with flash technologies, virtualization is a useful complement in a larger computing stack rather than its sum. Our hyperconvergence play began nearly a decade ago with “Jail” containers in FreeNAS and now the addition of the bhyve hypervisor. We are very excited about the potential of these technologies but have not yet integrated them into TrueNAS given the surprises that hosting an OS within an OS can introduce. You should not jeopardize the performance of your storage system for the sake of saving on an attached compute device. Careful capacity planning aside, the real question is how hypervisors can serve the storage stack. For example, a consistency-sensitive backup scenario such as a database is not adequately validated simply by knowing that you have a duplicate of its raw data. To be properly validated, a backup of that database should at a minimum be imported into an on-backup database server and ideally be interrogated by a validation application. The online nature of OpenZFS unlocks opportunities like this that were never possible with tape. You really want to be the person who can confidently say, “Yes boss, our tertiary backups are good” when asked, and a combination of OpenZFS and a hypervisor will ultimately achieve that.
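The validation workflow ML describes -- cloning a snapshot of the backup, importing the database it contains, and interrogating it -- can be sketched in a few lines. This is a minimal illustration, not iXsystems' implementation: the pool, dataset, and snapshot names are hypothetical, and SQLite stands in for whatever database engine is actually in use.

```python
import sqlite3
import subprocess

def clone_snapshot(snapshot, clone):
    # Hypothetical names, e.g. "tank/backups/db@nightly" -> "tank/validate/db".
    # An OpenZFS clone is writable, instant, and nearly free, so the backup
    # dataset itself is never touched during validation.
    subprocess.run(["zfs", "clone", snapshot, clone], check=True)

def validate_sqlite_backup(db_path):
    # Import the backed-up database and interrogate it: here a built-in
    # consistency check, but any application-level validation query would do.
    with sqlite3.connect(db_path) as conn:
        (result,) = conn.execute("PRAGMA integrity_check").fetchone()
    return result == "ok"
```

After validation, destroying the clone (`zfs destroy`) discards the working copy while the snapshot remains the immutable backup -- which is the online, repeatable property that tape never offered.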

April 17, 2018

Qumulo plans for replication

Qumulo, a leader in the file storage category, had a strong presence at NAB with a good-sized booth. As the company is a clear dominant player in media and entertainment, we spent a few minutes with its team to discuss a few planned enhancements.

We learned two significant pieces of information from them:
  • Qumulo File Fabric, aka QF2, is already available on AWS, and the company will announce support for a second platform in the next few months; it should be Azure or GCP, although GCP is stronger in the M&E segment.
  • As mentioned in a recent post, the current replication mode is asynchronous, automatic, continuous, one-way replication. A distributed 1-to-N model is available and cascading topologies can be deployed, but N-to-1 consolidation is not yet supported. A second iteration will introduce a bi-directional mode on the same directories, normally in 2018, and phase 3 will unveil a single namespace and tiering across clusters around mid-2019.

April 13, 2018

C3DNA names its new CEO

C3DNA, an emerging leader in multi-cloud workload management, just announced its new CEO, Max Michaels, who comes from IBM, where he led the Network Services business. He previously worked for Cisco and AT&T.

We met C3DNA in March 2017 during the IT Press Tour and were impressed by the capabilities of the product, probably one of the most agile and agnostic solutions on the market.

The product is represented and sold in Europe by CTI, Clougnitive Technology Incorporated, founded by Kamel Kerbib and Bernard Salvan, two well-known IT figures in France.