Friday, January 30, 2026

VergeIO, a serious VMware alternative

VergeIO joined The IT Press Tour in California this week and invited us to learn more about the company and a solution that is seeing very interesting market adoption. As the solution represents a serious alternative to VMware, meeting VergeIO was key to understanding its momentum and technology.

VergeIO is the developer of VergeOS, a private cloud operating system designed to replace traditional multi-layer virtualization stacks with a single, unified software platform. Founded in 2012, VergeIO positions VergeOS as a fundamentally different approach to private cloud infrastructure - one that collapses compute, storage, networking, automation, and management into a single codebase rather than integrating multiple independent products. This architecture is intended to reduce complexity, improve efficiency, and dramatically lower cost compared to legacy stacks such as VMware, Nutanix, or multi-vendor three-tier architectures.


At the core of VergeOS is a single-kernel, single-SKU design built on a KVM/QEMU foundation with extensive proprietary extensions. Instead of separate hypervisor, storage, network, and management layers, VergeOS integrates all services directly into the operating system, eliminating translation layers, duplicated metadata, and operational silos. VergeIO emphasizes that this unified codebase—roughly 400,000 lines of code compared to tens of millions in traditional stacks—enables higher performance, lower latency, and easier lifecycle management.

Key platform components include VergeFS, an integrated storage system with global inline deduplication across disk, memory, and data movement; VergeFabric, a built-in software-defined networking layer providing Layer 2/Layer 3 services, micro-segmentation, routing, and security without external controllers; and ioClone-based snapshots that create fully independent, immutable copies with no performance penalty. VergeOS supports live VM migration across nodes and storage tiers, mixed hardware generations, heterogeneous CPU vendors, and flexible “ultraconverged” deployments where compute-heavy and storage-heavy nodes coexist within the same system.
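To make the global inline deduplication idea more concrete, here is a minimal conceptual sketch in Python of content-addressed block storage, where identical blocks are stored once and referenced by hash. It is purely illustrative and assumes nothing about VergeFS internals; block size, structures, and function names are hypothetical.

```python
# Conceptual sketch of inline deduplication (illustrative only; not VergeFS code).
# Blocks are addressed by their content hash, so identical blocks across VMs,
# snapshots, or tiers are stored exactly once.
import hashlib

BLOCK_SIZE = 4096          # hypothetical fixed block size for the sketch
block_store = {}           # hash -> block payload (stored once)
volume_index = {}          # (volume, block_number) -> hash (many references)

def write(volume: str, block_number: int, data: bytes) -> None:
    digest = hashlib.sha256(data).hexdigest()
    block_store.setdefault(digest, data)          # store the payload only if it is new
    volume_index[(volume, block_number)] = digest

def read(volume: str, block_number: int) -> bytes:
    return block_store[volume_index[(volume, block_number)]]

# Two VMs writing the same OS image block consume the space of a single copy.
write("vm-a", 0, b"\x00" * BLOCK_SIZE)
write("vm-b", 0, b"\x00" * BLOCK_SIZE)
print(len(block_store))  # 1
```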


For resilience and data protection, VergeOS introduces ioGuardian, a kernel-level local protection mechanism that enables near-instant recovery from catastrophic failures by dynamically retrieving only required data blocks, and ioReplicate, a WAN-optimized replication engine that transmits only unique deduplicated data. Disaster recovery is built around virtual data centers (VDCs), VergeOS’s native multi-tenancy model, which captures entire environments - networking, storage, policies, and VMs - in consistent snapshots that can be restored in minutes, with a reported 100% customer DR success rate.
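The WAN-optimized replication idea can be sketched the same way: before shipping anything, the source determines which block hashes the target already holds and transmits only the missing ones. Again, this is a hypothetical illustration of dedup-aware replication in general, not ioReplicate’s actual wire protocol.

```python
# Hypothetical sketch of dedup-aware replication: only blocks the remote site
# does not already hold cross the WAN (not ioReplicate's actual protocol).
import hashlib

def replicate(local_blocks, remote_hashes):
    """Return only the payloads that actually need to cross the WAN."""
    return {h: data for h, data in local_blocks.items() if h not in remote_hashes}

local = {hashlib.sha256(b).hexdigest(): b for b in (b"alpha", b"beta", b"gamma")}
remote_has = set(list(local)[:2])        # the DR site already holds two of the three blocks
to_send = replicate(local, remote_has)
print(f"shipping {len(to_send)} of {len(local)} unique blocks")  # shipping 1 of 3 unique blocks
```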


VergeOS is licensed per physical server, with all features included, regardless of core count, memory, or storage capacity. This model, combined with hardware reuse (including VxRail, Nutanix, and commodity servers), enables customers to modernize infrastructure, repatriate workloads from public cloud, and exit VMware with reported cost reductions of 50–80% and significant operational simplification. Overall, VergeIO positions VergeOS as a full private cloud operating system that transforms infrastructure from a complex stack into a cohesive, software-defined platform optimized for efficiency, resilience, and long-term sustainability.


Wednesday, January 28, 2026

InfoScale as a modern availability approach to serve modern application services

InfoScale participated in The IT Press Tour in California this week. It was key to understanding the Cloud Software Group strategy following last summer's acquisition of Arctera and the recent split into three business entities - BackupExec.com, Arctera, and InfoScale - a move that finally marks the return of the VERITAS Software heritage with VxFS, VxVM, and VCS.

InfoScale is an enterprise software platform focused on delivering real-time operational resilience across applications, data, and infrastructure in hybrid, multi-cloud, and on-prem environments. With roots in Veritas and decades of evolution through Symantec, Arctera, and now Cloud Software Group (CSG), InfoScale positions itself as a foundational resilience layer for mission-critical enterprise systems across industries such as finance, healthcare, telecom, energy, and insurance. The company reports widespread enterprise adoption, significant downtime reduction, and large operational efficiency gains among customers.


The core challenge InfoScale addresses is the growing complexity and risk in modern IT operations, driven by hybrid cloud environments, increasing cyber threats, AI-driven workloads, and shrinking tolerance for downtime. Enterprises now operate thousands of interconnected applications and infrastructure layers, where outages, ransomware, human error, maintenance activities, and cloud disruptions can cause severe financial and operational impact. Traditional point solutions for availability, backup, and recovery are siloed and reactive, creating blind spots and fragmented resilience strategies.


InfoScale’s approach is “software-defined resiliency,” integrating application-aware clustering, storage resiliency, orchestration, failover, replication, snapshots, and automation into a unified full-stack platform. It provides real-time application health monitoring, rapid recovery, data integrity controls, secure snapshots, and cross-cloud mobility, enabling near-zero RPO and RTO while supporting heterogeneous operating systems, storage platforms, and cloud environments. The platform is designed to orchestrate dependencies across applications, infrastructure, and data, delivering automated failover, disaster recovery, and cyber resilience at scale.
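As a rough conceptual sketch (and in no way InfoScale's actual clustering logic), application-aware failover boils down to probing application health rather than just host liveness, and moving the service to a standby node once the probe fails repeatedly. Node names, the threshold, and the probe below are hypothetical.

```python
# Conceptual sketch of application-aware failover (illustrative only; InfoScale/VCS
# does this with agents, service groups, and cluster membership, not a Python loop).
import time

NODES = ["node-a", "node-b"]        # hypothetical two-node cluster
FAILURE_THRESHOLD = 3               # consecutive failed probes before failing over

def app_healthy(node: str) -> bool:
    """Placeholder for an application-level probe (e.g. a test query, not just a ping)."""
    return node == "node-a"         # pretend the app only answers on node-a for the demo

active, failures = "node-b", 0      # start on the "wrong" node to trigger a failover
for _ in range(10):                 # bounded loop, just for the sketch
    if app_healthy(active):
        failures = 0
    else:
        failures += 1
        if failures >= FAILURE_THRESHOLD:
            standby = next(n for n in NODES if n != active)
            print(f"failing over from {active} to {standby}")
            active, failures = standby, 0   # real systems also fence storage and replay state
    time.sleep(0.1)
```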


The platform emphasizes operational intelligence and orchestration, allowing enterprises to simulate failures, test recovery plans, detect anomalies, and coordinate recovery workflows across complex environments. Future roadmap capabilities include predictive analytics for failure forecasting, intelligent workload optimization, automated recovery architecture design, fault simulation, contextual alerting using AI, and proactive threat detection for ransomware and cyber attacks. These features aim to move resilience from reactive recovery to autonomous, predictive operations.


Overall, InfoScale positions itself as a comprehensive enterprise resilience engine that unifies data, infrastructure, and application management, helping organizations reduce downtime, mitigate cyber risk, modernize IT operations, and maintain continuous business operations in increasingly complex digital environments.


Monday, January 26, 2026

Globus for a comprehensive data management approach

Globus joined the current IT Press Tour in California, and it was the right moment to dig into a widely adopted global solution.

The Globus initiative is a nonprofit research IT platform developed and operated by the University of Chicago, with a mission to increase the efficiency and effectiveness of data-driven research through sustainable software. For nearly 30 years, Globus has evolved from early distributed computing and grid technologies into a leading software-as-a-service platform for managing research data, computation, and collaboration at scale. Its work has supported major scientific advances and global research infrastructure, including grid computing contributions associated with Nobel-recognized discoveries and large international scientific collaborations.


The platform is designed specifically for the research market, which has unique requirements such as secure but open science, highly distributed multi-institutional collaborations, and diverse domain-specific tools and workflows. Researchers typically operate in environments with on-premise compute and storage, high-performance networks, and access to distributed national or international computing resources, creating a need for unified, secure, and reliable data and compute services across heterogeneous systems.


Globus provides a comprehensive research IT platform that includes managed data transfer and synchronization, collaborative data sharing, unified data access across storage systems, publication and discovery services, remote compute execution, automation workflows, and metadata indexing for data discovery. Its hybrid architecture integrates hosted cloud services with local agents and institutional resources, enabling secure, federated access and orchestration across laptops, labs, on-prem infrastructure, cloud storage, and HPC facilities. Key capabilities include “fire-and-forget” reliable data transfers, secure tunneling across security boundaries, fine-grained access control for sharing, and federated authentication. 
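For a sense of what a "fire-and-forget" transfer looks like in practice, here is a short example using the Globus Python SDK (globus-sdk). The token, collection UUIDs, and paths are placeholders you would supply from your own Globus setup.

```python
# Submitting a managed, fire-and-forget transfer with the Globus Python SDK (globus-sdk).
# The token, collection UUIDs, and paths below are placeholders.
import globus_sdk

authorizer = globus_sdk.AccessTokenAuthorizer("TRANSFER-API-TOKEN")   # placeholder token
tc = globus_sdk.TransferClient(authorizer=authorizer)

task_data = globus_sdk.TransferData(
    tc,
    "SOURCE-COLLECTION-UUID",          # placeholder endpoint/collection IDs
    "DESTINATION-COLLECTION-UUID",
    label="nightly dataset sync",
    sync_level="checksum",             # only re-copy files whose checksums differ
)
task_data.add_item("/projects/run42/", "/archive/run42/", recursive=True)

task = tc.submit_transfer(task_data)   # Globus retries, verifies, and notifies from here
print("task id:", task["task_id"])
```

Once submitted, the transfer is managed by the hosted service: the user can close the laptop and Globus keeps retrying until the data lands and is verified.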


Globus also supports protected and regulated data, offers automation and orchestration tools (Flows), and enables scalable compute execution across diverse environments (Globus Compute). Adoption metrics indicate widespread global use across thousands of institutions, users, and data collections, with significant data volumes transferred daily. The platform follows a freemium subscription model, with free access for nonprofit research and paid tiers for enhanced features, compliance requirements, and commercial use. Target customers include research universities, national labs, supercomputing centers, government agencies, and commercial research organizations. 
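Globus Compute follows a similar pattern on the execution side: a function is shipped to a registered endpoint and runs next to the data. A minimal sketch with the globus-compute-sdk follows; the endpoint UUID and file path are placeholders.

```python
# Minimal remote execution sketch with the Globus Compute SDK; the endpoint UUID
# is a placeholder for a registered Globus Compute endpoint you operate.
from globus_compute_sdk import Executor

def count_lines(path: str) -> int:
    """Runs on the remote endpoint, next to the data, instead of moving the data."""
    with open(path) as f:
        return sum(1 for _ in f)

with Executor(endpoint_id="YOUR-COMPUTE-ENDPOINT-UUID") as gce:
    future = gce.submit(count_lines, "/data/experiment/results.csv")
    print("lines:", future.result())    # blocks until the remote task returns
```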


Overall, Globus positions itself as a horizontal, domain-agnostic infrastructure layer for modern research, addressing the challenges of distributed science, data-intensive workflows, and collaborative research ecosystems.


Tuesday, January 20, 2026

Back in California for the 66th Edition of The IT Press Tour

The IT Press Tour, a media event launched in June 2010, announced participating companies for the 66th edition, scheduled for the week of January 26 in Silicon Valley, CA.

During this edition, the press group will meet 8 hot and innovative organizations:
  • Globus, a widely used unstructured data management solution built by the University of Chicago,
  • Helikai, a young agentic AI company launched by Jamie Lerner,
  • InfoScale, the commercial entity that promotes the historical Veritas Software products,
  • The Lustre Collective, an independent organization ensuring the continued development of Lustre,
  • Novodisq, a recent player based in New Zealand building a very dense flash-based storage system,
  • Scale Computing, a reference in workload consolidation for the edge,
  • VergeIO, an innovative server virtualization platform that replaces VMware,
  • and Zettalane, a flexible block and file SDS for the cloud.

I invite you to follow us on Twitter with #ITPT and @ITPressTour, my Twitter handle @CDP_FST, and the journalists' respective handles.

Monday, December 15, 2025

DBaaS, a must-have on-premises and in the cloud

Severalnines delivered an interesting presentation during The IT Press Tour last week in Athens, Greece.

The CEO, Vinay Joosery, outlines a practical and timely vision for Sovereign DBaaS, addressing growing concerns around cloud lock-in, cost control, regulatory compliance, and data sovereignty. Drawing on more than 20 years of experience in databases and open source, Severalnines positions itself as an enabler for organizations that want the benefits of Database-as-a-Service without surrendering control to hyperscalers or proprietary vendors.


The presentation opens by framing the current cloud market imbalance. Hyperscalers dominate infrastructure spending with investments measured in tens of billions of dollars annually, while European cloud service providers operate on dramatically smaller budgets. This structural gap, combined with increasing regulatory pressure around data residency and sovereignty, has created a crisis for organizations seeking long-term control over their data platforms. Traditional DBaaS offerings deliver convenience and speed, but at the cost of vendor lock-in, escalating expenses at scale, and limited deployment flexibility.


Severalnines describes the evolution of DBaaS in three phases. The first phase, led by hyperscalers, introduced managed databases optimized for rapid deployment but tightly coupled to a single cloud environment. The second phase saw database vendors offering their own managed services, reducing cloud lock-in but introducing database-level lock-in and still limiting on-premises or hybrid deployments. The third and emerging phase is Sovereign DBaaS, where organizations build and operate their own DBaaS platforms using open-source databases, retaining full control over infrastructure, configuration, security, and costs.

Sovereign DBaaS is defined as a vendor-neutral, self-implemented model that supports polyglot database environments across on-premises, private cloud, public cloud, or hybrid setups. It emphasizes portability, ownership, and freedom of choice, allowing enterprises to meet regulatory and governance requirements while avoiding forced migrations or proprietary constraints. This model aligns with the reality that most enterprises are now multi-cloud or hybrid, managing hundreds of database instances across diverse platforms and technologies.


At the core of Severalnines’ approach is ClusterControl, which delivers the operational experience of DBaaS without ceding control. The platform automates the full database lifecycle - deployment, scaling, monitoring, backup, recovery, security, upgrades, and performance optimization - while focusing on Day-2 operations. It supports a wide range of open-source databases including MySQL, PostgreSQL, MongoDB, Redis, and others, across heterogeneous infrastructure environments.

The presentation highlights key benefits: predictable cost management using open-source licensing, infrastructure and database ownership, compliance with data residency requirements, and consistent operations across environments. A customer case study from ABSA (formerly Barclays Africa Group) illustrates this approach at scale, with thousands of servers managed in a hybrid environment using open-source automation instead of proprietary DBaaS platforms.

In conclusion, Severalnines argues that Sovereign DBaaS represents the future of database platforms for enterprises seeking independence, resilience, and long-term sustainability. Rather than rejecting cloud technologies, it reclaims control by combining automation, open source, and flexible deployment - delivering DBaaS “your way,” without lock-in, hidden costs, or loss of sovereignty.

Interesting topics to follow in the coming quarters.


Friday, December 12, 2025

Enakta Labs to promote DAOS with pretty exclusive expertise

Enakta Labs participated in The IT Press Tour this week in Athens, Greece, and we discovered a very interesting approach to large-scale HPC and AI storage environments, with special expertise in DAOS.

The presentation introduces Enakta Labs, a storage software company founded in 2023 with the goal of making DAOS (Distributed Asynchronous Object Storage) practical, usable, and supportable for enterprise and high-performance environments. Founded by Denis Nuja and Denis Barakhtanov - both long-time contributors to DAOS and members of the original DAOS Foundation team - Enakta Labs builds on years of hands-on experience deploying DAOS-based systems in demanding production and government environments.


Enakta Labs positions its platform as a bridge between the raw performance of DAOS and the operational realities of enterprise IT. While DAOS is widely recognized as one of the fastest storage engines available—particularly for HPC and AI workloads—it has traditionally been difficult to deploy, manage, and integrate. Enakta Labs addresses this gap by delivering a fully managed DAOS lifecycle platform that runs on bare metal, removes operational complexity, and exposes familiar enterprise interfaces.

At the core of the offering is Enakta Labs Platform v1.3, based on DAOS 2.6.4. The platform provides a highly available management framework, containerized data services, and a slim, immutable Linux OS image booted entirely via PXE. This approach allows entire clusters - from small systems to hundreds of nodes - to be deployed in hours rather than days or weeks. All services run in lightweight containers, and the system avoids legacy dependencies such as TFTP, favoring modern HTTPS-based PXE booting.

A major differentiator is usability and integration. Enakta Labs provides a fully featured, high-performance SMB interface, enabling DAOS to be consumed by traditional enterprise applications, alongside a limited-API S3 interface (currently in technical preview). Native PyTorch integration allows direct acceleration of AI and machine-learning workflows, while support for RoCE over 400-Gb Ethernet enables extreme throughput with low latency on modern networking hardware.
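The details of Enakta Labs' native PyTorch integration were not shown in depth, but the consumption model can be illustrated with a standard PyTorch Dataset reading training samples from a DAOS container exposed as a POSIX path (for instance through dfuse). The mount path and file layout below are assumptions for the sketch, not Enakta Labs' API.

```python
# Sketch of feeding a training loop from a DAOS-backed POSIX path (e.g. a dfuse
# mountpoint). The mount path and file layout are assumptions, not Enakta Labs' API.
from pathlib import Path
import torch
from torch.utils.data import Dataset, DataLoader

class DaosTensorDataset(Dataset):
    def __init__(self, root: str):
        self.files = sorted(Path(root).glob("*.pt"))   # pre-serialized tensor samples

    def __len__(self) -> int:
        return len(self.files)

    def __getitem__(self, idx: int) -> torch.Tensor:
        return torch.load(self.files[idx])             # served from DAOS at NVMe-class speed

loader = DataLoader(
    DaosTensorDataset("/mnt/daos/training-set"),       # hypothetical dfuse mountpoint
    batch_size=64,
    num_workers=8,                                     # parallel readers keep GPUs busy
    pin_memory=True,
)
```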

Performance and scalability are central themes of the presentation. Enakta Labs demonstrates real-world benchmarks from large GPU clusters, including participation in the IO-500 benchmark on Core42’s 10,000-GPU “Maximus-01” system. These results show the platform scaling linearly with hardware, achieving extremely high bandwidth and IOPS while approaching physical hardware limits. The company emphasizes that future gains will come automatically as faster NVMe drives and network adapters are introduced—without requiring architectural redesigns.

The platform is designed to eliminate common pain points in high-performance storage: GPU idle time caused by slow I/O, excessively long filesystem checks, over-provisioning to meet performance targets, and the need for deep DAOS expertise just to operate a system. Enakta Labs also highlights its customer-centric philosophy: transparent roadmaps, simple licensing, no mandatory professional services, and direct access to engineering support - even outside business hours.

Target use cases include HPC, AI/ML, fintech, and media and entertainment, particularly environments where storage performance directly impacts business or research outcomes. The go-to-market strategy is partner-first and hardware-agnostic, with simple node-based perpetual licensing and subscription options for service providers. Looking ahead, the roadmap includes broader S3 and NFS support, continued performance optimization, and deeper ecosystem partnerships, while maintaining a deliberate focus on product quality and customer satisfaction over aggressive VC-driven growth.


Overall, Enakta Labs presents itself as a pragmatic enabler of next-generation storage - unlocking DAOS performance for real-world enterprise and AI workloads without the traditional complexity barrier.

The coming months will be interesting, as vendors like HPE have clearly selected DAOS for HPC and AI storage, with systems already deployed at well-known US labs.


Thursday, November 27, 2025

65th Edition of The IT Press Tour in Athens, Greece

The IT Press Tour, a media event launched in June 2010, announced participating companies for the 65th edition, scheduled for December 9 and 10 in Athens, Greece.

During this edition, the press group will meet:
  • 9LivesData, the developer of NEC HYDRAstor, introducing a new product named high9stor compatible with HYDRAstor,
  • Enakta Labs, a recognized expert team on DAOS,
  • Ewigbyte, an innovator around long-term data preservation on glass,
  • HyperBunker, an interesting dual air-gap model,
  • Plakar, fast-growing open-source backup and recovery software,
  • and Severalnines, a reference in DBaaS.

I invite you to follow us on Twitter with #ITPT and @ITPressTour, my Twitter handle @CDP_FST, and the journalists' respective handles.

Thursday, October 23, 2025

Shade shakes up media and entertainment storage

Shade recently participated in The IT Press Tour. The company is positioning itself as a new foundational layer for creative work in an era where traditional cloud storage is buckling under the weight of exploding media files and AI-driven production. During its IT Press Tour presentation in New York, the company painted a stark picture of a creative economy hitting structural limits: by 2027, requests for creative data are expected to jump from 15 million terabytes per minute today to more than 100 million, fueled by higher-resolution media and generative AI workflows.

The presentation opened with unfiltered customer testimonials describing Dropbox, Box, and Google Drive as slow, brittle, and operationally hostile to modern creative teams. From multi-hour downloads for short video edits to accounts frozen for accessing large projects, Shade argues that general-purpose cloud storage was never designed for large, collaborative, media-heavy workflows.

Shade’s core thesis is that “every company is now a creative company,” whether in marketing, sports, media, or enterprise communications. Yet most teams rely on a fragmented stack of tools—LucidLink for fast access, Frame.io for review, Dropbox or Box for sharing, and assorted archives for cold storage. This fragmentation creates cost overruns, security blind spots, and a massive productivity tax on creative directors who become de facto file managers.

Shade proposes a single alternative: an intelligent, cloud-native NAS designed specifically for creative teams. Its platform combines real-time file streaming, large-project support, built-in review and annotation, version control, and web-based sharing into a single “source of truth.” Instead of downloading massive files, users stream them instantly. Instead of juggling tools, teams upload once and collaborate across desktop and web interfaces.

A major differentiator is Shade’s embedded AI layer. The platform offers AI-driven autotagging, semantic search, facial recognition, transcription, and automated previews for complex media formats. Tasks that once took hours—such as tagging hundreds of videos or locating assets from years-old shoots—are reduced to seconds. Shade positions this not as novelty AI, but as infrastructure that finally makes large creative archives usable.

The economic argument is equally central. Shade claims customers can reduce storage and workflow costs by up to 70 percent by consolidating tools. Real-world examples in the deck show organizations spending hundreds of thousands of dollars annually across multiple platforms, compared with significantly lower all-in costs on Shade.

Looking ahead, Shade plans to extend beyond creative teams into broader business workflows. Its roadmap includes automation via APIs, deeper integrations with creative and AI ecosystems, push/pull data movement across S3-compatible storage, and the launch of Shade Vault for optimized cold media storage. Long term, Shade envisions content flowing automatically between creative, marketing, finance, and operations—turning media from a cost center into a connected business asset.

In short, Shade is betting that creative storage is no longer a niche problem but a systemic one—and that solving it requires rethinking storage, collaboration, and intelligence as a single platform rather than a patchwork of tools.


Tuesday, October 21, 2025

TextQL to make Big Data queryable without the complexity

TextQL joined the recent edition of The IT Press Tour in New York City. The firm positions itself as a pragmatic response to one of enterprise data's most persistent frustrations: extracting value from large, complex datasets without forcing every user to become a data engineer. The company has built its proposition around a simple idea - querying data should be as intuitive as writing a question - while still meeting the performance, scale, and governance demands of modern organizations.

At its core, TextQL provides a natural-language interface that allows users to ask questions about their data in plain English and receive structured, SQL-ready outputs. Instead of replacing SQL or existing analytics platforms, TextQL acts as a translation layer between human intent and technical execution. Business analysts, product managers, and operational teams can query data conversationally, while the system generates optimized queries that data teams can inspect, refine, or deploy directly.
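The shape of such a translation layer can be sketched as follows; this is a toy illustration of the pattern (question in, transparent SQL plus schema context out), not TextQL's actual API, and the generation step is stubbed with a fixed example.

```python
# Hypothetical illustration of a natural-language-to-SQL translation layer that
# returns inspectable SQL instead of an opaque answer (not TextQL's actual API).
from dataclasses import dataclass

@dataclass
class GeneratedQuery:
    question: str        # the user's plain-English question
    sql: str             # transparent, auditable SQL for review
    tables_used: list    # schema context the generator relied on

def translate(question: str) -> GeneratedQuery:
    # In a real system this step is model-driven and schema-aware;
    # here it is stubbed with a fixed example purely for illustration.
    return GeneratedQuery(
        question=question,
        sql=(
            "SELECT region, SUM(amount) AS revenue\n"
            "FROM orders\n"
            "WHERE order_date >= DATE '2025-01-01'\n"
            "GROUP BY region\n"
            "ORDER BY revenue DESC;"
        ),
        tables_used=["orders"],
    )

q = translate("Which regions generated the most revenue this year?")
print(q.sql)   # a data engineer can inspect, refine, or deploy this query
```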

What differentiates TextQL from earlier “chat with your data” attempts is its emphasis on reliability and enterprise-readiness. The platform is designed to understand database schemas, relationships, and constraints, reducing the risk of ambiguous or misleading queries. Rather than producing opaque answers, TextQL outputs transparent, auditable SQL, helping organizations maintain trust in the results and align with governance and compliance requirements.

The company is clearly targeting environments where data complexity has outpaced usability. As organizations accumulate data across warehouses, lakes, and operational systems, access often becomes bottlenecked by a small group of specialists. TextQL aims to relieve that pressure by democratizing access while keeping technical control in place. In practice, this means faster insights for non-technical teams and fewer ad hoc requests landing on data engineers’ desks.

TextQL also frames its technology as an accelerator rather than a disruption. It integrates with existing databases, BI tools, and workflows, allowing organizations to adopt it incrementally. This “fit-into-what-you-already-have” approach reflects an understanding of enterprise realities, where wholesale platform replacement is rarely an option. By working alongside established data stacks, TextQL positions itself as a productivity layer rather than yet another system to manage.

From a market perspective, TextQL sits at the intersection of analytics, AI-assisted development, and data accessibility. Its pitch resonates particularly in sectors where data-driven decision-making is widespread but unevenly distributed, such as finance, SaaS, retail, and operations-heavy enterprises. The value proposition is less about flashy AI outputs and more about shaving time, reducing friction, and lowering the barrier between questions and answers.

In an era where organizations are awash in data but still struggle to use it effectively, TextQL’s approach reflects a broader shift: success is no longer defined by how much data you store, but by how easily people can work with it. By translating human language into structured queries with precision and accountability, TextQL is betting that the future of analytics lies not in replacing experts, but in empowering everyone else to think - and ask questions - like one.


Thursday, October 16, 2025

CTERA extends its data platform with an intelligent data fabric

At The IT Press Tour in New York, CTERA outlined a long-term vision for how enterprises must rethink unstructured data management as they move deeper into hybrid cloud, edge computing, and generative AI. The company argues that unstructured data—files, media, logs, sensor outputs—has become both the greatest operational burden and the largest untapped asset inside modern organizations.

Founded as a software-defined storage company, CTERA now positions itself as a leader in distributed enterprise data services, targeting a $10B+ file storage market with an object-based, globally distributed architecture. The company reports strong momentum, with recurring high-margin software revenue, 35% growth, and 125% net retention, largely driven through a partner-centric go-to-market model.

The presentation framed CTERA’s strategy around three “waves of innovation.” The first, Location Intelligence, addresses the fragmentation created by on-premises systems, edge locations, and multiple clouds. As enterprises triple unstructured data capacity by 2028, architectural complexity has surged, with hybrid models becoming the norm. CTERA’s response is a global file namespace that spans data centers, clouds, and edge sites, delivering local performance via caching while maintaining centralized control, security, and disaster recovery.

The second wave, Metadata Intelligence, focuses on turning this unified environment into a secure data lake. As ransomware and insider threats intensify, CTERA argues that storage itself must become security-aware. Its platform incorporates immutable snapshots, block-level anomaly detection, and continuous file activity monitoring. Recent product launches—including Ransom Protect, Insight, and metadata-driven analytics—aim to provide visibility, forensic insight, and automated response without copying or relocating sensitive data.

The third wave, Enterprise Intelligence, addresses the growing gap between AI ambition and reality. While enterprise GenAI spending is projected to exceed $400 billion by 2028, studies show that the vast majority of pilots fail. CTERA attributes this not to model limitations, but to poor data quality caused by silos, inconsistent formats, weak metadata, and security constraints. The company’s answer is an intelligent data fabric that curates high-quality datasets by enforcing access controls, enriching metadata, filtering content, and enabling semantic retrieval directly over existing files and object storage.

Rather than positioning AI as an abstract analytics layer, CTERA envisions “virtual employees”—permission-aware AI agents that operate as domain experts within defined guardrails. Built on Model Context Protocol (MCP), these agents can search, summarize, retrieve, and act on enterprise data while respecting existing ACLs and compliance requirements.
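The Model Context Protocol piece can be made a little more tangible with a minimal tool server built on the official MCP Python SDK. The document index and ACL check below are hypothetical stand-ins for illustration, not CTERA's implementation.

```python
# Minimal permission-aware MCP tool server using the official MCP Python SDK's
# FastMCP helper. The index and ACL check are hypothetical placeholders.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("file-assistant")

FAKE_INDEX = {
    "q3-brand-guidelines.pdf": {"finance", "marketing"},
    "fleet-maintenance-log.csv": {"operations"},
}

def acl_allows(group: str, doc: str) -> bool:
    """Placeholder ACL check: the agent only returns what the caller's group may see."""
    return group in FAKE_INDEX.get(doc, set())

@mcp.tool()
def search_documents(query: str, group: str) -> list[str]:
    """Return documents matching the query, filtered by the caller's permissions."""
    hits = [doc for doc in FAKE_INDEX if query.lower() in doc.lower()]
    return [doc for doc in hits if acl_allows(group, doc)]

if __name__ == "__main__":
    mcp.run()   # expose the tool to MCP-compatible agents over stdio
```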

Across case studies—from global branding agencies to government fleets and healthcare legal firms—CTERA emphasizes a consistent theme: organizations do not need more data, but better data infrastructure. By evolving from distributed file storage to an AI-ready data fabric, CTERA aims to turn unstructured data from a growing liability into a durable competitive advantage.
