VergeIO is the developer of VergeOS, a private cloud operating system designed to replace traditional multi-layer virtualization stacks with a single, unified software platform. Founded in 2012, VergeIO positions VergeOS as a fundamentally different approach to private cloud infrastructure - one that collapses compute, storage, networking, automation, and management into a single codebase rather than integrating multiple independent products. This architecture is intended to reduce complexity, improve efficiency, and dramatically lower cost compared to legacy stacks such as VMware, Nutanix, or multi-vendor three-tier architectures.

At the core of VergeOS is a single-kernel, single-SKU design built on a KVM/QEMU foundation with extensive proprietary extensions. Instead of separate hypervisor, storage, network, and management layers, VergeOS integrates all services directly into the operating system, eliminating translation layers, duplicated metadata, and operational silos. VergeIO emphasizes that this unified codebase—roughly 400,000 lines of code compared to tens of millions in traditional stacks—enables higher performance, lower latency, and easier lifecycle management.

Key platform components include VergeFS, an integrated storage system with global inline deduplication across disk, memory, and data movement; VergeFabric, a built-in software-defined networking layer providing Layer 2/Layer 3 services, micro-segmentation, routing, and security without external controllers; and ioClone-based snapshots that create fully independent, immutable copies with no performance penalty. VergeOS supports live VM migration across nodes and storage tiers, mixed hardware generations, heterogeneous CPU vendors, and flexible “ultraconverged” deployments where compute-heavy and storage-heavy nodes coexist within the same system.
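
As a rough illustration of what inline deduplication means in practice (a generic sketch, not VergeIO's actual on-disk format), the toy Python store below fingerprints each block as it is written and keeps only one copy of each unique block:

```python
import hashlib

class DedupStore:
    """Toy content-addressed block store: each unique block is kept once."""

    def __init__(self, block_size=4096):
        self.block_size = block_size
        self.blocks = {}   # fingerprint -> block bytes (stored once)
        self.files = {}    # filename -> list of fingerprints

    def write(self, name, data):
        refs = []
        for i in range(0, len(data), self.block_size):
            block = data[i:i + self.block_size]
            fp = hashlib.sha256(block).hexdigest()
            # Inline dedup: only store the block if its fingerprint is new.
            self.blocks.setdefault(fp, block)
            refs.append(fp)
        self.files[name] = refs

    def read(self, name):
        return b"".join(self.blocks[fp] for fp in self.files[name])

store = DedupStore()
store.write("vm1.img", b"A" * 8192 + b"B" * 4096)
store.write("vm2.img", b"A" * 8192)   # shares its blocks with vm1.img
print(len(store.blocks))              # 2 unique blocks stored, not 5
```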

For resilience and data protection, VergeOS introduces ioGuardian, a kernel-level local protection mechanism that enables near-instant recovery from catastrophic failures by dynamically retrieving only required data blocks, and ioReplicate, a WAN-optimized replication engine that transmits only unique deduplicated data. Disaster recovery is built around virtual data centers (VDCs), VergeOS’s native multi-tenancy model, which captures entire environments (networking, storage, policies, and VMs) in consistent snapshots that can be restored in minutes, with a reported 100% customer DR success rate.

VergeOS is licensed per physical server, with all features included, regardless of core count, memory, or storage capacity. This model, combined with hardware reuse (including VxRail, Nutanix, and commodity servers), enables customers to modernize infrastructure, repatriate workloads from public cloud, and exit VMware with reported cost reductions of 50–80% and significant operational simplification. Overall, VergeIO positions VergeOS as a full private cloud operating system that transforms infrastructure from a complex stack into a cohesive, software-defined platform optimized for efficiency, resilience, and long-term sustainability.

InfoScale is an enterprise software platform focused on delivering real-time operational resilience across applications, data, and infrastructure in hybrid, multi-cloud, and on-prem environments. With roots in Veritas and decades of evolution through Symantec, Arctera, and now Cloud Software Group (CSG), InfoScale positions itself as a foundational resilience layer for mission-critical enterprise systems across industries such as finance, healthcare, telecom, energy, and insurance. The company reports widespread enterprise adoption, significant downtime reduction, and large operational efficiency gains among customers.

The core challenge InfoScale addresses is the growing complexity and risk in modern IT operations, driven by hybrid cloud environments, increasing cyber threats, AI-driven workloads, and shrinking tolerance for downtime. Enterprises now operate thousands of interconnected applications and infrastructure layers, where outages, ransomware, human error, maintenance activities, and cloud disruptions can cause severe financial and operational impact. Traditional point solutions for availability, backup, and recovery are siloed and reactive, creating blind spots and fragmented resilience strategies.

InfoScale’s approach is “software-defined resiliency,” integrating application-aware clustering, storage resiliency, orchestration, failover, replication, snapshots, and automation into a unified full-stack platform. It provides real-time application health monitoring, rapid recovery, data integrity controls, secure snapshots, and cross-cloud mobility, enabling near-zero RPO and RTO while supporting heterogeneous operating systems, storage platforms, and cloud environments. The platform is designed to orchestrate dependencies across applications, infrastructure, and data, delivering automated failover, disaster recovery, and cyber resilience at scale.
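
To make the application-aware failover idea concrete, here is a deliberately generic Python sketch (probe the active node, promote a standby after repeated failed checks); the node names, probe, and threshold are illustrative placeholders, not InfoScale's actual agent or API:

```python
import time

# Conceptual sketch of application-aware failover, not InfoScale's API.
FAIL_THRESHOLD = 3          # consecutive failed probes before failover

def app_healthy(node):
    """Placeholder for a real probe (process, port, and application query)."""
    return node["alive"]

nodes = {"db-primary": {"alive": False},   # simulate a failed primary
         "db-standby": {"alive": True}}
active, standby = "db-primary", "db-standby"
failures = 0

while True:
    if app_healthy(nodes[active]):
        failures = 0
    else:
        failures += 1
        print(f"{active} failed probe {failures}/{FAIL_THRESHOLD}")
    if failures >= FAIL_THRESHOLD:
        print(f"Failing over: promoting {standby}")
        active, standby = standby, active
        failures = 0
        break               # a real cluster agent would keep monitoring
    time.sleep(0.5)
```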

The platform emphasizes operational intelligence and orchestration, allowing enterprises to simulate failures, test recovery plans, detect anomalies, and coordinate recovery workflows across complex environments. Future roadmap capabilities include predictive analytics for failure forecasting, intelligent workload optimization, automated recovery architecture design, fault simulation, contextual alerting using AI, and proactive threat detection for ransomware and cyber attacks. These features aim to move resilience from reactive recovery to autonomous, predictive operations.

Overall, InfoScale positions itself as a comprehensive enterprise resilience engine that unifies data, infrastructure, and application management, helping organizations reduce downtime, mitigate cyber risk, modernize IT operations, and maintain continuous business operations in increasingly complex digital environments.

Globus is a nonprofit research IT platform developed and operated by the University of Chicago, with a mission to increase the efficiency and effectiveness of data-driven research through sustainable software. For nearly 30 years, Globus has evolved from early distributed computing and grid technologies into a leading software-as-a-service platform for managing research data, computation, and collaboration at scale. Its work has supported major scientific advances and global research infrastructure, including grid computing contributions associated with Nobel-recognized discoveries and large international scientific collaborations.

The platform is designed specifically for the research market, which has unique requirements such as secure but open science, highly distributed multi-institutional collaborations, and diverse domain-specific tools and workflows. Researchers typically operate in environments with on-premise compute and storage, high-performance networks, and access to distributed national or international computing resources, creating a need for unified, secure, and reliable data and compute services across heterogeneous systems.

Globus provides a comprehensive research IT platform that includes managed data transfer and synchronization, collaborative data sharing, unified data access across storage systems, publication and discovery services, remote compute execution, automation workflows, and metadata indexing for data discovery. Its hybrid architecture integrates hosted cloud services with local agents and institutional resources, enabling secure, federated access and orchestration across laptops, labs, on-prem infrastructure, cloud storage, and HPC facilities. Key capabilities include “fire-and-forget” reliable data transfers, secure tunneling across security boundaries, fine-grained access control for sharing, and federated authentication.
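
For readers who want to see what "fire-and-forget" looks like in code, the sketch below submits a transfer with the globus-sdk Python library; the client ID and collection UUIDs are placeholders for values from an app registered at developers.globus.org, and the paths are illustrative:

```python
# Minimal sketch of a managed transfer with globus-sdk (pip install globus-sdk).
import globus_sdk

CLIENT_ID = "YOUR-NATIVE-APP-CLIENT-ID"
SRC_COLLECTION = "SOURCE-COLLECTION-UUID"
DST_COLLECTION = "DESTINATION-COLLECTION-UUID"

# Interactive login for a native app; paste back the authorization code.
auth_client = globus_sdk.NativeAppAuthClient(CLIENT_ID)
auth_client.oauth2_start_flow(
    requested_scopes="urn:globus:auth:scope:transfer.api.globus.org:all"
)
print("Log in at:", auth_client.oauth2_get_authorize_url())
tokens = auth_client.oauth2_exchange_code_for_tokens(input("Auth code: ").strip())
transfer_token = tokens.by_resource_server["transfer.api.globus.org"]["access_token"]

tc = globus_sdk.TransferClient(
    authorizer=globus_sdk.AccessTokenAuthorizer(transfer_token)
)

# Submit the transfer; the Globus service retries and verifies it server-side.
task_data = globus_sdk.TransferData(
    tc, SRC_COLLECTION, DST_COLLECTION, label="example", sync_level="checksum"
)
task_data.add_item("/data/experiment/run42/", "/archive/run42/", recursive=True)
task = tc.submit_transfer(task_data)
print("Submitted task:", task["task_id"])
```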

Globus also supports protected and regulated data, offers automation and orchestration tools (Flows), and enables scalable compute execution across diverse environments (Globus Compute). Adoption metrics indicate widespread global use across thousands of institutions, users, and data collections, with significant data volumes transferred daily. The platform follows a freemium subscription model, with free access for nonprofit research and paid tiers for enhanced features, compliance requirements, and commercial use. Target customers include research universities, national labs, supercomputing centers, government agencies, and commercial research organizations.

Overall, Globus positions itself as a horizontal, domain-agnostic infrastructure layer for modern research, addressing the challenges of distributed science, data-intensive workflows, and collaborative research ecosystems.

The IT Press Tour, a media event launched in June 2010, announced the participating companies for its 66th edition, scheduled for the week of January 26 in Silicon Valley, CA.

Severalnines’ CEO, Vinay Joosery, outlines a practical and timely vision for Sovereign DBaaS, addressing growing concerns around cloud lock-in, cost control, regulatory compliance, and data sovereignty. Drawing on more than 20 years of experience in databases and open source, Severalnines positions itself as an enabler for organizations that want the benefits of Database-as-a-Service without surrendering control to hyperscalers or proprietary vendors.

The presentation opens by framing the current cloud market imbalance. Hyperscalers dominate infrastructure spending with investments measured in tens of billions of dollars annually, while European cloud service providers operate on dramatically smaller budgets. This structural gap, combined with increasing regulatory pressure around data residency and sovereignty, has created a crisis for organizations seeking long-term control over their data platforms. Traditional DBaaS offerings deliver convenience and speed, but at the cost of vendor lock-in, escalating expenses at scale, and limited deployment flexibility.

Severalnines describes the evolution of DBaaS in three phases. The first phase, led by hyperscalers, introduced managed databases optimized for rapid deployment but tightly coupled to a single cloud environment. The second phase saw database vendors offering their own managed services, reducing cloud lock-in but introducing database-level lock-in and still limiting on-premises or hybrid deployments. The third and emerging phase is Sovereign DBaaS, where organizations build and operate their own DBaaS platforms using open-source databases, retaining full control over infrastructure, configuration, security, and costs.
Sovereign DBaaS is defined as a vendor-neutral, self-implemented model that supports polyglot database environments across on-premises, private cloud, public cloud, or hybrid setups. It emphasizes portability, ownership, and freedom of choice, allowing enterprises to meet regulatory and governance requirements while avoiding forced migrations or proprietary constraints. This model aligns with the reality that most enterprises are now multi-cloud or hybrid, managing hundreds of database instances across diverse platforms and technologies.
At the core of Severalnines’ approach is ClusterControl, which delivers the operational experience of DBaaS without ceding control. The platform automates the full database lifecycle - deployment, scaling, monitoring, backup, recovery, security, upgrades, and performance optimization - while focusing on Day-2 operations. It supports a wide range of open-source databases including MySQL, PostgreSQL, MongoDB, Redis, and others, across heterogeneous infrastructure environments.
The presentation highlights key benefits: predictable cost management using open-source licensing, infrastructure and database ownership, compliance with data residency requirements, and consistent operations across environments. A customer case study from ABSA (formerly Barclays Africa Group) illustrates this approach at scale, with thousands of servers managed in a hybrid environment using open-source automation instead of proprietary DBaaS platforms.
In conclusion, Severalnines argues that Sovereign DBaaS represents the future of database platforms for enterprises seeking independence, resilience, and long-term sustainability. Rather than rejecting cloud technologies, it reclaims control by combining automation, open source, and flexible deployment - delivering DBaaS “your way,” without lock-in, hidden costs, or loss of sovereignty.
Interesting topics to follow in the coming quarters.

The presentation introduces Enakta Labs, a storage software company founded in 2023 with the goal of making DAOS (Distributed Asynchronous Object Storage) practical, usable, and supportable for enterprise and high-performance environments. Founded by Denis Nuja and Denis Barakhtanov - both long-time contributors to DAOS and members of the original DAOS Foundation team - Enakta Labs builds on years of hands-on experience deploying DAOS-based systems in demanding production and government environments.

Enakta Labs positions its platform as a bridge between the raw performance of DAOS and the operational realities of enterprise IT. While DAOS is widely recognized as one of the fastest storage engines available—particularly for HPC and AI workloads—it has traditionally been difficult to deploy, manage, and integrate. Enakta Labs addresses this gap by delivering a fully managed DAOS lifecycle platform that runs on bare metal, removes operational complexity, and exposes familiar enterprise interfaces.
At the core of the offering is Enakta Labs Platform v1.3, based on DAOS 2.6.4. The platform provides a highly available management framework, containerized data services, and a slim, immutable Linux OS image booted entirely via PXE. This approach allows entire clusters - from small systems to hundreds of nodes - to be deployed in hours rather than days or weeks. All services run in lightweight containers, and the system avoids legacy dependencies such as TFTP, favoring modern HTTPS-based PXE booting.
A major differentiator is usability and integration. Enakta Labs provides a fully featured, high-performance SMB interface, enabling DAOS to be consumed by traditional enterprise applications, alongside a limited-API S3 interface (currently in technical preview). Native PyTorch integration allows direct acceleration of AI and machine-learning workflows, while support for RoCE over 400-Gb Ethernet enables extreme throughput with low latency on modern networking hardware.
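
As a rough sketch of how such a share slots into an AI pipeline, the plain-PyTorch loader below reads training samples from an assumed mount point (/mnt/enakta is a placeholder); it illustrates the workflow rather than Enakta's native PyTorch integration:

```python
# Plain-PyTorch sketch of training input served from a DAOS-backed share.
# The mount point and file layout are assumptions made for illustration.
from pathlib import Path

import torch
from torch.utils.data import DataLoader, Dataset

class TensorFileDataset(Dataset):
    """Loads one serialized tensor per sample from a mounted share."""

    def __init__(self, root):
        self.files = sorted(Path(root).glob("*.pt"))

    def __len__(self):
        return len(self.files)

    def __getitem__(self, idx):
        return torch.load(self.files[idx])

dataset = TensorFileDataset("/mnt/enakta/training-set")
loader = DataLoader(dataset, batch_size=64, num_workers=8, pin_memory=True)

for batch in loader:
    pass  # forward/backward pass would go here
```
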
Performance and scalability are central themes of the presentation. Enakta Labs demonstrates real-world benchmarks from large GPU clusters, including participation in the IO-500 benchmark on Core42’s 10,000-GPU “Maximus-01” system. These results show the platform scaling linearly with hardware, achieving extremely high bandwidth and IOPS while approaching physical hardware limits. The company emphasizes that future gains will come automatically as faster NVMe drives and network adapters are introduced—without requiring architectural redesigns.
The platform is designed to eliminate common pain points in high-performance storage: GPU idle time caused by slow I/O, excessively long filesystem checks, over-provisioning to meet performance targets, and the need for deep DAOS expertise just to operate a system. Enakta Labs also highlights its customer-centric philosophy: transparent roadmaps, simple licensing, no mandatory professional services, and direct access to engineering support - even outside business hours.
Target use cases include HPC, AI/ML, fintech, and media and entertainment, particularly environments where storage performance directly impacts business or research outcomes. The go-to-market strategy is partner-first and hardware-agnostic, with simple node-based perpetual licensing and subscription options for service providers. Looking ahead, the roadmap includes broader S3 and NFS support, continued performance optimization, and deeper ecosystem partnerships, while maintaining a deliberate focus on product quality and customer satisfaction over aggressive VC-driven growth.

Overall, Enakta Labs presents itself as a pragmatic enabler of next-generation storage - unlocking DAOS performance for real-world enterprise and AI workloads without the traditional complexity barrier.
The coming months will be interesting, as vendors such as HPE have clearly selected DAOS for HPC and AI storage, already deployed at well-known US labs.

The presentation opened with unfiltered customer testimonials describing Dropbox, Box, and Google Drive as slow, brittle, and operationally hostile to modern creative teams. From multi-hour downloads for short video edits to accounts frozen for accessing large projects, Shade argues that general-purpose cloud storage was never designed for large, collaborative, media-heavy workflows.
Shade’s core thesis is that “every company is now a creative company,” whether in marketing, sports, media, or enterprise communications. Yet most teams rely on a fragmented stack of tools—LucidLink for fast access, Frame.io for review, Dropbox or Box for sharing, and assorted archives for cold storage. This fragmentation creates cost overruns, security blind spots, and a massive productivity tax on creative directors who become de facto file managers.
Shade proposes a single alternative: an intelligent, cloud-native NAS designed specifically for creative teams. Its platform combines real-time file streaming, large-project support, built-in review and annotation, version control, and web-based sharing into a single “source of truth.” Instead of downloading massive files, users stream them instantly. Instead of juggling tools, teams upload once and collaborate across desktop and web interfaces.
A major differentiator is Shade’s embedded AI layer. The platform offers AI-driven autotagging, semantic search, facial recognition, transcription, and automated previews for complex media formats. Tasks that once took hours—such as tagging hundreds of videos or locating assets from years-old shoots—are reduced to seconds. Shade positions this not as novelty AI, but as infrastructure that finally makes large creative archives usable.
The economic argument is equally central. Shade claims customers can reduce storage and workflow costs by up to 70 percent by consolidating tools. Real-world examples in the deck show organizations spending hundreds of thousands of dollars annually across multiple platforms, compared with significantly lower all-in costs on Shade.
Looking ahead, Shade plans to extend beyond creative teams into broader business workflows. Its roadmap includes automation via APIs, deeper integrations with creative and AI ecosystems, push/pull data movement across S3-compatible storage, and the launch of Shade Vault for optimized cold media storage. Long term, Shade envisions content flowing automatically between creative, marketing, finance, and operations—turning media from a cost center into a connected business asset.
In short, Shade is betting that creative storage is no longer a niche problem but a systemic one—and that solving it requires rethinking storage, collaboration, and intelligence as a single platform rather than a patchwork of tools.

At its core, TextQL provides a natural-language interface that allows users to ask questions about their data in plain English and receive structured, SQL-ready outputs. Instead of replacing SQL or existing analytics platforms, TextQL acts as a translation layer between human intent and technical execution. Business analysts, product managers, and operational teams can query data conversationally, while the system generates optimized queries that data teams can inspect, refine, or deploy directly.
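
To illustrate the idea (with an invented question, schema, and output - not TextQL's published API), a translated query might look like this:

```python
# Hypothetical sketch of the "translation layer" idea: plain English in,
# inspectable SQL out. The question, schema, and SQL below are invented
# for illustration only.

QUESTION = "Which 5 customers generated the most revenue last quarter?"

# What a schema-aware translator might return for review before execution:
GENERATED_SQL = """
SELECT c.name, SUM(o.amount) AS revenue
FROM customers AS c
JOIN orders    AS o ON o.customer_id = c.id
WHERE o.ordered_at >= DATE '2025-07-01'
  AND o.ordered_at <  DATE '2025-10-01'
GROUP BY c.name
ORDER BY revenue DESC
LIMIT 5;
"""

print(f"Question: {QUESTION}")
print("Proposed SQL (auditable, editable before it runs):")
print(GENERATED_SQL)
```
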
What differentiates TextQL from earlier “chat with your data” attempts is its emphasis on reliability and enterprise-readiness. The platform is designed to understand database schemas, relationships, and constraints, reducing the risk of ambiguous or misleading queries. Rather than producing opaque answers, TextQL outputs transparent, auditable SQL, helping organizations maintain trust in the results and align with governance and compliance requirements.
The company is clearly targeting environments where data complexity has outpaced usability. As organizations accumulate data across warehouses, lakes, and operational systems, access often becomes bottlenecked by a small group of specialists. TextQL aims to relieve that pressure by democratizing access while keeping technical control in place. In practice, this means faster insights for non-technical teams and fewer ad hoc requests landing on data engineers’ desks.
TextQL also frames its technology as an accelerator rather than a disruption. It integrates with existing databases, BI tools, and workflows, allowing organizations to adopt it incrementally. This “fit-into-what-you-already-have” approach reflects an understanding of enterprise realities, where wholesale platform replacement is rarely an option. By working alongside established data stacks, TextQL positions itself as a productivity layer rather than yet another system to manage.
From a market perspective, TextQL sits at the intersection of analytics, AI-assisted development, and data accessibility. Its pitch resonates particularly in sectors where data-driven decision-making is widespread but unevenly distributed, such as finance, SaaS, retail, and operations-heavy enterprises. The value proposition is less about flashy AI outputs and more about shaving time, reducing friction, and lowering the barrier between questions and answers.
In an era where organizations are awash in data but still struggle to use it effectively, TextQL’s approach reflects a broader shift: success is no longer defined by how much data you store, but by how easily people can work with it. By translating human language into structured queries with precision and accountability, TextQL is betting that the future of analytics lies not in replacing experts, but in empowering everyone else to think - and ask questions - like one.
