Monday, December 15, 2025

DBaaS, a must-have on-premises and in the cloud

Severalnines delivered an interesting presentation during The IT Press Tour last week in Athens, Greece.

The CEO, Vinay Joosery, outlines a practical and timely vision for Sovereign DBaaS, addressing growing concerns around cloud lock-in, cost control, regulatory compliance, and data sovereignty. Drawing on more than 20 years of experience in databases and open source, Severalnines positions itself as an enabler for organizations that want the benefits of Database-as-a-Service without surrendering control to hyperscalers or proprietary vendors.


The presentation opens by framing the current cloud market imbalance. Hyperscalers dominate infrastructure spending with investments measured in tens of billions of dollars annually, while European cloud service providers operate on dramatically smaller budgets. This structural gap, combined with increasing regulatory pressure around data residency and sovereignty, has created a crisis for organizations seeking long-term control over their data platforms. Traditional DBaaS offerings deliver convenience and speed, but at the cost of vendor lock-in, escalating expenses at scale, and limited deployment flexibility.


Severalnines describes the evolution of DBaaS in three phases. The first phase, led by hyperscalers, introduced managed databases optimized for rapid deployment but tightly coupled to a single cloud environment. The second phase saw database vendors offering their own managed services, reducing cloud lock-in but introducing database-level lock-in and still limiting on-premises or hybrid deployments. The third and emerging phase is Sovereign DBaaS, where organizations build and operate their own DBaaS platforms using open-source databases, retaining full control over infrastructure, configuration, security, and costs.

Sovereign DBaaS is defined as a vendor-neutral, self-implemented model that supports polyglot database environments across on-premises, private cloud, public cloud, or hybrid setups. It emphasizes portability, ownership, and freedom of choice, allowing enterprises to meet regulatory and governance requirements while avoiding forced migrations or proprietary constraints. This model aligns with the reality that most enterprises are now multi-cloud or hybrid, managing hundreds of database instances across diverse platforms and technologies.


At the core of Severalnines’ approach is ClusterControl, which delivers the operational experience of DBaaS without ceding control. The platform automates the full database lifecycle - deployment, scaling, monitoring, backup, recovery, security, upgrades, and performance optimization - while focusing on Day-2 operations. It supports a wide range of open-source databases including MySQL, PostgreSQL, MongoDB, Redis, and others, across heterogeneous infrastructure environments.
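
To make the model concrete, here is a minimal sketch of what driving such a control plane programmatically could look like. ClusterControl itself offers a CLI (s9s) and an RPC API, but the endpoint path and payload fields below are hypothetical stand-ins, not its documented interface.

# Sketch of driving a ClusterControl-style control plane over HTTP.
# Endpoint path and payload fields are hypothetical illustrations,
# not ClusterControl's documented RPC API.
import requests

BASE = "https://clustercontrol.example.com/api"  # hypothetical endpoint
TOKEN = "..."  # API token issued by the control plane

def deploy_cluster():
    payload = {
        "operation": "createCluster",       # hypothetical field names
        "cluster_type": "postgresql",
        "nodes": ["10.0.0.11", "10.0.0.12", "10.0.0.13"],
        "with_load_balancer": True,
    }
    r = requests.post(f"{BASE}/clusters", json=payload,
                      headers={"Authorization": f"Bearer {TOKEN}"},
                      timeout=30)
    r.raise_for_status()
    # Day-2 operations (backups, upgrades, scaling) follow the same job pattern.
    return r.json()["job_id"]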

The presentation highlights key benefits: predictable cost management using open-source licensing, infrastructure and database ownership, compliance with data residency requirements, and consistent operations across environments. A customer case study from ABSA (formerly Barclays Africa Group) illustrates this approach at scale, with thousands of servers managed in a hybrid environment using open-source automation instead of proprietary DBaaS platforms.

In conclusion, Severalnines argues that Sovereign DBaaS represents the future of database platforms for enterprises seeking independence, resilience, and long-term sustainability. Rather than rejecting cloud technologies, it reclaims control by combining automation, open source, and flexible deployment - delivering DBaaS “your way,” without lock-in, hidden costs, or loss of sovereignty.

Interesting topics to follow in the coming quarters.


Friday, December 12, 2025

Enakta Labs to promote DAOS with rare expertise

Enakta Labs participated in The IT Press Tour this week in Athens, Greece, where we discovered a very interesting approach to large-scale HPC and AI storage environments, backed by deep expertise in DAOS.

The presentation introduces Enakta Labs, a storage software company founded in 2023 with the goal of making DAOS (Distributed Asynchronous Object Storage) practical, usable, and supportable for enterprise and high-performance environments. Founded by Denis Nuja and Denis Barakhtanov - both long-time contributors to DAOS and members of the original DAOS Foundation team - Enakta Labs builds on years of hands-on experience deploying DAOS-based systems in demanding production and government environments.


Enakta Labs positions its platform as a bridge between the raw performance of DAOS and the operational realities of enterprise IT. While DAOS is widely recognized as one of the fastest storage engines available—particularly for HPC and AI workloads—it has traditionally been difficult to deploy, manage, and integrate. Enakta Labs addresses this gap by delivering a fully managed DAOS lifecycle platform that runs on bare metal, removes operational complexity, and exposes familiar enterprise interfaces.

At the core of the offering is Enakta Labs Platform v1.3, based on DAOS 2.6.4. The platform provides a highly available management framework, containerized data services, and a slim, immutable Linux OS image booted entirely via PXE. This approach allows entire clusters - from small systems to hundreds of nodes - to be deployed in hours rather than days or weeks. All services run in lightweight containers, and the system avoids legacy dependencies such as TFTP, favoring modern HTTPS-based PXE booting.
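
As an illustration of TFTP-free network boot, here is a minimal sketch of serving an iPXE boot script over HTTPS with Python's standard library; the image URLs and file names are placeholders, not Enakta Labs' actual artifacts.

# Minimal sketch: serve an iPXE boot script over HTTPS instead of TFTP.
# URLs and image names are placeholders, not Enakta Labs' actual artifacts.
from http.server import BaseHTTPRequestHandler, HTTPServer
import ssl

BOOT_SCRIPT = b"""#!ipxe
kernel https://boot.example.com/images/immutable-os.vmlinuz
initrd https://boot.example.com/images/immutable-os.initrd
boot
"""

class BootHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/boot.ipxe":
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(BOOT_SCRIPT)
        else:
            self.send_error(404)

server = HTTPServer(("0.0.0.0", 8443), BootHandler)
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain("server.pem")          # TLS replaces TFTP's cleartext transfer
server.socket = ctx.wrap_socket(server.socket, server_side=True)
server.serve_forever()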

A major differentiator is usability and integration. Enakta Labs provides a fully featured, high-performance SMB interface, enabling DAOS to be consumed by traditional enterprise applications, alongside a limited-API S3 interface (currently in technical preview). Native PyTorch integration allows direct acceleration of AI and machine-learning workflows, while support for RoCE over 400-Gb Ethernet enables extreme throughput with low latency on modern networking hardware.
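
For the PyTorch side, the sketch below assumes the simplest integration path: a DAOS container exposed as a POSIX mount through dfuse (DAOS's FUSE client) and read by a standard Dataset. The mount path is an assumption, and Enakta's native PyTorch connector would bypass this POSIX layer entirely.

# Sketch: stream training samples from a DAOS container exposed as a
# POSIX mount via dfuse. The mountpoint is an assumption; a native
# connector would talk to DAOS directly instead.
from pathlib import Path
import torch
from torch.utils.data import Dataset, DataLoader

class DaosPosixDataset(Dataset):
    def __init__(self, root="/mnt/dfuse/training-set"):  # assumed dfuse mountpoint
        self.files = sorted(Path(root).glob("*.pt"))

    def __len__(self):
        return len(self.files)

    def __getitem__(self, idx):
        return torch.load(self.files[idx])  # one serialized tensor per sample

loader = DataLoader(DaosPosixDataset(), batch_size=64,
                    num_workers=8, pin_memory=True)  # parallel readers keep GPUs fed
for batch in loader:
    pass  # training step goes here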

Performance and scalability are central themes of the presentation. Enakta Labs demonstrates real-world benchmarks from large GPU clusters, including participation in the IO-500 benchmark on Core42’s 10,000-GPU “Maximus-01” system. These results show the platform scaling linearly with hardware, achieving extremely high bandwidth and IOPS while approaching physical hardware limits. The company emphasizes that future gains will come automatically as faster NVMe drives and network adapters are introduced—without requiring architectural redesigns.

The platform is designed to eliminate common pain points in high-performance storage: GPU idle time caused by slow I/O, excessively long filesystem checks, over-provisioning to meet performance targets, and the need for deep DAOS expertise just to operate a system. Enakta Labs also highlights its customer-centric philosophy: transparent roadmaps, simple licensing, no mandatory professional services, and direct access to engineering support - even outside business hours.

Target use cases include HPC, AI/ML, fintech, and media and entertainment, particularly environments where storage performance directly impacts business or research outcomes. The go-to-market strategy is partner-first and hardware-agnostic, with simple node-based perpetual licensing and subscription options for service providers. Looking ahead, the roadmap includes broader S3 and NFS support, continued performance optimization, and deeper ecosystem partnerships, while maintaining a deliberate focus on product quality and customer satisfaction over aggressive VC-driven growth.


Overall, Enakta Labs presents itself as a pragmatic enabler of next-generation storage - unlocking DAOS performance for real-world enterprise and AI workloads without the traditional complexity barrier.

The coming months will be interesting, as vendors like HPE have clearly selected DAOS for HPC and AI storage, with systems already deployed at famous US labs.


Thursday, November 27, 2025

65th Edition of The IT Press Tour in Athens, Greece

The IT Press Tour, a media event launched in June 2010, announced participating companies for the 65th edition scheduled December 9 and 10 in Athens, Greece.

During this edition, the press group will meet:
  • 9LivesData, the developer of NEC HYDRAstor, introducing a new product named high9stor, compatible with HYDRAstor,
  • Enakta Labs, a recognized expert team on DAOS,
  • Ewigbyte, an innovator around long-term data preservation on glass,
  • HyperBunker, with an interesting dual air-gap model,
  • Plakar, fast-growing open-source backup and recovery software,
  • and Severalnines, a reference in DBaaS.

I invite you to follow us on Twitter via #ITPT and @ITPressTour, my own handle @CDP_FST, and the journalists' respective handles.

Thursday, October 23, 2025

Shade shakes up media and entertainment storage

Shade recently participated in The IT Press Tour. The company is positioning itself as a new foundational layer for creative work in an era where traditional cloud storage is buckling under the weight of exploding media files and AI-driven production. During its IT Press Tour presentation in New York, the company painted a stark picture of a creative economy hitting structural limits: by 2027, requests for creative data are expected to jump from 15 million terabytes per minute today to more than 100 million, fueled by higher-resolution media and generative AI workflows.

The presentation opened with unfiltered customer testimonials describing Dropbox, Box, and Google Drive as slow, brittle, and operationally hostile to modern creative teams. From multi-hour downloads for short video edits to accounts frozen for accessing large projects, Shade argues that general-purpose cloud storage was never designed for large, collaborative, media-heavy workflows.

Shade’s core thesis is that “every company is now a creative company,” whether in marketing, sports, media, or enterprise communications. Yet most teams rely on a fragmented stack of tools—LucidLink for fast access, Frame.io for review, Dropbox or Box for sharing, and assorted archives for cold storage. This fragmentation creates cost overruns, security blind spots, and a massive productivity tax on creative directors who become de facto file managers.

Shade proposes a single alternative: an intelligent, cloud-native NAS designed specifically for creative teams. Its platform combines real-time file streaming, large-project support, built-in review and annotation, version control, and web-based sharing into a single “source of truth.” Instead of downloading massive files, users stream them instantly. Instead of juggling tools, teams upload once and collaborate across desktop and web interfaces.

A major differentiator is Shade’s embedded AI layer. The platform offers AI-driven autotagging, semantic search, facial recognition, transcription, and automated previews for complex media formats. Tasks that once took hours—such as tagging hundreds of videos or locating assets from years-old shoots—are reduced to seconds. Shade positions this not as novelty AI, but as infrastructure that finally makes large creative archives usable.
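
The mechanics behind semantic search are worth a quick illustration. The sketch below uses the open sentence-transformers library as a stand-in for Shade's proprietary models: asset descriptions and the query are embedded in the same vector space and matched by cosine similarity. The file names and descriptions are invented.

# Sketch of embedding-based semantic search over asset descriptions,
# using the open sentence-transformers library as a stand-in for
# Shade's proprietary AI layer. Asset names are invented.
from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("all-MiniLM-L6-v2")

assets = {
    "shoot_0412.mov": "sunset drone footage over a coastline",
    "promo_v3.mp4": "product launch teaser with voiceover",
    "bts_interview.wav": "behind-the-scenes interview audio",
}
names = list(assets)
vecs = model.encode(list(assets.values()), normalize_embeddings=True)

query = model.encode(["aerial beach video at dusk"], normalize_embeddings=True)
scores = vecs @ query.T               # cosine similarity via normalized dot product
print(names[int(np.argmax(scores))])  # -> shoot_0412.mov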

The economic argument is equally central. Shade claims customers can reduce storage and workflow costs by up to 70 percent by consolidating tools. Real-world examples in the deck show organizations spending hundreds of thousands of dollars annually across multiple platforms, compared with significantly lower all-in costs on Shade.

Looking ahead, Shade plans to extend beyond creative teams into broader business workflows. Its roadmap includes automation via APIs, deeper integrations with creative and AI ecosystems, push/pull data movement across S3-compatible storage, and the launch of Shade Vault for optimized cold media storage. Long term, Shade envisions content flowing automatically between creative, marketing, finance, and operations—turning media from a cost center into a connected business asset.

In short, Shade is betting that creative storage is no longer a niche problem but a systemic one—and that solving it requires rethinking storage, collaboration, and intelligence as a single platform rather than a patchwork of tools.


Tuesday, October 21, 2025

TextQL to make Big Data queryable without the complexity

TextQL joined the recent edition of The IT Press Tour in New York City. The firm positions itself as a pragmatic response to one of enterprise data’s most persistent frustrations: extracting value from large, complex datasets without forcing every user to become a data engineer. The company has built its proposition around a simple idea - querying data should be as intuitive as writing a question - while still meeting the performance, scale, and governance demands of modern organizations.

At its core, TextQL provides a natural-language interface that allows users to ask questions about their data in plain English and receive structured, SQL-ready outputs. Instead of replacing SQL or existing analytics platforms, TextQL acts as a translation layer between human intent and technical execution. Business analysts, product managers, and operational teams can query data conversationally, while the system generates optimized queries that data teams can inspect, refine, or deploy directly.

What differentiates TextQL from earlier “chat with your data” attempts is its emphasis on reliability and enterprise-readiness. The platform is designed to understand database schemas, relationships, and constraints, reducing the risk of ambiguous or misleading queries. Rather than producing opaque answers, TextQL outputs transparent, auditable SQL, helping organizations maintain trust in the results and align with governance and compliance requirements.
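
A minimal sketch of that translation-layer shape, with the model call stubbed out: the point is that the generated SQL is a visible, reviewable artifact rather than an opaque answer. The schema, question, and translate() stub are illustrative, not TextQL's actual interface.

# Shape of a natural-language-to-SQL translation layer, in the spirit of
# TextQL's approach. translate() is a stub standing in for the model call;
# the generated SQL is transparent and reviewable before it runs.
import sqlite3

SCHEMA = "CREATE TABLE orders (id INTEGER, region TEXT, amount REAL, placed_at TEXT)"

def translate(question: str, schema: str) -> str:
    # In a real system, an LLM conditioned on the schema produces this.
    return ("SELECT region, SUM(amount) AS revenue "
            "FROM orders GROUP BY region ORDER BY revenue DESC")

conn = sqlite3.connect(":memory:")
conn.execute(SCHEMA)

sql = translate("Which regions bring in the most revenue?", SCHEMA)
print(sql)                                    # auditable artifact, not a black box
plan = conn.execute(f"EXPLAIN QUERY PLAN {sql}").fetchall()
print(plan)                                   # data teams can inspect before deploying
rows = conn.execute(sql).fetchall()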

The company is clearly targeting environments where data complexity has outpaced usability. As organizations accumulate data across warehouses, lakes, and operational systems, access often becomes bottlenecked by a small group of specialists. TextQL aims to relieve that pressure by democratizing access while keeping technical control in place. In practice, this means faster insights for non-technical teams and fewer ad hoc requests landing on data engineers’ desks.

TextQL also frames its technology as an accelerator rather than a disruption. It integrates with existing databases, BI tools, and workflows, allowing organizations to adopt it incrementally. This “fit-into-what-you-already-have” approach reflects an understanding of enterprise realities, where wholesale platform replacement is rarely an option. By working alongside established data stacks, TextQL positions itself as a productivity layer rather than yet another system to manage.

From a market perspective, TextQL sits at the intersection of analytics, AI-assisted development, and data accessibility. Its pitch resonates particularly in sectors where data-driven decision-making is widespread but unevenly distributed, such as finance, SaaS, retail, and operations-heavy enterprises. The value proposition is less about flashy AI outputs and more about shaving time, reducing friction, and lowering the barrier between questions and answers.

In an era where organizations are awash in data but still struggle to use it effectively, TextQL’s approach reflects a broader shift: success is no longer defined by how much data you store, but by how easily people can work with it. By translating human language into structured queries with precision and accountability, TextQL is betting that the future of analytics lies not in replacing experts, but in empowering everyone else to think - and ask questions - like one.


Thursday, October 16, 2025

CTERA extends its data platform with an intelligent data fabric

At The IT Press Tour in New York, CTERA outlined a long-term vision for how enterprises must rethink unstructured data management as they move deeper into hybrid cloud, edge computing, and generative AI. The company argues that unstructured data—files, media, logs, sensor outputs—has become both the greatest operational burden and the largest untapped asset inside modern organizations.

Founded as a software-defined storage company, CTERA now positions itself as a leader in distributed enterprise data services, targeting a $10B+ file storage market with an object-based, globally distributed architecture. The company reports strong momentum, with recurring high-margin software revenue, 35% growth, and 125% net retention, largely driven through a partner-centric go-to-market model.

The presentation framed CTERA’s strategy around three “waves of innovation.” The first, Location Intelligence, addresses the fragmentation created by on-premises systems, edge locations, and multiple clouds. As enterprises triple unstructured data capacity by 2028, architectural complexity has surged, with hybrid models becoming the norm. CTERA’s response is a global file namespace that spans data centers, clouds, and edge sites, delivering local performance via caching while maintaining centralized control, security, and disaster recovery.

The second wave, Metadata Intelligence, focuses on turning this unified environment into a secure data lake. As ransomware and insider threats intensify, CTERA argues that storage itself must become security-aware. Its platform incorporates immutable snapshots, block-level anomaly detection, and continuous file activity monitoring. Recent product launches—including Ransom Protect, Insight, and metadata-driven analytics—aim to provide visibility, forensic insight, and automated response without copying or relocating sensitive data.

The third wave, Enterprise Intelligence, addresses the growing gap between AI ambition and reality. While enterprise GenAI spending is projected to exceed $400 billion by 2028, studies show that the vast majority of pilots fail. CTERA attributes this not to model limitations, but to poor data quality caused by silos, inconsistent formats, weak metadata, and security constraints. The company’s answer is an intelligent data fabric that curates high-quality datasets by enforcing access controls, enriching metadata, filtering content, and enabling semantic retrieval directly over existing files and object storage.
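
The curation idea reduces to a simple invariant: nothing reaches the model that the requesting user could not already open. A sketch of that invariant follows, with illustrative data structures rather than CTERA's internal representation; semantic ranking is reduced to keyword matching for brevity.

# Sketch of permission-aware retrieval: results are filtered against
# existing ACLs before anything reaches a model. Structures are
# illustrative, not CTERA's internal representation.
from dataclasses import dataclass

@dataclass
class FileEntry:
    path: str
    allowed_groups: frozenset  # mirrors the ACL already on the file
    summary: str

INDEX = [
    FileEntry("/finance/q3-forecast.xlsx", frozenset({"finance"}), "Q3 revenue forecast"),
    FileEntry("/legal/nda-acme.pdf", frozenset({"legal"}), "NDA with Acme Corp"),
    FileEntry("/shared/handbook.pdf", frozenset({"everyone"}), "Employee handbook"),
]

def retrieve(query: str, user_groups: set) -> list:
    visible = [f for f in INDEX
               if f.allowed_groups & (user_groups | {"everyone"})]
    # Embedding-based semantic ranking would slot in here.
    return [f for f in visible
            if any(w in f.summary.lower() for w in query.lower().split())]

print([f.path for f in retrieve("revenue forecast", {"finance"})])
print([f.path for f in retrieve("revenue forecast", {"marketing"})])  # ACL blocks it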

Rather than positioning AI as an abstract analytics layer, CTERA envisions “virtual employees”—permission-aware AI agents that operate as domain experts within defined guardrails. Built on the Model Context Protocol (MCP), these agents can search, summarize, retrieve, and act on enterprise data while respecting existing ACLs and compliance requirements.
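
A sketch of what one such agent tool could look like, using the official MCP Python SDK's FastMCP helper; the in-memory index and group model are placeholders for a real ACL-checked lookup, not CTERA's implementation.

# Sketch of a permission-aware tool exposed over the Model Context
# Protocol, using the official MCP Python SDK's FastMCP helper.
# The in-memory index is a placeholder for a real ACL-checked lookup.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("file-search-agent")

FILES = {
    "/hr/salaries.xlsx": {"hr"},
    "/shared/policies.pdf": {"everyone"},
}

@mcp.tool()
def search_files(query: str, groups: list[str]) -> list[str]:
    """Return paths the caller's groups may see that mention the query."""
    allowed = set(groups) | {"everyone"}
    return [path for path, acl in FILES.items()
            if acl & allowed and query.lower() in path.lower()]

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio to an MCP-capable agent host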

Across case studies—from global branding agencies to government fleets and healthcare legal firms—CTERA emphasizes a consistent theme: organizations do not need more data, but better data infrastructure. By evolving from distributed file storage to an AI-ready data fabric, CTERA aims to turn unstructured data from a growing liability into a durable competitive advantage.


Tuesday, October 14, 2025

Arcitecta, ready for new data transformation and compute

Arcitecta is positioning itself as a quiet but consequential force in the rapidly expanding world of unstructured data management. At The IT Press Tour 2025 in New York, the company laid out how its flagship Mediaflux Data Platform is helping research institutions, cultural organizations, and enterprises regain control over data volumes that have grown beyond the limits of traditional storage and file systems.

At the center of Arcitecta’s message is a simple but increasingly urgent reality: data is growing faster than organizations can understand, govern, or preserve it. From scientific research and healthcare to archives and digital media, institutions are now managing petabytes of files across disk, cloud, and tape—often spread across silos, locations, and vendors. Mediaflux is designed to sit above that complexity as an end-to-end data fabric, converging data management, orchestration, metadata, access, and lifecycle automation into a single platform.

Unlike conventional storage solutions, Mediaflux is not positioned as “just another file system.” Instead, it operates as a vendor-agnostic control layer that can span heterogeneous infrastructure—from Dell and IBM storage to cloud object stores and tape libraries—while presenting users and applications with unified, multi-protocol access via NFS, SMB, S3, and more. A rich policy engine automates data movement, tiering, and archiving, reducing the need for manual intervention and brittle workflows.
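
The flavor of rule such a policy engine automates can be shown in a few lines: files untouched for a set period move to a cold tier while staying referenced in the catalog. This is a generic sketch under assumed paths, not Mediaflux's actual policy language.

# Illustrative tiering rule of the kind Mediaflux's policy engine
# automates: files unmodified for 180 days move to a cold tier.
# Generic sketch with assumed paths, not Mediaflux's policy syntax.
import shutil, time
from pathlib import Path

HOT, COLD = Path("/data/hot"), Path("/data/cold")
AGE_LIMIT = 180 * 24 * 3600  # seconds

def tier_down(now=None):
    now = now or time.time()
    for f in HOT.rglob("*"):
        if f.is_file() and now - f.stat().st_mtime > AGE_LIMIT:
            dest = COLD / f.relative_to(HOT)
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.move(str(f), dest)  # metadata stays searchable in the catalog

tier_down()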

Arcitecta placed heavy emphasis on metadata and intelligence as the real differentiator. Mediaflux’s XODB metadata engine not only catalogs files but also tracks relationships, temporal context, and vector embeddings, making data discoverable long after it has been pushed to cold storage. This capability is increasingly critical as organizations prepare data for AI, analytics, and long-term reuse. Rather than moving data to compute, Mediaflux enables compute-to-data, allowing analytics and AI workloads to run where the data already resides.

Customer case studies anchored the presentation in real-world scale. At Princeton University, Mediaflux underpins the TigerData platform, managing more than 200 petabytes of research data as part of a 100-year digital preservation strategy. The challenge is not just storage capacity, but future accessibility—ensuring that today’s Nobel-level research can still be found, understood, and reused decades from now. Similar themes appeared in deployments at Dana-Farber Cancer Institute, TU Dresden, and cultural institutions such as the Imperial War Museums, where Mediaflux has reduced manual workloads, accelerated discovery, and lowered costs by pushing cold data to tape or cloud while keeping it searchable.

Arcitecta also highlighted its growing role in AI-ready data infrastructure. By integrating vector databases, metadata enrichment, and WAN-optimized data movement, Mediaflux enables organizations to prepare unstructured data for RAG pipelines, generative AI, and large-scale analytics without ripping and replacing existing systems.

The presentation concluded with Arcitecta’s broader ambition: to make Mediaflux the “last data management platform” organizations will need. With ongoing investments in AI integration, deployment automation, and ecosystem partnerships, Arcitecta is betting that the future of data infrastructure lies not in faster storage alone, but in smarter, metadata-driven systems that turn overwhelming data growth into a durable, reusable asset.


Thursday, October 09, 2025

AuriStor revives and reinvents the global file system for a zero-trust era

AuriStor joined The IT Press Tour again a few days ago in New York, and we had the opportunity to get an update on the company’s development. As enterprises grapple with the explosive growth of unstructured data and increasingly fragmented IT environments, AuriStor is positioning its File System as a modern answer to a decades-old but still relevant vision: a single, secure, global namespace for data. Building on the architectural foundations of IBM’s Andrew File System (AFS), AuriStor argues that it has finally delivered on the promise that early distributed file systems could only partially fulfill.

At its core, the AuriStor File System (AuriStorFS) is designed to let organizations manage and access data seamlessly across data centers, cloud environments, and geographic boundaries—without sacrificing security or performance. Unlike conventional NAS or object storage platforms that often require overlapping tools and duplicated data sets, AuriStorFS aims to consolidate unstructured data into one coherent namespace while enforcing strict access controls and policy-driven security.

Security is the system’s defining theme. AuriStorFS adopts a “security first” model that integrates federated authentication based on Kerberos v5, strong AES-256 encryption, and fine-grained access control lists that can be applied not only to directories, but to individual files, symlinks, and volumes. The platform introduces a modern security framework called YFS-RxGK, replacing legacy AFS mechanisms with support for perfect forward secrecy, combined user-and-machine identity authentication, and encrypted callbacks between clients and servers. The goal, according to AuriStor, is to eliminate long-standing weaknesses such as cache poisoning and over-reliance on perimeter defenses.
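
Since AuriStorFS stays compatible with AFS tooling, ACLs can still be scripted with the classic fs command. A sketch at directory level follows; the path and principal are invented, and AuriStorFS's per-file ACL extension is not shown, as its exact invocation differs.

# Sketch of scripting AFS-family ACLs via the classic `fs setacl`
# command, which AuriStorFS inherits. Directory-level only; the
# per-file ACL extension is AuriStorFS-specific and not shown here.
import subprocess

def grant(path: str, principal: str, rights: str = "rlidwk"):
    """Grant read/lookup/insert/delete/write/lock rights on an AFS directory."""
    subprocess.run(["fs", "setacl", "-dir", path, "-acl", principal, rights],
                   check=True)

grant("/afs/example.org/projects/genomics", "alice")  # invented cell and user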

Architecturally, AuriStorFS retains the proven AFS “cell” model, built around distributed services for location, protection, file access, volume management, and caching. These services are backed by a significantly enhanced version of the Ubik replicated database, which now scales to dozens of nodes, recovers from failures in seconds rather than minutes, and supports encrypted, integrity-checked communications. This allows AuriStorFS to deliver high availability and consistent metadata access even at large scale.

Performance is another area where AuriStor claims major advances. By redesigning its Rx network protocol and file server internals, the platform can process thousands of simultaneous remote procedure calls and fully exploit modern multi-core servers and high-bandwidth networks. In production environments, AuriStor reports support for tens of thousands of clients and hundreds of thousands of concurrent connections - levels that would overwhelm traditional AFS or OpenAFS deployments. Improvements such as dynamic thread pools, lock-free data paths, and copy-on-write volume snapshots are intended to make large, collaborative workloads practical again in a shared file system.

Importantly, AuriStor emphasizes continuity as well as innovation. The system remains backward compatible with existing AFS and OpenAFS clients, allowing organizations to migrate incrementally without disruptive “flag day” upgrades. Support spans Linux, Windows, macOS, and heterogeneous server and storage platforms, including cloud infrastructure.

In a market dominated by object storage, cloud file services, and siloed collaboration tools, AuriStor is making a contrarian bet: that a secure, high-performance global file system still has a vital role to play. By modernizing AFS for today’s threat landscape and scale requirements, the company believes it can offer enterprises something rare—a single file system that combines cloud-era reach with zero-trust security and enterprise-grade control.


Wednesday, October 01, 2025

64th Edition of The IT Press Tour back in New York

The IT Press Tour, a media event launched in June 2010, announced participating companies for the 64th edition scheduled the week of October 6th in New York City.

During this edition, the press group will meet 7 companies:
  • Arcitecta, a reference in data management,
  • AuriStor, a long-time player in AFS-based solutions,
  • CTERA, a pioneer in distributed file services,
  • ExaGrid, a leader in secondary storage,
  • HYCU, an innovator in cloud and SaaS data protection,
  • Shade, a fast-growing M&E storage software player,
  • and TextQL, a young AI player simplifying big data access.

I invite you to follow us on Twitter via #ITPT and @ITPressTour, my own handle @CDP_FST, and the journalists' respective handles.

Tuesday, September 16, 2025

euroNAS, a file storage solution from Europe

euroNAS GmbH, a privately owned European software company, joined the recent IT Press Tour organized last week in Amsterdam, Netherlands. Founded in 2005 and headquartered in Munich, the firm specializes in enterprise storage, high availability, and virtualization solutions. With development roots in Europe and a team drawn from major IT vendors, euroNAS positions itself as an experienced yet agile alternative to large enterprise storage and virtualization providers. Its core vision is to make advanced storage and virtualization technologies simple, reliable, and affordable for organizations of all sizes.


At the heart of euroNAS’ strategy is the removal of traditional barriers in enterprise IT. The company addresses common market pain points such as excessive complexity, hardware vendor lock-in, high licensing costs, lack of built-in high availability, and poor support for small and mid-sized organizations. euroNAS solutions are designed to run on standard x86 hardware, allowing customers to choose, reuse, or upgrade infrastructure freely while maintaining enterprise-grade functionality. All platforms are managed through a unified, web-based graphical interface, eliminating the need for deep Linux or command-line expertise.

The product portfolio spans storage, clustering, scale-out architectures, and virtualization. euroNAS Premium delivers high-performance, multi-protocol storage supporting file, block, and advanced interfaces such as NVMe-oF and Fibre Channel, with optional ZFS features for snapshots, compression, and data integrity. euroNAS HA Cluster provides enterprise-grade failover with either mirrored configurations for site separation or multi-head shared-storage setups based on ZFS, ensuring continuous availability for business-critical workloads. For large-scale environments, eEKAS offers a simplified, GUI-driven Ceph implementation, enabling scalable file, block, and object storage without the operational complexity typically associated with Ceph.

A central pillar of the presentation is eEVOS, euroNAS’ hyper-converged virtualization platform. eEVOS combines virtualization, storage, and backup in a single system, positioning itself as a cost-effective alternative to VMware, Hyper-V, and complex open-source stacks. It includes live migration, high availability, integrated backup and recovery, granular resource controls, and Ceph-based distributed storage as a vSAN alternative—all managed from one interface and supported by a single vendor.


Real-world implementations highlight the flexibility of the portfolio, ranging from high-availability installations at motorsport facilities and industrial 24/7 environments replacing VMware vSAN to multi-node clusters with shared storage and petabyte-scale Ceph archives built with partners such as Seagate. Additional use cases include backup servers with tape integration for long-term archiving in academic environments.

The presentation concludes with competitive positioning and business strategy. euroNAS differentiates itself from large enterprise vendors through hardware independence, personal support, and partner-friendly branding options, and from open-source solutions through integrated functionality, predictable roadmaps, and professional support. A channel-focused go-to-market model, perpetual licensing with support contracts, and a clear roadmap—including multi-tenancy, network virtualization, and expanded S3 capabilities—underline euroNAS’ ambition to be a future-ready, full-stack alternative in enterprise storage and virtualization.
