The company has drastically extended its foundation technology to provide a global data virtualization service that goes far beyond classic Network File Virtualization and Network File Management. It is really about data orchestration and, of course, virtualization, as we finally obtain a virtual view of the data wherever it resides. As Lance Smith said during a recent interview, "Primary Data doesn't touch any data, we manage meta-data and automate the flow of data across any type of storage with a fully agnostic approach".
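To make that metadata-only model concrete, here is a minimal sketch, assuming a hypothetical catalog that maps a stable logical path to whatever physical location currently holds the bytes. The names (Catalog, FileRecord, resolve, the backend labels) are purely illustrative and are not Primary Data's API.

```python
from dataclasses import dataclass

@dataclass
class FileRecord:
    """Metadata tracked for one file: the logical path clients see
    and the physical residence, which can change over time."""
    logical_path: str      # never changes for clients
    backend: str           # e.g. "nas-01", "s3-cold" (illustrative names)
    physical_key: str      # location of the bytes on that backend

class Catalog:
    """Hypothetical metadata catalog: the control plane only rewrites
    records like these, never the file contents themselves."""
    def __init__(self):
        self._by_path = {}

    def register(self, record: FileRecord):
        self._by_path[record.logical_path] = record

    def resolve(self, logical_path: str) -> FileRecord:
        # Clients keep using the same path; the answer may point
        # to a different backend after the data has been moved.
        return self._by_path[logical_path]

    def relocate(self, logical_path: str, backend: str, physical_key: str):
        # Only metadata is updated here; an out-of-band data mover
        # would copy the bytes between backends.
        rec = self._by_path[logical_path]
        rec.backend, rec.physical_key = backend, physical_key

catalog = Catalog()
catalog.register(FileRecord("/projects/report.docx", "nas-01", "vol1/abc123"))
catalog.relocate("/projects/report.docx", "s3-cold", "bucket/abc123")
print(catalog.resolve("/projects/report.docx"))  # same path, new residence
```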
With DataSphere 1.2, the solution scales to billions of files thanks to a metadata super catalog design based on RocksDB, already proven in very demanding environments. The second point is the horizontal value, with broader support of heterogeneous file servers and NAS, so the user can finally federate various entities and build a gigantic file data lake.

As this data pool is very large, with file storage entities of various generations and characteristics, Primary Data is also key for the storage services you can activate to fully control the lifecycle of the data under management. The file path never changes, but the residence of the data underneath can be modified; this is the result of the total control of the metadata of every file within the environment. One of the tiers you can activate can be an S3-compatible target, used to evacuate data to large-capacity, low-cost storage, sometimes deployed as a remote storage service. Associated with these storage classes is the notion of file management by objectives, where you can enable intelligent policies to move, promote or demote files between tiers based on various criteria, as sketched below.

The other aspect of the solution is the performance it delivers: classic IOPS and bandwidth, both very high, but also impressive metadata operations per second. The last point is related to a simple benchmark you can run with Primary Data. Consider one file and one filer: there is no choice, the file is stored on that file server. Now add Primary Data and a few more file servers alongside the first one, write the same file, and you see the gain immediately, as the company leverages a parallel file system approach that splits the file into smaller chunks and stores each of them on a different file server. Obviously it boosts throughput and drastically reduces latency. Try it, you will love it.
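To illustrate the file-management-by-objectives idea, here is a minimal sketch assuming hypothetical policy criteria (idle time) and tier names ("flash", "nas", "s3-cold"); none of this reflects Primary Data's actual policy engine or API, it only shows how objective-driven promotion and demotion between tiers could be expressed.

```python
import time
from dataclasses import dataclass

@dataclass
class FileMeta:
    path: str
    size_bytes: int
    last_access: float   # epoch seconds
    tier: str            # "flash", "nas" or "s3-cold" (illustrative tiers)

def choose_tier(meta: FileMeta, now: float) -> str:
    """Toy objective: keep hot data on fast tiers, demote cold data
    to an S3-compatible, large-capacity, low-cost target."""
    idle_days = (now - meta.last_access) / 86400
    if idle_days > 90:
        return "s3-cold"     # evacuate rarely used files
    if idle_days > 7:
        return "nas"         # demote to capacity NAS
    return "flash"           # promote / keep recently used files fast

def apply_policy(catalog: list[FileMeta]) -> list[tuple[str, str, str]]:
    """Return the (path, from_tier, to_tier) moves a data mover should do.
    Only metadata decisions are made here; the file path seen by clients
    would never change."""
    now = time.time()
    moves = []
    for meta in catalog:
        target = choose_tier(meta, now)
        if target != meta.tier:
            moves.append((meta.path, meta.tier, target))
            meta.tier = target
    return moves

files = [
    FileMeta("/proj/active.dat", 10**6, time.time(), "nas"),
    FileMeta("/proj/archive.dat", 10**9, time.time() - 200 * 86400, "flash"),
]
print(apply_policy(files))
# [('/proj/active.dat', 'nas', 'flash'), ('/proj/archive.dat', 'flash', 's3-cold')]
```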
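The small benchmark described above boils down to striping one file across several file servers. The following sketch, with an invented chunk size and in-memory dictionaries standing in for real filers, only illustrates the split-and-spread idea and is not how DataSphere actually implements it.

```python
import concurrent.futures

CHUNK_SIZE = 4  # bytes, absurdly small just for the demo

# Three in-memory dicts stand in for three file servers.
backends = [dict() for _ in range(3)]

def write_striped(name: str, data: bytes) -> list[tuple[int, str]]:
    """Split the file into chunks and write each chunk to a different
    backend in parallel (round-robin placement)."""
    chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
    layout = []
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = []
        for idx, chunk in enumerate(chunks):
            backend = backends[idx % len(backends)]
            key = f"{name}.part{idx}"
            futures.append(pool.submit(backend.__setitem__, key, chunk))
            layout.append((idx % len(backends), key))
        concurrent.futures.wait(futures)
    return layout  # the metadata needed to reassemble the file

def read_striped(layout: list[tuple[int, str]]) -> bytes:
    """Fetch every chunk (this could also be parallel) and reassemble."""
    return b"".join(backends[b][key] for b, key in layout)

layout = write_striped("report.bin", b"hello parallel file world")
assert read_striped(layout) == b"hello parallel file world"
print(layout)
```

Because the chunks land on different servers, reads and writes can proceed from all of them at once, which is where the throughput boost and the latency reduction come from.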