VDURA CEO Ken Claffey believes the company should be classed alongside DDN, VAST Data, and WEKA as an extremely high-performing and reliable data store for modern AI and traditional HPC workloads.
Storage buyers need to re-evaluate VDURA, he argues, because the PanFS software that underpinned Panasas's HPC success has been completely overhauled since Claffey became CEO in September 2023. The company changed its name to VDURA in May 2024 to reflect its transformation and its focus on data velocity and durability.

Claffey says VDURA combines the stable and linear performance of a parallel file system with the resilience and cost-efficiency of object storage.
VDURA's microservices-based VDP (VDURA Data Platform, the updated PanFS) layers a parallel file system for client access on top of a base object store. There is a unified global namespace, a single control plane, and a single data plane. Metadata is managed by VeLO (Velocity Layer Operations), a distributed key-value store running on flash storage, while the object store defaults to HDD.
Virtualized Protected Object Device (VPOD) storage entities reside on the HDD layer. For data durability, erasure coding is applied both within each VPOD and across a VDURA cluster. The VeLO software runs on scale-out 1RU director nodes built on VDURA's own hardware: AMD EPYC 9005 CPUs, Nvidia ConnectX-7 network interface cards, Broadcom 200 Gb Ethernet, and Phison Pascari X200 PCIe NVMe SSDs.
VDP has a unified namespace in which Director Nodes handle metadata and small files via VeLO and route larger data through VPODs. The Director Nodes manage file-to-object mapping, allowing seamless integration between the parallel file system and object storage. They also support S3 access.
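VDURA does not document how its Director Nodes map files to objects, but the general technique of striping a large file across fixed-size objects can be sketched as follows. The object size, key format, and function names here are illustrative assumptions, not VDP internals.

```python
# Hypothetical sketch of file-to-object mapping: a byte range of a large
# file is translated into the backing objects (and offsets within them)
# that a Director Node would need to read. 8 MiB is an assumed stripe
# unit, not VDURA's real value.

OBJECT_SIZE = 8 * 1024 * 1024  # assumed 8 MiB stripe unit

def objects_for_read(file_id: str, offset: int, length: int):
    """Map a file byte range onto (object_key, start, end) tuples."""
    parts = []
    end = offset + length
    while offset < end:
        idx = offset // OBJECT_SIZE
        start_in_obj = offset % OBJECT_SIZE
        take = min(OBJECT_SIZE - start_in_obj, end - offset)
        parts.append((f"{file_id}/obj-{idx:08d}", start_in_obj, start_in_obj + take))
        offset += take
    return parts

# A 2 MiB read starting 7 MiB into the file spans two objects.
MiB = 1024 * 1024
print(objects_for_read("file42", 7 * MiB, 2 * MiB))
```

Small files that fit under some threshold would instead be served directly from the VeLO key-value layer, avoiding the object store round trip entirely.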
VPODs can run on hybrid flash-disk nodes as well as on all-flash V5000 storage nodes, called F-Nodes. The Hybrid Storage Nodes pair the same 1RU server used for the Director Node with 4RU JBODs running VPODs, providing cost-effective bulk storage with high performance and reliability.
The F-Nodes have a 1RU server chassis containing up to 12 x 128 TB NVMe QLC SSDs, providing 1.536 PB of raw capacity. An F-Node is powered by an AMD EPYC 9005 Series CPU with 384 GB of memory. There are Nvidia ConnectX-7 Ethernet SmartNICs for low-latency data transfer, plus three PCIe Gen 5 slots and one OCP Gen 5 slot for high-speed front-end and back-end expansion connectivity.
Upcoming ScaleFlow software will allow "seamless data movement" across high-performance QLC flash and high-capacity disk.
VDP is a software-defined, on-premises offering using off-the-shelf hardware, and is being ported to the main public clouds. It will also add support for Nvidia GPUDirect Storage (GDS), RDMA, and RoCE v2 this summer.
Claffey says the three-to-five-year-old prediction that QLC flash prices would fall to HDD levels has not come true. He tells us: "Enterprise flash would go from 8x to 6x to 4x and then all geniuses were saying, oh, it's going to go to 2x and then 1x. Remember those forecasts? And then the reality is, the opposite happened. There was no fundamental change in the cost of the drive … Now if you go look at it, go to Best Buy, go wherever you want to go, the gap between a terabyte HDD and a terabyte SSD is close to 8x."
The conclusion VDURA draws is that a tiered flash-disk architecture is needed to deliver flash speed at disk economics. VDURA wants to build the best, most efficient storage infrastructure for AI and HPC. It doesn't intend to build databases; that's too high up the AI stack from its storage infrastructure point of view. Instead, it will make itself open and welcoming to all AI databases.
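The economics behind the tiering argument are easy to check with back-of-envelope arithmetic. Assuming the roughly 8x SSD-to-HDD price gap Claffey cites, a system that keeps only its hot data on flash lands much closer to disk cost per terabyte than an all-flash design. The dollar figures below are illustrative placeholders, not real pricing.

```python
# Blended $/TB of a tiered flash-disk system, given an assumed 8x
# SSD-to-HDD price gap. HDD_PER_TB is an illustrative number only;
# the ratio, not the absolute price, drives the conclusion.

HDD_PER_TB = 15.0               # assumed $/TB for HDD (illustrative)
SSD_PER_TB = 8 * HDD_PER_TB     # the ~8x gap Claffey cites

def blended_cost(flash_fraction: float) -> float:
    """$/TB of a system with the given fraction of capacity on flash."""
    return flash_fraction * SSD_PER_TB + (1 - flash_fraction) * HDD_PER_TB

for f in (0.0, 0.1, 0.2, 1.0):
    print(f"{f:.0%} flash: ${blended_cost(f):.2f}/TB")
```

With 10 percent of capacity on flash, the blended cost is around 1.7x pure HDD, versus 8x for all-flash, which is the gap a tiered design like VDP's flash-plus-HDD layout is meant to exploit.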
VDURA believes it will be the performance leader in this AI/HPC storage infrastructure space. Early access customers using its all-flash F-Nodes, which go GA in September, say it’s very competitive.
Claffey says VDURA wins bids against rivals, and cites a US federal bid through a large system integrator (SI) as an example. According to VDURA, the SI evaluated several competing suppliers proposing parallel access systems with performance sufficient to feed large x86 and GPU compute clusters – one of the largest US defense clusters – at sub-millisecond latency. The bids were for a multi-year project, with a 2025 phase 1 requiring 20 PB of total capacity and sustained performance above 100 GBps. A 2026 phase 2 will scale up to around 200 PB of usable capacity and 2.5 TBps sustained performance.
VDURA bid a system pairing V5000 all-flash nodes for performance with HDD extensions for bulk capacity. The SI selected it because it matched the performance and capacity requirements. VDURA claims it beat a rival on performance and TCO, adding that its system had a better TB-per-watt rating and a lower carbon footprint than its competitors'.
The company reckons it matches DDN and IBM Storage Scale on performance, and claims its platform is easy to use and manage, and reliable.