The Age of Trusted Data


Every era of computing has faced its bottlenecks. In the early days, it was processing cycles; later, it was bandwidth; for years, it was sheer storage capacity. Today, in the age of AI and planetary-scale science, the bottleneck is trust.

This theme was evident in a panel on Emerging and Enabling Technologies at the inaugural Datakamer, where representatives from Spectralogic, Wasabi, Arcitecta, IQM, and Cerabyte discussed how their technologies are shaping the future of data. And while each panellist approached the question from a different angle — storage, cloud, data management, quantum, and long-term media — their stories converged.

We are generating data on a scale that would have been unimaginable even a decade ago. Exabytes are created and discarded in the span of weeks as AI models are trained, retrained, and tuned again. Storage systems are scrambling to keep pace, not just in terms of density, but also in power efficiency. A handful of GPU racks can consume megawatts; the challenge is no longer simply where to put the data, but how to house it responsibly.

At the same time, the cloud has made storage accessible everywhere, while also exposing it to new forms of fragility. An account takeover, a poorly timed compliance audit, or a region-wide outage can make data disappear as effectively as a cyberattack. Security is no longer just about keeping intruders out — it’s about designing systems where no single failure, human or technical, can erase what matters.

But the real transformation lies in how we make sense of the data we keep. Without context, petabytes are a liability. With metadata, they become infrastructure. Metadata turns opaque archives into searchable knowledge, allows systems to rewind through time, and lets researchers automate decisions about cost, location, and access. It is, in many ways, the nervous system of modern data.

And then there are the new frontiers. Quantum computing promises to change not just the speed of analysis but the kinds of problems we can solve. Archival technologies built on ceramics and glass are designed to hold information for centuries at costs so low they redefine preservation itself. These aren't separate revolutions — they are threads in the same fabric.

Taken together, these developments point toward a shift: from an era of bigger data to an era of trusted data.

Scale is still essential, but it is not sufficient. The future belongs to systems that are secure against both attack and error, efficient under crushing power budgets, rich in context and control, and durable enough to outlast generations.

For researchers, the implications are profound. Trustworthy data infrastructures are not just a technical necessity; they are the foundation for reproducibility, accountability, and discovery itself. If the data cannot be relied upon — to be present, correct, and legible — then the breakthroughs built on it cannot endure.

The technologies enabling this shift may look wildly different — from tapes to qubits, clouds to ceramics — but they converge on the same truth: the progress of science and society now depends less on whether we can generate more data, and more on whether we can trust the data we already have.

This essay draws on insights shared during the panel “Emerging and Enabling Technologies” at the inaugural Datakamer, featuring representatives from Spectralogic, Wasabi, Arcitecta, IQM, and Cerabyte.
