Backup, as most of the industry understands it, is broken. It was designed for a world of megabytes and gigabytes—not petabytes. It assumes that data sits still, that it grows slowly, and that you can afford to lose hours—or days—before recovery kicks in. But that world no longer exists.
Today, enterprise datasets routinely exceed 10, 50, even 100 petabytes. We see customers working with hundreds of billions of files, spread across legacy file systems, object stores, cloud endpoints, and tape. In that environment, backup is no longer a separate system—it must be part of the data path itself. At Arcitecta, we’ve built our platform, Mediaflux, around that exact idea.
The illusion of traditional backup
If you’re still doing full and incremental backups of massive data volumes, you’re not protected; you’re hopeful. The system isn’t fast enough, the recovery point objectives (RPOs) aren’t acceptable, and the idea of “recovering everything” often means recovering nothing in time.
"If you had a backup of 100 petabytes, it just doesn’t work. Traditional backup systems aren’t big data. They can’t move hundreds of terabytes per hour."
– Jason Lohrey, CTO, Arcitecta
Even modern approaches—such as snapshotting, replication, and virtualisation intercepts—struggle to deliver true continuity. They're limited by how fast they can scan, how often they can run, and how efficiently they can target only what matters.
A system that knows when things change
At Arcitecta, we took a fundamentally different approach: we built our own protocols, and that means we know the moment something changes. Whether it’s a write, a rename, a delete, or an inode creation, Mediaflux detects the event immediately and captures the change in real time, creating a point-in-time snapshot that doesn’t depend on a scheduler or a third-party tool.
"Our point-in-time capability records every structural change. Every rename. Every delete. Every write. And every delete is soft."
– Jason Lohrey, CTO, Arcitecta
This gives us something traditional systems can’t offer: an RPO of near zero. The moment something happens, it’s captured and protected automatically. You don’t wait for the nightly job. You don’t pray the snapshot ran. It’s already there.
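To make that concrete, here’s a minimal sketch of event-driven capture in Python. It is not Mediaflux’s implementation (Mediaflux does this natively, inside its own protocols); it approximates the behaviour with the open-source watchdog library, and the paths are hypothetical. Every write, rename, or creation triggers an immediate point-in-time copy, and deletes are soft because the last captured version survives:

```python
# Illustrative only: event-driven protection with the `watchdog` library.
# Every change event triggers an immediate point-in-time copy, so the
# recovery point approaches zero without any scheduled backup job.
import shutil
import time
from pathlib import Path

from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer

WATCHED = Path("/data/project")        # hypothetical protected tree
VERSIONS = Path("/protect/versions")   # hypothetical capture area

class CaptureHandler(FileSystemEventHandler):
    def _capture(self, src: str) -> None:
        src_path = Path(src)
        if src_path.is_file():
            # A timestamped copy is a point-in-time version of the file.
            stamp = time.strftime("%Y%m%dT%H%M%S")
            dest = VERSIONS / f"{src_path.name}.{stamp}"
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src_path, dest)

    def on_created(self, event):
        self._capture(event.src_path)

    def on_modified(self, event):
        self._capture(event.src_path)

    def on_moved(self, event):
        # A rename is a structural change: capture the file at its new path.
        self._capture(event.dest_path)

    def on_deleted(self, event):
        # Soft delete: nothing is removed from VERSIONS, so the last
        # captured version of the file always survives a delete.
        pass

observer = Observer()
observer.schedule(CaptureHandler(), str(WATCHED), recursive=True)
observer.start()
try:
    while True:
        time.sleep(1)
except KeyboardInterrupt:
    observer.stop()
observer.join()
```

A production system would coalesce bursts of events and deduplicate content rather than copy whole files on every write; the point is simply that protection triggered by the event itself takes the scheduler out of the loop.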
Why scale demands selectivity
When you’re dealing with tens or hundreds of petabytes, you can’t back up everything equally. That’s not a technology limitation—it’s a physics problem.
"You can't back up 100 petabytes. You can't scan it all, and you can't move it fast enough. So, you need new mechanisms for protecting that data." – Jason Lohrey, CTO, Arcitecta
What matters isn’t that you have a copy; it’s that you know what needs to be protected, why it’s important, and how quickly you can recover it. That’s where Mediaflux excels: it’s not just a storage management platform but a metadata engine, an orchestration system, and a policy brain. We continuously evaluate data based on origin, usage, modification frequency, ownership, sensitivity, and time, then selectively protect what’s critical and nothing more.
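As a sketch of what that selectivity can look like (the metadata fields, tiers, and thresholds here are hypothetical, not Mediaflux’s actual policy schema), a protection policy reduces to a function over metadata:

```python
# Hypothetical illustration of metadata-driven selectivity: map each
# asset's metadata to a protection tier instead of copying everything.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Asset:
    path: str
    owner: str
    sensitivity: str        # e.g. "public", "internal", "restricted"
    last_modified: datetime
    access_count_30d: int   # accesses in the last 30 days

def protection_tier(asset: Asset) -> str:
    """Decide how aggressively an asset should be protected."""
    if asset.sensitivity == "restricted":
        return "continuous"                  # capture every change
    recently_active = datetime.now() - asset.last_modified < timedelta(days=7)
    if recently_active or asset.access_count_30d > 100:
        return "continuous"
    if asset.access_count_30d > 0:
        return "daily"
    return "archive-only"                    # cold data: one durable copy

asset = Asset("/data/scans/run-0042.dat", "alice", "internal",
              datetime.now() - timedelta(days=2), 340)
print(protection_tier(asset))  # -> "continuous"
```

Only the small, active, high-value fraction of the namespace lands in the most expensive tier, which is what makes continuous protection affordable at petabyte scale.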
Backup must become a conversation
Most data protection strategies start with “what we sell.” Ours starts with what the customer needs.
"You just ask, what data do you have, and what do you want to do with it? That opens up a real conversation."
– Jason Lohrey, CTO, Arcitecta
That shift from tools to outcomes is essential: backup is no longer just about compliance or disaster recovery; it’s about preserving operational continuity in real-time environments.
We’ve seen this in media production, where a lost file in a live stream workflow isn’t just an inconvenience—it’s a missed broadcast. We’ve seen it in research, where a deleted dataset can set back years of analysis. And we’ve seen it in healthcare, where losing imaging data isn’t acceptable—ever.
Zero RPO. Zero RTO. Why not?
When we ask customers if they want zero RPO and zero Recovery Time Objective (RTO), the answer is always the same: “Of course.” Who wouldn’t? The real surprise is that those outcomes are actually achievable—when the platform is designed with them in mind.
"If you write the stack yourself, you can do things like that. If something breaks, we can fix it in hours—not wait on an external dependency."
– Jason Lohrey, CTO, Arcitecta
Because we understand every bit of our stack, we can recover anything—at any moment, with minimal lag and zero guesswork. And we can do it whether the data sits on flash, tape, or cloud.
Not backup as you knew it. Protection as you need it.
We’re not replacing backup; we’re evolving it. Traditional backup still has a place, and systems will still need external copies. But for large-scale, high-value, rapidly changing data, protection must live within the data flow itself. That’s what Mediaflux does: it watches every file, tracks every event, and enables you to recover instantly, with surgical precision.

In the petabyte era, backup is not a job. It’s a function of the platform. That’s what we’ve built. And that’s what customers are now realising they genuinely need.
Learn more:
Jason explains how traditional backup breaks down at scale and why zero RPO/RTO is achievable when protection is integrated into the data platform—not an afterthought. Hear more in this insightful conversation with Anthony Spiteri on Great Things with Great Tech: Episode 102.
