Friday, December 20, 2024

As the ‘Age of AI’ Beckons, It’s Time to Get Serious About Data Resilience

Rick Vanover
Senior Director of Product Strategy at Veeam

As AI adoption accelerates, data resilience becomes critical. Let’s explore the importance of data protection, governance, and visibility in the age of AI.

Almost two decades ago, Clive Humby coined the now-famous phrase "data is the new oil". With artificial intelligence (AI), we've got the new internal combustion engine. The discourse around AI has reached a fever pitch, but this 'age of AI' we have entered is just a chapter in a story that's been going on for years: digital transformation.

The AI hype gripping every industry right now is understandable. The potential is big, exciting, and revolutionary. Still, before we run off and start our engines, organizations need to put processes in place to power data resilience and ensure their data is available, accurate, protected, and intelligent so that their business continues to run no matter what happens. Look after your data, and it will look after you.

Take control before shadow sprawl does

With something as pervasive and ever-changing as a company's data, it's far easier to manage when training and controls are in place early on. You don't want to be left trying to 'unbake the cake.' The time to start is now. The latest McKinsey Global Survey on AI found that 65% of respondents reported that their organization regularly uses gen AI (double the share from just ten months earlier). However, the stat that should give IT and security leaders pause is that nearly half of the respondents said they are 'heavily customizing' or developing their own models.

This is a new wave of 'shadow IT' – unsanctioned or unknown use of software or systems across an organization. For a large enterprise, keeping track of the tools that teams across various business units might be using is already a challenge. Departments or individuals building or adapting large language models (LLMs) will make it even harder to manage and track data movement and risk across the organization. Having complete control over this is almost impossible, but implementing processes and training around data stewardship, data privacy, and IP will help. These measures make the company's position far more defensible if anything goes wrong.


Managing the risk 

It's not about being the progress police. AI is a powerful tool through which organizations and departments can get enormous value. But as it quickly becomes part of the tech stack, it's vital to ensure these tools fall within the same data governance and protection principles as the rest of the business. For most AI tools, it's about mitigating the operational risk of the data that flows through them. There are three main risk factors: security (what if an outside party accesses or steals the data?), availability (what if we lose access to the data, even temporarily?), and accuracy (what if what we're working from is wrong?).

This is where data resilience is crucial. As AI tools become integral to your tech stack, you must ensure visibility, governance, and protection across your entire ‘data landscape’. It comes back to the relatively old-school CIA triad – maintaining confidentiality, integrity, and availability of your data. Rampant or uncontrolled use of AI models across a business could create gaps. Data resilience is already a priority in most areas of an organization, and LLMs and other AI tools need to be covered.

Across the business, you need to understand your business-critical data and where it lives. Companies might have good data governance and resilience now, but if adequate training isn’t implemented, uncontrolled AI use could cause issues. What’s worse is you might not even know about them.

Building (and maintaining) data resilience 

Ensuring data resilience is a big task – it covers the entire organization, so the whole team needs to be responsible. It's also not a 'one-and-done' task; things are constantly moving and changing. The growth of AI is just one example of a shift that organizations must react and adapt to. Data resilience is an all-encompassing mission that covers identity management, device and network security, and data protection principles like backup and recovery.

It's a massive de-risking project, but for it to be effective, it requires two things above all else: the already-mentioned visibility and senior buy-in. Data resilience starts in the boardroom. Without that buy-in, projects fall flat, funding limits how much can be done, and protection and availability gaps appear. The fatal 'NMP' ("not my problem") can't fly anymore.


Don't let the size of the task stop you from starting. You can't do everything, but you can do something, and that is infinitely better than doing nothing. Starting now will be much easier than starting in a year, once LLMs have sprung up across the organization. Many companies risk repeating the mistakes of cloud migration all those years ago: going all-in on the new tech and later wishing they had planned ahead rather than having to work backward.

Test your resilience by running drills – the only way to learn how to swim is by swimming. When testing, make sure you include some realistic worst-case scenarios. Try running one without your disaster lead (they're allowed to go on vacation, after all). Have a plan B, C, and D. These tests make it easy to see how prepared you really are. The most important thing is to start.
