In this podcast, we talk to Mathieu Gorge, CEO of Vigitrust, about the ongoing impact of artificial intelligence (AI) on data, storage and compliance for CIOs. Gorge discusses the implications for data, its volume, the difficulties of keeping track of inputs and outputs from AI processing, and the need to keep up with law and regulation.
Gorge also casts an eye over the potential impacts of the new administration in the US and the evolving approach of the European Union (EU) to data in AI.
What do you think are going to be the key topics that impact on compliance and data storage, backup, etc?
I always look forward to going to RSA to learn about new technologies and get my finger on the pulse as to what's happening in storage and compliance, and any related cyber security and compliance topics. This year, it seems we will see a lot of items around AI – not just AI technology, but the security of AI itself, as opposed to just AI-enabled technologies.
There's a lot of talk about quantum and post-quantum as well, so it'll be interesting to see what happens there.
And from a storage perspective, we're seeing some changes relating to the new administration in the US, and to what's being done in the EU with the EU AI Act, which impacts data classification and data storage.
It'll be interesting to see all this coming together at RSA.
I think we're going to have some very interesting conversations, and I expect some new vendors to come out of the woodwork, so to speak.
Drilling down into some of the aspects you've mentioned there, what do you think are the key areas in which AI has moved on in the past year?
My view is that AI was the buzzword last year. Everybody needed to look into AI to try to understand how it would improve their processes, improve how they use data, and so on.
A year on, we see that a lot of organisations have implemented their own versions of ChatGPT, for instance, and some of them have invested in their own AI platforms so they can control the data.
And so, we're seeing AI adoption growing – remembering that AI is not new, it's been here for years – but the adoption is really picking up at the moment.
What we're seeing in the market is people looking at: “What kind of data can I use AI for?”
We're also seeing a number of security associations starting their own AI governance working groups. In fact, at Vigitrust, with the Vigitrust Global Advisory Board, we also have an AI governance working group where we are working to map out all the regulations being put forward by technology vendors or associations or even governments.
It'll be interesting to see how much AI governance is covered at RSA. If you want to do AI governance, you need to know what type of data you manage, and we're going back to data classification and data protection.
The other issue with AI is that it's creating a lot of new data, so we've got this explosion of data. Where are we going to store it? How are we going to store it? And how secure will that storage be? And then finally, will that allow me to demonstrate compliance with applicable regulations and frameworks? It'll be interesting to understand what comes out of RSA on that front.
What do you think are the impacts of the new administration in the US on compliance and storage and backup, etc?
The new administration in the US, right from the beginning, has said it would invest in AI and that it saw AI as a great opportunity for the US. And in terms of deploying all of that, we know that the government frameworks that are already in place are going to be applied.
We are seeing organisations like NIST developing more in-depth AI frameworks. We're also seeing the Cloud Security Alliance moving towards AI governance frameworks of their own. We've even seen cities developing their own AI frameworks for smart cities and so on. I'm thinking of the city of Boston at the moment, for example.
And so, if you've got a government that is pushing organisations to use AI, they will have to have some governance on that. And it'll be interesting to see how far they go. Will they respond with the equivalent of the EU AI Act? It is likely – if you look at GDPR [the General Data Protection Regulation] in Europe, a few years later we had CCPA [the California Consumer Privacy Act], and we've had some state regulations at this stage – I think 11 states in the US have something similar to GDPR.
So, it's very likely that this will follow. It's not going to happen overnight, but I think some further announcements will be made in 2025 by the current administration.
What's the latest with the EU and compliance, especially with reference to the latest developments around AI, etc?
You know, it's funny – in the EU, AI is seen as a threat just as much as it's seen as an opportunity, much more so than in the US, potentially because of the risk appetite in Europe.
We are seeing every member state looking at their own AI regulation in addition to the EU framework. We're also looking at how AI integrates with GDPR. And so, in other words, if you deploy AI solutions, you totally change the governance of data.
You end up having data that is essentially managed by a system rather than managed by different people. So the concept of a data controller, and who is really in charge of the data, become questions.
I think it's interesting to see the various governments looking at, “Can we really deploy AI in a way that is safe and compliant?”
I go back to two key aspects – classifying the data and storing the data.
As you know, with AI, you've got that question of bias in the data. Is the data treated the right way? Is the data that you put in – it's then treated by AI, it comes out – is that putting you in or out of compliance with other frameworks like GDPR, and even the EU AI Act? And where should you store that data? What kind of protection should you have on it? How do you manage the lifecycle of that data within the AI framework? How do you protect your LLM [large language model]? How do you protect the algorithms you use?
And then finally, as you probably know, AI is very resource-intensive. That also has an impact on the climate, because the more you use AI at this point, the more capacity you need, and the more processing power you need. And that has an impact on green IT and so on.
So, I would urge people to look at the type of data they want to use for AI, do a risk analysis, and then look at the impact in terms of: where are you going to store that data? Who's going to store it for you? How secure is it going to be? And how is that going to impact your compliance, not just with AI regulation, but also with GDPR and other privacy frameworks?