Debbie Reynolds Consulting LLC


E129 - Tharishni Arumugam, Global Privacy Technology & Operations Director, Aon

Find your Podcast Player of Choice to listen to “The Data Diva” Talks Privacy Podcast Episode Here


The Data Diva E129 - Tharishni Arumugam and Debbie Reynolds - (48 minutes) Debbie Reynolds

48:25

SUMMARY KEYWORDS

people, data, privacy, ai, happening, law, systems, consent, bias, person, understanding, asia pacific, thought, transparency, uk, ethics, technology, countries, agree, model

SPEAKERS

Debbie Reynolds, Tharishni Arumugam

Debbie Reynolds  00:00

Personal views and opinions expressed by our podcast guests are their own and are not legal advice or official statements by their organizations. Hello, my name is Debbie Reynolds; they call me "The Data Diva". This is "The Data Diva" Talks Privacy podcast, where we discuss Data Privacy issues with industry leaders around the world with information that businesses need to know now. I have a special guest on the show, Tharishni Arumugam. She is Global Privacy Technology and Operations Director at Aon, and she is in the United Kingdom. Welcome.

Tharishni Arumugam  00:41

Thanks. Thanks for having me. And thanks for inviting me. I'm really excited to be on the famous "Data Diva" podcast.

Debbie Reynolds  00:48

Yeah, this is great. So I always like to start the show with how we met; we were put together on a panel for Privsec in London. So we were able to meet in person and do this panel, and it was very well attended. And it was great to actually be there with you, to talk with you. One of the things that really struck me about our prep and my time meeting you in person is that you have like a very sparky personality, you're able to relate these concepts in very simple ways to people, and you just have a lot of energy, and a lot of passion, obviously, for privacy. But I would love for you to introduce yourself and tell me about your trajectory and your journey in privacy.

Tharishni Arumugam  01:42

Sure. Oh, and thank you for such kind words. So I started out in privacy in Malaysia, the land far, far away. And it was a really new concept. So we don't have the concept of privacy so much as data protection in Malaysia. And when I started as an apprentice in a law firm, in order to become a real lawyer, I kind of got bored of your traditional litigation and arbitration. I was doing arbitration for a bit, and, you know, going to court and arguing about construction points, or sitting in a mediation room, did not excite me; tax law did not excite me. And so they said, look, we really need a junior person to do this new thing that's coming up. We've got lots of clients with demand for, you know, understanding what the law means, understanding what they need to do to comply. And so that's how I started out doing data protection, because I did this other stuff and it was, like, not for me, and I was really close to leaving the law, honestly, before I found data protection and privacy, which was, you know, just serendipitous. And then I got hired by my current company and got moved to Singapore, where I got a more regional view of data protection and privacy. And that was, you know, a really fun four and a half years, because it was really new in Asia-Pacific. Everyone who had responsibility over privacy in the scene in Singapore, we all connected, we all knew each other, we shared knowledge. And there's this Asia DPO community back in Singapore that is still very strong now. And, somehow, it's just become something that I've really been passionate about. I love what I do because it's so relatable. It's that part of the law that is so easily applicable to what you do on a daily basis. Unless you're a hermit and you shut yourself off from technology, privacy law affects your life, I think, in more ways than you would imagine. And then, you know, I wouldn't say that I have a technology background. But I like understanding things and breaking things down. And so every time someone talks about a piece of data analytics or technology, I like breaking it down for myself, which I find helpful for other people as well, to understand how certain things work and what the legal implications are around it, but also the ethical implications, because you can't really give proper advice, you can't really understand the landscape, without putting yourself in the shoes of the ultimate person that's affected by the technology that's being rolled out. And so I think that's the passion that you hear: I love the internet. All right, I am of that generation that absolutely enjoys the perks of the internet. And I love technology, so why would I not want to know more about it and understand how it can help and how we should regulate it as well?

Debbie Reynolds  04:46

Yes, phenomenal. Thank you for that background. I'm glad you brought this up about your experience working in Singapore and the Asia-Pacific. And I agree; I'm sort of plugged into a lot of folks in those areas, Singapore and the Philippines. And I feel like a lot of times when we're talking about Data Privacy and data protection, it's a lot of talk about the US and Europe, right? And there isn't enough talk about what's happening in Asia-Pacific, which has very deep roots in data protection and very mature laws. Tell me what's happening in that region that you think our listeners need to know about if they've only been thinking about Europe and the US.

Tharishni Arumugam  05:46

Yeah, look, the most important thing is to recognize that the GDPR isn't exactly the highest standard in the world, right? In Asia-Pacific, for example, consent is often the only viable legal basis for processing, unlike in Europe, where you can use, in a lot of cases, legitimate interests to process; there are obviously variations across the Asia-Pacific. So consent is a pretty high bar. And that's, like, the go-to mechanism in a lot of countries in the Asia-Pacific. And then the challenge is, unlike the EU, where you have a standardized piece of legislation, every single country has a different political landscape. And so that political landscape informs how the legislation around personal data and privacy will go. And so, you know, I think teams out in the Asia-Pacific are so understaffed, because you're looking at maybe 14 different laws and one or two people taking care of the region. I mean, I started out my career, and I was the only person for two, two and a half years doing the job of covering so many different countries. And then you're also seeing, you know, some countries have data localization in place. So if you are active in the Chinese market, you're going to have to take into account some of the new changes, the new laws, the new data transfer requirements out there. You know, there's a lot of privacy work out there that needs to be done. And so if you are active in those markets, you really need to sort of stop looking at it as, like, oh, I only need to be concerned about GDPR, I only need to be concerned about what's happening in the northern hemisphere and in the West. There's so much happening in Asia-Pacific that requires a much more detailed look into all of the little things that could trip you up, right, in terms of, you know, even things like registering your systems with a regulator, and not just your data breaches. So there's a lot of these things that people need to be aware of when you talk about doing a global privacy program.

Debbie Reynolds  07:54

Yeah, I agree. You touched on a really important point that I think a lot of people don't really understand. The GDPR popularized this idea of legitimate interest, right? There are six different legal bases, and for GDPR, consent is the lowest one, right? So they want you to use these other ones first before you get to consent, whereas you have some countries where either consent is the top, the most favorable way, or, like you say, for certain Asia-Pacific countries, it really is the only way to be able to do things. So I think that is something that people really need to be aware of. You know, I've found that, for example, when I advise people on stuff that's happening in the UAE, they're shocked because they're thinking, oh, use legitimate interest, and it's like, no, this doesn't work that way. So I think understanding these different regions, and hearing from people like you who have that perspective on what's happening in different regions, is very important. Let's talk a little bit about standard contractual clauses. I feel like I've been doing standard contractual clauses as long as there have been standard contractual clauses. And it was work that people didn't want to do, right, because people thought it was kind of boring, but since we started having stuff about Schrems and different things, the standard contractual clauses have gotten a lot more visibility, and some people talk about them as if they're a new thing, and they aren't. So talk to me a little bit about navigating standard contractual clauses in general, or at least talking to people about them, because it's like, well, we have to explain what we're going to do with data, and we didn't really have to do that before.

Tharishni Arumugam  10:04

Yeah, and look, you're right, you know, standard contractual clauses have existed for a long time for Europe, and you're seeing other jurisdictions coming up too and saying, no, you also need to have standard contractual clauses in place and explain what you're doing with the data with the client. And it's real accountability. Because if you don't know, as a data controller or data user, what your vendor is doing with your data, if you don't agree on fundamental basics when it comes to, you know, where your data is being processed, what systems you're using, or what other vendors or sub-processors you're using, can you really be, you know, accountable for what's actually happening? Can you say you've done your due diligence? So I think it's a way for regulators to, you know, assist you; it almost seems like, this is how you do it, right? This is how you get some sort of assurance through your contracts that this is what's happening, and both parties are agreeing to that. And there's no room to negotiate these standard contractual clauses, because they are what they are, right? I know. But you say that, and then I've seen the counterparty say, oh, no, I don't want that clause. It's the law, my friend. And it's not like something we've decided to draft, unlike the other clauses in the contract. And look, I think, with the Schrems example, obviously SCCs got brought up because of the short turnaround time in order to get the new SCCs in. But I think the harder part was around the data transfer impact assessments that we had to conduct. Because you have, you know, Europe saying, okay, you can transfer only if you meet certain, you know, thresholds when it comes to security and confidentiality and that sort of thing. But you have to risk-rank the countries, you know, like, what is the risk of these particular countries, because they can't make that political decision. But somehow you're expected as an organization, based on what you know in this country, or what lawyers in this country know about law and enforcement, to then say, well, okay, that's high risk, or that's medium or low risk. And I just found that whole process to be incredibly challenging. And I do welcome, actually, India saying that they'll have a list of countries. I just read that. That's, you know, a bold political stance to take, but it is so helpful for in-house practitioners like us, right? Because then we don't have to come up with our own determination, only for the regulator to maybe say down the line, oh, actually, that's high risk, I don't know why you assigned medium risk to that. I think there was a little bit of that when it came to the Schrems II process; that was a really, really challenging year and a half when that happened. And it coincided with me getting married as well, the dates of when you had to have your new SCCs in place. And I was like, great. So I was working all the way up till, I think, three days before my actual wedding. Because of Max Schrems. Thanks, if Max Schrems is listening to this. Thank you.

Debbie Reynolds  13:23

Oh, my goodness, that's a good story. Talk a little bit about transparency. So I think the culture shock that some businesses have with Data Privacy and data protection is that these laws or regulations are trying to bring in an unprecedented level of transparency that was never mandated or asked for before, right? So companies will take data, and they will do whatever. And then, you know, as long as they provide a service to a person, they've never had to say how they are handling data, or they've never had to reveal who their third parties were. So tell me a little bit about that business culture shock around adjusting to all these different requirements for transparency.

Tharishni Arumugam  14:20

Yeah, I think the whole point of transparency is so that the user understands what's happening with their data. But we've come to a point in history, I think, where it's now almost hidden in very long legal text, which is the privacy notice, right? We try to put everything in there. And it's not, like, out of bad faith or anything. It's done in good faith by trying to comply with the law. But then, I think we're overcomplicating it now, because we're putting every single maybe scenario into this transparency requirement, and, you know, is that really something that goes to the spirit of the law in terms of transparency for individuals? And I think companies are doing that in good faith, but also trying to figure out, okay, where can we not do that as well, right? What can we say? How do we pay a bunch of lawyers to say the most noninvasive thing about how we process personal data as well? And so I think we need to make transparency a much simpler process, a simpler way for people to access their data and to understand what happens with their data. Because I think people are now at that point where they're agreeing to everything, not because they trust everyone; they just feel like they don't have a choice anymore. And that's really sad, right? Because you want people to be able to have some sort of understanding and some sort of choice when it comes to what happens to their personal data. And that's not really what's happening right now today with transparency. But I like the fact that we have, you know, privacy advocates out there that are going through privacy notices and finding out, okay, for high-risk applications out there, or high-risk systems that everyone's using, like Facebook, or, you know, the use of AI, etc., you now have privacy advocacy groups that are helping individuals out, just like consumer protection advocacy groups. And I think that's where, if we get to a point where there's more of this work being done, then potentially we might come to a point where transparency does benefit the end individual; they do understand what's happening when they want to understand. Like, half the time, you know, we do know people don't really want to know too much about it, unless they've been wronged somehow.

Debbie Reynolds  16:59

Very good. I want you to travel with me onto the philosophical plane, okay? I used to think the end-all, be-all for privacy in terms of the individual would be agency, so it would be control of your data and stuff like that. But in the technological age that we're entering now, there is beyond exponential growth of data, right? We're entering into an age where almost everything that you could possibly imagine about us is being recorded and collected and correlated, or whatever. And I feel like agency isn't enough. Because even if, let's say, you had agency, which gave you control over all your data, what would you, as an individual, be able to do with all that? Like, for example, let's say someone was on Facebook, and they request all their Facebook information; what would they do with all that information? So I'm wondering if, although we all want to see people get to the point where they have control of their data, or agency, part of agency needs to be, what can you as a human do with this information? I don't know, maybe AI solves that problem for us. But I'm just kind of riffing at this point. What are your thoughts?

Tharishni Arumugam  18:32

No, no, I like this philosophical train of thought. And I think it ties into what I said earlier of, like, okay, well, I kind of know what's happening with my data; I kind of have a choice. But what does this choice mean to me? And how do I do something meaningful with my choices? And that's the thing, that educational piece that we're missing, right, with people, I think. And consumer protection has been really good at this, right? Like, they've had a massive PR campaign, and people now have an understanding of what they can do as a consumer. But when it comes to their own personal data, it's a little bit like, okay, so if I get all of this information, if I ask Facebook to stop, what happens next? Like, you know, they probably don't know that there's still going to be data that's being stored by Facebook, for example, in different places, and might be used from, like, a metadata perspective. Do people actually know that? They probably don't. And I find, actually, what's interesting is people go, oh, I'm just a statistic or an aggregated result, but they don't realize how that's going to come back and affect them after it's been aggregated, right? Like, oh, so now I'm in this bucket, and when you put me in this bucket, that's going to affect me in a different way as well. I just think there's a lack of education, and I think that's on purpose when it comes to companies that collect personal information for this sort of purpose, right, of building a whole profile and making their entire livelihood from this particular type of model. And we're seeing obviously more and more of that sort of stuff popping up; you know, the world of data brokers is a scary one, when you see how much they can collect about you and how much they keep about you. That's a really scary world. But yeah, I mean, look, agency is super important. What do you do about it? I think we need to be able to say at least someone's got the choice. But they need to have an informed choice; that's the next step. You can give people all of the choices in the world, but if they're not informed about what it actually means, then it's meaningless, I think, philosophically.

Debbie Reynolds  20:53

I agree with that. Yeah. We'll have more deep thoughts about that in the future. I just love to ponder that idea. So I want to talk a little bit about the panel we did together at Privsec, where we talked about Data Privacy and AI; you had a lot of great, great ideas. But what concerns you right now with AI and Data Privacy?

Tharishni Arumugam  21:18

I think what concerns me right now is a lot. It's like a big laundry list when it comes to AI. But if you distill it down to certain principles: one, is it even accurate? You know, in terms of the results, like, putting privacy aside, just from a general liability perspective, can you trust that the information or the end product that you're getting from an AI system is correct, is dependable? I don't think we're actually there yet. And I don't think any of, you know, the generative AI vendors will put up their hands and say, yep, it's 100%, you can trust it. I think that worries me. And then, two, I think the other thing that worries me is the lack of understanding or education around what AI is supposed to be and what it's supposed to do. I think we're hearing a lot of, you know, business leaders going, oh, this is going to change things forever, and we're going to see changes in labor and the labor market because of AI. But, like, are we actually there? Do you actually understand what generative AI is currently doing and where we're at with this sort of technology? That worries me too. And then third is the issue we talked about, which is the legality of the data that's being used for modeling the AI technology itself. Obviously, there's a lot of issues around IP and, you know, terms and conditions of websites. And there is no clear clarification on that, right? And that really does worry me about using generative AI products, because so far, and correct me if I'm wrong, we haven't heard any sort of statements from the providers saying, no, our products are fine, you know, we've got all the permissions, we've got everything that's required, it's all good, and we've tested for bias.

Debbie Reynolds  23:16

Wow, yeah, that's true. That's true. So when we're talking about this, I think the next thing I want to talk about is consent. It's tricky, and this is how I think a lot of these companies are getting around some of the other more tangled legal issues. Because when you consent to something, there's almost no ceiling to what consent could be, right? You can't legally sell your limbs, but that's about it; you can consent to almost anything, right? So in these AI models, I think that they are trying to rely on consent. And we know that consent is very asymmetrical. But I don't think we even understand the level of asymmetry there could be in an AI system. What are your thoughts?

Tharishni Arumugam  24:23

Yeah, absolutely. I think even without AI, there is an asymmetry of power when it comes to consent in general. Like, we know that if you want to use a service, they make you consent to things, and unfortunately, people don't really have that symmetry of power to say, no, I don't want to do that. But then you put AI into it, and there's this whole problem with the explainability of the model and what it could potentially do with your data. And to go back again to the education piece with AI, do people know what they shouldn't be putting into the model and what they shouldn't be inputting into ChatGPT or Bard? I don't think so, right? Because that's the legal disclaimer for a lot of these products, oh, make sure you're not doing this, but it's embedded somewhere else, not at the point at which you're putting your information through, as far as I understand it at least. And so it's no longer just the asymmetry of power, but the asymmetry of information and understanding of what happens to your information that I'm concerned about when it comes to these products being pushed out. And I think, you know, people who are using these, like your layperson that's using ChatGPT, or Bard, or Lensa AI, which was that art one, I don't think they actually know that when you put in your image, it's now theirs, right? Or that a lot of the art that is being produced is scraped from other artists who have not consented for Lensa AI to have access to it; they're not earning a single cent from Lensa AI. And so I feel like this asymmetry of information is being hidden behind, look at this cool thing that it can do. Look at this cool picture over here; look at these cool responses that I can get from ChatGPT; look at how cool it is that it can draft my stuff. And it really goes back to the principles of ethics. And I don't know if you read, but I think this week, if not last week, Microsoft fired its data ethics department. And so that's a little concerning, just as a punter, as a layperson looking at that.

Debbie Reynolds  27:00

Right? Yes, yes, I agree. I agree with that. I think we, as women of color in privacy, have a special relationship with how bias plays out. Because we've been the victims of that, so understanding that or understanding why that's important, I think, is a great discussion to have. So give me your thoughts about bias and how that plays into privacy issues.

Tharishni Arumugam  27:31

Yeah, so I mean, first of all, it's whether the data model has been sufficiently trained on a wide variety of people, their responses, their backgrounds, because if it's not, we both know it's just not going to work, right? For a couple of years now, there's been a Black female researcher who's been doing work around facial recognition that doesn't recognize people of color. That's a major thing. And then, I've also asked Microsoft about this; they're rolling out the whole Copilot application that will transcribe notes from Microsoft Teams now. Have you trained your model to recognize different accents? You don't want to have that situation where someone is speaking English, but the model can't detect it because it's a different accent. You know, it's like when you watch a TV show, and someone's speaking English with a different accent, and there are subtitles. It limits the person's ability to connect with other people just because they have a different accent. And then, you know, I think we talked about this on the panel as well, in recruitment systems: how the use of language, the demographics of your job application, where you went to school, even if you strip out all of these things, if you've done certain types of jobs and you've described them a certain way, that could be ascribed to your ethnicity or your socio-economic background, and that could lead back to your ethnicity, again, because certain areas or certain types of jobs are done by people of a certain gender or, you know, a certain ethnicity in different countries. And so, I think so much thought needs to go into bias and removing bias in the system. And I think regulators have thought about that, of potentially introducing sensitive information to minimize bias, but who's the expert here? Right, I think that's what I find interesting: is the expert the data scientist that's supposed to know? Because I don't think that's who it is. You know, it has to be someone who understands social and economic situations in different countries. It's, you know, almost like an anthropology study; someone who understands diversity and inclusion and the far-reaching aspects of this, that companies need to have in their arsenal in order to say, okay, we've tested for bias in these different ways, we've tested it with this sort of focus group and these sorts of groups, and we think it's okay. And again, you can never be 100% foolproof; that's what we know. But without thinking about that, and without doing a stakeholder analysis with people who are of different backgrounds, ethnicities, genders, and gender identities, you're never going to find out if your system is biased or not.

Debbie Reynolds  30:42

Absolutely; I mean, you're building these products for humans, not just certain types of humans. So it's like, if you test it on a small or very narrow group of people that are involved in the making of the system, and then you try to take the system and push it out on everybody, it's just not going to work. And a lot of those bias issues can turn into really bad issues, right? We're seeing people being arrested because a facial recognition system misidentified them. We're seeing children being put out of school because maybe they're doing school from home, and the algorithms are not picking them up, like maybe their background isn't lit the right way or something, and they're like, oh, well, that person isn't in the class, and it's like, whoa, here I am, right here. So I mean, thinking about the harm that can happen to people, and thinking about groups that are either underrepresented in the making of these systems or people who may be marginalized; they need to be on the internet, they need to be able to do things in digital systems that don't harm them.

Tharishni Arumugam  32:07

Yeah, yeah, 100%, right. And I'm so glad you brought up marginalized societies, because these are the people who most likely would not have access to data literacy or technology literacy, right? And they wouldn't know how to maneuver to make sure that the bias doesn't affect them. And those are the people that anyone developing any sort of AI that's going to affect people at the end of the day needs to take into account. And I think that's so key. I mean, we know that traditional science has had women in the backseat for a long time; in the field of medical science as well, different bodies, different ethnicities have taken a backseat, and we're only now starting to recognize that, you know, certain markers, even genetic markers, are different. A white person's liver is not going to look like a brown person's liver, for example. But sometimes we can only detect cancer in a white person's liver, because that's the years of research that we have right now. I think there was a really great John Oliver episode on AI that I watched, and that was actually the day before our panel. And he said something really interesting, because they trained an AI model to look at photos of moles or different types of skin conditions to detect whether there was cancer. And what it detected was that if there was a ruler in the photo, then there would be cancer, because all of these photos would have a ruler in them to measure how big the mole is or how big the skin condition is. And so we're not even there yet figuring out the data input for basic things like that, let alone, you know, ethnic bias or gender bias or sexuality bias in the systems. I think, you know, the example I gave as well during our panel was how a self-driving car hit a person because it didn't detect a person who was jaywalking, because that wasn't part of its data input; a person was always walking on a crosswalk or a zebra crossing. So if those things can happen, and those have nothing to do with, you know, the kind of people they are, imagine the kind of bias that could happen, the negative impacts, when it comes to gender and ethnicity and sexuality.

Debbie Reynolds  34:40

There was a medical study done where an AI was analyzing X-rays of people, and they said the AI was able to tell the race of a person without them telling it. I don't know how that's possible. But that is a huge bias issue. That's a huge privacy issue as well, right? Because you're bringing sensitive data categories into AI systems that could possibly create, like, discrimination or harm from those systems. So maybe, let's say the tech says, well, the African American group, they're not going to get the cancer treatment, but these other people will, because that's what the data set says. So yeah, pulling those markers out is very important. What are your thoughts?

Tharishni Arumugam  35:40

Absolutely. I think, you know, this comes down to the explainability of the AI model, right? Like, why has it made that decision? Because if you have a black box AI, who knows what it's doing and why it's come up with this conclusion? As exciting as the possibility of technological advancement is, if it impacts people and you can't explain why, you just can't fix it, right? And definitely, I think, first principles, right: computers don't create information right now. They're just regurgitating information that exists, in a much faster way. Like, if our brains were that fast, we'd be able to do the same thing. And so if we're feeding it information that's already biased, it's not going to come up with anything but biased information. And the medical field is such a great example, because it is biased today; we know things like a BMI measurement, for example, are completely off, because it's based on a certain group of people; that BMI scale is not supposed to be something that applies to everyone. And yet, doctors today will still do your health tests and say your BMI is high. Like, I always get a BMI of obese, you know, because it doesn't take into account my ethnicity and how I'm built. It's just this random standard that people came up with years ago, and they're still using it today. So imagine if you put BMI information into an AI model; it will just tell everyone, without any further information, yep, you're unhealthy, you're unhealthy, you're unhealthy, without actually going into the details of why it's come up with that. So until the source data gets cleaned up in the medical field, I can't see how AI can help in these situations. You're absolutely right.

Debbie Reynolds  37:42

Yeah, the AI tools that are being built now, I see them more as something that can help in more low-stakes situations. I have a problem with using these AI systems in high-stakes situations, especially if they're leading to decisions, or people are abdicating their human judgment to the AI system, right? Because we have to be able to look at those blind spots to make sure that the things we're putting in there and the decisions that we're making aren't harming or excluding people.

Tharishni Arumugam  38:17

Absolutely, I think the low stakes is where it's at right now. But in saying that, I think there's this misconception that if you're, like, a manual data entry person, sort of a low-skilled worker, you're going to lose your job to AI. I actually think that's not the case right now; I think it's actually that second tier. You know, we had a demo of Copilot, and I was like, this will only work if we have proper data management in place and proper data inputs in place. Data input is still a big thing. But what it takes away is that layer of people making nice slides, or people assisting with making, sort of, you know, client-facing documentation; that's the layer I think that's going to be affected if they don't know how to use the technology that's coming up. You know, I think if the pandemic taught us anything, it's that these low-skilled workers, and on the podcast that's all in inverted commas, are the people we rely on the most in society; we can't replace them by sitting in your living room on your computer. That's just not the reality we live in right now.

Debbie Reynolds  39:32

That's true. I agree with that. I want to talk with you a little bit about the UK. You're in the UK right now, and I feel like the UK is in a very interesting position. This is after Brexit; you're not part of the EU anymore, but you're connected to the EU. You have a strong relationship with the West, and the US and Europe are different, right, when it comes to data protection. And the UK has these new proposals about how they want to change things. So just give me an idea of what's happening in your world in the United Kingdom with Data Privacy.

Tharishni Arumugam  40:21

So my caveat is, as much as I am based in the UK, I monitor more of a global standard than I do the UK. But I spoke at the IAPP UK Data Protection Intensive two weeks ago, and John Edwards, the Information Commissioner, gave a speech; it came before the announcement of the new bill and what was going to be in the new bill. And I think you're going to see, you know, the ICO trying to hold on to the values that it's had for a long time when it comes to data protection, which is very much informed by the GDPR. And that's still going to be there. But I think how this is going to affect businesses is, if you're a local UK business, everything that's coming out is, like, plus points, plus points; you know, it's easier, for example, for your records of processing activities. But if you're a global organization, this is where you're going to start struggling with the adequacy of having data transferred to the UK. We don't know how long this is going to last, if this is the position that the UK is going to take. Now, will the EU think the UK is as dangerous as the US? Probably not. But trying to get adequacy is going to be such a challenge. You know, with what's being proposed right now, there are fundamental principles that still adhere to GDPR. But, you know, I think most organizations are probably still going to try to keep to the GDPR standard; you've already started it, so why would you roll back to a less compliant, or I guess less compliant might not be the right word, roll back from the lower-risk position you've gotten to from the work that you've already done? So it's an interesting time, for sure, with the UK regulations, because they've also taken a different position when it comes to AI regulations. And so it remains to be seen what the final product is going to look like. But the UK's position, I think, is that it's more about commerce now, it's more about engendering business; the economy's not great. So it's going to be all about making sure the UK is more of a destination for technology, for development, for economic development, than it is a barrier. That may be the advantage that they have with this over the EU when it comes to investment in technological structures.

Debbie Reynolds  42:52

Yeah, I think a lot of people are taking a wait-and-see stance on it, because nothing's changed yet; it's basically proposals, and things have to work their way through the political systems and stuff like that. And I advise my UK clients to just sit tight right now; there's really nothing to do at this moment, you know, we have to see how this plays out. But it was kind of a nail-biter over here in the US for us around adequacy between the UK and the EU, because I was like, oh, my God, is it going to happen? Is it not going to happen? And we were so excited when it did happen, you know, because not having it just creates more complication for people who have to manage a lot of these different areas. So I'm hoping to see more; even if we don't have laws that are the same, at least we can have some principles that we can agree on.

Tharishni Arumugam  44:03

Absolutely. And, you know, I hate to bring it back to the AI discussion, but I think that's what we're seeing in AI regulation; the principles are all tied to each other in the different countries that have AI regulations. So maybe that will be an easier landscape to maneuver once all the laws are in place. Compared to that, the privacy landscape is potentially a little bit more thorny, because it's had a lot of time to develop in different countries, with attachments to the political system as well.

Debbie Reynolds  44:37

Yeah, I agree. So if it were the world according to you, Tharishni, what would be your wish for privacy anywhere in the world? Whether that would be regulation, law, technology, human stuff, human behavior, what are your thoughts?

Tharishni Arumugam  45:01

That's a big one. I have such a long list, but I'm going to tell you what I think the regulation should have, which is that every single company in the world needs to put data ethics into its core belief system. Because I think we're focusing too much on privacy or security or competition law, when, if you have data ethics in place, I think everything flows down from that. Because you care about what happens to the end individual; you care about how the data that you're collecting or using affects the individual, more so than just following what might potentially be a tick-box exercise, which privacy can be. I think privacy can become a bit of, like, okay, you can do it if you tick these boxes and you've done these things, but the "should you do it" is very hard to legislate. And not enough corporations are asking themselves the question, should you do it, if you can do it? And that would be my big wish, that data ethics becomes the core and the founding principle for every company moving forward.

Debbie Reynolds  46:23

That's a good wish. I like that. I think what a lot of companies are doing, or what some companies are doing, is the bare minimum; they just feel like, let's just comply with the regulation. And that's fine, right? But you're not going to get a gold star or a lollipop because you didn't break the law. People aren't going to be impressed by that. But they will be impressed by your stance on humans' rights to their data or how transparent you're going to be. So I think there's an opportunity for businesses to step out and regain that trust. But then also, ethics aren't laws, and not all laws are ethical. So having ethics be part of the DNA of organizations, I think, is the way forward.

Tharishni Arumugam  47:17

Absolutely. Absolutely. You've hit the nail on the head: not all laws are ethical. Sometimes they're made by lawmakers that have political agendas; we know that. Laws don't exist because they've been handed down by the gods of data ethics, or ethics in general.

Debbie Reynolds  47:38

That's right. We agree on that. Thank you so much for being on the show. This was tremendous. And I know our audience will like it as much as I did.

Tharishni Arumugam  47:50

Thank you so much for having me. This is a really good conversation. I really enjoyed myself. You know, we don't get to talk like this a lot. And I'm happy to share my thoughts. Thank you so much for inviting me to this.

Debbie Reynolds  48:02

Oh, I really appreciate it, and we'll talk soon. Absolutely.