E139 - Antonio Rocha, Data Leader and Privacy Advocate, Europe

53:26

SUMMARY KEYWORDS

privacy, ai, people, data, happening, huge amount, deep, companies, business, business model, uk, collaboration, big, thinking, technical, data governance, organization, products, broadly, world

SPEAKERS

Debbie Reynolds, Antonio Rocha

Debbie Reynolds  00:00

Personal views and opinions expressed by our podcast guests are their own and are not legal advice or official statements by their organizations. Hello, my name is Debbie Reynolds; they call me "The Data Diva". This is "The Data Diva" Talks Privacy podcast, where we discuss Data Privacy issues with industry leaders around the world with information that businesses need to know now. I have a special guest on the show all the way from the United Kingdom, Antonio Rocha. He is a data leader and a Data Privacy Advocate. Welcome.

Antonio Rocha  00:42

Hello. Hi there. Nice to finally meet you.

Debbie Reynolds  00:47

Yeah, well, we're on LinkedIn together. I think I invited you twice because I love your deep comments. The commentary that you do tells me a lot about you in terms of your operational understanding of data and information. And I'm always excited to see your comments and see the things that you say because, as someone who has deep operational knowledge, I feel like we're kin; you talk about these issues in such deep ways and also very understandable ways. So you have deep knowledge, but then you're also able to extrapolate that in a way that I think anyone can understand. So can you tell me a bit about yourself and how you ended up being a data leader and a Data Privacy advocate?

Antonio Rocha  01:40

Well, first of all, thanks very much for having me and for your comments on some of my contributions on LinkedIn. I would say that it's probably important to say that English is, obviously, my second language. So I'm still sometimes getting a hold of communicating certain broad strokes of the information that I want to, while at the same time it is always a challenge to strive to simplify complexity and make it usable for the rest of the community. That's always on my mind in some of my comments. In others, I might be, you know, just addressing some issues that I feel need to be addressed. The other day, I was thinking about one of the first times we met online, on a sort of comment by the President of the IAPP. And that was almost shocking to remember, because it was a very striking moment for both of us, I think. So, yeah. Privacy is definitely an area that I love. I would say that my first big approach to privacy was as a contributor, as a project manager only. Essentially, one of the guys who was doing the project with me was a data scientist, and we were trying to help social workers in Australia find children at risk of being molested. For all of the obvious reasons, once you get into those sorts of projects, and especially that being my first big project, I was kind of hooked, actually, because of the level of harm, the level to which we could use this new technology to really make a small difference for a subset of users on the other side of the world. But at the same time, it also helped me realize how important often small technical design decisions are, and what an impact they will have on the overall solutions, products, etc., that one is making, especially when you're the third party and you're far away from the users. You're not seeing them; even if you have very little data, you're just doing something to help them in a very specific process.
So part of my entry into the privacy space was really because of that project; it really marked me as a data professional.

Debbie Reynolds  05:39

Wow, that's very impactful work. One of the things I like about privacy, well, first of all, my entry into privacy was very selfish, right? I wanted to know what my rights are, what I can or can't do, and stuff like that. But one of the things that I really love about privacy is that you feel like you help people, right, and care about their feelings. So it's not just earning a paycheck; you're actually doing something that could be very impactful for other people.

Antonio Rocha  06:08

Yeah, yes. Well, I mean, there are obviously very different dimensions here. Privacy tends to be either a human rights issue or, as you guys have seen it in the US, more of a consumer issue. If I'm being honest, I usually try to stay with a more or less balanced view, because at the end of the day, one of the big reasons why I moved from Portugal to the UK was the global aspect of actually being able to contribute to global projects, global companies. So when we have that requirement on top, we're always dealing with a huge amount of regulation, often regulation that is not easily relatable or translatable back and forward. So there is a degree of complexity to the products that we're doing and to the approaches that we must have in order just to deliver, let's say, a basic level of quality, which is normally what the industry goes for. It's, you know, 51%; it's not 99% accuracy, right?

Debbie Reynolds  07:40

So I didn't know that you moved from Portugal to the UK. But this is a great time to be in the UK around data protection. You guys, I feel like you're in the middle of everything. It's like two parents getting divorced, the US or the EU, and then Canada, and the UK is in the middle. Tell me what is happening in the UK around data protection that you think people need to know, or the things you're noticing, especially post-Brexit and all the different geopolitical things that are happening in the UK.

Antonio Rocha  08:16

That's an interesting question. Thank you. How should I say this? I think there are definitely some interesting opportunities in, again, the global sense of the UK. We've always had a considerable number of companies doing global work; the UK ends up, for many reasons, being a good place to do global business. Broadly speaking, in terms of the privacy spectrum itself, I think there are some challenges there, if I'm being honest. My feeling is that the government itself is still trying to grasp what privacy is, which is an odd thing to say, especially considering that we're talking about the same government that straddles the security side of privacy and, let's say, the legal side of privacy. So let's be a little bit blunt, in the sense that we can look back at privacy as a technical discipline as well. A lot of what happened after Snowden, for instance, was that we realized that a lot of the communications, a lot of the things that were happening around that time, were actually UK-based. So having this sort of notion that the UK, for some reason, doesn't really grasp the technical aspects of mass surveillance systems is not realistic. I would call it a sort of political play, which is fairly common. As a country, we are very dual in our culture. And that's useful for business. But at the same time, we also have to be realistic and be upfront that this is our way of doing things. And probably this is a way that we should be able to maintain, because not only is it useful for us, but it's also useful to have a degree of divergence from all of the other main players with whom we are partners. Because the concept of partnership is being able to maintain not only your importance but also to profit from it. Now, I'm not sure if all of us in the community are aligned with that thinking. Maybe it's part of the UK's culture; we have a huge amount of contributions from people from all over the world.
We are very diverse; we have, again, contributions to a society made by people from all over the world. And that's a very interesting thing, a sort of United States of Europe, so to speak. So this is important to understand, in the sense that privacy is a lot about political aspects, but also about culture. It's deeply influenced by culture.

Debbie Reynolds  12:22

That's a great insight. I never thought about it that way. I guess that's true. There are a lot of diverse people in the UK, a lot of business people that come through there; it's a very important business place in Europe, for sure. So it definitely is unique in that way. Is there anything happening in the world now in privacy that concerns you? Like, there's something happening and you think, I don't like this, or, what's happening with that?

Antonio Rocha  12:53

I would say that one of my main concerns is definitely the lack of implementation of privacy. As I've mentioned before, I think there's still a considerable gap between the technical world and the legal world. The technical world is always saying there is no privacy; privacy is impossible. And the legal world is saying, well, we have privacy, more as a consumer thing, so here are some policies, here are some contracts that will obviously sort everything out. I guess that my concern is that although I see a degree of joining up between these two worlds, we are still in need of deeper implementation of privacy throughout all of our systems, both from a security standpoint, but also looking at privacy as a more positive influence on the whole way that we manage systems. So broadly speaking, I would say that there's a huge gap worldwide between, let's say, a more technically driven version of privacy and a more legalistic version of privacy. The technical version is in an ecosystem that is more technically driven, let's say China, as an example, whereas the legal version is much more towards the US and the EU. And to be honest, that doesn't make much sense, because a lot of the technology was invented by the West. The reason why we let it arrive at this present state of affairs is that it was more profitable to do it that way. And I guess that in the present situation, where a lot of our economies are really struggling and we're looking into a lot of waste, we can also see privacy as one of the main drivers to actually cut out a lot of the technical debt, and a lot of the waste that exists within AI systems and within engineering itself, as a big lever to help that transformation. So we can position all of those high-priced people, such as data scientists, etc., into the pipeline of actually being able to produce data products.
On the far end, which is more technically driven, they have more intelligence put into it, and we don't waste, I would say, probably 40 or 60% of all of our resources just dealing with technical debt, because that doesn't make any sense at all, from an economic point of view, from a systems point of view. And at the same time, we also have to realize that in Asia as a whole, there is a much stronger appetite for AI usage, for AI consumption. You know, I think it's important to discuss, you know, the new policies on AI. Oh, yes. Yes. So how should I put it? It's challenging to say, but I guess the EU is approaching AI in the usual way of doing privacy legislation, which is quite heavy, without actually having a good understanding of how agile works. What is the data pipeline? How do we actually go from a database to an AI product? Have they ever, you know, sat with a developer team and actually understood what they do in the practical sense of things? How do they choose data? What sort of algorithms do they use? I'm not sure, if I'm being honest. I have that experience; I worked side by side with AI developers on a daily basis. And my read of the present approach to the regulation of AI is probably negative, because, if I'm being honest, it doesn't make much sense. If we compare it to the recent events in AI legislation in the UK, I'm not sure that's the path either, because essentially we're almost saying, well, you can do whatever you want, you'll be responsible for the outcomes, but you basically decide how you're going to implement this, and we're basically not going to check up on you. I'm not sure I agree with that. There are certainly a lot of things in which AI regulation still needs to improve. But I would certainly like to have much more input from the regulator to help working groups and industry groups, where we can, in a public way, present solutions that work for both sides.
If I'm being honest, right now I kind of feel that everybody is pretending that they're doing something, but they're not really concerned over what the output is, what the impact is on the people who will be influenced by a particular product. So I would also like to see some element of civil society, you know, something that comes from civil society and says, we are the representatives of civil society for this particular group, and we want our input to be considered when you're developing this, be that a consumer group; we have to be open and flexible. But I would like very much to see this conversation happen, because I don't think there are enough communication bridges open. And I kind of feel that most of the regulations and most of these things are meant for you to politically say that you're doing something while also being able to say, actually, I'm not responsible for that. And that's a mistake. As a society, we need to take ownership, and we need to make it work. At the end of the day, humans will be impacted by a certain solution in the end, and we have to care about each other, because the impacts can be quite high. Yeah.

Debbie Reynolds  21:34

I agree with that. Wow, that's a lot to think about. It's true, right? When we think about the way that people are doing these AI products and stuff, I tell people we're in the "it sucks to be you" part of human development, because it's sort of like finger-pointing; no one wants to take responsibility. It's your responsibility; no, it's your responsibility. And in the middle, people can be harmed because no one really wants to say, okay, I'm going to do this product, I'm going to take responsibility. As opposed to right now, where we're like people running with scissors, in a way.

Antonio Rocha  22:12

Yes, definitely. I see that happen a lot. But at the same time, I want to be very honest and positive. Most of the people I've known for my entire professional life really care about these things. It's not like the harm happens because people are purposely trying to do something that is not ethical; they actually care a lot. They actually develop very complex ways to make sure that they're not doing any harm. But having said that, it's still somewhat of a fringe movement, even within AI folks. And the reason is simple. Most AI folks are not really AI folks. They're just engineers. They lack the development, training, and understanding of a good part of the tools that they're actually managing. Because we might have started, you know, this more recent phase of AI 10 years ago, but at the end of the day, the number of actual people who have data science training is pretty low. I would say that probably 70% of them are data engineers. That's it. They're not modelers; they're not doing the mathematical parts, which has a lot of implications for these solutions. So a lot of the harms themselves kind of happen because there is a huge amount of debt built into the past 10 years of systems. And there is a huge lack of governance, simple things. And we've seen some of the recent reports from the IAPP, etc. We've seen that the most basic elements of even privacy governance, in terms of programs, etc., weren't really happening. One of the most striking details that I've ever seen was the question of how many actual meetings you have between legal, privacy, and tech. And the best answer was once a month. Once a month is not collaboration; once a month is, we're meeting to check up on, you know, how things are doing. But we're not actually writing a book together.
We're pretending we're writing a book together; you know, you write your chapter, I write mine, I don't read yours, you don't read mine, and we move on. The book needs to be written at the same time by the same people, and they actually meet to discuss the text on a daily basis. That's the level of collaboration that is required, and not only to do privacy; privacy is just the easy part, so to speak. To actually do AI audits, you need that level of collaboration; privacy is just sort of the baseline. And if you look at privacy as sort of the baseline, then we have to go deeper and say, wait a minute, there's a baseline, which is security, which is actually quite weakened, and being weakened every single day as we speak. Now we have ChatGPT. I mean, there are people already using GPT to further exploit security. So my point is, it's impossible to have this level of complexity in all of the things that we want to manage without sound investment in all of the pillars. And one of the big pillars is not only privacy but security, because without security first, you know, there's nothing left.

Debbie Reynolds  26:59

That's true. That's true. You touched on this twice; I guess I want to get your advice on how we do this. So there's definitely a silo within organizations between the legal area of privacy and the technical area. How do you think we need to bridge that gap? Because I've seen that a lot. And I think this is mostly because of the way organizations have traditionally worked; it's been like Santa's workshop, where everyone does their little part, and then, in the end, magically, something is supposed to come out the other end, right? But it doesn't work that way.

Antonio Rocha  27:36

No, yes. I mean, unfortunately, or fortunately, we are in a very different era now. Everything that we're talking about here ends up being a kind of management problem, and one of the reasons why I look upon these things as such is that I've always loved management. I went into the legal side as a second option. But actually, now I'm returning to management in a much more tech-driven way. So I've always thought about most of these issues as management issues. Collaboration is hard because our societies are industrial, so we kind of specialized into silos to deliver very small things. One of the big reasons behind this challenge is that before, we had huge companies, let's say, doing very well, where everybody did a very tiny thing. Now, unfortunately or fortunately, due to the expansion of software, in which software is eating the company, we're arriving at a stage where the pyramid is much, much smaller. It produces a huge amount more, but it needs to do that in a smart way, meaning the people inside that pyramid will have to be able to understand the technology that they're managing in a much more in-depth way, both as leaders, as doers, etc. And not only that, before, we had a business world and a technical world, and this divide is still pretty much alive today, and it's one of the big reasons why this collaboration is so needed. We need to combine both; otherwise, the pyramid is not going to have the possibility of being strong. It will crumble if we don't do that. As in a balanced approach to business: a balanced approach to business requires deep investment in technology, in skills, and in skills maintenance and growth throughout the whole organization. And this is not just a learning problem or an HR problem; those are management problems. So we can't expect to obtain X amount of profit from a business model which is under attack and being eroded every single day, because the new business model is the business model of data.
So we need to be much more adept at the new business model, with the right people who are able to use this business model in an intelligent way, because all of the tools that they have available to them are state of the art. We are now capable of putting AI in the hands of what I would say is a middle manager, a middle employee. But the opportunity is also a big, big risk, even for AI itself. And I'll explain this very shortly. Before, AI was really developed very focused on one single product, right? The new era of AI is companies having the possibility to become what we describe as mathematical corporations, something that is highly automated, highly orchestrated. The problem today is that we have a sort of cloud of AIs scattered all over the place. So now we have a little AI that goes through your CV and sees if you match. We have AIs for HR; we have AI for finance. But this cloud of AIs doesn't work very well, and it will make a mess out of things, not allowing us to reach a technology phase where we have deep integration of AI into that pyramid. And that's a risk. And maybe it's even a way that we can control it. But from a business point of view, what we want is to have fewer people, but these people are very efficient, smart, and long-term thinking in the way that we go about business. This quarter-by-quarter mentality that we've developed in the West doesn't really play well with AI, because AI is long-term thinking. The real value of AI usually takes a while to take hold, but after you have it, it can be quite transformational for the business model itself.

Debbie Reynolds  33:43

Wow. I hadn't thought about it that way. You know, you touched on something as well that I think is very important to realize, and that is what companies need to do as they're developing these new tools. I agree that there should be long-term thinking, but part of that is they need to get away from the idea that someone has a skill, and they come into an organization, and then they don't get training. It has to be a situation where people's skills are constantly refreshed. You're constantly learning about data and learning how to use it in a business. Because if you don't, you're not going to be able to get these long-term benefits from this technology.

Antonio Rocha  34:29

Companies really need to become learning organizations. And the reason why is simple. We're dealing with a huge amount of complexity, but also change, on a daily basis. The only way that you can deal with complexity at scale is if a large part of the company is constantly learning. But the thinking that, you know, only the C-level can learn and think about the ways the organization goes is, well, not fair. Now, an organization is not a democracy; it isn't. But we're all contributors, we're all thinkers, we're all capable of actually having quite a lot of ideas. My background is big companies, but at the same time startups; I've always kind of come back and forth between both. Startups always allowed me to focus on innovation, fast and dirty, whereas in big companies, I've always learned how to take that startup innovation, how to grow it at scale, and how to think about it embedded in a huge business model. So I was always going back and forth between these experiences of products, of innovation, of environments, etc. Startups are learning organizations because they're trying to find the business model, and the business model, in the beginning, is unknown. In big corps, well, the business model is quite well understood and quite well managed. The thing is, through the power of software and data ecosystems, we can now build much smaller pyramids. So startups can actually deliver almost exactly the same as a big org, without all of the costs of the big org, and probably at, you know, one-to-a-hundred speeds; we can deliver products in a way that a big organization just can't. So I believe that we have to find a sort of middle ground where big corporations really invest in learning, while at the same time, in startups, well, we also have to realize that a lot of the outputs that we're making are often so disruptive that we also need to consider that there will be an impact on society itself.
My take is simple. AI is very useful. It should augment us. It should not ever substitute us, first and foremost because it doesn't have the capability to do so. Period, it doesn't. And I have strong doubts that it will, at least with the present level of development. But on the AI side, we've always kind of known this hidden rule, which is, we'll come to a place, and some are saying that we are in that place right now, where nothing happens, and all of a sudden, everything happens, in which there is a switch, and general artificial intelligence is actually possible. Personally, I think that we are still a little bit far away from that. We might be seeing some signs, but I have strong doubts, at least from all of the conversations that I keep having with leaders in the area. But there's certainly something that we can do; we can collaborate much more, both within startups, privacy teams, and security teams. We must have the humble spirit of saying what we do not know, and we must be able to understand each other very well, from the technical side to the legal side and vice versa. This requires transparency on the human side. Well, it's hard to do that in normal companies because of a lot of the cultural ways that have developed. And I believe that collaboration is critical for us to be able to tackle these huge challenges.

Debbie Reynolds  39:59

Now, I agree with that; I think this is an unbelievable opportunity and moment, especially for smaller, more nimble companies, to be able to create value when they are using software and AI tools in a way that they weren't before. As you were saying, the bigger corporations have had the people who could develop these technologies in-house. Now that we're seeing more of the technology spill out into small and medium-sized businesses, and even to regular people being able to have their hands on it, for people who are smart and can learn and figure out how to solve problems with AI and these AI tools, I think it will just be a very different ballgame in the future.

Antonio Rocha  41:04

I mean, it is. And certainly, coming from an innovation and startups environment, I can say that probably, you know, 10 years ago, I would have had to hire 100 or 200 people to do something. And today, I can do that with 20. That's it. So the cost for me to actually disrupt a huge amount of business models and markets is actually much, much smaller, especially today, when we can find almost anybody online. We can organize ourselves easily as builders, so we can set up teams quite fast. And especially since COVID, you know, remote work just works. I mean, I'm having conversations with US startups; I was having a conversation the other day with a startup in Chile, you know, things that are quite different. Having said that, I guess that one of the important things, to go back to collaboration, is that I think it's very easy for techies to have a sort of approach of, oh, these people, they don't understand what I'm saying, or they don't understand that I can, in five minutes, you know, read a million contracts, and probably in half an hour extract a lot of things that would require, you know, 10 years of eDiscovery. Fair enough. But I guess that if we really want to approach this from a positive outlook, we have to be humble, and we have to, as a society and as professionals, help each other to understand the implications of a lot of our work, and understand the implications of the work of others around us. Because that's really the only way to do collaboration. Not only on the AI side but also on the privacy side, anything that is very technical is under such an amount of change that we really need to focus on this collaboration. And leadership, well, leadership needs to demand and invest in these deep collaborative approaches, focusing more on the outcomes than on, you know, pointing any sort of fingers because you don't know this or you don't know that. You know, as humans, we're fallible.
Our algorithm is good because we are adaptable, and we're not machines. We can think; machines cannot think, not really, not like us. So we will always have value, and we should use AI, and privacy broadly speaking, to raise the intelligence of humanity, because actually, not much of our intelligence is used by corporations; only a very small amount of intelligence is used, and most people are not really being very creative. AI allows us to do all of this at scale.

Debbie Reynolds  45:24

That's amazing. Well, if it were the world, according to you, Antonio, and we did everything that you said, what would be your wish for privacy or data or information systems in the world?

Antonio Rocha  45:38

Well, now I think we have to go back to the basics again, really. What I would really like is to see privacy implemented. This means setting up an entry point, tagging all of the data, and using this data in a collection of ways such that most of the silos in a company know it, are able to vouch for it, are able to see it. That's my simple wish. And I know it's not a simple one. But I also want to say this as a warning: we're doing a huge amount of data governance these days, right? The main reason why is exactly because of AI, and because of data theft, data drift, etc. Now, data governance is like putting a very strong light on what was before a very dark room. It will really shine a huge light that, unfortunately for all the compliance people, not just privacy, I would say compliance broadly, will record everything, every single thing that is happening. And this is already happening these days, as we speak. So sometimes we techies, or we lawyers, can feel it's strange or wrong, because one side of the equation, one side of the room, is actually recording the other, while the other side of the room, in the dark, is kind of pretending that none of that is happening. That doesn't make any sense at all. It's just crazy. It's a level of risk, and of untrustworthiness, that should not be allowed to happen, because businesses need to be respectful of the law. And the fact that we have a huge amount of businesses making a huge amount of profits while broadly not respecting, almost to the full extent, a lot of laws means, from a financial perspective, that most of our stock market is actually illegal, because a lot of the profits are based on non-compliance with the law. Now, that is kind of striking.

Debbie Reynolds  48:49

Oh, it is striking when you think about it. I hadn't thought about it that way. That's true.

Antonio Rocha  48:57

Well, it is challenging. Look, let's go back to basics. Was anybody really thinking that, all of a sudden, we could stop the huge data flow from Europe to the UK to the US? The only way that anybody could stop that would be to blast the cables. That's it. And even then, we would have satellites. So, I mean, unless you blast the satellites too, it's impossible. It's not something that is technically possible. And these are the conversations that I think we really need to start having, because we need to be more practical about it. Lawyers need to start to focus on much more difficult areas, such as AI audit, right? Where the degree of complexity is much higher than, you know, doing small contracts, or huge programs of contracts, just to be able to say, oh, we've got SRAM solved, and we wasted a quarter of the entire time of privacy while doing that. Because that's not privacy implementation, I'm sorry. Privacy implementation is right next to security, right next to data governance, right next to the chief data officer. These are the deep collaborations that privacy needs. And these, you know, small circles of collaboration will then become a bigger one, from a process point of view.

Debbie Reynolds  51:00

That's tremendous. Thank you so much. I'm so happy that you were able to join me on the show. You really dropped some deep thoughts, some deep knowledge for us. This was great.

Antonio Rocha  51:13

I hope it wasn't seen as too spotty. Maybe I have my own way of thinking about things, but my background started with, you know, AI project management, then went to data governance, and then to privacy. So I kind of made it through all of the major pillars of data itself, right? So I tend to bring all of that up, and sometimes some people might see it as too high-level. But hopefully, at least some of my words had meaning for the three pillars of data.

Debbie Reynolds  52:10

That's true; no, there's no such thing. We need people to be able to talk about these high concepts and these different areas and how they connect, because the future is not Santa's workshop. So we need to really understand these different areas and how they connect together. And I think you have woven it together very well on the podcast.

Antonio Rocha  52:31

Nice. Nice. Well, I'm always in for one of these chats. I feel that we could spend days talking about this.

Debbie Reynolds  52:46

Absolutely. Oh, my goodness. Well, thank you so much for joining me today. This has been tremendous. I'm sure we'll chat more on LinkedIn as we always do. Thank you so much.

Antonio Rocha  52:59

Thanks very much for having me. Everybody likes your dissertations about privacy.

Debbie Reynolds  53:08

Oh, absolutely. This is amazing. I'll talk to you soon. Bye bye. Thank you.
