E214 - Emerald De Leeuw-Goggin, Global Head of AI Governance & Privacy, Co-founder of Women in AI Governance

[00:00] Debbie Reynolds: The personal views expressed by our podcast guests are their own and are not legal advice or official statements by their organizations.

[00:12] Hello, my name is Debbie Reynolds. They call me the Data Diva. This is the Data Diva Talks Privacy podcast where we discuss data privacy issues with industry leaders around the world with the information that businesses need to know.

[00:25] Now, I have a very special guest on the show all the way from Ireland, Emerald De Leeuw-Goggin. She is the Global Head of AI Governance and Privacy at a leading tech company and also a co-founder of Women in AI Governance.

[00:41] Welcome.

[00:42] Emerald De Leeuw-Goggin: Thank you so much. I'm so excited I finally made it on Debbie's podcast. I feel like it took us so long to get here, but we did it.

[00:49] Debbie Reynolds: I think we've been kind of following each other for many years. I've been very impressed with all the work that you've done. And I think we were supposed to be at a conference in Spain or something.

[00:59] I reached out before and then we weren't able to get together then. And then we decided we're going to do this. So I'm glad we were able to get together.

[01:07] Emerald De Leeuw-Goggin: Yeah, me too. And I think, yes, it is a shame I didn't get to Ecosystems 2030. My FOMO was real, but hopefully next year. Hopefully.

[01:17] Debbie Reynolds: Well, I'm a bit starstruck myself, actually. I think we mutually enjoy each other's work and I'm super excited to be able to talk to you today, but you've had a tremendous career, a tremendous trajectory in your career.

[01:31] I had actually met someone at Ecosystems 2030 who works with you, Patricia, and we had a great time in Spain in your stead because you weren't there.

[01:42] But I would love for you to tell your story about your trajectory and how you came to be in this career that spans AI governance and privacy.

[01:52] Emerald De Leeuw-Goggin: Absolutely. And yes, I heard all of the amazing stories, but I was very glad that Patricia got to go and was there on our behalf, representing. So, look, I'll start at the beginning.

[02:05] So I grew up in the Netherlands, so I have an Irish mother, but I lived in Holland for 25 years and I studied law there and I eventually got a full scholarship to go and do my master's degree abroad.

[02:18] So I ended up going to Ireland because I've been coming here my whole life, because half of my family is obviously from Cork here in Ireland.

[02:26] And I ended up doing my LLM in E-Law and Intellectual Property Law. And you'll have to remember this was 2012, many moons ago now, and I got the GDPR handed to me as a thesis subject.

[02:39] So that was really, really early, if you consider that it actually finally ended up coming into force in 2018.

[02:46] So I had spent nine months with this piece of legislation, writing my master's thesis on the right to be forgotten. And I was struck by its territorial scope and the fact that it would have such sweeping consequences around the world.

[03:01] And because of that, my first instinct was not to go and join a law firm, but rather, how do I scale myself? That was my first thought because I was like, who has time for this?

[03:12] This seems like a lot for anyone who already has a job to actually get really good at. Now, obviously we have a world full of brilliant privacy professionals who have conquered that battle with a lot of confidence and vigor.

[03:26] But at the time, me being just a student with no real connection to the real business world, I was kind of going, who has time for this?

[03:33] There should be technology that helps you deal with this stuff. So I proceeded with actually paying all of the money to go and sit my lawyer exams in Ireland and do all this particular training, because the Netherlands is a civil law country and Ireland is a common law country.

[03:51] So I had a bit of, like, reschooling to do in that sense. So I was able to sit those exams. And I was a poor student. Like, I had no money.

[03:59] I was on a scholarship, and I thank my lucky stars to this day that I got that scholarship. Then there was Cork Startup Weekend. And I'm sure you're familiar with these things.

[04:09] For anyone who's listening, Startup Weekends are run all over the world. People kind of get together as a group, whoever is interested; it could be a whole eclectic mix of people.

[04:19] And people pitch their idea at the beginning of the weekend, and then people vote for their favorite ideas. And then the people with the most votes get to work with like, a little team of those attendees over the weekend, and then they pitch to a set of judges.

[04:34] So I promise that there is, like, a reason why I'm telling the story. So I went to this Startup Weekend and I pitched a rudimentary version of what my first company was going to be, but I didn't know it at the time.

[04:49] So I had this idea about how I would scale myself. And basically it was one of the first privacy tech companies in the world, I'm sure, because it was only 2013 while that was happening.

[04:59] But I got this great feedback from the judges and it was all wonderful and we all went home happily. This must have been March.

[05:09] By the time it was September, October, I was studying for my lawyer exams. I had graduated, and I remember sitting in my university library studying for these particular exams, and I was studying the Irish law on liability for fire.

[05:21] And remember, I had moved country to specialize in technology law.

[05:26] So I did something completely out of character. I quit on the spot.

[05:31] I went back to school to do another master's, this time in Business Information Systems, to basically plug that kind of data and business gap that was clearly there in my education.

[05:41] And I decided to pursue starting a software company. So I basically had a software company for quite a few years, combined with consulting, and I went through this amazing journey of raising funding, building my initial prototype, flying all over the world, representing Ireland as part of the US mission to the Global Entrepreneurship Summit in Hyderabad.

[06:06] I went on all of these adventures and I was incredibly fortunate in that sense. At the same time, my business turned more into a consultancy practice than a technology company because I was so early.

[06:18] And I think the critical thing to know for anyone who's thinking of a startup: if there's one factor that you need to get right, it's your timing.

[06:27] And on top of that, I have to throw this in for all the women and hopefully VCs who may be listening: I think it's less than 2% of all venture money that goes to women founders, and it is horrendous.

[06:40] And that was definitely a factor in why I couldn't go at the speed that I would have liked to go. Because my idea was sound. Many companies have done it since.

[06:49] So basically what I ended up doing was I joined this amazing tech company and I joined as one of the privacy hires there. I was a senior manager. I became their Chief Privacy Officer.

[07:02] Wow, I think it's four years ago in January. It might be three or four years; I think it's four. And I've also taken on global AI governance

[07:11] for the company. So that's been really interesting. But at the same time as I was taking on AI governance and kind of monitoring what was happening in the market, I noticed that the people who may be most impacted by the more negative potential harms of AI were not the people doing the talking.

[07:32] I used to see things like, here are the current and future AI leaders of this country. And none of them looked like a lot of the women I worked with; most of them were male and white.

[07:47] And I was kind of going, yeah, we need those guys too, and I'm sure they're all really brilliant. But at the same time, we do need much more diversity in this conversation, particularly since, with a lot of the leadership that we have seen in many companies, it's often other people who are doing a lot of the heavy lifting.

[08:06] So in order to, I guess, draw some attention to this and be part of the solution, Shoshana Rosenberg and I, we started Women in AI Governance. And our intention initially was not at all to build this massive global community.

[08:21] It was rather kind of us thinking, hey, surely we're not completely alone in this observation that we need more diversity in the AI conversation, so let's create a LinkedIn group.

[08:33] So we created this LinkedIn group, and within a couple of weeks, it had like a thousand members. So we were like, okay, clearly there is a need here. We should go do something.

[08:41] So with the help of our, like, incredible advisory board members, I'm not sure if you spotted them on our website, and basically with the help of everyone who's now involved, whether they're leading a chapter in a different country or they're leading a region, or they're just there as an expert to weigh in.

[08:59] Together as a global community, we're changing the conversation. We're bringing in different voices, and we're really letting people take a leading role themselves. So the way it works is it's not myself and Shoshana dictating how people should run their local communities.

[09:14] Quite the reverse. We provide them with the whole infrastructure they might need, such as the platform, the app, everything, and they can run their own communities and lead them in the way that they see fit for their local communities.

[09:28] Because there's no point in me sitting here in Ireland and telling everyone how it should work in their location. That does not make any sense.

[09:34] So that's a reasonably long story, but I think it hits the most important parts of my career to date.

[09:41] Debbie Reynolds: Yes, absolutely. And I know Shoshana well; she was on the podcast several years ago. And I really applaud what you are doing in AI governance, because I think it is very much needed.

[09:54] And so I think when you think of AI, it's something that will touch all of us. So being able to have all of us represented in the conversation is vitally important.

[10:07] Emerald De Leeuw-Goggin: Yeah, I couldn't agree with you more. And it's been so heartening as well, and encouraging, to see all of this different expertise just flooding in. Because I think, and I've highlighted this quite a few times in my LinkedIn posts as well,

[10:22] I think coming at this with some humility is really important, because AI, particularly if you were originally a privacy person, feels like this really vast and novel and sometimes overly technical concept.

[10:36] Right. And when I was first thinking, will I take this on, will I just run at it? I was thinking, well, I can run away from it, which is probably not going to do me any good.

[10:45] It's probably terrible career advice. Or I can run at it and hope that I can surround myself with really great people, connect with great people, learn from others so that I can make better informed decisions.

[10:58] Because it's impossible to become an expert in computer science, an expert in all of the aspects that we now need to consider in order to do a good job at, I guess, ensuring that AI is deployed and developed responsibly.

[11:12] Debbie Reynolds: I agree. What are your thoughts about privacy and AI governance? I think you're an example of this: I'm seeing a trend where a lot of people who are in privacy roles or data roles are being tapped to wear this extra hat in AI or AI governance.

[11:32] So for someone who's maybe in a strict privacy role right now, how do they position themselves to be open to these types of opportunities?

[11:44] Emerald De Leeuw-Goggin: Yeah, it's such a great question, and I have no problem sharing how I personally thought about this. I think one of the first things you need to do is make it known that this is what you want.

[11:56] There's no point in hiding the ball. If you're a person with a manager and they don't know that these are your goals, they can't help you.

[12:04] And often if you're in an organization, your relationship with your manager is incredibly important.

[12:10] So making it known is probably one of your first steps, provided, and I hope you all do, you have really supportive people around you. I think the other part of it, and this applies not just to going from privacy to AI governance, but also to going from being an individual contributor to being a manager, or from anything to something else, is to get involved in something that relates to this.

[12:36] Because if you can show that you can do it, it's a much easier way to convince people that you are the right person to also be working on AI governance.

[12:47] So I can give you an example. As a privacy professional, you might be looking at many AI vendors; at least, I hope everyone is considering them from a privacy standpoint too.

[12:57] So that can be an easy in, where you go, okay, I've considered everything I normally consider, but what else, that normally doesn't fit neatly in the privacy bucket, can I bring up as something that we should consider? Just in that way, kind of broadening your scope.

[13:16] And then lastly, I would say, of course, there are the formal channels, such as adding to your education and trying to go to a summer school here and there, which I personally am extremely passionate about.

[13:27] I've often had people kind of stare at me going, why are you here? You're already in a senior position. But I think that becoming complacent is so dangerous. The moment that you think you know everything, that's going to be your downfall.

[13:41] It's really important, I think, to always be learning and to go to places sometimes, maybe hear something again that you heard a few years ago. There's no harm in hearing it another time if you're going to a really good place where you're surrounded by, I don't know, leading academics and practitioners, things like that.

[13:59] I think it's good to sometimes listen to other people and to carve out time for that. And then maybe as I'm talking now, I'm thinking there is one more route.

[14:07] And that's one thing that I've personally done, which is in the public domain on my LinkedIn. Particularly last winter, I was like, okay, this generative AI train is moving.

[14:20] What can I learn and do? And how do I understand my stakeholders? So I started like this blog, which I've been terrible at maintaining because I just don't have time.

[14:29] But I started using these AI tools, I started playing around with them, and then I would write about them on my blog to force myself to get good at some of this.

[14:39] I'm not saying I got good at everything, but I was using it and therefore becoming better at it. And that then turned into maybe more of a legal analysis in some blogs as well.

[14:49] So, for example, when we had President Biden's executive order on AI, the OECD AI definition, and the early draft of the EU AI Act, I did a comparative analysis of the three definitions and what was different.

[15:03] But I think if you're teaching, it's a great way to learn. So I think you can have like a multifaceted strategy and kind of figure out what might work for you in your particular situation.

[15:13] But those are a couple of suggestions.

[15:16] Debbie Reynolds: Those are all great suggestions. And one suggestion I really like is the one around self-learning: understanding that you can put your hands on some of this stuff, you can look into it, you can read into it, and you can bring your specific lens and your expertise to maybe a new problem. That just opens up a world of opportunities.

[15:39] Emerald De Leeuw-Goggin: I completely agree. And it's kind of the same; it reminds me a lot of the GDPR days, when people were often really panicky about the GDPR. Particularly, I'm thinking 2016, 2017.

[15:53] It felt like this massive hyped-up thing. And of course it was a really important piece of legislation. But I always used to tell people, like, maybe you should open it up and have a look at it, as opposed to seeing it as this distant, abstract monster of a document.

[16:10] A lot of it is actually quite readable. It can be complicated once you need to become an advisor on it. But you know, if you take one of the articles and you have a look at it and you figure out what recitals relate to it, looking at those pieces together, I feel it doesn't require you to be a lawyer.

[16:26] You just need to be able to read and look at pieces of information in a critical way.

[16:31] Debbie Reynolds: I agree with that. Well, what's happening in the world right now, whether it's in AI or privacy, that's concerning you most? What has your attention and your focus right now?

[16:41] Emerald De Leeuw-Goggin: Oh, quite a few things. Of course, as a privacy person, you're always worried about a dystopian future; that's always going to be there. But more acutely, I think I've been talking quite a bit about deepfakes.

[16:57] I think that's a very obvious one that comes up a lot. But I think because it is so shockingly easy to create realistic content with AI, that's definitely top of mind.

[17:08] I'm particularly intrigued by the very broad definition in the European AI Act, because it basically says, and I'm paraphrasing now, but this is broadly what it boils down to in my reading, and I'm happy to be challenged on this: AI-generated or manipulated content that makes it look like it's a real person or object or place or event, et cetera, et cetera.

[17:32] So it is a really broad definition, and all of this content is required to be labeled. I think when you mention deepfakes to a regular person who's now heard about them, probably in the context of cybercrime, they often think of a deepfake as always being a person or related to a person, like their voice or a video or a picture.

[17:51] But it's also objects and other things. So how this labeling is going to work in practice is really interesting to me. The other thing, and this is a very personal topic to me, but one that I'm particularly passionate about, and I will speak to this in my personal capacity, is AI models, as in human models generated by AI.

[18:16] I gave a TED talk in 2019, and during that TED talk I spoke about having almost died of anorexia in my early 20s. I was really quite severely ill; I had collapsed in university, ended up in the intensive care unit, things like that.

[18:30] It was quite a long journey after that to get well. But my TED talk was basically linking, I guess, those types of mental illnesses to the society we were already very much living in in 2019, which is this society that keeps feeding you more of what you will look at through the algorithms on many social media platforms.

[18:53] And I spoke specifically about, hypothetically, if I was trying to get well in a future that looked like our present at the time, where I was constantly being bombarded with these perfect people on perfect beaches, with perfect diets and bodies and plastic surgery and all of these things. I think I would have had a very hard time getting well while getting more of the same, because these algorithms would have probably figured out that I had a problem with food and would have probably given me more of this pro-anorexia content, which was quite pervasive.

[19:27] And it's kind of starting to rear its ugly head again on social media. So just to link that back to AI: what I'm personally really concerned about is that we're going to be living in a world where there are just going to be an awful lot of really perfect AI models.

[19:43] They're already popping up on social media, and I worry particularly for young women, because humans can never be held to that standard. These models are not real. They are figments of someone's imagination, or at least a product of the AI, and potentially photoshopped after that for an advertisement or whatever.

[20:05] And I'm really concerned about what that will do to people's mental wellbeing. I don't think it's a desirable situation to hold people to that type of standard. There was this big beauty company, I think it was Dove, but I'm not sure.

[20:21] I was in London and I saw their big massive billboards saying that they would never use AI models in their ads. And I thought that was such an amazing thing.

[20:31] And I think that's what we need more of in this world because I'm genuinely concerned about that. I'm not sure if you heard a lot about this or if anyone else brought this up with you, but I think it's a very concerning development.

[20:43] Debbie Reynolds: Yeah, it's funny that you mentioned deepfakes, because I had recently done a podcast interview and they asked me the same question, and I basically said, deepfakes.

[20:56] Yeah, they definitely concern me, because a lot of the reason why these things exist and why they proliferate is because they can influence people. They can change people's behavior. They can prompt someone to do something that they may not otherwise have done.

[21:15] Or, as you were saying in the anorexia example, they can cause someone not to get well because they're seeing more and more things that are damaging. There's an example in the US, actually, around alcoholism, where advertisers may have found out that someone went to a clinic for alcoholism or something, and marketers, because they want to sell more stuff and are not necessarily concerned with the health of the person, started pitching more alcohol ads to those people.

[21:49] And that's a troubling development in society, 100%.

[21:54] Emerald De Leeuw-Goggin: And I think there's obviously a place for AI, and maybe even for AI people in certain contexts, because there are also really great ways in which it can be deployed. Right? But, let's say, having an ad for makeup done by a model who's not real but who is an AI, and creating that type of beauty standard for people, that really concerns me.

[22:20] I don't know; models are already often very beautiful. Do we need them to be an AI too? It kind of reminds me of the whole Photoshop thing, but worse.

[22:32] Debbie Reynolds: I want to talk a little bit about the AI Act in the EU. To me, the AI Act feels like the early days of the GDPR, in my view, where I think it will be tremendously influential, just like the GDPR has been around the world.

[22:50] I know I have many friends in the EU. Sometimes they get upset; maybe they feel like the enforcement isn't where it should be. But you can't look at that piece of legislation and not see how influential it's been since it came out.

[23:05] We have so many laws and regulations around the world that borrow, maybe not everything, but some things from it. Even the language and the way these laws discuss or name certain categories of things, like the language around data controller and data processor.

[23:21] All that is definitely from the GDPR. Give me your thoughts about how much influence you think the AI Act will have internationally.

[23:30] Emerald De Leeuw-Goggin: I completely agree with you. I think it will be quite impactful. I'm not sure if it will reach quite the levels of the GDPR in terms of hype. Maybe that's because, for people, part of it feels the same in terms of the scope and kind of how the law itself works.

[23:46] And the GDPR was a little bit novel in that sense, particularly with its extraterritorial scope and things like that. I think it will be a very sweeping piece of legislation.

[23:56] Again, people are definitely talking about it a lot, which is a good thing, right? Because we share knowledge with each other that way.

[24:05] At the same time, the moment feels similar, where you're kind of in the lead-up to parts of it coming into force. And of course some of that is relatively soon, with the AI literacy requirements and things like that.

[24:20] But the act itself does operate a little bit differently, right, in terms of how it's structured. So in that sense it's different. I also think there's still a ton of learning to do, because there are parts of it that, to me anyway, are still too ambiguous for me to feel comfortable without some additional guidance.

[24:40] And again, that is also reminiscent of the GDPR, right? Where initially there were so many Article 29 Working Party documents that we were relying on, and then of course the EDPB guidelines that were needed to clarify all of these questions.

[24:55] Because if you're really digging into a specific issue, sometimes you're going to run into a wall. You go to your recitals, and then there's nothing else. And I think there are aspects that just need a bit more clarification, particularly things like how the AI Act interplays with the Radio Equipment Directive and all of those requirements.

[25:15] Debbie Reynolds: To me, the part of the AI Act that I think is vitally important is the way that it rates AI systems by harm, right? So saying, okay, this is an unacceptable risk because it does XYZ, like emotional AI or pre-crime types of things.

[25:35] And as far as I can tell, we've not seen any other jurisdiction really take that tack, or that way of looking at it. I don't know what your thoughts are, but I think AI is very different.

[25:51] With the ways we're using AI systems, we can't afford to wait until something bad happens before we regulate. Right? And so really being able to think about AI systems and categorize them in terms of what the human harm or human impact would be, I think, is very, very important.

[26:10] And I hope to see more of that calculus happen in different jurisdictions. What are your thoughts?

[26:18] Emerald De Leeuw-Goggin: I completely agree with you. I'm maybe obviously coming at this from a very kind of Eurocentric place; having grown up in the Netherlands and then moved to Ireland, I've never lived in a place outside of Europe.

[26:30] So this kind of human rights law angle is very European, but I think it's the right one. I mean, we have international treaties on human rights for a reason. Right.

[26:39] And I often get the question, if I do a panel or a podcast, where people say to me, so how do you deal with all of these laws and all of these complicated concepts, and how do you make sure people follow them?

[26:51] Well, what I always say to people is, what's really important is that you as an organization articulate what your values are and you communicate them well to the people in your organization.

[27:02] So you're all on the same page about what you believe, what is right and wrong, and what is an acceptable and, I guess, responsible thing to do. That way people are already able to make the right decision, which then also happens to be compliant because it's the right decision.

[27:18] And by that I mean, I guess, morally right or responsible.

[27:23] And that can be a really good place to come at this from. Right. And a part of me, and maybe I'm just too much of a dreamer, would love for it to be the case that we don't have to wait on a law for organizations to do what the right thing is, right?

[27:42] And I know it's very hard to kind of standardize the right thing, and some things are cultural and some things aren't. But across many countries, we do agree globally on quite a few things about what is good and what isn't.

[27:57] I mean, that's why we have international treaties on human rights.

[28:01] Debbie Reynolds: I certainly hope so.

[28:04] Being from the US, I hear this a lot: a lot of companies say that regulation is going to stop innovation in artificial intelligence.

[28:18] And I want your thoughts on that.

[28:20] Emerald De Leeuw-Goggin: Yeah, it's the forever conversation. I'm sure you get it all the time as well, in the context of both your privacy and AI expertise.

[28:28] It is the forever conversation. My view is very much that we can have both: we can have innovation and privacy and do what's right by people at the same time.

[28:38] I also recognize that it's sometimes a bit more effort, so there does need to be a desire to do the right thing. But when people say that to privacy professionals or other people working on compliance or security, I often think it's a very lazy argument, where I'm like, did you really look into it?

[28:58] Because initially the GDPR was said to stop all innovation, and I'd argue we've done okay. Some significant things have happened since, the world is still turning, and there are many successful companies. I don't think it's as difficult as people sometimes want to pretend it is.

[29:21] Is it sometimes a little bit painful? Yes, it is, but it's also being done for a reason, which kind of brings us back to where we just were, which is getting it right for the people that are our customers or the people we engage with in other capacities.

[29:38] You have to decide for yourself what's really important. And I think doing well by doing good is entirely possible. And privacy and using AI responsibly should be part of that.

[29:50] Debbie Reynolds: I agree with that.

[29:51] So if it were the world according to you, Emerald, and we did everything you said, what would be your wish for privacy or AI anywhere in the world, whether that be regulation, human behavior or technology?

[30:04] Emerald De Leeuw-Goggin: Oh, it's such a difficult question, I think, particularly based on what we just discussed. Like, I have a long list, but one of the things that I would personally love to see changed is that our vulnerabilities aren't exploited for monetary gain.

[30:20] I think that would be an excellent start.

[30:25] You know, where there are no more algorithms that know how to pick up on our weak spots and then bring us to a certain place, and where we aren't served content that is particularly targeted to upset us or make us angry or fearful or anything like that in order to get more of our attention.

[30:47] That would be amazing. If it could just be non-toxic, that would be a great start for me.

[30:54] Debbie Reynolds: That's a pretty good start. I call it psychological manipulation.

[30:59] Emerald De Leeuw-Goggin: That's exactly the right way to describe it. And I think people who don't want to get onto the privacy train always look at the world in this binary way, right?

[31:08] It's like, oh, you're saying all data collection is bad, or you're saying we can't be innovative anymore. Or all of this stuff where it's like, no, we're just saying that if you're going to do X, you also need to do Y unless it's something completely crazy.

[31:24] And for those we have the prohibitions. But I think at the end of the day there's amazing professionals in the world who can usually help you on your path to what it is you're trying to do.

[31:34] It just requires you to sometimes take a slightly different route to that destination, as opposed to maybe doing it the initial way in order to do right by whoever's data you're collecting or processing or whatever else it is that people are doing.

[31:48] So I think that binary thinking is something that collectively, we as privacy and AI governance professionals probably still have some work to do.

[31:57] Because the hype around the GDPR, for example, was great, because most people have heard about it now. At the same time, there were so many myths about it that it turned into this monster in people's heads, where they were like, oh, it's this impossible thing to deal with.

[32:12] And it absolutely is not; people can absolutely comply with these laws. And it's also not rocket science, even though sometimes we all like to pretend that it is.

[32:22] Debbie Reynolds: Well, I applaud your efforts to make it more simple for people to understand. And your voice is amazing. I love your brand and I love the way that you communicate and really share your expertise with the community.

[32:36] All of us around the world.

[32:38] Emerald De Leeuw-Goggin: Thank you. That feeling is entirely mutual.

[32:41] Debbie Reynolds: Aw. So sweet.

[32:44] Oh, my goodness. Well, it's been tremendous to have you on the show. Thank you so much. And I look forward to us being able to collaborate again in the future.

[32:52] Emerald De Leeuw-Goggin: Thank you, Debbie. And hopefully next time over wine and tapas in Spain, like, oh, that would be amazing.

[32:58] Debbie Reynolds: That was fun. Well, thank you so much. Have a good evening.

[33:03] Emerald De Leeuw-Goggin: Wonderful. Thank you so much, Debbie, for making the time to have me on. And I'll chat with you soon, no doubt. Take care.

[33:08] Debbie Reynolds: Ok, thank you.

[33:09] Emerald De Leeuw-Goggin: Bye.

