Debbie Reynolds Consulting LLC

E211 - Paul Starrett, Co-Founder, PrivacyLabs, Founder, Starrett Law (AI-Governance Technology, Law, Cyber Risk)

Find your Podcast Player of Choice to listen to “The Data Diva” Talks Privacy Podcast Episode Here

The Data Diva E211 - Paul Starrett (45.17 minutes) Debbie Reynolds

Many thanks to the Data Diva Talks Privacy Podcast Privacy Visionary, Smartbox AI, for sponsoring this episode and supporting our podcast. Smartbox.ai, named British AI Company of the Year, provides cutting-edge AI that helps privacy and technology experts uniquely master their data request challenges and makes it easier to comply with global data protection requirements, FOIA requests, and various US state privacy regulations. Their technology is a game-changer for anyone needing to sift through complex data, find data, and redact sensitive information. With clients across North America and Europe and a major partnership with Xerox, Smartbox.ai is bringing their data expertise right to our doorstep, offering insights into navigating the complex world of global data laws. For more information about Smartbox AI, visit their website at https://www.smartbox.ai. Enjoy the show.

[00:00] Debbie Reynolds: The personal views expressed by our podcast guests are their own and are not legal advice or official statements by their organizations. Hello, my name is Debbie Reynolds. They call me the Data Diva. This is the Data Diva Talks Privacy podcast, where we discuss data privacy issues with industry leaders around the world with information that businesses need to know. Now, I have a very special guest on the show, actually a friend. We've known each other for many years: Paul Starrett. He is the co-founder of PrivacyLabs and also the founder of Starrett Law. His area is AI governance, technology law, and cyber risk. Welcome.

[00:43] Paul Starrett: Thank you. Thank you so much. And yes, I would concur that we are good friends, and I'm your respectful colleague as well.

[00:50] Debbie Reynolds: Well, I always like to start the show with the origin story of how we met. We met in Carmel, California. We were attending a conference there, and I think both of us were speakers. You just came over and started talking to me, and we had the most riveting conversation. You are a data person like me; you're a data scientist. So we had so much fun talking, and we kept in touch. Over the years, I've been super impressed with what you do. When people call me and ask me about a data person, you're almost the first person I call. I say, call Paul. He knows.

[01:27] Paul Starrett: Well, I appreciate that. I'll pay you later for that. But let me just add to that real briefly: I went to your presentation, and I really appreciated your perspective on the panel you were on. Then there was a meet and greet, and I think I approached you about some of your comments. So we hit it off early on, and it's been great ever since.

[01:55] Debbie Reynolds: Yeah, it's great. And I'm glad that we finally got a chance to do this show after many years. But tell your story, tell your data story. How did you end up where you are, with all the things that you're interested in around data?

[02:08] Paul Starrett: Great, thank you. I would probably omit the first third of my life as basically youthful exploration, but I did start out in the area of security, in investigations and audit. Then, in the late 90s, and this will kind of date me here, I decided that it was a good idea to go technical. So I was a programmer in information security, for RSA Security for about four years, then five years with another company called Network Associates, which is now McAfee. That's where I got my exposure to information security and to technology generally. As we fast forward through this process, I went to law school at night, took the bar, and passed. Again, I'm skipping over a lot here, but eventually, about 10 years ago, I decided to get into AI and law formally. I got a master's degree in data science in 2016, I was the first chair of the American Bar Association Big Data Committee from 2013 through 2020, and I literally lived and breathed this topic. So I really span all of what I would call the technical verticals within the area of AI, and the ability to define risks and controls and mitigations is really something that I feel I'm well placed to do. I guess that's basically it. Then I met you along the way, so here we are.

[03:38] Debbie Reynolds: Yeah, that is definitely the short version of your journey, but it was a very well mapped journey. I like the way that you've navigated your career and the way that you've positioned yourself. You're a rarity, I think, especially as a lawyer who understands data; you came to it from the data side first before you got into law. So tell me, how is that? Let's talk a little bit about legal. My impression of legal, after having advised in that space for a really long time.

[04:17] Paul Starrett: Yes.

[04:18] Debbie Reynolds: Is that a lot of times, legal is dragged kicking and screaming into the future.

[04:24] Paul Starrett: Yes.

[04:24] Debbie Reynolds: But what are your thoughts about that? Is that changing? Is it different? Is it the same? What do you think?

[04:30] Paul Starrett: Well, that's a good question. And I have to be respectful, I suppose, of my colleagues. Although I will tell you, I don't have a typical pedigree in law. I went at night. I never really cared to work for a big firm. I was much more technical, far more technical actually, so my journey was more on that side of the fence. I didn't really live the life of an attorney proper, even though I've been general counsel, things like that. But to your question, I think what really happens is that people make a decision early in their life about what they're going to do. A lot of people who are in law are the writing skills, communication skills types, and they're not the detail-oriented, hard science types. The reverse can be true too. So you kind of start with that as your basic constituency, if you will. And so I think legal is generally well behind the technology curve. That's kind of expected on some level. On another level, I hate to say this, but it's not excusable in some cases, because technology is not that hard to grasp. On my path, I'm not naturally technical, but if you break things down into the component parts, relax, and work through it, it's very doable. So I would say that we are probably at a three or four out of ten where we should be. And I think the reason it's important for us to be more insistent on bringing that data literacy, that technology literacy, up is because we live in a digital world. In order to do that, we really have to speak its language. We have to understand the value and the risks that are associated with it. So when I approach a problem, I'm able to look at the technology at a very detailed level and find the risk and say, hey, this is really important, or to look across and say, well, how does one thing pull on another and tug on something else, to get that lower-level, ecosystem, holistic view. So I hope that's answered your question. I think it's coming along. It's always been, like you said, that they're dragged kicking and screaming into the fire pit or the fray, and I think that's even more so with AI. But they really are coming along. The IAPP has their AIGP, Artificial Intelligence Governance Professional, certification, and there's been strong interest in that.

[07:00] Debbie Reynolds: This is what I think has changed, and I want your thoughts. So, you know, I go way, way back as well, right? I say I've been helping companies with digital transformation since before digital was in it; it was just transformation back then. And back then, technology was, you know, maybe it was a fad. Maybe people thought, well, I could still do things manually, but if I wanted to do it this different way, I had an option where I could do it in a technical or digital way. And what we found is that once we moved more into digital systems and records and things like that, it was just impossible to do it manually anymore, because of the amount of data, the complexity, and things like that. So the change to me was that back then, I think just adding information into digital spaces wasn't thought of as critical. It was sort of optional, and maybe people thought of it as a novelty at that time. And I'm going way back: when people started putting computers in law firms, the people who got those computers were secretaries, not lawyers. So I think there was a bit of a little caste system thing happening with technology, and we're still seeing that play out. But I think what's happened over the intervening years is that you basically can't do your job without technology, right? So it's no longer just about things like respecting the profession and respecting the people who help you do your job. You literally can't do your job without it. What do you think?

[08:43] Paul Starrett: Yes, and I'm going to go maybe even farther back, to the horse and buggy, right? We got the car. Who wants to take a horse to work when they can get there in a tenth of the time and don't have to feed the horse and such? I do remember some of this, actually. Remember the big data craze back in the late 2000s and early 2010s, when everyone was saying the words big data? You did have huge volumes of data, and you really had to learn how to wrangle that. So for everyone, there's a natural learning curve and adoption curve that you can't avoid, because there is such a thing as rushing the general status quo into something; if you go too fast, you wind up taking that loss. So yes, I do have a recollection of that. And I do think, if I may, it's kind of a repeat of what we're seeing with generative AI, never mind normal machine learning. But I think, as you said, it really helps you do your job. While we're on the audio here, in the background you have an AI tool that is transcribing what we're saying, right? Think about how valuable that is, how much time it's saving you, and the risk is really nominal. Think of that alone, a very immediate example: it's like, oh yeah, and how is the AI built to create that capability? I transcribe my emails; I speak into a voice tool, and it saves me enormous amounts of time. So I agree. I think it's something where it becomes the new normal. It becomes the new car, the new SaaS platform, whatever. It's just keeping up with progress.

[10:28] Debbie Reynolds: You and I talked a couple weeks ago, and we actually had a detailed conversation about note-taking apps in general. I know someone who put up a post, and their idea was, oh, I would never have a client use a note-taking app. And it was like, you might as well be in the Stone Age, as far as I'm concerned. So let's talk about risk. How risky is it, Paul, that you and I are using a note-taking app right now? Probably not risky at all. I don't feel any risk here. Maybe there are some situations where people feel risk. But to try to make using technology an all-or-nothing thing, I think that's a very Luddite way of thinking, and it's not practical.

[11:12] Paul Starrett: Yes, Luddite's a great word for that. And I think that's a very pedestrian way of looking at it, but it is how you approach AI governance generally: you look at what's the value and what's the downside. During this discussion, we're not really dealing with personal information. We're not dealing with trade secrets. We're not dealing with other people's data, right? And it's saving you what would, for me, be hours of sitting there playing the recording back and typing it out and making mistakes along the way. So it's a very quick analysis, but it ports over to anything else. I teach in a master's data science program at University of the Pacific here in the Bay Area, and one of the things I put to the students is a machine learning algorithm that lands planes automatically. I have them go through the way the machine learning model was built, and then I have them talk about negligence and contractual provisions. The point is that they are exposed to a very, very high-stakes thing: landing an airplane. If something goes wrong, you could take the lives of hundreds of people. So how do you approach that? It's a very different thing from what we're doing here with the transcription. When it comes to risk, the devil's in the details. You just have to look: what's the benefit? What are you trying to do? Where can it go off the rails, and if it does, how bad is that? It's almost always bespoke, always contextual. That's the way I see the risk approach. It's really very, very contextual.

[12:53] Debbie Reynolds: Yeah, it is very context driven. You do have to assess the risk almost on a case-by-case basis. It's not going to go away. I feel like some companies are like, well, let's shut the castle door and keep AI out. But artificial intelligence is probably in almost any technology that you use. If you're on the cloud at all, you're exposed to AI. These features have seeped into almost any application that you're using. So the idea that somehow you're going to close a castle gate and AI is not going to get in, that's just not real either.

[13:27] Paul Starrett: Well, if you look at public companies, which are supposed to remain viable, in order to do that they have to adopt AI to stay competitive, to retain customers, to do threat detection. It's a part of doing business. It's the gasoline engine now, not the hooves and the legs of the horse. So you can't close the castle door, because then you have a separate risk of not competing and not being able to make money for your shareholders. It's baked into the commercial quest of any enterprise.

[14:02] Debbie Reynolds: Now, you've been involved in data science for a long time, and you were involved in AI even before the AI gold rush started with generative AI. I'm excited that it gets people talking about artificial intelligence, even though, I don't know about you, I kind of slap my face when I see some of the things that people post. I'm like, that's not true, that's not right, what are you talking about? What is concerning you now about the way that people talk about AI? What would you like to tell people who are confused about artificial intelligence?

[14:39] Paul Starrett: Well, generally speaking, it's an extension of what you already know and what you already have. It's an extension of the data that you have and of your business purpose. So the idea is to think about it from the outset, just like privacy by design, right? That's something you know very well: if you think about it early, it's as easy to do it the right way as it is to do it the wrong way. If you start out in the right place, it makes it a much more comfortable, manageable, warm and fuzzy topic than if you rush ahead and try to put the toothpaste back in the tube later. So that's the first thing I would say. The other thing I would say is, don't use AI unless you have a reason to. People try to rush into it and expect something from it. If your data's not good enough, it doesn't matter whether your algorithm is generative or standard machine learning, what they call good old-fashioned AI; it learns from your data. The algorithms are just equations. They're no smarter than your data. So you have to understand those things before you jump into the AI fray. The other thing I would say is, try to keep it simple, because there are a lot of cute ways to use different deep learning and neural nets and so forth, which is fine, but there are some aspects where you can't get under the hood. If you can use a simpler model that may not be as performant, it's better to do that, because at least you know what's going on. And the reason you have to keep it simple is that in most machine learning projects, and when I say machine learning I'm including generative AI, regression, and classification, for those who know something about AI, you have programmers, domain experts, and IT folks all coming into the effort. Each one has to understand it so that they can provide their input into how well it's going to work and what the problems are going to be. So focusing on keeping it straightforward and understandable is key. If you start there, you're in great shape.
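
To make the keep-it-simple point concrete, here is a minimal sketch in Python, assuming scikit-learn and using one of its bundled datasets as a stand-in: a logistic regression exposes one readable weight per feature, so the programmers, domain experts, and IT folks Paul mentions can all see what the model is leaning on, visibility a deep neural net doesn't give you for free.

```python
# Minimal sketch (assumes scikit-learn): a simple, inspectable model
# in place of a black-box deep net. The dataset is a stand-in.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()  # any tabular dataset would do here
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(data.data, data.target)

# One readable coefficient per feature: sign and magnitude show what
# the model leans on, which is the explainability a black box lacks.
coefs = model.named_steps["logisticregression"].coef_[0]
for name, w in sorted(zip(data.feature_names, coefs), key=lambda t: -abs(t[1]))[:5]:
    print(f"{name}: {w:+.2f}")
```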

[16:55] Debbie Reynolds: Right, I agree with that. Then there's another misconception I want you to help clear up, and that is that people have junk data and think they're going to throw it into an AI hopper and all of a sudden a diamond is going to come out the other end. What are your thoughts about that?

[17:14] Paul Starrett: That's expecting too much from something, right? They're expecting it to come out with this nice, clean, clear, valuable thing. Anytime someone mentions AI, you always have to start out and say, we're going into this thinking it might not be useful, it might not be worth it. Because if you don't know what your AI is doing, it might do things you don't know about. Who knows what it's doing if you don't understand it? If you don't understand it, you are really running a risk. So if you expect diamonds, make sure you know it's going to give you a diamond. I think back in the day, the late 2000s and early 2010s, and I'm sure you saw this too, back in the ediscovery days, with using predictive coding or natural language processing to find relevant documents and so forth: what these companies would do is say, well, we'll always give you 80% recall, which is to say, we'll always find 80% of the documents that are relevant. That's impossible to know. Or they would just hire software salespeople, go sell, who knew nothing about the fact that the data has to be good to begin with. They tried to sell something that had no real possibility of having efficacy, because they never looked at the data in the first place. So again, you just have to be very clear at the outset. You have to always ask yourself: is this worth it? Do we have what we need? Then you can say, yes, I'm going to get a diamond out of it.
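
To pin down the metric Paul is quoting: recall is the fraction of truly relevant documents the system actually finds, and you can only measure it against ground truth, meaning documents a human has already reviewed and labeled, which is why a blanket 80% promise is unknowable up front. A minimal sketch in Python, with made-up labels:

```python
# Minimal sketch with hypothetical labels. Recall can only be measured
# against ground truth, i.e., documents a human has already judged.
truth     = [1, 1, 1, 0, 0, 1, 0, 1, 0, 1]  # 1 = actually relevant
predicted = [1, 0, 1, 0, 1, 1, 0, 1, 0, 0]  # 1 = system flagged it

true_positives  = sum(t == 1 and p == 1 for t, p in zip(truth, predicted))
false_negatives = sum(t == 1 and p == 0 for t, p in zip(truth, predicted))

# Recall = relevant documents found / all relevant documents.
recall = true_positives / (true_positives + false_negatives)
print(f"recall = {recall:.0%}")  # here: 4 of 6 relevant docs found, ~67%
```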

[18:48] Debbie Reynolds: Let's talk a little about governance. Governance was almost like a dirty word within corporations for a long, long time, because it was something you had to do, like eating your vegetables, and people didn't want to do it. Governance projects typically aren't the marquee moneymaker projects; people think of them more as a cost. And now that we have people who are really hot and bothered about trying to adopt AI, you have to have that governance conversation with them. A lot of times it's like asking, did you eat your vegetables? So what are your thoughts about that?

[19:26] Paul Starrett: Yeah, that's a good question. I think it kind of depends on how you define governance. I generally like to pull back and use the term GRC: governance, risk, and compliance. Feel free to rephrase that for me, by the way. The G of GRC is the internal policies, the internal attitude at the top, that type of thing, about how to govern your data and your operations. The risk is just like we said: you're actually taking something and saying, all right, what's the benefit, what's the risk, what's the probability of harm? And you analyze that. Compliance is the external things: laws and regulations external to the enterprise that you have to consider, like GDPR and breach notification laws. At a high level, the key is to recognize the reasons why those things are there. So we go right back to the foundation, which is explainability and simplicity, relatively speaking, so you know what it's doing. Because if you buy a car and it doesn't work, that's a risk: you bought the car to get you from point A to point B, and it won't do that, or it goes too slow, or it costs too much. That's the initial threshold question. But there are also public policy things we've heard about with AI, like bias. Are we hiring people disproportionately, favoring one class that shouldn't get that bias in their favor, or biasing against who someone is? Are we determining a sentence for a criminal based on past information that's going to give them more time than they really deserve? These are all fairness, public policy questions, and they're there for a reason. You can call governance eating your vegetables, but vegetables can taste good, and they're full of nutrients, right? That's the way I always look at it. It's an important part of, one, your commercial purpose: if your AI is not doing what it's supposed to, that's part of governance, and that's a problem. Two, you want to look at the risk to make sure you don't wind up paying more for something than you should have to, or creating more harm than benefit. And three, the compliance: you don't want to wind up with a class action lawsuit or some other problem. If you look at the reasons behind it, that can help you either not see governance as eating your vegetables, or see eating your vegetables as something you can smile about. Governance is all about understanding what's going on, internalizing it on some level, and then coming up with some sort of a plan. That's the big fluffy sentence you can put around all of that, I suppose.

[22:11] Debbie Reynolds: Yeah, absolutely. What about bias in artificial intelligence? Part of this is understanding, again, the risk, how you're using AI technology, and what outcome you're looking for, because you don't want to do something that's going to cause a human harm, right?

[22:34] Paul Starrett: Yeah.

[22:34] Debbie Reynolds: The example I like to give is, let's say you're selling shoes on an e-commerce website, and you're saying, well, let me show Paul blue suede shoes. So they're going to show you blue suede shoes, but also some red shoes and some brown shoes. That's not harmful. It's like, okay, it's mostly blue, and we'll show you other stuff too. But then when you're trying to use AI for things like sentencing people to jail, or using a facial recognition system to say, okay, is this the person or not, you really need more precision in those types of things. It shouldn't be, well, we think it could be any of these hundred people, instead of this specific person. What are your thoughts about AI uses in that way, in terms of thinking about risk and bias?

[23:27] Paul Starrett: Well, that's kind of a deep topic, and one that I understand but don't do full time. There are people who do it well, and it's all they do. On bias and fairness, like you said, if it's just showing me blue shoes because it thinks I'm maybe a little bit colorful in my thinking or something, who cares? Even though I'm getting biased results, it's like, who cares? But I don't think anyone wants, first of all, to be the victim of getting five years instead of three because some algorithm inadvertently took data that had bias baked into it because it was looking at historical data. That historical data is biased by its very nature, because certain groups were targeted more, or certain groups got longer sentences than others. I don't want any person, let alone myself, to spend a day more in incarceration, or pay more in a fine, than they should have to. Or, conversely, someone who deserves more doesn't get it. There's a sense of fairness there. So what's really important is the type of data that you have, and you have to make sure that you're applying the proper algorithms. There are a lot of things under the hood about how you sample, if you're sampling, and what you consider to be balance or imbalance within the classes that you have. When I say classes: Republican, liberal, you know what I mean? Or Hispanic, African American, Caucasian, and so forth. How you determine what is or isn't fair or biased is oftentimes a political thing, because what's wrong in one culture may be completely acceptable in another. So there has to be an understanding of the culture, an understanding of what the data is and what kind of algorithms you're using, and then you have to look at the results and see if they make sense. I probably haven't said anything new, but that's the reality.
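
One concrete example of the under-the-hood checks Paul alludes to is comparing outcome rates across classes. Below is a minimal sketch in Python; the group names and hiring numbers are hypothetical. It applies the four-fifths rule of thumb, a screen long used by US regulators for possible disparate impact, rather than a definitive fairness test.

```python
# Hypothetical hiring outcomes per group: (number selected, total applicants).
outcomes = {
    "group_a": (45, 100),
    "group_b": (28, 100),
}

# Selection rate = share of each group receiving the favorable outcome.
rates = {group: sel / total for group, (sel, total) in outcomes.items()}
highest = max(rates.values())

# Four-fifths rule of thumb: a rate under 80% of the highest group's
# rate is a common screen for possible disparate impact.
for group, rate in rates.items():
    ratio = rate / highest
    verdict = "review for disparate impact" if ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.0%}, ratio={ratio:.2f} -> {verdict}")
```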

[25:41] Debbie Reynolds: No, it's crystal clear. You're always a clear thinker, and you're very fair in the way that you describe things. I always appreciate that about you.

[25:49] Paul Starrett: I appreciate that.

[25:51] Debbie Reynolds: Tell me, what are you seeing that is maybe surprising to you about the types of challenges companies are having with data and AI?

[26:02] Paul Starrett: Well, what comes to mind from my perspective is that there's a hype cycle; that's nothing new. But as you get down into the weeds more, I think people don't know what to do with the risk. They don't know where to start. There are so many things. There's application security for the software being used to write the machine learning application. There's data lineage and data discovery: where did your data come from? Just those two topics are daunting. But you also have privacy-enhancing technologies, and the ability to identify personal data, well, that's part of the discovery. You have model governance platforms, you have data ownership, right? So you have all these things. How do you approach that? What laws apply? What do you need to put in place to make sure that your risk is where it should be and your compliance boxes are ticked off? I think that's the big question right now, and that's a place where both of our practice areas are really very valuable. So I would say that's where things are right now: the hype has hit, and people either aren't addressing it or are just looking the other way for now. If they're offering blue shoes on a website, they probably don't have to worry too much. But if they're landing airplanes, people just don't know what to do. It's so new. And real quick, I'll give you an analogy that kind of helps. You can look at normal, non-generative machine learning as like trying to predict where a golf ball is going to go. You've got the ball, you've got the club, you've got the wind, you've got the swing of the person, you've got whatever else. Can you predict where the ball is going to go if you hit it a certain way? Very straightforward. Generative AI is more like a pinball machine, right? It's contained, but the ball whacks around and hits all these different things, and it hits these flippers and such. So at least with regard to generative AI, to your question, think of it like a pinball machine. It's okay; it will work. It looks like this crazy, strapped-to-a-rocket kind of thing, but it is possible to understand what it's doing, to have a sense of what's going on. That's another long way of saying that generative AI risk right now is really an unknown. No one really knows what to do.

[28:37] Debbie Reynolds: I like that analogy about the pinball machine. I think that's very apt.

[28:41] Paul Starrett: Thank you. Yeah.

[28:43] Debbie Reynolds: This is what I think about AI in general, and I want your thoughts about the hype cycle, because I have an opinion about that as well.

[28:50] Paul Starrett: Okay. Please.

[28:52] Debbie Reynolds: For me, artificial intelligence is a thing that helps you do a thing. It's not the thing itself, right? So part of that hype cycle is that people are trying to make it the centerpiece of whatever a company is trying to do, where really the proper use is to find something that you want to do and have AI help enable you to do that thing. I had a dear friend who used to say, people don't want technology, they want things that work. So if you can create something to help someone do something that was either hard to do or that they weren't able to do, they're going to be happy about that. But they don't really care how it happens, so they don't care that it is AI. What are your thoughts on that?

[29:37] Paul Starrett: Yeah, I think you and I have both lived through the data adoption, disruptive time, I guess, and then the AI piece. The elephant in the room at that time was ediscovery: that you could now search millions of documents and not have someone look at each document one at a time. I always liken it to this: where am I going? If I'm going next door, do I need a car? No. If I'm going two blocks to the park, do I need a car? Probably not. If I'm going to the store that's a mile away, do I need a car? Probably, because it's far enough that it warrants taking the car, and I might have groceries I need to put in the back. Or now let's say I want to go to Los Angeles for a weekend. Am I going to drive? Probably not; I'll probably take a plane. And you don't want to take a plane to your neighbor's house, right? So it's the same kind of thing: what are you doing, and why? Start out by asking, am I going to walk, am I going to drive, am I going to fly, am I going to run or ride a bike? That's always been, for me, the way to think about AI. If there's no reason to use it, then don't. If things are working fine, why add complexity where you don't have to? And the other question is, of course, if I'm going to take my car, is the car working, or is it going to break down? Am I going to run a stop sign and kill someone? If we get on an airplane, is it made by Boeing or not? That's the joke there. That's the way I look at hype. That's the way I look at AI and anything that's technical. Like you said, it helps a thing do a thing, which is great. That's about as technical as you're ever going to get, I think. But people say, oh gosh, there's this brand new shiny Porsche, I want to own one. Well, if you live in a very small town where you're never going to be able to use it, what's the point? It's just to show it off. That's kind of a very Sesame Street way of putting things, but that's the way I try to look at it, because you can't just walk AI into a problem. You just can't. Like we said earlier, you have to start small and work your way out.

[31:55] Debbie Reynolds: That's true. Well, let's talk about the hype cycle. I think there's always a hype cycle when people feel like you're using a technology in a new or different way, and obviously the media likes that, right? Because they can get eyeballs on stuff. Some people think, okay, with this hype cycle, what's going to happen is that people are going to stop wanting to use AI, and then they're going to go back to the olden days. That's just not going to happen. But what are your thoughts?

[32:20] Paul Starrett: No, that's a great question, and I think we've touched on facets of it along the way here. I do think that we learned our lessons from the last AI hype cycle, if you will. We all sort of said, oh yeah, here's AI again; there's a little bit of once bitten, twice shy. So I think there was some skepticism baked in going in. But when people saw that you could write an entire paper with just a few sentences for the prompt, or that you could ask it to summarize something, well, it's a wow factor. It is the brightest, shiniest object we've seen in a long time. And there are things being done, like the app that does the transcription, which isn't generative, but it's the same analogy: people see that, and there is FOMO, fear of missing out, which causes some of the hype cycle. They figure: one, can I make a living at it? Two, can I make a startup from it? Can I use it to cut costs in my enterprise and make myself look good? Can I use it to increase revenue and profit? Can I use it to compete? All these things cause people to get their face shoved in it, because they have to. So I think it's not as bad as it was in the late 2000s and early 2010s, but the expectation, alluding to your diamond point earlier, is that you really have to understand what the technology is capable of and what it does. Frankly, it's not any smarter than your data, period. You have human review and such. But if you look at the hype cycle collectively, I think it's here to stay. I think it's going to become a new part of how life is done, just like the Internet, just like search engines, just like SaaS tools. They're all here, and we're living with them and using them every day. I think generative AI is going to find that place in a bumpy but ultimately net-gain way. Despite the fact that you're dealing with a pinball machine, with what is really something like a beehive, it is manageable. A pinball machine is contained: the ball stays in that little environment, you've got the little flippers and such, it works. Put a quarter in and you have fun. So for a generative AI tool, whether it's for audio or visual or multimodal or text, I think we're going to find a net-gain proliferation.

[35:13] Debbie Reynolds: I agree with that. I think stories of the demise of AI, especially generative AI, are greatly exaggerated. I don't think it will go away. I think artificial intelligence will have more of a horizontal impact on industries, where it'll be imbued in a lot of different little ways that maybe aren't the marquee, rah-rah, front-and-center types of use cases, but it'll be baked into a lot of the stuff that we're going to be doing in the future. And part of that settling down, hopefully, is people not saying it's going to end the world in two years or it's going to cure cancer, but maybe it helps someone who did a task that was 50 steps, and now it's 10 steps, right? Or, like this transcription example: instead of me listening to the audio and typing it out myself, having something that can help me, that I can just tweak, use, and then go on and do something else.

[36:24] Paul Starrett: Exactly. And I couldn't agree with you more. Everything you said, from my perspective, completely lines up with everything that I've mentioned. It is going to be very highly dependent on the vertical, the domain, because the type of AI you use to transcribe audio is very different from the one that's going to analyze an MRI or an X-ray for some sort of cancer, which is going to be different from the one that helps you interact with the customer online. AI is very dependent on the use and the data that you have. So I think you're absolutely right. I do think there's hype; there are hucksters out there, and people who don't think and rush in. I'm here in Silicon Valley, and we have a lot of investor events here, just because they're fun to attend, and there's a lot of money that went into things just on the hype. People lost money, investors lost money, because, oh, AI, great, here. So that's the problem, I think, but I agree with you. Now I want to put a question to you. You're the Data Diva. You're a privacy expert; I mean, you're broader than that, but you see the privacy thing a lot. As you've gone through your ascension and your path with privacy, can you compare the growth and maturation of privacy with that of AI? Is there a similarity between the two? Because I think it parallels in a lot of ways, and I think for you it's easy to glean from your past experience with privacy with regard to AI. Is there an answer to that?

[38:10] Debbie Reynolds: Well, you always ask me the deep questions. That's why we're great friends. I think that there are some parallels. For me, I've always been interested in how data moves, how it flows in systems. Part of that is what you can do with data, what you should do, and what you shouldn't do. I think those questions come even more into focus, and more into view, when you bring AI in. Now you have technologies that can do things at scale that can be good or bad, right? You can accelerate bad things, you can compound bad things, but you can also compound or exponentially speed up good things. As for my personal interest in privacy, I was hoping people would be more interested in it back when I started many years ago. But I'm happy about where it is now. I think the reason people started thinking about privacy more is that it moved into that compliance space, where it became more like, oh my God, we're going to get fined, not, let's have a warm and fuzzy conversation about doing the right thing and ethics. But I think that conversation is now joining together with AI, because companies are trying to, or have the capability to, do things with data in ways that they never did before, and that just raises those questions. Before, they weren't capable of doing some of this stuff, and now, because of AI, they are, and it could be really good or really bad as it relates to privacy. So for me, it just elevates that conversation. I think the two do track together and will be intertwined forever.

[39:58] Paul Starrett: That's very interesting. Yeah, I thought there was clearly something there, but I wanted your perspective, because I know you're very prolific and very well placed. It's always good to hear your thoughts on that.

[40:10] Debbie Reynolds: Thank you, thank you. So, if it were the world according to you, Paul, and we did everything you said, what would be your wish for either AI or privacy anywhere in the world, whether that be regulation, human behavior, or technology?

[40:28] Paul Starrett: Well, yeah, that's a great question. I think the first thing is to have proper expectations. But there's also downside to AI. Obviously there are the extreme cases, you know, Terminator 2, where it gets its own sense of itself and attacks things it thinks are not good, just being heedless in a sense. But it's also taking people's jobs. A lot of things that you would hire out on, let's say, Fiverr or Upwork or some of these other platforms are now being done by AI exclusively: generating PowerPoint presentations, or even pictures, illustrations, brochures. It impacts human beings, and on some level it could impact the balance of wealth. I'm going way out on a limb here, kind of going off the rails, but let's just be real that it has that impact. There's a collective effect of any new capability. So I would hope that we learn to grow with AI, or any technology, in a way that really accommodates all of us, that accommodates the bigger picture, and that is fair in how it is used. It's like a gun: it can be used to defend yourself or to commit a murder. So the first thing would be to have this sort of ethical awareness of the impact that it has. And the other piece would be to say, let's all learn what we can. That's the biggest piece that's missing: the more you understand it, the more you can get it to work. So I guess those would be my two short answers. I could go on, but it comes down to having expectations that are realistic, and having any AI or machine learning or any technological advancement be done in a way that is not clunky.

[42:24] Debbie Reynolds: I like those answers. I guess maybe I'll expand upon your dream.

[42:30] Paul Starrett: Please, your wish. Sure.

[42:32] Debbie Reynolds: And that is, because of the rapid advancement and rapid adoption of AI, people are going to need to be educated more often, maybe in incremental ways. Artificial intelligence isn't a set-it-and-forget-it type of thing, so there has to be more human involvement in it: more tuning, more care and feeding, more thinking about what happens to the data all the way through the life cycle. Corporations haven't traditionally done training that way or thought about problems in that way, so I think this will be a new frontier. The companies that really adopt that way of thinking will be far ahead of everyone else in the future. What do you think?

[43:20] Paul Starrett: I totally agree with you. It's about humans, really. AI is enabling the human quest, right? That's what it's there for, and I think we have to look at it from that standpoint. And yeah, if people don't know how to use it, or don't understand its risk on a daily-use basis, think of shadow AI, right? Where people are spinning up AI tools without their governance department knowing about it. Or they're using it in a way that's risky: they're entering trade secret information through a prompt, something else is picking up that trade secret information, and now it's in some model somewhere. So now your trade secret's gone. People understanding how it works and how to use it properly is, I think, really important, because that's where things go off the rails most of the time.

[44:17] Debbie Reynolds: I agree with that. Well, thank you so much for being on the show. It's been my pleasure. I'm so glad we were able to chat. It was many years in the making for us to have this conversation, but I'm happy we're having it now.

[44:31] Debbie Reynolds: You're right at the right place at the right time. You're in my stratosphere, so I can reach out to you anytime.

[44:38] Paul Starrett: And I should also thank you, because I had you on our podcast, I think about a year and a half ago now. Thank you for that, and we'd love to have you back on ours as well. And again, thank you for the opportunity to share my thoughts and be on your podcast. It's always good to talk to you.

[44:53] Debbie Reynolds: Yeah. Thank you so much. And we'll talk soon for sure.

[44:57] Paul Starrett: All right. Thank you.

[44:58] Debbie Reynolds: You're welcome.