E212 - Dr. Genevieve Bartuski, Founder and CEO of Bartuski Consulting, Data Privacy, Cyberpsychology, & AI Governance

[00:00] Debbie Reynolds: The personal views expressed by our podcast guests are their own and are not legal advice or official statements by their organizations. Hello, my name is Debbie Reynolds. They call me the Data Diva. This is the Data Diva Talks Privacy podcast, where we discuss data privacy issues with industry leaders around the world with information that businesses need to know. Now, I have a very special guest on the show, all the way from Virginia, Dr. Genevieve Bartuski, who holds an MBA. She is also the founder and CEO of Bartuski Privacy Consulting. Welcome.

[00:42] Dr. Genevieve Bartuski: Thank you. I appreciate you having me here. And I love that they call you the Data Diva. I love that.

[00:47] Debbie Reynolds: Love it. Thank you. That's so sweet. That's so sweet. Well, you and I connected on LinkedIn, and I thought your background was fascinating because, you know, a lot of times when we talk about cybersecurity and safety and privacy, we think about data and technology, but really it is about people. And because you talk about behavioral science, data privacy, cyberpsychology, and AI governance, it really piqued my interest. I was like, oh, wow, this is so unique. So why don't you introduce yourself and tell me how you came to be the founder of your own consulting company?

[01:25] Dr. Genevieve Bartuski: Sure. I appreciate that. It's a long and windy road, but I'll try to condense it down. So I was originally a forensic psychologist. That was my career, and I practiced at the doctorate level. I worked in a supermax prison and on death row out in Oklahoma. Here in Virginia, I worked for a state hospital and did risk assessments and NGRI assessments, which is not guilty by reason of insanity, as well as competency evaluations. I testified in civil and criminal cases, giving recommendations to the court. And when Covid hit, I was like, I'm burnt out, completely burnt out. I was probably burnt out long before Covid hit, but at that point I was like, I need to do something else, and I ended up going back for my MBA. I still wasn't sure what I wanted to do. And two things happened that pushed me into privacy. The first one was I was offered a job straight out of my MBA, and this is where the importance of accurate data comes into play. The company that was offering me the position said, okay, we're going to do a background check. Great. I don't mind. I've had tons of them. I interned with the federal government when I was doing my doctorate, worked for states. I'm not concerned about my background. They used a private company to run it, and this private company actually had me listed with multiple Social Security numbers, with my sister's name as my alias, and one of my alternate Social Security numbers was my deceased brother's. I mean, it was really weird, so bizarre. It was the most bizarre background check I ever had. It started to plant that idea in my head of, okay, data is very important, the accuracy of data. I had dealt with it as a researcher during my doctorate, but seeing how it affects individuals, I mean, it affected me to the point where they withdrew their job offer. But then I ended up working for a tech company, and because of my background, I was on the ethics committee within that tech company. That's where I met our privacy officer. He was based in Australia and me here in the US, and that's the first time I ever got introduced to it. I had no idea that data privacy existed as a field. Like, I knew it existed, but to me it was just something that you just did. He and I would have lots of conversations, and he shared resources with me, and that's really what sparked my interest in data privacy. Then I wanted to branch out on my own, be my own boss. I spent a little while thinking about how I could bridge my background with what I know about technology and my interest in data privacy, and that's how I came up with my business.

[04:07] Debbie Reynolds: Wow, that's such an interesting route. I never heard of such a thing. I think your story about that identity issue that you had, I find that when people do have issues like that, it really does capture their attention about privacy and why data is important, and it's hard for an individual to fight that. Right. So having people who are data custodians or data stewards making sure that that data is correct is such a vital part, because people who are in those situations, well, like you, you were able to overcome that, but some people can't overcome those things, right? So it kind of creates this caste system, I think, and this marginalization of people in society. Tell me a little bit about cyberpsychology. This totally piqued my interest when I saw that, because so much of what we talk about that happens in technology, especially privacy and cyber, is human, right? So I think we talk a lot about technology, but these are human issues. So tell me your thoughts on that.

[05:28] Dr. Genevieve Bartuski: So cyberpsychology is huge. It encompasses everything from how we use computers and technology to how it affects us in our daily lives. With my business, I'm very interested in how people use technology, but I also have an interest in how technology affects people. Right now, one of the things that's really interesting me, that I've been reading quite a bit about, is the children of influencers, where the children have grown up online without their consent and some of these very private moments have been broadcast for the world to see, and little bits are made and shared on TikTok and Facebook and all of those. Now they're coming to be about 18 years old. So how is it affecting them that they've had these very personal moments of their lives put online? I'm giving away my age here, but I'm Gen X, where my youth was not online. So that's one area that interests me on a more personal level. And I think about how technology can help people, how it can hurt people, how people can use it for bad and how people can use it for good. All of that comes into cyberpsychology.

[06:47] Debbie Reynolds: And let's talk a little bit about AI, artificial intelligence. How has artificial intelligence changed your work or changed the game? I feel like it's really heightened a lot of those privacy harms, made it privacy harm at scale, in my view. But what are your thoughts?

[07:08] Dr. Genevieve Bartuski: Oh, I absolutely agree. Most of our listeners probably know this, but if they don't: AI needs a lot of data to learn, through Internet scraping, all of that stuff. So I have a couple of concerns with AI, and don't get me wrong, I enjoy using AI. I'm very happy to use ChatGPT to help me generate ideas. I don't let it write for me, because I just have an issue with that, but I'll use it to help me generate ideas. But there are a couple of concerns that I have. Number one is, I know Google's using it now and different things, so how much personal information are they taking to train the AI? Is that information accurate? I'm concerned about the inherent biases in AI, because if you think about it, AI is taking information from people, right? And you have biases within society, and then AI is going to use that and generate information. So I do have concerns about that, and about how that's going to affect people. And as somebody who creates things, I'm actually writing a book right now, so I worry: is somebody going to use my work in AI? Is it going to be passed off as somebody else's after I've put in all this background work, all this research? Those are some of my concerns with AI.

[08:26] Debbie Reynolds: I guess one of my concerns, and I have many concerns. You know, I'm a technologist and I love technology, but I don't love everything that companies try to do with technology. I'm like, okay, well, this is a good use or this is a bad use, or have you really thought about this downside? But I guess one of the things that concerns me about artificial intelligence, and you touched on it a bit, is that these systems do need a lot of data to be able to learn. But I feel as though in the past, when people were creating data systems, things were much more curated, right? Someone was using their judgment to say, okay, this should or shouldn't be in a data system, whether it's accuracy or relevancy: this is relevant, so it's in this data system. But now we have these cauldrons of data sets that are being created indiscriminately. So in a way, you're creating a system where almost everything is important. You can't really tell, first of all, what's real, what's fake, what's important, what's not, what's relevant, what's not. And especially for people who are using AI systems, I'm concerned about them abdicating their human judgment to these systems when they are just a cauldron of data. People are giving the system authority that it probably shouldn't have, because it doesn't have that human curation. What are your thoughts?

[09:59] Dr. Genevieve Bartuski: Oh, I 100% agree with that. It doesn't. As you were talking, I was thinking back to when I was doing research as a doctoral student and how we would look at the data. Does it fit? Is it accurate? How did we collect it? And you're right, AI just kind of takes everything. It's so new, there aren't a lot of guardrails on AI. It's kind of interesting, I was reading the EU AI Act, the Artificial Intelligence Act, yesterday. One of the things that I really like is that they are trying to put those guardrails in place. I was actually going through the timeline yesterday; it's really interesting. But one of the things they want to prohibit AI from doing, and this goes back to my forensic background, is predicting criminal behavior. Things like that worry me, because we already deal with racial profiling and inherent biases as human beings. How awful would that be on a big scale? My nephew's mixed, and I see him going through his daily life and how people treat him differently because of the color of his skin. Something like that is kind of scary when you think about it at an AI level, because you do have CCTVs everywhere. You can just see the downfall and how that can really negatively affect people on a massive scale.

[11:31] Debbie Reynolds: Right. Especially if you have data and systems that people don't know about, or data where people are being judged or scored or rated in some way, especially children, seeing how that data goes into those systems and how that data gets transferred throughout their life. I think that's definitely an issue. One other thing I want your thoughts on with AI, and I guess it goes back to bias. The example I'll use is medical studies. We've known historically that a lot of medical studies were primarily using men. And so the bias that I'm concerned about in AI systems is that you have these systems making decisions that are not transparent, you don't know how they came to those decisions, but then you're going to apply those decisions or those learnings to a broader group of people to whom they will not apply, and that's a problem. What are your thoughts?

[12:35] Dr. Genevieve Bartuski: Oh, I absolutely agree. I can't speak as much about medical studies, but definitely with psychological studies, because the majority of psychological studies are done with college students. Even the personality assessments and the IQ tests, the intelligence quotient measures like the WAIS, are done primarily on white college students, and that's how they're normed. One of the things we do see is that colleges and universities are getting more diverse, but the norming doesn't take into account individuals who are from different cultures or different socioeconomic backgrounds, all of those things. On the WAIS, which is the Wechsler Adult Intelligence Scale, there's a section on comprehension, and some of those questions ask about things like Martin Luther King. I don't want to give too many of the questions away, but that has nothing to do with intelligence; that's more about your educational background. So for kids who haven't had that education, or didn't have the opportunity to have it, the scale is going to be off. We do score these by hand, actually, though they can run them through computers now. But it does concern me: what will the AI interpretations of that be, and of personality tests too? Like the MMPI, which is the Minnesota Multiphasic Personality Inventory, or, oh, there are bunches of different personality tests. How is AI going to score those? It doesn't have that human element to it. When I would give an intelligence test, like the WAIS or the WISC, which is for children, as I was writing up the report I would definitely take into account all of those things: the culture, the background. Is English the kid's first language? Is it their second language? How many languages do they speak? What's their environment? All of that goes into interpreting those tests.

[14:48] Debbie Reynolds: Right. I know it's very scary when you don't have a human in the loop, because, as I always say, AI is like a machete and not a scalpel. People think you can do these nuanced things with AI, and it's really not made to do that. For those more nuanced things, I think that's where you really need a human in the loop to be able to interpret it. But then it's hard to do that if it's not transparent, right? You can't really take the result at face value without knowing more about how it arrived at that result. So what's happening in the world right now that's concerning you most?

[15:29] Dr. Genevieve Bartuski: Oh, goodness. I think there are a lot of things out there. I don't know if there's one thing that concerns me the most. This is more personal, but one of the things that concerns me is that, as I build my business, I have to be visible, I have to put myself out there, and I'm a relatively private person. So one of the things that concerns me is how much of my information is going to be out in the world just as a person. I think that's one of the big things. I'm also concerned about deepfakes and AI videos, and I'm concerned about the exploitation of people. I think that's one of the things that's really concerning. Those are probably the biggest things. I do really think about how people are exploited with this data in different ways.

[16:15] Debbie Reynolds: Yeah, give me some examples. When you talk about exploiting, let's see.

[16:20] Dr. Genevieve Bartuski: So now people can take your kids' pictures off of your Facebook and create child **** videos. That's very concerning to me. I'm also a little concerned about some of the threat assessment software out there. I know we do need to keep people safe, but to what extent? How much information are we taking? How much of it is actually needed? In schools now there are different threat assessment software systems on the market, and one of the big things they're pushing now is being able to scan the students' drives. One of my concerns about that is scanning a student's drive and isolating pictures and things that may be considered **** or that might be considered dangerous or a potential threat. We're isolating those, we're capturing them, we're putting them into a drive, and now somebody has to look at that. To me, if you're a kid, kids are going to do kid things. They're going to do stupid stuff. That's just part of being a kid. Again, I'm very glad that I didn't grow up in this generation. And it concerns me that their drives can be scanned, and if they've taken photos to share with their boyfriend, even though they shouldn't, and I'm not going to say that they should, now somebody's going to have to go look at that. And if you have somebody who's not the best person at the school or at the company looking at those, we've essentially just handed them child ****. So there are a lot of things that concern me about that. I'm very glad that my nephew is out of school, because I just feel like it's an invasion of his privacy for somebody to go look at his computer, even though it is a school-issued device. I get that.

[18:11] Debbie Reynolds: I agree, and I share your concerns about deepfakes and how fast that information gets disseminated, who sees it, what can be done with it. Also, we're seeing a push towards doing age verification for children online. And one of the concerns I have there is, okay, well, let's collect even more data.

[18:33] Dr. Genevieve Bartuski: Yes.

[18:34] Debbie Reynolds: Okay, well, that's a challenge, right? So what are your thoughts about that?

[18:40] Dr. Genevieve Bartuski: Yeah, it's one of those things where I don't want to have to deal with age verification myself, because I don't want to give that data. So I'll just not go to that site. For me, I don't need it that much. I do see the need to keep kids safe, but you are right, it's actually just collecting more and more data.

[19:01] Debbie Reynolds: I think it's going to change. Just like you say, there are certain sites you wouldn't even go to. Do I trust this company enough to maintain the security of the data that I give them, or do I even think it's necessary? I find myself pushing back when people ask me for certain personal information, and I'm like, well, why do you need that? Why is it important that you have it? And I think in the past, especially in the US, it's been, let's take as much data as we possibly can and maybe in the future we'll have some use for it. But we're seeing, with a lot of the data breaches that are happening, that people are more guarded now. They're saying, hey, I'm going to be more careful, or I'm going to think through the data that I give someone and see if I trust them. So what do you think? Do you think that people are getting more savvy about privacy and the security of their data, or do people just not care? Is it just us, like crazy women who are totally obsessed with privacy? Do you feel like the general public is starting to care more about their data?

[20:13] Dr. Genevieve Bartuski: I do see a trend towards people caring more about their data, but I also see that people get almost like privacy fatigue, because they're constantly... I can't tell you how many times my phone tells me there's an updated privacy policy. And even being in this field, I'm like, I don't feel like reading that. So I think it does have an effect on people. And you mentioned how they look at their policies and it's written in legalese. Why can't you just say it very straightforwardly: this is what we're doing with your data, this is how we're storing your data, and this is what you do if you want your data removed. I don't understand why we have to have these big, huge words. Why can't we just be very straightforward?

[21:00] Debbie Reynolds: I agree. And in the EU especially, they've been pushing for many years to have more policies and things written in plain language, which I hope we have more of here, because consumers are generally not lawyers, right? They just want to know what is relevant to them, what's important to them, and they shouldn't need to read 80 pages of legal jargon to be able to find that out. It's just not right. So, yeah, I agree with that.

[21:29] Dr. Genevieve Bartuski: Exactly. Yeah, I do like what the EU is doing with their data and how they considered privacy, data privacy, a human right.

[21:38] Debbie Reynolds: So since you've been reading the AI Act, I guess I'll ask you a bit about that.

[21:44] Dr. Genevieve Bartuski: I'm by no means an expert, but I am reading it.

[21:47] Debbie Reynolds: Yeah. Yeah. Well, tell me, what significance do you think the AI Act will have, just kind of internationally, in terms of influence?

[21:57] Dr. Genevieve Bartuski: I'm hoping that other countries follow suit. I would love for us to follow suit as well. With what I read in the act yesterday, I was really focusing on the timeline of how they're rolling it out. I would love to see that be copied in other parts of the world, because I think it's really important. They're putting some really significant guardrails in there to protect people and to protect society. It'll be interesting to see how it actually plays out, how it's enforced, because you can have the acts, but if you don't have anybody enforcing them, it's not going to go anywhere. I would love to see something like that come over here. I don't know how plausible that is given the way we debate things here in the United States. Things sit in Congress, and it's like, oh, no, it should be the states; oh, no, this will hinder the economy. So we'll see, because we don't even have a privacy act.

[22:56] Debbie Reynolds: Yeah, right. It's putting the cart before the horse on the AI thing, even though I know that in the US we have the executive order about AI, and every month different government agencies have been putting out their reports about what they're going to do about AI. I'm not sure that's going to lead to regulation, but at least I think within the year the government will have a lot more information about the best way to really implement AI. I guess a couple of things that I think are important about the AI Act. One is that I like the fact that they categorize AI by harm or risk level. That's very new, because we don't really think about it that way in the US. It's more like, oh, you're a consumer or you're not, so if you're harmed and you're not a consumer, it kind of doesn't matter. But there are things that you mentioned earlier that are prohibited, like pre-crime, like a Minority Report type of thing, in the EU. And I just read an article where a company was saying, hey, we're doing this pre-crime thing, and some state bought the software, so there is a pre-crime thing in the US. So maybe the AI Act and the way they go through that risk-based approach will create more shame for companies, where they're making more of a moral stand: we don't think it's acceptable that you use AI algorithms to try to predict crime. If someone hasn't committed a crime, you shouldn't try to pigeonhole them or mark or target them in some way that will be harmful to them. So I'm hoping to see that. And I also like the fact that they are taking time to roll it out in phases so that people really understand what they're doing. But I think it will be very influential, just like the GDPR has been. It's not that other countries have adopted the GDPR wholesale, but since the GDPR came out, you can see traces of GDPR in different laws, even in the US, in terms of terminology and how we name things. And having some of those things trickle down or spread around in different jurisdictions, I think it'll make our jobs easier when we understand and talk more in a common language. What do you think?

[25:29] Dr. Genevieve Bartuski: Oh, I absolutely agree. You said a lot in there, but like you, I do like how they categorize the level of risk. That's actually what I was looking at yesterday. This might be a little off topic, but I was looking at the comparison between how the EU AI Act categorizes risk and the NIST framework. It's hard to compare them, because they don't look at things the same way, and I was trying to put something together just for my own understanding of how they can kind of work together. As the EU AI Act rolls out, that risk assessment is going to be part of it; you're going to have to follow it. For our listeners, if you haven't seen it, it's like a little pyramid. It's really neat, and it tells you what kind of AI falls into which level. But the NIST framework over here in the US is just recommended; it's not required. I do think that businesses should still pay attention to it, and pay attention to what the EU is doing even if it's not required here in the US, because at some point I am hoping that we do get some of those regulations here. And like you mentioned, potentially our different industry regulators might put something in for their industries, and the states might do that. So it's always good to be proactive.

[26:52] Debbie Reynolds: That's true, that's true. I want your thoughts, because you are in cyberpsychology. I did a video not long ago about shadow AI. This is about people in organizations using AI even though it's not sanctioned, like they're sneaking off, not supposed to be doing it. And the issue is that people are putting personal information and confidential information into these public models. We know that there may be some intellectual property risk, there's some client confidentiality risk, and there's obviously some privacy risk, because I think the report that I read from Cyberhaven found that people are putting HR documents and personal Social Security information into these models. But from a psychology point of view, I find the whole shadow AI thing fascinating, because a lot of it is about the psychology of people and how they think. So tell me a little bit about that.

[28:00] Dr. Genevieve Bartuski: Oh, I think there's a lot that goes into the use of shadow AI. One of the things is that there's more pressure on people to be more productive all of the time at work, whether you're working at home, working remotely, or working in the office. AI does give you the ability to streamline some things, and so it can take off a lot of the pressure. But also, there's something most people walk around with that we don't think about. Have you ever heard of decision fatigue?

[28:35] Debbie Reynolds: Yes.

[28:36] Dr. Genevieve Bartuski: Yeah. And especially if you're a parent, the decision fatigue is real, and sometimes it's just easier. So I think about that with shadow AI. It's like, oh, gosh, I'm just so overwhelmed, I have so much stress going on, I'm just going to throw this into the AI and see what it does, because I don't have the capacity to think about that right now. And also, when people are stressed: your frontal lobe is where all your logic and your reasoning is, so when people are stressed out, that doesn't function as well. It just doesn't. Your mind goes into almost a fight or flight mode. Not to the extent where you're frozen or anything, but your mind goes into, okay, I have to deal with the stress, and it makes it harder to think rationally and logically and take that time. So I think a lot goes into that: the decision fatigue, the stress, the push to be more productive. I also think the company culture has a part to do with it, especially if the pressure's coming from up high, like, you've got to get this done, got to get this done, and there are more and more goals or things that people have to do. I think all of that pushes people within an organization to use shadow AI. It's like, okay, I've got so much on my plate, this is a lot, I can't really handle this, I don't have anybody to delegate this to, so let me just run it through the AI and let it handle it, and at least I can get it done. It's one thing off my plate.

[30:06] Debbie Reynolds: I've never heard anyone say it that way, but I think that's totally true, because a lot of my cyber friends are always banging their head against the wall. They're like, why did they do this? Why did they do this? And for me, I always said, talk to them, find out what's happening. Maybe there's something that they need to do that the tools you have right now cannot do. So that may be a conversation about bringing on another tool, or maybe giving a person training: let them know, here's the thing that you want to do, here's the best way to do it. Or also talk with them about what those risks are, because I think for the most part, employees don't want to do harmful things to companies, right? They just want to get work done. So if you tell them the right way to do that and educate them, I think that will definitely help. But ignoring those psychological stressors is a problem, because I've seen studies saying that companies, because they are adopting AI and these new tools, are putting more pressure on people to do more with less and to be more productive. And it's hard to do that when it's, okay, I already had a full job, now I have more stuff and you want it faster. How do I pull a rabbit out of a hat with what I have without adding more time or bringing on more people? So, yeah.

[31:40] Dr. Genevieve Bartuski: Oh, absolutely. And I also think about data privacy and cybersecurity from an organizational standpoint. When I worked in the prison, security was foremost. It was part of the culture. Everything we did, it was security first. And it was a bit of a mindset to get into, because up until I worked in a prison, I didn't have to bring my lunch in a clear bag, or go through a metal detector every day, or make sure doors were closed behind me. If you think about cybersecurity and data privacy within an organization, if you can make that part of the culture, part of what we think about all of the time, where it just becomes second nature, where it's like, oh, yeah, we don't use that because we don't trust the security of it, I think that's really important within an organization: having that cybersecurity and data privacy mindset. I know it's almost second nature to people who have worked in the healthcare industry because of HIPAA, because you're always making sure that people can't hear what you're saying, your computers are locked down, everything's passworded. It's not second nature in some office settings.

[32:52] Debbie Reynolds: Yeah. And also, I think not every security measure is a technical one. Some of it is just, like you say, locking your computer, or making sure you're not putting sensitive data on your desk, putting that in a drawer or something, or making sure that you're not saying sensitive things over an intercom or within earshot of someone when you're talking with them. Those are some low-tech cybersecurity things that any company can do. And actually, it's interesting, because I talk a lot with people in other parts of the world, like Africa and South America, and those countries don't have as many breaches as we do in the US. What I found is that a lot of them do a lot of these low-tech security things we don't even think about, like, hey, don't share your password.

[33:50] Dr. Genevieve Bartuski: Yeah.

[33:51] Debbie Reynolds: You know, stuff like that. So yeah, let's do more of that simple stuff.

[33:56] Dr. Genevieve Bartuski: Yeah. And you also talked about privacy breaches. One of the things that's really, really important in any secure setting is that if there is a breach, people have the ability to say, hey, we've got an issue, without fear of retribution. So having a company, an organization, where the team has that sense of psychological safety, that's another thing that's really important. You mentioned other countries, and yeah, it is low tech, but I also wonder, and this is just me questioning right now, are they more able to go to their supervisors or their higher-ups and say, hey, there's a problem? And how does the company respond? If I accidentally send information that I shouldn't send, am I going to get fired, or is it going to be like, oh, okay, all right? I was going to say the S word, but yeah, stuff happens.

[34:49] Debbie Reynolds: Right.

[34:51] Dr. Genevieve Bartuski: And then you just kind of move on from there? Or do they give you a reprimand and write you up and make people afraid to come forward when there are those issues, so you're less aware of them? Or if they bring up something like, hey, I know we're doing this and we haven't had any issues with it yet, but I can see how this could go wrong, how is that taken by the higher-ups and by the stakeholders?

[35:13] Debbie Reynolds: Yeah. Valid points, all. So if it were the world according to you, Dr. Genevieve, and we did everything that you said, what would be your wish for privacy anywhere in the world, whether that be regulation, human behavior, or anything with technology?

[35:30] Dr. Genevieve Bartuski: Oh my goodness. My wish. The first thing would be: don't take any more information than you need. That's a big thing, because sometimes I'm like, why do you need this? What is the point? Why do I have to share this? Do you really need to know this about me? No, you don't. So that would be one thing. And we're never going to stop bad actors. We're not going to stop people from taking data and using it, manipulating it, spamming you, or trying to scam you out of money. So I would love to see scammers disappear. That would take a wave of a magic wand, but it would be really nice, I think. And I would also like to see really good regulations, because we do live in a global economy. I don't know if we can do a global regulation, that would be wonderful, but we do live in a global economy where we could have regulations that work with other countries.

[36:27] Debbie Reynolds: I agree with that. I always thought that once privacy started to heat up, maybe The Hague or the UN would do something on privacy, where they would say, okay, we know that we're different jurisdictions, but at least here are some guidelines, some things that we can agree upon in the world, that we think this is a good idea or this is a bad idea. And that has not happened.

[36:55] Dr. Genevieve Bartuski: Right. I would love to see that as well.

[36:59] Debbie Reynolds: Yeah, yeah, that will help us a lot. Definitely. Well, thank you so much for joining me. This is great. I really appreciate all your insight. This is fantastic. And let people know how they can reach out to you.

[37:13] Dr. Genevieve Bartuski: Sure. I'm always on LinkedIn, so my name is Genevieve Bartuski; you can find me on LinkedIn. Please send me a connection request. I'm always happy for that. I also have a website. It's www.bartuskiconsulting.com, and I will spell my last name if you're listening: B-A-R-T-U-S-K-I, consulting dot com. You can also email me at genevieve@bartuskiconsulting.com.

[37:39] Debbie Reynolds: Perfect. Perfect. Well, thank you so much again. This is great. I'm so happy we got a chance to meet and talk today on the show. It's fantastic.

[37:48] Dr. Genevieve Bartuski: Oh, I appreciate it. I had a really nice time, Debbie. And I love the Data Diva. In my previous career, I was a salty psychologist.

[38:00] Debbie Reynolds: Thank you so much. I really appreciate it. All right, talk to you soon.

[38:03] Dr. Genevieve Bartuski: All right, bye.

