E134 - Pamela Gupta, CEO and Co-President, Trusted AI, an OutSecure Inc. company

53:36

SUMMARY KEYWORDS

privacy, ai, cybersecurity, people, security, data, creating, system, mentorship, model, business, important, building, talking, cyber, pamela, intended outcome, debbie, rely, elements

SPEAKERS

Debbie Reynolds, Pamela Gupta

Debbie Reynolds  00:00

Personal views and opinions expressed by our podcast guests are their own and are not legal advice or official statements by their organizations. Hello, my name is Debbie Reynolds. They call me "The Data Diva". This is "The Data Diva" Talks Privacy podcast, where we discuss Data Privacy issues with industry leaders around the world with information that businesses need to know now. I have a special guest on the show, my dear friend, Pamela Gupta. She is the CEO and Co-President of Trusted AI, which is an OutSecure company. Welcome.

Pamela Gupta  00:43

Thank you.

Debbie Reynolds  00:44

Yeah, well, we've known each other for many years, and we've collaborated on a lot of different things. I think we met when we were doing something with Lan Jensen around what was Tech Cares. This was at the height of the start of the pandemic, and they wanted companies to help with security and digital transformation during that time in the Bay Area. We were all on a joint call, and afterward you called me up and said, hey, we should do stuff together. So for a number of years, we've done a multitude of things together: videos, webinars. And I was just thinking, because you're someone I follow a lot and we chat a lot, why hasn't Pamela been on my show? Let me call her up.

Pamela Gupta  01:44

I was beginning to feel neglected.

Debbie Reynolds  01:47

So we're going to right a wrong right now and definitely have you on the show. You have such deep experience in cyber, and you understand the interconnectedness of that with other domains, including privacy and, obviously, AI. It's something you talk to me about quite a bit. I would love for you to tell people about yourself and your trajectory. How did you start in cyber? How did you find AI? How did you put all of that together?

Pamela Gupta  02:34

Yeah, thank you. It's great to be here, and it's always great to talk to you live. How I got started in cybersecurity and artificial intelligence is an interesting story. I started out with psychology, but I didn't think I could do clinical psychology; it was a bit too intense for me. So I went into what I thought at the time was most similar to psychology, which was how to make computers act and think like human beings. I did a master's in psychology and computer science with a focus on artificial intelligence. This was 25 years ago. Then, believe it or not, I created a product that was sold off to Westinghouse; it was an expert system, the only type of AI that was commercially viable at the time. It was a good experience with AI, even at a time when we didn't have this data tsunami, so to speak. It's that abundance of data that made AI really possible in its current form, because we have so much data coming from IoT, from different devices, from smartphones, and so on. We can talk about how we got to that data tsunami point, but it fueled AI in its current shape and form. That was my foray into AI, and my passion for AI is actually what brought me to this country from India. After I sold off that product to Westinghouse, I got more into programming and somehow got looped into network security. So I have this really interesting career trajectory: AI, building an expert system in AutoCAD, then C++ programming, then creating an SMTP gateway for Gartner using scripts. It's a funny career trajectory. But when I created that email architecture and email gateway for Gartner, that's when I got into cybersecurity, with the firewalls and the network work you typically associate with cybersecurity. From then on, it was really deep technical dives into the security kernel, so to speak, of operating systems, VPNs, and the like, then moving into architecture and management, security architecture, and how to make security achievable for business processes. So I would say I've looked at cybersecurity from every perspective. If you take a very globally accepted standard for security management, ISO 27001, it has 14 areas, including network security, regulations, privacy, HR security, organizational security, management security, vendor management and supply chain security, and security operations. I would say I've covered pretty much all of them, which is a bit unusual, because usually in security you're either on the governance side, creating policy, or you're doing network security or communication security. I just happened to fall into all of them, and I loved it. So I stayed with cybersecurity for a long time.
That included going from network security to creating a cybersecurity risk management strategy for a financial institution and presenting it to their board so that they could understand how it aligned with their business protection and business strategy. So that's pretty much the scenic route.

Debbie Reynolds  07:38

Wow, I didn't know that. I know that you're a smart cookie and a wealth of information, and that's such a great story. Tell me a little bit about women in cyber. You've had such an illustrious career, and you touch on so many things; I'd like to see more women like you be able to step out, be seen, and be recognized for their accomplishments. Tell me a little bit about how it's been for you as a woman in cyber.

Pamela Gupta  08:25

I would say it's been pretty good. But you have to do a couple of things in order to chart those waters, which are very male-dominated and male-oriented. It depends on a couple of things. One, believe it or not, starts at home and how you're raised; my mother raised me the same as my brothers, and this was in India. In fact, I'll tell you a funny story. I didn't realize that I was supposed to feel unequal to men until I went for diversity training in Taiwan, at a company I was at, which is really bizarre. The instructor asked me, at what point did you feel that your voice was not being heard, or that it was not as important as the men's in the room? And I said, really? I was supposed to be feeling that way? I never thought that, and he came away feeling, whoa, something's wrong here. But to your point, whether you see the bias against gender consciously or not, you do notice it subconsciously. One thing that has served me well is knowing my material and really enjoying what I do, so much so that I want to take something and look at it from all angles, holistically, and learn everything there is to know about it. Really building that subject matter expertise has helped. You also have to develop a bit of a thick skin. You've got to be your own rah-rah, your own cheerleader sometimes, because your support system, if you're lucky enough to have one, may not exist in all the scenarios where bias can be an issue. That's something I've had to do as well. And I can see where you're heading; you want to talk about the lessons learned that can be valuable for people coming into this industry, women coming into the industry. I'll stop there.

Debbie Reynolds  11:11

No, no, I love that. I think you're right; you have to be your own cheerleader, and that's probably true for anyone, especially in this day and age, where we see so many shifts in technology and in talent. Being able to speak up for yourself, for the things you know, and for what you can provide to other people is very valuable, whether you're in a company or not. This is a little bit of a segue, but when you started talking about this, it reminded me of something. There have been a lot of debates about whether people should go back to the office or work from home. Some people advocate that you need to be back in the office because you get mentorship and things like that. And I sort of laugh at that. What mentorship? Any mentorship I've ever gotten, I had to seek out my own mentors; it didn't just naturally occur. Maybe a lot of other people, people who don't look like me, got mentorship; maybe that's something they miss about being in the office, but it's something I've just never had. Thankfully, I've been fortunate enough, and smart enough, to seek out smart people, whether they were in the organization I was in, outside the organization, or at different levels of the organization. I think it's really important to seek out those people. What are your thoughts?

Pamela Gupta  12:54

I think everybody can take a lesson from you, Debbie, in terms of how you have framed mentorship for yourself. By that I mean, I see you on LinkedIn, for example; you are not only promoting yourself, you're also promoting others, on a consistent basis. I think that is really mentorship at a societal and professional level, and that kind of willingness to promote others does not commonly exist; it's not something people intuitively gravitate towards. You are one of my role models when it comes to that. I've learned from you not just to push my own perspective in one direction, but also to pull in other people, and I've started doing that, thanks to you. You learn it on the fly. To your earlier point about being physically there for mentorship: not all of those elements may be important for mentorship. What is more important is the willingness to do it, the willingness to do that for others. And last but not least, who were my mentors? I scrambled, and I asked, and I begged for mentorship in corporate America, because that's where I was for 25 years, and it wasn't there. The corporate culture can amount to, we have no formal mentorship program unless it's individually motivated. That kind of support would be really instrumental for career growth, for stellar career growth, and it cannot happen without a commitment. So I'm not sure what these people are talking about.

Debbie Reynolds  15:26

I feel exactly the same way. I really think that people should seek out mentorship, and they should mentor other people; there are people coming up behind you who want to know what you're doing, and you should try to provide that. And mentorship is a two-way street. It's not just one person imparting information; it's a sharing of information. And it's a commitment, a long-term relationship that has a give and take to it.

Pamela Gupta  16:06

But I have relied on a lot of books, leadership books, and self-teaching, and I'm constantly out there trying to do things to promote myself. There hasn't been any formal mentorship. One thing that is really telling, though, is that you need that go-getter attitude. And for me, and I'm saying this on record, I just don't defer to authority. You had mentioned this lady; she may as well have been the queen of Russia, and I was saying, okay, well, you've got to take a look at this trusted AI model that I've developed, because it can really help you with what you are publishing here. I hadn't done a lot of research on her, but I knew her work was extremely impressive. My point here, and it is a very important point, is about that whole learning process. When I'm learning from people around me, whether it's my kids or somebody who knows a lot more than me or a lot less than me, I'm very happy to give them feedback. If I've been in a meeting where somebody was doing something really great or saying something very important, it doesn't matter if they are low down on the totem pole; I think it's important to give that feedback to them so that they know. As human beings, we have to be there for each other and promote each other, and not take it for granted that labels and designations are enough to put someone on a particular pedestal or in a particular place. We are all vulnerable, and we can all learn from each other at some point.

Debbie Reynolds  18:22

Absolutely. Absolutely. Thank you so much for that wise executive advice. I want to talk a little bit about cyber and privacy. I find that in the US, a lot of people confuse the two; they think one encompasses the other. They're two different domains, but what I like to say is that they have a symbiotic relationship. Sometimes people who don't understand that think everything is cyber, that cyber is everything. Well, no, these are two different domains. They play together, and if you do it right, they can fit together like pieces of a puzzle, but they are different. So tell me, from your experience, how do you delineate privacy from cyber, and how do they work together?

Pamela Gupta  19:25

Yes, you're absolutely right. For the longest time, security professionals have said, well, you need security in order to protect privacy, so cybersecurity is treated as if it rules everything. But if we break it down, these are two very different fields. Yes, in order to secure something, you have to have certain fundamentals in place, such as need-to-know, which is king in cybersecurity: access is granted on a need-to-know basis, information is restricted and protected, and so on. But when it comes to the type of data, not everything that needs to be protected is privacy data, and privacy is not just security; there is a big difference. And nowhere is the difference between cybersecurity and privacy more obvious than in artificial intelligence. When we are building these massively impactful systems, it is of course essential to have the right cybersecurity in place. But privacy is a separate consideration; the privacy risk may not even be a cybersecurity risk, a threat of being compromised. Let me give you an example, because it's a bit nebulous at this point. Let's say you're building an AI system for predicting who should get priority for a very limited resource, such as a small supply of medicine for a dangerous disease; this is a hypothetical example. You have five doses, and you have to give them to the patients who are most deserving. Now, if you have a system that determines who should get them, you could be protecting all the information: you could make sure nobody can subvert the information, nobody can change the way the model works; you can have all the right security mechanisms in place. However, if your model comes out and says that, based on gender or ethnicity, certain people should be the five, that has nothing to do with security, but it has everything to do with privacy. And how is that a privacy issue? Suppose that as you build the model, you don't have the right representation of the restricted variable, the one at risk of bias, let's say gender. You have men and you have women in there, but when you train your model, the population that gets the treatment should be selected based on the severity of the disease, not on who the patients are. The privacy elements here are things like gender, and the sensitive PII, the personally identifiable information: who that person is, what their gender is, what their ethnicity is, where they live. All those elements could be very well protected.
But they can still play a role in determining who the model picks, based on that private data, that PII. So here I'm not talking about the protection of privacy; I'm talking about how privacy data, in this case name, gender, and ethnicity, can shape what the model learns and how it makes its decisions, and whether those decisions are biased. The point I'm making is not about protection. Cybersecurity protection mechanisms are not the same as what you need for privacy, and that is more obvious in artificial intelligence systems than anywhere else. But to go back to your original point: are these two different? Yes. Picture two Venn diagram circles. This is cybersecurity, this is privacy, and there is an intersection, but they are not one big circle. These are deeper conversations, and it leads into something you can consider a plug, but I will do it anyway: what are the essential pillars of trustworthy AI? When I define that, from my practitioner's point of view, cybersecurity and privacy are two different pillars, but they are both important. Okay, so that's AI. Now let me get back to conventional data protection and data systems. When we're talking about cybersecurity and privacy, how do they play a role in overall business strategy? For cybersecurity, you want to make sure that, going back to those fundamentals, access is on a need-to-know basis, you have the right controls in place, the business need to know, the right infrastructure controls, the right data classification, and so on. But when it comes to privacy, not all the data classified as confidential is privacy data; a subset of it, the privacy data, is extremely important. For example, what information are you collecting? There are two parts to the answer. One is the obvious privacy data, which is regulated: personally identifiable information such as name, gender, location, and so on. That is the easy one for a business to be clear about: what are we collecting, do we need to collect all of it, and can it get us into hot water if it is compromised? The other side is that as we collect more and more data, privacy is becoming more and more important. I won't say it is overshadowing cybersecurity, but the overlap in that Venn diagram is getting smaller, because the kind of data we collect that is not regulated but can still be considered privacy data is increasing many fold.
For example, take your heart rate when you work out. Let's say you have an app that's collecting your heart rate, your fitness level, the intensity of the exercise you're doing. The kind of data these apps can collect is very telling of who you are; there can be rhythms and patterns in it. Some of it, such as voice, may be considered biometric data and may come under protected and regulated data, especially in the EU under the GDPR. But I'm also talking about data we gather as a business for different business purposes: data that should come under privacy protection, that we would identify as private data we don't have regulations for, and which can be really detrimental for the business if it is compromised, as well as for society at large. So, in short, the obvious difference for me between cybersecurity and privacy is this: even if you have the right protective measures in place, the right data classification that you would have in a good cybersecurity program, and all of those 14 areas of the ISO 27001 security management system I was talking about, that doesn't mean you are protecting your privacy data, or even that you know what your privacy data is. So there are big differences between the two, to my mind.
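
To make that hypothetical concrete, here is a minimal illustrative sketch in Python. Everything in it is invented for illustration (the data, the column meanings, the five-dose scenario); it simply shows how a model trained with a protected attribute can skew who it selects even when every record is access-controlled, which is the security-versus-privacy gap Pamela describes.

```python
# Minimal, hypothetical sketch: bias as a privacy problem, not a security problem.
# Assumes numpy and scikit-learn are installed; all data and columns are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

severity = rng.uniform(0, 1, n)   # the only factor that SHOULD drive priority
gender = rng.integers(0, 2, n)    # protected attribute: 0 or 1

# Flawed historical labels: past triage quietly favored gender == 1.
past_priority = ((severity + 0.3 * gender) > 0.8).astype(int)

# Model trained WITH the protected attribute baked in.
X = np.column_stack([severity, gender])
model = LogisticRegression().fit(X, past_priority)

# Score a new cohort and hand out the five available doses.
new_severity = rng.uniform(0, 1, 100)
new_gender = rng.integers(0, 2, 100)
scores = model.predict_proba(np.column_stack([new_severity, new_gender]))[:, 1]
top_five = np.argsort(scores)[-5:]

print("Genders of the five selected patients:", new_gender[top_five])
# Every record here could be encrypted and access-controlled (the security side),
# yet the selection still leans on gender (the privacy and fairness side).
```

Note that simply dropping the gender column would not fully fix this if other features act as proxies for it, which is one reason the privacy review is its own discipline rather than a byproduct of the security review.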

Debbie Reynolds  29:34

Thank you so much; I really appreciate you explaining that. That's a very important thing for people to know and understand. Well, what's happening in the world right now that concerns you, that you're looking at and thinking, oh, I don't like that?

Pamela Gupta  29:52

Okay, so I was watching an interview the other day with one of the people who is very fundamentally involved in creating large language models, and, not to call people out, it truly is a bit scary to hear somebody who is creating something extremely powerful say, I haven't considered all the realms; I don't know what the outcome will be. To me, as a risk management and cybersecurity professional, that is terrifying. What is going on here? That's one thing. The other thing is, who is doing anything about it? How are we just watching? Do we need to start a call to action, Debbie, and say, okay, people: we are unleashing something that has not been fully tested and may not be safe, to users with very different levels of skill, who may not have the ability to discern good output from bad, or spot the defects. There are so many things that can go wrong. I have written an op-ed on deepfakes, about how the biggest problem with deepfakes is not just that they can be misused, but that they can distort our sense of reality, our perception of reality, which to me outweighs any other harm anything or anyone can do. What if we start believing that something false is true, because we read it and we see it? That's what we are about as human beings: we want to trust people, we want to trust what we're reading. And if we can't trust the technology, or, at the highest level, the regulators and the builders, that's not a good state to be in at all. If these principles of cybersecurity and privacy no longer matter, our reality is literally dissolving, and nothing even makes sense anymore. I'm sorry, I don't want to sound all doom and gloom, but I'm just saying we have to corral it, and we can.

Debbie Reynolds  32:44

Correct. So here's my concern, and an analogy I use a lot; we've gone into hyperdrive on this. The analogy I use about the Internet and how people use data is this: people think that when they go on the Internet, it's like a library. You walk into the library, and you have all these different things you can look at. But what's really happening when you use some of these tools is that they're creating a library for you, a section of a library. It gives you the impression that you're seeing everything, but it's actually only showing you certain things. And when you bring AI into it, there's a level of manipulation that can be added on top of that: first, you have the impression that you're seeing everything when you're not, and second, you have the impression that the information you're given is true when it may not be. That is a dangerous thing, because people will use it for nefarious purposes to manipulate others, giving people false impressions, mostly trying to get them to take an action they may not have taken otherwise because they weren't fully informed. What are your thoughts?

Pamela Gupta  34:08

The fear I have is that we are rolling out systems that are not fully baked, for risk and for intended outcome, as I say. But the bigger problem is that there is so much we can do with this technology, and we might miss out on it just because we're taking these really tactical, ill-thought-out steps right now instead of thinking strategically and holistically about what this AI technology is built for. If you look at some of the use cases in the medical field, Debbie, what AI and large language models are achieving is dynamite; it's really fantastic. But if we don't factor in how to use this in a controlled, restricted, well-thought-out manner, then we're just going to muddy the waters for everyone. And then the regulators, who are quite known for knee-jerk reactions, will step in: something heinous happens on a large scale, so let's shut down this technology completely. And that would be a shame; we would all, as humanity, lose out on it. That's not where we want to go. We've seen it so many times in cybersecurity: the business did not want to invest in the right guardrails at the beginning, and then you try to do it as an afterthought, and it does not work. Not for where we are right now.

Debbie Reynolds  35:57

Yeah, I agree with that. The harm could be catastrophic. This is the reason why I've focused my attention on emerging data spaces: for the harm that could be created, I don't think there is or can be any adequate redress. For example, let's say an AI system misdiagnoses someone who has skin cancer because it can't read their skin color, because their skin is brown and the system works better on paler skin; someone may lose their life as a result. How can you redress that? How can you create a law about that? We have to think about the harm beforehand, as we're developing these systems, because it can't be something where you wait for something bad to happen and then, whoa, start trying to pass laws or write regulation. What are your thoughts?

Pamela Gupta  37:07

That, I absolutely feel, is the fundamental thing. As we are building these systems, how do we corral that risk, and who in the development community does it? When I map the traditional software development cycle and its stakeholders to the AI development cycle and its stakeholders, there are some differences and some commonalities. The developers of the earlier systems would be more like the data scientists of today's world. And the problem is that developers are tasked with finding an answer and making it happen; they're asked to provide functionality for a particular purpose. Typically the stopwatch is ticking, the PMs are breathing down their necks, and they have to deliver certain functionality and make it productive as early as possible. So they're not often given enough time to sit back and think about where the risks are coming from, and that doesn't come naturally: you think about how to make something work; you're not thinking about what can go wrong. In this case, the number of stakeholders has increased. Earlier, it was developers, the IT team, the business, and the security team. But when you're building out something like an AI system, many more teams are involved: developers, data scientists, privacy professionals, security professionals, governance (we need the right governance in place right at the beginning), business, and legal. It doesn't have to be a very heavy process. If you nail down, for a particular environment, what the risk tolerance is; what would really hurt the business if something didn't work or didn't have the right elements of security or privacy in place, even for the model itself, even for the process itself, say if the intellectual property were exposed; what is unique to the environment, what its bread and butter is, what the intended outcome is; then you identify who on those teams needs to be in the room. And believe me, I've done this, not necessarily in this exact context, but at a large scale, for any process to scale. If you're asking for major functionality to be added: what do we need? We need Debbie from development, we need Pam from security, and we need to know whether we can apply the security and privacy controls we have used in the past, or whether it introduces anything new. Yes or no, and on to the next question. Are there going to be any legal repercussions? Who do you ask? It doesn't have to weigh the whole cycle down; it's about having the right strategic stakeholders at the beginning of the process, especially when you're building massively impactful systems.
As for things like which algorithm to use, which model to use, and so on: leave that to the data scientists, and honestly, that doesn't even change that much from project to project. If you look at different companies, depending on their size, they're not going to be using just one emerging technology. They'll have third parties coming in; they're looking at their supply chain, at who's supplying what; there are external suppliers; there will be internal development; some things may be customized, or they may be relying on commercial tools. But it doesn't have to be a tedious, time-consuming process that holds things back. The more the right people are involved at the right time for each organization, the better it will be for actually getting to fruition, so that nothing comes back and forces you to redo work, because you really can't go back and redo things when they are as ambiguous and complex as AI.
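
As a sketch of the lightweight intake gate Pamela describes, here is a short Python illustration. Every field name, role, and question in it is hypothetical, invented to show the shape of the idea rather than any published framework: a handful of stakeholder and risk questions answered before development starts.

```python
# Hypothetical sketch of the lightweight stakeholder/risk intake gate described above.
# All field names, roles, and questions are illustrative, not a published framework.
from dataclasses import dataclass, field

@dataclass
class AIRiskIntake:
    project: str
    intended_outcome: str                              # what the business wants to go right
    stakeholders: dict = field(default_factory=dict)   # role -> named owner
    reuses_existing_controls: bool = True              # can past security/privacy controls apply?
    introduces_new_risk: bool = False                  # anything existing controls don't cover?
    legal_review_needed: bool = False

    def ready_to_proceed(self) -> bool:
        required_roles = {"development", "security", "privacy", "business"}
        has_owners = required_roles.issubset(self.stakeholders)
        # New risk or open legal questions pull legal to the table before work
        # starts, rather than after the model ships.
        needs_legal = self.introduces_new_risk or self.legal_review_needed
        return has_owners and (not needs_legal or "legal" in self.stakeholders)

intake = AIRiskIntake(
    project="triage-model",
    intended_outcome="prioritize scarce treatment by disease severity only",
    stakeholders={"development": "Debbie", "security": "Pam",
                  "privacy": "PrivacyLead", "business": "BizOwner"},
    introduces_new_risk=True,   # a new model type: legal belongs at the table too
)
print(intake.ready_to_proceed())  # False until a legal owner is added
```

The point of keeping it this small is exactly what the conversation argues: a few yes/no gates with named owners, answered once at the start, rather than a heavy framework that slows the whole cycle down.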

Debbie Reynolds  42:15

Now, your company is called Trusted AI. As we see generative AI skyrocket around the world, with things like ChatGPT and Stable Diffusion and image generation, I've heard a lot of privacy folks, and we've seen companies, try to stop people from using these tools. And I feel like you can't stop it; it's unstoppable. But you need to find a way to use these tools, put them in their proper place, and not abdicate your human judgment and responsibility. So, first of all, we can't stop AI. But people need to learn how to really use it. Tell me your thoughts on how to build those guardrails with trusted AI.

Pamela Gupta  43:18

Yeah, I would say that business comes first: the business context, the organizational outcome, what they are looking for. Take that holistic look: where do they want to go? Step back and see what can go right, what we want to go right, and what risk we absolutely cannot accept. For example, this is a democracy; we don't want a surveillance state. Or, we're in the medical field, and we want to make sure we are delivering the right value to everyone; is there going to be a bias issue? There are so many things we didn't have to think about in conventional system development. So, as I was saying earlier, what is extremely important is having the right elements, a holistic examination of risk, right at the beginning. There are frameworks coming out, but I feel they are still very bloated and are not going to help somebody at the drawing table. If your stakeholder community is legal, privacy, security, management, data scientists, and developers, and they look at some of the frameworks out there, those frameworks are just not going to provide the guidance they need in an efficient fashion. There's the EU AI work, there's the OECD, a lot of guidance, but at a very high level, not enough to be really functional. So, in answer to your question, I am developing those essential pillars of trusted AI I was talking about. My whole concept is that we don't need all of them for everyone; it is highly context-dependent. Let's say it's a medical application: we want to make sure we have the right tools for removing bias, and what kind of bias you want to remove will be very context-driven and business-driven as well. So what are those pillars? The model I created is called AI TIPS: Artificial Intelligence Transparency, Integrity, Privacy, and Security, and it stands on eight pillars. How do we achieve each one? What is transparency here; what do we need to know? That will vary for each company, and for each model you're developing. And it doesn't have to weigh heavily on the whole development process; it actually helps it proceed. You may find you don't need, say, the communication element for a particular project. If what you are doing is extremely reliant on data, for creating a genome, let's say, then your focus should really be on confidentiality, or integrity, and maybe you don't need the privacy pillar because there is no privacy element there. But having the ability, right at the beginning of the project, to know what the elements of trust and the elements of risk are, so that you can put the right elements of trust in place, will help you counter any landmines that come up later.
Because, hey, regulation: you can't rely on that to be your guide for the guardrails. Some of it, yes, but not completely. As a business or organization, even a public one, you need to know what to protect so that you don't have to come back and redo things. And I actually want to tell you one thing about LLMs: large language models hold so much promise, Debbie, if we take the right approach. All of this generative AI, these large language models, came out of a lot of research, much of it from Google. The LLM technology is here, and ChatGPT, of course, is trained on a lot more data, but look at the risk. I'm going to be releasing my newsletter soon, which talks about which LLMs have which elements of trust, or lack thereof. The problem with ChatGPT, just to take ChatGPT, is that part of it is trained on data coming from unreliable sources. It's trained on Wikipedia, for example; you and I can go and change things on Wikipedia, and there's no control there over the general public. Do you want to take ground truth from something that can be changed, with no real process for managing who has contributed to it? So what would you rely on it for? That, in my mind, is the biggest problem: its data lineage. There is none, and even where there is, it's limited. It could contain some extremely good data mixed with really questionable data, and that's the biggest problem. Your question earlier was about ChatGPT and these things getting deployed, and that's what the interview I mentioned, the one scaring the heck out of me, comes back to: why is this man saying, I don't know what the outcome will be, and yet releasing it to the public when it's not ready and hasn't been tested? You even heard about the pilot Microsoft did, where they released it saying, if you want to try it out, sign up, and a million people signed up. A guy on Reddit got into a big argument with the chatbot, and it basically told him he was rude and had to stop using it. It was obnoxious, but more to the point, it was not achieving any purpose. What are we trying to do? Do we have time to waste? No. We want to use a system for intended outcomes; we want to solve a particular problem. How do we do that? Can we rely on tools that are not tested? It's fantastic in some ways; it has a lot of good and really fun parts. You throw in some data, it gives you a synopsis, and you're like, wow, that was super fast. But it is not enterprise-ready, in my mind, and definitely not for general purposes.
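
As an illustration of that context-dependent pillar selection, here is a small Python sketch. Only the four pillars named in the AI TIPS acronym are used; the full model has eight, which are not enumerated in this conversation, and the context profiles below are hypothetical examples rather than anything from the AI TIPS model itself.

```python
# Hypothetical sketch of context-dependent trust-pillar selection, in the spirit
# of the AI TIPS idea described above. Pillar names beyond the acronym's four,
# and all context profiles, are invented for illustration.
NAMED_PILLARS = {"transparency", "integrity", "privacy", "security"}

# Illustrative mapping: which named pillars a given project context emphasizes.
CONTEXT_PROFILES = {
    "medical-triage": {"transparency", "integrity", "privacy", "security"},
    # A genome pipeline with no personal data: the privacy pillar may not apply,
    # and the focus shifts to integrity and confidentiality of the data itself.
    "genome-data-pipeline": {"integrity", "security"},
    "public-chatbot": {"transparency", "integrity", "security"},
}

def pillars_for(context: str) -> set[str]:
    """Return the trust pillars to design for before development starts."""
    try:
        return CONTEXT_PROFILES[context]
    except KeyError:
        # Unknown context: start from every named pillar, then prune deliberately
        # rather than discovering a missing one after the system ships.
        return set(NAMED_PILLARS)

print(sorted(pillars_for("genome-data-pipeline")))  # ['integrity', 'security']
```

The design choice mirrors the conversation: the selection happens once, up front, per project context, so the trust work narrows the development effort instead of weighing the whole cycle down.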

Debbie Reynolds  50:52

Yeah. So thank you for that. If it were the world according to you, Pamela, and we did everything you said, what would be your wish for privacy, cybersecurity, or AI? Anything, whether it be technology, human behavior, or regulation. What are your thoughts?

Pamela Gupta  51:13

I think it would be very simple. For me, it would be: don't think only of who or what you're building for in terms of your own company. I'm taking a business approach to it, because that's where I come from. Don't just think, this is a product I'm building, this is a service I'm providing, and this is how I'm going to benefit from it. Think of it in a larger sense: what can go wrong, not just for me, but for the other people who can also be affected by it. There's a classic cybersecurity version of this: you don't protect your website and put the right security controls in place only to protect your own data; an unprotected site can also be used to attack other websites and other systems out there. So it's about taking that holistic approach: if I'm doing something, it's not just going to impact me, so let me be a good citizen, a good global citizen. We're all in it together; when we make mistakes, we have to be forgiving, and all that good stuff. But I'm just saying that if we stopped being so self-centered, I think we could do a lot better for ourselves. That's ironic, but that's just my two cents.

Debbie Reynolds  52:47

Very good. Very good. Well, thank you so much, dear Pamela; I'm so happy we were able to do this, finally. I know everyone will really love your insights. You and I have talked for hours about this stuff.

Pamela Gupta  53:02

It's always so relaxing talking to you. You just seem to draw out so much of what's in one's experience or psyche, things you didn't even think about. So thank you.

Debbie Reynolds  53:15

Yeah, this discussion was an amazing tour de force. Very good. So we'll talk soon, I'm sure, and thank you so much for being on the show. Thank you.
