Debbie Reynolds Consulting LLC


E31 - Dimitri Sirota, CEO of BigID Actionable Data Intelligence Platform

Find your Podcast Player of Choice to listen to “The Data Diva” Talks Privacy Podcast Episode Here


The Data Diva E31 - Dimitri Sirota, CEO of BigID (38 minutes) Debbie Reynolds


SUMMARY KEYWORDS

data, privacy, metadata, people, problem, create, customer, visibility, correlation, developed, classification, product, databases, provide, build, duplicate, companies, security, apps, terms

SPEAKERS

Dimitri Sirota, Debbie Reynolds

 

Debbie Reynolds  00:00

Personal views and opinions expressed by our podcast guests are their own and are not legal advice or official statements by their organizations. Hello, my name is Debbie Reynolds, and this is "The Data Diva" Talks Privacy podcast, where we discuss Data Privacy with industry leaders around the world with information that businesses need to know right now. Today I have a very special guest on the show, Dimitri Sirota, who is the CEO of BigID. BigID is an actionable data intelligence platform; we like to say data management, reimagined. BigID has some huge accolades, lots of awards, and a couple of mentions in the press: the Forbes Cloud 100 Award, the World Economic Forum Technology Pioneer Award, the Gartner Cool Vendor award, Business Insider's hot cybersecurity startups, and many, many more. Very happy to have Dimitri here on the show. Hello, hello. First of all, I can tell you that I was very excited to see the product come on the market many years ago, and people have asked me about it. I get calls from, you know, private equity, VC, all types of people; tons of people ask me about the product and have raved about it for many years. I've been really happy to see how well the product has done in the marketplace and all the accolades that you get. I think it's wonderful.

 

Dimitri Sirota  01:41

Well, thank you very much. Yeah, we take great pride in building something we think is highly differentiated. And I hope you say nice things about us, and that you will keep doing so going forward.

 

Debbie Reynolds  01:53

Absolutely. I've actually had the pleasure to collaborate with you all, so I'm really happy to be able to do that as well. Tell me about the problem that BigID is solving. When you came on the marketplace, what was the gap that you felt you were filling?

 

Dimitri Sirota  02:12

Sure. So I think, you know, it's evolved a little bit since. We've been in the market now for about three years selling product. And when we first started, which is now five years ago, we thought that there was a gap in the way privacy was conducted. Privacy historically was viewed very much as a problem of process, as opposed to a problem of product. It was predicated on this notion of recollecting where data resided, as opposed to actually knowing where data resided. So data recollection versus data records, I sometimes like to say. And we felt that with GDPR arising, starting in 2016, and later, you know, California and Virginia and other regulations internationally, privacy at its heart is really about providing data transparency and data integrity, and there was no way of doing that so long as you basically kept it separate from the data. So long as privacy was really something done by the legal team or the compliance team as a GRC thing, through surveys versus scans, it would never deliver on its vision of providing true transparency and trust and choice to customers. And so we felt there needed to be something to bridge that gap. And so we developed BigID, which had the big idea of doing a data-up privacy solution, starting with the data that you have, to help organizations understand where they have data on individuals, so they could solve some of the privacy requirements around data rights, records of processing activity, and privacy impact assessments. But rather than beginning with surveys and questionnaires that rely on people's recollections, which are fallible, right, I personally rarely remember where I put my sunglasses or car keys. Well, that means that most people probably don't have a perfect recollection of where they put their data. And if part of the aim of GDPR is to, again, provide trust, transparency, and choice, you needed something better. And so that was the genesis. And that's kind of where we got our start.
And we solved this very hard problem, saying, look, a company has a very large data estate. And for the better part of the 2000s and 2010s, all they cared about was grabbing as much data as they could. And now, for the first time, they needed to be able to separate that data. They needed to be able to figure out: where do I have Dimitri's data and only Dimitri's data, whether it's my credit card, Social Security number, or even my IP address or cookie or session key or clickstream? I need to be able to figure out, across my data estate, where that data is, and nobody had ever done that. And it kind of boggles the mind a little bit. I'll use one analogy before I pause for another question. If you think about the banking world, right, when you go deposit a check, and I realize most people do it electronically or automatically today, but there was a time, for the older people, the jet-setters or Gen Xers in your audience, when you would go to a bank to deposit a check, and there's an expectation that the bank would keep a record of it. They would know how much you've deposited. Because if you turn around and say, I want to withdraw a portion of it, or maybe the whole thing, they have an accounting of your deposit. Well, today, data is the currency for the digital enterprise, for commerce and communication and collaboration, right? We saw that under COVID; we all just interacted across data. And yet, unlike currency in the bank, there is no accounting for what you deposited, for what is yours. And without that kind of accounting, how do you provide accountability for the data? And that's how we got to BigID.

 

Debbie Reynolds  06:02

Yeah. Also, you know, over the years, I've had the opportunity to interview legal teams and IT teams at major corporations. A lot of times, when I'm working with them, we need to know exactly where data is, where certain things are. And I've always been surprised by how differently these two groups perceive where things are and how they're not on the same page. So being able to have some empirical evidence, which is the actual data, the actual scans themselves, I think, helps bridge that gap, don't you think?

 

Dimitri Sirota  06:43

Yeah, look, the example I give is that we know people don't accurately recall where they put their car keys, right? So you have all these kinds of helper devices to track your car keys. And yet we're completely fine with relying on recollections for people recalling what data they put where. And think about the number of data stores that companies have. Not one or two, but hundreds, with petabytes of data volume. And yet the privacy world is in this other era where you can just ask questionnaires and so forth. And, you know, look, it's obviously a stopgap. But ultimately, if you truly want to provide accountability, you need to be able to capture the data. And thankfully, data science and machine learning have advanced to the stage where it is possible to separate my data from yours. You know, one of the analogies people use incorrectly, in my opinion, is that data is the new oil. And they use that analogy because data fuels the modern digital enterprise, right? It's the thing that feeds it. But here's the difference between oil and data. Two buckets of Western crude oil are the same, right? They look the same; they smell the same; they have the same viscosity. Whereas your data and my data are different. And at the end of the day, to provide that transparency and trust, you need to be able to provide me my data, not your data, right? So even though data is commingled in your Hadoop and Cloudera and S3 and Redshift and EMR, your data and my data are actually different things. And so there need to be novel approaches, which is what BigID has developed, to be able to look across the data and tell people concretely: this is what you have, and this is whose you have. So it's solving that kind of dual problem of what and whose.

 

Debbie Reynolds  08:35

Yeah, I don't like the "data is the new oil" analogy. I like to say insight is the new oil, because if you can't get insight, the data doesn't matter, right? So let's talk a little bit about data silos. This is something that I like to talk about, and this is something that interested me about your product. I felt like a lot of products on the market say, you know, you have data in all these different data stores, so let our product come in and take your data and put it in a new bucket. Whereas you're saying, leave it where it is, and then we'll attach to it, and then we'll be able to get data from it. Can you talk to me a little bit about that?

 

Dimitri Sirota  09:21

Yeah, so we decided not to take the "Dear Liza" approach and basically put something in a new bucket that may have a hole as well. At the end of the day, when people are dealing with sensitive data, people data, other types of crown jewels, and things of that sort, the last thing you want to do is move or relocate or dislocate the data. Now, there are occasions where you want to do that. But the role of somebody like BigID, who provides you visibility into the data, is not to copy and move the data. And part of the reason you don't want to do that is because the more copies you make of data, not only do you pay more, whether it's for infrastructure in your data center or by the gigabyte in the cloud, you also create additional security vectors, right? There are more opportunities for bad actors to take the data, manipulate the data, abuse the data. And so the last thing you want to do is move it. Our approach is the antithesis of that: we want to create centralized visibility, or views into the data, without centralizing the data; keep the data decentralized. So if you put the data in Snowflake, if you put it in Databricks, if you put it in Office 365, we leave the data there. But we essentially give you a map to understand it, to navigate it. So we're kind of the opposite of creating yet another data store that you've got to pay for by the gigabyte or terabyte. We basically look across Snowflake, we look across Databricks, we look across BigQuery, across Redshift, we look across mainframe, and we tell you what you have everywhere. We give you the power to navigate it, just like you would on the road. But we don't copy it. An analogy I would give is the early days of Google: their aim was not to copy the internet for you to navigate it. It was to index the internet so you could find what you needed to find. But it wasn't like you were looking in some facsimile of the internet.
And so, in a similar way, we're not trying to create a facsimile of your data that creates all these kinds of security issues. We're trying to give you a way of navigating, a way of understanding, a way of contextualizing what data you have, wherever it is. And that's a different approach that we developed and patented, and all those other kinds of things.
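The "index, don't copy" idea Sirota describes can be sketched in a few lines. This is a deliberately toy illustration, not BigID's actual implementation: the source names, the email detector, and the `IndexEntry` shape are all invented here. The point is that the scanner records only pointers and labels, while the values stay in their source systems.

```python
from dataclasses import dataclass

@dataclass
class IndexEntry:
    source: str    # e.g. "snowflake", "office365" (hypothetical source names)
    location: str  # table/column or file path where the finding lives
    label: str     # what kind of data was found there

index: list[IndexEntry] = []

def scan(source: str, records: dict[str, str]) -> None:
    """Record WHERE sensitive data lives; the values themselves stay in the source."""
    for location, value in records.items():
        if "@" in value:  # toy detector standing in for real classification
            index.append(IndexEntry(source, location, "email"))

scan("snowflake", {"customers.contact": "jane@example.com"})
scan("office365", {"/hr/onboarding.docx": "no sensitive content here"})

# The index holds pointers and labels, never the email value itself.
print([(e.source, e.location, e.label) for e in index])
```

A real discovery platform would replace the `"@" in value` check with classifiers and would persist the index, but the architectural choice is the same: a navigable map over decentralized data rather than another copy of it.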

 

Debbie Reynolds  11:48

Yeah, context is really important, because I feel like humans really need to bring that context. So you can bring visibility, but you really need someone to be able to tell you why certain things are important. I really like, I don't know what people call it; I call it metadata data myself. So it's the other data that you can glean from all the data that you're looking at from these maps, and then you can do tagging of different things. Things like, you know, being able to find documents that have Social Security numbers. In manual days, you'd just assume, okay, well, HR has records, they have Social Security numbers, but if those numbers or that data were in other places, they would be harder to find without technology. What are your thoughts?

 

Dimitri Sirota  12:42

Yeah, so, you know, I'll give you a little bit of a singsong in terms of the way we view it. So discovery is really about content and context, right? It's about understanding what you have and a little bit about why you have it, or whose it is, you know, where it came from. So context matters. And typically, in the data world, you think in terms of data discovery and metadata, right? The metadata is kind of like the surface view that provides context. Now, here's a little secret, though: most metadata management tools don't provide you context. All they do is lazily look at the column information and say, ah, I found it, let us give you context. In fact, that gives you a misreading, because the metadata was put in there by some developer, and all you're doing is surfacing it. You're just basically lazily grabbing it and putting it into a catalog, but it doesn't really tell you about what's underneath. In fact, it could be completely wrong, which is why they developed this notion of data stewards, who are somehow supposed to be experts in technical metadata, so they can look at the catalog and, just by staring at it, figure out whether the metadata is correct, or what it is, and basically provide that kind of human interface. So we think that while that may have been okay in 2005, 2006, we've come a long way; we're in the 2020s now. There are advancements in machine learning. There are investments in scale and data science. And so our whole belief is that, to build that full contextual map, you need to be able to find the data, be able to find the metadata, and leverage machine learning, by looking at your entire data estate, to help you interpret that.
So do the curation, do the understanding, do the contextualization through machine learning. And then, where you have a steward, the steward's primary role is really to validate, not to be some oracle, like in ancient Greece, who has to be able to look at an archaic catalog and make sense of it. That's not their job. They're humans; they're fallible. Their job is to review options and use their best judgment to validate them. And so, again, when we do discovery, and you talk about insight, we don't just tell you, here, we found this data element in this place, whether it's a Social Security number in a document or a name in a database. When we find it, we give you context around it: here's whose it is, here's why you have it, here's where it came from. We tell you all of that. And then, equally important, we allow you to build additional metadata on top of that. But again, we use machine learning to recommend, you know, the labels, the tagging, everything else you would want. And then a human can basically just validate it, as opposed to having to stare at a catalog and, you know, make something of it, which is kind of the way it's been done until today.
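As a rough sketch of that "machine proposes, human validates" pattern, the toy example below uses simple pattern matching to propose classification labels and leaves a `validated` flag for a steward to flip. The patterns and function names are assumptions for illustration only; a real engine like the one described would layer NLP and deep learning on top of this.

```python
import re

# Illustrative detectors, not a production classification engine.
PATTERNS = {
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def propose_labels(text: str) -> list[dict]:
    """Scan text and return candidate labels, unvalidated, for human review."""
    proposals = []
    for label, pattern in PATTERNS.items():
        matches = pattern.findall(text)
        if matches:
            # The machine proposes; a data steward later sets validated=True.
            proposals.append({"label": label, "count": len(matches), "validated": False})
    return proposals

doc = "Contact jane@example.com, SSN 123-45-6789 on file."
for p in propose_labels(doc):
    print(p["label"], p["count"])
```

The steward's job reduces to confirming or rejecting each proposal, rather than interpreting raw column metadata from scratch.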

 

Debbie Reynolds  15:37

Yeah. Well, one of the features of your product that I have really loved over the years is your correlation, the ability to correlate data. Can you explain to people what that is and what it means for your product?

 

Dimitri Sirota  15:54

Yeah. So, you know, one of the things we realized is that the need for data knowledge straddles three organizations, right? The way you can almost think of it is in terms of visibility, and there are three organizations that care about visibility. Privacy cares about visibility, especially when it comes to personal data. Security cares about data visibility because they're looking for your crown jewels and high-risk data, so they can remediate, encrypt, tokenize, whatever that is. And data governance cares about visibility around the data because they want to be able to get more value from the data, right? They want to bring some order to the disorder. Now, as it happens, each of these three buckets takes a very, very different approach to understanding data. And part of that is historical. So the data governance world grew up in a village that only talks metadata. That's the language they speak. So everything is interpreted in terms of metadata, and pretty much everything in data governance is about how do I make sense of that metadata, right? Stewardship is really about making sense of that metadata; lineage and quality are all predicated on this metadata. In the village on the other side of the mountain, there are the security folks. And they use a slightly different set of words and meanings. To do data discovery, they talk in terms of finding sensitive data; they talk about data classification versus metadata. So it's a different kind of animal and a different kind of approach. And then over here in privacy land, they're trying to do something a little bit different as well. They primarily care about personal data. And they not only care about what personal data they have, Social Security numbers, you know, health records, IP addresses, cookies, session keys; they also care about whose it is.
So they talk in terms of inventory, and they talk in terms of correlation, because they want to be able to correlate that data back to an identity. And what we realized is that this kind of one-and-done approach, where I just say I'm going to do classification, or I'm going to do cataloging, or I'm going to do this kind of inventory, doesn't work for all three organizations. So we basically built a platform that does all three. We have a full next-generation metadata catalog that allows you to collect and capture and interpret technical and operational metadata. We have a data classification engine that leverages pattern matching and machine learning, like natural language processing and deep learning, to make sense of where you have crown jewels. And then we have this third thing that we invented called correlation, which is a graph-based technology, so another type of machine learning, that essentially automatically builds a graph to show you related and connected data. And so this way, if I want to find what data I have on Dimitri, we're able to report back: here's all the data that you have on Dimitri, even when the data is not explicitly mine. Meaning, I'm sitting in a house in Westchester, my wife is sitting in the same house, my kids are in the same house, because of COVID. We all share a GPS coordinate. But if I'm communicating with you over this Zoom session, that GPS instance is mine. It's not my wife's. And so we built this kind of correlation technology to show you that inventory and how that inventory is connected to an identity, to solve the privacy data rights problem. So we have correlation, we have classification, we have cataloging. And we developed each of these, and they work together, so that now, for the first time, companies don't have to have one solution for the data team, another solution for security, another solution for privacy.
They have a single authoritative view of their data that straddles these three worlds, because each of these worlds wants to be able to deal with it: compliance so they don't get fined, security so they don't get fired, data so they can make money. And correlation is kind of the view we developed for privacy. It has other use cases, like master data management, reference data, etc. But it's a completely new, patented technology that doesn't exist anywhere else. That's the big idea behind it: that, again, we could find your data, we could find my data, we could find my wife's data as well. Only those three people, though; we only find data on three people, agreed. Oh, yeah.
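The correlation idea, connecting found data values back to the identities they could belong to, can be sketched with a tiny bidirectional graph. The class and method names below are invented for illustration and are not BigID's technology; the example just shows how a shared attribute (a household GPS coordinate) correlates to multiple identities while an SSN resolves to one.

```python
from collections import defaultdict

class CorrelationGraph:
    """Toy model: edges say 'this value was observed with this identity'."""

    def __init__(self):
        self.value_to_identities = defaultdict(set)
        self.identity_to_values = defaultdict(set)

    def observe(self, identity: str, value: str) -> None:
        self.value_to_identities[value].add(identity)
        self.identity_to_values[identity].add(value)

    def inventory(self, identity: str) -> set[str]:
        """All data values correlated to one identity."""
        return set(self.identity_to_values[identity])

    def owners(self, value: str) -> set[str]:
        """Identities a value could belong to (shared values stay ambiguous)."""
        return set(self.value_to_identities[value])

g = CorrelationGraph()
g.observe("dimitri", "ssn:123-45-6789")
g.observe("dimitri", "gps:41.03,-73.76")
g.observe("spouse", "gps:41.03,-73.76")   # same house, shared coordinate

print(sorted(g.owners("gps:41.03,-73.76")))  # ['dimitri', 'spouse']
print(sorted(g.owners("ssn:123-45-6789")))   # ['dimitri']
```

A production system would need context (such as which session produced the GPS reading) to disambiguate shared values; the graph structure is what makes answering "whose data is this?" possible at all.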

 

Debbie Reynolds  20:22

So as you've done this journey and worked with different businesses on their data, what has surprised you? Maybe the thing that you didn't anticipate that you'd find working with companies and their data?

 

Dimitri Sirota  20:38

What has surprised us? So, look, I think some things did, right. We started off from the get-go and decided to build a solution geared to the enterprise, the enterprise that has more complex needs, right? Looking at data is hard. And part of it is because you need to look inside of things, and you need to make sense of what you see when you look inside of those things. And you don't always have the benefit of starting out with any kind of presupposition. You don't have the schema. You don't have the data topology. You don't have anything. And so I think what continuously surprises us is that every enterprise is like a snowflake, right? They have all kinds of data stores: NetApp and EMC and mainframe and Oracle and SQL and Tableau, and, you know, AWS and Azure and GCP and Salesforce and Workday. And connecting to all of that means you need to be able to understand how to authenticate to those systems. You need to be able to pull data from those systems efficiently, and make sense of it, and not keep it. You need to be able to integrate with things like password vaults. So, well, I don't know if it was surprising; I think to some degree it was reaffirming. I think there are companies that make statements about, oh yeah, we look across data. But the reality is, data is messy, right? It's stored in a lot of places and in a lot of different formats. You need a lot of different kinds of authentication to get to it. You need to be able to manage who can see it and who can do what with it. And I think to some degree, what we've learned, and been reaffirmed in by the large customers we have, is that it's messy. And the faster you get through that messiness, the faster you're able to abstract it away, the better off you are. And frankly, you know, the benefit we've had is that we raised a lot of money to invest in making the hard things easier.

 

Debbie Reynolds  22:33

Excellent, excellent. Let's talk a bit about duplication, duplication of data within organizations. You know, I think there's almost an epidemic of duplication of data in organizations. Because, again, we're in this bucket thing where everyone has all these different applications, and people have different uses for data; the way people were using data, in some ways it was easier for them to duplicate it instead of sharing it. I know that your product is very powerful in terms of helping people surface duplicates. Can you talk to me a little bit about the problem of duplication? And tell me a bit about it in terms of cybersecurity, where we know it increases people's risk.

 

Dimitri Sirota  23:24

Yeah, there are actually three applications: one is privacy, one is security, and the third is in the data governance world. So, as you highlighted, duplication happens naturally, right? Somebody takes a PDF or PowerPoint, and they make a copy of it; they make a slight modification. Or developers take a data set; they take a snapshot of it and create a test data set that they can use for developing an application. That, again, is a duplicate. So duplication happens; a lot of people do an archive, and they basically create a duplicate data set. Here's the problem with it. Problem one is privacy, right? Technically, every time you make a copy of the data, it still belongs to that individual. You still need to be able to account for it. Problem two is security. Every time you make a copy of it, you expand your attack surface, right? That data has sensitive information, and now there are more places where a bad actor could go and find it and potentially take it and leak it, whatever that is. And it's a data governance problem, right? Data has drift. You make a copy, and which is the original data set, which is the progenitor of the other ones? And so there's a whole kind of challenge in that. Now, on top of the data duplication problem, there's a secondary issue, which is data drift, as I mentioned, which is really data similarity, right? You may be working on a spreadsheet, you may change some cells, and then it's no longer a duplicate. But it certainly is similar. And you still want to be able to account for it. You still want to be able to understand which begot what, which was the progenitor of the other one. And so I would expand the problem of data duplication to one of duplication and similarity. And the reason there's urgency is that not only does it create privacy issues and security issues, but it also creates governance issues in terms of authoritative data sources.
It also has two other impacts that are critical and worth highlighting. One is cost, right? When you capitalized your EMC server or Oracle server and you had a data center in New Jersey, maybe cost wasn't a big deal, right? You could write off the investment you made in a new server over the course of a year. But now, with the shift to the cloud, you're paying for every gigabyte you use, right? You're a renter now, and so you want to keep that to a minimum. And I think the other issue it creates is around security and risk, and you want to be able to reduce risk. So what we've developed is a type of data classification, but instead of looking for exact matches, it looks for close-enough matches. It's something we call cluster analysis. We developed it for both unstructured data, i.e., documents and emails, and structured data, like databases, data warehouses, Hive, etc., in Hadoop. And to the best of my knowledge, no other company has this fuzzy classification. And what fuzzy classification does is let you more easily find duplication and similarity. So if I want to know how two databases compare, we could tell you, and we could tell you the difference. And so think about all the efficiencies that gets you in terms of consolidation and risk reduction. Maybe it's not two databases; maybe you have a giant Office 365 file folder with all kinds of stuff in it, and you're paying a lot of money for every file you've copied and duplicated. You may just want to label them similarly; you may want to do some housecleaning. We could do that. So again, we call it cluster analysis. It's a type of fuzzy classification. It leverages unsupervised machine learning, for those of you in the audience who are familiar with that. To the best of my knowledge, nobody else offers anything like this.
And we developed it because, again, it helps organizations save money, because it allows you to consolidate infrastructure and reduce your data footprint. It helps you reduce risk, because we tell you, you know, where you have high-risk data, and you know which was the original document, so that, again, you can manage that. And then it also helps you from a data governance standpoint, because now, for the first time, you have a view, a situational awareness, of your entire data estate. If I want to find a similar data set for BI or AI, we can tell you where else you have a similar data set, and we can tell you the quality of that data set. So it has a lot of applications, including cost reduction, risk reduction, and improved AI and BI. And again, it's something we developed over the last year and a bit, in response to customer requests.
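To give a feel for what fuzzy, similarity-based grouping means (as opposed to exact-match duplicate detection), here is a small self-contained sketch using Jaccard similarity over word trigrams. The function names, threshold, and greedy grouping loop are assumptions for illustration, not BigID's cluster analysis; production systems comparing millions of documents would use scalable techniques such as MinHash.

```python
def shingles(text: str, k: int = 3) -> set[tuple[str, ...]]:
    """Break text into overlapping k-word 'shingles' for comparison."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(max(1, len(words) - k + 1))}

def jaccard(a: set, b: set) -> float:
    """Fraction of shingles two documents share: 1.0 = identical sets."""
    return len(a & b) / len(a | b) if a | b else 1.0

def cluster_near_duplicates(docs: dict[str, str], threshold: float = 0.3):
    """Greedily group documents whose shingle similarity exceeds threshold."""
    sigs = {name: shingles(text) for name, text in docs.items()}
    clusters = []
    for name in docs:
        for cluster in clusters:
            if any(jaccard(sigs[name], sigs[other]) >= threshold for other in cluster):
                cluster.append(name)
                break
        else:
            clusters.append([name])
    return clusters

docs = {
    "q1_report.docx":    "quarterly revenue grew five percent across all regions this year",
    "q1_report_v2.docx": "quarterly revenue grew six percent across all regions this year",
    "policy.pdf":        "employees must complete annual security training",
}
print(cluster_near_duplicates(docs))
# [['q1_report.docx', 'q1_report_v2.docx'], ['policy.pdf']]
```

The two report versions differ by one word, so they are not exact duplicates, yet they land in the same cluster; that "close enough" grouping is the essence of fuzzy classification for data drift and similarity.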

 

Debbie Reynolds  28:21

Excellent, excellent. I would love to talk about legacy data and, sort of, over-retention of data. Legacy data is a huge problem, in my opinion. A lot of times, people have data from old systems that they may, you know, put on an old server in a backroom or something like that. And a lot of times, because that data is so old, it may not have huge business value. But in the inverse, it sometimes carries astronomical risk, especially if that data is breached. Can you tell me a little bit about how you all approach legacy data and help people work on things like data retention?

 

Dimitri Sirota  29:11

Sure, yeah. So one thing I haven't made clear about our product architecture is that we did something novel around data management; we kind of reimagined it. And we took a playbook from folks like AWS and Salesforce and, frankly, Apple. And the notion today is that you don't have to do everything yourself. You can build these kinds of modular functions that take advantage of your core business, which in our case is data discovery, right? Leveraging cluster analysis, correlation, classification, and cataloging. That's our core business: that visibility into the data that gives you that kind of discovery and insight, or intelligence. But then there are other things that people want to do, like retention management, like remediation, like quality, etc. And so what we did is create this upstream layer where you can build apps or modules that essentially sit on top of that core discovery; they can take the metadata, the data, the correlation or the graph, and basically tell you something about it, help you do hygiene or management of your information. And one of those apps that we developed is data retention. The data retention app allows you to either import or define rules around data you may want to retain, whether that's email data, file data, or database data. And it essentially creates a set of workflows and operationalizes them, making it easier for you to manage, for both structured and unstructured data, what you want to keep and what you want to archive or back up. And we implemented, or instantiated, this as an app, which has never been done before, because, again, we're the first data management platform with this almost web-service-like architecture; we allow these kinds of modular apps to open up and extend our platform. And so one of them is data retention.
And again, the utility of it, or the value of it, is really about helping organizations that maybe need to comply with New York State DFS regulations around nonpublic information, or maybe want to comply with certain GDPR requirements around personal data, or maybe are in the healthcare industry and subject to HIPAA regulations around data archiving. And so this provides you a mechanism that's versatile enough to support unstructured data, i.e., documents, email, images, as well as structured data you may have in, say, Snowflake or Redshift or a database. And again, it's instantiated as an app, which is powerful because companies that have that problem have this kind of à la carte choice to add the module. It's just another license key. It leverages all of our metadata discovery and all of our other data discovery. And then, again, it solves this painful problem around data retention and management in a consistent way across all your data: not just your files, not just your databases, across everything.
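As a sketch of what rule-driven retention evaluation might look like, consider the minimal example below. The rule shapes, categories, and day counts are invented for illustration and are not the app's real configuration; a real retention system would also need legal-hold logic, workflow approvals, and per-connector archiving or deletion.

```python
from datetime import date

# Hypothetical retention rules: keep emails a year, HR records seven years.
RULES = [
    {"category": "email", "keep_days": 365},
    {"category": "hr_record", "keep_days": 7 * 365},
]

def retention_action(record: dict, today: date) -> str:
    """Return 'retain' or 'archive' based on the record's category rule."""
    for rule in RULES:
        if rule["category"] == record["category"]:
            age = (today - record["created"]).days
            return "archive" if age > rule["keep_days"] else "retain"
    return "retain"  # no matching rule: default to the conservative choice

today = date(2021, 6, 1)
old_email = {"category": "email", "created": date(2019, 1, 15)}
new_hr = {"category": "hr_record", "created": date(2020, 3, 1)}

print(retention_action(old_email, today))  # archive (older than 365 days)
print(retention_action(new_hr, today))     # retain
```

Applying one rule set consistently across structured and unstructured sources, rather than per-system scripts, is the consistency benefit described above.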

 

Debbie Reynolds  32:26

Excellent. So tell me, what thing in privacy, whether personal or business, concerns you most now, or is on the horizon that concerns you?

 

Dimitri Sirota  32:38

Yeah, so I think at the end of the day, privacy is shifting a little bit. I think the first wave, initially, was really very much a kind of fear of penalties. And fear is highly motivating, right? I don't like spiders, so I avoid spiders; I'm highly motivated. I now live in a place without a lot of spiders, for instance. But there needs to be more than that. And I think privacy is shifting to a point where it's really about defining a customer experience. It's about providing that trust and transparency to the customer. It's about creating that loyalty, the relationship that basically says: we are going to handle your data with care; we know what data we have; we know where it is. And so I think, for me, there's this kind of underlying shift. And I think you'll see this in some of the apps we're building in privacy, where we're trying to make it a lot more about the customer experience, for those companies that care about customer loyalty, about customer transparency, about creating a trusted relationship with their customers. We think we could make an impact there, because, again, we have visibility into the data. So it's not just doing this kind of workaday program of: I need to do a RoPA, right, or a PIA, to give to a legal authority. It's really about the company and the customer, and building that trusted relationship. And so I don't know if this is necessarily a fear as much as it is where I think privacy needs to go. It needs to be less about fear of fines and more about fear of churn, more about fear of not providing the best service to your customer.

 

Debbie Reynolds  34:27

Right. I've always said you can make privacy a business advantage. Some people think of it as a tax or penalty, money they have to spend on privacy, but I always try to tell people: you're a steward of the data. The data doesn't belong to you. The data belongs to the customer, so you have their trust. That will also help you build a better customer relationship, so more customers will want to use your product, because they feel like they can trust you.

 

Dimitri Sirota  34:57

That's right, people buy from companies they trust. And at the end of the day, if you think about what happened in the early eCommerce stage, you know, companies implemented PCI to secure their credit cards, they did fulfillment without dropping orders, they handled returns. So data is just the new frontier in that customer experience journey. We've kind of figured out how to do customer support, customer fulfillment, security around credit cards, and now it's about providing that trust and transparency across all personal data, the customer's data.

 

Debbie Reynolds  35:31

Right, right. So if it were the world according to Dimitri, and we did everything you said, what would be your wish for privacy, whether it's technology or law, anywhere in the world?

 

Dimitri Sirota  35:44

Yeah, look, I think it's really about borrowing a playbook from the financial industry. And I think I used the example earlier about the way organizations operate with your money, right? People take responsibility for it. If you deposit it, if you transact with it, right, if you give it to, you know, a theater company because you're going to see a movie, or maybe an e-commerce company because you want to buy one of their products, you assume that there's an accounting of what you give them and what they collect. And then there's a corresponding kind of service, whether it's, you know, depositing interest or whether it's a product that you get. Privacy needs to follow that, right? Privacy, at the end of the day, is about trust and transparency around data. And privacy cannot just be a GRC-type service where this kind of priesthood of lawyers is basically producing work products for regulators. That's missing the point. The real point is about providing power to the people, and privacy is a mechanism to provide power to the people around the data that they are loaning you, that they have granted to you in exchange for something.

 

Debbie Reynolds  37:01

Right? I agree. And I think because privacy is about humans, it has to include all humans, so it can't just be an ivory tower exercise. Well, thank you so much. This has been a great session. Again, I'm so thrilled to see your success and the things you guys are doing, and I love the way that you're thinking about these data problems in those areas of governance, security, and privacy.

 

Dimitri Sirota  37:34

Well, thank you very much again for having me.

 

Debbie Reynolds  37:37

Bye-bye. Talk to you soon.