E75 - Cillian Kieran, Founder & CEO, Ethyca
SUMMARY KEYWORDS
data, privacy, business, organization, governance, policies, systems, developers, people, tools, users, technology, companies, problem, enforce, engineers, underestimate, software, create, building
SPEAKERS
Debbie Reynolds, Cillian Kieran
Debbie Reynolds 00:00
Personal views and opinions expressed by our podcast guests are their own and are not legal advice or official statements by their organizations.
Hello, my name is Debbie Reynolds, and this is "The Data Diva" Talks Privacy podcast, where we discuss Data Privacy issues with industry leaders around the world, with information that businesses need to know now. I have a special guest on the show. He is Cillian Kieran, the founder and CEO of Ethyca. Thank you.
Cillian Kieran 00:39
Thank you, Debbie, for having me. It's a pleasure to be here.
Debbie Reynolds 00:42
Yeah. So this is interesting; I've been following your company for quite a while, and I'm very impressed by the work you do, really the content and the image that you put out there. I think you're doing something really, really interesting. So obviously, you're a data company, but you have an ethical bit as well, right? So give me an idea: first of all, tell me about your journey into privacy and your journey with this company, and what caught your attention about this area?
Cillian Kieran 01:19
Absolutely, happy to. So I'm certainly not from a typical, traditional privacy background in a policy or legal function. My background is data, and then software engineering, for my entire career. Previously, I founded a digital marketing company that handled a lot of data on behalf of its customers, and I ran that company for a very long time, over a decade. As we worked with our customers, we regularly encountered challenges in how we would comply with regulations like the GDPR. And if you cast your mind back, although you and many of your listeners have been experts in privacy for your entire careers, in many respects technology companies and data-driven businesses have only really concretely thought about the risk related to trust, governance, and privacy in the last decade at most. And so many of them address privacy in the typical fashion, which is imposing governance on systems, that is to say, compliance workflows, process management, policy enforcement, and so on. Those are all very valid things to do. But as a data engineer, what I found surprising about my previous business was that it was two systems working asynchronously. That is to say, there was the legal and governance side of the business, and there were the software and data engineers building systems. And they're often at a sort of cultural loggerheads; they don't have the same objectives or goals within the organization, and there's often a lack of understanding among engineers of how to do privacy well. That left me spending a lot of time thinking about how we might make software inherently more trustworthy in its design. How could we take privacy by design and consider it at the software engineering level? How do we make those capabilities features of the software rather than things you bolt on afterward as a process? So I spent a number of years working on prototypes until we got to what has become Ethyca, about three years old now. We focus on providing tools to other teams of developers and data engineers to help them make more respectful systems on behalf of their users.
Debbie Reynolds 03:28
That's fascinating. I am a data nerd. So I decided, I don't know, 10 or 15 years ago, to learn to work kind of like a webmaster, just to understand how the Internet works. So I'm an expert at SEO and things like that. But I was kind of shocked that people just don't understand data flows, right? They don't understand how data moves and all the different systems that hold data; we'll talk about that as well. But one thing you mentioned that I think is really interesting is this gap, right? There's a gap between privacy as people think of it, as policy, a checkbox-type exercise: have I put up a cookie banner, have I checked a box, I'm good. And then there's the operational part of privacy. So when I think about companies that run afoul of regulation and get fined, I'm not seeing anybody get fined because they didn't understand the law. They got fined because there was something operational they couldn't do, something they couldn't translate into operations. So talk to me about how you're finding bridging that gap between those two sides, the legal side and the operational side of privacy.
Cillian Kieran 04:53
Absolutely. Well, really, you've nailed the problem, right, which is that often organizations have, let's call it, a substantial enough amount of data for it to be a real, critical risk, and they have pretty well-oiled governance structures. So they're good at managing the policy aspects of privacy and governance. And as you say, very often, if you examine failures that result in a fine, say, or some kind of enforcement action, you'll rarely find an organization that intentionally did bad or is run by terrible human beings; often their mistakes were made at the operational level, or there was a lack of ability to impose the policy onto the operations. So you often end up with incredible reporting tools on the governance side; the GRC side of the house often has great reports that say we understand where data is, and nobody's lying, everyone's trying very hard to keep that accurate. But it often doesn't reflect how quickly an organization's data models change. You mentioned people don't understand data flows. Often software engineers are building things so fast, adding new systems and modifying the technology landscape of the business so quickly, that the governance model can't maintain parity; it's just not able to keep up. And so then, when you look at things like enforcing a user's rights requests, like a data subject access request or an erasure request, that's almost impossible, because you don't actually know where the data is; it's changing on a daily or weekly basis. And if you want to enforce data minimization controls and ensure that you're not excessively collecting, that you're using data only for limited purposes, that's almost impossible too, because you don't actually know what you're using the data for, where it resides, or where it's going. Of course, the logical place to start is data mapping, and that is the right step in the early stages of a privacy program. But very often it becomes a labor-intensive process led by the governance entity, and the engineers don't understand its value or purpose and aren't heavily involved. So the systems continue to be built at pace, and they quickly diverge: the software and data you operate look very different from the policies you've written. So when you say bridging that gap, that's quite literally what we've spent the last three and a half years doing, trying to understand how you would create tools you could give to any developer to lower the bar for privacy. And when I say lower, I don't mean lower the quality of privacy; I mean make it easy for every developer to implement tools that make systems observable and instrumented: identify what data is in what system, how it's being used, and what it's been used for, without the need for a human being to continuously update that. Then you have context. If I can tell you concretely what kind of data exists in all systems and what it's being used for, and I can evidence that, now you can write policies on top of it. You can say, okay, we don't use these types of data together for these purposes, or we simply don't allow these purposes at all. That enforcement becomes very easy because you have the total context to actually control the data.
And so our focus is to provide those tools to those developers, along with the education and capability to make it easy for them as part of their existing workflows. It's a very low-friction promise for the developer, and the benefit is for everybody: software and data engineers understand their systems better and can still move quickly and be agile, while the governance organization can write very responsive policies as regulation or internal business conditions change, without the need to rebuild things.
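To make that concrete, here is a minimal sketch, in Python, of what "writing policies on top of the data map" could look like. The names and structures are purely illustrative, not Ethyca's actual schema; it assumes each system declares the data categories it touches and the purpose it uses them for.

```python
# A minimal sketch of declarative policy enforcement over a data map.
# All names here are hypothetical, not Ethyca's real schema.
from dataclasses import dataclass

@dataclass(frozen=True)
class SystemDeclaration:
    name: str
    data_categories: frozenset   # e.g. {"user.location", "user.gender"}
    data_use: str                # e.g. "advertising"

@dataclass(frozen=True)
class PolicyRule:
    forbidden_categories: frozenset  # categories that may not be combined...
    forbidden_use: str               # ...for this declared purpose

def violations(system, rules):
    """Return every rule this system's declaration would break."""
    return [r for r in rules
            if r.forbidden_use == system.data_use
            and r.forbidden_categories <= system.data_categories]

# Governance writes the policy once; every declared system is checked against it.
rules = [PolicyRule(frozenset({"user.location", "user.gender"}), "advertising")]
system = SystemDeclaration("recommender",
                           frozenset({"user.location", "user.gender"}),
                           "advertising")
print(violations(system, rules))  # non-empty result -> block the deploy in CI
```

The point of the design is that the policy lives as data the governance team can edit, while the check runs automatically wherever developers already run their tests.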
Debbie Reynolds 08:22
Well, that's fascinating. I love the way you're attacking the problem. But let's talk about data mapping. Okay, so this is a term that is overused, and it means so many different things to different people. So describe to me how data mapping happens in your world and how it may be different. I've seen people pull out Excel spreadsheets and say, hey, I have a server in Cleveland, and whatever, and I'm done. And it's like, no, that's not it. Or I've seen people go into excruciatingly ridiculous detail that makes no sense, and then it's so detailed that when the operation changes tomorrow, it's totally out of date and out of sync. So talk to me about data mapping from an operational view.
Cillian Kieran 09:15
For sure. So I'll give you our perspective, mine and therefore our team's, on how we see it, but first think about how it's typically done. There are probably two dominant approaches today. There's the human-in-the-loop, very labor-intensive approach you mentioned: spreadsheets, questionnaires, hours of meetings and debates over which systems are in and out, and then someone realizes they didn't include the billing system that someone plugged in two months ago, and there are unknowns all over the place. It's very labor-intensive and very hard to maintain. It's often a lot of spreadsheets, and you might end up using a governance tool that allows you to manage those maps. But in truth, most of those governance tools, the traditional companies that are called privacy technology companies, really just provide you with a way to manage a slightly prettier version of the spreadsheet. I think it's important to recognize that what they really do is ask you the same questions; instead of putting the data in a Google Doc, you're putting that information into a slightly prettier dashboard. That's one way to solve it. The other is that, increasingly, people are relying on machine learning classifiers to do data discovery; there's a bunch of solutions in that space. The challenge with relying purely on machine learning for data discovery is, first of all, precision, or accuracy. The world of compliance is binary; it's not a spectrum of being compliant. What I mean is, you can either evidence that you comply, by knowing where data is and what it is, or you're out of compliance, because you don't. Machine learning, in the best academic environments where it's very precisely trained and managed, might get to 88 or 89% accuracy. That means you've got an 11 or 12% risk of inaccurate or incorrectly labeled data. The other issue with the machine learning classification approach is that it's essentially boiling the ocean. What I mean by that is you're saying, I have a legacy of data from my business operations going back decades, and I'd like to plug in something to sift through that data to look for needles in haystacks. And every time my systems change, I'll keep sifting through the haystacks, hoping I never fall out of compliance. You could do that, but it's a very labor-, cost-, and resource-intensive way to solve the problem. The Ethyca approach, the one we're working on with Fides and our open-source developer tools, is very different. We provide a server, quite literally called the Fides control server. It runs in the organization's own infrastructure; it's configured internally, it's managed internally, we don't host it, you use it yourself. It can connect to source systems like databases and data warehouses. But its primary purpose, if you start at the software development lifecycle, is that it first allows the organization to describe policies for things they permit and don't permit: types of data you can use under certain conditions, purpose limitations, and so on. The governance team writes those policies, they're stored in the server, and the server is then connected directly to the work that developers do.
So as a developer writes new code, makes changes to systems, and connects to new databases, the server is effectively monitoring those changes and enforcing those policies. If a developer attempted to write a piece of code that was going to use location data and gender data to infer something in a way that the organization, at the governance level, says we don't permit, that will be prevented. It would never even get deployed; you could never get to the point where, say, Facebook was misusing phone numbers, because the code would never pass the policy checks that exist in the development process. But aside from being able to enforce policy in the development process, ensuring you never deploy anything non-compliant, the other benefit is that the evaluation process actually generates metadata. Every time a system changes, every time a developer adds a column to a database, modifies a data structure, or stores new data, it regenerates and updates the data map to reflect the change in the organization. So instead of saying to your developers, hey, you work in two-week sprints, and every time you deploy new code you've got to come back and fill in a privacy review report and manually update the data map, they just continue coding, and the Fides server monitors their commits and generates a metadata layer from the work they've done. That metadata exists in the production environment so you can report on it. You can say, here's how our map has evolved. And it's extremely fine-grained, from the data a service receives to the data it sends out, and which databases are storing which types of data.
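The metadata-generation side he describes can be pictured with an equally small sketch: instead of asking a human to update a spreadsheet, the map is refreshed from the schema that is actually deployed, and only genuinely new fields are escalated for classification. The structures here are hypothetical stand-ins, assumed for illustration, not the Fides server's real internals.

```python
# A sketch of keeping a data map in sync with schema changes. A plain dict
# stands in for real database introspection; names are illustrative only.

def refresh_data_map(schema, data_map):
    """Add newly observed columns to the data map and return the ones
    that still need a data-category label from a human or classifier."""
    unclassified = []
    for table, columns in schema.items():
        known = data_map.setdefault(table, {})
        for column in columns:
            if column not in known:
                known[column] = None          # observed, not yet categorized
            if known[column] is None:
                unclassified.append(f"{table}.{column}")
    return unclassified

data_map = {"orders": {"id": "system.id", "email": "user.contact.email"}}
schema = {"orders": ["id", "email", "shipping_address"]}  # dev added a column
print(refresh_data_map(schema, data_map))  # -> ['orders.shipping_address']
```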
Debbie Reynolds 13:47
Wow. So you're basically attacking the problem from the opposite direction from most people, right? Most of the time, people say, well, let's get our template and create a big policy and procedure, and then let's see if we can get people to do it technologically. The problem with that, and people who aren't technical may not understand this, is that they may be describing something that can't really be done. They're kind of creating a new problem by imagining the way that data works. So I think the way you're going at it should be much easier for organizations, because I always tell organizations, you need to make sure that your policies and procedures match what you actually do. Your policies should be operational, not aspirational. If you can't actually do it, you shouldn't say it. The way you're doing it is like an evidence-based approach: this is what's actually happening, you're leveraging the technology to help tell that story, and then, based on that story, companies can create policies that align with it.
Cillian Kieran 15:12
Exactly, Debbie, I think you've encapsulated it perfectly. It's basically saying, okay, there are two different domain experts here: the policy and governance experts, and the engineers who know what the data is doing. And if you want that to work seamlessly, for the benefit of the organization and the software systems you've built and the data flowing through them, you need to provide developer tools that make it very easy for engineers to provide context, that is, understanding as to what data is flowing where. If you can do that, you can then enforce actively on it. And to your point, the operational aspect becomes easy, because you're essentially able to say, we know precisely what kind of data we have, where it is, and what we're using it for. And not just today for the GDPR, but for any regulation in the future: we can create new policies and know that we can enforce them consistently on our systems, whatever they are.
Debbie Reynolds 16:04
Excellent. So tell me, and this is the challenge, this is what I would love to hear your thoughts on: you're attacking the problem from a completely different angle than most people. Privacy may, I think, be the opposite of cybersecurity in one way. One of the problems in cybersecurity is that companies may not embrace it, and the investments they need to make there, until something bad happens, right? We know that privacy requires a more proactive attack on the problem. So how do you convince people to proactively embrace the tools that you're building? What do your customers say? What is the impetus for them to call you?
Cillian Kieran 17:07
Really good question, Debbie. So I think we all recognize the secular industry trend, with the GDPR, the CCPA, the LGPD, and what's happening in China. You're a seasoned expert in this space, as are many of your listeners, so for you, privacy has always been present; but I think there's a realization across the industry that it's become a business-critical issue in the last, I would say, five years. So businesses are increasingly trying to find solutions, and the degree of sophistication of the business affects their understanding of the complexity of the problem and, therefore, the solution they'll want. With that said, our approach, and it's important that you mentioned customers, because we're growing incredibly quickly, and I don't say that out of arrogance but because our approach is very different, is that we are a completely open-source business. Our tools are freely available; they're available right now to pull from our public developer repositories on GitHub. Any organization can copy and use that code, modify it to its needs, and continue to leverage it for privacy purposes. So, for example, if you want to do data subject request automation, DSR automation, with Ethyca, you can go and pull the public Fides tools, read the documentation, join the Fides community, and get it up and running with your own internal IT or technical department in a matter of days, at no cost. There's a huge cost benefit there; we don't charge for any of those tools. The other side of it is that our approach very much appeals to the part of the organization that is struggling the most with privacy. What I mean is that the governance and policy teams have a role here, to ensure the organization is compliant. But for a lot of developers and engineers, privacy still manifests as unpleasant friction. It doesn't mean they don't like privacy or security, but their experience of it at the coalface of the business is pain: disagreements over how to do things, delays in shipping code, audit trail reporting. What you're going to them and saying is, here are tools that make your users, your customers, safer; they will trust your business more in the future; and it's going to allow you to speed up the way you build things. These tools have endless benefits with very little friction. That's a very powerful promise, given that they're freely available.
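For a flavor of what DSR automation involves, here is a toy sketch that assumes a data map listing each system's personal-data fields and a fetch function per connected system. The real Fides tooling ships its own connectors and request workflow, so treat this only as the shape of the idea.

```python
# A toy sketch of data subject request (DSR) fulfilment driven by a data map.
# Structures and names are hypothetical, not Ethyca's actual connectors.

def fulfil_access_request(identity, data_map, fetchers):
    """Gather every record tied to one identity, system by system,
    so nothing listed in the data map is silently skipped."""
    report = {}
    for system, pii_columns in data_map.items():
        rows = fetchers[system](identity)
        # Return only the columns the map marks as personal data.
        report[system] = [{c: r.get(c) for c in pii_columns} for r in rows]
    return report

# A toy datastore standing in for a CRM.
crm = [{"email": "ann@example.com", "phone": "555-0100", "plan": "pro"}]
print(fulfil_access_request(
    "ann@example.com",
    {"crm": ["email", "phone"]},
    {"crm": lambda who: [r for r in crm if r["email"] == who]},
))
# -> {'crm': [{'email': 'ann@example.com', 'phone': '555-0100'}]}
```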
Debbie Reynolds 19:29
Yeah, the topic of friction is interesting. I've always been of the mindset, and I've seen this happen over and over, that people are going to do what's easiest for them, right? So if you can make this process easy for them and reduce friction, in the example you gave, where someone's working in sprints and every two weeks they have to go back to the spreadsheet or whatever, we know that's just not going to happen. So if you're leveraging technology in a way that helps them continue what they're doing, but also produces the evidence and information in a format that not only helps them but also helps other parts of the business, I think that's really key.
Cillian Kieran 20:23
I totally agree, Debbie. And I'd like to give you a concrete example of that point about reducing friction: privacy impact assessments, or DPIAs. There are so many names for these things, DPIAs, privacy impact assessments, privacy reviews, and organizations have multiple ways of doing them. But in a business handling a lot of data, they've often created a privacy review process along the lines of the privacy by design principles, which means you're asking developers to describe the things they're building, the business context for why they're building them, and what kind of data is involved, so that you can evaluate risk. But it's often a very labor-intensive process for everybody, for the legal or GRC function and for the engineers describing it. And often there's a translation issue: the governance and policy teams write, speak, and think in a different language from the engineers, so they each have to translate the other's view of the world to understand what's actually happening and evaluate the risk. It's very manually mediated and labor-intensive, even if you use privacy tech tools. The pitch to an engineer, to reduce that friction with Fides, is: add these tools to the way you build things, and tomorrow they will generate reportable impact assessments; they will generate the data you need to hand to privacy teams so they can evaluate risks in your code directly, without the need to spend three days filling in forms and collecting information. All of those are huge accelerants that give a developer a reason to adopt the tools and, hopefully, make the systems they deliver safer.
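As a hedged illustration of impact assessments generated from the code itself, the sketch below has developers annotate the functions they ship, and the skeleton of a review falls out of a registry. The decorator and report format are hypothetical, invented for this example, not an existing Fides API.

```python
# A sketch of deriving privacy-review input from code annotations.
# The decorator and registry here are hypothetical, for illustration only.

PRIVACY_REGISTRY = []

def declares_data(categories, purpose):
    """Record what personal data a piece of code touches and why."""
    def wrap(fn):
        PRIVACY_REGISTRY.append(
            {"function": fn.__qualname__,
             "categories": categories,
             "purpose": purpose}
        )
        return fn
    return wrap

@declares_data(["user.contact.email"], purpose="order_confirmation")
def send_receipt(email):
    ...  # business logic lives here; the annotation documents the data use

def impact_assessment():
    """Render the registry as the skeleton of a privacy review."""
    lines = ["Privacy impact assessment (auto-generated):"]
    for entry in PRIVACY_REGISTRY:
        lines.append(f"- {entry['function']}: uses "
                     f"{', '.join(entry['categories'])} for {entry['purpose']}")
    return "\n".join(lines)

print(impact_assessment())
```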
Debbie Reynolds 21:53
Yeah, so I'll address a different issue here, and it comes up a lot. Some organizations benefit from the labor intensity of this process, so they may not want the efficiencies you're talking about. Have you ever encountered that?
Cillian Kieran 22:20
I have to candidly say, not often with our clients or customers directly, but I know what you're referring to. I'm aware that in the industry there is a sort of universe of operational privacy management that is well-oiled and process-driven, and valuable from the point of view of roles and individuals' expertise. Very transparently, the folks in that space who are thinking that way about it probably won't find what we're doing appealing today, because it's designed to streamline that work. But we, myself, our board, our investors, and our management team, look at this with a 10-year view. Assume for a moment that we are legitimately racing towards some version of the world, whether it's the metaverse or not, I'm not picking how it's delivered, in which we're spending even more time connected to data-driven systems. That's a scary thought, but we're probably headed there. If that is the reality, and we increasingly rely on machine learning and partially automated tools to power that universe, then the sheer volume of data flowing and the complexity of the system designs processing it will be very difficult to police, enforce, manage, or govern with human beings. That isn't to say humans don't have a role; it's to say that the scale at which this is happening is already reaching a critical point that's very difficult to manage with humans alone. A random example, nothing to do with us: think of the moderation problems Facebook clearly has, significant problems in moderating content from a trust and ethics standpoint. They are employing thousands of people and are still not able to bring that issue under control. There are many systemic issues at Facebook, arguably, that make it a challenge to resolve. But fundamentally, it is hard to believe you could police the content of more than a billion users with a couple of hundred or a couple of thousand moderators, right? I take that example to say that as data flows in any business increase with its success, the enforcement tooling has to become technology-driven, or it's unsustainable.
Debbie Reynolds 24:37
Right, you hit on a big issue. I was talking with an investor group yesterday, trying to explain this problem, because they were thinking, oh, only large companies need these types of technology tools. And it's like, no, data is increasing exponentially. We're creating much more data than we ever did before, and you mentioned the metaverse; we'll be creating even more than that. These data problems are going to hit all types of organizations, big or small. Just because you have a small team or a small group doesn't mean your data problems are small. To me, technology has always been best used when it's helping organizations do the heavy lifting.
Cillian Kieran 25:32
Totally agree. And again, I think it's so important to say: we're not in the business of automation to displace people; it's actually the opposite. Our recognition is that governance and policy specialists in privacy and complex regulatory frameworks are domain experts, and their time, energy, and expertise are best spent strategically, assisting the organization in leveraging its data in ways that are acceptable, trustable, evidence-based, and respectful on behalf of its users. Developers don't have that subject matter expertise and shouldn't have to, while writing that code and building and maintaining those systems. What we actually need are tools that bridge that gap, so that both groups can do the tasks they're best suited to in parallel, while ensuring the operating systems of the organization are as safe and trustable as they can be on behalf of the users whose data flows through them.
Debbie Reynolds 26:30
Excellent. Well, tell me, what are you seeing right now? You're definitely someone who's very forward-thinking, and you see what's happening in the privacy world. What's happening right now in privacy or in technology that concerns you most?
Cillian Kieran 26:47
I could answer this as the parent of two very young kids or as the person who runs a technology company; there's a very personal side of it, which drives some of the business decisions we make anyway. Without getting excessively evangelical, I think we underestimate, already, at this stage of where this venture-backed technology industry in particular is, the volumes of data being collected about individuals, and the ways in which that data can be used, now and in the future. There's a lot of focus on informed consent, on helping the user understand, and on dark design patterns, and those are all very valuable problems that should be solved. But I think we underestimate the exhaust fumes, the trail of data generated by every human being on Earth, where it is being co-located in servers and infrastructure around the world, and how it can be used. One of the things I talk about internally with our team a lot, and it might seem like a nuanced example, is that we forget that very few businesses endure. Facebook might last generations; Google might. But we forget that MySpace was the Facebook of two generations ago. That business was bought cheaply, ultimately, by News Corp, and the data of hundreds of millions of users was acquired with it; it wasn't the design of Myspace or the user interface that was acquired, it was the data. I think we'll see a lot of data arbitrage and acquisition over the next two decades, and there are very valuable data sets in very small companies. Suppose I were to go out and bring together five failed e-commerce and home delivery companies from the last two years of the pandemic. I could accumulate a huge amount of user data, which I could use for any number of purposes, depending on how well that's enforced, policed, or managed. So one of my biggest concerns is simply the lack of awareness among end-users of the volume of data they're generating, and the depth to which it might be used over decades to manipulate, confuse, or obfuscate. That makes me very nervous. It's an esoteric one, but I think we underestimate just how that data will be used over the next couple of decades.
Debbie Reynolds 28:59
I agree with that. First of all, we don't understand all the data that's being collected in the first place, partially because we don't think anyone cares about it, right? You would think, who cares that I walk to the coffee shop three times a week, but someone somewhere wants to know that, and who knows what they're going to do with it? Maybe they think, oh, a crime was committed around here, and Cillian, you walked to a coffee shop in the past week. So you're right, I think there will be a lot of data acquisition. I know the FTC is pursuing all these antitrust cases, but I'm concerned that they're swinging a big ax and missing the mark as it relates to that sort of data.
Cillian Kieran 29:53
I was going to agree vociferously. I think the other side of it is the business aspect. Organizations are very focused on privacy compliance at the moment because it's the burning commercial issue, but it's about more than that. This is about your ability to trust in the governance model of any organization: are the things the business says it is doing with my information, or any data for that matter, true? And can I verify that those are the things actually being done with the data, whether it's about a human user or it's financial records or medical data? The reason I say that is that we forget that regulations will evolve. The critical business issue might not be personal data management or governance in five years; it might be something else. A good example is the DSA in Europe, the Digital Services Act, when that comes into force.
Sure, there are some privacy aspects to that. But there are also aspects that relate to market manipulation and the use of commercial data to drive commercial advantage in an antitrust environment. That starts to move the need to govern data for totally different reasons. So if you're thinking about data maps purely from the point of view of personal data, that's not good enough. You need to know all of the types of data that flow through an organization so that you can govern them effectively, so the business can be trusted to keep its business promises.
Debbie Reynolds 31:11
That's fascinating. I hope that we see more talk around what I call data monopolies, where companies are acquiring other companies that really have nothing to do with their core business, because the data fills out more detail; they can join together more information about individuals.
Cillian Kieran 31:38
Absolutely agreed. I think, again, we underestimate the volume, and the monopoly or cartel effects of a handful of businesses working together to use that information for their benefit. And that will only accelerate; it's not going to slow down.
Debbie Reynolds 31:52
Yeah. So if it were the world according to Cillian, and we did everything that you said, what would be your wish for privacy in the future? Whether technology, human stuff, or legal, what are your thoughts?
Cillian Kieran 32:12
I share this publicly, but we talk about it very often internally at the company, so if you speak to anybody at Ethyca, they'll tell you this is Cillian's soapbox. I really do believe that software developers have a very powerful role to play in building safer systems. We forget that software engineers today are effectively building society's civil infrastructure. What I mean by that is, 30 years ago, software developers were building video games and calculators, and if a calculator breaks, that's all right; I'm being glib, but it's not a big issue. Nowadays, a software engineer can ship code into the world that might be used by, or affect the data of, 100 million people, and that's success for them. Well, if you're doing that, you're no longer building a calculator or peripheral technology; you're building civil infrastructure. The truth is that software powers the entire world: pick an industry, and it's powered by software. What you will never see, though, is civil engineers who build bridges wearing T-shirts that say move fast and break things. You would never cross that bridge; if you met a civil engineer wearing a T-shirt that said that, it would be petrifying. So isn't it curious that we allow the people who effectively build our critical systems, our communications infrastructure, to move fast and break things? I don't believe we need to move slowly at all. But I think we can build better tools to enable developers to build safer, more respectful, and more trustworthy technology on behalf of everybody on the planet. That is my hope. And if you were to square it, what do I hope we achieve, both at Ethyca and for the industry? I hope that we help create a standard, and that the standard is easy for all developers to use, so there are no excuses to avoid building trustworthy technology on behalf of every user. That should be a baseline; we should all care enough to want to build safe technology, forever.
Debbie Reynolds 34:09
I love that answer. Wow, I'll have to really think about that; it is true. I think about bias, you know, bias in systems. It's like, okay, you have a developer writing this code, and they're hitting all the marks for their company, but then maybe that bit of code helps implicate someone in a crime, for example. Let's say someone ends up in jail as a result of something someone coded. I feel like that process is disconnected from the human impact there. So I agree with you: you shouldn't be breaking things, especially because the things are people, right? You'll be breaking people if you're doing this the wrong way.
Cillian Kieran 35:02
Completely, I think that's just it; we underestimate that. We have a number of large customers and users of our technology who increasingly use it to do what they call explainability in machine learning models. There are not yet explicit regulations in that space today, but a lot is coming very fast; it's inevitable. This idea of being able to actually explain what the computer is doing, from a machine learning point of view, what decisions it's making and how it has arrived at those conclusions, I think that should be mandatory. I don't want to create work for developers; I just think that I, as a software developer, should care enough to make a piece of software that is safe. As I say, through the context of my two young kids: I don't want to build bridges that fall. I want to make software that's enduring, that lasts for a long time. And we should all want that.
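Explainability covers many techniques; one of the simplest, permutation importance, fits in a few lines: shuffle one input at a time and measure how much the model's agreement with known outcomes drops. This is a generic toy example of the idea, not anything specific to Ethyca's product.

```python
# A toy sketch of permutation importance: perturb one feature at a time and
# watch how much the model's accuracy falls. Illustrative only; production
# explainability uses richer methods (SHAP, counterfactuals, etc.).
import random

def model(row):              # stand-in "trained model": approves on high income
    income, age = row
    return income > 50

random.seed(0)
data = [(random.uniform(0, 100), random.uniform(18, 80)) for _ in range(500)]
labels = [model(r) for r in data]   # ground truth matches the rule exactly

def accuracy(rows):
    return sum(model(r) == y for r, y in zip(rows, labels)) / len(rows)

for i, name in enumerate(["income", "age"]):
    col = [r[i] for r in data]
    random.shuffle(col)               # break the feature's link to the outcome
    perturbed = [tuple(col[k] if j == i else v for j, v in enumerate(r))
                 for k, r in enumerate(data)]
    print(name, "importance:", round(accuracy(data) - accuracy(perturbed), 3))
# income shows a large accuracy drop; age shows ~0: decisions hinge on income.
```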
Debbie Reynolds 35:49
Absolutely, absolutely. Well, as I say, software is not like fine wine; it doesn't get better with age, right? You definitely have to maintain it. If only it did. Well, thank you so much for being on the show. This was really interesting. I love what you're doing, and I like the fact that you are really looking at this gap. I hope that more companies look at this problem the same way you're doing it: this is what is actually happening within the enterprise, so let's build policies around that, as opposed to creating policies first and then seeing how we can do them operationally, because it doesn't work that way.
Cillian Kieran 36:37
Absolutely. First of all, thank you for having me; it's been an absolute pleasure, and I'm happy to talk to you and share what we're doing. And likewise, I hope that more companies in privacy embrace this approach, and that we end up with more people competing in this space, because that's only better for our industry. So I hope to see that soon.
Debbie Reynolds 36:55
I agree, I agree. We'll definitely talk soon. I love what you're doing, and thank you so much for being on the show. Thank you so much.