E173 - Nitin Singhal, VP of Engineering, SnapLogic

39:25

SUMMARY KEYWORDS

data, blind spots, companies, data governance, privacy, lineage, insights, build, context, feel, happened, responsibility, organizations, rules, practice, purpose, understanding, customer, perspective, regulatory

SPEAKERS

Nitin Singhal, Debbie Reynolds

Debbie Reynolds  00:00

Personal views and opinions expressed by our podcast guests are their own and are not legal advice or official statements by their organizations. Hello, my name is Debbie Reynolds; they call me "The Data Diva." This is "The Data Diva" Talks Privacy podcast, where we discuss Data Privacy issues with industry leaders around the world with information the business needs to know now. I have a special guest on the show all the way from San Jose, California. Nitin Singhal is the VP of Engineering for SnapLogic; welcome.

Nitin Singhal  00:41

Thank you. It's a privilege to be here on your podcast. I've been a listener to a lot of great showcases, been learning and impressed by the great audience and great speakers. So looking forward to the session. Thank you.

Debbie Reynolds  00:55

Thank you so much. You contacted me on LinkedIn, and we're connected on LinkedIn, and you gave me a great compliment. You said, "I really enjoy that you're raising awareness about data and privacy. Hey, you should be on the show." Well, I thought you'd be a great person; I am a data person, and your realm of work definitely interests me. I feel like if you're a data person, you can go in any direction, right? You have to deal with data on different levels. But I would love for you to talk about your career journey in data. How did you get interested in this field of work?

Nitin Singhal  01:35

Absolutely, thank you for that; when I talk about my career journey, it just makes me feel like I'm getting older, and I am actually old. I've been in the industry, in the ecosystem, for 25 years now. The first 10 years of my career were all about building application systems and application engineering, working as an engineer in the US and England and building credit processing products. My data journey in the true sense started in 2006, when I joined PayPal; at that time, PayPal had recently been acquired by eBay, and the whole goal was for PayPal to be ubiquitous. My responsibility was to serve the data needs of marketers and salespeople and also mitigate regulatory risks. If you recall, 2006 was right after a lot of financial regulations were put in place for anti-money laundering, to make sure that every single transaction that happens is examined for potential bad funding or terrorist funding. What was the play? It was all about understanding data at that scale, being able to meet compliance obligations to regulatory bodies, and at the same time finding enough insights to generate leads for sales. From PayPal, I moved to Visa; PayPal and Visa have always had cross-pollination of people. After building the entire data engineering team at PayPal, where I built a team of about 200 people scattered across different continents, I switched to the product side at Visa. My charter started when somebody asked me what the next wave of growth for Visa should be, and my response was that Visa as a payment company is a given, but Visa can be a data monetization company. If anyone has the largest conversion data for any transaction, it is Visa, which was processing almost 52% of all E-commerce transactions at that time, if I recall the stats. Now, the challenge was who owned the data. Visa is a network company; the data is owned by banks. How do we make sure that we are playing by the rules of the land?
And it has to be a very transparent game. That is where I started emphasizing data governance; I incubated a team for data management and data governance to make the data rights visible, so that insights could be used for generating transaction enrichment from the merchant perspective. My team built an ecosystem where we had the largest merchant insights repository, covering 125 million merchants with the exact latitude and longitude at the point of transaction. It was all in collaboration with the issuers and acquirers, and we made sure that the rules were visible. It was a very fast-moving and very satisfying experience to build an ecosystem for non-fraud dispute resolution with banks that would save hundreds and hundreds of millions of dollars. Then I had a stint at JPMorgan, where my charter was to build a third-party data ecosystem: push all the third-party data onto AWS and put a data science and machine learning layer on top of that. And again, one integral aspect of all of these roles was the responsible use of data. Whatever we do, how do we have a layer that is actually keeping us honest about what we can and cannot do with data, and then present those contextual insights? That way, the actual consumer of the data is unblocked from those constraints; the machine is taking care of all those rules. We built that in a short time before I got poached by Twitter. Twitter and Facebook were coming fresh off of those Cambridge Analytica election challenges, and there were a lot of media and regulatory eyes on them. They wanted somebody who had regulatory experience to build a data management and data governance practice. It was a very satisfying experience, with some of the smartest people I've worked with, and the volume of data was huge. Of course, a tech company has a bottom-up culture, which means that we want to rely on innovation.
At times, smart people are so self-sufficient that they will have every kind of rule in the book in terms of processes, right? And it becomes challenging when you're trying to see where you have to enforce controls. So one of the topmost engineering experiences, and satisfactions, I had was building a large auditing system that would understand every single data transaction and make sure that we were putting responsible use and appropriate controls on top of it. That was my Twitter journey: a lot of great products, great people, and the satisfaction of making sure that we were using data for the right purpose and keeping users safe, as bad things can happen on those platforms. Our responsibility is that data is used for the right purpose and does not end up in the wrong hands. And then the Elon Musk saga happened, and I decided that, hey, I have no complaints with anybody; everybody has their own way of doing things. I decided to leave Twitter at that moment and join SnapLogic. My responsibility as VP of Engineering is to build the enterprise data orchestration system on top of the integration platform. SnapLogic is a data integration company that connects applications and data together. One of the things that we take special pride in is making sure that the metadata and context of the data are part and parcel of how we move data around. That is my passion, and that is my responsibility: to make sure that the pipelines we build are responsible pipelines, built optimally. I'll pause here.

Debbie Reynolds  07:33

Yeah, that's fantastic. That's tremendous. I think people like you are the future of technology companies because, in my view, if the company were a wheel, you'd be at the center of the wheel, right? You know where all the data is. When I started my career, this was before, and I'm dating myself, right? This was before the Internet; no one had email. Back then, the asset people brought to the company, they thought, was the task or the skill. With the rise of technology and all this data, data has become the company's biggest asset, and being able to figure out the best way to use it is a challenge. As you say, thinking about data governance and regulation is definitely heating up around those data points, those ways of actually looking at data. Tell me a little bit about how privacy interjects itself into your job or your role.

Nitin Singhal  08:38

I sometimes ask, can somebody live in a house without any curtains and doors? Everybody needs privacy as a human being, and so does data. Anytime data is exposed for the wrong purpose, the endpoints and the place where that data is sitting could be in the hands of the wrong people. When it comes to privacy and managing data in a responsible way, I always relate it to how we live our lives on a daily basis. I call it a healthy data lifestyle. What is a healthy data lifestyle? Eat your vegetables, drink ten glasses of water, do your run, get a regular physical check every 12 months, and do not close your eyes to the facts: monitor your blood sugar, monitor your cholesterol, monitor all those important parameters, and make sure that you're living a healthy lifestyle. It is exactly the same with data. The crux of my experience in the last 15 years is that, at the end of the day, it is about the mental baselines, and embracing hard facts is very, very critical. So, we have to make sure that the data is auditable. Like the physical check, like the blood test: if I'm not able to audit data, I'm going to have issues; I have blind spots. Our job as data leaders or data practitioners, every single day, is to uncover those blind spots and bring reality to the surface. That is a practice I have always adopted: hey, I can reconcile data, I can audit data, I can see who is using it, why, and what for, so that we can ask those questions and see that it meets the bar for those controls. When I talk about eating vegetables, drinking water, and all of those things, what does it mean? Take an example: if I have a young child in the house and I leave all my valuable documents, jewelry, or cash exposed in the front room, it is going to be risky, right? So, depending on the way I live my lifestyle, it is all about how I classify things within the house and appropriately secure them. It is absolutely critical.
Sometimes, people think that data governance is just a regulatory thing; it is boring, and data governance professionals find it hard to get traction in tech companies. But my pitch to them is: do not plead for adoption; create a realistic and data-driven model that makes the case. What will happen if we do not do this? If our data is not classified properly, what is going to happen? It is all in the public domain: what happened to Facebook, what happened to Twitter when data was used for the wrong purpose. Then regulatory bodies enforce large fines and ask you to do things in a very controlled way that slows you down and eventually can take the business impact away. So, have the right data classification. The third thing is understanding data for its purpose and intent. Sometimes, people think that cloud migration is fast and good and everything will be taken care of automatically. No. At the end of the day, people who have good insights and mental models about their data succeed in the data governance or privacy practice. Let's say somebody tells me, hey, my GPA is 4.0, versus somebody who says my GPA is 2.0; my mental model tells me whether to scream or to give a hug, right? In the same way, having an understanding of our own internal data, and having the instinct that something is good or bad about it, is very, very critical. So, three things: the data governance practice, which is not just for regulatory purposes but for critical business drivers; the second piece is auditability of the data to uncover blind spots; and the third thing is understanding and building mental models about your own data. I feel that if, as data practitioners, we can adopt that, we can build the world.

Debbie Reynolds  12:47

That's fantastic. I love it. I agree. I agree. I want your thoughts on the end-of-life of data. The reason why this is important is that when companies started using digital systems, and even before, there were really no regulations about how long they could keep stuff, right? Outside of statutory things like, oh, you have to keep tax returns for X amount of time. What a lot of these Data Privacy regulations are doing now, and this is something that you mentioned a few minutes ago, is they're basically saying to companies, not "delete private data after seven years," but "delete data, or get rid of it, or anonymize it, or change it in some way, or de-identify it, once you reach the end of the purpose for the data." That brings the end-of-life data challenge to companies that never had to delete anything; they just kept everything. What are your thoughts about that challenge and how companies are thinking about it?

Nitin Singhal  13:58

Over a period of time, with the volume of data growing, there are a few aspects that sometimes clash. One is how long data needs to be retained, right? There is always data retention based on legal holds and many other things. The second piece is how long you can actually hold data based on your privacy contract with your customers and the business you are in. And the third thing is that you have to retain or delete data based on customer input. If I am in a B2C model and a customer comes back and says, hey, take me off the platform and delete all my data, I have to do that within X number of days, right? It is very challenging; sometimes retention and deletion clash, and you have to work out what you can delete and what you cannot, right? The most important thing that I have learned over time is to have a single place, an order of operations for your retention and deletion strategy, and to make it very, very defensible. When I say defensible: one of the key things from a regulatory perspective, and in how we practice data, is that things need to be consistent. I cannot inconsistently delete or retain data; I have to be very, very clear about my order of operations with respect to the policies I am adhering to. And having that order of operations manifested is critical. That is the practice I have maintained. In my past life and past companies, we retained the data we were bound to retain from a legal-holds perspective, and then we deleted data that needed to be deleted at periodic intervals: some data we cannot retain beyond 30 days, some data we can retain for X number of months, some data we have to delete based on an action from the customer. But every time we do that, we go back to the same order of operations and rules engine that dictates where it falls in the rule stack. I don't know if that answered the question, Debbie.
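The order of operations described above can be sketched as a small rules engine. This is a hypothetical illustration, not any company's actual system; the priority order (legal holds first, then customer deletion requests, then class-specific retention windows) and all class and field names are assumptions consistent with the description.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Record:
    created: date
    legal_hold: bool = False          # e.g. litigation or regulatory hold
    deletion_requested: bool = False  # customer asked to be removed
    retention_days: int = 30          # policy for this data class

def disposition(rec: Record, today: date) -> str:
    """Apply the rules in a fixed, defensible order of operations."""
    # 1. Legal holds override everything: the data must be retained.
    if rec.legal_hold:
        return "retain"
    # 2. A customer deletion request must be honored within the SLA window.
    if rec.deletion_requested:
        return "delete"
    # 3. Otherwise, delete once the class-specific retention period expires.
    if today - rec.created > timedelta(days=rec.retention_days):
        return "delete"
    return "retain"
```

Because every record flows through the same ordered checks, the outcome is consistent: the same inputs always produce the same disposition, which is what makes the strategy defensible.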

Debbie Reynolds  16:10

Yes, that actually did answer the question. I will talk a little bit about what you spoke about related to blind spots in data. My experience is that a lot of these blind spots happen because, let's say, a company wants to do a project. They're only thinking about the benefits of that project. And they're not really thinking about the risk or the downside. So, that can be a huge blind spot. What are your thoughts?

Nitin Singhal  16:38

Oh no, absolutely. This is one topic that I really love. Wherever I have gone in the last 15 years, every time I've talked about my experience, I've pitched an iceberg slide. My pitch is that the Titanic sank not because of what was on the surface but because of what it could not see under the water. That lack of awareness, that misunderstanding, is what causes accidents. When I'm driving a car, blind spots are what I have to take care of; at driving school, the biggest thing they teach is how to avoid crashes caused by blind spots. Now, I'll give you an example of data blind spots. Say I'm sourcing an email about a customer: the email comes from a user sign-on, or it comes from a two-factor authentication source. If I end up storing both emails in the same database without defining their lineage, I might use data that is coming from 2FA, which is strictly for security purposes, for advertisement emails, right? And then I will be in the soup if I don't know that, right? That is an example of a blind spot. Another example is that sometimes organizations, like data governance organizations, chase the metrics, saying that you have to make 100,000 data sets compliant. And they start producing numbers: we are 90% compliant; we have 100,000 data sets, and 90,000 data sets are compliant. Guess what, right? What is the blind spot here? The risk is not in the number of data sets; the risk is in the bad usage of the data sets. Typically, 80% of data sets are never used after their first creation; they are single-use data sets. So, in one of my past companies, where we had such a metric of 90% compliance, I looked at the usage level and found that only 20% of the data sets were actively used, and of the 10,000 non-compliant data sets, most were among the actively used ones. So our compliance metric was actually only 55%, right?
Again, the blind spot here was that we were missing a dimension based on usage. Another thing we have a blind spot about is cost. We are all working towards a world where everything is hosted on the cloud, right? If I take my car out of the parking space at my house and park it at San Francisco Airport, every single minute of that parking will cost me. If I take a data set that is hosted on my on-prem data systems and host it on a cloud system, the cost models will be very different. At times, we are caught unaware of it, and we make the wrong judgments. And guess what happens: the same team is then challenged with meeting the regulatory needs and also the cost constraints. So having awareness about where those issues are, where those risks are, and then being able to make the right decisions based on the cost factors is critical. Again, my pitch is to make sure that you're looking at the logs; logs are where you find answers to those blind spots. Never ignore them. Never trust anything based on the surface, but ask the questions that can help you reconcile your information.
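The usage blind spot in the compliance metric can be made concrete with a quick calculation. The totals come from the anecdote; the split of non-compliant data sets (9,000 of the 10,000 sitting in the active slice) is an assumption chosen to be consistent with the 55% figure quoted above.

```python
def headline_compliance(total: int, compliant: int) -> float:
    """Naive metric: share of all inventoried data sets that are compliant."""
    return compliant / total

def usage_weighted_compliance(active: int, active_noncompliant: int) -> float:
    """Risk lives in data sets that are actually used, not the full inventory."""
    return (active - active_noncompliant) / active

# Illustrative numbers: 100,000 data sets, 90,000 compliant -> 90% headline.
# Only 20,000 (20%) are actively used, and (assumed) 9,000 of the 10,000
# non-compliant data sets sit in that active slice.
print(headline_compliance(100_000, 90_000))      # 0.9
print(usage_weighted_compliance(20_000, 9_000))  # 0.55
```

The same raw inventory yields a 90% score on the surface and a 55% score once the usage dimension is added, which is exactly the iceberg effect.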

Debbie Reynolds  20:22

I agree, I agree. This is another deep philosophical conversation we can have, I'm sure, and it is about data literacy. I feel like sometimes people want to make a qualitative assessment when they really need a quantitative assessment, right? People love to put numbers on stuff, but not all insights are numbers.

Nitin Singhal  20:50

100 percent. The day we master data literacy, and data literacy is knowing your own numbers, right, your impact as a professional, your impact as a team, and your influence across the organization increase multi-fold.

Debbie Reynolds  21:08

I agree. I agree with that completely. Now, about AI and data lineage. We know, as data folks, that data lineage is really important. But as more companies start to use AI more and more, that data lineage issue becomes more important; for us, it has always been important to know where the data came from. But now we have regulators that are very interested in what companies are doing with data, especially if they feel that the data use could possibly create some type of human harm, and that data lineage question will become vital. What are your thoughts?

Nitin Singhal  21:48

Oh no, absolutely right. See, data lineage has always been a topic of interest, and one of the most complex things that we want to explore or uncover. AI has raised the scale at which we need to understand lineage; I will say the urgency and the speed at which we need to understand those insights is absolutely critical. My perspective on data lineage: first of all, I feel that there are multiple definitions of data lineage. Sometimes I look at companies that parse the job dependencies and show me a graphical representation of a data set; there is an SQL parser that represents the target table and the source table. My layman's definition of lineage is understanding the first point of ingestion of the data. A few minutes back, I gave the example of email, right? Now think about this: a single entity like my personal email can be sourced from, say, Acxiom, which is selling data from a demographic marketing perspective, and I might have a contractual relationship with them. The same email could be sourced from user sign-on, where I've given rights to that form to use my email because I'm using their service. Or the same email can be used for two-factor authentication, right? In all three use cases, if I push that into, say, some prompts for an LLM and do not give it insights about the lineage, guess what? The purpose of processing will be wrong; we will be making the wrong decisions, making the wrong calls, and using the data for the wrong reasons. This is something that, in my current responsibility at SnapLogic, we are paying a lot of attention to. We call it a responsible pipeline: it provides the lineage context as data moves, gets replicated, stored, processed, and replicated again in different systems, as part and parcel.
So, at any given point in time, you have insights about who the immediate parent is and where the data is coming from; from the first-point-of-ingestion perspective, context and metadata are part and parcel of every single data entity. So again, in my opinion, understanding data lineage from the first point of ingestion, as the context for making the right use, is absolutely critical. More and more companies will adopt AI with ease and confidence once that information is available; they can exclude and include lines of data depending on where they come from. I feel that it is our responsibility as technologists, and our responsibility as SaaS providers, to provide insights and context as we build bridges across data ecosystems.

Debbie Reynolds  24:41

I've never heard anyone say it that way. You're absolutely right. At the point that the data is captured, it has a purpose, right? The problem companies have is they lose that purpose. They forget what the purpose is, the data travels throughout the organization, and the same data ends up being used in other ways that may create other business or regulatory risks, because it was never intended for those other uses.

Nitin Singhal  25:09

I want to share this perspective: data lineage and human DNA are exactly analogous. Data lineage is not a bottom-up thing; it is always a push-down thing. Sometimes we think that we can look at data and infer lineage, but it is absolutely impossible. I would love to have a debate with those people, and, you know, maybe I would learn something new. Think about this: my DNA is passed on to me by my parents, right? In the same way, the lineage tag is always pushed down at the point of ingestion, where it gets manufactured. A data child gets lineage information from the data parent; it can never be the other way around. It is always pushed down, just the same way human lineage works.
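The push-down model can be sketched in a few lines: the lineage tag is minted once, at the first point of ingestion, and every derived record inherits it from its parent. This is an illustrative sketch; the class and field names are invented for the example.

```python
import uuid

class DataEntity:
    """A data record whose lineage is assigned at ingestion and
    pushed down to every derived child, never inferred bottom-up."""

    def __init__(self, value, source: str, parent: "DataEntity | None" = None):
        self.value = value
        self.id = str(uuid.uuid4())
        if parent is None:
            # First point of ingestion: the lineage tag is minted here.
            self.origin = source
            self.ancestry = [source]
        else:
            # A child inherits lineage from its parent, like DNA.
            self.origin = parent.origin
            self.ancestry = parent.ancestry + [source]

    def derive(self, value, step: str) -> "DataEntity":
        return DataEntity(value, step, parent=self)

# An email ingested from two-factor authentication keeps that origin
# through every copy and transformation downstream.
email = DataEntity("user@example.com", source="2fa_signup")
copied = email.derive("user@example.com", step="warehouse_replication")
assert copied.origin == "2fa_signup"
```

No matter how many times the record is replicated, the origin can be read off the child directly; nothing downstream ever has to guess it from the content.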

Debbie Reynolds  26:02

Right. That creates risk within organizations. I think that goes back to governance again. Some companies have not been really good at governance, and those types of companies, the ones that maybe haven't taken governance seriously, will have trouble with things like AI because they don't have that foundation in place to be able to track the data, to explain what's happening to it throughout the process and how it travels and moves through the company. What do you think?

Nitin Singhal  26:36

Oh, 100%, 100%. See, data governance is not a discretionary thing. Data governance is not an after-the-fact thing. Data governance is not just about having some tool in place that will do it automatically. It is an integral part of the responsibility of every single data practitioner, engineer, and product manager, in every single role in the company. It is a collective responsibility, not the responsibility of one group. When I started my data governance journey, at times I actually felt humiliated: hey, I'm chasing people, and they are pushing back, saying, okay, you know what, it will slow me down; if we do this, are you going to be happy? And then I look back and say, oh, it is not for my own personal happiness; it is to make sure that we look good in the eyes of our own customers and we meet our regulatory obligations in the most meaningful way. I'm glad that in the last 10 years, things have changed tremendously. The things that you are doing, Debbie, build that awareness, and many people like you are taking it as their corporate and community responsibility to build that kind of awareness. Now every single company, tech companies as well, pays attention; there is huge investment. It is no longer just a process governance aspect; it is very much part of engineering and the software development lifecycle. The only thing I say is, when we implement it, when we practice it, make sure that our design documents, our code scanning tools, and all of those things have automation built in to examine such vulnerabilities. The more we automate the rules, the better governance is going to be in the hands of people: less human intervention, more machine enforcement. With the advent of generative AI and AI practices across the board, I see all these practices scaling, and the adoption of governance will become more seamless as we advance.
Debbie Reynolds

I think so, I hope so. Definitely. I actually just wrote an article recently about some of the excuses or the problems that companies have when they're adopting AI. One of the biggest ones, I feel, is blaming the system: hey, I don't know what happened; it did some wacky things; I have no idea what happened. But you have to have that responsibility, that accountability, and transparency; all those things really engender trust. What do you think?

Nitin Singhal

Oh, no, 100%. See, we live in a very factual world. We cannot just put the blame on a glitch that never happened. It has to be supported by facts. And guess what, right? Our customers and our regulators are not going to give us credit for those system glitches. It is our own responsibility. With such a high degree of reputation and penalties at stake, I feel that in the last five or six years, those kinds of excuses have gone out the window, and accountability has become a first-class citizen in any organization's ecosystem. "Something happened" is no longer an acceptable answer, right? "Do you know why it happened? How do we avoid it?" are the right phrases going forward.

Debbie Reynolds  30:06

Now, let's talk a bit about human centricity and trust. When I'm talking with companies about data and data systems, I think one of the things that has been happening, especially around regulation, is that regulators want to see more human centricity. Organizations have always been very good at understanding why data benefits them, right? But we have to figure out how that data use benefits the individual. And I think when you have that trust, people want to give you good data, right? Companies that are not trusted get a lot of bad data, and they make a lot of bad insights as a result. What do you think?

Nitin Singhal  30:48

Let me rephrase the question: basically, how do companies understand the good and bad aspects of data and take action on that, right? All of this is very contextual and relative, good data and bad data. First is the definition of good, and in what context, right? Say I'm solving a business problem. I feel that I'm not getting enough insights; my operational processes are not collecting enough information; there are a lot of null values; there is a lot of inconsistency; and it makes it hard to make business decisions from the data, right? Now, the second part is the risk presented by the data. The risk is that if I do not understand the context of this data, I will end up making the wrong decisions. The way I see it, operational data quality is a business decision: by capturing too much data at the time of user sign-on, users will not sign on, and I might actually land myself in trouble. So, how do you do data minimization and baseline the definition of good and bad? One definition of good and bad is based on your operational process: okay, this is a baseline; we are actually not capturing null values. The second definition of good and bad is the depth of metadata and context that we want to infer. That is exactly where the biggest challenges are, and that is where all the data governance, privacy, and security aspects come into play: the content is one thing, and the context about that content is another thing. That is where organizations need to invest: at any point in time, no matter what shape the content is in, they cannot compromise on the metadata. The content is a property of your operational process.
If the user decides not to share their phone number, that is not bad quality; that is just how my operational process works. But not being able to figure out whether user data is coming from a sign-on or from two-factor authentication, that is bad. If we have metadata with nulls, that is bad. Content with nulls is a function of the operational process, but sometimes the quality definition can be very, very contextual. So, my pitch is: define content quality from the operations perspective, and define metadata quality from the end-to-end business operations perspective. And that is where you have to have a 100% score; at any given point in time, metadata quality should be good and known.

Debbie Reynolds  33:47

Right, right. So if they don't have the metadata, they lose the context, right?

Nitin Singhal  33:51

Exactly.

Debbie Reynolds  33:52

You lose the information about where it came from, what it is doing, why you should have it, and that becomes a problem. I think what you touched on is a very common thing that happens within organizations; we see it play out in news articles, and regulators get really concerned about it. Typically, and I think you mentioned this example earlier, it's when someone takes data from a single sign-on and uses that information for marketing, even though it did not have proper consent, did not have proper context, right? And that's a huge problem. I agree with you 100%. The reason that happens is that you don't really understand the initial lineage of the data, the origin of the data, and why you have it in the first place. Exactly.

Nitin Singhal  34:40

Because, see, take user consent, for example. I can start attaching user consent as the data gets into my system. But then the data is used by hundreds of people and gets replicated into different storage systems, and you lose that context, because users are using and copying data with a variety of tools, and not all the tools carry the context forward. That is what technology needs to implement: no matter what tool I use, the context needs to flow as part of the content. Today, content and context are separated; they should not be. Technology should enforce tight coupling of context and content.
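Tight coupling of context and content can be sketched as a record that carries its source with it, with every use checked against the purposes that source permits. The source names and purpose sets here are hypothetical, echoing the sign-on versus two-factor-authentication example from earlier in the conversation.

```python
from dataclasses import dataclass

# Purposes each ingestion source is consented for (illustrative values only).
ALLOWED_PURPOSES = {
    "user_signup": {"service", "marketing"},  # user granted marketing consent
    "2fa": {"security"},                      # strictly security use only
    "data_broker": {"marketing"},             # governed by a contract
}

@dataclass(frozen=True)
class Email:
    address: str
    source: str  # the context travels with the content, not in a side system

def use(email: Email, purpose: str) -> str:
    """Refuse any use the record's own context does not permit."""
    if purpose not in ALLOWED_PURPOSES.get(email.source, set()):
        raise PermissionError(
            f"{email.source}-sourced data may not be used for {purpose}")
    return f"using {email.address} for {purpose}"

print(use(Email("a@b.com", "user_signup"), "marketing"))  # allowed
# use(Email("a@b.com", "2fa"), "marketing") would raise PermissionError
```

Because the source tag is frozen into the record itself, a copy made by any tool still carries the context, and the purpose check works wherever the copy ends up.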

Debbie Reynolds  35:27

Wow, that's tremendous. Oh, my goodness. So if it were the world according to you, and we did everything you said, what would be your wish for privacy anywhere in the world, whether it be regulation, data, or human behavior? What are your thoughts?

Nitin Singhal  35:45

Philosophically, I feel that when we think about privacy, we should think of privacy as respecting our customer. If we do not implement privacy practices, we are not respecting our customers. The minimum our customers expect from us is their privacy and security, and if we ever compromise those things for the sake of features, we are not doing justice to our obligations to the customer. The right framing is that data governance and privacy are not separate things, or after-the-fact things; they are very much part and parcel of how we develop software and how we build products. Privacy cannot be a separate thing, and it cannot be the responsibility of a single group; it is exactly about how we think.

Debbie Reynolds  36:42

Yeah, I agree. I agree. I think what has happened, because of the rise of these privacy regulations, is that a lot of companies may not have thought about this initially, but a lot of people are getting the memo now: privacy, especially when you're dealing with the data of humans, has to be foundational, and that context really does need to travel through the organization.

Nitin Singhal  37:05

Let me add one more thing; you're absolutely right. The rules and regulations are evolving. We're not just living in a world where one GDPR governs the entire world, right? We have rules at the continent level, rules at the country level, rules at the State level. And think about this: different governments in different parts of the world are trying to force data localization as well. The thing that I pitch, as a technology practice, is to try to capture the context of data at the lowest granularity. That is what the jigsaw puzzle is. That way, anytime in the future we have to adopt a new rule, it is not like digging up the earth again; it is about just plugging those rules into place. Again, it is not as easy as I make it sound. But my learning over the last 15-20 years is that the more you can reconcile data, the more you can make sure that data is auditable, and the more you have the context about data at the lowest granularity, the more you can meet those evolving obligations, current and future, with much more ease and cost-effectiveness.

Debbie Reynolds  38:28

I agree, right? You shouldn't have to recreate the wheel every time a new law comes out. So understanding those foundational principles, understanding just being responsible, understanding what people really want, I think, helps organizations move in the right direction. So well, thank you so much. This has been tremendous. Wow, you really dropped some bombs on us. Some knowledge bombs. Thank you.

Nitin Singhal  38:52

Oh, thank you so much. Thank you for having me on the show. Thank you so much.

Debbie Reynolds  38:56

Oh, my pleasure. My pleasure. I look forward to the possibility of collaborating in the future.

Nitin Singhal  39:04

Best of luck to you; you're doing an amazing job.

Debbie Reynolds  39:06

Thank you. Thank you. That's so sweet. We'll talk soon. Thank you so much.

Nitin Singhal  39:11

Thank you. Take care. See you.
