Debbie Reynolds Consulting LLC


E203 - Darren Spence, Chief Revenue Officer, SmartBox AI (UK)

Find your Podcast Player of Choice to listen to “The Data Diva” Talks Privacy Podcast Episode Here


The Data Diva E203 - Darren Spence and Debbie Reynolds (43:55)

SUMMARY KEYWORDS

data, companies, information, people, technology, redacted, organizations, email, share, requests, breach, personal, systems, uk, privacy, business, ai, regulation, regulator, document

SPEAKERS

Debbie Reynolds, Darren Spence

Debbie Reynolds  00:00

Personal views and opinions expressed by our podcast guests are their own and are not legal advice or official statements by their organizations. Hello, my name is Debbie Reynolds. They call me "The Data Diva". This is "The Data Diva" Talks Privacy podcast, where we discuss Data Privacy issues with industry leaders around the world with information that businesses need to know now. I have a very special guest on the show all the way from the United Kingdom, Darren Spence. He is the Chief Revenue Officer of SmartBox; welcome.

Darren Spence  00:39

Well, yeah, thanks, Debbie, for inviting me onto the show. It's great to be here.

Debbie Reynolds  00:44

Well, I'm excited to have you here. We've gotten to know each other over the last few months. I'm very impressed with you and the things SmartBox is doing. I'm very impressed with the technology, and I would love for you to be able to talk about it, but before we get into that, tell me about your trajectory and your career and how you became the Chief Revenue Officer of SmartBox AI.

Darren Spence  01:09

I actually joined the IT industry in 1996, working for a very small reseller called Bytes Technology Group. I was there for 17 years, man and boy, and during that period I became a director of the company. The great thing about working in the IT sector is you get to meet some great people. One of the opportunities I had was being nominated for Director of the Year; by this point, we were part of a very big IT company based in South Africa, and every year they held awards to recognize some of the work their directors were doing. I remember this specific occasion. I was 33, I think the youngest director in the company at the time, and I'd been invited down to Johannesburg to attend this big awards ceremony. There were 500 people there, and the guest speaker was FW de Klerk. Just seeing him was quite amazing, because he was the former president who released Nelson Mandela, which must have been quite a big decision for him to make. Listening to him speak was quite amazing; he talked about always doing the right thing in business, and obviously his right thing was releasing Nelson Mandela. So that was quite remarkable. Then, after he finished talking, they announced a second guest speaker, and it was Nelson Mandela, and you could hear a pin drop in the room. This quite frail ex-president came onto the stage and talked about more of the same, really: doing the right thing in business. After I got over the joy of hearing these two great ex-presidents speak, I had a chance to meet them. So it's funny, isn't it? You do a good job, you do your best, you get promoted, and then you get the opportunity to meet some great people. I was part of the Bytes journey for quite some time. The business I ran for Bytes was their Xerox document solutions business, and we grew that to be the biggest Xerox partner in the UK. We eventually sold that business in 2014. I then left Bytes after 17 years and set up a consultancy business where I helped IT companies; I worked with Microsoft and a few other very large vendors on different things to help them grow. On that journey, I reconnected with the team that founded SmartBox, whom I'd actually known since my Bytes days. We'd kept in touch, and we'd always talked about doing something together, but the timing wasn't quite right. Now I had this new freedom; I didn't really have a ceiling to my career anymore. So I joined SmartBox as a consultant, initially helping them understand how to take the solution to market, and then I joined as a full-time director and board member in the summer of 2022, which brings me up to date.

Debbie Reynolds  04:30

Well, that's a tremendous journey. Before we get into some of the finer details, why don't you explain what SmartBox does? I really love the technology, by the way, so I want you to be able to tell people what it does.

Darren Spence  04:46

So we provide an applied AI solution that organizations rely on to detect and redact personal and sensitive data that's hidden within their unstructured data sets. By unstructured data, what do we mean? Email systems, Teams chats, Slack conversations, WhatsApp chats. We find personal and sensitive data that might be within those data repositories, and then we enable those same organizations to meet their regulatory responsibilities that govern how they must respond to data requests and cyber incidents. So that's what we do: we find the needles in the haystack, which really empowers our clients to respond to certain requests made of them.
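To make the find-and-redact idea concrete for readers, here is a minimal sketch of pattern-based detection over raw text. This is illustrative only, not SmartBox's method; their applied-AI pipeline is not described in this episode, and real systems layer machine learning and entity recognition on top of simple patterns like these to cut false positives.

```python
# A minimal, illustrative sketch of pattern-based PII redaction.
# Not any vendor's actual pipeline; real systems combine regexes,
# checksums, dictionaries, and ML/NER models.
import re

# Hypothetical patterns for three common PII types.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "UK_PHONE": re.compile(r"\b(?:\+44|0)\d{9,10}\b"),
}

def redact(text: str) -> tuple[str, dict]:
    """Replace matches with a typed placeholder and count what was found."""
    counts = {}
    for label, pattern in PATTERNS.items():
        text, n = pattern.subn(f"[REDACTED:{label}]", text)
        counts[label] = n
    return text, counts

clean, found = redact("Contact jo@example.com, card 4111 1111 1111 1111.")
print(found)   # {'EMAIL': 1, 'CARD': 1, 'UK_PHONE': 0}
print(clean)   # Contact [REDACTED:EMAIL], card [REDACTED:CARD].
```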

Debbie Reynolds  05:37

Yeah, the technology is very slick. I really like it. I think the problem that you're solving has been a problem from the early days of digital systems, which is: how do you wrangle sensitive data in unstructured data sets, especially if you have to produce or turn over that data to someone else? The challenge over the years, back in the paper days, was that when people needed to redact data, someone would be in a room with a bunch of markers, putting black marks on paper, or people would do that and then scan the paper and make a PDF. But in the digital age, the data is so voluminous that you really can't do that anymore, and so there have been very few companies that have tried and succeeded to really tackle this redaction issue, because even people in high places at these big companies are still doing redaction in very manual ways. It's very time-consuming and very laborious in a legal process. Actually, the statistic that I've seen is that a redacted document is the most expensive document to be reviewed in a legal process, because it may be looked at or handled eight or nine times. So, who has time for that?

Darren Spence  07:13

That's something that we've seen a lot of in the UK, the US, South Africa, and the other markets that we operate in. Like I said, there are two main use cases that we help organizations with. One is these data requests, which can include people requesting their medical information, their educational information, or their children's educational information, together with a whole range of other things. Certainly, if I look at some of the local authorities that we deal with, or healthcare providers, they're printing off information, they're then eyeballing that information and taking a big black pen and physically redacting it, and then they're photocopying that information and dispatching it to the subject. It's very time-consuming. Also, as human beings, we miss things. If we're trying to find personal data in a ream of 1,000 pages, we're going to miss things. So the message that we keep being told is: is there a better way of doing this? That's really why we built the solution. Just to bring a couple of those use cases to life: I mentioned medical records. The interesting thing about medical records is that not only do you have to find and redact information, but more often than not, the medical information of one person gets mixed up with the medical information of another person, and if you share a medical record that is contaminated in that way, you're exposing yourself to quite significant fines and risk. That's one of the things we see when you look at a manual process. The other thing as well is just the boredom. Talking to people who are spending all day going through these huge data sets, it's so time-consuming that their job satisfaction isn't quite there. So we're trying to help organizations while trying to improve the quality of people's day jobs as well.

Debbie Reynolds  09:19

I think two things that are happening in privacy bring this full circle. One is that there are more laws and regulations around data subject access requests, where companies in the past have not had to be transparent or had to produce information to consumers. So a lot of companies set up a mailbox or an email address or a phone number where they're obligated to have someone be able to contact them and ask for these records. Then, what happens within the organization is that the company scrambles to try to fulfill the request, and a lot of it, again, is as manual as printing things or looking at them eight or ten times before they go out. But I think one of the other things that is happening is that we see a lot of data privacy laws have carve-outs for companies that can make things anonymous. If you can anonymize things in such a way that people's personal information is not exposed, it lowers your risk when you're transferring that data, and I think a lot more companies are trying to go down that path, because they know that may be a way for them to use data further, since they've anonymized that personal information, or it really helps them with this DSAR process. What are your thoughts?

Darren Spence  10:57

Yeah, 100%. Just on your first point about some of the privacy regulations, and you know this better than anyone, Debbie, given the nature of what you do: not only are there specific rules, particularly in the US, that govern how different types of data need to be managed, whether that's health information or, as I mentioned, education or freedom of information, with very specific rules at both the State level and the Federal level, but equally, if you look outside of the US: in the UK, we have the UK GDPR; in mainland Europe, there's the GDPR; in South Africa, you have the POPI Act. You've got all these different regulations around the world that actually have global reach. So what we're finding is that big companies in the United States that are working with our technology use it to solve not just local privacy problems but global ones, where they might be handling UK data, or data for UK subjects, and they have to adhere to the UK GDPR, and we're helping them do that. Organizations that do have an international footprint tend to be leveling up: they're implementing the processes they need to handle the toughest regulation in the world, because if they can do that, then, in theory, they should never fall foul of some of these other very strict privacy rules. I mean, you must see this all the time. How many different pieces of privacy regulation are there in the US? There's got to be a ton.

Debbie Reynolds  12:45

Yeah, there are a lot, because we have Federal, State, and some local regulations; we have some cities, actually, that have very specific regulations, and it's just coming fast and furious, especially at the State level, for sure.

Darren Spence  13:02

Oh, definitely. What we're also finding, because we're living in this very complex world that is getting ever more complex, is that organizations are having to turn to people like yourself for that very specialist help, which is why we're really pleased to be working with you, and also with specialist law firms. We deal a lot with law firms on both sides of the Atlantic, where those firms are having to take on these very complex tasks of producing a redacted document. You mentioned just a moment ago that a redacted document is one of the most expensive documents to produce. We've certainly seen that: we have end customers that may have had to respond to a data request, and they might have only had three or four, but they're being charged $50,000 to $60,000 for each one just to produce a redacted document. That's very expensive, and that's before any litigation work takes place, so producing that redacted document efficiently is absolutely key. And I agree 100% that people are better off anonymising stuff, or over-redacting if they have to, because they are reducing the risk of non-compliance.

Debbie Reynolds  14:20

I think that's true. I want to talk a little bit about a conference that I attended. I went to a Federal working group conference in Washington, DC, a few months ago, and they were talking about freedom of information and the challenges of being able to fulfill those requests, because a lot of times different agencies have to work together on these redaction requests. I actually saw a chart, and I hadn't really thought about it this way, but a lot of data becomes available for freedom of information around 25 years after it's created. So think about what was happening 25 years ago: that was the early stage of the Internet, the early stage of things being digital. What they showed in this chart is how much more data has been created in the government since that 25-year mark, so they're going to be dealing with an avalanche of more data. Because there's more data in digital systems, more information is going to have to be made available. Right now, people are scrambling to fulfill these requests even for things that were on paper, or in smaller digital systems back then, and this has totally blown up now. What are your thoughts about that?

Darren Spence  15:51

I think it's really interesting. Not only is the amount of data this year going to be exponentially larger than last year, but if you think back 25 years, I would have been 26 then, and there wasn't the awareness of what we should and shouldn't be writing down, because we probably never thought we'd be in this position. So a lot of the information that was captured in those documents, and a lot of the information that might be captured in legacy email systems, is probably going to contain information that, with hindsight, we should never have written in the first place, because we didn't think it would see the light of day. So on the one hand, you've got an enormous amount of additional data, but on the other hand, you've got so much risk potentially contained within that data set, for the reason I just explained, that it's going to require even more inspection than ever. Now, when people write documents or emails, there's a lot more awareness about what you can and can't write; there's a way to behave, and people are much more aware of that. That wasn't the case 25 years ago. So you're absolutely right: the amount of work that's going to be needed to go through the volume and the type of content is vast. I think there's going to be an explosion this year, and every year for the next 15 to 20 years it's going to be bigger, and the job at hand is going to be even harder. We do a lot of work in the government sector, and a lot of the documents produced in the last year need to be heavily redacted because they contain confidential information. Governments, of course, get requests from citizens, from media outlets, from students, from all sorts of people. So these documents today need to be heavily redacted for their audience. But like I said, if you think about 25 years ago, the disclosures after redaction are probably going to be so heavily redacted that they're almost pointless, but that's the level of redaction that I think is going to be needed.

Debbie Reynolds  18:07

Now we have Artificial Intelligence in the enterprise, where companies are creating even more data today. So how does that make the challenge even harder for companies trying to handle either data subject access requests or other requests where they have to turn over documents to people?

Darren Spence  18:28

Let me put it this way. If you look at the amount of personal data and confidential data that is captured in email, it's vast. For example, let's take the education sector, schools and colleges. In many cases, financial information, credit card details, is being captured via email communications between the finance department in a college and a parent looking to pay for school trips or whatever. So in an email system, there are already credit card details. There might be a school trip where people are sharing passport details, or medical information about vaccinations if they're traveling. Email has been used, because it's convenient, to capture all this information, but clearly that information should reside in secure databases that are there to serve the specific purpose of keeping data safe. Now, we've been contacted in the past by companies in the US, so outside of data requests, that have suffered a cyber incident. Depending on where you are in the world, you have to respond to your regulator within a very short period of time. The SEC in the States stipulates that if you have a cyber incident, you have eight hours to respond if your business is providing critical infrastructure; outside of that, you have four days. In the UK, we have 72 hours; in mainland Europe, you have 72 hours; in India, you have two hours. So depending on where you are in the world and the type of industry you're in, you have to respond quickly to the regulator, and you have to try and size how many personal records could have been compromised. Well, what we're finding is that a lot of that personal data is currently residing in email systems, for the reasons I've just explained. We were contacted by a company, a luxury services provider in the US, not so long ago; they had a breach. They didn't know how much personal data was contained within the blast zone, and the blast zone was one of the directors' inboxes. Within this inbox there were thousands of emails, and within those thousands of emails there were passport numbers, credit card details, passwords, email addresses, and a whole range of other information. But they had no way of quickly sizing the problem, so they gave us a copy of every email in this person's inbox, and what we had to do, using our technology, was go through every email. One email might say this email is related to Darren Spence, and then a few emails later another might contain my date of birth, my address, or my credit card details. So my personal data, my data points, were actually spread across a number of different emails. What our technology was able to do was go through this mass of emails and stitch together the data points that we could find on each person, very quickly, so we could return to the client and say: this is all the information we have found on Darren; we've got his name, his email address, his password, his credit card details, his social security number, and all the rest of it. So as I said, email is being used as a convenient vehicle to transfer information, but the risk in those email systems is exponential now, and when you then consider that people are extending conversations into corporate WhatsApp groups, Teams chats, and Slack chats, this information is all over the place. It's also in unsupported third-party cloud systems. It might be in Dropbox.
It might be in other systems that need to be interrogated, and all of a sudden you've got a very chaotic landscape where personal data is in all sorts of places. So, as an IT director or as a chief information security officer, if the worst happens and you have to respond to a breach, it's a very complicated, very hard job, first of all, to size the problem. If it's a data request, and someone, a subject, has requested their data, the same thing has to happen. Organizations are having to go to multiple different data sources. First things first, they've got to bring all this information together: I've got Darren's health records over here, I've got his contract of employment over here, I've got his bank statements over here. All these different pieces of information need to be collated, and then, when they're collated, they need to be analyzed to make sure there's no other data within that data set that shouldn't be there, such as third-party information, other people's details that might be contained within my information. So it's a two-stage process. The first is to find out where all the different data points are and bring them together; the second is to review that amalgamation of data to make sure you're not sharing third-party data. So it's a very complex business. It sounds quite straightforward, please share my data, but when it comes to actually responding to that, it is incredibly expensive and incredibly time-consuming. And if you get it wrong, and if you share particularly protected data, I'm talking about medical records and other such types, then the consequences can be quite severe.
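For readers wondering what "stitching together the data points" might look like mechanically, here is a toy sketch. It assumes a hypothetical upstream extractor has already tagged each finding with a subject name; in practice, linking fragments to the same person is an entity-resolution problem, and this grouping step is the easy part.

```python
# Illustrative only: once an extractor has tagged PII in individual
# emails, the "stitching" step groups those fragments by subject so a
# breach can be sized per person. Names and structures are hypothetical.
from collections import defaultdict

# Hypothetical extractor output: (email_id, subject_name, pii_type, value)
findings = [
    ("msg-001", "Darren Spence", "email", "darren@example.com"),
    ("msg-047", "Darren Spence", "passport", "123456789"),
    ("msg-212", "Darren Spence", "card", "4111 1111 1111 1111"),
    ("msg-009", "Jane Doe", "email", "jane@example.com"),
]

def stitch(findings):
    """Amalgamate scattered data points into one profile per person."""
    profiles = defaultdict(lambda: defaultdict(set))
    for msg_id, person, pii_type, value in findings:
        profiles[person][pii_type].add(value)
    return profiles

for person, data in stitch(findings).items():
    print(person, {k: sorted(v) for k, v in data.items()})
```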

Debbie Reynolds  23:58

The two points that you made are really interesting; I hadn't thought about it that way. One is that when a company has a breach, they have a very short period of time to be able to report to a regulator about what's happening with that breach, and it's hard to get that information. We see companies, for example, that'll say, hey, we had a breach, we think this may have impacted certain customers, but we don't know how many. So being able to use the technology to get an idea of the customers that are impacted, the records that have personal data in them, I think, is very helpful. But then also, some organizations create breaches themselves, because they send out data that has other people's personal information in it, whether it's health or finance. So I can see it in both cases: one is as a check before you actually send things out, to make sure you're not creating a breach; the other, in a situation where there is a breach, is being able to understand the scope and scale of the personal data that has been let out.

Darren Spence  25:12

Absolutely, it's easy to share data accidentally. There's a case in the UK, in Northern Ireland, from just last year that's in the public domain, so I encourage people to do their own research. The Police Service of Northern Ireland, or the PSNI, provides two functions in Northern Ireland: one, they are the national police force of Northern Ireland, and second, they are the counter-terrorism unit in Northern Ireland. So they employ officers to do some quite tough policing work, both at the terrorist level and the normal national level. A Freedom of Information request was put in to them, which is fine; people have the right to do that. And someone within the PSNI accidentally shared a spreadsheet that had a hidden tab on it, and on that hidden tab were the details of 10,000 police officers. Just think about that for a second. The person responding to that request saw this Excel spreadsheet and didn't think too much of it, couldn't see any information that might cause harm or conflict with the regulation, but actually shared personal data: the contact details, the names and addresses of police officers who clearly don't want their details in the public domain. I think that shows the complexities when you're dealing with this highly sensitive data. Now, our technology would have stopped that in its tracks, because as that Excel spreadsheet went through our system, it would have picked it up and raised a flag to say we found 10,000 personal records on that spreadsheet, and it would then force the operative, the administrator, to go back and validate and check to make sure that no personal data was being compromised. These things aren't done maliciously; there's data in places it probably shouldn't be in the first place, and it's very easy for someone who is probably overworked to quickly look at something and send it out without too many checks. It's not necessarily a blame game. Certainly, you've got to understand the core problem and why it happened, and you've got to fix that. But like I said, information gets shared accidentally, and that adds to the problem, because if the wrong information is shared, or if highly sensitive information is shared, it can be very problematic for the organization, as was the case with the PSNI; I believe the chief executive had to leave.
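A pre-release check of the kind Darren describes could, in its simplest form, look something like the sketch below: open the workbook, inspect every sheet including hidden ones, and flag bulk personal data before the file goes out. The threshold and the email-only PII test are stand-ins, not how any particular product works; openpyxl is assumed to be available.

```python
# Sketch of a pre-send spreadsheet audit: look at every sheet,
# including hidden ones, and flag anything that looks like bulk
# personal data. The PII test here is deliberately crude.
import re
from openpyxl import load_workbook

EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")

def audit_workbook(path: str, threshold: int = 100):
    wb = load_workbook(path)
    for ws in wb.worksheets:
        hits = 0
        for row in ws.iter_rows(values_only=True):
            hits += sum(1 for cell in row
                        if isinstance(cell, str) and EMAIL.search(cell))
        # sheet_state is 'visible', 'hidden', or 'veryHidden'
        if ws.sheet_state != "visible" or hits >= threshold:
            print(f"FLAG: sheet '{ws.title}' ({ws.sheet_state}), "
                  f"{hits} email-like cells - review before release")

audit_workbook("foi_response.xlsx")  # hypothetical file name
```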

Debbie Reynolds  27:54

Oh, wow. Oh, my goodness, I heard about that one. One other thing: we're talking about breaches, but there's another category of thing that regulators are very concerned about, and that's unauthorized access. Maybe someone within an organization is viewing something that they shouldn't have seen, whether it's health or finance, and that may be a reportable incident to a regulator. When that happens, there often is an investigation, and there often have to be certain types of reports or things that are given to a regulator about that type of incident. So having companies proactively know where those danger spots are, I think, can really help them in a proactive fashion. What do you think?

Darren Spence  28:42

Yeah, I think that's a really great thing to highlight. If you look at the cybersecurity world, there are an awful lot of vendors that specialize in technology that stops the bad guys from getting in. There's technology that's monitoring for viruses, and there's technology that's trying to educate employees against social engineering and phishing attacks. There's lots of technology in place to protect an organization, but if the problem exists within the organization, that's another part of the thing to fix. The way that we would handle that is we would encourage organizations to be a little bit more proactive and use technologies such as ours, and there are others on the market, to look within an IT environment and highlight where personal data could be residing; it probably shouldn't be in many of those places. I gave the example just a minute ago about that spreadsheet, but this means highlighting that there's lots of personal data in this particular person's inbox, or their SharePoint, or their OneDrive, or whatever. Once you know where it is, you can either move it, delete it, or secure it, but you need technology to help you pinpoint exactly where it is. Going back to my previous point, it's very easy to accidentally share information internally, but once the information has been shared, it should be moved, deleted, or secured in many cases, because otherwise, if you suffer a breach, it could be exposed. Or, worse than that, you could have a malicious insider who could decide to share that information; particularly if it's trade secrets, intellectual property, or high-value content that could actually have a street value, then employees might be more motivated to take the information they have access to and share it with the outside world. So it's important to have a view of where the personal data is, and then, in addition to moving, securing, and deleting, just make sure that different people have the right permissions and access rights to the data. If it is highly sensitive and highly personal, then let's make sure it can only be accessed by people with the right access rights; that technology, permissions and access rights, has existed for many years. That's really what we try to do: get people to work proactively and, first, move things out of harm's way. In the worst case, if you end up in harm's way, then let's quickly size the problem so you can deal with it in an appropriate manner.
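As a rough illustration of that proactive inventory idea, the sketch below walks a file share and reports where PII-like content is pooling, so it can be moved, deleted, or locked down. The root path, the file types, and the single email pattern are all hypothetical simplifications, not a description of any product's scanning.

```python
# Toy proactive PII inventory: walk a file share, count PII-like
# matches per file, and report the worst offenders first.
import os
import re

PII = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")  # emails as a stand-in

def inventory(root: str, min_hits: int = 10):
    report = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if not name.lower().endswith((".txt", ".csv", ".log")):
                continue  # only plain-text formats in this sketch
            path = os.path.join(dirpath, name)
            try:
                with open(path, errors="ignore") as f:
                    hits = len(PII.findall(f.read()))
            except OSError:
                continue
            if hits >= min_hits:
                report.append((hits, path))
    # Worst offenders first: these are the files to move or secure.
    return sorted(report, reverse=True)

for hits, path in inventory("/srv/shared"):  # hypothetical share
    print(f"{hits:5d}  {path}")
```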

Debbie Reynolds  31:30

Yeah, and as you were talking, I was also thinking about the problem within organizations of duplicates. Take the example you gave about the Excel spreadsheet with the 10,000 police records: that spreadsheet was probably duplicated multiple times in the organization. So maybe one copy went out where someone manually redacted it or did something to that one record, but it may exist multiple other times and not be redacted the same way. What do you think?

Darren Spence  32:03

Absolutely, and again, we see that all the time. Take emails. Say I sent you an email, and then you forwarded that email to a colleague and said, oh, look at this, this might be of interest, and then that colleague sent it on to somebody else. My email has been duplicated a number of times; in most cases, that's all business as usual, and there's nothing wrong with that. But the email might be talking about a particular employee, and it might contain information about that employee that is very sensitive; it might be derogatory, they might be talking about a person in a way they shouldn't be. So, that email has been duplicated many times, and if the person who is the subject of that conversation requests their data, then it's important that all the emails, even the duplicates, get considered. Again, our technology can look for that, and it can immediately reduce the duplicates so there's only one version of the truth, which means that if you have the job of reviewing the data, your data set is reduced. We tend to find that we can reduce a data set by about 40% just by taking out the duplicates. You're absolutely right: just take that spreadsheet with the 10,000 officers; it will absolutely be living in a number of different inboxes, and you do not want anyone to have access to information like that. It should be in a secure database, nowhere near anybody's inbox, where it's easy to share. So finding duplicates is a real challenge, and it's easy to miss, and if it's down to human beings, then you are almost certainly going to be accidentally sharing information that could have been protected.
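The deduplication step Darren describes can be illustrated with exact-match hashing, as in the sketch below: normalize each message body, hash it, and collapse trivial copies to one reviewable "version of the truth". Production tools typically add fuzzy and near-duplicate matching on top; the roughly 40% reduction he quotes comes from their data, not from this toy.

```python
# Minimal sketch of email deduplication by normalized content hash.
import hashlib
import re

def fingerprint(body: str) -> str:
    # Strip quoted lines and collapse whitespace/case so trivial
    # copies (forwards, re-sends) produce the same hash.
    normalized = re.sub(r"^\s*>.*$", "", body, flags=re.MULTILINE)
    normalized = re.sub(r"\s+", " ", normalized).strip().lower()
    return hashlib.sha256(normalized.encode()).hexdigest()

def dedupe(emails: list[str]) -> list[str]:
    seen, unique = set(), []
    for body in emails:
        fp = fingerprint(body)
        if fp not in seen:
            seen.add(fp)
            unique.append(body)
    return unique

batch = ["Please review the attached.",
         "please   review the attached.",
         "PLEASE REVIEW THE ATTACHED."]
print(len(dedupe(batch)))  # 1: all three collapse to one reviewable copy
```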

Debbie Reynolds  33:56

Absolutely. So what is happening in the world right now, especially in the privacy and data protection sphere, that is concerning you, something that makes you say, wow, this is going to be a big issue, or something that we need to really think about?

Darren Spence  34:12

I think people are sharing their data far too easily. Sign up to an app on your phone, whether that's a social media site or anything else, or log into a new app using your Google credentials or whatever. How many people actually read the terms and conditions? We just scroll through: accept, accept, accept. Most of the apps ask, are you happy for this app to track your other activity? And we're just giving away our privacy, our personal data, very, very easily. The challenge, I think, comes back to social engineering. We're seeing very, very sophisticated attacks now; with ransomware, you can buy the code from the dark web if you want to instigate an attack on a business or an individual. That information is readily available. And we're sharing all this data. People share their mother's maiden name and the place where they were born, some of the information you would use to prove that it's you, far too easily, so it's very easy to replicate who you are. The other part that scares the hell out of me, I don't know about you, Debbie, is deepfake voices and facial videos of people. My Mom's pretty astute, but if she picked up a voice message that sounded like me, had my mannerisms, and referenced things that probably only I would know, but it wasn't me, and it was asking my mother to share financial information or whatever, she's far more likely to fall for that than if it was just text, and the same goes for video. I think we're in a very dangerous time now where the more information we share, coupled with the advances in Gen AI, the easier it's going to be for hackers to get into our systems, to steal our identity, to steal our money. It worries me. Social engineering, just clicking on a link, is one thing, but when you've got very sophisticated audio and video to support that, people are going to fall for it. It's not their fault; they're being conned very easily. So just be even more cautious than ever before when accepting and signing up to terms and conditions. Even things like, and this has been in the terms and conditions on Facebook for some time, I think, and if they've updated their privacy policy then great, I'm not that close to it, but some of these social media platforms have the right to use any photos that you share. If that is the case and you're sharing photos of your kids or your family or yourself, then it's a lot easier for those AI companies to produce videos of you that aren't you and to try and trick people. So it's a dangerous time that we're in, and I just hope people think twice before they share everything about themselves.

Debbie Reynolds  37:46

I think that's true. I think people do overshare. I feel like in the pre-tech days, data was more curated, and people put more thought into what got written or typed out. It was very curated. Right now it's really not, so we're throwing so much extra data into data systems than was ever there before. I think that's definitely a challenge. But if it were the world according to you, Darren, and we did everything that you said, what would be your wish for privacy or data protection anywhere in the world, whether it be regulation, human behavior, or technology?

Darren Spence  38:33

Wow. Crikey, you've really put me on the spot, Debbie. That's good. I think you need a bit of everything. I do think you need regulation. I think the social media platforms need to be very carefully looked at. I'm all for freedom of speech, and that's great, provided the speech is accurate. We had an incident literally just last week in the UK, a stabbing in the news, where a guy stabbed some girls, and on social media it was shared that this guy was an illegal immigrant, and he wasn't; he was born in Cardiff. That inaccuracy led to some of the most violent riots that we've seen in the UK, and it was based partly on a falsehood, a lie, and the social media platforms played a part in spreading that news. So I think there's got to be something around that. I'm not sure what the answer is, but there's got to be a level of responsibility, and I think one of the challenges is that social media companies are classified as technology companies, not as publishers. If they were publishers, like some of the news agencies, the rules would be much, much stricter. So I think there's got to be a balance there. I also think education has got to start from a very young age, teaching children, and adults for that matter, that not everything you see on the Internet is real. If that means you aren't quite so trusting of strangers, then that's fine. I was brought up to always be wary of strangers walking down the street; if that protects us and our children, then that's okay. So let's educate people. So regulation, definitely education, and also technology. Technology can play a big part in this as well. Just take the Gen AI piece. With OpenAI, it's very easy to have content created for you, but I think those same companies that generate content should also be able to confirm very quickly whether a piece of content, whether video, audio, or text, has been created by a Gen AI technology, and I think OpenAI does do this. So I think there's a responsibility there: if you are unsure whether something is genuine, particularly if it may have been generated, then you should be able to very quickly upload the video to a piece of technology and ask, has this been created or generated by an AI platform? So I think all three of those things apply, and I don't think we're that far away from any of them. The technology exists to validate whether something is genuine or not. The education certainly is there, and the social media platforms could do this a lot better; they're choosing not to because it's expensive, and you could argue that it limits freedom of speech, but they could do it. So I don't think we're a million miles away from where we want to be, or need to be, as a society. But let's try and make the world a little bit kinder, and that starts with making sure that people are making decisions and behaving based on facts they've read, and not on something that plays into an echo chamber. We all live in echo chambers; we tend to consume content that suits our beliefs, and because of that, it's quite easy to trick us. So let's try to make sure that the information in our echo chamber is factual before we make quick decisions and behave badly.

Debbie Reynolds  42:19

That is tremendous. Thank you. I agree with that. The technology is there for us to be able to figure this out, but I think it takes people, and it takes the companies that have this power, to want to help solve this problem, because it is a big issue.

Darren Spence  42:36

Yeah, definitely.

Debbie Reynolds  42:39

Well, thank you so much. It's always fun to chat with you, and thank you for sharing all the things that you do; you're doing a wonderful job. I love what SmartBox is doing. It's funny, because I know I'm "The Data Diva", so when we were first talking, I was asking you all some very deep questions about how your tech works, and you answered all my questions. So I was like, okay, yeah, you guys are on the right track. I'm happy to support what you all are doing, and I'm happy to see that you're trying to solve this hard problem, a problem that a lot of people just gave up on; they didn't think there was any good way or good solution to solve it. But I'm glad you're working in this space, because it's a really hard problem.

Darren Spence  43:24

Debbie, I love working with you. Thank you so much for inviting me on today and giving me a chance to talk about what we're doing, and I look forward to continuing our chats. I'll see you in the US; we're coming to the US.

Debbie Reynolds  43:35

Oh, excellent, excellent. Yeah, I'd be happy to see you. All right, we'll talk soon.

Darren Spence  43:43

Thank you. Okay.

Debbie Reynolds  43:44

Thank you. Bye.