E141 - Craig Hamill, Director of Innovation and Technology at UL Research Institutes
SUMMARY KEYWORDS
information, people, ai, working, technology, data, build, world, privacy, generated, create, transparent, model, amplifies, drive, feed, feel, safety, product, component
SPEAKERS
Debbie Reynolds, Craig Hamill
Debbie Reynolds 00:00
Personal views and opinions expressed by our podcast guests are their own and are not legal advice or official statements by their organizations.
Hello, my name is Debbie Reynolds; they call me "The Data Diva". This is "The Data Diva" Talks Privacy podcast, where we discuss Data Privacy issues with industry leaders around the world with information that businesses need to know now. Our special guest on the show is Craig Hamill; he is the Director of Innovation and Technology at UL Research Institutes. Welcome.
Craig Hamill 00:38
Hey, glad to be here. It's been a while since we last talked, and I'm looking forward to the conversation.
Debbie Reynolds 00:45
Yeah, right. Right. So the last time you and I chatted, you were the Director of Information Technology at the Pritzker School of Molecular Engineering at the University of Chicago.
Craig Hamill 00:56
I was, yeah; that's definitely a mouthful. I was working on the novel technology stack at the engineering school at the University of Chicago, on a lot of materials science and quantum engineering, pushing the forefront. Recently, I transitioned over to UL Research Institutes to build the lab of the future, drive innovation, and take all that research knowledge I built up in the past and keep moving it forward, building new safety science and creating a better world. The mission of UL is to really drive standards and engagement and build a better tomorrow, and that's the passion that gets me up in the morning, gets me to work, and makes me want to learn new things. It's been a fun, short journey being there so far, but I'm really looking forward to all the things we have moving forward.
Debbie Reynolds 02:01
Yeah, that's amazing. You and I had the pleasure of being on a panel together, and I thought, wow, this guy's smart, like, we've got to get him on the show. Not only that.
Craig Hamill 02:11
That's always a good way to butter me up and bring me on.
Debbie Reynolds 02:15
Also, you have such interesting views. For a lot of people, when they think about innovation, they think about pie in the sky, or something that doesn't really have any tangible application. I feel like you're doing work where you have that foundation in the practical reality of user technology, but also a lens toward the future. Give me your thoughts about your career trajectory and how you moved into this emerging technology space.
Craig Hamill 02:46
Well, I think what I've always brought to the organizations and groups that I've worked for and worked with is that I come from an engineering background. I've done a lot; I started at the helpdesk level, building my own computers when I was really young, learning how all these things work in the pre-Internet, or I guess Internet-lite, days. You had to learn by doing and troubleshooting and developing all these practical applications; you didn't have a Genius Bar you could just take a broken piece to. You had to reach out to people and learn and break things and realize your mistakes and grow from that. So I've taken that mentality of failing fast, learning by doing, and not getting too crazy into anything. It's helped me learn the business side of things, learn the technology and the engineering side, and then provide value. That's really what I've tried to do in my career and my growth through the various levels of everything: do what works now, build the foundation, and then start piling on the frosting and the icing and making it shine after that. What I'm trying to accomplish in my new role is to have that pie-in-the-sky vision of things: how do we integrate all these emerging technologies? But we also need to bring value. We need to show, in our research space, how we bring in the popular AI, ChatGPT, all of these models. It's cool, and we hear about it in the news, and everybody seems to be talking about it, but what's the value? I can't just request funding and people and spend so much time bringing something in that is cool but, at the end of the day, doesn't do anything or provide any value for the work. There are a lot of little things, especially in generative AI, that I think are very useful for what we're working on now and have tremendous applications. But it's also young; it's the wild child of the technology forefront, a wild card in terms of how we build these things out. Especially just talking to my colleagues about how we're going to use this in the future: it has value, it can accelerate and create a lot of velocity for what we're working on, especially in standards creation and documentation. But we should also have a healthy distrust of it. We don't know what's being fed into it; we don't know what happens to what we feed in and how it builds things out. So I'm running in that direction, but I'm also very, very cautious in how we implement any of our stuff. It's been a fun journey. Earlier in my career, I even worked in a large bakery in Chicago, making cookies and learning machinery and timing and all that, so I've tacked on a lot of different applications. It's been pretty great.
Debbie Reynolds 06:28
Excellent. You used a phrase before we started recording that I'd love to get you to talk more about. I know that you work in AI, machine learning, and the VR/AR/XR reality space, but the phrase you used was digital safety. Tell me a little bit about that, because I feel like everything we're working on now in technology has to have that element, and it cuts across whatever it is that we're doing. So give me a flavor of what you mean when you say digital safety.
Craig Hamill 07:07
It's a frontier; it's an extension of social media, where now we have an avatar or something. There are different kinds of the ARs, the realities, right? You have industrial VR and AR, company-built training programs that are corporate-centric within a company, and then you have the public realities: your Metaverse, your Facebook, a lot of the Reality Labs, Mark Zuckerberg-type area, with Second Life being another one that's been around for over a decade now. The Reality Labs and Mark Zuckerberg team are pouring billions of dollars into building that digital universe, the Ready Player One kind of thing: how to move everybody from this physical world, with a headset and everything, into a digital world where we have our own avatar and can create our own future. But with that comes a tremendous amount of problems. You don't know who's who. Even on social media, catfishing and all those things happen; it is a problem, and you don't know who you're really talking to. But when you move into an AR or VR world, you can create your own persona, you can have your voice be different, all the things that you want. And that creates all sorts of obvious issues if younger children are going into a VR world: are there predators, are there scammers out there trying to push the boundaries of that technology? So there's the safety component of that, and digital safety is so wide-ranging. We have demonstrated technologies of how the next phishing and criminal elements could use AI, ML, voice recognition, and VR in a situation that could really start causing issues. And I think it's not very far away. For instance, you get an email from the CEO to buy a bunch of gift cards, and you're like, yeah, I know that's a scam; people fall for it all the time. But what if I could create a VR or augmented reality version of your CEO, with the tone of his voice and how he talks to you, create a Zoom call, have it call you and say, hey, so-and-so, I need you to transfer $10,000 into this account for me? It looks legit; the verification of all these things becomes really muddied. You don't know that it's not them, because it looks like them, talks like them, comes from them; it passes all of the sniff tests. So that digital safety: how do we watermark that? How do we know about the deepfakes that are out there? How do we know whether news is being generated by people, or whether it's curated content specific to you? That's a future thing for the AR/VR world, but it's true to some extent right now in social media. There's a large contingent of people who don't realize that the content being generated in their Facebook and Twitter feeds is pointed at them. It's not the old-school newspaper, where the news that shows up is the same for everybody. It is curated and amplified to who you are, and it drives more of whatever you want it to be. It builds a community that may be good or may be toxic, or misinformation, or it amplifies these other things that may or may not be true. So that digital safety component is a really tough nut to crack. Who's responsible for doing that? Is it the corporations? Is it the platforms? Is it people? Do we have cyber cops looking over all these things? Do our legislators create laws that do this? I don't know. I mean, the legislative body of the United States is not tech-savvy.
You could watch any news; you can look at what's happening with TikTok and watch how that's going. It's painful to see people creating laws, or attempting to create laws, who have zero fundamental idea of how the Internet works or how their cell phone works, or whose information is being provided by some 23-year-old intern in their office telling them something. So I don't know who creates digital safety. How do we put the standards in there? But moreover, how do we create all of this in a way that isn't political and partisan, with all this stuff wrapped around it? I'm not entirely sure. But I think we're going to take a crack at it and see what we can do.
Debbie Reynolds 13:07
Absolutely, it's a tough problem. I feel like it has to be fought on different fronts, maybe by people like you or me; I find that people seem to have success when they target maybe one area and pierce the veil that way. I want to talk a little bit about privacy. Privacy is, in my view, a horizontal issue that cuts across almost any type of digital life that people have and the types of things they do. How does that play into your work, maybe in ways that it may not have in the past?
Craig Hamill 13:48
Well, the privacy component isn't just personal privacy; it's corporate privacy, it's research privacy. When we think of privacy, we think of just my stuff: what can I Google about myself, what is out there? Within my new role, and even when I was at the University of Chicago, again, all of these emerging technologies are coming up. We're leveraging the cloud, whatever the cloud means, and we're taking all of this data that we're generating, and our intention is to make it work for us. We create all this information; we use a service that's out there that gives us feedback and helps us derive the next step. The problem is that when we feed it into that black box, we don't know what's inside that black box. We don't know how it's generating some of that information. I was reading through some details on some of the emerging social media platforms, and it's very interesting that they own your content. If you send out a tweet, or, I was actually reading about Bluesky this afternoon, the stuff that you put up there is no longer yours; it gets shifted into this algorithm, into this black box, which then spits something out for you. In terms of, is that your information? Can I trust what's coming out the other end? I put stuff online or input things, and it comes out the other side. The OpenAI models or Bard or whatever else is out there, these AI and ML tools, are training off of all this material. Well, I didn't give them permission to do that. How is it taking it? Is it anonymizing this information? Or even, as a pivot, I think we can all agree that using some of these AI tools for medical diagnostics is amazing. You can feed CAT scans into an AI model, and it can tell you things that doctors would never have known about. Well, to build that, it had to look at millions of images first. Who gave them permission to look at those images? I didn't; I'm pretty sure nobody else did. There's likely no language in any of our medical records that said they could. Did they anonymize them? Did they remove all of that identifying information? Is there any information still stuck somewhere? It's helped our diagnostics, but there's also a privacy component. Starting a new job and going through even just setting up my medical stuff, there's a component for rewards for doing activities. How many steps do I do? Am I riding my bike, going for a run? That's being fed into the system. Is that being used against me? Is that going to raise my premiums, or are some of these details, in conjunction with going to the doctor, predicting things so that maybe my healthcare provider starts charging me more money, or changing the healthcare model for where I'm at? Those things don't keep me up at night, but they're definitely on my mind. Some of it may be a little bit of a tinfoil hat thing, but I'm concerned that we don't know where any of this information is going. Who's managing it? We do know when there's a breach and it gets out on the web and it's reported, but we only know about the stuff that's reported. And that, again, is another huge deal.
Debbie Reynolds 18:22
Yeah, it's something you touched on; my concern is often the absence of data, or, just like your medical example, whose X-rays were taken. Was it only one group? Not every person has the same health and physiological makeup. Is someone going to take that information from a limited data source and then make judgments globally about other people? That can be very harmful, and it's also a concern that I have. Also, I think we're entering, and I want your thoughts on this, an age of unprecedented computing ability, whether that be personal computing or quantum computing, and things are going to change and escalate very rapidly. I think what's going on with ChatGPT is an example of how rapidly computing and AI and machine learning are moving forward, and I'm not sure that we're ready. I guess my concern is that we still have people using 12345 passwords, and we're about to slam them with mixed reality, deepfakes, and quantum computing. What are your thoughts?
Craig Hamill 19:42
Yeah, I mean, nobody really cared about AI and generative AI technology until last November, when OpenAI's ChatGPT came out, and then it's all we could talk about for the last four or five months, with everybody trying to catch up. They've all been working on generative AI and building these models for a very long time, so it's not anything new; generative AI wasn't invented six months ago. It's been there. But now it's become easy, and that's the whole thing: a lot of this has become very easy. AI and machine learning tools have always been around to work with all this data, but now anybody can use it, anybody can leverage it, and people are coming up with really novel ways to, exploit maybe isn't the right word, but they can use it to do everything. The joke that it could pass the bar exam; I think GPT-4 could do that. How's that going to impact education and all these things? It is driving forward pretty rapidly. The response I find very interesting, especially the open letter from some technology leaders saying we need to put a six-month pause on the development of this stuff. I find that actually quite comical, because it makes me wonder: do you want a six-month pause because we need to come up with standards and safety and wrap our heads around this, or because you're in last place and you need that six months to catch up? Generative AI models are feeding on information and developing; pausing for six months doesn't stop anything. It just means the GPT-5 model comes out maybe a little later, but it's still going to be there, and it's going to be even stronger later on. How does all of this work in conjunction, and why does it matter? There are a lot of applications that people are not aware of that are going to catch them by surprise really, really quickly. We live in a divisive time, and I don't think we have to debate that. But we live in a divisive time where we don't know where the truth is coming from, and we can create these models. AI isn't going to take over the world; it isn't going to be some sort of Terminator thing. We're not going to have Minority Report or any sci-fi-movie AI bots running around; the models are typically just single-task things. But there are a lot of things that are going to hit people pretty quickly, and a lot of people uneducated about the technology are going to fall victim to them. I alluded to scammers using this technology: with two sentences' worth of voice capture, they can regenerate your entire tone, how you speak, your dialect, your tonality, creating an image of you. We've seen the Mona Lisa coming to life and talking; they could guess what her voice may have been based on some samples. If you see that, and you start seeing deepfakes, you don't know what's real and what's not real. And the truth becomes this sort of weird component of what's being generated.
You know, I think a lot of news organizations and journalists in general are having conversations like: I can write a news article with some input. I still have to feed data into this generative AI model to get a news article out, and it saves me time on grammar and punctuation and other things. But how much transparency are we willing to provide, as we're creating this, to let people know: hey, for this article, if I'm doing a review of a laptop, I did the specs and the work, but this AI model wrote the article for me? Are we going to be transparent in that way? I don't really know that we ever will be, because there's no money to be made in being transparent. And that goes back into the safety component: if you're not transparent, then you don't know what is correct or unbiased information. And even if it's generated by an AI, the knowledge that the AI and machine learning tools used, do we even know if that's accurate? If you built your model off of encyclopedias that were way out of date, you could have huge gaps that don't make sense. But it feels like the truth, and that's not going to be right.
Debbie Reynolds 25:50
I heard someone call it a post-reality world, where we have to check to see whether something is true or not. Sometimes it's just easier to take things at face value, and that's part of the manipulation, I guess. Just like you said at the very beginning, I think people believe that when they get information, it's like the newspaper of the past, where everyone's seeing the same thing, as opposed to algorithms curating things just for you. They want to get your attention and your participation, so whatever it takes: if the thing that gets your eyeballs is controversy, then that's what they're going to serve you.
Craig Hamill 26:39
Yeah, absolutely. And it's an interesting problem. Some people realize it, but the vast majority of the social media user base doesn't get that you're being served information that only amplifies you and your immediate community. There's a lot of technology invested in your smartphone and all the listening devices around you that amplifies not only what your thing is but what your friends' things are, the people you're close to, and amplifies it back to you. People don't realize that, and when you try to tell them it's a thing, they're like, nah, who would want to do that? I don't have anything that's really noteworthy. But you do; you're the product, you're what's being sold, on a lot of stuff that you don't realize, and it's done in a way so subtle that it's imperceptible to anybody, to me, to everybody, until they've got you. I have a great story that I tell people about this exact thing. I have been a Formula One fan for a very long time, and with the Drive to Survive series on Netflix it garnered a lot more people. During the pandemic, there was a couple that we would hang out with in our bubble. I would always joke about watching F1; they did not care one single bit about auto racing or even Formula One. It was not even on their radar. But every so often I would just mention it, and it was infrequent, maybe twice a month. I wasn't trying to get them into it; it was just a talking point, like, hey, I watched the Grand Prix at Monaco. It was an off-the-cuff thing. I knew they didn't care; it wasn't worth trying to get them into it. Well, over time, they knew that I watched it, and my smartphone knows I watch it; Google and Netflix aggregate all that information. They started getting ads on their phones when I'm not talking about it, when I'm not around: the latest Grand Prix is starting in two hours. Again, they're not searching for this stuff. They're not looking for it. It's not even on their radar; it's not in their search history. But they started getting pop-ups and notices and other things: oh, did you know about Netflix's Drive to Survive? It didn't happen overnight; it didn't happen over the course of six months; it happened over a year. They were bored one day, it popped up in their feed, and they were like, let's give it a try; Craig talks about it infrequently but frequently enough. Now they're huge superfans of Formula One who watch it every weekend. That conversion was driven by just a slight little spark years ago; it was a long game, and now they're into the whole thing. That's what happens, and people don't realize how subtle that information is and how the advertisers and these sources get you. Again, they weren't searching for it; they weren't talking to their phone. But a friend they had a good relationship with, someone influential in their life, drove their buying habits, and they didn't even know it.
Debbie Reynolds 30:52
Yeah, I remember when you told that story; it is really, really interesting, and it's definitely true. I know that happens, for sure. That actually is a good outcome, right? The advertiser who's listening is like, great, we got some more people interested in Formula One, so we sold them more stuff. But that example could go a different way, too, right? One example I like to share is from one of the whistleblowers from Cambridge Analytica. He said they had gathered so much data on people and what they were seeing; you and I know correlation is not causation, right? But they saw that whenever they served up an anti-Semitic message, a lot of those people also liked KitKat bars, so they called it the KitKat project. So does that mean that if you like a KitKat bar, you're an anti-Semite? You know what I'm saying? This could go way off a cliff.
Craig Hamill 31:50
Oh, I mean, there's data mining. We talked about smartphones and your Alexa and your Google and everything listening to you, but that's a very recent example. There are older examples too: why does your local grocery store have a rewards card? They give you rewards, sure, but they're tracking your purchasing. They're tracking your demographic, because you put in your name, your age, where you live. Then they know that, as a 40-year-old white male, I am buying this product, this product, and this product, and therefore they're going to send me some coupons or whatever. This is back in pre-smartphone days; that's how they would target. And there was the example of Target doing exactly this: if you bought these five products, there was a high likelihood that you were pregnant, and they would send you maternity coupons and such. I think this was a case study quite a while ago: a younger person who was pregnant but hadn't told anybody received something at home, and her parents were like, hey, why are you getting coupons for prenatal vitamins and everything? These things were purchased and correlated to "you're likely pregnant; therefore, send this stuff." And it became a huge deal. Yeah, this information is being mined, and they're putting all of these details together to drive more purchasing, which is sinister.
Debbie Reynolds 33:53
Right, and that leads me to my next point, which is around privacy. I'm sure the person who was buying vitamins and doing different things assumed that they had privacy; they probably didn't realize that data was being mined and targeted back at them, probably exposing them in a way they hadn't anticipated. Since you're in innovation, you're probably the best person to ask this question. When people like me, people who are privacy advocates, talk about privacy, a lot of times we talk about it in terms of agency: we want people to have control over their data and things like that, and there's nothing wrong with that. But I feel like agency, especially in this new technological age with all these innovations coming about, I don't think agency is enough. Because how can I, as just a regular human, do anything with all this data that's being collected about me? Let's say I have a bank of data, all this information about me. What can I do as a human, without Facebook or Google or all these really cool technologies? What can I actually do with this data? That's my question.
Craig Hamill 35:17
I mean, yeah, if you were the only person that collected it and you owned everything, what could you do with it? I don't know. I'm of a certain age where smartphones and these things didn't exist; I was still buying stuff, I was still doing things. But to some extent, I don't think that all of these places collecting data on you are necessarily evil or bad, or that it's a thing we shouldn't be doing. It provides value. Again, going back to what I'm trying to do: a lot of these things provide value to you. We're happy to give up a little bit of something to get something else in return, and that's not a bad thing. The issue is the lack of transparency in all of this. When I was on Facebook, which I no longer have anymore, I knew Facebook was doing all of this, but it was also allowing me to build a community: talk to people I hadn't talked to, share in the lives of friends and family near and far. I'm not picking up the phone to call them, but I could still enjoy them going out for a picnic with their kids, and that's great. But I also know that they're taking my information and all those viewing habits and crafting something there. They've got to make money; companies need to make money; they have to be profitable. We have to understand that there's money to be made in all of this. The problem is being transparent about it. How much transparency? That's another discussion that I don't know if we have enough time to go into, but I want to know what you're doing with my information and how you're using it. If that information leaves, if I give Facebook my clicks, my eyes, I know they're going to use it, but who is it going to? Is it being sold to another third party, which is selling it to another third party, which is doing all this other stuff? I should have a paper trail on that. I know they're going to, at some point, sell me whatever it is I'm looking at. Sure, that's just the name of the game; where else do you get stuff for free? But be transparent about it; please tell me what's going on. If the data is leaked or stolen, I want to know about it. Then I can choose to continue using your product and your service if the value I perceive you're giving me outweighs the risk of using it. As for holding all of this information on my own, in my own little world, in my bubble: unless you're a data scientist, or it's some on-prem, unhackable service, it's of no use to you. And if you're collecting all of this information in one place, now you're a target for somebody. It's like keeping all of your money under your bed instead of putting it in a bank or distributing it across some assets: if someone takes it from you, they've got everything. Whereas if it's a little bit here and there, somebody has to put the picture together; although, of course, there are a couple of data lakes out there that literally have everything about us anyway. But there's not one attack platform or one attack vector to get it all. I also believe that we should have the right to get rid of all of this stuff if we don't feel comfortable with it being out there. I should be able to delete it, and delete it permanently. What have you collected on me? I would like to know what it is, and I would like to be able to get rid of it.
And if I'm not comfortable with it, or I think you're a bad actor, I should be able to get rid of all of that information, so you no longer have control over the information that I've generated for you. That's something that, to some extent, you can do with Google; you can go in there and erase all of your stuff. But do I trust that? Is it somewhere else, or has it been sold somewhere else, so that wasn't the only copy? They're going to make money; that's the whole point of it.
Debbie Reynolds 40:41
Yeah, when people are trying to develop products and tools. I heard someone recently say that we're moving into a trust economy, where people are trying to decide which companies or brands they trust most, and that's who they're going to give their data to. I think the point about transparency is key. But part of transparency is for the person to be able to say, okay, this benefits me, so I'm going to give data in exchange for X service or product. What are your thoughts?
Craig Hamill 41:18
Yeah, I mean, we all, at least the people that I know, choose products or services and align ourselves with companies and organizations that fit our goals and aspirations. We try to be more environmentally friendly. If we're looking for a new car, do we want to buy an EV or a hybrid? And if we do buy an EV, how are the materials being sourced? How's their battery recycling program? I think there's just a more holistic view in how a lot of people are buying and choosing to use things, because I think we've all been burned. If you open up this credit card, we'll plant 10,000 trees or whatever, reducing our carbon footprint. But are you? And who's going to hold you accountable for that? No one is. Am I going to go out and check that you planted five trees for me? No, I'm not, and nobody else is going to. But the companies that are actually doing the things you believe in and trust, I see those getting more business, and that kind of drives that information sharing of what we're actually doing. We all vote and have advocacy with our dollar. If we don't want to support something, we don't have to pay for it. If we don't like a product, we just don't buy it. We don't vote for that person; we don't vote for that ideology. But sometimes there isn't an alternative. We all want to trust that what we're doing is the right thing.
Debbie Reynolds 43:42
I agree. So if it were the world according to you, Craig, and we did everything you said, what would be your wish for privacy anywhere in the world? Whether it be human behavior, technology, or regulation, what are your thoughts?
Craig Hamill 43:59
Oh man, I mean, that's a huge ask. There's no utopian version of this. There are things that I would love to happen, but I understand that they're probably not profitable. Again, these are businesses, and these organizations need to make money; they need to do this stuff to pay their employees and do other things. In my world, privacy and anything else means being open, being transparent; not building your models around your customers to meet the requirements and the profitability of your shareholders to the detriment of your product and the people who use it. I say this because there's a very popular password encryption company that was, I don't know if they still are, but they were, hacked, badly, and it took them six months to disclose that. A lot of people were putting their passwords and stuff in there, and it was hacked, and they didn't tell you right away. And when they did tell you, I believe the CEO sent out an email on a Friday at 5:30, letting people know about the plans and how they're working on it. Dude, you did it after hours for obvious reasons, and you did it on a Friday so it wouldn't gather all that media scrutiny. At the end of the day, if you read the article, you drop them like a bad habit. Let's be more transparent. You don't need to be overly transparent; you don't need to tell me everything about everything. But if there's a breach, or there's an issue, or something's going on, tell us. We know that the world isn't perfect. But we also need to know when our information is being used in a way that we don't find acceptable, so that we can take corrective procedures to protect ourselves. Credit agencies losing all of our information is a huge deal, and if you tell us six months later, that has impacts on so many things in our lives. Now we're fighting to figure out why our credit score dropped, because a bunch of people opened up credit cards in our name or looked at our stuff, and no one told us. There's just so much of: acknowledge it, shove it under the rug, move on, and get out of the news cycle as quickly as possible. And that's unfortunate, because a lot of the information we've generated that's out there is, unfortunately, being hacked or compromised, and we don't know about it till it's too late. That has ramifications for a lot of people. So my ideal world is: be transparent. I know you're going to use my data, especially on these free services, but give me a way to opt out of it, and then I can also choose not to use your service, which is becoming more and more common with the drop-social-media campaigns. But there's also value to a lot of these things. I have a Twitter account; I'm not a huge fan of where Twitter's gone in the last year, but there's also value, because it gives me notifications on technical outages and product updates, and that's just a quick way to do it. So how do I drop something without losing that? I'm kind of diametrically opposed to what I'm supposed to do with that. I also wish some of these things were run by more than one guy with an agenda, right? That there was more of an editorial board or something. I kind of go back to the olden days of newspapers, which had a bias, but there was integrity. I just don't feel like anything has integrity anymore.
Debbie Reynolds 49:02
Yeah. Yeah, it seems like a Ferris wheel that's come off its hinges and is just kind of rolling around.
Craig Hamill 49:10
A little bit. I mean, people realize that Twitter has fully moved away from that quick and easy news sharing, 140 characters to get a thing out to the world, and now it's weaponized toward other things. It's become sad, because the people and things that I used to find a lot of value in are dropping off of it. That isn't necessarily a bad thing, but my aggregation of information now takes longer. I could certainly go to NPR and look up new stuff and hit their website, but it was also great to see the reporters and things very specific to me, that I cared about, on Twitter, right in my feed. It's an extra couple of steps. Is it a big deal? Kind of, because I'm not going to do it. You go to one platform and see it; I'm not going to go to ten platforms to see the same information. So that aggregation is important. But I also don't want to put my dollar, my eyeballs, toward somebody that I think is driving divisiveness and an agenda that is crazy.
Debbie Reynolds 50:41
Yeah. I don't know about you, but I feel like the future of social media isn't the public square. I just don't think so. Maybe it's a pendulum that goes back and forth, but I think people want curated, specific information that they can get really quickly, not necessarily the firehose of what everybody else is looking at. Right?
Craig Hamill 51:08
Yeah, I mean, I was a very early adopter of Twitter and most social media platforms, just because of the very nature of who I am. And it gets to a critical point: you follow some stuff, and then it gets to a critical mass where it's a full-time job just to keep up with the posts. You follow some celebrities, and it just seems like a stream of consciousness. I like this person, I think they have value or are funny, but you're not posting every other day; you're posting every 45 minutes, and I don't really care what you had for dinner. I would love to know about your show, your touring schedule, or your music act, but I don't need the constant stream. And once you get to a critical mass, there's no way to continue to follow it all, and I don't know how people actually do. For me, there's just so much noise, even in the things that I want to see, that I have to have multiple Twitter handles: one that I know is very tech-specific, for the products that I work with, telling me if there are outages or updates or maintenance schedules, and another that's just music that I like and the artists that I follow. Because if you aggregate all these things together into one stream, I can't keep up.
Debbie Reynolds 52:43
Very true. I agree with that wholeheartedly. Well, thank you so much, Craig; this was great, this was fantastic. Keep doing what you're doing; I love the innovation work. You're the perfect guy for this job. I'll be watching you and what you're doing, and I'm excited to support you in any way I can.
Craig Hamill 53:02
Oh, absolutely. This was great. I look forward to future conversations and coming back in the very near future to tell you what I'm working on and how we're, I don't know, changing the world for the better, doing all of our safety science, and just doing cool stuff, in a way that is equitable and safe and, I don't know, just smart.
Debbie Reynolds 53:31
Yeah. Things that people want, people need, and being able to do it in a transparent, cool way. Very nice.
Craig Hamill 53:38
Absolutely.
Debbie Reynolds 53:39
Well, we'll talk soon, for sure. Thank you again.
Craig Hamill 53:42
All right. Thank you very much.