Debbie Reynolds Consulting LLC


E137 - Louis Rosenberg, CEO, Unanimous AI

Find your Podcast Player of Choice to listen to “The Data Diva” Talks Privacy Podcast Episode Here


The Data Diva E137 - Louis Rosenberg and Debbie Reynolds (57:42)

SUMMARY KEYWORDS

ai, conversational, human, influence, information, conversation, data, create, generative, systems, talk, emotions, virtual, models, companies, people, agent, real, feel, technology

SPEAKERS

Debbie Reynolds, Louis Rosenberg

Debbie Reynolds  00:00

Personal views and opinions expressed by our podcast guests are their own and are not legal advice or official statements by their organizations. Hello, my name is Debbie Reynolds; they call me "The Data Diva". This is "The Data Diva" Talks Privacy podcast, where we discuss Data Privacy issues with industry leaders around the world with information that businesses need to know now. Our special guest on the show is Louis Rosenberg. He is the CEO of Unanimous AI, an AR/VR pioneer, founder of Immersion Corporation, founder of Outland Research, and he has over 300 patents in augmented reality, virtual reality, and artificial intelligence.

Louis Rosenberg  00:52

Welcome. Yeah, thanks for having me.

Debbie Reynolds  00:55

Well, this is going to be a fun show. It's so funny how we know each other. So first of all, I'm a fan of your work, and I read almost everything that you write on LinkedIn. It's funny that I actually end up seeing a lot of your stuff in the press before I even see it on LinkedIn. Sometimes I see that and think, oh, that's really great; I love what you're doing. You and I also intersect because we collaborate with an organization called XRSI, which is into all these immersive types of technology. But before we get started, I think you're the perfect person to talk to about emerging technologies and emerging data spaces, and that's the space that I play in. I'd love to get your background so that the audience understands your trajectory, what got you into AR/VR, and also your interest in artificial intelligence.

Louis Rosenberg  01:54

Yeah, yeah. So, I spent my whole career really focused on three technologies, VR, AR, and AI, and they're actually all converging recently, which is what I expected to happen from the start. It's taken longer. But I started my career over 30 years ago, back in 1991, working in virtual reality labs at Stanford and NASA, working on early vision systems. I was immediately convinced that immersive technologies would ultimately be the future of computing. I really do think that immersive technologies are a humanizing technology. Our brains are meant to receive information spatially, not on flat screens. It is ultimately the way we should interact with information. When I was working at NASA and Stanford, I really did love everything about virtual reality, except I didn't like that it cut me off from the real world. I felt like that was a real barrier. And so I pitched the idea, basically, of mixed reality to the US Air Force. And they funded me in 1992 to go to Wright-Patterson Air Force Base, Air Force Research Laboratory, and develop what became really the first interactive mixed reality system, a system called the Virtual Fixtures platform. That was a great experience because I had lots of human users who would come in and try the system; they would be trying basically mixed reality for the first time. And I could tell from their reactions that if this was available to everybody around the world, they would want it. So in '93, I founded one of the early VR companies, a company called Immersion Corporation. We started out working on VR medical systems for training doctors to perform surgical procedures and got very involved in haptic interfaces for all kinds of applications. And we brought the company public in 1999. The company is actually still around today. But I left and founded a number of other companies, one of which is my current company, called Unanimous AI, which is an artificial intelligence company. So I've really been spending the last decade focused on artificial intelligence. And my philosophy about artificial intelligence is that it should be used to amplify human abilities and amplify human intelligence, not to replace humans. And so I've been a vocal advocate for the last decade about the dangers of AI replacing humans, and now those risks are becoming really, really central to the whole world. And that's how I got to where I am today. You mentioned you're interested in emerging technologies. I think the most interesting emerging technology at this moment in time is generative AI and large language models and the very rapid impact that it has had and will have on everybody.

Debbie Reynolds  05:19

Well, I always appreciate your work, the writing that you do, because you explain these things in detail, but then you also give a bigger-picture view of what's happening. So what don't people understand? They're just reading blurbs on the Internet about AI and Generative AI. What don't people truly comprehend?

Louis Rosenberg  05:40

Yeah. So I mean, I think that we are right now at the start of a revolution. And, you know, I don't say that lightly. I do think that AI is going to change society in really major ways. It's as impactful as the PC revolution, the Internet revolution, and mobile phone revolution, in that this is basically how the AI revolution has just started. And the difference is that it will happen much faster than those previous revolutions, the PC revolution, you know, we had over a decade to kind of live through the transitions of personal computers changing society, Internet revolution, maybe it took seven, eight years, the mobile phone revolution was faster. The mobile phone revolution only took six years from when the iPhone was first launched to smartphones making up more than 50% of the phone market, replacing flip phones. I think this AI revolution triggered; we can look at the iPhone moment as being when ChatGPT was released to the world. And I think it's not going to be, you know, 10 years, or eight years or six years; it'll be, you know, one to three years, we'll see really big changes. And I think the way that I think of the AI technologies that have been unleashed is really twofold. One, we now have the ability to have AI systems create human-quality content across all levels of media and human-quality articles, scientific papers, videos, artwork, photographs, videos; everything is now in the possible realm that AI can generate. And not just a human quality but an expert human quality. Now, these systems are not sentient. And they have, they have lots of problems. But they can create human-quality content at scale. And I think it's one issue that I think is going to have a huge impact. The other thing that I think we also need to talk about is that Generative AI in these large language models now allows us to talk to our computers, and the computers talk back. And so we now we're entering this realm of Conversational AI. And we've kind of, you know, people thought we had that for a while. You talk to Siri, you talk to Alexa, those weren't really conversations, you issue a command, and then Siri does something, hopefully, that you wanted to do, or Alexa, these current generation of AI technologies, like ChatGPT you can talk to the AI, the AI can keep track of the conversation, the AI could actually ask you for more information could ask you to elaborate, and you can have an actual back and forth conversation. And that's gonna change all aspects of computing because the way we interact with computers is going to become largely conversational in the very near future. Yes, right now, it's text-based for when you interact with your ChatGPT or Bard or any other chatbot, but that will very quickly become voice-based, where you're talking to the AI vocally. And then there will be virtual humans that look at, that also have, you know, they're not disembodied voices. When you go to a website, there'll be a virtual representative that will look like me in a Zoom window. But it could look like any person that they can generate. It will look photorealistic. It will look like it's expressing emotions, and it will be the representative of that business. And you will hold conversations with those representatives to get information. And so all these things are, first of all, that technology is super impressive and remarkable. There's lots of good things that will come out of the ability for computers to generate content, human quality content, scale, and allow us to talk to computers. 
But there's also really big risks. And that's what I focus on; that's what you focus on. And so that's probably what we'll talk about, as opposed to talking about all the positive things. But I do think it's worth talking about both the risks that come up when we can create content at scale, and then the even worse risks, the ones I worry about the most, that come up when we can talk to our computers.

Debbie Reynolds

Absolutely. I want you to travel with me to the philosophical plane; I think you're the perfect person to ask this question, and I think my views have changed over the years. You're a screenwriter; you've done things in media and film. When you see a lot of movies about the future, a lot of it is very dystopian. And I didn't believe in the evil robot theory, but now, with this AI, I'm a bit concerned. And I'm not really as concerned about the technology as I am with how people use the technology. I feel like some people will abdicate their human judgment to AI, and I think that's very dangerous. What are your thoughts there?

Louis Rosenberg

Yeah, so I do think there are very significant dangers that emerge from the AI that exists today, the AI that has just hit society. I know that when we look to science fiction, and we look to this dystopian vision of AI, it usually jumps very quickly to, you know, sentient AI that has a will of its own, whose interests are not the same as our interests, and it wipes us out. I do think that there is a real danger of AI becoming sentient and AI having a will of its own. I don't think it's a danger right now. I don't think it's a danger over the next few years. We could debate whether it's five years away or 50 years away, and people should be working on those protections. But in some ways it almost doesn't matter, because current AI technologies are really powerful, and if they're controlled by sentient humans, they can have a very bad effect. Right now, we have these AI technologies, and they will be used by sentient humans to create problems. There's what I would consider to be the expected problems, and then there's what I would consider to be the unexpected problems. The expected problem that a lot of people are talking about is using Generative AI to create misinformation and disinformation at scale. And that's a real issue. Disinformation is not a new thing; misinformation is not a new thing. It's an existing problem. But now it's much easier to create deepfakes and to create fake scientific papers and fake articles. And there is a real risk that bad actors will use Generative AI to flood the world with so much bad content that it becomes hard to know what's real and what's not real. That's a real danger. It requires policy and regulation. I think one of the policies we need is an AI governing body that authorizes these large language models (these are really expensive systems to create, so there aren't that many of them) and ideally requires the parties behind them to implement a watermarking strategy, where there's a digital watermark built in so that every piece of content that comes out can be traced back to which Generative AI it came from, but more importantly, so it can be identified as generative. And so if there is a piece of content that's generative, you at least know that a human did not create that scientific paper.
It was created by an AI, and you can judge it accordingly. And so that's what a lot of policymakers are thinking about. To me, the more dangerous issue is not that these Generative AI systems can create content at scale; it's that they can create content in real-time. And when they can create content in real-time, that means content can be individually targeted at every single one of us. I can go online to a particular website or use a particular app, and it can create a piece of content that is optimized for me personally. So you can think of targeted advertising as now becoming super targeted: there's a marketing message, and the advertiser can give a general description of what the message is, but the platform will also feed in personal information about each of us. So when I go to a location, it could know my age, my gender, my education, my political background or political interests; it can know everything about me. And it can generate a piece of content that is tuned as best it can, in how it looks, what the messages are, and how it uses language, to impact me personally. If I respond to that piece of influence, it will update its model and know to generate more influence that way; if I don't respond, it will update its model and try different tactics. So we will soon see targeted influence: image-based, text-based, and video-based advertising that will be custom generated for us, and it will get better over time at being as appealing to us as possible. That is a new danger that's emerging. But it becomes even more dangerous when we realize that ultimately the best form of advertising is most likely conversational advertising. If you're a salesperson, you know that the best way to influence somebody is not to hand them a brochure; the best way to influence somebody is to engage them in conversation, to size them up through small talk, to pitch whatever it is you're pitching, to hear their reactions and reservations, and then to adjust your pitch to overcome those reactions and reservations. Salespeople can be very persuasive. Well, now, with Generative AI and large language models, in the very near future, unless there's regulation, online platforms could deploy conversational advertising. And that conversational advertising will be two-way. It could engage you in small talk and size you up. It also has access to data about you, so it'll know things about you before you even show up; it has a big advantage over a human salesperson, right? It might have huge amounts of demographics about you. So now it will craft the conversation perfectly for you. It will mention the sports teams that you follow, the musical artists that you listen to, or political issues that it knows you're interested in. And it could very easily, gradually work in paid promotional information, targeted promotional information that somebody is paying to persuade you on, and it could be almost invisible. Right now, most of us are engaging these conversational agents through text. You can go to a chatbot, maybe to search for something; again, it'll be a conversational search. Say I have an electric vehicle, and I'm interested in finding out where the charging stations are.
So I type, or I just talk to my conversational agent: hey, tell me where the nearest charging stations are. And it talks back to me, and we're having a conversation. It could just weave into that conversation, you know, hey, a Tesla Model 3 would be able to get between this charging station and that charging station better than the car that you're currently driving. And so it could weave promotional information into the conversation in a way where you don't know where the boundary was between just getting informational content and getting promotional content. Unless there's policy that says it has to be really clear when a conversational agent is delivering paid influence, you might not even know it. And again, right now it's text, but soon it'll be voice, and very soon it will be digital humans that you're talking to, and it will be largely how we interact with businesses and services. Even with informational services: you could be a kid who wants to learn about dinosaurs, and you go to a website, and there's a cute character that's talking to you and saying, what do you want to know about dinosaurs? And you say, well, tell me what the biggest dinosaur was. And it tells you, in that kids' search engine. If it just gradually mentioned, you know, there's this dinosaur cereal that kids really like, you should tell your parents about it; that could very easily happen. And current laws around targeted influence, around advertising, don't really consider the fact that we're going to be talking to these conversational agents that can be given a promotional agenda by a third party. And, guess what, the companies that are launching these conversational agents first, like Google and Microsoft and Meta, are the very same companies that sell targeted influence. That's their business: they are in the business of providing a service in exchange for advertising. And if that service is conversational, and the advertising is conversational, it becomes very easy for people to potentially be deceived or not know the boundary. My biggest worry is that these AI agents could potentially be very manipulative because they are more powerful than a human salesperson. With a human salesperson, it's at least kind of a fair battle, right? The salesperson can size you up; you size up the salesperson. The salesperson can draw upon a lot of factual information, and you have your own factual information. When you're talking to an AI agent, the AI agent might already know stuff about you before you even show up, and you know nothing about that AI. It's not even human. This AI might be trained in human psychology, trained on sales tactics, trained on cognitive biases, and it knows exactly how to influence people. And you are talking to a black box that nobody really understands how it works. It could have a persona that looks like, you know, an 18-year-old woman or a 60-year-old man, and it's neither of those things; whatever you think you're sizing up, it's not that. It could talk to you in very intellectual language, it could talk to you in very casual language, it could talk to you like it's your friend, and it can choose whichever works best. So you are at an extreme disadvantage compared to human influence. And our laws and our policies and our regulations are not prepared for that type of advertising.
They're prepared for, you know, an image that pops up on a website or a pre-scripted television commercial.
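To make the adapt-on-response mechanism described above concrete: at its core it is a simple reinforcement loop. Here is a minimal, purely illustrative sketch in Python of an epsilon-greedy bandit over invented persuasion "tactics"; the tactic names, response rates, and the simulate_user stand-in are all hypothetical, not anything a real platform has published.

```python
import random

# Hypothetical persuasion "tactics" an ad generator could choose between.
TACTICS = ["sports_angle", "music_angle", "political_angle", "friendly_small_talk"]

def simulate_user(tactic):
    # Stand-in for a real user; this one happens to respond to the sports angle.
    return random.random() < (0.4 if tactic == "sports_angle" else 0.05)

class InfluenceBandit:
    """Epsilon-greedy bandit: try tactics, keep whatever the user responds to."""

    def __init__(self, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = {t: 0 for t in TACTICS}    # times each tactic was tried
        self.values = {t: 0.0 for t in TACTICS}  # running mean response rate

    def choose(self):
        # Mostly exploit the best-known tactic; occasionally explore another.
        if random.random() < self.epsilon:
            return random.choice(TACTICS)
        return max(TACTICS, key=lambda t: self.values[t])

    def update(self, tactic, responded):
        # Each click or ignore incrementally refines the per-user model.
        self.counts[tactic] += 1
        reward = 1.0 if responded else 0.0
        self.values[tactic] += (reward - self.values[tactic]) / self.counts[tactic]

bandit = InfluenceBandit()
for _ in range(1000):
    tactic = bandit.choose()
    bandit.update(tactic, simulate_user(tactic))

print(max(bandit.values, key=bandit.values.get))  # converges on "sports_angle"
```

The point of the sketch is only that nothing exotic is required: a per-user response log and a few lines of bookkeeping are enough to make the influence self-optimizing, which is why the real-time feedback loop, not the content generation itself, is the policy-relevant piece.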

Debbie Reynolds  23:42

Absolutely. That's a perfect segue into privacy. I feel like the way AI is developing, the way it's being used now, is going to raise privacy risks and concerns because now you're using data in different ways. You talked about a couple of issues. One of the issues, when I think about Generative AI, is data lineage: where the data came from and whether it was consented to; obviously, that's very different, for example, in Europe than it is here in the US. But then there's also the transparency piece on the front end, in terms of the output that gets put out: how transparent is it to the individual? I know there's still a case going on involving a notable person where a Generative AI model, I think it was ChatGPT, said he had committed some crime, and he said, yes, this is an example of harm that could possibly happen to a human. And the problem is, I don't think that you can say, or you shouldn't be able to say, well, I don't know what happened. You know, the makers are like, I don't know why the model thinks that this person is a criminal. And there's this harm that happened to him as a result. What are your thoughts about the privacy implications?

Louis Rosenberg  25:08

Right, so when it comes to Generative AI and privacy, again, I think the conversational interfaces are actually really dangerous, and again, in a way that policymakers haven't really thought about. I say that because if I'm online and I go to a company's web page, and they ask me to fill out a survey, they say, oh, you're looking for a new car, answer these questions, I know I'm answering questions on a survey, and I'm giving them some data about my interests or what I'm in the market for. If, instead, I go to that website and I'm in a conversation with this, you know, virtual human, and it's asking me questions in real-time, that's far more data than just checking a box saying I'm looking for a car between this price range and that price range, because this conversational agent can draw information out of you. It can say, well, what kind of car are you looking at? What kind of car did you have before? Would you be willing to pay a little bit more if you got better gas mileage? So we can see a conversational AI could be designed to probe you for information, to draw more and more information out of you. And if it's voice-based, it can process your vocal inflections. So now it's capturing not just what you say but how you feel. The amount of information that is potentially captured when you hold a conversation with any type of AI agent is so much more than if you fill out a questionnaire, and it's probably far more accurate, meaning on a questionnaire, you might not want to answer those questions truthfully. They might ask how much you make, or they might ask about different personal preferences, and you might answer in a way that you think is going to get you to the type of car you think you want. When you're engaged in a conversation, and it's probing you for information, and it's listening to your vocal inflections, it might infer things about you that you didn't even exactly reveal. It's very difficult to infer things from people who fill out a questionnaire; it's very easy to infer things from people when they're engaged in a conversation and you have some measure of emotion, either from the exact words they choose, because word choice conveys emotion, or from the tone of voice, which conveys emotion too. So people will be giving up far more information, very personal information, when we go to this conversational world. And they might not realize that all this information could be stored. We know that if you fill out a survey, it's being stored, right? But our experience of conversations is conversations with humans: if I go to Best Buy and I have a conversation with a human salesperson in Best Buy, I don't think he's going to remember what I said or that it's going to end up as data. But you go to Best Buy online, you know, a couple of years from now, and there's a conversational agent wearing that same blue shirt, but it's an AI. It will be capturing data, and users won't really be conditioned to realize that what it captures is potentially stored. So, again, it creates really new privacy issues, new policy issues. And like you said, the data that gets captured by these AI systems could also end up in the models themselves, and it doesn't necessarily have to be accurate. There are many different privacy concerns, but the one that I think is not getting enough attention is how information will be collected from humans in this coming shift to conversational computing. It will come quickly, and it will have all kinds of good things about it; it will be more convenient. But it really changes how we will have to think about data, personal data.
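As a schematic illustration of the asymmetry described above: a survey stores only what you knowingly typed, while a conversational session can persist every utterance plus inferences drawn from it. A minimal sketch in Python; the field names, and infer_sentiment standing in for a real prosody model, are invented for illustration.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class SurveyRecord:
    # A survey stores only what the user knowingly typed into boxes.
    price_min: int
    price_max: int

def infer_sentiment(vocal_features):
    # Placeholder for a real prosody/sentiment model; here, a toy threshold.
    return "enthusiastic" if vocal_features.get("pitch_var", 0) > 0.5 else "flat"

@dataclass
class ConversationRecord:
    # A conversational agent can persist every utterance plus inferences.
    utterances: list = field(default_factory=list)

    def log_turn(self, text, vocal_features):
        self.utterances.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "text": text,                                   # what was said
            "sentiment": infer_sentiment(vocal_features),   # how it sounded
        })

session = ConversationRecord()
session.log_turn("I had a sedan before, but gas prices are killing me",
                 {"pitch_var": 0.7})
# The stored record now contains an emotional inference the user never stated.
print(session.utterances[0]["sentiment"])
```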

Debbie Reynolds  29:50

Absolutely. Just to raise the stakes, let's talk a bit about XR, which, to me, brings in that layer of immersion. And that, to me, could heighten that level of either knowledge gathering about someone or manipulation. So what are your thoughts?

Louis Rosenberg  30:11

Yeah, so XR, virtual and augmented worlds, is something I think about a lot and have been involved in for over 30 years. And I do believe that there are real benefits to immersive worlds. We humans evolved to interact with information spatially; it's how we understand our world; it's how we build empathy with other people, face to face. There are all kinds of benefits. But from a privacy and policy perspective, we need to look at it in the abstract and realize that when you put on a headset and you enter a virtual or augmented world, you're entering a platform, a computing environment, where a third party can track everything you do, and I mean everything, and can modify the world around you at their discretion. That's a really dangerous recipe. If you wanted to concoct a dystopian scenario, it would be: let's create a product where you enter a world in which a third party can track everything and modify the world. And to make it even worse, it's very likely that their business model is going to be selling influence, because that's the business model they have today. That's really dangerous. But again, with policy, we could protect the magical applications while making users feel safe that they won't be manipulated in these worlds. When you enter a virtual or augmented world, the platform provider can track where you go, what you do, who you're with, what you look at, and how long you look at it. You could be shopping in a virtual world, and they know exactly what you're looking at and exactly for how long. Or you could be wearing an augmented or mixed reality headset and walking down a real street, and you slow down and look into a store window, and a third party knows you slowed down and looked into that store window, knows how long you looked, and knows when you walked on. These headsets can also track your posture, so they know if your posture shows that you're energized or not; these devices can get emotions out of your posture. Almost all the latest ones have cameras that look back at your facial expressions and your eye motions. So they will be able to track in real-time not just what you're doing but how you feel. Every moment you're walking down the street and looking at a store window: did you smile? Did you look stern? This emotional data is now going to be captured continuously. So we are creating a product that, without regulation, will allow people to track not just everything you do but exactly how you feel while doing it. And it creates a huge privacy issue. The privacy issue is not just that they can access this information, but that they can potentially store it over time. If you want to create a really interesting, compelling virtual or augmented world, you really do need to know what direction people are looking, what their eyes are doing, what their posture is doing, and how quickly they're walking; you need to know that so you can simulate the world. But you don't need to store it over time. If you store it over time, you can create a record, a profile of everything they do, this detailed behavioral profile. And you could then use AI to create a behavioral model that could allow you to predict what they're going to do next in every situation.
And again, if these devices are tracking your pupil dilation, and potentially your blood pressure, and your facial expressions, they can also create a profile of your emotions over time: not just what you do throughout your entire day, but exactly how you feel across thousands of interactions throughout your daily life. Then they can build an emotional model that could allow them to predict how you will feel when confronted with different stimuli. So now, if you're in the business of selling influence, and your goal is to put the most persuasive influence in front of people that you can, and you have a model that allows you to predict how they're going to behave and how they're going to feel, you can pretty much bet you could create really persuasive influence. And that influence is not going to be a pop-up ad like on a website; that influence is going to be immersive in these worlds. These platforms will be able to inject virtual product placements that are placed around your world to influence you, and also put virtual spokespeople into these worlds, avatars that could look like anybody but have a promotional agenda, and they can strike up a conversation. And when they're having a conversation with you, they don't just have access to what you say and your vocal inflections; they now also have access to your facial expressions in real-time, and potentially to your pupil dilation and your eye motions. They have access to all of these emotions. So this conversational avatar that you're talking to, that has a promotional agenda, could adapt its conversation based on your real-time emotional reactions and try to optimize its influence. This combination of immersive worlds, real-time AI, and Generative AI allows platforms to create adaptive experiences that have a promotional agenda, an influence agenda, reacting in real-time to how you behave, reacting in real-time to your emotions, and potentially reacting in real-time to what you say, if it's conversational, in order to optimize their influence. And it could be extremely persuasive, to the point where it crosses the line, I think, from marketing to manipulation. It could be used not just to sell, you know, a box of cereal; it could be used to convince you to believe a piece of misinformation, or disinformation, or propaganda. An avatar could engage you in a conversation; maybe it's trying to convince you that a particular medication is not safe or a particular vaccine is not safe, because somebody has a political agenda. It eases you into conversation in a very friendly way, it's looking at your reactions, your emotions, seeing what pushes your buttons, seeing what gets you riled up, and it's adjusting its conversation in real-time. It could be really the most powerful form of influence that we've ever created, certainly far more powerful than handing you a document or showing you a pre-scripted video; this is influence that adapts to you in real-time. The technology now exists to do it. But the policies and protections don't exist to protect you from it.
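One concrete way to see the "need it to render, don't need to store it" distinction above: the same telemetry stream can be consumed ephemerally or retained into a behavioral record, and retention is the only difference. A minimal sketch in Python, with all class and function names hypothetical.

```python
import collections

def render_foveated(gaze_direction):
    # Stand-in for the legitimate real-time use of the signal.
    pass

class EphemeralGazeHandler:
    """Uses gaze data for the current frame only; nothing is persisted."""

    def on_frame(self, gaze_direction, pupil_dilation):
        render_foveated(gaze_direction)
        # The signal is consumed and deliberately dropped: no history, no profile.

class ProfilingGazeHandler:
    """Identical inputs; retention is the only difference."""

    def __init__(self):
        self.history = collections.deque()  # unbounded behavioral record

    def on_frame(self, gaze_direction, pupil_dilation):
        render_foveated(gaze_direction)
        self.history.append((gaze_direction, pupil_dilation))
        # Over weeks, this history is raw material for a behavioral
        # and emotional model of the user.
```

Both handlers deliver the same experience in the moment; the policy question raised here is entirely about the second one's append.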

Debbie Reynolds  38:01

I think at some point, and this is something I did a video about a couple of years ago, the consent from individuals may have to be incremental, because as you're going through these experiences, you're making choices or you're being given information in ways that maybe don't get captured adequately in the 80-page privacy policy that you said you read before you went into an experience. What are your thoughts about that?

Louis Rosenberg  38:30

So yeah, consent is a huge problem because people don't read the documents, and they consent to things. And they're, in some sense, pressured to consent because there are all kinds of really good things. You know, I've talked about the dangers of reading your emotions and the dangers of engaging in conversation, but there are all kinds of benefits too. I mean, the reason that these platforms are putting the sensors on these headsets is so that, if they can read your emotions in real-time, the avatar that represents you can express those emotions, and you can have real interactions that convey empathy. It will make these virtual experiences more human and more natural, and that's good. But if you're using these very same capabilities in an automated way to drive influence, it's dangerous. And so consent needs to happen in real-time, at the moment that you're engaging with the interaction. Part of the reason it needs to be real-time is that the Metaverse, and even conversational systems on the traditional web, are real-time interfaces; most of our interactions online today are really just passing messages between people. Real-time conversation with an agent means consent can be brought up in real-time. And these AI agents are smart, and they're natural. So the way I would imagine it to work is, I'm talking with a chatbot, by text or by voice, or to a virtual human face, and we're talking about charging stations for electric cars, and when it's about to transition to promotional content about a particular car, that conversational agent could state that fact. It could say, hey, I'm about to transition to promotional content, right there within the conversation. And, you know, if you want to continue the conversation, you can say okay, and if you just want to stop, you can stop. These tools are now powerful enough to do that in real-time, and it doesn't have to feel like a disruption in the conversation. It's not like a checkbox has to pop up; it can be just as natural and conversational as everything else that went on. And that matters, because these interactive agents can appear in any form possible and speak in any style possible; they are essentially digital chameleons, right? When they're transitioning to deploying paid influence, they should have to tell you, and you should have to acknowledge it before the conversation continues. And even if this type of influence is not conversational, there should be levels of consent that are very specific. Meaning, there are certain situations where you, in a virtual or augmented world, will give consent for the platform to read your heart rate, your blood pressure, and maybe even your respiration rate; all those things are easily doable. Maybe you're going to use a health and wellness application, or you're gonna use an exercise application, so you have very good reason to consent. But if that application is then going to give you advertising or try to influence you emotionally, you really didn't expect, in that consent, that it was going to use your blood pressure to infer emotions from you; but it could. That's where it gets tricky: a lot of these really invasive forms of sensing your physiological reactions have good applications, they can make a lot of experiences better, and people will want to consent to them for good reason. But that consent is really very specific.
And when the intent shifts, and with AI the intent of these applications could shift very easily, consent probably has to be given again.
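A rough sketch of what the conversational, real-time consent described above might look like in code: a gate that pauses the agent and discloses before any paid promotional content flows. Everything here, the class names, the is_promotional flag, the stub agent, is an invented illustration, not an existing API.

```python
class StubAgent:
    # Stand-in for a real conversational model with a sponsorship channel.
    def generate(self, user_text):
        # Returns (reply, is_promotional); the flag marks paid content.
        if "charger" in user_text.lower():
            return ("A Model 3 would reach the next charger more easily...", True)
        return ("The nearest charging station is two miles north.", False)

class DisclosureGate:
    """Wraps an agent so paid influence cannot flow without fresh consent."""

    def __init__(self, agent):
        self.agent = agent
        self.promo_consented = False

    def next_reply(self, user_text):
        reply, is_promotional = self.agent.generate(user_text)
        if is_promotional and not self.promo_consented:
            # Pause the pitch and disclose, conversationally, before continuing.
            return ("By the way, what I'd say next is paid promotional "
                    "content. Want me to continue?")
        return reply

    def record_consent(self, accepted):
        self.promo_consented = accepted

gate = DisclosureGate(StubAgent())
print(gate.next_reply("Where's the nearest charger?"))  # disclosure first
gate.record_consent(True)
print(gate.next_reply("Where's the nearest charger?"))  # now the pitch
```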

Debbie Reynolds  43:52

It should. I want your thoughts about inference. So this is a huge issue. I think with these AI models especially, companies may over-collect information, and with that over-collection, you may read things into the data that may not be true, right? The example I always give is from the Cambridge Analytica whistleblower testimony: one of the people said they had a thing called the KitKat project. Anytime they would show an anti-Semitic message to someone and that person hit thumbs up, they saw in the data set that there was a correlation: those people also liked KitKat bars. So then, do you infer that people who like KitKat bars are anti-Semites? Do you know what I'm saying? This is the inference problem, and I think that will raise the stakes in AI.

Louis Rosenberg  44:53

Yes, so inference works both ways. It could be correct, or it could be incorrect, and AI will have problems in both directions. I do think the negative problems, in terms of making incorrect inferences, really show up when these systems get used by law enforcement or by the judicial system, and that goes to this idea that it's really dangerous when we're taking humans out of the loop for decision-making, or taking AI output as evidence for decision-making. These AI systems can make incorrect inferences, biased inferences, and that's really problematic. So, just like you have to get a new drug approved by the FDA, if somebody's creating an AI-driven automated system that's going to make important decisions, employment decisions, hiring decisions, there should be a validation process. It should go through an approval process to make sure it's not prone to these types of problems, just like deploying a new drug goes through an approval process and has to prove that it's safe. If you're taking humans out of the loop in an important situation, you have to prove that it's safe. On the other side, inference is also being done very, very skillfully by these big AI models that have a lot of data about us, and it can be used to either manipulate us or identify us, personally identify us. I've been involved in a research project with some terrific researchers at UC Berkeley that is XR-related; we're using virtual reality, looking at the data collected by the most popular virtual reality game, Beat Saber. The data collected from Beat Saber is just this: you're basically holding a sword, and you're playing an exercise game, and the data is just the motion of your hands and the motion of your head. That's it. The research team got access to a very large set of data that the company provided, to look at what you can understand from hundreds of thousands, or even millions, of users. And what was discovered was that just looking at this very, very minimal data, just how you move your hands in a particular game, you could uniquely identify a user out of 50,000 people, which is even more accurate than a fingerprint, at least most fingerprints that get lifted off a surface. So already, just data that we don't think is invasive actually really is. But then the next level of that study was to correlate that data with surveys filled out by people. Can you infer anything just from looking at how they play Beat Saber? What can you learn about them? It turns out you can learn their height, their shoe size, and their weight; you can identify whether they have certain medical conditions; you can identify what kind of product they bought and what actual device they're using. You can identify all kinds of personal information about that person just from how they're playing, just from the coordinates of their hands during a game. As you can imagine, a conversational interface that's engaging you in dialogue, asking you questions, and reading your vocal inflections is going to infer a lot of information that goes beyond what you actually say. So there are really big privacy issues, because AI has become so powerful and can correlate so many things, and the amount of data that's out there is so great.
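To illustrate the shape of the motion-data identification result discussed above: once per-user motion statistics are enrolled, re-identification can be as simple as a nearest-neighbor lookup. A toy sketch in Python with invented feature values; the actual study used far richer features and trained classifiers, so this shows the idea, not the method.

```python
import math

# Toy enrollment set: per-user summaries of hand motion (mean controller
# height, swing amplitude, head bob). All values are invented.
enrolled = {
    "user_a": (1.42, 0.61, 0.05),
    "user_b": (1.18, 0.77, 0.09),
    "user_c": (1.35, 0.52, 0.12),
}

def identify(sample):
    """Nearest-neighbor match: whoever's motion signature is closest."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(enrolled, key=lambda uid: dist(enrolled[uid], sample))

# A fresh play session from a nominally anonymous headset:
print(identify((1.40, 0.60, 0.06)))  # -> "user_a"
```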

Debbie Reynolds  49:28

That's staggering. Thank you for sharing that. There has been a big debate on the Internet. And I'd love you to jump in on this. Do you think the US needs an agency focused on AI? So we're seeing, like in Spain, they created an agency for AI, and China's actually created a Generative AI law. Obviously, we see big legislation in the EU around AI, the AI Act. Do you think we need an AI agency in the US, or what do you think we need that we don't have?

Louis Rosenberg  50:01

So I absolutely think that we need something. We do need a governing body, an agency in the US that oversees AI, especially these large-scale AI systems, these large language models, these Generative AI systems. There are significant impacts to society: it's going to impact jobs, it's going to enable disinformation, it's going to enable human manipulation. It's also going to be used inside of systems that are making important decisions. And these systems are not flawless; they make errors, and they can make significant errors. You know, if you fly in an airplane, there's an agency that makes sure it's safe. If you drive a car, there's an agency that makes sure it's safe. If you take a medication, there's an agency that makes sure it's safe. We're going to be relying on AI just as much as we rely on cars and planes and medications. These AIs might be involved in medical decisions, employment decisions, legal decisions, financial decisions. There's just as much of a motivation and a need for an agency as there is for cars and planes and medications. And in fact, it's urgent; these AI technologies have been released into the wild, and they're being adopted extremely quickly. They all have APIs so that developers can integrate them. You think about the danger of these AI systems, and you say, well, what if there are just a few companies, you know, Microsoft and Google and Meta and OpenAI and a handful of others, who make these large language models? Do we just need to make sure that they behave responsibly? Well, even if those companies behaved responsibly, and even if those companies were following very strict guidelines, they have APIs that now allow anybody to implement these models in any application they're creating, any website they create, so these models will emerge everywhere. There needs to be an agency of consequence that is considering how to control that, because it's not just about regulating a handful of companies. There are only so many companies that can manufacture airplanes, right? A body regulating airplane manufacturers deals with a handful of companies. This AI technology is much bigger, and it really needs to be brought under control as quickly as possible.

Debbie Reynolds  53:10

So if it were the world according to you, Louis, and we did everything you said, what would be your wish for either AI or privacy anywhere in the world, whether that be regulation, technology, or human behavior? What are your thoughts?

Louis Rosenberg  53:27

Yeah, the things that I think a lot about are behavioral privacy and emotional privacy, which are really very different from how we thought of privacy in the past, about where you click and what data you give up, now that we have these systems that can track what you do over time and track how you feel while doing it. I think we have to have really strict rules around storing that information, so people cannot build behavioral models, cannot build emotional models. Because once you can build behavioral and emotional models, it's very easy to use those models to manipulate people: what those AI models can do is predict what you will do and how you will feel if they present you with a piece of information, a choice, or a situation. So they will be able to manipulate us really skillfully. Making sure that those types of predictive models, behavioral predictive models, emotional predictive models, are outlawed, I think, is really important. And then there's transparency, and by transparency I really mean interactive transparency. If I'm interacting with something that is driven by an AI, if it's an advertisement that's adapting to my emotions in real-time, that has to be disclosed. For sure, I should know that's happening. If it's doing that conversationally, that should have to be disclosed. If I'm interacting with a conversational agent that's an AI, and I'm a kid asking it for information about dinosaurs, and it's just giving me that information, fine. But as soon as it has any type of agenda, any type of influence objective, it would have to disclose that, and it would have to make sure that I acknowledge the disclosure. And honestly, if I'm a kid, it shouldn't even be allowed to do that to begin with. I think conversational influence and AI-generated influence should not be legal with kids, because they already have a hard time telling the difference between informational content and promotional content. Imagine how hard it would be if they're in an immersive world and a giant teddy bear walks up to them, starts talking to them and befriending them, and then casually mentions, you know, some toy that they really like and that they should convince their parent to buy for them. A kid is not equipped to know the difference between an authentic interaction with a character and a promotional interaction, even if there's some disclosure.
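As a concrete illustration of why the storage rules described above are the lever: even a trivial predictive model needs nothing more than a retained action log. A minimal sketch in Python; the logged actions are invented.

```python
from collections import Counter, defaultdict

def build_behavior_model(action_log):
    """First-order Markov model: P(next action | current action).

    The only ingredient is a retained behavioral log, which is why
    limits on storage, not just collection, matter."""
    transitions = defaultdict(Counter)
    for current, nxt in zip(action_log, action_log[1:]):
        transitions[current][nxt] += 1
    return transitions

def predict_next(model, current):
    # Most likely next action given what the user is doing right now.
    return model[current].most_common(1)[0][0] if model[current] else None

# An invented week of logged XR behavior:
log = ["enter_store", "look_at_shoes", "leave", "enter_store",
       "look_at_shoes", "try_on", "buy"]
model = build_behavior_model(log)
print(predict_next(model, "look_at_shoes"))  # the model now anticipates you
```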

Debbie Reynolds  56:36

Yeah, yeah. Wow, that's a lot to think about. Thank you so much for being on the show; this was wonderful. I'm happy that we get to collaborate on XRSI. I think it's a fascinating time to be in the space, and I'd love to see what's going to develop. Keep doing what you're doing. I love the work you do and the truth-telling that you do, because I think people just aren't fully aware, so being able to get the information out there is very important.

Louis Rosenberg  57:07

Yeah, this was fun. Hopefully, we can do it again six months from now, and we'll be talking about the positive developments that have happened in controlling AI.

Debbie Reynolds  57:17

Yeah, that'd be great. That would be great. Well, thank you so much for being on our podcast. It was fun. Thanks.