E83 - Noble Ackerson, President of Cyber XR

59:43

SUMMARY KEYWORDS

data, context, user, organization, google glass, privacy, apple, understand, information, xr, people, ai, trust, tying, talk, bias, google, collecting, model, space

SPEAKERS

Noble Ackerson, Debbie Reynolds


Debbie Reynolds  00:00

Personal views and opinions expressed by our podcast guests are their own and are not legal advice or official statements by their organizations.


Hello, my name is Debbie Reynolds, and they call me "The Data Diva". This is "The Data Diva" Talks Privacy podcast, where we discuss Data Privacy issues with industry leaders around the world for the information that business needs to know now. I have a special guest on the show, Noble Ackerson. He is the president of Cyber XR, and I'm happy to have him on the show.


Noble Ackerson  00:40

Thank you so much for having me. I'm excited for this conversation.


Debbie Reynolds  00:43

Yeah, this is going to be fun. So I just have to tell a story about how we met or got to know each other. You and I collaborate with XRSI, which is run by Kavya Pearlman. So she runs a safety initiative around all things reality, all digital types of reality. So that's XR, VR, all the R's, all the types of realities that there can be. And so I worked with her on, I think it was one of their privacy frameworks; I ran the compliance part of that. But I think you and I got on some joint calls a couple of years ago, but this year, I had the pleasure of being a panelist on a session about privacy in the Metaverse as it relates to all these different types of realities. And you were on there as well. And I was struck by your authority, right? The way that you speak about the space and the way that you're very passionate about it, and there aren't that many chocolate people that I can talk to. So I love to have chocolate people on my show. You're fascinating. So you're a tech wizard; I would love to have you talk about your journey in technology and XR, what you do, and your interest in privacy. I would love to hear you talk about that.


Noble Ackerson  02:21

Yeah, I'm an anomaly. One, being chocolate in the space, as you put it, and two, a bit of an enigma because I come from both a design and a technical background, which is perfect because I'm a product manager, an entrepreneur, and a recovering startup founder as well, so I have a little bit of the business sense too. I started my career as a confidential assistant for technology policy to now-Senator Mark Warner. This was when he was running for governor and when he was in office for the Commonwealth of Virginia. And it's funny, because my life has come full circle when we start talking about extended reality: where I live is at the intersection of how to be good stewards of users' data. So I talk a lot about Data Privacy. In talks, blog posts, papers, and conversations like this podcast, I'm right there when it comes to policy and policymaking around the impacts of data use in the practical applications we depend on today, and, looking a little further forward, how data is planned to be used in the future. So there's the end users' focus, the policymakers' focus, and then the creators' focus, which means roping in device manufacturers and studios or independent creators of experiences, whether for XR or generally for AI, which is actually my full-time job. That's where I sit, right in the middle, and it's perfect to be a product manager because I'm talking to those different stakeholders at all times. The second part of that question was, how did I get into this? As someone working in digital software, data is the bedrock of what I do day to day. Back in 2011, I was working through a consulting firm that a good friend and I had co-founded, and we were doing just consulting services. But I really, really wanted to work on a product in the emergent space. And I got hold of the Nike FuelBand, I think it was called. It's got an accelerometer in it, and it could count steps. And in my mind, if it counts steps, that means it tracks motion. So the question I wanted to answer was, could it count a jumping jack? The New York Times had this seven-minute workout that was super famous at the time; it was like 2010, 2011. That fascinated me: using a simple tool like the Nike FuelBand, which I actually worked out with, to do basic workouts like jumping jacks and be able to detect them on the device. So fast forward, I quit my comfy job to start up something called Byte an Atom Research, just looking to answer some of these questions. My hope was that a product would fall out of it, and one did: I built a fitness assistant on Google Glass. We were a launch partner for Google Glass, if you remember that device. One thing I learned through that process: if I'm tracking biometrics, my motion, my gait, in order to tell someone who's having a hard time walking and provide them with a regimen to help them live a healthier life, or to help somebody who just wants to work out without reaching for a tablet or a phone when they want to do a circuit, then that head-worn experience was great, for one, and two, I was collecting a lot of data, right? And I've always looked at it as, this is not my data. This is my users' data. For the fitness experts, it's their customers' data. That's their data.
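(As an illustration of the rep-counting question Noble describes — could an accelerometer-only wearable count a jumping jack? — here is a minimal sketch in Python. It assumes a stream of 3-axis accelerometer samples; the threshold, sample rate, and synthetic data are invented for illustration and have nothing to do with the actual FuelBand or Glass-era product.)

```python
import numpy as np

def count_reps(samples, rate_hz=50, threshold_g=1.6, min_gap_s=0.4):
    """Count repetitive movements (e.g., jumping jacks) from raw accelerometer data.

    `samples` is an (N, 3) array of x/y/z acceleration in g's. The threshold,
    sample rate, and minimum gap between reps are illustrative guesses, not
    values from any real device.
    """
    magnitude = np.linalg.norm(samples, axis=1)      # overall motion energy
    above = magnitude > threshold_g                  # spikes above the resting ~1 g
    # A "rep" is a rising edge: quiet -> spike, separated by at least min_gap_s.
    rising = np.flatnonzero(above[1:] & ~above[:-1]) + 1
    min_gap = int(min_gap_s * rate_hz)
    reps, last = 0, -min_gap
    for i in rising:
        if i - last >= min_gap:
            reps += 1
            last = i
    return reps

# Fake data: 10 seconds at 50 Hz with a burst of motion roughly once per second.
t = np.arange(0, 10, 1 / 50)
z = 1.0 + 0.9 * (np.sin(2 * np.pi * 1.0 * t) > 0.95)  # synthetic spikes
fake = np.stack([np.zeros_like(z), np.zeros_like(z), z], axis=1)
print(count_reps(fake))  # -> 10
```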
And so I started thinking about how to collect this data in a safe way, right, so that in the inevitable breach, you couldn't tie someone's workout routine, worst case, to their name and their email. A lot of governance around how I used the data and how long I retained it went into that product. Around that time, Google Glass was flailing, but when it launched, it was doing great. I was raising money, and I went on a flight. As I was clearing security and getting on the airplane, they checked my bags, and a flight attendant comes to me and says, you've got to take those smart glasses off. And I go, wait, my prescription is tied into these Google Glasses, and I'm not recording. And it's like, now you're making passengers nervous — by the way, as I was going back and forth over this, everyone around me had their phones out recording, right? So I started thinking — and this is actually what launched my career in, in my case, assisted reality, which is what Google Glass was — that data trust as a subject, or data stewardship, is an area of interest for me, because there are many reasons why Google Glass failed. But if I were to guess, there were three things that Google didn't do well. One was context. Everybody around me on that plane didn't understand what Google Glass was or did. And for those who knew that it could record, they didn't know how Google would use that data. So you'd hear nightmarish stories, like a passenger wearing Google Glass on the BART in San Francisco getting punched in the face because they had it on. There was no context. There was no understanding as to how the information was going to be used, how long it was stored, or whether Google would turn my face into selling me an ad. There are so many reasons for that. So that was the first thing, a lack of context, in my way of trying to explain why I think Google, or Google Glass, failed. Number two is: okay, in addition to not having context, let's say I believe that Google Glass did allow the wearer to record me. Where does that information go? Do I have the choice to opt out of being tagged, or of being sold to with ads, because Google is an ad company? That's where people's minds go, and the media perpetuates that, regardless of what measures Google takes to protect people's information or data subject information. So the next one, after context, was choice. We can talk about choice a lot in the context of — I saw your post very recently, Debbie, about cookies and cookie banners and consent flows. Consent, to me, is a great analog for choice. And choice in that context is: do I have the choice to add myself to, or remove myself from, the different categories and ways that you're collecting information about me, whether it's first, second, or third party? I'm going to anchor on choice for a second. I love using an analogy to explain choice to people like my parents, who may not be as nerdy about Data Privacy as me. I have a daughter; she is in daycare. I put her in the care of teachers, caretakers, the principal. That's the agreement: I take her to school, you care for her, and I pick her up from school. That's the agreement. If the school wants to take her to the park, the zoo, or swimming class, they come to me as the parent, right? They ask permission, and I give consent on behalf of my child as to the bounds by which they can break the existing agreement. Right. And so that's how I look at choice, right?
It's my data as a user, and I agree based on the context that I understand. So before you use my data for any other means — let's say I gave you access to track my behavior around the tool, the app, or the experience. That makes sense to me as a user, because I want my experience to count toward the improvement of said tool. Okay. But when that data gets sold to Cambridge Analytica and it changes the course of the world that I live in — had I known that context, I would have made a choice not to provide specific bits of information, or any of that information, to Facebook at the time. So that's choice. And then the last piece was control, if we're still tracking this super long explanation of why I feel Google failed. I understand the context. It's transparent. It's not written in legalese. I can give my consent, and I can revoke my consent. Now we start talking about the right to erasure and controls, right — the right to delete, deletion or erasure, the right to information. The utility and the usability of accessing the features in your app to act on those choices that I defined, based on what I understand from the context that you provide. So it's successive, right? Context, choice, control. Companies like Google and Facebook, and startup founders, start out creating great value for a user. Where companies fail — whether it be the Experian breach or something else — is that they erode trust, because there's no context. There's no information. I could ask you, Debbie: do you know and trust what your data is used for by your local bank? You don't know; there's not enough of a justifiable reason given for it. In some cases there's no context, there's no choice, and there's no control. And that's my TED talk — my diatribe around why I think Google eroded trust, why I think Google lost its way in fueling the adoption of the tool: because it didn't do enough to check those three boxes along the way. Also, they overhyped it, and it wasn't really an augmented reality device, but they called it that. So.
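(Noble's earlier point about storing workout data so that, in the inevitable breach, a routine can't be tied back to a name and email — plus limiting how long it is retained — can be sketched roughly as below. This is a minimal illustration, assuming a keyed hash for pseudonymization and a made-up 90-day retention window; it is not the actual product's implementation.)

```python
import hashlib, hmac, json
from datetime import datetime, timedelta, timezone

# Hypothetical secret kept outside the workout datastore (e.g., a vault/KMS),
# so leaked records can't be re-linked to a person without it.
PSEUDONYM_KEY = b"replace-with-a-managed-secret"
RETENTION = timedelta(days=90)  # illustrative retention window, not a real policy

def pseudonym(user_email: str) -> str:
    """Stable pseudonym: the same user maps to the same token, but the token
    alone reveals neither name nor email."""
    return hmac.new(PSEUDONYM_KEY, user_email.lower().encode(), hashlib.sha256).hexdigest()[:16]

def store_workout(db: list, user_email: str, routine: dict) -> None:
    """Persist only the pseudonym plus the workout payload, never raw identity."""
    db.append({
        "subject": pseudonym(user_email),
        "routine": routine,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    })

def purge_expired(db: list) -> list:
    """Drop records older than the retention window."""
    cutoff = datetime.now(timezone.utc) - RETENTION
    return [r for r in db if datetime.fromisoformat(r["recorded_at"]) >= cutoff]

db = []
store_workout(db, "user@example.com", {"exercise": "jumping jacks", "reps": 30})
print(json.dumps(purge_expired(db), indent=2))
```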


Debbie Reynolds  15:39

Very cool. Very cool. I love this — the three C's, you call it? I want to talk about context, and I really want to go deep on this one, okay? Because there are many levels to context, right, and context can change over time. So I think it's just such an interesting thing, and I like the fact that you brought it up. So what's happening now in the world? Let's talk about advertising. Okay, so what's happening now around the world on the regulation side: there are regulations trying to stop people from doing these data transfers from one party to the next without consent, a consent mechanism, and part of that is making sure that the individual has some type of context as to why this data transfer is happening. Right now, a lot of data — like you mentioned with Cambridge Analytica — if people knew that their friend was going to take a personality test, and then they were going to take their data, create psychological profiles, and then target them specifically with these ads and different things, they probably wouldn't have said yes to any of that, right? But there's so much that changes during that chain. And then, I guess, for me, context works on many levels, but there are two levels I really want to talk about. One is, let's say a marketer takes data collected for one purpose and uses it for another kind of marketable purpose, like maybe selling you another product. And the underbelly of that is someone selling this data for a purpose that you would never agree to — gait recognition is a great example. So in a fitness tool, you may track someone's gait or whatever. But what if someone says, okay, I want to use this information to create a risk model so I can deny people insurance? This is what the regulations haven't really caught up to yet. I mean, the technology is moving so fast. But talk to me a little bit more about context and where we are right now in terms of context.


Noble Ackerson  17:59

It's a great question, because I look at context — and you're absolutely spot on, there are so many levels to it. But to simplify how I think about it, I think about context and understanding from the organization's standpoint, and then context, and how information and understanding benefit the user, from the end-user or data subject standpoint. From the organizational standpoint, we've got so many different ways of understanding our data. That's a mature space — using metadata management, using data governance tools like AWS Glue while you're implementing your pipeline and that type of thing. You naturally need to understand, let's call it, first-party data. So you've got an e-commerce site, and people are actually buying things from you, so they're volunteering their PII or sensitive information to you. The question that I ask an organization is: do you understand your data? Do you understand its provenance, origin, and lineage? Do you have a risk register that helps you codify areas of potential leakage for the sake of security, but is also robust enough to help you explain to the user, should there be a right-to-information request? For example: I donated to a political campaign in 2006 — what information do you have on me? That is a tough question to answer, because when you dig into the lineage of user A, that person may have come to an event, attended a webinar, and if you're a robust enough organization, there are so many vectors of input. So when I enact my agency and want to erase my data — right to erasure — what data are you erasing, right? That was the business-owner side: understanding lineage, provenance, and retention in order to make certain policy and compliance decisions, and also to explain it to, or for, the end-user. The value for the end-user actually benefits an organization in a way a lot of people don't think about, because if I request my information, I learn that, oh wait, I'm getting these emails from this organization because I perhaps opted into getting more information after I bought the widget or donated to the campaign. I also learn, from the context shared based on the organization's understanding of who I am, that I'm actually very active: over an extended period of time I've donated multiple times — I thought I only donated once — and I've attended an event. But I may decide as a user that I don't want to be associated with this organization, individual, service, or company ever again. So that's one piece. And to explain it within the confines of extended reality, I've often used the example of a simple made-up application — a tagging, AR, augmented reality application where I put graffiti or create digital artwork within real physical spaces. Say I went to my favorite grocery store, and outside I drew a heart or an apple. And someone came around and defaced it, right? The apple or heart that I drew, whatever I drew, may have some unique information tied back to me. In this fictional example, I opted to share that information to get attribution, say, for the thing I drew that I was proud of. But say, a day later, I learn that someone else has come in and defaced it and put something inappropriate within that same space. Right, I would then have that context.
Because I would ideally be notified by the organization — because they understand their data, and will perhaps need to be in compliance with some laws — to let me know there's been a change. And then I can choose to remove my information from the thing I don't want to be associated with anymore, and the organization would have to act on that. That's how I think about context. Put simply: something that benefits an organization is understanding their data and governing that data for compliance needs, in order to serve and respect an end user's needs, and naturally to be compliant with the law.


Debbie Reynolds

Excellent, excellent. What's happening in the world right now with XR and things like that that concerns you the most? You're right on the cusp; you're forward-thinking. What are you seeing right now that maybe you can project into the future, that is top of mind as a concern in this area?


Noble Ackerson

I'm going to give the jokey answer before I give the serious answer; I just want to preface it that way. The jokey answer is that Facebook, Meta, took the name — Metaverse, or Meta — and now everyone is associating their future vision, or assumed vision, with a company rather than a set of varied experiences. That's actually not the serious answer. The serious answer is more about data stewardship and practices in current applications that have data governance problems, like dark patterns, if I were to quote Harry Brignull, who came up with that moniker. So take dark patterns as one example of a major concern when it comes to XR: we know that companies are going to want to maximize the value of whatever solution they're building. They're going to want to collect as much information as they can in order to reach that goal. And as part of that, currently, there are a lot of coercive tactics to get you to not read, to not get the right context, so that you don't act on your agency to choose and control the data they may have, because that goes against what they might think is value creation. So the dark patterns example sits inside a broader topic around data trust, and I think data trust — whether we're talking about the trust between users and the organizations they care about, or just users and the data that they shed to these organizations — is massive. Why? Because in addition to what we collect today as standard fare — names, what we define as PII or sensitive information, or even personal health information — with VR, that expands a bit. You get deeper insights into a user's day-to-day. So let's break down the headset. The headset has optics in order to give you passthrough — I'm looking at the Oculus Quest 2, for example, which gives you grayscale, but a convolutional neural network does not need color to discern information from its environment in order to give you occlusion, light estimation, that kind of stuff. That data, if it falls into the wrong hands, or if it's acquired through the use of dark patterns, or if an organization generally makes it very difficult for me to understand how that data is used, makes it difficult for me to give consent to the types of data that are going to be used. That is a darker path, because of the wealth of information that a VR or AR experience will be able to provide. Not only is it an inside-out optical understanding of the space to give me an experience, it's also more biometric information. We talked about how you're walking — your gait.
We talked about your IPD, or interpupillary distance. We talked about your head motion. There are technologies and academic papers on brain-computer interfaces in order to tap into emotion. Did I squint my eyes? The inward-facing cameras in a device are actually built for several things, including foveated rendering, so that the performance of my experience can be optimized — but now you can also sense whether a person is scared or enjoying something based on how my eyes behave. There's so much data being shed. And as to your question, my biggest concern is that if an organization isn't a good steward of their users' data, isn't enacting safety and privacy mechanisms, isn't compliant with what society, the law, and regulations might deem normal or expected, we could cause accelerated harm. Accelerated in the sense that the harm we may have today, when an experience leaks all my personal information, is almost laughable next to tying my psychographics, my interests, to the PII that is already collected, tying that to my biometric information, and getting a more holistic view of who a user is — and putting that in the hands of a bad actor or an immoral organization. Now that's bad news bears for anyone. And it ultimately erodes trust, because it only takes a few of those examples in this emergent space to erode the adoption that everyone wants. And since the word Metaverse came back into normal parlance, people are going to treat the bad experiences that get reported on as the norm. That is a lack of trust. I've used the word trust a lot — data trust. The way I define it is those three C's: the transparency that you get, the value that the organization is delivering, and the resulting acceptance by a user of any consequences that may happen. Apple, for me personally, embodies my level of trust. They talk a big game, but they have slip-ups. Still, they're transparent in how they use my data, and they fight for me in public, I presume — or at least I think they deliver good value for me. By the way, I'm an Android phone user, but most of my devices are Apple, for the most part, and they deliver good value for me. So if there's a slip-up and I learn that Siri sends anonymized voice data to Apple to improve the Siri experience, I accept that consequence, and it's not a big deal. It's more of an art for me than a science — just trust that they're going to do right by me in the event of an issue. So again, another long answer.
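(One way to read the three C's in code: a stream of XR telemetry is only collected when the user has been given the context for a specific purpose and has chosen to allow it, and that choice can be revoked. The stream names, purposes, and API below are hypothetical, not from any real headset SDK.)

```python
from dataclasses import dataclass, field

@dataclass
class ConsentLedger:
    """Records, per user, which data purposes they have explicitly opted into."""
    granted: set = field(default_factory=set)

    def grant(self, purpose: str) -> None:
        self.granted.add(purpose)

    def revoke(self, purpose: str) -> None:   # "control": consent can be withdrawn
        self.granted.discard(purpose)

# Illustrative mapping of XR telemetry streams to the single purpose each serves.
STREAM_PURPOSE = {
    "eye_tracking": "foveated_rendering",   # render quality, not emotion inference
    "ipd": "display_calibration",
    "gait": "fitness_coaching",
}

def collect(stream: str, sample, ledger: ConsentLedger):
    """Drop the sample unless the user consented to this stream's declared purpose."""
    purpose = STREAM_PURPOSE.get(stream)
    if purpose is None or purpose not in ledger.granted:
        return None                          # data minimization by default
    return {"stream": stream, "purpose": purpose, "sample": sample}

ledger = ConsentLedger()
ledger.grant("foveated_rendering")
print(collect("eye_tracking", {"gaze_x": 0.1, "gaze_y": -0.3}, ledger))  # collected
print(collect("gait", {"cadence_spm": 112}, ledger))                     # None: no consent
```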


Debbie Reynolds  33:19

No, I'm glad you brought up Apple. I posted an article — and actually, I don't know, maybe it's gone viral now — but the article I found said that as a result of Apple's privacy changes, the market estimated that companies affected by this change lost over $280 billion. That's a lot of money over a short period of time, and this change — I don't even know if it's been out for a year yet. What do you think about this?


Noble Ackerson  33:55

I think if I were to, one day when I grow up, start a business scoring companies on their three C's, or a trust score — I don't know whether that's a thing or not; something I've been mulling over is how you would score these — Apple would score pretty high. They'd be in my top right-hand quadrant of any sort of chart I might have when it comes to my trust and the issues that they may have. I am not a Facebook user, so I don't really know how that affects Facebook, technically. I'm also not an iPhone or iOS user, as I mentioned, so I don't know what they've done to address the issue in the article that you shared. But from what I hear — and maybe I can turn it around and you can tell me a bit more about it — Apple now tells you what Facebook is using at a given time. That's a hard problem to solve, so I give them a lot of kudos for leaning into that.


Debbie Reynolds  35:16

Yeah, well, in a nutshell — and I did a video about this last year, it's pretty funny, saying this exact same thing: if you all don't change your business models, you're going to lose a ton of money, which has basically happened. Basically, what Apple did was change the way that they share data with third parties, right? So if I'm an Apple user, I'm the first party; I give Apple data. They made that data sharing opt-in instead of opt-out. So instead of just sharing it with third parties and then making me opt out, they basically opted everyone out, and we have to opt in now. So a lot of that data that advertisers or marketers could get, just by virtue of the fact that, for example, I downloaded an app on an iPhone in the past — not only would they know my device ID and things like that, but they would know the amount of money that I've spent on other apps on the phone — a lot of information, right, about people. And in terms of marketing, they estimate a person with an Apple device is worth something like ten people with Androids, right, because they spend a lot more money. So getting an Apple device user, and getting information about what they like or what they spend, was like a gravy train. That gravy train has been cut off now. And they said that as a result of this App Tracking Transparency change they made, only five or six percent of people opted in. So I mean, that's a huge message — even though, obviously, it's easier on privacy to have a tool that opts you out of stuff by default and then makes you opt in, right? Because that means if people really, really want this stuff, they're going to agree. But the problem — and this goes back to your trust thing — is that, with the companies that were using this data, first of all, a lot of individuals didn't know who these companies were, right? You can't even trust a company that you don't know to begin with. And then, even the companies they did know, they don't trust them, so they're not sharing this data. So this is a huge problem, and I think it's showing up on people's balance sheets in red now.


Noble Ackerson  37:55

Good. Sorry to take such a positive spin. No, no — there are individuals behind these companies who come into work every day not looking to do harm to anyone. But like I said at the very beginning, they have different definitions of Data Privacy. This is more aligned with Tim Berners-Lee's thinking — starting to look at a digital locker, or a way for me to control the data that I want to shed. It's almost like a step in that direction, that libertarian view that it's my data, and if you need it, I can charge you for it. It's not at that point yet, but I love the idea; I love the concept. I do think that third-party data is slowly fading away, and so marketing professionals need to evolve and find new ways, because the cookie, the brokerages, and — the underbelly, as you put it — the exchanges that peddle my psychographic data as third-party data are going away. Once cookies go away — Google introduced FLoC last year as a proposal, and it bombed, and then they introduced, I forget what they call it, I think it's called Topics or something. I would love to see that proposal gain some traction, and to see partners scrutinize it and adopt something like it. Because it takes the opaque nature of cookies and puts what is being tracked in plain and simple English around my psychographic data. It's only retained for, I think, three weeks or something, and the categories — travel, fitness, some other categories — live within Chrome, Google Chrome being the first browser this would launch in. I can go in and understand what data is being collected — right, context — and then enact my choices: nope, I do not want you to track anything health-related. So, excellent. Theoretically, what I've understood of the proposal aligns with what I think Apple is doing. It's unfortunate that billions of dollars have been lost in the market, but if that comes at the cost of providing a safer experience for individuals, then adapt is what I say.


Debbie Reynolds

Right, exactly. I'm like, enjoy the fact that you had a gravy train for all this time and move on. The world is changing now. So that's one thing I would love for you to talk about a bit — and I've seen your TED Talk; you did one at Tysons, and it was really cool. One thing that I think is very important, and will be very important as we embark on using more advanced things like AI and XR, is the explainability of AI and AI systems. So talk to me a little bit about the importance of explainability and how you approach that in your work.


Noble Ackerson

Wow, where do I start? Explainability is a subset of what some people call responsible AI, or the responsible use of AI. There are other parts of responsible AI, such as Data Privacy and security, and of course explainability is a key one. The point of explainability is to shine some light on the black-box nature — I hate that term — of machine learning predictions, or the deep learning AI predictions to come. Far too often, when an AI makes a prediction, a data scientist or an ML engineer might be able to understand the F1 score, the performance; they know the success rate of the model from the perspective of its confusion matrix — which is basically, what false positives do I have in this dataset — or even things like whether the data is balanced or unbalanced, in order to solve a specific need.
So explainability actually couples nicely, on the organization side, to the context piece of those three C's. Sorry for just weaving that thread right back in.
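(For readers who want the metrics Noble mentions spelled out: a minimal, hand-rolled confusion matrix and F1 score on toy labels. The numbers are invented; the point is that these scores describe how often a model is right, not why it made any individual prediction — the "why" is what explainability tooling adds on top.)

```python
# Toy binary classification results: 1 = positive class, 0 = negative class.
y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false positives
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # false negatives
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)  # true negatives

precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)

print(f"confusion matrix: TP={tp} FP={fp} FN={fn} TN={tn}")
print(f"precision={precision:.2f} recall={recall:.2f} F1={f1:.2f}")
# These numbers say how often the model is right overall, but nothing about
# why it made any single prediction -- that is where techniques like SHAP,
# LIME, or integrated gradients come in.
```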


Debbie Reynolds  44:15

No, no, perfect, perfect.


Noble Ackerson  44:17

It's important to tie that back in. So, explainability as part of the practice — let me just sum up what I've said — is part of the responsible use of AI, which is part of the human-centered AI practice, HCAI. One key stakeholder that benefits from it is the data engineer. Another beneficiary of explainable AI, when you are able to explain why an inference or prediction was made, is the consumer. Why? Because it increases trust — again, bringing that back in. It prevents harm. It prevents unfairness, talking about bias and transparency. For an end-user or consumer, it helps you understand the impact of a prediction. I'll give a quick example I was just reading from Google: the Google Flights team, with a responsible AI or ethical AI team, put together a white paper — I'm oversimplifying here — about how the Flights product has a when-to-buy-your-ticket feature. It will say Tuesday at three o'clock is the best time to buy, and the consumer gets a nice little tooltip that explains why that time is the best — not in too much detail, but at least enough to afford trust and increase trust. We can also talk about how, for an end-user or consumer, it prevents or minimizes bias, and I'd love to get into that, especially because I'm Black and I care about those kinds of things.


Debbie Reynolds

That's right.


Noble Ackerson

The last stakeholder for explainability is policy and compliance — both internally, with your legal team and your policy team at a large organization, and externally, explaining to policymakers and lawyers and so on. One last thing: being able to understand the model's behavior is supercritical. Practical explainability, or XAI as it's coming to be called, is critical in the construction and preparation of your data, in building and training your model, and in evaluating, deploying, and monitoring it — from when you're collecting and refining your data collection process, getting a lot more context as to the state of your data using things like SHAP or LIME or integrated gradients, to building and training, where you verify your model's behavior through explainability, and so on and so forth. That's my train of thought. But yeah, that's explainability in a nutshell, and it ties right back into context: from an organizational standpoint, do I understand my data — in this case, the model?


Debbie Reynolds

Right, right. So I would be remiss not to have this conversation with you about bias in AI. This is one reason why I'm in privacy now, and one reason why I talk about it a lot — because, you know, it directly impacts me. The example I gave someone, and that's kind of where I think we are right now: let's say you go to a grocery store, and there's a mat that you step on to open the door. You step on the mat, the door opens, but the person behind you steps on the mat and the door doesn't open. The first person is like, well, there's no problem, because this mat opens the door for me — and they don't want to investigate why it doesn't work for the other person. I feel like that's where we are with the bias discussion on AI and these technologies. What are your thoughts about that?


Noble Ackerson

One, my thoughts are that it's still something that I'm learning; I have to be brutally honest.
And what I'm learning over the last year or two is that there's no one simple definition of fairness. When you talk about bias, you talk about what's fair within the context of the experience that you're trying to deliver value for. There's no one simple definition, which is why organizations need to look at the domain and what they're trying to achieve. If it's a doormat that uses some sort of convolutional neural network to understand who's coming through the door for security benefits, for example — just riffing off your example there — what's fair, right? What's the one thing, or the N things, that society as a whole today may accept as fair? And then, of course, does that definition align with my organization's principles around delivering the solution? So fairness is very, very hard, a tangled web, to define. But with the door example, assuming there's some sort of camera and the benefit is security, whether people are discriminated against based on what they wear, what they look like, or the color of their skin may be where your definition of fairness lands. And so then you have a bunch of knobs that you can tweak with your model to say, okay, I'm optimizing less for the performance of my model and more for fairness, or to debias. In any situation, I want to train my models with a dataset that has enough people of a variety of shades of color, types of clothes — so if I'm wearing my hat backwards, it doesn't assume that I'm a criminal, or if I have my hoodie on and a mask, it might just be because there's a pandemic and it's cold. Historically, that was never thought of. So you can't talk about bias without talking about fairness, and fairness is very hard to define; it's domain-specific and situation-specific. Then we get into bias, right? There are tons of different behavioral biases, tons of different types of bias. And actually, in my opinionated view, there is good bias — the knobs that I tweaked earlier to optimize for fairness versus performance. Because it's like a squeeze toy: once you squeeze one, the other takes a hit. That's actually how I've seen it work, and it's very tough. I've been doing this for my full-time job, and we're trying to bring some of these practices in and bake them in by design with every model that we build. And so yeah, I'd love to unpack that if we have time.
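(Noble's "knobs" framing can be made concrete. In the sketch below, a per-group decision threshold is one such knob, and each setting is evaluated on both overall accuracy and the demographic-parity gap — do two groups receive positive decisions at similar rates? The scores and the injected bias are invented; real debiasing work uses richer fairness definitions and tooling.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: model scores and true labels for two groups, A and B. The score
# distribution for group B is shifted down to mimic a biased model.
n = 1000
group = np.array(["A"] * n + ["B"] * n)
labels = rng.integers(0, 2, size=2 * n)
scores = rng.normal(loc=labels * 1.0, scale=1.0, size=2 * n)
scores[group == "B"] -= 0.5   # injected bias against group B

def evaluate(threshold_a, threshold_b):
    """Overall accuracy plus the demographic-parity gap for per-group thresholds."""
    thresh = np.where(group == "A", threshold_a, threshold_b)
    pred = (scores >= thresh).astype(int)
    acc = (pred == labels).mean()
    gap = abs(pred[group == "A"].mean() - pred[group == "B"].mean())
    return acc, gap

# Knob position 1: one global threshold, which ignores the injected bias.
print("global threshold:   acc, parity gap =", evaluate(0.5, 0.5))
# Knob position 2: lower group B's threshold to shrink the parity gap.
print("adjusted threshold: acc, parity gap =", evaluate(0.5, 0.0))
```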


Debbie Reynolds  52:26

Yeah, yeah, that'd be great. That'd be great. So if it were the world, according to Noble, and we did everything that you said, what would be your wish for privacy anywhere in the world, whether it be technology, regulations, or human stuff? What are your thoughts?


Noble Ackerson  52:45

I'll take this super quickly from the policymaker standpoint: that they don't over-regulate before the technology is ready. I know that currently, in the US, we do not have a Federal Data Privacy law. It's like the Wild West; every state is taking a different path at it, or several states are. Virginia, where I'm from, for example, passed one in the last administration. I would hope that policymakers work closely with device manufacturers and content creators, publishers, and everyone in between, and listen to some of these issues. Also, work with the end-user — have these listening sessions with end-users, or just look at data, qualitative and quantitative, to understand the impacts. So that's the policymakers. On the device manufacturer side, an ideal world would be one where software development kits, SDKs, or APIs, and the interfaces and interactions with the device, are open in a way that is also governed — open yet governed. What I mean by that is: increase innovation by not creating fiefdoms and proprietary hardware where innovators who want to do right by users have to wait until a feature becomes available publicly. So, a responsible way to open up some of these hardware devices, both augmented and virtual. For the creators: a lot of the experiences that end-users want to benefit from come from them — no pressure to them, but it's the content creators' space right now, no pun intended. So essentially, there's a lot of pressure on creators and studios and experience developers, whether it be for e-commerce, education, healthcare, or entertainment, to practice those three C's: be transparent in the types of data you collect and provide enough context; provide choices for users to actually opt in and opt out as they wish; and give users control of their data based on one and two. And as a result, if you have to practice data minimization, or move your algorithms to a different method of AI like federated learning or edge-based learning, look at that as a benefit that will inspire new avenues for you to make money. It is positive-sum, because end users feel safer and give you their data, you provide them with the controls over it, and you essentially learn new ways to provide better experiences. And I've been talking a lot about progressive disclosure in games like Candy Crush, or whatever it's called — I don't play Candy Crush — but as an example, you get nudges along the way, and they're only collecting, or giving, information — in that case, in the case of gaming — when the user needs it, progressively. I think there is a space for that in AR or VR and extended reality experiences. When I look down, or my gaze is not focused on a thing, I have a virtual orb or assistant that tells me, in the context of what I'm doing and where I am, what data could be collected. And I can interact with that personal agent of sorts — my personal guardian, I guess we'll call it; I'm just making this up on the fly for you. That would allow me to act on any choices that I feel like acting on, in context with what I'm about to do, and to understand the context, or the features I may not be able to experience, if I do not give my name.
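(Noble's made-up "personal guardian" is essentially progressive disclosure driven by context: surface only the data requests relevant to what the user is doing right now, and default to deny unless the user says yes in the moment. A tiny sketch of that idea follows, with entirely hypothetical contexts and data categories.)

```python
# Entirely hypothetical: which data requests are even worth surfacing, per context.
RELEVANT_REQUESTS = {
    "checkout": ["payment_token", "shipping_address"],
    "fitness_session": ["heart_rate", "gait"],
    "social_space": ["display_name", "voice_chat"],
}

def guardian_prompt(context: str, requested: list, decisions: dict) -> list:
    """Return only the requests that are relevant now AND approved by the user.

    `decisions` stands in for the user's in-the-moment choices (the "orb"
    asking: this space wants your gait data for coaching -- allow?).
    """
    relevant = set(RELEVANT_REQUESTS.get(context, []))
    approved = []
    for item in requested:
        if item not in relevant:
            continue                       # progressive disclosure: don't even ask yet
        if decisions.get(item, False):     # default deny unless the user said yes
            approved.append(item)
    return approved

# A fitness space asks for everything; the guardian surfaces only what's in
# context, and the user allows gait but not heart rate.
print(guardian_prompt(
    "fitness_session",
    ["heart_rate", "gait", "display_name", "payment_token"],
    {"gait": True, "heart_rate": False},
))   # -> ['gait']
```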


Debbie Reynolds  57:47

Oh, wow. That's a futurist answer.


Noble Ackerson  57:50

I was just making it up as I'm going.


Debbie Reynolds  57:54

Very good. Well, this is amazing. Thank you so much for being on the show. This is fascinating. I love what you're doing. You're definitely on that leading edge. And I like to hear smart people talk through this — having practical experience, but also understanding how to communicate it to people at all levels. That's great.


Noble Ackerson  58:18

Thank you. Hopefully, if it makes sense once this podcast goes out, I've done my job, because I tend to like to break down complex subjects in a way that I understand in order to explain them to somebody. So I am honored, and I thank you so much, Debbie, "The Data Diva," for creating the space for me to talk to you and your audience. And hopefully — who knows what happens in the future — maybe I can come back and talk again; I'm just inviting myself, by the way. We'll find ways we can collaborate. That'd be great. Thank you so much.
