E130 - Franklin Graves, Technology Counsel, HCA Healthcare; Attorney; Creator Economy Law Newsletter Author

54:03

SUMMARY KEYWORDS

privacy, ai, laws, creators, image, data, lawsuit, ftc, tool, people, model, output, content, filed, youtube, platform, case, called, publicity, brands

SPEAKERS

Debbie Reynolds, Franklin Graves

Debbie Reynolds  00:00

Personal views and opinions expressed by our podcast guests are their own and are not legal advice or official statements by their organizations. Hello, my name is Debbie Reynolds; they call me "The Data Diva". This is "The Data Diva" Talks Privacy podcast, where we discuss Data Privacy issues with industry leaders around the world with information that businesses need to know now. I have a special guest on the show. Franklin Graves is an attorney. He focuses on emerging technology, intellectual property, and media law. He's also in-house counsel, serving as Technology Counsel for HCA Healthcare. Probably one of the reasons why I know Franklin is that we collaborated on an article together a couple of months ago; we can talk about that, as well as child privacy. He is also the creator of the website and the newsletter called The Creator Economy Law Newsletter. Welcome.

Franklin Graves  01:15

Thank you so much for having me, Debbie; it's been great to virtually connect with you. You're very active, pretty much on every platform, and I love following along with the updates. It's such an honor to finally join your podcast and get to talk through some of these fun, fun issues.

Debbie Reynolds  01:33

Yeah, well, I am just over the moon that you agreed to do this podcast; I think the work that you do is so unique. And your newsletter is, like, stellar. It's unbelievable, with the amount of detail that you go into, and the links and all the references that you include. You're really honing your legal chops, right, because you back it up with the evidence and everything. So a lot of times I go onto your newsletter to do research: Franklin had that link to so-and-so, so I go there when I'm trying to find different things. So it's great to have you on the show. You crisscross all types of things in technology, and I think that's the reason why I like your newsletter so much: it goes horizontally across all kinds of data and tech, so it has a little cybersecurity stuff in it, primarily privacy stuff, and you really go deep. So tell me a little bit about yourself, your trajectory, and why you decided to do this newsletter.

Franklin Graves  02:46

Yeah. Oh, cool. Yeah, thanks. So my background: I'm an attorney. I know that you have a lot of legal professionals, privacy professionals, and cybersecurity professionals that listen, so I would slot myself into the legal framework for most of that; that's where my expertise comes in. And as you said, I learn week to week, because it seems like every week, if not monthly now, there are new laws coming out that all of us have to digest and think through the ramifications of. That's really what spurred me to start the newsletter as well, this concept of, okay, I'm doing this already, just personally. Putting aside what I do for my day job, I love keeping up with stuff like that. And then I also work with creators. I think back maybe almost 10 years ago, I probably would have still been referring to them as Internet creators, but now the whole ecosystem has really taken off and taken shape, so they're referred to as creators, and it's called the creator economy, and it captures creators and brands and the platforms. For me, it's a melding of so many different areas of law, areas of practice, that I just love and am fascinated by. That's where this concept of creator economy law came from, because it allows me to dive into all of that on a week-to-week basis, and provide updates not only to myself but to others, doing what I was really doing to begin with: just staying up to date on stuff. But working backward, I guess, with my career: my day-to-day job is for a healthcare company, doing IP, media, Data Privacy, AI, and emerging technology work for a for-profit, publicly traded company. Prior to that, I was working for Eventbrite, which is the online ticketing and registration platform. I was on their commercial team, negotiating commercial contracts like sales contracts and vendor agreements. I think it was so much fun when I was there because I was right on the cusp of when GDPR went into effect; I joined around, I guess, May 2018, which is when it went into effect. I had gotten a taste of privacy with my first job out of law school, which was for a global record label, and then I really, really got into the weeds of negotiating DPAs and privacy clauses and privacy policies and all of that at Eventbrite, and that has carried forward with me as well. But even before that, working for that record label in the music industry, a global music company, I had the opportunity to understand, okay, wow, this is what working at a global level looks like, and how there are so many different laws, not just IP laws like copyright, trademark, patent, and trade secrets, but also, just generally, this is how the music industry works, this is how privacy works, this is how you handle email lists and customer communications, all that kind of fun stuff. So I really had the benefit over my career, about to hit 10 years of being a licensed attorney, of having experience and exposure across a wide range of issues and a wide range of industries. So I take that with me.
And in all my free time, I work with Internet creators; I work with creators on a pro bono basis, just to help them negotiate a book deal they're working on, or understand, okay, here's what fair use means, whether or not it really applies, and whether you can really rely on it, stuff like that. But then, even now, you and I got to talk about the privacy issues for creators as well, because it's all coming full circle: you can't really operate in the tech space, you can't operate in the online Internet ecosystem, without at least being able to issue spot.

Debbie Reynolds  06:44

Yeah. Well, it comes through; it comes through in your work, it comes through in the things that you publish, that you have this cross-industry experience and understanding. And this is kind of new, right? The idea that you have people who have these rights, it's kind of like rewriting the way people think about intellectual property: what's private and what's not, what can you use, how can you move forward in your career? Those are all really great things. But let's start by talking a bit about the article that you wrote. You asked for a quote from me about children's privacy, and it's a really good article; I did a video a little while after that about this. So tell me a little bit about the article, and we can discuss it.

Franklin Graves  07:47

Right, yeah, absolutely. So the court case you're talking about, that we've alluded to, was actually a class action lawsuit, filed in October 2019. Within a class action lawsuit, you have to define your class of individuals that were harmed or that are seeking damages of some kind, and in this case, the class was a class of children; it was through their parents and guardians, as their representatives, that the lawsuit was filed. The claims really center around the use of targeted advertising, and I think you and your audience will know the tie-in there is, of course, the use of persistent identifiers, or persistent IDs. Those, of course, are tied directly back into the platform on which children are watching videos. So the lawsuit was really centered on how YouTube, which is a division of Google and, larger than that, Alphabet, was using targeted advertising, and in the process of doing so, was collecting data and tracking the behavior of its users. The problem there, though, is those users were children. YouTube has a lot of children's programming and content that was really pulling in that audience of young viewers and young eyeballs, for lack of a better term. And the problem is that anybody could access YouTube; you didn't really have to be logged in. Or, if you were logged in, you were accepting their online terms, but can children consent to those? That's a whole other issue. So YouTube, in order to fund its operations and, later on, as it became more established, to pay creators, was collecting the viewer data and also the online behavior of children within its own platform: what they watched, how they watched it, all of that. So why are we talking about a 2019 class action now? Well, in August of 2021, the district court granted the motion to dismiss that was filed by YouTube and the channel owners, which of course means the plaintiffs were going to lose their case if it stayed dismissed. So then what happened? The class of children (it sounds really weird to call them that, but that's how you refer to them in these cases), through their attorneys and their parents, appealed that dismissal. The appeal went to the Ninth Circuit, the Federal Court of Appeals, which ended up overturning the lower court's decision to dismiss the lawsuit, and they are allowing the class action to proceed. And the highlight that I have here, for creators and for brands: I know a lot of people think, oh, YouTube, anybody can upload content, but in reality, there are a lot of brands, a lot of corporations, that have accounts that release content that may or may not be targeted toward children. So the takeaway is really understanding, okay, there's a lot of liability that can come about just by sharing content online. Historically, I think we've seen a lot of lawsuits filed against the platforms themselves and the ways they were acting, or in some cases not acting, to protect privacy, or were oversharing information. I mean, with Facebook, you have the whole Cambridge Analytica matter, and a whole laundry list of issues that you can probably go into, especially if you start looking at it overseas.
So, in reality, what we're looking at here, and why I think this is an important case, is the exposure to individual channel creators. In this case, it was, like I said, a mix of brands, so you had Mattel and Hasbro and DreamWorks Animation, but then you also have individual YouTubers. Whether or not he's an individual YouTuber anymore is up for debate, but there's Ryan's World: Ryan is the YouTuber, the child, that would unbox and play with toys; he started out that way, and it's a tens-of-millions-of-dollars industry now. But there's also an independent YouTuber that was mentioned, and the channel has changed since the lawsuit was filed, but that individual YouTuber was creating content that was the same kind of thing: unboxing toys, stuff that children would really want to be watching, including thumbnail images of toys and cartoon characters that children would recognize, just to entice them and help them click on the video, and have those engagement numbers and the watch time increase for YouTube, which is very important for the algorithm to know, hey, we need to feed this to other users that are enjoying and consuming similar content. So yeah, that's a brief overview of where we were and why we are here with this case. And then, as you mentioned in your video, there's also the tie back to the 2019 settlement between YouTube and the FTC, the Federal Trade Commission. I think that's really important to look at, because in the documents that were filed by the FTC in connection with that settlement agreement with YouTube, they mentioned some of the specific channels. A class action attorney then came along and said, oh, here's a good group of parties that we can file a lawsuit against, and maybe get some money for the class of children that were clearly, or arguably, harmed by YouTube and Google's activity, where channel owners, content creators, and big brand companies were targeting them and trying to get them to watch. All of that plays in with the privacy law claims that can be brought against them.
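
To make the mechanism above concrete, here is a minimal Python sketch of how a persistent identifier (a device or cookie ID) can drive behavioral ad targeting with no account or login at all, which is why the FTC's COPPA Rule treats persistent identifiers as personal information. Every name in it is hypothetical and illustrative, not any platform's real code.

from collections import defaultdict

# viewing history keyed only by a persistent ID; no name or login is needed
history: dict[str, list[str]] = defaultdict(list)

def record_view(persistent_id: str, video_id: str, child_directed: bool) -> None:
    """Log a view for ad targeting, unless the content is child-directed."""
    if child_directed:
        # COPPA-safe path: no behavioral profile may be built; at most,
        # contextual (content-based, not viewer-based) ads could be shown.
        return
    history[persistent_id].append(video_id)

def pick_ad(persistent_id: str) -> str:
    """Choose an ad from the profile tied to the persistent identifier."""
    profile = history[persistent_id]
    if not profile:
        return "contextual-ad"  # nothing is known about this viewer
    return f"behavioral-ad-based-on:{profile[-1]}"

record_view("device-abc123", "toy-unboxing-episode-9", child_directed=True)
print(pick_ad("device-abc123"))  # contextual-ad: no profile was built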

Debbie Reynolds  13:48

Absolutely. And there's another kind of blockbuster theme running here that you and I chatted about, and it has to do with the preemption of privacy laws. So part of the reason why this lawsuit was revived is that the judges ruling on it addressed the claim that the case should be dismissed because the US has a Federal law called COPPA around children's privacy, and that, because of that, the State claims should be dismissed, since COPPA handles children's privacy. One reason why this was overturned is they said, well, States have rights, States have laws; just because there's a Federal law, it doesn't excuse you from these other State laws. And I think this is the crux of the problem that we have in the US, and the reason why people don't really understand why we don't have a Federal privacy law right now. States have rights as well, and they may not be the same as Federal rights. So even though some Federal laws can preempt certain things, they can't preempt everything in State law, right? Some State laws have what's called general applicability, whereas on a Federal level they may be more narrow in the scope of what they can do, where States may have more latitude. And so that means that Federal law doesn't necessarily cancel out what's happening at the State level. To me, this is one of the biggest issues in the US around privacy law, and the reason why we can't seem to get anything done, because people think, well, let's get this Federal law, and it's going to preempt State law. No, it won't. It may preempt some State laws, and it may actually stop other States from trying to pass laws if we feel like the Federal law sort of takes care of it. But it doesn't stop States from being able to pass things that are outside the scope of what, for example, the FTC can regulate, what commerce regulation can cover, things like that. So what are your thoughts there?

Franklin Graves  16:15

No, I think you're spot on. I think that's a really important thing to know, and a very nuanced distinction, like you mentioned, for the US and the situation. One of the important takeaways here is that a lot of brands and creators and even platforms recognize, okay, we have the Children's Online Privacy Protection Act, which is only enforceable by the FTC. And so, this is terrible, but they might work into their business model an understanding of, okay, the fines we're going to get from the FTC, because they're the only ones that can bring an action under it, are going to pale in comparison to the revenues we'll generate or the value we'll add back to our shareholders. I'm not saying that's what these businesses were saying, but that kind of risk analysis and business analysis really comes into play. Where I think a lot of that is getting undercut is where State laws come into play, and we're seeing that with the patchwork of State-level privacy laws that are being enacted, or the amendments to them that are happening, like in California. And the true question will come at a later date; put a pin there with a question mark: what Federal regulation is going to come in and override CCPA or CPRA, or Nevada's, or what have you? That question remains. Can Congress, can legislators, get together enough to pass sweeping privacy regulation? Who knows? But until then, for creators and brands, oftentimes it was like, okay, we have the FTC guidance, and they can enforce against us for not properly disclosing something as an advertisement, for example. That's one way that the FTC can regulate, like you're saying, that relationship between consumers and people that are selling something. But a lot of brands and creators and people in the creator economy space haven't really taken into account, okay, now we have to be aware of these other laws. What other pitfalls do we need to be aware of? I think this case is perfect to highlight that not everything is preempted by a Federal law the way you might have thought, and now you have to really be doing an analysis on a State level of what State laws there are. And, not to get political, but I think we're seeing that, too, with how platforms are being treated in certain States like Texas and Florida, or even outside the US, where entire countries are banning TikTok, for example. So what other external factors will be impacting a business model? It's getting really difficult to predict in the creator economy space.

Debbie Reynolds  19:08

Yeah, I agree. I love this; I love these deep, deep thoughts and chats with Franklin. One thing I want to talk with you about is AI: generative AI, ChatGPT. This is like a boatload of privacy and IP, a slew of all types of legal issues all intertwined. So tell me a little bit about this generative AI issue. It obviously has privacy ramifications and IP ramifications, and it has implications for people who are creators using information generated by these tools, especially if they're commercializing the work that they do. So just give me your riff on that. Tell me what's happening. What's going on?

Franklin Graves  19:09

Yeah, absolutely. So generative AI has exploded, of course, generative AI tools. That would be everything from Midjourney, which is a text-to-image generator; another one similar to that is Stable Diffusion, from Stability AI. On top of that, you also have OpenAI, which has made headlines for DALL-E and DALL-E 2, which are also text-to-image generators. But then you throw into the mix, into the stew of tools, like you were saying, chatbots: chat interactions with a large language model, or an LLM. That's really where things start to get interesting. And GPT-4 has also been announced and is currently in release as well; as a hybrid of all of that, you can drop in an image and have it analyze the image and do so much, so that takes it a step beyond just a chatbot. So yeah, there are so many issues surrounding AI, or artificial intelligence, and ML, or machine learning. The way to look at them, briefly, is AI as the larger term used for artificial intelligence, and then within that you have machine learning, or ML; that's to overly simplify it. And then from there, you have all these different tools that are generative in their outputs. So, like we were talking about, text-to-image generates an output that is usable, tangible, maybe a work of some kind; maybe it mimics artwork or an art piece, or it mimics a written text of some kind. Or, to take the video game example, you have one of these models plugged into a video game as a non-playable character, an NPC, and that character is just sitting there in the game, and a user goes up to them, depending on what technology is powering the game, and interacts with that character. For all they know, it could be an NPC, or it could be just another user. So it raises all of these potential concerns, from the use of it and the disclosure of using it, to how it's economically sustained, because generating a lot of these models is costly; it takes up a lot of server resources, and it has an economic and ecological impact. Taking a step back, one of the best ways that I approach each of these issues is to understand, okay, what are the three buckets that I place stuff in? The one I've been really honing in on right now is the output, and that's really the third bucket of how I view any AI or machine learning technology. So that's the output: what is the software, the algorithm, the model, what is the user's engagement with it, and what is its output? What are the privacy, security, and IP issues surrounding that? That's a whole bucket of issues by itself. But then the second bucket, and I'm working backward here, is the AI tool itself, the ML tool itself, the software that's being run. That would be like Midjourney, OpenAI, or Microsoft's (well, Microsoft owns GitHub, so GitHub's Copilot), or Microsoft's Bing or Google's Bard technology that's getting implemented into a lot of their search and other tools. Those are the tools, the second bucket, what we interact with: what levers can we pull to control the output? Are there any levers that we can pull to control the output?
So, for example, can we put an individual's name in a text-to-image generator and say, okay, I want to create a comic book that has a character that is uniform? Because, again, if I'm using Midjourney, I can't really control the output every single time to give me the same-looking character. So say I use a well-known actress as an example, and I use her name as a prompt. That way, I can say, okay, give me a character that's doing X, Y, or Z, that looks like that actress, that is in this environment, what have you. And that way, every single time, I will get the same-looking character, and I won't have to worry about the AI or the tool I'm using just randomly generating a character's facial features. But that is a whole name, image, and likeness approach; those are all privacy rights and publicity rights issues, which are all, again, State law and vary from State to State, not to mention just general biometric information that has been captured at that point. So all of that can be triggered just by using the tool itself. What is the tool? What safeguards are the manufacturer or provider of the tool putting into place to prevent that type of use from happening, if they even can at this stage? So that's the second bucket. And the first bucket, backing up, is how the models are generated, created, and put together. They rely on datasets; you have to have massive amounts of data to train and build up a model, to train a software algorithm that this is a dog, or this is not a dog, this is a fire hydrant. And the massive amounts of data it takes to do that, we've learned from some of the lawsuits that have been filed by Getty Images and others, is billions of images. And where do those images come from? Oh, across the Internet. So, sure, let's just grab all those and not worry about what type of privacy or security issues or IP issues surround that. Data scraping, content scraping, that's a whole 'nother issue: do you refer to it as a dataset and data scraping, or is it truly taking content that is copyrightable, or copyrighted and owned by somebody? And when you pull that in, are there privacy rights? Somebody's name, image, and likeness, their face, their biometric information: all of that is data, yes, but is it fair to classify it just as data? Is that overly simplifying it, making it seem like it's harmless? I think that distinction is important from a discussion standpoint: how that is referred to, ethically, responsibly, all of those kinds of keywords you hear surrounding AI right now. So those are the three buckets that I put any type of AI technology into, and how I analyze, okay, what am I actually looking at here? Say I'm going to be buying and using this AI tool that Adobe recently put out, a text-to-image generator. They're upfront, and they're telling us, okay, we built these models only using images that we have permission to use, only sustainably sourced, and there's a way to give payment back to the people behind them in some instances. Okay, that's a step in the right direction. It's not just scraped data and scraped content from across the Internet; they're actually putting into the market what is arguably a sustainable, ethical, privacy-forward model, and tools that are based on those AI and ML models.
So those are the types of questions that you have to ask. And okay, what are the downstream ramifications of using outputs that might be a mirror image of some model that you didn't even know about, like a photo model or something like that? So yeah, that's my take on AI; there's still a lot to unpack. There are a lot of risks and a lot of liabilities that still surround the use, development, and outputs of these tools, all of that.
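
The prompting pattern described above can be made concrete with a small, hypothetical Python sketch. Anchoring a prompt on a real person's name buys visual consistency at the cost of name, image, and likeness exposure; a fixed invented description (plus a fixed seed, where a tool supports one) buys similar consistency without referencing a real person. The generate_image call is a stand-in, not any real API.

# Anchoring on a real person's name raises publicity-rights exposure:
RISKY_ANCHOR = "a character who looks like <well-known actress's name>"

# An invented, reusable description achieves consistency without that risk:
SAFE_ANCHOR = (
    "a woman in her 20s with short copper hair, round glasses, "
    "a green trench coat, and a star-shaped earring"
)

def panel_prompt(anchor: str, action: str) -> str:
    """Build a comic-panel prompt around a fixed character anchor."""
    return f"comic book panel, {anchor}, {action}, consistent character design"

for action in ["riding a bicycle", "reading in a library"]:
    prompt = panel_prompt(SAFE_ANCHOR, action)
    # image = generate_image(prompt, seed=42)  # hypothetical call; a fixed
    print(prompt)                              # seed further aids consistency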

Debbie Reynolds  27:47

Yeah. This is why I love you. Deep thoughts. I love this. You mentioned two things that I want to talk about. One is, and I've said this to people before, some companies are coming out with AI where they're saying, let's create an AI, maybe a deep fake or something, of someone that doesn't exist. So it's a computer-generated thing. But what if one of the things it generates looks like an existing person, and maybe you've done something that could harm that individual? I mean, this is very future-state, but we're starting to see things creep up now in lawsuits that are currently going through the courts trying to address that exact issue. What are your thoughts?

Franklin Graves  28:37

Yeah, I'm actually so glad you mentioned that, because I completely forgot one of the reasons why I appreciate what Adobe has done: the tools that they've rolled out are part of what's called the Content Authenticity Initiative, the CAI. It is a consortium with an open-source set of tools that can be used by anyone that wants to join. So if I'm using Adobe's generative AI tools, it will embed metadata in the output that indicates this is generative art, this is a generative work. That way, it's clear and transparent to anyone that is processing that image. Let's extend that as an example. If I use my phone to take a picture, I could theoretically be using an iOS Apple device, and Apple could join the Content Authenticity Initiative, or some other technology could come around that validates, okay, this is a photo that was taken on an iPhone device, and that image was not manipulated; it was verified from the hardware into the software and written into the image. Then I can go to Facebook and upload that image, and Facebook's algorithms can verify, oh, this is an authentic iPhone photo from this user's phone or whatever, and now we know, okay, we are safe to promote this or to surface this within our algorithms to other users. The same thing with Adobe: if I'm using Adobe Photoshop, or their generative AI tools, I think they have two or three of them at this point, then that data will indicate, okay, this is coming from a trusted AI source, in that it's Adobe or some other company; it could be any other company that decides to join this. And so we know that it was properly and responsibly created AI. I think that's going to be important going into the future: understanding the authenticity of any content that we are consuming. So going back to your deep fake example, if there is no metadata that can be verified from the tool that was used to create the deep fake, then, in my opinion, at this point, a platform should not be promoting that content, and should not be making it available at all, or readily available, without some other check or balance, some content moderation lever being pulled. And I think that goes to the heart of deep fakes as well, from a privacy standpoint; it goes to the heart of a privacy concern. If I'm using an AI generative art tool that has been built on a dataset that was, like I said earlier, billions of images scraped from the Internet, then that's not responsibly, sustainably sourced. I'm using food-label language here, like responsibly, sustainably caught fish, but it's the same type of thing. And I think food labels are onto something, because I love what Apple did with privacy labels in their App Store; they actually came out saying it's like a food label, a nutrition label. I think we're getting to the same thing with technology: having clear, transparent labels for content. And then, even within the AI space, okay, what data and content was used to train the model that is powering an AI tool? And can I trust that, even if the output is similar to a human model who might have been in one of the photographs, there was a waiver, and that model understood, okay, I'm part of a photo session where I know I'm giving up my biometric information to be fed into an AI system?
And I'm fine with that, from a human model standpoint, a human subject standpoint. Then that can perpetuate itself and have the downstream impact of sustainable, transparent AI as the output. Where you have issues, though, is with things like deep fakes, if it's intentionally going to be made to look like somebody, or if I'm just trying to become a virtual influencer, a VTuber as they're sometimes called, using generative AI tools that are fly-by-night and weren't sustainably or ethically created. Then, yeah, I could potentially run the risk of using a tool that is going to generate an output that might be similar to somebody, and I might face issues; I might have liability, I might have risk associated with that.
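
The verification flow described here can be sketched in miniature. Real Content Authenticity Initiative tooling works through signed C2PA manifests embedded in the file itself; in this hypothetical Python sketch, a plain dict stands in for the manifest, and the trust list and field names are invented for illustration.

from typing import Optional

TRUSTED_SIGNERS = {"adobe.com", "apple.com"}  # hypothetical trust list

def assess_asset(manifest: Optional[dict]) -> str:
    """Decide how a platform might treat an upload based on provenance."""
    if manifest is None:
        # No provenance at all: per the argument above, don't amplify the
        # content without some other moderation check or balance.
        return "hold-for-review"
    if manifest.get("signer") not in TRUSTED_SIGNERS:
        return "hold-for-review"      # signed, but not by a known party
    if manifest.get("generative"):
        return "serve-with-ai-label"  # authentic, disclosed AI output
    return "serve-as-authentic"       # e.g., an unedited camera capture

print(assess_asset({"signer": "adobe.com", "generative": True}))  # serve-with-ai-label
print(assess_asset(None))                                         # hold-for-review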

Debbie Reynolds  33:12

Absolutely. So I'm glad you touched on something that I would love to talk about; this is amazing. And that is the difference between privacy and publicity. I would love to chat with you about that, but before we get started, I want to bring your attention to a case coming up that's kind of dancing around these issues, and I want your thoughts on it. There was a guy in Ohio who filed a case against Ancestry.com, and they're trying to make this a class action. I think they're at the stage where Ancestry tried to dismiss it, and it didn't get dismissed, so it may actually end up being a class action. This guy in Ohio says Ancestry.com used his high school picture in advertisements and then targeted him. So they had his picture, they knew his name, they knew the school he went to, the State, and the city that he was in, and they used that to target people who possibly went to school with him, or were in that city, in order to get more people to sign up with Ancestry.com. So this is a privacy and publicity issue. But privacy and publicity are different, and in this lawsuit, Ancestry is kind of dancing around a bit, sprinkling in little things about who has the right of publicity, right? Because publicity ties into the commercialization of your image. I don't want to put words in their mouth, but in a way they're kind of saying, you know, you're a guy in Ohio; you're not a celebrity, you're not a political person, you're probably not commercializing your own image, so maybe this publicity claim doesn't fit you. But tell me about this, because I think it has a huge impact on creators.

Franklin Graves  35:21

Oh, absolutely. I think you're spot on. I'm vaguely familiar with the case you're talking about, and from my own understanding of everything, I would argue this applies just by being a student, whether high school, middle school, elementary, or even college, and participating in the yearbook process. (Whether they even do that anymore, I do not know; I can easily see it becoming an outdated concept, and whatever the digital equivalent might be in the future will be a whole different discussion, because of Terms of Service, privacy policies, and all that kind of fun stuff.) Part of the inherent nature of a yearbook is that it is not going to be used for this purpose, that it will not be scanned, digitized, and recreated. It is intended for, I would argue, a kind of private-ish use by anyone that went to the high school, or alumni. Maybe there is an argument that it has value from a historical perspective, but using it for commercial speech, assigning a commercial speech aspect to this individual whose image was used to pull in and draw in their other classmates, is a huge issue. And I would argue that, yeah, that would definitely be within the right of privacy. It's all going to depend on the State; it's going to depend on whether that user even went to the website, whether they've been to that website before, whether they agreed to the terms and are therefore now looped into arbitration, and all that kind of fun stuff that goes along with it. It actually reminds me of a case involving Facebook, if I'm not mistaken; again, I'm going off the cuff here. I believe Facebook was involved in a similar lawsuit brought by a user of the platform, back when you had, like, a business page, or you would leave a business review, and Facebook would just algorithmically and automatically pull that in. Let's say I have my friend Joe over there and Sally over here; both Joe and Sally like this local business, and that business can choose to advertise, and part of that advertisement will say, oh, Joe and Sally, your mutual friends, have liked this business. That was a contested issue, and, again speaking off the cuff here, so people should double-check me, I believe Facebook ended up having to update their terms and either did away with that practice or let you opt out of your likes and your engagement data being used in those types of situations. So anyway, to answer your question: yes, I think there is an inherent distinction between the right to privacy and the right to publicity. The right of privacy I view as more individual: I am a private person, and I have aspects of my life that are private. I'm not speaking for myself here, because I choose to be online, I choose to be publicly available. But theoretically, I am an individual; I don't want people coming and taking photos outside my home on the street, and just because the windows are open, taking photos of me. I don't want any of that. I don't want them to find private facts about me and then start talking about them, or putting out a book about it, or having me portrayed in some Lifetime movie. That'd be fun, right? So, right, the right of privacy is one's individual privacy, and again, it's going to differ based on the State statutes.
But the right of publicity is when you have somebody who's reached that level of fame, and from a commercial standpoint, they want to protect their name, image, and likeness; they want to protect the commercial value that they've built up and that their public image has associated with it. There are ways they can do that outside of privacy and publicity laws. They could take advantage of trademark law: they could potentially trademark their name, they could trademark their brands, and they could ensure that their photographs would require a license if used for commercial purposes. Another example, actually a really good example, is the drugstore, I think it was Duane Reade, that had posted a picture of Katherine Heigl. It was a paparazzi photo of her walking out of their store, holding a bag or something like that, and they had used it to say, oh look, Katherine Heigl shops here; that should be good enough for you to shop here too. And that's a right of publicity issue, because as a celebrity, I would imagine that if she chose to go that route, she would make quite a lot of money as a brand spokesperson or endorser. So that would be using somebody's public image, the reputation they've built up, all of that, without their permission. In the case you described, I think it's more of a right of privacy issue, for a person that didn't want to be used in that way.

Debbie Reynolds  40:07

Right, right. The way that I try to explain it: to me, privacy is about your right not to share, and publicity is about the right to control what is shared about you if you're considered notable. And this is a whole other thing, and why this is really important: like, if you have 100 followers on Facebook, are you notable? You know what I'm saying? So I think that's what they're getting at: this guy is not a celebrity, he doesn't have a big following. So I guess they're trying to play with the idea of who has the right to protect or control their image in this publicity area.

Franklin Graves  40:55

Actually, I'm working on a book about creator economy law, and that is one section that I have outlined; I have yet to get to it. I know that there have probably been arguments, I think I've seen them before: okay, well, this person has a public Twitter, and they have a public Facebook; why are they not considered to be wanting fame, wanting recognition? Whether they want to be infamous versus famous, and all of that. That's definitely an issue that I would love to explore more, and I have it on my list to get to eventually.

Debbie Reynolds  41:25

Yeah, well, I'll send you over my research on that case; I just find it fascinating, and I've been watching it really closely. What other things are happening in the world right now that concern you, beyond AI? Obviously, AI concerns you; it concerns me as well.

Franklin Graves  41:45

I would say, just generally, all of the Internet laws that are coming out of Europe; I think that's especially relevant to your audience. I think that we are facing a time like we were back in 2016 and 2017, leading up to GDPR going into effect. Yes, there are laws that are only really impacting and targeting EU citizens or EU data subjects, or laws that are impacting how Internet users in Europe engage with an Internet platform or a marketplace, or how an algorithm impacts them. But I think we're at a point in time where, again, we have the EU doing things that are going to impact how services and products on the Internet are delivered across the globe. Because, as we learned with GDPR and privacy laws, it's nearly impossible to segment out your business and to segment how your products and services are delivered, and I would argue it's even harder to do so with these newer laws: the upcoming AI Act that's been proposed, or the Digital Services Act and the Digital Markets Act, which have passed and are coming into effect soon. It's going to be hard, because I think we've been working through it on the privacy side as US-based companies, or Canadian companies, or any other company outside the EU: okay, maybe I just need to spin up a server in those jurisdictions, and maybe I can tweak my privacy policy this way. I'm making that sound very simplistic, but I know you and I both, and your listeners, know it is far from simplistic. But to have data practices that are siloed and geo-specific, I'm having a hard time seeing, from a cost perspective, how that's going to be extrapolated out over so many other areas of business operations. It's not just the privacy silos we now have within a company, or the security silos, where you have maybe your chief security officer, your chief information security officer, your chief privacy officer; now all of your other teams across the board are going to have to be thinking about it. And maybe a precursor we're seeing is that a lot of these privacy laws are now starting to impact those previously carved-out areas like HR: all the data surrounding job applicants, or your actual employees. So I think it's starting to get larger in scope, and the EU is really pushing this forward much more quickly and swiftly than we are in the US. I see that as a huge concern, both in my professional capacity and in my personal capacity, covering what I cover and staying up to date. It is rapidly changing, and again, I think we're going to see the EU doing things that have a global impact, not to mention what starts to happen after that.

Debbie Reynolds  44:44

Absolutely. I agree with that 100%. Also, one thing to note is that because the EU has a human rights bent to things, the way that they pass laws isn't as prescriptive as it is in the US, right? They're like, okay, keep data as long as you have a purpose for it, where we're like, put a button on your website that says this. So I think it makes it harder for us in the US to try to pass laws, because we try to be overly prescriptive. And a lot of lawyers hate that, hate laws that aren't prescriptive, right? Because you have to figure it out. The GDPR doesn't say, delete data every seven years; people would love that. They'd be like, okay, great, I understand, I can program this. Instead it's, okay, tie it to a purpose, and that's harder to do; that's more of a company-by-company thing. And then also, even though I'm a privacy professional, I'm also a business person, so I know that privacy isn't the only thing that businesses have to think about. It's part and parcel of what they need to think about, something they need to think about early, definitely, but it's not the only consideration.
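
The contrast Debbie is drawing between a prescriptive rule and purpose limitation fits in a few lines. In this hypothetical Python sketch, the purposes and retention periods are invented examples a company might set for itself; GDPR prescribes none of them, only that retention be tied to a documented purpose.

from datetime import date, timedelta

# purpose -> retention period, set by the company's own records policy
RETENTION_BY_PURPOSE = {
    "active-contract": timedelta(days=365 * 6),    # e.g., limitation periods
    "marketing-consent": timedelta(days=365 * 2),  # e.g., consent goes stale
    "security-logs": timedelta(days=90),
}

def must_delete(collected_on: date, purpose: str, today: date) -> bool:
    """True once the purpose-specific retention period has run out."""
    period = RETENTION_BY_PURPOSE.get(purpose)
    if period is None:
        return True  # no documented purpose: no basis to keep the data
    return today > collected_on + period

today = date(2024, 6, 1)
print(must_delete(date(2021, 1, 1), "security-logs", today))    # True
print(must_delete(date(2024, 1, 1), "active-contract", today))  # False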

Franklin Graves  46:07

Yeah, absolutely. And the other thing I'd mention along the same lines that's concerning for me is that we're in a very high-growth stage of AI. The tools have become much more accessible, especially now with Google and Microsoft and Amazon and their cloud-based services offering AI playgrounds, for lack of a better term; even OpenAI calls theirs a playground, where you can go and experiment. The privacy concern that I see there also spills over into IP concerns. And I have to give credit where credit is due for this concept, but the concept is: once you train a model on data or content, how can you untrain it? How can you ensure that it's not going to crop up as an algorithmic shadow, or like a data shadow, as the tool is being used? An example of that is the FTC: we've seen twice now, if not three times, that they've required companies not just to delete the datasets and no longer use them, but to delete the models that were trained on data that was improperly gathered, because their privacy policy didn't properly disclose it. Tiffany Li has a great paper that I would highly recommend to your listeners, called Algorithmic Destruction. She's a law professor with the University of New Hampshire School of Law and does a lot of work with Yale Law School's Information Society Project. She's the one who really taught me this, and I've been running with it as well, trying to think through it: okay, you have this algorithmic shadow concept, where you're using data scraped from the Internet to train up models. That data is most likely going to have some personal elements to it, some individualized, personally identifiable information, what have you. How is that going to be reflected in the outputs of AI systems that are built off of those data models? The algorithmic shadow is that concept of, okay, it's going to be there in some capacity; it's just a question of when it will show up and how much. But the reason I mention that is because, on the copyright side, we're seeing that now. If an image by Andy Warhol was used to train an algorithm, that image is going to be present in some of the outputs. And I think that's what we're seeing, especially in the Getty Images lawsuit that was filed here in the US; there's also one in the UK. A lot of the outputs have the Getty Images watermark on them if you're generating, like, sports and other images like that, and the complaint that they filed is fantastic for illustrating this; maybe you can include a link to it in your show notes. But it's this concept that, okay, if Stability AI or OpenAI, if these companies lose these lawsuits, with a court saying there's no fair use, or there's no carve-out like the UK's text and data mining exemption, to just freely gather this information, this content, to train a model, and it's showing up in your outputs, or maybe it's not, but you still used it to train, how do you untrain a model? From a technological standpoint, I think people are working on it right now, but it's really difficult. And that's why I hearken back to the FTC on the privacy side.
Okay, you have to delete your entire model. That is going to decimate some of these businesses, both from a current product or service standpoint, and also from the cost to retrain and build a whole new model on properly licensed content, if that's even possible.
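
One way to see why "delete the dataset" and "delete the model" are separate obligations: without a lineage record mapping datasets to the models trained on them, a company cannot even identify what an algorithmic disgorgement order reaches. A hypothetical Python sketch, with invented names:

# model name -> the datasets it was trained on
MODEL_LINEAGE = {
    "face-tagger-v1": {"scraped-photos-2019", "licensed-stock"},
    "face-tagger-v2": {"licensed-stock"},
    "recommender-v3": {"clickstream-2020"},
}

def disgorgement_scope(tainted_dataset: str) -> list[str]:
    """List every model that must be destroyed along with the dataset."""
    return sorted(
        model for model, sources in MODEL_LINEAGE.items()
        if tainted_dataset in sources
    )

# An order covering improperly collected photos reaches v1 but spares v2,
# which was trained only on properly licensed content.
print(disgorgement_scope("scraped-photos-2019"))  # ['face-tagger-v1']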

Debbie Reynolds  49:56

Wow, this is mind-blowing. Thanks so much. So if it were the world according to Franklin, and we did everything you said, what would be your wish for privacy, or data, or creator stuff anywhere in the future?

Franklin Graves  50:13

So, specifically within the creator economy, I would love to see an approach that puts creators first. And I think a lot of platforms are doing this well; a lot of platforms are making sure that creators have the tools they need. We've seen a lot of great responses from, like, YouTube, building in capabilities on their platform to indicate whether or not content is made for kids, and not only providing the toggle for that from a product standpoint, but also, from an educational standpoint, building out massive amounts of help articles and helpful videos that walk people through what that means. And then also, again on the FTC side, providing toggles to identify when something includes a paid promotion. So I'm happy to see platforms recognize that creators are individuals, whether or not they're public figures yet, like we talked about, and that they still have right to privacy issues; they're considering the fact that those individuals are humans that power their platforms. Without that content, their platforms would not have the daily active users that they have. I love being able to point to examples like Meta, which has developed a creator program; a lot of companies are developing creator-focused programs and sustaining them, and helping creators find new brand deals, for example, on a brand exchange platform: okay, you want somebody to sell this product; okay, that product would really work well for my audience, so let's talk. They're facilitating transactions like that. They're facilitating and empowering creators to become educated. That's what I really love seeing and what I hope continues. And it touches on privacy, because I think Adobe is doing that right as well, with the way they're providing creators tools that are economically sustainable, that are transparent, and that respect the rights of the creators on whose work those tools were built.

Debbie Reynolds  52:19

That is wonderful. Oh, my goodness, thank you so much. I have to say, for people, even if you're not a creator, if you're on the Internet at all, you need to read this newsletter, because you delve so deep and go across so many different things. I read it religiously; if I'm looking at it at two o'clock in the morning, I'll stop what I'm doing and read it, because, oh, my goodness, the information and the research you do is impeccable. So I look at your profile a lot, like, pull up, oh, here's this case, here's the filing, here's whatever. You actually recently did something about the US Patent and Trademark Office, and there's kind of a brouhaha about people publishing books using AI tools and trying to figure out whether these are original works. So it's just going to be a very interesting time in the future around these concepts of creation, and how people protect their rights and protect their privacy and things like that. So, thank you so much for being on the show. This is so much fun. It's going to be hard for me to stop talking to you; we have to talk more.

Franklin Graves  53:27

Hey, maybe there is a way that I can come back. It's been an honor. I love your podcast, and I love your content as well. I really appreciate you having me on, and I appreciate your listeners' time, occupying their headspace while they have earphones in, or they're working out or driving their car, whatever they're doing, because I know you have a lot of great episodes and a lot of great content that they can be consuming.

Debbie Reynolds  53:45

Oh my goodness, thank you so much. I'm sure we'll be chatting soon.

Franklin Graves  53:50

Sounds great. Bye bye.
