E93 - Susie Alegre, International Human Rights Lawyer, Author of Freedom to Think
SUMMARY KEYWORDS
inferences, people, privacy, algorithms, advertising, data, ai, technology, day, information, person, thought, freedom, understand, life, book, sold, absolutely, surveillance, buy
SPEAKERS
Debbie Reynolds, Susie Alegre
Debbie Reynolds 00:00
Personal views and opinions expressed by our podcast guests are their own and are not legal advice or official statements by their organizations. Hello, my name is Debbie Reynolds. They call me “The Data Diva”. This is "The Data Diva" Talks Privacy podcast, where we discuss Data Privacy issues with industry leaders around the world with information that businesses need to know now. I have a special guest on the show all the way from London, England, Susie Alegre. She is an international human rights lawyer and author, and she is someone I think is fascinating to talk with about the idea of freedom of thought and how that applies to privacy. She has a book coming out as well; we'll talk about that a little bit later. The name of the book is “Freedom to Think”. It is published by Atlantic Books. Welcome to the show.
Susie Alegre 01:00
Thanks very much, Debbie. Thanks for having me.
Debbie Reynolds 01:03
Yeah, well, I've seen you on LinkedIn. I've seen things that you've posted. And I decided to reach out to you several months ago because I thought, this woman is fascinating. I love the way that you think about the philosophy of thought, and I feel like people get so wrapped up in the technical things that happen day to day that they're not thinking on a high level. You do that really well. So tell me about your career, your trajectory to becoming the author of this book.
Susie Alegre 01:36
Yeah, well, my background is as a human rights lawyer. So I've worked as an international human rights lawyer for the last 25 years. But originally, I studied philosophy and French, so I wasn't planning to be a lawyer at all. And I suppose I came into human rights law because, as you mentioned, it gives you that kind of big picture. So human rights law governs how we live as societies and how we live with each other. And that's what really attracted me to human rights law. And so I've worked in various different contexts on human rights. For many years, I worked on human rights and counterterrorism, looking at the balance between security and privacy and other human rights. I've also worked on human rights and anti-corruption, and on human rights at borders and how we protect human rights as we're crossing borders. And so in many of those areas, I was increasingly coming across questions of privacy and questions of data protection, particularly in the European Union context, where data protection laws have obviously been very much developed in the last decade or two. But while I worked on privacy and data protection, I think it never quite cut through. I knew how the law worked. I understood why it mattered, but I didn't really feel it. And it was only when I first read about the Cambridge Analytica story in early 2017, before it became front-page news, reading about political behavioral microtargeting and this idea that your data could be used to understand what your personality was, what kind of a person you are, and to then use that information to profile and target you to change your mind, that it suddenly dawned on me that the reason data protection and privacy are so important is that they are the gateway to our minds and the gateway to freedom of thought inside our heads.
Debbie Reynolds 03:52
I love the way that you put that, in terms of privacy being a gateway, and I agree with that. I too was struck by what happened with Cambridge Analytica and how much power these algorithms have to suggest things. And we're a captive audience, aren't we?
Susie Alegre 04:17
Yeah, absolutely. I mean, I'm no different from everybody else. I'm using social media, ironically, particularly to promote my books. You find yourself unable to step away from the screen, I think. And all that time, the screen is gathering more and more information on you and interacting increasingly with your mind.
Debbie Reynolds 04:37
Yeah. So I think it's interesting that you studied philosophy. I did as well. My mother was horrified when I did that, but I think it just helps you reason through things and be able to talk about them at a high level, and then figure out how you apply it in practical ways in your own life.
Susie Alegre 04:58
Yeah, absolutely. And I think, having studied ethics before coming into the law, for me it was a real revelation that human rights law is, if you like, ethics with teeth. So it's about how you put those philosophical ideas into practice in our everyday lives.
Debbie Reynolds 05:17
So, what is happening with technology in the world right now? What are your concerns around privacy?
Susie Alegre 05:27
Well, I think, as I say, for me, privacy is very much the gateway right to other rights. So privacy is the tool that we need, if you like, or it's the gate that protects our other rights. And so for me, one of the big areas of concern is the way technology is increasingly being designed to get inside our heads. Losing our privacy is essentially opening us up to the risk that we lose control over what's going on inside our own heads. When I first started to look at the right to freedom of thought, one of the things that really struck me is that in international human rights law, there is a very small group of rights that are what are known as absolute rights. These are rights that you can never interfere with for any reason. And they include the prohibition on torture, the prohibition on slavery, and the right to freedom of thought or freedom of opinion inside your own head. Those rights are so fundamental to what it means to be human and to human dignity that there can never, ever be a justification for limiting them. Whereas privacy is what's known as a limited right, which is what most human rights are, which means that you have the right to privacy, but it can be limited, for example, to protect the rights of others or in the interest of public health; there is a list of reasons why you might be able to limit someone's privacy. But you can never limit that freedom inside their head. And what I saw, the more I looked into the question of how technology engages with us and how technology instrumentalizes interferences with privacy, was that really the goal of much of the tech that we're seeing developing is precisely to get inside our heads, because that's where the money is. The money is in data and what it says about who we are, how we're feeling, and how that might be used to sell us something. Whether we're being sold a political party or a pair of socks, it doesn't really matter. That's where the money is.
Debbie Reynolds 07:44
I think one thing that really concerns me is things like the Metaverse, where you're going to be put in an immersive environment. You're definitely a captive audience in that regard. But it's also taking into account your senses and your responses, and based on the feedback that the system gets, it may choose a different thing to show you or have you interact with, to maybe change your behavior.
Susie Alegre 08:18
Yeah, completely. I think even now, the way that we engage with screens, the way we engage with visual imagery, is different from the way we might engage with just talking to somebody on the street. It has different implications for how our minds work. One of the big concerns is about emotional contagion, or, if you like, social contagion. And that is something that applies even without going into the Metaverse. I don't know if you remember, a few years ago, I think it was in 2017, I read that Facebook had done this experiment where they found that by curating somebody's newsfeed, they could change how that person felt. So they selected individuals at random around the world, which, you know, if you're on Facebook, could have been you, could have been me, we will probably never know. They selected individuals at random and monitored how their posting changed at the end of the day, depending on whether their news feed had been managed to give them more positive stories and posts or more negative ones. So this isn't necessarily about advertising. It's just about how your news appears to you, the order that things appear in. And what they found was that people who were being fed a more negative news feed were, by the end of the day, posting more negatively. So effectively, someone sitting at Facebook, if they've decided to make you feel miserable by the end of the day, can do that by tweaking the algorithm of your newsfeed. If they want to make you feel happy by the end of the day, they can do that, too. And if you think about that on a kind of global scale, this sort of power to change how people feel about themselves and about the world around them in such a direct way, when you put that into the Metaverse, the kind of impact it might have is multiplied to the power of a thousand, effectively. And it's not just how you're feeling when you're in the Metaverse. When you've been in the Metaverse, or you've been on Facebook, if you end your day feeling miserable, that's going to affect your relationships, your work, potentially every part of your life. How we feel has a huge impact on what we do, how we engage with others, how we see ourselves, and how we see the world. So I think that the potential impact of this on us as individuals, but also on societies, is huge and very worrying.
Debbie Reynolds 10:59
I would love to talk a bit about emotional AI. So this is basically algorithms using information that they know about us to try to gauge our emotion or try to gauge what we're thinking. And then obviously, the next step of that is to try to elicit some type of action or response from us. What are your thoughts about that?
Susie Alegre 11:27
I think emotional AI is an area of really serious concern. Going back to the idea of what's protected by the right to freedom of thought: it protects what's inside our heads, as opposed to what we say and the opinions that we share. So just what's going on inside us, our inner lives, if you like. It includes the right to keep our thoughts private, the right not to have our thoughts manipulated, and the right not to be penalized for our thoughts. So there are these three aspects. Think about those three aspects in relation to emotional AI. You're walking through the mall, minding your own business, and a facial recognition camera decides it's going to run your face or your video through an algorithm that's going to decide whether or not you're a shoplifting threat, or whether or not you're someone who might be open to being sold something ridiculously expensive. As you're walking around the mall, this algorithm is trying to understand what's going on inside your head. It's basically making inferences about your inner life based on your image. And I would say that that kind of activity really hits the first button on the right to freedom of thought, which is that right to keep our inner lives private. Obviously, on a day-to-day basis, we're all reading each other. It's part of our human engagement. But I think that's quite different from an automated reading of what's going on inside our lives, particularly when that automated reading is being used by government or by private companies in order to exploit or judge us. Going back, then, to this facial recognition technology, this emotional AI: if you happen to be unlucky enough for the AI to identify you as someone who is likely to be a shoplifter, you may well find yourself being stopped outside the shop, not given entry. You may never know why you're being stopped; you may never know why you're being searched. And you may find that you're ultimately being penalized because of the inferences that the AI has made about what's going on inside your head, about your thoughts, not because of something you've done, but because of something that the AI has decided you might be thinking about doing. And it really gets you into the zone of Minority Report. I mean, it's science fiction, but it's there now. And so, for me, looking at this through the lens of the right to freedom of thought means that you can actually start putting in regulation and legislation to say, actually, it's never okay. It's never okay to be using AI on the streets to attempt to read what's going on inside people's minds in order to judge them on that basis and then take action on that basis. There's been research as well around emotional AI that shows there is discrimination in the AI. Some research compared the way AI interpreted, I think it was basketball players: white basketball players smiling got good grades for how happy and open they were, whereas black male basketball players were judged as being aggressive, even though they had exactly the same kind of expression on their faces. So the discrimination in the AI is an extra level of problem. But what I would say is, it's not even about whether it works or whether it's discriminatory. It's about whether it should be allowed at all. Because what it is, is an interference with our right to keep our thoughts private and to not be penalized for our thoughts alone.
Debbie Reynolds 15:28
Yeah, I think we're going into a different phase of the way data is captured now because of technology. There are so many different sensors and IoT devices gathering data and fusing that data together. One interesting thing is that, even though I've always been concerned about emotional AI, now we're adding in the biological element, where you have sensors. So it's not just a camera looking at your face and seeing whether you're smiling or not. It's also a sensor that you may be wearing that's checking your heart rate and trying to make inferences about you based on that as well. So we're combining that data together.
Susie Alegre 16:10
Yeah, absolutely. And I mean, I think that, as we're seeing in the quantified-self movement, fitness trackers are increasingly moving into that emotional space, with this idea that your fitness tracker might be able to tell you that you're feeling a bit miserable and maybe give you some tips on feeling better. It's like, actually, maybe you just want to get on with being miserable for a couple of hours and work through it yourself. And I think there's a real question there about privacy, but also, again, about the manipulation point. On the one hand, being nudged in the right direction may, in some circumstances, be something you're happy with, but it's about understanding what you're signing up to, what you're agreeing to, and what the impact might be. If I ask a friend to remind me that I should take a walk every day, for example, I'm going to ask someone I trust. I don't expect them to be selling that information to the wider world so that people can understand whether I'm somebody who's easily pushed into taking a walk, or whether I'm someone who doesn't do what I've committed to do. So there is this question about how the information being gathered on us is used. Even if it's being packaged as something that potentially helps us, we need to also think about how it might be used against us. And so that fitness tracker that's telling you how many steps you should be doing every day: if you fail, what does that mean for your health insurance down the line?
Debbie Reynolds 17:54
Wow, that's really interesting to think about. One analogy I like to give about data and information is that we have this image or idea that the Internet, and the way that we interact with technology, is like a worldwide library that's open to us, where we can walk into any section. But in fact, we're walking into an illusion of a library, one with a very small section of information that was picked and curated for us. A lot of people still have that open-library idea, and I tell them: if you and I type the same thing into a search engine, we're going to get different results, because the information is being curated. And in a marketing sense, I can see how that would work, right? It's like, okay, we have this product, we want this person to see it, we think they may like it; that's one thing. But if you're also limiting people's view of other things in a way that forces or nudges them into making a decision they would not otherwise have made, that's problematic.
Susie Alegre 19:09
Yeah, no, absolutely. And there are sort of two sections to that. One is about how we access information. Obviously, to form our own opinions, we need to have free access to information. And as you say, we've all had this idea that, in some ways, the Internet opens up worlds that people wouldn't have had access to before; you can access books and information without necessarily living down the road from a library. But as you say, it's like walking into a hall of mirrors, with these kinds of distorted images of yourself being reflected back at you as a kind of image of the world. And I think something a lot of people don't realize is how personalized our information is. It's not just, well, I'm interested in going on a camping holiday, so I'm getting information about tents sent to me. It's rather that I'm somebody who may be prone to anxiety, and therefore the kind of information I might be given might be designed to play on that anxiety or make me increasingly anxious. And we see that with conspiracy theories and the way recommender algorithms work on things like YouTube. One piece of research that I found really interesting was by some researchers in the US who went to a flat Earth conference to talk to people about how they came to believe that the Earth was flat. And every single person they spoke to but one had come to the conclusion that the Earth was flat because they had been on YouTube. They'd started off with a different conspiracy theory; they hadn't gone there looking for flat Earth ideas. They might have been looking at 9/11 conspiracies or climate change conspiracies, whatever it was. But YouTube understood, I suppose, their particular vulnerability to this kind of content and effectively escalated that type of content through its recommender algorithms, taking them down the rabbit hole to the point where they would buy a ticket and fly across the country to attend a conference full of other people who, like them, believed that the Earth is flat. And there was only one person they spoke to who didn't get there through YouTube. He got there because his daughter and son-in-law had gotten there through YouTube and persuaded him that it was a good idea. But I think that is a really strong indicator of how powerfully persuasive and personalized recommender algorithms can be.
Debbie Reynolds 21:57
Yeah, I read about a campaign that an alcohol seller had done. They want to sell alcohol, right, and there's nothing wrong with that. But they were basically targeting people that they thought were alcoholics. So I mean, at some point, I think about the social costs: what is the human toll, and who's responsible? Right? You say, well, you know, this person is an alcoholic anyway, and we just care about selling more alcohol. We don't care about the damage that's doing to individuals.
Susie Alegre 22:38
Yeah, completely. I think there is that question from the marketing angle as well. People can say, well, it's advertising. Of course, the whole point of advertising is to persuade people to buy stuff. That's why it's there. But I think what has changed in the current environment is the personalized nature of advertising. So while I might walk past a billboard in my neighborhood because such and such a company wants to advertise to people in the kind of demographic that lives in my neighborhood, you know, that's one thing. It's quite another if that same company is mining my data to understand what my emotional buttons are and how I'm feeling this evening, in order, as you say, potentially, to say, this is someone who really needs a drink right now, or here's somebody who really needs to start gambling, she's just on that threshold, she's had a really bad day, she's exhausted, now is the time when we might be able to persuade her that a little bit of online gambling would be just what she needs as a pick-me-up; and understanding not just when to sell me that, but also which kind of advert is most likely to press my personal psychological buttons. And that, I think, is a very, very different beast in terms of advertising from the kind of thing we're used to. I think, as well, one thing that's interesting is the concept of subliminal advertising, this idea that they had in the 50s and 60s that you could be sold things without even knowing that you'd seen the adverts. The idea was that you'd go to the cinema and get flashed pictures of Coca-Cola or whatever while you were watching your movie. You wouldn't even notice it, so it wouldn't disturb your movie, you wouldn't get the annoying bits of advertising, but the advertisers would get the benefit that at the break, you'd go out and buy yourself a Coke. Clearly, the idea of subliminal advertising, the idea of messaging that could influence individuals and the public without them even knowing that they'd seen the message, was so dangerous that in Europe, at least, it was just immediately banned. It's still highly debatable whether subliminal advertising works at all; we didn't need to wait to know whether it works to know that it would be a terrible idea and to ban it. And I feel as though the way surveillance advertising has developed has somehow bypassed that ethical scrutiny, if you like. Somehow, we've been lulled into it so slowly that we haven't seen it coming in the way that we saw and dealt with subliminal advertising.
Debbie Reynolds 25:42
I agree with that. I agree with that. I think that as these systems get more complex, we do see people, especially in the EU, which is definitely a front-runner in trying to look at these issues through a regulatory lens, addressing things like behavioral advertising, third-party data transfers, and so on. But technology will always outpace law and regulation. And some of these harms are so detrimental that there won't be any adequate redress for the harm that can happen. So I think it is incumbent upon us to try to keep pace with technology and, as you're doing, call out these issues so that people are really thinking about them and figuring out how to reduce the negative impact on humans.
Susie Alegre 26:36
Absolutely. But I think one of the problems is the way we focus on how the technology works, rather than on what the technology is designed to do. And this is one of the reasons that I stepped back and started looking at it from the perspective of freedom of thought. Instead of looking at the technical aspects of how you use the data, it's more about looking at what the purpose is and saying you can never use the information for this purpose; you can never judge whether or not someone has criminal intent purely on an algorithmic assessment of their biometric data, whether that's a photograph, whether it's their Fitbit, whatever it is. So effectively, by blocking off the outcomes, or the reasons why things can be used, my hope is that that will then steer the direction of travel of technology a different way.
Debbie Reynolds 27:41
I agree with that. Instead of chasing the technology, I like to tell people to think about the harm to the individual. If you're thinking about the ways that people can be harmed, in physical ways, in psychological ways, I think that will get people out of a pattern we have a lot of times in the US, and this is true of a lot of laws, unfortunately: some bad thing happens, and then a law gets passed, right? With some of these things that we're talking about in terms of AI algorithms, the harm can be catastrophic. And it can be almost immediate, or it can trail someone throughout their life. Let's say someone gets a bad test result or whatever when they're in grade school; they may be put on a track because of their grade at that time, where they're not privy to certain opportunities as a result. So I think the harm can be a cascading error, one that continues to snowball throughout someone's life.
Susie Alegre 28:57
Yeah, the way data is collected and used on children, and about children, is a really serious problem and, like you say, something that can affect their whole life trajectory. One of the things that really shocked me as well, going back to this question of the kind of information you receive when you go online, was how that forms your future. Looking at things like whether or not you would ever see an advert for a job, because you've been excluded from the advertising demographic for that job based on information about you that you're never even aware of. The same goes for credit or insurance: this idea that you're a risky person or an uncertain person or an untrustworthy person, based not on things you've actually done or said, but just on this huge mass of big data being pushed through an algorithm that decides whether or not you're someone worth betting on, if you like. Another piece of research that I came across when I was preparing the book, which I found fascinating, was how even your supermarket shop, you know, what you're buying in your grocery store, is going into this big picture of the kind of person you are. Apparently, some research in the UK was indicating that if you buy fresh fennel, that's a very good indicator of you being a trustworthy person. So if you want a top tip to make yourself appear trustworthy to the algorithms, go out and have a nice fennel salad. But most of us don't realize that as we're going through these daily activities, we are building up a picture of what kind of a person we are. And again, going back to your emotional state, what you stick in your grocery cart may well reflect how you're feeling that day, as well as your personality. And those things are being used to judge us at the moment.
Debbie Reynolds 31:17
I'm happy to see the UK implementing their framework for algorithmic transparency because it is really important. I would love to know that someone thinks that I'm more trustworthy or not because I buy fennel.
Susie Alegre 31:35
I'm not sure how transparent algorithmic transparency is ever going to be on that. I suspect one of the problems with transparency is that you have to understand what they're telling you. It's the same as being able to go and get all the information that Facebook or Google has on you; it's almost impossible to know how that's being used, or even to read it all, because there's so much.
Debbie Reynolds 31:59
Yeah, that's a great point. That's a great point. It is information overload, right? So there's the challenge of being able to synthesize that information. And actually, you brought up a really good point: I don't think people can imagine that someone would even think to do something like that. So I think we have a lack of imagination about how people can use or misuse data about us.
Susie Alegre 32:28
Completely. And I saw, again while researching for the book, that there was a privacy campaigner in Belgium who suddenly discovered that there was data on him indicating how often he was likely to go to the bathroom on a particular day, and, therefore, whether or not he would be a good candidate for being sold fizzy drinks. You see that kind of information and you think, how and where has that come from? How on earth can anyone say how often I'm going to need to go to the toilet today? And it came from the weather app. It was the weather app indicating that where he was, it was likely to be hot, and therefore he was likely to be dehydrated; he might need more drinks, or he might be going to the toilet more often. I mean, would you ever imagine that your phone was giving that kind of information to anyone, let alone to thousands of companies around the world, as you go about your daily business?
Debbie Reynolds 33:31
Unbelievable. Unbelievable. Well, Susie, if it were your wish for privacy anywhere in the world, everywhere in the world according to you, what would your wish be for the future?
Susie Alegre 33:49
My wish would be to start by banning surveillance advertising, because surveillance advertising is really the engine behind so many of these developments that are designed to get inside our heads. Surveillance advertising isn't the whole problem, but it is the driver of the problem. So if we decide that surveillance advertising, like subliminal advertising, is something that is so dangerous and corrosive to society that it should never be allowed, then we'll find ourselves in a new panorama where we can decide how we want technology to develop for our human future.
Debbie Reynolds 34:32
I love that answer, and I agree with you wholeheartedly. I think surveillance is problematic for a lot of reasons. One is that I don't think every bit of your life should be recorded and cataloged for anyone. Second of all, the inferences, as you say, and what's being done with that data, are very troubling.
Susie Alegre 35:02
Absolutely. One of the things that you sometimes find when you talk to people about surveillance advertising, or targeted advertising, is that people will say, well, you know, it's rubbish, I'm getting sold things that I bought three weeks ago, the targeting is not very well refined. One thing to bear in mind is that it doesn't really matter whether or not it's working well now, or whether or not all of it works. We have no idea where those inferences about us are going. So even if the advertisers are getting it wrong, the way that this surveillance capitalism model that Shoshana Zuboff talks about is gathering information on us means that it could be used anywhere. It could, as I said, pop up when you're trying to get a mortgage. It could pop up when you're trying to go on holiday or when you're trying to get a job. So it doesn't matter that the advertising you're receiving is pointless, outdated, and rubbish. It's about the way that that advertising is driving techniques that want to get inside our minds, that want to understand us, that want to manipulate us. So if we believe that freedom of thought is important, that we want to be able to keep our inner lives to ourselves, that we don't want to be manipulated, then we need to just say that that is not okay. And if it's not okay, it doesn't matter whether it works or not. And again, on inferences, people say, well, they're going to get me wrong; I'm so complex that nobody's ever going to really understand me. That's unlikely, but even if they do get you wrong, that doesn't really help you. I mean, if you look back a few centuries, if you were being accused of being a witch by the witchfinder, who had made inferences about what was going on in your inner life based on your cat and your moles, it wasn't going to help you that the witchfinder got you wrong; you were still going to be burned at the stake, regardless of whether or not those inferences were correct. And that's something that the drafters of international human rights law recognized when they were talking about freedom of thought and freedom of opinion: that inferences about what's going on inside your head can be a violation, even if they're completely wrong.
Debbie Reynolds 37:28
Wow, that's fantastic. I'd love for people to go out and check out Susie's book, Freedom to Think; it is really cool. You're a fascinating woman. Thank you for all the work you're doing. And I love that you're opening up this conversation, helping people debate, right, helping people to really think through these issues. I really appreciate it.
Susie Alegre 37:49
Absolutely. Thank you so much. And thank you so much for having me. It's been a real pleasure chatting with you. I love your podcast.
Debbie Reynolds 37:55
Oh, thank you so much. Thank you. I'm a fan of yours as well. Thank you.