E25 - Carissa Véliz Associate Professor at University of Oxford Author
SUMMARY KEYWORDS
people, data, privacy, inferences, ethics, companies, thought, algorithm, ai, carissa, tech, wrote, auditing, world, instance, laws, create, article, treated, important
SPEAKERS
Debbie Reynolds, Carissa Véliz
Debbie Reynolds 00:00
Personal views and opinions expressed by our podcast guests are their own and are not legal advice or official statements by their organizations. Hello, my name is Debbie Reynolds, and this is "The Data Diva" Talks Privacy podcast, where we discuss Data Privacy issues with industry leaders around the world for the information that businesses need to know right now. Today I have a special guest on the show. Her name is Carissa Véliz. She is an Associate Professor at the Faculty of Philosophy and the Institute for Ethics in AI, as well as a Tutorial Fellow at Hertford College at the University of Oxford. Ms. Véliz has published articles in The Guardian, The New York Times, New Statesman, and The Independent. My introduction to Carissa had to do with an article that she had written in The Guardian newspaper in 2020, related to data havens. I was fascinated by the topic, so I contacted her, and she was really gracious to agree to do this interview after she published her book, which came out in 2020. The name of the book is "Privacy Is Power." It was one of The Economist's Best Books of the Year, and it provides a philosophical perspective on the politics of privacy, as well as very practical solutions, both for policymakers and ordinary citizens. Welcome, Carissa, to the show.
Carissa Véliz 01:37
Thank you so much.
Debbie Reynolds 01:40
I follow your career quite closely. I got my first introduction to you with a really stunning article that you had written in The Guardian newspaper, which I saw in the US, about data havens. I thought the topic was really fascinating. But before we even get into that, I would love for you to say a few words to introduce yourself to the audience.
Carissa Véliz 02:14
You pretty much got it covered. I have been doing research on digital ethics more generally, the ethics of AI more specifically, and privacy in particular, for a few years. I wrote my dissertation on the ethics and politics of privacy, and I was writing an academic book about the topic. The more I read about it, and the more I researched, the more alarmed I got about the state of our privacy. And I thought this is just too urgent and too important to only write an academic book about it that a few experts will read. So I decided to write a very accessible book called "Privacy Is Power" that was published in the UK last year and is going to be published in the US in April of this year.
Debbie Reynolds 02:57
Yeah, it's been very well received. So The Economist wrote that it was one of the top books released last year. So congratulations to you.
Carissa Véliz 03:06
Thank you. Thank you very much.
Debbie Reynolds 03:09
Well, yeah, let's get into this data haven concept. This was fascinating. And as soon as I read it, I sent you a message on LinkedIn. I was like, oh my God, this is amazing. So why don't you tell us a little bit about your concept and your theory? I thought it was fascinating.
Carissa Véliz 03:25
Thank you. I wrote this article as a response to many messages from the UK government that seem to suggest that they want to take the opportunity of Brexit to further liberalize personal data. They write about personal data as an opportunity that we shouldn't see as a risk or a threat, something we should take the opportunity to exploit for the purposes of business and government. I found this rhetoric a bit concerning; it was vague, so you can only speculate about what their plans are. But something that occurred to me is that the UK and other countries, because this is not particular to the UK, could use data as an opportunity, essentially, to earn money fast and easily without thinking enough about the possible consequences and dangers, and especially the ethics of it. The article argued that just like there are tax havens, countries that agree not to charge taxes to companies so that those companies are attracted there, there can be data havens. These would be countries that allow companies to collect or manage data in ethically questionable ways in return for money, and then those companies produce some kind of product that on the face of it seems acceptable. But when you look a little bit beyond that, you realize that it has been done with data that shouldn't have been collected or shouldn't have been matched in a certain way.
Debbie Reynolds 05:05
Yeah. And this is actually a great opportunity to ask you this: that article came out before Brexit. So now, after Brexit, what are you seeing, and what are you thinking?
Carissa Véliz 05:15
Yeah, I think I was right. Recently, there was an article written in the Financial Times by the Secretary for digital matters; I forget exactly what his title is. Again, he reiterated the same idea: data shouldn't be thought of as a threat but as a fantastic opportunity. And I just fear that just like we have money laundering, we could have something like data laundering in the UK, and that would be a very bad idea. It would be very bad for reputational reasons, and it would be very bad because I don't think people realize the extent to which personal data can be a threat to national security, to equality, and to democracy. It's something that we should be very alert about.
Debbie Reynolds 06:08
Yeah. And what do you think about this? When I'm thinking about data, I'm asking who has data and who doesn't. I feel like there's a new caste system being created in the world, based on who has the data and who doesn't, who has insights and who doesn't. What is your thought about that?
Carissa Véliz 06:32
I think that's true. And that's part of why I titled my book "Privacy Is Power": in the digital age, power is all about data, and whoever has the data has the power. And there is a class system, not only with respect to which countries have more data than others, but also which companies have more data than other companies and how they use it. Furthermore, there's a caste system in whose data gets mined and who bears the worst of the brunt of it. Typically, it is people who are worse off, people who need welfare, people who have already been discriminated against. These are the people whose data is most at risk, and these are the people who are hurt the most when things go wrong.
Debbie Reynolds 07:19
Yeah. And for people who read your book, what would you like them to take away from it in terms of their own lives? What can I do as an individual, or what can I do to protect myself better in this digital age?
Carissa Véliz 07:43
One idea that I would really like people to take away is that privacy is not important only if you've done something wrong, or only if you're particularly shy, or only in particular circumstances. Privacy is a matter of power, and it's never a good idea to give either companies or the government too much power, because power typically gets abused. Another idea that I would really like people to take home is that privacy is a collective matter. It's not about you, or at least it's not only about you. When you share your data, you're actually sharing data about other people who have not consented to that and who might suffer from it. There are many cases we can use to illustrate this. One very straightforward case: if you donate your genetic data, then you're donating data about your parents, your siblings, your children, your very distant cousins, who could get deported or denied insurance, and they didn't consent to that. Another example is the Cambridge Analytica scandal, which really showed how only 270,000 people gave their data to Cambridge Analytica, but with that data, the firm accessed the data of 87 million people, and with that data, they created a tool to try to sway elections around the world.
Debbie Reynolds 09:00
Right. Yeah, I think we all hear some people say, you know, I have nothing to hide, and they don't really understand. It's not about hiding. It's about being able to have agency over your information, right?
Carissa Véliz 09:17
Exactly. Whenever somebody knows too much about you, they can have a lot of power over you. They can interfere with your plans, they can try to modify your behavior, they can exploit your vulnerabilities, they can discriminate against you, or they can humiliate you. It just gives them the upper hand, essentially. What we can do as individuals varies a lot, and it depends on who you are and who you're trying to protect yourself from, and so on. But in general, there's a lot we can do by just choosing the right products. Instead of using Google Search, use DuckDuckGo, which doesn't collect your data; instead of using WhatsApp, use Signal. These products are very good, they're just as good, they're free, and they don't collect your data. That creates an incredible amount of pressure on companies to improve their privacy settings, and that matters a lot. Furthermore, we have to do things in politics. We have to pressure our political representatives, tell them how much we care about privacy, and ask them what they're doing to protect it.
Debbie Reynolds 10:25
Right, that's a good idea. I was surprised when the WhatsApp privacy policy was changed; so many people were in an uproar about that, and some people obviously changed to a different messaging app. I was actually happy to see that, because I think the conventional wisdom has been that people don't really care about privacy. At least, that's been the conventional wisdom in the US, and I never thought that was true. But to see people collectively react to that level of transparency and understand that they do have choices, at least in the US, I thought that was actually a good step.
Carissa Véliz 11:11
Yeah, exactly. I was very excited to see that people are becoming more and more aware of how companies like Facebook are collecting their data and are becoming somewhat aware of the kinds of risks that that entails. More importantly, they're becoming aware that that's not necessary; there are other companies that do just as good a job, or even better, and don't collect your data. So why would you choose a company that puts you at risk when you have other alternatives? That's great to see. And it also shows how, even though big tech seems so big and so powerful, and they are, really, their dominance is kind of based on a house of cards, and it depends completely on us. If we rebel, if we reject their business model, it's not going to last for long.
Debbie Reynolds 12:01
Yeah. What is your thought about inference? This is something that I like to talk about as well. You gave the example, which is totally true, of how Cambridge Analytica got data from a small number of people and were able to reach the friends of friends and get millions of people's data. But a lot of times, when you're out on the Internet doing different things, these companies can infer things about you and about the people around you, who, like you said, didn't consent to that. Another issue with inference is that it's not always true, right? So if that information is being used against you, say in employment or some other context that's not transparent to you, that could definitely harm you.
Carissa Véliz 12:58
Definitely. The first concern is that inferences can be wrong, as you mentioned. So, for instance, a prospective employer might calculate that you have, I don't know, a 60% chance of having diabetes or a 70% chance that you're a smoker, or whatever it is. And you might be treated as if that's 100% sure, and it's not; you actually don't have diabetes, and you actually don't smoke, and you might be treated as if you did. Furthermore, inferences are very dangerous because this is data that you don't give up voluntarily, and it can include any kind of thing. For instance, your life expectancy can be calculated from how fast you walk when you're carrying your mobile phone, or your cognitive capacities can be calculated from how you slide your finger when you go through your contacts. This is not information that you're giving voluntarily, and it can be used in ways that discriminate against you even when that data shouldn't be used at all, say data about your health. Prospective employers shouldn't even have that data, so the fact that they can infer it is really worrisome, because it's very hard to police. Your prospective employer can say, well, we didn't hire you because there was a better candidate, and how are you going to know that it wasn't because they tried to infer your health status from your data? That's very problematic. Furthermore, there's this idea that because inferences are probabilistic, they're not a privacy violation; anybody has a right to try to figure out the world, make inferences, and calculate probabilities. What's problematic is that we are being treated as if it were private information. If we, for instance, have a very rigorous law about medical data, and your doctor cannot give your data away to companies or insurers or prospective employers, then why should we allow an algorithm to try to figure out whether you have cancer, when, let's say, the algorithm is 95% accurate? Why should we allow that? It doesn't seem to be very different, and definitely the consequences for your own life are not different. So we should think about sensitive inferences just as we think about personal data.
Debbie Reynolds 15:17
Yeah. How do you feel COVID has impacted the privacy dialogue or conversation?
Carissa Véliz 15:27
In general, COVID has been a great challenge to privacy from the start. The very first reaction from big tech was, oh yeah, we're going to create an app, and this is going to take care of it. Of course, it didn't work out that way, and there were very good reasons to think that it wouldn't work out that way. We have also been hijacked more and more by tech. Before COVID, there was the illusion that all our interactions with tech were voluntary, that if you didn't want to, you could just not do it. Of course, that was always questionable, because you want to be an active participant in your society, and you want a job, and you want to be in touch with your family, and so on. But with COVID, it became really, really evident that we don't have a choice. If you want an education, you want a job, you want to communicate with others, you have to use tech. And that means that what we considered consent or voluntary action is now evidently not so voluntary and not meaningful consent. So in some ways, COVID has been a threat, also because we have been using tech a lot more. But on the other hand, it has made evident something that wasn't as evident to many people, and I think people are becoming more aware of their vulnerability to tech. That is a good thing. Something that I'm very excited about now is that there's more and more talk in the United States about a Federal or national privacy law. I think that is very healthy; that's something that five years ago was almost unimaginable. We have to acknowledge progress when it happens, even if it's still a great challenge.
Debbie Reynolds 17:07
Yeah. As you can see in the US, states are rapidly passing all these new laws, and it's just making the privacy landscape in the US much more complicated. I don't know anyone who doesn't want some type of Federal legislation that at least harmonizes some of this, because from state to state, for example, they may have different definitions of what personal data is, or what sensitive data is, or how that data can be combined in some way. That's interesting. What are your thoughts about cookies and the new announcement that Google is going to start using FLoC, which is kind of like a federated ID to categorize groups of people?
Carissa Véliz 18:01
I'm quite skeptical. I think Google's announcement sounds great: oh yeah, no more cookies, that's amazing, because you know cookies are trackers. But in fact, what they're proposing to do sounds a lot like tracking people, just in a different way. So one question is, okay, is Google really taking privacy more seriously, or are they taking the opportunity once again, like many other companies, to do some good marketing and still be at the cutting edge of tracking personal data? When you read the description, it just seems like, okay, they're going to put people in different groups according to their interests, but those groups can still be quite sensitive. And then their comeback is, well, we're going to audit these groups and check, and if, for instance, there's a group that's too small, it gets bunched with other groups, and we will make sure that the algorithm is not tracking any kind of sensitive category. But first, it seems like a titanic task; how are they going to audit this? And secondly, it means that while they're auditing it, surely they are coming up with personal information. If they're checking whether the algorithm is tracking things like gender, then they have to know people's gender. It just seems very unclear to me how this is a solution and something that will be overall good for privacy. Furthermore, Google's business model hasn't changed. It still earns most of its money from targeted ads.
Debbie Reynolds 19:40
Yeah. And it's not as though Google doesn't know who you are. So even though they do a federated grouping, they still know who you are. And then, too, people are unique, and they don't change a lot, so I don't think it's that hard to re-identify people, which is a whole different topic.
Carissa Véliz 20:03
Exactly. There are many details like that. For instance, for some websites, the website is only going to get information about which federated group you're in, but actually, to access that website, you have to log into your Google account, so that identifies you. There are all kinds of details that just make me think that this is not as good as it sounds.
Debbie Reynolds 20:24
Now, from how I'm interacting with people in Europe, I think the idea about privacy in the US is obviously very different. I like to say that the way people in Europe feel about privacy is the same way that people in the US feel about freedom of speech: that deep, fervent belief in those things. And for the US, I think, especially for these big tech companies, in order for privacy to remain a C-suite issue, it has to be something that hits the bottom line, so not just with fines, but through the movement of people having more choice and deciding to vote with their pocketbook, or vote with their data, so that they're only going to share data with certain companies. Being able to have a situation where companies are seeing people, just like you said, be more empowered to stand up and decide, I don't think this is right, I want to do something different; I think in the US, that'll definitely move the needle, because I know companies are very concerned about keeping customers and keeping customers' trust.
Carissa Véliz 21:48
So the first thing to say is that I'm slightly skeptical that there's such a huge difference between how we think about privacy in Europe and in the US. I think those differences are more superficial than they appear. Of course, our laws are different, but the US is thinking about privacy, for instance in the new privacy laws, in very similar terms and with very similar policies. And furthermore, when you just ask people, hey, do you think it's okay for a data broker to know whether somebody has been the victim of rape and then sell that information for profit, most people say no, whether they're American or European. The second thing to say is that, yes, I think we need to tackle the bottom line of companies, and yes, what people do is very important and is going to matter a lot. But at the end of the day, we need to regulate these companies. First of all, it's not fair to put all the burden on the shoulders of individuals. And secondly, it's not enough, because many times, when we talk about privacy and Cybersecurity, these are very abstract terms and very invisible to us. I know, more or less roughly, what a safe door looks like, but if you give me an app and ask me if that app is safe, I have no idea, not the least idea. Just like when I get on a plane, I don't need to be an engineer to trust that the plane is more or less safe, I want to download an app and know that there are just things that nobody can do with my personal data, because they're just too risky for myself and for society. So I don't think we can do away with regulation. But yes, individuals are going to be very important to create the pressure so that happens, and by choosing the right companies, they can have a huge effect. Individuals can also have a huge effect in motivating companies to change their business model to one that doesn't depend on exploiting our personal data.
Debbie Reynolds 23:49
All right. I'd love to talk with you about ethics and AI. This is a topic that I'm fascinated with, and I love to talk with people about it because, I don't know, when you look at movies about the future, it's always the evil robot that does all these crazy, wacky things. But you tell me what your thoughts are; I think that the human has to bring the ethics to artificial intelligence.
Carissa Véliz 24:21
It's a very complicated topic, and there's no easy answer. Of course, there are concerns about the safety of AI and how we align its values with ours and so on and so forth. But sometimes I think we get distracted by these apocalyptic visions that are very far away, and we forget that AI is a huge problem today. It's causing huge inequalities and huge injustices, and we're not handling it very well. So I'm more concerned with that immediate problem that is AI today. And for that, definitely, the ethics are on us. Today, at least, AI is a tool, and we are the people who are designing it, implementing it, allowing it, auditing it or not auditing it. We are responsible for the tools we create.
Debbie Reynolds 25:15
Yeah, obviously I know people like you who focus on AI and who also want to do things like audits. But do you think this is going to become more common or more regulated, in terms of how people are using artificial intelligence, and then the transparency part of it?
Carissa Véliz 25:37
Definitely, it's just a matter of time. Something I often mention is how our ancestors regulated their giants: first it was the railroads, then cars, airplanes, drugs, food, you name it. It's our turn to regulate our giants, and at the moment, it's truly a free-for-all. At the moment, you can pretty much use any data to train an algorithm to do anything and implement it without any auditing whatsoever and no rules. That's just ridiculous; it just can't go on. For instance, with pharmaceuticals, we would never allow a drug to go out into the world without it being tested. Not even in really extreme situations like COVID do we allow a vaccine to go out in the US or in the UK or in Europe without rigorous testing, all of it. And yet, we do that with algorithms all the time. And algorithms, in many ways, can be just as powerful as, or more powerful than, the most powerful drug you can possibly imagine. And that's just wrong.
Debbie Reynolds 26:43
Right. Yeah, I feel like a lot of the struggle in terms of communicating this to individuals is that, as you said, it seems invisible. It's not a tangible thing that you can touch or feel or see, so a lot of times, people think that just because they can't touch or see or feel it, it may not harm them in the same way. I'd love your thoughts about bias. Obviously, bias is a huge umbrella of things, but me being a woman of color, I'm very concerned about artificial intelligence, or applications of artificial intelligence, that may create a bias against people of different skin tones, or races, or gender, or anything.
Carissa Véliz 27:34
I'm really concerned about that, too. I think many times there isn't a bad intention behind these algorithms, but from the way they have behaved and the way things have unfolded, I fear that in many cases we are discriminating just as badly as we did in the past. But now we have the excuse that, well, it wasn't us, it was the algorithm; we didn't mean it, and we didn't even know it was happening. And that, to a certain degree, is true, but it doesn't justify it at all. It is our responsibility to create equality in our societies, and it doesn't matter if you don't want to be a racist or a sexist: if your algorithm has those effects, it is just as bad, and it's hurting people just as badly. So one of the things that I worry about with using personal data so much, in general, is that we are not being treated as equal citizens anymore. We're being treated on the basis of our data, and in many cases, that includes information that shouldn't be included and shouldn't be taken into account: whether we're a man or a woman, black or white, whether we live in one part of the country or another, whether we have a Mac or Windows, and so many other things that are taken into account to essentially treat us differently when we should be treated the same.
Debbie Reynolds 28:58
Yeah, that's one thing I try to talk to people about a lot, because I think when you're on the Internet doing different things, people have the impression that you and I are seeing the same things, and we're not. The information that gets fed to us is based on who they think we are and what they think we like. A lot of times, I try to say the Internet gives you the impression that you're in a library with access to everything, but actually, you're only in a section of the library. You really aren't seeing everything unless you actively do something to break out of that mold.
Carissa Véliz 29:44
That's absolutely right. A family member often tells me, hey, did you see that ad that was on Twitter? And I say, no, you see that ad because it's you; I don't see that. I think it very much goes against human psychology to realize that what you see out there is not a reflection of reality; it's a reflection of who companies think you are. That is a really unnatural concept, and it goes back to what we were talking about, how data is not transparent and is very abstract and difficult to understand.
Debbie Reynolds 30:18
Yeah, I had an example with women and men over 50 looking for executive jobs. There are certain jobs that men over 50 will see that women over 50 won't see. So it gives the women the impression that there aren't executive jobs for them, and it gives men the impression that, you know, I don't know why women are complaining, because there are all these jobs.
Carissa Véliz 30:45
Exactly. And that same phenomenon can be generalized to so many topics. Many times, somebody is thinking the other person is just crazy, or they're stupid. It's not that; it's just that they see a reality that's completely different from the reality you see on your screen.
Debbie Reynolds 31:01
Yeah, I always like to bring up the whole evil robot concept, because I think that concept sort of hurts AI and the way people think about it. In a way, a lot of these dystopian movies are like, you know, the robot took over, and we had no control. But to me, that's just a way to abdicate responsibility, because humans should control AI. AI shouldn't control humans.
Carissa Véliz 31:37
Yeah, that's interesting. I understand the concern from the point of view of computer scientists and other people who think that we may create something that is beyond our control. And this is a very old thought; of course, we think of Frankenstein, but it goes even further back, to Greek myths. So this is a very old concept. But something that's striking is that right now, we can control AI, and we're not controlling it. So really, it should kind of wake us up to the issues that we're having, which have much more to do with good governance and politics, and really how we organize society, than with technology. And this is something I find very often: whenever there's a problem, there's an instinct to say, well, we should invest in technology and invest in science. And we forget that if we don't invest in training people in the humanities and learning good governance, we can have the best cutting-edge tech that you can possibly imagine, and it will be destructive. Because without good governance, there is no good tech.
Debbie Reynolds 32:42
Yeah, that's a great quote. I'm going to quote you on that. I think that's true. What threats do you see in the future that you feel people aren't really taking seriously enough? I'll just give you an example: things like facial recognition or biometrics concern me deeply, especially because I feel like the harm can be so catastrophic and immediate, and in my opinion, if someone's damaged by those things, there probably isn't any reasonable redress.
Carissa Véliz 33:21
I agree with you. Any kind of technology that fundamentally deprives people of anonymity by default is very dangerous. I think that the possibility of being somewhat anonymous, for instance just to protest on the street, is very important for democracy, and it's very important for freedom. And we are developing all these tools to deprive people of anonymity. What is it called? Gait and heartbeat recognition. All of these, I think, are incredibly dangerous. I fear that we are building the perfect structure for an authoritarian regime, one that will be almost impossible to get rid of, because as soon as we start thinking about organizing and messaging each other, we can already be the victims of these authoritarian regimes. So I worry a lot about that. On the flip side, I think it's a moment to be optimistic, but there's a lot at stake. Something I would like to encourage us to think about is how much the US and Europe need each other right now. We need to go back to that old alliance that we had and come together and come to an agreement about how we deal with data, how we deal with AI, how we deal with Cybersecurity. Because we have really important adversaries who are very good at these topics and who are a real threat to freedom and democracy in the future.
Debbie Reynolds 34:54
Yeah, I think we have to change; I don't know your thoughts, but my feeling is that in the US, so many things are reactive as opposed to proactive. A lot of the laws that get enacted are based on harm that happened in the past, and then we create a law; even a lot of laws are about precedent and about the past, not really about the future. So I feel like, in order for us to really get ahead of this, we have to be more proactive.
Carissa Véliz 35:26
I think that is part of the value of ethics. Ethics is about having a group of people sitting around a table and discussing how this might go wrong, essentially. Many times in tech, we've seen that there's too much optimism. People in tech, I think, truly build something thinking that it's going to be used for bettering society, and they forget that technology almost always has different uses, and it almost never is used only for the best purpose it can be used for. Nobody is sitting around thinking, okay, this is what can go right; now, what can go wrong, and how can we prevent that from happening? And that is very much the role of ethics.
Debbie Reynolds 36:10
I totally agree with you on that one. So I love to ask people this question. Carissa, if it were the world according to you, and we did everything that you say, what would be your wish for privacy in the future, either in the EU or anywhere around the world?
Carissa Véliz 36:28
To end the data economy. It's totally unacceptable to sell and buy personal data. It creates very toxic incentives to collect a lot of data and then sell it to the highest bidder, who often is not somebody who has our best interests in mind. And I think that while it can sound radical to end the data economy, because we've become used to it, really, when you think about it, what's radical and extreme is to have a business model that depends on the systematic and mass violation of rights. That's unacceptable.
Debbie Reynolds 36:59
Do you think we'll ever have anonymity again? I miss that, actually.
Carissa Véliz 37:05
Yeah, I think we will recover some of it. I definitely hope to recover it on the streets; I think it is very important for protest. And online, I have an open-access paper called "Online Masquerade" in which I argue for a system of pseudonymity and point out how, for most of history, people wrote under pseudonyms or anonymously. If we hadn't allowed for that, we would have missed out on a lot of great works, for instance from John Locke, Kierkegaard, and many other philosophers. I think we have to recover some of that.
Debbie Reynolds 37:36
Wow, that's fascinating. Well, thank you so much for being on the show. I really appreciate it; this was fantastic. We're definitely going to follow your book and your work, and we'd love to support anything that you're working on, because there are so few people talking about these issues in the way that you're talking about them, and we don't really move the needle internationally unless we have people having these conversations.
Carissa Véliz 38:04
Thank you so much for the invitation.
Debbie Reynolds 38:09
Well, thank you so much.