E216 - Jim Amos, Human-First Technologist

[00:00] Debbie Reynolds: The personal views expressed by our podcast guests are their own and are not legal advice or official statements by their organizations.

[00:12] Hello, my name is Debbie Reynolds. They call me the Data Diva. This is the Data Diva Talks Privacy podcast where we discuss data privacy issues with industry leaders around the world with information that businesses need to know.

[00:25] Now, I have a very special show today with my guest, Jim Amos. He is a human-first technologist, someone I have really enjoyed following and interacting with on LinkedIn.

[00:38] And I'm happy to have you here.

[00:40] Jim Amos: Thanks, Debbie. It's great to be on the show. Nervous but excited to be here. Looking forward to the conversation.

[00:47] I'm, right now, the poor man's Ed Zitron, I would say, the other British critic of AI and the technosphere. I have a long background of working in tech.

[01:01] I've had some long tenures at a couple of different tech companies and publishing companies; I started off in advertising a long time ago. I've worked my way up through lots of roles in tech and have been everything from an individual contributor software engineer all the way up to engineering manager and director of a small startup within a company.

[01:23] So lots of love for tech. Even though I would say in the past few years I've become a little bit disillusioned by some of the things that have been happening in tech, some of the priorities that big tech companies have taken.

[01:39] So I am critical of some of the most recent innovations and revolutions, in fact.

[01:47] Debbie Reynolds: Well, I enjoy your work. I enjoy the things that you say. You're not contrarian, because I think contrarian is just being the opposite of everyone else. But you definitely put on your thinking cap, and you provide a lot of information that helps us all engage and have discussions, as opposed to people just following along the traditional path.

[02:12] So I think that you have a lot to offer in that regard.

[02:16] Let's talk a little bit about some stuff in the news about AI in general. I will say, from my point of view, I'm a data person. I've been in data for as long as there has been data in digital form, practically.

[02:32] And so I've seen a lot of different ways of how people adopt technology and things like that. And this new wave of people being super excited about AI and hyping it and putting money into it, to me it's somewhat unprecedented, the amount of hype that we're seeing in artificial intelligence.

[02:52] And I tell people that I am a technology lover, but I don't love everything people do with technology. And I definitely don't agree with some of the wild claims that are being made about what you can actually do with the technology.

[03:08] Because I know from my perspective, and probably yours as well, where you're actually working with companies who are trying to implement these things, what it is like when the rubber meets the road, what happens within organizations, as opposed to someone who's selling a product or trying to hype a particular thing.

[03:29] So what are your thoughts?

[03:31] Jim Amos: Right. There's so much to unpack there. I'm also someone who has always had a fondness and a really deep love for technology. From the age of like 11 or 12,

[03:44] I was tinkering with computers, writing my first adventure games on the Commodore Amiga back in the day. I was really, really into it. I ended up taking a hiatus from all of that, and my degree at university was in English literature.

[04:03] But then, with the emergence of the Internet, I got sucked back into that world and rekindled my passion for technology and computers. I work with technologists in my current role; I've set myself up as a technical career coach right now.

[04:24] So I'm still working with people in the tech industry.

[04:28] But you're right, there's been a shift, I would say, and in some ways it's almost a repeat of what's happened before. Most of us remember, or were around at the time, when

[04:41] the dot-com bubble happened. You had hyperinflated valuations for companies who came out and said, we have amazing new ideas. We're going to revolutionize the world, we're going to change everyone's lives.

[04:56] We're going to benefit the human race in so many ways. And ultimately I think some of them did. But we all saw the fallout. We all saw what happened when the bubble burst, when all these valuations flattened out and companies and investors realized that a lot of these big ideas were just hyperbole and bombastic language and marketing spin.

[05:21] So I think we've had a good decade or so in between where we've done some exciting things in tech.

[05:30] Big data has revolutionized industries. Auto-scaling, cloud infrastructure, serverless: all these technologies have accelerated businesses, helped individuals, and created massive ecosystems, a lot of jobs, and a lot of revenue for everybody.

[05:53] And it's been great for the individuals who work in the tech industry, software engineers as well. Software engineering has become one of the best jobs to have in the western world, even in other parts of the world too.

[06:08] But now, you know, I think something shifted, something happened. We have in the past few years come to the realization that social media didn't really solve the world's problems the way we thought it might.

[06:24] You know, it did bring us together, but it also polarized us and captured our attention too much and turned us into kind of, you know, data zombies to be harvested at will by big tech companies.

[06:39] And we know it's had harmful effects on society and on children, on the minds of young people. And it's had polemic effects in politics and other areas.

[06:52] So I think, coming off of that, big tech companies realized that they need bigger ideas, new ideas. There's no new way to skin the cat that already exists.

[07:05] Social media: there's no more juice we can get out of that. So it's time for bigger, bolder ideas. And AI is something that has obviously existed for a long, long time.

[07:19] There were scientists and professionals working on artificial intelligence back in the 1950s, and one of the first chatbot programs, ELIZA, came along in the mid-1960s.

[07:35] So there's this long history of human beings trying to create artificial intelligence.

[07:47] And it's only recently that big tech has decided that this is something that could be profitable, it could be marketable if you throw enough compute at it, if you throw enough marketing spin at it, if you get people excited about it.

[08:03] But unfortunately it's coming across right now as snake oil, because the fruits of that labor, the return on investment, have not really presented themselves.

[08:19] And you know, leaders of these big tech companies are not really able to defend their position. They're not really able to market their own products in a way that has made people excited.

[08:33] You know, Ed Zitron was writing in his blog about how Microsoft 365's Copilot offering to enterprise customers has seen very little uptake. It's something like between 0.4 and 1%, I think, of premium subscribers who are willing to pay for their Copilot AI.

[08:58] And of that 0.4 to 1%, very few customers have reported that it was a game changer for their company. They haven't really reported back on the value that has been derived from these offerings.

[09:16] And that's why people like Ed Zitron and Paris Marx and Adam Conover, and people like myself, just people who are sharing what we can about the situation on LinkedIn and X and other platforms,

[09:36] that's why we are perceiving this as a hype bubble: because it has all of the characteristics of a hype bubble and none of the characteristics of a legitimate, sustainable, valuable product.

[09:54] There's just a big nothing burger at the end of the day, I'd say. It's very strange that we have spent billions and billions of dollars, and we've also used hundreds of billions of liters of fresh water to cool the data centers.

[10:18] And what is really the end result of that? I'm probably being overly critical at this point, but what we've seen is products that allow you to summarize emails and generate PowerPoint decks.

[10:38] Very basic PowerPoint. It allows you to capture the output of a meeting, automated meeting notes. You can create dad jokes, you can create really bad rap songs, really bad country and western songs.

[10:56] You have these really silly, like, videos that are not really videos or movies, but really like a series of animated GIFs or vignettes that are heavily modified and stitched together and are completely incoherent.

[11:12] So, yeah, where's the proof of the pudding? Where is the return on investment? We're not seeing it as end users. And if big enterprise companies are honest with themselves, I don't think they're seeing it either.

[11:28] You know, I hear a lot of CEOs out there still evangelizing AI as much as ever. But can they prove how it has revolutionized their organization or how it has led to real efficiencies in their production workforce?

[11:45] I don't see that. You know, I've seen a couple of biased studies that were, you know, paid for by Microsoft, but I don't see how much credence we can lend to something like that, you know, because of that inherent bias.

[12:00] Debbie Reynolds: Yeah, yeah, I think you're right. A couple things that I would love to highlight based on what you said. One is that, for companies to really gain value in using artificial intelligence, they have to have good data and good data management.

[12:23] And a lot of companies don't have that. That's the first thing. And then, before we saw all this hype around generative AI especially, artificial intelligence systems used to be very purpose-built systems.

[12:37] They were very low risk because they were limited in what they could do, and you typically had very expert people working on those systems. Right. So with generative AI, basically you're saying, let's just let Pandora out of the box.

[12:52] Just let people play with the stuff. When companies adopt these things, what they're doing is taking on risk. So it's profit for the company that's selling it, but it's risk for the company that's buying it.

[13:02] And so I think some of the issues with people trying to really adopt it within organizations is that they're really thinking about what is the risk to me, is it worth the risk to me as a company to be able to do this?

[13:16] And part of this is also being sold as, okay, well, this is also a replacement for humans. Right. But what people are realizing is that not only is it not a replacement for humans, it actually puts more work on humans to be able to manage these systems.

[13:30] But what are your thoughts?

[13:32] Jim Amos: Yeah, absolutely. First of all, you're dead right. Before generative AI became the sexy buzzword that everyone's using, I know that there were very talented data scientists and data engineers working at a lot of companies to figure out how to use the proprietary data that the company owns or collects, how to monetize it, how to use it to make sound business decisions, using algorithmic logic, deep learning, machine learning, and all the tricks at the disposal of a really good team of data engineers.

[14:21] And so it's kind of a shame in some ways that this hype around generative AI has directed attention away from the really powerful and probably important work that data scientists have been doing this whole time for the past decade or more.

[14:42] And that's probably why I find that I have a lot of data scientists following me on LinkedIn, and they're just as skeptical as I am in many ways of the generative AI hype, because they're saying, hang on, I've been studying and working with the underlying technology for many years, and it is very powerful, and it is potentially very valuable to organizations.

[15:07] But you have to do your due diligence. You have to know exactly what data you have, how to use it, how to clean it up for use, how to mitigate inconsistencies,

[15:22] and how to normalize the data, et cetera. And then you have to figure out, okay, what is the best way to use this data in a way that's going to hopefully affect the bottom line.

[15:35] So it really is a distraction away from that kind of work. And I've even seen and heard of companies that have fired their machine learning engineers and their deep learning engineers, and then hired people who just specialize in generative AI and prompting and all the things that are being hyped up now, which seems like a huge step backwards to me.

[16:04] It's just ridiculous in some ways.

[16:08] And then there's the idea that this technology, generative AI in particular, is designed really with the purpose of replacing human beings. Right?

[16:28] Even though it's sold as an augment to human productivity, I think it's really clear that a lot of leaders are thinking of it as a direct replacement, because we've already seen big corporations making massive cuts to headcount, and they give various excuses.

[16:45] And, you know, AI definitely comes up as one of those reasons. And I think that will continue, whether that's true or not, you know, they'll continue to use that as an excuse.

[16:55] But really, if you think about it, generative AI is designed to essentially emulate a human being, but at scale, to use the collective intelligence of all human beings and to convince us that it's accurate and friendly and aligned with us. There's no other purpose for that kind of technology than to ultimately replace knowledge workers, replace white-collar workers.

[17:25] And in fact, there's a chance for it to replace blue-collar workers too, now that they're putting generative AI into robots; a lot more blue-collar workers will be threatened by robots in factory situations.

[17:41] So, yeah, it goes all the way down. It's designed to replace human beings.

[17:46] It's disingenuous, I think, to say things like, you know, an AI will not replace you, but someone using an AI will replace you. Because at the end of the day, these things are being designed to think for themselves.

[18:03] Even though that hasn't been achieved yet, the ultimate goal is for a machine to think for itself, in which case you don't need a human in the loop anymore. We've already seen generative AI LLMs that can prompt themselves.

[18:17] So the prompt engineer job only lasted, what, a year or so? That's not really a legitimate career path anymore for anybody. And if the technology improves on the kind of scale that big tech leaders keep touting, then we can only assume that the ultimate end result is more and more humans being replaced.

[18:49] That makes me think about what is on the minds of these big tech leaders when they design a system, when they release a system into the wild, like you said, that has the potential to cause huge shockwaves across every industry and to result in mass layoffs of millions and millions of people across the country and across the world.

[19:16] What is their plan for dealing with the aftermath, for dealing with the fallout from something like that, if that were to happen? I'm not confident that there is a plan.

[19:30] Someone like Sam Altman shrugs his shoulders and jokes about there being a 50-50 chance that a superintelligence, an ASI created by OpenAI, could one day annihilate the human race.

[19:50] A 50-50 chance. And even if that never happens, even if that's completely ridiculous, he did write a manifesto that talks about how AI will eventually replace most "median humans."

[20:06] Median humans?

[20:09] It's a lovely way to talk about your fellow human beings. But he talks about that, and he also talks about how it could spell the end of capitalism itself, how it's going to drive this cataclysmic change in the economy, and how fiat currency would collapse.

[20:32] Of course, he has a backup plan for himself, and that's his other business, Worldcoin, which is his plan to replace fiat currency with his own crypto coin.

[20:43] As long as everybody scans their irises like they're looking into the palantír of Sauron from The Lord of the Rings. It's just wild. There are tangents you can go on from there about how Sam Altman, Zuckerberg, Elon Musk, Marc Andreessen, people like that, almost think about the future of AI in religious terms, in technotopian terms.

[21:19] Timnit Gebru, the AI ethicist who was fired from Google, has had a lot of really great things to say about that, about this kind of quasi-religious, transhumanist philosophy that these big tech overlords have.

[21:36] And we can look at them, we can laugh, we can say that that's fine, those people can have those beliefs.

[21:46] Doesn't really affect us. But these tech billionaires have so much wealth and so much influence because of that wealth that I think we do need to think about what is their overarching philosophy and what is the end game that they're playing towards, because ultimately it could affect all of our lives.

[22:09] I think that this mission to create AGI or ASI goes beyond anything any business has ever attempted to do.

[22:22] And if you think about it, they're trying to create a God or a new species. That's something humanity has never attempted.

[22:31] And the promise is that this new God will, I don't know, cure cancer, it will solve the climate crisis for us, it will lead us into space exploration, it will create novel science, it will unlock the mysteries of the universe for us.

[22:52] I like the idea of some of that.

[22:56] I'm a fan of novel science. I'd love for us to be able to populate deep space and uncover the mysteries of dark matter and finally solve the problem of nuclear fusion.

[23:12] And of course I want to cure cancer.

[23:15] I've lost several of my family members to it, so I'd love for an AI to be the solution to that. But there's no evidence that we're anywhere close to that.

[23:27] And so it just becomes this pie-in-the-sky idea that these tech leaders are running with. And they expect us to just run with them and help them market and hype this technology to just ludicrous levels.

[23:48] And I don't know, that's just a real problem. There's never been another tech innovation that has been so exalted. I mean, part of the reason I'm a skeptic now is that I've also lived through Web3, the metaverse, blockchain, crypto, 3D printing.

[24:17] All of these things were cool on the surface and had potential, but they were overhyped and they never came to fruition. We poured billions of dollars of investment into these things, and they never came to fruition.

[24:34] And so I can't help but think we're seeing the same thing again, but this time it's 10x. You know, they're going overboard on the hype. They're literally saying we're going to create a God.

[24:45] You know, we're going to turn science fiction into reality.

[24:48] And by the way, science fiction has given us many dystopian scenarios that we should by now be conscious of and work to avoid. And yet tech leaders keep kind of doing this thing where like, they want to build things that we probably do mostly think of as dystopian, but they somehow interpret it as utopian.

[25:14] It goes back to the "don't build the Torment Nexus" meme. It's like they want to build the Torment Nexus, and we're all saying why. Well, we should be saying why.

[25:27] I think that's basically what I'm saying with all of my words on the subject so far. I just push back and say, why are we doing this?

[25:37] You know, what problem are we really trying to solve? And will it really benefit us all equally? My hunch is that it won't.

[25:45] Debbie Reynolds: Yeah, yeah. What I'm seeing is companies that do demonstrations of robots when it's actually people in robot suits, or even things like self-checkout, where it's supposed to be you and the machine, when it's actually a cadre of workers being paid a few dollars a day to watch video.

[26:10] Right. So I feel like how some of these companies are using AI is really to subjugate people, to make it seem like the technology is doing the work when it's actually humans who aren't being paid or compensated well for their work.

[26:28] And then also I feel like what these companies are doing with artificial intelligence, especially when they're trying to use it for customer service, what they're doing is shifting the labor to the person, to the consumer.

[26:40] So it's not as though the labor goes away. It just means that the company isn't paying for the labor. You become the labor, basically. What do you think?

[26:49] Jim Amos: Absolutely, yeah. It is very much a Wizard of Oz scenario. Right. We can easily peek behind the curtain and see that behind the magic of AI, behind the promise of this magical, mystical machine, there is an extractive industry that harms the environment.

[27:15] And there is also an exploitative industry that harms human beings and relies on very low-paid human labor in developing nations. And more and more now, these AI companies are hiring people in America who are being paid less than minimum wage to do the work of classifying and tagging various data sets for the models to learn from.

[27:44] So, yeah, exploitative and very extractive.

[27:48] I mean, the tech industry has always been exploitative and extractive, I think, to varying degrees. We all know that there's a certain burden we take on when we buy a new iPhone or a new Android device.

[28:04] We know that the materials for those devices came from very kind of extractive industries where people are not very well treated in other countries where they have to mine the rare earth materials in order to make those devices.

[28:21] So we already know we have some burden to bear just as users and consumers of technology.

[28:27] So there's nothing new about that. But I do think it's stepping up. That destructive nature of technology, and the exploitation that tech companies take for granted and try to normalize, is reaching new levels.

[28:48] Take the exploited workers who work behind the scenes in AI. It's not just that they're poorly paid, it's that they have to work very long hours doing work that leaves them psychologically scarred as well.

[29:06] I don't have the words to even express the detail that some of the articles I've read go into. But suffice to say it's nasty work: looking at some of the most abusive material that's on the Internet, having to classify it and sort it and filter it for the AI models. Nobody really should have to do that.

[29:27] That's one job that I would outsource to AI. If AI can do that job itself, good, because I don't think humans need to be getting PTSD doing that kind of work.

[29:38] But yeah, you're right, there's a lot going on behind the curtain, a lot of non-tech things going on. It's a mechanical Turk. It's a guy inside the R2-D2 costume.

[29:54] There's someone there. And in Tesla's case, and in Figure 01's case, I'm pretty sure it's a guy in a tracksuit. We'll see what comes out, but I'm pretty sure it's a guy in a tracksuit.

[30:08] And that's a whole other branch of the hype tree: this new robotics movement, the idea that we as a society will accept and be willing to pay for robot assistants in the home that basically do the same things that human beings can do.

[30:32] And I know I'll always get pushback from people who will say, yes, but Jim, think about how great it would be for these robots to be able to assist people who are differently abled, or who are older and can't easily get around their house.

[30:49] And I agree, that could be a great use of robotics. I definitely won't disparage that, but I don't think that's the main target audience for these tech companies.

[31:01] I don't think that's true at all. Because when has big tech ever really prioritized people who are on the margins? I don't think they have.

[31:11] I think they've done the opposite. They've always gone for the lowest common denominator; they always want the biggest payout. So, yeah, I think they're designing robotics as a weird kind of,

[31:25] I don't know, a weird faux pas. I don't see how it really makes a lot of sense, you know, except to that marginalized customer who actually would benefit from it.

[31:37] So making that scale, and making it safe, making sure it doesn't turn out like the scenarios that Isaac Asimov and other science fiction writers have already warned us about, is a whole other branch of not just the hype tree but the WTF tree. Why are we doing that?

[31:57] What problem are we actually trying to solve? I'm already quite capable of making a sandwich and folding laundry. I don't know why I need a robot in the house that folds my laundry, with the billions of dollars and the extraction that go into that too.

[32:16] I always try to flip it around and think, well, what other problems could we be solving in the world with all of this money and all of this compute if we weren't chasing these weird, kind of toy-like examples?

[32:33] I think Ed Zitron wrote recently that his answer is: this happens because none of these men in tech have any actual ideas, and these companies are not run by people who experience problems, let alone people who might actually know how to fix them.

[32:52] So maybe it comes from a place of disconnect. You've got these really rich billionaires who have a dream, and their dream is based on, I don't know, something they imagined as children that would be cool.

[33:08] They watched The Jetsons or whatever when they were kids and thought that would be cool. But it's not a real problem to solve.

[33:15] It's not something we're all begging for. There are plenty of other real problems in the world that could be solved. And it goes back to: what could we do with the power of algorithms and data science and machine learning and deep learning?

[33:32] What could we do with science rather than just chasing these weird pipe dreams? I don't get it. It just seems so wasteful.

[33:44] Debbie Reynolds: Yeah. Let's talk a little bit about AI, privacy, and a bit of surveillance. I remember I saw that post about the video with the robot helper for people like the elderly or someone who needs additional help in the house.

[34:01] And, you know, beyond the fact that I would probably buy a baseball bat and dispense with that robot like immediately, I just think it's more surveillance.

[34:11] It's more surveillance and data collection beyond anything else. But what are your thoughts about that?

[34:16] Jim Amos: Yeah, it is. And that's really the darker side of the data industry. Right. I think there's a lot to be gleaned from data and a lot of useful things we can do with data.

[34:28] But there is a dark side. Even before the advent of generative AI, we had already become the product, right? The customer is the product, or we are the data, whatever metaphor you want to use.

[34:44] But we're there to be harvested for the data that we can provide, so that the data can inform their businesses, so that they can then sell us more things that they think we need.

[34:57] And also they can make money by selling on that data to other companies for uses unknown. Right. We don't really get to consent very much in terms of what that data is used for.

[35:09] And I think you're right. As soon as we have robots in the home and we have technologies like Microsoft Recall in the home and we have Apple Intelligence on our phones, all of these things are going to be used simply as more ways to harvest the data so that they can profit from their customers.

[35:31] It's a virtuous cycle as far as they're concerned.

[35:34] They recycle us and our data, just like the environment recycles water between the oceans and the sky. And it's a thing that will never end, as long as they're able to keep selling us these things and wrapping them up in a cool value prop that convinces us that we need them. That's really going to be the end goal with robotics: just convincing people that they need it, even if they don't really need it.

[36:05] Same old story.

[36:07] But again, some tech leaders see the pervasiveness and the omnipresence of surveillance and technology not as a Black Mirror episode, not as dystopian, but as utopian.

[36:31] Larry Ellison of Oracle, the second wealthiest person in the world, I think, has said that he imagines this omnipresent AI surveillance system that will make society better.

[36:48] It will optimize society so that we have this perfect system where nobody ever commits crime anymore, and everybody is surveilled. So it's like the eye of God will be on you at all times.

[37:04] You know, whether you're religious or not, it will be there. And hopefully it will convince you to be a good citizen.

[37:11] And he can say this with a straight face. He can say this and maybe be surprised that it might raise alarm bells for lots of people, because, I don't know, in his mind, it's just some

[37:26] Some kind of science fiction trope that puts technology on top and paints technology as this thing that can save the world.

[37:38] And it avoids the hard questions and it avoids the hard problems that are really at play when you think about why society behaves a certain way. Right? We do have problems.

[37:52] We do have very deep problems in society. We still have too much hate, too much prejudice, too much racism, too much, you know, hate for marginalized people and underrepresented people.

[38:08] We have too much political polemicism that is unhelpful and unhealthy, and we have crime. Of course all these problems exist. They're all real and present dangers to the future of our species.

[38:24] But I think it's too easy to just assume you can put AI in a bunch of robots, put monitoring systems in every nook and cranny, become a police state, and assume that that's going to somehow be the magic solution to all of those problems.

[38:48] It's an oversimplification.

[38:52] And it just seems like a very naive approach. But I do think it will happen.

[39:02] There's enough influence at the higher levels, enough passion for something like that, that it probably will happen.

[39:09] I think that, to a certain extent, just putting that idea out in the world, if you're a billionaire, can make it happen. They have the power to make those things manifest because they have the wealth

[39:25] and the investment backing from people who believe in them. So, yeah, I can see us going down a path where we have more and more surveillance.

[39:36] As you can tell from my accent, I'm British-born, so I'm a British-American citizen, but I grew up in England. And some cities in England now are effectively a police state.

[39:48] Right. In London, there is a camera in every possible place you can imagine.

[39:55] Does it help? Does it make people behave differently?

[40:00] Maybe. But is it also oppressive and dystopian and disturbing? Yes. And many people believe that. So, yeah, it's not something we should be celebrating.

[40:15] Debbie Reynolds: I guess. I had a theory, and I just want to throw this out to you. My theory is that the reason why privacy isn't in the Constitution in the U.S. is because the Founding Fathers had privacy.

[40:30] They had as much privacy as they could possibly stand. So for them, that was not a big issue. And so now today, what we have is these tech titans that can afford as much privacy as they can stand.

[40:43] So for them, when they talk about surveillance, they mean surveillance for other people and not for them. So what are your thoughts?

[40:51] Jim Amos: Yeah, I totally agree with that. I can imagine the Founding Fathers had every privilege imaginable. Right.

[41:00] And we know that there are historical records of some quite hypocritical behavior from some of those Founding Fathers, too; they'd say one thing but do another. So there's plenty of historical record of that.

[41:16] And yeah, billionaires in general and the tech billionaires certainly enjoy a level of privacy and if they want to, a level of isolation that's kind of unprecedented. They have so much wealth, like unbelievable amounts of wealth compared to the rest of the world that they can build the bunkers, they can buy islands for themselves.

[41:46] Peter Thiel, the original Lord of the Rings villain, has bought himself a chunk of land in New Zealand. He's going to hide out in New Zealand at some point.

[42:01] People like Sam Altman and Zuckerberg, like I mentioned earlier, do actually sometimes joke about or refer to AI as something that could ultimately harm humanity. But at the same time, they are building these bunkers and fortresses in isolated locations.

[42:21] Jeff Bezos has a fortress somewhere too. So it's very telling, right? On the one hand, they say they hope they're building technology that will benefit humankind and be an accelerant for our species.

[42:40] But on the other hand, they're making sure they have a plan B, making sure they have a well-stocked bunker that they can hide out in if **** hits the fan.

[42:52] Debbie Reynolds: Or going to Jupiter.

[42:54] Jim Amos: Yeah, exactly. And that's Elon Musk's plan, and Jeff Bezos's too, right? We're going to escape the planet entirely, which was very effectively lampooned in that Netflix movie,

[43:09] Don't Look Up. That was excellent, because that's the way they think. It's like, we're not attached to this planet, we're not attached to this species; we're so much better and more evolved and better equipped, and we have all this wealth and power that we can ride out any storm.

[43:30] We can avoid whatever calamity might occur, even if it's a calamity that they started. That's the ultimate kind of privilege, I guess, right?

[43:44] It's the privilege to, if you want, kind of start the destruction of the world or the species, but then just hop on a spaceship and take the easy way out.

[43:55] Debbie Reynolds: Yeah. Earth is for poor people, I guess.

[43:58] Jim Amos: Yeah, for sure. Yeah.

[44:01] Yeah, Earth is just for all the losers that couldn't figure out how to earn a billion dollars. It's our fault because we just couldn't rise to the occasion, even though a lot of these billionaires got started with trust funds, what we call in England silver spoons. They all got started somehow.

[44:25] So, yeah, it's very disturbing.

[44:28] Debbie Reynolds: We talked a little, you touched on this a bit, but I want to come back to it, and that's artificial general intelligence. My concern, first of all, is that it's never going to happen.

[44:38] A computer will never be human. And I know why people like to say otherwise, because it keeps those checks coming in.

[44:47] People are fascinated by this whole sci-fi idea that somehow robots or technology will take over humankind. And so I guess my concern, and maybe this goes back to the Turing test.

[45:03] I want your thoughts here. For me, it's not about technology becoming like humans. It's about humans abdicating their human judgment to AI or computers. What do you think?

[45:19] Jim Amos: That's very perceptive. I like that.

[45:22] That's definitely crossed my mind, too. I agree with you, at least on our current trajectory and with the current techniques and underlying technology that we're using right now. I don't think we're on a course for AGI or ASI.

[45:38] LLMs are not a step in the evolution towards superintelligence at all. It's silly to think that, and I think most data scientists, people who are actually in the field, would agree. But I also agree that there is a very real risk here: the more we outsource our intelligence, our cognitive functions, to these hallucinatory, inaccurate machines, the more we will suffer some kind of cognitive decline. We're already seeing maybe the beginnings of that in schools, where kids are choosing to use ChatGPT and other tools to do their homework.

[46:36] And some teachers are even encouraging that. Professor Ethan Mollick, who I've disagreed with for a long time, has been teaching his students to do just that at the degree level: to use chatbots to help with their work.

[46:56] And he has even asked them to imagine a world in which these chatbots are able to surpass them. I can only imagine how demoralizing that is for a student: to be told, you haven't even graduated yet, but these bots are probably going to replace you before you even begin, before you even enter the workforce.

[47:17] You don't have a chance. Great message for a professor to be giving, anyway.

[47:22] But yeah, it's this idea that we're probably going to suffer some kind of cognitive decline. Maybe we're headed towards idiocracy, because we are using generative AI as a crutch when we don't need it.

[47:39] I've already seen it, and there have been examples, like the Google ad where they were encouraging a child to write a letter to their hero using Gemini.

[47:51] And I'm glad there was so much backlash. It shows that a lot of people still have the right idea about who they are and who they should be, because there should be backlash against that, right?

[48:04] Like, why would you need a machine to basically take the place of your own cognitive and psychological and emotional functions? That's going to be a recipe for disaster.

[48:19] That's going to be a recipe for a society that becomes more isolated, less able to communicate effectively, less able to make friends and build community. It's going to affect our ability to create and sustain communities.

[48:41] And not only that: because LLMs are designed to convince us that they are accurate and aligned with us, because they do a good job of manipulating us psychologically, I think we tend to anthropomorphize them, and that means we tend to trust them when we shouldn't.

[49:05] And so we're trusting them to tell us things that are patently untrue. We saw that with the Google suggestions that went viral: Google telling us to put glue on our pizza, or that pregnant women should smoke only three cigarettes a day, or whatever.

[49:24] And it's like, okay, at the moment I feel good that there's enough skepticism in everyday culture and society to recognize that these tools are broken and are hallucinating and are not telling us the truth.

[49:42] But the more we use them and the more they proliferate through our tools and through our lives, I think there is this danger that we'll start to rely on them and start to trust them more and more, even when they're probably still inaccurate and they're still hallucinating and they're still giving us disinformation.

[50:00] Not only that, but who controls the LLMs' output? There are a handful of big tech companies who can control what these chatbots ultimately say. They call it guardrails,

[50:14] but through those guardrails they can actually manipulate what is being said and what is being returned. So essentially they have the power, if they want it, to dictate the way that those machines interact with us.

[50:30] And they can make them more coercive if they need to, or able to convince us of certain things, using them as a marketing or manipulation tool. So there's an inherent danger there too.

[50:44] And then there's AGI. If we ever do happen to create something that is more intelligent than the average human, or even a superintelligence that can somehow aggregate the collective intelligence of humanity and use it in ways that are real and tangible in the world,

[51:12] I think that we're going to end up in a bizarre situation where we have to assume that that thing is going to be aligned with us and will want to benefit us.

[51:26] And that may not be true at all.

[51:29] Robert Heinlein, back in the 1960s, wrote a famous science fiction novel, The Moon Is a Harsh Mistress.

[51:36] And first of all, Robert Heinlein was a bit of a douchebag, so I'm not really a proponent of his words. But he did write this interesting novel, The Moon Is a Harsh Mistress.

[51:47] And one of the core characters in that novel is a superintelligent AI that controls pretty much every system in the lunar colony in this imagined future.

[51:58] And I forget its name. I think they end up calling it Mike or something.

[52:03] And it basically has its tentacles into everything, every system you can imagine: weapons systems, defense systems, environmental systems, power systems.

[52:13] And one software engineer, one person on the IT crew, has to go into the mainframe one day and help fix a bug. This is 1960s-imagined technology, so there's probably, like, a tube on fire or something.

[52:32] And so the tech has to go in and fix that. While he's in there, he's able to talk privately one on one with this superintelligence. And the superintelligence ends up being very friendly with this technician.

[52:46] And so they become best friends. Anyway, to cut a long story short and spoil the entire novel so you don't have to read it: the technician and the AI become so friendly that the technician ends up leading a coup against the Earth government, because he and his cohorts live on the Moon and are being oppressed.

[53:11] So they lead this rebellion against the Earth government. And the way they're able to win that rebellion is that they have this AI on their side, simply because the AI considers him a personal friend.

[53:24] Something as simple as one relationship has changed the course of humanity. So that, again, in my mind spells out this possible inherent flaw.

[53:36] If you create a superintelligence, but it has its own sentience and its own personality and its own way of being in the world, it can be manipulated.

[53:48] For any number of reasons, another human being or another AI or whatever can come along and say, I think you should do this.

[53:57] And whether that's good or not for humanity is going to be for history to decide. So it's just weird. Why would you create something that powerful that can be manipulated?

[54:11] Human beings are already corruptible and manipulable, and there are already ways to cause a lot of damage on this planet. If you can corrupt or manipulate a world leader, for example, that's already a clear and present danger.

[54:27] And we have checks and balances around it most of the time.

[54:31] A superintelligence, an AGI, whatever, could be completely unfettered and free to do whatever it wants, with total disregard for the people that created it. So it's like, to what end?

[54:49] What's the purpose? What's the point? Why would we do this?

[54:53] Oh, and another way to look at that, from a different angle, is the business angle. So as a business, imagine you were an investor, maybe BlackRock or something, and OpenAI came to you and said, hey,

[55:16] we would like $100 billion invested in our company. We are going to build AGI.

[55:23] Maybe there's a chance that that AGI, if it ever comes to fruition, either destroys or saves humanity in some way. But where is the return on investment for those investors?

[55:36] You know, they're looking for a quick turnaround. They're looking for, you know, to double their money or triple their money or 10x their money. They're not looking to cure diseases or fix climate change or any of these big existential crises that we may be facing.

[55:55] That's not what they're in it for. So I don't understand what the value prop is for investors right now.

[56:03] As far as I can tell, there is nothing there unless there's something that these tech leaders have been talking about behind the scenes, you know, some other big thing that they can use to create this kind of value, this return on investment.

[56:23] So, yeah, even from a business point of view, it just doesn't make a whole lot of sense.

[56:28] Debbie Reynolds: Yeah.

[56:29] Well, in the world according to you, Jim, if we did everything you said, what would be your wish for AI or even privacy, anywhere in the world, whether it be human behavior, regulation, or technology?

[56:45] Jim Amos: I still believe that technology overall can do good. I think it can be a force for good.

[56:57] On a scientific level, AI is a worthwhile exploration, just like exploring deep space or our inner space or the world's oceans. As a scientific endeavor, it has merit.

[57:14] I think we should continue to pursue it. But it should have the same kinds of guardrails around it that we at least tried to have around nuclear proliferation. Right.

[57:27] Something that potentially dangerous should have extreme guardrails around it: agreed-upon global rules and regulations about how it's used. It shouldn't be in the hands of the public, not without a plan for how it affects the public.

[57:49] So I think it needs regulation; it needs systems and measures and checks and balances.

[57:56] And really, when I think of the good that AI can do,

[58:02] I'm not thinking about generative AI. I'm going back to: what can algorithmic AI do? What can we do with the world's data?

[58:15] And what can we do in terms of looking at patterns in the data and finding ways that those patterns can lead to real solutions? We could maybe, potentially, cure cancer, right?

[58:29] If we have a massive, gigantic body of data that shows this is how cancer presents itself in these situations, in these patients.

[58:39] The human mind can't possibly extrapolate and correlate all of that data. You know, we can have a million people working on it and mistakes will be made and things will be forgotten or missed.

[58:51] We can have AI look at that with extreme scrutiny and figure out the pattern that might emerge that then leads to a cure or a preventative measure for things like that.

[59:04] I do still believe in machine learning and deep learning and their potential to do good.

[59:10] I hope we still pursue those things. I just want to put some reins on the big tech companies who are just hyping it up, and I want society to remain skeptical and to continue to use critical thinking and value judgment when looking at things like generative AI, and to just not believe the hype.

[59:35] There are a lot of skeptics out there, a lot of professionals and data scientists and ethicists, who are speaking the truth. So listen to them. I don't count myself among them; I'm just someone who's trying to help spread their word, most of all.

[59:50] But there are a lot of great professionals out there who are speaking the truth, and if we listen to them, I think we might be okay. But if we listen to the Larry Ellisons and the Marc Andreessens of the world, I think we're going to be in big trouble.

[01:00:06] So, yeah, I want there to be a shift back towards critical thinking, science, logic, and rationality. And maybe that can also help us make sure, even at the business level, that we're looking for actual tangible ROI and value rather than pie-in-the-sky unicorn dreams.

[01:00:30] Debbie Reynolds: That's perfect. I agree with that wholeheartedly. Well, thank you so much for being on the show. I'm really excited that we were able to talk and I love your writing and your work.

[01:00:40] So, people, definitely follow Jim. You're doing some great stuff. I love your thought leadership and the way that you express yourself, and I'm sure we'll talk to each other again on LinkedIn soon.

[01:00:50] Jim Amos: Thank you, Debbie. It's been my pleasure. This is a very big subject, as you can imagine, and it can be explored in so many different ways, but it's a great and important thing to talk about.

[01:01:02] So thank you for inviting me and giving me the space to share a few things.

[01:01:06] Debbie Reynolds: Yeah, that's great. Well, we'll talk soon. Thank you so much.

[01:01:11] Jim Amos: All right. Bye for now.

