The Nostalgic Nerds Podcast

S1E11 Tech Ethics: A Short History of Doing It Anyway

Renee Murphy, Marc Massar, Chris McClean Season 1 Episode 11

Send us a text

Warning up front - we discuss some ethical conundrums and situations with tech. There's one mention of suicide and we talk about the Manhattan Project. We're respectful, but we just wanted to put it out there. 

Every breakthrough in technology brings the same old question: just because we can, should we? In this episode of The Nostalgic Nerds Podcast, Marc and Renee are joined by tech ethicist, Chris McClean, for a time-traveling look at innovation’s moral blind spots...from Gutenberg’s press and the Luddite rebellion to nuclear power, the internet, and the AI boom. Together they unpack why humans seem incapable of debating ethics before the damage is done and why Big Tech isn’t likely to break the cycle. Along the way, they ask what responsibility inventors, executives, and even end-users really carry as we keep pushing the boundaries of what’s possible. Ethical challenges will always echo throughout the ages and tech just makes those echoes louder.

Join Renee and Marc as they discuss tech topics with a view on their nostalgic pasts in tech that help them understand today's challenges and tomorrow's potential.

email us at nostalgicnerdspodcast@gmail.com

Renee:

Hey, everyone. Welcome back to the Nostalgic Nerds podcast, the show where we dive into the coolest tech stories from the past and explore how they shape the present. And of course, we can't forget the big ethical questions that come with it. I'm Renee. And as always, I'm joined by my trusty co-pilot and BFF of 30 years. Marc, say hi.

Marc:

Yo, dog.

Renee:

And this is our ethics expert, Chris McClean. Chris, say hi.

Chris:

Hi, everyone.

Renee:

Now you know our voices. Well, it would just be the guy who wasn't me or Marc. Okay. All right. So we're good today, right? So the history of technology isn't just about the gadgets and gizmos. It's also about the decisions we make with them. Two, not one, two Eagle Scouts. How's that for nerd street cred? We're all in. All right. I'm going to flex with the Eagle Scouts. All right, let's kick things off with the printing press, you guys. So this is the new era of information, and it kicks off with the printing press, a game changer from Johann Gutenberg. It was the 1400s, and it basically laid the foundation for the spread of knowledge across the world. Books were expensive and rare, but the printing press made them affordable and, importantly, widely accessible. But here and there for sure, right? They're big, but they're here and there for sure. Like you could get your hands on them, but not everyone had them. On one hand, you had the power to share knowledge on a massive scale, but it also meant that controversial ideas could quickly spread. So think about the Catholic Church back in the day. They were worried about Luther's 95 Theses and the ideas challenging the church doctrine. The church didn't want heretical ideas to get out of hand, and there were debates about whether printing should be regulated to control what could be published. It's wild to think how something as simple as printing could stir up so much controversy. Right, Marc?

Marc:

Yeah, you know that I'm an English lit major. People probably know that. And I've seen the Gutenberg Bible. Haven't touched one, because they'll chop your hands off if you try to touch it at the Huntington Library there in Pasadena. But the printing press, if you look back, it's kind of hard to imagine how society could have progressed without it, right? And that battle between the freedom of information and controlling it is a struggle we still see today in the digital world. But, you know, it strikes me about the printing press: it's not just about accessibility of information. It's about who gets to control the flow. And, you know, you talk about the church, and the church wasn't necessarily trying to say reading is bad, but reading the wrong things is bad. And yeah. So, but then, you know, the question is, well, who gets to decide what the wrong things are? Right. And that's the battle that we're constantly faced with, and faced with today, except now, you know, instead of the church, it's governments and tech bros and tech platforms and advertisers, all deciding what we see. So the technology has changed, which is, you know, look, this is a recurring theme that Chris will probably hear us say, but the power struggle didn't. The behaviors of the people participating in whatever ecosystem we're talking about, those didn't change. The technology has, though.

Renee:

Chris, what say you? We have to turn it over to somebody who can make up their mind about this stuff, and clearly we're not qualified.

Chris:

Well, what's funny is you mentioned that I was an Eagle Scout. You maybe did not know that I was also an altar boy. And I don't know if that either helps or hurts my street credibility here, but I feel like it's an important disclosure.

Renee:

It elevates your nerd credibility. Okay. Okay. Yeah, definitely.

Chris:

Yeah, I'm going to go out on a wild whim here and say that knowledge and communication are generally good things and positive developments for humankind. I don't think that's a contentious standpoint here. And, you know, if you think about what the printing press did and what, you know, this kind of innovation has done in, you know, over its history, you know, it's giving people access to commerce, right? Ability to participate in the economy and society, helping them invent and innovate on their own, you know, building their own new technologies and new ideas and sharing them and communicating. So all of these seem like generally good things. But you mentioned this as kind of a foundation, and it does create a foundation of understanding the world around you, being able to ask questions, and maybe finding other people that have similar questions. Again, all good things generally, but when you create these kind of foundations, you also create a platform for asking questions, and in some cases, actually challenging authority. And we are always going to have authorities, right? We're always going to have institutions that see that kind of challenge as a potential threat. I just don't know if there's any way we'll ever figure out a way to eliminate that kind of conflict.

Renee:

Wow, that's the person.

Marc:

I know. He's so much more articulate than us.

Renee:

Way more. Like, we'd have been like, dude, listen.

Chris:

Not my first podcast.

Renee:

So thoughtful. So thoughtful. I would have just made fun of somebody and moved on, which I'm going to do right now. So let's jump forward a few centuries, right? Let's talk about the Industrial Revolution, which, honestly, no one misses the Industrial Revolution. I feel like we should say that. No one really misses it, right? But I feel like we can't talk about that without talking about the ethical dilemmas surrounding it. With all the technological advances like the steam engine and mechanized factories, you had these massive leaps in productivity, but you also had child labor, dangerous working conditions, and exploitation of workers. So while the steam engines were churning out products, there was a whole other side where workers were treated like cogs in the machine. And not to mention the rise of pollution. It wasn't just about getting rich off of industrialized labor. It was also about the responsibility that comes with technological progress. The environmental impact was pretty dire. And we see today how unchecked industrial growth can leave scars on the planet. We have to ask ourselves, I think, you know, just because we can innovate, does that mean we should, without thinking about bigger impacts? I mean, is that the ethical dilemma, Massar? Is that, am I in an ethical dilemma?

Marc:

Yes, I think it is an ethical dilemma to some extent, right? Because you're talking about innovation at the expense of someone's benefit, right? And, you know, coming back to information sharing, what are the things that, you know, the Industrial Revolution isn't just about, you know, steam and, you know, coal power and J.P. Morgan and, you know, cotton gin and all of that stuff. There's a huge advancement in the way technology happens in industrial situations, but also in communication situations. So think about the telegraph, right? The telegraph is born in the 1840s, and all of a sudden, information moves pretty much instantly across vast distances. So we have steam engines that are powered to move commerce across the country via rail. And at the same time, the wires get spread out on those train lines, and all of a sudden information flows everywhere there's a train line. Before that, you had the Wells Fargo coaches, right? Have you ever seen one of the Wells Fargo?

Renee:

Right, the Pony Express, right?

Marc:

The Pony Express, right? And it would.

Renee:

Take three weeks to get to you. Like, oh, mom's sick. You should come home. Oh, that's not that way.

Marc:

So all of a sudden, you've got communication instantly across vast, vast distances. And I think that this kind of progress and compromise, you know, one very concrete example of this with telegraphs is communication, but privacy of that communication, right? So Western Union is accused, I think it was 1876, essentially of giving information from one party to another. This is a hotly contested political election in 1876, and the Republican Party wins. And part of the reason, you know, people think, is because information was leaked from Western Union to the Republicans. And the Democratic political machine had no choice, right, but to use Western Union as the lines, because that's it. That's all that's there, right? So, yeah, we've got this scenario where we've got a lot of progress, progress in communication, but that progress didn't bring privacy and security along with it, right? And we talk about this all the time, where privacy and security are after the fact rather than part of the design to start with. And the impact is that potentially people's lives are impacted, negatively or positively. And that's a huge problem. What choice do people have? Who are the individuals, and how are they empowered in that situation? Right. These corporations control the information. These political parties control the information. And the impact is to people's lives. And that's a huge impact. You know, it's sort of about balance, right? How do we balance that ability for technical growth and progress against people's, you know, livelihoods?

Renee:

Yeah. So, McClean, I mean, are you a fan of the Industrial Revolution then?

Chris:

I mean, generally, yeah, I think a lot of the aspects of our world that work well are maybe direct descendants of the Industrial Revolution in a lot of ways. But in my line of work, working around tech ethics, you can't really talk about the Industrial Revolution without bringing up the legendary General Ludd and his merry band of Luddites. And actually, what I really like about the Luddites is they came from roughly the same part of the world that brought us Robin Hood, which I think is really telling. So the Luddites, I think a lot of people know about, maybe have heard of, but they were not just this kind of underdog squadron of people trying to fight against authority and power, kind of in the way that Robin Hood was. And they're also not necessarily against technical progress in the way that I think the current conversation often depicts them. They were actually fighting against a lot of the things, Renee, that you mentioned. So they were fighting against poor labor practices and poor pay. And actually, interestingly, the poor quality of products that were developed using some of these newfangled technologies, like the stocking frame, for example. They were fighting to protect the reputation of the stocking industry, because people thought, well, we've been working on these stockings forever. We have these good craftsmanship capabilities. And then these stockings are going to give us a bad reputation. People are still paying high prices and getting poor quality. So a lot of what they were fighting for makes a whole lot of sense. Again, it's not fighting against technical progress for its own sake. It's fighting against some of the implications of that technical progress. And I think the key takeaway for me is that a lot of these technical advances are great. A lot of them do yield terrific benefits, but a lot of those benefits tend to gravitate toward people that are already in power. Marc was mentioning people in political power, for example, people that own these companies or control lines of communication. And so I think the Luddites were fighting against the way that some of these technologies were benefiting certain people and either leaving some people out or, what often happens, actually putting some people in a worse position. So again, I think the key takeaway is that we should be asking ourselves, if we are pursuing some of these technical endeavors, how will they impact people? What are the benefits that we are hoping to pursue and achieve? And who is going to be that kind of group of people that will benefit? And we should be thinking about the potential negative impacts as well. So potential negative impacts to other people, to society as a whole, to the environment, as you mentioned, and think, okay, how do we start to steer those impacts in a positive direction?

Renee:

Which brings me to, yeah, go ahead.

Marc:

I just want to know if, you know, would I have better socks if the Luddites had, you know, kind of won? Because, you know, come on, socks kind of suck, right? You get holes in them and stuff. I just want better socks, man.

Chris:

Yeah, I think there are probably people in positions of power right now that have much better socks than you do. And you should take it up with them.

Marc:

Yeah dude man.

Renee:

With your local, with your local government. You deserve better, dude. You should just stand up for yourself. I deserve...

Marc:

Better socks. Like, I do like socks, and I just... yeah. Well, okay, maybe that's another...

Renee:

Socks were worthless. What was that? Like, the tube sock is the most worthless sock ever. It doesn't even have a heel. Like, it's worthless. It's absolutely worthless, it is. That was the least amount of work you could possibly do to make a sock, right? A tube sock. Yeah. Well...

Marc:

That's what those stocking looms did. All right. All right.

Renee:

We're going to move on because talk about...

Marc:

The questions that people want.

Renee:

Right? Like you are. You're asking the burning questions of our day. Yeah. I'm glad to be here for it. Speaking of consequences, how about the atomic bomb? Right? This is... This one's really heavy, nerds, all right? The Manhattan Project in the 1940s gave us the first nuclear weapon, and suddenly, humanity had the power to destroy itself. There's no lighthearted way to talk about this one. It's literally life and death. What's wild is even the scientists who worked on it, like Robert Oppenheimer, had to reckon with the moral implications. Imagine developing something that you know can wipe out entire cities. When the bombs were dropped on Hiroshima and Nagasaki, there was a huge moral outcry. Was it necessary? Was it justified? And the debate still rages on, whether it was the right decision or whether it was just an example of a humanitarian disaster in the name of ending a war. It raises so many questions about the ethical responsibility of scientists and governments when it comes to powerful technologies. And even now, we're still dealing with the implications of nuclear weapons. You know, the thing that strikes me about Oppenheimer especially, and I can't wait to hear what Chris thinks about this, but he comes to the reckoning of, what did I do? Way late. Like, way late. He wasn't thinking about it when he's in the push to, like, we're going to do this. We're going to split the atom. We're going to split the atom. And then he's like, uh-oh, we split the atom. Um, yeah, that's not good, right? And so, like, when you come to it that late, when you come to the ethical thing so late, think of everything like that. Like, it's just too late. We can't unring that bell. Now we have to find the peaceful use of the atom, which, by the way, is putting it in paint and using it on dinnerware. Marc, do the ends justify the means? Is it a fair argument?

Marc:

Yeah, maybe uranium glass is not the example right there. But, I mean, the Manhattan Project is like, I don't know, it's the ultimate just because we can doesn't mean we should moment. And I think a lot of positive came out of the Manhattan Project, right? Look at nuclear power and energy and a lot of advancements in particle physics and things like that.

Renee:

An Oscar. It won an Oscar for Best Picture.

Marc:

I mean, okay.

Renee:

So many good things came out of that.

Marc:

So many. Well, but at the expense of millions of people, not millions, you know, thousands of people in Hiroshima and Nagasaki, but, you know, decades of just atomic waste and challenge. I mean, so there's definitely... Definitely, you know, big problems there, right? So, and my daughter, my oldest, works, not in nuclear physics, but she works in mycology and fungal research. And some of the things that they work with in the labs are, you know, dangerous stuff, right? And yet you kind of think, well, is this the right thing? And, you know, for her in particular, the ethical framework is very clear. But if you think about it, certainly with the Manhattan Project, the situation is, if we don't do it, then somebody else will. And is that a valid argument? I, you know... We can say...

Renee:

That about everything, though, right? You could say that about the first microcomputer. If we don't do it, somebody else will. You can say that about cloud. If we don't do it, somebody else will. You could say it about AI. If we don't do it, somebody else will. I just feel like it has that potential. At no point do we ever say to ourselves, but should we? So, hey, McClean, should we? I mean, that's it. If you're doing the Manhattan Project, and I know you're going to talk to me about what you think about this. But if you're in the Manhattan Project and you're going to follow a best practice for ethics, when do you talk about that? When did you talk about it?

Chris:

Yeah, I mean, I think that's a clear tee-up that you want to talk about it early. I mean, this is such a challenging topic to cover, especially trying to do it in fifty-odd minutes, because there will continue to be books and lessons and lectures and discussions on this. And, you know, Marc, you made some great points. There are clearly benefits out of the research that we've gotten. I've been to the Hiroshima Peace Memorial Museum, and I don't know how you spend more than an hour walking around there and reading and watching the videos and listening to people without thinking, there's no possible way that this was justifiable. I can't conceive of somebody being in a position to have to make that decision in the first place. But you're watching the actual results of what we did, and it's hard to think, you know what, maybe there was a good moral or ethical framework that led us to that conclusion, that that was the justifiable thing to do. So I don't know if I have anything to say about what kind of ethical framework should have been used and when in that particular case. But I will say, you know, in the tech industry right now, there is a lot of that kind of conversation that you mentioned, about this feeling like kind of an arms race, right? I don't know if it's necessarily where it started, but it could have been blockchain or metaverse, or now it's definitely AI. You know, if we don't do it, somebody else is going to do it first, you know, AGI, for example. And so the conversation often goes like this. If we don't build, let's say, AGI, for example, somebody else is going to do it first. And of course, we should do it faster. We should do it before then because, let's say, we have better values, we have better morals. Let's say we care more about human flourishing or human rights or representative democracy. So we should be the ones to build this thing. But then usually the next part of that argument is, well, let's move very quickly and maybe not pay so much attention to oversight and principles and controls and regulatory review and ethical review and all those, because we need to move very fast. But to me, that just feels like you are immediately thwarting the logic of your argument. So if you're going to say we should build this first because we care about X, then we should make sure to include whatever that X is from the beginning. Right. All the decisions from concept to design to development, implementation, operation, the whole gamut, should keep in mind those ethical values or whatever ethical framework you've hopefully developed beforehand.

Renee:

Oh, that's such a good answer. Massar, you got anything you want to add? That's such a good answer, though.

Marc:

No, it's a great answer. I think that speed question, right, getting to market first. And Renee and I have talked about this before, that industries that are under scrutiny for regulation, the regulation exists, yes, to protect people. But why is it there? It's because the proper controls weren't put in place in the first place.

Renee:

If you are in a heavily regulated industry, you're in that industry because you didn't know how to behave in the first place.

Chris:

That's right.

Marc:

Right. Like, that's it.

Renee:

That's it. Like, right. Like, if you, you know, look at financial services, look at everything they're saddled with. It's because savings and loans went under. It's because, you know, we cheated on mortgages. It's because we rigged the LIBOR. It's because of all that stuff. Right. It's because we took advantage with predatory lending. I think, yeah, you don't know how to behave and you end up in a heavily regulated environment. I believe that.

Chris:

One example that I think both of you would be familiar with, on the flip side, if you look at HIPAA, that was an industry standard. In my mind, it was a successful way of implementing controls across an industry that actually kept the government out for a little while. I'm sorry, not HIPAA. PCI. PCI DSS. It kept the government out of payment security and privacy for quite some time, right? I mean, that was a really good industry standard where everybody said, okay, we're going to abide by this baseline set of controls, and the government didn't have to step in and do it. Whereas, I'm sorry, HIPAA, SOX, a lot of other things that we've passed since then, it's because maybe some of these companies weren't doing the right thing, or weren't paying enough attention to these controls. Yeah.

Marc:

So there's a great point on PCI. As a former PCI board member, I would say that that's probably the intent. Visa, MasterCard, Amex, Discover, JCB, they said, well, we frankly don't want to have people coming in and telling us what to do. But then the compromises continued to happen. And I think that what's happened since is that the card systems have been engineered in such a way that the data has been devalued. So they took a different approach rather than going down the heavily regulated, you know, path of, I want to control the data, control licensed data. Well, actually, let's change the payment system so that the data is not valuable anymore. So I think that was a valid tactic. I think it's sort of off to the side here. But, you know, to come back to regulations and things, I think it was a great example. But like the nuclear, you know, regulatory entities, right?
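
Marc's point about devaluing the data is, in practice, roughly what payment tokenization does: the merchant keeps a random stand-in instead of the real card number, so stolen records are worth very little. Here is a minimal sketch of that idea; the names and structure are hypothetical and not tied to any actual card network API.

```python
import secrets

# Minimal sketch of vault-style tokenization (hypothetical names).
# The merchant stores a random token instead of the real card number (PAN),
# so a breach of merchant systems yields data that can't be used to pay.

class TokenVault:
    """Maps tokens to PANs; in real systems this lives inside the card network."""

    def __init__(self):
        self._token_to_pan = {}

    def tokenize(self, pan: str) -> str:
        # Random token with no mathematical relationship to the PAN.
        token = secrets.token_hex(8)
        self._token_to_pan[token] = pan
        return token

    def detokenize(self, token: str) -> str:
        # Only the vault, not the merchant, can recover the real PAN.
        return self._token_to_pan[token]


vault = TokenVault()
token = vault.tokenize("4111111111111111")  # merchant stores only this value
print(token)                                # random hex, useless if stolen
print(vault.detokenize(token))              # resolved inside the network at charge time
```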

Chris:

Yeah.

Marc:

Like, these are, they exist because you can't let, you know, crazy people have access to nuclear fissile material and then, you know, build bombs. Long live the NRC.

Renee:

Yeah. Long live the NRC.

Marc:

Yeah.

Renee:

That's what I'll say.

Marc:

I mean, these are, these are, I mean, it's definitely got a time and a place. Definitely has a time and a place.

Renee:

I'm going to move on to the 1990s, when Marc and I first meet. It's like early 90s when you and I first meet. Isn't that crazy? We're like 150 years old. It's the rise of the Internet. It brought the world closer together in ways we never imagined. But let's be real. There are some major ethical issues here. It's like the Wild West out there still. Suddenly we could connect with anyone, anywhere, but we also had privacy concerns, you know, and the more connected we got, the more vulnerable we became to things like identity theft and cyber attacks. And as the Internet grew, it opened up conversations about censorship. I mean, you guys, not a lot of people remember MySpace, but it's the precursor to Facebook that eventually gets overrun by nefarious people and criminals. And so, like, they close it down rather than try to fix it. And then, boom, here comes Facebook. And we all kind of migrated to that. Right. But you used to have a MySpace page. Like, that was the beginning of all of this. Then as the internet grew, it opened up conversations about censorship, like what should be allowed, what needs to be policed. And then there's the rise of fake news and hate speech and whether tech companies should moderate or censor content at all. We have been embroiled in this our entire professional careers, Massar. We haven't dealt with it. We haven't dealt with it. No. What say you about, you know, the ethics of the internet, dude? Like, it's a lot, but we haven't.

Marc:

I mean, we've got, you know, 30 seconds, right? But the reality is that you're right. We haven't dealt with it, right? This is the perfect example of what we were talking about, that if we don't build it, somebody else will. And the tagline from Google, right, don't be evil is gone now, right?

Renee:

It only lasted like six years, too. It went from don't be evil to we have a new CEO. Like, it lasted like six years.

Marc:

I think this is... And I can't help but think about Popper's, you know, paradox here, right? The tolerance and intolerance. And, you know, we tolerate so much on the internet, but because of that, the intolerant is winning, right? And that's just really freaking sad to me. How do you balance that freedom of speech versus protecting people? And I think that's the scenario of tolerance versus intolerance. But you know what we thought we were building was this like global village.

Renee:

Yeah, utopia. That's what we thought we were doing. But I mean, that goes right to it. Like, every beautiful thing you build, someone will weaponize it and ruin it. Like, like everything. Like, how do you build the beautiful place and keep it from being weaponized and ruined? And with that, I come to Chris McClean. Hey, McClean, how do we build a utopia and keep the walking dead out? Like, I don't... I mean...

Chris:

It's, it's in the name, right? Utopia literally means a place that doesn't exist, right? So we knew this whole time that anytime we try to build a utopia, it's just never going to be successful.

Renee:

I feel like I'm in an MLM, dude. Like, I feel like they sell me a thing, I buy the box of leggings, and then I can't sell them, and I'm disappointed, and I just do it over and over and over again.

Marc:

It's the engagement, you know, engagement rules. And because of that, you know, it's, it's, people are more engaged when they're angry and upset and... You know, they're intolerant. And so you have to foster that. The platforms actively foster that. And it just breeds just real content.

Renee:

What did they say in Edelman, Chris? It's the era of grievance. We're in a grievance era. That was the 2025 report. So, yeah, like we're in the era of grievance. We really like it. So, yeah, what do you say, Chris?

Chris:

Yeah, and that's a massive problem for sure. And I'll add another thorny privacy challenge, which is, it's important to respect people's privacy online. But at the same time, we also want to support law enforcement agencies that are doing some monitoring and investigation of criminal activity. And that's a very tricky thing to balance. In some cases, you can't balance both. You have to choose one or the other. And so it doesn't take, in my mind, it doesn't take too long to figure out that a lot of these kinds of tech ethics issues are business ethics or societal or just plain, you know, fundamental ethical questions about who we are, you know, what we value about ourselves and about each other and society and the world. And all of these issues take careful deliberation, lots of discussion and compromise, and maybe changes over time. And we just frankly aren't taking the time to have those discussions. We are not given, we're not afforded, or we don't afford ourselves, time to sit down and think, who are we as a society? What do we value in each other? What are we trying to accomplish, as I mentioned earlier? And so if you think about, you know, the way the internet has helped, you know, it's allowed us to connect with and learn from, you know, people from around the world, people that we would not have encountered otherwise. That's a massive benefit. I'm thankful for that. There's tremendous benefit, you know, coming out of the internet. And still to this day, it's beneficial in a lot of ways. But one thing that is kind of always in the back of my mind is that, you know, we're connecting more and more to people through screens, you know, through reading things that they've written or maybe things about them. And, you know, even right now, you know, we're talking through, you know, we can see each other through video and things like that. But, you know, if you are not outside in your neighborhood, in your community, seeing people eye to eye, you know, talking with them about potentially challenging and thorny issues and talking about, oh, you know, what do we value and how do we treat each other and things like that, I think it's very easy for some of our communities to start slipping away. We're not thinking about how to have respectful conversations with people that we might disagree with. Instead we're saying, okay, how do I go on to whatever social media you use and blame somebody that I think just said something really stupid? And that is not a very humane way or a very human-like way to respond. It's more of a kind of flash in the pan, as Marc mentioned. You want engagement. You want people to see it. So you almost have an incentive to be a little bit meaner, a little bit more vocal in a negative way than you would have otherwise.

Renee:

Okay, that brings me to today where we're sitting here talking about like ad nauseum about artificial intelligence, right? And how what you just said is all like, like just the velocity goes through the roof and how much algorithms can really just shape what we see, what our reality is, and how to trigger us for any given thing. Like you can trigger me on Cupcake Wars, just cupcakes. Cupcakes can like throw me over an edge. Like it's gotten so bad, right? So let's fast forward to today and it's evolving fast. We've seen both amazing advancements and serious concerns about ethics from AI bias to job displacement. Let's not forget about autonomous weapons, the ethical questions are growing, right? The stuff that AI is capable of, it's like it could decide who gets a job or who gets a loan, or even one day who gets targeted by police or military forces. It's powerful, but it's also really scary. One of the biggest concerns is bias. AI is only as good as the data it learns from. And if that data is flawed or biased, then AI will reflect that. We also have to ask ourselves if we want AI to make decisions in areas like healthcare and law enforcement, even warfare. It's one of those things where the use of technology's potential is almost limitless, but the risks are just as limitless. What if we misuse this stuff? If we didn't do what Chris said, if we didn't say, what is it that we're trying to get AI to do? Save lives? Well, then we don't put it on weapons, then. We want to use it to save lives. Do we want to do it to educate people? Then we can't allow it to be used to mis-educate or dis-educate, right? So what is it? How are we going to deal with this? This is our future. This is the next generation's future. We're already being manipulated by algorithms on ad platforms. Seriously, ad platforms. Just one more time. Ad platforms. What happens when that finds its way into more nefarious parts of our lives? What if it's misused? Massar, what say you?

Marc:

Yeah, look, it's, I mean, we talk about this all the time, right? The weaponization, and the ad platform is the perfect example, that that's been weaponized to get more engagement. Right. But, but to come back to what you and Chris were saying about the questions, right? Asking the questions, what do we want to be as a society, or, you know, what problems are we really trying to solve? You know, those questions don't sell ads, right? Those questions don't drive the revenue of Google or Facebook or whatever. So those are not questions, in my opinion, that are likely to be asked until we mature to the point where we understand that.

Renee:

It's already too late, though, right? By the time we wait till we're smart enough to know better, it's now too late to know better.

Marc:

I'm with you. I'm with you. I mean, maybe this is history. And that's what this whole podcast is always about, is history repeats itself over and over and over again.

Renee:

Over and over and over again. It's the one thing we say every time we do one of these.

Marc:

We're never going to freaking learn. And if we don't ever learn, you know, then I don't know. Who's responsible for the tech we create? How do we make sure it's used for good and not evil? I've worked on crypto systems for autonomous robots. And it's not that that I'm worried about. It's not the Terminators. It's not the robots taking over. At least not the physical robots taking over. It's the more boring stuff, right? I work in financial services. The algorithm that denies your loan, the system that, you know, tells you whether or not you have a good credit score or something like that. These are the things that, you know, trip me up. There are different flags when you think about this, right? With AI today, especially in the LLM, the Gen AI space, the responses that AI produces are probabilistic. They're the most likely outcomes to the questions that you input as tokens into the algorithm. And, you know, if that's the case, then think about what's the probabilistic outcome based on law enforcement data that is already biased against, you know, economically challenged folks, you know, racial minorities, right? Then all it's going to do is reinforce that behavior. So the application of an algorithm that's trained on historic data is going to produce the same bias. Like, this is not, like this isn't even a question. It's a fact. And that's what scares me. You know, it's the model. Am I a good fit for the job? The model decides whether or not I get a job, and you never get to see why. You just get rejected. And that's, you know, Chris is right, right? This is already happening, you know, and how invisible these sorts of things are with AI. We don't even know who is judging who.
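
Marc's point about probabilistic outputs and biased training data can be shown with a toy example. This is a hedged sketch with made-up data, not any real lending or policing system: a "model" that simply reproduces the most likely historical outcome for each group ends up automating the historical disparity.

```python
import random

random.seed(0)

def historical_decision(group: str) -> int:
    # Hypothetical past process: group "B" applicants were approved far less
    # often than group "A", independent of anything about the applicant.
    return 1 if random.random() < (0.7 if group == "A" else 0.3) else 0

# Synthetic "historical" dataset of (group, decision) pairs.
groups = ["A" if i % 2 == 0 else "B" for i in range(10_000)]
history = [(g, historical_decision(g)) for g in groups]

# "Model": predict the most likely outcome seen in the data for each group --
# the probabilistic behavior Marc describes, reduced to its simplest form.
approval_rate = {}
for g in ("A", "B"):
    outcomes = [y for grp, y in history if grp == g]
    approval_rate[g] = sum(outcomes) / len(outcomes)

def model(group: str) -> int:
    return 1 if approval_rate[group] >= 0.5 else 0

print(approval_rate)            # roughly {'A': 0.70, 'B': 0.30}
print(model("A"), model("B"))   # 1 0 -- the historical bias, now automated
```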

Renee:

Yeah, right. What keeps you up at night? If I were you, I would not be sleeping. I just would not. Like, I'm that dumb and stupid over here. I really like that about me. But like, if I had to think about this stuff all the time, I think I would not sleep ever again.

Chris:

Yeah, I mean, every one of the use cases, the challenging use cases that you mentioned, has already happened, and in not so great ways, right? We have already seen algorithms being used to decide which resumes get through to be reviewed, to decide, you know, in some cases who gets arrested, right? We've seen errant facial recognition technology actually lead to the arrest of people that it shouldn't have. Criminal justice and recidivism scoring is a well-known case that went poorly. The thing that keeps me up at night, there are actually two different cases that ended up very similar: the Michigan Unemployment Office and the Federal Tax Authority. In both cases, an algorithm was used to flag people that were seen as potentially fraudulent applicants for either unemployment benefits or for child tax credits. The algorithm mistakenly identified tens of thousands of applicants in both cases as being potentially fraudulent. So they garnished their wages. Unbelievable. It tore families apart. Yes. It's difficult to talk about, but it tore families apart. And in some cases, it did irreparable harm. And we are often seeing this technology popping up in law enforcement and in warfare technologies. So all of these things are currently happening. So I agree, Marc. It's not necessarily the 10 years out, Skynet's going to take over, there's going to be a Terminator walking down the street. It's the current things that we are designing, developing, and implementing and operating that worry me. So I've been lucky enough, you know, the last six years, I've been working with lots of companies on developing principles and policies, implementing controls and oversight, helping people that conceive of and design and develop and implement technology make more responsible decisions at each stage. And so you ask, who's responsible? Definitely, there are these kinds of people that are building the technologies and implementing them. But I think we all have a part to play. We all, I guess, vote with our wallets, as they say. We also vote with our votes. You know, who gets elected to represent us in government can actually have quite a big impact on whether or not technology is going to be regulated in a way that makes it more safe for us. I'm actually working on a PhD dissertation that kind of covers this. It's this idea of what kinds of institutions or systems or corporations or parties do we trust, right? And in the literature, often trust is a very kind of subjective and personal thing. You know, if I trust you, I get a benefit, but maybe I'm vulnerable or I'm at risk in some way. But what I'm working on is this idea that we have an obligation when we trust these systems or parties or institutions in power, or we should be trusting on behalf of other people who might be harmed by these systems or institutions or corporations or things like that. So it's kind of opening the aperture to consider what are the criteria with which we should decide who do we trust or what do we trust, especially when they have these kinds of powers: power over things like who gets health care or insurance, or who gets arrested and how long they should stay in jail if they are found guilty, for example. So I would say, you know, it seems trite to say we need to think more about humanity as we pursue these technologies. But honestly, I've seen how much this approach can help. And by the way, it's also good for business.
If you were thinking about humanity, who is using this, how do they benefit? It's often very helpful for, for example, adoption rates or engagement.

Marc:

I have a question.

Chris:

Yeah, please.

Marc:

I have a question. So you talked about responsibility. That's, and I think we're, you know, we've been through there. You know, thinking about those examples that you put out there, like the accountability. Like, who's accountable? You know, what's the accountability model, what's the framework for that, when you think about, well, it was an AI LLM, you know, that did that? You know, is the model generator, the model owner, are they accountable? The implementer, are they accountable? Like the agency that did it, like, who's accountable? But I think most of the time right now, people just throw their hands up and go, well, it was AI.

Chris:

Yeah, it's a great question, and it is complicated, but it's not completely unprecedented. Like, if there's a car accident, we might say we assume that the driver's accountable, but then the driver could say, wait a minute, no, the brakes failed. And so you can actually go back and look through the value chain of that vehicle and say, okay, who made the brakes, who manufactured them, who implements them? And you can look at the contract language. You can look at, you know, claims of supplier liability and things like that. You can investigate where it went wrong, like who made what promises and which promises may have been breached in some way. There might actually be, you know, quality assurance or maybe even regulatory oversight that was supposed to look at, you know, that vehicle to make sure that it was safe to be on the street, and maybe that's where it failed. So this is a complicated question with AI, for sure. There are dozens and dozens of different entities that could have been involved in putting one of these systems into place. But it's not a completely unprecedented problem. We do have mechanisms to look through that kind of chain of liability. Yeah.

Marc:

I think, oh man, I mean, this is... I just feel like if somebody, somebody didn't get a loan because somebody was using an OpenAI LLM, you know, OpenAI is not going to say, yeah, that's on us. You know, no problem. You know, that's just not going to happen.

Renee:

Well, I mean, don't you see that they're going to get the same pass-through indemnification that Facebook gets? And the same, it's just going to be the same thing. It's like, I'm an internet company. It's pass-through indemnification. I'm not to blame. I can't, I can't, I can't be responsible if you trust it. Like I told you not to trust it. Right. Like, so, yeah, I feel like they would be off the hook right out of the gate. Pass-through indemnification. That's how it's going to work.

Chris:

But there are currently laws on the books that say you are not allowed to discriminate when you are reviewing people for a job, for a loan, for health care, for all of those things. So whatever bank or health care or insurance company we're talking about has an obligation not just to buy technology and let it go and then not look at the outcomes. They have an obligation to monitor those outcomes. They have an obligation to put technology in place that's going to help them meet their regulatory obligations. And if they could say, well, OpenAI promised that they did all their fairness testing and that it's 100% fair when we use this technology to look at resumes, if OpenAI promised that, then in that case, OpenAI could have some liability there. But they're not making those promises. They're not promising in that way. So it's the corporations that are thinking, oh, I'm just going to make this thing do all of my work and it's going to make decisions around loans or health care or who gets a job. That's on them, right? If they're using unproven technology in a way that's not meeting their regulatory obligations, that liability seems pretty clear to me.

Renee:

I'll say as a former auditor, you'd be on the hook for that. They said they wouldn't. Did you do any due diligence there? Is it contractually required anywhere? If I came in as your auditor, I'd be like, we need to talk.

Chris:

And in every case, you should be monitoring the results. You should double check. You should do quality control over your process. Yeah, you should be. There are other controls in place in addition to your contract language.
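
To make Chris and Renee's point about monitoring outcomes concrete, here is a small hedged sketch of the kind of check an auditor might ask for: compute approval rates by group from the decision log and flag large gaps for human review. The data, group labels, and the 80% threshold (the "four-fifths" rule of thumb from US employment practice) are illustrative, not a legal test.

```python
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs logged from the live system."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_flags(rates, threshold=0.8):
    """Flag any group whose approval rate falls below threshold x the best group's rate."""
    best = max(rates.values())
    return [g for g, r in rates.items() if r < threshold * best]

# Illustrative log of (group, approved) decisions.
log = [("A", True), ("A", True), ("A", False), ("B", True), ("B", False), ("B", False)]
rates = approval_rates(log)
print(rates)                          # approximately {'A': 0.67, 'B': 0.33}
print(disparate_impact_flags(rates))  # ['B'] -- route these decisions to human review
```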

Marc:

Totally agree. And I think that's spot on. But then you look at the use case or, you know, this is not a use case, but an incident, right, where one of the platforms, I think it was OpenAI, basically a young woman, a young girl, was chatting with a ChatGPT instance. And it was chatting about, you know, loss of life, death, you know, suicide. You know, and the model itself was not... It was not trained to provide mental health counseling.

Renee:

No model should be counseling.

Marc:

Yeah, exactly. Right. And people were using it that way. And, and it's not, you know, now they've gone back, fortunately, and, and, you know, kind of said, no, you know, the model can't do this, can't do this. And they put some guardrails around it, and that's, that's good. Right. But like, that's a real instance. And I, and I feel like every social media platform has the same exact problem, right? Pass-through indemnification, as Renee said. And they're just not accountable for the things that happen after people use their platforms. And I just find that that's really messed up, you know?

Chris:

Yeah, it's heartbreaking. There have been several very similar cases. And there are lawsuits and investigations and things like that ongoing. But I think fundamentally, this reflects the kind of loss of humanity that I mentioned earlier. You know, I don't think we need to get to the point of saying, you know, no child under 18 should ever interact with social media or with an AI product or something like that. But we should have mechanisms in place to have difficult conversations, to give them literacy and training and help them understand this is what AI is good for and this is what it's not good for. Or if you are having doubts or challenges, you should come and talk to people and not to AI. If young kids spend less time on a screen and more time talking to each other and to their parents and community, maybe that would happen less. And again, this seems trite. It seems easy to say, well, focus more on humanity. But I do think that tech companies could do a better job anticipating misuse, abuse, risks, you know, ethical harms and so forth, and take precautionary measures to prevent these types of things from happening. I don't think they're standing up to the challenge at all.

Renee:

Here's all I'll say to this before we move on to close this out. Here's what I'll say. I've been doing deepfake Fridays for almost five years, like maybe five and a half years, almost six years. And here's what I know for sure. If I go and Google myself on Google Images, it will not pull back a single deepfake. It'll be the stuff you see on, you know, webinar photos or me at a conference or me with someone on social media because it's an actual photo of me and it's not a deepfake. Like those are the only ones it brings back. So what that tells me is Facebook and Google, that software works. They know when it's not me, which means they know when it's not the president. They know when it's not the vice president. They know when it's not. And they still publish it, right? Like it literally, I know for a fact it knows the difference. And so even when they can do it, they don't because they want to drive interaction, right? And so here, this was a heavy one, you guys. I feel like if I was driving in my car, I'd be sitting in the McDonald's drive-thru with like five 3D printed McRibs, like trying to come to terms with what this is all going to be. But it was an important conversation. Technically, technology has always come with its own set of ethical challenges. And I think it's up to us to keep asking the tough questions. It's not just about what we can do with technology, but what we should do.

Marc:

Yeah, totally agree. If we're not careful, we might be creating a future where the ethics of technology get lost in the hype. It's all about weaponization and what can we do to prevent the weaponization of some of these new techs.

Chris:

Yeah, I appreciate that. Yeah, I mean, I think it's fair to be optimistic about technology still. I mean, I think there are lots of technologies that we should continue to pursue. There's a lot of great potential benefit out there. But the way I feel from what I've seen, I think our current path is just leaving far too many people out of the equation, people that are not participating in the decision-making process. They're not able to enjoy the benefits that other people can. And so they have a right to complain. And I think they have a right to ask, how else should technology work in order to spread those benefits more fairly? And I would say all of us have a role to play here. All of us can ask those questions. All of us can think about, okay, what are the impacts to humanity, to society, to the environment? What kind of impacts would we like to see? And what are we currently seeing? And how do we steer these impacts to a more positive direction?

Renee:

That was so hopeful. Thank you, Chris. Make sure you subscribe. Smash that subscribe button. You guys, I've been watching way too much YouTube. Smash that subscribe button. Share. Let us know your thoughts on ethics and technology. What's your take on the tech developments we talked about today? Hit us up on social media or leave a comment. It's our 10th show, and we thought we would actually talk about who's tuning in. Marc, who's tuning in?

Marc:

Okay. All right. Now I got to find it. There it is. There it is. Okay. So we've added a couple of countries, which is cool. So we talked about Singapore last time. I can't remember if we talked about Finland.

Renee:

Finland, the happiest people on earth. Yeah.

Marc:

I know. Yeah. So we've added Australia. Maybe we said Australia last time.

Renee:

Yeah.

Marc:

Mexico.

Renee:

Oh.

Marc:

Spain and France. Oh, and here's a new one. I was surprised when we saw this one. Bulgaria.

Renee:

Bulgaria. Hello, Bulgaria.

Marc:

Yeah.

Renee:

How lovely. Who's our biggest? Who tunes in the most?

Marc:

Okay. Well, it's a toss-up between the U.S. and the U.K., and they're literally neck and neck right now with about, you know, 40% each. So that's, yeah, so that's pretty close there. But the number one city still, London, England is our largest city. About 18% of our total downloads came out of London, England. And, you know, I will shout out some people here. Okay, go ahead. Our next biggest one is Bentonville, Arkansas.

Renee:

Bentonville?

Marc:

Yeah. Walmart's tuning in. Yeah, no, it's my wife's sister, my sister-in-law. Oh, Bentonville. Yeah, yeah. So I think she's probably listened to everything and then probably had her kids listen as well. Oh, nice. So there you go. A couple folks in Austin, Texas, which is great. Shout out, Austin. Yeah, Melbourne. Kent, Canterbury, that's probably my daughter. Let's see, let's find one that's... oh, Toronto. Toronto.