The Culture of Artificial Intelligence

Céline Semaan: It is being reported that AI will make humans dumber than ever, that it is here to rule the world, and that it will subjugate us all by bringing on a climate apocalypse. As an AI and tech expert, how can you help people better understand AI as a phenomenon that will impact us but that we shouldn’t necessarily fear?

Sinead Bovell: It depends on where you are… in the Global North, and particularly in the US, perspectives on artificial intelligence and advanced technologies are more broadly negative. When you look at regions in the Global South, when you look at regions in Asia, AI is seen in a much more positive light. Their societies tend to focus on the benefits new technology can bring and what it can do for their quality of life. The social media ecosystem thrives on negative content, but it really does depend on where you are in the world as to how negatively you’re going to view AI. When it comes to the actual fears and the threats themselves, most of them have some validity.

Humans could become less intelligent over time if they’re overly reliant on artificial intelligence systems, and the data does show that AI can erode core cognitive capacities.

For example, most of us can’t read maps anymore. If you are in the military and your satellite gets knocked down and you need to understand your coordinates, that might be a problem. But for the average person, not reading a map has allowed us to optimize our time; we can get from A to B much more quickly. What do we fill the time that AI gives us back with? That’s a really important question.

Another important question is: How do we purposely engineer cognitive friction into the learning and thinking environment so we don’t erode that core capability? That’s not something that is just going to happen. We are humans; we take the path of least resistance, like all evolutionary species do. If you look at the printing press, the chaotic abundance of information eventually led to the scientific method and peer review. Educators, academics, scientists, and creators needed to figure out a way to sort the valuable information from the nonsense, and that led to more cognitive friction. Those pathways haven’t been developed yet for AI; how we use and assimilate it depends on the actions we take.

The same is true when it comes to the climate apocalypse, for instance. As of now, how AI uses water and energy is nothing short of a nightmare. However, it’s not really AI in isolation; it’s our social media habits in general. When you look at them in aggregate and globally, our digital habits and patterns aren’t good for the climate, and AI just exacerbates all of that.

AI is not a technology that you are going to tap into and tap out of. It’s not like Uber, where maybe you don’t use the app because you would prefer to bike, and that’s the choice that you make. AI is a general-purpose technology, and it’s important that we get that distinction, because general-purpose technologies, over time, become infrastructure, like the steam engine, electricity, and the internet. We rebuild our societies on top of them, and it’s important that we see it that way, so people don’t just unsubscribe out of protest. That only impedes their ability to keep up with the technology and to give adequate feedback and critique of it.

Céline Semaan: I recently saw you on stage and heard your response to a question about whether AI and its ramifications could be written into an episode of the TV show Black Mirror. Would you be able to repeat the answer you gave?

Sinead Bovell: The stories we see and read about AI are usually dystopian. Arguably, there are choices we continue to make over and over again that we know will lead to negative outcomes, yet we don’t make different choices. To me, that’s the real Black Mirror episode… can we rely on ourselves? In some circumstances, we continually pick the more harmful thing. Most of the big challenges we face are complicated but not unsolvable. Even with climate, a lot of the solutions exist, and actually most of them are grounded in technology. What isn’t happening is the choice to leverage them, or the choice to subsidize them so they become more accessible, or the choice to even believe in them. That scares me a lot more than a particular use case of technology. Most of the biggest challenges we face are down to human choices, and we’re not making the right choices.

Céline Semaan: Are you afraid of AI taking over the world and rendering all of our jobs useless? How do you see that?

Sinead Bovell: There’s AI taking over the world, and that’s AI having its own desires and randomly rising up out of the laptop or out of some robot. I’m not necessarily concerned about that. You can’t say anything is a 0% chance, right? We don’t know. There are so many things you can’t say with 100% certainty. I mean, are we alone in the universe? It’s really hard to prove or disprove those types of things. Where I stand on that is… sure, allocate research dollars to a select group of scientists who can work on that problem. However, I am quite concerned about the impact AI is going to have on the workforce. We can see the destruction of certain jobs coming. It’s going to happen quickly, and we’re not preparing for it properly. Every general-purpose technology has led to automation and a reconfiguration of the shape of the workforce. Let’s look at the first industrial revolution, which lasted from approximately 1760 to 1840. If we were to zoom in on people working in agriculture, by the end of the 19th century, around 70-80% of those people were doing something different. That is an astounding change. People had jobs; they just looked very different from working on the farm. But what if that happens in seven years rather than 80 years? That’s what scares me.

I think the transition will be quite chaotic because it’s going to be quite quick, but it doesn’t have to be. History isn’t a great predictor of the future, but it does give you a lot of examples of what you don’t need to do again.

The reason the industrial revolution turned out to be a good thing in the end, in terms of the life we all live, is that, for instance, we have MRIs and don’t have to have our blood drained to see if we’re sick. But at the time, people were just left to fend for themselves. It was chaos, and it turned into this kind of every person for themselves. Kind of figure it out. Get to the city. Bring your family. Don’t bring your family. It was really chaotic. How are we going to avoid repeating that? I don’t know if we are putting the security measures in place to make sure people are protected in that transition.

The most obvious one to me is health care in the United States. I don’t know the exact number, maybe it’s around 60% of people, but don’t quote me on that, who are reliant on their job for health care. That’s where their insurance comes from. What is going to happen to their insurance if their job goes away or if they transition to being self-employed? How do we help people transition? People don’t even dare go down that road, but those are the types of conversations that need to happen.

Céline Semaan: Ten years from now, will we look at AI as just another super calculator? Will we be asking the same questions that we are asking today, meaning that the change we’re seeking is not necessarily technological, but philosophical and cultural? How do you see that?

Sinead Bovell: AI will look like much more of a philosophical, cultural, and social transition than solely a technological one. This is true of a lot of general-purpose technologies.

Inventions in technology shape how we organize our societies and how we govern them. If you look at the printing press, it led to a secular movement and gave power to that movement. You get big social, philosophical, and cultural changes, and revolutions in society, when you experience this scale of technical disruption. I think we will look back on the AI inflection point as one of the most pivotal transitions in the past couple hundred years of human history. I would say it’s going to be as disruptive as the printing press and maybe the steam engine combined. And we made it through both of those. There was a lot of turmoil and chaos, but we did make it through both of those.

We are a much more vibrant, healthy society now. We live longer and, relatively speaking, we have much more equality. There is a path where it works out, but we have to be making the decisions to make that happen. However, it’s not practical that a subset of the population makes the decisions on behalf of everyone. And that’s why I think it’s so important for people to get in the game and not see AI as this really technical device or technology, but instead, as a big social, cultural and philosophical transition. Your lived experience qualifies you to participate in these conversations; there’s nobody who can carry the weight of this on their own.
