The Culture of Artificial Intelligence
Céline Semaan: It is being reported that AI will make humans dumber than ever, that it is here to rule the world, and to subjugate us all by bringing on a climate apocalypse. As an AI and tech expert, how can you help people better understand AI as a phenomenon that will impact us but that we shouldn’t necessarily fear?
Sinead Bovell: It depends on where you are… in the Global North, and particularly in the US, perspectives on artificial intelligence and advanced technologies are more broadly negative. When you look at regions in the Global South, when you look at regions in Asia, AI is seen in a much more positive light. Their societies tend to focus on the benefits new technology can bring and what it can do for their quality of life. The social media ecosystem thrives on negative content, but it really does depend on where you are in the world as to how negatively you’re going to view AI. When it comes to the actual fears and the threats themselves, most of them have some validity.
Humans could become less intelligent over time if they’re overly reliant on artificial intelligence systems, and the data does show that AI can erode core cognitive capacities.
For example, most of us can’t read maps anymore. If you are in the military and your satellite gets knocked down and you need to understand your coordinates, that might be a problem. But for the average person, not reading a map has allowed us to optimize our time; we can get from A to B much more quickly. What do we fill the time that AI gives us back with? That’s a really important question.
Another important question is: How do we purposely engineer cognitive friction into the learning and thinking environment so we don’t erode that core capability? That’s not something that is just going to happen. We are humans; we take the path of least resistance, like all evolutionary species do. If you look at the printing press, the chaotic abundance of information eventually led to the scientific method and peer review. Educators, academics, scientists, and creators needed to figure out a way to sort through the valuable information and the nonsense, and that led to more cognitive friction. Those pathways haven’t been developed yet for AI.

How we use and assimilate AI depends on the actions we take when it comes to the climate apocalypse, for instance. As of now, how AI uses water and energy is nothing short of a nightmare. However, it’s not really AI in isolation. It’s our social media habits in general. When you look at them in aggregate and globally, our digital habits and patterns aren’t good for the climate. And then AI just exacerbates all of that.
AI is not a technology that you are going to tap into and tap out of. It’s not like Uber where maybe you don’t use the app because you would prefer to bike, and that’s the choice that you make. AI is a general-purpose technology, and it’s important that we get that distinction, because general-purpose technologies, over time, become infrastructure, like the steam engine, electricity, and the internet. We rebuild our societies on top of them, and it’s important that we see it that way, so people don’t just unsubscribe out of protest. That only impedes their ability to make sure they keep up with the technology, and give adequate feedback and critiques of the technology.
Céline Semaan: I recently saw you on stage and heard your response to a question about whether AI and its ramifications could be written into an episode of the TV show Black Mirror. Would you be able to repeat the answer you gave?
Sinead Bovell: The stories we see and read about AI are usually dystopian. Arguably, there are choices we continue to make over and over again that we know will lead to negative outcomes, yet we don’t make different choices. To me, that’s the real Black Mirror episode… can we rely on ourselves? In some circumstances, we continually pick the more harmful thing. Most of the big challenges we face are complicated but not unsolvable. Even with climate, a lot of the solutions exist, and actually most of them are grounded in technology. What isn’t happening is the choice to leverage them, or the choice to subsidize them so they become more accessible, or the choice to even believe in them. That scares me a lot more than a particular use case of technology. Most of the biggest challenges we face are down to human choices, and we’re not making the right choices.
Céline Semaan: Are you afraid of AI taking over the world and rendering all of our jobs useless? How do you see that?
Sinead Bovell: There’s AI taking over the world, and that’s AI having its own desire and randomly rising up out of the laptop or out of some robot. I’m not necessarily concerned about that. You can’t say anything is a 0% chance, right? We don’t know. There are so many things you can’t say with 100% certainty. I mean, are we alone in the universe? It’s really hard to prove or disprove those types of things. Where I stand on that is… sure, allocate research dollars to a select group of scientists who can work on that problem. However, I am quite concerned about the impact AI is going to have on the workforce. We can see the destruction of certain jobs coming. It’s going to happen quickly, and we’re not preparing for it properly. Every general-purpose technology has led to automation and a reconfiguration of the shape of the workforce. Let’s look at the first industrial revolution, which lasted from approximately 1760 to 1840. If we were to zoom in on people working in agriculture, by the end of the 19th century, around 70-80% of those people were doing something different. That is an astounding change. People had jobs; they just looked very different from working on the farm. But what if that happens in seven years rather than 80 years? That’s what scares me.
I think the transition will be quite chaotic because it’s going to be quite quick, but it doesn’t have to be. History isn’t a great predictor of the future, but it does give you a lot of examples of what you don’t need to do again.
The reason the industrial revolution turned out to be a good thing in the end, in terms of the life we all live, is that, for instance, we have MRIs and don’t have to have our blood drained to see if we’re sick. But people were just left to fend for themselves. It was chaos, and it turned into this kind of every person for themselves. Kind of figure it out. Get to the city. Bring your family. Don’t bring your family. It was really chaotic. How are we going to not repeat that? I don’t know if we are putting the security measures in place to make sure people are protected in that transition.
The most obvious one to me is health care in the United States. I don’t know the exact number, but maybe around 60% of people, don’t quote me on that, are reliant on their job for health care. That’s where their insurance comes from. What is going to happen to their insurance if their job goes away or if they transition to being self-employed? How do we help people transition? People don’t even dare go down that road, but those are the types of conversations that need to happen.
Céline Semaan: Ten years from now, will we look at AI as just another super calculator? Will we be asking the same questions that we are asking today, meaning that the change we’re seeking is not necessarily technological, but philosophical and cultural? How do you see that?
Sinead Bovell: AI will look like much more of a philosophical, cultural, and social transition than solely a technological one. This is true of a lot of general-purpose technologies.
The inventions in technology lead to how we organize our societies and how we govern them. If you look at the printing press, it led to a secular movement and gave power to that movement. You get big social, philosophical, and cultural changes, and revolutions in society when you experience this scale of technical disruption. I think we will look back on the AI inflection point as one of the most pivotal transitions in human history in the past couple hundred years. I would say it’s going to be as disruptive as the printing press and maybe the steam engine combined. And we made it through both of those. There was a lot of turmoil and chaos, but we did make it through both of those.
We are a much more vibrant, healthy society now. We live longer and, relatively speaking, we have much more equality. There is a path where it works out, but we have to be making the decisions to make that happen. However, it’s not practical that a subset of the population makes the decisions on behalf of everyone. And that’s why I think it’s so important for people to get in the game and not see AI as this really technical device or technology, but instead, as a big social, cultural, and philosophical transition. Your lived experience qualifies you to participate in these conversations; there’s nobody who can carry the weight of this on their own.
{
"article":
{
"title" : "The Culture of Artificial Intelligence",
"author" : "Sinead Bovell, Céline Semaan",
"category" : "interviews",
"url" : "https://everythingispolitical.com/readings/sinead-bovell-on-ai-artifial-intelligence",
"date" : "2025-07-20 21:35:46 -0400",
"img" : "https://everythingispolitical.com/uploads/sinead-bovell-headshot.jpg",
"excerpt" : "Céline Semaan: It is being reported that AI will make humans dumber than ever, that it is here to rule the world, and to subjugate us all by bringing on a climate apocalypse. Being an AI and tech expert, how can you help people better understand AI as a phenomenon that will impact us but that we shouldn’t necessarily fear?",
"content" : "Céline Semaan: It is being reported that AI will make humans dumber than ever, that it is here to rule the world, and to subjugate us all by bringing on a climate apocalypse. Being an AI and tech expert, how can you help people better understand AI as a phenomenon that will impact us but that we shouldn’t necessarily fear?Sinead Bovell: It depends on where you are… in the Global North, and particularly in the US, perspectives on artificial intelligence and advanced technologies are more broadly negative. When you look at regions in the Global South, when you look at regions in Asia, AI is seen in a much more positive light. Their societies tend to focus on the benefits new technology can bring and what it can do for their quality of life. The social media ecosystem thrives on negative content, but it really does depend on where you are in the world as to how negatively you’re going to view AI. When it comes to the actual fears and the threats themselves, most of them have some validity. Humans could become less intelligent over time if they’re overly reliant on artificial intelligence systems, and the data does show that AI can erode core cognitive capacities.For example, most of us can’t read maps anymore. If you are in the military and your satellite gets knocked down and you need to understand your coordinates, that might be a problem. But for the average person, not reading a map has allowed us to optimize our time; we can get from A to B much more quickly. What do we fill the time with that AI gives us back with? That’s a really important question.Another important question is: How do we purposely engineer cognitive friction into the learning and thinking environment so we don’t erode that core capability? That’s not something that is just going to happen. We are humans, we take the path of least resistance, like all evolutionary species do. 
If you look at the printing press, the chaotic abundance of information eventually led to the scientific method and the peer review. Educators, academics, scientists, and creators needed to figure out a way to sort through the valuable information and the nonsense, and that led to more cognitive friction. Those pathways haven’t been developed yet for AI. How we use and assimilate AI depends on the actions we take when it comes to the climate apocalypse, for instance. As of now, how AI uses water and energy is nothing short of a nightmare. However, it’s not really AI in isolation. It’s our social media habits in general. When you look at them in aggregate and globally, our digital habits and patterns aren’t good for the climate in general. And then AI just exacerbates all of that.AI is not a technology that you are going to tap into and tap out of. It’s not like Uber where maybe you don’t use the app because you would prefer to bike, and that’s the choice that you make. AI is a general-purpose technology, and it’s important that we get that distinction, because general-purpose technologies, over time, become infrastructure, like the steam engine, electricity, and the internet. We rebuild our societies on top of them, and it’s important that we see it that way, so people don’t just unsubscribe out of protest. That only impedes their ability to make sure they keep up with the technology, and give adequate feedback and critiques of the technology.Céline Semaan: I recently saw you on stage and heard your response to a question about whether AI and its ramifications could be written into an episode of the TV show Black Mirror. Would you be able to repeat the answer you gave?Sinead Bovell: The stories we see and read about AI are usually dystopian. Arguably, there are choices we continue to make over and over again that we know will lead to negative outcomes, yet we don’t make different choices. To me, that’s the real Black Mirror episode… can we rely on ourselves? 
In some circumstances, we continually pick the more harmful thing. Most of the big challenges we face are complicated but not unsolvable. Even with climate, a lot of the solutions exist, and actually most of them are grounded in technology. What isn’t happening is the choice to leverage them, or the choice to subsidize them so they become more accessible, or the choice to even believe in them. That scares me a lot more than a particular use case of technology. Most of the biggest challenges we face are down to human choices, and we’re not making the right choices.Céline Semaan: Are you afraid of AI taking over the world and rendering all of our jobs useless? How do you see that?Sinead Bovell: There’s AI taking over the world, and that’s AI having its own desire and randomly rising up out of the laptop or out of some robot. I’m not necessarily concerned about that. You can’t say anything is a 0% chance, right? We don’t know. There are so many things you can’t say with 100% certainty. I mean, are we alone the universe? It’s really hard to prove or disprove those types of things. Where I stand on that is… sure allocate research dollars to a select group of scientists who can work on that problem. However, I am quite concerned about the impact AI is going to have on the workforce. We can see the destruction of certain jobs coming. It’s going to happen quickly, and we’re not preparing for it properly. Every general-purpose technology has led to automation and reconfiguration of the shape of the workforce. Let’s look at the first industrial revolution which lasted from approximately 1760-1840. If we were to zoom in on people working in agriculture, by the end of the 19th Century, around 70-80% of those people were doing something different. That is an astounding change. People had jobs, they just looked very different from working on the farm. But what if that happens in seven years rather than 80 years? That’s what scares me. 
I think the transition will be quite chaotic because it’s going to be quite quick, but it doesn’t have to be. History isn’t a great predictor of the future, but it does give you a lot of examples of what you don’t need to do again.The reason the industrial revolution turned out to be a good thing in the end, in terms of the life we all live, is that, for instance, we have MRIs and don’t have to have our blood drained to see if we’re sick. But people were just left to fend for themselves. It was chaos, and it turned into this kind of every person for themselves. Kind of figure it out. Get to the city. Bring your family. Don’t bring your family. It was really chaotic. How are we going to not repeat that? I don’t know if we are putting the security measures in place to make sure people are protecting that transition.The most obvious one to me is health care in the United States. I don’t know the exact number, maybe it’s around 60% of people, but don’t quote me on that, are reliant on their job for health care. That’s where their insurance comes from. What is going to happen to their insurance if their job goes away or if they transition to being self-employed? How do we help people transition? People don’t even dare go down that road, but those are the types of conversations that need to happen.Céline Semaan: In 10 years from now, will we look at AI as just another super calculator. And we will be asking the same questions that we are asking today, meaning that the change we’re seeking is not necessarily technological, but philosophical and cultural. How do you see that?Sinead Bovell: AI will look like much more of a philosophical, cultural, and social transition than solely a technological one. This is true of a lot of general-purpose technologies.The inventions in technology lead to how we organize our societies and how we govern them. If you look at the printing press, it led to a secular movement and gave power to that engine. 
You get big social, philosophical, cultural changes, and revolutions in society when you experience this scale of technical disruption. I think we will look back on the AI inflection point as one of the most pivotal transitions in human history in the past couple 100 years. I would say it’s going to be as disruptive as the printing press and maybe steam engine combined. And we made it through both of those. There was a lot of turmoil and chaos, but we did make it through both of those.We are a much more vibrant, healthy society now. We live longer and, relatively speaking, we have much more equality. There is a path where it works out, but we have to be making the decisions to make that happen. However, it’s not practical that a subset of the population makes the decisions on behalf of everyone. And that’s why I think it’s so important for people to get in the game and not see AI as this really technical device or technology, but instead, as a big social, cultural and philosophical transition. Your lived experience qualifies you to participate in these conversations; there’s nobody who can carry the weight of this on their own."
}
,
"relatedposts": [
{
"title" : "Mercy Over Speed: Revolutionizing Our Political Imagination",
"author" : "Sue Ariza",
"category" : "essays",
"url" : "https://everythingispolitical.com/readings/mercy-over-speed",
"date" : "2025-12-11 13:40:00 -0500",
"img" : "https://everythingispolitical.com/uploads/Cover_EIP_Mercy_Speed.jpg",
"excerpt" : "2025 was a masterclass in haste.",
"content" : "2025 was a masterclass in haste.Policies rushed to enact a merciless agenda that benefit only the few—President Donald Trump scrapped Biden’s AI executive order within hours of taking office, wiping out safety and transparency requirements as we enter a new digital age. Immigration officials were ordered to quadruple immigration arrests overnight. Food assistance was frozen while billions in relief funds sat unused; hunger used as a pawn in the longest government shutdown in American history. Entire communities pushed not just to autopilot, but to survival—by algorithms that cannot see them, by bureaucracies that cannot pause long enough to understand them, by political actors who confuse immediacy with leadership.Of course, the real crisis isn’t speed on its own. It’s what speed erases: attention, nuance, reflection, and the fundamental truth that human beings are not statistics or administrative burdens. Perhaps nowhere was this clearer than in the State Department’s human rights reports earlier this year. In the name of “streamlining,” references to prison abuse, LGBTQIA+ persecution, and attacks on human rights defenders were quietly removed. The language was technocratic—reduce redundancy, tidy up the narrative—but the effect was ideological: whole communities and categories of suffering erased from national memory.Because the truth is, what speed strategically, ruthlessly, obliterates is the one crucial political practice we need most: mercy.Our world has taught us to think of mercy in opposition to speed, too soft for our lived realities, though it’s anything but that: Mercy is the commitment to respond to harm, conflict, or complexity with clarity rather than panic—with discernment instead of reflex. Mercy is the refusal to collapse a person, an idea, or a crisis into something smaller than it is. 
Mercy is political imagination: the capacity to see beyond what urgency allows and stay with one another long enough to resist the reflexes that turn disagreement into instant judgment—so we can listen before we attack or defend.But what does mercy actually demand of us? For us to reclaim it politically, we first must understand what it means and how it offers a counter-rhythm to our frantic culture of speed and instant gratification.The word itself tells a story. Mercy comes from the Latin merces—wages, payment, the price of goods. Ancient Romans understood it as a transaction. But early Christians shifted the word toward the sacred: the spiritual reward for showing kindness where cruelty was expected. They moved a word about the marketplace into a vocabulary of grace.Judaism’s rachamim, Islam’s rahma, Buddhism’s karuṇā, and Hinduism’s dayā all insist on the same truth: mercy is a way of recognizing the sacredness in others.That transformation mirrors what mercy asks of us now: to move beyond the logic of exchange, beyond what is earned or owed. It asks us to look at someone who has caused pain, and instead of asking What do they deserve? ask, What does healing require here? It is seeing beyond someone’s worst moment and choosing curiosity over condemnation.But mercy is more than individual forgiveness. It is a way of moving through the world that assumes people are larger than their failures; that redemption remains possible; that, importantly, time is not a scarce resource, but something we can afford to give. Mercy requires attention—what French philosopher Simone Weil called “the rarest and purest form of generosity.” It is why American novelist James Baldwin described love as an active emotion: the daily labor of truly seeing another person, especially when the systems around us tell us to look away.The problem, however, is that attention is precisely what our culture has made almost impossible to give. 
We are overstimulated, overextended, algorithmically hijacked, not only bearing witness to incredible amounts of suffering, but scrolling past it. We don’t refuse mercy because we’re cruel. We refuse it because we’ve built a world that makes stopping feel unimaginable—impractical.This is why mercy is not opposed to speed; it is opposed to false urgency. There are moments when mercy requires swift, decisive intervention. The problem is not action—it’s reaction: the unexamined acceleration that mistakes immediacy for moral clarity and treats nuance as an inconvenience.Consider how the culture of speed is destabilizing basic public systems. Take the Supplemental Nutrition Assistance Program (SNAP) that feeds more than 42 million Americans. This year, households faced unprecedented threats to their benefits—not because their needs had changed, not because the money didn’t exist, but because the administration chose to let billions in contingency funds sit untouched. The crisis wasn’t a failure of capacity. It was a political choice dressed up as inevitability.Or look at the rush to implement AI—a race happening not because anyone has thought deeply about what these systems are for, but because companies fear being the last to adopt them. Across industries, AI is being plugged into hiring platforms, healthcare systems, education tools, corporate workflows, and crisis-response mechanisms, often with little understanding of the consequences. “Innovation” has become a justification to move faster than ethics, oversight, or even common sense can keep up. In that scramble to avoid falling behind, speed becomes a substitute for understanding what people actually need and for the mercy that governance requires.A merciful politics would insist that deliberation is not inefficiency but protection, and that slowing down is an ethical requirement. 
Because the stakes of leadership and governance without it are real: if AI systems are going to help determine who gets hired, who gets healthcare, who receives support, which students get flagged for discipline, then refusing to slow down is not neutrality—it is a political choice with human costs.Our addiction to speed also shapes how we respond to political disagreement. Our culture no longer rewards thinking or meaningful conversation. Instead, it rewards reacting. Watch how career Democrats responded to New York Assembly member Zohran Mamdani’s mayoral campaign in November. Rather than engaging with his proposals on housing, healthcare, or municipal governance, establishment voices moved immediately to demonization. Senate Minority Leader Chuck Schumer withheld his endorsement entirely. His ideas required discussion, which takes time and attention. His vision challenged party orthodoxy, which requires deliberation to refute or incorporate. Instead of dialogue, we see instant censure, moral panic, and swift punishment.The speed of the response is the point. It signals that dissent is tolerable only when it can be quickly absorbed or quickly dismissed. Ideas that require conversation are treated as threats simply because they resist rapid processing. The issue isn’t whether Mamdani’s proposals are correct (and of course, it remains to be seen how they will actually be implemented); it’s that the reflex to demonize rather than debate reveals a political culture that has forgotten how to think collectively.We see this punitive speed logic everywhere. Students disciplined for language before conversations can happen. Social movements judged by headlines rather than the work. Communities criminalized in real time by social media cycles that flatten context into consumable outrage. We’ve built a society quicker to punish than to understand, quicker to condemn than to contextualize.But mercy could help us move differently. 
Mercy would refuse to relegate a person or an idea to a caricature simply because the truth requires time. Mercy asks us to hold uncertainty long enough to respond with discernment rather than reflex. It asks us to think—together.Legal scholar Matthias Mahlmann writes that dignity is “subversive,” an insistence that every human life carries irreducible worth. But dignity has a temporal requirement: you cannot witness another person’s humanity at speed. You cannot attend to the complexity of a life if you’re only interested in the fastest possible outcome.This is why systems built around optimization always feel so violent. Algorithmic welfare reviews, automated policing, real-time public shaming—all of them demand that human beings be compressed into categories that can be processed quickly. The violence isn’t just in the outcome; it’s in the refusal of attention itself.Mercy and dignity are inseparable. Dignity names the inherent worth that every person carries; mercy is the discipline that protects that worth in practice. Dignity says there is something unbreakable in each of us. Mercy is how we honor that unbreakable thing, especially when harm or conflict tempts us to forget it. What would shift if our reflex wasn’t How fast can we react?, but How deeply can we understand? What becomes possible when we refuse to hurry past another person’s humanity?Mercy is not sentiment. It is resistance. It is the refusal of dignity fatigue. It is the discipline of witnessing: in political policy, in the conversations we have, in how we treat each other’s failures and hopes. 2025 taught us what haste can destroy. The question now is whether we’re willing to build something slower—and more human—in its place."
}
,
{
"title" : "What We Can Learn from the Inuit Mapping of the Arctic",
"author" : "William Rankin",
"category" : "excerpts",
"url" : "https://everythingispolitical.com/readings/inuit-mapping-arctic",
"date" : "2025-12-02 12:49:00 -0500",
"img" : "https://everythingispolitical.com/uploads/Cover_EIP_Template-Inuit_Map.jpg",
"excerpt" : "This excerpt is from RADICAL CARTOGRAPHY by William Rankin, published by Viking, an imprint of Penguin Publishing Group, a division of Penguin Random House, LLC. Copyright © 2025 by William Rankin.",
"content" : "This excerpt is from RADICAL CARTOGRAPHY by William Rankin, published by Viking, an imprint of Penguin Publishing Group, a division of Penguin Random House, LLC. Copyright © 2025 by William Rankin.In 1994, the Berkeley geographer Bernard Nietschmann made a famous claim about the power of mapping in the global struggle for Indigenous rights. It was a claim about how the tools of historical oppression could be reclaimed by the oppressed: “More Indigenous territory has been claimed by maps than by guns. This assertion has its corollary: more Indigenous territory can be defended and reclaimed by maps than by guns.” The idea was that by putting themselves on the map—documenting their lives and their communities—Indigenous peoples would not be so easy to erase. Nietschmann was working in Central America, often heroically, during a time of violence and displacement, and he inspired a generation of researchers and activists interested in flipping the power structure of state-centric cartography on its head.But despite the spread of bottom-up mapping projects in the past 30 years, perhaps the most successful example of Indigenous mapping actually predates Nietschmann’s call to action. Just one year prior, in 1993, the Inuit of northern Canada signed a treaty creating the territory of Nunavut—the largest self-governing Indigenous territory in the world—and mapping was central to both the negotiation and the outcome. It remains one of the rare cases of Indigenous geographic knowledge decolonizing the world map.So why hasn’t the Inuit project been replicable elsewhere, despite decades more work on Indigenous mapping? The answer lies in the very idea of territory itself, and in particular in one of the most threatened parts of the Inuit landscape today: ice. The winter extent of Arctic sea ice reached a record low earlier this year, and a new low is predicted for the winter ahead. 
Yet the shrinking ice isn’t just an unshakable sign of Arctic warming; it’s also a poignant reminder of what Nietschmann got right—and what he missed—about the relationship between cartography and power. In particular, it shows how Inuit conceptions of space, place, and belonging are rooted in a dynamic, seasonal geography that’s often completely invisible on Western-style maps.The story begins in the 1970s, when the young Inuit leader Tagak Curley, today considered a “living father” of Nunavut, hired the Arctic anthropologist Milton Freeman to lead a collaborative mapping project of unprecedented scope and ambition. Freeman taught at McMaster University about an hour outside Toronto; he was white, but his wife, Mini Aodla Freeman, was Inuit (she was a translator and later a celebrated writer). Freeman assembled a team of other anthropologists and Arctic geographers—also white—to split the mapping into regions. They called their method the “map biography.” The goal was to capture the life history of every Inuit hunter in cartographic form, recording each person’s memories of where, at any point in their life, they had found roughly three dozen species of wildlife—from caribou and ptarmigan to beluga, narwhal, and seaweed. Each map biography would be a testimony of personal experience.After the mapping was split into regions, about 150 field-workers—almost all Inuit—traveled between 33 northern settlements with a stack of government-issued topographic maps to conduct interviews. Each hunter was asked to draw lines or shapes directly on the maps with colored pens or pencils. The interviewers stayed about 10 weeks in each settlement, visiting most hunters in their own homes, and the final participation rate was an astonishing 85 percent of all adult Inuit men. They collected 1,600 biographies in total, some on maps as large as 10 feet square.Then came the cartographers, back in Ontario: one professor and a team of about 15 students. 
The first map below (Figure 1) shows how the individual map biographies were transformed into summary maps, one for each community. For every species, the overlap of all hunters’ testimony became a single blob, and then blobs for all species were overlaid to make a complete map. The second map (Figure 2) shows one of the finished atlas pages along the Northwest Passage. The immediate impression is that the Arctic is in no way an empty expanse of barren land and unclaimed mineral riches. It is dense with human activity, necessary for personal and collective survival. The community maps combined to show almost uninterrupted Inuit presence stretching from northern Labrador to the Alaska border. Figure 1: Top left is a simplified version of a “map biography” from a single Inuit hunter, showing his birthplace and the places he hunted caribou, fox, wolf, grizzly bear, moose, and fish at various points in his life. (The original biography would have been drawn over a familiar government-issued topographic map.) The other three maps show how multiple biographies were then combined into patterned blobs for all hunters and all species. (Map courtesy of William Rankin/ Penguin Random House LLC.) Figure 2: A two-page spread from the finished atlas showing the seven kinds of animals hunted from the settlements of Igloolik and Hall Beach, in an area about 500 by 300 miles: caribou, polar bear, walrus, whale, fish, seal, and waterfowl. (Because of the large number of individual species recorded in the map biographies, some species were grouped together in the final maps.) The blobs are a strong, even overpowering figure atop an unusually subtle ground. Notice in particular how difficult it is to distinguish land and water areas, since the dark shading extends beyond coastlines even for individual species. 
This map in fact includes the Northwest Passage—the famous sea route around the tip of North America—but the crucial Fury and Hecla Strait (named after the two British ships that first learned of, but did not navigate, the passage in 1822) is almost entirely obscured. (Map courtesy of William Rankin/ Penguin Random House LLC.) Nothing about the cartography was meant to be subversive—or even controversial. For the cartographers, the only message was that the Inuit hunted a variety of species over large areas. But look again at the finished map in Figure 2. Yes, a foreground is layered over a background in the usual way, but the visual argument is strikingly different from a typical layered map in, say, a census atlas, where the foreground data doesn’t stray beyond crisp pre-existing borders. Here, in contrast, even the basic distinction between land and water is often obscure. The maps’ content is the facts of species and area; the maps’ argument is that Inuit culture is grounded in a substantially different understanding of territory than the one Western cartography was designed to show. As a result, this new atlas shifted the negotiations between the Inuit and the Canadian government decisively. The maps provided not only a legal claim to the Inuit-used land—documenting 750,000 square miles, an area the size of Mexico—but also a claim to the sea, showing an additional 325,000 square miles offshore. It took many years for the full implications to play out, but the erosion of the land–water boundary became central to the Inuit vision. At the time, wildlife on land was managed by the regional Northwest Territories government, while offshore marine species were the responsibility of centralized federal agencies. The Inuit used the atlas to win agreement for a new agency with equal responsibility over both. 
At the same time, the Inuit also improved their position by offering their offshore claims as evidence the Canadian government would use—not just in the 1980s, but even as recently as 2024—to resist foreign encroachment in the Northwest Passage. The final agreement in 1993 granted the Inuit $1.15 billion in cash, title to about 17 percent of the land in the “settlement area,” representation on several new management agencies, a share of all natural-resource revenue, broad hunting and fishing rights, and a promise that the territory of Nunavut would come into being on April 1, 1999. It’s easy to count this project as a success story, but it’s also important to remember that it depended both on the government’s own interest in negotiation and on the willingness of Indigenous peoples, or at least their leadership, to translate their sense of space onto a map, solidifying what had previously been fluid. It also meant abandoning claims to ancestral lands that had not been used in living memory and provoking new boundary disputes with neighboring, and previously amicable, Indigenous groups. These tradeoffs have led some scholars to critique mapping as only “drawing Indigenous peoples into a modern capitalist economy while maintaining the centrality of state power.” But for the Inuit, the alternatives seemed quite a bit worse. With the more recent proliferation of Indigenous mapping initiatives elsewhere—in Latin America, Africa, and Asia—the tradeoffs have been harder to evaluate. Most governments have shown little interest in addressing Indigenous claims, and when bottom-up mapping has been pushed instead by international nonprofits interested in environmental conservation, the downsides of mapping have often come without any of the upsides. Yet it’s not just the attitude of the state that’s been different; it’s also the cartography. In nearly all these other cases, the finished maps have shown none of the territorial inversion of the Inuit atlas. 
Instead, Indigenous knowledge is either overlaid on an existing base map in perfectly legible form, or it’s used to construct a new base map of a remarkably conventional sort, using the same visual vocabulary as Western maps. Did the Inuit project just show the data so clearly that its deeper implications were immediately apparent? No, not really, since the great irony here is that the cartographers were in fact quite dissatisfied. Follow-up surveys concluded that the atlas was only “moderately successful” by their usual mapmaking standards. The Inuit atlas was a kind of happy accident—one that doesn’t conform to any of the usual stories about Indigenous mapping, in Canada or elsewhere. The lesson here isn’t that maps should be as Indigenous as possible, or that they should be as orthodox as possible. These maps were neither. My take is simpler: the atlas shows that maps can, in fact, support alternative conceptions of space—and that showing space in a different way is crucial. The possibilities aren’t endless, but they’re broader than we might think. Plotting different sorts of data is a necessary step, but no less important are the relationships between that data and the assumptions of what lies below. For the Inuit, these assumptions were about land, water, and territory. These were in the background both visually and politically, and they were upstaged by an unexpectedly provocative foreground. The layers did not behave as they were meant to, and despite the tradeoffs, they allowed an Indigenous community to fight for their home and their way of life."
}
,
{
"title" : "Malcolm X and Islam: U.S. Islamophobia Didn’t Start with 9/11",
"author" : "Collis Browne",
"category" : "essays",
"url" : "https://everythingispolitical.com/readings/malcolm-x-and-islam",
"date" : "2025-11-27 14:58:00 -0500",
"img" : "https://everythingispolitical.com/uploads/life-malcolm-3.jpg",
"excerpt" : "",
"content" : "Anti-Muslim hate has been deeply ingrained and intertwined with anti-Black racism in the United States for well over 60 years, far longer than most of us are taught or are aware. As the EIP team dug into design research for the new magazine format of our first anniversary issue, we revisited 1960s issues of LIFE magazine—and landed on the March 1965 edition, published just after the assassination of Malcolm X. The reporting is staggering in its openness: blatantly anti-Black and anti-Muslim in a way that normalizes white supremacy at its most fundamental level. The anti-Blackness, while horrifying, is not surprising. This was a moment when, despite the formal dismantling of Jim Crow, more than 10,000 “sundown towns” still existed across the country, segregation remained the norm, and racial terror structured daily life. What shocked our team was the nakedness of the anti-Muslim propaganda. This was not yet framed as anti-Arab in the way Western Islamophobia often is today. Arab and Middle Eastern people were not present in the narrative at all. Instead, what was being targeted was organized resistance to white supremacy—specifically, the adoption of Islam by Black communities as a source of political power, dignity, and self-determination. From this moment, we can trace a clear ideological line from anti-Muslim sentiment rooted in anti-Black racism in the 1960s to the anti-Arab, anti-MENA, and anti-SWANA racism that saturates Western culture today. The reporting leaned heavily on familiar colonial tropes: the implication of “inter-tribal” violence, the suggestion that resistance to white supremacy is itself a form of reverse racism or inherent aggression, and the detached, almost smug tone surrounding the violent death of a cultural leader. Of course, the Nation of Islam and Elijah Muhammad represent only a few expressions within an immense and diverse global Muslim world—spanning Morocco, Sudan, the Gulf, Iraq, Pakistan, Indonesia, and far beyond. Yet U.S. 
cultural and military power has long blurred these distinctions, collapsing complexity into a singular enemy image. It is worth naming this history clearly and connecting the dots: U.S. Islamophobia did not begin with 9/11. It is rooted in a much older racial project—one that has always braided anti-Blackness and anti-Muslim sentiment together in service of white supremacy, at home and abroad."
}
]
}