Manufacturing Fear: How Data Became the Most Powerful Political Weapon

For years, I worked in marketing and advertising, building a career around fully integrated campaigns that blended social media, publicity, activations, and traditional advertising into coordinated, multipronged strategies. This work combined cohesive creative direction and story integration, constantly refined through audience behavior. Over time, I became one of the leading consultants in this space across several industries, developing a deep fluency in how audience data, psychology, and targeted messaging can shape behavior and advance client goals.

But over time, the work began to feel increasingly hollow, often pushing people toward things they didn’t need. Filmmaking had always been my end goal, so I eventually left brand work behind, using the skills I’d developed to transition into the entertainment industry and apply that same understanding of audiences toward something more meaningful, connecting people with stories rather than persuading them to buy something. Now working as a filmmaker, I’m grateful to be far removed from that world, though my time inside it revealed just how powerful these tools can become when placed in the wrong hands.

At our core, people are fairly simple. We gravitate toward our interests, our hobbies, and the communities that feel authentic to us. Marketing has always tapped into those affinities; that part isn’t new. What is new is the scale and precision made possible by social media platforms and data extraction. Companies like Meta, X, Google, and Palantir operate with unprecedented access to personal behavioral data, shaping information flows and social dynamics in ways that deepen division rather than strengthen communities. The leaders behind these companies have exploited these systems for profit and influence, contributing to social fragmentation and growing mistrust, particularly amid ongoing controversies and revelations involving elite networks and abuses of power. Whether through the active pursuit of profit or power, the result is the same: communities are pulled apart while enormous wealth concentrates at the top. Below is a deeper look at the real harms created through the exploitation of our data.

THE PROBLEM

The exploitation of our data, affinities, and personal interests by powerful tech elites has helped drive one of the most alarming developments of the modern world. These companies have built vast empires on the extraction and manipulation of human behavior, while increasingly consolidating influence through ownership of major media institutions. The consequences reach far beyond privacy violations. Through coordinated efforts, this concentration of power now threatens the very foundations of personal autonomy, democratic discourse, and our shared sense of truth.

At the most basic level, our data affinities are intimate reflections of who we are: what we fear, what we desire, what we believe, and what we might become. When these signals are harvested at massive scale, corporations gain the ability not only to predict behavior, but to shape and manipulate it. Platforms no longer simply reflect our preferences; they actively steer them, optimizing for engagement, profit, and power rather than well-being or social good. Human attention becomes a commodity, and people become instruments within systems designed to keep them reactive, predictable, and profitable.

The political danger emerges when this behavioral power is used to manipulate public emotion, particularly fear. Data-driven platforms learn precisely which messages provoke anxiety, outrage, tribal loyalty, or resentment… and then amplify them. Fear spreads faster than nuance. Anger generates more engagement than reason. Algorithms reward the most emotionally activating content, regardless of accuracy or social consequence.

The result is a public sphere increasingly shaped by manipulation rather than deliberation. Citizens are pushed toward emotional reaction instead of reflection. Complex issues are reduced to outrage cycles. Political actors, both domestic and foreign, exploit these systems to divide populations, suppress trust in institutions, and mobilize people around perceived threats rather than shared solutions. Democracy depends on citizens capable of independent thought and informed debate, yet these systems bypass reason and trigger reflex.

The danger intensifies when this influence is concentrated in the hands of a few unaccountable actors. Companies wielding data resources and algorithmic power rival nation-states in influence, yet operate without any democratic oversight. Their platforms increasingly determine which information spreads, which narratives gain legitimacy, and which voices are amplified or buried. Shared reality fractures into personalized information bubbles, making consensus nearly impossible.

Even more troubling is how these systems exploit psychological vulnerabilities at scale. Platforms learn what makes individuals feel insecure, validated, or afraid, then feed them more of it to maintain engagement. Fear and identity-based conflict become business assets. Polarization deepens, radicalization accelerates, and populations become easier to manipulate politically and economically.

Children and young people are especially vulnerable. Their psychological development now unfolds within systems designed to capture attention and shape behavior before they can meaningfully consent. Emotional manipulation becomes normalized from an early age, embedding patterns of dependency and comparison that persist into adulthood.

Geopolitically, data concentration creates new forms of power. Detailed psychological and demographic profiling enables targeted propaganda and information warfare across borders. Elections and social movements become vulnerable to invisible influence campaigns designed to inflame division and undermine trust.

Security risks add another layer: massive data stores inevitably attract breaches, exposing intimate details of millions of lives. Personal histories cannot be reset once stolen, creating long-term vulnerability. And each time a major company is breached, the stolen information often becomes a tool for financial fraud, political manipulation, or other forms of exploitation.

Culturally, personalization fragments shared experience. Citizens increasingly inhabit separate informational realities. Without common reference points, public dialogue collapses into conflict, and compromise becomes impossible.

At the moral core lies a deeper problem: human emotion itself becomes raw material for profit and power. Loneliness, fear, hope, curiosity… every signal is captured, monetized, and manipulated. Emotional states become inputs to business models and political strategies that reward keeping people anxious, reactive, and engaged.

The next wave of artificial intelligence threatens to magnify these risks. As predictive systems grow more sophisticated, platforms will anticipate needs and influence decisions before individuals consciously recognize them. Persuasion becomes personalized, invisible, and continuous. Machines may understand human behavior better than humans understand the systems shaping them.

Ultimately, the exploitation of data affinities is dangerous because it shifts real power away from individuals and communities and into largely invisible, unregulated systems designed to influence behavior rather than respect human dignity. We’re told we’re making free choices, but those choices are increasingly shaped long before we ever realize it. Freedom begins to look intact on the surface, even as our decisions, attention, and beliefs are quietly steered behind the scenes.

The deeper fear is political: that societies built on citizens making independent decisions will gradually give way to populations guided by tech CEOs who engineer emotion, especially fear, through systems designed to influence behavior rather than inform the public.

If left unchecked, this trajectory risks creating a world where algorithmic incentives outweigh human values, where shared reality dissolves, and where those with the most data know us better than we know ourselves, allowing bad actors to exploit these systems and use that knowledge not to empower us, but to manipulate us and pull communities apart.

THE RESPONSE

A meaningful response starts with recognizing that this problem isn’t abstract. It affects how we think, how we relate to each other, how we receive information, and ultimately how our societies function. The consequences show up in our daily lives, in the way conversations break down, in rising mistrust, in growing isolation, and in the difficulty of finding shared ground even within our own communities.

First, we have to reclaim agency over our own attention. That means being conscious of how these platforms try to provoke emotional reactions, especially fear and outrage, and refusing to let algorithms decide what we care about. It also means understanding the intentions behind each platform, including how it monetizes our participation, and making deliberate choices about how we use it and whether we want to support those models with our time and money. Slowing down before sharing, questioning emotionally charged content, practicing critical thinking, and seeking out diverse sources of information are small acts, but they help restore independence of thought.

Second, we must demand transparency and accountability from the companies and legislators who shape our information ecosystems. Supporting stronger data privacy laws, algorithmic transparency, and regulation of behavioral targeting isn’t anti-technology; it’s pro-democracy. The digital world has grown faster than the rules governing it, and citizens must push policymakers to catch up.

Accountability must extend to the companies and leaders who have amassed unprecedented influence over how information moves through society. Companies like Meta, X, Alphabet (Google), and Palantir wield enormous power through platforms and technologies that shape public discourse, behavior, and access to information. Holding these companies accountable means demanding meaningful limits on how personal data is collected and used, increasing transparency around algorithmic systems, and reducing the unchecked access corporations have to our lives and attention. Rebalancing that relationship isn’t about rejecting technology, but about ensuring it serves the public interest rather than undermines it.

The goal isn’t to reject technology or retreat from the digital world, because I believe we can find community and build meaningful connections there. It’s to ensure that human dignity, truth, and community come before profit, engagement metrics, and systems that intentionally divide our communities. The future of public discourse, democracy, and even personal autonomy depends on whether we choose to take that responsibility seriously now, rather than later.

Third, we need to rebuild real-world community. Algorithmic systems thrive when people are isolated and angry. Shared physical spaces, local organizations, arts communities, schools, neighborhood initiatives, and cultural events all create bonds that can’t be easily manipulated by digital systems. Strong communities are harder to divide.

Finally, those of us who work in media, storytelling, technology, and culture carry a real responsibility to use our skills ethically and with intention. The stories we tell, the platforms we build, and the systems we design shape how people see one another and how societies understand themselves. Film, journalism, art, and technology can deepen empathy and reconnect communities rather than fracture them. Technology itself isn’t the enemy. The future depends on whether we choose to build and use it in ways that serve people, strengthen truth, and bring us closer together rather than push us apart.

Organizations Working on Digital Rights & Platform Accountability:

  • Electronic Frontier Foundation (EFF)
    One of the leading organizations defending digital privacy, free expression, and user rights online. They actively fight surveillance abuse and advocate for stronger data protections.

    https://www.eff.org

  • Center for Humane Technology
    Focused on exposing how technology exploits attention and behavior, and on pushing platforms and policymakers toward healthier tech ecosystems.

    https://www.humanetech.com

  • ACLU
    Active in cases involving privacy, surveillance, free speech, and digital civil liberties in the U.S.

    https://www.aclu.org

  • Public Knowledge
    A nonprofit advocating for consumer rights in communications and digital policy, including competition, media ownership, and data regulation.

    https://publicknowledge.org

  • Data & Society
    Research organization studying how data and technology shape society, policy, and public discourse.

    https://datasociety.net
