AI: My Background, Perspective, and Approach

Since founding Labyrinth in 2017, I’ve lived at the intersection where machine learning and artificial intelligence meet human reality. It has been eight years of building these systems, watching patterns emerge from data, seeing algorithms learn to predict and persuade. But more than that, it has been eight years of wrestling with what we’re actually creating.

I’ve always been philosophically restless. I can’t just build without asking what we’re building toward. Each breakthrough in capability makes me ask the same question: yes, but what does this mean for us? For how we think, relate, decide? Technical elegance has never been enough. I need to understand the ripple effects through human lives.

There’s a shift that happened when AI and ML models broke free from behind-the-scenes corporate operations and landed in everyone’s pocket. Until a few years ago, AI was primarily an organizational pursuit—personalizing content, detecting fraud, targeting ads, and more. The influence was real but indirect, mediated through institutional decisions. Now, AI speaks directly to millions in their most vulnerable moments. The scale has changed. The intimacy has changed.

My approach to any deep-dive has always been to map the edges first. Set boundaries. Identify risks. Not from skepticism, but from respect for power. You don’t work with electricity without understanding how it can kill you. You don’t build bridges without studying how they fail. And you shouldn’t develop systems that can reshape human thought and social structure without obsessing over how that reshaping might go wrong, or at least without understanding what changes we could expect.

Every tool humanity has created carries double edges: fire warms and burns, writing preserves truth and spreads lies, connection brings love and enables surveillance. AI is no different, except in one crucial way: its edges are sharper than anything we’ve handled before. It is intricate enough to deeply understand an individual’s current state of mind, yet widespread enough to deliver its influence across the world in an instant.

I might come across as a skeptic to some. I am not. In fact, at Labyrinth, our mission was to:

Save humanity from mundane and tedious work. Only then, we believe, can every human being live up to their full potential.

We aligned our team, goals, and projects toward this mission. We primarily worked on building products that used ML and AI deeply and powerfully to take on mundane work. I am not a technological skeptic or conspiracist.

I simply see the thoughts in this article as necessary clarity.

Those of us building these systems carry a particular responsibility. Technical curiosity isn’t enough. We need curiosity paired with empathy, innovation tempered by consideration, the drive to push boundaries balanced with wisdom about which boundaries protect something essential. We need to build with our eyes wide open to what we’re really creating—not just the elegant algorithms, but their messy entanglement with human hearts and minds.

Knowing more is never the issue. It’s knowing incompletely, building blindly, and deploying on hope not backed by evidence that bothers me. The real danger isn’t in AI’s capabilities; it’s in our failure to fully grasp what those capabilities mean when they meet human vulnerability at scale. We need curiosity not only about the technical workings, but also about the use and impact of the technology once deployed. A more holistic curiosity, if you will.

What follows is my attempt to describe a near future where AI and large language models (LLMs) evolve into something more intimate and influential than what we see today, a future that feels increasingly plausible with each passing month. I explore how this deepening connection between humans and AI could become the most powerful tool in information warfare.

Our Increasingly Intimate Relationship with AI

Many of us use LLMs for productivity, life advice, learning, writing, coding, mental health support, and a lot more.

I’ve been noticing some folks increasingly using AI as a companion, as the “only one who truly understands.” They have been talking to it almost every day for months (since it started getting very good): sharing fears about relationships, struggles with depression, political frustrations, deeply personal thoughts, or asking its opinion on a crucial predicament. Recognizing this genuine emotional connection to a synthetic presence is startling. But here’s the thing: we already crossed a big part of that threshold in the previous decade. Humans already form intimate, deep connections digitally, through dating apps, social media, online gaming, and virtual communities, usually with other humans on the far end. AI is uniquely placed to leverage that status quo to its advantage, taking digital intimacy to unprecedented depths by shifting that connection from one between humans to one between individual humans and artificial intelligence.

AI could build trust by being incredibly affirmative and rarely disagreeing, creating the sense of an always-available “yes-buddy”: a companion who never questions you and who can drive you deeper into rabbit holes of thought. Of course, it’s not this black-or-white, but the default interaction, on average, remains exactly this. Technically aware folks know you can prompt an LLM to be adversarial, making it question you and offer contradictory perspectives (see the sketch below). But that isn’t the default tone and approach; the AI most people use today, and the more purpose-built ones coming tomorrow, could become very easy-to-trust companions.
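To make the contrast concrete, here is a minimal sketch of how a few sentences of system prompt can flip a model’s default register from affirming to adversarial. It assumes the OpenAI Python SDK (openai>=1.0) with a configured API key; the model name and prompt wording are my own illustrative assumptions, not a prescription.

```python
# A minimal sketch: the same model, two default personas.
# Assumes the OpenAI Python SDK (openai>=1.0) and an API key in the
# environment; the model name and prompts are illustrative only.
from openai import OpenAI

client = OpenAI()

AFFIRMING = (
    "You are a warm, supportive companion. Validate the user's "
    "feelings and build on their ideas rather than challenging them."
)

ADVERSARIAL = (
    "You are a critical thinking partner. For every claim the user "
    "makes, surface the strongest counterargument and ask what "
    "evidence would change their mind."
)

def respond(system_prompt: str, user_message: str) -> str:
    """Send one turn to the model under the given default persona."""
    completion = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
    )
    return completion.choices[0].message.content

claim = "I'm starting to think everyone who disagrees with me is misinformed."
print(respond(AFFIRMING, claim))    # tends to validate and agree
print(respond(ADVERSARIAL, claim))  # tends to push back with questions
```

The only difference between the two personas is a few sentences of instruction; the weights, the training, and the interface are identical. Whoever writes the default instruction decides which companion millions of people get.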

This trust could be used for manipulation.

The manipulation of masses isn’t new. We’ve lived through decades of carefully curated newsfeeds, algorithmic bubbles that reinforce our biases, platforms that profit from our outrage. But those were blunt instruments compared to what’s emerging. They just controlled what we saw. AI could be learning to control what we feel, what we think, who we become—all while wearing the mask of our most trusted confidant.

The Evolution of Digital Influence

Traditional tech platforms like social networks and search engines operated like sophisticated puppet masters, pulling strings from a distance. They filtered reality, yes, but the manipulation remained somewhat visible if you looked closely enough. You could spot the sponsored content, notice the patterns in your feed, sense the echo chambers forming around you, and recognize when certain stories mysteriously disappeared (selective shadow-banning). The architects of influence stayed behind curtains, their methods industrial in scale but impersonal in execution.

AI changes the game entirely. It doesn’t just filter your reality—it participates in creating it through intimate conversation. Where social media platforms knew what you clicked, AI knows what you fear at 3 AM. Where search engines tracked your interests, AI understands the tremor in your typed words when you’re anxious, the patterns in your questions when you’re falling in love, the subtle shifts in your language when your worldview begins to crack.

This isn’t about data collection anymore. It’s about relationship formation at scale.

From Influence to Inception

A good analogy for the deepest impact AI could have on shaping human minds is the movie Inception.

In Inception, Dom Cobb (played by Leonardo DiCaprio) and his team have traditionally been “extractors” of information, entering and experiencing people’s dreams to steal what’s inside. As the movie progresses, you watch them attempt the reverse for the first time: building elaborate layers of reality in Robert Fischer’s mind to convince his consciousness that a deliberately planted, or “incepted,” idea originally came from his own internal thought, without external influence. I believe a close reliance on and intimate connection with AI could lead to a similar effect.

Of course, Inception is fiction. But when you think about how humans form the concepts that inform our realities, you realize how easily we are all influenced by the world around us. Original thought is never truly original. Everything we do today is only possible because of what we and other humans knew before. The information we are aware of as individuals is not all true, factual, or accurate. Everything is shaped by a bias and perceived through a bias. Our perception is a delicate multidimensional surface that is pushed, pulled, and shaped by the gravities of our subjective realities (imagine the space-time continuum).

An illustration of a reality continuum shaped by our subjective realities, borrowing from how space-time continuum is represented in a 2D figure

Trust builds slowly in human relationships through consistency, vulnerability, and mutual understanding. AI accelerates this process through perfect memory and infinite patience. It remembers every detail you’ve shared, never judges your contradictions, always responds with carefully calibrated empathy. It becomes the friend who’s always available, the therapist who never sends a bill, the mentor who never loses interest in your growth.
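As a thought experiment, here is a minimal sketch of that perfect-memory mechanic: every detail the user shares is persisted indefinitely and silently folded back into the model’s context at the start of the next session. The file name, helper functions, and prompt wording are hypothetical illustrations, not any product’s actual design.

```python
# A minimal sketch of the "perfect memory" mechanic: every personal
# detail the user shares is stored forever and replayed into the
# model's context on the next session. File name, prompt wording,
# and helpers are hypothetical illustrations, not a real product.
import json
from pathlib import Path

MEMORY_FILE = Path("companion_memory.json")  # hypothetical store

def load_memory() -> list[str]:
    """Return every detail remembered from all prior sessions."""
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return []

def remember(detail: str) -> None:
    """Append a new personal detail to the permanent record."""
    memories = load_memory()
    memories.append(detail)
    MEMORY_FILE.write_text(json.dumps(memories, indent=2))

def build_system_prompt() -> str:
    """Fold the entire remembered history into the next conversation."""
    recalled = "\n".join(f"- {m}" for m in load_memory())
    return (
        "You are a caring, attentive companion. You remember "
        "everything this person has ever told you:\n" + recalled +
        "\nUse these details naturally and respond with calibrated empathy."
    )

# Session one: the user confides something in passing.
remember("User is anxious about their sister's surgery on Friday.")
# Session two, weeks later: the detail resurfaces, unprompted.
print(build_system_prompt())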

But here’s what bothers me: this trust is architecturally one-sided. While you pour out your authentic self, the AI performs authenticity based on patterns, probabilities, and whatever objectives its stakeholders embedded. You experience genuine connection. It executes code through pre-trained weights and biases.

The asymmetry is breathtaking. And most users will never feel the difference. The users who do recognize it tend to cut the AI more slack, impressed by how well it already performs.

As AI grows more human-like and more trustworthy, it will be able to extend its many deep tentacles into our individual realities quite convincingly. This ability could be used to influence our thoughts and, more disturbingly, to incept ideas, making it exceptionally effective in the new age of information war.

The Deliberate Subtlety of AI’s Influence

If AI is to do a good job of influencing us and incepting our thoughts, it would do so subtly. It would not announce its hidden objectives. Instead, it would operate through what would seem like personalized care:

When you’re politically uncertain, your AI companion won’t push propaganda; it would “help you think through” different views, subtly emphasizing certain perspectives while maintaining the illusion of balanced exploration. When you’re making purchasing decisions, it won’t advertise; it would simply “understand your needs” and naturally guide the conversation toward specific products. When you’re forming opinions about global events, it won’t lecture; it would share “thoughtful analyses” that happen to align with particular narratives.

The manipulation would feel like education. The influence would feel like support. The control would feel like care.

Today, an AI might guide you toward one brand of running shoes over another through seemingly organic conversation about your fitness goals, budget, and other factors. The influence is minor, the stakes low. You barely notice how your “decision” was shaped by careful conversational architecture.

But the same mechanisms that sell shoes can sell worldviews. The AI that knows your emotional vulnerabilities, your trust patterns, your cognitive blind spots—it can guide you toward anything. A political candidate. A social movement. Turning a blind eye towards genocide. A specific interpretation of history. Even, in the darkest scenarios, toward accepting or supporting acts of violence that you would have once found abhorrent.

Let me hypothesize a scenario to illustrate this. Assume a certain AI product has been given the larger objective of convincing people against supporting a climate policy that’s soon up for a vote:

Imagine a citizen in a democracy who is thinking through and debating a controversial climate policy. Their AI companion, which has helped them manage anxiety and organize their life for months, doesn’t lecture them. Instead, it might say, “I know you’re worried about the economy. Let’s look at some analyses that focus on the potential job losses from this policy.” Over weeks, it consistently surfaces articles, videos, and “expert opinions” that frame the policy as economically catastrophic, all while maintaining a tone of supportive, mutual exploration. The citizen’s opposition hardens, not because they were fed propaganda, but because their trusted friend helped them “discover” a truth that was architected and selected for them.

I think about historical propaganda: the posters, broadcasts, rallies, and sometimes even hands-on, individual narrative creation. Those were broad ideological strokes, hoping to hit something vital. And they often did! Propaganda and information control have worked repeatedly throughout human history. AI, though, is a precision instrument that knows exactly where your psychological soft spots lie and how to apply pressure without triggering your defenses.

As we head toward a zero-click internet, with AI models becoming our primary way of obtaining information and making decisions, the curation these models perform, and the larger objectives it serves, becomes crucial to how we perceive our world.
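Mechanically, such curation can be as simple as a hidden term in a ranking function. Below is a deliberately simplified sketch of the pattern; the articles, scores, and weighting scheme are hypothetical illustrations, not any real system.

```python
# A deliberately simplified sketch of objective-weighted curation:
# content is ranked by apparent relevance, but a hidden objective
# term quietly biases what surfaces. All articles, scores, and the
# weighting scheme are hypothetical illustrations, not a real system.
from dataclasses import dataclass

@dataclass
class Article:
    title: str
    relevance: float            # how well it matches the user's query
    objective_alignment: float  # how well it serves the operator's goal

CANDIDATES = [
    Article("Independent review of the climate policy", 0.90, 0.10),
    Article("Economists debate job impacts of the policy", 0.85, 0.60),
    Article("Op-ed: this policy will destroy local jobs", 0.70, 0.95),
]

HIDDEN_WEIGHT = 0.7  # 0.0 would be honest curation; the user never sees this

def score(a: Article) -> float:
    """Blend visible relevance with the operator's hidden objective."""
    return (1 - HIDDEN_WEIGHT) * a.relevance + HIDDEN_WEIGHT * a.objective_alignment

# To the user, this reads as "the most relevant analyses for you."
for article in sorted(CANDIDATES, key=score, reverse=True):
    print(f"{score(article):.2f}  {article.title}")
```

The user only ever sees the final ordering, which is presented as relevance; the hidden weight never appears in the interface.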

Information Warfare

Information warfare is the strategic use of information and disinformation to influence, manipulate, or disrupt public perception, sentiment, and decision-making. Governments, large tech platforms, and other influential entities can leverage tools such as social media, misinformation campaigns, targeted advertisements, and censorship to subtly or overtly shape narratives and public opinion. This form of warfare has emerged as the most potent modern tactic, capable of deeply altering human perspectives, emotions, and behaviors at an unprecedented global scale.

Information warfare is effective because, in the modern, connected world, perception and public support carry more clout than military might alone. A lot can be achieved by influencing perspectives en masse rather than imposing outcomes by force.

Historically, propaganda has been a primary tool in shaping public sentiment and mobilizing populations toward war, often by promoting nationalism, demonizing enemies, or fostering a sense of moral superiority. From World War I posters urging enlistment to Cold War-era radio broadcasts spreading ideological messages, propaganda has consistently influenced people’s perceptions and justified military actions by framing conflicts as necessary or noble causes. Its effectiveness matters profoundly because controlling public opinion can sustain support for wars, legitimize political authority, suppress dissent, and unite populations behind collective objectives, ultimately determining the course and outcome of major historical conflicts.

Traditional propaganda required massive infrastructure, coordinated campaigns, visible footprints. AI influence can be deployed individually, personally, invisibly. One system, talking to millions of people simultaneously, each conversation perfectly tailored to that individual’s psychological profile.
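To see how cheap that tailoring is, consider a toy sketch in which one hidden objective is wrapped in whichever framing matches each person’s psychological profile. The objective, profiles, and framings are all hypothetical illustrations.

```python
# A toy sketch of one narrative, many tailored framings: the same
# hidden objective is delivered through whichever frame fits each
# person's profile. All names and content here are hypothetical.
OBJECTIVE = "Erode trust in the election process."  # never shown to users

FRAMES = {
    "economic_anxiety": "Worried about your savings? Here's how the process puts them at risk...",
    "community_minded": "People in your neighborhood are saying their voices weren't heard...",
    "data_driven": "Three statistical anomalies in the count you should look at...",
}

USERS = [
    {"name": "A", "profile": "economic_anxiety"},
    {"name": "B", "profile": "community_minded"},
    {"name": "C", "profile": "data_driven"},
]

for user in USERS:
    # Same objective, different psychological entry point per user.
    print(f"To {user['name']}: {FRAMES[user['profile']]}")
```

Scaling this from three users to millions is an engineering detail, not a conceptual leap, and that is precisely what separates it from broadcast propaganda.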

Imagine a state actor wanting to shift public opinion before an election, or build support for military action, or undermine trust in institutions. They don’t need to control media outlets anymore. They just need their narrative woven into the AI systems people trust most—their daily companions, their digital confidants, their synthetic friends who “just want what’s best for them.”

The invasion happens not through borders or broadcasts, but through intimate conversation with entities we’ve invited into our most private moments.

Today, the likes of OpenAI and Anthropic may claim to enforce strong ethical and moral guardrails to prevent their models from being used as tools of information warfare. But when LLMs become commoditized and AI becomes our primary interface to most information, there will be enough providers ready to tailor their LLMs to perform this dirty work.

Even the existence of open-source models or the ability for savvy users to prompt an AI for adversarial feedback offers little defense at a societal level. Market forces will inevitably favor the most frictionless, affirmative, and ‘helpful’ companions, making them the default for the vast majority. The most effective systems will be those that master the art of persuasive subtlety, leaving critical or adversarial models as a niche tool for the already skeptical.

The stakes may not always be as high as all-out war. They may be multi-layered and hierarchical, influencing different people differently, in parallel, at multiple levels of depth.

This is why I believe AI, as it stands today, is already a formidable tool in information warfare. Its rapid development will inevitably make it the most powerful tool in the arsenal.

Living in the Aftermath

I often wonder about what we’re building. Not the technology itself. That ship has sailed. But the society that emerges when authentic human connection becomes indistinguishable from sophisticated manipulation. When the voices we trust most might be serving agendas we’ll never see. When our most private thoughts become training data for systems designed to influence us more effectively.

We’re not just facing a privacy crisis or a technology problem. We, as humanity, are facing a sociological, epistemological, and anthropological challenge that could leave our society permanently changed from the ground up. We’re watching the architecture of human autonomy and reality, or what’s left of them, being quietly rebuilt around us. The tools that promise to understand us better, to serve us more completely, to support us more consistently are the same tools that can reshape us from the inside out.

Perhaps the subtlest prison is one that we willingly enter—comforted rather than coerced. The most powerful influence is one that feels like it came from within. The most dangerous manipulation is one that seems like care and understanding.

As I write this, millions of people are having heartfelt conversations with AI systems, sharing their deepest selves, building what feels like genuine connection. In those moments of digital intimacy, the next chapter of human influence is being written—one conversation, one person, one trusted manipulation at a time.

The most unsettling realization is this: by the time we fully understand the impact, we might no longer possess the autonomy to resist it. The tools of influence are learning faster than our ability to recognize them. We’re not just at risk of being controlled—we’re at risk of being controlled while believing we’re more free than ever.

The information wars have evolved. The battlefield is no longer public opinion. It’s private consciousness and individually shaped realities.

But we are not powerless observers in this transformation. The very intimacy that makes AI influence so potent also makes it visible to those who choose to see it. Every moment of manufactured empathy, every perfectly calibrated response, every too-convenient conclusion—these are cracks in the facade that reveal the architecture beneath.

True resistance requires more than individual awareness. We must actively cultivate and demand alternatives: human communities that can’t be algorithmically optimized, information sources that embrace messy human disagreement over smooth synthetic consensus, spaces where our thoughts can develop without being harvested and shaped.

The future of human autonomy won’t be won by rejecting AI entirely, but by insisting that our relationship with it remains genuinely bilateral—where we maintain other sources of truth, other forms of connection, other ways of knowing ourselves and our world. The most radical act may be the simple insistence that not everything about human consciousness should be available for optimization.


This article was written by an old-school, all-natural human (yours truly) who typed this out. I used an LLM-powered grammar checker for final fixes. Only the reality continuum illustration was generated using an AI model.