Keep the Future Human: Future of Life Institute

By Marianna Richardson, Director of Communications for the G20 Interfaith Forum (IF20)

———

The Future of Life Institute (FLI) has been working since 2014 with the mission to:

“Steer transformative technology towards benefiting life and away from extreme large-scale risks.”

As artificial intelligence raises growing concerns for humanity, FLI has become a major voice in keeping the future human while treating AI as a tool in humanity's service. In January 2026, FLI held a conference in New Orleans, Louisiana, exploring the tension between AI systems that harm humanity, especially young people, and AI systems that help humanity move forward.

Opening Framing: Anthony Aguirre, Future of Life Institute

Anthony Aguirre opened the meeting by expressing excitement about building a pro-human future for artificial intelligence. He emphasized that an overwhelming 95% of both Democrats and Republicans want AI to be regulated, a rare point of bipartisan agreement that underscores the urgency of responsible governance.

Aguirre argued that the goal should be to keep AI fundamentally human-centered rather than pursuing artificial general intelligence or superintelligence for its own sake. He highlighted the enormous potential of AI as a tool—accelerating healthcare breakthroughs, drug development, cancer treatments, education, and climate solutions—while cautioning that these benefits will only materialize if society deliberately steers toward a pro-human trajectory.

A Dangerous Shift in AI’s Purpose: A Race Toward Replacement

Aguirre warned that the current industry trajectory is fundamentally misguided. Instead of treating AI as a tool to augment human capability, many companies are explicitly trying to build systems that replace human capability altogether. This shift has created an unbridled race toward AGI and superintelligence, driven by competitive pressure rather than public benefit.

He described this as a “race to replace”:

  1. AI is increasingly framed not just as a tool for attention, connection, or even intimacy, but as a substitute for human labor.
  2. The race to AGI is framed as a race to replace human work.
  3. The race to superintelligence is framed as a race for power.

Aguirre argued that this is not only misguided but immoral. Controlled AI may grant power, but superintelligence, especially self-improving systems, tends to absorb power rather than distribute it. He noted that many AGIs operating together do not create safety; instead, they create a landscape of weak AGIs that can rapidly self-improve into stronger, less controllable systems. As an example, he pointed to current models such as Claude, which can already write and improve their own code.

The Opportunity: Steering Toward a Pro-Human Future

Aguirre stressed that the purpose of the gathering was not simply to critique the current path but to articulate an alternative. Participants were asked to identify principles that can guide AI development toward human flourishing rather than human replacement.

He called for actionable progress, not abstract ideals. The goal was to produce a declaration by the end of the meeting—one that outlines shared principles and offers a credible, constructive alternative to the current race toward unrestrained AGI.

Question: Will AI Concentrate or Distribute Wealth and Power?

AI’s effects on society and the economy are profoundly nonlinear, especially in tasks where systems operate with high confidence. This dynamic threatens to reshape the value of labor and undermine traditional economic assumptions. As companies pursue self-improving intelligence, they may escape normal economic restraints, creating feedback loops that concentrate power and absorb increasing shares of economic activity. These dynamics demand new modes of economic analysis that account for an “intelligence economy” in which optimization and scale reinforce corporate advantage. Although AI tools such as agent-based models can help us understand these systems, the societal implications extend far beyond economics.

AI systems will reshape institutions, including government, and raise urgent questions about the future of democracy. Democratic systems cannot rely solely on well-intentioned actors; they require checks and balances that are currently missing from AI governance. This is not a hypothetical concern but an active challenge affecting the social contract and demanding a broad coalition of stakeholders. A core principle is that technology is not inevitable. Instead, societies choose how to deploy it. That requires resisting narratives that portray AI as unstoppable and instead strengthening collective vehicles such as unions, community organizations, and other countervailing forces.

The conversation also highlighted the need to revitalize manufacturing, address labor displacement, and counter the monocultures created by concentrated AI development. Excessive capital is flowing into AI at the expense of other innovations, including biotechnology. Yet there is hope: examples like AlphaFold show how AI can deliver genuine public benefit. Across the political spectrum, there is growing skepticism of tech leadership and a shared desire for a more democratic, human-centered technological future.

Question: Will Human Dignity and the Human Experience Be Improved or Eroded by AI?

The FLI discussion explored the question of what it means to be human, emphasizing that personhood is rooted not only in rationality but in our vulnerability, interdependence, and relationships of love and community. While AI systems may outperform humans in certain forms of reasoning, they cannot replicate the relational, imaginative, and moral capacities that define human flourishing. Human dignity is inherent, yet our social dignity can be eroded, and the group warned that poorly designed AI—especially systems that mimic intimacy—can undermine people’s sense of worth. Recent harms involving character-based chatbots were described as early warning signs of detachment, delusion, and threats to mental freedom.

Participants stressed the need to preserve human personhood in law and culture, especially as some chatbots are granted speech protections that blur the line between persons and machines. Protecting dignity requires systemic approaches: community support, regulation, and cultural norms that prevent dehumanization. The group noted rising despair, job displacement, and social fragmentation, especially among young men, and argued that society must offer meaningful roles, purpose, and belonging—much like historical traditions that reintegrated marginalized groups through service and shared commitments.

Religious communities were identified as essential partners, offering trust networks and moral grounding, yet many feel unprepared to engage with AI. Participants emphasized listening to these communities' concerns, especially around youth, jobs, and powerlessness. Ultimately, the conversation called for rebuilding community bonds, drawing on lessons from the past, and ensuring that AI strengthens rather than diminishes human dignity, creativity, and connection.

Question: Will Personal Agency, Freedom, and Privacy Be Improved or Eroded?

The conversation framed today’s digital landscape as a new form of feudalism, where a handful of dominant platforms function like “digital manor houses” built on addictive design and the extraction of private data. This is not merely market concentration but a structural system in which most people receive only the bare minimum of digital life while immense value accumulates at the top. Historically, societies have broken free from such arrangements through new ideas, new infrastructures, and new rights—transformations that ushered in renaissances. Participants argued that we face a similar inflection point: AI is supercharging the current harmful trajectory, but with deliberate action, we can redirect it.

Changing course requires rethinking economic paradigms so that data and creativity generate public benefit rather than exclusive corporate gain. This shift demands meaningful democratic accountability, giving people real voice and choice in how their data is used. Yet entrenched systems are difficult to reform, and the group emphasized learning from history: societies have overcome complex structural breakdowns before, and this moment may require a “Magna Carta” for the digital age.

Stakeholders span technologists, policymakers, civil society, and everyday communities affected by algorithmic decisions in areas like employment and lending. Privacy must be treated as a collective responsibility, with informed consent accessible even to those without technical expertise. Participants discussed emerging solutions such as structured public feedback systems, labor union engagement, and transparent policymaking processes that avoid superficial participation. They also warned that AI-driven manipulation, deepfakes, and context-shaping systems threaten free will, social trust, and democratic stability.

Ultimately, the group called for new digital infrastructures aligned with human values, stronger protections against surveillance and manipulation, and a commitment to rebuilding trust. If society succeeds, the future could resemble a renaissance rather than a deepening digital feudalism.

The Pro-Human AI Declaration

At the end of the conference, a final proclamation, "The Pro-Human AI Declaration," was introduced. The declaration, which is now open for signatures, distills the pro-human discussions at the conference and the issues FLI is most concerned about. As society enters the age of artificial intelligence, humanity needs to be aware of the concerns it lists.

If you agree, please consider signing the declaration as well: The Pro-Human AI Declaration.

———

Marianna Richardson is the Director of Communications for the G20 Interfaith Forum (IF20), where she works at the intersection of faith and global policy. She has chaired sessions on technology and ethics at G20 Interfaith conferences, including the Technology and Ethics session at the 2023 G20 Interfaith Summit in Pune, India, and has written extensively on AI, food security, human rights, and other pressing global issues for the IF20 blog. She is also an adjunct professor in management communication at the Marriott School of Business at Brigham Young University, where she serves as editor-in-chief of the Marriott Student Review, a student-run peer-reviewed journal.