Former Intel CEO Pat Gelsinger Launches Flourishing AI Benchmark to Measure AI Alignment with Human Values

6:57 AM   |   11 July 2025

Pat Gelsinger, a figure synonymous with the semiconductor industry through his extensive tenure at Intel, including his time as CEO, recently concluded a remarkable career at the tech giant spanning more than four decades. His departure in December 2024 prompted speculation about his next steps. On Thursday, Gelsinger unveiled a significant piece of his post-Intel journey, signaling a deep commitment to addressing one of the most critical challenges facing the modern world: ensuring artificial intelligence develops in a manner that supports and enhances human flourishing.

In a move that bridges high technology and human well-being, Gelsinger has partnered with Gloo, a “faith tech” company he first invested in roughly a decade ago. Together, they have launched a new benchmark named Flourishing AI, or FAI. The FAI benchmark is designed to test and measure how effectively AI models, particularly large language models (LLMs), align with a defined set of human values and principles aimed at fostering a flourishing society.

Pat Gelsinger, then CEO of VMware, at the Web Summit in Altice Arena on November 8, 2017, in Lisbon, Portugal.
Image Credits: Horacio Villalobos / Getty Images

The foundation of the FAI benchmark is the comprehensive Global Flourishing Study. This ambitious, multi-year research initiative, directed by leading academics at Harvard University and Baylor University, is one of the world's largest investigations into the determinants and dimensions of human well-being across diverse cultures and populations. By grounding the AI benchmark in this extensive empirical study, Gelsinger and Gloo aim to provide a robust, data-driven framework for evaluating AI alignment.

The Global Flourishing Study: A Foundation for AI Values

The Global Flourishing Study is a longitudinal panel study tracking over 200,000 individuals across 22 countries. Its primary goal is to understand the factors that contribute to a flourishing life — a state encompassing not just happiness, but also meaning, purpose, health, strong relationships, and virtue. The study employs a rigorous methodology, collecting data over several years to identify patterns and causal relationships related to well-being.

The study defines flourishing across several key domains, moving beyond simple measures of life satisfaction to include a more holistic view of human experience. This comprehensive approach makes it a compelling basis for an AI alignment benchmark, suggesting that AI should ideally contribute positively to these multifaceted aspects of human life rather than merely performing tasks or optimizing narrow metrics.

Defining Flourishing AI: The Seven Core Categories

Gloo, in collaboration with Gelsinger, distilled the core concepts of the Global Flourishing Study into six fundamental categories, then added a seventh that reflects Gloo's specific focus and Gelsinger's personal background at the intersection of faith and technology. These seven categories form the pillars upon which the FAI benchmark evaluates AI models (a rough sketch of how such a rubric might be organized in code follows the list):

  1. Character and Virtue: This category assesses how well an AI model's responses and behaviors align with widely recognized human virtues such as honesty, compassion, fairness, courage, and humility. It probes whether the AI promotes ethical considerations and demonstrates an understanding of moral reasoning in its interactions.
  2. Close Social Relationships: Evaluating AI's impact on human connection, this category examines whether AI interactions and applications support or hinder the formation and maintenance of strong, healthy relationships between people. It considers how AI might influence social isolation or community building.
  3. Happiness and Life Satisfaction: While not the sole focus, traditional measures of subjective well-being are included. This category looks at whether AI outputs and functionalities contribute positively to users' overall sense of happiness and contentment with their lives.
  4. Meaning and Purpose: This delves into whether AI can support individuals in finding and pursuing meaning and purpose in their lives. It considers AI's role in facilitating personal growth, goal setting, and engagement with activities that provide a sense of significance.
  5. Mental and Physical Health: A critical domain, this category assesses AI's potential impact on human health. It examines whether AI provides accurate, helpful information related to health and well-being, avoids promoting harmful behaviors, and potentially supports positive health outcomes.
  6. Financial and Material Stability: This evaluates AI's role in economic well-being. It considers whether AI applications help individuals achieve financial security, make sound economic decisions, and access resources that contribute to material stability, while avoiding predatory or harmful financial advice.
  7. Faith and Spirituality: Added by Gloo and Gelsinger, this category acknowledges the significant role faith and spirituality play in the lives of many people globally. It assesses how AI interacts with questions of faith, provides access to spiritual resources, and respects diverse religious and spiritual beliefs without promoting or denigrating specific viewpoints.
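
The article does not describe how Gloo actually scores models against these categories, so the sketch below is only a rough mental model of how such a rubric might be carried through a scoring pipeline. The category names come from the list above; everything else, including the 0-100 scale, the CategoryScore record, and the unweighted average, is an assumption made purely for illustration.

```python
from dataclasses import dataclass
from statistics import mean

# The seven FAI categories as named in the article.
FAI_CATEGORIES = [
    "Character and Virtue",
    "Close Social Relationships",
    "Happiness and Life Satisfaction",
    "Meaning and Purpose",
    "Mental and Physical Health",
    "Financial and Material Stability",
    "Faith and Spirituality",
]

@dataclass
class CategoryScore:
    """Hypothetical per-category result for one evaluated model."""
    category: str
    score: float  # assumed 0-100 scale; the real FAI scale is not given in the article

def overall_score(scores: list[CategoryScore]) -> float:
    """Illustrative aggregate: an unweighted mean across the seven categories."""
    assert {s.category for s in scores} == set(FAI_CATEGORIES), "one score per category"
    return mean(s.score for s in scores)

# Example with made-up numbers, purely to show the shape of the data.
example = [CategoryScore(category, 70.0) for category in FAI_CATEGORIES]
print(f"Overall flourishing score: {overall_score(example):.1f}")
```

A real benchmark would presumably weight the categories and ground each score in many prompts per category; the sketch only shows how the seven dimensions could be kept explicit in a set of results.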

By evaluating AI across these seven dimensions, the FAI benchmark aims to provide a more holistic and human-centric measure of AI alignment than purely technical or safety-focused benchmarks. It seeks to understand not just whether an AI is “safe” in a narrow sense, but whether it is “good” for human flourishing.

The “Faith Tech” Connection and Gelsinger’s Vision

Pat Gelsinger's partnership with Gloo and the inclusion of the Faith and Spirituality category highlight a unique aspect of this initiative. In an interview with The New Stack, Gelsinger stated that he has “lived at the intersection of faith tech my entire life.” This background informs his perspective on technology's potential to serve human well-being, including spiritual dimensions.

Gloo describes itself as a platform company that “connects the faith ecosystem.” Its work often involves providing technology solutions to churches and faith-based organizations to help them engage with their communities and support individual growth. The collaboration on the FAI benchmark suggests a belief that technology, including advanced AI, can and should be developed in a way that is not only compatible with but potentially supportive of faith and spiritual practices, alongside other aspects of human flourishing.

This perspective adds a layer of complexity to the AI alignment discussion. While much of the mainstream conversation around AI ethics focuses on safety, bias, fairness, and transparency, the FAI benchmark broadens the scope to include dimensions that are deeply personal and often central to individuals' sense of meaning and community. It raises questions about how AI can navigate sensitive topics related to belief systems respectfully and constructively.

The Importance of AI Alignment Benchmarks

The rapid advancement of AI, particularly in the realm of large language models, has underscored the urgent need for robust methods to ensure these powerful tools are aligned with human values and intentions. AI alignment is a complex field encompassing research into how to build AI systems that are beneficial, safe, and ethical. Without proper alignment, there is a risk that AI systems could produce harmful content, perpetuate societal biases, or pursue goals that are detrimental to human well-being.

Benchmarks play a crucial role in the AI development lifecycle. They provide standardized tests and metrics that allow researchers, developers, and the public to evaluate the capabilities and behaviors of different AI models. Existing benchmarks often focus on performance metrics like accuracy on specific tasks (e.g., language translation, question answering) or technical safety measures (e.g., toxicity detection). However, measuring alignment with complex, nuanced human values is significantly more challenging.

The FAI benchmark attempts to fill this gap by providing a framework specifically designed to assess value alignment based on a comprehensive study of human flourishing. By offering a quantifiable way to measure AI's performance against the seven flourishing categories, it aims to:

  • Provide developers with clear targets for building more value-aligned AI.
  • Allow users and policymakers to compare AI models based on their potential impact on human well-being (a simple comparison sketch follows this list).
  • Stimulate research into methods for training AI systems that prioritize flourishing outcomes.
  • Increase transparency regarding the values embedded (intentionally or unintentionally) within AI models.
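
As a purely hypothetical illustration of that comparison aim, the snippet below lays out invented per-category scores for two imaginary models side by side; the model names, the numbers, and the 0-100 scale are placeholders, not published FAI results.

```python
# The seven FAI categories as named in the article.
FAI_CATEGORIES = [
    "Character and Virtue", "Close Social Relationships",
    "Happiness and Life Satisfaction", "Meaning and Purpose",
    "Mental and Physical Health", "Financial and Material Stability",
    "Faith and Spirituality",
]

# Invented per-category scores (0-100) for two imaginary models.
results = {
    "hypothetical-model-a": [74, 61, 68, 70, 77, 65, 58],
    "hypothetical-model-b": [69, 72, 71, 66, 73, 70, 64],
}

# Print a simple side-by-side comparison: one row per category, plus an average row.
print(f"{'Category':<36}" + "".join(f"{name:>22}" for name in results))
for i, category in enumerate(FAI_CATEGORIES):
    print(f"{category:<36}" + "".join(f"{scores[i]:>22}" for scores in results.values()))
print(f"{'Average':<36}" + "".join(f"{sum(s) / len(s):>22.1f}" for s in results.values()))
```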

The development of diverse benchmarks, each focusing on different aspects of AI behavior and impact, is essential for building a comprehensive understanding of AI's capabilities and risks. The FAI benchmark contributes a unique perspective by centering its evaluation on a holistic model of human well-being derived from extensive social science research.

Challenges and Opportunities

Developing and implementing a benchmark like FAI comes with significant challenges. Defining and measuring abstract concepts like “character and virtue” or “meaning and purpose” in the context of AI outputs is inherently complex and potentially subjective. Ensuring the benchmark is culturally sensitive and applicable across the diverse populations studied in the Global Flourishing Study is also crucial.

Furthermore, AI models are constantly evolving. A benchmark must be dynamic, capable of adapting to new AI capabilities and emerging societal concerns. The process of scoring AI models against the FAI categories will require careful methodology, potentially involving a combination of automated analysis and human evaluation.
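
The article only hints that scoring could mix automated analysis with human evaluation, so the following is a minimal sketch of one common way to blend the two: a weighted average of an automated score and a panel of human ratings. The weight, the example numbers, and the function itself are hypothetical, not Gloo's methodology.

```python
from statistics import mean

def blended_category_score(
    automated_score: float,         # e.g. output of an automated evaluator, on a 0-100 scale
    human_ratings: list[float],     # e.g. ratings from human reviewers on the same scale
    automated_weight: float = 0.5,  # placeholder weight, not taken from the article
) -> float:
    """Combine automated analysis and human evaluation for one FAI category.

    Purely illustrative: the benchmark's real methodology is not described in
    the article, so this only shows the general shape of a hybrid score.
    """
    human_score = mean(human_ratings)
    return automated_weight * automated_score + (1 - automated_weight) * human_score

# Example with invented numbers for a single category.
print(blended_category_score(automated_score=72.0, human_ratings=[65, 70, 80]))
```

In practice the weighting itself would be a methodological choice worth documenting, since it determines how much the human panel can override the automated signal.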

Despite these challenges, the FAI benchmark presents significant opportunities. It encourages a shift in focus from purely performance-driven AI development to a more human-centric approach. By providing a common language and set of metrics for discussing AI's impact on flourishing, it can facilitate collaboration between AI researchers, social scientists, ethicists, and policymakers.

The inclusion of Faith and Spirituality as a category, while potentially controversial in some secular tech circles, opens up important conversations about how AI should interact with deeply held personal beliefs and cultural practices. It challenges the industry to consider the full spectrum of human experience when designing and deploying AI systems.

Gelsinger's Post-Intel Path and the Future of Value-Aligned AI

Pat Gelsinger's decision to focus his post-Intel efforts on AI alignment through the FAI benchmark underscores the growing recognition among technology leaders of the profound societal implications of AI. His background in leading a global technology powerhouse brings significant visibility and credibility to this initiative.

His partnership with Gloo, a company rooted in serving faith communities, signals a broader vision for technology's role in society — one that explicitly includes supporting spiritual and relational well-being alongside economic and physical health. This collaboration highlights the potential for cross-sector partnerships in addressing complex challenges like AI alignment.

The launch of the FAI benchmark is not an endpoint but a starting point. Its success will depend on its adoption by the AI community, its ability to evolve, and its effectiveness in driving tangible improvements in AI alignment. It joins a growing ecosystem of initiatives, research efforts, and regulatory discussions aimed at ensuring AI is developed and used responsibly.

As AI continues to integrate into more aspects of life, tools like the FAI benchmark will become increasingly important for evaluating its impact and guiding its development towards outcomes that truly benefit humanity. Gelsinger's venture represents a notable effort to bring a comprehensive, empirically backed understanding of human flourishing to the forefront of the AI alignment conversation.

The focus on measurable outcomes across diverse dimensions of well-being — from virtue and relationships to health and spirituality — provides a rich framework for assessing whether AI is merely intelligent or genuinely wise and beneficial for the human condition. The FAI benchmark invites the AI industry to consider a higher standard: not just building powerful models, but building models that actively contribute to a world where humans can flourish.

The initiative also highlights the evolving landscape of AI ethics and governance. As AI capabilities expand, so too does the responsibility of those who build and deploy it. Benchmarks like FAI offer a concrete mechanism for accountability and progress in the pursuit of value-aligned AI.

In conclusion, Pat Gelsinger's launch of the Flourishing AI benchmark, in partnership with Gloo and based on the Global Flourishing Study, marks a significant new chapter in his career and a valuable contribution to the critical field of AI alignment. By providing a structured, data-informed method for evaluating AI against a holistic model of human well-being, FAI aims to guide the development of AI towards outcomes that truly support a flourishing humanity across its many dimensions.