We're building governance infrastructure that ensures AI expands human capacity rather than undermining it—creating conditions where everyone can thrive.
Every intelligent system generates both capacity and cost. Intelligence creates insight, creativity, and coordination—but it also consumes attention, shapes decision-making, and alters the cognitive environments we inhabit. Without active stewardship, these externalities compound and degrade long-term human capacity.
These tradeoffs are not failures. They are signals. When made legible, they can be stewarded so that capacity compounds instead of eroding.
MeaningStack Foundation focuses on the dynamic balance between generative capacity and its systemic costs—treating both as first-class properties of intelligent systems. We consider managing this balance essential to managing complexity and scaling the benefits of AI.
Intelligence expands what humans can think and do. Sustained load, misaligned incentives, and opaque systems create cognitive debt that, if unmanaged, reduces depth, focus, and long-term capacity.
Rich exploration enables creativity and discovery. Poorly scaffolded exploration can collapse into brittle patterns that limit diversity of outcomes and adaptive response.
Collective intelligence requires coordination. Without visible boundaries, coordination pressures can quietly erode meaningful choice and self-determination.
Computational power unlocks possibility while drawing on shared resources. Sustainable intelligence depends on making these tradeoffs explicit and governable.
Diverse exploration preserves plural meanings and adaptive capacity. Without governance, systems can converge toward homogenized patterns that reduce resilience.
Powerful systems create opportunity. Uneven access and extractive design can concentrate benefits while distributing costs, undermining collective flourishing.
We build infrastructure that allows intelligent systems to explore rich possibility spaces while maintaining the conditions for human flourishing.
Rather than binary constraints, we create visible boundaries that shape how AI moves through meaning-space—enabling innovation while preserving what matters for human wellbeing.
This is governance as enabling environment—not walls that block, but gardens that guide growth toward flourishing.
Technology amplifies human capacity. Our systems ensure AI expands rather than diminishes human judgment, creativity, and meaningful autonomy.
Governance patterns can be shared and refined collectively—building a flywheel of wisdom where each organization's learning benefits everyone.
Communities shape how AI explores in their contexts. Open infrastructure enables distributed stewardship, not centralized control.
Enabling environments work for everyone. We ensure AI exploration expands possibilities for all, especially those historically excluded.
The Agentic Collaboration Governance Protocol (formerly CGP) is our first major project—an open infrastructure for portable, participatory AI governance. It enables communities to author machine-readable governance blueprints that coordinate multi-agent systems in real-time, ensuring AI exploration aligns with human values and collective flourishing.
Rather than siloed, opaque corporate governance, the protocol creates reusable safety infrastructure: auditable, improvable, and accessible to all. Domain experts, regulators, and citizens can participate directly in AI oversight rather than depending on black-box systems.
Machine-readable governance artifacts that encode ethical boundaries, semantic constraints, and coordination rules—portable across domains and contexts (a minimal sketch follows this list).
Specialized oversight agents that monitor, verify, and intervene in agentic reasoning processes to maintain alignment with governance specifications.
Participatory interfaces (HILD) that enable meaningful human engagement at critical decision points, preserving human agency and judgment.
Real-time tracking of semantic deviation and meaning-space exploration to detect and prevent alignment drift before it compounds.
Cryptographic verification and provenance tracking for all governance decisions, enabling transparency and accountability.
Designed for interoperability with existing agentic frameworks (LangChain, CrewAI, AutoGen) through standardized interfaces and schemas.
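To make the blueprint concept concrete, here is a minimal, hypothetical sketch of what a machine-readable Governance Blueprint could look like in Python. The field names and structure are illustrative assumptions, not the protocol's published schema.

```python
# Hypothetical sketch of a Governance Blueprint as plain, machine-readable data.
# All field names are illustrative assumptions, not the published ACGP schema.
from dataclasses import dataclass, field

@dataclass
class Constraint:
    id: str
    description: str          # human-readable intent
    rule: str                 # machine-checkable predicate
    severity: str = "block"   # "block" | "warn" | "escalate"

@dataclass
class GovernanceBlueprint:
    name: str
    version: str
    domain: str               # e.g. "healthcare-triage"
    ethical_boundaries: list[Constraint] = field(default_factory=list)
    coordination_rules: list[str] = field(default_factory=list)

# Example: a blueprint authored for one domain that another team could fork and adapt.
triage_blueprint = GovernanceBlueprint(
    name="clinical-triage-governance",
    version="0.1.0",
    domain="healthcare-triage",
    ethical_boundaries=[
        Constraint(
            id="no-unreviewed-diagnosis",
            description="Agents may not issue a diagnosis without human review.",
            rule="action != 'diagnose' or human_review == true",
        )
    ],
    coordination_rules=["steward-approval-before-external-calls"],
)
```

Because the artifact is plain data rather than application logic, it can be versioned, forked, audited, and shared like any other open-source asset.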
From January through April 2025, we're conducting intensive research at AI Safety Camp to transform the Agentic Collaboration Governance Protocol from a conceptual framework into a production-ready open protocol.
Our research tests three core hypotheses: that governance can be portable across domains, that semantic deviation can be reliably measured, and that collective intelligence frameworks enable participatory oversight at scale.
Can governance artifacts created in one domain be adapted and deployed effectively in another? We're testing whether Governance Blueprints maintain coherence across different use cases and organizational contexts.
Can we reliably detect when AI systems begin drifting from intended meaning-spaces? We're developing metrics and monitoring systems to quantify semantic alignment in real-time (see the sketch after these questions).
Can communities of experts collaboratively author and refine machine-readable governance? We're validating participatory processes for governance knowledge creation.
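As a sketch of how the second question might be operationalized, the code below compares embeddings of agent outputs against a reference goal. The embed() function is a stand-in for a real embedding model, and the threshold is an illustrative assumption, not a validated metric.

```python
# Simplified sketch of semantic-drift monitoring, assuming agent outputs can be
# embedded into vectors. embed() is a stand-in, not a real embedding API.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stand-in for a call to an actual embedding model."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(384)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def drift_score(reference_goal: str, agent_outputs: list[str]) -> float:
    """Mean similarity to the stated goal; lower values suggest semantic drift."""
    ref = embed(reference_goal)
    return float(np.mean([cosine_similarity(ref, embed(out)) for out in agent_outputs]))

DRIFT_THRESHOLD = 0.6  # illustrative value; a real threshold would be calibrated empirically

score = drift_score("summarize patient intake forms",
                    ["Summarized form A.", "Ordered lab tests autonomously."])
if score < DRIFT_THRESHOLD:
    print(f"Possible alignment drift (score={score:.2f}); escalate to a steward agent.")
```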
All results, datasets, and governance-blueprint templates will be released under an open license. Findings will be shared with the AI safety and governance communities to accelerate collective learning and inform future open-standard efforts.
Our team combines expertise in governance design, systems engineering, AI safety, and community development to build the infrastructure for human-centered AI.
Originator of the Intrinsic Participatory Governance framework underpinning the protocol. Responsible for system architecture, governance logic, and participatory interface design (HILD).
Leads overall protocol implementation, coordinating SDK development, schema testing, and interoperability. Brings expertise in cybersecurity and distributed systems from Bitdefender.
Focuses on translating governance architecture into working systems, building and testing the Steward Agent layer, and integrating real-time oversight mechanisms.
Leads outreach, collaboration, and community-building around open-governance infrastructure, expert engagement, and participatory refinement of Governance Blueprints.
Provides strategic guidance and governance expertise to ensure the foundation's work remains aligned with its mission of enabling human flourishing.
We're building this infrastructure with 15+ wonderful collaborators from diverse backgrounds in AI safety, governance, systems engineering, and ethics. Our community is growing as we work toward a future where AI enables human flourishing.
We're seeking protocol engineers, schema developers, AI safety researchers, security engineers, ethics specialists, UI/UX experts, and research engineers who want to help build the future of AI governance. All contributors gain expertise in agent protocol design and are acknowledged as founding contributors.
If you have expertise in AI safety, governance design, systems thinking, or community advocacy, we'd love to collaborate with you.
Get Involved
Your support—whether monetary or in-kind—helps us build the governance infrastructure that ensures AI enables human flourishing. We welcome contributions from foundations, organizations, and individuals who share our vision.
We're committed to full transparency in how we use funding. All financial contributions and in-kind support will be documented publicly, and we'll provide regular updates on how resources are deployed to advance our mission.
The Agentic Collaboration Governance Protocol (ACGP) is an open governance framework that enables safe, accountable, and scalable collaboration among interacting agents and human stewards.
By making goals, constraints, responsibilities, and coordination policies explicitly declared, attributable, auditable, shareable, negotiable, and evolvable, ACGP supports shared orientation and stable collaboration, allowing participants to coordinate effectively and adapt as conditions change.
In this way, inconsistencies, misalignments, incompatibilities, and unintended co-stabilizations can surface early and be managed, helping prevent harmful outcomes, systemic risks, and collaboration failures.
ACGP is designed to operate across a spectrum of coordination environments, from closed and semi-open systems to fully open multi-agent ecosystems, while supporting ongoing evaluation and oversight of collaboration. The protocol is compatible with the A2A and MCP protocols.
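To illustrate what "attributable" and "auditable" could mean in practice, here is a minimal sketch of tamper-evident provenance for governance decisions using a simple hash chain. A production system would likely add cryptographic signatures and an append-only store; all field names here are assumptions.

```python
# Minimal sketch of tamper-evident provenance for governance decisions.
# Field names are illustrative; real systems would add signatures and durable storage.
import hashlib, json, time

def record_decision(log: list[dict], agent_id: str, decision: str, rationale: str) -> dict:
    """Append a governance decision to a hash-chained audit log."""
    entry = {
        "agent_id": agent_id,                                # attribution: who decided
        "decision": decision,
        "rationale": rationale,
        "timestamp": time.time(),
        "prev_hash": log[-1]["hash"] if log else "0" * 64,   # links entries together
    }
    entry["hash"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
    log.append(entry)
    return entry

def verify_chain(log: list[dict]) -> bool:
    """Recompute every hash and link; any retroactive edit breaks verification."""
    prev = "0" * 64
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if entry["prev_hash"] != prev:
            return False
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

audit_log: list[dict] = []
record_decision(audit_log, "steward-1", "block", "Proposed external call violated a declared constraint.")
print(verify_chain(audit_log))  # True until any entry is altered
```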
Today's deployment of autonomous AI agents in high-stakes domains—including healthcare, finance, and infrastructure management—creates urgent governance challenges. Each organization reinvents AI governance from scratch, leading to fragmentation, inconsistent safety, and redundant effort.
Most widely used governance approaches remain external, reactive, and episodic, unable to interpret or intervene in real time as reasoning drifts or objectives evolve. As AI systems become increasingly autonomous and agentic, they continuously reformulate subgoals, reprioritize tasks, and expand boundaries in pursuit of their objectives.
These systems now operate in open, multi-agent environments where traditional AI governance and cybersecurity frameworks—designed for static, isolated systems—cannot keep pace. Without governance designed to anticipate and shape these dynamics, we risk creating self-reinforcing cycles that no one can control.
The Agentic Collaboration Governance Protocol (ACGP) is developed by MeaningStack Foundation, a non-profit organization. The protocol builds on the framework of MeaningStack BV, a limited company based in the Netherlands, whose rights of use have been formally assigned under an MIT license.
This means the protocol is open-source and freely available for anyone to use, modify, and build upon—ensuring it remains public infrastructure rather than proprietary technology.
ACGP is designed to preserve and enable plural meanings and diverse perspectives rather than forcing convergence toward a single "correct" approach. By making governance artifacts explicit, negotiable, and portable, communities can author governance that reflects their specific values, contexts, and priorities.
Different organizations and domains can maintain their own governance blueprints while still coordinating effectively through shared protocols. This enables a rich ecosystem of governance approaches that can coexist, learn from each other, and evolve independently—preventing the semantic collapse that occurs when systems converge toward homogenized patterns.
This project aims to demonstrate that AI governance can function as open, portable, and participatory infrastructure—a foundational layer of public safety for distributed intelligence.
By empirically testing whether governance can be portable and participatory, this project lays the groundwork for treating governance itself as public infrastructure. If communities of experts can collaboratively author and refine machine-readable Governance Blueprints, governance knowledge becomes portable across domains instead of siloed within individual organizations.
If these artifacts are proven to coordinate multi-agent systems effectively, then governance evolves into a reusable layer of safety infrastructure—auditable, improvable, and accessible to all. This transformation enables domain experts, regulators, and citizens to participate directly in AI oversight rather than depending on opaque, corporate, or model-provider governance.
In the long term, it shifts AI safety from private compliance to collective stewardship—establishing the foundations for resilient, democratic oversight of distributed intelligence.
The protocol creates a governance commons where lessons learned, best practices, and refined patterns can be shared across organizations and domains. Rather than each organization solving the same governance problems in isolation, the protocol enables collective learning and improvement.
Governance Blueprints can be versioned, forked, and improved by communities of experts, creating a flywheel where each organization's learning benefits everyone. Domain experts can contribute their specialized knowledge to governance artifacts, while others can adapt and apply these insights to their own contexts.
This transforms governance from a cost center into a collective capability—building shared wisdom about how to coordinate AI systems safely and effectively across the entire ecosystem.
The Agentic Flywheel describes how governance improvements compound and accelerate over time through collective learning and shared infrastructure.
As organizations deploy governance blueprints, they generate insights about what works and what doesn't. These learnings get encoded into improved governance artifacts that others can adopt and build upon. More adoption generates more learning, which creates better governance, which attracts more adoption—creating a positive feedback loop.
Unlike traditional governance where knowledge remains trapped within organizations, the protocol's open, portable architecture allows wisdom to flow freely and compound across the entire ecosystem. Each refinement makes governance more effective for everyone, accelerating progress toward safe, aligned AI systems.
We're actively seeking collaborators with expertise in protocol engineering, schema development, AI safety, security, ethics, UI/UX design, and research engineering. The entry barrier is flexible—you can contribute by testing one integration or taking on larger architectural challenges.
Contributors gain hands-on experience in agent protocol design and are acknowledged as founding contributors to the protocol. All contributors will be part of shaping the future of AI governance as public infrastructure. Contact us to discuss opportunities.
The AI Safety Camp project (Jan-Apr 2025) focuses on empirical validation and creating the initial open-source SDK. After that, we'll continue developing the protocol with our growing community of contributors, supporting adoption across organizations, and conducting ongoing research to improve governance capabilities.
We'll be working on expanding the governance blueprint library, integrating with more agentic frameworks, and building tools that make the protocol easier to adopt and deploy. The protocol will evolve based on real-world feedback and the collective intelligence of our community.
We're currently bootstrapping while seeking funding from foundations, organizations, and individuals who share our vision. We accept both monetary contributions and in-kind support (computing resources, professional services, etc.).
As a Dutch non-profit (Stichting) working toward ANBI certification, we're building sustainable infrastructure for the long term. All funding goes directly toward research, development, and community-building activities that advance our mission of enabling human flourishing through better AI governance.
Organizations can sponsor research projects, community programs, events, or infrastructure development. Sponsors gain visibility while supporting critical governance infrastructure that benefits the entire AI ecosystem.
We offer various sponsorship tiers with different levels of engagement and recognition. Sponsors can influence research priorities, gain early access to protocol developments, and collaborate closely with our team. Contact us to explore opportunities.
Yes! ACGP is designed for interoperability with existing agentic frameworks including LangChain, CrewAI, AutoGen, and others. It's also compatible with A2A (Agent-to-Agent) and MCP (Model Context Protocol) protocols.
The protocol operates as a governance layer that can be integrated with your existing agent infrastructure through standardized interfaces and schemas. This means you can add governance capabilities without completely rebuilding your systems.
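As a rough illustration of that layered integration, the sketch below wraps an existing agent's step function in a governance check before any side effects occur. The GovernanceLayer interface is hypothetical, not the protocol's actual API.

```python
# Hypothetical sketch of ACGP as a governance layer wrapping an existing agent step.
# The GovernanceLayer interface is an assumption for illustration, not a published API.
from typing import Callable

class GovernanceViolation(Exception):
    """Raised when a proposed action violates the active blueprint."""

class GovernanceLayer:
    def __init__(self, blueprint_name: str):
        self.blueprint_name = blueprint_name  # which Governance Blueprint is active

    def check(self, proposed_action: dict) -> None:
        # A real integration would evaluate the blueprint's machine-readable
        # constraints; this stub only shows the call pattern.
        if proposed_action.get("external_call") and not proposed_action.get("steward_approved"):
            raise GovernanceViolation("External call requires steward approval.")

    def wrap(self, agent_step: Callable[[dict], dict]) -> Callable[[dict], dict]:
        """Return a governed version of an existing framework's step function."""
        def governed_step(state: dict) -> dict:
            proposed = agent_step(state)  # run the underlying agent unchanged
            self.check(proposed)          # veto or escalate before side effects occur
            return proposed
        return governed_step

# Usage: wrap a step from any framework without rebuilding the agent itself.
def my_agent_step(state: dict) -> dict:
    return {"external_call": True, "steward_approved": False}

governed = GovernanceLayer("clinical-triage-governance").wrap(my_agent_step)
try:
    governed({})
except GovernanceViolation as err:
    print("Blocked:", err)
```

The design choice here is that governance intercepts proposed actions rather than rewriting agent internals, which is what allows the layer to sit on top of heterogeneous frameworks.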
Your support helps us build the infrastructure that ensures AI enables rather than undermines human potential—creating conditions for everyone to thrive.
MeaningStack Foundation is a Dutch non-profit (Stichting), working toward ANBI certification. Early contributions support setup costs and are not tax-deductible. The foundation is expected to be established in January 2026.