Why it matters
AI safety is one of the most important and fastest-growing areas of AI research, but the landscape of organizations working on it, from research nonprofits to government bodies to industry teams, is hard to map. Some focus on technical alignment research, others on policy, others on evaluation and red-teaming. A structured catalog in Geo connecting these organizations to their people, research areas, and funding makes the safety ecosystem visible and navigable.
What to publish
Create or enrich entities for every major AI safety and alignment organization
For each organization, publish (see the schema sketch after this list):
Name and description
Type (nonprofit, research lab, government body, industry team, think tank)
Mission statement or focus
Key research areas — link to Topic entities (alignment, interpretability, robustness, governance, etc.)
Website URL
Key people — link to Person entities (founders, directors, researchers)
Funding sources and amounts if publicly available
Notable publications or projects
Year founded
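The sketch below shows one way to model a catalog entry before publishing. It is a minimal sketch: the type and field names are illustrative, not Geo's actual schema, and entity links (research areas, people) are expressed as references to existing Topic and Person entities rather than free text.

```ts
// Illustrative shape for one catalog entry. Field names are assumptions,
// not Geo's real schema; adapt to whatever the publishing tool expects.

type OrgType =
  | "nonprofit"
  | "research lab"
  | "government body"
  | "industry team"
  | "think tank";

interface SafetyOrgEntry {
  name: string;
  description: string;
  orgType: OrgType;
  mission: string;            // mission statement or focus
  researchAreas: string[];    // IDs of Topic entities (alignment, interpretability, ...)
  websiteUrl: string;
  keyPeople: string[];        // IDs of Person entities (founders, directors, researchers)
  funding?: { source: string; amountUsd?: number }[]; // only if publicly available
  notableWork?: string[];     // notable publications or projects
  yearFounded?: number;
}
```

Keeping researchAreas and keyPeople as entity references rather than strings is what makes the catalog navigable: the same Topic or Person entity can then be reached from every organization linked to it.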
Link each organization to (see the relation sketch after this list):
People who work there
Topics they focus on
Parent organizations if applicable (e.g. Anthropic alignment team → Anthropic)
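A minimal sketch of how these links might be expressed as entity-to-entity relations. The relation names (worksAt, focusesOnTopic, hasParentOrganization) are assumptions rather than Geo's canonical property names, and the person entry is a placeholder.

```ts
// Entity → relation → entity triples; names here are illustrative only.
type Relation = { from: string; relation: string; to: string };

const links: Relation[] = [
  { from: "Redwood Research", relation: "focusesOnTopic", to: "Interpretability" },
  { from: "Jane Researcher", relation: "worksAt", to: "Redwood Research" }, // hypothetical person
  { from: "Anthropic alignment team", relation: "hasParentOrganization", to: "Anthropic" },
];
```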
Cover all major organizations:
Research labs: MIRI, Redwood Research, ARC (Alignment Research Center), Center for AI Safety, Conjecture, Apollo Research, FAR AI
Industry teams: OpenAI safety, Anthropic alignment, Google DeepMind safety, Meta AI safety
Policy: Future of Life Institute, Partnership on AI, AI Now Institute, Ada Lovelace Institute (the Center for AI Safety, listed under research labs above, also does policy work)
Government: NIST AI Safety Institute, UK AI Safety Institute, EU AI Office
Academic: CHAI (UC Berkeley), MIT FutureTech, Oxford FHI (closed in 2024; include as a historical entity)
Scope
All major organizations — likely 40–60. Include both technical research groups and policy/governance organizations.
Potential sources
Organization websites, AI safety community directories (aisafety.world), Wikipedia lists, 80,000 Hours job board (safety-focused orgs), research paper affiliations.
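Because the same organization appears under different names across these sources (e.g. ARC vs. Alignment Research Center), a small normalization pass helps avoid creating duplicate entities. A minimal sketch, assuming a hand-maintained alias table:

```ts
// Map common abbreviations to canonical names before creating entities.
// The alias table is illustrative and would be extended by hand.
const aliases: Record<string, string> = {
  "arc": "Alignment Research Center",
  "miri": "Machine Intelligence Research Institute",
  "cais": "Center for AI Safety",
};

function canonicalName(raw: string): string {
  const key = raw.trim().toLowerCase();
  return aliases[key] ?? raw.trim();
}

// e.g. canonicalName("MIRI") === "Machine Intelligence Research Institute"
```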
