Google.org has released its AI Readiness Playbook for Funders, a comprehensive 63-page guide aimed at helping philanthropic foundations enhance AI literacy across their grantmaking, operations, and communication teams. This initiative arrives at a critical time, as nonprofits encounter what Google.org describes as “unprecedented demand and budget uncertainty.” The playbook encourages funders to categorize AI-related expenses—like licenses, cloud infrastructure, and model maintenance—as essential program costs rather than overhead.
Carina Box, Program Manager at Google, announced the playbook’s launch on LinkedIn, emphasizing the increasing interest from foundations eager to support AI for social impact but unsure where to start. “As a first step, Google.org is open-sourcing our AI Readiness Playbook for Funders,” Box wrote, framing the resource around five practical shifts to improve AI integration into funding strategies.
The playbook outlines a strategic approach that begins with a five-minute readiness survey and progresses through various workshops tailored to specific roles. It also encourages funders to focus on the problem at hand rather than the technology during due diligence. “Funders must act deliberately,” writes Maggie Johnson, Global Head and Vice President of Google.org, in the playbook’s opening letter. She underscores the necessity for all staff members to comprehend AI technology, regardless of their roles, warning that without proactive measures, the benefits of AI may not reach the communities served by nonprofits.
Fundamentals of AI Strategy for Foundations
The playbook is structured around three modules, each covering a different stage of AI strategy. The first establishes the strategic foundation, providing a template AI Readiness Survey, workshop guides, and an Ethics Inquiry Deck adapted from Google DeepMind. The second focuses on implementation, offering role-based training resources and a central-hub model for aggregating use cases across departments. The third emphasizes evaluation, featuring a due diligence framework for assessing AI proposals from grantees and a revised funding approach for technology-related expenses.
Google.org’s internal strategy revolves around three core pillars: Bold Innovation, Responsible Development, and Collaborative Progress. The playbook introduces the ACT framework for responsible AI use, which stands for Ask, Check, and Tell. It also acknowledges that AI capabilities vary widely from task to task, urging teams to rigorously test every use case rather than rely on assumptions.
For organizations within the EdTech sector that depend heavily on philanthropic support, the guidance in the third module is particularly significant. Google.org suggests that funders consider expenses related to licenses, computing power, cloud infrastructure, and model maintenance as program costs. The playbook points out that inference expenses, incurred each time a model is queried, often scale with user volume and can be underestimated in grant budgets.
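The scaling dynamic the playbook warns about is easy to see with a back-of-envelope calculation. The sketch below uses entirely hypothetical figures (user counts, query volumes, and a per-token price are illustrative assumptions, not numbers from the playbook) to show why a pilot-sized inference budget can badly understate the cost of a scaled deployment:

```python
# Back-of-envelope inference cost estimate. All figures below are
# hypothetical assumptions for illustration, not from the playbook.

def monthly_inference_cost(users, queries_per_user, tokens_per_query,
                           price_per_million_tokens):
    """Estimate monthly model-inference spend in dollars.

    Cost grows linearly with user volume, which is why a pilot
    budget can look deceptively small next to a scaled rollout.
    """
    total_tokens = users * queries_per_user * tokens_per_query
    return total_tokens / 1_000_000 * price_per_million_tokens

# A 500-user pilot vs. a 50,000-user rollout, assuming 20 queries
# per user per month, 1,500 tokens per query, and an assumed price
# of $2 per million tokens:
pilot = monthly_inference_cost(500, 20, 1_500, 2.00)
scaled = monthly_inference_cost(50_000, 20, 1_500, 2.00)
print(f"pilot:  ${pilot:,.2f}/month")   # $30.00/month
print(f"scaled: ${scaled:,.2f}/month")  # $3,000.00/month
```

A hundredfold increase in users produces a hundredfold increase in inference spend, which is the kind of line item the playbook suggests funders anticipate explicitly in grant budgets.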
Moreover, it advises funders to design grants that allow flexibility for experimentation during the initial 12 to 18 months of an AI project. It also promotes proactive funding for essential infrastructure, such as data cleaning and storage, while realistically addressing the technical talent costs that many organizations struggle to cover.
The due diligence framework outlined in the playbook is adapted from existing guidelines, distinguishing between back-office and beneficiary-facing AI solutions. It encourages funders to scrutinize aspects like training data representation and the explainability of predictive models. Additionally, Google.org emphasizes that tools developed by grantees should ideally be open-sourced and co-designed with the communities they aim to serve.
To further aid users, an interactive NotebookLM experience accompanies the playbook, allowing users to query the document and create tailored roadmaps specific to their foundation’s challenges. Google.org is now assessing whether mid-sized foundations, which may lack dedicated technical strategy teams, will successfully adopt the framework, and how quickly these practices will reach EdTech grantees whose budgets often overlook the costs associated with inference and model maintenance.
See also
Andrew Ng Advocates for Coding Skills Amid AI Evolution in Tech
AI’s Growing Influence in Higher Education: Balancing Innovation and Critical Thinking
AI in English Language Education: 6 Principles for Ethical Use and Human-Centered Solutions
Ghana’s Ministry of Education Launches AI Curriculum, Training 68,000 Teachers by 2025
57% of Special Educators Use AI for IEPs, Raising Legal and Ethical Concerns