Last year, in a U.S. House Committee on Oversight and Accountability hearing about advances in AI, Dr. Eric Schmidt, chair of the Special Competitive Studies Project, closed his testimony by highlighting that “Generative AI is one of the most exciting developments in computing today. It has the potential to transform the way we live and work, opening up new possibilities for creativity, innovation, and discovery. But with great power comes great responsibility, and we must be mindful of the challenges and risks that come with this technology and work together to ensure that it is used responsibly and ethically.”
Dr. Schmidt’s testimony highlights two crucial elements of the current conversation about AI and its implementation in the real world: It is a technology that opens up a world of new opportunities but also carries significant risks. The nonprofit sector is no exception to this transformation. Charitable organizations are slowly coming to terms with how to integrate AI into their operations. For example, Stanford University’s Institute for Human-Centered Artificial Intelligence (HAI) reported that “78% of funders and 77% of nonprofits believe their organization would benefit from using more AI.” On the other hand, “78% of nonprofits and 72% of funders that use AI in their work do not have an organizational policy guiding AI usage.” More recently, Google.org shared that “75% of nonprofits said that generative AI had the potential to transform their marketing efforts.” However, “Two-thirds of nonprofits said a lack of familiarity with generative AI was their biggest barrier to adoption.”
To overcome this barrier, organizations need to adopt AI responsibly and gradually. A sound AI governance policy can be the first step toward optimizing the use of our most precious resource — the time and effort of the nonprofit workforce.
Imagine if you could reduce the workload of repetitive tasks and focus your team on the work where human contact adds value and makes a difference. As Allison Fine, president of Every.org, a nonprofit that provides a free online fundraising platform, points out: “The biggest return on investment in A.I. could be the ‘dividend of time,’ by creating extra time for staff to focus on relationship-based work. That, to me, is the greatest upside we could hope for with A.I. (…) But that only happens if you use A.I. carefully and strategically on the rote tasks that are sucking up so much of our work time — the 30 percent of time that we spend that way.”
To achieve a responsible adoption of AI in your organization, here are five steps that specialists agree can put you on the right path to benefiting from these technologies.
1. Make sure your organization has clear data policies. Data policies are the foundation of any AI governance policy. Amy Sample Ward, CEO of NTEN, notes that nonprofits “should have a clear policy on data collection and retention. If you gathered data without consent, then you’ll also be using that data without consent when sharing it with A.I. Similarly, this policy should clarify what will happen to your organization’s data over time, including how long you’ll hold on to and use it.”
One of the main risks associated with inappropriate AI use is feeding personal data into these tools. It is therefore essential that your entire team knows how to handle the data of your work networks, your donors, your constituents, your board members, and others. A clear classification of your organization’s data, along with rules for how to handle and store it, will create a solid foundation for using AI properly. If you need a guide to building a data privacy policy for your organization, refer to NetHope’s Data Policy toolkit.
2. Set the rules of the game. Creating a clear AI governance policy for your organization is key to promoting the proper use of AI and mitigating its risks. The AI Governance Framework for Nonprofits by Microsoft provides a guide on how to build an AI governance policy. It includes questions about whether staff can use AI to:
- generate or modify external-facing or internal-facing written or visual content;
- capture and summarize meeting notes with internal or external parties;
- translate content;
- conduct research (e.g., summarize text, create tables or datasets, identify trends); and
- analyze data and conduct predictive analytics.
Other related questions include whether AI training is mandatory, whether staff must disclose their use of AI, and whether all contractors must adhere to the AI policy.
Asking the right questions makes it much easier to establish an internal policy on the use of AI. The Microsoft policy template, the Emerson Collective template, and the NTEN Generative AI Use Policy are all comprehensive resources that can guide you in creating an AI policy for your organization.
3. Assign roles and responsibilities.
One key element in implementing AI in your organization is defining the roles and control mechanisms that ensure these technologies are used appropriately. Microsoft recommends asking the following questions: Does staff need approval to use AI? Is staff allowed to use generative AI output wholesale, or must it be human-modified or verified? Who should review AI output, and which cases require disclosure that AI tools were used?
Furthermore, TechSoup suggests that “Your policy needs to provide clear guidance on how generative AI tools should and should not be used at your organization. This can be as simple as a ‘do’s and don’ts’ list.” You also should “ensure that your team has read and understood the guidelines, including somewhere for them to sign. This encourages them to take personal responsibility for their AI on a day-to-day basis and gives you something to fall back on if you encounter issues going forward.”
4. Take a deeply human-centered and principled approach.
This point can make a real difference and is central to implementing AI in charitable organizations. Grounding AI adoption in your organization’s values, and identifying the critical activities that require human interaction, will give meaning to the use of AI in your organization.
Raffi Krikorian, chief technology officer at Emerson Collective, suggests that “if there are tasks that involve writing or correspondence that eat up a lot of staff’s time (…) maybe those are the opportunities to use some generative A.I. to make the team more efficient and effective.” The key is to identify which tasks can be automated without affecting the value your organization generates. Allison Fine offers a great example to illustrate this: “A few years ago, the Trevor Project, which operates a hotline for LGBTQ youths, was having trouble training enough volunteers. Instead of replacing volunteers with a bot, they created a chatbot called Riley to train the volunteers — always with human supervision. (…) That was the pain point for them, and they’ve done a beautiful job of always making sure that there’s human oversight of the bots.”
Once you identify the tasks that you can automate, make sure to create goals for the technology. This will help track the success or failure of using AI and aid in applying those successes to other areas in which your organization may consider using AI. Remember that this is a dynamic and evolving field.
5. Take baby steps.
Start slowly and create a safe space to explore AI’s capabilities. “Don’t experiment with A.I. on your next fundraiser keynote speech. Do it in ways that are much lower stakes but allow you to repeat and get those practice exposures — and try to have a little bit of fun with it,” says Philip Deng, CEO of Grantable. Allison Fine agrees, noting that the best way to start using AI is to take baby steps: implement these tools slowly through “tiny pilots” and observe what happens when you introduce AI into specific tasks in your daily workflow. For example, you can use AI to draft internal meeting agendas or slides for internal purposes, or challenge your staff to prompt a generative AI tool to create images, audio, video, or a story, a playful exercise that doubles as training in writing prompts that produce better results.
Bonus: Independent Sector’s Artificial Intelligence (AI) Resources Guide for Nonprofits includes access to the policies and templates created by Microsoft and the Emerson Collective; NetHope’s Guide to Usefulness of Existing AI Solutions in Nonprofit Organizations; guides from Google and Microsoft to help nonprofits use AI in their day-to-day work; and more.