In 1455, Johannes Gutenberg perfected a printing technique based on movable type, known at the time as “artificial writing,” and changed how knowledge was produced and disseminated forever. The first printed books are commonly known as incunabula, and they are historical pieces that show us the transition between manuscripts made by human hands and books created by a machine. Can you imagine the impact it had on society to go from producing a bible over several years to making 200 bibles in three years?
The history of printing holds at least two lessons that are fundamental today for understanding and adapting to the arrival of artificial intelligence (AI). The first is that the introduction of AI, like printing, offers the possibility of exponentially increasing human productivity and the creation and diffusion of knowledge. The second is that the arrival of any disruptive technology carries risks and requires that societies adapt to minimize those risks and maximize the benefits.
At Independent Sector, we have been following the conversation about data privacy regulation and how it can affect nonprofits, and more recently, we have analyzed why AI regulation should be part of the conversation about the future of our organizations. Now, let me introduce three areas on which AI regulatory discussions are focusing and how our organizations can and should take action to adapt to change.
In November 2022, OpenAI (yes! a nonprofit) launched ChatGPT. Since then, we have been compelled to learn about generative artificial intelligence, general-purpose AI models (GPAI), and prompts (here is a guide from Congress’ own research arm as an introduction to this new world). But in addition to educating ourselves, the arrival of AI forces nonprofits to rethink themselves as organizations and to adapt to take advantage of these new tools while preserving the values that define our identity as a sector.
Before going further, let me share a few recent developments to bring you up to speed on the debate. From a regulatory perspective, the conversation on Capitol Hill has been lively. The Senate Committee on the Judiciary hosted hearings on rules for AI and on human rights and AI, and the House Judiciary Committee hosted another hearing on AI and copyright. At the same time, Senate Majority Leader Chuck Schumer announced the introduction of the SAFE Innovation framework “to ensure this powerful new technology and its potentially wide-ranging impact on society is put to proper use by advancing strong, bipartisan legislation.” All of this comes on top of debates around the safety of children on the internet, data privacy, and a proposed TikTok ban. In addition, the White House released its Blueprint for an AI Bill of Rights, and the European Parliament is close to approving the first comprehensive regulation of AI.
I know it’s a lot! Without going into the details of each one, these debates and legislative proposals around AI regulation share elements that coincide with what experts identify as the main risks and opportunities for nonprofit organizations, and they point to how we can start adapting today.
Data Privacy Risks. AI feeds on data from the internet and on the information you feed into it to generate content. Rasheeda Childress, senior editor at The Chronicle of Philanthropy, raises a troubling issue: “Problems can arise if a nonprofit feeds private information about donors or other constituents into an A.I. it doesn’t own or control.” An email to motivate donors or a response to a constituent can be generated in seconds by an AI, which frees up valuable staff time that can be invested in other strategic tasks; however, if it is not done correctly, this type of practice could expose the personal data of our networks, creating a legal and reputational problem for the organization. The regulatory debates converge on the need to impose disclosures and rules that control how these platforms retain data. Whatever form regulation takes, each of our organizations needs to design protocols and educational strategies so that our teams understand how to use these new technologies, and to establish guardrails that keep the personal data of our networks out of these platforms.
Misinformation. What happens if the data fed to AI is wrong? What happens if a chatbot’s response spreads misinformation? Your organization’s reputation and people’s trust in our sector are at stake (remember that nonprofits are among the most trusted institutions in the nation). An illuminating example is the chatbot of the National Eating Disorders Association, a nonprofit that decided to address its staffing challenges by automating certain responses, and then had to take its AI chatbot offline after complaints of “harmful” advice. The White House’s Blueprint for an AI Bill of Rights and the regulation advancing in the European Union both require that AI platforms include human alternatives, as well as measures to guarantee the quality of the information AI generates. If conversations have already begun in your organization about bringing AI into its operations, those conversations must be linked to implementing controls and human oversight that guarantee the quality of information.
Algorithmic Bias. In an interview with The Chronicle of Philanthropy, nonprofit lawyer Jeffrey Tenenbaum explains the risk to nonprofits of using AI hiring tools that may be biased. “If there’s bias built into resume screening and they make decisions about whether to not hire someone because of that built-in bias in the platform, that can give rise to a potential discrimination claim,” he says.
The public debate treats this as a major problem that must be addressed at every stage, from the design of the algorithm to its application. As organizations, we must carefully review the tools we use.
After Gutenberg printed his 200 bibles in the 15th century, Pope Pius II wrote to Cardinal Carvajal in Rome saying that the newly printed bibles were “exceedingly clean and correct in their script, and without error, such as Your Excellency could read effortlessly without glasses.” We hope that our continual education in AI helps us see its benefits just as clearly, but always under the principle that these tools are not error-free.
Manuel Gomez is Manager, Public Policy at Independent Sector.