Skip to main content

What’s in the Box: Tools that Enhance AI Transparency

The AI revolution is here—and no, robots are not taking over. Artificial intelligence can accurately detect diseases years before diagnosis; pilot self-driving vehicles; and direct robots to harvest crops. AI algorithms power our social media platforms. It is an intelligent, versatile, and deeply complex technology that has only scratched the surface of its massive potential. But people tend to fear what they do not understand, which is why warnings of an AI-driven apocalypse remain dire.

After a wave of federal legislation, congressional hearings, Majority Leader Schumer’s AI Insight Forums, and President Biden’s landmark Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, one thing many agree on when it comes to AI governance is transparency. The inability to understand how AI systems reach their conclusions is known as the “black box problem.” To instill public trust in AI systems, American citizens deserve to know what is happening behind the curtain of these powerful technologies.

“Transparent AI” has become a focus for companies, indicating that they’re taking AI safety seriously. While many governments and stakeholders agree on international norms and principles of AI transparency (see Figure 1), implementing the concept in practice is complicated. This blog will demystify common mechanisms that promote AI transparency.

Share
Read Next

Principles of AI Transparency (from oecd.ai):

  • to foster a general understanding of AI systems,
  • to make stakeholders aware of their interactions with AI systems, including in the workplace,
  • to enable those affected by an AI system to understand the outcome, and,
  • to enable those adversely affected by an AI system to challenge its outcome based on plain and easy-to-understand information on the factors, and the logic that served as the basis for the prediction, recommendation or decision.

Figure 1

Transparency in AI Development

Impact Assessments

Guaranteeing automated systems are safe before being launched is fundamental to responsible AI development. Throughout a model’s life cycle, developers and deployers of AI can conduct comprehensive impact assessments for evaluating the risks and benefits of their automated machines. These risk assessments inspect a variety of information such as the data sources used to train the models, the level of human involvement, security measures, ecological impacts, regulatory compliance, and more (see Figure 2). Many tech companies create their own AI Impact Assessments (AI-IAs) voluntarily in adherence to NIST’s AI Risk Management Framework and OSTP’s Blueprint for an AI Bill of Rights. Once completed, some companies, such as Salesforce, publish their AI-IAs results openly (omitting personally identifiable information), which can foster trust with stakeholders, including investors, regulators, and the public.

Examples of Impact Assessment Questions (from cio.gov):

  • Please describe the level of autonomy of the system and whether the system will be used to assist or replace human decision-making.
  • Please describe the highest security classification of the input and training data used by the system.
  • Is the algorithmic process difficult to interpret or explain?
  • Does the system make a decision or take an action that may have an impact on children under the age of 18?
  • What level of impacts the decision will have on the health, well-being, or healthcare of individuals?

Figure 2

Third-Party Audits

There are no current standards on whether AI impact assessments should be conducted internally by the AI developer or by third-party auditors. Third-party audits may require disclosing internal processes and underlying data to outside vendors or independent researchers. Third-party vendors are expected to protect against potential data breaches through strict cybersecurity measures. For example, New York City enacted the Automated Employment Decision Tools law, which requires an annual third-party AI “bias audit” for employers who use automated recruiting tools and for the audit findings to be published on their websites. The audits are designed to ensure AI hiring tools don’t discriminate based on sex, race, and ethnicity.

Red-Teaming Tests

Another mode of transparency is red-teaming tests—when a team of ethical hackers simulate various cyber-attacks to purposely find flaws and vulnerabilities in AI systems. Many leading AI companies share their red-teaming procedures in open transparency reports per a voluntary commitment with the Biden-Harris Administration. Recently, President Biden’s AI Executive Order built on these commitments by requiring companies to share their red-team test results for particularly high-risk AI systems such as the creation of chemical, biological, radiological, or nuclear weapons.

Certifications and Licensing

Various organizations and initiatives designed certifications promoting responsible AI development. They involve comprehensive questionnaires that evaluate and score the data protection, explainability, fairness, and level of consumer protection in an AI system. Examples of certifications include Responsible Artificial Intelligence Institute’s Certification Program, IAPP’s Artificial Intelligence Governance Professional certification, or OECD’s Algorithmic Transparency Certification for Artificial Intelligence Systems. In addition to acquiring certifications, companies that apply for and adhere to Responsible AI Licenses, or RAILs, demonstrate to the public that they are investing in AI trust and safety. For example, U.S. Senators Richard Blumenthal (D-CT) and Josh Hawley (R-MO) announced a bipartisan legislative framework that advocates creating an independent oversight body that standardizes a licensing process for AI companies.

Transparency in AI Deployment

User Control and Reporting Mechanisms

Once an AI product is launched for public use, AI systems need to be continuously monitored, retrained, and improved through user controls. Key elements of user control are the ability to manage settings or opt-out. This fosters user empowerment by placing users in the driver’s seat when it comes to customizing their AI experience. For example, a basic example of opting-out is clicking the “unsubscribe” button for automated emails. Users must understand their degree of choice, especially for AI systems that could negatively affect their lives. If users are unhappy with AI products or features, customer reviews or feedback surveys help companies test quality assurance and address discriminatory outcomes.

Notifications

Users should know when they are interacting with or being monitored by AI. Providing sufficient disclosure statements or warnings can increase transparency. There is growing momentum towards a notice-and-consent approach before collecting and/or selling user data to third-party entities. In California’s Age-Appropriate Design Code Act, if a guardian is monitoring a child’s online activity or tracking the child’s GPS location, the company is required to provide an obvious signal to the child when they are being tracked such as “You are being recorded.”

Labeling

Labeling AI systems refers to the practice of embedding the output of an AI system with a permanent disclaimer, such as digital watermarking—a method of adding markers to artificially generated content. With the boom of AI-generated text, images, and audio, synthetic imagery is becoming hyper-realistic and imperceptible to the human eye. Watermarking can help verify fact from fiction. A new bipartisan bill called the AI Labeling Bill requires any content made by AI to disclose a clear and conspicuous notice. The race to authenticate content is especially controversial leading up to the 2024 presidential election given concerns about the proliferation of AI-generated election disinformation, such as deepfakes.

Model Cards

Model cards are short documents that provide simple overviews for using an AI system. Like food nutrition labels, model cards are intended to be concise, one-page fact sheets that communicate key information and help build understanding. For example, a model card for facial recognition technology may list factors that limit its performance, such as poor lighting, blurry faces, rapid movement, and crowded spaces. OpenAI, Meta, and Google, or Microsoft’s transparency notes all have released their model cards.

Transparency Reports

Transparency reports share detailed statistics about the effectiveness of an AI system’s performance. They invite accountability into the extent companies adhere to their own policies, which fosters collaborative information-sharing and continued research from policymakers, advocacy groups, and academia. Transparency reports can also disclose the number of requests the company received from state actors (including government bodies, regulatory authorities, law enforcement agencies and courts), providing critical insight into when governments collect private data. Many technology companies, such as social media platforms (see Figure 3), consistently release their own transparency reports voluntarily.

Examples of Transparency Report Prompts (from Santa Clara Principles 2.0):

  • Total number of pieces of content actioned and accounts suspended.
  • The specific clause of the guidelines that the content was found to violate.
  • The number of demands or requests made by state actors for content or accounts to be actioned.
  • The extent to which there is human oversight over any automated processes, including the ability of users to seek human review of any automated content moderation decisions.

Figure 3

Moving Forward

How to perform meaningful transparency frameworks without compromising companies’ trade secrets and intellectual property rights is a top consideration. An overly transparent system, or “glass-box” model, could encourage malicious actors to abuse AI systems and leak sensitive personal data. Previously in our AI National Strategy, BPC recommends that a federal data privacy legislation is a critical first step toward building trust in AI-specific technologies. If people trust that an AI system will respect their privacy, they will be more willing to grant access to their data when appropriate. Protecting data privacy and intellectual property while breaking open the black box will be a key challenge in AI regulation.

Support Research Like This

With your support, BPC can continue to fund important research like this by combining the best ideas from both parties to promote health, security, and opportunity for all Americans.

Give Now
Tags
Share