The Open or Closed AI Dilemma

A brief history lesson: in the 1600s, rising demand for access to scientific knowledge led to the development of scientific journals. Fast-forward to the 1970s, when similar practices arose around the open-source and free software movements, as computer programmers pushed for open access to research and decentralized innovation that would allow anyone to contribute to the development of science and technology.

Recently, AI breakthroughs have raised new questions in the open versus closed technology debate. However, defining openness in AI is not easy, for two reasons: first, today’s AI industry lacks consensus on what “openness” means; and second, openness is not a fixed concept but a spectrum. To help clarify these issues, this piece explains the fundamentals of these concepts and the current debates around them.

The Openness in AI Spectrum

To understand what “open AI” is, it helps to start with its intent: openness seeks to advance the development of AI through stakeholder collaboration. There are three key stakeholders in AI development: developers, deployers, and users. When releasing an AI system, developers grant other stakeholders a certain level of access. A fully closed AI system is accessible only to a particular group, such as the developer company or a specific team within it, mainly for internal research and development purposes. More open systems, by contrast, may allow public access or make certain parts available, such as data, code, or model characteristics, to facilitate external AI development. All of the following components are considered when determining where a system falls on the openness spectrum:

  • Access stages: Developers may release publicly available AI systems through hosted interfaces or Application Programming Interfaces (APIs), which can be free or paid. By hosting the system, developers retain control and allow only basic interface interaction, without exposing the internal components of the system’s infrastructure. API-based access lets outside deployers integrate AI models into their own applications, making AI technologies accessible to a broader range of users (see the sketch after this list).
  • Open model: Making all the properties of the model publicly available, such as its architecture, capabilities, components, risks, and other characteristics. An example is the model card, which outlines a model’s performance, intended uses, and possible limitations. Like food nutrition labels, model cards are intended to be concise, one-page fact sheets that communicate key information and help build understanding.
  • Open code: Making the source code of an AI system, and other works derived from it, accessible so that anyone can inspect, modify, and distribute the software.
  • Open data: Developers may disclose their data sources and related information so that the data is transparent and accessible for public use. An emerging practice is publishing a datasheet, which documents the data’s collection process, context, purpose, and other characteristics. It also notes potential biases, such as the geographical region or target group from which the data was collected.
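
To make the access-stage distinction concrete, the sketch below contrasts the two release modes in Python. It is illustrative only: it assumes OpenAI’s hosted API as an example of closed, API-based access and the openly released GPT-2 weights (via the transformers library) as an example of open access; the specific model names are assumptions, not recommendations.

```python
# Illustrative only: contrasts API-based access to a closed model with
# local use of an openly released model. Assumes the `openai` and
# `transformers` packages are installed and OPENAI_API_KEY is set.

# Closed, API-based access: the deployer sends a request to the
# developer's hosted model and gets output back, but never sees the
# weights, training code, or data behind the interface.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name, for illustration
    messages=[{"role": "user", "content": "Define openness in AI."}],
)
print(response.choices[0].message.content)

# Open access: the weights are public, so anyone can download, inspect,
# fine-tune, or redistribute the model (license permitting).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # openly released weights
print(generator("Openness in AI means", max_new_tokens=30)[0]["generated_text"])
```

The asymmetry is the point: in the first case, every internal component stays behind the developer’s interface; in the second, the full artifact is in the user’s hands.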

A fully open system makes all components publicly available, including the model, code, and data, allowing external developers and stakeholders to advance the development of AI. However, organizations can choose to share their AI systems in particular ways, publicly releasing only certain components or granting limited access, which illustrates the spectrum of openness.
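
The model card and datasheet described above are easiest to picture as structured documents with a fixed set of fields. Below is a minimal, hypothetical sketch in Python; the field names and values are assumptions for illustration and do not reproduce any particular published template.

```python
# Hypothetical, heavily abridged examples of the documentation artifacts
# described above. All field names and values are illustrative; real
# model cards and datasheets define much richer templates.

model_card = {
    "model_name": "example-summarizer-v1",   # hypothetical model
    "intended_use": "Summarizing English-language news articles",
    "out_of_scope_uses": ["Medical advice", "Legal advice"],
    "performance": {"rouge_l": 0.41},        # illustrative metric
    "known_limitations": [
        "Quality degrades on documents over 4,000 words",
        "Trained primarily on U.S. sources",
    ],
}

datasheet = {
    "dataset_name": "example-news-corpus",   # hypothetical dataset
    "collection": "Crawl of permissively licensed news sites, 2020-2023",
    "purpose": "Pre-training a summarization model",
    "potential_biases": ["Over-represents English-language, U.S. outlets"],
    "licensing": "Only sources under permissive licenses were included",
}

# A deployer could surface these fields to end users alongside the model.
print(model_card["intended_use"], "|", datasheet["purpose"])
```

As with a nutrition label, the value lies less in any single field than in having the same fields answered for every release, so outsiders can compare systems on equal terms.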


The Ongoing Debate Recap

Comparing the risks and advantages of openness is challenging. One debate revolves around transparency. Advocates argue that open systems promote transparency by allowing external researchers, programmers, and the public to examine, monitor, and audit an AI system and its data for errors, bias, and security risks. Greater transparency could broaden stakeholder participation and boost innovation. Critics counter that open systems could be misused by malicious actors to generate deepfakes, mount cyberattacks, or conduct other manipulative activities that undermine responsible oversight of AI systems. Open systems may also expose information to bad actors and the public in ways that breach privacy, reveal trade secrets, and raise national security concerns.

Innovation is the focus of another debate. Currently, much innovation occurs at the deployer stage, with many companies adopting AI models to create new applications. Some argue that more open models increase participation at the developer stage by encouraging collective contributions, which leads to rapid advancement and diversification of AI applications. However, major industry players may lean toward more closed systems that protect the developer’s intellectual property from being replicated by rivals.

Finally, there are concerns about the increased concentration of power within a few industry players. The resource-intensive nature of AI systems is a barrier to entry for small tech companies, since building them requires substantial computing resources, access to data, and skilled people. These high costs could narrow perspectives, reduce diversity in model development, and leave the future trajectory of AI in the hands of a few entities. High-resource organizations may choose closed models to protect their intellectual property and competitive advantages, while other stakeholders may advocate for more open models to give small players market access and bring external perspectives into AI development.

Is Openness in Artificial Intelligence the Policy Answer?

Understanding the spectrum of openness in AI is crucial because it helps policymakers craft informed legislation. The decision is not a simple choice between open and closed AI; it requires a careful analysis of the spectrum and the intricacies of its dynamics, as each point along it carries benefits and risks that need consideration.

The Biden administration’s recent Executive Order (EO) on AI requires companies with some of the largest AI models to report their development, training, and testing procedures to help safeguard AI security in the U.S., though implementation has not yet begun. Policymakers can also support regulations that encourage resource access, knowledge-sharing, fair competition, U.S. leadership, consumer protection, and AI education and training.

AI governance is critical because the choice between fostering open or closed AI hinges on diverse factors that can impact the industry’s future. By promoting an informed and comprehensive regulatory framework that aligns with democratic principles, policymakers can guide the responsible evolution of AI for the benefit of all.
