
Advancing Sociotechnical Considerations for AI

This week marked a major milestone in artificial intelligence (AI) governance with the launch of the Assessing Risks and Impacts of AI (ARIA) program at the National Institute of Standards and Technology (NIST). With a focus on assessing the societal risks and impacts of AI systems, this program underscores NIST’s commitment to advancing AI that is reliable, safe, transparent, explainable, privacy-enhanced, and fair. The Bipartisan Policy Center has long recognized that addressing AI’s impacts requires more than technical solutions and has advocated for funding to develop both technical and non-technical approaches that keep human values at the center of AI governance frameworks.

The use of AI could profoundly reshape our society and economy. Numerous real-world examples have already demonstrated the positive impacts AI can have, from improving business operations to securing reliable crop yields for farmers. These examples highlight the potential of AI to drive efficiency and innovation and solve complex problems across various sectors.

Yet along with these advantages, AI also carries significant risks. Not all AI applications have positive impacts: algorithms designed to keep young people online longer, for instance, do so irrespective of the effects on their social lives. Additionally, predictive tools used in high-stakes decision-making processes have been observed to reflect human biases, prompting ethical concerns about fairness and accountability.

Robust safeguards and standards can help developers and deployers of AI tools navigate the multifaceted risks inherent at every stage of AI design, development, and deployment. By incorporating the perspectives of a diverse community of users and advocates, these measures can more accurately reflect the real-world implications of the technology. This approach not only helps in managing potential harms but also elevates our benchmarks for achieving positive outcomes.

For the U.S. to lead the world in AI, policymakers must consider factors affecting both daily life and long-term impacts. To understand how human behavior and social norms influence AI’s development and how AI in turn affects humans, we must identify the sociotechnical factors influencing AI at every stage of its lifecycle and the real-world impacts and consequences it might have.

Social vs. Technical Definitions

AI, like many technologies, is developed by humans using technical rules and methodologies and deployed into our lived environments, where it then weaves itself into our social dynamics. Sociotechnical considerations underlie many technological developments, particularly those as broad as AI. Therefore, we can draw insights from historical innovations as to how we manage societal risks and rewards.

Research by TRAILS (the NIST-NSF Institute for Trustworthy AI in Law and Society), in collaboration with research institutions and industry, documented numerous historical precedents demonstrating that sociotechnical factors are a crucial consideration for innovation and the successful widespread integration of new technologies. For instance, given the broad implications biomedical devices can have in contexts beyond clinical care, sociotechnical considerations were critical to ensure that devices like artificial joint replacements and pacemakers received rigorous testing and monitoring so people could use them effectively in daily life. Similarly, sociotechnical considerations in geological engineering help better account for the environmental impacts and consequences on local communities.

These examples also demonstrate that there is no universal sociotechnical solution applicable to all technologies. Every technology presents distinct implications and varying degrees of impact on individuals, communities, and the environment. It is important to consider nuanced sociotechnical approaches and solutions that account for the diversity of AI technologies, their myriad applications, and the potential for AI evolution over time. Such standards must remain adaptable to the evolving landscape of AI and responsive to emerging values.


A growing number of frameworks are emerging to monitor various aspects of AI such as ethical guidelines, regulatory standards, and industry best practices. Several comprehensive frameworks are beginning to address the interaction between social and technical factors by defining sociotechnical approaches to AI and related terminology.

Understanding the nuanced relationships between technology and society allows us to craft policies and frameworks that promote responsible innovation and ensure AI contributes positively to our economy and societal well-being. To do this responsibly, we must listen to diverse perspectives, develop a common language to more effectively articulate and analyze AI’s effects, and encourage shared values around AI governance that can adapt to the rapidly evolving AI landscape. By aligning on shared values such as transparency, accountability, reliability, and safety, we can foster trust and greater collaboration among stakeholders.

What’s Next?

AI presents both promise and risk for society and the economy. Recent initiatives like the NIST ARIA program signify a proactive step towards mitigating these risks while promoting the positive innovations that already benefit so many Americans. By fostering collaboration between government, industry, academia, and civil society, such initiatives can pave the way for responsible AI development that aligns with societal values and priorities.

Moving forward, policymakers and stakeholders must adopt a holistic approach that considers not only the technical aspects of AI but also its societal implications. We must leverage both the lessons of the past and emerging frameworks that incorporate diverse values and cultural perspectives. By prioritizing inclusive governance frameworks and investing in interdisciplinary research, we can steer AI toward a future where innovation truly benefits all.
