
AI Impact Today: Examples Across Health, Finance, and Education

The explosion of ChatGPT usage is capturing the public’s imagination, igniting dreams of surging economic prosperity and inducing fears of runaway Artificial Intelligence. This focus on long-term macro impacts crowds out discussion of where and how AI is making an impact today and of the more immediate challenges the technology presents. Appreciating this dynamic, this blog aims to encourage creative thought about how AI policies will affect today’s industries by drawing on a handful of cases where AI is already creating beneficial opportunities and introducing risks.

Cheaper, Faster Therapies For Those In Need | Improved Pathogen Development

Shockingly, an estimated 86% of drug candidates developed between 2000 and 2015 failed to meet their objectives. This failure rate contributes to several problems in the pharmaceutical market, such as expensive therapies and extended product development timelines. To address these issues, some firms are turning to AI to enhance their ability to design novel molecules, analyze experimental data, and inject innovation into the pharmaceutical development process. While it is still too early to understand AI’s full impact in this sector, it may drive down costs, increase research velocity, and allow firms to pursue previously unprofitable therapy areas.

Unfortunately, the same qualities that make AI an attractive tool for improving pharmaceutical development can also make it dangerous, as it can be used to develop biological and chemical weapons. In the same way that AI can reduce the cost and time required to develop pharmaceutical therapies, it could also facilitate the production of novel or existing pathogens.

The clear benefits and risks of AI in this sector highlight the need for policymakers to engage with domain experts to understand how to encourage innovation without unwittingly enabling malicious behavior or creating unintended risks.

Expanded Financial Inclusion | Market Manipulation And Collusion

AI has been present in financial markets for years, but new applications are paving the way for innovative benefits. One area of increasing adoption is the rollout of AI-augmented account management services, such as automatic portfolio rebalancing, to a broader swath of consumers. These systems could pose risks if they lack adequate consumer protections and safeguards, but if properly implemented and regulated, they may expand financial inclusion and help reduce inequality by giving a larger share of the population access to financial services and instruments previously reserved for the highest-net-worth investors. As such, the use of AI in finance may benefit society, but it may also create unique risks and concerns.

Studies have shown that existing AI trading models have already independently “discovered” and attempted to leverage illegal trading practices such as spoofing. As machine learning models become increasingly prevalent, more algorithms will discover and attempt to exploit illegal tactics even if their programmers never intended to encourage such behavior. Furthermore, as companies adopt increasingly complex models such as deep neural networks, it can become more challenging, if not impossible, to audit exactly why an algorithm made any individual decision.

This challenge stems from the functional form that advanced AI models take. Specifically, to improve their predictive capabilities, advanced AI models will autonomously create and use artificial variables with no real-world meaning. For example, regulators may discover that an AI trading model frequently places large orders and then cancels them before they are filled – a sign that the algorithm may be engaging in spoofing, an illegal practice. However, upon closer inspection, the regulators may find that the variable in the model most strongly associated with canceling those orders is an unintelligible amalgamation of 15 esoteric financial signals, none of which can be tied to an intent to spoof. Situations like this already occur and require significant resources for regulators to examine.
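
To make the pattern above concrete, the sketch below shows one crude way a surveillance team might screen for spoofing-like behavior: flagging traders whose large orders are almost always canceled before execution. The data fields, thresholds, and flagging logic are illustrative assumptions for this blog, not a description of any regulator’s actual surveillance tooling, and a high cancellation ratio is only a signal that merits human review, not proof of intent to spoof.

```python
# Illustrative sketch of a simple spoofing screen: flag traders whose large
# orders are almost always canceled rather than filled. Field names and
# thresholds are hypothetical examples, not real regulatory criteria.
from dataclasses import dataclass

@dataclass
class Order:
    trader_id: str
    size: int      # order size in shares or contracts
    status: str    # "filled" or "canceled"

def cancel_ratio(orders: list[Order], min_size: int = 1_000) -> float:
    """Return the share of a trader's large orders that were canceled."""
    large = [o for o in orders if o.size >= min_size]
    if not large:
        return 0.0
    canceled = sum(1 for o in large if o.status == "canceled")
    return canceled / len(large)

def flag_for_review(orders: list[Order], threshold: float = 0.95) -> bool:
    """Flag a trader for human review when nearly all large orders are canceled."""
    return cancel_ratio(orders) >= threshold
```

Even a screen this simple illustrates the auditing problem described above: it can surface a suspicious pattern of behavior, but it cannot explain why a complex model chose to cancel any particular order.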

The increasing deployment of AI systems in the financial sector has many potential benefits, but it also brings new risks and concerns. As the sector continues to investigate how to leverage these new technologies to improve its offerings, policymakers should be wary of the risks and unintended consequences of these systems and the difficulty that untangling them may present.

Unparalleled Educational Support | Moral Hazard Of AI And Homework

Traditionally, only a minority of students have received tutoring, but the rapid expansion and success of large language models and generative AI may change that. Although educational chatbots are not yet at the level of high-quality human tutors, one day they may be, and they already have the potential to scale more economically than human tutoring. In some cases requiring only a free log-in, students can now have a real-time, one-on-one chat with AI programs about math, science, foreign languages, history, and more. While AI chatbots still require significant work to address issues such as “hallucinations,” bias, and equitable access, the ability for all students to access some form of individualized educational assistance could be revolutionary.

Unfortunately, as the prior sections show, the same capabilities that allow AI to augment student performance could also pose a significant risk to students’ educational development. In a recent study, 30% of the sampled college students had used ChatGPT to complete a written assignment, and 20% had used ChatGPT for at least half of their assignments. While this does not necessarily mean that students are or will be using ChatGPT to complete coursework for them, it is certainly possible that a meaningful percentage of students will find the allure of low-effort, high-quality assignments completed instantaneously by AI too tempting to pass up. In some cases, using a chatbot can assist in learning, such as by providing tutor-like assistance. However, there are concerns that students may use AI to complete entire assignments, which may result in those students failing to learn valuable lessons.

While significant work remains to determine the most equitable and pedagogically sound way to incorporate AI into learning, it has the potential to increase access to beneficial tutoring that may improve student outcomes – as long as the risks of plagiarism and related issues are adequately addressed.

Conclusion

There are countless areas where AI is already making an impact, and while it is important to consider the long-term effects of this technology, it is also vital to examine the many spaces in which AI is making changes today. Policymakers should be aware that AI is a powerful tool whose capabilities can lead to both beneficial innovation and risky behavior. Moreover, the same capabilities that make AI a powerful driver of growth generally also enable deleterious behaviors. As such, it is essential to engage with domain experts and understand that many of the risks of AI are a reflection of the benefits it can confer.
