ICYMI – the U.S. must not only lead in artificial intelligence but also in its ethical application
By Chairwoman Eddie Bernice Johnson for The Hill
Artificial intelligence (A.I.) is sometimes called a herald of the fourth industrial revolution. That revolution is already here. Whenever you say “Hey Siri” or glance at your phone to unlock it, you’re using A.I. Its current and potential applications are numerous, spanning everything from medical diagnosis to the predictive technologies that shape how we interact with our devices.
As chairwoman of the U.S. House Committee on Science, Space, and Technology, I am particularly interested in the potential for A.I. to accelerate innovation and discovery across the science and engineering disciplines. Just last year, DeepMind announced that its A.I. system AlphaFold had solved a protein-folding challenge that had stumped biologists for half a century. It is clear that not only will A.I. technologies be integral to improving the lives of Americans, but they will also help determine America’s standing in the world in the decades to come.
However, the vision of A.I.’s role in humanity’s future isn’t all rosy. Increasingly autonomous devices and ever-growing volumes of data will exacerbate longstanding concerns such as privacy and cybersecurity. Other dangers of A.I. have already arrived, appearing as patterns of algorithmic bias that often reflect our society’s systemic racial and gender-based biases. We have seen discriminatory outcomes in A.I. systems that predict credit scores, health care risks, and recruitment potential. In these domains, we must mitigate the risk of bias both in our decision-making and in the tools we use to augment it.
Technological progress does not have to come at the expense of safety, security, fairness, or transparency. Embedding our values into technological development is central to our economic competitiveness and national security. Our federal government is responsible for working with private industry to ensure that we can maximize the benefits of A.I. technology for society while simultaneously managing its emerging risks.
To this end, the Science Committee has engaged in efforts to promote trustworthy A.I. Last year, one of our signature achievements was passing the bipartisan National Artificial Intelligence Initiative Act, which directs the Department of Commerce’s National Institute of Standards and Technology (NIST) to develop a process for managing A.I. risks.
NIST may not be the best-known government institution, but it has long conducted critical standard-setting and measurement research used by federal agencies and private industry alike. Over the past year, NIST has held workshops examining topics such as A.I. trustworthiness, bias, explainability, and evaluation. These workshops are aimed at helping industry professionals detect, catalogue, and ultimately prevent the harmful outcomes that erode public trust in A.I. technology.
Most recently, NIST has been working to construct a voluntary Risk Management Framework intended to support the development and deployment of safe and trustworthy A.I. This framework will be essential in informing the work of both public and private sector A.I. researchers as they pursue game-changing research. NIST is soliciting public comments through September 15, 2021, and will develop the framework in several iterations, allowing for continued input. Interested stakeholders should submit comments and participate in NIST’s ongoing process.
We know that A.I. has the potential to benefit society and make the world a better place. For the U.S. to be a true global leader in this technology, we must ensure that the A.I. we create does just that.
Source: Press Release
Date: September 14, 2021
