Unregulated AI is scarier than Halloween. No Congressional legislation exists that specifically regulates AI. On October 24, the White House released a National Security Memorandum (NSM) on Artificial Intelligence that provides important guidance on ensuring U.S. leadership in AI, advancing AI capabilities, and clarifying that AI will not be used to launch nuclear weapons. However, the memorandum's focus contrasts sharply with the initial guidance on AI from the Department of Defense (DoD) and the intelligence community (IC). The DoD and IC developed principles for the ethical use of AI years before the White House memorandum, and before publicly launching major AI initiatives. The NSM, by contrast, barely mentions ethics as it speeds ahead with urgency. To maintain trust in government and protect civil liberties, federal agencies and the next White House must swiftly establish ethical, transparent, and accountable frameworks for the use of AI.
The Dangers of AI Misuse
National Security Advisor Jake Sullivan rolled out the memorandum before an audience of senior civilian national security strategists and military officers who are students at the National Defense University, where I teach. (Many press outlets erroneously reported that the rollout occurred at the National War College, one of NDU's five statutory components; Sullivan himself misspoke about both his whereabouts and the institution's history.) The students rightly pressed Sullivan on questions of transparency, accountability, and ethics. The memorandum provides direction on "appropriate" use of AI in the U.S. government, especially in national security systems. It asserts strongly that the U.S. must lead the world in the "responsible" application of AI and reap AI's national security benefits. The memorandum cautions that if misused, AI could threaten national security, promote authoritarianism and weaken democracy, facilitate human rights abuses, and undermine the rules-based international order.
Despite this dire warning, the memorandum gives insufficient guidance on how to use AI responsibly and leaves much room for its misuse. The memorandum says that the U.S. must retain AI leadership and harness AI to promote national security objectives while protecting "human rights, civil rights, civil liberties, privacy, and safety"—words given the patina of boilerplate because they are repeated so frequently and without explanation. The memorandum includes important safeguards for free speech and human rights, including a prohibition on using AI to make asylum decisions. However, these safeguards on constitutional rights can be waived for national security reasons, even for prohibited and high-impact AI use cases. The memorandum provides no detailed discussion of transparency, bias mitigation, oversight, or accountability. It largely leaves agencies to regulate themselves, behind closed doors, with little accountability or oversight, and no specified provisions for individual notice and redress. The memorandum does not even specify which government programs will fall under its ambit.
Putting Ethics Before Speed in AI Strategy
Unlike the White House, the DoD and IC understood the importance of putting ethics before speed. The DoD's Ethical Principles for Artificial Intelligence and the IC's Principles of Artificial Intelligence Ethics for the Intelligence Community, along with its accompanying ethics guide, were released in 2020, four years prior to the memorandum (that's 28-52 cyber dog years). These documents served as a touchstone for the ethical use of AI in the defense and intelligence communities before either launched any large-scale AI initiatives. Both were designed to promote public trust in the agencies, ensure compliance with ethics and civil liberties protections, and keep humans in charge of AI. The DoD's framework emphasizes responsibility, equitability, traceability, reliability, and governability, and stresses the critical importance of human accountability and oversight in AI development and deployment. The IC's framework specifies that it will respect the law and act with integrity, respecting human dignity, rights, and freedoms. Besides prioritizing transparency and accountability, it seeks to use AI objectively and equitably, mitigate bias, and ensure that the development and use of AI remain human-centered.
The DoD and IC's ethical principles aim to keep AI use consistent with both constitutional rights and international norms. They require human oversight of AI and seek to prevent AI from replicating the worst of human behavior: bias, prejudice, and impulsive lethal action. They aim to ensure that AI can be trusted for certain uses, and that the public can trust these agencies to use AI in a way that protects both civil liberties and national security.
The White House memorandum provides no such assurances. Without unified ethical standards set forth by the White House, agencies will be more likely to misuse AI. Without proper oversight, for example, AI-powered surveillance systems used in airports or at the border could perpetuate biases and lead to discriminatory outcomes. The Department of Homeland Security has already been criticized in recent years for failing to stop racial and religious profiling, targeting Americans for surveillance based on their political beliefs, and building intelligence reports on journalists. AI could magnify these abuses. The NSM's lack of attention to transparency could further undermine public trust in DHS and other agencies.
Ethics Cannot Be an Afterthought in National Security
The memorandum will be reviewed and possibly repealed by the next presidential administration. The next White House should act swiftly to clarify the ethical principles behind this memorandum or any replacement. Meanwhile, individual agencies across the national security enterprise must also develop ethical frameworks as they implement the memorandum’s directives, drawing on the DoD and IC examples. Federal agencies should act to assure their employees and stakeholders that AI will be employed ethically and in their interests. Doing so will set the tone for the private sector to do the same.
Ethics cannot be an afterthought in national security strategy. AI has the potential to make the U.S. safer and jumpstart American innovation. Without oversight, accountability, and ethical standards, however, it can also endanger civil liberties and freedoms. As AI transforms our lives, the U.S. must ensure that its AI policies reflect an ethical framework backed by transparency, accountability, and human-centered design. Leadership without ethics is tyranny. The U.S. must engage in ethical AI leadership if it wants a better world to follow.