US Governments Race to Utilize AI While Navigating Pitfalls

As the infrastructure for safely integrating generative artificial intelligence (AI) into the U.S. technology sector continues to take shape, governments at all levels in the U.S. are grappling with how to use and regulate AI-powered tools like ChatGPT.

OpenAI, the company behind ChatGPT, only continues to grow in reach and popularity. Having opened its first office outside San Francisco with a new facility in London, OpenAI now expects to open its second international office, in Dublin.

Federal Government

In July, OpenAI faced its first major regulatory threat: an FTC investigation demanding answers about the ongoing volume of complaints that accuse the AI startup of misusing consumer data, and about increasing instances of “hallucination,” in which the model makes up facts or narratives at the expense of innocent people or organizations.

The Biden Administration is expected to release its initial guidelines for how the federal government can use AI in summer 2024.

U.S. Senate Majority Leader Chuck Schumer (D-NY) predicted in June that new AI legislation was just months away from its final stage, coinciding with the European Union entering the final stages of negotiations on its EU AI Act.

Local Government

While some municipalities are adopting guidelines so their employees can harness the potential of generative AI, other U.S. government institutions are imposing restrictions out of concern for cybersecurity and accuracy, according to a recent report by WIRED.

City officials throughout the U.S. told WIRED that at every level, governments are searching for ways to harness these generative AI tools to improve some of the “bureaucracy’s most annoying qualities by streamlining routine paperwork and improving the public’s ability to access and understand dense government material.”

However, this long-term mission is also constrained by the legal and ethical obligations contained in the country’s transparency laws, election laws, and other statutes, creating a distinct line between the public and private sectors.

The U.S. Environmental Protection Agency (EPA), for example, blocked its employees from accessing ChatGPT on May 8, according to a (now completed) FOIA request, while the U.S. State Department office in Guinea has embraced the tool, using it to draft speeches and social media posts.

It’s undeniable that 2023 has been the year of accountability and transparency, beginning with the collapse of FTX, which continues to shake our financial infrastructure as a modern-day Enron.

“Everybody cares about accountability, but it’s ramped up to a different level when you are literally the government,” said Jim Loter, interim chief technology officer for the city of Seattle. 

In April, Seattle released its preliminary generative AI guidelines for its employees, while the state of Iowa made headlines last month after an assistant superintendent used ChatGPT to determine which books should be removed from Mason City school libraries, pursuant to a recently enacted state law that prohibits texts containing descriptions of “sex acts.”

For the remainder of 2023 and into early 2024, city and state agencies are expected to release the first wave of generative AI policies, which will address how to balance the use of AI-powered tools like ChatGPT against the risk that text prompts containing sensitive information could violate public records laws and disclosure requirements.

Currently, Seattle, San Jose, and the state of Washington have warned their respective staffs that any information entered into a tool like ChatGPT could automatically be subject to disclosure under current public records laws.

This concern also extends to the strong likelihood that sensitive information will subsequently be ingested into the corporate databases used to train generative AI tools, opening the door to potential abuse and the dissemination of inaccurate information.

In response, municipal employees in San Jose (CA) and Seattle are required to fill out a form every time they use a generative AI tool, while the state of Maine is prioritizing cybersecurity concerns and has prohibited all executive-branch employees from using generative AI tools for the rest of 2023.

According to Loter, Seattle employees have expressed interest in using generative AI even to summarize lengthy investigative reports from the city’s Office of Police Accountability, which contain both public and private information.

When it comes to the data that large language models (LLMs) are trained on, there is still an extremely high risk of machine hallucinations or of mistranslations of specific language that could convey an entirely different meaning and effect.

For example, San Jose’s current guidelines don’t prohibit using generative AI to create a public-facing document or press release; however, the likelihood of the AI tool replacing certain words with incorrect synonyms or associations is strong (e.g., “citizens” vs. “residents”).

Regardless, the next phase of AI’s maturation is here, taking us far beyond the early days of word processors and other machine learning capabilities that we have often ignored or overlooked.

Editor’s note: This article was written by an nft now staff member in collaboration with OpenAI’s GPT-3.
