House Restricts Congressional Use Of Generative AI Other Than ChatGPT Plus, Report Says
KEY POINTS
- The House memo reiterated that sensitive data cannot be fed to the OpenAI chatbot
- House offices can only use ChatGPT Plus for 'research and evaluation'
- OpenAI reportedly lobbied EU officials for weaker AI regulations
The House of Representatives has issued new rules on the use of OpenAI's artificial intelligence chatbot ChatGPT, indicating that lawmakers are taking the explosive growth of generative AI seriously.
"Use of the product is for research and evaluation only. House offices are authorized to experiment with the tool on how it may be useful to congressional operations, but offices are not authorized to incorporate it into regular workflow," House chief administrative officer Catherine L. Szpindor wrote in a memo to staffers Monday that was obtained by Axios.
Szpindor noted that House offices using OpenAI's large language model (LLM) are only authorized to use the $20-per-month ChatGPT Plus version of the chatbot, which "incorporates important privacy features that are necessary to protect House data."
The memo reiterated that the chatbot can only be used "with non-sensitive data," adding that House offices should use ChatGPT Plus "with privacy settings enabled" to ensure that history and interactions with the bot are not stored in the LLM's system.
Finally, Szpindor wrote that no other versions of OpenAI's chatbot "or other" LLMs have been authorized for use in the House, which means other chatbots such as Google's Bard and Microsoft's Bing cannot be used in House offices.
The latest guardrails on ChatGPT use in the House show how seriously lawmakers are weighing the rapid rise of generative AI and its implications for legislative work, Axios reported.
News of the House's efforts to rein in generative AI came about a week after Senate Majority Leader Chuck Schumer, D-N.Y., urged his colleagues to hasten work on AI regulation.
Schumer introduced a strategy for regulating the fast-evolving technology, recommending "AI Insight Forums" that would bring together AI industry leaders, stakeholders, critics and policymakers to discuss AI risks and mitigation measures, Semafor reported.
The challenges of regulating AI, and the technology's impact on the legal sector and potentially the legislative realm, became even more apparent after a New York-based lawyer used ChatGPT to prepare a court filing.
Earlier this month, lawyer Steven Schwartz apologized to a judge for submitting to the court a brief with fake cases and rulings generated by ChatGPT. "I simply had no idea that ChatGPT was capable of fabricating entire case citations or judicial opinions, especially in a manner that appeared authentic," Schwartz said.
On the legislative side, Rep. Ted Lieu, D-Calif., introduced in January what he claims was "the first ever piece of federal legislation written by artificial intelligence."
Responding to the prompt "You are Congressman Ted Lieu. Write a comprehensive congressional resolution generally expressing support for Congress to focus on AI," ChatGPT came up with a resolution that emphasizes Congress' "responsibility" in AI regulation, according to a press release from Lieu's office.
Also in January, Rep. Jake Auchincloss, D-Mass., delivered a ChatGPT-generated speech on the floor that his staff believes was the first time an AI-written speech was presented in Congress.
As the House finds ways to mitigate AI risks within its own operations, national regulation appears to be far off even as the European Union races ahead.
Among the major topics of debate at the Capitol is AI's potential impact on the livelihoods of workers in finance, medicine and other industries, according to Reuters.
Another key point of discussion is whether lawmakers should regulate the AI developer or the company that uses the technology to interact with consumers.
The U.S. has yet to make major moves on AI regulation, but OpenAI, maker of the chatbot the House has now restricted, has reportedly managed to loosen some of the EU's planned restraints.
Time recently reported how OpenAI lobbied EU officials to water down some provisions of the sweeping AI Act. In particular, OpenAI wanted to remove ChatGPT's designation as a high-risk tech. An expert said the company "got what they asked for."
Meanwhile, some observers noted how last month's Senate hearing with OpenAI CEO Sam Altman was "dangerously friendly." Some industry analysts said the hearing's atmosphere suggested that tech leaders may be allowed to write the rules governing AI, which in turn could hurt smaller firms and result in weak regulations.