US PATENT OFFICE BANS GENERATIVE AI USE CITING SECURITY AND BIAS CONCERNS, ALLOWS TESTING WITHIN AGENCY
In a bid to preserve security, minimize bias, and forestall potentially malicious activity, the US Patent and Trademark Office (USPTO) has banned the use of generative artificial intelligence (AI). The restriction covers systems such as OpenAI’s ChatGPT and Anthropic’s Claude, barring their use for any work tasks outside the agency's internal testing environment.
The USPTO permits AI within a confined testing environment so that it can understand the technology's capabilities and limits and prototype AI-based solutions. Staff may experiment there, but they cannot use any outputs from generative AI tools, including images or videos, in their official work.
Despite these restrictions, the USPTO does allow certain vetted AI programs. For instance, employees can use the AI tools built into the agency's own database to search registered patents and patent applications.
Furthering this incorporation of AI into patent searches, the USPTO approved a $75 million contract with Accenture Federal Services earlier this year to augment its patent database with AI-enabled search, aiming to improve the user experience and speed up searches.
The USPTO is not alone in taking a cautious approach to generative AI. The National Archives and Records Administration has similarly blocked the use of ChatGPT on government-issued laptops. Such restrictions reflect a shared apprehension among government institutions about the potential downsides of generative AI.
NASA has also issued its own guidelines for AI use. The rules ban the use of AI on sensitive data but allow staff to experiment with it for tasks such as coding and summarizing research.
NASA and Microsoft are jointly developing an AI chatbot that aggregates satellite data to make it easily searchable. At present it is available only to NASA scientists and researchers, but the long-term goal is to make spaceborne data accessible to everyone.
While these regulatory measures may seem stringent, they play an important role in steering AI development toward security and transparency while guarding against misuse and bias. As institutions continue to delineate the bounds of AI use, the challenge remains balancing the technology's potential against its risks.
As these developments play out within government agencies and beyond, the intersection of AI technology and policy continues to take shape, hinting at what future AI use will look like, not just in the halls of government but in tech-savvy businesses and private life. We may well be watching the road map for the broad, ethical use of AI being written. How far we can travel without losing sight of security and ethics remains an open question. A cautious embrace of AI at the government level may well set the pace at which it spreads through the rest of society. A prudent approach indeed; but then, who said revolution was easy?