US opens official AI platform for federal workers as other countries weigh options


Federal employees in the United States will soon have access to some of the world’s most widely used artificial intelligence tools. The General Services Administration (GSA) last month announced the launch of USAi, a secure platform giving civil servants the chance to experiment with models from OpenAI, Anthropic, Google and Meta. Agencies can opt in voluntarily, and employees will be able to use the system for tasks ranging from coding to document drafting.

The move is framed as part of the Trump administration’s AI Action Plan, which promises to give the government “a competitive advantage” by embedding new technology into routine work. The platform is hosted on GSA-managed cloud servers to prevent sensitive data from being fed back into commercial models. OpenAI and Anthropic have offered their systems for just one dollar in the first year, a symbolic deal that also secures them a foothold in federal procurement. Critics warn the arrangement could squeeze out smaller competitors before they have a chance to bid.

The announcement follows the addition of ChatGPT, Claude and Gemini to the GSA’s Multiple Award Schedule, which sets pricing baselines for federal buyers. Officials present the initiative as a way to improve workflows, but it comes at a time when civil servants face deep cuts under the Department of Government Efficiency, and at a moment when the Government Accountability Office has flagged persistent weaknesses in training, data governance and oversight. Cybersecurity analysts caution that expanding the use of generative models inside government systems creates new vulnerabilities that the government has not yet fully addressed.

While Washington has opted for rapid deployment, Europe is moving in a different direction. The EU AI Act, finalised earlier this year, introduces binding rules on transparency, risk classification and accountability. The regulation is the first of its kind worldwide, yet it lands at the same time as governments across the continent are testing generative tools in day-to-day work. European Parliament President Roberta Metsola has warned that over-reliance on automated replies risks reducing political communication to formulaic outputs. At the same time, a Microsoft-backed study estimated that public sector workers across Europe, the Middle East and Africa could save a combined 23 million hours every week if routine administrative tasks were automated with AI.

National administrations are also grappling with their own challenges. In the Netherlands, the Court of Audit concluded in 2024 that ministries were already using AI without sufficient oversight. It found that the government lacked clear safeguards to prevent bias and ensure accountability. Researchers at Utrecht University added another layer of concern, pointing to the environmental impact of running large language models and urging policymakers to take sustainability into account when expanding use.

In the United Kingdom, civil servants have been given formal guidance on how to handle generative AI. The Cabinet Office told staff that while the tools can help summarise information and draft correspondence, officials remain responsible for the accuracy of the output and must avoid feeding in sensitive material. Parliament’s Public Accounts Committee has pressed departments to do more to ensure accountability, warning in a 2025 report that without clear oversight the government risks deploying systems it cannot fully explain.

Advisory groups and consultancies underline the scale of change underway. Boston Consulting Group has highlighted productivity gains across sectors, arguing that governments could benefit from efficiency improvements similar to those already visible in business. OpenGov, which works with public authorities worldwide, describes AI as a potential driver of transparency and citizen engagement if deployed carefully. But these optimistic projections sit uneasily alongside warnings from cybersecurity experts and national watchdogs that controls are lagging behind.

The picture that emerges is one of rapid integration paired with uneven governance. In the United States, civil servants will soon have routine access to generative models at their desks. In Europe, regulators have designed an ambitious legal framework but must now ensure that it is applied consistently across member states. In the Netherlands and the UK, watchdogs have identified gaps in accountability, sustainability and oversight. Across all regions, governments are experimenting with the same class of commercial tools, often supplied by a small number of American firms.

AI has already moved from pilot projects to everyday use inside government offices. The rhetoric of experimentation belies the fact that these systems are now embedded in procurement frameworks, training programmes and daily workflows. What has not kept pace is oversight. Questions of security, accountability, environmental impact and long-term vendor dependence remain unresolved.

As with earlier waves of digital technology, adoption is running ahead of regulation. For governments, the harder task is ensuring that efficiency gains do not come at the cost of transparency, fairness and control.


New Stardom is an independent magazine covering the Future of Work, AI, and emerging job trends.
