Staying up-to-date on the latest regulations has always been a critical part of healthcare, and artificial intelligence (AI) has introduced an entirely new set of policies for providers to follow. Although innovation typically outpaces regulation, federal agencies are actively exploring how to use AI responsibly—and formal guidance has begun to emerge.
The U.S. Food and Drug Administration (FDA), for example, is spearheading many efforts to understand AI’s potential impact on healthcare and public safety. In May 2025, the FDA appointed Jeremy Walsh as its first Chief AI Officer. In this role, Walsh is tasked with designing, developing, and delivering cutting-edge AI solutions for healthcare. Further agency announcements and plans signal an intent to actively lead in the use of AI in medical research, development, and regulation.
In this article, we’ll provide an overview of how emerging federal policies and initiatives from the FDA and other organizations are driving the future of AI in healthcare, as well as the potential impacts providers need to prepare for.
The FDA Makes Major AI Announcements
What does the creation of a Chief AI Officer signify? Within the FDA, we're seeing not just an operational shift, but also a cultural one. In the past, the FDA mostly focused on reviewing and approving AI-based products and algorithms. Today, the agency is using AI to enhance its own operations.
In May 2025, the FDA announced the completion of its first AI-assisted scientific review pilot, a test of how generative AI could streamline regulatory document review. In tandem, the agency also revealed intentions for an agency-wide AI rollout. The plan involves deploying AI tools across all its centers, including drugs, biologics, medical devices, and food safety.
The goal of this plan is to reduce repetitive administrative tasks, accelerate review cycles, and improve the consistency of scientific assessments. It also frees up reviewers to focus on analysis rather than tasks like text synthesis and summarization. FDA Commissioner Martin A. Makary, M.D., M.P.H., explained the major benefits, saying, “There have been years of talk about AI capabilities in frameworks, conferences, and panels, but we cannot afford to keep talking. […] The opportunity to reduce tasks that once took days to just minutes is too important to delay.”
The takeaway for providers: The FDA’s adoption of AI likely signals faster regulatory reviews and more consistent oversight, especially once this technology is tested and fully implemented. If successful, this would mean new technology and treatments reach patients much more quickly, creating a faster-paced landscape for providers to keep up with.
NIH’s Role: Shaping Responsible AI in Research
The National Institutes of Health (NIH) has issued two new Requests for Information (RFIs): NOT-OD-25-117 and NOT-OD-25-118. NOT-OD-25-117 invites comments from the public on the NIH’s broader AI strategy. Meanwhile, NOT-OD-25-118 focuses on responsibly developing and sharing generative AI tools using NIH-controlled access data (e.g., genomic or clinical datasets).
These two RFIs are designed to define how AI can responsibly advance biomedical discovery. The NIH’s goal is to create a foundation for safe, interoperable, and equitable AI biomedical research.
The takeaway for providers: AI will inevitably shape how biomedical data is generated, shared, and applied in clinical research. In the near future, AI-driven insights will likely play a much larger role in assisting providers with patient care and evidence-based decision-making.
The Proposed 10-Year Moratorium on AI
The original “One Big Beautiful Bill Act” included a 10-year moratorium that would have barred states from enforcing their own AI regulations. The provision generated vast criticism, with opponents arguing it would strip states of the ability to protect their residents, and it was ultimately removed from the final bill.
As a result, significant state-level AI legislation will go into effect in 2026, and providers should be aware of it, as it will likely set a trend for future regulations. Notable examples include:
- California’s Assembly Bill 489, which restricts entities that deploy or develop AI technology from using titles or language that imply an AI system is a licensed medical professional.
- California’s Assembly Bill 2013, which mandates comprehensive transparency in AI training datasets.
- Colorado’s Senate Bill 205, which requires detailed disclosure and guardrails against discrimination for high-risk AI systems.
- The Texas Responsible AI Governance Act, which categorically restricts the deployment of AI for certain purposes (i.e., behavioral manipulation, discrimination, creation or distribution of child sexual abuse content or unlawful deepfakes, and the infringement of constitutional rights).
The takeaway for providers: Providers should expect a growing network of AI regulations that will shape how clinical and administrative technologies are used in healthcare. Keeping up with changes at both the state and federal levels will be crucial for compliance.
America’s AI Action Plan Defines a National AI Vision
In July 2025, the White House introduced America’s AI Action Plan. This comprehensive national strategy was designed to keep the U.S. at the forefront of AI innovation and includes three pillars:
- Accelerate AI innovation: Ensuring the U.S. has the most powerful AI systems in the world and leads in the creative and transformative application of such systems. Key points include funding research, education, and technology transfer to foster innovation across industries, including healthcare.
- Build American AI infrastructure: Building vastly greater energy generation capacity than is available today. This pillar highlights investment in computing resources, data access, and model development to enable both public and private sectors to scale AI capabilities.
- Lead in global AI diplomacy and security: Promoting the adoption of American AI systems, computing hardware, and standards throughout the world. To achieve this goal, the pillar emphasizes creating an enduring global alliance with other nations while protecting our innovations and investments from adversaries.
Notably, the plan directs federal agencies to review and repeal regulations that could hinder AI adoption. In healthcare, this directive could accelerate the validation of AI-driven diagnostics or drug development platforms. It also supports open-source and open-weight AI models, enabling researchers and clinicians to validate and adapt AI tools and removing barriers to adoption.
The takeaway for providers: America’s AI Action Plan reveals increased support at the federal level for AI innovation that could potentially streamline clinical workflows and access to advanced diagnostic and research tools.
Navigating the Future of AI in Healthcare
Just as governing bodies are attempting to balance AI’s innovative possibilities with management of its risks, healthcare providers should ensure that any tools they use are validated, bias-mitigated, and compliant before being integrated into clinical practice. With RXNT’s new AI software solutions like Ambient IQ, you can adopt AI-driven tools to enhance operations without compromising safety and compliance. Discover how RXNT can help your practice embrace AI responsibly.