MHRA launches "AI Airlock": new regulatory sandbox for medical devices based on Artificial Intelligence
The AI Airlock initiative, launched by the MHRA, is an innovative regulatory sandbox designed to support the development and evaluation of artificial intelligence as a medical device (AIaMD). This controlled regulatory environment allows manufacturers of AI-based medical technology to test compliance approaches, clinical evidence and algorithmic validation prior to formal submission. With a focus on adaptive machine learning models, algorithm safety and continuous monitoring, the AI Airlock represents a strategic opportunity to accelerate market access and UKCA compliance.
MDCG 2019-11 Rev.1 - New guidance on qualification and classification of software as a medical device or in vitro diagnostic medical device
The newly revised MDCG 2019-11 (Rev. 1) clarifies the qualification and classification of medical device software (MDSW) and in vitro diagnostic (IVD) software in the context of the MDR and IVDR. The document covers the classification rules and the interplay with the AI Act and the EHDS, and includes practical, up-to-date examples. Smart MDR supports MDSW manufacturers in managing compliance and technical documentation.
MDCG publishes guide on the joint application of the MDR, IVDR and AI Act: what manufacturers should know
The new MDCG 2025-6 guidance, published in June 2025, clarifies the joint application of the AI Act and the European MDR/IVDR regulations to medical devices and IVDs that incorporate artificial intelligence systems. The document addresses classification, risk management, the device life cycle, technical documentation and post-market monitoring. Smart MDR supports manufacturers in integrating these requirements into their quality management system and preparing for audits.
Is your organization prepared for the AI literacy requirements under the EU AI Act?
The European Union's AI Act establishes new legal obligations for all organizations that develop or use artificial intelligence systems. Article 4 of the AI Act, applicable since February 2, 2025, requires companies to ensure an adequate level of AI literacy among their employees and service providers.
FDA Concludes AI-Assisted Scientific Review Pilot and Prepares for Agency-Wide Rollout by June 2025
The recent completion of the FDA's AI-assisted scientific review pilot marks an important step in the modernization of its review processes. With the promise of faster evaluations and fewer repetitive tasks, the FDA plans to roll out generative AI across the agency by June 2025.
Team-NB Publishes Official Position on the Application of the AI Act to Artificial Intelligence Medical Devices
Team-NB's official position paper on the implementation of the AI Act offers key guidance for manufacturers of medical devices with artificial intelligence. The document analyzes the interplay between the AI Act and the requirements already established by the MDR and IVDR, highlighting the classification of AI-enabled devices as high-risk systems.
WHO Publishes Guidelines on Ethics and Governance of Artificial Intelligence in Healthcare
AI in healthcare has transformative potential, but it also raises ethical and regulatory challenges. The WHO has published guidance on the ethics and governance of AI, with a focus on large multimodal models (LMMs). These systems can revolutionize medical diagnosis, scientific research and drug development, but it is essential to ensure data security, algorithmic transparency and equitable access.
IMDRF Presents Playbook for Evaluating Medical Devices Based on Artificial Intelligence
The IMDRF has published a playbook for the evaluation of artificial intelligence and machine learning (AI/ML) medical devices, addressing safety, efficacy and regulatory requirements. Global harmonization of the requirements for these devices is essential to ensure transparency, reliability and compliance.
FDA Publishes New Guidelines for the Use of Artificial Intelligence in Medical Products
The FDA has released a new document on the use of artificial intelligence (AI) across the life cycle of medical products, reinforcing the need for transparency, safety and regulatory compliance. The agency has identified four priority areas: global collaboration to protect public health, support for regulatory innovation, development of standards and best practices, and continuous monitoring of AI performance. The guidelines cover everything from algorithm evaluation and bias mitigation to the resilience and cybersecurity of AI-based medical products. The initiative underscores the FDA's commitment to responsible innovation, ensuring that AI applied to health contributes to medical advances without compromising patient safety.