EU’s Artificial Intelligence Act will lead the world on regulating AI

The proposed Artificial Intelligence Act (AI Regulation) was accepted by the EU Member States—the Council of the EU—on December 6, 2022, after numerous revisions and deliberations.

Once passed, the AI Act will be the first piece of horizontal EU legislation to govern AI systems, establishing rules for the safe and trustworthy placing of AI-enabled products on the EU market. The Regulation’s extraterritorial scope (i.e., it applies to providers and users outside the EU when the output produced by the system is used in the EU) and exceptionally high fines of up to €30 million or up to 6% of the company’s total worldwide annual turnover for the prior financial year, whichever is higher, are anticipated to have a significant impact on the market.

The current version of the AI Act will then need to be approved by the European Parliament.

The key developments and issues that could have a particular impact on life sciences companies are examined below:

Which systems fall within the definition of “AI systems” has been a subject of discussion. The Council supports a narrower definition, which the Commission could further refine to exclude all forms of “conventional” software. The proposed definition refers to systems with “elements of autonomy”. However, this part of the definition is neither specified nor quantified, leaving room for legal ambiguity and interpretation.

General-purpose AI systems can be integrated into high-risk AI systems or environments and used for a wide variety of applications. The current text of the Artificial Intelligence Act does not make clear what obligations manufacturers of such AI systems would be subject to, or how far those obligations would be shared with the developers of the high-risk AI systems into which they are incorporated. This will directly affect, for example, personalized medicine research, patient engagement apps, and other applications that may be built on general-purpose AI systems but licensed for particular uses and incorporated into high-risk AI systems.

The original AI Act proposal relied on harmonized standards to facilitate compliance and subjected high-risk AI systems to conformity assessments. This crucial element has been preserved in the current text. Industry is particularly concerned about it because of the risk of overlap with existing conformity assessment requirements, such as those for medical devices. Developers of such systems face additional challenges because there are currently no harmonized standards against which they can assess compliance and no designated Notified Bodies for AI conformity assessments. The proposed transition period is only three years, which recent experience with the EU Medical Devices Regulation has shown to be insufficient for thorough implementation.

The Artificial Intelligence Act proposes the creation of “sandboxes”, controlled environments for the development, training, testing, and validation of cutting-edge AI systems in real-world settings. Both industry and regulators have recognised this as essential for innovation, as it gives developers reasonable obligations and predictability, lowering costs and increasing stakeholder participation. It has been recommended that the rules for solutions tested in Member State sandboxes be less strict overall, with reduced or even no liability. This is specifically intended to help SMEs and start-ups, which are the key innovators in this field.

National market surveillance authorities would be responsible for enforcing the Artificial Intelligence Act, while an AI Board would oversee it at the EU level. The most recent draft of the AI Act gives the AI Board greater power and autonomy. The AI Board’s primary responsibility would be to ensure that the AI Act is applied and enforced consistently across all EU Member States, and it is designed to engage with stakeholders flexibly and quickly. Enhancing stakeholder involvement is a key component of efforts to make the sandboxes operational and support AI innovation in the EU.

Notwithstanding these open questions about its eventual shape, the Artificial Intelligence Act is expected to be adopted soon. Businesses using AI systems are urged to familiarise themselves with it, consider how their existing applications would be categorised, and assess which regulatory requirements would apply to them. In addition, a pragmatic approach to applying AI principles and data governance now will allow an agile transition once the final AI Act is passed.

Businesses should also take into account the draft AI Liability Directive, which contains rules on the duties and liabilities of participants in the AI supply chain.
