On December 9th, 2022, the fourth meeting of the interdisciplinary team took place, consisting of researchers from the Future Law Lab of the Jagiellonian University, computer scientists and programmers, and practising lawyers from the BSJPtech project.
Our joint meeting, held in hybrid form, took place at the Faculty of Law and Administration of the Jagiellonian University, in the Auditorium Maximum. The meeting was devoted to the European Commission's draft regulation on artificial intelligence (AI), which is intended to facilitate access to justice in this area and increase public confidence in such systems.
Proceedings of the meeting.
The seminar was led by lawyers from the BSJPtech project, who opened the meeting by discussing the workflow and main features of the draft AI Regulation, followed by a detailed presentation of prohibited practices and high-risk AI systems. In addition, the obligations of artificial intelligence providers and users were discussed, along with an analysis of the applicable standards and an assessment of conformity with them. At the very end, the lawyers touched on transparency obligations and post-market monitoring of artificial intelligence. These issues were addressed by Jakub Kabza, Ani Sokolowska, Maciej Jura and Marcin Kroll.
What is the AI Regulation?
The Regulation is an attempt to improve the functioning of the internal market by establishing a single legal framework, in particular for the development, marketing and use of artificial intelligence. This will involve restrictions on the freedom to conduct economic and scientific activities. The aim of these potential restrictions is to shift the focus towards humans, in particular the protection of fundamental rights.
What is artificial intelligence?
According to the definition proposed in the draft regulation, it is ‘software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations or decisions influencing the environments they interact with.’
The aforementioned Annex I provides a list of techniques and approaches that form the basis for qualifying a given system as an artificial intelligence system. Examples include machine learning mechanisms, logic- and knowledge-based methods, and statistical approaches.
What risks does AI pose?
The risks associated with marketing AI stem from the way it has been designed and the data it works on. Both design and data can be biased, intentionally or unintentionally. AI algorithms can be programmed to produce a predetermined outcome. Describing a complex and ambiguous reality with numbers can also be a problem.
Therefore, the EU wants to introduce a risk-based approach to AI. This will be based on a case-by-case analysis of AI systems from a fundamental rights and security perspective. All solutions used in the Regulation will be in line with current legislation, e.g. the Charter of Fundamental Rights and the GDPR, as well as with the EU's previous AI policies on technology development, digital markets, etc.
Transparency obligations will be imposed on system providers or their users and will apply to systems that: interact with humans; are used to detect emotions or to identify associations with (social) categories on the basis of biometric data; or generate or manipulate content (deepfakes).
Innovation support measures.
The AI Regulation provides for the creation of regulatory sandboxes by national authorities. These are top-down controlled environments that facilitate the development, testing and validation of innovative AI systems for a limited period of time, according to a defined plan, before they are placed on the market or put into service. All activities will take place under the direct supervision and guidance of the competent authorities.
Authorities will have specific remedies in the event of any significant risks to health, safety or fundamental rights arising during the development and testing phase of the systems. The Regulation also provides for the possibility of processing, within the sandbox, personal data collected for other purposes, subject to certain limitations.

