Legal frameworks for digital transformation and artificial intelligence

The ongoing digital transformation and the rapid growth of artificial intelligence (AI) are profoundly influencing societies, economies, and legal systems across the world. Central to the contemporary debate is the question of how to balance innovation, individual rights, and collective responsibility. Above all, it remains essential that humans, not machines, stay in control of decision-making processes.

The National Information Processing Institute (OPI PIB) conducts extensive research on artificial intelligence and the digital transformation, including their legal implications.

The law vs AI

‘Understanding the legal impact of artificial intelligence is vital in ensuring the protection of individuals’ rights as automated decision-making becomes more widespread,’ explains Marek Michajłowicz, Deputy Head for Software Development at OPI PIB. ‘In Poland, such analyses enable the adaptation of national legislation to new challenges, including data protection and liability for algorithmic errors. At the European level, common frameworks such as the AI Act guarantee uniform protection standards across the EU and help prevent market fragmentation. Assessing the legal effects also contributes to public trust in emerging technologies, which is crucial for innovation to be embraced. Finally, conscious and careful lawmaking ensures that AI development serves the common good, while preventing inequality and abuse.’

Although algorithms deliver substantial advantages, they also pose considerable risks. Their power lies in their ability to process vast amounts of data to predict our choices and shape our behaviour. Platforms like Amazon and Netflix already influence our purchasing and viewing decisions, limiting the options we consider. Whoever controls the algorithm holds real power, and that power, in turn, shapes how wealth is distributed today.

‘From a legal perspective, this highlights the pressing necessity of regulations that protect individuals and ensure their fundamental rights. Without prudent, well-considered regulation, we risk a “Darwinian anarchy”, in which the strongest prevail, regardless of whether they act in the public interest,’ says Luigi Lai, research and technical expert at OPI PIB.

European regulations

The General Data Protection Regulation (GDPR) forms the foundation of the EU’s data protection framework, safeguarding the right not to be subjected to decisions made exclusively by automated systems and ensuring human involvement in decision-making. The AI Act, another key legal instrument, classifies AI systems by their level of risk, ranging from minimal to unacceptable. The act also specifies the responsibilities of algorithm developers and users, particularly in sensitive sectors such as healthcare, finance, and the judiciary. Its goals are to strengthen the protection of fundamental rights and to support transparency and accountability.

Regulating major online platforms

Major online platforms, many of them based across the Atlantic, are also subject to the EU’s Digital Services Act (DSA) and Digital Markets Act (DMA). These regulations are intended to support fair competition, ensure transparency, and protect consumers in the digital marketplace.

A challenge for the future

The constant evolution of legislation reflects a growing recognition that the law must keep pace with rapidly advancing technology. An unregulated digital transformation could have far-reaching consequences for both current and future generations.

Shaping a fair digital future is a responsibility shared by legislators, technology firms, and individuals. Collaborative action is crucial in the development of a society in which innovation empowers individuals without restricting their rights and freedoms.

Watch the latest episode of the OPI PIB Academy series on our YouTube channel, which features an OPI PIB expert discussing the legal aspects of AI.