**This is an AI-powered machine translation of the original text in Portuguese.**
The LGPD and AI governance are not only compatible but interdependent. This relationship requires an approach that goes beyond procedural compliance, focusing instead on substantive precautions aimed at the social and individual impacts of technology use. “Data Protection in Light of Responsible AI” critically analyzes the intersection between the LGPD and the development of AI systems from the perspective of responsible AI.
The study contrasts two interpretive approaches. On one hand stands a procedural approach, centered on the formal requirements of each stage of data processing; on the other, a finalistic view, centered on the concrete effects of AI applications on rights, including informational self-determination. The study proposes a middle ground: substantive precaution, which combines the social value of privacy with the promotion of innovation and collective benefits.
The research revisits the evolution of data protection rights in Europe, from the affirmation of personality and democratic participation to the procedural accountability model of the GDPR. It then traces more recent movements returning to the social value of privacy, with emphasis on German constitutional jurisprudence and documents from the European Data Protection Board, which recognize the importance of collective trust in technological systems.
In the field of AI regulation, the study analyzes the European Union's AI Act and Brazil's PL 2338/2023, highlighting how both employ a risk-based approach. It argues, however, that such risks should be interpreted in light of AI ethics, not only through formal categories of compliance. These ethics encompass values such as transparency, non-discrimination, explainability, reliability, and respect for privacy, which should guide the actions of developers, regulators, and interpreters of the law.
The study draws on real cases to show how the substantive precautionary principle avoids the pitfalls of overly restrictive or formalistic interpretations of the LGPD. It analyzes the use of AI in credit scoring models and the processing of sensitive data in public health contexts (such as the COVID-19 pandemic), demonstrating that data protection should not be a barrier to innovation but rather a lever for promoting fundamental values in the digital world.

