A data protection impact assessment (DPIA) is required by the GDPR for certain types of data processing and is intended to help identify and minimize risks. Does it also apply to systems that use artificial intelligence? An overview with recommendations.
Podcast on the topic: Risk assessment for machine learning and artificial intelligence, episode 29 of the Privacy Deluxe podcast.
Introduction
Article 35 GDPR introduces the concept of the data protection impact assessment and describes when such an assessment must be carried out. Paragraph 1 mentions that the provision applies in particular where new technologies are used. Artificial intelligence is one such technology. ([1])
A risk assessment should always be readily producible: it is both a prerequisite for determining whether a DPIA is required in the first place and an integral part of any DPIA that is carried out.
Risk assessment = multiplication of three values; see the referenced contribution.
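The multiplication of three values can be sketched in a few lines. Note that the factor names and the 1–5 rating scale below are illustrative assumptions on my part, not the exact model from the referenced contribution:

```python
# Hypothetical risk-scoring sketch: risk as the product of three
# factors, each rated on an assumed scale of 1 (low) to 5 (high).
# Factor names and scale are illustrative, not the cited model.

def risk_score(probability: int, severity: int, extent: int) -> int:
    """Multiply three ratings (each 1..5) into a single risk value."""
    for value in (probability, severity, extent):
        if not 1 <= value <= 5:
            raise ValueError("each factor must be rated from 1 to 5")
    return probability * severity * extent

# Example: moderate probability (3), high severity (4), broad extent (4)
print(risk_score(3, 4, 4))  # 48 out of a maximum of 125
```

The advantage of a multiplicative model is that a single very low factor (e.g. a negligible probability of occurrence) pulls the overall score down sharply, which matches the intuition that all three dimensions must be present for a high risk.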
Must a DPIA be carried out for every machine learning system? Machine learning can certainly be regarded as a new technology: revolutionary approaches such as the Transformer architecture and powerful pre-trained AI models, but also the resurgence of LSTM (Long Short-Term Memory, an invention from Germany), are novel in combination and in some cases individually.
The cited legal provision refers to the nature, scope, context and purposes of the processing of personal data. For digital services, the criterion regarding the scope of data processing should regularly be considered fulfilled.
Since a data protection impact assessment is not required for every data processing operation, the effort for work that must be performed anyway, with or without a DPIA, is not attributable to the DPIA.
Examples of such work: mandatory information obligations, system security, staff training.
A complete DPIA is required pursuant to Article 35(1) GDPR if the data processing is likely to result in a high risk to the rights and freedoms of natural persons. Which data processing takes place must be known anyway, due to the information obligations under Article 13 and Article 14 GDPR.
According to Art. 35(2) GDPR, the controller shall seek the advice of the data protection officer when carrying out a data protection impact assessment. However, this point is irrelevant for the question of whether a DPIA is needed, as can already be seen from the fact that paragraph 2 states the data protection officer is only to be consulted where one has been designated (cf. § 38 BDSG). ([1])
Pursuant to Article 35(4) GDPR, the supervisory authorities compile a list of processing operations for which a DPIA is required. The list of the DSK (German Data Protection Conference) gives examples and mentions, for instance, customer support by means of artificial intelligence.
Data Protection Impact Assessment
First of all, the GDPR only applies to personal data. Access to end devices, which is regulated by the lex specialis § 25 TDDDG (until 14.05.2024: TTDSG), is usually not the core subject of AI applications and can be left aside here.
All data other than (potentially) personal data are therefore irrelevant for a DPIA. Note, however, that a data point that is non-personal in itself becomes personal if it appears together with a personal data point and the same controller has knowledge of both at the same time. Cookies, for example, are to be regarded as personal data because of their connection with the IP address.
As already mentioned, AI systems are new technologies and must therefore be examined more closely under the cited provision. This also makes sense: when something new is introduced, there has been no previous consideration of whether a DPIA should be carried out.
Art. 35(3) GDPR mentions cases in which a DPIA must be carried out. These cases are, in brief:
- Systematic and extensive evaluation of personal aspects of natural persons, including profiling.
- Extensive processing of special categories of personal data (political opinions, health data, etc.), see Art. 9(1) GDPR.
- Systematic extensive monitoring of publicly accessible areas.
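One possible reading of the interplay between the general high-risk clause of Art. 35(1) GDPR and the cases listed above can be sketched as a simple check. The flag names are my own shorthand, and this is an illustrative reading of the provision, not legal advice:

```python
# Sketch of the DPIA trigger logic around Art. 35 GDPR.
# Flag names are my own shorthand; illustrative only, not legal advice.

def dpia_required(
    systematic_profiling: bool,         # evaluation of personal aspects incl. profiling
    special_categories_at_scale: bool,  # extensive processing of Art. 9 data
    public_area_surveillance: bool,     # systematic monitoring of public areas
    likely_high_risk: bool,             # general clause of Art. 35(1) GDPR
) -> bool:
    # The three listed cases are examples where a DPIA is required
    # "in particular"; the general high-risk clause applies in any case.
    listed_case = (
        systematic_profiling
        or special_categories_at_scale
        or public_area_surveillance
    )
    return listed_case or likely_high_risk

print(dpia_required(False, False, False, True))  # True
```

The point of such a sketch is only to make the structure of the provision visible: each listed case is a per-se trigger, while the high-risk clause catches everything else.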
For any type of system, a DPIA must be performed if one of these cases applies and the other conditions are met, which include the risk for the persons concerned. Consider the example of the video conferencing software Zoom. Zoom states in its terms of use (valid from 07.08.2023, as of 10.08.2023):
You agree that Zoom may access, use, collect, create, modify, distribute, process, transmit, maintain, and store any data generated by the service for any purpose permitted by law, including for product and service development, marketing, analysis, quality assurance, machine learning or artificial intelligence (including for training and tuning algorithms and models) …
Excerpt from Zoom's terms of use, emphasis mine.
As stated, all data from video conferences held on Zoom can be used by Zoom for virtually any purpose in virtually any way. This includes the video images of conference participants just as much as spoken words or transcripts of those words. These terms also permit Zoom to pass on or further use the transcripts and other data. After public pressure, Zoom added a clause promising that customer data will only be used for AI training with consent. Nevertheless, Zoom reserves the right to use customer data without consent for numerous other purposes, including marketing and machine learning! See also the comments at the end of this post.
Zoom mentions applications of artificial intelligence in its terms of service. Whether or not such features are actually used does not appear to be decisive for the question of whether a DPIA is required.
My name is Klaus Meffert. I have a doctorate in computer science and have been working professionally and practically with information technology for over 30 years. I also work as an expert in IT & data protection. I achieve my results by looking at both technology and law, which seems to me absolutely essential when it comes to digital data protection. My company, IT Logic GmbH, also offers consulting and the development of optimized and secure AI solutions.
