Data Ethics and Responsible Artificial Intelligence (AI)

Data analytics and Artificial Intelligence (AI) offer the potential for great improvements for customers and our business. At Allianz, we use AI to personalize products and services, so that we can find the best-fit product for each customer’s individual needs. We improve our underwriting and pricing capabilities, so that customers receive accurate offers quickly. We embed AI in our claims processes, so that, when something happens, claims are handled quickly and effectively. We define AI as any technology that enables machines to learn and think in a similar way to humans.

Public and regulatory attention towards potential unintended consequences of AI and the use of data is increasing. 

As a trusted partner, Allianz has developed a range of initiatives related to data ethics and responsible usage of AI systems, centered on the interests and needs of stakeholders, such as our customers, employees, and business units.

We go beyond compliance with all applicable legal and regulatory requirements by implementing a human-centric approach to AI. Working cross-functionally, we consult all relevant stakeholders – whether from the business, data science, IT and security, or privacy functions – to ensure implementation of both Privacy and Ethics by Design.

Allianz has five principles for the responsible use of AI. Read the principles to find out how our data scientists apply them in their work.

We openly communicate the purpose of our AI systems. This includes, for example, informing customers whenever they interact directly with an AI system such as a chatbot. 

We follow a risk-based approach to attain suitable levels of interpretability. 

AI systems may only be used in line with applicable privacy laws and standards, including the Allianz Privacy Standard.

We follow core Privacy Principles of Purpose Limitation, Data Minimization, Data Accuracy, Transparency, Lawfulness, Storage Limitation, and Security and Confidentiality and perform a dedicated Privacy and Ethics Impact Assessment (PEIA) whenever personal data are utilized in an AI system.

Each usage of AI systems must respect customers’ autonomy and decision-making. We consider the perspective of customers and other stakeholders when designing and developing AI systems for our insurance processes.

We ensure an appropriate, risk-based level of human control. 

We commit to avoiding both direct and indirect discrimination, for example on the basis of gender or race, in line with applicable non-discrimination law.

We mitigate potential bias both in the data itself and in each individual model, keeping in mind that correctly implemented AI systems can help to reduce the risk of human bias.
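One common way to make such bias checks concrete is a group-fairness metric. The sketch below, which is an illustrative example and not Allianz's actual tooling, computes the demographic parity difference: the gap in positive-decision rates between two groups. The group labels and data are hypothetical.

```python
# Illustrative sketch only: a simple group-fairness check. A large gap in
# positive-decision rates between groups would prompt further review.

def demographic_parity_difference(decisions, groups):
    """Absolute difference in positive-decision rates between groups.

    decisions: list of 0/1 model outcomes
    groups:    list of group labels (e.g. "A" / "B") aligned with decisions
    """
    rates = {}
    for label in set(groups):
        outcomes = [d for d, g in zip(decisions, groups) if g == label]
        rates[label] = sum(outcomes) / len(outcomes)
    lowest, highest = min(rates.values()), max(rates.values())
    return highest - lowest

# Example: group "A" is approved 3 times out of 4, group "B" once out of 4.
decisions = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_difference(decisions, groups)
print(round(gap, 2))  # 0.5
```

In practice such a metric would be one of several checks, applied both to the training data and to each model's outputs.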

We monitor AI systems in production to ensure that the decisions match the intended outcome, integrating user feedback. 
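A minimal form of the production monitoring described above is a drift check that compares a model's live decision rate with the rate observed at validation time. The sketch below is a hypothetical example; the tolerance value and function names are assumptions, not part of Allianz's actual monitoring stack.

```python
# Illustrative sketch only: flag a model whose live approval rate drifts
# away from the rate observed during validation.

def rate_drift_alert(decisions, expected_rate, tolerance=0.10):
    """Return True if the observed positive rate deviates from the
    expected rate by more than the tolerance."""
    observed = sum(decisions) / len(decisions)
    return abs(observed - expected_rate) > tolerance

# Validation-time approval rate was 60%; this week's batch approves 90%.
alert = rate_drift_alert([1, 1, 1, 1, 1, 1, 1, 1, 1, 0], expected_rate=0.60)
print(alert)  # True: flag the batch for human review
```

An alert like this does not decide anything by itself; it routes the case to a human reviewer, in line with the human-control principle above.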

We keep auditable documentation on our compliance with applicable laws and regulations. 

Allianz has a Data Advisory Board (DAB) with representatives from the Board of Management and several operating entities and functions. Its objective is to bundle expertise and support decision-making on Data Ethics and other data-related topics within the Allianz Group.
Our Allianz Practical Guidance for AI defines the principles for the responsible use of AI. Each principle is accompanied by implementation measures consisting of state-of-the-art data analytics working approaches and techniques. It serves as guidance for the use of AI across the Allianz Group. Rollout of the Allianz Practical Guidance for AI started in 2022, and it is already being implemented by our major European operating entities. Employees involved in the design, development, or use of AI systems attend dedicated workshops and training sessions.
As part of the development phase of each AI project, a PEIA identifies and addresses AI-specific risks. It determines the appropriate level of human involvement according to the likely inherent impact on customers’ rights and interests.
At Allianz, we are motivated by the need to build and maintain stakeholder trust through a human-centric approach. We have translated this into guiding principles of Transparency, Privacy, Human Agency and Control, Fairness and Non-Discrimination, and Accountability, supported by determining the appropriate level of human involvement for each AI system. These principles are defined in our Allianz Practical Guidance for AI, which is available to all employees. 
When our AI systems use personal data, we make sure that an appropriate level of human oversight is incorporated in every use case. We determine the appropriate level of human oversight according to the risk level of the AI system by applying a structured Privacy and Ethics Impact Assessment (PEIA) that considers the likely impact on customers’ rights and interests. Depending on the risk level, human oversight can range from a human being in full control to various degrees of human supervision, both during development and in the use of AI systems.
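The risk-to-oversight mapping described above can be pictured as a simple lookup. The levels and oversight regimes in the sketch below are hypothetical illustrations, not the actual PEIA methodology.

```python
# Illustrative sketch only: mapping an assessed risk level to a degree of
# human oversight. Levels and regimes are hypothetical examples.

OVERSIGHT_BY_RISK = {
    "low":    "human-on-the-loop: periodic sample review of outcomes",
    "medium": "human-in-the-loop: flagged cases reviewed before a decision",
    "high":   "human-in-command: a person approves every decision",
}

def required_oversight(risk_level):
    """Return the oversight regime for a given assessed risk level."""
    try:
        return OVERSIGHT_BY_RISK[risk_level]
    except KeyError:
        raise ValueError(f"unknown risk level: {risk_level!r}")

print(required_oversight("high"))
```

Making the mapping explicit and exhaustive means an unassessed system cannot silently default to minimal oversight; an unknown risk level raises an error instead.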
The Transparency principle, which forms part of our Allianz Practical Guidance for AI, states that customers are informed whenever they interact with a conversational AI system, such as a customer service chatbot.
Privacy Principles such as Data Minimisation are especially important to achieve in the data-driven AI context. Data Minimisation means that we only collect personal data which is directly relevant and necessary to achieve a specified, justified purpose. We only keep personal data for as long as is necessary to achieve this purpose, with AI system development and deployment being no exception. By applying techniques and measures such as pseudonymisation and feature selection by importance, where we document which data points actually contribute to model performance and discard the rest, we aim to limit the type and volume of processed personal data.
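The two measures named above, pseudonymisation and feature selection by importance, can be sketched in a few lines. This is an illustrative example under assumptions: the field names, salt handling, and importance threshold are hypothetical, and a production system would manage the salt as a protected secret.

```python
# Illustrative sketch only: two data-minimisation measures.
import hashlib

def pseudonymise(customer_id, salt="example-salt"):
    """Replace a direct identifier with a salted hash (a pseudonym).
    In production the salt would be a protected secret, not a literal."""
    return hashlib.sha256((salt + customer_id).encode()).hexdigest()[:16]

def minimise_features(record, importances, threshold=0.05):
    """Keep only features whose documented importance to model
    performance exceeds the threshold; discard the rest."""
    return {key: value for key, value in record.items()
            if importances.get(key, 0.0) > threshold}

record = {"age": 42, "postcode": "80802", "shoe_size": 44}
importances = {"age": 0.31, "postcode": 0.12, "shoe_size": 0.01}

token = pseudonymise("customer-123")
kept = minimise_features(record, importances)
print(kept)  # {'age': 42, 'postcode': '80802'}
```

Documenting the importance scores, as the text describes, is what makes the discard step auditable: there is a recorded reason why each retained data point is processed.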
Our Data Scientists and Privacy Professionals/Data Protection Officers work together on an ongoing basis to embed ‘Ethics by Design’ from the very beginning of each AI system’s lifecycle. The Privacy and Ethics Impact Assessment (PEIA) process requires input from Data Scientists and review by the responsible Privacy Professional, anchoring their collaboration in the mandatory project approval process. By bringing together their expertise, we aim to ensure that our development and use of AI systems are technically reliable, compliant with regulation, and ethically justifiable.
By training employees who develop or use AI, as well as Privacy Professionals, we equip them with the knowledge needed to apply the Allianz Practical Guidance for AI principles in their daily work.
Our Privacy Statement explains our privacy strategy and framework in more depth. You can contact goodprivacy@allianz.com if you would like to exercise your data subject rights, as laid down by the GDPR. 
Allianz strives for diversity and inclusion at all levels of the organization, including when developing and deploying AI systems. We recognise that considering different perspectives and experiences can help to identify and mitigate potential biases or unintended consequences in AI systems. Consequently, our cross-functional, highly trained teams, including data analytics colleagues, comprise individuals with a wide range of personal and professional backgrounds. In addition, we actively seek and incorporate stakeholder feedback to bring our principles, including Accountability, to life.