Responsible Use of Artificial Intelligence (AI) 

AI presents significant opportunities to enhance both our customer experience and our business operations. At Allianz, we recognise that these opportunities come with substantial responsibilities. We consider AI to be any technique that enables machines to learn and reason in ways similar to humans, encompassing a range of technologies from machine learning to natural language processing. By leveraging AI, we can tailor our products and services to ensure that each customer receives the best-fit solution for their unique needs. For example, we can integrate AI into our claims processes to ensure swift and efficient handling when incidents occur.

Public and regulatory attention towards potential unintended consequences of AI and the use of data is increasing. In response, Allianz has developed a range of initiatives focused on AI and data usage, prioritising the interests and needs of our stakeholders, including customers, employees, and business units. Allianz has embraced a global approach to Responsible AI (RAI), underpinned by a new global RAI Governance framework. Additionally, eight overarching RAI principles have been established to guide the responsible handling and application of AI. As a trusted partner, Allianz is committed to enhancing awareness of regulatory measures and maintaining the highest standards of integrity and responsibility in all AI-related endeavours.

We are implementing a human-centric approach to AI. Working cross-functionally, we consult relevant stakeholders – whether from the business, data science, IT and security, legal or privacy functions – to ensure implementation of both Privacy and Ethics by Design.

At Allianz, we are investing in both AI technology and our people by nurturing in-house talent and attracting top-tier AI engineers and data scientists. We are committed to talent development, leadership, and transparency. We are continuously enhancing our AI capabilities by strategically upskilling our workforce, providing them with the technical expertise and insights needed to understand the impact of AI on the future of work.

Allianz has eight core principles for the responsible use of AI, namely:

  1. Prohibited AI
  2. Transparency
  3. Accountability, Accuracy and Proficiency
  4. Security and Resilience
  5. Non-Discrimination
  6. Data Privacy
  7. Data Governance
  8. Human Oversight

Read more about the principles below to find out how our employees apply them in their work. AI systems and use cases may only be used in line with applicable laws and Allianz standards, including the RAI Standard and the Privacy Standard.

At Allianz, we do not develop or use AI systems that violate the law.

We openly communicate the purpose of our AI systems. Further, we inform customers whenever they interact directly with an AI system, such as a chatbot. 

We follow a risk-based approach to attain suitable levels of interpretability.

We monitor AI systems in production to ensure that the decisions match the intended outcome, integrating user feedback.

We ensure that adequate and state-of-the-art cybersecurity and resilience measures are in place throughout the lifecycle of an AI system, particularly during its development and deployment.

We commit to avoiding both direct and indirect discrimination, e.g., on the basis of gender or race, in line with applicable non-discrimination law. 

We mitigate potential bias both in the data itself as well as in each individual model, keeping in mind that correctly implemented AI systems can help to reduce the risk of human bias. 
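One way to make such a bias check concrete is to compare decision rates across groups. The sketch below is illustrative only, using hypothetical toy data and group labels rather than any Allianz data: it computes the ratio of the lowest to the highest group approval rate (a common demographic-parity style check), where values close to 1.0 indicate parity.

```python
from collections import defaultdict

def selection_rates(decisions, groups):
    """Approval rate per group: share of positive decisions (1 = approved)."""
    totals, positives = defaultdict(int), defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += decision
    return {g: positives[g] / totals[g] for g in totals}

def parity_ratio(decisions, groups):
    """Ratio of lowest to highest group approval rate (1.0 = perfect parity)."""
    rates = selection_rates(decisions, groups)
    return min(rates.values()) / max(rates.values())

# Hypothetical toy data: 1 = claim approved, 0 = rejected.
decisions = [1, 1, 0, 1, 0, 1, 1, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(round(parity_ratio(decisions, groups), 2))
```

A low ratio would not itself prove discrimination, but it flags a disparity for human review, which fits the monitoring-plus-oversight approach described above.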

AI systems may only be used in line with applicable privacy laws and standards, including the Allianz Privacy Standard.

We follow core Privacy Principles of Purpose Limitation, Data Minimisation, Data Accuracy, Transparency, Lawfulness, Storage Limitation, and Security and Confidentiality and perform a dedicated privacy risk assessment whenever personal data are utilised in an AI system.

An appropriate level of data quality must be ensured through sound data governance when training and using AI systems.

Each usage of AI systems must respect customers’ autonomy and decision-making. We consider the perspective of customers and other stakeholders when designing and developing AI systems for our insurance processes.

We ensure an appropriate, risk-based level of human control.

We strive for responsible usage of AI in all our business activities through a range of practical initiatives.
Allianz has established a Group Data and AI Trust Advisory Board (DAITAB) with representatives from the Board of Management and several operating entities and Group functions. The DAITAB is an advisory body that provides recommendations to the Board of Management of Allianz SE and other decision-making bodies. Its objective is to gather expertise and support decision-making on Data Ethics and further data-related topics, and the responsible and compliant use of AI within the Allianz Group. 

The Allianz Standard for Responsible Use of Artificial Intelligence and the Allianz Functional Rule for Responsible AI collectively establish the core principles and framework for the responsible use of AI within the Allianz Group. These rules aim to ensure responsible use of AI while promoting innovation and mitigating risks, and are currently being rolled out. 

The RAI Governance establishes the new role of an AI Trust Officer at both Group and local level. 

The Group AI Trust Officer, as part of Group Privacy, advises on the implementation of responsible AI through guidance, communication with authorities, and the extension of privacy risk assessments to cover all RAI principles.

The local AI Trust Officer supports Business Owners in incident management, monitors AI risks, conducts RAI assessments with the CDO, and escalates unresolved risk disagreements to the DAITAB.

The Group AI Office serves as a Centre of Excellence, providing operational guidance, technical support, AI literacy, and oversight for AI across the organisation, while ensuring adherence to responsible AI in collaboration with Group Privacy.

At Allianz, we are motivated by the need to build and maintain stakeholder trust through a human-centric approach. We have translated this into guiding principles of 1. Prohibited AI, 2. Transparency, 3. Accountability, Accuracy and Proficiency, 4. Security and Resilience, 5. Non-Discrimination, 6. Data Privacy, 7. Data Governance, and 8. Human Oversight, under which we determine the appropriate level of human oversight for each AI system. These principles are defined in our Allianz Standard for Responsible AI, which is available to all employees.
When our AI systems use personal data, we make sure that there is an appropriate level of human oversight incorporated in every use case. We determine the appropriate level of human oversight according to the risk level of the AI system by applying a structured Privacy Impact Assessment (PIA) and an AI Risk Assessment, considering the likely impact on customers’ rights and interests. Depending on the risk level, human oversight can range from the human being in full control to various degrees of human supervision, both in the development and in the use of AI systems.
Yes, the Transparency principle, which forms part of our Allianz Standard for Responsible AI, states that customers are informed whenever they interact with an AI system. 
Allianz has designed the AI Literacy framework to support employees, leaders, and experts across varying levels of expertise. It ensures a shared understanding of artificial intelligence risks and opportunities, while upskilling the learner on how to use AI effectively and responsibly.
Privacy Principles such as Data Minimisation are especially important to uphold in the data-driven AI context. Data Minimisation means that we collect only personal data that are directly relevant and necessary to achieve a specified, justified purpose. We only keep personal data for as long as is necessary to achieve this purpose, with AI system development and deployment being no exception. By applying techniques and measures such as pseudonymisation and feature selection by importance, where we document which data points actually contribute to model performance and discard the rest, we aim to limit the type and volume of processed personal data.
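A minimal sketch of the pseudonymisation and minimisation steps described above, assuming a hypothetical claims record: the field names and allow-list are illustrative, not Allianz's actual data model. Direct identifiers are replaced with a salted hash, and fields outside the documented purpose are dropped.

```python
import hashlib

# Hypothetical allow-list: only the fields needed for the stated purpose.
NECESSARY_FIELDS = {"claim_amount", "policy_type", "incident_date"}

def pseudonymise(customer_id: str, salt: str) -> str:
    """Replace a direct identifier with a salted SHA-256 hash (pseudonymisation)."""
    return hashlib.sha256((salt + customer_id).encode()).hexdigest()

def minimise(record: dict, salt: str) -> dict:
    """Keep only necessary fields and pseudonymise the customer identifier."""
    reduced = {k: v for k, v in record.items() if k in NECESSARY_FIELDS}
    reduced["pseudo_id"] = pseudonymise(record["customer_id"], salt)
    return reduced

record = {
    "customer_id": "C-1042",
    "name": "Jane Doe",          # direct identifier: dropped
    "claim_amount": 1800.0,
    "policy_type": "motor",
    "incident_date": "2024-03-01",
    "favourite_colour": "blue",  # irrelevant to the purpose: dropped
}
print(minimise(record, salt="demo-salt"))
```

Note that pseudonymised data are still personal data under the GDPR; the technique reduces risk but does not remove the need for the safeguards described above.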
Our Privacy Statement explains our privacy strategy and framework in more depth. You can contact [email protected] if you would like to exercise your data subject rights, as laid down by the GDPR. 
Allianz strives for diversity and inclusion at all levels of the organisation, including when developing and deploying AI systems. We recognise that considering different perspectives and experiences can help to identify and mitigate potential biases or unintended consequences in AI systems. In addition, we actively seek and incorporate stakeholder feedback to bring our principles, including Accountability, to life.