
AI guidelines

Responsible Commerce

Guidelines on the responsible use of artificial intelligence

Artificial intelligence (AI) is programmed by humans. It therefore reflects human thought and behavioral patterns, assumptions and cultural influences – but it remains a machine. To achieve a responsible coexistence of human and machine, we should address both the “why” and the “how” of any collaboration. We create trust by aligning algorithms with ethical principles.
 
When we develop automated decision-making systems based on artificial intelligence or statistical methods, the following guidelines are therefore important to us. Our top priority is to develop AI applications for people – not against them. In doing so, we are guided by regulatory frameworks, prevailing legislation and ethical principles – above all the Otto Group’s Code of Ethics and our OSP corporate values.

1. People take precedence over AI

Human action takes precedence over AI, which supports our decision-making. We monitor the performance of our applications and can intervene at any point. We also regularly validate AI models as part of our operational activities. We visualize results so that we can quickly identify any irregularities. This approach applies particularly in cases where far-reaching consequences or a loss of trust are possible; the responsible department assesses the relevant situation.
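
The following minimal sketch illustrates what such a check could look like: a metric is compared against an expected corridor, and anything outside it is routed to a human reviewer. The metric names, thresholds and review workflow are hypothetical examples, not a description of our actual systems.

```python
# Illustrative monitoring sketch: flag metrics outside their expected corridor
# so that a person can intervene. Names and thresholds are invented examples.
from dataclasses import dataclass


@dataclass
class MetricCheck:
    name: str
    value: float
    lower_bound: float
    upper_bound: float

    def is_irregular(self) -> bool:
        # A metric counts as irregular once it drifts outside its corridor.
        return not (self.lower_bound <= self.value <= self.upper_bound)


def route_to_human_review(checks: list[MetricCheck]) -> list[str]:
    """Return the metrics that need a human decision and log them."""
    flagged = [c.name for c in checks if c.is_irregular()]
    for name in flagged:
        print(f"Irregularity in '{name}': hand over to the responsible department.")
    return flagged


if __name__ == "__main__":
    route_to_human_review([
        MetricCheck("forecast_error", value=0.31, lower_bound=0.0, upper_bound=0.15),
        MetricCheck("recommendation_ctr", value=0.042, lower_bound=0.03, upper_bound=0.08),
    ])
```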


2. Trust through transparency

An AI application is only successful if consumers recognize its added value and if it generates trust. We therefore clearly flag to users when they are interacting with artificial intelligence. We communicate openly about the possibilities and the limitations of our AI. It is important to us to make the functionality of the AI understandable and, where possible, to also explain the results it produces. We critically examine whether an objective can also be achieved with an algorithmic system that is less complex and easier to follow, without a significant loss in quality.
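
One way to make the last point concrete is to benchmark an easily explainable model against a more complex one and only accept the added complexity if it clearly pays off. The sketch below is purely illustrative; the dataset, the two models and the tolerance of two percentage points are assumptions, not our production setup.

```python
# Illustrative check of whether a simpler, explainable model reaches almost
# the same quality as a more complex one. All choices here are assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

simple_model = LogisticRegression(max_iter=1000)             # easy to explain
complex_model = GradientBoostingClassifier(random_state=0)   # harder to follow

simple_score = cross_val_score(simple_model, X, y, cv=5).mean()
complex_score = cross_val_score(complex_model, X, y, cv=5).mean()

print(f"simple:  {simple_score:.3f}")
print(f"complex: {complex_score:.3f}")

# Prefer the explainable model if the quality loss stays within the tolerance.
if complex_score - simple_score <= 0.02:
    print("No significant loss in quality: choose the simpler, explainable model.")
```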


3. Non-discrimination, diversity and fairness

Diversity is important to us. Our aim is to develop AI models that make fair decisions and do not discriminate. This is why the assumptions and data on which the AI is based must be as representative as possible. However, we are aware that different notions of fairness in automated decision-making, and thus the requirements placed on the algorithm, can at times compete with one another. Discriminatory distortions therefore cannot be entirely avoided even with AI-based decision-making systems, as eliminating one type of unfairness can bring other types of unfairness to light. We take this into account when developing and programming AI and critically review our results in that respect. We also apply the principle of reversibility: AI results are inherently reversible, which means that decisions can be reversed through human intervention when there is appropriate evidence.
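
The tension between fairness criteria can be made visible with simple audit metrics. The sketch below compares positive-decision rates and true-positive rates across two groups on a toy sample; the group labels, the data and the choice of metrics are invented for illustration.

```python
# Fairness audit sketch (illustrative; the sample data is made up).
# It surfaces gaps between two groups before a model goes live.
import numpy as np


def rate(decisions, mask):
    # Share of positive decisions within the masked subgroup.
    return decisions[mask].mean()


# Hypothetical audit sample: group membership, true outcome, model decision.
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
actual = np.array([1, 1, 0, 0, 1, 1, 1, 0])
pred   = np.array([1, 1, 1, 0, 1, 0, 1, 0])

# Demographic parity: how often each group receives a positive decision.
dp_gap = abs(rate(pred, group == 0) - rate(pred, group == 1))

# Equal opportunity: true-positive rate per group.
tpr_0 = rate(pred, (group == 0) & (actual == 1))
tpr_1 = rate(pred, (group == 1) & (actual == 1))
eo_gap = abs(tpr_0 - tpr_1)

print(f"demographic parity gap: {dp_gap:.2f}")
print(f"equal opportunity gap:  {eo_gap:.2f}")
# Reducing one gap can widen the other, which is why results are reviewed
# critically and decisions remain reversible through human intervention.
```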


4. Sustainability

We only develop an AI application when it is appropriate. If a simpler method works better, we use it and draw our customers’ attention to it. We also apply the principle of data minimization: we select only the data that our models really need and streamline our models where possible. This serves data protection, shorter processing times and a lower consumption of resources.
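
As a rough illustration of data minimization, the sketch below keeps only an explicit allowlist of features and never lets anything else into the training pipeline. The column names are invented examples, not fields from our actual models.

```python
# Data-minimization sketch (illustrative; column names are invented examples).
# Only the fields the model actually needs are selected; everything else is
# dropped before it can enter the training pipeline.
import pandas as pd

REQUIRED_FEATURES = ["order_value", "delivery_region", "product_category"]


def minimize(raw: pd.DataFrame) -> pd.DataFrame:
    """Return a copy containing only the features the model really needs."""
    missing = set(REQUIRED_FEATURES) - set(raw.columns)
    if missing:
        raise ValueError(f"Expected feature(s) missing: {sorted(missing)}")
    return raw[REQUIRED_FEATURES].copy()


raw = pd.DataFrame({
    "order_value": [59.90, 120.00],
    "delivery_region": ["north", "south"],
    "product_category": ["shoes", "furniture"],
    "customer_birthdate": ["1990-01-01", "1985-06-15"],  # not needed: dropped
    "free_text_note": ["call me", ""],                    # not needed: dropped
})
print(minimize(raw).columns.tolist())
```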


5. Secure and robust against manipulation

We are aware that AI applications can be deliberately deceived or manipulated. We therefore observe the relevant security standards for live applications and ensure that the data and the decision-making basis are protected against both intentional and unintentional manipulation. To this end, we consciously invest time in the necessary application-specific research – particularly in cases in which data can be imported from outside. Our goal is an AI system that assists people and does not harm them.
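
One basic building block in such cases is validating externally imported data before it can influence a model’s decision basis. The sketch below rejects records with missing, mistyped or out-of-range fields; the schema and the bounds are assumptions for illustration only.

```python
# Defensive check on externally imported data (illustrative; schema and bounds
# are assumptions). Records that fail validation never reach the model.
from typing import Any

SCHEMA = {
    "price": (float, 0.0, 10_000.0),
    "quantity": (int, 0, 1_000),
}


def validate_record(record: dict[str, Any]) -> bool:
    """Accept a record only if every expected field is present, typed and in range."""
    for field, (expected_type, low, high) in SCHEMA.items():
        value = record.get(field)
        if not isinstance(value, expected_type) or not (low <= value <= high):
            return False
    return True


incoming = [
    {"price": 19.99, "quantity": 3},
    {"price": -500.0, "quantity": 3},               # manipulated value: rejected
    {"price": "19.99; DROP TABLE", "quantity": 1},  # wrong type: rejected
]
accepted = [r for r in incoming if validate_record(r)]
print(f"accepted {len(accepted)} of {len(incoming)} records")
```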


6. Data protection and data management

Responsible data management and secure data storage are not just obligations for us as a member of the Otto Group but are also a mark of trust. We recognize just how important it is to protect personal and sensitive data. We communicate as transparently as possible which data is being used by whom for what purpose.


7. Responsibility, liability and accountability

In our automated applications, we clearly define who is responsible for which system, which function and which of the associated tasks; this also applies to shared responsibility. Questions of liability are answered within the framework of the legally applicable requirements.

We do not see artificial intelligence as an end in itself but as an important tool in supporting retail and logistics. Responsibility ultimately always remains with humans. For us, this means responsible commerce.