The rise of artificial intelligence (AI) has pushed ethics to the center of attention. There is growing recognition that core ethical values such as privacy must be built into the design of AI models from the start: answer in advance what you want the models to do, in what context, and with what data, and check whether the data set contains bias. New legislation and regulation from the EU Commission is accelerating this further, and ultimately everyone will have to deal with it.
The new National AI and Ethics course, which starts today, is therefore no unnecessary luxury but a logical continuation of the National AI Course. Nine experts in the field kick off at the Amsterdam Science Center.
One of the speakers is Jeroen van den Hoven, professor of ethics and technology at TU Delft. “Gone are the days when ethics was laughed at,” he says. “It is rightly being taken seriously now.” Many organizations turned to consulting firms to comply with privacy legislation; the same is now happening with AI and ethics. The use of artificial intelligence raises many ethical questions, and it is important to think about them at an early stage. Prepare well, the Delft professor advises.
Van den Hoven: ‘The problem with ethics is that many concepts are not clearly defined. Take justice, for example. There are some twenty definitions of it. Algorithms have to be fair; selection by ethnic origin, for instance, is ruled out. The question is which definition of justice is best to use, and why. Computer scientists have not investigated this. It is therefore still quite difficult to translate abstract ethical concepts into concrete requirements that an algorithm or AI application must meet. The devil is in the details.’
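Van den Hoven's point that definitions of fairness can conflict is easy to make concrete. The toy sketch below (hypothetical data, invented for illustration; the article names no specific metrics) compares two common formalisations from the fairness literature: demographic parity, which asks for equal selection rates across groups, and equal opportunity, which asks for equal true-positive rates. The same predictions can satisfy one and violate the other.

```python
# Each record: (group, true_label, predicted_label) — hypothetical data.
records = [
    ("A", 1, 1), ("A", 1, 1), ("A", 0, 1), ("A", 0, 1),
    ("B", 1, 1), ("B", 1, 1), ("B", 0, 0), ("B", 0, 0),
]

def selection_rate(group):
    # Fraction of a group that receives a positive prediction.
    rows = [r for r in records if r[0] == group]
    return sum(r[2] for r in rows) / len(rows)

def true_positive_rate(group):
    # Fraction of a group's genuinely positive cases predicted positive.
    rows = [r for r in records if r[0] == group and r[1] == 1]
    return sum(r[2] for r in rows) / len(rows)

# Demographic parity: gap between group selection rates.
dp_gap = abs(selection_rate("A") - selection_rate("B"))
# Equal opportunity: gap between group true-positive rates.
eo_gap = abs(true_positive_rate("A") - true_positive_rate("B"))

# Here dp_gap = 0.5 but eo_gap = 0.0: "fair" under one definition,
# unfair under the other — hence the question of which to require.
```

Which gap an algorithm must drive to zero is exactly the choice of definition the professor says computer scientists have yet to settle.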
“Digital ethics is becoming more and more technical”
The design should take into account not only values such as fairness and non-discrimination, but also privacy, security and sustainability. But how do you do that? And are you transparent about it? How do you arrive at a certain result? Van den Hoven: ‘It shouldn’t be a rabbit pulled from a top hat.’
The Delft professor has seen his field change in recent years. ‘Digital ethics is becoming more and more technical. It increasingly offers technical solutions to moral problems. Blurring faces in photos makes people unrecognizable. Several techniques have been developed that increase privacy. Data can be anonymised: you can still see what you need to see, without looking at the level of the individual. Take homomorphic encryption, which allows (certain) computations to be performed on ciphertext: the encrypted data does not have to be decrypted first to perform the calculation. And machine learning is possible by working with synthetic data.’
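The article mentions homomorphic encryption only in general terms; one concrete, textbook instance is the Paillier cryptosystem, which is additively homomorphic: multiplying two ciphertexts yields an encryption of the sum of the plaintexts. The sketch below uses deliberately tiny toy primes to keep it readable; a real deployment would use a vetted library and primes of 1024 bits or more.

```python
import math
import random

def lcm(a, b):
    return a * b // math.gcd(a, b)

def keygen(p=61, q=53):
    # Toy primes for illustration only — completely insecure.
    n = p * q
    lam = lcm(p - 1, q - 1)
    g = n + 1                  # standard simplification for g
    mu = pow(lam, -1, n)       # modular inverse of lambda mod n
    return (n, g), (lam, mu)

def encrypt(pub, m):
    n, g = pub
    while True:
        r = random.randrange(1, n)
        if math.gcd(r, n) == 1:
            break
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pub, priv, c):
    n, _ = pub
    lam, mu = priv
    L = (pow(c, lam, n * n) - 1) // n
    return (L * mu) % n

pub, priv = keygen()
c1, c2 = encrypt(pub, 15), encrypt(pub, 27)
# The homomorphic step: multiply ciphertexts to add plaintexts,
# without ever decrypting the inputs.
c_sum = (c1 * c2) % (pub[0] ** 2)
total = decrypt(pub, priv, c_sum)   # recovers 15 + 27 = 42
```

This is the property Van den Hoven describes: the party doing the calculation never sees the underlying values, only the encrypted result is decrypted at the end.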
This development means that work which previously appeared only in ethics journals now frequently ends up in scientific informatics journals. Ethics has to operate in an interdisciplinary way. TU Delft, which specializes in ethics and engineering, collaborates with other universities, partly because it lacks a law faculty. Ethics is now a compulsory subject in Delft, and scientific institutes work closely with (central) government.
The EU Commission in particular is heavily involved in digital ethics. The EU wants to speed up digitalisation, and artificial intelligence plays a key role in smart solutions to social issues around climate, energy, care and transport.
According to Van den Hoven, policymakers in Brussels and The Hague are convinced that ethics in entrepreneurship and technical innovation is a critical success factor. ‘You can innovate all you want. But if citizens don’t trust it, it won’t work,’ he concludes.