What is the main rule in digitization?

How much human is there in AI?

“The protection of people takes priority over all other considerations of utility” — this is the guiding principle of the ethics commission “Automated and Networked Driving” set up by the German federal government. Automated and networked technology should prevent accidents as far as possible, so that critical situations do not arise in the first place.

How much human is there in AI? A comment by Prof. Dr. Oliver Mayer. (Photo credit: iStock © tommaso)

But how should such a system behave in a so-called dilemma situation, when an automated vehicle faces the “decision” between two evils that cannot be weighed against each other? Whom should the AI primarily protect in a conflict: the occupants of the vehicle or other road users?

What happens when AI systems become more intelligent than humans and develop their own motives? “Then they would probably make decisions that favor their own kind and possibly harm people,” says the computer scientist Prof. Dr. Fred Hamker, who researches how the brain works at Chemnitz University of Technology with the aim of developing novel, intelligent, cognitive systems. The scientist rightly sees a danger in such a scenario. Before fully automated driving goes into mass production, a few exciting questions remain to be answered.