Can an algorithm replace a human?

Ships are getting smarter. They can now suggest how to resolve various tricky traffic situations. But what actually happens then? A research project run by Lighthouse within the Swedish Transport Administration's R&D programme Sustainable Shipping is investigating this.

Fully automated unmanned vessels are probably a long way off, but so-called smart vessels with AI-based decision support systems are already a reality, says Reto Weber at Chalmers, who works in the project Operationalizing COLREGs in SMART ship navigation: Learning from Algorithm Based decision Support systems.

“But it is the same algorithm that is used in both cases. In today's decision support systems, this means that the operator receives a proposal on how to navigate and then makes the decision. The next step in the development will be a system that acts automatically while someone on the bridge monitors it. Only in the third step does the system become fully automatic and unmanned.”

So we are in step one, which feels safe. The algorithm only gives suggestions to support decisions. But can it even give good suggestions? Can things like experience, good seamanship, situational awareness and other non-technical skills that affect safe and efficient navigation be implemented in algorithms? Just before Christmas, simulator tests were carried out at Chalmers which will hopefully provide some answers to these questions.

“One of the things we want to take a closer look at is how well an algorithm can relate to Rule 8 of the international collision regulations, COLREGs, which concerns actions to avoid collision. It says that such actions must be taken in good time and with good seamanship. These are vague concepts that even people interpret differently. Getting an algorithm to grasp them is of course even more difficult”, says doctoral student Katie Aylward, who also participates in the project.

So far, the algorithm does not work without a navigator. But what happens to the navigator when decision support systems become so good that they get most things right? Is there not a risk that the navigator uncritically trusts the system even when it is wrong?

“Absolutely. That will be seen in our results. We also had a great discussion about this with the participants in the simulator test. They felt, however, that as the decision-making navigator you are obliged to fully evaluate the suggestions the system gives before accepting them. They were careful to point out that it must always be clear that it is a decision support system, and that the navigator remains responsible for the decision”, says Katie Aylward.

The research project will run until 2022.