I just submitted the following short position statement on how to work with ML / AI techniques in software engineering. This is a statement on using such techniques for the engineering of software, not in the software itself, which is a (not completely, but mostly) separate issue.
ML / AI techniques can be used in software development to assist the human engineer. Properly applied, they can make engineers more productive by helping them focus on understanding and solving the human problem behind the software to be developed (essential complexity) and by freeing them from getting distracted by technical implementation details (accidental complexity).
ML-based automation should not be used for autonomous decision making by the machine on matters of the human problem that the software solves. Otherwise, the biases that are necessarily built into tools, data, and models based on such data will reach affected parties without filter or correction. In the usual case, this will cement existing biases; in the worst case, it will make affected parties reject the software and the best intentions behind it.
This position does not discourage automation, only unreflected automation. As I argue in the case for a moral machine in autonomous driving, there is nothing wrong with a machine making decisions on behalf of a human, as long as it adequately reflects that human's intentions.