This text is the first of two blog posts on this topic in our blog series on functional safety.
Machine learning (or, colloquially, Artificial Intelligence) is generating a lot of excitement, as it is redefining the boundaries of what we think is possible for computer-based systems. Over the last 15 years we have seen incredible applications of machine learning in areas where traditional approaches had made little or no headway. Good examples of the power of machine learning include beating the world champion in Go, deep fake videos and much more. Applications that many of us are familiar with include smart speakers and face recognition in smartphones.
Industrial applications of machine learning are spreading. For instance, Huld has been involved in developing machine learning based approaches to quality control on production lines and to optical character recognition. Given the hype surrounding the use of machine learning in autonomous vehicles and self-driving cars, it is quite natural to ask whether machine learning can be used in safety-critical systems.
If you look to functional safety standards, you will get different answers depending on the standard. The mother standard of functional safety, IEC 61508 Ed 2.0, is quite clear: for Safety Integrity Level 2 and above, artificial intelligence, as the standard calls it, is “Not Recommended” — so practically forbidden. In the medical domain a more liberal approach has been taken, and indeed the American medical regulator, the FDA, has approved medical applications of machine learning. The initial success has, however, been tempered by the limited progress made after the first applications. The standard cited most often in the automotive context is ISO 21448 “Road vehicles — Safety of the intended functionality” (aka SOTIF), which approaches safety by understanding the function and its limitations. The SOTIF approach does not rule out the use of machine learning, but rather emphasises that safety can only be achieved if the potential for hazardous behaviours is understood.
Why do functional safety standards take such a dim view of machine learning? To understand this, let’s recall a few basic facts common to most machine learning approaches. At an abstract level, most machine learning approaches are about learning an unknown function f(x). In supervised learning we learn the function from a set of data where we know how x and f(x) are related. Usually, the more data we have, the better the performance of the learned function fL(x) will be.
In the second part of the blog post, I will share further thoughts on the problems that arise when traditional approaches to ensuring software integrity are applied to machine learning in safety-critical systems.
Text: Timo Latvala