(an Indo-French research collaboration, funded by CEFIPRA)
by Meenakshi D'Souza, Vincent Penelle, Sreejith A. V.
Software-driven controllers are used by engineers to command, direct and regulate the behaviour of systems (called plants) and achieve system-level functional requirements. Plants range from large systems like aircraft to cyber-physical systems that include household appliances like washing machines and air conditioners. These controllers work in a reactive fashion: they take sensed inputs from the plant, implement the control algorithm, and send control outputs back to the plant through actuators. Machine learning is increasingly used as part of control algorithms; the machine learning model can be a neural network or a simpler classification tool like a decision tree. Such algorithms are called adaptive control algorithms: the controller can continuously learn and adapt to changes in its environment.
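The sense-compute-actuate cycle described above can be sketched as follows. This is a minimal illustrative example, not code from the project: all names are hypothetical, and a single threshold comparison stands in for a learned model such as a decision tree.

```python
def learned_control(temperature):
    # Trivial stand-in for a learned policy (e.g. for an air conditioner):
    # in practice this threshold would be fit from data by an ML model.
    THRESHOLD = 24.0  # assumed set-point, hypothetical
    return "COOL" if temperature > THRESHOLD else "IDLE"

def control_step(sense, actuate):
    """One sense-compute-actuate cycle of a reactive controller."""
    reading = sense()                    # sensed input from the plant
    command = learned_control(reading)   # control algorithm (ML-based)
    actuate(command)                     # control output via actuators
    return command

# Usage with stubbed sensor and actuator:
commands = [control_step(lambda r=r: r, lambda c: None) for r in (22.0, 26.5)]
```

In an adaptive controller, `learned_control` would additionally be retrained or updated online as the environment changes, which is precisely what makes such systems hard to verify.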
Traditional testing and formal methods cannot be directly applied or extended to verify such adaptive control systems. Predictions of machine learning algorithms may not always be correct, yet in safety-critical systems it is essential that the overall safety of the controller is not compromised: imagine adaptive controllers in aircraft that learn and make decisions in flight. For safety-critical systems, industry standards such as ISO 26262 and DO-178C recommend the use of formal-methods-based verification.
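One commonly studied way to keep overall safety intact despite a possibly-wrong ML prediction is a runtime guard in the style of the simplex architecture: a simple, formally verifiable baseline controller overrides the learned one whenever its command would leave a proven-safe envelope. The sketch below is a hypothetical illustration of that idea, not the project's method; the safe range, the baseline policy, and the learned policy are all invented for the example.

```python
SAFE_MIN, SAFE_MAX = 0.0, 100.0  # assumed formally verified safe command range

def baseline_control(state):
    # Conservative fallback, simple enough to verify formally.
    return 0.0

def learned_control(state):
    # Stand-in for an ML model's (possibly incorrect) prediction.
    return state * 2.0

def guarded_control(state):
    """Use the learned command only if it stays inside the safe envelope."""
    cmd = learned_control(state)
    if SAFE_MIN <= cmd <= SAFE_MAX:
        return cmd
    return baseline_control(state)  # override: safety is never compromised

outputs = [guarded_control(s) for s in (10.0, 80.0)]
```

The verification burden then shifts from the opaque learned model to the small guard and baseline, which is one motivation for the formal techniques this project develops.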
Formal verification of adaptive control algorithms is at a nascent stage. The aim of this project is to develop the necessary theoretical foundations, along with tools and techniques, to verify control logic that uses machine learning algorithms.
The project is funded by CEFIPRA, the Indo-French Centre for the Promotion of Advanced Research.