
Interdisciplinary Machine Learning in Science and Engineering

Guni Sharon, Department of Computer Science & Engineering
Learning an Interpretable Control Policy Through Deep Neural Networks


  • Guni Sharon
  • James Ault


This talk presents and analyzes several online optimization techniques for tuning interpretable control functions. Although the techniques are defined generally, the analysis in this talk assumes a specific class of interpretable control functions: polynomial functions. Empirical results will show that such an interpretable policy function can be as effective as a deep neural network for approximating an optimized actuation policy. The talk will also present evidence supporting the use of value-based reinforcement learning for online training of the control function. Specifically, it will present and study three variants of the Deep Q-learning algorithm that allow training of an interpretable policy function. Our Deep Regulatable Hardmax Q-learning variant will be shown to be particularly effective in the signalized intersection control domain, yielding up to a 19.4% reduction in vehicle delay compared with commonly deployed actuated signal controllers.
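To give a flavor of the idea, the sketch below shows a minimal, hypothetical combination of the ingredients named in the abstract: a Q-function that is a linear combination of polynomial features of the state (so its weights are directly readable), greedy "hardmax" action selection, and a one-step Q-learning update. All names (`PolyQPolicy`, `poly_features`) and the 1-D state are illustrative assumptions, not the talk's actual algorithm or the Deep Regulatable Hardmax Q-learning variant itself.

```python
import numpy as np


def poly_features(state, degree=2):
    # Monomial basis 1, s, s^2, ... for a 1-D state (illustrative choice).
    return np.array([state ** d for d in range(degree + 1)])


class PolyQPolicy:
    """Q-learning with an interpretable polynomial Q-function (sketch).

    Q(s, a) = w[a] . phi(s), where phi is a fixed polynomial basis, so each
    weight can be read off directly. Action selection is a hardmax
    (greedy argmax) over the Q-values.
    """

    def __init__(self, n_actions=2, degree=2, lr=0.1, gamma=0.9):
        self.w = np.zeros((n_actions, degree + 1))
        self.degree, self.lr, self.gamma = degree, lr, gamma

    def q_values(self, state):
        return self.w @ poly_features(state, self.degree)

    def act(self, state):
        # Hardmax: deterministically pick the highest-valued action.
        return int(np.argmax(self.q_values(state)))

    def update(self, s, a, r, s_next):
        # Standard one-step Q-learning (TD) update applied to the weights.
        target = r + self.gamma * np.max(self.q_values(s_next))
        td_error = target - self.q_values(s)[a]
        self.w[a] += self.lr * td_error * poly_features(s, self.degree)
        return td_error
```

After training, the policy is fully described by the small weight matrix `w`, which is what makes a polynomial policy interpretable relative to a deep network with the same greedy behavior.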