

Poster

Deep Grey-Box Models With Adaptive Data-Driven Models Toward Trustworthy Estimation of Theory-Driven Models

Naoya Takeishi · Alexandros Kalousis

Auditorium 1 Foyer 146

Abstract:

Combining deep neural networks with theory-driven models (deep grey-box models) can be advantageous because the theory-driven part provides inherent robustness and interpretability. Deep grey-box models are usually learned via regularized risk minimization, where the regularizer prevents the theory-driven part from being overwritten and ignored by the deep neural net. However, an estimate of the theory-driven part obtained by uncritically optimizing such a regularizer can hardly be trustworthy when it is unclear which regularizer suits the given data, which in turn undermines interpretability. Toward a trustworthy estimation of the theory-driven part, we should analyze the behavior of regularizers so that different candidates can be compared and a specific choice can be justified. In this paper, we present a framework that enables such an empirical analysis of a regularizer's behavior with only a slight change to the neural network architecture and the training objective.
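
To make the setting concrete, the sketch below illustrates one common form of deep grey-box model and regularized training objective: a theory-driven component with a learnable physical parameter, additively corrected by a neural net, trained with a penalty that discourages the correction from dominating. The additive form, the exponential-decay theory part, and the L2 penalty on the correction are illustrative assumptions for this sketch, not the paper's actual formulation or regularizer.

```python
import torch
import torch.nn as nn

class GreyBoxModel(nn.Module):
    """Hypothetical deep grey-box model: a theory-driven part with a learnable
    parameter `theta`, plus an additive neural-net correction (an assumption
    made for illustration, not the architecture proposed in the paper)."""
    def __init__(self):
        super().__init__()
        self.theta = nn.Parameter(torch.tensor(1.0))   # theory-driven parameter
        self.nn_part = nn.Sequential(                   # data-driven correction
            nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1)
        )

    def theory(self, x):
        # Illustrative theory-driven component: exponential decay with rate theta.
        return torch.exp(-self.theta * x)

    def forward(self, x):
        correction = self.nn_part(x)
        return self.theory(x) + correction, correction

def regularized_risk(model, x, y, lam=1e-2):
    """Regularized risk minimization: data-fit term plus a penalty on the
    neural correction so it does not overwrite the theory-driven part.
    The choice of penalty (here, plain L2) is exactly the kind of design
    decision whose behavior the paper argues should be analyzed."""
    pred, correction = model(x)
    fit = torch.mean((pred - y) ** 2)
    reg = torch.mean(correction ** 2)
    return fit + lam * reg

# Minimal usage example on synthetic data.
model = GreyBoxModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(128, 1)
y = torch.exp(-2.0 * x) + 0.01 * torch.randn(128, 1)
loss = regularized_risk(model, x, y)
loss.backward()
opt.step()
```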
