  1. \input{preamble/packages.tex}
  2. \input{preamble/abbreviations.tex}
  3. \input{figures/preamble/tikz_colors.tex}
  4. % We use precompiled images and do not add tikz for speed of compilation.
  5. \newcommand{\includestandalonewithpath}[2][]{%
  6. \begingroup%
  7. \StrCount{#2}{/}[\matches]%
  8. \StrBefore[\matches]{#2}{/}[\figurepath]%
  9. \includestandalone[#1]{#2}%
  10. \endgroup%
  11. }
  12. % Show overfull boxes
  13. % \overfullrule=5pt
  14. \addbibresource{zotero_export.bib}
  15. \addbibresource{additional.bib}
  16. \toctitle{Data Association with Gaussian Processes}
  17. \tocauthor{Markus~Kaiser, Clemens~Otte, Thomas~A.~Runkler, and Carl~Henrik~Ek}
  18. \titlerunning{Data Association with Gaussian Processes}
  19. \authorrunning{M.~Kaiser, C.~Otte, T.~A.~Runkler, and C.~H.~Ek}
  20. \title{Data Association with Gaussian Processes}
  21. \author{
  22. Markus Kaiser\inst{1,2}%
  23. \thanks{The project this report is based on was supported with funds from the German Federal Ministry of Education and Research under project number 01\,IS\,18049\,A.}
  24. \Letter{}
  25. \and
  26. Clemens Otte\inst{1}
  27. \and \\
  28. Thomas A. Runkler\inst{1,2}
  29. \and
  30. Carl Henrik Ek\inst{3}
  31. }
  32. \institute{
  33. Siemens AG, \email{markus.kaiser@siemens.com}
  34. \and
  35. Technical University of Munich
  36. \and
  37. University of Bristol
  38. }
  39. \begin{document}
  40. \maketitle
  41. \begin{abstract}
The data association problem is concerned with separating data coming from different generating processes, for example when data come from multiple sources, contain significant noise, or exhibit multimodality.
  43. We present a fully Bayesian approach to this problem.
  44. Our model is capable of simultaneously solving the data association problem and the induced supervised learning problem.
  45. Underpinning our approach is the use of Gaussian process priors to encode the structure of both the data and the data associations.
  46. We present an efficient learning scheme based on doubly stochastic variational inference and discuss how it can be applied to deep Gaussian process priors.
  47. % \keywords{Bla \and blubb}
  48. \end{abstract}
  49. \section{Introduction}
  50. \label{sec:introduction}
  51. Real-world data often include multiple operational regimes of the considered system, for example a wind turbine or gas turbine~\parencite{hein_benchmark_2017}.
  52. As an example, consider a model describing the lift resulting from airflow around the wing profile of an airplane as a function of the attack angle.
  53. At low values the lift increases linearly with the attack angle until the wing stalls and the characteristic of the airflow changes fundamentally.
  54. Building a truthful model of such data requires learning two separate models and correctly associating the observed data to each of the dynamical regimes.
A similar example arises if the sensors measuring the lift are faulty such that we obtain either an accurate reading or a noisy one.
  56. Estimating a model in this scenario is often referred to as a \emph{data association problem}~\parencite{Bar-Shalom:1987, Cox93areview}, where we consider the data to have been generated by a mixture of processes and we are interested in factorising the data into these components.
  57. \Cref{fig:choicenet_data} shows an example of faulty sensor data, where sensor readings are disturbed by uncorrelated and asymmetric noise.
  58. Applying standard machine learning approaches to such data can lead to model pollution, where the expressive power of the model is used to explain noise instead of the underlying signal.
  59. Solving the data association problem by factorizing the data into signal and noise gives rise to a principled approach to avoid this behavior.
  60. \begin{figure}[t]
  61. \centering
  62. \includestandalone{figures/choicenet_data_intro}
  63. \caption{
  64. \label{fig:choicenet_data}
A data association problem consisting of two generating processes, one of which is a signal we wish to recover and the other an uncorrelated noise process.
  66. }
  67. \end{figure}
Early approaches to explaining data using multiple generative processes are based on partitioning the input space and training local expert models that explain the easier subtasks~\parencite{jacobs_adaptive_1991,tresp_mixtures_2001, rasmussen_infinite_2002}.
  69. The assignment of data points to local experts is handled by a gating network, which learns a function from the inputs to assignment probabilities.
  70. However, it is still a central assumption of these models that at every position in the input space exactly one expert should explain the data.
Another approach is presented in~\parencite{bishop_mixture_1994}, where the multimodal regression task is interpreted as a density estimation problem.
A large number of candidate distributions is reweighted to match the observed data without modeling the underlying generative process.
  73. In contrast, we are interested in a generative process, where data at the same location in the input space could have been generated by a number of global independent processes.
  74. Inherently, the data association problem is ill-posed and requires assumptions on both the underlying functions and the association of the observations.
  75. In~\parencite{lazaro-gredilla_overlapping_2012} the authors place Gaussian process (GP) priors on the different generative processes which are assumed to be relevant globally.
  76. The associations are modelled via a latent association matrix and inference is carried out using an expectation maximization algorithm.
  77. This approach takes both the inputs and the outputs of the training data into account to solve the association problem.
  78. A drawback is that the model cannot give a posterior estimate about the relevance of the different generating processes at different locations in the input space.
  79. This means that the model can be used for data exploration but additional information is needed in order to perform predictive tasks.
  80. Another approach in~\parencite{bodin_latent_2017} expands this model by allowing interdependencies between the different generative processes and formulating the association problem as an inference problem on a latent space and a corresponding covariance function.
However, in this approach the number of components is a free parameter, which makes the model prone to overfitting, as it has no means of turning off components.
  82. In this paper, we formulate a Bayesian model for the data association problem.
  83. Underpinning our approach is the use of GP priors which encode structure both on the functions and the associations themselves, allowing us to incorporate the available prior knowledge about the proper factorization into the learning problem.
  84. The use of GP priors allows us to achieve principled regularization without reducing the solution space leading to a well-regularized learning problem.
  85. Importantly, we simultaneously solve the association problem for the training data taking both inputs and outputs into account while also obtaining posterior belief about the relevance of the different generating processes in the input space.
  86. Our model can describe non-stationary processes in the sense that a different number of processes can be activated in different locations in the input space.
  87. We describe this non-stationary structure using additional GP priors which allows us to make full use of problem specific knowledge.
  88. This leads to a flexible yet interpretable model with a principled treatment of uncertainty.
This paper makes the following contributions:
  90. In \cref{sec:model}, we propose the data association with Gaussian processes model (DAGP).
  91. In \cref{sec:variational_approximation}, we present an efficient learning scheme via a variational approximation which allows us to simultaneously train all parts of our model via stochastic optimization and show how the same learning scheme can be applied to deep GP priors.
  92. We demonstrate our model on a noise separation problem, an artificial multimodal data set, and a multi-regime regression problem based on the cart-pole benchmark in \cref{sec:experiments}.
  93. \section{Data Association with Gaussian Processes}
  94. \label{sec:model}
  95. \begin{figure}[t]
  96. \centering
  97. \includestandalone{figures/dynamic_graphical_model}
  98. \caption{
  99. \label{fig:dynamic_graphical_model}
  100. The graphical model of DAGP.
  101. The violet observations $(\mat{x_n}, \mat{y_n})$ are generated by the latent process (green).
Exactly one of the $K$ latent functions $f^{\pix{k}}$ and its likelihood $\mat{y_n^{\pix{k}}}$ is evaluated to generate $\mat{y_n}$.
  103. We can place shallow or deep GP priors on these latent function values $\mat{f_n^{\pix{k}}}$.
  104. The assignment $\mat{a_n}$ to a latent function is driven by input-dependent weights $\mat{\alpha_n^{\pix{k}}}$ which encode the relevance of the different functions at $\mat{x_n}$.
  105. The different parts of the model are determined by the hyperparameters $\mat{\theta}, \mat{\sigma}$ (yellow) and variational parameters $\mat{u}$ (blue).
  106. }
  107. \end{figure}
  108. The data association with Gaussian processes (DAGP) model assumes that there exist $K$ independent functions $\Set*{f^{\pix{k}}}_{k=1}^K$, which generate pairs of observations $\D = \Set*{(\mat{x_n}, \mat{y_n})}_{n=1}^N$.
  109. Each data point is generated by evaluating one of the $K$ latent functions and adding Gaussian noise from a corresponding likelihood.
  110. The assignment of the $\nth{n}$ data point to one of the functions is specified by the indicator vector $\mat{a_n} \in \Set*{0, 1}^K$, which has exactly one non-zero entry.
  111. Our goal is to formulate simultaneous Bayesian inference on the functions $f^{\pix{k}}$ and the assignments $\mat{a_n}$.
  112. For notational conciseness, we follow the GP related notation in \parencite{hensman_scalable_2015} and collect all $N$ inputs as $\mat{X} = \left(\mat{x_1}, \ldots, \mat{x_N}\right)$ and all outputs as $\mat{Y} = \left(\mat{y_1}, \ldots, \mat{y_N}\right)$.
  113. We further denote the $\nth{k}$ latent function value associated with the $\nth{n}$ data point as $\rv{f_n^{\pix{k}}} = \Fun{f^{\pix{k}}}{\mat{x_n}}$ and collect them as $\mat{F^{\pix{k}}} = \left( \rv{f_1^{\pix{k}}}, \ldots, \rv{f_N^{\pix{k}}} \right)$ and $\mat{F} = \left( \mat{F^{\pix{1}}}, \ldots, \mat{F^{\pix{K}}} \right)$.
  114. We refer to the $\nth{k}$ entry in $\mat{a_n}$ as $a_n^{\pix{k}}$ and denote $\mat{A} = \left(\mat{a_1}, \ldots, \mat{a_N}\right)$.
  115. Given this notation, the marginal likelihood of DAGP can be separated into the likelihood, the latent function processes, and the assignment process and is given by,
  116. \begin{align}
  117. \begin{split}
  118. \label{eq:true_marginal_likelihood}
  119. \Prob*{\mat{Y} \given \mat{X}} &=
  120. \int
  121. \Prob*{\mat{Y} \given \mat{F}, \mat{A}}
  122. \Prob*{\mat{F} \given \mat{X}}
  123. \Prob*{\mat{A} \given \mat{X}}
  124. \diff \mat{A} \diff \mat{F} \\
  125. \Prob*{\mat{Y} \given \mat{F}, \mat{A}} &=
  126. \prod_{n=1}^N\prod_{k=1}^K
  127. \Gaussian*{\mat{y_n} \given \mat{f_n^{\pix{k}}}, \left(\sigma^{\pix{k}}\right)^2}_{^{\displaystyle,}}^{\Fun{\Ind}{a_n^{\pix{k}} = 1}}
  128. \end{split}
  129. \end{align}
  130. where $\sigma^{\pix{k}}$ is the noise of the $\nth{k}$ Gaussian likelihood and $\Ind$ is the indicator function.
  131. Since we assume the $K$ processes to be independent given the data and assignments, we place independent GP priors on the latent functions
  132. $\Prob*{\mat{F} \given \mat{X}} = \prod_{k=1}^K \Gaussian*{\mat{F^{\pix{k}}} \given \Fun*{\mu^{\pix{k}}}{\mat{X}}, \Fun*{\K^{\pix{k}}}{\mat{X}, \mat{X}}}$ with mean function $\mu^{\pix{k}}$ and kernel $\K^{\pix{k}}$.
  133. Our prior on the assignment process is composite.
  134. First, we assume that the $\mat{a_n}$ are drawn independently from multinomial distributions with logit parameters $\mat{\alpha_n} = \left( \alpha_n^{\pix{1}}, \ldots, \alpha_n^{\pix{K}} \right)$.
  135. One approach to specify $\mat{\alpha_n}$ is to assume them to be known a priori and to be equal for all data points~\parencite{lazaro-gredilla_overlapping_2012}.
  136. Instead, we want to infer them from the data.
  137. Specifically, we assume that there is a relationship between the location in the input space $\mathbf{x}$ and the associations.
  138. By placing independent GP priors on $\mat{\alpha^{\pix{k}}}$, we can encode our prior knowledge of the associations by the choice of covariance function
  139. $\Prob*{\mat{\alpha} \given \mat{X}} = \prod_{k=1}^K \Gaussian*{\rv{\alpha^{\pix{k}}} \given \mat{0}, \Fun{\K_\alpha^{\pix{k}}}{\mat{X}, \mat{X}}}$.
  140. The prior on the assignments $\mat{A}$ is given by marginalizing the $\mat{\alpha^{\pix{k}}}$, which, when normalized, parametrize a batch of multinomial distributions,
  141. \begin{align}
  142. \begin{split}
  143. \label{eq:multinomial_likelihood}
  144. \Prob*{\mat{A} \given \mat{X}} &=
  145. \int
  146. \Multinomial*{\mat{A} \given \Fun{\softmax}{\mat{\alpha}}} \Prob*{\mat{\alpha} \given \mat{X}}
  147. \diff \rv{\alpha}.
  148. \end{split}
  149. \end{align}
  150. Modelling the relationship between the input and the associations allows us to efficiently model data, which, for example, is unimodal in some parts of the input space and bimodal in others.
  151. A simple smoothness prior will encode a belief for how quickly the components switch across the input domain.
  152. Since the GPs of the $\mat{\alpha^{\pix{k}}}$ use a zero mean function, our prior assumption is a uniform distribution of the different generative processes everywhere in the input space.
  153. If inference on the $\mat{a_n}$ reveals that, say, all data points at similar positions in the input space can be explained by the same $\nth{k}$ process, the belief about $\mat{\alpha}$ can be adjusted to make a non-uniform distribution favorable at this position, thereby increasing the likelihood via $\Prob*{\mat{A} \given \mat{X}}$.
  154. This mechanism introduces an incentive for the model to use as few functions as possible to explain the data and importantly allows us to predict a relative importance of these functions when calculating the posterior of the new observations $\mat{x_\ast}$.
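To make these modelling assumptions concrete, the following NumPy sketch draws a joint sample of latent functions, assignments, and observations from the DAGP prior.
It is purely illustrative: the shared squared exponential kernel, the noise levels, and all other settings are placeholders rather than the choices used in our experiments.
\begin{verbatim}
import numpy as np

def rbf(X, Z, variance=1.0, lengthscale=1.0):
    # Squared exponential kernel matrix between the input sets X and Z.
    d2 = np.sum((X[:, None, :] - Z[None, :, :]) ** 2, axis=-1)
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def softmax(a):
    e = np.exp(a - a.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def sample_dagp_prior(X, K=2, noise_std=(0.1, 0.1), seed=0):
    # Draw one joint sample (F, alpha, A, Y) from the DAGP prior at inputs X.
    rng = np.random.default_rng(seed)
    N = X.shape[0]
    L = np.linalg.cholesky(rbf(X, X) + 1e-6 * np.eye(N))
    # K independent latent functions f^k and assignment functions alpha^k.
    F = np.stack([L @ rng.standard_normal(N) for _ in range(K)], axis=1)
    alpha = np.stack([L @ rng.standard_normal(N) for _ in range(K)], axis=1)
    # One-hot assignments a_n ~ Multinomial(softmax(alpha_n)).
    A = np.stack([rng.multinomial(1, p) for p in softmax(alpha)])
    # Every y_n is generated by exactly the assigned function plus its noise.
    k = A.argmax(axis=1)
    Y = F[np.arange(N), k] + rng.standard_normal(N) * np.asarray(noise_std)[k]
    return F, alpha, A, Y

F, alpha, A, Y = sample_dagp_prior(np.linspace(-3, 3, 100)[:, None], K=2)
\end{verbatim}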
  155. \Cref{fig:dynamic_graphical_model} shows the resulting graphical model, which divides the generative process for every data point in the application of the latent functions on the left side and the assignment process on the right side.
  156. The interdependencies between the data points are introduced through the GP priors on $\rv{f_n^{\pix{k}}}$ and $\rv{\alpha_n^{\pix{k}}}$ and depend on the hyperparameters $\mat{\theta} = \Set*{\mat{\theta^{\pix{k}}}, \mat{\theta_\alpha^{\pix{k}}}, \sigma^{\pix{k}}}_{k=1}^K$.
  157. The priors for the $f^{\pix{k}}$ can be chosen independently to encode different prior assumptions about the underlying processes.
  158. In \cref{subsec:choicenet}, we use different kernels to separate a non-linear signal from a noise process.
Going further, we can also use deep GPs as priors for the $f^{\pix{k}}$~\parencite{damianou_deep_2013, salimbeni_doubly_2017}.
Since many real-world systems are inherently hierarchical, prior knowledge can often be formulated more easily using composite functions~\parencite{kaiser_bayesian_2018}.
  161. \section{Variational Approximation}
  162. \label{sec:variational_approximation}
  163. Exact inference is intractable in this model.
  164. Instead, we formulate a variational approximation following ideas from~\parencite{hensman_gaussian_2013, salimbeni_doubly_2017}.
  165. Because of the rich structure in our model, finding a variational lower bound which is both faithful and can be evaluated analytically is hard.
  166. To proceed, we formulate an approximation which factorizes along both the $K$ processes and $N$ data points.
  167. This bound can be sampled efficiently and allows us to optimize both the models for the different processes $\Set*{f^{\pix{k}}}_{k=1}^K$ and our belief about the data assignments $\Set*{\mat{a_n}}_{n=1}^N$ simultaneously using stochastic optimization.
  168. \subsection{Variational Lower Bound}
  169. \label{subsec:lower_bound}
As first introduced by~\textcite{titsias_variational_2009}, we augment all GPs in our model using sets of $M$ inducing points $\mat{Z^{\pix{k}}} = \left(\mat{z_1^{\pix{k}}}, \ldots, \mat{z_M^{\pix{k}}}\right)$ and their corresponding function values $\mat{u^{\pix{k}}} = \Fun*{f^{\pix{k}}}{\mat{Z^{\pix{k}}}}$, the inducing variables.
  171. We collect them as $\mat{Z} = \Set*{\mat{Z^{\pix{k}}}, \mat{Z_\alpha^{\pix{k}}}}_{k=1}^K$ and $\mat{U} = \Set*{\mat{u^{\pix{k}}}, \mat{u_\alpha^{\pix{k}}}}_{k=1}^K$.
  172. Taking the function $f^{\pix{k}}$ and its corresponding GP as an example, the inducing variables $\mat{u^{\pix{k}}}$ are jointly Gaussian with the latent function values $\mat{F^{\pix{k}}}$ of the observed data by the definition of GPs.
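Written out for the $\nth{k}$ process, this joint distribution is the standard augmented GP prior,
\begin{align}
\begin{split}
\Prob*{\mat{F^{\pix{k}}}, \mat{u^{\pix{k}}} \given \mat{X}, \mat{Z^{\pix{k}}}} =
\Gaussian*{
\begin{pmatrix} \mat{F^{\pix{k}}} \\ \mat{u^{\pix{k}}} \end{pmatrix}
\given
\begin{pmatrix} \Fun*{\mu^{\pix{k}}}{\mat{X}} \\ \Fun*{\mu^{\pix{k}}}{\mat{Z^{\pix{k}}}} \end{pmatrix},
\begin{pmatrix}
\Fun*{\K^{\pix{k}}}{\mat{X}, \mat{X}} & \Fun*{\K^{\pix{k}}}{\mat{X}, \mat{Z^{\pix{k}}}} \\
\Fun*{\K^{\pix{k}}}{\mat{Z^{\pix{k}}}, \mat{X}} & \Fun*{\K^{\pix{k}}}{\mat{Z^{\pix{k}}}, \mat{Z^{\pix{k}}}}
\end{pmatrix}
}.
\end{split}
\end{align}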
  173. We follow~\parencite{hensman_gaussian_2013} and choose the variational approximation $\Variat*{\mat{F^{\pix{k}}}, \mat{u^{\pix{k}}}} = \Prob*{\mat{F^{\pix{k}}} \given \mat{u^{\pix{k}}}, \mat{X}, \mat{Z^{\pix{k}}}}\Variat*{\mat{u^{\pix{k}}}}$ with $\Variat*{\mat{u^{\pix{k}}}} = \Gaussian*{\mat{u^{\pix{k}}} \given \mat{m^{\pix{k}}}, \mat{S^{\pix{k}}}}$.
  174. This formulation introduces the set $\Set*{\mat{Z^{\pix{k}}}, \mat{m^{\pix{k}}}, \mat{S^{\pix{k}}}}$ of variational parameters indicated in~\cref{fig:dynamic_graphical_model}.
  175. To simplify notation we drop the dependency on $\mat{Z}$ in the following.
  176. A central assumption of this approximation is that given enough well-placed inducing variables $\mat{u^{\pix{k}}}$, they are a sufficient statistic for the latent function values $\mat{F^{\pix{k}}}$.
  177. This implies conditional independence of the $\mat{f_n^{\pix{k}}}$ given $\mat{u^{\pix{k}}}$ and $\mat{X}$.
  178. The variational posterior of a single GP can then be written as,
  179. \begin{align}
  180. \begin{split}
  181. \Variat*{\mat{F^{\pix{k}}} \given \mat{X}}
  182. &=
  183. \int \Variat*{\mat{u^{\pix{k}}}}
  184. \Prob*{\mat{F^{\pix{k}}} \given \mat{u^{\pix{k}}}, \mat{X}}
  185. \diff \mat{u^{\pix{k}}}
  186. \\
  187. &=
  188. \int \Variat*{\mat{u^{\pix{k}}}}
  189. \prod_{n=1}^N \Prob*{\mat{f_n^{\pix{k}}} \given \mat{u^{\pix{k}}}, \mat{x_n}}
  190. \diff \mat{u^{\pix{k}}},
  191. \end{split}
  192. \end{align}
  193. which can be evaluated analytically, since it is a convolution of Gaussians.
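Concretely, writing $\mat{B^{\pix{k}}} = \Fun*{\K^{\pix{k}}}{\mat{X}, \mat{Z^{\pix{k}}}} \Fun*{\K^{\pix{k}}}{\mat{Z^{\pix{k}}}, \mat{Z^{\pix{k}}}}^{-1}$ as a shorthand used only here, the resulting marginal is the familiar sparse GP posterior~\parencite{hensman_gaussian_2013},
\begin{align}
\begin{split}
\Variat*{\mat{F^{\pix{k}}} \given \mat{X}} =
\Gaussian*{\mat{F^{\pix{k}}} \given
\Fun*{\mu^{\pix{k}}}{\mat{X}} + \mat{B^{\pix{k}}} \left( \mat{m^{\pix{k}}} - \Fun*{\mu^{\pix{k}}}{\mat{Z^{\pix{k}}}} \right),
\Fun*{\K^{\pix{k}}}{\mat{X}, \mat{X}} - \mat{B^{\pix{k}}} \left( \Fun*{\K^{\pix{k}}}{\mat{Z^{\pix{k}}}, \mat{Z^{\pix{k}}}} - \mat{S^{\pix{k}}} \right) \mat{B^{\pix{k}}}^\top}.
\end{split}
\end{align}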
  194. This formulation simplifies inference within single GPs.
  195. Next, we discuss how to handle the correlations between the different functions and the assignment processes.
  196. Given a set of assignments $\mat{A}$, this factorization along the data points is preserved in our model due to the assumed independence of the different functions in~\cref{eq:true_marginal_likelihood}.
  197. The independence is lost if the assignments are unknown.
  198. In this case, both the (a priori independent) assignment processes and the functions influence each other through data with unclear assignments.
  199. Following the ideas of doubly stochastic variational inference (DSVI) presented by~\textcite{salimbeni_doubly_2017} in the context of deep GPs, we maintain these correlations between different parts of the model while assuming factorization of the variational distribution.
  200. That is, our variational posterior takes the factorized form,
  201. \begin{align}
  202. \begin{split}
  203. \label{eq:variational_distribution}
  204. \Variat*{\mat{F}, \mat{\alpha}, \mat{U}}
  205. &= \Variat*{\mat{\alpha}, \Set*{\mat{F^{\pix{k}}}, \mat{u^{\pix{k}}}, \mat{u_\alpha^{\pix{k}}}}_{k=1}^K} \\
  206. \MoveEqLeft = \prod_{k=1}^K\prod_{n=1}^N \Prob*{\mat{\alpha_n^{\pix{k}}} \given \mat{u_\alpha^{\pix{k}}}, \mat{x_n}}\Variat*{\mat{u_\alpha^{\pix{k}}}}
  207. \prod_{k=1}^K \prod_{n=1}^N \Prob*{\mat{f_n^{\pix{k}}} \given \mat{u^{\pix{k}}}, \mat{x_n}}\Variat*{\mat{u^{\pix{k}}}}.
  208. \end{split}
  209. \end{align}
  210. Our goal is to recover a posterior for both the generating functions and the assignment of data.
  211. To achieve this, instead of marginalizing $\mat{A}$, we consider the variational joint of $\mat{Y}$ and $\mat{A}$,
  212. \begin{align}
  213. \begin{split}
  214. \Variat*{\mat{Y}, \mat{A}} &=
  215. \int
  216. \Prob*{\mat{Y} \given \mat{F}, \mat{A}}
  217. \Prob*{\mat{A} \given \mat{\alpha}}
  218. \Variat*{\mat{F}, \mat{\alpha}}
  219. \diff \mat{F} \diff \mat{\alpha},
  220. \end{split}
  221. \end{align}
  222. which retains both the Gaussian likelihood of $\mat{Y}$ and the multinomial likelihood of $\mat{A}$ in \cref{eq:multinomial_likelihood}.
  223. A lower bound $\Ell_{\text{DAGP}}$ for the log-joint $\log\Prob*{\mat{Y}, \mat{A} \given \mat{X}}$ of DAGP is given by,
  224. \begin{align}
  225. \begin{split}
  226. \label{eq:variational_bound}
  227. \Ell_{\text{DAGP}} &= \Moment*{\E_{\Variat*{\mat{F}, \mat{\alpha}, \mat{U}}}}{\log\frac{\Prob*{\mat{Y}, \mat{A}, \mat{F}, \mat{\alpha}, \mat{U} \given \mat{X}}}{\Variat*{\mat{F}, \mat{\alpha}, \mat{U}}}} \\
  228. &= \sum_{n=1}^N \Moment*{\E_{\Variat*{\mat{f_n}}}}{\log \Prob*{\mat{y_n} \given \mat{f_n}, \mat{a_n}}}
  229. + \sum_{n=1}^N \Moment*{\E_{\Variat*{\mat{\alpha_n}}}}{\log \Prob*{\mat{a_n} \given \mat{\alpha_n}}} \\
  230. &\quad - \sum_{k=1}^K \KL{\Variat*{\mat{u^{\pix{k}}}}}{\Prob*{\mat{u^{\pix{k}}} \given \mat{Z^{\pix{k}}}}}
  231. - \sum_{k=1}^K \KL{\Variat*{\mat{u_\alpha^{\pix{k}}}}}{\Prob*{\mat{u_\alpha^{\pix{k}}} \given \mat{Z_\alpha^{\pix{k}}}}}.
  232. \end{split}
  233. \end{align}
  234. Due to the structure of~\cref{eq:variational_distribution}, the bound factorizes along the data enabling stochastic optimization.
  235. This bound has complexity $\Fun*{\Oh}{NM^2K}$ to evaluate.
  236. \subsection{Optimization of the Lower Bound}
  237. \label{subsec:computation}
  238. An important property of the variational bound for DSVI~\parencite{salimbeni_doubly_2017} is that taking samples for single data points is straightforward and can be implemented efficiently.
  239. Specifically, for some $k$ and $n$, samples $\mat{\hat{f}_n^{\pix{k}}}$ from $\Variat*{\mat{f_n^{\pix{k}}}}$ are independent of all other parts of the model and can be drawn using samples from univariate unit Gaussians using reparametrizations~\parencite{kingma_variational_2015,rezende_stochastic_2014}.
  240. Note that it would not be necessary to sample from the different processes, since $\Variat*{\mat{F^{\pix{k}}}}$ can be computed analytically~\parencite{hensman_gaussian_2013}.
However, we apply the sampling scheme to the optimization of both the assignment processes $\mat{\alpha}$ and the assignments $\mat{A}$: for $\mat{\alpha}$, the analytical propagation of uncertainties through the $\softmax$ renormalization and the multinomial likelihood is intractable, but it can easily be evaluated using sampling.
  242. We optimize $\Ell_{\text{DAGP}}$ to simultaneously recover maximum likelihood estimates of the hyperparameters $\mat{\theta}$, the variational parameters $\Set*{\mat{Z}, \mat{m}, \mat{S}}$, and assignments $\mat{A}$.
  243. For every $n$, we represent the belief about $\mat{a_n}$ as a $K$-dimensional discrete distribution $\Variat*{\mat{a_n}}$.
  244. This distribution models the result of drawing a sample from $\Multinomial*{\mat{a_n} \given \Fun{\softmax}{\mat{\alpha_n}}}$ during the generation of the data point $(\mat{x_n}, \mat{y_n})$.
  245. Since we want to optimize $\Ell_{\text{DAGP}}$ using (stochastic) gradient descent, we need to employ a continuous relaxation to gain informative gradients of the bound with respect to the binary (and discrete) vectors $\mat{a_n}$.
  246. One straightforward way to relax the problem is to use the current belief about $\Variat*{\mat{a_n}}$ as parameters for a convex combination of the $\mat{f_n^{\pix{k}}}$, that is, to approximate $\mat{f_n} \approx \sum_{k=1}^K \Variat*{\mat{a_n^{\pix{k}}}}\mat{\hat{f}_n^{\pix{k}}}$.
  247. Using this relaxation is problematic in practice.
  248. Explaining data points as mixtures of the different generating processes violates the modelling assumption that every data point was generated using exactly one function but can substantially simplify the learning problem.
  249. Because of this, special care must be taken during optimization to enforce the sparsity of $\Variat*{\mat{a_n}}$.
  250. To avoid this problem, we propose using a different relaxation based on additional stochasticity.
  251. Instead of directly using $\Variat*{\mat{a_n}}$ to combine the $\mat{f_n^{\pix{k}}}$, we first draw a sample $\mat{\hat{a}_n}$ from a concrete random variable as suggested by~\textcite{maddison_concrete_2016}, parameterized by $\Variat*{\mat{a_n}}$.
  252. Based on a temperature parameter $\lambda$, a concrete random variable enforces sparsity but is also continuous and yields informative gradients using automatic differentiation.
Samples from a concrete random variable lie on the probability simplex, and for $\lambda \to 0$ their distribution approaches a discrete distribution with one-hot samples.
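The following NumPy sketch illustrates how such a relaxed sample can be drawn from a concrete distribution parameterized by the current belief $\Variat*{\mat{a_n}}$ and a temperature; it is meant as an illustration only, since in our implementation the samples are drawn inside the computation graph so that automatic differentiation provides the gradients.
\begin{verbatim}
import numpy as np

def sample_concrete(log_probs, temperature=0.5, rng=None):
    # Draw a relaxed one-hot sample from a concrete (Gumbel-softmax)
    # distribution with the given (unnormalized) log-probabilities.
    if rng is None:
        rng = np.random.default_rng()
    gumbel = -np.log(-np.log(rng.uniform(size=log_probs.shape)))
    logits = (log_probs + gumbel) / temperature
    logits -= logits.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(logits)
    return e / e.sum(axis=-1, keepdims=True)

# Current belief q(a_n) over K = 3 processes for a mini-batch of three points.
q_a = np.array([[0.7, 0.2, 0.1],
                [0.1, 0.8, 0.1],
                [0.3, 0.3, 0.4]])
a_hat = sample_concrete(np.log(q_a), temperature=0.3)
print(a_hat.round(2))  # rows sum to one and are close to one-hot
\end{verbatim}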
  254. Our approximate evaluation of the bound in \cref{eq:variational_bound} during optimization has multiple sources of stochasticity, all of which are unbiased.
  255. First, we approximate the expectations using Monte Carlo samples $\mat{\hat{f}_n^{\pix{k}}}$, $\mat{\hat{\alpha}_n^{\pix{k}}}$, and $\mat{\hat{a}_n}$.
  256. And second, the factorization of the bound along the data allows us to use mini-batches for optimization~\parencite{salimbeni_doubly_2017, hensman_gaussian_2013}.
  257. \subsection{Approximate Predictions}
  258. \label{subsec:predictions}
  259. Predictions for a test location $\mat{x_\ast}$ are mixtures of $K$ independent Gaussians, given by,
  260. \begin{align}
  261. \begin{split}
  262. \label{eq:predictive_posterior}
  263. \Variat*{\mat{f_\ast} \given \mat{x_\ast}}
  264. &= \int \sum_{k=1}^K \Variat*{a_\ast^{\pix{k}} \given \mat{x_\ast}} \Variat*{\mat{f_\ast^{\pix{k}}} \given \mat{x_\ast}} \diff \mat{a_\ast^{\pix{k}}}
  265. \approx \sum_{k=1}^K \hat{a}_\ast^{\pix{k}} \mat{\hat{f}_\ast^{\pix{k}}}.
  266. \end{split}
  267. \end{align}
  268. The predictive posteriors of the $K$ functions $\Variat*{\mat{f_\ast^{\pix{k}}} \given \mat{x_\ast}}$ are given by $K$ independent shallow GPs and can be calculated analytically~\parencite{hensman_gaussian_2013}.
  269. Samples from the predictive density over $\Variat*{\mat{a_\ast} \given \mat{x_\ast}}$ can be obtained by sampling from the GP posteriors $\Variat*{\mat{\alpha_\ast^{\pix{k}}} \given \mat{x_\ast}}$ and renormalizing the resulting vector $\mat{\alpha_\ast}$ using the $\softmax$-function.
  270. The distribution $\Variat*{\mat{a_\ast} \given \mat{x_\ast}}$ reflects the model's belief about how many and which of the $K$ generative processes are relevant at the test location $\mat{x_\ast}$ and their relative probability.
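A NumPy sketch of this approximate predictive density at a single test input is given below; the posterior moments passed in are hypothetical stand-ins for the analytically available GP marginals.
\begin{verbatim}
import numpy as np

def sample_predictive(alpha_mean, alpha_var, f_mean, f_var,
                      num_samples=1000, rng=None):
    # Monte Carlo samples from the DAGP predictive mixture at one x_*.
    # alpha_mean, alpha_var: (K,) moments of the assignment GPs at x_*.
    # f_mean, f_var:         (K,) moments of the K function GPs at x_*.
    if rng is None:
        rng = np.random.default_rng()
    K = alpha_mean.shape[0]
    # Sample alpha_* from the assignment GP posteriors, renormalize (softmax).
    alpha = rng.normal(alpha_mean, np.sqrt(alpha_var), size=(num_samples, K))
    w = np.exp(alpha - alpha.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)           # relaxed assignments a_*
    # Sample f_* from the K function posteriors and weigh them by a_*.
    f = rng.normal(f_mean, np.sqrt(f_var), size=(num_samples, K))
    return (w * f).sum(axis=1)

samples = sample_predictive(np.array([1.0, -1.0]), np.array([0.1, 0.1]),
                            np.array([0.0, 2.0]), np.array([0.05, 0.05]))
print(samples.mean(), samples.std())
\end{verbatim}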
  271. \subsection{Deep Gaussian Processes}
  272. \label{subsec:deep_gp}
  273. For clarity, we have described the variational bound in terms of a shallow GP.
However, as long as its variational bound can be sampled efficiently, any model can be used in place of a shallow GP for the $f^{\pix{k}}$.
  275. Since our approximation is based on DSVI, an extension to deep GPs is straightforward.
  276. Analogously to~\parencite{salimbeni_doubly_2017}, our new prior assumption about the $\nth{k}$ latent function values $\Prob*{\mat{F^{\prime\pix{k}}} \given \mat{X}}$ is given by,
  277. \begin{align}
  278. \begin{split}
\Prob*{\mat{F^{\prime\pix{k}}} \given \mat{X}} = \prod_{l=1}^L \Prob*{\mat{F_l^{\prime\pix{k}}} \given \mat{u_l^{\prime\pix{k}}}, \mat{F_{l-1}^{\prime\pix{k}}}, \mat{Z_l^{\prime\pix{k}}}},
  280. \end{split}
  281. \end{align}
  282. for an $L$-layer deep GP and with $\mat{F_0^{\prime\pix{k}}} \coloneqq \mat{X}$.
  283. Similar to the single-layer case, we introduce sets of inducing points $\mat{Z_l^{\prime\pix{k}}}$ and a variational distribution over their corresponding function values $\Variat*{\mat{u_l^{\prime\pix{k}}}} = \Gaussian*{\mat{u_l^{\prime\pix{k}}} \given \mat{m_l^{\prime\pix{k}}}, \mat{S_l^{\prime\pix{k}}}}$.
  284. We collect the latent multi-layer function values as $\mat{F^\prime} = \Set{\mat{F_l^{\prime\pix{k}}}}_{k=1,l=1}^{K,L}$ and corresponding $\mat{U^\prime}$ and assume an extended variational distribution,
  285. \begin{align}
  286. \begin{split}
  287. \label{eq:deep_variational_distribution}
  288. \Variat*{\mat{F^\prime}, \mat{\alpha}, \mat{U^\prime}}
  289. &= \Variat*{\mat{\alpha}, \Set*{\mat{u_\alpha^{\pix{k}}}}_{k=1}^K, \Set*{\mat{F_l^{\prime\pix{k}}}, \mat{u_l^{\prime\pix{k}}}}_{k=1,l=1}^{K,L}} \\
  290. \MoveEqLeft[4] = \prod_{k=1}^K\prod_{n=1}^N \Prob*{\mat{\alpha_n^{\pix{k}}} \given \mat{u_\alpha^{\pix{k}}}, \mat{x_n}}\Variat*{\mat{u_\alpha^{\pix{k}}}}
  291. \prod_{k=1}^K \prod_{l=1}^L \prod_{n=1}^N \Prob*{\mat{f_{n,l}^{\prime\pix{k}}} \given \mat{u_l^{\prime\pix{k}}}, \mat{x_n}}\Variat*{\mat{u_l^{\prime\pix{k}}}},
  292. \end{split}
  293. \end{align}
  294. where we identify $\mat{f_n^{\prime\pix{k}}} = \mat{f_{n,L}^{\prime\pix{k}}}$.
As the $\nth{n}$ marginal of the $\nth{L}$ layer depends only on the $\nth{n}$ marginals of the layers above, sampling from it remains straightforward~\parencite{salimbeni_doubly_2017}.
  296. The marginal is given by,
  297. \begin{align}
  298. \begin{split}
  299. \Variat{\mat{f_{n,L}^{\prime\pix{k}}}} =
  300. \int
  301. \Variat{\mat{f_{n,L}^{\prime\pix{k}}} \given \mat{f_{n,L-1}^{\prime\pix{k}}}}
  302. \prod_{l=1}^{L-1} \Variat{\mat{f_{n,l}^{\prime\pix{k}}} \given \mat{f_{n,l-1}^{\prime\pix{k}}}}
  303. \diff \mat{f_{n,l}^{\prime\pix{k}}}.
  304. \end{split}
  305. \end{align}
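Sampling from this marginal amounts to propagating a reparameterized sample layer by layer, as in the following sketch; the per-layer conditionals are stubbed with hypothetical callables where DAGP would use the sparse GP marginals of each layer.
\begin{verbatim}
import numpy as np

def sample_deep_marginal(x_n, layers, rng=None):
    # Propagate one reparameterized sample through an L-layer deep GP.
    # layers[l](h) returns mean and variance of q(f_{n,l} | f_{n,l-1} = h).
    if rng is None:
        rng = np.random.default_rng()
    h = x_n
    for layer in layers:
        mean, var = layer(h)
        h = mean + np.sqrt(var) * rng.standard_normal(mean.shape)
    return h  # a sample of f'_{n,L}

# Hypothetical two-layer toy example with fixed conditionals.
toy_layers = [lambda h: (np.tanh(h), 0.01 * np.ones_like(h)),
              lambda h: (2.0 * h, 0.05 * np.ones_like(h))]
print(sample_deep_marginal(np.array([0.3]), toy_layers))
\end{verbatim}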
  306. The complete bound is structurally similar to \cref{eq:variational_bound} and given by,
  307. \begin{align}
  308. \begin{split}
  309. \label{eq:deep_variational_bound}
  310. \Ell^\prime_{\text{DAGP}}
  311. &= \sum_{n=1}^N \Moment*{\E_{\Variat*{\mat{f^\prime_n}}}}{\log \Prob*{\mat{y_n} \given \mat{f^\prime_n}, \mat{a_n}}}
  312. + \sum_{n=1}^N \Moment*{\E_{\Variat*{\mat{\alpha_n}}}}{\log \Prob*{\mat{a_n} \given \mat{\alpha_n}}} \\
  313. \MoveEqLeft - \sum_{k=1}^K \sum_{l=1}^L \KL{\Variat{\mat{u_l^{\pix{k}}}}}{\Prob{\mat{u_l^{\pix{k}}} \given \mat{Z_l^{\pix{k}}}}}
  314. - \sum_{k=1}^K \KL{\Variat*{\mat{u_\alpha^{\pix{k}}}}}{\Prob*{\mat{u_\alpha^{\pix{k}}} \given \mat{Z_\alpha^{\pix{k}}}}}.
  315. \end{split}
  316. \end{align}
  317. To calculate the first term, samples have to be propagated through the deep GP structures.
  318. This extended bound thus has complexity $\Fun*{\Oh}{NM^2LK}$ to evaluate in the general case and complexity $\Fun*{\Oh}{NM^2\cdot\Fun{\max}{L, K}}$ if the assignments $\mat{a_n}$ take binary values.
  319. \section{Experiments}
  320. \label{sec:experiments}
  321. \begin{table}[t]
  322. \centering
  323. \caption{
  324. \label{tab:model_capabilities}
  325. Comparison of qualitative model capabilities.
  326. A model has a capability if it contains components which enable it to solve the respective task in principle.
  327. }
  328. \scriptsize
  329. \newcolumntype{Y}{>{\centering\arraybackslash}X}%
  330. \newcommand{\yes}{\checkmark}
  331. \newcommand{\no}{--}
  332. \newcommand{\resultrow}[9]{#1 & #4 & #7 & #3 & #9 & #5 & #6 & #8 \\}
  333. % \setlength{\tabcolsep}{1pt}
  334. \begin{tabularx}{\linewidth}{lYYYYYYYY}
  335. \toprule
  336. \resultrow{}{Bayesian}{Scalable Inference}{Predictive Posterior}{Data Association}{Predictive Associations}{Multimodal Data}{Separate Models}{Interpretable Priors}
  337. \midrule
  338. Experiment & & & & & \cref{tab:choicenet} & \cref{tab:cartpole} & \cref{fig:semi_bimodal} \\
  339. \midrule
  340. \resultrow{DAGP (Ours)}{\yes}{\yes}{\yes}{\yes}{\yes}{\yes}{\yes}{\yes}
  341. \addlinespace
  342. \resultrow{OMGP \parencite{lazaro-gredilla_overlapping_2012}}{\yes}{\no}{\yes}{\yes}{\no}{\yes}{\yes}{\yes}
  343. \resultrow{RGPR \parencite{rasmussen_infinite_2002}}{\yes}{\no}{\yes}{\no}{\no}{\yes}{\no}{\yes}
  344. % \resultrow{MLE \parencite{tresp_mixtures_2001}}{\yes}{\no}{\yes}{\no}{\no}{\no}{\no}{\yes}
  345. % \resultrow{LatentGP \parencite{bodin_latent_2017}}{\yes}{\no}{\yes}{\no}{\no}{\yes}{\no}{\yes}
  346. \resultrow{GPR}{\yes}{\yes}{\yes}{\no}{\no}{\no}{\no}{\yes}
  347. \addlinespace
  348. \resultrow{BNN+LV \parencite{depeweg_learning_2016}}{\yes}{\yes}{\yes}{\no}{\no}{\yes}{\no}{\no}
  349. \resultrow{MDN \parencite{bishop_mixture_1994}}{\no}{\yes}{\yes}{\no}{\no}{\yes}{\no}{\no}
  350. \resultrow{MLP}{\no}{\yes}{\yes}{\no}{\no}{\no}{\no}{\no}
  351. \bottomrule
  352. \end{tabularx}
  353. \end{table}
  354. In this section, we investigate the behavior of the DAGP model.
  355. We use an implementation of DAGP in TensorFlow~\parencite{tensorflow2015-whitepaper} based on GPflow~\parencite{matthews_gpflow_2017} and the implementation of DSVI~\parencite{salimbeni_doubly_2017}.
  356. \Cref{tab:model_capabilities} compares qualitative properties of DAGP and related work.
All models can solve standard regression problems and yield unimodal predictive distributions or, in the case of multi-layer perceptrons (MLP), a single point estimate.
Neither standard Gaussian process regression (GPR) nor the MLP imposes structure which enables the models to handle multi-modal data.
  359. Mixture density networks (MDN)~\parencite{bishop_mixture_1994} and the infinite mixtures of Gaussian processes (RGPR)~\parencite{rasmussen_infinite_2002} model yield multi-modal posteriors through mixtures with many components but do not solve an association problem.
  360. Similarly, Bayesian neural networks with added latent variables (BNN+LV)~\parencite{depeweg_learning_2016} represent such a mixture through a continuous latent variable.
  361. Both the overlapping mixtures of Gaussian processes (OMGP)~\parencite{lazaro-gredilla_overlapping_2012} model and DAGP explicitly model the data association problem and yield independent models for the different generating processes.
  362. However, OMGP assumes global relevance of the different modes.
In contrast, DAGP infers a spatial posterior of this relevance.
  364. We evaluate our model on three problems to highlight the following advantages of the explicit structure of DAGP:
  365. \emph{Interpretable priors give structure to ill-posed data association problems.}
  366. In \cref{subsec:choicenet}, we consider a noise separation problem, where a signal of interest is disturbed with uniform noise.
  367. To solve this problem, assumptions about what constitutes a signal are needed.
  368. The hierarchical structure of DAGP allows us to formulate independent and interpretable priors on the noise and signal processes.
  369. \emph{Predictive associations represent knowledge about the relevance of generative processes.}
  370. In \cref{subsec:semi_bimodal}, we investigate the implicit incentive of DAGP to explain data using as few processes as possible.
In addition to a joint posterior explaining the data, DAGP also gives insight into the relative importance of the different processes in different parts of the input space.
  372. DAGP is able to explicitly recover the changing number of modes in a data set.
  373. \emph{Separate models for independent generating processes avoid model pollution.}
  374. In \cref{subsec:cartpole}, we simulate a system with multiple operational regimes via mixed observations of two different cart-pole systems.
  375. DAGP successfully learns an informative joint posterior by solving the underlying association problem.
  376. We show that the DAGP posterior contains two separate models for the two original operational regimes.
  377. \subsection{Noise Separation}
  378. \label{subsec:choicenet}
  379. %
  380. \begin{table}[t]
  381. \centering
  382. \caption{
  383. \label{tab:choicenet}
  384. Results on the ChoiceNet data set.
  385. The gray part of the table shows RMSE results for baseline models from~\parencite{choi_choicenet_2018}.
  386. For our experiments using the same setup, we report RMSE comparable to the previous results together with MLL.
  387. Both are calculated based on a test set of 1000 equally spaced samples of the noiseless underlying function.
  388. }%
  389. \newcolumntype{H}{>{\setbox0=\hbox\bgroup}c<{\egroup}@{}}
  390. \newcolumntype{Y}{>{\centering\arraybackslash}X}%
  391. \newcolumntype{Z}{>{\columncolor{sStone!33}\centering\arraybackslash}X}%
  392. \begin{tabularx}{\linewidth}{rYYYY|ZZZZHZ}
  393. \toprule
  394. Outliers & DAGP & OMGP & DAGP & OMGP & CN & MDN & MLP & GPR & LGPR & RGPR \\
  395. & \scriptsize MLL & \scriptsize MLL & \scriptsize RMSE & \scriptsize RMSE & \scriptsize RMSE & \scriptsize RMSE & \scriptsize RMSE & \scriptsize RMSE & \scriptsize RMSE & \scriptsize RMSE \\
  396. \midrule
  397. 0\,\% & \textbf{2.86} & 2.09 & 0.008 & \textbf{0.005} & 0.034 & 0.028 & 0.039 & 0.008 & 0.022 & 0.017 \\
  398. 20\,\% & \textbf{2.71} & 1.83 & 0.008 & \textbf{0.005} & 0.022 & 0.087 & 0.413 & 0.280 & 0.206 & 0.013 \\
  399. 40\,\% & \textbf{2.12} & 1.60 & \textbf{0.005} & 0.007 & 0.018 & 0.565 & 0.452 & 0.447 & 0.439 & 1.322 \\
  400. 60\,\% & 0.874 & \textbf{1.23} & 0.031 & \textbf{0.006} & 0.023 & 0.645 & 0.636 & 0.602 & 0.579 & 0.738 \\
  401. 80\,\% & \textbf{0.126} & -1.35 & 0.128 & 0.896 & \textbf{0.084} & 0.778 & 0.829 & 0.779 & 0.777 & 1.523 \\
  402. \bottomrule
  403. \end{tabularx}
  404. \end{table}
  405. \begin{figure}[t]
  406. \centering
  407. \includestandalone{figures/choicenet_data_40}\hspace{-7pt}
  408. \includestandalone{figures/choicenet_joint_40}\hspace{-7pt}
  409. \includestandalone{figures/choicenet_attrib_40}
  410. \\
  411. \includestandalone{figures/choicenet_data}\hspace{-7pt}
  412. \includestandalone{figures/choicenet_joint}\hspace{-7pt}
  413. \includestandalone{figures/choicenet_attrib}
  414. \caption{
  415. \label{fig:choicenet}
  416. DAGP on the ChoiceNet data set with 40\,\% outliers (upper row) and 60\,\% outliers (lower row).
  417. We show the raw data (left), joint posterior (center) and assignments (right).
  418. The bimodal DAGP identifies the signal perfectly up to 40\,\% outliers.
  419. For 60\,\% outliers, some of the noise is interpreted as signal, but the latent function is still recovered.
  420. }
  421. \end{figure}
  422. %
  423. We consider an experiment based on a noise separation problem.
  424. We apply DAGP to a one-dimensional regression problem with uniformly distributed asymmetric outliers in the training data.
  425. We use a task proposed by~\textcite{choi_choicenet_2018} where we sample $x \in [-3, 3]$ uniformly and apply the function $\Fun{f}{x} = (1 - \delta)(\Fun{\cos}{\sfrac{\pi}{2} \cdot x}\Fun{\exp}{-(\sfrac{x}{2})^2} + \gamma) + \delta \cdot \epsilon$, where $\delta \sim \Fun{\Ber}{\lambda}$, $\epsilon \sim \Fun{\Uni}{-1, 3}$ and $\gamma \sim \Gaussian{0, 0.15^2}$.
  426. That is, a fraction $\lambda$ of the training data, the outliers, are replaced by asymmetric uniform noise.
  427. We sample a total of 1000 data points and use $25$ inducing points for every GP in our model.
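To make the setup explicit, a NumPy sketch of this data-generating process is shown below; the seed and the exact sampling bookkeeping are illustrative assumptions.
\begin{verbatim}
import numpy as np

def make_noise_separation_data(n=1000, outlier_rate=0.4, seed=0):
    # Corrupted regression data: a fraction of points is replaced by
    # asymmetric uniform noise, the rest follows the noisy signal.
    rng = np.random.default_rng(seed)
    x = rng.uniform(-3.0, 3.0, size=n)
    signal = (np.cos(0.5 * np.pi * x) * np.exp(-(x / 2.0) ** 2)
              + rng.normal(0.0, 0.15, size=n))        # gamma ~ N(0, 0.15^2)
    outlier = rng.uniform(-1.0, 3.0, size=n)          # eps ~ U(-1, 3)
    delta = rng.random(n) < outlier_rate              # delta ~ Ber(lambda)
    y = np.where(delta, outlier, signal)
    return x, y, delta

x, y, is_outlier = make_noise_separation_data(outlier_rate=0.4)
\end{verbatim}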
  428. Every generating process in our model can use a different kernel and therefore encode different prior assumptions.
  429. For this setting, we use two processes, one with a squared exponential kernel and one with a white noise kernel.
  430. This encodes the problem statement that every data point is either part of the signal we wish to recover or uncorrelated noise.
  431. To avoid pathological solutions for high outlier ratios, we add a prior to the likelihood variance of the first process, which encodes our assumption that there actually is a signal in the training data.
  432. The model proposed in~\parencite{choi_choicenet_2018}, called ChoiceNet (CN), is a specific neural network structure and inference algorithm to deal with corrupted data.
  433. In their work, they compare their approach to the MLP, MDN, GPR, and RGPR models.
  434. We add experiments for both DAGP and OMGP.
  435. \Cref{tab:choicenet} shows results for outlier rates varied from 0\,\% to 80\,\%.
  436. Besides the root mean squared error (RMSE) reported in~\parencite{choi_choicenet_2018}, we also report the mean test log likelihood (MLL).
  437. Since we can encode the same prior knowledge about the signal and noise processes in both OMGP and DAGP, the results of the two models are comparable:
  438. For low outlier rates, they correctly identify the outliers and ignore them, resulting in a predictive posterior of the signal equivalent to standard GP regression without outliers.
  439. In the special case of 0\,\% outliers, the models correctly identify that the process modelling the noise is not necessary, thereby simplifying to standard GP regression.
  440. For high outlier rates, stronger prior knowledge about the signal is required to still identify it perfectly.
  441. \Cref{fig:choicenet} shows the DAGP posterior for an outlier rate of 60\,\%.
  442. While the function has still been identified well, some of the noise is also explained using this process, thereby introducing slight errors in the predictions.
  443. \subsection{Multimodal Data}
  444. \label{subsec:semi_bimodal}
  445. %
  446. \begin{figure}[t]
  447. \centering
  448. \includestandalone{figures/semi_bimodal_joint}
  449. \includestandalone{figures/semi_bimodal_attrib}
  450. \includestandalone{figures/semi_bimodal_attrib_process}
  451. \caption{
  452. \label{fig:semi_bimodal}
  453. The DAGP posterior on an artificial data set with bimodal and trimodal parts.
The joint predictions (top) are mixtures of four Gaussians weighted by the assignment probabilities $\mat{\alpha}$ (bottom).
The weights are represented via the opacity of the modes.
The model has learned that the mode $k = 2$ is irrelevant and that the mode $k = 1$ is only relevant around the interval $[0, 5]$.
  457. Outside this interval, the mode $k = 3$ is twice as likely as the mode $k = 4$.
  458. The concrete assignments $\mat{a}$ (middle) of the training data show that the mode $k = 1$ is only used to explain observations where the training data is trimodal.
  459. The mode $k = 2$ is never used.
  460. }
  461. \end{figure}
  462. %
  463. Our second experiment applies DAGP to a multimodal data set.
  464. The data, together with recovered posterior attributions, can be seen in \cref{fig:semi_bimodal}.
  465. We uniformly sample 350 data points in the interval $x \in [-2\pi, 2\pi]$ and obtain $y_1 = \Fun{\sin}{x} + \epsilon$, $y_2 = \Fun{\sin}{x} - 2 \Fun{\exp}{-\sfrac{1}{2} \cdot (x-2)^2} + \epsilon$ and $y_3 = -1 - \sfrac{3}{8\pi} \cdot x + \sfrac{3}{10} \cdot \Fun*{\sin}{2x} + \epsilon$ with additive independent noise $\epsilon \sim \Gaussian*{0, 0.005^2}$.
The resulting data set $\D = \Set{\left( x, y_1 \right), \left( x, y_2 \right), \left( x, y_3 \right)}$ is trimodal in the interval $[0, 5]$ and otherwise bimodal, with one mode containing twice as much data as the other.
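A NumPy sketch of this construction is given below, assuming the same 350 inputs are shared across the three branches.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-2 * np.pi, 2 * np.pi, size=350)
y1 = np.sin(x) + rng.normal(0.0, 0.005, size=x.shape)
y2 = (np.sin(x) - 2 * np.exp(-0.5 * (x - 2) ** 2)
      + rng.normal(0.0, 0.005, size=x.shape))
y3 = (-1 - 3 / (8 * np.pi) * x + 3 / 10 * np.sin(2 * x)
      + rng.normal(0.0, 0.005, size=x.shape))
X = np.concatenate([x, x, x])          # inputs of the combined data set D
Y = np.concatenate([y1, y2, y3])       # corresponding outputs
\end{verbatim}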
  467. We use squared exponential kernels as priors for both the $f^{\pix{k}}$ and $\alpha^{\pix{k}}$ and $25$ inducing points in every GP.
  468. \Cref{fig:semi_bimodal} shows the posterior of a DAGP with $K = 4$ modes applied to the data, which correctly identified the underlying functions.
  469. The figure shows the posterior belief about the assignments $\mat{A}$ and illustrates that DAGP recovered that it needs only three of the four available modes to explain the data.
  470. One of the modes is only assigned points in the interval $[0, 5]$ where the data is actually trimodal.
  471. This separation is explicitly represented in the model via the assignment processes $\mat{\alpha}$ (bottom panel in \cref{fig:semi_bimodal}).
  472. Importantly, DAGP does not only cluster the data with respect to the generating processes but also infers a factorization of the input space with respect to the relative importance of the different processes.
  473. The model has disabled the mode $k = 2$ in the complete input space and has learned that the mode $k = 1$ is only relevant in the interval $[0, 5]$ where the three enabled modes each explain about a third of the data.
Outside this interval, the model has learned that one of the modes has about twice the assignment probability of the other, thus correctly reconstructing the true generative process.
  475. The DAGP is implicitly incentivized to explain the data using as few modes as possible through the likelihood term of the inferred $\mat{a_n}$ in \cref{eq:variational_bound}.
Away from the data, for example at $x = -10$, the inferred modes and assignment processes start reverting to their respective priors.
  477. \subsection{Mixed Cart-pole Systems}
  478. \label{subsec:cartpole}
  479. \begin{table}[t]
  480. \centering
  481. \caption{
  482. \label{tab:cartpole}
  483. Results on the cart-pole data set.
  484. We report mean log likelihoods with their standard error for ten runs.
  485. The upper results are obtained by training the model on the mixed data set and evaluating it jointly (left) on multi-modal predictions.
  486. We evaluate the two inferred sub-models for the default system (center) and short-pole system (right).
We provide gray baseline comparisons with BNN+LV and GPR models, which cannot solve the data association problem.
BNN+LV yields joint predictions which cannot be separated into sub-models.
Specialized GPR models trained on the individual training sets give a measure of the possible performance if the data association problem were solved perfectly.
  490. }%
  491. \sisetup{
  492. table-format=-1.3(3),
  493. table-number-alignment=center,
  494. separate-uncertainty=true,
  495. % table-align-uncertainty,
  496. table-figures-uncertainty=1,
  497. detect-weight,
  498. }
  499. \newcolumntype{H}{>{\setbox0=\hbox\bgroup\begin{math}}c<{\end{math}\egroup}@{}}
  500. \setlength{\tabcolsep}{1pt}
  501. \begin{tabular}{HlSSHSHS}
  502. \toprule
  503. & & \multicolumn{2}{c}{Mixed} & \multicolumn{2}{c}{Default only} & \multicolumn{2}{c}{Short-pole only} \\
  504. \cmidrule(lr){3-4} \cmidrule(lr){5-6} \cmidrule(lr){7-8}
  505. Runs & & {Train} & {Test} & {Train} & {Test} & {Train} & {Test} \\
  506. \midrule
  507. 10 & DAGP & \bfseries 0.575 \pm 0.013 & \bfseries 0.521 \pm 0.009 & 0.855 \pm 0.002 & 0.844 \pm 0.002 & 0.686 \pm 0.009 & \bfseries 0.602 \pm 0.005 \\
  508. % 10 & U-DAGP & 0.472 \pm 0.002 & 0.425 \pm 0.002 & {\textemdash} & {\textemdash} & {\textemdash} & {\textemdash} \\
  509. 10 & DAGP 2 & 0.548 \pm 0.012 & \bfseries 0.519 \pm 0.008 & 0.851 \pm 0.003 & \bfseries 0.859 \pm 0.001 & 0.673 \pm 0.013 & 0.599 \pm 0.011 \\
  510. 10 & DAGP 3 & 0.527 \pm 0.004 & 0.491 \pm 0.003 & 0.858 \pm 0.002 & 0.852 \pm 0.002 & 0.624 \pm 0.011 & 0.545 \pm 0.012 \\
  511. % 10 & DAGP 4 & 0.517 \pm 0.006 & 0.485 \pm 0.003 & 0.858 \pm 0.001 & 0.852 \pm 0.002 & 0.602 \pm 0.011 & 0.546 \pm 0.010 \\
  512. % 10 & DAGP 5 & 0.535 \pm 0.004 & 0.506 \pm 0.005 & 0.851 \pm 0.003 & 0.851 \pm 0.003 & 0.662 \pm 0.009 & 0.581 \pm 0.012 \\
  513. \addlinespace
  514. 10 & OMGP & -1.04 \pm 0.02 & -1.11 \pm 0.03 & 0.64 \pm 0.02 & 0.66 \pm 0.02 & -0.9 \pm 0.2 & -0.81 \pm 0.12 \\
  515. \midrule
  516. \rowcolor{sStone!33}
  517. 10 & BNN+LV & 0.519 \pm 0.005 & 0.524 \pm 0.005 & {\textemdash} & {\textemdash} & {\textemdash} & {\textemdash} \\
  518. \rowcolor{sStone!33}
  519. 10 & GPR Mixed & 0.452 \pm 0.003 & 0.421 \pm 0.003 & {\textemdash} & {\textemdash} & {\textemdash} & {\textemdash} \\
  520. \rowcolor{sStone!33}
  521. 10 & GPR Default & {\textemdash} & {\textemdash} & 0.873 \pm 0.001 & 0.867 \pm 0.001 & -7.01 \pm 0.11 & -7.54 \pm 0.14 \\
  522. \rowcolor{sStone!33}
  523. 10 & GPR Short & {\textemdash} & {\textemdash} & -5.24 \pm 0.04 & -5.14 \pm 0.04 & 0.903 \pm 0.003 & 0.792 \pm 0.003 \\
  524. \bottomrule
  525. \end{tabular}
  526. \end{table}
  527. Our third experiment is based on the cart-pole benchmark for reinforcement learning as described by~\textcite{barto_neuronlike_1983} and implemented in OpenAI Gym~\parencite{brockman_openai_2016}.
  528. In this benchmark, the objective is to apply forces to a cart moving on a frictionless track to keep a pole, which is attached to the cart via a joint, in an upright position.
  529. We consider the regression problem of predicting the change of the pole's angle given the current state of the cart and the action applied.
  530. The current state of the cart consists of the cart's position and velocity and the pole's angular position and velocity.
To simulate a dynamical system with changing system characteristics, we sample trajectories from two different cart-pole systems and merge the resulting data into one training set.
The task is not only to learn a model which explains this data well, but also to solve the association problem introduced by the different system configurations.
  533. This task is important in reinforcement learning settings where we study systems with multiple operational regimes.
  534. We sample trajectories from the system by initializing the pole in an almost upright position and then applying 10 uniform random actions.
  535. We add Gaussian noise $\epsilon \sim \Gaussian*{0, 0.01^2}$ to the observed angle changes.
  536. To increase the non-linearity of the dynamics, we apply the action for five consecutive time steps and allow the pole to swing freely instead of ending the trajectory after reaching a specific angle.
  537. The data set consists of 500 points sampled from the \emph{default} cart-pole system and another 500 points sampled from a \emph{short-pole} cart-pole system in which we halve the mass of the pole to 0.05 and shorten the pole to 0.1, a tenth of its default length.
  538. This short-pole system is more unstable and the pole reaches higher speeds.
  539. Predictions in this system therefore have to take the multimodality into account, as mean predictions between the more stable and the more unstable system can never be observed.
  540. We consider three test sets, one sampled from the default system, one sampled from the short-pole system, and a mixture of the two.
The first two are generated by sampling trajectories with an aggregated size of 5000 points from each system; the mixed set is the concatenation of the two.
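A schematic of this trajectory sampling is sketched below.
It assumes the classic OpenAI Gym interface~\parencite{brockman_openai_2016} and accesses the cart-pole implementation's pole attributes directly; these attribute names and the bookkeeping details are assumptions for illustration rather than a description of our exact setup.
\begin{verbatim}
import numpy as np
import gym  # assumes the classic gym API where step() returns 4 values

def sample_transitions(short_pole=False, n_traj=50, actions_per_traj=10,
                       repeat=5, noise_std=0.01, seed=0):
    # Sample (state, action) -> noisy angle-change pairs from a cart-pole.
    rng = np.random.default_rng(seed)
    env = gym.make("CartPole-v1").unwrapped
    if short_pole:                 # halved pole mass, shortened pole
        env.masspole, env.length = 0.05, 0.1
    X, Y = [], []
    for _ in range(n_traj):
        obs = env.reset()
        for _ in range(actions_per_traj):
            action = env.action_space.sample()   # uniform random action
            X.append(np.append(obs, action))
            for _ in range(repeat):              # hold the action, ignore done
                obs = env.step(action)[0]
            Y.append(obs[2] - X[-1][2] + rng.normal(0.0, noise_std))
    return np.array(X), np.array(Y)

X_def, Y_def = sample_transitions(short_pole=False)
X_short, Y_short = sample_transitions(short_pole=True)
X_mixed = np.concatenate([X_def, X_short])
Y_mixed = np.concatenate([Y_def, Y_short])
\end{verbatim}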
  542. For this data set, we use squared exponential kernels for both the $f^{\pix{k}}$ and $\alpha^{\pix{k}}$ and 100 inducing points in every GP.
  543. We evaluate the performance of deep GPs with up to three layers and squared exponential kernels as models for the different functions.
  544. As described in~\parencite{salimbeni_doubly_2017,kaiser_bayesian_2018}, we use identity mean functions for all but the last layers and initialize the variational distributions with low covariances.
We compare our models with OMGP and three-layer ReLU-activated Bayesian neural networks with added latent variables (BNN+LV).
  546. The latent variables can be used to effectively model multimodalities and stochasticity in dynamical systems for model-based reinforcement learning~\parencite{depeweg_decomposition_2018}.
  547. We also compare DAGP to three kinds of sparse GPs (GPR)~\parencite{hensman_scalable_2015}.
  548. They are trained on the mixed data set, the default system and the short-pole system respectively and serve as a baseline comparison as these models cannot handle multi-modal data.
  549. \Cref{tab:cartpole} shows results for ten runs of these models.
  550. The GPR model predicts a unimodal posterior for the mixed data set which covers both systems.
  551. Its mean prediction is approximately the mean of the two regimes and is physically implausible.
  552. The DAGP and BNN+LV models yield informative multi-modal predictions with comparable performance.
  553. In our setup, OMGP could not successfully solve the data association problem and thus does not produce a useful joint posterior.
  554. The OMGP's inference scheme is tailored to ordered one-dimensional problems.
  555. It does not trivially translate to the 4D cart-pole problem.
  556. As BNN+LV does not explicitly solve the data association problem, the model does not yield sub-models for the two different systems.
  557. Similar results would be obtained with the MDN and RGPR models, which also cannot be separated into sub-models.
  558. OMGP and DAGP yield such sub-models which can independently be used for predictions in the default or short-pole systems.
  559. Samples drawn from these models can be used to generate physically plausible trajectories in the respective system.
  560. OMGP fails to model the short-pole system but does yield a viable model for the default system which evolves more slowly due to higher torque and is therefore easier to learn.
  561. In contrast, the two sub-models inferred by DAGP perform well on their respective systems, showing that DAGP reliably solves the data association problem and successfully avoids model pollution by separating the two systems well.
  562. Given this separation, shallow and deep models for the two modes show comparable performance.
  563. The more expressive deep GPs model the default system slightly better while sacrificing performance on the more difficult short-pole system.
  564. \section{Conclusion}
  565. \label{sec:conclusion}
  566. We have presented a fully Bayesian model for the data association problem.
  567. Our model factorises the observed data into a set of independent processes and provides a model over both the processes and their association to the observed data.
  568. The data association problem is inherently ill-constrained and requires significant assumptions to recover a solution.
In this paper, we make use of interpretable GP priors, allowing global a priori information to be included in the model.
  570. Importantly, our model is able to exploit information both about the underlying functions and the association structure.
  571. We have derived a principled approximation to the marginal likelihood which allows us to perform inference for flexible hierarchical processes.
  572. In future work, we would like to incorporate the proposed model in a reinforcement learning scenario where we study a dynamical system with different operational regimes.
  573. \printbibliography
  574. \end{document}