\input{preamble/packages.tex}
\input{preamble/abbreviations.tex}
\input{figures/preamble/tikz_colors.tex}
% We use precompiled images and do not add tikz for speed of compilation.
\newcommand{\includestandalonewithpath}[2][]{%
\begingroup%
\StrCount{#2}{/}[\matches]%
\StrBefore[\matches]{#2}{/}[\figurepath]%
\includestandalone[#1]{#2}%
\endgroup%
}
% Show overfull boxes
% \overfullrule=5pt
\addbibresource{zotero_export.bib}
\addbibresource{additional.bib}
% We set this for hyperref
\title{Data Association with Gaussian Processes}
% \author{\href{mailto:markus.kaiser@siemens.com}{Markus Kaiser}}
\author{Anonymous}
\begin{document}
\twocolumn[
\icmltitle{Data Association with Gaussian Processes}
% NOTE(mrksr): We leave anonymous information here for now.
\begin{icmlauthorlist}
\icmlauthor{Aeiau Zzzz}{to}
\end{icmlauthorlist}
\icmlaffiliation{to}{Department of Computation, University of Torontoland, Torontoland, Canada}
\icmlcorrespondingauthor{Cieua Vvvvv}{c.vvvvv@googol.com}
\vskip 0.3in
]
\begin{abstract}
The data association problem is concerned with separating data coming from different generating processes, for example when data come from different data sources, contain significant noise, or exhibit multimodality.
We present a fully Bayesian approach to this problem.
Our model is capable of simultaneously solving the data association problem and the induced supervised learning problems.
Underpinning our approach is the use of Gaussian process priors to encode the structure of both the data and the data associations.
We present an efficient learning scheme based on doubly stochastic variational inference and discuss how it can be applied to deep Gaussian process priors.
\end{abstract}
\section{Introduction}
\label{sec:introduction}
Real-world data often include multiple operational regimes of the considered system, for example a wind turbine or gas turbine~\parencite{hein_benchmark_2017}.
As an example, consider a model describing the lift resulting from airflow around the wing profile of an airplane as a function of attack angle.
At a low angle the lift increases linearly with attack angle until the wing stalls and the characteristic of the airflow fundamentally changes.
Building a truthful model of such data requires learning two separate models and correctly associating the observed data to each of the dynamical regimes.
A similar example arises when the sensors measuring the lift are faulty, such that we either obtain an accurate reading or a noisy one.
Estimating a model in this scenario is often referred to as a \emph{data association problem}~\parencite{Bar-Shalom:1987, Cox93areview}, where we consider the data to have been generated by a mixture of processes and we are interested in factorizing the data into these components.
\Cref{fig:choicenet_data} shows an example of faulty sensor data, where sensor readings are disturbed by uncorrelated and asymmetric noise.
Applying standard machine learning approaches to such data can lead to model pollution, where the expressive power of the model is used to explain noise instead of the underlying signal.
Solving the data association problem by factorizing the data into signal and noise gives rise to a principled approach to avoiding this behavior.
\begin{figure}[t]
\centering
\includestandalone{figures/choicenet_data_intro}
\caption{
\label{fig:choicenet_data}
A data association problem consisting of two generating processes, one of which is a signal we wish to recover and the other an uncorrelated noise process.
}
\end{figure}
Early approaches to explaining data using multiple generative processes are based on separating the input space and training local expert models explaining easier subtasks~\parencite{jacobs_adaptive_1991,tresp_mixtures_2001, rasmussen_infinite_2002}.
The assignment of data points to local experts is handled by a gating network, which learns a function from the inputs to assignment probabilities.
However, it is still a central assumption of these models that at every position in the input space exactly one expert should explain the data.
Another approach is presented in~\parencite{bishop_mixture_1994}, where multimodal regression tasks are interpreted as a density estimation problem.
A large number of candidate distributions are reweighted to match the observed data without modeling the underlying generative process.
In contrast, we are interested in a generative process, where data at the same location in the input space could have been generated by a number of global independent processes.
Inherently, the data association problem is ill-posed and requires assumptions on both the underlying functions and the association of the observations.
In~\parencite{lazaro-gredilla_overlapping_2012} the authors place Gaussian process priors on the different generative processes which are assumed to be relevant globally.
The associations are modelled via a latent association matrix and inference is carried out using an expectation maximization algorithm.
This approach takes both the inputs and the outputs of the training data into account to solve the association problem.
A drawback is that the model cannot give a posterior estimate about the relevance of the different generating processes at different locations in the input space.
This means that the model can be used for data exploration but additional information is needed in order to perform predictive tasks.
Another approach in~\parencite{bodin_latent_2017} expands this model by allowing interdependencies between the different generative processes and formulating the association problem as an inference problem on a latent space and a corresponding covariance function.
However, in this approach the number of components is a free parameter and is prone to overfitting, as the model has no means of turning off components.
In this paper we formulate a Bayesian model for the data association problem.
Underpinning our approach is the use of Gaussian process priors which encode structure both on the functions and the associations themselves, allowing us to incorporate the available prior knowledge about the proper factorization into the learning problem.
The use of Gaussian process priors allows us to achieve principled regularization without reducing the solution space, leading to a well-regularized learning problem.
Importantly, we simultaneously solve the association problem for the training data, taking both inputs and outputs into account, while also obtaining a posterior belief about the relevance of the different generating processes in the input space.
Our model can describe non-stationary processes in the sense that a different number of processes can be activated in different locations in the input space.
We describe this non-stationary structure using additional Gaussian process priors which allows us to make full use of problem specific knowledge.
This leads to a flexible yet interpretable model with a principled treatment of uncertainty.
The paper has the following contributions:
In \cref{sec:model}, we propose the data association with Gaussian Processes model (DAGP).
In \cref{sec:variational_approximation}, we present an efficient learning scheme via a variational approximation which allows us to simultaneously train all parts of our model via stochastic optimization and show how the same learning scheme can be applied to deep Gaussian process priors.
We demonstrate our model on a noise separation problem, an artificial multimodal data set, and a multi-regime regression problem based on the cart-pole benchmark in \cref{sec:experiments}.
\section{Data Association with Gaussian Processes}
\label{sec:model}
\begin{figure}[t]
\centering
\includestandalone{figures/dynamic_graphical_model}
\caption{
\label{fig:dynamic_graphical_model}
The graphical model of DAGP.
The violet observations $(\mat{x_n}, \mat{y_n})$ are generated by the green latent process.
Exactly one of the $K$ latent functions $f^{\pix{k}}$ and its likelihood $\mat{y_n^{\pix{k}}}$ is evaluated to generate $\mat{y_n}$.
We can place shallow or deep GP priors on these latent function values $\mat{f_n^{\pix{k}}}$.
The assignment $\mat{a_n}$ to a latent function is driven by input-dependent weights $\mat{\alpha_n^{\pix{k}}}$ which encode the relevance of the different functions at $\mat{x_n}$.
The different parts of the model are determined by the yellow hyperparameters and blue variational parameters.
}
\end{figure}
The data association with Gaussian Processes (DAGP) model assumes that there exist $K$ independent functions $\Set*{f^{\pix{k}}}_{k=1}^K$, which generate pairs of observations $\D = \Set*{(\mat{x_n}, \mat{y_n})}_{n=1}^N$.
Each data point is generated by evaluating one of the $K$ latent functions and adding Gaussian noise from a corresponding likelihood.
The assignment of the $\nth{n}$ data point to one of the functions is specified by the indicator vector $\mat{a_n} \in \Set*{0, 1}^K$, which has exactly one non-zero entry.
Our goal is to formulate simultaneous Bayesian inference on the functions $f^{\pix{k}}$ and the assignments $\mat{a_n}$.
For notational conciseness we collect all $N$ inputs as $\mat{X} = \left(\mat{x_1}, \ldots, \mat{x_N}\right)$ and all outputs as $\mat{Y} = \left(\mat{y_1}, \ldots, \mat{y_N}\right)$.
We further denote the $\nth{k}$ latent function value associated with the $\nth{n}$ data point as $\rv{f_n^{\pix{k}}} = \Fun{f^{\pix{k}}}{\mat{x_n}}$ and collect them as $\mat{F^{\pix{k}}} = \left( \rv{f_1^{\pix{k}}}, \ldots, \rv{f_N^{\pix{k}}} \right)$ and $\mat{F} = \left( \mat{F^{\pix{1}}}, \ldots, \mat{F^{\pix{K}}} \right)$.
We refer to the $\nth{k}$ entry in $\mat{a_n}$ as $a_n^{\pix{k}}$ and denote $\mat{A} = \left(\mat{a_1}, \ldots, \mat{a_N}\right)$.
Given this notation, the marginal likelihood of the DAGP can be separated into the likelihood, the latent function processes, and the assignment process and is given by,
\begin{align}
\begin{split}
% NOTE(mrksr): I am soo sorry :(
\label{eq:true_marginal_likelihood}
&\!\!\!\Prob*{\mat{Y} \given \mat{X}} \! =
\!\!\int\!
\Prob*{\mat{Y} \given \mat{F}, \mat{A}}
\Prob*{\mat{F} \given \mat{X}}
\Prob*{\mat{A} \given \mat{X}}
\diff \mat{A} \diff \mat{F} \\
&\!\!\!\Prob*{\mat{Y} \given \mat{F}, \mat{A}} \! =
\!\prod_{n=1}^N\prod_{k=1}^K
\Gaussian*{\mat{y_n} \given \mat{f_n^{\pix{k}}}, \left(\sigma^{\pix{k}}\right)^2}_{^{\displaystyle,}}^{\Fun{\Ind}{a_n^{\pix{k}} = 1}}
\end{split}
\end{align}
where $\sigma^{\pix{k}}$ is the noise level of the $\nth{k}$ Gaussian likelihood and $\Ind$ is the indicator function.
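Since every $\mat{a_n}$ has exactly one non-zero entry, the product over $k$ collapses to a single factor: writing $k_n$ for the index with $a_n^{\pix{k_n}} = 1$, the likelihood of a single observation reduces to
\begin{align}
\begin{split}
\log \Prob*{\mat{y_n} \given \mat{f_n}, \mat{a_n}}
&= \sum_{k=1}^K \Fun{\Ind}{a_n^{\pix{k}} = 1} \log \Gaussian*{\mat{y_n} \given \mat{f_n^{\pix{k}}}, \left(\sigma^{\pix{k}}\right)^2} \\
&= \log \Gaussian*{\mat{y_n} \given \mat{f_n^{\pix{k_n}}}, \left(\sigma^{\pix{k_n}}\right)^2},
\end{split}
\end{align}
that is, every observation is explained by exactly one latent function and its corresponding noise level.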
Since we assume the $K$ processes to be independent given the data and assignments, we place independent GP priors on the latent functions
$\Prob*{\mat{F} \given \mat{X}} = \prod_{k=1}^K \Gaussian*{\mat{F^{\pix{k}}} \given \Fun*{\mu^{\pix{k}}}{\mat{X}}, \Fun*{\K^{\pix{k}}}{\mat{X}, \mat{X}}}$.
Our prior on the assignment process is composite.
First, we assume that the $\mat{a_n}$ are drawn independently from multinomial distributions with logit parameters $\mat{\alpha_n} = \left( \alpha_n^{\pix{1}}, \ldots, \alpha_n^{\pix{K}} \right)$.
One approach to specify $\mat{\alpha_n}$ is to assume them to be known a priori and to be equal for all data points~\parencite{lazaro-gredilla_overlapping_2012}.
Instead, we want to infer them from the data.
Specifically, we assume that there is a relationship between the location in the input space $\mathbf{x}$ and the associations.
By placing independent GP priors on $\mat{\alpha^{\pix{k}}}$, we can encode our prior knowledge of the associations by the choice of covariance function
$\Prob*{\mat{\alpha} \given \mat{X}} = \prod_{k=1}^K \Gaussian*{\rv{\alpha^{\pix{k}}} \given \mat{0}, \Fun{\K_\alpha^{\pix{k}}}{\mat{X}, \mat{X}}}$.
The prior on the assignments $\mat{A}$ is given by marginalizing the $\mat{\alpha^{\pix{k}}}$, which, when normalized, parametrize a batch of multinomial distributions,
\begin{align}
\begin{split}
\label{eq:multinomial_likelihood}
\Prob*{\mat{A} \given \mat{X}} &=
\int
\Multinomial*{\mat{A} \given \Fun{\softmax}{\mat{\alpha}}} \Prob*{\mat{\alpha} \given \mat{X}}
\diff \rv{\alpha}.
\end{split}
\end{align}
Modelling the relationship between the input and the associations allows us to efficiently model data, which, for example, is unimodal in some parts of the input space and bimodal in others.
A simple smoothness prior encodes our belief about how quickly the components switch across the input domain.
Since the GPs of the $\mat{\alpha^{\pix{k}}}$ use a zero mean function, our prior assumption is a uniform distribution of the different generative processes everywhere in the input space.
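Concretely, wherever the belief about $\mat{\alpha}$ remains at this prior mean, the implied multinomial weights are $\Fun{\softmax}{\mat{0}} = \left( \sfrac{1}{K}, \ldots, \sfrac{1}{K} \right)$, that is, all $K$ processes are a priori equally likely to explain an observation at that location.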
If inference on the $\mat{a_n}$ reveals that, say, all data points at similar positions in the input space can be explained by the same $\nth{k}$ process, the belief about $\mat{\alpha}$ can be adjusted to make a non-uniform distribution favorable at this position, thereby increasing the likelihood via $\Prob*{\mat{A} \given \mat{X}}$.
This mechanism introduces an incentive for the model to use as few functions as possible to explain the data and importantly allows us to predict a relative importance of these functions when calculating the posterior for new observations $\mat{x_\ast}$.
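For illustration only, this generative process can be sketched in a few lines of NumPy; the squared exponential kernel, the noise levels, and the use of the same covariance for the $f^{\pix{k}}$ and the $\mat{\alpha^{\pix{k}}}$ are arbitrary choices for this sketch and not part of the model specification.
\begin{verbatim}
import numpy as np

def rbf(X, Z, variance=1.0, lengthscale=1.0):
    # Squared exponential kernel (illustrative choice).
    d = X[:, None] - Z[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def sample_dagp_prior(x, K=2, noise=(0.1, 0.1), seed=0):
    rng = np.random.default_rng(seed)
    N = len(x)
    Kxx = rbf(x, x) + 1e-8 * np.eye(N)
    # Latent functions f^(k) and assignment logits alpha^(k).
    F = rng.multivariate_normal(np.zeros(N), Kxx, size=K)
    alpha = rng.multivariate_normal(np.zeros(N), Kxx, size=K)
    # Assignment probabilities via softmax over the K processes.
    p = np.exp(alpha) / np.exp(alpha).sum(axis=0)
    a = np.array([rng.choice(K, p=p[:, n]) for n in range(N)])
    # Every observation uses exactly one function and noise level.
    y = F[a, np.arange(N)] + rng.normal(0, np.asarray(noise)[a])
    return y, a

y, a = sample_dagp_prior(np.linspace(-3, 3, 100))
\end{verbatim}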
\Cref{fig:dynamic_graphical_model} shows the resulting graphical model, which divides the generative process for every data point into the application of the latent functions on the left side and the assignment process on the right side.
The interdependencies between the data points are introduced through the Gaussian process priors on $\rv{f_n^{\pix{k}}}$ and $\rv{\alpha_n^{\pix{k}}}$ and depend on the hyperparameters $\mat{\theta} = \Set*{\mat{\theta^{\pix{k}}}, \mat{\theta_\alpha^{\pix{k}}}, \sigma^{\pix{k}}}_{k=1}^K$.
The priors for the $f^{\pix{k}}$ can be chosen independently to encode different prior assumptions about the underlying processes.
In \cref{subsec:choicenet} we use different kernels to separate a non-linear signal from a noise process.
Going further, we can also use deep Gaussian processes as priors for the $f^{\pix{k}}$~\parencite{damianou_deep_2013, salimbeni_doubly_2017}.
Since many real-world systems are inherently hierarchical, prior knowledge can often be formulated more easily using composite functions~\parencite{kaiser_bayesian_2018}.
\section{Variational Approximation}
\label{sec:variational_approximation}
Exact inference is intractable in this model.
Instead, we formulate a variational approximation following ideas from~\parencite{hensman_gaussian_2013, salimbeni_doubly_2017}.
Because of the rich structure in our model, finding a variational lower bound which is both faithful and can be evaluated analytically is hard.
To proceed, we formulate an approximation which factorizes along both the $K$ processes and $N$ data points.
This bound can be sampled efficiently and allows us to optimize both the models for the different processes $\Set*{f^{\pix{k}}}_{k=1}^K$ and our belief about the data assignments $\Set*{\mat{a_n}}_{n=1}^N$ simultaneously using stochastic optimization.
\subsection{Variational Lower Bound}
\label{subsec:lower_bound}
As first introduced by~\textcite{titsias_variational_2009}, we augment all Gaussian processes in our model using sets of $M$ inducing points $\mat{Z^{\pix{k}}} = \left(\mat{z_1^{\pix{k}}}, \ldots, \mat{z_M^{\pix{k}}}\right)$ and their corresponding function values $\mat{u^{\pix{k}}} = \Fun*{f^{\pix{k}}}{\mat{Z^{\pix{k}}}}$, the inducing variables.
We collect them as $\mat{Z} = \Set*{\mat{Z^{\pix{k}}}, \mat{Z_\alpha^{\pix{k}}}}_{k=1}^K$ and $\mat{U} = \Set*{\mat{u^{\pix{k}}}, \mat{u_\alpha^{\pix{k}}}}_{k=1}^K$.
Taking the function $f^{\pix{k}}$ and its corresponding GP as an example, the inducing variables $\mat{u^{\pix{k}}}$ are jointly Gaussian with the latent function values $\mat{F^{\pix{k}}}$ of the observed data by the definition of GPs.
We follow~\parencite{hensman_gaussian_2013} and choose the variational approximation $\Variat*{\mat{F^{\pix{k}}}, \mat{u^{\pix{k}}}} = \Prob*{\mat{F^{\pix{k}}} \given \mat{u^{\pix{k}}}, \mat{X}, \mat{Z^{\pix{k}}}}\Variat*{\mat{u^{\pix{k}}}}$ with $\Variat*{\mat{u^{\pix{k}}}} = \Gaussian*{\mat{u^{\pix{k}}} \given \mat{m^{\pix{k}}}, \mat{S^{\pix{k}}}}$.
This formulation introduces the set $\Set*{\mat{Z^{\pix{k}}}, \mat{m^{\pix{k}}}, \mat{S^{\pix{k}}}}$ of variational parameters indicated in~\cref{fig:dynamic_graphical_model}.
To simplify notation we drop the dependency on $\mat{Z}$ in the following.
A central assumption of this approximation is that given enough well-placed inducing variables $\mat{u^{\pix{k}}}$, they are a sufficient statistic for the latent function values $\mat{F^{\pix{k}}}$.
This implies conditional independence of the $\mat{f_n^{\pix{k}}}$ given $\mat{u^{\pix{k}}}$ and $\mat{X}$.
The variational posterior of a single GP can then be written as,
\begin{align}
\begin{split}
\Variat*{\mat{F^{\pix{k}}} \given \mat{X}}
&=
\int \Variat*{\mat{u^{\pix{k}}}}
\Prob*{\mat{F^{\pix{k}}} \given \mat{u^{\pix{k}}}, \mat{X}}
\diff \mat{u^{\pix{k}}}
\\
&=
\int \Variat*{\mat{u^{\pix{k}}}}
\prod_{n=1}^N \Prob*{\mat{f_n^{\pix{k}}} \given \mat{u^{\pix{k}}}, \mat{x_n}}
\diff \mat{u^{\pix{k}}},
\end{split}
\end{align}
which can be evaluated analytically, since it is a convolution of Gaussians.
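Assuming a zero mean function for notational simplicity, this convolution yields the familiar sparse GP marginal
\begin{align}
\begin{split}
\Variat*{\mat{F^{\pix{k}}} \given \mat{X}}
&= \Gaussian*{\mat{F^{\pix{k}}} \given \mat{K_{nm}}\mat{K_{mm}^{-1}}\mat{m^{\pix{k}}}, \mat{\Sigma^{\pix{k}}}} \\
\mat{\Sigma^{\pix{k}}} &= \mat{K_{nn}} + \mat{K_{nm}}\mat{K_{mm}^{-1}}\left(\mat{S^{\pix{k}}} - \mat{K_{mm}}\right)\mat{K_{mm}^{-1}}\mat{K_{mn}},
\end{split}
\end{align}
with $\mat{K_{nn}} = \Fun*{\K^{\pix{k}}}{\mat{X}, \mat{X}}$, $\mat{K_{nm}} = \Fun*{\K^{\pix{k}}}{\mat{X}, \mat{Z^{\pix{k}}}}$, $\mat{K_{mm}} = \Fun*{\K^{\pix{k}}}{\mat{Z^{\pix{k}}}, \mat{Z^{\pix{k}}}}$ and $\mat{K_{mn}} = \mat{K_{nm}^\top}$~\parencite{hensman_gaussian_2013}.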
This formulation simplifies inference within single Gaussian processes.
Next, we discuss how to handle the correlations between the different functions and the assignment processes.
Given a set of assignments $\mat{A}$, this factorization along the data points is preserved in our model due to the assumed independence of the different functions in~\cref{eq:true_marginal_likelihood}.
The independence is lost if the assignments are unknown.
In this case, both the (a priori independent) assignment processes and the functions influence each other through data with unclear assignments.
Following the ideas of Doubly Stochastic Variational Inference (DSVI) presented by~\textcite{salimbeni_doubly_2017} in the context of deep Gaussian processes, we maintain these correlations between different parts of the model while assuming factorization of the variational distribution.
That is, our variational posterior takes the factorized form,
\begin{align}
\begin{split}
\label{eq:variational_distribution}
\Variat*{\mat{F}, \mat{\alpha}, \mat{U}}
= &\Variat*{\mat{\alpha}, \Set*{\mat{F^{\pix{k}}}, \mat{u^{\pix{k}}}, \mat{u_\alpha^{\pix{k}}}}_{k=1}^K} \\
= &\prod_{k=1}^K\prod_{n=1}^N \Prob*{\mat{\alpha_n^{\pix{k}}} \given \mat{u_\alpha^{\pix{k}}}, \mat{x_n}}\Variat*{\mat{u_\alpha^{\pix{k}}}} \\
&\prod_{k=1}^K \prod_{n=1}^N \Prob*{\mat{f_n^{\pix{k}}} \given \mat{u^{\pix{k}}}, \mat{x_n}}\Variat*{\mat{u^{\pix{k}}}}.
\end{split}
\end{align}
Our goal is to recover a posterior for both the generating functions and the assignment of data.
To achieve this, instead of marginalizing $\mat{A}$, we consider the variational joint of $\mat{Y}$ and $\mat{A}$,
\begin{align}
\begin{split}
\Variat*{\mat{Y}, \mat{A}} =
&\!\int\! % We cheat a bit to make the line fit.
\Prob*{\mat{Y} \given \mat{F}, \mat{A}}
\Prob*{\mat{A} \given \mat{\alpha}}
\Variat*{\mat{F}, \mat{\alpha}}
\diff \mat{F} \diff \mat{\alpha},
\end{split}
\end{align}
which retains both the Gaussian likelihood of $\mat{Y}$ and the multinomial likelihood of $\mat{A}$ in \cref{eq:multinomial_likelihood}.
A lower bound $\Ell_{\text{DAGP}}$ for the log-joint $\log\Prob*{\mat{Y}, \mat{A} \given \mat{X}}$ of DAGP is given by,
\begin{align}
\begin{split}
\label{eq:variational_bound}
\Ell_{\text{DAGP}} &= \Moment*{\E_{\Variat*{\mat{F}, \mat{\alpha}, \mat{U}}}}{\log\frac{\Prob*{\mat{Y}, \mat{A}, \mat{F}, \mat{\alpha}, \mat{U} \given \mat{X}}}{\Variat*{\mat{F}, \mat{\alpha}, \mat{U}}}} \\
&= \sum_{n=1}^N \Moment*{\E_{\Variat*{\mat{f_n}}}}{\log \Prob*{\mat{y_n} \given \mat{f_n}, \mat{a_n}}} \\
&\quad + \sum_{n=1}^N \Moment*{\E_{\Variat*{\mat{\alpha_n}}}}{\log \Prob*{\mat{a_n} \given \mat{\alpha_n}}} \\
&\quad - \sum_{k=1}^K \KL{\Variat*{\mat{u^{\pix{k}}}}}{\Prob*{\mat{u^{\pix{k}}} \given \mat{Z^{\pix{k}}}}} \\
&\quad - \sum_{k=1}^K \KL{\Variat*{\mat{u_\alpha^{\pix{k}}}}}{\Prob*{\mat{u_\alpha^{\pix{k}}} \given \mat{Z_\alpha^{\pix{k}}}}}.
\end{split}
\end{align}
Due to the structure of~\cref{eq:variational_distribution}, the bound factorizes along the data enabling stochastic optimization.
This bound has complexity $\Fun*{\Oh}{NM^2K}$ to evaluate.
\subsection{Optimization of the Lower Bound}
\label{subsec:computation}
An important property of the variational bound for DSVI~\parencite{salimbeni_doubly_2017} is that taking samples for single data points is straightforward and can be implemented efficiently.
Specifically, for some $k$ and $n$, samples $\mat{\hat{f}_n^{\pix{k}}}$ from $\Variat*{\mat{f_n^{\pix{k}}}}$ are independent of all other parts of the model and can be drawn using samples from univariate unit Gaussians via reparametrization~\parencite{kingma_variational_2015,rezende_stochastic_2014}.
While it would not be necessary to sample from the different functions, since $\Variat*{\mat{F^{\pix{k}}}}$ can be computed analytically~\parencite{hensman_gaussian_2013}, we apply this idea to the optimization of both the assignment processes $\mat{\alpha}$ and the assignments $\mat{A}$.
For $\mat{\alpha}$, the analytical propagation of uncertainties through the $\softmax$ renormalization and multinomial likelihoods is intractable but can easily be evaluated using sampling.
We optimize $\Ell_{\text{DAGP}}$ to simultaneously recover maximum likelihood estimates of the hyperparameters $\mat{\theta}$, the variational parameters $\Set*{\mat{Z}, \mat{m}, \mat{S}}$, and assignments $\mat{A}$.
For every $n$, we represent the belief about $\mat{a_n}$ as a $K$-dimensional discrete distribution $\Variat*{\mat{a_n}}$.
This distribution models the result of drawing a sample from $\Multinomial*{\mat{a_n} \given \Fun{\softmax}{\mat{\alpha_n}}}$ during the generation of the data point $(\mat{x_n}, \mat{y_n})$.
Since we want to optimize $\Ell_{\text{DAGP}}$ using (stochastic) gradient descent, we need to employ a continuous relaxation to gain informative gradients of the bound with respect to the binary (and discrete) vectors $\mat{a_n}$.
One straightforward way to relax the problem is to use the current belief about $\Variat*{\mat{a_n}}$ as parameters for a convex combination of the $\mat{f_n^{\pix{k}}}$, that is, to approximate $\mat{f_n} \approx \sum_{k=1}^K \Variat*{\mat{a_n^{\pix{k}}}}\mat{\hat{f}_n^{\pix{k}}}$.
Using this relaxation causes multiple problems in practice.
Most importantly, explaining data points as mixtures of the different generating processes can substantially simplify the learning problem while violating the modelling assumption that every data point was generated using exactly one function.
Because of this, special care must be taken during optimization to enforce the sparsity of $\Variat*{\mat{a_n}}$.
To avoid this problem, we propose using a different relaxation based on additional stochasticity.
Instead of directly using $\Variat*{\mat{a_n}}$ to combine the $\mat{f_n^{\pix{k}}}$, we first draw a sample $\mat{\hat{a}_n}$ from a Concrete random variable as suggested by~\textcite{maddison_concrete_2016}, parameterized by $\Variat*{\mat{a_n}}$.
Based on a temperature parameter $\lambda$, a Concrete random variable enforces sparsity but is also continuous and yields informative gradients using automatic differentiation.
Samples from a Concrete random variable lie on the probability simplex, and for $\lambda \to 0$ their distribution approaches a discrete distribution over one-hot vectors.
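A minimal sketch of this sampling step (in NumPy, using the logarithm of the current belief $\Variat*{\mat{a_n}}$ as the Concrete location parameters) is:
\begin{verbatim}
import numpy as np

def sample_concrete(logits, lam, rng):
    # Concrete / Gumbel-softmax sample (Maddison et al., 2016):
    # softmax of (logits + Gumbel noise) / temperature.
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))
    z = (logits + g) / lam
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(0)
q_a = np.array([0.7, 0.2, 0.1])  # current belief q(a_n)
a_hat = sample_concrete(np.log(q_a), lam=0.1, rng=rng)
# For small temperatures, a_hat is close to a one-hot vector.
\end{verbatim}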
Our approximate evaluation of the bound in \cref{eq:variational_bound} during optimization has multiple sources of stochasticity, all of which are unbiased.
First, we approximate the expectations using Monte Carlo samples $\mat{\hat{f}_n^{\pix{k}}}$, $\mat{\hat{\alpha}_n^{\pix{k}}}$, and $\mat{\hat{a}_n}$.
And second, the factorization of the bound along the data allows us to use mini-batches for optimization~\parencite{salimbeni_doubly_2017, hensman_gaussian_2013}.
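Schematically, one evaluation of the resulting estimator can be written as below; the callables \texttt{log\_lik} and \texttt{log\_assign} are hypothetical placeholders which are assumed to draw fresh samples of $\mat{\hat{f}_n}$, $\mat{\hat{\alpha}_n}$ and $\mat{\hat{a}_n}$ internally and to return the corresponding log-likelihood terms.
\begin{verbatim}
def elbo_estimate(batch, N, S, log_lik, log_assign, kl_total):
    # Unbiased stochastic estimate of L_DAGP: S Monte Carlo
    # samples for the expectations, scaling by N/|batch| for the
    # data-dependent terms, and the analytic KL terms subtracted.
    data_terms = 0.0
    for _ in range(S):
        for n in batch:
            data_terms += log_lik(n) + log_assign(n)
    data_terms /= S
    return (N / len(batch)) * data_terms - kl_total
\end{verbatim}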
\subsection{Approximate Predictions}
\label{subsec:predictions}
Predictions for a test location $\mat{x_\ast}$ are mixtures of $K$ independent Gaussians, given by,
\begin{align}
\begin{split}
\label{eq:predictive_posterior}
\Variat*{\mat{f_\ast} \given \mat{x_\ast}}
&= \int \sum_{k=1}^K \Variat*{a_\ast^{\pix{k}} \given \mat{x_\ast}} \Variat*{\mat{f_\ast^{\pix{k}}} \given \mat{x_\ast}} \diff \mat{a_\ast^{\pix{k}}}\\
&\approx \sum_{k=1}^K \hat{a}_\ast^{\pix{k}} \mat{\hat{f}_\ast^{\pix{k}}}.
\end{split}
\end{align}
The predictive posteriors of the $K$ functions $\Variat*{\mat{f_\ast^{\pix{k}}} \given \mat{x_\ast}}$ are given by $K$ independent shallow Gaussian processes and can be calculated analytically~\parencite{hensman_gaussian_2013}.
Samples from the predictive density over $\Variat*{\mat{a_\ast} \given \mat{x_\ast}}$ can be obtained by sampling from the Gaussian process posteriors $\Variat*{\mat{\alpha_\ast^{\pix{k}}} \given \mat{x_\ast}}$ and renormalizing the resulting vector $\mat{\alpha_\ast}$ using the $\softmax$-function.
The distribution $\Variat*{\mat{a_\ast} \given \mat{x_\ast}}$ reflects the model's belief about how many and which of the $K$ generative processes are relevant at the test location $\mat{x_\ast}$ and their relative probability.
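As an illustrative sketch (NumPy; the per-process predictive moments at $\mat{x_\ast}$ are assumed to be given), the weights and the first two moments of this mixture can be assembled as follows, where the averaged $\softmax$ approximates the expected assignment probabilities.
\begin{verbatim}
import numpy as np

def predictive_mixture(mu, var, alpha_samples):
    # mu, var: predictive means/variances of the K processes.
    # alpha_samples: samples of alpha_* from the K assignment
    # GPs, with shape (num_samples, K).
    z = alpha_samples - alpha_samples.max(axis=1, keepdims=True)
    e = np.exp(z)
    w = (e / e.sum(axis=1, keepdims=True)).mean(axis=0)
    mean = np.sum(w * mu)
    second_moment = np.sum(w * (var + mu ** 2))
    return w, mean, second_moment - mean ** 2

w, m, v = predictive_mixture(
    np.array([0.0, 1.0]), np.array([0.1, 0.2]),
    np.random.default_rng(0).normal(size=(100, 2)))
\end{verbatim}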
\subsection{Deep Gaussian Processes}
\label{subsec:deep_gp}
For clarity, we have described the variational bound in terms of a shallow GP.
However, as long as their variational bound can be efficiently sampled, any model can be used in place of shallow GPs for the $f^{\pix{k}}$.
Since our approximation is based on DSVI for deep Gaussian processes, an extension to deep GPs is straightforward.
Analogously to~\parencite{salimbeni_doubly_2017}, our new prior assumption about the $\nth{k}$ latent function values $\Prob*{\mat{F^{\prime\pix{k}}} \given \mat{X}}$ is given by,
\begin{align}
\begin{split}
\Prob*{\mat{F^{\prime\pix{k}}} \given \mat{X}} = \prod_{l=1}^L \Prob*{\mat{F_l^{\prime\pix{k}}} \given \mat{u_l^{\prime\pix{k}}}, \mat{F_{l-1}^{\prime\pix{k}}}, \mat{Z_l^{\prime\pix{k}}}}
\end{split}
\end{align}
for an $L$-layer deep GP and with $\mat{F_0^{\prime\pix{k}}} \coloneqq \mat{X}$.
Similar to the single-layer case, we introduce sets of inducing points $\mat{Z_l^{\prime\pix{k}}}$ and a variational distribution over their corresponding function values $\Variat*{\mat{u_l^{\prime\pix{k}}}} = \Gaussian*{\mat{u_l^{\prime\pix{k}}} \given \mat{m_l^{\prime\pix{k}}}, \mat{S_l^{\prime\pix{k}}}}$.
We collect the latent multi-layer function values as $\mat{F^\prime} = \Set{\mat{F_l^{\prime\pix{k}}}}_{k=1,l=1}^{K,L}$ and corresponding $\mat{U^\prime}$ and assume an extended variational distribution,
\begin{align}
\begin{split}
\label{eq:deep_variational_distribution}
\MoveEqLeft\Variat*{\mat{F^\prime}, \mat{\alpha}, \mat{U^\prime}} \\
= &\Variat*{\mat{\alpha}, \Set*{\mat{u_\alpha^{\pix{k}}}}_{k=1}^K, \Set*{\mat{F_l^{\prime\pix{k}}}, \mat{u_l^{\prime\pix{k}}}}_{k=1,l=1}^{K,L}} \\
= &\prod_{k=1}^K\prod_{n=1}^N \Prob*{\mat{\alpha_n^{\pix{k}}} \given \mat{u_\alpha^{\pix{k}}}, \mat{x_n}}\Variat*{\mat{u_\alpha^{\pix{k}}}} \\
\MoveEqLeft\prod_{k=1}^K \prod_{l=1}^L \prod_{n=1}^N \Prob*{\mat{f_{n,l}^{\prime\pix{k}}} \given \mat{u_l^{\prime\pix{k}}}, \mat{x_n}}\Variat*{\mat{u_l^{\prime\pix{k}}}},
\end{split}
\end{align}
where we identify $\mat{f_n^{\prime\pix{k}}} = \mat{f_{n,L}^{\prime\pix{k}}}$.
As the $\nth{n}$ marginal of the $\nth{L}$ layer depends only on the $\nth{n}$ marginals of all layers above, sampling from them remains straightforward~\parencite{salimbeni_doubly_2017}.
The marginal is given by,
\begin{align}
\begin{split}
\Variat{\mat{f_{n,L}^{\prime\pix{k}}}} =
\int
&\Variat{\mat{f_{n,L}^{\prime\pix{k}}} \given \mat{f_{n,L-1}^{\prime\pix{k}}}} \\
&\prod_{l=1}^{L-1} \Variat{\mat{f_{n,l}^{\prime\pix{k}}} \given \mat{f_{n,l-1}^{\prime\pix{k}}}}
\diff \mat{f_{n,l}^{\prime\pix{k}}}.
\end{split}
\end{align}
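Following~\textcite{salimbeni_doubly_2017}, a sample $\mat{\hat{f}_{n,l}^{\prime\pix{k}}}$ is drawn layer by layer using the reparametrization
\begin{align}
\begin{split}
\mat{\hat{f}_{n,l}^{\prime\pix{k}}} &= \Fun*{\mu_l^{\pix{k}}}{\mat{\hat{f}_{n,l-1}^{\prime\pix{k}}}} + \mat{\epsilon_l} \odot \sqrt{\Fun*{\Sigma_l^{\pix{k}}}{\mat{\hat{f}_{n,l-1}^{\prime\pix{k}}}}}, \\
\mat{\epsilon_l} &\sim \Gaussian*{\mat{0}, \mat{I}},
\end{split}
\end{align}
where $\mu_l^{\pix{k}}$ and $\Sigma_l^{\pix{k}}$ denote the mean and the marginal variances of $\Variat{\mat{f_{n,l}^{\prime\pix{k}}} \given \mat{f_{n,l-1}^{\prime\pix{k}}}}$.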
The complete bound is structurally similar to \cref{eq:variational_bound} and given by,
\begin{align}
\begin{split}
\label{eq:deep_variational_bound}
\Ell^\prime_{\text{DAGP}}
&= \sum_{n=1}^N \Moment*{\E_{\Variat*{\mat{f^\prime_n}}}}{\log \Prob*{\mat{y_n} \given \mat{f^\prime_n}, \mat{a_n}}} \\
&\quad + \sum_{n=1}^N \Moment*{\E_{\Variat*{\mat{\alpha_n}}}}{\log \Prob*{\mat{a_n} \given \mat{\alpha_n}}} \\
&\quad - \sum_{k=1}^K \sum_{l=1}^L \KL{\Variat{\mat{u_l^{\prime\pix{k}}}}}{\Prob{\mat{u_l^{\prime\pix{k}}} \given \mat{Z_l^{\prime\pix{k}}}}} \\
&\quad - \sum_{k=1}^K \KL{\Variat*{\mat{u_\alpha^{\pix{k}}}}}{\Prob*{\mat{u_\alpha^{\pix{k}}} \given \mat{Z_\alpha^{\pix{k}}}}}.
\end{split}
\end{align}
To calculate the first term, samples have to be propagated through the deep GP structures.
This extended bound thus has complexity $\Fun*{\Oh}{NM^2LK}$ to evaluate in the general case and complexity $\Fun*{\Oh}{NM^2\cdot\Fun{\max}{L, K}}$ if the assignments $\mat{a_n}$ take binary values.
\section{Experiments}
\label{sec:experiments}
%
\begin{figure*}[t]
\centering
\captionof{table}{
\label{tab:choicenet}
Results on the ChoiceNet data set.
The gray part of the table shows the RMSE results from~\parencite{choi_choicenet_2018}.
For our model trained using the same setup, we report RMSE comparable to the previous results together with MLL.
Both are calculated based on a test set of 1000 equally spaced samples of the noiseless underlying function.
}%
\newcolumntype{Y}{>{\centering\arraybackslash}X}%
\newcolumntype{Z}{>{\columncolor{sStone!33}\centering\arraybackslash}X}%
\begin{tabularx}{\linewidth}{rY|YZZZZZZ}
\toprule
Outliers & DAGP & DAGP & CN & MDN & MLP & GPR & LGPR & RGPR \\
& \scriptsize MLL & \scriptsize RMSE & \scriptsize RMSE & \scriptsize RMSE & \scriptsize RMSE & \scriptsize RMSE & \scriptsize RMSE & \scriptsize RMSE \\
\midrule
0\,\% & 2.86 & \textbf{0.008} & 0.034 & 0.028 & 0.039 & \textbf{0.008} & 0.022 & 0.017 \\
20\,\% & 2.71 & \textbf{0.008} & 0.022 & 0.087 & 0.413 & 0.280 & 0.206 & 0.013 \\
40\,\% & 2.12 & \textbf{0.005} & 0.018 & 0.565 & 0.452 & 0.447 & 0.439 & 1.322 \\
60\,\% & 0.874 & 0.031 & \textbf{0.023} & 0.645 & 0.636 & 0.602 & 0.579 & 0.738 \\
80\,\% & 0.126 & 0.128 & \textbf{0.084} & 0.778 & 0.829 & 0.779 & 0.777 & 1.523 \\
\bottomrule
\end{tabularx}
\\[.5\baselineskip]
\begin{subfigure}{.32\linewidth}
\centering
\includestandalone{figures/choicenet_data_40}
\end{subfigure}
% NOTE(mrksr): Hack to make the center plot look more centered
\hfill
\hfill
\begin{subfigure}{.32\linewidth}
\centering
\includestandalone{figures/choicenet_joint_40}
\end{subfigure}
\hfill
\begin{subfigure}{.32\linewidth}
\centering
\includestandalone{figures/choicenet_attrib_40}
\end{subfigure}
\\
\begin{subfigure}{.32\linewidth}
\centering
\includestandalone{figures/choicenet_data}
\end{subfigure}
% NOTE(mrksr): Hack to make the center plot look more centered
\hfill
\hfill
\begin{subfigure}{.32\linewidth}
\centering
\includestandalone{figures/choicenet_joint}
\end{subfigure}
\hfill
\begin{subfigure}{.32\linewidth}
\centering
\includestandalone{figures/choicenet_attrib}
\end{subfigure}
\captionof{figure}{
\label{fig:choicenet}
DAGP on the ChoiceNet data set with 40\,\% outliers (upper row) and 60\,\% outliers (lower row).
We show the raw data (left), joint posterior (center) and assignments (right).
The bimodal DAGP identifies the signal perfectly up to 40\,\% outliers.
For 60\,\% outliers, some of the noise is interpreted as signal, but the latent function is still recovered.
}
\end{figure*}
%
%
\begin{figure*}[t]
\centering
\begin{subfigure}{.495\linewidth}
\centering
\includestandalone{figures/semi_bimodal_joint}
% \caption{
% \label{fig:semi_bimodal:a}
% Joint posterior.
% }
\end{subfigure}
\hfill
\begin{subfigure}{.495\linewidth}
\centering
\includestandalone{figures/semi_bimodal_attrib}
% \caption{
% \label{fig:semi_bimodal:b}
% }
\end{subfigure}
\caption{
\label{fig:semi_bimodal}
The DAGP posterior on an artificial data set with bimodal and trimodal parts.
The left plot shows the joint predictions which are mixtures of four Gaussians weighted by the assignment probabilities shown in \cref{fig:semi_bimodal:c}.
The weights are represented via the opacity of the modes, which shows that the orange mode is completely disabled and the red mode is only relevant around the interval $[0, 5]$.
The right plot shows the posterior belief about the assignment of the training data to the respective modes.
}
\end{figure*}
%
\begin{figure}[t]
\centering
\includestandalone{figures/semi_bimodal_attrib_process}
\caption{
\label{fig:semi_bimodal:c}
Normalized samples from the assignment process $\mat{\alpha}$ of the model shown in \cref{fig:semi_bimodal}.
The assignment process is used to weight the predictive distributions of the different modes depending on the position in the input space.
The model has learned that the mode $k = 2$ is irrelevant and that the mode $k = 1$ is only relevant around the interval $[0, 5]$.
Outside this interval, the mode $k = 3$ is twice as likely as the mode $k = 4$.
}
\end{figure}
%
In this section we investigate the behavior of the DAGP model in multiple regression settings.
First, we show how prior knowledge about the different generative processes can be used to separate a signal from unrelated noise.
Second, we apply the DAGP to a multimodal data set and showcase how the different components of the model interact to identify how many modes are necessary to explain the data.
Finally, we investigate a data set which contains observations of two independent dynamical systems mixed together and show how the DAGP can recover information about both systems and infer the state variable separating the systems.
We use an implementation of DAGP in TensorFlow~\parencite{tensorflow2015-whitepaper} based on GPflow~\parencite{matthews_gpflow_2017} and the implementation of DSVI~\parencite{salimbeni_doubly_2017}.
\subsection{Noise Separation}
\label{subsec:choicenet}
We begin with an experiment based on a noise separation problem.
We apply DAGP to a one-dimensional regression problem with uniformly distributed asymmetric outliers in the training data.
We use a task proposed by~\textcite{choi_choicenet_2018} where we sample $x \in [-3, 3]$ uniformly and apply the function $\Fun{f}{x} = (1 - \delta)(\Fun{\cos}{\sfrac{\pi}{2} \cdot x}\Fun{\exp}{-(\sfrac{x}{2})^2} + \gamma) + \delta \cdot \epsilon$, where $\delta \sim \Fun{\Ber}{\lambda}$, $\epsilon \sim \Fun{\Uni}{-1, 3}$ and $\gamma \sim \Gaussian{0, 0.15^2}$.
That is, a fraction $\lambda$ of the training data, the outliers, are replaced by asymmetric uniform noise.
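For reference, this corruption process can be reproduced with a few lines of NumPy; the sample size and seed below are illustrative.
\begin{verbatim}
import numpy as np

def choicenet_data(n=1000, lam=0.4, seed=0):
    # A fraction lam of the observations is replaced by
    # asymmetric uniform noise.
    rng = np.random.default_rng(seed)
    x = rng.uniform(-3, 3, n)
    delta = rng.binomial(1, lam, n)
    eps = rng.uniform(-1, 3, n)
    gamma = rng.normal(0, 0.15, n)
    signal = np.cos(0.5 * np.pi * x) * np.exp(-(x / 2) ** 2)
    y = (1 - delta) * (signal + gamma) + delta * eps
    return x, y

x, y = choicenet_data(lam=0.4)
\end{verbatim}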
We sample a total of 1000 data points and use $25$ inducing points for every GP in our model.
Every generating process in our model can use a different kernel and therefore encode different prior assumptions.
For this setting, we use two processes, one with a squared exponential kernel and one with a white noise kernel.
This encodes the problem statement that every data point is either part of the signal we wish to recover or uncorrelated noise.
To avoid pathological solutions for high outlier ratios, we add a prior to the likelihood variance of the first process, which encodes our assumption that there actually is a signal in the training data.
The model proposed in~\parencite{choi_choicenet_2018}, called ChoiceNet (CN), is a specific neural network structure and inference algorithm to deal with corrupted data.
In their work, they compare their approach to a standard multi-layer perceptron (MLP), a mixture density network (MDN), standard Gaussian process regression (GPR), leveraged Gaussian process regression (LGPR)~\parencite{choi_robust_2016}, and infinite mixtures of Gaussian processes (RGPR)~\parencite{rasmussen_infinite_2002}.
\Cref{tab:choicenet} shows results for outlier rates varied from 0\,\% to 80\,\%.
Besides the root mean squared error (RMSE), we also report the mean test log likelihood (MLL) of the process representing the signal in our model.
Up to an outlier rate of 40\,\%, our model correctly identifies the outliers and ignores them, resulting in a predictive posterior of the signal equivalent to standard GP regression without outliers.
In the special case of 0\,\% outliers, DAGP correctly identifies that the process modelling the noise is not necessary and disables it, thereby simplifying itself to standard GP regression.
For high outlier rates, strong prior knowledge about the signal would be required to still identify it perfectly.
\Cref{fig:choicenet} shows the posterior for an outlier rate of 60\,\%.
While the function has still been identified well, some of the noise is also explained using this process, thereby introducing slight errors in the predictions.
\subsection{Multimodal Data}
\label{subsec:semi_bimodal}
Our second experiment applies DAGP to a multimodal data set.
The data, together with recovered posterior attributions, can be seen in \cref{fig:semi_bimodal}.
We uniformly sample 350 data points in the interval $x \in [-2\pi, 2\pi]$ and obtain $y_1 = \Fun{\sin}{x} + \epsilon$, $y_2 = \Fun{\sin}{x} - 2 \Fun{\exp}{-\sfrac{1}{2} \cdot (x-2)^2} + \epsilon$ and $y_3 = -1 - \sfrac{3}{8\pi} \cdot x + \sfrac{3}{10} \cdot \Fun*{\sin}{2x} + \epsilon$ with additive independent noise $\epsilon \sim \Gaussian*{0, 0.005^2}$.
The resulting data set $\D = \Set{\left( x, y_1 \right), \left( x, y_2 \right), \left( x, y_3 \right)}$ is trimodal in the interval $[0, 5]$ and is otherwise bimodal with one mode containing twice as much data as the other.
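This data set can be generated as follows (NumPy sketch; the seed is arbitrary). Outside a neighbourhood of $x = 2$ the first two functions nearly coincide, which is what makes the data bimodal there and trimodal around $[0, 5]$.
\begin{verbatim}
import numpy as np

def multimodal_data(n=350, noise=0.005, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-2 * np.pi, 2 * np.pi, n)
    e = lambda: rng.normal(0, noise, n)
    y1 = np.sin(x) + e()
    y2 = np.sin(x) - 2 * np.exp(-0.5 * (x - 2) ** 2) + e()
    y3 = -1 - 3 / (8 * np.pi) * x + 0.3 * np.sin(2 * x) + e()
    # Stack the three generating processes on shared inputs.
    return np.concatenate([x, x, x]), np.concatenate([y1, y2, y3])

X, Y = multimodal_data()
\end{verbatim}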
We use squared exponential kernels as priors for both the $f^{\pix{k}}$ and $\alpha^{\pix{k}}$ and $25$ inducing points in every GP.
\Cref{fig:semi_bimodal,fig:semi_bimodal:c} show the posterior of a DAGP with $K = 4$ modes applied to the data, which correctly identified the underlying functions.
\Cref{fig:semi_bimodal} shows the posterior belief about the assignments $\mat{A}$ and illustrates that DAGP recovered that it needs only three of the four available modes to explain the data.
One of the modes is only assigned points in the interval $[0, 5]$ where the data is actually trimodal.
This separation is explicitly represented in the model via the assignment processes $\mat{\alpha}$ shown in \cref{fig:semi_bimodal:c}.
The model has disabled the mode $k = 2$ in the complete input space and has learned that the mode $k = 1$ is only relevant in the interval $[0, 5]$ where the three enabled modes each explain about a third of the data.
Outside this interval, the model has learned that one of the modes has about twice the assignment probability of the other one, thus correctly reconstructing the true generative process.
The DAGP is implicitly incentivized to explain the data using as few modes as possible through the likelihood term of the inferred $\mat{a_n}$ in \cref{eq:variational_bound}.
Away from the data, for example at $x = -10$, the inferred modes and assignment processes start reverting to their respective priors.
\subsection{Mixed Cart-Pole Systems}
\label{subsec:cartpole}
\begin{table*}[t]
\centering
\caption{
\label{tab:cartpole}
Results on the cart-pole data set.
We report mean log likelihoods with standard error for ten runs.
}%
\sisetup{
table-format=-1.3(3),
table-number-alignment=center,
separate-uncertainty,
% table-align-uncertainty,
table-figures-uncertainty=1,
detect-weight,
}
\newcolumntype{H}{>{\setbox0=\hbox\bgroup}c<{\egroup}@{}}
\footnotesize
\setlength{\tabcolsep}{1pt}
\begin{tabular}{HlSSSSSS}
\toprule
& & \multicolumn{2}{c}{Mixed} & \multicolumn{2}{c}{Default only} & \multicolumn{2}{c}{Short-pole only} \\
\cmidrule(lr){3-4} \cmidrule(lr){5-6} \cmidrule(lr){7-8}
Runs & & {Train} & {Test} & {Train} & {Test} & {Train} & {Test} \\
\midrule
10 & DAGP & \bfseries 0.575 \pm 0.013 & \bfseries 0.521 \pm 0.009 & 0.855 \pm 0.002 & 0.844 \pm 0.002 & 0.686 \pm 0.009 & 0.602 \pm 0.005 \\
% 10 & U-DAGP & 0.472 \pm 0.002 & 0.425 \pm 0.002 & {\textemdash} & {\textemdash} & {\textemdash} & {\textemdash} \\
10 & DAGP 2 & 0.548 \pm 0.012 & \bfseries 0.519 \pm 0.008 & 0.851 \pm 0.003 & 0.859 \pm 0.001 & 0.673 \pm 0.013 & 0.599 \pm 0.011 \\
10 & DAGP 3 & 0.527 \pm 0.004 & 0.491 \pm 0.003 & 0.858 \pm 0.002 & 0.852 \pm 0.002 & 0.624 \pm 0.011 & 0.545 \pm 0.012 \\
% 10 & DAGP 4 & 0.517 \pm 0.006 & 0.485 \pm 0.003 & 0.858 \pm 0.001 & 0.852 \pm 0.002 & 0.602 \pm 0.011 & 0.546 \pm 0.010 \\
% 10 & DAGP 5 & 0.535 \pm 0.004 & 0.506 \pm 0.005 & 0.851 \pm 0.003 & 0.851 \pm 0.003 & 0.662 \pm 0.009 & 0.581 \pm 0.012 \\
\addlinespace
10 & BNN+LV & 0.519 \pm 0.005 & \bfseries 0.524 \pm 0.005 & {\textemdash} & {\textemdash} & {\textemdash} & {\textemdash} \\
10 & GPR Mixed & 0.452 \pm 0.003 & 0.421 \pm 0.003 & {\textemdash} & {\textemdash} & {\textemdash} & {\textemdash} \\
10 & GPR Default & {\textemdash} & {\textemdash} & \bfseries 0.873 \pm 0.001 & \bfseries 0.867 \pm 0.001 & -7.01 \pm 0.11 & -7.54 \pm 0.14 \\
10 & GPR Short & {\textemdash} & {\textemdash} & -5.24 \pm 0.04 & -5.14 \pm 0.04 & \bfseries 0.903 \pm 0.003 & \bfseries 0.792 \pm 0.003 \\
\bottomrule
\end{tabular}
\end{table*}
Our third experiment is based on the cart-pole benchmark for reinforcement learning as described by~\textcite{barto_neuronlike_1983} and implemented in OpenAI Gym~\parencite{brockman_openai_2016}.
In this benchmark, the objective is to apply forces to a cart moving on a frictionless track to keep a pole, which is attached to the cart via a joint, in an upright position.
We consider the regression problem of predicting the change of the pole's angle given the current state of the cart and the action applied.
The current state of the cart consists of the cart's position and velocity and the pole's angular position and velocity.
To simulate a dynamical system with changing system characteristics, we sample trajectories from two different cart-pole systems and merge the resulting data into one training set.
The task is not only to learn a model which explains this data well, but also to solve the association problem introduced by the different system configurations.
This task is important in reinforcement learning settings where we study systems with multiple operational regimes.
We sample trajectories from the system by initializing the pole in an almost upright position and then applying 10 uniform random actions.
We add Gaussian noise $\epsilon \sim \Gaussian*{0, 0.01^2}$ to the observed angle changes.
To increase the non-linearity of the dynamics, we apply the action for five consecutive time steps and allow the pole to swing freely instead of ending the trajectory after reaching a specific angle.
The data set consists of 500 points sampled from the \emph{default} cart-pole system and another 500 points sampled from a \emph{short-pole} cart-pole system in which we halve the mass of the pole to 0.05 and shorten the pole to 0.1, a tenth of its default length.
This short-pole system is more unstable and the pole reaches higher speeds.
Predictions in this system therefore have to take the multimodality into account, as mean predictions between the more stable and the more unstable system can never be observed.
We consider three test sets, one sampled from the default system, one sampled from the short-pole system, and a mixture of the two.
The first two are generated by sampling trajectories with an aggregated size of 5000 points from each system; the mixed set is their concatenation.
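A rough sketch of this data collection is given below, using the classic Gym API. The environment attributes, their default values, and the recomputation of derived quantities are assumptions based on the OpenAI Gym implementation of the cart-pole system and are only meant to illustrate the setup.
\begin{verbatim}
import numpy as np
import gym

def sample_transitions(short_pole=False, n_traj=50, seed=0):
    rng = np.random.default_rng(seed)
    env = gym.make("CartPole-v1").unwrapped
    if short_pole:
        env.masspole = 0.05  # half the default pole mass
        env.length = 0.05    # a tenth of the default (half-)length
        env.total_mass = env.masscart + env.masspole
        env.polemass_length = env.masspole * env.length
    data = []
    for _ in range(n_traj):
        obs = env.reset()    # pole starts almost upright
        for _ in range(10):  # 10 uniform random actions
            action = env.action_space.sample()
            for _ in range(5):  # hold each action for 5 steps
                next_obs, _, _, _ = env.step(action)
                # observed change of the pole angle plus noise
                d_theta = next_obs[2] - obs[2] + rng.normal(0, 0.01)
                data.append((np.append(obs, action), d_theta))
                obs = next_obs
    return data
\end{verbatim}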
For this data set, we use squared exponential kernels for both the $f^{\pix{k}}$ and $\alpha^{\pix{k}}$ and 100 inducing points in every GP.
We evaluate the performance of deep GPs with up to 3 layers and squared exponential kernels as models for the different functions.
As described in~\parencite{salimbeni_doubly_2017}, we use identity mean functions for all but the last layers and initialize the variational distributions with low covariances.
We compare our models with three-layer ReLU-activated Bayesian neural networks with added latent variables (BNN+LV) as introduced by~\textcite{depeweg_learning_2016}.
These latent variables can be used to effectively model multimodalities and stochasticity in dynamical systems for model-based reinforcement learning~\parencite{depeweg_decomposition_2018}.
We also compare to three kinds of sparse GPs~\parencite{hensman_scalable_2015}.
They are trained on the mixed data set, the default system and the short-pole system respectively.
\Cref{tab:cartpole} shows mean training and test log likelihoods and their standard error over ten runs for these models.
The \emph{mixed}-column corresponds to training and test log likelihoods for a standard regression problem, which in this case is a bimodal one.
The GPR model trained on the mixed data set shows the worst performance, since its predictions are single Gaussians spanning both system states.
Additionally, the mean prediction is approximately the mean of the two states and is physically implausible.
Both the BNN+LV and DAGP models perform substantially better as they can model the bimodality.
BNN+LV assumes continuous latent variables, and a bimodal distribution can be recovered by approximately marginalizing these latent variables via sampling.
The predictive posterior of unknown shape is approximated using a mixture of many Gaussians.
Compared to the shallow DAGP, the prior of BNN+LV is harder to interpret, as the DAGP's generative process produces a mixture of two Gaussians representing the two processes in the data.
Adding more layers to the DAGP model leads to more expressive models whose priors on the different processes become less informative.
For this cart-pole data, two-layer deep GPs seem to be a good compromise between model expressiveness and the strength of the prior, as they are best able to separate the data into the two separate dynamics.
On the \emph{mixed} test set, DAGP and BNN+LV both show comparable likelihoods.
However, the DAGP is a more expressive model, whose different components can be evaluated further.
The results in the \emph{default only} and \emph{short-pole only} columns compare training and test likelihoods on the parts of the training and test sets corresponding to these systems respectively.
We calculate these values by evaluating both functions separately on the data sets and reporting the higher likelihood.
We compare these results with sparse GP models trained only on the respective systems.
The two functions of DAGP reliably separate the two different systems.
In fact, the function corresponding to the \emph{default} system in the two-layer DAGP shows equal test performance to the corresponding GPR model trained only on data of this system.
The \emph{default} and \emph{short-pole} systems are sufficiently different that the sparse GPs trained on only one of the two sets perform very poorly on the other.
Out of these two systems, the \emph{short-pole} system is more complicated and harder to learn due to the higher instability of the pole.
The second function of DAGP still recovers an adequate model.
Given the fact that the two functions of DAGP model the two system dynamics in the original data, sampling trajectories from them results in physically plausible data, which is not possible with a sparse GP or BNN+LV model.
\section{Conclusion}
\label{sec:conclusion}
We have presented a fully Bayesian model for the data association problem.
Our model factorizes the observed data into a set of independent processes and provides a model over both the processes and their association to the observed data.
The data association problem is inherently ill-constrained and requires significant assumptions to recover a solution.
In this paper, we make use of interpretable Gaussian process priors allowing global a priori information to be included into the model.
Importantly, our model is able to exploit information both about the underlying functions and the association structure.
We have derived a principled approximation to the marginal likelihood which allows us to perform inference for flexible hierarchical processes.
In future work, we would like to incorporate the proposed model in a reinforcement learning scenario where we study a dynamical system with different operational regimes.
\printbibliography
\end{document}