\input{preamble/packages_paper.tex}
\input{preamble/abbreviations.tex}
% We use precompiled images and do not add tikz for speed of compilation.
\newcommand{\includestandalonewithpath}[2][]{%
\begingroup%
\StrCount{#2}{/}[\matches]%
\StrBefore[\matches]{#2}{/}[\figurepath]%
\includestandalone[#1]{#2}%
\endgroup%
}
% \input{figures/tikz_common.tex}
% \input{figures/tikz_colors.tex}
\addbibresource{zotero_export.bib}
\addbibresource{additional.bib}
% We set this for hyperref
\title{Bayesian Alignments of Warped Multi-Output Gaussian Processes}
\author{\href{mailto:markus.kaiser@siemens.com}{Markus Kaiser}}
\author{
Markus Kaiser\\
Siemens AG\\
Technical University of Munich\\
\texttt{markus.kaiser@siemens.com}\\
\And
Clemens Otte\\
Siemens AG\\
\texttt{clemens.otte@siemens.com}\\
\And
Thomas Runkler\\
Siemens AG\\
Technical University of Munich\\
\texttt{thomas.runkler@siemens.com}\\
\And
Carl Henrik Ek\\
University of Bristol\\
\texttt{carlhenrik.ek@bristol.ac.uk}\\
}
\begin{document}
\maketitle
\begin{abstract}
We propose a novel Bayesian approach to modelling nonlinear alignments of time series based on latent shared information.
We apply the method to the real-world problem of finding common structure in the sensor data of wind turbines introduced by the underlying latent and turbulent wind field.
The proposed model allows for both arbitrary alignments of the inputs and non-parametric output warpings to transform the observations.
This gives rise to multiple deep Gaussian process models connected via latent generating processes.
We present an efficient variational approximation based on nested variational compression and show how the model can be used to extract shared information between dependent time series, recovering an interpretable functional decomposition of the learning problem.
We show results for an artificial data set and real-world data of two wind turbines.
\end{abstract}
\section{Introduction}
Many real-world systems are inherently hierarchical and connected.
Ideally, a machine learning method should model and recognize such dependencies.
Take wind power production, which is one of the major sources of renewable energy today, as an example:
To optimize the efficiency of a wind turbine, the speed and pitch have to be controlled according to the local wind conditions (speed and direction).
In a wind farm, turbines are typically equipped with sensors for wind speed and direction.
The goal is to use these sensor data to produce accurate estimates and forecasts of the wind conditions at every turbine in the farm.
For the ideal case of a homogeneous and very slowly changing wind field, the wind conditions at each geometrical position in a wind farm can be estimated using the propagation times (time warps) computed from geometry, wind speed, and direction \parencite{soleimanzadeh_controller_2011,bitar_coordinated_2013,schepers_improved_2007}.
In the real world, however, wind fields are not homogeneous, exhibit global and local turbulences, and interfere with the turbines and the terrain inside and outside the farm; moreover, sensor faults may lead to data loss.
This makes it extremely difficult to construct accurate analytical models of wind propagation in a farm.
Also, standard approaches for extracting such information from data, e.g.\ generalized time warping \parencite{zhou_generalized_2012}, fail at this task because they rely on a high signal-to-noise ratio.
Instead, we want to construct Bayesian nonlinear dynamic data-based models for wind conditions and warpings which handle the stochastic nature of the system in a principled manner.
In this paper, we look at a generalization of this type of problem and propose a novel Bayesian approach to finding nonlinear alignments of time series based on latent shared information.
We view the power production of different wind turbines as the outputs of a multi-output Gaussian process (MO-GP) \parencite{alvarez_kernels_2011} which models the latent wind fronts.
We embed this model in a hierarchy, adding a layer of non-linear alignments on top and a layer of non-linear warpings \parencites{NIPS2003_2481,lazaro-gredilla_bayesian_2012} below, which increases flexibility and encodes the original generative process.
We show how the resulting model can be interpreted as a group of deep Gaussian processes with the added benefit of covariances between different outputs.
The imposed structure is used to formulate prior knowledge in a principled manner, restrict the representational power to physically plausible models and recover the desired latent wind fronts and relative alignments.
The presented model can be interpreted as a group of $D$ deep GPs all of which share one layer which is a MO-GP.
This MO-GP acts as an interface to share information between the different GPs which are otherwise conditionally independent.
This paper makes the following contributions:
In \cref{sec:model}, we propose a hierarchical, warped and aligned multi-output Gaussian process (AMO-GP).
In \cref{sec:variational_approximation}, we present an efficient learning scheme via an approximation to the marginal likelihood which allows us to fully exploit the regularization provided by our structure, yielding highly interpretable results.
We show these properties for an artificial data set and for real-world data of two wind turbines in \cref{sec:experiments}.
\section{Model Definition}
\label{sec:model}
We are interested in formulating shared priors over a set of functions $\Set{f_d}_{d=1}^D$ using GPs, thereby directly parameterizing their interdependencies.
In a traditional GP setting, multiple outputs are considered conditionally independent given the inputs, which significantly reduces the computational cost but also prevents the utilization of shared information.
Such interdependencies can be formulated via \emph{convolution processes (CPs)} as proposed by \textcite{boyle_dependent_2004}, a generalization of the \emph{linear model of coregionalization (LMC)} \parencite{journel_mining_1978,coburn_geostatistics_2000}.
In the CP framework, the output functions are the result of a convolution of the latent processes $w_r$ with smoothing kernel functions $T_{d,r}$ for each output $f_d$, defined as
\begin{align}
f_d(\mat{x}) = \sum_{r=1}^R \int T_{d,r}(\mat{x} - \mat{z}) \cdot w_r(\mat{z}) \diff \mat{z}.
\end{align}
In this model, the convolutions of the latent processes generating the different outputs are all performed around the same point $\mat{x}$.
We generalize this by allowing different \emph{alignments} of the observations which depend on the position in the input space.
This allows us to model the changing relative interaction times for the different latent wind fronts as described in the introduction.
We also assume that the dependent functions $f_d$ are latent themselves and the data we observe is generated via independent noisy nonlinear transformations of their values.
Every function $f_d$ is augmented with an alignment function $a_d$ and a warping $g_d$ on which we place independent GP priors.
For simplicity, we assume that the outputs are all evaluated at the same positions $\mat{X} = \Set{\mat{x_n}}_{n=1}^N$.
This can easily be generalized to different input sets for every output.
In our application, the $\mat{x_n}$ are one-dimensional time indices.
However, since the model can be generalized to multi-dimensional inputs, we do not restrict ourselves to the one-dimensional case.
We note that in the multi-dimensional case, reasoning about priors on alignments can be challenging.
We call the observations associated with the $d$-th function $\mat{y_d}$ and use the stacked vector $\mat{y} = \left( \mat{y_1}, \dots, \mat{y_D} \right)$ to collect the data of all outputs.
The final model is then given by
\begin{align}
\mat{y_d} &= g_d(f_d(a_d(\mat{X}))) + \mat{\epsilon_d},
\end{align}
where $\mat{\epsilon_d} \sim \Gaussian{0, \sigma_{y, d}^2\Eye}$ is a noise term.
The functions are applied element-wise.
This encodes the generative process described above:
For every turbine $\rv{y_d}$, observations at positions $\rv{X}$ are generated by first aligning to the latent wind fronts using $a_d$, applying the front in $f_d$, imposing turbine-specific components $g_d$ and adding noise in $\rv{\epsilon_d}$.
We assume independence between $a_d$ and $g_d$ across outputs and apply GP priors of the form $a_d \sim \GP(\id, k_{a, d})$ and $g_d \sim \GP(\id, k_{g, d})$.
By setting the prior mean to the identity function $\id(x) = x$, the standard CP model is our default assumption.
During learning, the model can choose the different $a_d$ and $g_d$ in a way that reveals the independent shared latent processes $\Set{w_r}_{r=1}^R$ on which we also place GP priors $w_r \sim \GP(0, k_{u, r})$.
Similar to \textcite{boyle_dependent_2004}, we assume the latent processes to be independent white noise processes by setting $\Moment{\cov}{w_r(\mat{z}), w_{r^\prime}(\mat{z^\prime})} = \delta_{rr^\prime}\delta_{\mat{z}\mat{z^\prime}}$.
Under this prior, the $f_d$ are also GPs with zero mean and $\Moment{\cov}{f_d(\mat{x}), f_{d^\prime}(\mat{x^\prime})} = \sum_{r=1}^R \int T_{d,r}(\mat{x} - \mat{z}) T_{d^\prime,r}(\mat{x^\prime} - \mat{z}) \diff \mat{z}$.
Using the squared exponential kernel for all $T_{d, r}$, the integral can be shown to have a closed form solution.
With $\Set{\sigma_{d,r}, \mat{\ell_{d, r}}}$ denoting the kernel hyperparameters associated with $T_{d,r}$, it is given by
\begin{align}
\label{eq:dependent_kernel}
\begin{split}
\MoveEqLeft[1] \Moment{\cov}{f_d(\mat{x}), f_{d^\prime}(\mat{x^\prime})} = \sum_{r=1}^R \frac{(2\pi)^{\frac{K}{2}}\sigma_{d, r}\sigma_{d^\prime, r}}{\prod_{k=1}^K \hat{\ell}_{d, d^\prime, r, k}\inv} \Fun*{\exp}{-\frac{1}{2} \sum_{k=1}^K \frac{(x_k - x^\prime_k)^2}{\hat{\ell}_{d, d^\prime, r, k}^2}},
\end{split}
\end{align}
where $\mat{x}$ is $K$-dimensional and $\hat{\ell}_{d, d^\prime, r, k} = \sqrt{\ell_{d, r, k}^2 + \ell_{d^\prime, r, k}^2}$.
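For concreteness, the following NumPy snippet is a minimal sketch that transcribes the cross-covariance in \cref{eq:dependent_kernel} for a single latent process $r$ (the hyperparameter values in the example are illustrative only); the full kernel sums such terms over $r$.
\begin{verbatim}
import numpy as np

def dependent_rbf_cov(x, x_prime, sigma, sigma_prime, ell, ell_prime):
    # cov[f_d(x), f_d'(x')] from eq. (dependent_kernel), single latent process.
    # x, x_prime: K-dimensional inputs; ell, ell_prime: length scales of T_{d,r}
    # and T_{d',r}; sigma, sigma_prime: the corresponding variance parameters.
    ell_hat = np.sqrt(ell ** 2 + ell_prime ** 2)
    prefactor = ((2 * np.pi) ** (len(x) / 2) * sigma * sigma_prime
                 / np.prod(1.0 / ell_hat))
    return prefactor * np.exp(-0.5 * np.sum((x - x_prime) ** 2 / ell_hat ** 2))

# Example: covariance between two outputs at nearby one-dimensional inputs.
print(dependent_rbf_cov(np.array([0.1]), np.array([0.15]),
                        sigma=1.0, sigma_prime=0.8,
                        ell=np.array([0.2]), ell_prime=np.array([0.3])))
\end{verbatim}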
\begin{figure}[t]
\centering
\begin{minipage}[c]{.45\textwidth}
\centering
\includestandalone{figures/graphical_model_supervised}
\end{minipage}
\hfill
\begin{minipage}[c]{.45\textwidth}
\centering
\includestandalonewithpath{figures/toy_decomposition_true}
\end{minipage}\\[5pt]
\begin{minipage}[t]{.45\textwidth}
\centering
\caption{
\label{fig:graphical_model_supervised}
The graphical model of the AMO-GP with variational parameters (blue).
A CP, informed by $R$ latent processes, models shared information between multiple data sets with nonlinear alignments and warpings.
This CP connects multiple deep GPs through a shared layer.
}
\end{minipage}
\hfill
\begin{minipage}[t]{.45\textwidth}
\centering
\caption{
\label{fig:toy_decomposition}
An artificial example of hierarchical composite data with multiple observations of shared latent information.
This hierarchy generates two data sets using a dampened sine function which is never observed directly.
}
\end{minipage}
\end{figure}
\section{Variational Approximation}
\label{sec:variational_approximation}
Since exact inference in this model is intractable, we present a variational approximation to the model's marginal likelihood in this section.
A detailed derivation of the variational bound can be found in \cref{app:sec:variational_approximation}.
Analogously to $\mat{y}$, we denote the random vectors which contain the function values of the respective functions and outputs as $\rv{a}$ and $\rv{f}$.
The joint probability distribution of the data can then be written as
\begin{align}
\begin{split}
\label{eq:full_model}
\begin{aligned}
&\Prob{\rv{y}, \rv{f}, \rv{a} \given \mat{X}} = \\
&\qquad\Prob{\rv{f} \given \rv{a}} \prod_{d=1}^D \Prob{\rv{y_d} \given \rv{f_d}}\Prob{\rv{a_d} \given \rv{X}},
\end{aligned}
\qquad\qquad
\begin{aligned}
\rv{a_d} \mid \mat{X} &\sim \Gaussian{\mat{X}, \mat{K_{a, d}} + \sigma^2_{a, d}\Eye}, \\
\rv{f} \mid \mat{a} &\sim \Gaussian{\mat{0}, \mat{K_f} + \sigma^2_f\Eye}, \\
\rv{y_d} \mid \mat{f_d} &\sim \Gaussian{\mat{f_d}, \mat{K_{g, d}} + \sigma^2_{y, d}\Eye}.
\end{aligned}
\end{split}
\end{align}
Here, we use $\mat{K}$ to refer to the Gram matrices corresponding to the respective GPs.
All but the convolution processes factorize over the different levels of the model as well as the different outputs.
\subsection{Variational Lower Bound}
\label{subsec:lower_bound}
To approximate a single deep GP, that is, a single chain of GPs stacked on top of each other, \textcite{hensman_nested_2014} proposed nested variational compression, in which every GP in the hierarchy is handled independently.
In order to arrive at their lower bound they make two variational approximations.
First, they consider a variational approximation $\Variat*{\hat{\rv{a}}, \rv{u}} = \Prob*{\rv{\hat{a}} \given \rv{u}} \Variat*{\rv{u}}$ to the true posterior of a single GP, first introduced by \textcite{titsias_variational_2009}.
In this approximation, the original model is augmented with \emph{inducing variables} $\mat{u}$ together with their \emph{inducing points} $\mat{Z}$, which are assumed to be latent observations of the same function and are thus jointly Gaussian with the observed data.
In contrast to \textcite{titsias_variational_2009}, the distribution $\Variat*{\rv{u}}$ is not chosen optimally but is parameterized explicitly as $\Variat*{\rv{u}} \sim \Gaussian*{\rv{u} \given \mat{m}, \mat{S}}$ and optimized.
This gives rise to the Scalable Variational GP presented by \textcite{hensman_gaussian_2013}.
Second, in order to apply this variational bound to the individual GPs recursively, uncertainties have to be propagated through subsequent layers and inter-layer cross-dependencies are avoided using another variational approximation.
The variational lower bound for the AMO-GP is given by
\begin{align}
\label{eq:full_bound}
\begin{split}
\MoveEqLeft\log \Prob{\rv{y}\given \mat{X}, \mat{Z}, \rv{u}} \geq
\sum_{d=1}^D \log\Gaussian*{\rv{y_d} \given \mat{\Psi_{g, d}} \mat{K_{u_{g, d}u_{g, d}}}\inv \mat{m_{g, d}}, \sigma_{y, d}^2 \Eye}
- \sum_{d=1}^D \frac{1}{2\sigma_{a, d}^2} \Fun{\tr}{\mat{\Sigma_{a, d}}} \\
&- \frac{1}{2\sigma_f^2} \left( \psi_{f} - \Fun*{\tr}{\mat{\Phi_f} \mat{K_{u_fu_f}}\inv} \right)
- \sum_{d=1}^D\frac{1}{2\sigma_{y, d}^2} \left( \psi_{g, d} - \Fun*\tr{\mat{\Phi_{g, d}} \mat{K_{u_{g, d}u_{g, d}}}\inv} \right) \\
&- \sum_{d=1}^D \KL{\Variat{\rv{u_{a, d}}}}{\Prob{\rv{u_{a, d}}}}
- \KL{\Variat{\rv{u_f}}}{\Prob{\rv{u_f}}}
- \sum_{d=1}^D \KL{\Variat{\rv{u_{y, d}}}}{\Prob{\rv{u_{y, d}}}} \\
&- \frac{1}{2\sigma_f^2} \tr\left(\left(\mat{\Phi_f} - \mat{\Psi_f}\tran\mat{\Psi_f}\right) \mat{K_{u_fu_f}}\inv \left(\mat{m_f}\mat{m_f}\tran + \mat{S_f}\right)\mat{K_{u_fu_f}}\inv\right) \\
&- \sum_{d=1}^D\frac{1}{2\sigma_{y, d}^2} \tr\left(\left(\mat{\Phi_{g, d}} - \mat{\Psi_{g, d}}\tran\mat{\Psi_{g, d}}\right)
\mat{K_{u_{g, d}u_{g, d}}}\inv \left(\mat{m_{g, d}}\mat{m_{g, d}}\tran + \mat{S_{g, d}}\right) \mat{K_{u_{g, d}u_{g, d}}}\inv\right),
\end{split}
\end{align}
where $\KLdiv$ denotes the KL divergence.
A detailed derivation can be found in \cref{app:sec:variational_approximation}.
The bound contains one Gaussian fit term per output dimension and a series of regularization terms for every GP in the hierarchy.
The KL divergences connect the variational approximations to the priors and the different trace terms regularize the variances of the different GPs (for a detailed discussion see \textcite{hensman_nested_2014}).
This bound depends on the hyperparameters of the kernels and likelihoods $\left\{ \mat{\ell}, \mat{\sigma} \right\}$ and the variational parameters $\Set*{\mat{Z_{l,d}}, \mat{m_{l,d}}, \mat{S_{l,d}} \with l \in \Set{\rv{a}, \rv{f}, \rv{g}}, d \in [D]}$.
The bound can be calculated in $\Oh(NM^2)$ time and factorizes along the data points, which enables stochastic optimization.
Since each of the $N$ data points is associated with one of the $D$ outputs, the computational cost of the model is independent of $D$.
Information is only shared between the different outputs via the inducing variables of $\rv{f}$.
As the different outputs share a common function, increasing $D$ allows us to reduce the number of variational parameters per output, because the shared function can still be represented completely.
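As a minimal illustration of this factorization (with toy stand-ins for the actual bound terms, not our implementation), an unbiased stochastic estimate of the bound can be formed by rescaling a minibatch sum of the per-point terms while adding the data-independent KL terms once:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the per-point fit/trace terms and the KL regularizers.
per_point_term = lambda n, p: -0.5 * p["residual"][n] ** 2 / p["noise"]
kl_terms = lambda p: p["kl"]

def stochastic_bound(batch_idx, N, params):
    # The Gaussian fit and trace terms factorize over the N data points, so a
    # minibatch sum rescaled by N / |B| estimates them without bias; the
    # KL-divergences do not depend on the data and are added once.
    data_part = sum(per_point_term(n, params) for n in batch_idx)
    return (N / len(batch_idx)) * data_part - kl_terms(params)

N = 1000
params = {"residual": rng.standard_normal(N), "noise": 0.1, "kl": 2.5}
batch = rng.choice(N, size=64, replace=False)
print(stochastic_bound(batch, N, params))
\end{verbatim}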
Central to this bound are expectations over kernel matrices, the three $\Psi$-statistics $\psi_f = \Moment*{\E_{\Variat{\rv{a}}}}{\Fun*{\tr}{\mat{K_{ff}}}}$, $\mat{\Psi_f} = \Moment*{\E_{\Variat{\rv{a}}}}{\mat{K_{fu}}}$ and $\mat{\Phi_f} = \Moment*{\E_{\Variat{\rv{a}}}}{\mat{K_{uf}}\mat{K_{fu}}}$.
Closed-form solutions for these statistics depend on the choice of kernel and are known for specific kernels such as the linear or RBF kernels, as shown for example by \textcite{damianou_deep_2012}.
In the following subsection we give closed-form solutions for the statistics required in the shared CP layer of our model.
\subsection{Convolution Kernel Expectations}
\label{subsec:kernel_expectations}
The uncertainty about the first layer is captured by the variational distribution of the latent alignments $\rv{a}$ given by $\Variat{\rv{a}} \sim \Gaussian{\mat{\mu_a}, \mat{\Sigma_a}}$.
Every aligned point in $\rv{a}$ corresponds to one output of $\rv{f}$ and ultimately to one of the $\rv{y_d}$.
Since the closed form of the multi-output kernel depends on the choice of outputs, we use the notation $\Fun{\hat{f}}{\rv{a_n}}$ to denote $\Fun{f_d}{\rv{a_n}}$ such that $\rv{a_n}$ is associated with output $d$.
For simplicity, we only consider a single latent process $w_r$.
Since the latent processes are independent, the results can easily be generalized to multiple processes.
Then, $\psi_f$ is given by
\begin{align}
\label{eq:psi0}
\psi_f = \Moment*{\E_{\Variat{\rv{a}}}}{\Fun*{\tr}{\mat{K_{ff}}}} = \sum_{n=1}^N \hat{\sigma}_{nn}^2.
\end{align}
Similar to the notation $\Fun{\hat{f}}{\cdot}$, we use the notation $\hat{\sigma}_{nn^\prime}$ to mean the variance term associated with the covariance function $\Moment{\cov}{\Fun{\hat{f}}{\mat{a_n}}, \Fun{\hat{f}}{\mat{a_{n^\prime}}}}$ as shown in \cref{eq:dependent_kernel}.
The expectation $\mat{\Psi_f} = \Moment*{\E_{\Variat{\rv{a}}}}{\mat{K_{fu}}}$ connecting the alignments and the pseudo inputs is given by
\begin{align}
\begin{split}
\label{eq:psi1}
\mat{\Psi_f} &= \Moment*{\E_{\Variat{\rv{a}}}}{\mat{K_{fu}}}\text{, with} \\
\left( \mat{\Psi_f} \right)_{ni}
&= \hat{\sigma}_{ni}^2 \sqrt{\frac{(\mat{\Sigma_a})_{nn}\inv}{\hat{\ell}_{ni} + (\mat{\Sigma_a})_{nn}\inv}}
\exp\left(-\frac{1}{2} \frac{(\mat{\Sigma_a})_{nn}\inv\hat{\ell}_{ni}}{(\mat{\Sigma_a})_{nn}\inv + \hat{\ell}_{ni}} \left((\mat{\mu_a})_n - \mat{Z_i}\right)^2\right),
\end{split}
\end{align}
where $\hat{\ell}_{ni}$ is the combined length scale corresponding to the same kernel as $\hat{\sigma}_{ni}$.
Lastly, $\mat{\Phi_f} = \Moment*{\E_{\Variat{\rv{a}}}}{\mat{K_{uf}}\mat{K_{fu}}}$ connects alignments and pairs of pseudo inputs with the closed form
\begin{align}
\begin{split}
\label{eq:psi2}
\mat{\Phi_f} &= \Moment*{\E_{\Variat{\rv{a}}}}{\mat{K_{uf}}\mat{K_{fu}}}\text{, with} \\
\left( \mat{\Phi_f} \right)_{ij} &= \sum_{n=1}^N \hat{\sigma}_{ni}^2 \hat{\sigma}_{nj}^2 \sqrt{\frac{(\mat{\Sigma_a})_{nn}\inv}{\hat{\ell}_{ni} + \hat{\ell}_{nj} + (\mat{\Sigma_a})_{nn}\inv}}
\exp\left( -\frac{1}{2} \vphantom{\left(\frac{\hat{\ell}}{\hat{\ell}}\right)^2} \frac{\hat{\ell}_{ni}\hat{\ell}_{nj}}{\hat{\ell}_{ni} + \hat{\ell}_{nj}} (\mat{Z_i} - \mat{Z_j})^2 \right. \\
&\quad {} - \frac{1}{2} \frac{(\mat{\Sigma_a})_{nn}\inv(\hat{\ell}_{ni} + \hat{\ell}_{nj})}{(\mat{\Sigma_a})_{nn}\inv + \hat{\ell}_{ni} + \hat{\ell}_{nj}}
\left.\left( (\mat{\mu_a})_n - \frac{\hat{\ell}_{ni} \mat{Z_i} + \hat{\ell}_{nj} \mat{Z_j}}{\hat{\ell}_{ni} + \hat{\ell}_{nj}} \right)^2 \right).
\end{split}
\end{align}
The $\Psi$-statistics factorize along the data and we only need to consider the diagonal entries of $\mat{\Sigma_a}$.
If all the data belong to the same output, the $\Psi$-statistics of the squared exponential kernel can be recovered as a special case.
This case is used for the output-specific warpings $\rv{g}$.
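As a reference, the following NumPy function is a literal transcription of \cref{eq:psi0,eq:psi1,eq:psi2} for a single latent process and one-dimensional alignments; it is a minimal sketch rather than our implementation, and it assumes that the combined terms $\hat{\sigma}$ and $\hat{\ell}$ have already been precomputed from the convolution kernel hyperparameters.
\begin{verbatim}
import numpy as np

def psi_statistics(mu_a, Sigma_a_diag, Z, sigma_hat_nn, sigma_hat, ell_hat):
    # mu_a, Sigma_a_diag: mean and diagonal variance of q(a), shape (N,).
    # Z: inducing inputs, shape (M,).
    # sigma_hat_nn: combined variance terms for pairs (a_n, a_n), shape (N,).
    # sigma_hat, ell_hat: combined variance / length-scale terms for pairs
    #                     (a_n, Z_i), shape (N, M).
    prec = (1.0 / Sigma_a_diag)[:, None]                  # (Sigma_a)_nn^{-1}

    psi = np.sum(sigma_hat_nn ** 2)                       # eq. (psi0)

    diff = mu_a[:, None] - Z[None, :]
    Psi = (sigma_hat ** 2                                 # eq. (psi1)
           * np.sqrt(prec / (ell_hat + prec))
           * np.exp(-0.5 * prec * ell_hat / (prec + ell_hat) * diff ** 2))

    l_i, l_j = ell_hat[:, :, None], ell_hat[:, None, :]
    l_sum = l_i + l_j
    Zi, Zj = Z[None, :, None], Z[None, None, :]
    prec3 = (1.0 / Sigma_a_diag)[:, None, None]
    centre = (l_i * Zi + l_j * Zj) / l_sum
    Phi = np.sum(                                         # eq. (psi2)
        sigma_hat[:, :, None] ** 2 * sigma_hat[:, None, :] ** 2
        * np.sqrt(prec3 / (l_sum + prec3))
        * np.exp(-0.5 * l_i * l_j / l_sum * (Zi - Zj) ** 2
                 - 0.5 * prec3 * l_sum / (prec3 + l_sum)
                 * (mu_a[:, None, None] - centre) ** 2),
        axis=0)
    return psi, Psi, Phi
\end{verbatim}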
\subsection{Model Interpretation}
\label{subsec:interpretation}
The graphical model shown in \cref{fig:graphical_model_supervised} illustrates that the presented model can be interpreted as a group of $D$ deep GPs, all of which share one layer which is a CP.
This CP acts as an interface to share information between the different GPs which are otherwise conditionally independent.
This modelling choice introduces a new quality to the model when compared to standard deep GPs with multiple output dimensions, since the latter are in principle unable to learn dependencies between the different outputs.
Compared to standard multi-output GPs, the AMO-GP introduces more flexibility with respect to the shared information.
CPs make strong assumptions about the relative alignments of the different outputs, that is, they assume constant time-offsets.
The AMO-GP extends this by introducing a principled Bayesian treatment of general nonlinear alignments $a_d$ on which we can place informative priors derived from the problem at hand.
Together with the warping layers $g_d$, our model can learn to share knowledge in an informative latent space learnt from the data.
Alternatively, this model can be interpreted as a shared and warped latent variable model with a very specific prior:
The indices $\mat{X}$ are part of the prior for the latent space $a_d(\mat{X})$ and specify a sense of order for the different data points $\mat{y}$ which is augmented with uncertainty by the alignment functions.
Using this order, the convolution processes enforce the covariance structure for the different data points specified by the smoothing kernels.
In order to derive an inference scheme, we need the ability to propagate uncertainties about the correct alignments and latent shared information through subsequent layers.
We adapted the approach of nested variational compression by \textcite{hensman_nested_2014}, which is originally concerned with a single deep GP.
The approximation is expanded to handle multiple GPs at once, yielding the bound in \cref{eq:full_bound}.
The bound reflects the dependencies of the different outputs as the sharing of information between the different deep GPs is approximated through the shared inducing variables $\rv{u_{f,d}}$.
Our main contribution for the inference scheme is the derivation of a closed-form solution for the $\Psi$-statistics of the convolution kernel in \cref{eq:psi0,eq:psi1,eq:psi2}.
\section{Experiments}
\label{sec:experiments}
\begin{figure}[t]
\centering
\begin{subfigure}{.45\linewidth}
\centering
\includestandalonewithpath{figures/toy_decomposition_shallow_gp}
\caption{
Shallow GP with RBF kernel.
\label{fig:toy_model_decomposition:a}
}
\end{subfigure}
\hfill
\begin{subfigure}{.45\linewidth}
\centering
\includestandalonewithpath{figures/toy_decomposition_mo_gp}
\caption{
Multi-Output GP with dependent RBF kernel.
\label{fig:toy_model_decomposition:b}
}
\end{subfigure}
\\[\baselineskip]
\begin{subfigure}{.45\linewidth}
\centering
\includestandalonewithpath{figures/toy_decomposition_dgp}
\caption{
Deep GP with RBF kernels.
\label{fig:toy_model_decomposition:c}
}
\end{subfigure}
\hfill
\begin{subfigure}{.45\linewidth}
\centering
\includestandalonewithpath{figures/toy_decomposition_ours}
\caption{
AMO-GP with (dependent) RBF kernels.
\label{fig:toy_model_decomposition:d}
}
\end{subfigure}
\caption{
\label{fig:toy_model_decomposition}
A comparison of the AMO-GP with other GP models.
The plots show mean predictions and a shaded area of two standard deviations.
If available, the ground truth is displayed as a dashed line.
Additional lines are noiseless samples drawn from the model.
The shallow and deep GPs in \cref{fig:toy_model_decomposition:a,fig:toy_model_decomposition:c} model the data independently and revert back to the prior in $\rv{y_2}$.
Because of the nonlinear alignment, a multi-output GP cannot model the data in \cref{fig:toy_model_decomposition:b}.
The AMO-GP in \cref{fig:toy_model_decomposition:d} recovers the alignment and warping and shares information between the two outputs.
}
\end{figure}
In this section we show how to apply the AMO-GP to the task of finding common structure in time series observations.
In this setting, we observe multiple time series $\T_d = (\mat{X_d}, \mat{y_d})$ and assume that there exist latent time series which determine the observations.
We first apply the AMO-GP to an artificial data set in which we define a decomposed system of dependent time series by specifying a shared latent function generating the observations together with relative alignments and warpings for the different time series.
We show that our model is able to recover this decomposition from the training data and compare the results to other approaches to modelling the data.
Then we focus on a real-world data set of a neighbouring pair of wind turbines in a wind farm, where the model is able to recover a representation of the latent prevailing wind condition and the relative timings of wind fronts at the two turbines.
\subsection{Artificial data set}
\label{subsec:artificial_example}
Our data set consists of two time series $\T_1$ and $\T_2$ generated by a dampened sine function.
% \begin{align}
% f : \left\{ \begin{aligned}
% [0, 1] &\to \R \\
% x &\mapsto \left( 1 - \frac{3}{4} \tanh \left( \frac{10\pi}{15} \cdot x \right) \right) \cdot \sin (10\pi \cdot x).
% \end{aligned}\right.
% \end{align}
We choose the alignment of $\T_1$ and the warping of $\T_2$ to be the identity and apply a sigmoid warping to $\T_1$, which prevents the latent function from being observed directly.
The alignment of $\T_2$ is selected to be a quadratic function.
\Cref{fig:toy_decomposition} shows a visualization of this decomposed system of dependent time series.
To obtain training data we uniformly sampled 500 points from the two time series and added Gaussian noise.
We subsequently removed parts of the training sets to explore the generalization behaviour of our model, resulting in $\abs{\T_1}= 450$ and $\abs{\T_2} = 350$.
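A minimal NumPy sketch of this data-generating process, using the dampened sine $x \mapsto (1 - \frac{3}{4}\tanh(\frac{10\pi}{15} x)) \sin(10\pi x)$ from the commented definition above, could look as follows; the noise scale and the particular quadratic alignment are illustrative assumptions, and the removal of the test intervals is omitted.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Dampened sine used as the shared latent function on [0, 1].
f = lambda x: (1 - 0.75 * np.tanh(10 * np.pi / 15 * x)) * np.sin(10 * np.pi * x)
sigmoid = lambda v: 1 / (1 + np.exp(-v))

X = rng.uniform(0, 1, size=500)
# T_1: identity alignment, sigmoid output warping (noise scale is illustrative).
y1 = sigmoid(f(X)) + 0.05 * rng.standard_normal(X.shape)
# T_2: quadratic alignment (here x -> x^2 as an example), identity warping.
y2 = f(X ** 2) + 0.05 * rng.standard_normal(X.shape)
\end{verbatim}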
We use this setup to train our model using squared exponential kernels, both in the conditionally independent GPs $\rv{a_d}$ and $\rv{g_d}$ and as smoothing kernels in $\rv{f}$.
We can always choose one alignment and one warping to be the identity function in order to constrain the shared latent spaces $\rv{a}$ and $\rv{f}$ and to provide a reference to which the other alignments and warpings are relative.
Since we assume our artificial data simulates a physical system, we encode the prior knowledge that the alignment and warping processes have slower dynamics compared to the shared latent function, which should capture most of the observed dynamics.
To this end we apply priors to the $\rv{a_d}$ and $\rv{g_d}$ which prefer longer length scales and smaller variances compared to $\rv{f}$.
Otherwise, the model could easily get stuck in local minima, such as choosing the upper two layers to be identity functions and modelling the time series independently in the $\rv{g_d}$.
Additionally, our assumption of identity mean functions prevents pathological cases in which the complete model collapses to a constant function.
\Cref{fig:toy_model_decomposition:d} shows the AMO-GP's recovered function decomposition and joint predictions.
The model successfully recovered a shared latent dampened sine function, a sigmoid warping for the first time series and an approximately quadratic alignment function for the second time series.
In \cref{fig:toy_model_decomposition:a,fig:toy_model_decomposition:b,fig:toy_model_decomposition:c}, we show the training results of a standard GP, a multi-output GP and a three-layer deep GP on the same data.
For all of these models, we used RBF kernels and, in the case of the deep GP, applied priors similar to our model in order to avoid pathological cases.
In \cref{tab:toy_model_log_likelihoods} we report test log-likelihoods for the presented models, which illustrate the qualitative differences between the models.
Because all models are non-parametric and converge well, repeating the experiments with different initializations leads to very similar likelihoods.
Both the standard GP and the deep GP cannot learn dependencies between the time series and revert back to the prior where no data is available.
The deep GP has learned that two layers are enough to model the data and the resulting model is essentially a Bayesian warped GP which has identified the sigmoid warping for $\T_1$.
Uncertainties in the deep GP are placed in the middle layer in areas where no data are available for the respective time series, as sharing information between the two outputs is impossible.
In contrast to the other two models, the multi-output GP can and must share information between the two time series.
As discussed in \cref{sec:model}, however, it is constrained to constant time-offsets and cannot model the nonlinear alignment in the data.
Because of this, the model cannot recover the latent sine function and can only model one of the two outputs.
\subsection{Pairs of wind turbines}
\label{subsec:wind_example}
\begin{figure}[t]
\centering
\includestandalonewithpath{figures/wind_joint_model}
\caption{
\label{fig:wind_joint_model}
The joint posterior for two time series $\rv{y_1}$ and $\rv{y_2}$ of power production for a pair of wind turbines.
The top and bottom plots show the two observed time series with training data and dashed missing data.
The AMO-GP recovers an uncertain relative alignment of the two time series shown in the middle plot.
High uncertainty about the alignment is placed in areas where multiple explanations are plausible due to the high amount of noise or missing data.
}
\end{figure}
\begin{figure}[t]
\centering
\begin{subfigure}{.475\linewidth}
\centering
\includestandalonewithpath{figures/wind_shallow_gp_samples_left}
\caption{
\label{fig:wind_samples:a}
Samples from a GP.
}
\end{subfigure}
\hfill
\begin{subfigure}{.475\linewidth}
\centering
\includestandalonewithpath{figures/wind_mo_gp_samples_left}
\caption{
\label{fig:wind_samples:b}
Samples from a MO-GP.
}
\end{subfigure}\\[\baselineskip]
\begin{subfigure}{.475\linewidth}
\centering
\includestandalonewithpath{figures/wind_dgp_samples_left}
\caption{
\label{fig:wind_samples:c}
Samples from a DGP.
}
\end{subfigure}
\hfill
\begin{subfigure}{.475\linewidth}
\centering
\includestandalonewithpath{figures/wind_alignment_samples_left}
\caption{
\label{fig:wind_samples:d}
Samples from the AMO-GP.
}
\end{subfigure}
\caption{
\label{fig:wind_samples}
A comparison of noiseless samples drawn from a GP, a MO-GP, a DGP and the AMO-GP.
The separation of uncertainties implied by the model structure of the AMO-GP gives rise to an informative model.
Since the uncertainty in the generative process is mainly placed in the relative alignment shown in \cref{fig:wind_joint_model}, all samples in \cref{fig:wind_samples:d} resemble the underlying data in structure.
}
\end{figure}
\begin{table}[t]
\centering
\caption{
\label{tab:toy_model_log_likelihoods}
Test log-likelihoods for the models presented in \cref{sec:experiments}.
}
\newcolumntype{Y}{>{\centering\arraybackslash}X}
\begin{tabularx}{\linewidth}{rrYYYY}
\toprule
Experiment & Test set & GP & MO-GP & DGP & AMO-GP (Ours) \\
\midrule
Artificial & $[0.7, 0.8] \subseteq \T_1$ & -0.12 & -0.053 & 0.025 & \textbf{1.54} \\
& $[0.35, 0.65] \subseteq \T_2$ & -0.19 & -5.66 & -0.30 & \textbf{0.72} \\
\midrule
Wind & $[40, 45] \subseteq \T_2 $ & -4.42 & -2.31 & -1.80 & \textbf{-1.43} \\
& $[65, 75] \subseteq \T_2 $ & -7.26 & -0.73 & -1.93 & \textbf{-0.69} \\
\bottomrule
\end{tabularx}
\end{table}
This experiment is based on real data recorded from a pair of neighbouring wind turbines in a wind farm.
The two time series $\T_1$ and $\T_2$ shown in gray in \cref{fig:wind_joint_model} record the respective power generation of the two turbines over the course of one and a half hours and were smoothed slightly using a rolling average over 60 seconds.
There are 5400 data points for the first turbine (blue) and 4622 data points for the second turbine (green).
We removed two intervals (drawn as dashed lines) from the second turbine's data set to inspect the behaviour of the model with missing data.
This allows us to evaluate and compare the generative properties of our model in \cref{fig:wind_samples}.
The power generated by a wind turbine mainly depends on the speed of the wind fronts interacting with the turbine.
For system identification tasks concerned with the behaviour of multiple wind turbines, associating the observations at different turbines caused by the same wind fronts is an important task.
However, it is usually not possible to directly measure these correspondences or wind propagation speeds between turbines, which means that there is no ground truth available.
An additional problem is that the shared latent wind conditions are superimposed by turbine-specific local turbulences.
Since these local effects are of comparable amplitude to short-term changes of wind speed, it is challenging to decide which parts of the signal to explain away as noise and which parts to identify as the underlying shared process.
Our goal is the simultaneous learning of the uncertain alignment in time $\rv{a}$ and of the shared latent wind condition $\rv{f}$.
Modelling the turbine-specific parts of the signals is not the objective, so they need to be explained by the Gaussian noise term.
We use squared exponential kernels as priors for the alignment functions $\rv{a_d}$ and as smoothing kernels in $\rv{f}$.
For the given data set we can assume the output warpings $\rv{g_d}$ to be linear functions because there is only one dimension, the power generation, which in this data set is of similar shape for both turbines.
Again we encode a preference for alignments with slow dynamics with a prior on the length scales of $\rv{a_d}$.
As the signal has turbine-specific autoregressive components, plausible alignments are not unique.
To constrain the AMO-GP, we want it to prefer alignments close to the identity function, which we chose as the prior mean function.
% In non-hierarchical models, this can be achieved by placing a prior on the kernel's variance preferring smaller $\sigma_a^2$.
% In our case, the posterior space of the alignment process $\rv{a}$ is a latent space we have not placed any prior on.
% The model can therefore choose the posterior distribution of $\rv{u_a}$ in a way to counteract the constrained scale of $\mat{K_{au}}$ and $\mat{K_{uu}}\inv$ in \cref{eq:augmented_joint}\todo{This equation does not exist anymore} and thereby circumvent the prior.
% To prevent this, we also place a prior on the mean of $\rv{u_a}$ to remove this degree of freedom.
\Cref{fig:wind_joint_model} shows the joint model learned from the data in which $a_1$ is chosen to be the identity function.
The possible alignments identified match the physical conditions of the wind farm.
For the given turbines, time offsets of up to six minutes are plausible, and for most wind conditions the offset is expected to be close to zero.
In areas where the alignment is quite certain, however, the two time series are explained with comparable detail.
The model is able to recover unambiguous associations well and successfully places high uncertainty on the alignment in areas where multiple explanations are plausible due to the noisy signal.
As expected, the uncertainty about the alignment also grows where data for the second time series is missing.
This uncertainty is propagated through the shared function and results in higher predictive variances for the second time series.
Because of the factorization in the model, however, we can recover the uncertainties about the alignment and the shared latent function separately.
\Cref{fig:wind_samples} compares samples drawn from our model with samples drawn from a GP, a MO-GP and a DGP.
The GP reverts to its prior where data is missing, while the MO-GP does not handle the short-term dynamics and smooths the signal enough that the nonlinear alignment can be approximated as constant.
Samples drawn from a DGP model showcase the complexity of a DGP prior.
Unconstrained composite GPs are hard to reason about and make the model very flexible in terms of representable functions.
Since the model's evidence is very broad, the posterior is uninformed and inference is hard.
Additionally, as discussed in~\cref{app:sec:joint_models} and by~\textcite{hensman_nested_2014}, the nested variational compression bound tends to loosen with high uncertainties.
The AMO-GP shows richer structure:
Due to the constraints imposed by the model, more robust inference leads to a more informed model.
Samples show that it has learned that a maximum which is missing in the training data has to exist somewhere, but the uncertainty about the correct alignment due to the local turbulence means that different samples place the maximum at different locations in the $\mat{X}$-direction.
\section{Conclusion}
\label{sec:conclusion}
We have proposed the warped and aligned multi-output Gaussian process (AMO-GP), in which MO-GPs are embedded in a hierarchy to find shared structure in latent spaces.
We extended convolution processes \parencite{boyle_dependent_2004} with conditionally independent Gaussian processes on both the input and output sides, giving rise to a highly structured deep GP model.
This structure can be used both to regularize the model and to encode expert knowledge about specific parts of the system.
By applying nested variational compression \parencite{hensman_nested_2014} to inference in these models, we presented a variational lower bound which combines a Bayesian treatment of all parts of the model with scalability via stochastic optimization.
We compared the model with GPs, deep GPs and multi-output GPs on an artificial data set and showed how the richer model structure allows the AMO-GP to pick up on latent structure which the other approaches cannot model.
We then applied the AMO-GP to real-world data of two wind turbines and used the proposed hierarchy to model wind propagation in a wind farm and to recover information about the latent non-homogeneous wind field.
With uncertainties decomposed along the hierarchy, our approach handles ambiguities introduced by the stochasticity of the wind in a principled manner.
This indicates that the AMO-GP is a good approach for this kind of dynamical system, where multiple misaligned sensors measure the same latent effect.
\section{Acknowledgement}
\label{sec:acknowledgement}
The project this report is based on was supported with funds from the German Federal Ministry of Education and Research under project number 01IB15001.
The sole responsibility for the report's contents lies with the authors.
\newpage
\nocite{*}
\printbibliography
\newpage
\appendix
\section{Detailed Variational Approximation}
\label{app:sec:variational_approximation}
In this section, we repeat the derivation of the variational approximation in more detail.
Since exact inference in this model is intractable, we derive a variational approximation to the model's true marginal likelihood and posterior.
Analogously to $\mat{y}$, we denote the random vectors which contain the function values of the respective functions and outputs as $\rv{a}$ and $\rv{f}$.
The joint probability distribution of the data can then be written as
\begin{align}
\begin{split}
\label{app:eq:full_model}
\Prob{\rv{y}, \rv{f}, \rv{a} \given \mat{X}} &=
\Prob{\rv{f} \given \rv{a}} \prod_{d=1}^D \Prob{\rv{y_d} \given \rv{f_d}}\Prob{\rv{a_d} \given \rv{X}}, \\
\rv{a_d} \mid \mat{X} &\sim \Gaussian{\mat{X}, \mat{K_{a, d}} + \sigma^2_{a, d}\Eye}, \\
\rv{f} \mid \mat{a} &\sim \Gaussian{\mat{0}, \mat{K_f} + \sigma^2_f\Eye}, \\
\rv{y_d} \mid \mat{f_d} &\sim \Gaussian{\mat{f_d}, \mat{K_{g, d}} + \sigma^2_{y, d}\Eye}.
\end{split}
\end{align}
Here, we use $\mat{K}$ to refer to the Gram matrix corresponding to the kernel of the respective GP.
All but the CPs factorize over both the different levels of the model and the different outputs.
To approximate a single deep GP, \textcite{hensman_nested_2014} proposed nested variational compression, in which every GP in the hierarchy is handled independently.
While this forces a variational approximation of all intermediate outputs of the stacked processes, it has the appealing properties that it allows optimization via stochastic gradient descent \parencite{hensman_gaussian_2013} and that the variational approximation can be used independently of the original training data after training.
\subsection{Augmented Model}
\label{app:subsec:augmented_model}
Nested variational compression focuses on augmenting a full GP model by introducing sets of \emph{inducing variables} $\mat{u}$ with their \emph{inducing inputs} $\mat{Z}$.
Those variables are assumed to be latent observations of the same functions and are thus jointly Gaussian with the observed data.
The augmented joint can be written using its marginals \parencite{titsias_variational_2009} as
\begin{align}
\label{app:eq:augmented_joint}
\begin{split}
\Prob{\rv{\hat{a}}, \rv{u}} &= \Gaussian{\rv{\hat{a}} \given \mat{\mu_a}, \mat{\Sigma_a}}\Gaussian{\rv{u} \given \mat{Z}, \mat{K_{uu}}}\text{, with} \\
\mat{\mu_a} &= \mat{X} + \mat{K_{au}}\mat{K_{uu}}\inv(\rv{u} - \mat{Z}), \\
\mat{\Sigma_a} &= \mat{K_{aa}} - \mat{K_{au}}\mat{K_{uu}}\inv\mat{K_{ua}},
\end{split}
\end{align}
where, after dropping some indices and explicit conditioning on $\mat{X}$ and $\mat{Z}$ for clarity, $\rv{\hat{a}}$ denotes the function values $a_d(\mat{X})$ without noise and we write the Gram matrices as $\mat{K_{au}} = k_{a, d}(\mat{X}, \mat{Z})$.
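As a minimal illustration of these marginals (the kernel hyperparameters and jitter are placeholders, and $\rv{u}$ is treated as a fixed vector), the conditional mean and covariance can be computed as follows:
\begin{verbatim}
import numpy as np

def rbf(A, B, ell=0.3, var=1.0):
    # Squared exponential kernel, as used for the alignment priors a_d.
    return var * np.exp(-0.5 * (A[:, None] - B[None, :]) ** 2 / ell ** 2)

def conditional_a_given_u(X, Z, u, jitter=1e-8):
    # mu_a = X + K_au K_uu^{-1} (u - Z), Sigma_a = K_aa - K_au K_uu^{-1} K_ua,
    # i.e. the marginals of eq. (augmented_joint) with the identity prior mean.
    K_au = rbf(X, Z)
    K_uu = rbf(Z, Z) + jitter * np.eye(len(Z))
    A = np.linalg.solve(K_uu, K_au.T).T          # K_au K_uu^{-1}
    mu_a = X + A @ (u - Z)
    Sigma_a = rbf(X, X) - A @ K_au.T
    return mu_a, Sigma_a

X = np.linspace(0, 1, 50)
Z = np.linspace(0, 1, 10)
mu_a, Sigma_a = conditional_a_given_u(X, Z, u=Z + 0.1 * np.sin(2 * np.pi * Z))
\end{verbatim}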
While the original model in \cref{app:eq:full_model} can be recovered exactly by marginalizing the inducing variables, considering a specific variational approximation of the joint $\Prob{\rv{\hat{a}}, \rv{u}}$ gives rise to the desired lower bound in the next subsection.
A central assumption of this approximation \parencite{titsias_variational_2009} is that, given enough inducing variables at the correct locations, they are a sufficient statistic for $\rv{\hat{a}}$, implying conditional independence of the entries of $\rv{\hat{a}}$ given $\mat{X}$ and $\rv{u}$.
We introduce such inducing variables for every GP in the model, yielding the set $\Set{\rv{u_{a, d}}, \rv{u_{f, d}}, \rv{u_{g, d}}}_{d=1}^D$ of inducing variables.
Note that for the CP $f$, we introduce one set of inducing variables $\rv{u_{f, d}}$ per output $f_d$.
These inducing variables play a crucial role in sharing information between the different outputs.
\subsection{Variational Lower Bound}
\label{app:subsec:lower_bound}
To derive the desired variational lower bound for the log marginal likelihood of the complete model, multiple steps are necessary.
First, we consider the innermost GPs $a_d$ describing the alignment functions.
We derive the bound of the Scalable Variational GP (SVGP), first introduced by \textcite{hensman_gaussian_2013}, a lower bound for this model part which can be calculated efficiently and used for stochastic optimization.
In order to apply this bound recursively, we both show how to propagate the uncertainty through the subsequent layers $f_d$ and $g_d$ and how to avoid the inter-layer cross-dependencies using another variational approximation as presented by \textcite{hensman_nested_2014}.
While \citeauthor{hensman_nested_2014} considered standard deep GP models, we show how to apply their results to CPs.
\paragraph{The First Layer}
\label{app:subsubsec:first_layer}
Since the inputs $\mat{X}$ are fully known, we do not need to propagate uncertainty through the GPs $a_d$.
Instead, the uncertainty about the $\rv{a_d}$ comes from the uncertainty about the correct functions $a_d$ and is introduced by the processes themselves.
To derive a lower bound on the marginal log likelihood of $\rv{a_d}$, we assume a variational distribution $\Variat{\rv{u_{a, d}}} \sim \Gaussian{\mat{m_{a, d}}, \mat{S_{a, d}}}$ approximating $\Prob{\rv{u_{a, d}}}$ and additionally assume that $\Variat{\rv{\hat{a}_d}, \rv{u_{a, d}}} = \Prob{\rv{\hat{a}_d} \given \rv{u_{a, d}}}\Variat{\rv{u_{a, d}}}$.
After dropping the indices again, using Jensen's inequality we get
\begin{align}
\label{app:eq:svgp_log_likelihood}
\begin{split}
\log \Prob{\rv{a} \given \mat{X}} &= \log \int \Prob{\rv{a} \given \rv{u}} \Prob{\rv{u}} \diff \rv{u} \\
&= \log \int \Variat{\rv{u}} \frac{\Prob{\rv{a} \given \rv{u}} \Prob{\rv{u}}}{\Variat{\rv{u}}} \diff \rv{u} \\
&\geq \int \Variat{\rv{u}} \log \frac{\Prob{\rv{a} \given \rv{u}} \Prob{\rv{u}}}{\Variat{\rv{u}}} \diff \rv{u} \\
&= \int \log \Prob{\rv{a} \given \rv{u}} \Variat{\rv{u}} \diff \rv{u} - \int \Variat{\rv{u}} \log \frac{\Variat{\rv{u}}}{\Prob{\rv{u}}} \diff \rv{u} \\
&= \Moment{\E_{\Variat{\rv{u}}}}{\log \Prob{\rv{a} \given \rv{u}}} - \KL{\Variat{\rv{u}}}{\Prob{\rv{u}}},
\end{split}
\end{align}
where $\Moment{\E_{\Variat{\rv{u}}}}{{}\cdot{}}$ denotes the expected value with respect to the distribution $\Variat{\rv{u}}$ and $\KL{{}\cdot{}}{{}\cdot{}}$ denotes the KL divergence, which can be evaluated analytically.
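For reference, a minimal sketch of this analytic KL term, assuming $\Variat{\rv{u}} = \Gaussian{\mat{m}, \mat{S}}$ and the prior $\Prob{\rv{u}} = \Gaussian{\mat{Z}, \mat{K_{uu}}}$ from \cref{app:eq:augmented_joint}, is:
\begin{verbatim}
import numpy as np

def kl_gauss(m, S, Z, K_uu):
    # KL( N(m, S) || N(Z, K_uu) ); the prior mean is Z because the alignment
    # prior mean is the identity function.
    M = len(m)
    L = np.linalg.cholesky(K_uu)
    diff = Z - m
    maha = diff @ np.linalg.solve(K_uu, diff)
    trace = np.trace(np.linalg.solve(K_uu, S))
    logdet_K = 2 * np.sum(np.log(np.diag(L)))
    logdet_S = np.linalg.slogdet(S)[1]
    return 0.5 * (trace + maha - M + logdet_K - logdet_S)
\end{verbatim}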
To bound the required expectation, we use Jensen's inequality again together with \cref{app:eq:augmented_joint}, which gives
\begin{align}
\label{app:eq:svgp_log_marginal_likelihood}
\begin{split}
\log\Prob{\rv{a} \given \rv{u}}
&= \log\int \Prob{\rv{a} \given \rv{\hat{a}}} \Prob{\rv{\hat{a}} \given \rv{u}} \diff \rv{\hat{a}} \\
&= \log\int \Gaussian{\rv{a} \given \rv{\hat{a}}, \sigma_a^2 \Eye} \Gaussian{\rv{\hat{a}} \given \mat{\mu_a}, \mat{\Sigma_a}} \diff \rv{\hat{a}} \\
&\geq \int \log\Gaussian{\rv{a} \given \rv{\hat{a}}, \sigma_a^2 \Eye} \Gaussian{\rv{\hat{a}} \given \mat{\mu_a}, \mat{\Sigma_a}} \diff \rv{\hat{a}} \\
&= \log\Gaussian{\rv{a} \given \mat{\mu_a}, \sigma_a^2 \Eye} - \frac{1}{2\sigma_a^2}\Fun*{\tr}{\mat{\Sigma_a}}.
\end{split}
\end{align}
We apply this bound to the expectation to get
\begin{align}
\begin{split}
\Moment{\E_{\Variat{\rv{u}}}}{\log \Prob{\rv{a} \given \rv{u}}}
&\geq \Moment{\E_{\Variat*{\rv{u}}}}{\log\Gaussian{\rv{a} \given \mat{\mu_a}, \sigma_a^2 \Eye}}
- \frac{1}{2\sigma_a^2} \Fun*{\tr}{\mat{\Sigma_a}}\text{, with}
\end{split} \\
\begin{split}
\Moment{\E_{\Variat*{\rv{u}}}}{\log\Gaussian{\rv{a} \given \mat{\mu_a}, \sigma_a^2 \Eye}}
&= \log \Gaussian{\rv{a} \given \mat{K_{au}}\mat{K_{uu}}\inv\mat{m}, \sigma_a^2 \Eye} \\
&\quad {} - \frac{1}{2\sigma_a^2}\Fun*{\tr}{\mat{K_{au}}\mat{K_{uu}}\inv\mat{S}\mat{K_{uu}}\inv\mat{K_{ua}}}.
\end{split}
\end{align}
Resubstituting this result into \cref{app:eq:svgp_log_likelihood} yields the final bound
\begin{align}
\label{app:eq:svgp_bound}
\begin{split}
\log \Prob{\rv{a} \given \mat{X}}
&\geq \log \Gaussian{\rv{a} \given \mat{K_{au}}\mat{K_{uu}}\inv\mat{m}, \sigma_a^2 \Eye}
- \vphantom{\frac{1}{2\sigma_a^2}} \KL*{\Variat{\rv{u}}}{\Prob{\rv{u}}} \\
&\quad {} - \frac{1}{2\sigma_a^2} \Fun*{\tr}{\mat{\Sigma_a}}
- \frac{1}{2\sigma_a^2} \Fun*{\tr}{\mat{K_{au}}\mat{K_{uu}}\inv\mat{S}\mat{K_{uu}}\inv\mat{K_{ua}}}.
\end{split}
\end{align}
This bound, which depends on the hyperparameters of the kernel and likelihood $\left\{ \mat{\theta}, \sigma_a \right\}$ and the variational parameters $\left\{\mat{Z}, \mat{m}, \mat{S} \right\}$, can be calculated in $\Oh(NM^2)$ time.
It factorizes along the data points, which enables stochastic optimization.
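A minimal NumPy sketch of evaluating this bound, reusing \texttt{kl\_gauss} and \texttt{rbf} from the sketches above (jitter and hyperparameters are placeholders, and the identity prior-mean offsets of \cref{app:eq:augmented_joint} are omitted as in the equation), could look as follows:
\begin{verbatim}
import numpy as np

def svgp_bound(a, X, Z, m, S, sigma_a, kern):
    # Evaluates the four terms of eq. (svgp_bound): Gaussian fit, KL regularizer
    # and the two trace penalties. `kern` is a callable Gram-matrix function,
    # e.g. the rbf() sketch above.
    K_au = kern(X, Z)
    K_uu = kern(Z, Z) + 1e-8 * np.eye(len(Z))
    A = np.linalg.solve(K_uu, K_au.T).T                  # K_au K_uu^{-1}

    mean = A @ m
    N = len(a)
    fit = (-0.5 * N * np.log(2 * np.pi * sigma_a ** 2)
           - 0.5 * np.sum((a - mean) ** 2) / sigma_a ** 2)

    Sigma_a = kern(X, X) - A @ K_au.T
    trace_sigma = np.trace(Sigma_a) / (2 * sigma_a ** 2)
    trace_S = np.trace(A @ S @ A.T) / (2 * sigma_a ** 2)
    return fit - kl_gauss(m, S, Z, K_uu) - trace_sigma - trace_S
\end{verbatim}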
  577. In order to obtain a bound on the full model, we apply the same techniques to the other processes.
  578. Since the alignment processes $a_d$ are assumed to be independent, we have $\log \Prob{\rv{a_1}, \dots, \rv{a_D} \given \mat{X}} = \sum_{d=1}^D \log \Prob{\rv{a_d} \given \mat{X}}$, where every term can be approximated using the bound in \cref{app:eq:svgp_bound}.
  579. However, for all subsequent layers, the bound is not directly applicable, since the inputs are no longer known but instead are given by the outputs of the previous process.
  580. It is therefore necessary to propagate their uncertainty and also handle the interdependencies between the layers introduced by the latent function values $\rv{a}$, $\rv{f}$ and $\rv{g}$.
  581. \paragraph{The Second and Third Layer}
  582. \label{app:subsubsec:other_layers}
  583. Our next goal is to derive a bound on the outputs of the second layer
  584. \begin{align}
  585. \begin{split}
  586. \log \Prob{\rv{f} \given \mat{u_f}} &= \log \int \Prob{\rv{f}, \rv{a}, \mat{u_a} \given \mat{u_f}} \diff \rv{a} \diff \mat{u_a},
  587. \end{split}
  588. \end{align}
  589. that is, an expression in which the uncertainty about the different $\rv{a_d}$ and the cross-layer dependencies on the $\rv{u_{a, d}}$ are both marginalized.
  590. While on the first layer, the different $\rv{a_d}$ are conditionally independent, the second layer explicitly models the cross-covariances between the different outputs via convolutions over the shared latent processes $w_r$.
  591. We will therefore need to handle all of the different $\rv{f_d}$, together denoted as $\rv{f}$, at the same time.
  592. We start by considering the relevant terms from \cref{app:eq:full_model} and apply \cref{app:eq:svgp_log_marginal_likelihood} to marginalize $\rv{a}$ in
  593. \begin{align}
  594. \begin{split}
  595. \log\Prob{\rv{f} \given \rv{u_f}, \rv{u_a}}
  596. &= \log\int\Prob{\rv{f}, \rv{a} \given \rv{u_f}, \rv{u_a}}\diff\rv{a} \\
  597. &\geq \log\int \aProb{\rv{f} \given \rv{u_f}, \rv{a}} \aProb{\rv{a} \given \rv{u_a}}
  598. \cdot \Fun*{\exp}{-\frac{1}{2\sigma_a^2} \Fun*{\tr}{\mat{\Sigma_a}} - \frac{1}{2\sigma_f^2} \Fun*{\tr}{\mat{\Sigma_f}}} \diff \rv{a} \\
  599. &\geq \Moment{\E_{\aProb{\rv{a} \given \rv{u_a}}}}{\log \aProb{\rv{f} \given \rv{u_f}, \rv{a}}}
  600. - \Moment*{\E_{\aProb{\rv{a} \given \rv{u_a}}}}{\frac{1}{2\sigma_f^2} \Fun*{\tr}{\mat{\Sigma_f}}}
  601. - \frac{1}{2\sigma_a^2} \Fun*{\tr}{\mat{\Sigma_a}},
  602. \end{split}
  603. \end{align}
where we write $\aProb{\rv{a} \given \rv{u_a}} = \Gaussian*{\rv{a} \given \mat{\mu_a}, \sigma_a^2 \Eye}$ to incorporate the Gaussian noise in the latent space.
Due to our assumption that $\rv{u_a}$ is a sufficient statistic for $\rv{a}$, we choose
\begin{align}
\label{app:eq:variational_assumption}
\begin{split}
\Variat{\rv{a} \given \rv{u_a}} &= \aProb{\rv{a} \given \rv{u_a}}\text{, and}\\
\Variat{\rv{a}} &= \int \aProb{\rv{a} \given \rv{u_a}} \Variat{\rv{u_a}} \diff \rv{u_a},
\end{split}
\end{align}
and use another variational approximation to marginalize $\rv{u_a}$.
This yields
\begin{align}
\begin{split}
\label{app:eq:f_marginal_likelihood}
\log \Prob{\rv{f} \given \rv{u_f}}
&= \log \int \Prob{\rv{f}, \rv{u_a} \given \rv{u_f}} \diff \rv{u_a} \\
&= \log \int \Prob{\rv{f} \given \rv{u_f}, \rv{u_a}} \Prob{\rv{u_a}} \diff \rv{u_a} \\
&\geq \int \Variat{\rv{u_a}} \log\frac{\Prob{\rv{f} \given \rv{u_f}, \rv{u_a}} \Prob{\rv{u_a}}}{\Variat{\rv{u_a}}} \diff \rv{u_a} \\
&= \Moment*{\E_{\Variat{\rv{u_a}}}}{\log \Prob{\rv{f} \given \rv{u_a}, \rv{u_f}}}
- \KL{\Variat{\rv{u_a}}}{\Prob{\rv{u_a}}} \\
&\geq \Moment*{\E_{\Variat{\rv{u_a}}}}{\Moment*{\E_{\aProb{\rv{a} \given \rv{u_a}}}}{\log \aProb{\rv{f} \given \rv{u_f}, \rv{a}}}}
- \KL{\Variat{\rv{u_a}}}{\Prob{\rv{u_a}}} \\
&\quad {} - \frac{1}{2\sigma_a^2} \Fun*{\tr}{\mat{\Sigma_a}}
- \Moment*{\E_{\Variat{\rv{u_a}}}}{\Moment*{\E_{\aProb{\rv{a} \given \rv{u_a}}}}{\frac{1}{2\sigma_f^2} \Fun*{\tr}{\mat{\Sigma_f}}}} \\
&\geq \Moment*{\E_{\Variat{\rv{a}}}}{\log \aProb{\rv{f} \given \rv{u_f}, \rv{a}}}
- \KL{\Variat{\rv{u_a}}}{\Prob{\rv{u_a}}} \\
&\quad {} - \frac{1}{2\sigma_a^2} \Fun*{\tr}{\mat{\Sigma_a}}
- \frac{1}{2\sigma_f^2} \Moment*{\E_{\Variat{\rv{a}}}}{\Fun*{\tr}{\mat{\Sigma_f}}},
\end{split}
\end{align}
where we apply Fubini's theorem to exchange the order of integration in the expectations.
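Under the choice in \cref{app:eq:variational_assumption}, $\Variat{\rv{a}}$ is itself Gaussian: for each alignment process, writing $\Variat{\rv{u_a}} = \Gaussian*{\rv{u_a} \given \mat{m_a}, \mat{S_a}}$ for the variational distribution of its inducing variables, reusing the notation of the single-process bound above, and recalling that $\aProb{\rv{a} \given \rv{u_a}}$ is centered on the conditional mean $\mat{K_{au}}\mat{K_{uu}}\inv\rv{u_a}$, the marginalization yields
\begin{align}
\Variat{\rv{a}} = \Gaussian*{\rv{a} \given \mat{K_{au}}\mat{K_{uu}}\inv\mat{m_a}, \sigma_a^2 \Eye + \mat{K_{au}}\mat{K_{uu}}\inv\mat{S_a}\mat{K_{uu}}\inv\mat{K_{ua}}},
\end{align}
whose mean and covariance are the $\mat{\mu_a}$ and $\mat{\Sigma_a}$ used in the kernel expectations below.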
The expectations with respect to $\Variat{\rv{a}}$ involve expectations of kernel matrices, also called $\Psi$-statistics, in the same way as in \parencites{damianou_deep_2012} and are given by
\begin{align}
\begin{split}
\label{app:eq:psi_statistics}
\psi_f &= \Moment*{\E_{\Variat{\rv{a}}}}{\Fun*{\tr}{\mat{K_{ff}}}}, \\
\mat{\Psi_f} &= \Moment*{\E_{\Variat{\rv{a}}}}{\mat{K_{fu}}}, \\
\mat{\Phi_f} &= \Moment*{\E_{\Variat{\rv{a}}}}{\mat{K_{uf}}\mat{K_{fu}}}.
\end{split}
\end{align}
These $\Psi$-statistics can be computed analytically for multiple kernels, including the squared exponential kernel.
In \cref{app:subsec:kernel_expectations} we show closed-form solutions for these $\Psi$-statistics for the implicit kernel defined in the CP layer.
To obtain the final formulation of the desired bound for $\log \Prob{\rv{f} \given \rv{u_f}}$, we substitute \cref{app:eq:psi_statistics} into \cref{app:eq:f_marginal_likelihood} and get the analytically tractable bound
\begin{align}
\begin{split}
\log \Prob{\rv{f} \given \rv{u_f}} \geq
&\log\Gaussian*{\rv{f} \given \mat{\Psi_f}\mat{K_{u_fu_f}}\inv \mat{m_f}, \sigma_f^2\Eye}
- \KL{\Variat{\rv{u_a}}}{\Prob{\rv{u_a}}} - \frac{1}{2\sigma_a^2} \Fun*{\tr}{\mat{\Sigma_a}} \\
&- \frac{1}{2\sigma_f^2} \left( \psi_f - \Fun*{\tr}{\mat{\Phi_f}\mat{K_{u_fu_f}}\inv} \right) \\
&- \frac{1}{2\sigma_f^2} \tr\left(\left(\mat{\Phi_f} - \mat{\Psi_f}\tran\mat{\Psi_f}\right) \mat{K_{u_fu_f}}\inv \left(\mat{m_f}\mat{m_f}\tran + \mat{S_f}\right)\mat{K_{u_fu_f}}\inv\right).
\end{split}
\end{align}
The uncertainties in the first layer have been propagated variationally to the second layer.
Apart from the regularization terms, this bound has the form of a Gaussian log-density in $\rv{f}$.
Because of their cross-dependencies, the different outputs $\rv{f_d}$ are considered in a common bound and do not factorize along the output dimensions.
The third-layer warpings $\rv{g_d}$, however, are conditionally independent given $\rv{f}$ and can therefore be considered separately.
In order to derive a bound for $\log \Prob{\rv{y} \given \rv{u_g}}$, we apply the same steps as described above, resulting in the final bound, which factorizes along the data and therefore allows for stochastic optimization:
\begin{align}
\label{app:eq:full_bound}
\begin{split}
\MoveEqLeft\log \Prob{\rv{y}\given \mat{X}} \geq
\sum_{d=1}^D \log\Gaussian*{\rv{y_d} \given \mat{\Psi_{g, d}} \mat{K_{u_{g, d}u_{g, d}}}\inv \mat{m_{g, d}}, \sigma_{y, d}^2 \Eye}
- \sum_{d=1}^D \frac{1}{2\sigma_{a, d}^2} \Fun*{\tr}{\mat{\Sigma_{a, d}}} \\
&- \frac{1}{2\sigma_f^2} \left( \psi_{f} - \Fun*{\tr}{\mat{\Phi_f} \mat{K_{u_fu_f}}\inv} \right)
- \sum_{d=1}^D\frac{1}{2\sigma_{y, d}^2} \left( \psi_{g, d} - \Fun*{\tr}{\mat{\Phi_{g, d}} \mat{K_{u_{g, d}u_{g, d}}}\inv} \right) \\
&- \sum_{d=1}^D \KL{\Variat{\rv{u_{a, d}}}}{\Prob{\rv{u_{a, d}}}}
- \KL{\Variat{\rv{u_f}}}{\Prob{\rv{u_f}}}
- \sum_{d=1}^D \KL{\Variat{\rv{u_{y, d}}}}{\Prob{\rv{u_{y, d}}}} \\
&- \frac{1}{2\sigma_f^2} \tr\left(\left(\mat{\Phi_f} - \mat{\Psi_f}\tran\mat{\Psi_f}\right) \mat{K_{u_fu_f}}\inv \left(\mat{m_f}\mat{m_f}\tran + \mat{S_f}\right)\mat{K_{u_fu_f}}\inv\right) \\
&- \sum_{d=1}^D\frac{1}{2\sigma_{y, d}^2} \tr\left(\left(\mat{\Phi_{g, d}} - \mat{\Psi_{g, d}}\tran\mat{\Psi_{g, d}}\right)
\mat{K_{u_{g, d}u_{g, d}}}\inv \left(\mat{m_{g, d}}\mat{m_{g, d}}\tran + \mat{S_{g, d}}\right) \mat{K_{u_{g, d}u_{g, d}}}\inv\right).
\end{split}
\end{align}
\subsection{Convolution Kernel Expectations}
\label{app:subsec:kernel_expectations}
In \cref{sec:model} we assumed the latent processes $w_r$ to be white noise processes and the smoothing kernel functions $T_{d, r}$ to be squared exponential kernels, leading to an explicit closed-form formulation for the covariance between outputs shown in \cref{eq:dependent_kernel}.
In this section, we derive the $\Psi$-statistics for this generalized squared exponential kernel needed to evaluate \cref{app:eq:full_bound}.
The uncertainty about the first layer is captured by the variational distribution of the latent alignments $\rv{a}$, given by $\Variat{\rv{a}} = \Gaussian*{\rv{a} \given \mat{\mu_a}, \mat{\Sigma_a}}$ with $\rv{a} = \left( \rv{a_1}, \dots, \rv{a_D} \right)$.
Every aligned point in $\rv{a}$ corresponds to one output of $\rv{f}$ and ultimately to one of the $\rv{y_d}$.
Since the closed form of the multi-output kernel depends on the choice of outputs, we will use the notation $\Fun{\hat{f}}{\rv{a_n}}$ to denote $\Fun{f_d}{\rv{a_n}}$ such that $\rv{a_n}$ is associated with output $d$.
For notational simplicity, we only consider the case of a single latent process $w_r$.
Since the latent processes are independent, the results can easily be generalized to multiple processes.
Then, $\psi_f$ is given by
\begin{align}
\begin{split}
\psi_f &= \Moment*{\E_{\Variat{\rv{a}}}}{\Fun*{\tr}{\mat{K_{ff}}}} \\
&= \sum_{n=1}^N \Moment*{\E_{\Variat{\rv{a_n}}}}{\Moment*{\cov}{\Fun{\hat{f}}{\mat{a_n}}, \Fun{\hat{f}}{\mat{a_n}}}} \\
&= \sum_{n=1}^N \int \Moment*{\cov}{\Fun{\hat{f}}{\mat{a_n}}, \Fun{\hat{f}}{\mat{a_n}}} \Variat{\rv{a_n}} \diff \rv{a_n} \\
&= \sum_{n=1}^N \hat{\sigma}_{nn}^2.
\end{split}
\end{align}
Similar to the notation $\Fun{\hat{f}}{\cdot}$, we use the notation $\hat{\sigma}_{nn^\prime}$ to mean the variance term associated with the covariance function $\Moment{\cov}{\Fun{\hat{f}}{\mat{a_n}}, \Fun{\hat{f}}{\mat{a_{n^\prime}}}}$.
The expectation $\mat{\Psi_f} = \Moment*{\E_{\Variat{\rv{a}}}}{\mat{K_{fu}}}$ connecting the alignments and the pseudo inputs is given by
\begin{align}
\begin{split}
\mat{\Psi_f} &= \Moment*{\E_{\Variat{\rv{a}}}}{\mat{K_{fu}}}\text{, with} \\
\left( \mat{\Psi_f} \right)_{ni}
&= \int \Moment*{\cov}{\Fun{\hat{f}}{\mat{a_n}}, \Fun{\hat{f}}{\mat{Z_i}}} \Variat{\rv{a_n}} \diff \rv{a_n} \\
&= \hat{\sigma}_{ni}^2 \sqrt{\frac{(\mat{\Sigma_a})_{nn}\inv}{\hat{\ell}_{ni} + (\mat{\Sigma_a})_{nn}\inv}}
\cdot \exp\left(-\frac{1}{2} \frac{(\mat{\Sigma_a})_{nn}\inv\hat{\ell}_{ni}}{(\mat{\Sigma_a})_{nn}\inv + \hat{\ell}_{ni}} \left((\mat{\mu_a})_n - \mat{Z_i}\right)^2\right),
\end{split}
\end{align}
where $\hat{\ell}_{ni}$ is the combined length scale corresponding to the same kernel as $\hat{\sigma}_{ni}$.
Lastly, $\mat{\Phi_f} = \Moment*{\E_{\Variat{\rv{a}}}}{\mat{K_{uf}}\mat{K_{fu}}}$ connects alignments and pairs of pseudo inputs with the closed form
\begin{align}
\begin{split}
\mat{\Phi_f} &= \Moment*{\E_{\Variat{\rv{a}}}}{\mat{K_{uf}}\mat{K_{fu}}}\text{, with} \\
\left(\mat{\Phi_f} \right)_{ij} &= \sum_{n=1}^N \int \Moment*{\cov}{\Fun{\hat{f}}{\mat{a_n}}, \Fun{\hat{f}}{\mat{Z_i}}}
\cdot \Moment*{\cov}{\Fun{\hat{f}}{\mat{a_n}}, \Fun{\hat{f}}{\mat{Z_j}}} \Variat{\rv{a_n}} \diff \rv{a_n} \\
&= \sum_{n=1}^N \hat{\sigma}_{ni}^2 \hat{\sigma}_{nj}^2 \sqrt{\frac{(\mat{\Sigma_a})_{nn}\inv}{\hat{\ell}_{ni} + \hat{\ell}_{nj} + (\mat{\Sigma_a})_{nn}\inv}}
\cdot \exp\left( -\frac{1}{2} \frac{\hat{\ell}_{ni}\hat{\ell}_{nj}}{\hat{\ell}_{ni} + \hat{\ell}_{nj}} (\mat{Z_i} - \mat{Z_j})^2 \right. \\
&\quad {} - \frac{1}{2} \frac{(\mat{\Sigma_a})_{nn}\inv(\hat{\ell}_{ni} + \hat{\ell}_{nj})}{(\mat{\Sigma_a})_{nn}\inv + \hat{\ell}_{ni} + \hat{\ell}_{nj}}
\cdot \left.\left( (\mat{\mu_a})_n - \frac{\hat{\ell}_{ni} \mat{Z_i} + \hat{\ell}_{nj} \mat{Z_j}}{\hat{\ell}_{ni} + \hat{\ell}_{nj}} \right)^2 \right).
\end{split}
\end{align}
Note that the $\Psi$-statistics factorize along the data, and we only need to consider the diagonal entries of $\mat{\Sigma_a}$.
If all the data belong to the same output, the $\Psi$-statistics of the standard squared exponential kernel are recovered as a special case.
This special case is used to propagate the uncertainties through the output-specific warpings $\rv{g}$.
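As a consistency check of the expressions above, consider a single output with a standard squared exponential kernel, that is, a constant variance $\hat{\sigma}_{ni}^2 = \sigma^2$ and a constant precision $\hat{\ell}_{ni} = \ell^{-2}$, where $\sigma$ and $\ell$ denote that kernel's variance and length scale.
Abbreviating $s_n = (\mat{\Sigma_a})_{nn}$ and $\mu_n = (\mat{\mu_a})_n$, the expressions above reduce to the familiar forms
\begin{align}
\begin{split}
\psi_f &= N\sigma^2, \\
\left( \mat{\Psi_f} \right)_{ni} &= \sigma^2 \sqrt{\frac{\ell^2}{\ell^2 + s_n}} \exp\left( -\frac{\left( \mu_n - \mat{Z_i} \right)^2}{2 \left( \ell^2 + s_n \right)} \right),
\end{split}
\end{align}
which are the kernel expectations used for variational sparse GPs with uncertain inputs \parencite{damianou_deep_2012}.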
\subsection{Approximative Predictions}
\label{subsec:predictions}
Using the variational lower bound in \cref{eq:full_bound}, our model can be fitted to data, resulting in appropriate choices of the kernel hyperparameters and variational parameters.
Now assume we want to approximate function values $\mat{g_{d, \star}}$ for previously unseen points $\mat{X_{d, \star}}$ associated with output $d$, which are given by $\mat{g_{d, \star}} = g_d(f_d(a_d(\mat{X_{d, \star}})))$.
Because of the conditional independence assumptions in the model, other outputs $d^\prime \neq d$ only have to be considered in the shared layer $\rv{f}$.
In this shared layer, the belief about the different outputs and the shared information is captured by the variational distribution $\Variat{\rv{u_f}}$.
Given $\Variat{\rv{u_f}}$, the different outputs are conditionally independent of one another and thus, predictions for a single dimension in our model are equivalent to predictions in a single deep GP with nested variational compression as presented by \textcite{hensman_nested_2014}.
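As a sketch of the first step, the alignment values at the test inputs follow the standard SVGP predictive distribution \parencite{hensman_scalable_2014}; writing $\mat{m}$, $\mat{S}$ and $\mat{Z}$ for the variational parameters of the alignment $a_d$ and $\mat{K_{\star u}}$ for the kernel matrix between $\mat{X_{d, \star}}$ and $\mat{Z}$, it is given by
\begin{align}
\Variat{\rv{a_{d, \star}}} = \Gaussian*{\rv{a_{d, \star}} \given \mat{K_{\star u}}\mat{K_{uu}}\inv\mat{m}, \mat{K_{\star\star}} - \mat{K_{\star u}}\mat{K_{uu}}\inv\left(\mat{K_{uu}} - \mat{S}\right)\mat{K_{uu}}\inv\mat{K_{u\star}} + \sigma_a^2 \Eye}.
\end{align}
One way to handle the subsequent layers is to treat this Gaussian as an uncertain input of the next layer and to reuse the kernel expectations of \cref{app:subsec:kernel_expectations} to propagate approximate means and variances layer by layer.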
\section{Joint models for wind experiment}
\label{app:sec:joint_models}
In the following, we show plots with joint predictions for the models discussed in \cref{subsec:wind_example}.
Similar to \cref{subsec:artificial_example}, we trained a standard GP in \cref{app:fig:joint_gp}, a multi-output GP in \cref{app:fig:joint_mo_gp}, a deep GP in \cref{app:fig:joint_dgp} and our model in \cref{app:fig:joint_ours}.
All models were trained until convergence, and multiple runs result in very similar models.
For all models we used RBF kernels or dependent RBF kernels where applicable.
Each plot shows the data in gray together with two mean predictions and their uncertainty bands.
The first, violet uncertainty band is the result of the variational approximation of the respective model.
The second, green or blue posterior is obtained via sampling.
For both the GP and the MO-GP, we used the SVGP approximation \parencite{hensman_scalable_2014}, and since these models are shallow, the approximation is almost exact.
\Cref{app:fig:joint_dgp} showcases the difficulty of training a deep GP model and the shortcomings of the nested variational compression.
The violet variational approximation is used for training and approximates the data comparatively well.
As discussed above, the deep GP cannot share information between the outputs, so the test sets cannot be predicted.
However, as discussed in more detail in \parencite{hensman_nested_2014}, the approximation tends to underestimate uncertainties when propagating them through the different layers, and because of this, uncertainties obtained via sampling tend to vary considerably more.
Because the performance of these samples does not enter the training objective, the true posterior can be (and in this case is) considerably different from the variational approximation.
Our approach in principle has the same problem as the deep GP.
However, because the different parts of the hierarchy are strongly interpretable, uncertainties within the model are never placed arbitrarily, and as a consequence the variational and true posteriors look much more similar.
They tend to disagree in places where there is high uncertainty about the alignment.
\begin{figure}[p]
\centering
\includestandalonewithpath{figures/app_wind_joint_gp}
\caption{
\label{app:fig:joint_gp}
Joint predictions of the standard GP for the wind experiment.
}
\end{figure}
\begin{figure}[p]
\centering
\includestandalonewithpath{figures/app_wind_joint_mo_gp}
\caption{
\label{app:fig:joint_mo_gp}
Joint predictions of the multi-output GP (MO-GP) for the wind experiment.
}
\end{figure}
\begin{figure}[p]
\centering
\includestandalonewithpath{figures/app_wind_joint_dgp}
\caption{
\label{app:fig:joint_dgp}
Joint predictions of the deep GP (DGP) for the wind experiment.
}
\end{figure}
\begin{figure}[p]
\centering
\includestandalonewithpath{figures/app_wind_joint_ours}
\caption{
\label{app:fig:joint_ours}
Joint predictions of the AMO-GP (ours) for the wind experiment.
}
\end{figure}
\end{document}