
Do not show DGP 4 and 5 results

Branch: icml · Markus Kaiser, 3 years ago · commit 480395be94
1 changed file, 6 changed lines: dynamic_dirichlet_deep_gp.tex

@@ -539,8 +539,8 @@ At $x = -10$ the inferred modes and assignment processes start reverting to thei
 % 10 & U-DAGP & 0.472 \pm 0.002 & 0.425 \pm 0.002 & {\textemdash} & {\textemdash} & {\textemdash} & {\textemdash} \\
 10 & DAGP 2 & 0.548 \pm 0.012 & \bfseries 0.519 \pm 0.008 & 0.851 \pm 0.003 & 0.859 \pm 0.001 & 0.673 \pm 0.013 & 0.599 \pm 0.011 \\
 10 & DAGP 3 & 0.527 \pm 0.004 & 0.491 \pm 0.003 & 0.858 \pm 0.002 & 0.852 \pm 0.002 & 0.624 \pm 0.011 & 0.545 \pm 0.012 \\
-10 & DAGP 4 & 0.517 \pm 0.006 & 0.485 \pm 0.003 & 0.858 \pm 0.001 & 0.852 \pm 0.002 & 0.602 \pm 0.011 & 0.546 \pm 0.010 \\
-10 & DAGP 5 & 0.535 \pm 0.004 & 0.506 \pm 0.005 & 0.851 \pm 0.003 & 0.851 \pm 0.003 & 0.662 \pm 0.009 & 0.581 \pm 0.012 \\
+% 10 & DAGP 4 & 0.517 \pm 0.006 & 0.485 \pm 0.003 & 0.858 \pm 0.001 & 0.852 \pm 0.002 & 0.602 \pm 0.011 & 0.546 \pm 0.010 \\
+% 10 & DAGP 5 & 0.535 \pm 0.004 & 0.506 \pm 0.005 & 0.851 \pm 0.003 & 0.851 \pm 0.003 & 0.662 \pm 0.009 & 0.581 \pm 0.012 \\
 \addlinespace
 10 & BNN+LV & 0.519 \pm 0.005 & \bfseries 0.524 \pm 0.005 & {\textemdash} & {\textemdash} & {\textemdash} & {\textemdash} \\
 10 & GPR Mixed & 0.452 \pm 0.003 & 0.421 \pm 0.003 & {\textemdash} & {\textemdash} & {\textemdash} & {\textemdash} \\
@@ -567,7 +567,7 @@ We consider three test sets, one sampled from the default system, one sampled fr
 They are generated by sampling trajectories with an aggregated size of 5000 points from each system for the first two sets and their concatenation for the mixed set.
 For this data set, we use squared exponential kernels for both the $f^{\pix{k}}$ and $\alpha^{\pix{k}}$ and 100 inducing points in every GP.
-We evaluate the performance of deep GPs with up to 5 layers and squared exponential kernels as models for the different functions.
+We evaluate the performance of deep GPs with up to 3 layers and squared exponential kernels as models for the different functions.
 As described in~\parencite{salimbeni_doubly_2017}, we use identity mean functions for all but the last layers and initialize the variational distributions with low covariances.
 We compare our models with three-layer relu-activated Bayesian neural networks with added latent variables (BNN+LV) as introduced by~\textcite{depeweg_learning_2016}.
 These latent variables can be used to effectively model multimodalities and stochasticity in dynamical systems for model-based reinforcement learning~\parencite{depeweg_decomposition_2018}.

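The changed paragraph describes the deep GP setup: squared exponential kernels and identity mean functions for all but the last layer, following Salimbeni & Deisenroth (2017). A minimal NumPy sketch of what that prior looks like, assuming unit kernel variance and lengthscale (all names here are illustrative, not from the paper's code):

```python
import numpy as np

def squared_exponential(x, y, variance=1.0, lengthscale=1.0):
    """Squared exponential kernel matrix between 1-D point sets x and y."""
    sq_dist = (x[:, None] - y[None, :]) ** 2
    return variance * np.exp(-0.5 * sq_dist / lengthscale**2)

def sample_deep_gp_prior(x, num_layers=3, jitter=1e-6, rng=None):
    """Draw one joint sample from a deep GP prior with identity mean functions.

    Each layer draws f_l ~ GP(mean=identity, SE kernel) evaluated at the
    previous layer's output, so every layer is a perturbation of its input.
    """
    rng = np.random.default_rng(rng)
    h = x
    for _ in range(num_layers):
        # jitter keeps the Cholesky factorisation numerically stable
        K = squared_exponential(h, h) + jitter * np.eye(len(h))
        h = h + np.linalg.cholesky(K) @ rng.standard_normal(len(h))
    return h

x = np.linspace(-2.0, 2.0, 50)
sample = sample_deep_gp_prior(x, num_layers=3, rng=0)
```

The identity mean keeps intermediate layers close to their inputs, which (together with the low-covariance variational initialisation mentioned in the text, not shown here) avoids the rank-collapse pathology of zero-mean deep GP priors.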