
 @article{damianou_deep_2012,
 title = {Deep {Gaussian} {Processes}},
 url = {http://arxiv.org/abs/1211.0358},
 abstract = {In this paper we introduce deep Gaussian process (GP) models. Deep GPs are a deep belief network based on Gaussian process mappings. The data is modeled as the output of a multivariate GP. The inputs to that Gaussian process are then governed by another GP. A single layer model is equivalent to a standard GP or the GP latent variable model (GPLVM). We perform inference in the model by approximate variational marginalization. This results in a strict lower bound on the marginal likelihood of the model which we use for model selection (number of layers and nodes per layer). Deep belief networks are typically applied to relatively large data sets using stochastic gradient descent for optimization. Our fully Bayesian treatment allows for the application of deep models even when data is scarce. Model selection by our variational bound shows that a five layer hierarchy is justified even when modelling a digit data set containing only 150 examples.},
 urldate = {2016-09-05},
 journal = {arXiv:1211.0358 [cs, math, stat]},
 author = {Damianou, Andreas C. and Lawrence, Neil D.},
 month = nov,
 year = {2012},
 note = {arXiv: 1211.0358},
 keywords = {Computer Science - Learning, Statistics - Machine Learning, 60G15, 58E30, G.1.2, G.3, I.2.6, Mathematics - Probability},
 }

 @inproceedings{zhou_generalized_2012,
 title = {Generalized time warping for multimodal alignment of human motion},
 booktitle = {Computer {Vision} and {Pattern} {Recognition} ({CVPR}), 2012 {IEEE} {Conference} on},
 publisher = {IEEE},
 author = {Zhou, Feng and De la Torre, Fernando},
 year = {2012},
 pages = {1282--1289},
 }

 @article{matthews_gpflow:_2017,
 title = {{GPflow}: {A} {Gaussian} process library using {TensorFlow}},
 volume = {18},
 shorttitle = {{GPflow}},
 url = {http://www.jmlr.org/papers/volume18/16-537/16-537.pdf},
 number = {40},
 urldate = {2017-09-27},
 journal = {Journal of Machine Learning Research},
 author = {Matthews, Alexander G. de G. and van der Wilk, Mark and Nickson, Tom and Fujii, Keisuke and Boukouvalas, Alexis and León-Villagrá, Pablo and Ghahramani, Zoubin and Hensman, James},
 year = {2017},
 pages = {1--6},
 }

 @article{snoek_input_2014,
 title = {Input {Warping} for {Bayesian} {Optimization} of {Non}-stationary {Functions}},
 url = {http://arxiv.org/abs/1402.0929},
 abstract = {Bayesian optimization has proven to be a highly effective methodology for the global optimization of unknown, expensive and multimodal functions. The ability to accurately model distributions over functions is critical to the effectiveness of Bayesian optimization. Although Gaussian processes provide a flexible prior over functions which can be queried efficiently, there are various classes of functions that remain difficult to model. One of the most frequently occurring of these is the class of non-stationary functions. The optimization of the hyperparameters of machine learning algorithms is a problem domain in which parameters are often manually transformed a priori, for example by optimizing in "log-space," to mitigate the effects of spatially-varying length scale. We develop a methodology for automatically learning a wide family of bijective transformations or warpings of the input space using the Beta cumulative distribution function. We further extend the warping framework to multi-task Bayesian optimization so that multiple tasks can be warped into a jointly stationary space. On a set of challenging benchmark optimization tasks, we observe that the inclusion of warping greatly improves on the state-of-the-art, producing better results faster and more reliably.},
 urldate = {2017-07-31},
 journal = {arXiv:1402.0929 [cs, stat]},
 author = {Snoek, Jasper and Swersky, Kevin and Zemel, Richard S. and Adams, Ryan P.},
 month = feb,
 year = {2014},
 note = {arXiv: 1402.0929},
 keywords = {Computer Science - Learning, Statistics - Machine Learning},
 }

 @article{hensman_nested_2014,
 title = {Nested {Variational} {Compression} in {Deep} {Gaussian} {Processes}},
 url = {http://arxiv.org/abs/1412.1370},
 abstract = {Deep Gaussian processes provide a flexible approach to probabilistic modelling of data using either supervised or unsupervised learning. For tractable inference, approximations to the marginal likelihood of the model must be made. The original approach to approximate inference in these models used variational compression to allow for approximate variational marginalization of the hidden variables, leading to a lower bound on the marginal likelihood of the model [Damianou and Lawrence, 2013]. In this paper we extend this idea with a nested variational compression. The resulting lower bound on the likelihood can be easily parallelized or adapted for stochastic variational inference.},
 urldate = {2017-07-19},
 journal = {arXiv:1412.1370 [stat]},
 author = {Hensman, James and Lawrence, Neil D.},
 month = dec,
 year = {2014},
 note = {arXiv: 1412.1370},
 keywords = {Statistics - Machine Learning},
 }

 @inproceedings{alvarez_sparse_2009,
 title = {Sparse convolved {Gaussian} processes for multi-output regression},
 url = {http://papers.nips.cc/paper/3553-sparse-convolved-gaussian-processes-for-multi-output-regression},
 urldate = {2017-07-14},
 booktitle = {Advances in neural information processing systems},
 author = {Alvarez, Mauricio and Lawrence, Neil D.},
 year = {2009},
 pages = {57--64},
 }

 @article{salimbeni_doubly_2017,
 title = {Doubly {Stochastic} {Variational} {Inference} for {Deep} {Gaussian} {Processes}},
 url = {http://arxiv.org/abs/1705.08933},
 abstract = {Gaussian processes (GPs) are a good choice for function approximation as they are flexible, robust to overfitting, and provide wellcalibrated predictive uncertainty. Deep Gaussian processes (DGPs) are multilayer generalisations of GPs, but inference in these models has proved challenging. Existing approaches to inference in DGP models assume approximate posteriors that force independence between the layers, and do not work well in practice. We present a doubly stochastic variational inference algorithm, which does not force independence between layers. With our method of inference we demonstrate that a DGP model can be used effectively on data ranging in size from hundreds to a billion points. We provide strong empirical evidence that our inference scheme for DGPs works well in practice in both classification and regression.},
 urldate = {2017-06-02},
 journal = {arXiv:1705.08933 [stat]},
 author = {Salimbeni, Hugh and Deisenroth, Marc},
 month = may,
 year = {2017},
 note = {arXiv: 1705.08933},
 keywords = {Statistics - Machine Learning},
 }

 @inproceedings{titsias_variational_2009,
 title = {Variational {Learning} of {Inducing} {Variables} in {Sparse} {Gaussian} {Processes}.},
 volume = {5},
 url = {http://www.jmlr.org/proceedings/papers/v5/titsias09a/titsias09a.pdf},
 urldate = {2017-04-06},
 booktitle = {{AISTATS}},
 author = {Titsias, Michalis K.},
 year = {2009},
 pages = {567--574},
 }

 @article{hensman_gaussian_2013,
 title = {Gaussian {Processes} for {Big} {Data}},
 url = {http://arxiv.org/abs/1309.6835},
 abstract = {We introduce stochastic variational inference for Gaussian process models. This enables the application of Gaussian process (GP) models to data sets containing millions of data points. We show how GPs can be variationally decomposed to depend on a set of globally relevant inducing variables which factorize the model in the necessary manner to perform variational inference. Our approach is readily extended to models with non-Gaussian likelihoods and latent variable models based around Gaussian processes. We demonstrate the approach on a simple toy problem and two real world data sets.},
 urldate = {2016-07-06},
 journal = {arXiv:1309.6835 [cs, stat]},
 author = {Hensman, James and Fusi, Nicolo and Lawrence, Neil D.},
 month = sep,
 year = {2013},
 keywords = {Computer Science - Learning, Statistics - Machine Learning},
 }

 @inproceedings{alvarez_efficient_2010,
 title = {Efficient {Multioutput} {Gaussian} {Processes} through {Variational} {Inducing} {Kernels}.},
 volume = {9},
 url = {http://www.jmlr.org/proceedings/papers/v9/alvarez10a/alvarez10a.pdf},
 urldate = {2017-03-02},
 booktitle = {{AISTATS}},
 author = {Alvarez, Mauricio A. and Luengo, David and Titsias, Michalis K. and Lawrence, Neil D.},
 year = {2010},
 pages = {25--32},
 }

 @article{hensman_scalable_2014,
 title = {Scalable {Variational} {Gaussian} {Process} {Classification}},
 url = {http://arxiv.org/abs/1411.2005},
 abstract = {Gaussian process classification is a popular method with a number of appealing properties. We show how to scale the model within a variational inducing point framework, outperforming the state of the art on benchmark datasets. Importantly, the variational formulation can be exploited to allow classification in problems with millions of data points, as we demonstrate in experiments.},
 urldate = {2017-02-13},
 journal = {arXiv:1411.2005 [stat]},
 author = {Hensman, James and Matthews, Alex and Ghahramani, Zoubin},
 month = nov,
 year = {2014},
 note = {arXiv: 1411.2005},
 keywords = {Statistics - Machine Learning},
 }

 @techreport{boyle_multiple_2005,
 title = {Multiple output {Gaussian} process regression},
 abstract = {Gaussian processes are usually parameterised in terms of their covariance functions. However, this makes it difficult to deal with multiple outputs, because ensuring that the covariance matrix is positive definite is problematic. An alternative formulation is to treat Gaussian processes as white noise sources convolved with smoothing kernels, and to parameterise the kernel instead. Using this, we extend Gaussian processes to handle multiple, coupled outputs.},
 author = {Boyle, Phillip and Frean, Marcus},
 year = {2005},
 }

 @article{alvarez_kernels_2011,
 title = {Kernels for {Vector}-{Valued} {Functions}: a {Review}},
 shorttitle = {Kernels for {Vector}-{Valued} {Functions}},
 url = {http://arxiv.org/abs/1106.6251},
 abstract = {Kernel methods are among the most popular techniques in machine learning. From a frequentist/discriminative perspective they play a central role in regularization theory as they provide a natural choice for the hypotheses space and the regularization functional through the notion of reproducing kernel Hilbert spaces. From a Bayesian/generative perspective they are the key in the context of Gaussian processes, where the kernel function is also known as the covariance function. Traditionally, kernel methods have been used in supervised learning problems with scalar outputs and indeed there has been a considerable amount of work devoted to designing and learning kernels. More recently there has been an increasing interest in methods that deal with multiple outputs, motivated partly by frameworks like multi-task learning. In this paper, we review different methods to design or learn valid kernel functions for multiple outputs, paying particular attention to the connection between probabilistic and functional methods.},
 urldate = {2017-02-06},
 journal = {arXiv:1106.6251 [cs, math, stat]},
 author = {Alvarez, Mauricio A. and Rosasco, Lorenzo and Lawrence, Neil D.},
 month = jun,
 year = {2011},
 note = {arXiv: 1106.6251},
 keywords = {Statistics - Machine Learning, Computer Science - Artificial Intelligence, Mathematics - Statistics Theory},
 }

 @inproceedings{boyle_dependent_2004,
 title = {Dependent {Gaussian} {Processes}.},
 volume = {17},
 url = {https://papers.nips.cc/paper/2561-dependent-gaussian-processes.pdf},
 urldate = {2017-01-27},
 booktitle = {{NIPS}},
 author = {Boyle, Phillip and Frean, Marcus R.},
 year = {2004},
 pages = {217--224},
 }

 @inproceedings{lazarogredilla_bayesian_2012,
 title = {Bayesian warped {Gaussian} processes},
 url = {http://papers.nips.cc/paper/4494-bayesian-warped-gaussian-processes},
 urldate = {2016-12-06},
 booktitle = {Advances in {Neural} {Information} {Processing} {Systems}},
 author = {LázaroGredilla, Miguel},
 year = {2012},
 pages = {1619--1627},
 }

 @inproceedings{titsias_bayesian_2010,
 title = {Bayesian {Gaussian} process latent variable model},
 url = {http://machinelearning.wustl.edu/mlpapers/paper_files/AISTATS2010_TitsiasL10.pdf},
 urldate = {2016-02-01},
 booktitle = {International {Conference} on {Artificial} {Intelligence} and {Statistics}},
 author = {Titsias, Michalis K. and Lawrence, Neil D.},
 year = {2010},
 pages = {844--851},
 }

 @book{coburn_geostatistics_2000,
 title = {Geostatistics for natural resources evaluation},
 publisher = {Taylor \& Francis Group},
 author = {Coburn, Timothy C.},
 year = {2000},
 }

 @book{journel_mining_1978,
 title = {Mining geostatistics},
 publisher = {Academic press},
 author = {Journel, Andre G. and Huijbregts, Ch J.},
 year = {1978},
 }

 @article{soleimanzadeh_controller_2011,
 title = {Controller design for a wind farm, considering both power and load aspects},
 volume = {21},
 number = {4},
 journal = {Mechatronics},
 author = {Soleimanzadeh, Maryam and Wisniewski, Rafael},
 year = {2011},
 pages = {720--727},
 }

 @inproceedings{bitar_coordinated_2013,
 title = {Coordinated control of a wind turbine array for power maximization},
 booktitle = {American {Control} {Conference} ({ACC}), 2013},
 publisher = {IEEE},
 author = {Bitar, Eilyan and Seiler, Pete},
 year = {2013},
 pages = {2898--2904},
 }

 @inproceedings{schepers_improved_2007,
 title = {Improved modelling of wake aerodynamics and assessment of new farm control strategies},
 volume = {75},
 booktitle = {Journal of {Physics}: {Conference} {Series}},
 publisher = {IOP Publishing},
 author = {Schepers, J. G. and Van der Pijl, S. P.},
 year = {2007},
 pages = {012039},
 }
