---------- Forwarded message ----------
From: Aaditya Ramdas <aramdas(a)berkeley.edu>
Date: 2016-08-21 21:23 GMT-04:00
Subject: CFP: AISTATS 2017
To: Csaba Szepesvari <szepesva(a)cs.ualberta.ca>, Mark Schmidt <schmidtm(a)cs.ubc.ca>, yoshua.bengio(a)umontreal.ca
Hi Csaba, Mark, Yoshua,
Hope things are going well in Canada.
I'm the publicity chair for AISTATS next year, with Jerry Zhu and Aarti
Singh being the program chairs. We've tried to be a bit more progressive in
bringing together the Stat and CS/ML/AI communities this year.
Could you please help disseminate the call for papers amongst Stats, EE/CS,
math, b-schools, etc. in your schools and other relevant departments that do
ML/Stats/AI/optimization?
Thanks!
AISTATS <http://aistats.org/> is an interdisciplinary gathering of
researchers at the intersection of artificial intelligence, machine
learning, statistics, and related areas. The 20th International Conference
on Artificial Intelligence and Statistics (AISTATS <http://www.aistats.org/>)
will take place in Fort Lauderdale, Florida, USA from *April 20-22, 2017*.
The deadline for paper submission is *Oct 13, 2016*, with final decisions
made on Jan 24, 2017.
New this year:
1. *Fast-track for Electronic Journal of Statistics*: Authors of a small
number of accepted papers will be invited to submit an extended version for
fast-track publication in a special issue of the Electronic Journal of
Statistics (EJS) after the AISTATS decisions are out. Details on how to
prepare such an extended journal submission will be announced after the
AISTATS decisions.
2. *Review-sharing with NIPS*: Authors of papers previously submitted to
NIPS 2016 are required to declare the previous NIPS paper ID, and may
optionally supply a one-page revision letter (similar to a revision letter
to journal editors; anonymized) in the supplementary materials. AISTATS
reviewers will have
access to the previous anonymous NIPS reviews. Other than this, all
submissions will be treated equally.
*Paper Submission:* Electronic submission of PDF papers is required. The
main part of the paper (a single PDF of up to 5Mb) may be up to 8
double-column pages in length, including tables and figures; only the
references may exceed the 8-page limit. The main part should contain enough
information for reviewers to judge the correctness and merit of the paper.
Authors may optionally submit supplementary material (up to 10Mb) as a
single zip file containing additional proofs, audio, images, video, data,
or source code. Whether to review any supplementary material is at the
discretion of the reviewers.
*Dual Submissions Policy:* Submissions that are identical (or substantially
similar) to versions that have been previously published, or accepted for
publication, or that have been submitted in parallel to other conferences
or journals are not appropriate for AISTATS and violate our dual submission
policy. Exceptions to this rule are the following: (a) It is acceptable to
submit work that has been made available as a technical report or similar
(e.g., on arXiv) without citing it (to preserve anonymity). (b) Submission
is permitted for papers presented or to be presented at conferences or
workshops without proceedings (e.g., ICML or NIPS workshops), or with only
abstracts published. The dual-submission rules apply during the whole
AISTATS review period until the authors have been notified about the
decision on their paper.
*Double-blind review:* Papers will be selected via a rigorous double-blind
peer-review process (the reviewers will not know the identities of the
authors, and vice versa). It will be up to the authors to ensure the proper
anonymization of their paper and supplemental materials. Violation of the
above rules may lead to rejection without review. There will be one round
of author rebuttal, during which the initial reviews will be available to
the authors.
*Evaluation Criteria:* Submissions will be judged on the basis of technical
quality, novelty, potential impact, and clarity. Typical papers often (but
not always) consist of a mix of algorithmic, theoretical and experimental
results, in varying proportions. Results will be judged on the degree to
which they have been objectively established and/or their potential for
scientific and technological impact.
*Publication and presentation:* All accepted papers will be presented at
the conference as posters, with a few selected for additional oral
presentation. All accepted papers will be treated equally when published in
the AISTATS Conference Proceedings (Journal of Machine Learning Research
Workshop and Conference Proceedings series). At least one author of each
accepted paper must register and attend AISTATS. A small number of accepted
papers will be invited to submit an extended version for fast-track
publication in a special issue of the Electronic Journal of Statistics
(EJS) journal after the AISTATS decisions are out.
*Topics:* Since its inception in 1985, the primary goal of AISTATS has been
to promote the exchange of ideas from artificial intelligence, machine
learning, and statistics. We encourage the submission of all papers in
keeping with this objective. Solicited topics include, but are not limited to:
- Supervised, unsupervised and semi-supervised learning, kernel and
Bayesian methods
- Stochastic processes, hypothesis testing, causality, time-series,
nonparametrics, asymptotic theory
- Graphical models and inference, manifold learning and embedding,
network analysis, statistical analysis of deep learning
- Sparse models and compressed sensing, information theory
- Reinforcement learning, planning, control, multi-agent systems, logic
and probability, relational learning
- Learning theory, game theoretic learning, online learning, bandits,
learning for mechanism design
- Convex and non-convex optimization, discrete optimization, Bayesian
optimization
- Algorithms and architectures for high-performance computing
- Applications in biology, cognition, computer vision, natural language,
neuroscience, robotics, etc.
- Topological data analysis, selective inference, experimental design,
interactive learning, optimal teaching, and other emerging topics
---
Aaditya Ramdas
www.cs.berkeley.edu/~aramdas
---------- Forwarded message ----------
From: Yarin Gal <yg279(a)cam.ac.uk>
Date: 2016-08-19 4:54 GMT-04:00
Subject: [NIPS 2016] CFP "Bayesian Deep Learning" Workshop
To: Yarin Gal <yg279(a)cam.ac.uk>
Dear all,
Could I please ask you to circulate the CFP below to anyone you think it
might be relevant to?
Thanks so much,
Yarin
*************************************************************
Bayesian Deep Learning workshop, NIPS 2016
Date: December 10, 2016
Location: Centre Convencions Internacional Barcelona, Barcelona, Spain
http://bayesiandeeplearning.org/
*************************************************************
*1. Call for papers*
We invite researchers to submit work in any of the following areas:
* Probabilistic deep models for classification and regression (such as
extensions and applications of Bayesian neural networks),
* Generative deep models (such as variational autoencoders),
* Incorporating explicit prior knowledge in deep learning (such as
posterior regularisation with logic rules),
* Approximate inference for Bayesian deep learning (such as variational
Bayes / expectation propagation / etc. in Bayesian neural networks),
* Scalable MCMC inference in Bayesian deep models,
* Deep recognition models for variational inference (amortised inference),
* Model uncertainty in deep learning,
* Bayesian deep reinforcement learning,
* Deep learning with small data,
* Deep learning in Bayesian modelling,
* Probabilistic semi-supervised learning techniques,
* Active learning and Bayesian optimisation for experimental design,
* Information theory in deep learning,
* Applying non-parametric methods, one-shot learning, and Bayesian deep
learning in general.
A submission should take the form of an extended abstract (2 pages long) in
PDF format using the NIPS style. Author names do not need to be anonymised
and references may extend as far as needed beyond the 2-page upper limit.
If the research has previously appeared in a journal, workshop, or
conference (including the NIPS 2016 conference), the workshop submission
should extend that previous work.
Submissions will be accepted as contributed talks or poster presentations.
Extended abstracts should be submitted by 1 November 2016; submission
details will be updated online towards the submission deadline. Final
versions will be posted on the workshop website (and are archival but do
not constitute a proceedings).
*Key Dates:*
Extended abstract submission: *1 November 2016*
Acceptance notification: 16 November 2016
Travel award notification: 16 November 2016
Final paper submission: 5 December 2016
The workshop is endorsed by the International Society for Bayesian Analysis
(ISBA), which will also provide a Travel Award to a graduate student or a
junior researcher.
*2. Description*
While deep learning has been revolutionary for machine learning, most
modern deep learning models cannot represent their uncertainty, nor can
they take advantage of the well-studied tools of probability theory. This
has started
to change following recent developments of tools and techniques combining
Bayesian approaches with deep learning. The intersection of the two fields
has received great interest from the community over the past few years,
with the introduction of new deep learning models that take advantage of
Bayesian techniques, as well as Bayesian models that incorporate deep
learning elements.
In fact, the use of Bayesian techniques in deep learning can be traced back
to the 1990s, in seminal works by Radford Neal, David MacKay, and Dayan et
al. These gave us tools to reason about deep models' confidence, and
achieved state-of-the-art performance on many tasks. However, these earlier
tools did not adapt when new needs arose (such as scalability to big data),
and were consequently forgotten. Such ideas are now being revisited in light of
new advances in the field, yielding many exciting new results.
This workshop will study the advantages and disadvantages of such ideas,
and will be a platform to host the recent flourishing of ideas using
Bayesian approaches in deep learning and deep learning tools in Bayesian
modelling. The program will include a mix of invited talks, contributed
talks, and contributed posters. In addition, the historical context of key
developments in the field will be explained in an invited talk, followed by
a tribute talk to David MacKay's work in the field. Future directions for
the field will be debated in a panel discussion.
*3. Organisers*
Yarin Gal (University of Cambridge)
Christos Louizos (University of Amsterdam)
Zoubin Ghahramani (University of Cambridge)
Kevin Murphy (Google)
Max Welling (University of Amsterdam)