
Markov chain Monte Carlo (MCMC) techniques revolutionized statistical practice in the 1990s by providing an essential toolkit for making the rigor and flexibility of Bayesian analysis computationally practical. At the same time, the increasing prevalence of massive datasets and the expansion of the field of data mining have created the need for statistically sound methods that scale to these large problems. Except for the most trivial examples, current MCMC methods require a complete scan of the dataset for each iteration, eliminating their candidacy as feasible data mining techniques. In this article we present a method for making Bayesian analysis of massive datasets computationally feasible. The algorithm simulates from a posterior distribution that conditions on a smaller, more manageable portion of the dataset. The remainder of the dataset may be incorporated by reweighting the initial draws using importance sampling. Computation of the importance weights requires a single scan of the remaining observations.
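As a rough illustration of this reweighting idea, here is a minimal sketch (not the authors' implementation): posterior draws obtained by conditioning on a subset are importance-reweighted by the likelihood of the held-out observations. The function `log_lik` and the form of the draws are placeholders for whatever model and sampler are in use.

```python
import numpy as np

def reweight_draws(draws, heldout_data, log_lik):
    """Importance-reweight posterior draws that condition on only a subset.

    draws        : parameter draws from p(theta | manageable subset)
    heldout_data : observations not seen by the initial sampler
    log_lik      : function(theta, y) -> log p(y | theta); a placeholder
                   for whatever likelihood the model uses (our assumption)
    """
    logw = np.zeros(len(draws))
    for y in heldout_data:              # a single scan of the remaining data
        logw += np.array([log_lik(th, y) for th in draws])
    logw -= logw.max()                  # stabilize before exponentiating
    w = np.exp(logw)
    return w / w.sum()                  # normalized importance weights

# Posterior expectations then become weighted averages over the draws,
# e.g. np.dot(weights, draws) for the posterior mean of a scalar parameter.
```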

Note that there is a dual representation of partial states: sequential search starts at the least partial state, the partial state whose dual is the set of all full states. At each step, a set of possible successors of the current partial state is considered, and a promising successor, or set of successors, is taken as the new candidate partial state(s). Neighbor joining is an example of a sequential search strategy, where successors are obtained by merging two trees in a forest, forming a new forest with one less tree in it.

If an infinite computational budget were available, local search strategies would generally be preferred over sequential ones. For example, simulated annealing is guaranteed (under conditions on the annealing rate) to approach the optimal solution, whereas neighbor joining will maintain a fixed error.

However, since computational time is a critical issue in practice, cheap algorithms such as neighbor joining are often preferred to more expensive alternatives.


We now return to the Bayesian problem of integration over the space of trees. Note that MCMC algorithms for integration can be viewed as analogs of the local search strategies used for maximization. What would then be a sequential strategy for integration? This is exactly where SMC algorithms fit.

SMC uses partial states that are successively extended until a fully specified state is reached. SMC algorithms were originally developed in the context of a restrictive class of models known as state-space models. There has been work on extending SMC to more general setups (Del Moral et al.), including the work of Teh et al. on trees; in this paper, we construct a single joint tree posterior. The remainder of the article is organized as follows. In the Background and Notation section, we review some basic mathematical definitions and notation.

Results on synthetic and real data are presented in the Experiments section, and we present our conclusions in the Discussion section. Bayes estimators are computed from posterior expectations of certain functions of the tree; these functions are generally the sufficient statistics needed to compute the estimators.

To define Bayes estimators and to evaluate their reconstructions, we will make use of the partition metric (which ignores branch lengths), $d_{\mathrm{PM}}$, and the L1 and squared L2 metrics, $d_{L1}$ and $d_{L2}$, which take branch lengths into account (Bourque; Robinson and Foulds; Kuhner and Felsenstein). We will use the unrooted versions of these metrics, which allows us to measure distance between rooted trees as well, by ignoring the rooting information.
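For concreteness, the standard unrooted forms of these metrics can be written as follows (notation ours): let $\mathcal{B}(t)$ denote the set of bipartitions (splits) induced by the edges of tree $t$, and let $l_t(b)$ be the length of the branch inducing split $b$, with $l_t(b) = 0$ when $b \notin \mathcal{B}(t)$. Then

$$d_{\mathrm{PM}}(t,t') = \bigl|\mathcal{B}(t) \,\triangle\, \mathcal{B}(t')\bigr|, \qquad d_{L1}(t,t') = \sum_{b \in \mathcal{B}(t) \cup \mathcal{B}(t')} \bigl|l_t(b) - l_{t'}(b)\bigr|, \qquad d_{L2}(t,t') = \sum_{b \in \mathcal{B}(t) \cup \mathcal{B}(t')} \bigl(l_t(b) - l_{t'}(b)\bigr)^2,$$

where $\triangle$ denotes symmetric difference and, per the text's "squared L2," no square root is taken in $d_{L2}$.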

Note that a loss function can be derived from each of these metrics by taking the additive inverse. We will also need some concepts from order theory. We now turn to the description of the PosetSMC framework. This framework encompasses existing work on tree-based SMC (Teh et al.). PosetSMC is a flexible algorithmic framework, with the flexibility deriving from two sources.

The first is the choice of proposal distribution; the second is the choice of extension, described below. Figure 1 presents an overview of the overall PosetSMC algorithmic framework; it will be useful to refer to this figure as we proceed through the formal specification. In the figure, a PosetSMC algorithm maintains a set of partial states (three partial states are shown in the leftmost column), where each partial state is a forest over the leaves A, B, C, and D.

Associated with each partial state is a positive-valued weight. The algorithm iterates three steps: resampling the particles, proposing a successor for each partial state, and computing new weights; each step is detailed below. We begin by discussing proposal distributions. Proposals are defined over a space larger than the space of trees; the elements of this larger space are called partial states, and they have the same dual interpretation as the partial states described in the context of maximization algorithms in the introduction.


We denote the dual of s by D(s), the set of full states associated with s. Whereas an MCMC proposal is generally defined using a metric on the state space, here we require a different kind of structure: in particular, we assume that the proposal distributions are such that they allow a poset (partially ordered set) representation. The associated poset representation encodes whether states are reachable via applications of the proposal distribution.

Note that this puts restrictions on valid proposal distributions: in particular (and in contrast to MCMC proposals), directed cycles should have zero density under q. At each proposal step, the rank is increased by one, and the set of states of highest rank R is assumed to coincide with the set of fully specified states. In addition to a proposal distribution, a second object needs to be provided to specify a PosetSMC algorithm: an extension, described later. In the remainder of the paper, we provide examples of proposals and extensions, and we also provide a precise set of conditions that are sufficient for correctness of a PosetSMC algorithm.

Before turning to those results, we give a simple concrete example of a proposal distribution in the case of ultrametric trees. Defining the height of an ultrametric forest as the height of the tallest tree in the forest, we can introduce the partial order relationship we use for ultrametric setups. As we will see shortly, any proposal that simply merges a pair of trees and strictly increases the forest height is a valid proposal. Once these two ingredients are specified (a proposal and an extension), the algorithm proceeds as follows.

At each iteration r, we assume that a list of K partial states is maintained; each element of this list is called a particle.


We also assume that there is a positive weight $w_{r,k}$ associated with each particle $s_{r,k}$. Combined together, these form an empirical measure:

$$\pi_{r,K}(\cdot) = \frac{\sum_{k=1}^{K} w_{r,k}\,\delta_{s_{r,k}}(\cdot)}{\sum_{k=1}^{K} w_{r,k}},$$

where $\delta_s$ denotes the point mass at $s$. The first step can be understood as a method for pruning unpromising particles.

The result of this step is that some of the particles (mostly those of low weight) will be pruned.
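Concretely, this pruning step is typically implemented as resampling. Below is a generic multinomial version (a sketch, not code from the paper), together with the effective-sample-size diagnostic often used to decide when to trigger resampling.

```python
import numpy as np

def ess(weights):
    """Effective sample size: 1 / sum of squared normalized weights."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return 1.0 / np.sum(w ** 2)

def resample(particles, weights, rng=None):
    """Multinomial resampling: draw K indices in proportion to the weights,
    then reset all weights to 1/K. Low-weight particles are unlikely to be
    drawn (and are thus pruned); high-weight ones tend to be duplicated."""
    rng = rng or np.random.default_rng()
    K = len(particles)
    w = np.asarray(weights, dtype=float)
    idx = rng.choice(K, size=K, p=w / w.sum())
    return [particles[i] for i in idx], np.full(K, 1.0 / K)
```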


Other sampling schemes, such as stratified sampling and dynamic on-demand resampling, can be used to further improve performance; see Doucet et al. The third step is to compute weights for the new particles, using the standard SMC update

$$w_{r+1,k} = \frac{\gamma(s_{r+1,k})}{\gamma(s_{r,k})\; q(s_{r,k} \to s_{r+1,k})},$$

where $\gamma$ denotes the unnormalized target density and $q$ the proposal density. As for the marginal likelihood, the estimate is given by the product over ranks of the weight normalizations:

$$\hat{Z} = \prod_{r=1}^{R} \frac{1}{K} \sum_{k=1}^{K} w_{r,k}.$$
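Putting the three steps together, the overall loop has the following shape. This is a schematic sketch under our own naming, not the paper's code: `propose`, `log_target`, and `log_proposal` stand in for the model-specific ingredients described above.

```python
import numpy as np

def poset_smc(init_state, R, K, propose, log_target, log_proposal, rng=None):
    """Schematic PosetSMC loop (our sketch).

    propose(s, rng)     -> a successor partial state s'        (assumed hook)
    log_target(s)       -> log unnormalized target density     (assumed hook)
    log_proposal(s, s2) -> log q(s2 | s)                       (assumed hook)
    Returns the final particles and weights, plus a log marginal-likelihood
    estimate accumulated as the product of per-rank average weights.
    """
    rng = rng or np.random.default_rng()
    particles, logw, log_Z = [init_state] * K, np.zeros(K), 0.0
    for r in range(R):
        if r > 0:                       # step 1: prune by resampling
            p = np.exp(logw - logw.max())
            idx = rng.choice(K, size=K, p=p / p.sum())
            particles = [particles[i] for i in idx]
        successors = [propose(s, rng) for s in particles]   # step 2: extend
        logw = np.array([log_target(s2) - log_target(s) - log_proposal(s, s2)
                         for s, s2 in zip(particles, successors)])  # step 3
        particles = successors
        m = logw.max()                  # log of (1/K) * sum_k w_{r,k}
        log_Z += m + np.log(np.mean(np.exp(logw - m)))
    return particles, logw, log_Z
```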

It is worth highlighting some of the similarities and differences between PosetSMC and other sampling-based algorithms in phylogenetics, in particular the nonparametric bootstrap and MCMC algorithms. First, as in the case of the bootstrap, the K particles in PosetSMC are sampled with replacement, and the number of particles (which remains constant throughout the run of the algorithm) is a parameter of the algorithm (see the Discussion section for suggestions for choosing the value of K). Second, the weights of newly proposed states influence the chance each particle survives into the next iteration. Third, once full states have been created by PosetSMC, the algorithm terminates.

Finally, PosetSMC is readily parallelized, simply by distributing particles across multiple processors. MCMC phylogenetic samplers can also be parallelized, but the parallelization is less direct (see the Discussion section for further discussion of this issue).

In this section, we give theoretical conditions for statistical correctness of PosetSMC algorithms. More precisely, we provide sufficient conditions for consistency of the marginal likelihood estimate and of the target expectation as the number of particles K goes to infinity. The sufficient conditions are as follows.

Note that all of them have an intuitive interpretation and are easy to check. The first group of conditions concerns the proposal. Assumption 2(a) can be compared with an irreducibility condition in MCMC theory: there must be a path of positive proposal density reaching each state. Assumption 2(b) is more subtle but is very important in our framework.

This ensures that trees are not overcounted. Note that we do not require that it be feasible to compute C; its value is not needed in our algorithms.

In this section, we provide several examples of proposal distributions and extensions that meet the conditions described in the previous section.

In the ultrametric case, we gave a recipe for creating valid proposal distributions in the Overview section. In particular, we can use proposals that merge pairs while strictly increasing the height of the forest. We begin this section by giving a more detailed explanation of why the strict increase in height is important and how it solves an overcounting issue. To understand this issue, let us consider a counterexample with a naive proposal that does not satisfy Assumption 2(b), and show that it leads to a biased estimate of the posterior distribution.

For simplicity, let us consider ultrametric phylogenetic trees with a uniform prior (with unit support) on the time between speciation events, and let us assume there are no observations, so that the posterior should be equal to the prior, a uniform distribution. Under the naive proposal, however, some topologies can be reached by more merge sequences than others and therefore receive more than their share of probability mass, as the following toy enumeration illustrates.
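The enumeration below (our illustration, not the paper's) ignores branch lengths and simply counts, for four leaves, how many merge orders lead to each rooted topology; topologies reachable by more orders are overcounted by a proposal that merges uniformly chosen pairs.

```python
from itertools import combinations
from collections import Counter

def all_merge_outcomes(forest):
    """Enumerate the full topology reached by every possible merge order.
    A tree is either a leaf name or a frozenset {left, right} of subtrees;
    the return value has one entry per distinct merge order."""
    if len(forest) == 1:
        return [forest[0]]
    outcomes = []
    for i, j in combinations(range(len(forest)), 2):
        merged = frozenset({forest[i], forest[j]})
        rest = [t for k, t in enumerate(forest) if k not in (i, j)]
        outcomes.extend(all_merge_outcomes(rest + [merged]))
    return outcomes

counts = Counter(all_merge_outcomes(["A", "B", "C", "D"]))
# Balanced topologies such as {{A,B},{C,D}} are reached by 2 merge orders,
# caterpillars such as {{{A,B},C},D} by only 1; a proposal that merges
# uniformly chosen pairs therefore over-represents the former.
for topology, n_orders in counts.items():
    print(n_orders, topology)
```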

Therefore, the naive proposal leads to a biased approximate posterior. To illustrate how PosetSMC sequentially samples from the space of trees, we present a subset of the Hasse diagram induced by the naive proposal described in the Examples section.

Note that this diagram is not a phylogenetic tree: the forests are labeled by the union of the sets of nontrivial rooted clades over the trees in the forest. The dashed lines correspond to the proposal moves forbidden by the strict height increase condition (Assumption 2(b) in the text). Note that we show only a subset of the Hasse graph, since the branch lengths make the graph infinite.

The subset shown here is based on an intersection of height-function fibers (given a map f, the fiber of a point is its preimage, the set of elements that f maps to that point). The strict height increase condition incorporated in our proposal addresses this issue. The dashed lines in Figure 2 show which naive proposals are forbidden by the height increase condition. After this modification, the bias disappears:

Proposition 5. Proposals over ultrametric forests that merge one pair of trees while strictly increasing the height of the forest satisfy Assumption 2.

The proposals used in Teh et al. are of this form. Again, many other options are available.

For example, even when the prior is the coalescent model, one may want to use a proposal with fatter tails to take into account deviations from the prior brought by the likelihood model. That approach has some drawbacks, however; it is complex to implement and is only applicable to likelihoods obtained from Brownian motion. Simpler heavy-tailed proposal distributions may be useful. Proposals can also be informed by a heuristic H, as discussed in the next section.

This can be done, for example, by giving higher proposal density to pairs of trees that form a subtree in H(s).

Defining the diameter of a rooted forest as twice the maximum distance between a leaf and a root over all trees in the forest, we get that any proposal that merges a pair of trees and strictly increases the forest diameter is a valid proposal.

There is a simple recipe for extensions that works for both nonclock and ultrametric trees; we call this extension the natural forest extension. This definition satisfies Assumption 3 by construction. More sophisticated possibilities exist, with different computational trade-offs. For example, it is possible to connect the trees in the forest on the fly, by using a fast heuristic such as neighbor joining (Saitou and Nei). If we let H denote such a heuristic, mapping a partial state s to a full tree H(s), then as long as all the trees in s appear as subtrees of H(s), this definition also satisfies Assumption 3.

This approach can also be made less greedy, by taking into account the effect of future merging operations. We present other examples of extensions in Appendix 2.

In this section, we present experiments on real and synthetic data. We find that PosetSMC reaches a given approximation quality with substantially less computation than MCMC; moreover, this gap widens as the size of the trees increases. We then explore the impact of different likelihoods, tree priors, and proposal distributions. Finally, we consider experiments with real data, where we observe similar gains in efficiency as with the simulated data.

We compared our method with MCMC, the standard approach to approximating posterior distributions in Bayesian phylogenetic inference (see Huelsenbeck et al.). A caveat in these comparisons is that our results depend on the specific choice of proposals that we made. For each tree, we then generated a data set of nucleotide sequences at the leaves from the Kimura two-parameter model (K2P; Kimura) using the Doob-Gillespie algorithm (Doob). We used the PriorPrior proposal described in Teh et al.

PriorPrior chooses the trees to merge and the diameter of the new state from the prior; that is, the trees are chosen uniformly over all pairs of trees, whereas the new diameter is obtained by adding an appropriate exponentially distributed increment to the old diameter.
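A minimal sketch of such a proposal might look as follows; the exponential rate and the tuple-based tree representation are our own placeholder choices, not specified in the text above.

```python
import numpy as np

def prior_prior(forest, diameter, rate=1.0, rng=None):
    """PriorPrior-style proposal (sketch): merge a uniformly chosen pair of
    trees and strictly increase the forest diameter by an exponential
    increment. The rate is a placeholder; the paper's increment is whatever
    is 'appropriate' to the prior in use.
    """
    rng = rng or np.random.default_rng()
    i, j = rng.choice(len(forest), size=2, replace=False)
    # Strictly increasing the diameter is what rules out the overcounting
    # moves forbidden by Assumption 2(b).
    new_diameter = diameter + rng.exponential(1.0 / rate)
    merged = ("merge", forest[i], forest[j], new_diameter)  # placeholder node
    rest = [t for k, t in enumerate(forest) if k not in (i, j)]
    return rest + [merged], new_diameter
```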

We consider bigger trees, as well as other proposals and models, in the next sections. Each experiment was repeated 10 times. We computed consensus trees from the samples and measured the distance of this reconstruction to the generating tree using the metrics defined in the Background and Notation section.

The results are shown in Figure 3 for the L1 metric. For each algorithm setting and tree size, we show the median distance across executions, as well as the first and third quartiles. A speedup of over two orders of magnitude can be seen consistently across these experiments. For these experiments, we generated coalescent trees of different sizes along with nucleotide data sets, and we computed the L1 distance of the minimum Bayes risk reconstruction to the true generating tree as a function of running time, in units of the number of peeling recursions, on a log scale.

Each call requires time proportional to the number of sites times the square of the number of character states (this can be accelerated by parallelization, but parallelization can be implemented in both MCMC and PosetSMC samplers and thus does not impact our comparison). We therefore report running times as the number of times the peeling recurrence is calculated.
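For reference, a minimal single-site version of the peeling (pruning) recursion is sketched below; the quadratic cost in the number of character states is visible in the matrix-vector product per edge. The tree representation and the transition-matrix hook `P` are our placeholder assumptions.

```python
import numpy as np

def peel(node, P, n_states, leaf_state):
    """Felsenstein peeling recursion for a single site (sketch).

    node       : ('leaf', name) or ('internal', left, right)
    P          : function(child) -> transition matrix across the branch
                 leading to `child` (placeholder for, e.g., a K2P matrix)
    leaf_state : function(name) -> observed state index at that leaf
    Returns L with L[x] = P(data below node | state x at node).
    """
    if node[0] == "leaf":
        L = np.zeros(n_states)
        L[leaf_state(node[1])] = 1.0
        return L
    _, left, right = node
    # Each edge costs one matrix-vector product: O(n_states^2) operations,
    # repeated for every site in a full implementation.
    return (P(left) @ peel(left, P, n_states, leaf_state)) * \
           (P(right) @ peel(right, P, n_states, leaf_state))
```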

As a sanity check, we also ran a controlled experiment on real data in a single-user, pure-Java setting, showing similar gains in wall-clock time.

These results are presented in the Experiments on Real Data section. Note, in particular, the result in Figure 4; we see that for a fixed computational budget, the gap between PosetSMC and MCMC increases dramatically as the size of the tree increases. Figure 4 shows L1 distances of the minimum Bayes risk reconstruction to the true generating tree (averaged over trees and executions) as a function of the tree size (number of leaves, on a log scale), measured for SMC algorithms run with a fixed number of particles and MCMC algorithms run for a fixed number of iterations.

Analysis of the same data as in Figure 3, for 20 leaves, but with different metrics: the L2 and partition metrics, respectively.

In this section, we explore the effect of changing proposals, priors, and likelihood models.

We also present results measured by wall clock times. We first consider data generated from coalescent trees. The proposal distribution is used to choose the trees in a partial state that are merged to create a new tree in the successor partial state as well as the diameter of the new state.

In Figure 6, we compare two types of proposal distributions: PriorPrior, described in the previous section, and PriorPost (Teh et al.). PriorPost chooses the diameter of the state from the prior and then chooses the pair of trees to merge according to a multinomial distribution with parameters given by the likelihoods of the corresponding new states. We provide further discussion of this proposal and PriorPrior in Appendix 2. Although these proposals were investigated experimentally in Teh et al., the running time there was measured differently.

In particular, the running time was estimated by the number of particles, which ignores the fact that different proposals require different amounts of computation per particle. Since we measure running time by the number of peeling recurrences, our methodology does not have this problem. Surprisingly, as shown in Figure 6, PriorPrior outperforms the more complicated PriorPost by one order of magnitude. We believe that this is because PriorPost uses a larger fraction of its computational budget for the recent speciation events compared with PriorPrior, whereas the more ancient speciation events may require more particles to better approximate the uncertainty at that level.

A PriorPrior sampler can use O(X²) particles and leverages O(X²) peeling recurrences for proposing the top branch lengths. A PriorPost sampler can use only O(1) particles and therefore uses only O(1) peeling recurrences for proposing the top branch lengths.
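To make the budget comparison concrete, here is a hedged sketch of the PriorPost pair choice; `forest_log_lik` and `merge_at` are placeholder hooks for the peeling-based likelihood of a partial state and the merge operation, not names from the paper. The point is that every proposal step evaluates the likelihood of every candidate pair, so the number of peeling calls per step is quadratic in the current number of trees, whereas PriorPrior performs no peeling at proposal time.

```python
import numpy as np

def prior_post_pair(forest, new_diameter, forest_log_lik, merge_at, rng=None):
    """PriorPost-style pair choice (sketch): the diameter comes from the
    prior; the pair to merge is drawn from a multinomial whose weights are
    the likelihoods of the candidate successor states."""
    rng = rng or np.random.default_rng()
    pairs = [(i, j) for i in range(len(forest))
                    for j in range(i + 1, len(forest))]
    # One likelihood (peeling) evaluation per candidate pair: cost per
    # proposal step is quadratic in the number of trees in the forest.
    logw = np.array([forest_log_lik(merge_at(forest, i, j, new_diameter))
                     for i, j in pairs])
    p = np.exp(logw - logw.max())
    return pairs[rng.choice(len(pairs), p=p / p.sum())]
```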

Figure 7 shows the results of experiments with trees generated from different models: data generated by Yule processes (Rannala and Yang) and uniform-branch-length trees. Next, we did experiments using a different type of data: gene frequencies. We used Brownian motion to generate the frequencies, basing the likelihood function on the same model as before, with a coalescent prior.

Since computing the peeling dynamic program is the computational bottleneck for both SMC and MCMC, it is meaningful to compare wall clock times. The results are shown in Figure 8.


Figure 8 shows experiments on synthetic gene frequencies using a Brownian motion likelihood model, with results for two tree sizes; in each case, we plot the partition metric as a function of the wall time in milliseconds, shown on a log scale. We also tested our algorithm on a comparative RNA data set (Cannone et al.). We sampled 16S components from the three domains of life at random to create multiple data sets.

We use the log likelihood of the consensus tree to evaluate the reconstructions. Since finding states of high density is necessary but not sufficient for good posterior approximation, this provides only a partial picture of how well the samplers performed.

Since the true tree is not known, however, this gives a sensible surrogate. We show the results in Figure 9. As in the synthetic data experiments, we found that the PosetSMC sampler required around two orders of magnitude less time to converge to a good approximation of the posterior.

Figure 9 shows results on the ribosomal RNA data (Cannone et al.). We also performed experiments on frequency data from the Human Genome Diversity Panel. In these experiments, we subsampled single nucleotide polymorphisms (SNPs) to reduce site correlations, and we used the likelihood model based on Brownian motion described in the previous section.
