Estudo Geral
https://estudogeral.sib.uc.pt
The DSpace digital repository system captures, stores, indexes, preserves, and distributes digital research material.
Sat, 26 Sep 2020 02:25:27 GMT

Title: Levenberg--Marquardt Methods Based on Probabilistic Gradient Models and Inexact Subproblem Solution, with Application to Data Assimilation
Authors: Bergou, E.; Gratton, S.; Vicente, Luís Nunes
Abstract: The Levenberg--Marquardt algorithm is one of the most popular algorithms for the solution of nonlinear least squares problems. Motivated by the problem structure in data assimilation, we consider in this paper the extension of the classical Levenberg--Marquardt algorithm to scenarios where the linearized least squares subproblems are solved inexactly and/or the gradient model is noisy and accurate only within a certain probability. Under appropriate assumptions, we show that the modified algorithm converges globally to a first-order stationary point with probability one. Our proposed approach is first tested on simple problems where the exact gradient is perturbed with Gaussian noise or only called with a certain probability. It is then applied to an instance in variational data assimilation where stochastic models of the gradient are computed by the so-called ensemble methods.
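The modified iteration described in the abstract can be sketched in a few lines. The following is a minimal, hypothetical NumPy illustration (not the paper's implementation): the gradient model J^T r is perturbed with Gaussian noise, the regularized subproblem is solved, and the damping parameter is updated on acceptance or rejection. All names and parameter values are illustrative assumptions.

```python
import numpy as np

def lm_probabilistic(residual, jacobian, x, gamma=1.0, iters=50,
                     noise=0.0, rng=None):
    """Levenberg-Marquardt sketch in which the gradient model J^T r may be
    perturbed by Gaussian noise (noise > 0), mimicking a gradient model that
    is accurate only with a certain probability.  Hypothetical illustration."""
    rng = rng or np.random.default_rng(0)
    for _ in range(iters):
        r, J = residual(x), jacobian(x)
        g = J.T @ r + noise * rng.standard_normal(x.size)   # noisy gradient model
        # Regularized normal equations (solved exactly here; the paper
        # also covers inexact subproblem solutions).
        s = np.linalg.solve(J.T @ J + gamma * np.eye(x.size), -g)
        f_old = 0.5 * r @ r
        r_new = residual(x + s)
        f_new = 0.5 * r_new @ r_new
        if f_new < f_old:                       # simplified acceptance test
            x, gamma = x + s, max(gamma / 2, 1e-8)   # accept, relax damping
        else:
            gamma *= 4                          # reject, increase regularization
    return x

# Hypothetical linear test problem: fit r(x) = A x - b (solution [1, 1]).
A = np.array([[2.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([2.0, 1.0, 2.0])
sol = lm_probabilistic(lambda x: A @ x - b, lambda x: A,
                       np.zeros(2), noise=0.05)
```

Despite the noisy gradient model, the damping update keeps the iterates near the least squares solution, which is the behavior the convergence analysis formalizes.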
Date: Fri, 01 Jan 2016 00:00:00 GMT
URI: http://hdl.handle.net/10316/45235

Title: Globally convergent evolution strategies
Authors: Diouane, Y.; Gratton, S.; Vicente, Luís Nunes
Abstract: In this paper we show how to modify a large class of evolution strategies (ES's) for unconstrained optimization to rigorously achieve a form of global convergence, meaning convergence to stationary points independently of the starting point. The type of ES under consideration recombines the parent points by means of a weighted sum, around which the offspring points are computed by random generation. One relevant instance of such an ES is the covariance matrix adaptation ES (CMA-ES). The modifications consist essentially of reducing the size of the steps whenever a sufficient decrease condition on the function values does not hold. When such a condition is satisfied, the step size can be reset to the step size maintained by the ES's themselves, as long as the latter is sufficiently large. We suggest a number of ways of imposing sufficient decrease for which global convergence holds under reasonable assumptions (in particular, density of certain limit directions in the unit sphere). Given a limited budget of function evaluations, our numerical experiments have shown that the modified CMA-ES is capable of further progress in function values. Moreover, we have observed that such an improvement in efficiency comes without significantly weakening the performance of the underlying method in the presence of several local minimizers.
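The globalization safeguard described above can be sketched as follows: offspring are sampled around a weighted recombination of the best parents, and the step size is halved whenever the candidate fails a sufficient decrease test. This is a simplified, hypothetical illustration (no covariance adaptation, made-up parameter values), not the authors' exact algorithm.

```python
import numpy as np

def es_sufficient_decrease(f, x0, sigma=1.0, lam=8, mu=4, iters=200,
                           c=1e-4, seed=0):
    """ES sketch with a globalizing sufficient decrease condition:
    accept y only if f(y) <= f(x) - c*sigma**2, else halve sigma."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float)
    fx = f(x)
    w = np.arange(mu, 0, -1, dtype=float)
    w /= w.sum()                                    # recombination weights
    for _ in range(iters):
        pop = x + sigma * rng.standard_normal((lam, x.size))   # offspring
        pop = pop[np.argsort([f(y) for y in pop])]             # rank by f
        y = w @ pop[:mu]                            # weighted recombination
        fy = f(y)
        if fy <= fx - c * sigma ** 2:               # sufficient decrease: accept
            x, fx = y, fy
            sigma *= 1.2                            # let the step size recover
        else:
            sigma *= 0.5                            # insufficient decrease: shrink
    return x, fx

xbest, fbest = es_sufficient_decrease(lambda z: float(np.sum(z ** 2)),
                                      [2.0, -1.5])
```

The halving-on-failure rule is what forces the step size to zero along unsuccessful iterations, which is the mechanism behind the global convergence result.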
Date: Thu, 01 Jan 2015 00:00:00 GMT
URI: http://hdl.handle.net/10316/45499

Title: A surrogate management framework using rigorous trust-region steps
Authors: Gratton, S.; Vicente, Luís Nunes
Abstract: Surrogate models are frequently used in the engineering optimization community as convenient approaches to deal with functions whose evaluations are expensive or noisy, or which lack convexity. These methodologies do not typically guarantee any type of convergence under reasonable assumptions. In this article, we show how to incorporate the use of surrogate models, heuristics, or any other process of attempting a function value decrease into trust-region algorithms for unconstrained derivative-free optimization, in such a way that the global convergence of the latter algorithms to stationary points is retained. Our approach follows the lines of search/poll direct-search methods and the corresponding surrogate management frameworks, both in algorithmic design and in the organization of the convergence theory.
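The search/poll organization described above can be sketched as follows: each iteration first attempts an arbitrary user-supplied "search step" (a surrogate model, a heuristic, or any other attempt at decrease); only if it fails the sufficient decrease test does the method fall back on a rigorous safeguarded step. For brevity, the fallback below is a simple coordinate poll standing in for the paper's trust-region step; all names and parameters are hypothetical.

```python
import numpy as np

def sm_trust_region(f, x0, search_step=None, delta=1.0, iters=100, eta=1e-4):
    """Surrogate-management sketch: free-form search step first, rigorous
    fallback step (here: a coordinate poll) only when the search fails."""
    x = np.asarray(x0, float)
    fx = f(x)
    n = x.size
    for _ in range(iters):
        # --- search step: any process of attempting a function decrease ---
        if search_step is not None:
            y = search_step(x, delta)
            fy = f(y)
            if fy <= fx - eta * delta ** 2:     # sufficient decrease: accept
                x, fx = y, fy
                continue
        # --- fallback: safeguarded step carrying the convergence theory ---
        improved = False
        for d in np.vstack([np.eye(n), -np.eye(n)]):
            y = x + delta * d
            fy = f(y)
            if fy <= fx - eta * delta ** 2:
                x, fx, improved = y, fy, True
                break
        delta = delta * 2 if improved else delta / 2   # radius-style update
    return x, fx

# Hypothetical surrogate: a step moving halfway toward the known minimizer.
xb, fb = sm_trust_region(lambda z: float(np.sum((z - 1.0) ** 2)),
                         [0.0, 0.0],
                         search_step=lambda x, _d: x + 0.5 * (1.0 - x))
```

The point of the design is that the search step may be arbitrarily heuristic: convergence is inherited entirely from the fallback step and the sufficient decrease test.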
Date: Wed, 01 Jan 2014 00:00:00 GMT
URI: http://hdl.handle.net/10316/45703

Title: Globally convergent evolution strategies for constrained optimization
Authors: Diouane, Y.; Gratton, S.; Vicente, Luís Nunes
Abstract: In this paper we propose, analyze, and test algorithms for constrained optimization when no use of derivatives of the objective function is made. The proposed methodology is built upon the globally convergent evolution strategies previously introduced by the authors for unconstrained optimization. Two approaches to handling the constraints are considered. In the first approach, feasibility is enforced by a barrier function and the objective function is then evaluated directly at the feasible generated points. The second approach first projects all generated points onto the feasible domain before evaluating the objective function. The resulting algorithms enjoy favorable global convergence properties (convergence to stationarity from arbitrary starting points), regardless of the linearity of the constraints. The algorithmic implementation (i) includes a step where previously evaluated points are used to accelerate the search (by minimizing quadratic models) and (ii) addresses the particular cases of bounds on the variables and linear constraints. Our solver is compared to others, and the numerical results confirm its competitiveness in terms of efficiency and robustness.
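For bound constraints, the projection approach described above amounts to clipping every generated point onto the feasible box before the objective is evaluated, while keeping the globally convergent step-size safeguard. The following is a simplified, hypothetical sketch, not the authors' solver.

```python
import numpy as np

def es_projected(f, x0, lower, upper, sigma=1.0, lam=8, mu=4, iters=200,
                 c=1e-4, seed=0):
    """Projection-based constrained ES sketch: every point is projected
    onto the box [lower, upper] before f is evaluated."""
    rng = np.random.default_rng(seed)
    lower = np.asarray(lower, float)
    upper = np.asarray(upper, float)
    proj = lambda z: np.clip(z, lower, upper)   # projection onto the box
    x = proj(np.asarray(x0, float))
    fx = f(x)
    w = np.arange(mu, 0, -1, dtype=float)
    w /= w.sum()
    for _ in range(iters):
        pop = proj(x + sigma * rng.standard_normal((lam, x.size)))
        pop = pop[np.argsort([f(y) for y in pop])]
        y = proj(w @ pop[:mu])                  # feasible recombined point
        fy = f(y)
        if fy <= fx - c * sigma ** 2:           # sufficient decrease
            x, fx, sigma = y, fy, sigma * 1.2
        else:
            sigma *= 0.5
    return x, fx

# Hypothetical test: minimize ||z||^2 subject to 1 <= z_i <= 3;
# the constrained minimizer is the corner [1, 1] with value 2.
xb, fb = es_projected(lambda z: float(np.sum(z ** 2)), [2.5, 2.5],
                      lower=[1.0, 1.0], upper=[3.0, 3.0])
```

Since only projected points are ever evaluated, the objective never needs to be defined outside the feasible region, which is the practical appeal of this approach.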
Date: Thu, 01 Jan 2015 00:00:00 GMT
URI: http://hdl.handle.net/10316/45496

Title: A parallel evolution strategy for an earth imaging problem in geophysics
Authors: Diouane, Y.; Gratton, S.; Vasseur, X.; Vicente, Luís Nunes; Calandra, H.
Abstract: In this paper we propose a new way to compute a rough approximate solution, to be later used as a warm starting point in a more refined optimization process, for a challenging global optimization problem related to earth imaging in geophysics. The warm start consists of a velocity model that approximately solves a full-waveform inverse problem at low frequency. Our motivation arises from the availability of massively parallel computing platforms and the natural parallelization of evolution strategies as global optimization methods for continuous variables. Our first contribution consists of developing a new and efficient parametrization of the velocity models to significantly reduce the dimension of the original optimization space. Our second contribution is to adapt a class of evolution strategies to the specificity of the physical problem at hand, where the objective function evaluation is known to be the most expensive computational part. A third contribution is the development of a parallel evolution strategy solver, taking advantage of a recently proposed modification of this class of evolutionary methods that ensures convergence and promotes better performance under moderate budgets. The numerical results presented demonstrate the effectiveness of the algorithm on a realistic 3D full-waveform inverse problem in geophysics. The developed numerical approach allows us to successfully solve an acoustic full-waveform inversion problem at low frequencies on a reasonable number of cores of a distributed memory computer.
Date: Fri, 01 Jan 2016 00:00:00 GMT
URI: http://hdl.handle.net/10316/45245

Title: Direct Search Based on Probabilistic Descent
Authors: Gratton, S.; Royer, C. W.; Vicente, Luís Nunes; Zhang, Zaikun
Abstract: Direct-search methods are a class of popular derivative-free algorithms characterized by evaluating the objective function using a step size and a number of (polling) directions. When applied to the minimization of smooth functions, the polling directions are typically taken from positive spanning sets, which in turn must have at least $n+1$ vectors in an $n$-dimensional variable space. In addition, to ensure the global convergence of these algorithms, the positive spanning sets used throughout the iterations are required to be uniformly nondegenerate in the sense of having a positive (cosine) measure bounded away from zero. However, recent numerical results indicated that randomly generating the polling directions without imposing the positive spanning property can improve the performance of these methods, especially when the number of directions is chosen as considerably less than $n+1$. In this paper, we analyze direct-search algorithms when the polling directions are probabilistic descent directions, meaning that with a certain probability at least one of them is of descent type. Such a framework enjoys almost-sure global convergence. More interestingly, we show a global decay rate of $1/\sqrt{k}$ for the gradient size, with overwhelmingly high probability, matching the corresponding rate for the deterministic versions of the gradient method or of direct search. Our analysis helps us understand numerical behavior and the choice of the number of polling directions.
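The polling scheme described above can be sketched as follows: at each iteration only m randomly generated unit directions are tried, where m may be far fewer than the n+1 vectors a positive spanning set requires; with some probability at least one of them is a descent direction. This is a hypothetical illustration with made-up parameter values.

```python
import numpy as np

def ds_probabilistic_descent(f, x0, m=2, alpha=1.0, iters=400, c=1e-4, seed=0):
    """Direct-search sketch polling only m random unit directions per
    iteration (probabilistic descent), with expansion on success and
    contraction on failure."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float)
    fx = f(x)
    n = x.size
    for _ in range(iters):
        D = rng.standard_normal((m, n))
        D /= np.linalg.norm(D, axis=1, keepdims=True)   # random unit directions
        for d in D:
            y = x + alpha * d
            fy = f(y)
            if fy < fx - c * alpha ** 2:    # sufficient decrease
                x, fx = y, fy
                alpha *= 2                  # successful poll: expand step
                break
        else:
            alpha *= 0.5                    # all polls failed: contract step

    return x, fx

# Hypothetical 3-d test (m = 2 directions, fewer than the n+1 = 4 needed
# for a positive spanning set).
xb, fb = ds_probabilistic_descent(lambda z: float(np.sum((z - 1.0) ** 2)),
                                  np.zeros(3))
```

Even though two directions can never positively span R^3, random polling still makes progress on most iterations, which is the empirical observation the probabilistic analysis explains.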
Date: Thu, 01 Jan 2015 00:00:00 GMT
URI: http://hdl.handle.net/10316/45248

Title: A second-order globally convergent direct-search method and its worst-case complexity
Authors: Gratton, S.; Royer, C. W.; Vicente, Luís Nunes
Abstract: Direct-search algorithms form one of the main classes of algorithms for smooth unconstrained derivative-free optimization, owing to their simplicity and their well-established convergence results. They proceed by iteratively looking for improvement along some vectors or directions. In the presence of smoothness, first-order global convergence comes from the ability of the vectors to approximate the steepest descent direction, which can be quantified by a first-order criticality (cosine) measure. The use of a set of vectors with a positive cosine measure, together with the imposition of a sufficient decrease condition to accept new iterates, leads to a convergence result as well as a worst-case complexity bound. In this paper, we present a second-order study of a general class of direct-search methods. We start by proving a weak second-order convergence result related to a criticality measure defined along the directions used throughout the iterations. Extensions of this result to obtain a true second-order optimality result are discussed, one possibility being a method using approximate Hessian eigenvectors as directions (which is proved to be truly second-order globally convergent). Guaranteeing such convergence numerically can be rather expensive, as indicated by the worst-case complexity analysis provided in this paper, but the approach turns out to be appropriate for some pathological examples.
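The second-order ingredient mentioned above (polling along approximate Hessian eigenvectors) can be sketched as follows: a finite-difference Hessian supplies a direction of most negative curvature, along which polling can move off a saddle point that first-order polling cannot detect. This is a hypothetical, simplified illustration; function names and step sizes are assumptions.

```python
import numpy as np

def fd_hessian(f, x, h=1e-3):
    """Finite-difference Hessian approximation of f at x (O(n^2) evaluations,
    which is why guaranteeing second-order behavior is expensive)."""
    n = x.size
    H = np.zeros((n, n))
    E = np.eye(n)
    for i in range(n):
        for j in range(n):
            H[i, j] = (f(x + h * E[i] + h * E[j]) - f(x + h * E[i])
                       - f(x + h * E[j]) + f(x)) / h ** 2
    return 0.5 * (H + H.T)

def second_order_poll(f, x, alpha=0.5):
    """Poll along +/- the eigenvector of the smallest Hessian eigenvalue
    (the direction of most negative curvature)."""
    x = np.asarray(x, float)
    vals, vecs = np.linalg.eigh(fd_hessian(f, x))
    v = vecs[:, 0]                      # eigenvalues sorted ascending
    cands = [x + alpha * v, x - alpha * v]
    best = min(cands, key=f)
    return best if f(best) < f(x) else x

# At the saddle point (0, 0) of f(x, y) = x^2 - y^2 the gradient vanishes,
# so first-order information gives no descent, but negative-curvature
# polling still finds a strictly lower point.
f = lambda z: float(z[0] ** 2 - z[1] ** 2)
x_new = second_order_poll(f, np.zeros(2))
```

The example shows why a weak second-order result is the natural first step: escaping saddle points requires curvature information that ordinary polling directions do not carry.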
Date: Fri, 01 Jan 2016 00:00:00 GMT
URI: http://hdl.handle.net/10316/45243