Estudo Geral
https://estudogeral.sib.uc.pt
The DSpace digital repository system captures, stores, indexes, preserves, and distributes digital research material.
Mon, 17 Feb 2020 10:48:30 GMT

Title: Efficient Cardinality/Mean-Variance Portfolios
http://hdl.handle.net/10316/45700
Authors: Brito, R. Pedro; Vicente, Luís Nunes
Abstract: We propose a novel approach to handle cardinality in portfolio selection, by means of a biobjective cardinality/mean-variance problem, allowing the investor to analyze the efficient tradeoff between return-risk and number of active positions. Recent progress in multiobjective optimization without derivatives allows us to robustly compute (in-sample) the whole cardinality/mean-variance efficient frontier, for a variety of data sets and mean-variance models. Our results show that a significant number of efficient cardinality/mean-variance portfolios can overcome (out-of-sample) the naive strategy, while keeping transaction costs relatively low.
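The biobjective evaluation described in this abstract can be sketched in a few lines: for a candidate weight vector, compute its cardinality (number of active positions), in-sample mean return, and in-sample variance, and compare against the naive 1/N strategy. This is only an illustrative sketch with synthetic returns and hypothetical names (`portfolio_objectives`), not the authors' derivative-free multiobjective solver.

```python
import numpy as np

def portfolio_objectives(w, R, tol=1e-8):
    """Evaluate one portfolio on the three quantities discussed above:
    cardinality (number of active positions), in-sample mean return,
    and in-sample variance. R is a T-by-n matrix of asset returns."""
    mu = R.mean(axis=0)               # sample mean returns
    Sigma = np.cov(R, rowvar=False)   # sample covariance matrix
    card = int(np.sum(np.abs(w) > tol))
    return card, float(w @ mu), float(w @ Sigma @ w)

# Synthetic returns for n = 4 assets over T = 6 periods (illustrative only).
R = np.array([[0.01, 0.03, -0.02, 0.00],
              [0.02, -0.01, 0.01, 0.01],
              [-0.01, 0.02, 0.00, 0.02],
              [0.00, 0.01, 0.03, -0.01],
              [0.01, 0.00, -0.01, 0.01],
              [0.02, 0.02, 0.01, 0.00]])

naive = np.full(4, 0.25)                 # the 1/N (naive) strategy
sparse = np.array([0.5, 0.5, 0.0, 0.0])  # a lower-cardinality alternative

card_n, mean_n, var_n = portfolio_objectives(naive, R)
card_s, mean_s, var_s = portfolio_objectives(sparse, R)
```

An efficient frontier in this setting would collect, for each cardinality level, the nondominated (mean, variance) pairs among such evaluations.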
Wed, 01 Jan 2014 00:00:00 GMT
http://hdl.handle.net/10316/45700

Title: Geometry of sample sets in derivative free optimization. Part II: polynomial regression and underdetermined interpolation
http://hdl.handle.net/10316/11386
Authors: Conn, Andrew R.; Scheinberg, Katya; Vicente, Luís Nunes
Abstract: In recent years, there has been a considerable amount of work on the development of numerical methods for derivative-free optimization problems. Some of this work relies on the management of the geometry of sets of sampling points for function evaluation and model building.
In this paper, we continue the work developed in [7] for complete or determined interpolation models (when the number of interpolation points equals the number of basis elements), considering now the cases where the number of points is higher (regression models) or lower (underdetermined models) than the number of basis components.
We show how the notion of Λ-poisedness introduced in [7] to quantify the quality of the sample sets can be extended to the nondetermined cases, by first extending the underlying notion of bases of Lagrange polynomials. We also show that Λ-poisedness is equivalent to a bound on the condition number of the matrix arising from the sampling conditions. We derive bounds for the errors between the function and the (regression and underdetermined) models and between their derivatives.
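The two regimes discussed in this abstract can be illustrated with a small sketch: fitting the coefficients of a quadratic model by least squares when the sample points outnumber the basis elements (regression), and taking the minimum-norm interpolating solution when they do not (underdetermined). Sample points and names are hypothetical; this is not the paper's poisedness machinery, just the model-building step.

```python
import numpy as np

def quad_basis(x, y):
    # Monomial basis for quadratics in two variables: 6 elements.
    return np.array([1.0, x, y, x * x, x * y, y * y])

def build_model(points, fvals):
    """Fit quadratic-model coefficients from sample points.
    With more points than basis elements this is least-squares
    regression; with fewer, np.linalg.lstsq returns the minimum-norm
    coefficient vector that interpolates the data."""
    M = np.array([quad_basis(px, py) for (px, py) in points])
    coef, *_ = np.linalg.lstsq(M, np.asarray(fvals), rcond=None)
    return coef

f = lambda x, y: 1.0 + 2.0 * x - y + x * x + 0.5 * x * y + 3.0 * y * y

# Regression: 10 generic points, 6 basis elements (overdetermined).
pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (2.0, 0.5),
       (0.5, 2.0), (-1.0, 0.3), (0.3, -1.0), (1.5, 1.2), (-0.7, -0.4)]
c_reg = build_model(pts, [f(px, py) for (px, py) in pts])

# Underdetermined: 4 points, 6 basis elements (minimum-norm solution).
pts4 = pts[:4]
c_min = build_model(pts4, [f(px, py) for (px, py) in pts4])
```

Since `f` itself lies in the span of the basis and the 10 points do not all sit on one conic, the regression recovers the true coefficients; the 4-point model merely interpolates the data.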
Sat, 01 Jan 2005 00:00:00 GMT
http://hdl.handle.net/10316/11386

Title: Dynamic evolution for risk-neutral densities
http://hdl.handle.net/10316/11214
Authors: Monteiro, Ana Margarida; Tütüncü, Reha H.; Vicente, Luís Nunes
Abstract: Option price data is often used to infer risk-neutral densities for future prices of an underlying asset. Given the prices of a set of options on the same underlying asset with different strikes and maturities, we propose a nonparametric approach for estimating the evolution of the risk-neutral density in time. Our method uses bicubic splines in order to achieve the desired smoothness for the estimation and an optimization model to choose the spline functions that best fit the price data. Semidefinite programming is employed to guarantee the nonnegativity of the densities. We illustrate the process using synthetic option price data generated using log-normal and absolute diffusion processes as well as actual price data for options on the S&P500 index.
We also used the risk-neutral densities that we computed to price exotic options and observed that this approach generates prices that closely approximate the market prices of these options.
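As a point of contrast with the spline/semidefinite-programming approach proposed here, a much cruder way to extract a risk-neutral density from call prices is the classical Breeden-Litzenberger relation: the density is the second derivative of the call price with respect to strike (with zero interest rate). The sketch below applies finite differences to synthetic log-normal (Black-Scholes) prices, mirroring the paper's synthetic test setup; all parameter values are illustrative assumptions.

```python
import math

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S0, K, sigma, T, r=0.0):
    """Black-Scholes call price, used here only to generate synthetic
    option prices from a log-normal model."""
    d1 = (math.log(S0 / K) + (r + 0.5 * sigma * sigma) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S0 * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

# Breeden-Litzenberger: approximate d^2 C / dK^2 by central differences.
S0, sigma, T, h = 100.0, 0.2, 1.0, 0.5
strikes = [40.0 + h * i for i in range(int((250.0 - 40.0) / h) + 1)]
prices = [bs_call(S0, K, sigma, T) for K in strikes]
density = [(prices[i + 1] - 2.0 * prices[i] + prices[i - 1]) / (h * h)
           for i in range(1, len(prices) - 1)]
mass = sum(q * h for q in density)   # should integrate to roughly one
```

Nonnegativity holds here because the Black-Scholes call price is convex in the strike; with noisy market data it can fail, which is exactly what the paper's semidefinite-programming constraint guards against.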
Tue, 01 Jan 2008 00:00:00 GMT
http://hdl.handle.net/10316/11214

Title: Incorporating minimum Frobenius norm models in direct search
http://hdl.handle.net/10316/11216
Authors: Custódio, Ana Luísa; Rocha, Humberto; Vicente, Luís Nunes
Abstract: The goal of this paper is to show that the use of minimum Frobenius norm quadratic models can improve the performance of direct-search methods. The approach taken here is to maintain the structure of directional direct-search methods, organized around a search and a poll step, and to use the set of previously evaluated points generated during a direct-search run to build the models. The minimization of the models within a trust region provides an enhanced search step. Our numerical results show that such a procedure can lead to a significant improvement of direct search for smooth, piecewise smooth, and stochastic and nonstochastic noisy problems.
Tue, 01 Jan 2008 00:00:00 GMT
http://hdl.handle.net/10316/11216

Title: Modelling nearby FGK Population I stars: A new form of estimating stellar parameters using an optimization approach
http://hdl.handle.net/10316/11217
Authors: Fernandes, João Manuel; Vaz, A. Ismael F.; Vicente, Luís Nunes
Abstract: Modelling a single star by means of theoretical stellar evolutionary tracks is a nontrivial problem due to the large number of unknowns compared to the number of observations. Currently, stellar age and mass are estimated using interpolations in grids of stellar models and/or isochrones, assuming ad-hoc approximations for the mixing length parameter and the metal to helium enrichment, normally scaled on the solar values. This paper presents a new method to model the FGK main sequence of stars of Population I, capable of simultaneously estimating the following stellar parameters: mass, age, helium and metals abundance, mixing length parameter, and overshooting.
Our approach is based on the application of a global optimization method (PSwarm) to solve the inverse, simulation-based optimization models of finding the values for the stellar parameters that better match the given observations. The evaluation of the fitting function requires the simulation of a stellar evolutionary code. In these models, the helium and the mixing length are not scaled on the Sun but, together with the overshooting, considered free optimization parameters. The optimization algorithm used by PSwarm is a rigorous direct-search method enriched by a population-based heuristic (particle swarm) to improve the ability to search for a global optimizer.
We develop a public-domain computational tool to interface the global optimization solver and the stellar evolutionary code. We test our method using the Sun and five FGK fictitious stars and then apply our methodology to about 135 stars with detailed spectroscopic analyses, including 74 planet host stars. We present and discuss the stellar parameters estimated for each star in the context of previous works. The impact of the results on stellar evolutionary studies is briefly discussed.
Tue, 01 Jan 2008 00:00:00 GMT
http://hdl.handle.net/10316/11217

Title: Global convergence of general derivative-free trust-region algorithms to first and second order critical points
http://hdl.handle.net/10316/11325
Authors: Conn, Andrew R.; Scheinberg, Katya; Vicente, Luís Nunes
Abstract: In this paper we prove global convergence for first and second-order stationarity points of a class of derivative-free trust-region methods for unconstrained optimization. These methods are based on the sequential minimization of linear or quadratic models built from evaluating the objective function at sample sets. The derivative-free models are required to satisfy Taylor-type bounds but, apart from that, the analysis is independent of the sampling techniques.
A number of new issues are addressed, including global convergence when acceptance of iterates is based on simple decrease of the objective function, trust-region radius maintenance at the criticality step, and global convergence for second-order critical points.
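A minimal sketch of a derivative-free trust-region iteration of the kind analyzed here: sample the objective on a fixed stencil, build a model (for simplicity, a diagonal quadratic from central differences, whereas the paper allows general sample sets and models), minimize it over the trust region, and accept or reject the step with the usual ratio test. Names, the simplified model, and the test function are assumptions.

```python
import numpy as np

def dfo_trust_region(f, x0, delta=1.0, tol=1e-6, max_iter=500):
    """Sketch: build a diagonal quadratic model by sampling f at x and
    x +- delta*e_i, minimize it over the infinity-norm trust region,
    and update the radius based on the achieved-vs-predicted decrease."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    fx = f(x)
    for _ in range(max_iter):
        if delta < tol:
            break
        g = np.zeros(n); h = np.zeros(n)
        for i in range(n):
            e = np.zeros(n); e[i] = delta
            fp, fm = f(x + e), f(x - e)
            g[i] = (fp - fm) / (2.0 * delta)          # model gradient
            h[i] = (fp - 2.0 * fx + fm) / (delta * delta)  # model curvature
        # Minimize the separable model coordinate-wise over [-delta, delta].
        s = np.empty(n)
        for i in range(n):
            if h[i] > 0.0:
                s[i] = np.clip(-g[i] / h[i], -delta, delta)
            else:
                s[i] = -delta if g[i] > 0 else delta
        pred = -(g @ s + 0.5 * np.sum(h * s * s))      # predicted decrease
        fs = f(x + s)
        rho = (fx - fs) / pred if pred > 1e-16 else -1.0
        if rho >= 0.1:               # successful step: accept, expand
            x, fx = x + s, fs
            delta = min(2.0 * delta, 10.0)
        else:                        # unsuccessful: shrink the region
            delta *= 0.5
    return x, fx

f = lambda z: (z[0] - 1.0) ** 2 + (z[1] + 2.0) ** 4
x_star, f_star = dfo_trust_region(f, [3.0, 0.0])
```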
Sun, 01 Jan 2006 00:00:00 GMT
http://hdl.handle.net/10316/11325

Title: Using simplex gradients of nonsmooth functions in direct search methods
http://hdl.handle.net/10316/11326
Authors: Custódio, A. L.; Dennis Jr., John E.; Vicente, Luís Nunes
Abstract: It has been shown recently that the efficiency of direct search methods that use opportunistic polling in positive spanning directions can be improved significantly by reordering the poll directions according to descent indicators built from simplex gradients.
The purpose of this paper is twofold. First, we analyze the properties of simplex gradients of nonsmooth functions in the context of direct search methods like the Generalized Pattern Search (GPS) and the Mesh Adaptive Direct Search (MADS), for which there exists a convergence analysis in the nonsmooth setting. Our analysis does not require continuous differentiability and can be seen as an extension of the accuracy properties of simplex gradients known for smooth functions. Second, we test the use of simplex gradients when pattern search is applied to nonsmooth functions, confirming the merit of the poll ordering strategy for such problems.
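The descent-indicator idea can be sketched directly: compute a least-squares simplex gradient from previously evaluated points, then reorder the poll directions by the indicator g . d so the most promising direction is polled first. A hypothetical minimal version (function and sample points are illustrative):

```python
import numpy as np

def simplex_gradient(x, samples, fvals, fx):
    """Least-squares simplex gradient from previously evaluated points:
    solve S g ~= delta_f, where rows of S are the displacements."""
    S = np.array([s - x for s in samples])
    df = np.array(fvals) - fx
    g, *_ = np.linalg.lstsq(S, df, rcond=None)
    return g

def order_poll_directions(D, g):
    # Sort poll directions by the descent indicator g . d (ascending),
    # so the most promising direction is polled first.
    return sorted(D, key=lambda d: float(np.dot(g, d)))

f = lambda z: 3.0 * z[0] + z[1] + 0.01 * (z[0] ** 2 + z[1] ** 2)
x = np.array([0.0, 0.0]); fx = f(x)
samples = [np.array([0.1, 0.0]), np.array([0.0, 0.1]), np.array([0.1, 0.1])]
g = simplex_gradient(x, samples, [f(s) for s in samples], fx)

# Coordinate directions and their negatives (a positive spanning set).
D = [np.array(d) for d in ([1, 0], [-1, 0], [0, 1], [0, -1])]
ordered = order_poll_directions(D, g)
```

With opportunistic polling, evaluating `ordered[0]` first tends to produce a successful iteration sooner, which is the efficiency gain the abstract refers to.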
Sun, 01 Jan 2006 00:00:00 GMT
http://hdl.handle.net/10316/11326

Title: A globally convergent primal-dual interior-point filter method for nonlinear programming: new filter optimality measures and computational results
http://hdl.handle.net/10316/11218
Authors: Silva, Renata; Ulbrich, Michael; Ulbrich, Stefan; Vicente, Luís Nunes
Tue, 01 Jan 2008 00:00:00 GMT
http://hdl.handle.net/10316/11218

Title: Trust-Region Methods Without Using Derivatives: Worst Case Complexity and the NonSmooth Case
http://hdl.handle.net/10316/44582
Authors: Garmanjani, Rohollah; Júdice, Diogo; Vicente, Luís Nunes
Abstract: Trust-region methods are a broad class of methods for continuous optimization that found application in a variety of problems and contexts. In particular, they have been studied and applied for problems without using derivatives. The analysis of trust-region derivative-free methods has focused on global convergence, and they have been proven to generate a sequence of iterates converging to stationarity independently of the starting point. Most of this analysis is carried out in the smooth case, and, moreover, little is known about the complexity or global rate of these methods. In this paper, we start by analyzing the worst case complexity of general trust-region derivative-free methods for smooth functions. For the nonsmooth case, we propose a smoothing approach, for which we prove global convergence and bound the worst case complexity effort. For the special case of nonsmooth functions that result from the composition of smooth and nonsmooth/convex components, we show how to improve the existing results of the literature and make them applicable to the general methodology.
Fri, 01 Jan 2016 00:00:00 GMT
http://hdl.handle.net/10316/44582

Title: Error estimates and poisedness in multivariate polynomial interpolation
http://hdl.handle.net/10316/11438
Authors: Conn, Andrew R.; Scheinberg, Katya; Vicente, Luís Nunes
Abstract: We show how to derive error estimates between a function and its interpolating polynomial and between their corresponding derivatives. The derivation is based on a new definition of well-poisedness for the interpolation set, directly connecting the accuracy of the error estimates with the geometry of the points in the set. This definition is equivalent to the boundedness of Lagrange polynomials, but it provides new geometric intuition. Our approach extracts the error bounds for all of the derivatives using the same analysis; the error bound for the function values is then derived a posteriori.
We also develop an algorithm to build a set of well-poised interpolation points or to modify an existing set to ensure its well-poisedness. We comment on the optimal geometries corresponding to the best possible well-poised sets in the case of linear interpolation.
Wed, 01 Jan 2003 00:00:00 GMT
http://hdl.handle.net/10316/11438

Title: Using sampling and simplex derivatives in pattern search methods
http://hdl.handle.net/10316/11401
Authors: Custódio, Ana Luísa; Vicente, Luís Nunes
Abstract: Pattern search methods can be made more efficient if past function evaluations are appropriately reused. In this paper we will introduce a number of ways of reusing previous evaluations of the objective function, based on the computation of simplex derivatives (e.g., simplex gradients), to improve the efficiency of a pattern search iteration.
At each iteration of a pattern search method, one can attempt to compute an accurate simplex gradient by identifying a sampling set of previous iterates with good geometrical properties. This simplex gradient computation can be done using only past successful iterates or by considering all past function evaluations.
The simplex gradient can then be used, for instance, to reorder the evaluations of the objective function associated with the positive spanning set or positive basis used in the poll step. But it can also be used to update the mesh size parameter according to a sufficient decrease criterion. None of these modifications demands new function evaluations. A search step can also be tried along the negative simplex gradient at the beginning of the current pattern search iteration.
We will present these procedures in detail and show how promising they are to enhance the practical performance of pattern search methods.
Thu, 01 Jan 2004 00:00:00 GMT
http://hdl.handle.net/10316/11401

Title: Levenberg-Marquardt Methods Based on Probabilistic Gradient Models and Inexact Subproblem Solution, with Application to Data Assimilation
http://hdl.handle.net/10316/45235
Authors: Bergou, E.; Gratton, S.; Vicente, Luís Nunes
Abstract: The Levenberg-Marquardt algorithm is one of the most popular algorithms for the solution of nonlinear least squares problems. Motivated by the problem structure in data assimilation, we consider in this paper the extension of the classical Levenberg-Marquardt algorithm to the scenarios where the linearized least squares subproblems are solved inexactly and/or the gradient model is noisy and accurate only within a certain probability. Under appropriate assumptions, we show that the modified algorithm converges globally to a first order stationary point with probability one. Our proposed approach is first tested on simple problems where the exact gradient is perturbed with a Gaussian noise or only called with a certain probability. It is then applied to an instance in variational data assimilation where stochastic models of the gradient are computed by the so-called ensemble methods.
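The classical (exact-gradient, exact-subproblem) Levenberg-Marquardt iteration that this paper extends solves (J^T J + lam I) step = -J^T r and adapts the damping parameter depending on whether the step reduces the cost. A minimal sketch; the damping update rule and the test problem are illustrative assumptions, not the paper's probabilistic variant:

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, x0, lam=1e-2, max_iter=100, tol=1e-10):
    """Classical Levenberg-Marquardt for min 0.5 * ||r(x)||^2."""
    x = np.asarray(x0, dtype=float)
    r = residual(x)
    cost = 0.5 * float(r @ r)
    for _ in range(max_iter):
        J = jacobian(x)
        g = J.T @ r
        if np.linalg.norm(g) < tol:
            break
        step = np.linalg.solve(J.T @ J + lam * np.eye(x.size), -g)
        r_new = residual(x + step)
        cost_new = 0.5 * float(r_new @ r_new)
        if cost_new < cost:   # successful: accept, move toward Gauss-Newton
            x, r, cost = x + step, r_new, cost_new
            lam = max(lam / 3.0, 1e-12)
        else:                 # unsuccessful: increase the regularization
            lam *= 2.0
    return x

# Hypothetical test problem: fit y = a * exp(b * t) to noise-free data.
t = np.linspace(0.0, 2.0, 20)
y = 2.0 * np.exp(0.5 * t)
residual = lambda p: p[0] * np.exp(p[1] * t) - y
jacobian = lambda p: np.column_stack([np.exp(p[1] * t),
                                      p[0] * t * np.exp(p[1] * t)])
p_hat = levenberg_marquardt(residual, jacobian, [1.0, 0.1])
```

The paper's setting replaces the exact `jacobian`/gradient with a probabilistic model and solves the linear subproblem inexactly, while keeping this overall damped-step structure.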
Fri, 01 Jan 2016 00:00:00 GMT
http://hdl.handle.net/10316/45235

Title: Chapter 37: Methodologies and Software for Derivative-Free Optimization
http://hdl.handle.net/10316/45712
Authors: Custódio, Ana Luísa; Scheinberg, Katya; Nunes Vicente, Luís
Abstract: Derivative-free optimization (DFO) methods [502] are typically considered for the minimization/maximization of functions for which the corresponding derivatives neither are available for use nor can be directly approximated by numerical techniques. Constraints may be part of the problem definition, but, similar to the objective function, it is possible that their derivatives are not available. Problems of this type are common in engineering optimization, where the value of the functions is often computed by simulation and may be subject to statistical noise or other forms of inaccuracy. In fact, expensive function evaluations would prevent approximation of derivatives, and, even when computed, noise would make such approximations less reliable. In the past couple of decades, intense research has resulted in robust and efficient DFO methods, accompanied by convergence theory and numerical implementations.
Sun, 01 Jan 2017 00:00:00 GMT
http://hdl.handle.net/10316/45712

Title: Inexact solution of NLP subproblems in MINLP
http://hdl.handle.net/10316/45710
Authors: Li, Min; Vicente, Luís Nunes
Abstract: In the context of convex mixed integer nonlinear programming (MINLP), we investigate how the outer approximation method and the generalized Benders decomposition method are affected when the respective nonlinear programming (NLP) subproblems are solved inexactly. We show that the cuts in the corresponding master problems can be changed to incorporate the inexact residuals, still rendering equivalence and finiteness in the limit case. Some numerical results will be presented to illustrate the behavior of the methods under NLP subproblem inexactness.
Tue, 01 Jan 2013 00:00:00 GMT
http://hdl.handle.net/10316/45710

Title: Globally convergent evolution strategies for constrained optimization
http://hdl.handle.net/10316/45496
Authors: Diouane, Y.; Gratton, S.; Vicente, Luís Nunes
Abstract: In this paper we propose, analyze, and test algorithms for constrained optimization when no use of derivatives of the objective function is made. The proposed methodology is built upon the globally convergent evolution strategies previously introduced by the authors for unconstrained optimization. Two approaches are encompassed to handle the constraints. In a first approach, feasibility is first enforced by a barrier function and the objective function is then evaluated directly at the feasible generated points. A second approach projects first all the generated points onto the feasible domain before evaluating the objective function. The resulting algorithms enjoy favorable global convergence properties (convergence to stationarity from arbitrary starting points), regardless of the linearity of the constraints. The algorithmic implementation (i) includes a step where previously evaluated points are used to accelerate the search (by minimizing quadratic models) and (ii) addresses the particular cases of bounds on the variables and linear constraints. Our solver is compared to others, and the numerical results confirm its competitiveness in terms of efficiency and robustness.
Thu, 01 Jan 2015 00:00:00 GMT
http://hdl.handle.net/10316/45496

Title: Worst case complexity of direct search
http://hdl.handle.net/10316/45707
Authors: Vicente, Luís Nunes
Abstract: In this paper, we prove that the broad class of direct-search methods of directional type based on imposing sufficient decrease to accept new iterates shares the worst case complexity bound of steepest descent for the unconstrained minimization of a smooth function, more precisely that the number of iterations needed to reduce the norm of the gradient of the objective function below a certain threshold is at most proportional to the inverse of the threshold squared. In direct-search methods, the objective function is evaluated, at each iteration, at a finite number of points. No derivatives are required. The action of declaring an iteration successful (moving into a point of lower objective function value) or unsuccessful (staying at the same iterate) is based on objective function value comparisons. Some of these methods are directional in the sense of moving along predefined directions along which the objective function will eventually decrease for sufficiently small step sizes. The worst case complexity bounds derived measure the maximum number of iterations as well as the maximum number of objective function evaluations required to find a point with a required norm of the gradient of the objective function, and are proved for such directional direct-search methods when a sufficient decrease condition based on the size of the steps is imposed to accept new iterates.
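A minimal instance of the class analyzed here: poll the coordinate directions and their negatives (a positive spanning set), accept a new iterate only under a sufficient decrease condition based on the size of the step (here c * alpha^2), and halve the step size on unsuccessful iterations. The objective function and constants are illustrative assumptions:

```python
def direct_search(f, x0, alpha=1.0, c=1e-4, alpha_min=1e-6):
    """Directional direct search with sufficient decrease and
    opportunistic polling over the coordinate directions."""
    x = list(x0)
    fx = f(x)
    n = len(x)
    D = [(i, s) for i in range(n) for s in (1.0, -1.0)]
    while alpha >= alpha_min:
        success = False
        for i, s in D:
            y = list(x)
            y[i] += s * alpha
            fy = f(y)
            if fy < fx - c * alpha * alpha:    # sufficient decrease test
                x, fx, success = y, fy, True
                break                           # opportunistic polling
        if not success:
            alpha *= 0.5                        # unsuccessful iteration
    return x, fx

f = lambda z: (z[0] - 0.3) ** 2 + 2.0 * (z[1] + 0.7) ** 2
x_best, f_best = direct_search(f, [2.0, 2.0])
```

The complexity result in the abstract bounds how many such iterations (and evaluations, 2n per unsuccessful poll) can occur before the gradient norm drops below a threshold.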
Tue, 01 Jan 2013 00:00:00 GMT
http://hdl.handle.net/10316/45707

Title: Convergence of Trust-Region Methods Based on Probabilistic Models
http://hdl.handle.net/10316/45698
Authors: Bandeira, A. S.; Scheinberg, K.; Vicente, Luís Nunes
Abstract: In this paper we consider the use of probabilistic or random models within a classical trust-region framework for optimization of deterministic smooth general nonlinear functions. Our method and setting differ from many stochastic optimization approaches in two principal ways. First, we assume that the value of the function itself can be computed without noise, in other words, that the function is deterministic. Second, we use random models of higher quality than those produced by the usual stochastic gradient methods. In particular, a first order model based on random approximation of the gradient is required to provide sufficient quality of approximation with probability $\geq 1/2$. This is in contrast with stochastic gradient approaches, where the model is assumed to be “correct” only in expectation. As a result of this particular setting, we are able to prove convergence, with probability one, of a trust-region method which is almost identical to the classical method. Moreover, the new method is simpler than its deterministic counterpart as it does not require a criticality step. Hence we show that a standard optimization framework can be used in cases when models are random and may or may not provide good approximations, as long as “good” models are more likely than “bad” models. Our results are based on the use of properties of martingales. Our motivation comes from using random sample sets and interpolation models in derivative-free optimization. However, our framework is general and can be applied with any source of uncertainty in the model. We discuss various applications for our methods in the paper.
Wed, 01 Jan 2014 00:00:00 GMT
http://hdl.handle.net/10316/45698

Title: Smoothing and worst-case complexity for direct-search methods in nonsmooth optimization
http://hdl.handle.net/10316/45705
Authors: Garmanjani, Rohollah; Vicente, Luís Nunes
Abstract: In the context of the derivative-free optimization of a smooth objective function, it has been shown that the worst-case complexity of direct-search methods is of the same order as that of the steepest descent for derivative-based optimization; more precisely, the number of iterations needed to reduce the norm of the gradient of the objective function below a certain threshold is proportional to the inverse of the threshold squared. Motivated by the lack of such a result in the nonsmooth case, we propose, analyse, and test a class of smoothing direct-search methods for the unconstrained optimization of nonsmooth functions. Given a parameterized family of smoothing functions for the nonsmooth objective function dependent on a smoothing parameter, this class of methods consists of applying a direct-search algorithm for a fixed value of the smoothing parameter until the step size is relatively small, after which the smoothing parameter is reduced and the process is repeated. One can show that the worst-case complexity (or cost) of this procedure is roughly one order of magnitude worse than the one for direct search or steepest descent on smooth functions. The class of smoothing direct-search methods is also shown to enjoy asymptotic global convergence properties. Some preliminary numerical experiments indicate that this approach leads to better values of the objective function, in some cases pushing the optimization further, apparently without an additional cost in the number of function evaluations.
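The outer smoothing loop described here can be sketched with a standard smoothing of the absolute value, sqrt(x^2 + mu^2), as the parameterized family (an assumption for illustration; the paper works with general smoothing functions): run a direct-search method on the smoothed objective until the step size is small relative to mu, then reduce mu and repeat.

```python
import math

def compass_search(f, x0, alpha, alpha_min, c=1e-4):
    # Inner solver: coordinate direct search with sufficient decrease.
    x = list(x0); fx = f(x)
    while alpha >= alpha_min:
        success = False
        for i in range(len(x)):
            for s in (1.0, -1.0):
                y = list(x); y[i] += s * alpha
                fy = f(y)
                if fy < fx - c * alpha * alpha:
                    x, fx, success = y, fy, True
                    break
            if success:
                break
        if not success:
            alpha *= 0.5
    return x

def smoothing_direct_search(f_smooth, x0, mu=1.0, mu_min=1e-6):
    """Outer loop of the smoothing scheme: optimize f_smooth(., mu)
    until the step size is small relative to mu, then shrink mu."""
    x = list(x0)
    while mu >= mu_min:
        x = compass_search(lambda z: f_smooth(z, mu), x,
                           alpha=1.0, alpha_min=0.1 * mu)
        mu *= 0.1
    return x

# Nonsmooth objective |x1| + x2^2, smoothed in its first term.
f_smooth = lambda z, mu: math.sqrt(z[0] * z[0] + mu * mu) + z[1] * z[1]
x_hat = smoothing_direct_search(f_smooth, [1.7, -1.3])
```

Tying the inner stopping tolerance to mu is what drives the extra order of magnitude in the worst-case cost mentioned in the abstract.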
Tue, 01 Jan 2013 00:00:00 GMT
http://hdl.handle.net/10316/45705

Title: A surrogate management framework using rigorous trust-region steps
http://hdl.handle.net/10316/45703
Authors: Gratton, S.; Vicente, Luís Nunes
Abstract: Surrogate models are frequently used in the optimization engineering community as convenient approaches to deal with functions for which evaluations are expensive or noisy, or lack convexity. These methodologies do not typically guarantee any type of convergence under reasonable assumptions. In this article, we will show how to incorporate the use of surrogate models, heuristics, or any other process of attempting a function value decrease in trust-region algorithms for unconstrained derivative-free optimization, in a way that global convergence of the latter algorithms to stationary points is retained. Our approach follows the lines of search/poll direct-search methods and corresponding surrogate management frameworks, both in algorithmic design and in the form of organizing the convergence theory.
Wed, 01 Jan 2014 00:00:00 GMT
http://hdl.handle.net/10316/45703

Title: Globally convergent evolution strategies
http://hdl.handle.net/10316/45499
Authors: Diouane, Y.; Gratton, S.; Vicente, Luís Nunes
Abstract: In this paper we show how to modify a large class of evolution strategies (ES’s) for unconstrained optimization to rigorously achieve a form of global convergence, meaning convergence to stationary points independently of the starting point. The type of ES under consideration recombines the parent points by means of a weighted sum, around which the offspring points are computed by random generation. One relevant instance of such an ES is covariance matrix adaptation ES (CMA-ES). The modifications consist essentially of the reduction of the size of the steps whenever a sufficient decrease condition on the function values is not verified. When such a condition is satisfied, the step size can be reset to the step size maintained by the ES’s themselves, as long as this latter one is sufficiently large. We suggest a number of ways of imposing sufficient decrease for which global convergence holds under reasonable assumptions (in particular density of certain limit directions in the unit sphere). Given a limited budget of function evaluations, our numerical experiments have shown that the modified CMA-ES is capable of further progress in function values. Moreover, we have observed that such an improvement in efficiency comes without weakening significantly the performance of the underlying method in the presence of several local minimizers.
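The proposed modification can be sketched for a simple (mu/mu, lambda)-ES with equal recombination weights: offspring are sampled around the current point, the best mu are recombined into a trial point, and the step size is halved whenever the trial fails a sufficient decrease test on the function values. All parameter choices below are illustrative assumptions; this is not CMA-ES (no covariance adaptation).

```python
import numpy as np

def modified_es(f, x0, sigma=0.5, lam=10, mu=5, c=1e-4, max_iter=300, seed=0):
    """Sketch of an ES with weighted (here, equal-weight) recombination
    and a sufficient decrease safeguard on the step size."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(max_iter):
        # Offspring generated randomly around the current parent point.
        Y = x + sigma * rng.standard_normal((lam, x.size))
        fY = np.array([f(y) for y in Y])
        best = Y[np.argsort(fY)[:mu]]
        trial = best.mean(axis=0)                 # recombination step
        f_trial = f(trial)
        if f_trial <= fx - c * sigma * sigma:     # sufficient decrease holds
            x, fx = trial, f_trial
            sigma = min(1.5 * sigma, 1.0)         # allow the step to regrow
        else:                                     # otherwise contract it
            sigma *= 0.5
    return x, fx

sphere = lambda z: float(np.sum(z * z))
x_es, f_es = modified_es(sphere, [1.0, 1.0])
```

In the paper, the step size can also be reset to the value maintained by the underlying ES itself when that value is large enough, which preserves more of the heuristic's native behavior than the simple rule above.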
Thu, 01 Jan 2015 00:00:00 GMT
http://hdl.handle.net/10316/45499

Title: Globally convergent DC trust-region methods
http://hdl.handle.net/10316/45702
Authors: Le Thi, Hoai An; Huynh, Van Ngai; Dinh, Tao Pham; Vaz, A. Ismael F.; Vicente, Luís Nunes
Abstract: In this paper, we investigate the use of DC (Difference of Convex functions) models and algorithms in the application of trust-region methods to the solution of a class of nonlinear optimization problems where the constrained set is closed and convex (and, from a practical point of view, where projecting onto the feasible region is computationally affordable). We consider DC local models for the quadratic model of the objective function used to compute the trust-region step, and apply a primal-dual subgradient method to the solution of the corresponding trust-region subproblems. One is able to prove that the resulting scheme is globally convergent to first-order stationary points. The theory requires the use of exact second-order derivatives but, in turn, the computation of the trust-region step asks only for one projection onto the feasible region (in comparison to the calculation of the generalized Cauchy point which may require more). The numerical efficiency and robustness of the proposed new scheme when applied to bound-constrained problems is measured by comparing its performance against some of the current state-of-the-art nonlinear programming solvers on a vast collection of test problems.
Wed, 01 Jan 2014 00:00:00 GMT
http://hdl.handle.net/10316/45702

Title: An indicator for the switch from derivative-free to derivative-based optimization
http://hdl.handle.net/10316/44581
Authors: Gratton, Serge; Soualmi, Nacer; Vicente, Luís Nunes
Abstract: In some optimization problems found in applications, the derivatives of the objective function can be computed or approximated but at an expensive cost, and it is desirable to know when to use derivative-free methods (such as direct search, for instance) or derivative-based methods (such as gradient or quasi-Newton methods). Derivative-free methods may achieve a steady initial progress for some problems, but after some advance they may also become slower or even stagnate due to the lack of derivatives. It is thus of interest to provide a way to appropriately switch from a derivative-free method to a derivative-based one. In this paper, we develop a family of indicators for such a switch based on the decrease properties of both classes of methods (typically used when deriving worst case complexity bounds).
Sun, 01 Jan 2017 00:00:00 GMT
http://hdl.handle.net/10316/44581

Title: Prediction of chronic damage in systemic lupus erythematosus by using machine-learning models
http://hdl.handle.net/10316/45495
Authors: Ceccarelli, Fulvia; Sciandrone, Marco; Perricone, Carlo; Galvan, Giulio; Morelli, Francesco; Vicente, Luís Nunes; Leccese, Ilaria; Massaro, Laura; Cipriano, Enrica; Spinelli, Francesca Romana; Alessandri, Cristiano; Valesini, Guido; Conti, Fabrizio
Abstract: The increased survival in Systemic Lupus Erythematosus (SLE) patients implies the development of chronic damage, occurring in up to 50% of cases. Its prevention is a major goal in the SLE management. We aimed at predicting chronic damage in a large monocentric SLE cohort by using neural networks.
Sun, 01 Jan 2017 00:00:00 GMT
http://hdl.handle.net/10316/45495

Title: A Merit Function Approach for Direct Search
http://hdl.handle.net/10316/45560
Authors: Gratton, Serge; Vicente, Luís Nunes
Abstract: In this paper it is proposed to equip direct-search methods with a general procedure to minimize an objective function, possibly nonsmooth, without using derivatives and subject to constraints on the variables. One aims at considering constraints, most likely nonlinear or nonsmooth, for which the derivatives of the corresponding functions are also unavailable. The novelty of this contribution relies mostly on how relaxable constraints are handled. Such constraints, which can be relaxed during the course of the optimization, are taken care of by a merit function and, if necessary, by a restoration procedure. Constraints that are unrelaxable, when present, are treated by an extreme barrier approach. One is able to show that the resulting merit function direct-search algorithm exhibits global convergence properties for first-order stationary constraints. As in the progressive barrier method [C. Audet and J. E. Dennis Jr., SIAM J. Optim., 20 (2009), pp. 445--472], we provide a mechanism to indicate the transfer of constraints from the relaxable set to the unrelaxable one.
Wed, 01 Jan 2014 00:00:00 GMT
http://hdl.handle.net/10316/45560
2014-01-01T00:00:00Z

Title: On the optimal order of worst case complexity of direct search
http://hdl.handle.net/10316/45237
Authors: Dodangeh, Mahdi; Vicente, Luís Nunes; Zhang, Zaikun
Abstract: The worst case complexity of direct-search methods has been recently analyzed when they use positive spanning sets and impose a sufficient decrease condition to accept new iterates. For smooth unconstrained optimization, it is now known that such methods require at most \mathcal{O}(n^2\epsilon^{-2}) function evaluations to compute a gradient of norm below \epsilon \in (0,1), where n is the dimension of the problem. This maximal effort is reduced to \mathcal{O}(n^2\epsilon^{-1}) if the function is convex. The factor n^2 has been derived using the positive spanning set formed by the coordinate vectors and their negatives at all iterations. In this paper, we prove that such a factor of n^2 is optimal in these worst case complexity bounds, in the sense that no other positive spanning set will yield a better order of n. The proof is based on an observation that reveals the connection between the cosine measure of positive spanning sets and sphere covering.
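The factor n^2 can be made concrete with the coordinate set D = [I, -I]: its worst-covered direction is the all-ones vector, giving a cosine measure of exactly 1/sqrt(n), and since the iteration bound scales with the inverse square of the cosine measure while each poll costs |D| = 2n evaluations, both contribute a factor of order n. A small illustrative sketch (not from the paper):

```python
import math, random

def cos_with_set(v, D):
    """max over d in D of cos(v, d): how well the set D covers direction v."""
    nv = math.sqrt(sum(x * x for x in v))
    return max(sum(vi * di for vi, di in zip(v, d)) /
               (nv * math.sqrt(sum(di * di for di in d))) for d in D)

def coordinate_set(n):
    """Coordinate vectors and their negatives: the positive spanning
    set behind the n^2 factor."""
    D = [[float(j == i) for j in range(n)] for i in range(n)]
    return D + [[-c for c in d] for d in D]

n = 16
D = coordinate_set(n)
# The all-ones direction is worst covered: its best cosine with the
# set is exactly 1/sqrt(n) = 0.25 for n = 16.
worst = cos_with_set([1.0] * n, D)
# Random directions never do worse than 1/sqrt(n).
random.seed(0)
sampled = min(cos_with_set([random.gauss(0, 1) for _ in range(n)], D)
              for _ in range(2000))
```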
Fri, 01 Jan 2016 00:00:00 GMT
http://hdl.handle.net/10316/45237
2016-01-01T00:00:00Z

Title: A second-order globally convergent direct-search method and its worst-case complexity
http://hdl.handle.net/10316/45243
Authors: Gratton, S.; Royer, C. W.; Vicente, Luís Nunes
Abstract: Direct-search algorithms form one of the main classes of algorithms for smooth unconstrained derivative-free optimization, due to their simplicity and their well-established convergence results. They proceed by iteratively looking for improvement along some vectors or directions. In the presence of smoothness, first-order global convergence comes from the ability of the vectors to approximate the steepest descent direction, which can be quantified by a first-order criticality (cosine) measure. The use of a set of vectors with a positive cosine measure, together with the imposition of a sufficient decrease condition to accept new iterates, leads to a convergence result as well as a worst-case complexity bound. In this paper, we present a second-order study of a general class of direct-search methods. We start by proving a weak second-order convergence result related to a criticality measure defined along the directions used throughout the iterations. Extensions of this result to a true second-order optimality result are discussed, one possibility being a method using approximate Hessian eigenvectors as directions (which is proved to be truly second-order globally convergent). Numerically guaranteeing such convergence can be rather expensive, as indicated by the worst-case complexity analysis provided in this paper, but it turns out to be appropriate for some pathological examples.
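A concrete way to see why approximate Hessian eigenvector directions help: at a saddle point, coordinate polling may observe no decrease at all, while the eigenvector of the most negative curvature does. A hypothetical sketch (finite-difference Hessian, hard-coded for n = 2; not the paper's algorithm):

```python
import math

def fd_hessian(f, x, h=1e-4):
    """Central finite-difference approximation of the Hessian of f at x."""
    n = len(x)
    H = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            def at(si, sj):
                y = list(x)
                y[i] += si * h
                y[j] += sj * h
                return f(y)
            H[i][j] = (at(1, 1) - at(1, -1) - at(-1, 1) + at(-1, -1)) / (4 * h * h)
    return H

def min_curvature_dir(H):
    """Unit eigenvector of the smallest eigenvalue of a symmetric
    2x2 matrix [[a, b], [b, c]], via the closed-form eigenvalue."""
    a, b, c = H[0][0], H[0][1], H[1][1]
    lam = 0.5 * (a + c - math.sqrt((a - c) ** 2 + 4 * b * b))
    v = (b, lam - a) if abs(b) > 1e-12 else ((1.0, 0.0) if a <= c else (0.0, 1.0))
    nv = math.hypot(v[0], v[1])
    return (v[0] / nv, v[1] / nv)

# f(x, y) = x * y has a saddle at the origin: every coordinate poll
# f(+-alpha, 0), f(0, +-alpha) returns exactly 0, but stepping along the
# negative curvature eigenvector gives f = -alpha^2 / 2 < 0.
saddle = lambda p: p[0] * p[1]
d = min_curvature_dir(fd_hessian(saddle, [0.0, 0.0]))
```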
Fri, 01 Jan 2016 00:00:00 GMT
http://hdl.handle.net/10316/45243
2016-01-01T00:00:00Z

Title: Worst case complexity of direct search under convexity
http://hdl.handle.net/10316/45246
Authors: Dodangeh, Mahdi; Vicente, Luís Nunes
Abstract: In this paper we prove that the broad class of direct-search methods of directional type, based on imposing sufficient decrease to accept new iterates, exhibits the same worst case complexity bound and global rate as the gradient method for the unconstrained minimization of a convex and smooth function. More precisely, we show that the number of iterations needed to reduce the norm of the gradient of the objective function below a certain threshold is at most proportional to the inverse of the threshold. We also show that the absolute error in the function values decays at a sublinear rate proportional to the inverse of the iteration counter. In addition, we prove that the sequences of absolute errors in the function values and in the iterates converge r-linearly in the strongly convex case.
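The decay behavior described in the abstract is easy to observe empirically. A hedged sketch (not the paper's analysis) that records the best function value per iteration of a simple coordinate direct search with sufficient decrease on a strongly convex quadratic; plotting the logarithm of the history against the iteration counter exhibits the r-linear behavior:

```python
def coordinate_search(f, x0, alpha=1.0, tol=1e-6, max_iter=5000):
    """Directional direct search with sufficient decrease; returns the
    final iterate and the history of best function values, one entry
    per iteration, so the decay rate can be inspected."""
    n = len(x0)
    D = [[float(j == i) for j in range(n)] for i in range(n)]
    D += [[-c for c in d] for d in D]
    x, fx, history = list(x0), f(x0), []
    for _ in range(max_iter):
        if alpha < tol:
            break
        for d in D:
            y = [xi + alpha * di for xi, di in zip(x, d)]
            if f(y) < fx - alpha * alpha:   # sufficient decrease
                x, fx = y, f(y)
                alpha *= 2.0
                break
        else:
            alpha *= 0.5
        history.append(fx)                  # best value so far
    return x, history

# Strongly convex quadratic with minimum value 0 at the origin.
quad = lambda p: 2 * p[0] ** 2 + p[1] ** 2
x, hist = coordinate_search(quad, [3.0, 2.0])
```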
Fri, 01 Jan 2016 00:00:00 GMT
http://hdl.handle.net/10316/45246
2016-01-01T00:00:00Z

Title: Direct Search Based on Probabilistic Descent
http://hdl.handle.net/10316/45248
Authors: Gratton, S.; Royer, C. W.; Vicente, Luís Nunes; Zhang, Zaikun
Abstract: Direct-search methods are a class of popular derivative-free algorithms characterized by evaluating the objective function using a step size and a number of (polling) directions. When applied to the minimization of smooth functions, the polling directions are typically taken from positive spanning sets, which in turn must have at least $n+1$ vectors in an $n$-dimensional variable space. In addition, to ensure the global convergence of these algorithms, the positive spanning sets used throughout the iterations are required to be uniformly nondegenerate in the sense of having a positive (cosine) measure bounded away from zero. However, recent numerical results indicated that randomly generating the polling directions without imposing the positive spanning property can improve the performance of these methods, especially when the number of directions is chosen considerably smaller than $n+1$. In this paper, we analyze direct-search algorithms when the polling directions are probabilistic descent, meaning that with a certain probability at least one of them is of descent type. Such a framework enjoys almost-sure global convergence. More interestingly, we show a global decay rate of $1/\sqrt{k}$ for the gradient size, with overwhelmingly high probability, matching the corresponding rate for the deterministic versions of the gradient method or of direct search. Our analysis helps us understand numerical behavior and the choice of the number of polling directions.
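A minimal sketch of the probabilistic descent idea (illustrative only; the paper's algorithm and analysis are more general): each iteration polls just m = 2 independent random unit directions, far fewer than the $n+1$ required for a positive spanning set, and the usual step-size mechanism still drives convergence with high probability.

```python
import math, random

def probabilistic_descent(f, x0, m=2, alpha=1.0, tol=1e-7, max_iter=20000, seed=0):
    """Direct search polling m random unit directions per iteration
    (no positive spanning requirement), with sufficient decrease
    forcing term alpha^2.  A toy sketch of the probabilistic descent
    framework, not the paper's exact algorithm."""
    rng = random.Random(seed)
    n = len(x0)
    x, fx = list(x0), f(x0)
    for _ in range(max_iter):
        if alpha < tol:
            break
        success = False
        for _ in range(m):
            g = [rng.gauss(0, 1) for _ in range(n)]
            ng = math.sqrt(sum(gi * gi for gi in g))
            d = [gi / ng for gi in g]          # uniform on the unit sphere
            y = [xi + alpha * di for xi, di in zip(x, d)]
            fy = f(y)
            if fy < fx - alpha * alpha:        # sufficient decrease
                x, fx, success = y, fy, True
                break
        alpha = 2.0 * alpha if success else 0.5 * alpha
    return x, fx
```

With probability roughly 1/2 a uniformly random direction makes an acute angle with the negative gradient, so two independent directions succeed often enough for the step size not to collapse prematurely.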
Thu, 01 Jan 2015 00:00:00 GMT
http://hdl.handle.net/10316/45248
2015-01-01T00:00:00Z

Title: A parallel evolution strategy for an earth imaging problem in geophysics
http://hdl.handle.net/10316/45245
Authors: Diouane, Y.; Gratton, S.; Vasseur, X.; Vicente, Luís Nunes; Calandra, H.
Abstract: In this paper we propose a new way to compute a rough approximate solution, to be later used as a warm starting point in a more refined optimization process, for a challenging global optimization problem related to earth imaging in geophysics. The warm start consists of a velocity model that approximately solves a full-waveform inverse problem at low frequency. Our motivation arises from the availability of massively parallel computing platforms and the natural parallelization of evolution strategies as global optimization methods for continuous variables. Our first contribution consists of developing a new and efficient parametrization of the velocity models to significantly reduce the dimension of the original optimization space. Our second contribution is to adapt a class of evolution strategies to the specificity of the physical problem at hand, where the objective function evaluation is known to be the most expensive computational part. A third contribution is the development of a parallel evolution strategy solver, taking advantage of a recently proposed modification of this class of evolutionary methods that ensures convergence and promotes better performance under moderate budgets. The numerical results presented demonstrate the effectiveness of the algorithm on a realistic 3D full-waveform inverse problem in geophysics. The developed numerical approach allows us to successfully solve an acoustic full-waveform inversion problem at low frequencies on a reasonable number of cores of a distributed memory computer.
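The class of methods being adapted can be sketched, in a much simplified and hedged form, as an elitist evolution strategy with a sufficient decrease acceptance test on the best offspring; the actual solver, its velocity-model parametrization, and its parallel implementation are far more involved, and all parameter values below are illustrative.

```python
import random

def evolution_strategy(f, x0, sigma=0.5, lam=10, max_gen=500, seed=1):
    """Much simplified elitist (1+lambda) evolution strategy with a
    sufficient decrease test (sigma^2 forcing term), in the spirit of
    the globally convergent variant referenced above.  The lam
    objective evaluations per generation are independent, which is
    what makes this class of methods naturally parallel."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    for _ in range(max_gen):
        # In a parallel solver each offspring evaluation would run on
        # its own worker; here they run sequentially.
        fy, y = min((f(y), y) for y in
                    ([xi + sigma * rng.gauss(0, 1) for xi in x]
                     for _ in range(lam)))
        if fy < fx - sigma * sigma:    # sufficient decrease on best offspring
            x, fx = y, fy
            sigma *= 1.5               # expand the mutation strength
        else:
            sigma *= 0.7               # contract it, enabling convergence
    return x, fx
```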
Fri, 01 Jan 2016 00:00:00 GMT
http://hdl.handle.net/10316/45245
2016-01-01T00:00:00Z