International Journal for Uncertainty Quantification
Editor-in-Chief: Habib N. Najm
Associate Editors: Dongbin Xiu, Tao Zhou
Founding Editor: Nicholas Zabaras

Publishes 6 issues per year

ISSN Print: 2152-5080

ISSN Online: 2152-5099

Journal metrics (2017 Journal Citation Reports, Clarivate Analytics, 2018):

Impact Factor (average citations received in a year by papers published in the two preceding years): 1.7
5-Year Impact Factor (citations in 2017 to papers published in the previous five years, divided by the source items published in those years): 1.9
Immediacy Index (average number of times an article is cited in its year of publication): 0.5
Eigenfactor (citation-weighted measure of journal importance, developed by Jevin West and Carl Bergstrom at the University of Washington): 0.0007
Journal Citation Indicator (field-normalized citation impact across the Web of Science Core Collection): 0.5
SJR: 0.584
SNIP: 0.676
CiteScore™: 3
H-Index: 25


RANDOM PREDICTOR MODELS FOR RIGOROUS UNCERTAINTY QUANTIFICATION

Volume 5, Issue 5, 2015, pp. 469-489
DOI: 10.1615/Int.J.UncertaintyQuantification.2015013799

Abstract

This paper proposes techniques for constructing linear parametric models that describe key features of the distribution of an output variable given input-output data. In contrast to standard models, which yield a single output value at each value of the input, random predictor models (RPMs) yield a random variable. The strategies proposed yield models in which the mean, the variance, and the range of the model's parameters, and thus of the random process describing the output, are rigorously prescribed. As such, these strategies encompass all RPMs conforming to the prescription of these metrics (e.g., random variables and probability boxes describing the model's parameters, and random processes describing the output). Strategies for calculating optimal RPMs by solving a sequence of optimization programs are developed. The RPMs are optimal in the sense that they yield the tightest output ranges containing all (or, depending on the formulation, most) of the observations. Extensions that eliminate the effects of outliers in the data set are also developed. When the data-generating mechanism is stationary, the data are independent, and the optimization program(s) used to calculate the RPM is convex (or its solution coincides with that of an auxiliary convex program), the reliability of the prediction, i.e., the probability that a future observation falls within the predicted output range, is rigorously bounded using Scenario Optimization Theory. This framework does not require any assumptions on the underlying structure of the data-generating mechanism.
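
The interval-valued special case of such a predictor can be computed with a single linear program. The following Python sketch is an illustration under simplifying assumptions, not the paper's exact formulation: it fits a linear model y = phi(x)^T p whose parameter vector p ranges over a box [c - w, c + w], choosing c and w so that every observation lies inside the predicted output interval while the average interval width is minimized. The basis, the synthetic data, and all variable names are assumptions made for illustration only.

# A minimal sketch (assumption: linear basis phi(x) = [1, x] and a box-shaped
# parameter set) of an interval predictor model fitted by linear programming.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=100)
y = 1.0 + 2.0 * x + 0.3 * rng.standard_normal(100)     # synthetic data (assumption)

Phi = np.column_stack([np.ones_like(x), x])             # basis phi(x_i) = [1, x_i]
A = np.abs(Phi)                                         # |phi(x_i)|, sets the interval width
n, d = Phi.shape

# Decision variables z = [c (d entries), w (d entries)];
# minimize the mean predicted interval width (only w enters the cost).
cost = np.concatenate([np.zeros(d), A.sum(axis=0) / n])

# Containment of every observation in the predicted interval:
#   phi_i^T c - |phi_i|^T w <= y_i     (lower edge below the observation)
#  -phi_i^T c - |phi_i|^T w <= -y_i    (upper edge above the observation)
A_ub = np.vstack([np.hstack([Phi, -A]),
                  np.hstack([-Phi, -A])])
b_ub = np.concatenate([y, -y])

bounds = [(None, None)] * d + [(0.0, None)] * d         # c free, w >= 0
res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
c_opt, w_opt = res.x[:d], res.x[d:]

# Predicted output interval at a new input x = 0.5
x0 = np.array([1.0, 0.5])
lower = x0 @ c_opt - np.abs(x0) @ w_opt
upper = x0 @ c_opt + np.abs(x0) @ w_opt
print(f"interval at x = 0.5: [{lower:.3f}, {upper:.3f}]")

With independent data, Scenario Optimization Theory then bounds the probability that a future observation falls outside the computed interval, as described in the abstract above.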

CITED BY
  1. Slaba Tony C., Bahadori Amir A., Reddell Brandon D., Singleterry Robert C., Clowdsley Martha S., Blattnig Steve R., Optimal shielding thickness for galactic cosmic ray environments, Life Sciences in Space Research, 12, 2017. Crossref

  2. Garatti S., Campi M.C., Carè A., On a class of interval predictor models with universal reliability, Automatica, 110, 2019. Crossref

  3. Garatti Simone, Campi Marco C., Complexity-based modulation of the data-set in scenario optimization, 2019 18th European Control Conference (ECC), 2019. Crossref

  4. Garatti Simone, Campi Marco C., Learning for Control: a Bayesian Scenario Approach, 2019 IEEE 58th Conference on Decision and Control (CDC), 2019. Crossref

  5. Rocchetta Roberto, Gao Qi, Petkovic Milan, Soft-constrained interval predictor models and epistemic reliability intervals: A new tool for uncertainty quantification with limited experimental data, Mechanical Systems and Signal Processing, 161, 2021. Crossref

  6. Campi Marco C., Garatti Simone, Scenario optimization with relaxation: a new tool for design and application to machine learning problems, 2020 59th IEEE Conference on Decision and Control (CDC), 2020. Crossref

  7. Riedmaier Stefan, Danquah Benedikt, Schick Bernhard, Diermeyer Frank, Unified Framework and Survey for Model Verification, Validation and Uncertainty Quantification, Archives of Computational Methods in Engineering, 28, 4, 2021. Crossref

  8. Garatti Simone, Campi Marco C., The risk of making decisions from data through the lens of the scenario approach, IFAC-PapersOnLine, 54, 7, 2021. Crossref

  9. Garatti Simone, Campi Marco C., On the consistency of the risk evaluation in the scenario approach, 2021 60th IEEE Conference on Decision and Control (CDC), 2021. Crossref

  10. Garatti S., Campi M. C., Risk and complexity in scenario optimization, Mathematical Programming, 191, 1, 2022. Crossref
