A Monte Carlo comparison of traditional and Stein-rule estimators under squared error loss

by Thomas A. Yancey

Publisher: College of Commerce and Business Administration, University of Illinois at Urbana-Champaign in [Urbana, Ill.]

Written in English

Edition Notes

Includes bibliographical references (p. 10-11).

Statement: Thomas A. Yancey, George Judge, Mary E. Bock
Series: Faculty working papers -- no. 211
Contributions: Judge, George; Bock, Mary E.; University of Illinois at Urbana-Champaign. College of Commerce and Business Administration
The Physical Object
Pagination: 11 p.
Number of Pages: 11
ID Numbers
Open Library: OL25167777M
OCLC/WorldCat: 759523149

estimators as its particular cases, including the Stein-rule family of estimators proposed by James and Stein (). In the context of linear regression models, the family of double k-class estimators was proposed under the assumption of spherical or homoskedastic disturbances; later work by Wan and Chaturvedi () built on this family.

required to achieve a specified precision in estimating a loss probability. Because this means that the number of portfolio revaluations is also reduced by a comparable factor, it results in a very large reduction in the computing time required for Monte Carlo estimation of VaR. The rest of this article is organized as follows.

I am looking for references to studies that show how to use Monte Carlo simulations to compare different estimators of a given parameter of a probability distribution (for example, comparing the MLE with the method-of-moments estimator).

Estimating integrals via Monte Carlo: the benefit of Monte Carlo is with higher-dimensional multiple integrals and with extremely complex integrals (like those in rendering); it also provides an easy (but slow!) ground truth to compare approximations against.
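
As an illustration of the kind of Monte Carlo comparison of estimators asked about above, the following sketch (not from any of the excerpted papers; the Uniform(0, theta) setting and all constants are assumptions) pits the MLE against the method-of-moments estimator under squared error loss:

```python
# A minimal sketch (not from the excerpted papers; the Uniform(0, theta)
# setting and all constants are assumptions): Monte Carlo comparison of the
# MLE and the method-of-moments (MoM) estimator under squared error loss.
import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 2.0, 20, 10_000

mle_losses, mom_losses = [], []
for _ in range(reps):
    x = rng.uniform(0.0, theta, size=n)
    mle_losses.append((x.max() - theta) ** 2)          # MLE for Uniform(0, theta) is max(x)
    mom_losses.append((2.0 * x.mean() - theta) ** 2)   # MoM estimator is 2 * mean(x)

print("empirical risk (MSE) of the MLE:", np.mean(mle_losses))
print("empirical risk (MSE) of the MoM:", np.mean(mom_losses))
```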

Savin proposes to test the Monte Carlo hypothesis by a method free from these criticisms. Like Anderson, he uses a chi-squared goodness-of-fit test, but he works with the NBER data used by McCulloch and concentrates on expansions. He too finds that the Monte Carlo hypothesis cannot be rejected. McCulloch () replied to Savin ().

Most improvements to Monte Carlo methods are variance-reduction techniques. Antithetic resampling: suppose we have two random variables \(\hat\theta_1\) and \(\hat\theta_2\) that both estimate \(\theta\), have the same variance, and are negatively correlated; then their average \((\hat\theta_1 + \hat\theta_2)/2\) provides a better estimate of \(\theta\), because its variance is smaller. This is the idea behind antithetic resampling (see Hall, ).

II. MONTE CARLO TECHNIQUES FOR ESTIMATING UNCERTAINTY. In the previous section, it was noted that cost-estimate uncertainty results from two primary sources: requirements uncertainty and cost-estimating uncertainty.

The Monte Carlo method clearly yields approximate results. The accuracy depends on the number of values that we use for the average.
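
To make the antithetic-resampling idea above concrete, here is a minimal sketch; the target quantity E[exp(U)] with U ~ Uniform(0, 1) and the sample sizes are assumptions chosen for illustration, not taken from the excerpt:

```python
# A minimal antithetic-variates sketch (the target E[exp(U)], U ~ Uniform(0,1),
# is an assumption chosen for illustration): U and 1 - U are negatively
# correlated, so averaging f(U) and f(1 - U) reduces the variance of the estimate.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

u = rng.uniform(size=n)
plain = np.exp(rng.uniform(size=2 * n))          # ordinary estimator from 2n independent draws
anti = 0.5 * (np.exp(u) + np.exp(1.0 - u))       # n antithetic pairs, same number of evaluations

print("true value          :", np.e - 1.0)
print("plain MC estimate   :", plain.mean(), "(variance of the mean:", plain.var() / (2 * n), ")")
print("antithetic estimate :", anti.mean(), "(variance of the mean:", anti.var() / n, ")")
```

Both estimators use the same number of function evaluations, so any improvement comes purely from the negative correlation within each antithetic pair.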

1 Monte Carlo Simulation: Main Purposes and Means; Generating Pseudo-Random Numbers; LLN and Classic Simple Regression; CLT and Simple Sample Averages; Exercises. 2 Monte Carlo Assessment of Moments: MCS Estimation of an Expectation; Analysis of Estimator Bias by Simulation; Assessing the (R)MSE of an Estimator.

Evaluations using Monte Carlo simulations show that standard-error estimators, assuming a normally distributed population, are almost always reliable. In addition, as expected, smaller sample sizes lead to less reliable results.
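
The "estimator bias by simulation" and "(R)MSE" assessments listed above can be sketched as follows; the variance-estimation example and all constants are assumptions, not taken from the excerpted text:

```python
# A minimal sketch: Monte Carlo assessment of the bias and RMSE of two
# estimators of sigma^2 for normal samples: divide by n-1 versus divide by n.
import numpy as np

rng = np.random.default_rng(2)
sigma2, n, reps = 4.0, 10, 20_000

s2_unbiased = np.empty(reps)
s2_mle = np.empty(reps)
for r in range(reps):
    x = rng.normal(0.0, np.sqrt(sigma2), size=n)
    s2_unbiased[r] = x.var(ddof=1)   # divides by n - 1
    s2_mle[r] = x.var(ddof=0)        # divides by n

for name, est in [("n-1 divisor", s2_unbiased), ("n divisor", s2_mle)]:
    bias = est.mean() - sigma2
    rmse = np.sqrt(np.mean((est - sigma2) ** 2))
    print(f"{name}: bias = {bias:+.4f}, RMSE = {rmse:.4f}")
```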

A Monte Carlo comparison of traditional and Stein-rule estimators under squared error loss, by Thomas A. Yancey

Risk comparison of the Stein-rule estimator in a linear regression model with omitted relevant regressors and multivariate t errors under the Pitman nearness criterion.

Statistical Papers, Vol. 48, Issue 1. In this paper, the risk of the Stein-rule (SR) estimator is compared with that of the improved Stein-rule estimator using the Stein variance (SRSV) estimator under the balanced loss function.

Unlike nonstationary conditions, under stationary conditions a true mean exists and a comparison among the estimators SM, FM, FM*, and RM is relatively simple: because they are all unbiased, their efficiency reduces to the ratio of their variances.

For example, the efficiency of the sample mean SM relative to the fractional mean FM is equal to 1/f. (Authors: Richard M. Vogel and Charles N. Kroll.)

Monte Carlo methods, or Monte Carlo experiments, are a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results.

The underlying concept is to use randomness to solve problems that might be deterministic in principle. They are often used in physical and mathematical problems and are most useful when it is difficult or impossible to use other approaches.

Comparison of Some Bayes Estimators for the Weibull Reliability Function under Different Loss Functions.

Article in Iraqi Journal of Science, 53(2).

[Figures from R. Hill and R. Ziemer, "The Stein-rule in non-orthogonal designs": empirical risk functions for least squares and Stein-rule estimators, plotted for two values of w2.]

Results of Monte Carlo simulations of empirical Bayes estimators are given which show that, even with a few past experiences, the empirical Bayes estimators have smaller mean squared errors.

A Monte Carlo simulation study shows that the mean squared error (MSE) of the shrinkage estimator is comparable to the MSE of the penalty estimators.

Economics Letters 11, North-Holland Publishing Company. Pre-Test Estimation under Squared Error Loss, by G.G. Judge and T.A. Yancey (University of Illinois, Urbana, IL, USA) and M.E. Bock (Purdue University, West Lafayette, IN, USA). This paper specifies a pre-test estimator.

Journal of Econometrics 25, North-Holland. The Risk of General Stein-like Estimators in the Presence of Multicollinearity, by R.C. Hill (University of Georgia, Athens, GA, USA) and R.F. Ziemer (Texas A&M University, College Station, TX, USA). This paper investigates the behavior of the bounding risk functions for the Stein-like estimator.

A Monte-Carlo experiment investigates the finite sample behaviour of the proposed family of estimators.

In this paper, we consider a family of feasible generalised double k-class estimators in a linear regression model with non-spherical disturbances.

Monte Carlo simulation: a method of estimating the value of an unknown quantity using the principles of inferential statistics. Inferential statistics: a population is a set of examples, and a sample is a proper subset of a population. Key fact: a random sample tends to exhibit the same properties as the population from which it is drawn.

To compare these two estimators by Monte Carlo for a specific \(n\) and \(\theta\): generate \(X_1, \dots, X_n \sim N(\theta, \sigma^2)\); compute \(\hat\theta_1\) and \(\hat\theta_2\) and the losses \((\hat\theta_1 - \theta)^2\) and \((\hat\theta_2 - \theta)^2\); repeat this step \(k\) times. The means of the \((\hat\theta_1 - \theta)^2\)'s and the \((\hat\theta_2 - \theta)^2\)'s over the \(k\) replicates are the Monte Carlo estimators of the two risks.
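
A direct implementation of this recipe might look as follows; the particular pair of estimators (sample mean versus sample median) and the constants are assumptions chosen for illustration:

```python
# A minimal sketch of the recipe above (the choice of estimators, sample mean
# versus sample median, and all constants are assumptions).
import numpy as np

rng = np.random.default_rng(3)
theta, sigma, n, k = 1.0, 2.0, 25, 10_000

loss_mean = np.empty(k)
loss_median = np.empty(k)
for r in range(k):
    x = rng.normal(theta, sigma, size=n)
    loss_mean[r] = (x.mean() - theta) ** 2        # squared error of estimator 1
    loss_median[r] = (np.median(x) - theta) ** 2  # squared error of estimator 2

# The averages over the k replicates are the Monte Carlo risk estimates.
print("estimated risk of the sample mean  :", loss_mean.mean())
print("estimated risk of the sample median:", loss_median.mean())
```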

Monte Carlo estimates of pi. To compute Monte Carlo estimates of \(\pi\), you can use the function \(f(x) = \sqrt{1 - x^2}\). The graph of the function on the interval [0, 1] forms a quarter circle of unit radius, so the exact area under the curve is \(\pi/4\).
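
A minimal sketch of this integral approach (sample size assumed) is:

```python
# Estimate pi/4 = integral from 0 to 1 of sqrt(1 - x^2) dx by averaging f
# at uniform draws; the sample size is an assumption.
import numpy as np

rng = np.random.default_rng(4)
n = 1_000_000
x = rng.uniform(size=n)
estimate = 4.0 * np.mean(np.sqrt(1.0 - x ** 2))   # 4 * (average of f) approximates pi
print("Monte Carlo estimate of pi:", estimate)
```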

A comparison of several model selection procedures. A consistent model selection, in Granger (). A conversation on econometric methodology. A heteroskedasticity-consistent covariance matrix estimator and a direct test for heteroskedasticity.

Respondent-driven sampling (RDS) is a snowball-type sampling method used to survey hidden populations, that is, those that lack a sampling frame.

In this work, we consider the problem of regression modeling and association for continuous RDS data. We propose a new sample weight method for estimating non-linear parameters such as the covariance and the correlation coefficient.


George Judge: current contact information and a listing of this author's economic research are provided by RePEc/IDEAS.

Let \(X_1, \dots, X_n \sim N(\theta, 1)\). We want to provide some sort of interval estimate \(C\) for \(\theta\). Frequentist approach: construct the confidence interval \(C = \left(\bar X_n - z_{\alpha/2}/\sqrt{n},\ \bar X_n + z_{\alpha/2}/\sqrt{n}\right)\). Then \(P_\theta(\theta \in C) = 1 - \alpha\) for all \(\theta \in \mathbb{R}\). The probability statement is about the random interval \(C\). The interval is random because it is a function of the data.
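
The frequentist coverage statement above is easy to check by simulation; in the sketch below the parameter values and the use of z = 1.96 for a nominal 95% interval are assumptions:

```python
# Monte Carlo check of the coverage of the z-based confidence interval
# when X_1,...,X_n ~ N(theta, 1); all parameter values are assumptions.
import numpy as np

rng = np.random.default_rng(6)
theta, n, reps, z = 0.3, 50, 20_000, 1.96   # z = 1.96 gives a nominal 95% interval

covered = 0
for _ in range(reps):
    xbar = rng.normal(theta, 1.0, size=n).mean()
    lo, hi = xbar - z / np.sqrt(n), xbar + z / np.sqrt(n)
    covered += (lo <= theta <= hi)

print("empirical coverage:", covered / reps)   # should be close to 0.95
```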

The Monte Carlo model makes it possible for researchers from all different kinds of professions to run multiple trials and thus to define all the potential outcomes of an event or decision.

One method to estimate the value of \(\pi\) is to use a Monte Carlo method. In the demo above, we have a circle enclosed by a 1 × 1 square.
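
A minimal hit-or-miss version of this demo (sample size assumed) is:

```python
# Draw points uniformly in the unit square; the fraction landing inside the
# inscribed circle estimates (area of circle) / (area of square) = pi / 4.
import numpy as np

rng = np.random.default_rng(5)
n = 1_000_000
x, y = rng.uniform(size=n), rng.uniform(size=n)
inside = (x - 0.5) ** 2 + (y - 0.5) ** 2 <= 0.25   # circle of radius 0.5 centred in the square
print("Monte Carlo estimate of pi:", 4.0 * inside.mean())
```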

Tsunami and Sheffin () use a Monte Carlo experiment to compare a number of asymptotic tests, emphasizing in particular an asymptotic F-test conditional on the posterior mean of the ratio of standard deviations.

Estimators of the Squared Cross-Validity Coefficient: A Monte Carlo Investigation. Fritz Drasgow (University of Illinois at Urbana-Champaign), Neil J. Dorans (Personnel Research and Development Center, U.S. Office of Personnel Management), and Ledyard R. Tucker (University of Illinois at Urbana-Champaign). A Monte Carlo experiment was used to evaluate four procedures for estimating the squared cross-validity coefficient.

The performance of three such combined odds ratio estimators and four tests of association was studied using Monte Carlo methods.

The results of these studies are described in this paper. For the Monte Carlo studies, a constant odds ratio was used with. original Stein-rule estimator equation (4) is modified by the rule that (b) = Ao if F Stein-rule estimator equation (6) demon-strates the inadmissibility of the traditional pretest estimators.

Monte-Carlo sampling experiments by Yancey and Judge address the losses from using the pretest estimator.

The bootstrapping method is the best example of the plug-in principle. Bootstrapping is a statistical method for estimating the sampling distribution of an estimator by sampling with replacement from the original sample, most often with the purpose of deriving robust estimates of standard errors and confidence intervals for a population parameter such as a mean, median, or proportion.
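
As a sketch of the bootstrap just described (the exponential "original sample" is an assumed stand-in for real data), one can estimate the standard error and a percentile interval for the sample median like this:

```python
# Bootstrap the sample median: resample with replacement from the original
# sample and summarise the distribution of the recomputed statistic.
import numpy as np

rng = np.random.default_rng(7)
sample = rng.exponential(scale=3.0, size=40)    # assumed stand-in for the observed data
B = 5_000

boot_medians = np.array([
    np.median(rng.choice(sample, size=sample.size, replace=True))
    for _ in range(B)
])

print("bootstrap SE of the median:", boot_medians.std(ddof=1))
print("95% percentile interval   :", np.percentile(boot_medians, [2.5, 97.5]))
```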

Monte Carlo simulations can deal with path-dependent options, options dependent on several underlying state variables, and options with complex payoffs. It is not easy, however, to use Monte Carlo simulations to handle American-style options and other derivatives where the holder has decisions to make prior to maturity.
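
For a path-dependent example of the kind mentioned above, here is a minimal sketch of Monte Carlo pricing of an arithmetic-average Asian call under geometric Brownian motion; all market parameters are assumptions:

```python
# Monte Carlo price of an arithmetic-average Asian call (a path-dependent
# payoff) under geometric Brownian motion; parameters are assumed.
import numpy as np

rng = np.random.default_rng(8)
s0, strike, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
steps, paths = 50, 100_000
dt = T / steps

z = rng.standard_normal((paths, steps))
log_increments = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
prices = s0 * np.exp(np.cumsum(log_increments, axis=1))     # simulated price paths

payoff = np.maximum(prices.mean(axis=1) - strike, 0.0)      # payoff depends on the whole path
print("Monte Carlo Asian call price:", np.exp(-r * T) * payoff.mean())
```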

We use a Monte Carlo experiment to compare the finite-sample performance of four different Type-3 Tobit estimators. This paper is organized as follows. In Section 2, we briefly discuss the basic Type-3 Tobit model and various semiparametric estimators.

Section 3 reports the Monte Carlo simulation results and compares the finite-sample performance of the estimators.

Peter Hoff, Shrinkage estimators. Letting \(w = 1 - a\) and \(\theta_0 = b/(1 - a)\), the result suggests that if we want to use an admissible linear estimator, it should be of the form \(\delta(X) = w\theta_0 + (1 - w)X\), with \(w \in [0, 1]\). We call such estimators linear shrinkage estimators, as they "shrink" the estimate.
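
The risk advantage of such a linear shrinkage rule under squared error loss is easy to check by Monte Carlo; in the sketch below the parameter values are assumptions:

```python
# Monte Carlo risk of a linear shrinkage estimator delta(X) = w*theta0 + (1-w)*X
# versus the unshrunk estimator X, for X ~ N(theta, 1), under squared error loss.
import numpy as np

rng = np.random.default_rng(9)
theta, theta0, w, reps = 0.5, 0.0, 0.3, 100_000

x = rng.normal(theta, 1.0, size=reps)
shrunk = w * theta0 + (1.0 - w) * x

print("risk of X (no shrinkage)  :", np.mean((x - theta) ** 2))        # close to 1
print("risk of the shrinkage rule:", np.mean((shrunk - theta) ** 2))   # smaller when theta is near theta0
```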

The "30" answer is very popular and has been propagated from book to book until it became an axiom, pretty much. Once, I was able to track down the first paper that mentioned it. The "it depends" answer is explored in the paper Central Limit Theorem and Sample Size by Zachary R. Smith and Craig S. Wells.

The other thing is variance reduction.
