Mathematical Statistics


1st edition

by: Dieter Rasch, Dieter Schott

70,99 €

Publisher: Wiley
Format: PDF
Published: 10 January 2018
ISBN/EAN: 9781119385264
Language: English
Number of pages: 688

DRM-protected eBook; to read it you will need e.g. Adobe Digital Editions and an Adobe ID.

Description

<p><b>Explores mathematical statistics in its entirety—from the fundamentals to modern methods</b></p> <p>This book introduces readers to point estimation, confidence intervals, and statistical tests. Based on the general theory of linear models, it provides an in-depth overview of the following: analysis of variance (ANOVA) for models with fixed, random, and mixed effects; regression analysis, first presented for linear models with fixed, random, and mixed effects and then extended to nonlinear models; statistical multi-decision problems such as statistical selection procedures (Bechhofer and Gupta) and sequential tests; and design of experiments from a mathematical-statistical point of view. Most analysis methods are supplemented by formulae for minimal sample sizes. The chapters also contain exercises with hints for solutions.</p> <p>Translated from the successful German text, <i>Mathematical Statistics</i> requires knowledge of probability theory (combinatorics, probability distributions, functions and sequences of random variables), which is typically taught in the earlier semesters of scientific and mathematical study courses. It covers statistical analysis as well as the design of experiments, describes optimal allocation in the chapters on regression analysis, and features a chapter devoted solely to experimental designs.</p> <ul> <li>Classroom-tested with exercises included</li> <li>Practice-oriented (taken from the day-to-day statistical work of the authors)</li> <li>Includes further studies including design of experiments and sample sizing</li> <li>Presents and uses IBM SPSS Statistics 24 for practical calculations of data</li> </ul> <p><i>Mathematical Statistics</i> is a recommended text for advanced students and practitioners of math, probability, and statistics.</p>
<p>Preface xiii</p> <p><b>1 Basic Ideas of Mathematical Statistics 1</b></p> <p>1.1 Statistical Population and Samples 2</p> <p>1.1.1 Concrete Samples and Statistical Populations 2</p> <p>1.1.2 Sampling Procedures 4</p> <p>1.2 Mathematical Models for Population and Sample 8</p> <p>1.3 Sufficiency and Completeness 9</p> <p>1.4 The Notion of Information in Statistics 20</p> <p>1.5 Statistical Decision Theory 28</p> <p>1.6 Exercises 32</p> <p>References 37</p> <p><b>2 Point Estimation 39</b></p> <p>2.1 Optimal Unbiased Estimators 41</p> <p>2.2 Variance-Invariant Estimation 53</p> <p>2.3 Methods for Construction and Improvement of Estimators 57</p> <p>2.3.1 Maximum Likelihood Method 57</p> <p>2.3.2 Least Squares Method 60</p> <p>2.3.3 Minimum Chi-Squared Method 61</p> <p>2.3.4 Method of Moments 62</p> <p>2.3.5 Jackknife Estimators 63</p> <p>2.3.6 Estimators Based on Order Statistics 64</p> <p>2.3.6.1 Order and Rank Statistics 64</p> <p>2.3.6.2 L-Estimators 66</p> <p>2.3.6.3 M-Estimators 67</p> <p>2.3.6.4 R-Estimators 68</p> <p>2.4 Properties of Estimators 68</p> <p>2.4.1 Small Samples 69</p> <p>2.4.2 Asymptotic Properties 71</p> <p>2.5 Exercises 75</p> <p>References 78</p> <p><b>3 Statistical Tests and Confidence Estimations 79</b></p> <p>3.1 Basic Ideas of Test Theory 79</p> <p>3.2 The Neyman–Pearson Lemma 87</p> <p>3.3 Tests for Composite Alternative Hypotheses and One-Parametric Distribution Families 96</p> <p>3.3.1 Distributions with Monotone Likelihood Ratio and Uniformly Most Powerful Tests for One-Sided Hypotheses 96</p> <p>3.3.2 UMPU-Tests for Two-Sided Alternative Hypotheses 105</p> <p>3.4 Tests for Multi-Parametric Distribution Families 110</p> <p>3.4.1 General Theory 111</p> <p>3.4.2 The Two-Sample Problem: Properties of Various Tests and Robustness 124</p> <p>3.4.2.1 Comparison of Two Expectations 125</p> <p>3.4.3 Comparison of Two Variances 137</p> <p>3.4.4 Table for Sample Sizes 138</p> <p>3.5 Confidence Estimation 139</p> <p>3.5.1 One-Sided Confidence 
Intervals in One-Parametric Distribution Families 140</p> <p>3.5.2 Two-Sided Confidence Intervals in One-Parametric and Confidence Intervals in Multi-Parametric Distribution Families 143</p> <p>3.5.3 Table for Sample Sizes 146</p> <p>3.6 Sequential Tests 147</p> <p>3.6.1 Introduction 147</p> <p>3.6.2 Wald’s Sequential Likelihood Ratio Test for One-Parametric Exponential Families 149</p> <p>3.6.3 Test about Mean Values for Unknown Variances 153</p> <p>3.6.4 Approximate Tests for the Two-Sample Problem 158</p> <p>3.6.5 Sequential Triangular Tests 160</p> <p>3.6.6 A Sequential Triangular Test for the Correlation Coefficient 162</p> <p>3.7 Remarks about Interpretation 169</p> <p>3.8 Exercises 170</p> <p>References 176</p> <p><b>4 Linear Models: General Theory 179</b></p> <p>4.1 Linear Models with Fixed Effects 179</p> <p>4.1.1 Least Squares Method 180</p> <p>4.1.2 Maximum Likelihood Method 184</p> <p>4.1.3 Tests of Hypotheses 185</p> <p>4.1.4 Construction of Confidence Regions 190</p> <p>4.1.5 Special Linear Models 191</p> <p>4.1.6 The Generalised Least Squares Method (GLSM) 198</p> <p>4.2 Linear Models with Random Effects: Mixed Models 199</p> <p>4.2.1 Best Linear Unbiased Prediction (BLUP) 200</p> <p>4.2.2 Estimation of Variance Components 202</p> <p>4.3 Exercises 203</p> <p>References 204</p> <p><b>5 Analysis of Variance (ANOVA): Fixed Effects Models (Model I of Analysis of Variance) 207</b></p> <p>5.1 Introduction 207</p> <p>5.2 Analysis of Variance with One Factor (Simple- or One-Way Analysis of Variance) 215</p> <p>5.2.1 The Model and the Analysis 215</p> <p>5.2.2 Planning the Size of an Experiment 228</p> <p>5.2.2.1 General Description for All Sections of This Chapter 228</p> <p>5.2.2.2 The Experimental Size for the One-Way Classification 231</p> <p>5.3 Two-Way Analysis of Variance 232</p> <p>5.3.1 Cross-Classification (A × B) 233</p> <p>5.3.1.1 Parameter Estimation 236</p> <p>5.3.1.2 Testing Hypotheses 244</p> <p>5.3.2 Nested Classification (A B) 260</p> <p>5.4 
Three-Way Classification 272</p> <p>5.4.1 Complete Cross-Classification (A × B × C) 272</p> <p>5.4.2 Nested Classification (C≺B≺A) 279</p> <p>5.4.3 Mixed Classification 282</p> <p>5.4.3.1 Cross-Classification between Two Factors Where One of Them Is Subordinated to a Third Factor B≺A × C 282</p> <p>5.4.3.2 Cross-Classification of Two Factors in Which a Third Factor Is Nested C≺ A× B 288</p> <p>5.5 Exercises 291</p> <p>References 291</p> <p><b>6 Analysis of Variance: Estimation of Variance Components (Model II of the Analysis of Variance) 293</b></p> <p>6.1 Introduction: Linear Models with Random Effects 293</p> <p>6.2 One-Way Classification 297</p> <p>6.2.1 Estimation of Variance Components 300</p> <p>6.2.1.1 Analysis of Variance Method 300</p> <p>6.2.1.2 Estimators in Case of Normally Distributed Y 302</p> <p>6.2.1.3 REML: Estimation 304</p> <p>6.2.1.4 Matrix Norm Minimising Quadratic Estimation 305</p> <p>6.2.1.5 Comparison of Several Estimators 306</p> <p>6.2.2 Tests of Hypotheses and Confidence Intervals 308</p> <p>6.2.3 Variances and Properties of the Estimators of the Variance Components 310</p> <p>6.3 Estimators of Variance Components in the Two-Way and Three-Way Classification 315</p> <p>6.3.1 General Description for Equal and Unequal Subclass Numbers 315</p> <p>6.3.2 Two-Way Cross-Classification 319</p> <p>6.3.3 Two-Way Nested Classification 324</p> <p>6.3.4 Three-Way Cross-Classification with Equal Subclass Numbers 326</p> <p>6.3.5 Three-Way Nested Classification 334</p> <p>6.3.6 Three-Way Mixed Classification 334</p> <p>6.4 Planning Experiments 336</p> <p>6.5 Exercises 338</p> <p>References 339</p> <p><b>7 Analysis of Variance: Models with Finite Level Populations and Mixed Models 341</b></p> <p>7.1 Introduction: Models with Finite Level Populations 341</p> <p>7.2 Rules for the Derivation of SS, df, MS and E(MS) in Balanced ANOVA Models 343</p> <p>7.3 Variance Component Estimators in Mixed Models 348</p> <p>7.3.1 An Example for the Balanced Case 349</p> 
<p>7.3.2 The Unbalanced Case 351</p> <p>7.4 Tests for Fixed Effects and Variance Components 353</p> <p>7.5 Variance Component Estimation and Tests of Hypotheses in Special Mixed Models 354</p> <p>7.5.1 Two-Way Cross-Classification 355</p> <p>7.5.2 Two-Way Nested Classification B ≺ A 358</p> <p>7.5.2.1 Levels of A Random 360</p> <p>7.5.2.2 Levels of B Random 361</p> <p>7.5.3 Three-Way Cross-Classification 362</p> <p>7.5.4 Three-Way Nested Classification 365</p> <p>7.5.5 Three-Way Mixed Classification 369</p> <p>7.5.5.1 The Type (B ≺ A) × C 369</p> <p>7.5.5.2 The Type C ≺ AB 374</p> <p>7.6 Exercises 376</p> <p>References 376</p> <p><b>8 Regression Analysis: Linear Models with Non-random Regressors (Model I of Regression Analysis) and with Random Regressors (Model II of Regression Analysis) 377</b></p> <p>8.1 Introduction 377</p> <p>8.2 Parameter Estimation 380</p> <p>8.2.1 Least Squares Method 380</p> <p>8.2.2 Optimal Experimental Design 394</p> <p>8.3 Testing Hypotheses 397</p> <p>8.4 Confidence Regions 406</p> <p>8.5 Models with Random Regressors 410</p> <p>8.5.1 Analysis 410</p> <p>8.5.2 Experimental Designs 415</p> <p>8.6 Mixed Models 416</p> <p>8.7 Concluding Remarks about Models of Regression Analysis 417</p> <p>8.8 Exercises 419</p> <p>References 419</p> <p><b>9 Regression Analysis: Intrinsically Non-linear Model I 421</b></p> <p>9.1 Estimating by the Least Squares Method 424</p> <p>9.1.1 Gauß–Newton Method 425</p> <p>9.1.2 Internal Regression 431</p> <p>9.1.3 Determining Initial Values for Iteration Methods 433</p> <p>9.2 Geometrical Properties 434</p> <p>9.2.1 Expectation Surface and Tangent Plane 434</p> <p>9.2.2 Curvature Measures 440</p> <p>9.3 Asymptotic Properties and the Bias of LS Estimators 443</p> <p>9.4 Confidence Estimations and Tests 447</p> <p>9.4.1 Introduction 447</p> <p>9.4.2 Tests and Confidence Estimations Based on the Asymptotic Covariance Matrix 451</p> <p>9.4.3 Simulation Experiments to Check Asymptotic Tests and Confidence Estimations 
452</p> <p>9.5 Optimal Experimental Design 454</p> <p>9.6 Special Regression Functions 458</p> <p>9.6.1 Exponential Regression 458</p> <p>9.6.1.1 Point Estimator 458</p> <p>9.6.1.2 Confidence Estimations and Tests 460</p> <p>9.6.1.3 Results of Simulation Experiments 463</p> <p>9.6.1.4 Experimental Designs 466</p> <p>9.6.2 The Bertalanffy Function 468</p> <p>9.6.3 The Logistic (Three-Parametric Hyperbolic Tangent) Function 473</p> <p>9.6.4 The Gompertz Function 476</p> <p>9.6.5 The Hyperbolic Tangent Function with Four Parameters 479</p> <p>9.6.6 The arc tangent Function with Four Parameters 484</p> <p>9.6.7 The Richards Function 487</p> <p>9.6.8 Summarising the Results of Sections 9.6.1–9.6.7 487</p> <p>9.6.9 Problems of Model Choice 488</p> <p>9.7 Exercises 489</p> <p>References 490</p> <p><b>10 Analysis of Covariance (ANCOVA) 495</b></p> <p>10.1 Introduction 495</p> <p>10.2 General Model I–I of the Analysis of Covariance 496</p> <p>10.3 Special Models of the Analysis of Covariance for the Simple Classification 503</p> <p>10.3.1 One Covariable with Constant γ 504</p> <p>10.3.2 A Covariable with Regression Coefficients γi Depending on the Levels of the Classification Factor 506</p> <p>10.3.3 A Numerical Example 507</p> <p>10.4 Exercises 510</p> <p>References 511</p> <p><b>11 Multiple Decision Problems 513</b></p> <p>11.1 Selection Procedures 514</p> <p>11.1.1 Basic Ideas 514</p> <p>11.1.2 Indifference Zone Formulation for Expectations 516</p> <p>11.1.2.1 Selection of Populations with Normal Distribution 517</p> <p>11.1.2.2 Approximate Solutions for Non-normal Distributions and t = 1 529</p> <p>11.1.3 Selection of a Subset Containing the Best Population with Given Probability 530</p> <p>11.1.3.1 Selection of the Normal Distribution with the Largest Expectation 534</p> <p>11.1.3.2 Selection of the Normal Distribution with Smallest Variance 535</p> <p>11.2 Multiple Comparisons 536</p> <p>11.2.1 Confidence Intervals for All Contrasts: Scheffé’s Method 542</p> <p>11.2.2 
Confidence Intervals for Given Contrasts: Bonferroni’s and Dunn’s Method 547</p> <p>11.2.3 Confidence Intervals for All Contrasts for ni = n: Tukey’s Method 550</p> <p>11.2.4 Confidence Intervals for All Contrasts: Generalised Tukey’s Method 553</p> <p>11.2.5 Confidence Intervals for the Differences of Treatments with a Control: Dunnett’s Method 555</p> <p>11.2.6 Multiple Comparisons and Confidence Intervals 556</p> <p>11.2.7 Which Multiple Comparisons Shall Be Used? 559</p> <p>11.3 A Numerical Example 560</p> <p>11.4 Exercises 564</p> <p>References 564</p> <p><b>12 Experimental Designs 567</b></p> <p>12.1 Introduction 568</p> <p>12.2 Block Designs 571</p> <p>12.2.1 Completely Balanced Incomplete Block Designs (BIBD) 574</p> <p>12.2.2 Construction Methods of BIBD 582</p> <p>12.2.3 Partially Balanced Incomplete Block Designs 596</p> <p>12.3 Row–Column Designs 600</p> <p>12.4 Factorial Designs 603</p> <p>12.5 Programs for Construction of Experimental Designs 604</p> <p>12.6 Exercises 604</p> <p>References 605</p> <p>Appendix A: Symbolism 609</p> <p>Appendix B: Abbreviations 611</p> <p>Appendix C: Probability and Density Functions 613</p> <p>Appendix D: Tables 615</p> <p>Solutions and Hints for Exercises 627</p> <p>Index 659</p>
<p><b>DIETER RASCH, PhD,</b> is scientific advisor at the Center for Design of Experiments at the University of Natural Resources and Life Sciences, Vienna, Austria. He has published more than 275 scientific papers and 56 books as author or editor.</p> <p><b>DIETER SCHOTT</b> obtained his PhD in analysis from the University of Rostock in 1976 and completed his habilitation in the field of numerical functional analysis in 1982. He has published more than 100 scientific papers and is active as author, co-author and editor of numerous books and scientific journals.</p>

You might also be interested in these products:

Modeling Uncertainty
by: Moshe Dror, Pierre L'Ecuyer, Ferenc Szidarovszky
PDF ebook
236,81 €

Level Crossing Methods in Stochastic Models
by: Percy H. Brill
PDF ebook
203,29 €

Continuous Bivariate Distributions
by: N. Balakrishnan, Chin Diew Lai
PDF ebook
128,39 €