This outline of statistics as an aid in decision making will introduce readers with limited mathematical background to the most important modern statistical methods. It is a revised and enlarged version, with major extensions and additions, of my "Angewandte Statistik" (5th ed.), which has proved useful for research workers and for consulting statisticians. Applied statistics is at once a collection of applicable statistical methods and the application of these methods to measured and/or counted observations. Abstract mathematical concepts and derivations are avoided. Special emphasis is placed on the basic principles of statistical formulation and on the explanation of the conditions under which a given formula or test is valid. Preference is given to the analysis of small samples and to distribution-free methods. As a text and reference, this book is written for non-mathematicians, in particular for technicians, engineers, executives, students, and physicians, as well as for researchers in other disciplines. It also gives any mathematician interested in the practical uses of statistics a general account of the subject. Practical application is the main theme; an essential part of the book therefore consists of 440 fully worked-out numerical examples, some of them very simple; 57 exercises with solutions; a number of different computational aids; an extensive bibliography; and a very detailed index. In particular, a collection of 232 mathematical and mathematical-statistical tables serves to enable and simplify the computations.
Contents
Introduction to Statistics.- 0 Preliminaries.- 0.1 Mathematical Abbreviations.- 0.2 Arithmetical Operations.- 0.3 Computational Aids.- 0.4 Rounding Off.- 0.5 Computations with Inaccurate Numbers.-
1 Statistical Decision Techniques.- 1.1 What Is Statistics? Statistics and the Scientific Method.- 1.2 Elements of Computational Probability.- 1.2.1 Statistical probability.- 1.2.2 The addition theorem of probability theory.- 1.2.3 Conditional probability and statistical independence.- 1.2.4 Bayes's theorem.- 1.2.5 The random variable.- 1.2.6 The distribution function and the probability function.- 1.3 The Path to the Normal Distribution.- 1.3.1 The population and the sample.- 1.3.2 The generation of random samples.- 1.3.3 A frequency distribution.- 1.3.4 Bell-shaped curves and the normal distribution.- 1.3.5 Deviations from the normal distribution.- 1.3.6 Parameters of unimodal distributions.- 1.3.7 The probability plot.- 1.3.8 Additional statistics for the characterization of a one-dimensional frequency distribution.- 1.3.9 The lognormal distribution.- 1.4 The Road to the Statistical Test.- 1.4.1 The confidence coefficient.- 1.4.2 Null hypotheses and alternative hypotheses.- 1.4.3 Risk I and risk II.- 1.4.4 The significance level and the hypotheses are, if possible, to be specified before collecting the data.- 1.4.5 The statistical test.- 1.4.6 One-sided and two-sided tests.- 1.4.7 The power of a test.- 1.4.8 Distribution-free procedures.- 1.4.9 Decision principles.- 1.5 Three Important Families of Test Distributions.- 1.5.1 Student's t-distribution.- 1.5.2 The χ² distribution.- 1.5.3 The F-distribution.- 1.6 Discrete Distributions.- 1.6.1 The binomial coefficient.- 1.6.2 The binomial distribution.- 1.6.3 The hypergeometric distribution.- 1.6.4 The Poisson distribution.- 1.6.5 The Thorndike nomogram.- 1.6.6 Comparison of means of Poisson distributions.- 1.6.7 The dispersion index.- 1.6.8 The multinomial coefficient.- 1.6.9 The multinomial distribution.-
2 Statistical Methods in Medicine and Technology.- 2.1 Medical Statistics.- 2.1.1 Critique of the source material.- 2.1.2 The reliability of laboratory methods.- 2.1.3 How to get unbiased information and how to investigate associations.- 2.1.4 Retrospective and prospective comparisons.- 2.1.5 The therapeutic comparison.- 2.1.6 The choice of appropriate sample sizes for the clinical trial.- 2.2 Sequential Test Plans.- 2.3 Evaluation of Biologically Active Substances Based on Dosage-Dichotomous Effect Curves.- 2.4 Statistics in Engineering.- 2.4.1 Quality control in industry.- 2.4.2 Life span and reliability of manufactured products.- 2.5 Operations Research.- 2.5.1 Linear programming.- 2.5.2 Game theory and the war game.- 2.5.3 The Monte Carlo method and computer simulation.-
3 The Comparison of Independent Data Samples.- 3.1 The Confidence Interval of the Mean and of the Median.- 3.1.1 Confidence interval for the mean.- 3.1.2 Estimation of sample sizes.- 3.1.3 The mean absolute deviation.- 3.1.4 Confidence interval for the median.- 3.2 Comparison of an Empirical Mean with the Mean of a Normally Distributed Population.- 3.3 Comparison of an Empirical Variance with Its Parameter.- 3.4 Confidence Interval for the Variance and for the Coefficient of Variation.- 3.5 Comparison of Two Empirically Determined Variances of Normally Distributed Populations.- 3.5.1 Small to medium sample size.- 3.5.2 Medium to large sample size.- 3.5.3 Large to very large sample size (n1, n2 ≥ 100).- 3.6 Comparison of Two Empirical Means of Normally Distributed Populations.- 3.6.1 Unknown but equal variances.- 3.6.2 Unknown, possibly unequal variances.- 3.7 Quick Tests Which Assume Nearly Normally Distributed Data.- 3.7.1 The comparison of the dispersions of two small samples according to Pillai and Buenaventura.- 3.7.2 The comparison of the means of two small samples according to Lord.- 3.7.3 Comparison of the means of several samples of equal size according to Dixon.- 3.8 The Problem of Outliers and Some Tables Useful in Setting Tolerance Limits.- 3.9 Distribution-Free Procedures for the Comparison of Independent Samples.- 3.9.1 The rank dispersion test of Siegel and Tukey.- 3.9.2 The comparison of two independent samples: Tukey's quick and compact test.- 3.9.3 The comparison of two independent samples according to Kolmogoroff and Smirnoff.- 3.9.4 Comparison of two independent samples: the U-test of Wilcoxon, Mann, and Whitney.- 3.9.5 The comparison of several independent samples: the H-test of Kruskal and Wallis.-
4 Further Test Procedures.- 4.1 Reduction of Sampling Errors by Pairing Observations: Paired Samples.- 4.2 Observations Arranged in Pairs.- 4.2.1 The t-test for data arranged in pairs.- 4.2.2 The Wilcoxon matched-pair signed-rank test.- 4.2.3 The maximum test for pair differences.- 4.2.4 The sign test of Dixon and Mood.- 4.3 The χ² Goodness of Fit Test.- 4.3.1 Comparing observed frequencies with their expectations.- 4.3.2 Comparison of an empirical distribution with the uniform distribution.- 4.3.3 Comparison of an empirical distribution with the normal distribution.- 4.3.4 Comparison of an empirical distribution with the Poisson distribution.- 4.4 The Kolmogoroff-Smirnoff Goodness of Fit Test.- 4.5 The Frequency of Events.- 4.5.1 Confidence limits of an observed frequency for a binomially distributed population; the comparison of a relative frequency with the underlying parameter.- 4.5.2 Clopper and Pearson's quick estimation of the confidence intervals of a relative frequency.- 4.5.3 Estimation of the minimum size of a sample with counted data.- 4.5.4 The confidence interval for rare events.- 4.5.5 Comparison of two frequencies; testing whether they stand in a certain ratio.- 4.6 The Evaluation of Fourfold Tables.- 4.6.1 The comparison of two percentages: the analysis of fourfold tables.- 4.6.2 Repeated application of the fourfold χ² test.- 4.6.3 The sign test modified by McNemar.- 4.6.4 The additive property of χ².- 4.6.5 The combination of fourfold tables.- 4.6.6 The Pearson contingency coefficient.- 4.6.7 The exact Fisher test of independence, as well as an approximation for the comparison of two binomially distributed populations (based on very small samples).- 4.7 Testing the Randomness of a Sequence of Dichotomous Data or of Measured Data.- 4.7.1 The mean square successive difference.- 4.7.2 T…