
Books in the Springer Series in Statistics

  • by Rupert G. Miller Jr.
    910,95 kr.

    With this new edition Springer-Verlag has republished the original edition along with my review article on multiple comparisons from the December 1977 issue of the Journal of the American Statistical Association. A few minor typographical errors in the original edition have been corrected in this new edition.

  • by H. Heyer
    909,95 kr.

    By a statistical experiment we mean the procedure of drawing a sample with the intention of making a decision. These achievements rested upon the fortunate fact that the foundations of probability had by then been laid bare, for it appears to be necessary that any such quantitative theory of statistics be based upon probability theory.

  • by Francis John Anscombe
    593,95 kr.

    Part B, presenting some more extended examples of statistical analysis of data, has also the further aim of suggesting the interplay of computing and theory that must surely henceforth be typical of the development of statistical science.

  • - Modeling Data with Differential Equations
    by James Ramsay & Giles Hooker
    1.679,95 - 1.690,95 kr.

  • by Kung-Sik Chan & Howell Tong
    1.103,95 - 1.113,95 kr.

    It covers many of the contributions made by statisticians in the past twenty years or so towards our understanding of estimation, the Lyapunov-like index, nonparametric regression, and many others, many of which are motivated by their dynamical system counterparts but have now acquired a distinct statistical flavor.

  • - With Applications to Linear Models, Logistic Regression, and Survival Analysis
    by Frank E. Harrell Jr.
    1.133,95 kr.

  • - With an Implementation in R
    by Thomas W. Yee
    1.004,95 - 1.359,95 kr.

    This book treats distributions and classical models as generalized regression models, and the result is a much broader application base for GLMs and GAMs. The book can be used in senior undergraduate or first-year postgraduate courses on GLMs or categorical data analysis and as a methodology resource for VGAM users.
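    The book itself works in R with the VGAM package; purely as an illustrative sketch of the underlying idea of fitting a count model as a generalized regression model, the following assumed Python/statsmodels example fits a Poisson GLM with a log link to simulated data (the library, data, and parameter values are assumptions, not taken from the book).

```python
# Hypothetical illustration: a Poisson GLM with a log link fitted to simulated
# counts. The book uses R's VGAM; this statsmodels sketch only mirrors the idea.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
x = rng.uniform(0.0, 2.0, size=200)
y = rng.poisson(np.exp(0.5 + 1.2 * x))   # simulated counts with log-linear mean

X = sm.add_constant(x)                    # design matrix with an intercept column
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
print(fit.params)                         # estimated intercept and slope
```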

  • by Matthias Schmid & Gerhard Tutz
    1.010,95 - 1.209,95 kr.

    This book focuses on statistical methods for the analysis of discrete failure times. Although there are a large variety of statistical methods for failure time analysis, many techniques are designed for failure times that are measured on a continuous scale.

  • - Nonparametric Bayesian Estimation
    by Eswar G. Phadia
    1.298,95 - 1.640,95 kr.

    After an overview of different prior processes, it examines the now pre-eminent Dirichlet process and its variants including hierarchical processes, then addresses new processes such as dependent Dirichlet, local Dirichlet, time-varying and spatial processes, all of which exploit the countable mixture representation of the Dirichlet process.
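    As a small, assumed illustration of the countable mixture (stick-breaking) representation mentioned above, the sketch below draws a single truncated realisation of a Dirichlet process in Python; the concentration parameter, base measure, and truncation level are arbitrary choices made for demonstration.

```python
# Hypothetical sketch: truncated stick-breaking construction of a Dirichlet
# process with concentration alpha and a standard normal base measure.
import numpy as np

rng = np.random.default_rng(4)
alpha, n_atoms = 2.0, 100

betas = rng.beta(1.0, alpha, size=n_atoms)                        # stick-breaking fractions
leftover = np.concatenate(([1.0], np.cumprod(1.0 - betas[:-1])))  # stick remaining before each break
weights = betas * leftover                                        # mixture weights (sum to ~1 after truncation)
atoms = rng.normal(0.0, 1.0, size=n_atoms)                        # atom locations from the base measure

# Sample ten observations from this single DP realisation
draw = rng.choice(atoms, size=10, p=weights / weights.sum())
print(np.round(draw, 3))
```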

  • by Jan G. De Gooijer
    2.011,95 - 2.023,95 kr.

    This book provides an overview of the current state-of-the-art of nonlinear time series analysis, richly illustrated with examples, pseudocode algorithms and real-world applications.

  • by Brajendra C. Sutradhar
    1.702,95 kr.

    This is the first book on longitudinal categorical data analysis with parametric correlation models based on dynamic relationships among repeated categorical responses.

  • by Dominique Fourdrinier, William E. Strawderman & Martin T. Wells
    1.596,95 kr.

    The book focuses primarily on point and loss estimation of the mean vector of multivariate normal and spherically symmetric distributions. In particular, Chapter 5 extends many of the results from Chapters 2 and 3 to spherically and elliptically symmetric distributions.

  • - Causal Inference for Complex Longitudinal Studies
    by Mark J. van der Laan & Sherri Rose
    782,95 kr.

    This textbook for graduate students in statistics, data science, and public health deals with the practical challenges that come with big, complex, and dynamic data.

  • - Volume I: Density Estimation
    by Vincent N. LaRiccia & P.P.B. Eggermont
    2.182,95 - 2.252,95 kr.

    This book deals with parametric and nonparametric density estimation from the maximum (penalized) likelihood point of view, including estimation under constraints.

  • - With Applications to Linear Models, Logistic and Ordinal Regression, and Survival Analysis
    by Frank E. Harrell Jr.
    899,95 - 1.341,95 kr.

    Most of the methods in this text apply to all regression models, but special emphasis is given to multiple regression using generalised least squares for longitudinal data, the binary logistic model, models for ordinal responses, parametric survival regression models and the Cox semiparametric survival model.

  • - For Science and Data Science
    by Goran Kauermann
    1.213,95 kr.

    This textbook provides a comprehensive introduction to statistical principles, concepts and methods that are essential in modern statistics and data science.

  • - A General Unifying Theory
    by George Seber
    565,95 - 805,95 kr.

    This book provides a concise and integrated overview of hypothesis testing in four important subject areas, namely linear and nonlinear models, multivariate analysis, and large sample theory.

  • by Jiming Jiang
    1.386,95 kr.

    Over the past decade there has been an explosion of developments in mixed effects models and their applications. This book concentrates on two major classes of mixed effects models, linear mixed models and generalized linear mixed models, with the intention of offering an up-to-date account of theory and methods in the analysis of these models as well as their applications in various fields. The first two chapters are devoted to linear mixed models. We classify linear mixed models as Gaussian (linear) mixed models and non-Gaussian linear mixed models. There have been extensive studies in estimation in Gaussian mixed models as well as tests and confidence intervals. On the other hand, the literature on non-Gaussian linear mixed models is much less extensive, partially because of the difficulties in inference about these models. However, non-Gaussian linear mixed models are important because, in practice, one is never certain that normality holds. This book offers a systematic approach to inference about non-Gaussian linear mixed models. In particular, it has included recently developed methods, such as partially observed information, iterative weighted least squares, and jackknife in the context of mixed models. Other new methods introduced in this book include goodness-of-fit tests, prediction intervals, and mixed model selection. These are, of course, in addition to traditional topics such as maximum likelihood and restricted maximum likelihood in Gaussian mixed models.
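    For readers wanting a concrete handle on the Gaussian linear mixed models described above, here is a minimal, assumed example of fitting a random-intercept model by restricted maximum likelihood with statsmodels in Python; the simulated data, formula, and software choice are illustrative and not taken from the book.

```python
# Hypothetical sketch: Gaussian linear mixed model with a random group intercept,
# fitted by REML on simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n_groups, n_per = 20, 10
group = np.repeat(np.arange(n_groups), n_per)
u = rng.normal(0.0, 1.0, n_groups)[group]                 # random intercepts per group
x = rng.uniform(0.0, 1.0, n_groups * n_per)
y = 1.0 + 2.0 * x + u + rng.normal(0.0, 0.5, n_groups * n_per)

data = pd.DataFrame({"y": y, "x": x, "group": group})
fit = smf.mixedlm("y ~ x", data, groups=data["group"]).fit(reml=True)
print(fit.summary())                                      # fixed effects and variance components
```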

  • by Charles F. Manski
    1.574,95 kr.

    Sample data alone never suffice to draw conclusions about populations. Inference always requires assumptions about the population and sampling process. Statistical theory has revealed much about how strength of assumptions affects the precision of point estimates, but has had much less to say about how it affects the identification of population parameters. Indeed, it has been commonplace to think of identification as a binary event - a parameter is either identified or not - and to view point identification as a pre-condition for inference. Yet there is enormous scope for fruitful inference using data and assumptions that partially identify population parameters. This book explains why and shows how. The book presents in a rigorous and thorough manner the main elements of Charles Manski's research on partial identification of probability distributions. One focus is prediction with missing outcome or covariate data. Another is decomposition of finite mixtures, with application to the analysis of contaminated sampling and ecological inference. A third major focus is the analysis of treatment response. Whatever the particular subject under study, the presentation follows a common path. The author first specifies the sampling process generating the available data and asks what may be learned about population parameters using the empirical evidence alone. He then asks how the (typically) set-valued identification regions for these parameters shrink if various assumptions are imposed. The approach to inference that runs throughout the book is deliberately conservative and thoroughly nonparametric. Conservative nonparametric analysis enables researchers to learn from the available data without imposing untenable assumptions. It enables establishment of a domain of consensus among researchers who may hold disparate beliefs about what assumptions are appropriate. Charles F. Manski is Board of Trustees Professor at Northwestern University. He is author of Identification Problems in the Social Sciences and Analog Estimation Methods in Econometrics. He is a Fellow of the American Academy of Arts and Sciences, the American Association for the Advancement of Science, and the Econometric Society.

  • by Geert Verbeke & Geert Molenberghs
    1.608,95 kr.

    This book provides a comprehensive treatment of linear mixed models for continuous longitudinal data. Next to model formulation, this edition puts major emphasis on exploratory data analysis for all aspects of the model, such as the marginal model, subject-specific profiles, and residual covariance structure. Further, model diagnostics and missing data receive extensive treatment. Sensitivity analysis for incomplete data is given a prominent place. Several variations to the conventional linear mixed model are discussed (a heterogeneity model, conditional linear mixed models). This book will be of interest to applied statisticians and biomedical researchers in industry, public health organizations, contract research organizations, and academia. The book is explanatory rather than mathematically rigorous. Most analyses were done with the MIXED procedure of the SAS software package, and many of its features are clearly elucidated. However, some other commercially available packages are discussed as well. Great care has been taken in presenting the data analyses in a software-independent fashion. Geert Verbeke is Assistant Professor at the Biostatistical Centre of the Katholieke Universiteit Leuven in Belgium. He received the B.S. degree in mathematics (1989) from the Katholieke Universiteit Leuven, the M.S. in biostatistics (1992) from the Limburgs Universitair Centrum, and earned a Ph.D. in biostatistics (1995) from the Katholieke Universiteit Leuven. Dr. Verbeke wrote his dissertation, as well as a number of methodological articles, on various aspects of linear mixed models for longitudinal data analysis. He has held visiting positions at the Gerontology Research Center and the Johns Hopkins University. Geert Molenberghs is Assistant Professor of Biostatistics at the Limburgs Universitair Centrum in Belgium. He received the B.S. degree in mathematics (1988) and a Ph.D. in biostatistics (1993) from the Universiteit Antwerpen. Dr. Molenberghs published methodological work on the analysis of non-response in clinical and epidemiological studies. He serves as an associate editor for Biometrics, Applied Statistics, and Biostatistics, and is an officer of the Belgian Statistical Society. He has held visiting positions at the Harvard School of Public Health.

  • by Phillip I. Good
    1.976,95 kr.

    This text is intended to provide a strong theoretical background in testing hypotheses and decision theory for those who will be practicing in the real world or who will be participating in the training of real-world statisticians and biostatisticians. In previous editions of this text, my rhetoric was somewhat tentative. I was saying, in effect, "Gee guys, permutation methods provide a practical real-world alternative to asymptotic parametric approximations. Why not give them a try?" But today, the theory, the software, and the hardware have come together. Distribution-free permutation procedures are the primary method for testing hypotheses. Parametric procedures and the bootstrap are to be reserved for the few situations in which they may be applicable. Four factors have forced this change: 1. Desire by workers in applied fields to use the most powerful statistic for their applications. Such workers may not be aware of the fundamental lemma of Neyman and Pearson, but they know that the statistic they want to use - a complex score or a ratio of scores - does not have an already well-tabulated distribution. 2. Pressure from regulatory agencies for the use of methods that yield exact significance levels, not approximations. 3. A growing recognition that most real-world data are drawn from mixtures of populations. 4. A growing recognition that missing data is inevitable, balanced designs the exception. Thus, it seems natural that the theory of testing hypotheses and the more general decision theory in which it is embedded should be introduced via permutation tests. On the other hand, certain relatively robust parametric tests such as Student's t continue to play an essential role in statistical practice.
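    As a small, assumed illustration of the distribution-free permutation approach described in this blurb, the Python sketch below runs a two-sample permutation test for a difference in means on simulated data; the sample sizes, effect size, and number of permutations are arbitrary choices, not taken from the book.

```python
# Hypothetical sketch: two-sample permutation test for a difference in means.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=30)   # simulated control group
y = rng.normal(0.5, 1.0, size=30)   # simulated treatment group

observed = y.mean() - x.mean()
pooled = np.concatenate([x, y])

n_perm, exceed = 10_000, 0
for _ in range(n_perm):
    perm = rng.permutation(pooled)
    stat = perm[len(x):].mean() - perm[:len(x)].mean()
    if abs(stat) >= abs(observed):
        exceed += 1

p_value = (exceed + 1) / (n_perm + 1)   # two-sided Monte Carlo p-value
print(f"observed difference = {observed:.3f}, permutation p-value = {p_value:.4f}")
```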

  • by Sadanori Konishi
    1.100,95 kr.

    The Akaike information criterion (AIC), derived as an estimator of the Kullback-Leibler information discrepancy, provides a useful tool for evaluating statistical models, and numerous successful applications of the AIC have been reported in various fields of natural sciences, social sciences and engineering. One of the main objectives of this book is to provide comprehensive explanations of the concepts and derivations of the AIC and related criteria, including Schwarz's Bayesian information criterion (BIC), together with a wide range of practical examples of model selection and evaluation criteria. A secondary objective is to provide a theoretical basis for the analysis and extension of information criteria via a statistical functional approach. A generalized information criterion (GIC) and a bootstrap information criterion are presented, which provide unified tools for modeling and model evaluation for a diverse range of models, including various types of nonlinear models and model estimation procedures such as robust estimation, the maximum penalized likelihood method and a Bayesian approach.
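    To make the criteria discussed above concrete, here is a small, assumed numerical example evaluating AIC = -2 log L + 2k and BIC = -2 log L + k log n for a Gaussian model fitted by maximum likelihood in Python; the data are simulated and the model choice is purely illustrative.

```python
# Hypothetical sketch: computing AIC and BIC for a Gaussian model fitted by ML.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = rng.normal(loc=2.0, scale=1.5, size=100)

mu_hat = data.mean()
sigma_hat = data.std(ddof=0)          # maximum likelihood estimate of sigma
loglik = stats.norm.logpdf(data, mu_hat, sigma_hat).sum()

k, n = 2, len(data)                   # two free parameters: mean and variance
aic = -2.0 * loglik + 2.0 * k
bic = -2.0 * loglik + k * np.log(n)
print(f"log-likelihood = {loglik:.2f}, AIC = {aic:.2f}, BIC = {bic:.2f}")
```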

  • - Methods for the Exploration of Posterior Distributions and Likelihood Functions
    by Martin A. Tanner
    1.287,95 - 1.296,95 kr.

    This book provides a unified introduction to a variety of computational algorithms for Bayesian and likelihood inference. In this third edition, I have attempted to expand the treatment of many of the techniques discussed. I have added some new examples, as well as included recent results. Exercises have been added at the end of each chapter. Prerequisites for this book include an understanding of mathematical statistics at the level of Bickel and Doksum (1977), some understanding of the Bayesian approach as in Box and Tiao (1973), some exposure to statistical models as found in McCullagh and Nelder (1989), and for Section 6.6 some experience with conditional inference at the level of Cox and Snell (1989). I have chosen not to present proofs of convergence or rates of convergence for the Metropolis algorithm or the Gibbs sampler since these may require substantial background in Markov chain theory that is beyond the scope of this book. However, references to these proofs are given. There has been an explosion of papers in the area of Markov chain Monte Carlo in the past ten years. I have attempted to identify key references - though due to the volatility of the field some work may have been missed.
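    As a minimal, assumed sketch of the kind of posterior-exploration algorithm covered in this book, the following Python code implements a random-walk Metropolis sampler for a standard normal target; the target density, step size, and chain length are arbitrary illustrative choices, not an example from the text.

```python
# Hypothetical sketch: random-walk Metropolis sampling from N(0, 1).
import numpy as np

def log_target(x):
    return -0.5 * x**2                # unnormalised log-density of the target

rng = np.random.default_rng(2)
n_iter, step = 5_000, 1.0
samples = np.empty(n_iter)
current = 0.0

for i in range(n_iter):
    proposal = current + step * rng.normal()
    # Accept with probability min(1, target(proposal) / target(current))
    if np.log(rng.uniform()) < log_target(proposal) - log_target(current):
        current = proposal
    samples[i] = current

print(f"sample mean = {samples.mean():.3f}, sample sd = {samples.std():.3f}")
```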

  • by Mark J. Schervish
    1.426,95 - 1.435,95 kr.

    The aim of this graduate textbook is to provide a comprehensive advanced course in the theory of statistics covering those topics in estimation, testing, and large sample theory which a graduate student might typically need to learn as preparation for work on a Ph.D. An important strength of this book is that it provides a mathematically rigorous and even-handed account of both Classical and Bayesian inference in order to give readers a broad perspective. For example, the "uniformly most powerful" approach to testing is contrasted with available decision-theoretic approaches.

  • by H. A. David
    574,95 kr.

    This book provides a selection of pioneering papers or extracts ranging from Pascal (1654) to R.A. Fisher (1930). The editors' annotations put the articles in perspective for the modern reader. A special feature of the book is the large number of translations, nearly all made by the authors. There are several reasons for studying the history of statistics: intrinsic interest in how the field of statistics developed, learning from often brilliant ideas and not reinventing the wheel, and livening up general courses in statistics by reference to important contributors.

  • by Noel A. C. Cressie & Timothy R. C. Read
    565,95 kr.

  • by Michael Wolf, Dimitris N. Politis & Joseph P. Romano
    1.300,95 kr.

  • by Samuel Kotz
    599,95 kr.

    This is the third volume of a collection of seminal papers in the statistical sciences written during the past 110 years. These papers have each had an outstanding influence on the development of statistical theory and practice over the last century. Each paper is preceded by an introduction written by an authority in the field providing background information and assessing its influence. Volume III concentrates on articles from the 1980s while including some earlier articles not included in Volumes I and II. Samuel Kotz is Professor of Statistics in the College of Business and Management at the University of Maryland. Norman L. Johnson is Professor Emeritus of Statistics at the University of North Carolina. Also available: Breakthroughs in Statistics Volume I: Foundations and Basic Theory Samuel Kotz and Norman L. Johnson, Editors 1993. 631 pp. Softcover. ISBN 0-387-94037-5 Breakthroughs in Statistics Volume II: Methodology and Distribution Samuel Kotz and Norman L. Johnson, Editors 1993. 600 pp. Softcover. ISBN 0-387-94039-1