Classic analysis of the foundations of statistics and the development of personal probability, one of the greatest controversies in modern statistical thought. Revised edition. Calculus, probability, statistics, and Boolean algebra are recommended as background.
This important collection of essays is a synthesis of foundational studies in Bayesian decision theory and statistics. An overarching topic of the collection is understanding how the norms for Bayesian decision making should apply in settings with more than one rational decision maker, and then tracing out some of the consequences of this turn for Bayesian statistics. There are four principal themes to the collection: cooperative, non-sequential decisions; the representation and measurement of 'partially ordered' preferences; non-cooperative, sequential decisions; and pooling rules and Bayesian dynamics for sets of probabilities. The volume will be particularly valuable to philosophers concerned with decision theory, probability, and statistics, as well as to statisticians, mathematicians, and economists.
Statistics and hypothesis testing are routinely used in areas (such as linguistics) that are traditionally not mathematically intensive. In such fields, when faced with experimental data, many students and researchers tend to rely on commercial packages to carry out statistical data analysis, often without understanding the logic of the statistical tests they rely on. As a consequence, results are often misinterpreted, and users have difficulty in flexibly applying techniques relevant to their own research; they use whatever they happen to have learned. A simple solution is to teach the fundamental ideas of statistical hypothesis testing without using too much mathematics. This book provides a non-mathematical, simulation-based introduction to basic statistical concepts and encourages readers to try out the simulations themselves using the source code and data provided (the freely available programming language R is used throughout). Since the code presented in the text almost always requires the use of previously introduced programming constructs, diligent students also acquire basic programming abilities in R. The book is intended for advanced undergraduate and graduate students in any discipline, although the focus is on linguistics, psychology, and cognitive science. It is designed for self-instruction, but it can also be used as a textbook for a first course on statistics. Earlier versions of the book have been used in undergraduate and graduate courses in Europe and the US. “Vasishth and Broe have written an attractive introduction to the foundations of statistics. It is concise, surprisingly comprehensive, self-contained and yet quite accessible.
Highly recommended.” Harald Baayen, Professor of Linguistics, University of Alberta, Canada. “By using the text, students not only learn to do the specific things outlined in the book; they also gain a skill set that empowers them to explore new areas that lie beyond the book’s coverage.” Colin Phillips, Professor of Linguistics, University of Maryland, USA
Foundations and Applications of Statistics simultaneously emphasizes both the foundational and the computational aspects of modern statistics. Engaging and accessible, this book is useful to undergraduate students with a wide range of backgrounds and career goals. The exposition immediately begins with statistics, presenting concepts and results from probability along the way. Hypothesis testing is introduced very early, and the motivation for several probability distributions comes from p-value computations. Pruim develops the students' practical statistical reasoning through explicit examples and through numerical and graphical summaries of data that allow intuitive inferences before introducing the formal machinery. The topics have been selected to reflect the current practice in statistics, where computation is an indispensable tool. In this vein, the statistical computing environment R is used throughout the text and is integral to the exposition. Attention is paid to developing students' mathematical and computational skills as well as their statistical reasoning. Linear models, such as regression and ANOVA, are treated with explicit reference to the underlying linear algebra, which is motivated geometrically. Foundations and Applications of Statistics discusses both the mathematical theory underlying statistics and practical applications that make it a powerful tool across disciplines. The book contains ample material for a two-semester course in undergraduate probability and statistics. A one-semester course based on the book will cover hypothesis testing and confidence intervals for the most common situations. In the second edition, the R code has been updated throughout to take advantage of new R packages and to illustrate better coding style. New sections have been added covering bootstrap methods, multinomial and multivariate normal distributions, the delta method, numerical methods for Bayesian inference, and nonlinear least squares.
Also, the use of matrix algebra has been expanded, but remains optional, providing instructors with more options regarding the amount of linear algebra required.
This book links the theory of a selection of statistical procedures used in general practice with their application to real-world data sets using the statistical software package SAS (Statistical Analysis System). These applications are intended to illustrate the theory and, at the same time, to develop the ability to use that knowledge effectively and readily in practice.
A new and refreshingly different approach to presenting the foundations of statistical algorithms, Foundations of Statistical Algorithms: With References to R Packages reviews the historical development of basic algorithms to illuminate the evolution of today’s more powerful statistical algorithms. It emphasizes recurring themes in all statistical algorithms, including computation, assessment and verification, iteration, intuition, randomness, repetition and parallelization, and scalability. Unique in scope, the book reviews the upcoming challenge of scaling many of the established techniques to very large data sets and delves into systematic verification by demonstrating how to derive general classes of worst case inputs and emphasizing the importance of testing over a large number of different inputs. Broadly accessible, the book offers examples, exercises, and selected solutions in each chapter as well as access to a supplementary website. After working through the material covered in the book, readers should not only understand current algorithms but also gain a deeper understanding of how algorithms are constructed, how to evaluate new algorithms, which recurring principles are used to tackle some of the tough problems statistical programmers face, and how to take an idea for a new method and turn it into something practically useful.
This text provides a thorough, straightforward first course on basic statistics. Emphasizing the application of theory, it contains 200 fully worked examples and supplies exercises in each chapter, complete with hints and answers.
Phase space, ergodic problems, central limit theorem, dispersion and distribution of sum functions. Chapters include Geometry and Kinematics of the Phase Space; Ergodic Problem; Reduction to the Problem of the Theory of Probability; Application of the Central Limit Theorem; Ideal Monatomic Gas; The Foundation of Thermodynamics; and more.
Probabilistic Foundations of Statistical Network Analysis presents a fresh and insightful perspective on the fundamental tenets and major challenges of modern network analysis. Its lucid exposition provides necessary background for understanding the essential ideas behind exchangeable and dynamic network models, network sampling, and network statistics such as sparsity and power laws, all of which play a central role in contemporary data science and machine learning applications. The book rewards readers with a clear and intuitive understanding of the subtle interplay between basic principles of statistical inference, empirical properties of network data, and technical concepts from probability theory. Its mathematically rigorous, yet non-technical, exposition makes the book accessible to professional data scientists, statisticians, and computer scientists as well as practitioners and researchers in substantive fields. Newcomers and non-quantitative researchers will find its conceptual approach invaluable for developing intuition about technical ideas from statistics and probability, while experts and graduate students will find the book a handy reference for a wide range of new topics, including edge exchangeability, relative exchangeability, graphon and graphex models, and graph-valued Lévy process and rewiring models for dynamic networks. The author’s incisive commentary supplements these core concepts, challenging the reader to push beyond the current limitations of this emerging discipline. With an approachable exposition and more than 50 open research problems and exercises with solutions, this book is ideal for advanced undergraduate and graduate students interested in modern network analysis, data science, machine learning, and statistics. Harry Crane is Associate Professor and Co-Director of the Graduate Program in Statistics and Biostatistics and an Associate Member of the Graduate Faculty in Philosophy at Rutgers University.
Professor Crane’s research interests cover a range of mathematical and applied topics in network science, probability theory, statistical inference, and mathematical logic. In addition to his technical work on edge and relational exchangeability, relative exchangeability, and graph-valued Markov processes, Prof. Crane’s methods have been applied to domain-specific cybersecurity and counterterrorism problems at the Foreign Policy Research Institute and RAND’s Project AIR FORCE.
Initially published in Moscow in 1950 following the author's death, this book contains the first chapters of a large monograph Krylov planned entitled "The foundations of physical statistics," his doctoral thesis on "The processes of relaxation of statistical systems and the criterion of mechanical instability," and a small paper entitled "On the description of exhaustively complete experiments." Originally published in 1980. The Princeton Legacy Library uses the latest print-on-demand technology to again make available previously out-of-print books from the distinguished backlist of Princeton University Press. These editions preserve the original texts of these important books while presenting them in durable paperback and hardcover editions. The goal of the Princeton Legacy Library is to vastly increase access to the rich scholarly heritage found in the thousands of books published by Princeton University Press since its founding in 1905.
An introduction to statistical natural language processing (NLP). The text contains the theory and algorithms needed for building NLP tools. Topics covered include: mathematical and linguistic foundations; statistical methods; collocation finding; word sense disambiguation; and probabilistic parsing.
International Series of Monographs in Natural Philosophy, Volume 22: Foundations of Statistical Mechanics: A Deductive Treatment presents the main approaches to the basic problems of statistical mechanics. This book examines a theory that gives explicit recognition to the limitations on one's powers of observation. Organized into six chapters, this volume begins with an overview of the main physical assumptions and their idealization in the form of postulates. This text then examines the consequences of these postulates, culminating in a derivation of the fundamental formula for calculating probabilities in terms of dynamic quantities. Other chapters provide a careful analysis of the significant notion of entropy, which shows the links between thermodynamics and statistical mechanics, and also between communication theory and statistical mechanics. The final chapter deals with the thermodynamic concept of entropy. This book is intended to be suitable for students of theoretical physics. Probability theorists, statisticians, and philosophers will also find this book useful.
Author: Irving John Good
Publisher: U of Minnesota Press
Good Thinking was first published in 1983. Good Thinking is a representative sampling of I. J. Good's writing on a wide range of questions about the foundations of statistical inference, especially where induction intersects with philosophy. Good believes that clear reasoning about many important practical and philosophical questions is impossible except in terms of probability. This book collects from various published sources 23 of Good's articles, with an emphasis on the philosophical rather than the mathematical. He covers such topics as rational decisions, randomness, operational research, measurement of knowledge, mathematical discovery, artificial intelligence, cognitive psychology, chess, and the nature of probability itself. In spite of the wide variety of topics covered, Good Thinking is based on a unified philosophy which makes it more than the sum of its parts. The papers are organized into five sections: Bayesian Rationality; Probability; Corroboration, Hypothesis Testing, and Simplicity; Information and Surprise; and Causality and Explanation. The numerous references, an extensive index, and a bibliography guide the reader to related modern and historic literature. This collection makes available to a wide audience, for the first time, the most accessible work of a very creative thinker. Philosophers of science, mathematicians, scientists, and, in Good's words, anyone who wants “to understand understanding, to reason about reasoning, to explain explanation, to think about thought, and to decide how to decide” will find Good Thinking a stimulating and provocative look at probability.