kianN 3 days ago

This is my favorite book on statistics. Full stop. The author Andrew Gelman created a whole new branch of Bayesian statistics through both his theoretical work on hierarchical modeling and his release of Stan, which made hierarchical models practical to apply.

It took me about a year to work through this book on the side (including the exercises), and it provided the foundation for years of fruitful research into hierarchical Bayesian models. It’s definitely not an introductory read, but for anyone looking to advance their statistical toolkit, I cannot recommend this book highly enough.

As a starting point, I’d strongly suggest reading the first 5 chapters for an excellent introduction to Gelman’s modeling philosophy, then jumping around the table of contents to any topics that look interesting.

  • tmule 3 days ago

    “The author Andrew Gelman created a whole new branch of Bayesian statistics ...” Love Gelman, but this is playing fast and loose with facts.

    • kragen 3 days ago

      His book on hierarchical modeling with Hill has 20398 cites on Google Scholar https://scholar.google.com/scholar?cluster=94492350364273118... and Wikipedia calls him "a major contributor to statistical philosophy and methods especially in Bayesian statistics[6] and hierarchical models.[7]", which suggests the claim is more true than false.

      • nextos 3 days ago

        He co-wrote the reference textbook on the topic and made interesting methodological contributions, but Gelman acknowledges other people, including Stein and Donoho, as creators of the theoretical underpinnings of multilevel/hierarchical modeling [1]. The field is quite old; one can find hierarchical models in articles published many decades ago.

        Also, IMHO, his best work has been describing how to do statistics. He has written somewhere (I cannot find it now) that he sees himself as a user of mathematics, not a creator of new theories. His book Regression and Other Stories is elementary but exceptionally well written. He describes how great Bayesian statisticians think and work, and that is invaluable.

        He is updating Data Analysis Using Regression and Multilevel/Hierarchical Models to the same standard, and I guess BDA will eventually come next. As part of the refresh, I imagine everything will be ported to Stan. Interestingly, Bob Carpenter and others working on Stan are now pursuing ideas on variational inference to scale things further.

        [1] https://sites.stat.columbia.edu/gelman/research/unpublished/...

        • kianN 3 days ago

          Totally agree, and great point that hierarchical models have been around for a long time; however, these were primarily analytical, leveraging conjugate priors or requiring pretty extensive integration.

          I would say his work on Stan and his writings, along with those of theorists like Radford Neal, really opened the door to a computational approach to hierarchical modeling. And I think that is a meaningfully different field.
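          To make the contrast concrete, here is a toy sketch (illustrative numbers only, not from the book) of the kind of closed-form conjugate update those older analytical models relied on, as opposed to the sampling that Stan does:

```python
# Beta-Binomial conjugate update: with a Beta(a, b) prior on a success
# probability and k successes in n trials, the posterior is simply
# Beta(a + k, b + n - k) -- no integration or sampling required.

def beta_binomial_update(a, b, k, n):
    """Return posterior Beta parameters after k successes in n trials."""
    return a + k, b + (n - k)

def beta_mean(a, b):
    """Mean of a Beta(a, b) distribution."""
    return a / (a + b)

# Uniform Beta(1, 1) prior, then observe 7 successes in 10 trials.
post_a, post_b = beta_binomial_update(1, 1, 7, 10)
print(post_a, post_b)                        # -> 8 4
print(round(beta_mean(post_a, post_b), 3))   # -> 0.667
```

          Once the model is hierarchical and the priors are no longer conjugate, this closed form disappears, which is where samplers like Stan come in.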

  • pyyxbkshed 3 days ago

    What is a book / course on statistics that I can go through before this so that I can understand this?

    • oogway8020 2 days ago

      Here is one path to learn Bayesian statistics starting from the basics, assuming a modern R path with the tidyverse (recommended):

      First learn some basic probability theory: Peter K. Dunn (2024). The theory of distributions. https://bookdown.org/pkaldunn/DistTheory

      Then frequentist statistics: Chester Ismay, Albert Y. Kim, and Arturo Valdivia - https://moderndive.com/v2/ and Mine Çetinkaya-Rundel and Johanna Hardin - https://openintrostat.github.io/ims/

      Finally Bayesian: Johnson, Ott, Dogucu - https://www.bayesrulesbook.com/ This is a great book; it will teach you everything from the very basics to advanced hierarchical Bayesian modeling, all with reproducible code and Stan/rstanarm.

      Once you master this, the next level may be using brms; Solomon Kurz has worked through the full Regression and Other Stories book using the tidyverse/brms. His knowledge of the tidyverse and brms is impressive and demonstrated in his code. https://github.com/ASKurz/Working-through-Regression-and-oth...

      • thefringthing 2 days ago

        I would include Richard McElreath's _Statistical Rethinking_ here after, or in combination with, _Bayes Rules!_. A translation of the code parts into the tidyverse is available free online, as are lecture videos based on the book.

    • kianN 3 days ago

      I don’t mean for the bar to sound too high. I think working through Khan Academy’s full probability, calculus, and linear algebra courses would give you a strong foundation. I worked through this book having just completed the equivalent courses in college.

      It’s just a relatively dense book. There are some other really good suggestions in this thread, most of which I’ve heard good things about. If you have a background in programming, I’d suggest Bayesian Methods for Hackers as a really good starting point. But you can also definitely tackle this book head on, and it will be very rewarding.

    • crystal_revenge 3 days ago

      Bayesian Statistics the Fun Way is probably the best place to start if you're coming at this from 0. It covers the basics of most of the foundational math you'll need along the way and assumes basically no prerequisites.

      After that, Statistical Rethinking will take you much deeper into more complex experiment design using linear models and beyond, as well as deepening your understanding of other areas of math required.

    • 1u15 3 days ago

      Regression and Other Stories. It’s also co-authored by Gelman, and it reads like an updated version of his previous book Data Analysis Using Regression and Multilevel/Hierarchical Models.

      Statistical Rethinking is a good option too.

    • itissid 3 days ago

      If you are near Columbia, the visiting students post-baccalaureate program (run by the SPS, last I recall) allows you to take for-credit courses in the Social Sciences department. Professor Ben Goodrich has an excellent course on Bayesian Statistics in Social Sciences, which teaches it using R (now it might be in Stan).

      That course is a good balance between theory and practice. It gave me a practical intuition for why posterior distributions of parameters and data are important and how to compute them.

      I took the course in 2016 so a lot could have changed.

    • musebox35 3 days ago

      I found David MacKay’s book Information Theory, Inference, and Learning Algorithms to be well written and easy to follow. Plus, it is freely available from his website: https://www.inference.org.uk/itprnn/book.pdf

      It goes through the fundamentals of Bayesian ideas in the context of applications to communication and machine learning problems. I find his explanations uncluttered.

    • sn9 2 days ago

      For effectively and efficiently learning the calculus, linear algebra, and probability underpinning these fields, Math Academy is going to be your best resource.

    • glial 2 days ago

      Doing Bayesian Data Analysis by John Kruschke (get the 2nd edition). The name is even an homage to the original.

    • jmpeax 2 days ago

      Statistical Rethinking by Richard McElreath. He even has a YouTube series covering the book if you prefer that modality.

  • djmips 2 days ago

    Can you explain to me in simple terms how your fruitful research benefited you in a concrete way? Is this simply an enlightening hobby, or do you have significant everyday applications? What kind of cool job has you employing Bayesian data analysis day to day, and for what benefit? How do the suits relate to such knowledge and its beneficial application, which may be well beyond their ken?

    • kianN 2 days ago

      My applications have focused on noisy, high-dimensional, small datasets in which it is either very expensive or impossible to get more data.

      One example is rare-class prediction on long-form text data, e.g. phone calls, podcasts, transcripts. Other approaches, including neural networks and LLMs, are either not flexible enough or require far too much data to achieve the necessary performance. Structured hierarchical modeling is the balance between those two extremes.

      Another example is genomic analysis: similarly high-dimensional, noisy, and data-poor. Additionally, you don’t actually care about the predictions; you want to understand which genes or sets of genes are driving phenotypic behaviors.

      I’d be happy to go into more depth via email or chat if this is something you are interested in (on my profile).

      Some useful reads

      [1] https://sturdystatistics.com/articles/text-classification

      [2] https://pmc.ncbi.nlm.nih.gov/articles/PMC5028368/

  • SilverElfin 3 days ago

    Is there a good book that covers statistics as it is applied to testing - like for medical research or as optimization or manufacturing or whatever?

    • crystal_revenge 3 days ago

      The key insight is that within the Bayesian framework, hypothesis testing is parameter estimation: your certainty in the outcome of the test is your posterior probability over the test-relevant parameters.

      Once you realize this, you can easily develop very sophisticated testing models (if necessary) that are also easy to understand and reason about. This dramatically simplifies the problem.
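      As a sketch of the idea (made-up A/B data, Beta-Binomial for simplicity, not taken from either book), the "test" is just the posterior probability that one parameter exceeds the other:

```python
import random

# Hypothetical A/B test: variant A converts 30/200, variant B 45/200.
# With Beta(1, 1) priors, each rate has a Beta posterior; the hypothesis
# test reduces to estimating P(rate_B > rate_A) under those posteriors.
random.seed(42)

a_succ, a_n = 30, 200
b_succ, b_n = 45, 200

draws = 100_000
wins = sum(
    random.betavariate(1 + b_succ, 1 + b_n - b_succ)
    > random.betavariate(1 + a_succ, 1 + a_n - a_succ)
    for _ in range(draws)
)
print(wins / draws)  # roughly 0.97: strong, but not certain, evidence for B
```

      No special testing machinery is needed; it is the same posterior you would use for estimation.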

      If you're looking for a specific book recommendation, Statistical Rethinking does a good job covering this at length, and Bayesian Statistics the Fun Way is a more beginner-friendly book that covers the basics of Bayesian hypothesis testing.

      • kianN 3 days ago

        I might check out Statistical Rethinking given how frequently it is being recommended!

        Edit: Haha, I just found the textbook, and I’m remembering now that I actually worked through sections of it when I was working through BDA several years back.

    • kianN 3 days ago

      This book is very relevant to those fields. In statistics there is a common choice between stratifying and aggregating your dataset.

      There is an example in his book discussing efficacy trials across seven hospitals. If you stratify the data, you lose a lot of confidence; if you aggregate the data, you end up largely modeling the differences between hospitals rather than the treatment effect.

      Hierarchical modeling allows you to split your dataset under a single unified model. This is really powerful for extracting signal from noise, because you can split your dataset according to potential confounding variables, e.g. the hospital from which the data was collected.

      I am writing this on my phone, so apologies for the lack of links, but in short, the approach in this book is extremely relevant to medical testing.
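      A rough sketch of the trade-off with entirely invented counts (a real analysis would fit a full hierarchical model, e.g. in Stan; this uses crude empirical-Bayes shrinkage just to show the idea):

```python
# Seven hypothetical hospitals: (successes, trials). "No pooling" trusts each
# raw rate (noisy for small hospitals); "complete pooling" uses one aggregate
# rate (hides hospital differences). Partial pooling shrinks each hospital
# toward the overall rate, with less shrinkage where there is more data.

hospitals = [(3, 10), (18, 50), (7, 25), (40, 120), (2, 8), (30, 90), (12, 40)]

pooled = sum(k for k, n in hospitals) / sum(n for k, n in hospitals)
prior_strength = 20  # pseudo-observations; a full model would infer this

partial = [(k + prior_strength * pooled) / (n + prior_strength)
           for k, n in hospitals]

for (k, n), est in zip(hospitals, partial):
    print(f"n={n:3d}  raw={k / n:.3f}  partially pooled={est:.3f}")
```

      Every partially pooled estimate lands between that hospital's raw rate and the overall rate, which is exactly the compromise between stratifying and aggregating.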

      • greymalik 3 days ago

        It’s unclear which post you’re referring to - can you clarify which book you mean by “this book”?

g9yuayon 3 days ago

I can attest to how useful Bayesian analysis is. My team recently needed to sample from many millions of items to test their quality. The question was: given a certain budget and expectation, what is the minimum or maximum number of items we need to sample? There was an elegant solution to this problem.
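The details of the problem aren't given above, so this is only a guess at its flavor: one common version is choosing the smallest sample size whose approximate 95% interval for a rate is within a tolerance, sized at the worst case of a rate near 0.5.

```python
import math

def min_sample_size(tolerance, worst_rate=0.5, z=1.96):
    """Smallest n whose approximate 95% interval half-width <= tolerance."""
    n = 1
    while z * math.sqrt(worst_rate * (1 - worst_rate) / n) > tolerance:
        n += 1
    return n

print(min_sample_size(0.05))  # -> 385, the familiar "~400 samples for +/-5%"
```

This normal-approximation version is the simplest case; priors and per-item costs can be folded in from there.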

What was surprising, though, was how reluctant the engineers were to learn such basic techniques. It's not like the math was hard. They all went through first-year college math, and I'm sure they did reasonably well.

  • some_guy_nobel 3 days ago

    What were they reluctant to learn? Why do they need to learn it?

    Plenty of engineers have to take an introductory stats course, but it's not clear why you'd want your engineers to learn Bayesian statistics. I would be surprised if they could correctly interpret a p-value or regression coefficient, let alone one with interaction effects. (It'd be wholly useless if they could, fwiw.)

    It'd be nice if the statisticians/'data scientists' on my team learned their way around the CI/CD pipelines, understood Kubernetes pods, and could write their own distributed-training versions of their PyTorch models, but division of labor is a thing for a reason, and I don't expect them to, nor need them to.

    • g9yuayon a day ago

      I guess I have a different philosophy: whoever owns the problem should learn everything necessary to solve it. In my case, the engineers showed no interest in learning the algorithm or the math behind it. For instance, when they built the dashboard for the testing, they omitted a few important columns and got the column names wrong. When I tested them on their understanding of the method, there was none. At the very least, my team should know enough to challenge me in case I make a mistake, or so I assume.

      On a side note, I believe it is an individual's responsibility to find the coolness in their project. What's the fun of building a dashboard that I have done a thousand times? What's the fun of carrying out a routine that does not challenge me? But solving a problem in the most rigorous and generalized way? That is something in which an engineer can find some fun. Or maybe it's just me.

canyon289 3 days ago

BDA is THE book for learning Bayesian modeling in depth and rigorously. For different approaches, there are a number shared here, like Statistical Rethinking by Richard McElreath or Regression and Other Stories, which Gelman and Aki wrote as well.

I also wrote a book on the topic, which is focused on a code-and-example approach. It's available for open access here: https://bayesiancomputationbook.com

mcdonje 3 days ago

I'm a fan of the stats blog hosted by Columbia that Gelman is the main contributor to: https://statmodeling.stat.columbia.edu

jakubmazanec 3 days ago

For beginners, I found Doing Bayesian Data Analysis by John Kruschke to be much better. Easier to read and comprehend.

  • drnick1 3 days ago

    BDA is intended for grad students, I think. It's not particularly "hard" as far as math goes, but it assumes a first course in mathematical statistics.

piqufoh 2 days ago

While we're here - I've gained a lot from "Data Analysis: A Bayesian Tutorial" by DS Sivia and J Skilling. It's a graduate level text, and I found the chapters very concise and the subject well-laid out. It was one of those books that gave me continuous insight and fresh inspiration - even though it's more than 10 years old.

atdt 3 days ago

I am interested in this topic, but this textbook is too daunting for me. What I'd love is a crash course on Bayesian methods for the working systems performance engineer. If you, dear reader, happen to be familiar with both domains: what would you include in such a course, and can you recommend any existing resources for self-study?

  • JHonaker 3 days ago

    My go-to for teaching statistics is Statistical Rethinking. It’s basically a course in how to actually think about modeling: what you’re really doing is analyzing a hypothesis, a model may be consistent with a number of hypotheses, and figuring out which hypotheses any given model implies is the hard/fun part. This book teaches you that. The only drawback is that it’s not free. (Although there are excellent lectures by the author available for free on YouTube. These are worth watching even if you don’t get the book.)

    I also recommend Regression and Other Stories by Gelman (one of the authors of the linked book) as a more approachable text for this content.

    Think Bayes and Bayesian Methods for Hackers are introductory books for a beginner coming from a programming background.

    If you want something more from the ML world that heavily emphasizes the benefits of probabilistic (Bayesian) methods, I highly recommend Kevin Murphy’s Probabilistic Machine Learning. I have only read the first edition before he split it into two volumes and expanded it, but I’ve only heard good things about the new volumes too.

    • huijzer 3 days ago

      Yep, 100% came here to say the same. It helped me a lot during my PhD to get a better understanding of statistics.

mkw5053 2 days ago

Before my 2-year-old, I led a math book club in SF. This was one of the books I taught/led with the group, and it's still one of my favorites.

asdev 3 days ago

Looking for more self study statistics resources for someone with a CS degree, any other recs?

  • mamonster 3 days ago

    Start with Statistics by David Freedman. It is very approachable as an introduction and not too theory-heavy, and you can get a handle on all of the "main" issues. Afterwards, you have 2 options:

    1) Do you want "theoretical" knowledge (math background required)? If so, then you need to get a decent mathematical statistics book like Casella-Berger. I think a good US CS degree grad could handle it, but you might need to go a bit slow, google around, and maybe fill in some gaps in probability/calculus.

    2) Introduction to Statistical Learning is unironically a great intro to "applied" stats. You get most of the "vanilla" models/algorithms, the theoretical background behind each (but not too much), exercises that vary in difficulty, and you can follow along with the R version and see how stuff actually works.

    With regards to Gelman and Bayesian data analysis, I should note that in my experience the Bayesian approach is 1st-year MS / 4th-year bachelor's material in the US. It's very useful to know and have in your toolbox, but IMO it should be left aside until you are confident in the "frequentist" basics.

  • fishmicrowaver 3 days ago

    Probability Theory by Jaynes if you'd like more Bayes.

j7ake 3 days ago

Is Bayesian data analysis relevant anymore in the era of foundation models and big data?

  • jmpeax 2 days ago

    Yes, the two are orthogonal concepts. Text did not disappear just because we invented photography. Bayesian data analysis is for inverse problems, such as using data to learn about the properties of the system/model that could have generated the data, and neural networks are for forward problems such as using data to generate more data or make predictions.

    You can use BDA for forward problems too, via posterior predictive samples. The benefit over neural networks for this task is that with BDA you get dependable uncertainty quantification about your predictions. The disadvantage is that the modalities are somewhat limited to simple structured data.
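    A minimal sketch of posterior predictive sampling on made-up data (Beta-Binomial for simplicity): draw a parameter from the posterior, then draw new data given that parameter, so the simulated datasets reflect both parameter uncertainty and sampling noise.

```python
import random

random.seed(0)

# Observed: 12 successes in 20 trials, Beta(1, 1) prior on the rate.
k, n = 12, 20
post_a, post_b = 1 + k, 1 + (n - k)  # conjugate Beta posterior

def predictive_draw(trials=20):
    theta = random.betavariate(post_a, post_b)          # parameter uncertainty
    return sum(random.random() < theta for _ in range(trials))  # sampling noise

sims = [predictive_draw() for _ in range(10_000)]
print(sum(sims) / len(sims))  # near 20 * 13/22, i.e. about 11.8
```

    The spread of `sims` is the uncertainty quantification over predictions mentioned above.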

    You can also use neural networks for inverse problems, such as for example with Neural Posterior Estimation. This approach shows promise since it can tackle more complex problems than the standard BDA approach of Markov Chain Monte Carlo and with much faster results, but the accuracy and dependability are still quite lacking.

  • analog31 3 days ago

    General quantitative thinking, and a sense of statistics, are still valuable. If you don't learn them from Bayes specifically, you should learn them somehow. The "square root of n rule" [0] is still a stern master. And we're still not past having to think about whether our results make sense.

    [0] The rule of thumb that signal-to-noise improves with the square root of the number of measurements. Also, as my dad put it: "The more bad data we average together, the closer we get to the wrong answer."
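    The rule is easy to check by simulation (a pure-Python sketch, nothing Bayesian about it):

```python
import random
import statistics

random.seed(1)
sigma = 2.0  # standard deviation of a single measurement

def std_of_mean(n, reps=2000):
    """Empirical standard deviation of the mean of n noisy measurements."""
    means = [statistics.fmean(random.gauss(0, sigma) for _ in range(n))
             for _ in range(reps)]
    return statistics.stdev(means)

for n in (10, 100, 1000):
    print(n, round(std_of_mean(n), 3))  # tracks sigma / sqrt(n): ~0.63, ~0.20, ~0.06
```

    Averaging 100x more data only improves the estimate by a factor of 10.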

    • memming 2 days ago

      Your dad sounds like an awesome person.

  • memming 3 days ago

    Foundation models can be seen as approximate amortized posterior inference machines where the posterior is conditioning on the pre-training data. However, the uncertainty is usually ignored, and there may be ways to improve the state of the art if we were better Bayesians.

  • canjobear 3 days ago

    Why would it not be? You can use big data and neural nets to fit Bayesian models (variational inference).

    • j7ake 3 days ago

      I meant specifically the book, which does not have any of those things you mentioned.

      Also, nobody fits neural networks using variational inference with any priors that aren't some standard form that makes the algorithm easy.

      • tech_ken 2 days ago

        Yeah, definitely! People still need to do statistical inference in 2025 (see, e.g., the field of econometrics).

  • CuriouslyC 3 days ago

    Yes, because Bayes' rule is fundamental if you're reasoning probabilistically. Bayesian methods produce better results with quantified uncertainty; we just don't have efficient methods to compute them for deep models yet.

  • mitthrowaway2 3 days ago

    Even in this era, there are some problems for which data is extremely limited. Those IMO tend to be the problems in which Bayesian techniques shine the most.

  • fromMars 3 days ago

    Most definitely. Many problems do not have giant datasets, and it also depends on what your task is.

kalx 2 days ago

This helped me so much during my PhD.

pks016 3 days ago

Waiting for the Bayesian workflow book!