Practically Insignificant

Posts

December 2, 2023

Estimating the Unconditional Median with Deep Learning: A Practical Approach Using R and Keras

NOTE: This post is a parody in the same genre as How to Burn Money and Computing Power for a Simple Median. This post was generated using the same code as in the previous post, but with the prompt: Write a post in the style of Towards Data Science about the following code. Keep in mind that this code is showing how to estimate an unconditional median using a deep learning model.
December 2, 2023

How to Burn Money and Computing Power for a Simple Median

NOTE: This is a parody post. It was generated by providing my code to GPT-4 with the prompt: Turn this into a blog post with the title "World's Most Expensive Way to Compute a Mean or Median" and then asking GPT-4 to make its initial post even more entertaining and sarcastic. Introduction: Today, we dive into the comedic world of absurdly over-engineered solutions for simple problems. Our target? Calculating a mean or median using R and TensorFlow in what might be the most hilariously unnecessary method ever conceived.
September 13, 2021

Inference via Stan for the Mean and Variance of a Gaussian ("Normal") Population with Weakly Informative and Fiducial Priors

Preamble: Attention Conservation Notice: I implement the now-standard Bayesian procedure for estimating a Gaussian mean and variance with weakly informative priors using Stan and make some connections to confidence distributions and fiducial inference. But I omit the details a newcomer would need for this to make sense. For the former material, you are better served by page 73 of A First Course in Bayesian Statistical Methods by Peter Hoff or page 67 of Bayesian Data Analysis.
December 17, 2020

Bounded Bayes: Markov Chain Monte Carlo (MCMC) for Posteriors of Bounded Parameters

This is largely a note to my past-self on how to easily use Markov Chain Monte Carlo (MCMC) methods for Bayesian inference when the parameter you are interested in has bounded support. The most basic MCMC methods involve using additive noise to get new draws, which can cause problems if that kicks you out of the parameter space. Suggestions abound to use the transformation trick on a bounded parameter \(\theta\), and then make draws of the transformed parameter.
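The transformation trick mentioned above can be sketched in a few lines. This is a hypothetical illustration in Python (not the post's own R code): to sample a positive parameter \(\theta\) with random-walk Metropolis, propose on \(\phi = \log\theta\) so additive noise can never leave the parameter space, and add the log-Jacobian \(\log|d\theta/d\phi| = \phi\) to the log target.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated data: Normal(0, sd=2); we infer the variance theta > 0.
y = rng.normal(0.0, 2.0, size=200)

def log_post(theta):
    # Log-likelihood for Normal(0, sqrt(theta)) with a flat prior on theta > 0.
    if theta <= 0:
        return -np.inf
    return -0.5 * len(y) * np.log(theta) - np.sum(y**2) / (2 * theta)

def log_post_phi(phi):
    # Target on the transformed scale phi = log(theta): the Jacobian
    # |d theta / d phi| = exp(phi) contributes +phi on the log scale.
    return log_post(np.exp(phi)) + phi

# Random-walk Metropolis on phi: additive proposals stay in-bounds by construction.
phi = np.log(np.var(y))
draws = []
for _ in range(5000):
    prop = phi + rng.normal(0.0, 0.3)
    if np.log(rng.uniform()) < log_post_phi(prop) - log_post_phi(phi):
        phi = prop
    draws.append(np.exp(phi))  # back-transform each draw to theta

theta_draws = np.array(draws[1000:])  # discard burn-in
print(theta_draws.mean())  # posterior mean of the variance, near the true value 4
```

Without the `+ phi` Jacobian term, the sampler would target the wrong distribution on the original scale, which is the usual pitfall this trick is warning about.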
December 4, 2020

How Confident Are We that Masks "Work"? Confidence Functions and the DANMASK-19 Study

Attention Conservation Notice: I use a recent study about COVID-19 as an excuse to demo an R package I am developing. Some other reasons not to read this: (a) I am not an epidemiologist and (b) I do not plan to explain confidence functions enough for the uninitiated to make sense of this post. Andrew Gelman had an interesting post recently about the DANMASK-19 trial out of Denmark. This study randomized individuals to receive a mask recommendation and a supply of 50 surgical masks (or not):
June 28, 2020

Quantum Statistics: Exact Tests with Discrete Test Statistics

For those who ended up here looking for information about quantum statistical mechanics or particle statistics: apologies! But sometimes I feel like physicists took all the exciting names, so I’m stealing the term quanta as it relates to an observable that can take on only discrete (not continuous) values. That has interesting consequences for inferential procedures based on such discrete (“quantum”) statistics, as we will see in this post.
June 28, 2020

Salvaging Lost Significance via Randomization: Randomized \(P\)-values for Discrete Test Statistics

Last time, we saw that when performing a hypothesis test with a discrete test statistic, we will typically lose size unless we happen to be very lucky and have the significance level \(\alpha\) exactly match one of our possible \(P\)-values. In this post, I will introduce a randomized hypothesis test that will regain the size we lost. Unlike a lot of randomization in statistics, the randomization here comes at the end: we randomize the \(P\)-value in order to recover the size.
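The end-stage randomization described in the teaser can be sketched quickly. A hedged Python illustration (not the post's own code), using a one-sided binomial test: with a discrete statistic \(T\), define the randomized \(P\)-value \(P = \Pr(T > t_{\text{obs}}) + U \cdot \Pr(T = t_{\text{obs}})\) with \(U \sim \text{Uniform}(0,1)\), which is exactly Uniform(0, 1) under the null, so rejecting when \(P \le \alpha\) attains size \(\alpha\) exactly.

```python
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(1)

# One-sided binomial test of H0: p = 0.5 with n = 10 coin flips.
n, p0 = 10, 0.5

def randomized_p(t_obs):
    # Conservative p-value would be P(T >= t_obs); the randomized version
    # smears the point mass at t_obs uniformly:
    #   P = P(T > t_obs) + U * P(T = t_obs),  U ~ Uniform(0, 1).
    return binom.sf(t_obs, n, p0) + rng.uniform() * binom.pmf(t_obs, n, p0)

# Check the size by simulating under H0 and rejecting when P <= alpha.
alpha = 0.05
ps = np.array([randomized_p(rng.binomial(n, p0)) for _ in range(20000)])
print((ps <= alpha).mean())  # close to 0.05, unlike the conservative test
```

The conservative test with these ten flips can only reject at the handful of attainable \(P\)-values, so its actual size falls below \(\alpha\); the uniform draw at the end is what buys back the missing size.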
© 2023 Practically Insignificant