S3E2: Should we try to ensure misclassification is non-differential? Discussing measurement error with Dr. Patrick Bradshaw



In this episode we have a conversation with Patrick Bradshaw about issues related to measurement error, misclassification, and information bias. We ask him to help define and clarify the differences between these concepts. We chat about dependent and differential forms of misclassification and how helpful DAGs can be for identifying these sources of bias. Patrick helps explain the problem with over-reliance on the assumption that non-differential misclassification produces bias toward the null, and concerns about being “anchored to the null” in epidemiologic analyses. This episode will also provide you with the most up-to-date information from Patrick on his recommendations for excellent new TV shows to stream (Wednesday on Netflix; WandaVision on Disney+). Two thumbs up.


S3E1: Are we measuring what we think we’re measuring?



In the season three premiere, Matt and Hailey discuss Chapter 13 of Modern Epidemiology, 4th edition. For the third season of the SERious Epi podcast, we are going to continue our close reading of the newest version of the Modern Epi textbook. This chapter is focused on measurement error and misclassification. In this episode we discuss issues related to the mis-measurement of exposure, outcome, and covariates. We also debate whether misclassification is just an analytic issue (i.e., putting people into the wrong categories) or an analytic plus conceptual issue (i.e., putting people into the wrong categories and having an incorrect definition for those categories). We also talk about measurement error DAGs and why we wish more people used analytic approaches to correct for measurement error, and Matt explains the concept of email bankruptcy.


S2E16: There’s a 95% probability you’ll enjoy learning about sample size and precision with Dr. Jon Huang



In this episode of Season 2 of SERious Epidemiology, Hailey and Matt connect with Dr. Jon Huang for a discussion on precision and study size. We wade into whether or not we should use p-values. We discuss whether the debates about p-values are real or just happening on Twitter, and whether p-values should be used in observational epi or just in trials. We ask whether p-values do more harm than good in observational studies or whether the harm really comes from null hypothesis significance testing. We talk about misconceptions about p-values. And Jon tells us how he’s going to win a gold medal in the Winter Olympics, despite living in a tropical climate.


S2E15: As random as it gets



In this episode of Season 2 of SERious Epidemiology, Hailey and Matt finally start talking about random error. We explore the deep philosophical (as deep as we are capable of) meaning behind randomness, whether the universe is random (and hey, while we’re at it, is there even free will?), and how we think about random error. We talk about p-hacking and p-curves and anything p, really. And we talk about precision and accuracy in epidemiologic research. And Hailey aces Matt’s quiz.


S2E14: Confounding will never go away – with Maya Mathur



In this episode of Season 2 of SERious Epidemiology, Hailey and Matt connect with Dr. Maya Mathur for a discussion on confounding. We talk about different ways of thinking about confounding and discuss how different sources of bias can come together. We talk about overadjustment bias, a topic we all feel needs more attention. We discuss E-values, and Dr. Mathur explains their practical utility and also how complicated they are to interpret. And we discuss bias analysis for meta-analyses.

Article mentioned in this episode:

Schisterman EF, Cole SR, Platt RW. Overadjustment bias and unnecessary adjustment in epidemiologic studies. Epidemiology. 2009 Jul;20(4):488-95. doi: 10.1097/EDE.0b013e3181a819a1. PMID: 19525685; PMCID: PMC2744485.


S2E13: Confounding: Ten thousand arrows going into a bunch of squiggly things



In this episode of Season 2 of SERious Epidemiology, Hailey and Matt discuss confounding and whether confounding is hogging the spotlight in epi methods and epi teaching. We debate the value of all the different terms for confounding in the world of epi and beyond, and struggle to define them all. We talk about different definitions of confounding and differentiate between confounders and confounding. We talk about the 10% change-in-estimate approach and its limitations, and we discuss different strategies for confounder control. And Hailey coins the term “DAGmatist”.

We reference the paper below:

VanderWeele, T.J. and Shpitser, I. (2011). A new criterion for confounder selection. Biometrics, 67:1406-1413.



S2E12: How great are case-control studies? With Ellie Matthay



In this episode of Season 2 of SERious Epidemiology (recorded back when we were getting COVID booster shots), Hailey and Matt connect with Dr. Ellie Matthay for a discussion of Chapter 8 on case-control studies. We finally answer whether it is spelled with a hyphen or not (and Hailey and Ellie disagree with Matt about semicolons). We discuss how cohort studies and case-control studies differ and overlap. We talk about whether case-control studies are more biased than cohort studies. And Hailey reveals her dreams for releasing Modern Epidemiology: The Audiobook (with possible singing).


S2E11: Case Control Studies



In this episode of Season 2 of SERious Epidemiology, Hailey and Matt get into the humble case-control study. We discuss the ins and outs of this much-maligned study design that has flummoxed so many in epidemiology. We ask the hard questions about the best way to sample in a case-control study, whether we spend too much or not enough time on it in our teaching, whether a case-control study always has to be nested within some hypothetical cohort, whether the design is inherently more biased than cohort studies (spoiler: no, but…), why some people refer to cases and controls when they are not referring to a case-control study, and, if it were on a famous TV show, which character the case-control study would be (and more importantly, why Hailey has never seen said TV show).

Papers referenced in this episode:

Selection of Controls in Case-Control Studies: I. Principles
Sholom Wacholder, Joseph K. McLaughlin, Debra T. Silverman, Jack S. Mandel
American Journal of Epidemiology, Volume 135, Issue 9, 1 May 1992, Pages 1019–1028, https://doi.org/10.1093/oxfordjournals.aje.a116396

Selection of Controls in Case-Control Studies: II. Types of Controls
Sholom Wacholder, Debra T. Silverman, Joseph K. McLaughlin, Jack S. Mandel
American Journal of Epidemiology, Volume 135, Issue 9, 1 May 1992, Pages 1029–1041, https://doi.org/10.1093/oxfordjournals.aje.a116397

Selection of Controls in Case-Control Studies: III. Design Options
Sholom Wacholder, Debra T. Silverman, Joseph K. McLaughlin, Jack S. Mandel
American Journal of Epidemiology, Volume 135, Issue 9, 1 May 1992, Pages 1042–1050, https://doi.org/10.1093/oxfordjournals.aje.a116398



S2E10: The Return of the Cohort Studies



In this episode of Season 2 of SERious Epidemiology, Hailey and Matt get some real-world experience with cohort studies in a conversation with Dr. Vasan Ramachandran, PI of the Framingham Heart Study (FHS). FHS is a very well-known cohort study and the model that many of us have in mind when we think of cohort studies. We get a bit of history on FHS, and Hailey and Matt have a chance to ask the questions they have struggled with around cohort studies, including the role of representativeness. And, spoiler alert, we learn that FHS did not actually invent the term “risk factor”, contrary to what Matt has been telling his students for years.


S2E9: The Cohort Studies Brouhaha



In this episode of Season 2 of SERious Epidemiology, Hailey and Matt get into cohort studies. We spend a lot of time confessing our limitations, both personally and as a field, in assigning person-time. We talk about the end of the large cohort study and the challenges in determining when to consider a person as exposed. We talk about issues of immortal person-time and whether it is technically acceptable to include people who already have the outcome in a cohort study.