PB4A7- Quantitative Applications for Behavioural Science
26 Sep 2024
Confidence: You will feel like you have a good understanding of design-based causal inference by the end, such that it doesn't feel mysterious or intimidating
Comprehension: You will have learned a lot, both conceptually and in various specifics, particularly with regard to issues around identification and estimation
Competency: You will have had some experience working together to implement these methods in Stata syntax
Coming up with questions is easy.
But coming up with good ones is tricky. A good RQ should:
Improve our understanding of the world:
Contemplate interventions that change behaviour:
Each of these policies asks what happens to some outcome if we make an intervention: keep everything the same but change one factor.
October 2021's Nobel Prize in economics went to D. Card, J. Angrist and G. Imbens
But it arguably belongs as much to Princeton's mid-1980s Industrial Relations group, which was ground zero for the credibility revolution
Starts with Orley Ashenfelter, who had been working on job training programs
KEY individuals: Orley Ashenfelter, David Card (Orley's student), Josh Angrist (Card and Orley's student), Alan Krueger (hired by Orley), Bob Lalonde (Card and Orley's student) and then a generation of students (Levine, Currie, Pischke)
Angrist started working on how randomization in the Vietnam draft lottery can explain later outcomes (we will see this in Week 10)
Meets Imbens and they both get mentored by Gary Chamberlain
They propose the potential outcomes framework
This course is about these people, their ideas, their subsequent development, and how they revolutionised modern empirical research with observational data
Let's do a little thought experiment
Aliens come and orbit earth, in superposition.
They kill the doctors, unplug patients from machines, throw open the doors; many more patients inexplicably die
Sounds ridiculous?
Aren't we all aliens in our research?
Example: If we want to know whether a vaccine works
We compare people who have gotten vaccinated and those who took a placebo instead
In a classic clinical experiment, one applies a "treatment" (0 = placebo, 1 = vaccine) to some set of n "subjects" and observes some "outcome" (Y).
We can then estimate the difference in average outcomes between the treated and the placebo group.
Each individual i is assigned into one of the treatment options (0 = placebo, 1 = vaccine)
Therefore, each i has two potential outcomes: Y_i(0) under placebo and Y_i(1) under the vaccine
Did vaccines prevent infection?
Once we observe one treatment for one individual, we cannot observe a different treatment for the same individual.
This is called the "fundamental problem of causal inference": "Each potential outcome is observable, but we can never observe all of them" (Rubin, 2005, p. 323).
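The fundamental problem can be made concrete with a tiny simulation. This is an illustrative sketch in Python (the course itself uses Stata), with made-up numbers: each unit carries both potential outcomes, but the realised treatment reveals only one of them.

```python
# Sketch of the potential-outcomes setup (hypothetical data, not from the lecture).
import numpy as np

rng = np.random.default_rng(0)
n = 5
Y0 = rng.normal(10, 2, n)             # Y_i(0): outcome if i receives placebo
Y1 = Y0 + 3                           # Y_i(1): outcome if i receives the vaccine
T = rng.integers(0, 2, n)             # realised treatment assignment

Y_obs = np.where(T == 1, Y1, Y0)      # what we actually observe for each unit
Y_missing = np.where(T == 1, Y0, Y1)  # the counterfactual we can never see
```

For every unit, exactly one of the two potential outcomes ends up in `Y_obs`; the other is permanently missing, which is Rubin's point.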
Then why are we discussing all this?
We can observe different treatments across different people.
This may be a way of solving the fundamental problem, but it introduces a new problem we must consider.
Differences between people following a treatment may be because of the treatment, or they may be because of the differences in the people being treated.
This is selection bias.
Let's consider some other factors which may matter for selection bias.
Select a large enough random sample and divide them into two groups.
Individuals differ within each group…
But, on average, the groups themselves are the same, and so are comparable.
The effect of treatment on average would then be:
E(Y | T = 1) − E(Y | T = 0) = Average Treatment Effect (ATE)
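A minimal simulation (Python rather than the course's Stata, with illustrative numbers) of why random assignment lets the difference in group means recover the ATE:

```python
# Randomized assignment: T is independent of the potential outcomes,
# so the groups are comparable on average.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
Y0 = rng.normal(10, 2, n)     # potential outcome under placebo
Y1 = Y0 + 3                   # potential outcome under treatment (true ATE = 3)

T = rng.integers(0, 2, n)     # random assignment, unrelated to (Y0, Y1)
Y = np.where(T == 1, Y1, Y0)  # observed outcome

# E(Y | T = 1) - E(Y | T = 0)
ate_hat = Y[T == 1].mean() - Y[T == 0].mean()
print(round(ate_hat, 2))      # close to the true ATE of 3
```

With a large enough random sample, the estimate lands near the true effect because randomization makes the two groups the same on average.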
The observed difference in outcomes then decomposes as:
Outcome of Treated − Outcome of Untreated = Treatment effect of intervention + Selection bias
Selection bias is the difference in average outcomes between treatment and control groups due to factors other than the treatment status
To recover the true treatment effect, selection bias needs to be eliminated, or reasonably argued to be zero.
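The decomposition above can also be simulated. In this hedged sketch (hypothetical numbers, Python rather than Stata), sicker people select into treatment, so the naive comparison mixes the true effect with a negative selection bias:

```python
# Non-random selection into treatment: the naive difference in means
# equals the true effect PLUS selection bias.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
Y0 = rng.normal(10, 2, n)   # baseline outcome if untreated
Y1 = Y0 + 3                 # true treatment effect = 3 for everyone

# Units with worse baselines are more likely to seek treatment
T = (Y0 + rng.normal(0, 2, n) < 10).astype(int)
Y = np.where(T == 1, Y1, Y0)

naive = Y[T == 1].mean() - Y[T == 0].mean()
# Selection bias: difference in untreated outcomes between the groups
selection_bias = Y0[T == 1].mean() - Y0[T == 0].mean()
print(round(naive, 2), round(selection_bias, 2))
```

Here `naive` equals the true effect (3) plus `selection_bias` (negative, because the treated started off sicker), so the naive comparison understates how well the treatment works.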
To eliminate selection bias, we need well-designed experiments (Matteo's class) and large enough samples
We design a strategy (Identification Strategy from now on) that allows us to:
\[ X = \gamma_0 + \gamma_1\varepsilon + \nu \]
\[ Y = \beta_0 + \beta_1X + \varepsilon \]
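The two equations describe an endogeneity problem: the error ε in the outcome equation also drives X, so a naive regression of Y on X does not recover β₁. A quick simulation (Python, illustrative parameter values chosen here for the sketch) makes the bias visible:

```python
# Simulate X = gamma0 + gamma1*eps + nu and Y = beta0 + beta1*X + eps,
# then check what OLS on Y ~ X actually estimates.
import numpy as np

rng = np.random.default_rng(7)
n = 100_000
beta0, beta1 = 1.0, 2.0       # true outcome-equation parameters
gamma0, gamma1 = 0.5, 1.5     # how strongly eps feeds into X

eps = rng.normal(0, 1, n)     # epsilon appears in BOTH equations
nu = rng.normal(0, 1, n)
X = gamma0 + gamma1 * eps + nu
Y = beta0 + beta1 * X + eps

# OLS slope: Cov(X, Y) / Var(X)
b1_hat = np.cov(X, Y)[0, 1] / np.var(X, ddof=1)
print(round(b1_hat, 2))       # upward-biased here, since gamma1 > 0
```

Because Cov(X, ε) ≠ 0, the OLS slope converges to β₁ + γ₁·Var(ε)/Var(X) rather than β₁; this is exactly the kind of problem an identification strategy is designed to solve.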
Seminar today:
Week 2: Hypothesis testing