Wednesday, October 29, 2008

Working With Missing Data in Survey Analysis

A new working paper from NUIM Economics addresses the issue of missing data in probit estimation: "An Efficient Estimator for Dealing with Missing Data on Explanatory Variables in a Probit Choice Model" (Denis Conniffe and Donal O'Neill). According to the authors, a common approach to dealing with missing data in econometrics is to estimate the model on the subset of complete cases, thereby throwing away potentially useful data. To avoid such case deletion in the particular context of a probit model with missing data on the explanatory variables, the authors develop a new estimator. Their simulation results show that the new estimator performs well when compared to popular alternatives, such as complete case analysis and multiple imputation.
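For readers unfamiliar with the baseline the paper argues against: complete case analysis is simply listwise deletion, where any observation with a missing explanatory variable is discarded before estimation. A minimal sketch, using invented data:

```python
# A minimal sketch of complete-case analysis (listwise deletion):
# rows with any missing explanatory variable are dropped before
# estimation, discarding the partial information they carry.
# The data below are invented purely for illustration.

rows = [
    {"y": 1, "x1": 0.5, "x2": 1.2},
    {"y": 0, "x1": None, "x2": 0.3},   # x1 missing -> row is dropped
    {"y": 1, "x1": 1.1, "x2": None},   # x2 missing -> row is dropped
    {"y": 0, "x1": -0.4, "x2": 0.9},
]

complete_cases = [r for r in rows if None not in r.values()]
print(len(rows), "rows observed,", len(complete_cases), "used")
# prints: 4 rows observed, 2 used
```

Half of the invented sample is lost here, which is exactly the inefficiency that motivates estimators that use the incomplete rows as well.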

A few of us have been discussing missing data and how to address it (mostly with multiple imputation) recently. Below is a list of some resources we have found. If anyone else is aware of other missing data lecture-notes, multiple imputation software packages or relevant econometric estimators, I suggest that we build up a list in the comments on this post.

(i) The NBER econometrics video (and lecture-notes) on missing values - this is done by Wooldridge:

(ii) The Gary King lecture-notes on missing values: These notes mention the software package developed by Gary King to implement multiple imputation of missing values. The package is called Amelia and there is a comprehensive guide to it made available by King here:

(In general, the King site has some great notes - available here)

(iii) A political science lecturer from UCD called Jos Elkink has some lecture-notes on missing values:

(iv) The multiple imputation FAQ page:

(v) Stephen Soldz's resources for missing data:

(vi) The Southampton CASS course on missing values:

(vii) The course from the Cambridge Biostatistics Unit (Patrick Royston is one of the lecturers here):

(viii) The ICE software package in Stata:

(ix) The hotdeck module in Stata:

(x) David Howell's notes on working with missing data:

(xi) Joe Schafer's notes on missing data in longitudinal studies:

(xii) Richard Williams' notes on missing data (including traditional approaches in Stata):

(xiii) A book on missing data by Patrick E. McKnight et al., made partially available on Google Books here
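The idea behind the Stata hotdeck module in the list above is simple enough to sketch in a few lines: each missing value is replaced by a value drawn at random from the observed ("donor") cases. The income figures below are invented for illustration:

```python
# A minimal sketch of random hot-deck imputation: fill each missing
# value with a random draw from the observed values of the same
# variable. (Practical implementations typically draw donors within
# cells of similar respondents; that refinement is omitted here.)
import random

random.seed(0)  # fixed seed so the illustration is reproducible

income = [32_000, None, 41_000, 28_000, None, 55_000]
donors = [v for v in income if v is not None]
imputed = [v if v is not None else random.choice(donors) for v in income]
print(imputed)
```

A single hot-deck pass understates uncertainty, which is why it is often combined with multiple imputation (repeating the draw several times and pooling the results).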


Martin Ryan said...

There is a recent discussion on the IQSS blog about multiple imputation of categorical data. Most standard multiple imputation packages assume the multivariate normal (MVN) distribution, which may not hold for certain types of categorical and binary data. The standard shortcut for overcoming this problem is to impute under the MVN assumption anyway, then round the imputed values back to the categorical scale. A more refined approach is suggested by Recai Yucel, Yulei He, and Alan Zaslavsky in their May 2008 article in 'The American Statistician'.
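The rounding shortcut described above can be sketched as follows for a single binary variable. Note this draws from a univariate normal centred on the observed proportion purely for illustration; a real procedure would draw from a fitted multivariate normal jointly with the other variables:

```python
# Sketch of the "impute under normality, then round" shortcut for a
# binary variable. The univariate normal draw here stands in for the
# full MVN imputation step; it is an illustrative assumption only.
import random

random.seed(1)

observed = [1, 0, 1, 1, 0, 1, 1, 0]    # observed binary responses
p_hat = sum(observed) / len(observed)  # observed proportion of ones

# Draw continuous imputations for 4 missing cases, then round
# (and clamp) them back onto the {0, 1} scale.
draws = [random.gauss(p_hat, 0.3) for _ in range(4)]
rounded = [min(1, max(0, round(d))) for d in draws]
print(rounded)
```

It is exactly this rounding step that the Yucel, He, and Zaslavsky article shows can distort the imputed distribution, motivating their calibration-based alternative.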

Matt said...

Since no one has discussed this yet, let me suggest using partial identification techniques. For example see:

"Partial Identification with Missing Data: Concepts and Findings" by Charles Manski

or Manski's overview text which has a chapter on missing data in general situations, including surveys:
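The core of the worst-case (Manski) bounds is easy to sketch: when an outcome known to lie in a bounded interval is missing for some respondents, the identification region for the population mean is obtained by filling the missing values with each extreme in turn. The numbers below are invented:

```python
# A sketch of Manski-style worst-case bounds on a population mean
# when the outcome is missing for some survey respondents. The
# outcome is assumed to lie in [0, 1]; the data are invented.
observed = [0.8, 0.6, 1.0, 0.4]   # respondents who answered
n_missing = 2                     # respondents who did not answer
n = len(observed) + n_missing

lower = (sum(observed) + n_missing * 0.0) / n  # all missing at the minimum
upper = (sum(observed) + n_missing * 1.0) / n  # all missing at the maximum
print(lower, upper)
```

The width of the interval grows with the missing-data rate, which makes transparent how much is being assumed when a point-identifying method (imputation, a parametric selection model) is used instead.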

Martin Ryan said...

Thanks Matt.

I just realised that the Wooldridge lecture on 'Partial Identification' (P.I.) in the NBER video-series discusses a partial identification example related to missing values. He cites Manski throughout. I'm going to look at this and the Manski links you sent on.

The NBER video and lecture notes on P.I. are available here:

Martin Ryan said...

There is a useful list of techniques put together here as well:

Martin Ryan said...

This is an interesting discussion about multiple imputation and multilevel models:
