Four Ways AIR Experts Are Innovating Survey Methodology

Michael Jackson and Cameron McPhee

A scientific, research-based survey is one of the most unbiased, methodical ways to gather and understand data on human behavior and opinion. Findings from formal surveys are important: not only do they provide scientific data instead of assumptions, but they can ultimately influence policymaking. For instance, data collected through regular Census Bureau surveys are a critical factor in determining how to distribute over $675 billion in federal funding.

Read the journal article in the Journal of Survey Statistics and Methodology discussing the results of AIR’s research into how differential financial incentives affect survey response rates (an Oxford Academic account is required to view the full article).

Before researchers conduct a survey, they spend a great deal of time preparing, from developing the questions they’ll ask to determining appropriate sample sizes. Increasingly, researchers have to consider an important factor in survey preparation: response rates. High response rates help ensure sufficient sample sizes and mitigate the possibility that one group is underrepresented. In other words, response rates are an essential component of survey credibility.

With response rates on the decline, we've conducted research exploring ways to predict survey response and boost participation among those least likely to respond. Here, we offer four takeaways from our work.
 

1. Researchers are using different survey designs to try to increase response rates.

When it comes to conducting surveys, researchers are increasingly recognizing that a “one-size-fits-all” approach to data collection may not be the best use of resources. For one thing, not all respondents prefer the same mode of outreach and response. For example, some prefer mail while others prefer phone, and some require a financial incentive to participate while others don’t.

These challenges have led researchers to use and experiment with mixed-mode designs as well as adaptive and responsive designs. Mixed-mode surveys may use more than one mode to contact participants (for example, both mail and phone instead of mail only). Adaptive and responsive surveys apply predictive modeling: using preexisting information, such as previous survey participation, to predict which data collection strategies might be most effective.
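To make the adaptive idea concrete, here is a minimal sketch in Python of how a design might route each sampled case to a different contact protocol based on preexisting information. The fields, decision rules, and protocol names are illustrative assumptions, not the actual design of any AIR or NCES survey.

```python
# A minimal sketch of an adaptive design: use preexisting information
# (here, hypothetical prior-participation and phone-listing flags) to
# choose a data collection strategy for each sampled case.

def choose_protocol(case: dict) -> str:
    """Pick a contact strategy from simple, assumed decision rules."""
    if case.get("responded_previously"):
        return "mail_only"            # past responders: cheapest protocol
    if case.get("has_listed_phone"):
        return "mail_then_phone"      # add a phone follow-up
    return "mail_plus_incentive"      # hardest cases: sweeten the request

sample = [
    {"id": 1, "responded_previously": True,  "has_listed_phone": False},
    {"id": 2, "responded_previously": False, "has_listed_phone": True},
    {"id": 3, "responded_previously": False, "has_listed_phone": False},
]
for case in sample:
    print(case["id"], choose_protocol(case))
```

In practice, the decision rules would come from a statistical model fit to prior rounds of data collection rather than hand-written thresholds, but the routing structure is the same.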

As the number of household landlines—and telephone survey response rates—decline, many researchers have transitioned away from using random digit dialing and have turned instead to address-based sampling, or identifying respondents based on a list of addresses from the U.S. Postal Service. Address-based sampling provides several advantages, including the ability to use web and mail surveys, which now tend to have higher response rates than telephone surveys.
 

2. Data linked to addresses can be used to predict response rates.

Another advantage of address-based sampling is that addresses in the U.S. can be linked to a number of existing data sources, which can potentially help us determine which data collection strategies will be most effective.

For example, in our research, we were able to use data linked to addresses to predict the probability of response with reasonable, though certainly not perfect, accuracy. We support the National Center for Education Statistics (NCES), part of the U.S. Department of Education, with all aspects of survey design for the National Household Education Surveys Program, which collects data on the educational activities of U.S. children and adults and has used an address-based sampling design since 2012. Through that work, we identified several household characteristics associated with the likelihood of response to the survey.

Another helpful piece of information for our research was the Low Response Score, a publicly available statistic released by the Census Bureau. Based on previous response patterns, this score predicts the percentage of addresses in a neighborhood that will not complete the census by mail. Addresses in areas with low predicted census response also tended to have a lower probability of response to the National Household Education Surveys Program.

Our research also found that, among other patterns, households with older adults were more likely to respond and households with renters were less likely to respond. But it’s important to remember that survey response is complex and isn’t driven by any one factor. This is why our research used a multivariate statistical model, one that accounts for a variety of household characteristics that can potentially predict the likelihood of responding, rather than relying on any single characteristic, such as the Low Response Score.
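The sketch below shows what a multivariate response-propensity model of this general kind might look like in Python, fit on synthetic data. The feature names, coefficients, and sample are invented for illustration; the actual models use many more address-linked variables and survey-specific data.

```python
# A minimal sketch of a multivariate response-propensity model on
# synthetic data. All predictors and effect sizes are assumptions,
# chosen only to mirror the patterns described in the text.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical address-linked predictors.
older_adult = rng.integers(0, 2, n)          # household includes an older adult
renter = rng.integers(0, 2, n)               # household rents rather than owns
low_response_score = rng.uniform(0, 100, n)  # neighborhood Low Response Score

# Simulate outcomes consistent with the patterns above: older-adult
# households respond more; renters and high-LRS addresses respond less.
logit = 0.8 * older_adult - 0.6 * renter - 0.03 * low_response_score + 0.5
responded = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([older_adult, renter, low_response_score])
model = LogisticRegression().fit(X, responded)

# Predicted response propensity for each sampled address.
propensity = model.predict_proba(X)[:, 1]
print("mean predicted propensity:", propensity.mean().round(3))
```

The payoff of a multivariate model is that each address gets a single predicted propensity that blends all of its characteristics, rather than being judged on any one flag in isolation.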
 

3. Offering a financial incentive doesn’t guarantee participation.

For several years, the National Household Education Surveys Program has used prepaid cash incentives to try to increase response rates.

"When it comes to conducting surveys, researchers are increasingly recognizing that a 'one-size-fits-all' approach to data collection isn’t the best use of resources."

Our research tested a uniform incentive ($5) against a variable incentive amount. To do this, we used predictive modeling techniques to assign every sampled address a predicted probability of response based on other data linked to the address. We doubled the financial incentive for addresses that were the least likely to respond and reduced or eliminated it for addresses that were the most likely to respond. Our hope was that increasing the financial incentive for those least likely to respond would help reduce nonresponse bias and thus produce more accurate estimates.
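Here is a minimal sketch of that tiered assignment logic in Python. The cutoff quantiles and dollar amounts beyond the $5 baseline are illustrative assumptions rather than the study’s actual values.

```python
# A minimal sketch of variable incentive assignment: addresses least
# likely to respond get a doubled incentive, those most likely get none,
# and the rest get the baseline $5. Tier cutoffs are assumptions.
import numpy as np

def assign_incentive(propensity: np.ndarray, base: float = 5.0) -> np.ndarray:
    lo, hi = np.quantile(propensity, [0.25, 0.75])  # assumed tier cutoffs
    amounts = np.full(propensity.shape, base)
    amounts[propensity <= lo] = 2 * base   # least likely to respond: $10
    amounts[propensity >= hi] = 0.0        # most likely: no incentive
    return amounts

propensity = np.array([0.05, 0.30, 0.55, 0.80, 0.95])
print(assign_incentive(propensity))  # [10. 10.  5.  0.  0.]
```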

While it's possible to use predictive modeling to identify households that are highly unlikely to participate, we found that a higher financial incentive isn’t an effective way to encourage participation for these households. This was disappointing, but any research findings—even ones we don’t necessarily want—are useful as we continue to study survey participation and the use of incentives.
 

4. There’s more to learn about why people respond to surveys—and why they don’t.

Working with our clients and partners, AIR is involved in a wide range of innovative methodological research. We continue to experiment with predictive modeling to be more efficient in how we identify likely and unlikely participants. This goes beyond incentives; for example, the National Household Education Surveys Program is currently experimenting with using predictive modeling to determine which households should receive a mailing sent via FedEx, on the theory that for some households a FedEx delivery may attract more attention than a traditional mailing. We’re also evaluating additional sources of data that may help us predict survey response behavior, such as voter registration files.

As part of our work with NCES, AIR researchers are conducting in-depth interviews with U.S. households to better understand why people don’t respond to surveys, including reasons such as privacy concerns. This will help us refine our data collection strategies and use resources more efficiently to keep costs down. Ultimately, the goal of federal surveys, like the National Household Education Surveys Program, is to provide policymakers and the public with reliable data they can use to make decisions and understand issues affecting our society.

NCES had no role in the writing of this journal article or the decision to submit it for publication. Opinions expressed herein are those of the authors and do not represent official positions of any agency of the U.S. government.