General Information
Project description
To learn more about the U.S. General Services Administration’s USAGov email audience, a bilingual survey was developed that included questions on preferred topics, preferred frequency of email communications, the utility of the emails, and audience demographics. USAGov also used behavioral insights in the survey invitation emails to increase survey response.
Detailed information
Final report: Is there a final report presenting the results and conclusions of this project?
Final report
Pre-analysis plan: Is there a pre-analysis plan associated with this registration?
Hypothesis
Do response rates to a government feedback survey improve, compared to a business-as-usual email request, when the email request includes a personal appeal, a personal appeal with a thank-you email to prime reciprocity, or information about the process by which the survey will be used?
How hypothesis was tested
Our empirical model to answer this research question is an Ordinary Least Squares (OLS) model
where:
Y_ib = β0 + β1(P_ib) + β2(T_ib) + β3(PR_ib) + α_b + ε_ib

where i indexes email subscribers and b indexes blocks, and

Y_ib: is an indicator for survey submission;
P_ib: is an indicator for assignment to the personal appeal email;
T_ib: is an indicator for assignment to the personal appeal and thank-you emails;
PR_ib: is an indicator for assignment to the process transparency email;
α_b: are block fixed effects (indicators for active or inactive email subscriber, English- or Spanish-speaking subscriber, and subscriber to English business emails); and
ε_ib: is a subscriber error term.
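As an illustration only, the indicators above could be constructed from subscriber-level assignment data roughly as follows; the file name and column names (treatment_arm, block, submitted) are assumptions for this sketch and are not part of the registration.

    import pandas as pd

    # Hypothetical subscriber-level file: one row per email subscriber with the
    # assigned treatment arm, the randomization block, and the outcome Y_ib.
    df = pd.read_csv("subscribers.csv")  # assumed columns: treatment_arm, block, submitted

    # Treatment indicators relative to the business-as-usual (control) email:
    # P_ib, T_ib, and PR_ib from the model above.
    df["P"] = (df["treatment_arm"] == "personal_appeal").astype(int)
    df["T"] = (df["treatment_arm"] == "personal_appeal_thank_you").astype(int)
    df["PR"] = (df["treatment_arm"] == "process_transparency").astype(int)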
Analyses
We will estimate heteroskedasticity-robust (HC2) standard errors. Our coefficients of interest are β1, β2, and β3, which measure the intent-to-treat (ITT) effects of subscribers being emailed a survey request with a personal appeal, with a personal appeal and a thank-you email reciprocity prime, and with information about the process by which the survey will be used.
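A minimal estimation sketch, assuming the data frame and indicator columns from the previous sketch, with block fixed effects and HC2 standard errors via statsmodels:

    import statsmodels.formula.api as smf

    # OLS of the survey-submission indicator on the three treatment indicators,
    # with block fixed effects absorbed through the categorical block term.
    model = smf.ols("submitted ~ P + T + PR + C(block)", data=df)

    # Heteroskedasticity-robust (HC2) standard errors, as specified above.
    results = model.fit(cov_type="HC2")

    # The coefficients on P, T, and PR are the ITT estimates for the three arms.
    print(results.summary())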
Data Exclusion
In practice, we will have missing data when an email bounces back. In this case, email subscribers do not have a chance to complete the survey. We will test for differential bounce-back rates between the treatment arms, based on both the first email sent and the survey email sent (since one treatment group will receive two emails). We do not plan to adjust our estimates of treatment effects unless bounce-back rates on the first email send statistically differ between assignment groups. In that case, we will show one set of results that includes subscribers whose emails bounced back and another set that excludes them. If the results differ, our primary analysis will exclude email subscribers whose first email bounced back.
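One way to carry out the bounce-back comparison is a simple contingency-table test on the first email send; the bounced_first column is a hypothetical 0/1 indicator for this sketch, and the registration does not commit to a specific test.

    from scipy.stats import chi2_contingency
    import pandas as pd

    # Cross-tabulate assignment group against whether the first email bounced.
    bounce_table = pd.crosstab(df["treatment_arm"], df["bounced_first"])

    # Chi-square test of independence: a small p-value would indicate that
    # bounce-back rates differ across assignment groups, triggering the
    # sensitivity analyses described above.
    chi2, p_value, dof, expected = chi2_contingency(bounce_table)
    print(f"chi-square = {chi2:.2f}, p-value = {p_value:.3f}")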
We do not expect to need to exclude data for other reasons. Because all outcomes are dichotomous
indicators for a behavior and recorded automatically, the outcomes are not at risk for having
outliers or data-recording errors.
Additional information
Country: United States
Project status:
Completed
Date published:
25 June 2021