
Password123: Applying behavioural insights to cyber security advice

General Information

Project description

To help improve the impact of cyber security advice for individuals and small businesses, BETA partnered with the Australian Cyber Security Centre (ACSC) to design and test different formats of advice. We conducted focus groups and two survey experiments (surveys with embedded randomised controlled trials) to understand whether behavioural insights concepts are effective in shifting people's intentions to enact safer cyber security practices. We surveyed small and medium business (SMB) owners and operators and tested the effect of different formats of advice. We found some evidence that messengers may have a small positive impact on people's intentions to update their software, but we have only moderate confidence in this finding. We found no effect from messengers on people's intentions to use strong and different passwords across important accounts, and no effect on either practice from framing losses as financial versus non-financial. Overall, our research suggests that making cyber security advice salient and engaging can help key messages stand out. However, providing advice alone is insufficient to change behaviour, and further research is needed to understand which formats, framings and channels are most impactful for different groups.

This is part of a series of reports on applying behavioural insights to improve cyber security advice for individuals and small businesses in Australia.

Detailed information

Final report: Is there a final report presenting the results and conclusions of this project?

Yes


Hypothesis

Part 1 Hypotheses:
H1: Behavioural intentions and phishing identification will be higher among respondents exposed to any treatment compared to control (T1-3 pooled > C).
H2: Behavioural intentions and phishing identification will be higher among each of our treatment groups compared to control (T1-3 > C).
H3: Behavioural intentions and phishing identification will be different between respondents who receive cyber security advice as an infographic vs text (T2 ≠ T1).
H4: Behavioural intentions and phishing identification will be different between respondents who receive cyber security advice as an interactive infographic vs text (T3 ≠ T1).
H5: Behavioural intentions and phishing identification will be different between respondents who receive cyber security advice as an infographic vs an interactive infographic (T3 ≠ T2).

Part 2 Hypotheses:
Messenger conditions
H1a-H1d: The four outcomes will be higher among respondents exposed to any messenger condition (pooled) compared to the attention control condition (one-sided test).
H2a-H2d: The four outcomes will be higher among respondents exposed to each messenger condition compared to the attention control condition (one-sided test).
H3a-H3d: The four outcomes will be different among respondents exposed to the peer messenger condition compared to the expert messenger condition (two-sided test).
Financial consequences condition
H4a-H4d: The four outcomes will be different among respondents exposed to the financial consequences condition compared to the non-financial consequences condition (two-sided test).

How hypothesis was tested

Part 1 Trial Design:
This was an individually randomised survey experiment embedded in a survey collecting information on the cyber security behaviours of small and medium enterprises (SMEs). The survey and experiment were delivered through an online survey platform (Qualtrics). The initial survey took roughly 9 minutes to complete. Randomisation into one of four groups, and subsequent exposure to the intervention, occurred after the main body of survey questions was complete. Individuals then completed a second short survey to gather outcome data. All groups responded to the same set of outcome measures and were invited to participate in the follow-up survey, which was identical for all treatment groups.
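A minimal Python sketch of the individual-level randomisation described above; the arm labels (mapping to the text, infographic and interactive-infographic formats named in the hypotheses) and the seed are illustrative assumptions, not taken from the trial:

```python
import numpy as np

rng = np.random.default_rng(42)  # illustrative seed

def randomise(completed_ids: list[int]) -> dict[int, str]:
    """Assign each respondent who completed the main body of the
    survey to control or one of three advice formats, with equal
    probability."""
    arms = ["C", "T1_text", "T2_infographic", "T3_interactive"]
    return {pid: str(rng.choice(arms)) for pid in completed_ids}

# The 1,186 respondents who completed the main survey body
assignments = randomise(list(range(1186)))
```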

Part 2 Trial Design:
This experiment has a factorial design. Participants saw advice on two cyber security behaviours (two separate experiments). They saw:
1. password security advice; then
2. software update advice.
In each experiment, all participants were randomised into one of six possible cells, defined by the combination of a consequences framing (financial or non-financial) and one of three messenger arms.
Participants were initially randomised at an individual level for allocation to the password security experiment (A1 through A6). All participants were then re-randomised at an individual level for allocation to the software update experiment (B1 through B6). In both cases, the allocation ratio gave an equal number of participants in all six cells. Randomisation into B1 through B6 was blocked on randomisation to A1 through A6.
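A sketch of how this blocked re-randomisation could be implemented in Python. The cell labels follow the A1-A6/B1-B6 notation above; the seed and data-frame layout are illustrative assumptions:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)  # illustrative seed

def equal_allocation(n: int, cells: list[str]) -> np.ndarray:
    """Shuffle a balanced vector of cell labels so each cell receives
    an equal share of the n participants."""
    labels = np.resize(cells, n)  # repeat the labels to length n
    return rng.permutation(labels)

n = 4500
df = pd.DataFrame({"pid": range(n)})

# Password experiment: one of six cells (2 framings x 3 messengers)
df["arm_A"] = equal_allocation(n, [f"A{i}" for i in range(1, 7)])

# Software update experiment: re-randomise within each A cell,
# i.e. blocked on randomisation to A1 through A6
df["arm_B"] = ""
for idx in df.groupby("arm_A").groups.values():
    df.loc[idx, "arm_B"] = equal_allocation(
        len(idx), [f"B{i}" for i in range(1, 7)]
    )
```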

Dependent variables

Part 1 Primary outcome measures:
1. Individuals completed a phishing test, in which respondents were presented with three emails (in randomised order) and decided whether each was genuine or fake. One email was genuine and two were fake. Outcome measurement: average number of correct answers.
2. Individuals were asked about their intentions to update their software and to back up their data. The questions were:
1. 'Thinking about the next seven days, how likely are you to check for software updates, as required, on your business devices?'
2. 'Thinking about the next seven days, how likely are you to initiate regular back-ups of business data?'
For both of these outcomes, the response options were identical: 1. System is already in place; 2. Definitely; 3. Likely; 4. Unlikely; 5. Definitely not. Outcome measurement: A binary variable will be derived in which 'System is already in place' or 'Definitely' = 1, and 'Likely', 'Unlikely' or 'Definitely not' = 0.
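As a minimal illustration, deriving that binary variable might look like the following Python sketch (column and label names are assumptions, not taken from the project's code):

```python
import pandas as pd

# Responses of 'System is already in place' or 'Definitely' count as a
# positive intention; 'Likely', 'Unlikely' and 'Definitely not' do not
POSITIVE = {"System is already in place", "Definitely"}

def to_binary(response: str) -> int:
    """Map a five-option intention response to the pre-registered 0/1 outcome."""
    return int(response in POSITIVE)

responses = pd.Series(
    ["Definitely", "Likely", "System is already in place", "Definitely not"]
)
print(responses.map(to_binary).tolist())  # [1, 0, 1, 0]
```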

Part 2 Primary Outcome Measures
We have four primary outcomes for each experiment:
1. knowledge (at the time of exposure),
2. knowledge (two weeks later),
3. self-reported behavioural intentions (at the time of exposure),
4. self-reported behaviours (two weeks later).

Analyses

Part 1 Analyses from pre-registration:
The principal analysis of the effect of the intervention will be an adjusted comparison of our primary outcomes. These estimates, confidence intervals (CI) and p-values will be derived from a linear regression model with the following specification:
y_i = α + τT_i + βx_i + γx_i T_i + ε_i
Where y is one of our three primary outcomes, α is the intercept, T_i is a vector of indicators for treatment group membership, x_i is a vector of mean centred covariates (see Covariates below), x_i T_i is an interaction between treatment group indicators and the mean-centred covariates (as per Lin (2013)), and ε is an error term which picks up variance not explainable by treatment indicators or covariates.
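A hedged sketch of this specification in Python (statsmodels), fitted on synthetic data. The covariate names are placeholders, and the HC2 robust standard errors are an assumption on my part (Lin (2013) recommends them, but the pre-registration does not specify the variance estimator):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic data for illustration only; 'cov1'/'cov2' stand in for the
# pre-specified covariates, which are not reproduced here
rng = np.random.default_rng(1)
n = 1186
df = pd.DataFrame({
    "arm": rng.choice(["C", "T1", "T2", "T3"], size=n),
    "cov1": rng.normal(size=n),
    "cov2": rng.normal(size=n),
})
df["y"] = rng.normal(size=n)  # placeholder primary outcome

# Mean-centre covariates so the treatment coefficients estimate
# effects at the covariate means, as in the Lin (2013) estimator
for c in ["cov1", "cov2"]:
    df[f"{c}_c"] = df[c] - df[c].mean()

# Treatment dummies (control as reference), centred covariates,
# and their interactions
formula = "y ~ C(arm, Treatment(reference='C')) * (cov1_c + cov2_c)"
fit = smf.ols(formula, data=df).fit(cov_type="HC2")
print(fit.summary())
```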

Part 2 Analyses from pre-registration:
The principal analysis of the effect of the intervention will be an adjusted comparison of each of our primary outcomes. These estimates, confidence intervals (CI) and p-values will be derived from a linear regression model with the following specification:
Y = a + b1 T1 + b2 T2a + b3 T2b + b4 X + b5 X T1 + b6 X T2a + b7 X T2b + e
Where Y is one of our primary outcomes, T1 is a dummy variable for financial consequences, T2a is a dummy variable for the peer messenger, T2b is a dummy variable for the expert messenger, and X is a vector of mean-centred covariates, which are interacted with each of the treatment dummies. For binary outcomes, we will conduct a robustness check by running a logistic regression and then calculating average marginal effects. Exact p-values and confidence intervals will be reported. Our primary analysis will not adjust for multiple comparisons.
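A sketch of this specification and the logistic-regression robustness check in Python (statsmodels), on synthetic data only; the variable names, the messenger coding, and the single covariate 'x_c' are assumptions for illustration:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic illustration: T1 = financial consequences, T2a = peer
# messenger, T2b = expert messenger (attention control is the omitted
# messenger category), x_c = a mean-centred covariate
rng = np.random.default_rng(2)
n = 4500
df = pd.DataFrame({
    "T1": rng.integers(0, 2, n),
    "msgr": rng.choice(["control", "peer", "expert"], n),
    "x": rng.normal(size=n),
})
df["T2a"] = (df["msgr"] == "peer").astype(int)
df["T2b"] = (df["msgr"] == "expert").astype(int)
df["x_c"] = df["x"] - df["x"].mean()
df["y"] = rng.integers(0, 2, n)  # placeholder binary outcome

# Pre-registered specification:
# Y = a + b1 T1 + b2 T2a + b3 T2b + b4 X + b5 X T1 + b6 X T2a + b7 X T2b + e
spec = "y ~ T1 + T2a + T2b + x_c + x_c:T1 + x_c:T2a + x_c:T2b"
ols_fit = smf.ols(spec, data=df).fit()

# Robustness check for binary outcomes: logistic regression, then
# average marginal effects
logit_fit = smf.logit(spec, data=df).fit(disp=0)
print(logit_fit.get_margeff(at="overall").summary())
```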

Sample Size. How many observations will be collected or what will determine sample size?

Part 1 Sample size:
The final survey was distributed via a small business e-newsletter to approximately 2.4 million businesses, which could opt in to complete the survey (without incentive). Of these, 1,553 individuals (0.06 per cent) commenced the survey, and 1,186 completed the main body of the survey and were subsequently randomised into the experiment.

Part 2 Sample size:
Our sample will be 4,500 participants, with the sample size dictated by budget considerations.

Data Exclusion

NA

Treatment of Missing Data

Part 1 Missing Data
We expected there to be missing outcome data due to people leaving the survey prior to completing the outcome measures, as well as due to skipping individual questions (there were no forced responses). We considered this unlikely to be related to treatment status, and found no evidence of differential attrition. Survey respondents who were randomised but did not provide a response for a given outcome were excluded from the analysis for that outcome (but included for other outcomes if they provided a response).

Part 2 Missing Data
We did not have any missing outcome data from the main survey, as responses to the outcome questions were mandatory. If the mandatory questions were not completed, that respondent was excluded from the survey experiment (though their survey responses were kept) and another respondent was recruited. We did have missing data for the follow-up survey: although respondents were compensated for their time, the attrition rate between the main survey and the follow-up survey was around 27 per cent. We do not believe the form of treatment delivered in the main survey could have influenced respondents' subsequent decisions about whether to complete the follow-up survey, and we saw no evidence of imbalance in response rates. Consequently, we undertook complete case analysis (that is, we dropped the records with missing outcomes) and proceeded on the assumption that the dropped records were missing independent of potential outcomes (MIPO).
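For illustration, a complete case analysis of this kind, preceded by the response-rate balance check mentioned above, might look like the following Python sketch (all data here is simulated; 'followup_y' is a hypothetical follow-up outcome column):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
n = 4500
survey_df = pd.DataFrame({
    "arm": rng.choice(["control", "peer", "expert"], n),
    "followup_y": rng.integers(0, 2, n).astype(float),
})
# Simulate roughly 27% attrition, unrelated to treatment
missing = rng.random(n) < 0.27
survey_df.loc[missing, "followup_y"] = np.nan

# Response rates by arm: large gaps would cast doubt on the assumption
# that records are missing independent of potential outcomes (MIPO)
print(survey_df["followup_y"].notna().groupby(survey_df["arm"]).mean())

# Complete case analysis: drop records with a missing follow-up outcome
analysis_df = survey_df.dropna(subset=["followup_y"])
```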

Who is behind the project?

Institution: Department of the Prime Minister and Cabinet (PM&C)
Team: Behavioural Economics Team of the Australian Government (BETA)

Project status:

Completed

Methods

Methodology: Online Experiment, Survey
Could you self-grade the strength of the evidence generated by this study?: 8

What is the project about?

Policy area(s): Cybersecurity
Topic(s):
Behavioural tool(s): Framing, Messenger effect, Salience

Date published:

3 November 2021
