Last week Verizon published its Data Breach Investigations Report (DBIR) 2013. This year's report presents Verizon's analysis of 47,626 security incidents.
As in previous years, the report offers a wealth of information for organisations and individuals seeking to understand the events that can lead to a data breach. I strongly recommend that anyone interested in evidence-based information security risk management download a copy of the report from here.
In this post I’m going to focus on the one component that is pertinent to many New Zealand organisations at this point in time: Human Error.
You would have to have been living under a rock to miss the recent press frenzy about privacy breaches by government agencies. Most of the reported breaches have resulted from emails with attachments being sent to the wrong recipients. The rhetoric has largely been around how badly technology is implemented and managed but is this really where the government needs to be focusing its efforts?
Figure 18 of the report highlights that error was the greatest threat action, identified as a factor in 48% of the incidents analysed. Although this percentage includes other types of incidents beyond the mis-delivery of emails and documents, including "lost devices and publishing goof-ups", page 41 of the DBIR states that "erroneous delivery of e-mails and documents was the leading threat action among the 47,000+ security incidents we studied from 2012".
This is interesting as it underlines the potential scale of the problem. If human error contributes to nearly half of all incidents, and mis-delivery of information is the single most common error, implementing technical controls alone is unlikely to fix the problem.
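To illustrate the limits of a purely technical response to mis-delivery, here is a minimal sketch of an outbound-mail guard that flags recipients outside the organisation's domain before a message is sent. A control like this can prompt the sender to double-check, but it cannot address the formal policies or working habits behind the error. The domain and addresses are hypothetical, not drawn from any real system.

```python
# Sketch of an outbound-mail guard: flag recipients whose address is
# outside the (assumed) organisational domain so the sender can confirm
# before sending. It reduces, but cannot eliminate, mis-delivery.

INTERNAL_DOMAIN = "example.govt.nz"  # hypothetical organisational domain

def external_recipients(recipients):
    """Return the subset of recipients outside the internal domain."""
    return [r for r in recipients
            if not r.lower().endswith("@" + INTERNAL_DOMAIN)]

def safe_to_send(recipients):
    """True only if every recipient is internal (no warning prompt needed)."""
    return not external_recipients(recipients)
```

In practice a sender who routinely clicks through the warning renders the control useless, which is exactly the informal-layer problem discussed below.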
Information systems comprise people, processes and technology, and focusing on one element to the detriment of the others will result in ineffective security. This isn't a new concept: Liebenau and Backhouse published the Informal Formal Technical (IFT) model back in 1990 as a method of breaking information systems down into their three separate but interrelated components.
For example, for a user authentication system to be effective there has to be appropriate controls at all three layers:
• Technical – the system is configured to require users to select a password that meets the requirements defined in the password policy, and users must enter their username and password to access the system.
• Formal – the organisation has an approved password policy that defines how staff must formulate and protect their passwords.
• Informal – staff are provided with security awareness training to influence their password selection and protection behaviour.
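The technical layer above can be sketched in code: the system mechanically enforces whatever rules the formal password policy defines. The specific rules below are illustrative assumptions, not taken from any real policy, and enforcement alone says nothing about the informal layer (whether staff write the password on a sticky note).

```python
# Minimal sketch of the technical layer: enforce rules that a (formal)
# password policy would define. The rule set here is purely illustrative.
import re

MIN_LENGTH = 10  # assumed policy minimum

def meets_policy(password: str) -> bool:
    """Check a candidate password against the illustrative policy rules."""
    return (len(password) >= MIN_LENGTH
            and re.search(r"[a-z]", password) is not None
            and re.search(r"[A-Z]", password) is not None
            and re.search(r"\d", password) is not None)
```

A user who cannot create an account until `meets_policy` returns true experiences the technical control; the policy document and the awareness training supply the other two layers.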
In my opinion the IFT model remains a useful tool today, as it provides a high-level framework for organisations to identify and understand the interaction between the social and technical aspects of their information systems, and therefore their security. Considering all three aspects in the context of the recent privacy breaches is likely to reveal that the formal controls (e.g. the information security policies and operating procedures) and informal controls (e.g. the behavioural norms or actual working practices) are lacking. Deficiencies at the informal and formal layers cannot be addressed through the implementation of technical controls.
In other industries (e.g. medical and aviation) when human error is found to be a significant contributing factor in incidents the focus is typically on modifying employees’ behaviour through the introduction of updated policies and processes or targeted training rather than trying to implement arbitrary technical controls.
There’s a lot that can be learnt from the risk management research in these industries. If you are serious about risk management I strongly recommend that you read James Reason’s Human Error, The Human Contribution and Managing the Risks of Organisational Accidents. Reason’s research provides an excellent insight into the causes of human error.
Reason's 1990 Swiss Cheese Model identifies four interrelated factors that combine to increase accident rates: Unsafe Acts, Preconditions for Unsafe Acts, Unsafe Supervision and Organisational Influences.
The following provides a brief overview of how the Swiss Cheese Model could be used to analyse the factors that led to a hypothetical information security incident:
• Unsafe Act – a user leaves their laptop unattended in a public place and it is stolen leading to sensitive information being accessed by an unauthorised party.
• Preconditions for Unsafe Act – the user assumed that the laptop would be watched by their colleagues but did not confirm that this was the case.
• Unsafe Supervision – the laptop was not encrypted because the user’s immediate management believed that the inconvenience the encryption software caused their staff outweighed the consequences of losing a laptop.
• Organisational Influences – although the organisation had a policy requiring all laptops to be encrypted, it was ignored, and there was no pressure to comply because senior management believed that the cost of implementing full disk encryption outweighed the benefits.
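The scenario above can be given a toy quantification. In the Swiss Cheese Model an incident occurs only when the "holes" in every defensive layer line up, so if we assume the layers fail independently, the incident probability is the product of each layer's failure probability. The numbers below are purely illustrative, not from Reason or the DBIR.

```python
# Toy Swiss Cheese calculation: an incident requires every layer to fail,
# so (assuming independence) multiply the per-layer failure probabilities.
# All probabilities are invented for illustration.
from math import prod

layer_failure_prob = {
    "organisational_influences": 0.5,     # encryption policy not enforced
    "unsafe_supervision": 0.4,            # management waives encryption
    "preconditions_for_unsafe_acts": 0.3, # user assumes colleagues watch laptop
    "unsafe_acts": 0.2,                   # laptop left unattended in public
}

p_incident = prod(layer_failure_prob.values())  # ~0.012
```

Even this crude arithmetic shows why fixing a single layer (say, enforcing encryption) sharply reduces the chance of all the holes aligning.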
Ultimately, organisations can only reduce human error by understanding the interaction between the people, processes and technology that make up their information systems, together with the factors that led to an information security incident. The two models I have discussed here provide a solid foundation for any organisation looking to understand not only what went wrong but also why it went wrong. This is critical when seeking to reduce the likelihood of an incident recurring.