Most organisations understand that their people are a key vector for cyberattack. Attackers use ‘social engineering’ (a blanket term that includes phishing) to trick users into surrendering credentials, enabling malware, or granting access to systems, including bank accounts. In addition to phishing, attackers are increasingly resorting to cold voice calls (‘vishing’) and even physical approaches, although the latter remain rare.


In response, organisations have increasingly adopted phishing simulations to test users’ resilience to such techniques; even Microsoft 365 enterprise licences include out-of-the-box tools for simulating social engineering attacks. While the intent of such testing is sound, there are important ethical and legal considerations organisations need to address.

Social engineering testing involves attempting to deceive users with a (hopefully) well-crafted phishing email. Even though many users are bombarded with phishing and spam daily, it is a significant step for an organisation to craft such messages and direct them at its own staff. A small percentage of users can react adversely to phishing tests, feeling humiliated or that their trust has been breached, especially if they fall victim to the approach. With few exceptions, users who fall victim should not be punished for doing so; aside from fostering workplace mistrust and a reluctance to report genuine phishing attacks, punishment could invite legal challenges.

Before embarking on a simulated phishing, vishing or physical penetration test, organisations should give users plenty of notice. While you may not want to undermine the legitimacy of a phishing test by broadcasting it immediately beforehand, there should still be a sustained awareness campaign in the background that not only advises that testing may occur, but also emphasises resilience and response to real social engineering attacks. This is important: organisations that uncover thematic weaknesses in their cyber defences, such as particular users repeatedly falling for phishing simulations, should aim to mitigate those weaknesses by, for example, deploying follow-up training or stricter technical controls. Failing to do so could prompt uncomfortable questions from a regulator or plaintiff’s lawyer if an identified weakness later contributes to a significant incident.

Organisations should also think carefully about the contents of simulated phishing collateral. Ask yourself: could using third-party branding or marks carry legal, reputational or intellectual property risks? If you are contemplating capturing personal or other sensitive information as part of the simulation, is that appropriate, do you have the necessary consents or legal basis to do so, and what will you do with the information once you receive it? Is any of the proposed language likely to offend, or potentially defamatory?

Ideally, organisations should include the prospect of phishing tests in their IT user agreements, employment contracts or similar formal documents, in addition to regular staff communications and training sessions. Care should also be taken with third-party contractors who may appear in distribution lists used for phishing test campaigns.

Social engineering awareness and resilience is a key part of an organisation’s overall cyber defence, but testing needs to be done carefully, with transparency, an understanding of the risks and, above all, respect for users.

Thanks to our friends at Clear Cut Law for contributing to this article: catch them at hello@clearcutlaw.com
