How Far Should a Company Go to Test Employees’ Resistance to Phishing?

Should companies take the risks necessary to test their employees’ resistance to phishing?

The behavioral aspects of security are just as important as the technical controls.  A security-savvy workforce is a tremendous asset, while employees who make poor choices can undermine even the most robust security mechanisms.  Social engineering techniques target what is considered the weakest link: the user.  Mature organizations invest in traditional technical controls such as firewalls, anti-malware agents, and encryption, as well as in employee security training to improve workers’ behaviors.  This includes educating personnel on phishing and social engineering to make them more resistant to attacks. 

The effectiveness of security controls determines their overall value.  Good metrics facilitate improvement programs, but testing the efficacy of technology is far easier than measuring it on the human side.  So what can an organization do?

Companies have a few options.  Post-training surveys and quizzes are easy but fall short on practical realism and longevity.  Hiring professional penetration testers, which can be expensive, is a great way to test defenses, but they are looking for any way in rather than testing the whole community against a specific type of social compromise.  Organizations can take it upon themselves to send out fake messages internally, using corporate email systems, and see which employees fail to recognize the phishing bait.  But these messages tend to be limited to internal communication systems and to be mostly bland and generic in nature.  To be effective, testing must get personal to the individual and arrive through both professional and non-professional communication avenues.  Home email addresses, social media sites, and texts on personal phones should be part of the test parameters; otherwise the results will miss important avenues of attack. 
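The internal option described above boils down to sending tracked lures and recording who takes the bait.  A minimal sketch of the tracking side, in Python, is below; the sender address, recipients, and landing URL are all hypothetical placeholders, and this illustrates only message construction and click attribution, not a production campaign tool.

```python
# Hypothetical sketch of an internal phishing-test mailer: each simulated
# lure carries a per-recipient HMAC token so a click on the landing page
# can be attributed without embedding the recipient's identity in the URL.
import hashlib
import hmac
from email.message import EmailMessage

SECRET = b"rotate-me-after-the-exercise"          # campaign-scoped secret
TRACK_URL = "https://phish-test.example.com/landing"  # hypothetical endpoint

def tracking_token(campaign_id: str, recipient: str) -> str:
    """Deterministic, non-reversible token tying a click to a recipient."""
    msg = f"{campaign_id}:{recipient}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()[:16]

def build_test_message(campaign_id: str, recipient: str) -> EmailMessage:
    """Assemble one simulated lure (built but never actually sent here)."""
    token = tracking_token(campaign_id, recipient)
    em = EmailMessage()
    em["From"] = "it-support@example.com"  # plausible-looking internal sender
    em["To"] = recipient
    em["Subject"] = "Action required: password expiry"
    em.set_content(
        "Your password expires today. Review your account here:\n"
        f"{TRACK_URL}?t={token}\n"
    )
    return em

msgs = [build_test_message("q3-exercise", r)
        for r in ["alice@example.com", "bob@example.com"]]
for m in msgs:
    print(m["To"], "->", tracking_token("q3-exercise", m["To"]))
```

Because the token is an HMAC rather than the raw address, the click log itself stays pseudonymous; only someone holding the campaign secret can map tokens back to people.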

Testing must be realistic.  Attackers today are quickly refining their techniques, both targeting specific individuals and delivering highly realistic and convincing messages to intended victims.  Modern phishing campaigns weave specific social elements into the communications.  Perhaps a message from a child’s coach, teacher, or babysitter might get someone to click a malicious link.  A digital note from your boss or an executive in the division might be enough to persuade the reader to divulge information.  Perhaps an urgent text from your spouse or parent would be just enough to make you download and open a file.  These are all potential traps which attackers can use to compromise entire networks.

New options are emerging which are both comprehensive and authentic in testing employees’ resistance to social engineering, but they represent a risky path for evaluation and gathering the desired metrics.  A recent Wired article, “Security Tool Tricks Workers Into Spilling Company Secrets,” highlights one such tool, AVA.  AVA works by gathering data both from inside the organization, such as corporate directories, and externally from social media and internet sites.  Based upon how people are connected and communicate, the tool can build a social and hierarchical map.  It then crafts a phishing campaign tailored to individuals and sends out tests across email and networks like Twitter, LinkedIn, and Facebook.  I suspect these social sites probably wouldn’t approve of such activities within their services, based upon their user agreements and usage guidelines.
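AVA’s internals are not public, but the social-mapping idea it describes can be illustrated with a toy sketch: merge reporting lines from a directory with observed external contacts into one graph, then rank people by connectedness, since a well-connected person offers the most believable pretext (a message apparently from someone they actually know).  All names and edges below are made up.

```python
# Illustrative sketch only -- not AVA's actual algorithm.
# Builds a toy social/hierarchical map and ranks employees by
# connectedness as candidate targets for a tailored test.
from collections import defaultdict

reports_to = {            # hypothetical corporate-directory data
    "alice": "carol", "bob": "carol", "carol": "dan",
}
observed_contacts = [     # hypothetical social-media / external edges
    ("alice", "bob"), ("bob", "eve"), ("alice", "eve"),
]

graph = defaultdict(set)
for child, boss in reports_to.items():   # hierarchical edges
    graph[child].add(boss)
    graph[boss].add(child)
for a, b in observed_contacts:           # social edges
    graph[a].add(b)
    graph[b].add(a)

# The most connected people yield the most convincing pretexts.
ranked = sorted(graph, key=lambda p: len(graph[p]), reverse=True)
print(ranked[0], len(graph[ranked[0]]))
```

A real tool would weight edge types (boss vs. acquaintance) and message channels, but the core asset is exactly this kind of merged internal-plus-external graph, which is also what makes the approach so privacy-sensitive.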


The results, however, will give the organization very specific insights into which workers are most susceptible to social engineering attacks and what kinds of manipulation work best.  If conducted properly, accurate results could be a windfall for the security organization, revealing specific weaknesses which can then be closed.  Those who are most vulnerable can be educated in how their social sharing impacts the security of work, family, and friends.  This can lead to better overall training, open lines of communication between security and employees, elevate good personal security choices, and drive a new level of flexibility and effectiveness in risk management for both the company and individuals. 


There are risks.  Many risks.  In order to conduct such a test, the tool must synthesize sensitive corporate information and harvest a tremendous amount of very private information about the employee from external networks.  Many companies would be cautious about giving such operating details to a third party and would likely opt to orchestrate the whole process internally.  There are privacy regulations at the state, national, and international levels to consider.  Most would require some type of notification, some would necessitate opt-in, and in other geographies such testing may be altogether forbidden.  Such an activity may also violate the corporate privacy policy, ethical standards, or expectations set in employment agreements. 

Then there is the “creepiness” factor.  Do you really want your employer to gather and analyze all the information from your social feeds and networks?  Most people embrace a strong delineation between work and home domains.  Such details could foster fears of discrimination, employee contract violations, overstepping of privacy policies, and the unnecessary sharing of personal activities among workers and superiors.

Such invasions of privacy, perceived or real, could spur internal discord, dissatisfaction, protests, and unnecessary drama.  It could drive lower productivity and crater employee satisfaction.  The irony would be if a security program actually contributed to a rise in disgruntled employees, sabotage, and litigation.

Choice: Risk versus Reward

Ultimately it comes down to a choice for each organization: whether it wants to institute aggressive and invasive practices in pursuit of better security.  Every company and government agency is unique and driven by different priorities.  At a minimum, decision makers should make informed choices, understanding both the potential benefits and the risks.

My Top 10 recommendations to those organizations considering more invasive testing:

  1. Move with great caution! 
  2. Openly communicate and publish expectations in employee hiring and privacy policies
  3. Work with employees, human resources and legal departments to get buy-in and ensure compliance with privacy policies, ethics, and regulations
  4. If possible, establish an opt-in/opt-out mechanism and time limitations tied to role changes and training cycles.  Be sure all activities cease when employees leave the company
  5. Make it fun if possible: run a contest or team challenge, provide awards to individuals or groups who do well, and give post-event feedback
  6. Follow good data practices.  Collect only what is necessary, keep data anonymous as much as possible, be sure to secure all stored data and delete all private information as soon as the event concludes
  7. Consider leveraging a trusted third party as an independent proxy to do the data gathering, analysis, and testing.  Verify they are properly protecting data and deleting it afterwards
  8. Results should not expose private data of those tested, only their scores and generic areas of training improvement
  9. Be prepared to justify actions with executives, board members, and the media
  10. Don’t be creepy! 
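The data-practice guidance in item 6 can be made concrete with a small sketch: hold per-person results only under salted pseudonyms, and report upward only in aggregate.  The names, departments, and click results below are invented examples, and the salt would be destroyed along with the data when the event concludes.

```python
# Hedged sketch of recommendation 6: pseudonymous storage, aggregate reporting.
# All employee data here is fabricated for illustration.
import hashlib
from collections import defaultdict

SALT = "per-exercise-salt"  # discard with the data when the event concludes

def pseudonym(employee_id: str) -> str:
    """Salted, truncated hash standing in for the real identity."""
    return hashlib.sha256((SALT + employee_id).encode()).hexdigest()[:12]

raw_results = [
    ("alice", "finance", True),      # True = clicked the simulated lure
    ("bob", "finance", False),
    ("carol", "engineering", True),
]

# Store per-person results only under pseudonyms...
pseudonymous = [(pseudonym(e), dept, clicked) for e, dept, clicked in raw_results]

# ...and roll scores up to the department level for any shared report.
by_dept = defaultdict(lambda: [0, 0])  # dept -> [clicks, total]
for _, dept, clicked in pseudonymous:
    by_dept[dept][0] += int(clicked)
    by_dept[dept][1] += 1

for dept, (clicks, total) in by_dept.items():
    print(dept, f"{clicks}/{total} clicked")
```

Keeping the salt separate from the stored results means deleting one small secret effectively anonymizes everything that remains, which lines up with item 8’s rule of exposing only scores and generic training gaps.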

Security is important, and the strength of employees’ behaviors is critical to a proper security posture, but so are the morale, trust, and support workers have toward their employer.  Don’t let the pursuit of security metrics undermine your strongest advocates for security: your loyal employees.  Move with caution, forethought, and in partnership with the employee community.

Twitter: @Matt_Rosenquist



About Matthew Rosenquist

Matthew Rosenquist is a Cybersecurity Strategist for Intel Corp and benefits from 20+ years in the field of security. He specializes in strategy, measuring value, and developing cost effective capabilities and organizations which deliver optimal levels of security. Matthew helped with the formation of the Intel Security Group, an industry leading organization bringing together security across hardware, firmware, software and services. An outspoken advocate of cybersecurity, he strives to advance the industry and his guidance can be heard at conferences, and found in whitepapers, articles, and blogs.