Question

BEST PRACTICES FOR CYBERSECURITY ETHICS

No single, detailed code of cybersecurity ethics can be fitted to all contexts and practitioners; organizations and professions should therefore be encouraged to develop explicit internal policies, procedures, guidelines, and best practices for cybersecurity ethics that are specifically adapted to their own activities and challenges. However, those specific codes of practice can be well shaped by reflecting on these 14 general norms and guidelines for ethical cybersecurity practice:

I. Keep Cybersecurity Ethics in the Spotlight and Out of the Compliance Box:

As earlier examples have shown, ethics is a pervasive aspect of cybersecurity practice. Because of the immense social power of information technology, ethical issues are virtually always in play when we strive to keep that technology and its functioning secure. Even when our work is highly technical and not directly client-facing, ethical issues are never absent from the context of our work. However, the compliance mindset found in many organizations, especially concerning legal matters, can, when applied to cybersecurity, encourage a dangerous tendency to sideline ethics as an external constraint rather than see it as an integral part of being good at what we do. Law and ethics are not the same, and don't substitute for one another. What is legal can be unethical (quite common), and what is ethical can (even if less commonly) be illegal. If we fall victim to the legalistic compliance mindset when we are thinking about ethics, we are more likely to view our ethical obligations as a box to check off and then forget about, once we feel we have done the minimum needed to comply with them. Unfortunately, this often leads to disastrous consequences, for individuals and organizations alike.
Because cybersecurity is a practice for which ethical considerations are ubiquitous and intrinsic, not intermittent and external, our individual and organizational efforts must strive to keep the ethics of our security work in the spotlight.

II. Consider the Human Lives and Interests Behind the Systems:

Especially in technical contexts, it's easy to lose sight of what most of the systems we work with are: namely, ways of improving human lives and protecting human interests. Much of what falls under the cybersecurity umbrella concerns the most sensitive aspects of human lives: their reputations, opportunities, property, and freedoms; their physical and psychological well-being; and their social connections, likes and dislikes. A decent human would never handle another person's body, money, or mental condition without due care; but it can be easy to forget that this is often what we are doing when we are charged with securing cybersystems.

III. Consider Downstream (and Upstream and Lateral) Risks in Cybersecurity Practice:

As noted above, we often focus too narrowly on whether we have complied with ethical guidelines and forget that ethical issues concerning cybersecurity don't just go away once we have performed our own particular task diligently. Thus it is essential to think about what happens to the sensitive device, software, hardware system, or data even after it leaves our hands. Even if, for example, I have done extensive security testing and auditing of a product before its release, there are always new threats, new vulnerabilities that can emerge, and new applications of the product that might create new security challenges. I should therefore always have a view of the security risks downstream from my practice, and maintain effective lines of communication with those persons in a position to keep the system or product secure at those stages.
Communication with those upstream and lateral to my security practice is also essential; if the reason I struggle to keep a system secure is that poor design and configuration choices upstream are tying my hands, or because someone in another department is continually ignoring or overriding the security practices I've instituted, then I need to be prepared to address that. If I am not paying attention to the downstream, upstream, and lateral risks, then I have not fully appreciated the ethical stakes of my own current security practice.

IV. Don't Discount Non-Technical Actors, Interests, Expectations, and Exposures:

Most cybersecurity professionals are highly skilled in specific areas of technical practice and accustomed to interacting with others with similar levels of technical expertise. Even their adversaries are often hackers with comparable technical skillsets and interests. This can lead to a dangerously insular mindset when it comes to considering the interests and risks to which non-technical actors are exposed (and which cybersecurity professionals are often duty-bound to protect). For example, cybersecurity professionals know that no system is 100% secure from intrusion or attack, and that installing antivirus software is just one (modest and limited) tool for reducing security risk, not a magic security genie that makes any additional security practices unnecessary. But an ordinary, untutored user may well operate with inflated expectations and unrealistic beliefs about cybersecurity. Likewise, a cybersecurity professional is unlikely to fall for an unsophisticated phishing attempt, or to use an easy-to-guess password, or to insert a USB flash storage device received from a random stranger into their networked laptop. But many others will, and it can be tempting to adopt an ethically callous attitude toward people whose exposure to security risks results from technical incompetence or naïveté. This attitude is important to resist, for two reasons.
First, because it leads to missed opportunities to implement basic risk prevention and mitigation strategies, increasing the overall risk to the network/organization and to third parties. Second, because being technically naïve is not, in fact, something that makes a person any more deserving of harm or injury, or any less deserving of security. Maintaining appropriate empathy for non-technical actors and their interests will ultimately make you a better cybersecurity professional, not just in terms of your moral character, but also in terms of being more effective in cybersecurity work.

V. Establish Chains of Ethical Responsibility and Accountability:

In organizational settings, the "problem of many hands" is a constant challenge to responsible practice and accountability. To avoid a diffusion of responsibility in which no one on a team may feel empowered or obligated to take the steps necessary to ensure effective and ethical cybersecurity practice, it is important that clear chains of responsibility are established and made explicit to everyone involved in the work, at the earliest possible stages of a project. It should be clear who is responsible for each aspect of security risk management and prevention of harm. It should also be clear who is ultimately accountable for ensuring an ethically executed security project or practice. Who will be expected to provide answers, explanations, and remedies if there is a failure of ethics or a significant breach allowed by the team's work? The essential function of chains of responsibility and accountability is to assure that individuals take explicit ownership of cybersecurity work and its ethical significance.

VI. Practice Cybersecurity Disaster Planning and Crisis Response:

Most people don't want to anticipate failure, disaster, or crisis; they want to focus on the positive potential of a project or system.
While this is understandable, the dangers of this attitude are well known, and it has often caused failure, disaster, or crisis that could easily have been avoided. This attitude also often prevents effective crisis response, since there is no planning for a worst-case scenario. This is why engineering fields whose designs can impact public safety have long had a culture of encouraging thinking about failure. Understanding how a product or system will function in non-ideal conditions, at the boundaries of intended use, or even outside those boundaries, is essential to building in appropriate margins of safety and developing a plan for unwelcome scenarios. Thinking about failure makes engineers work better, not worse. Cybersecurity practitioners must promote the same cultural habit in their work. Known vulnerabilities and past incidents/breaches should be carefully analyzed and discussed ("post-mortems") and the results projected into the future. "Pre-mortems" (imagining together how a currently secure system could be breached, so that we can act to prevent that outcome) can be a great cybersecurity practice. It's also essential to develop crisis plans that go beyond deflecting blame or denying harm (often the first mistake of a PR team after a breach or security flaw is exposed). Crisis plans should be intelligent, responsive to public input, and most of all, able to effectively mitigate or remedy harm being done. This is much easier to plan before a crisis has actually happened.

VII. Promote Values of Transparency, Autonomy, and Trustworthiness:

The most important thing for preserving a healthy relationship between cybersecurity practitioners and the public is to understand the importance of transparency, autonomy, and trustworthiness in that relationship. Hiding a severe security risk to others behind legal, technical, or PR jargon, disempowering users' efforts to promote their own security, and betraying public trust are almost never good strategies in the long run.
Clear and understandable security attestations, policies, and recommendations, when accurate and reliable, help to promote the values of transparency, autonomy, and trustworthiness. Notifications of security flaws, patches, and breaches should be shaped by these same values, to the maximum extent compatible with the value of security itself. Thus delaying or burying a vulnerability or breach notification in order to spare oneself, or one's team, from professional or public scorn is not typically an ethical choice. It undermines transparency, and by doing so prevents affected stakeholders from making their own informed, self-guided choices to manage their risk. It also violates the public's trust that their security is deeply valued. However, if a premature notification would expose others to unreasonable risk, a delay may often be justified by careful reasoning from the facts, assuming that reasoning is itself reliable, and not an example of "motivated reasoning" (believing something only because it benefits me to do so, or because I strongly wish it were true).

VIII. Consider Disparate Interests, Resources, and Impacts:

It is important to understand the profound risk in cybersecurity practices of producing or magnifying disparate impacts; that is, of making some people better off and others worse off, whether in terms of their social share of economic well-being, political power, health, justice, or other important goods. Not all disparate impacts are unjustifiable or wrong. For example, while a device that uses strong end-to-end encryption may make it easier for criminals to avoid government scrutiny of their communications, it may also have a disparate impact on authoritarian governments' ability to track and neutralize their political opposition. Here, the ethical balance of the disparate impacts is quite complex (as seen in 2016's case of Apple v. the FBI).
But imagine another device that offers cutting-edge security tools and features only to those buying the most expensive model, and outdated/weak security features in all other models. Can the disparate impact of this choice be justified, insofar as it may expose millions who are not at the top of the socioeconomic ladder to real deprivations of property, privacy, and reputation, while only protecting the already secure and privileged? Or consider a visual recognition security challenge that is inaccessible to the blind. What about a facial recognition security lock that doesn't properly function with darker skin tones? This is why there must be a presumption in cybersecurity practice of ethical risk from disparate impacts; they should be anticipated, actively audited for, and carefully examined for their ethical acceptability. Likewise, we must investigate the extent to which different populations affected by our practice have different interests, training, and resources that give them a differential ability to benefit from our product or project.

IX. Invite Diverse Stakeholder Input:

One way to avoid groupthink in ethical risk assessment and design is to invite input from diverse stakeholders outside of the team and organization. It is important that stakeholder input not simply reflect the same perspectives one already has within the organization or group. Many cybersecurity practitioners have unusually high levels of educational achievement and economic status, and in many technical fields, there may be skewed representation of the population in terms of gender, ethnicity, age, and other characteristics. Also, the nature of the work may attract people who have common interests and values; for example, a shared optimism about the potential of science and technology, and comparatively less faith in other social mechanisms. All of these factors can lead to organizational monocultures, which magnify the dangers of groupthink, blind spots, insularity of interests, and poor design.
For example, many of the best practices above can't be carried out successfully if members of a team struggle to imagine how a cybersecurity practice would be perceived by, or how it might affect, people unlike themselves. Actively recognizing the limitations of a team's perspective is essential. Fostering more diverse cybersecurity organizations and teams is one obvious way to mitigate those limitations, but soliciting external input from a more truly representative body of those likely to be impacted by our practice is another.

X. Design for Privacy and Security:

This might seem like an obvious one, but nevertheless its importance can't be overemphasized. "Design" here means not only technical design (of networks, databases, devices, platforms, websites, tools, or apps), but also social and organizational design (of groups, policies, procedures, incentives, resource allocations, and techniques) that promotes privacy and security objectives. How this is best done in each context will vary, but the essential thing is that, along with other project goals, the values of privacy and security remain at the forefront of project design, planning, execution, and oversight, and are never treated as marginal, external, or after-the-fact concerns.

XI. Make Ethical Reflection & Practice Standard, Pervasive, Iterative, and Rewarding:

Ethical reflection and practice, as we have already said, is an essential and central part of professional excellence in cybersecurity. Yet it is still in the process of being fully integrated into the profession. The work of making ethical reflection and practice standard and pervasive, that is, accepted as a necessary, constant, and central component of every cybersecurity context, must continue to be carried out through active measures taken by individual practitioners and organizations alike. Ethical reflection and practice in cybersecurity must also, to be effective, be instituted in iterative ways.
That is, because the nature and extent of threats to cybersecurity are continually evolving, we must treat cybersecurity ethics as an active and unending learning cycle in which we continually observe the ethical outcomes of our security practice, learn from our mistakes, gather more information, acquire further ethical and technical expertise, and then update and improve our security practice accordingly. Most of all, ethical practice in cybersecurity environments must be made rewarding: team, project, and institutional/company incentives must be well aligned with the ethical best practices described above, so that those practices are reinforced and so that cybersecurity practitioners are empowered and given the necessary resources to carry them out.

XII. Model and Advocate for Ethical Cybersecurity Practice:

One way to be guided well in practical ethical contexts is to find and pay attention to excellent models of that practice. Eventually, becoming excellent oneself not only allows you to guide others, it also allows you to collaborate with other excellent persons and professionals, to improve the standards by which we all live. Aspiring cybersecurity professionals can benefit from seeking, identifying, and developing strong mentoring relationships with excellent models of cybersecurity practice: models who not only possess technical excellence, but who are also exemplars of ethically superior cybersecurity leadership. A diverse range of models to learn from is best, as even experts have their weaknesses and blind spots. But those who develop practical wisdom in cybersecurity practice by learning from the best mentors can in turn become excellent mentors to others, raising the overall excellence and nobility of the field.
Jointly, they can also work to advocate for more technically and ethically superior cybersecurity norms, standards, and practices in the field, raising the bar for everyone, and ensuring that cybersecurity professionals secure the promise of the information society for us all.

CASE STUDY 2

Anthony and Sarah are a cybersecurity team hired by a young but growing mobile device manufacturer to beef up their infosec operations. The company had two embarrassing security breaches in recent months and is determined to take a more aggressive security approach moving forward; a friend of the company's founders recommended Anthony and Sarah as fitting the bill. Anthony and Sarah favor an especially aggressive and offense-driven style of cybersecurity practice. Their techniques include:

Forced Inoculation in the Wild: If Anthony and Sarah discover a worm that is beginning to spread quickly on the manufacturer's devices, they will design and release into the wild (on the Internet) a worm of their own design that remotely and autonomously patches host systems against the malware. Users are not made aware that they are downloading or installing the "patch worm"; it is spread and activated through the same surreptitious techniques as malware.

Automated Disabling: They create a security tool that, if it detects an infected host computer on the network, immediately launches a disabling attack on that computer, breaking its link to the network so that it cannot infect more systems.
The infected host computer will, without warning, lose its network connection, and all networked programs running will be interrupted until the security administrator can come to disinfect, patch, and reboot the host computer.

Hacking Back and Honeypots: Anthony and Sarah use a variety of techniques to attack computers that appear to be carrying out hostile actions against their network: installing spyware on attacking systems in an effort to identify the perpetrator, installing disabling malware on the attacking system, deleting stolen data, and creating honeypots on their own network that appear to be vulnerable troves of sensitive data but really allow them to lure and infect attackers' systems. They are aware that these techniques are in many contexts illegal and pose the risk of collateral damage to innocent third parties whose systems have been commandeered or spoofed without their knowledge, but they see their vigilante approach as justified, at least in some cases, by the lack of effective law enforcement remedies for ransomware and other cyberattacks on the company.

Anthony and Sarah know that their methods are regarded as ethically questionable by a significant portion of the security community, and so they do not disclose details of their methods, either to their employer (who takes a "the less I know the better" approach) or to users of the company network or the public whose systems may be impacted by their methods. Their motto is "the ends justify the means," and if they can discourage future attacks on the company, they regard their job as well done.

Question 2.2: Of the first set of 12 ethical best practices in cybersecurity listed in Section 5, which ones do Anthony and Sarah seem not to reliably practice? Explain your answer.

Step by Step Solution

There are 3 Steps involved in it

Step: 1

Anthony and Sarah, the cybersecurity team described in the case study, engage in aggressive and controversial cybersecurity practices that raise ethical ...


