EXERCISE 1: RUSH JOBS AND RED LIES: CASCADING ETHICAL FAILURES

In their article "Professional and Ethical Dilemmas in Software Engineering," Brian Berenbach and Manfred Broy discuss various ways in which software engineers can find themselves pressured to compromise professional and ethical standards. These include cases of Mission Impossible (being asked to create or accept a product schedule that is clearly impossible to meet), Mea Culpa (delivering products without key functionality or with known defects), Rush Jobs (delivering products of subpar quality to meet schedule pressures), Red Lies (telling clients or management known falsehoods about product schedule or performance), Fictionware (promising features that are infeasible), and Nondiligence (inadequate review of requests for proposals, contracts, or specifications). [15]

Consider the following scenario, adapted from several real-life cases:

A mid-level sales manager at LifeDesign, a medical systems engineering firm, receives a request for proposal (RFP) to develop a radiation-delivery system for use in outpatient hospital settings. The RFP specifies that the product must have a guaranteed failsafe mechanism to prevent radiation overdoses due to user input error. The sales manager reviews the RFP with his development team, who tell him that (a) the system cannot be built to specifications within the desired timeframe, and (b) there is no way in this kind of system to build in a guaranteed failsafe against operator error: proper radiation dosages vary so much by patient that the system cannot enforce safe limits by itself; it must rely on accurate user input of the dosage instructions on the prescription. "The best we can offer," the team tells him, "is a warning system that will loudly prompt the user to verify and double-verify the correct input." But, the team adds, it is well known that such warnings are often mindlessly canceled by users who find them annoying, so this mechanism is hardly guaranteed to prevent operator error. (A brief illustrative sketch of this limitation follows the scenario.)

The sales manager goes back to the client with a proposal that promises delivery of all functionality within the specified time frame. Once the proposal has been accepted and contracted with the client, the engineering team begins to design the system. But it is three weeks into the work before anyone notices the delivery date. The design team leader immediately appeals to the sales manager, reminding him that he was told this was an impossible date to meet. He tells her that this is unfortunate, but that they are under contract now and there will be severe financial penalties for a late delivery. He reminds her that upper management won't be happy with any of them in that case, and tells her that her team had better find a way to get it done. The team leader goes back to her group, frustrated but resigned, and tells them to get it done somehow. No one has even noticed yet that the contract also includes language about the guaranteed failsafe against bad user input.

Halfway through the delivery schedule, the sales manager and design team leader sit down with the client for an update. Despite being well behind schedule, they put on a good show for the client, who leaves confident that all is as it should be and that the product will deliver full functionality on time. Weeks before the due date, the team leader is telling her team that their jobs depend on an on-time delivery.
Her engineering team has resorted to desperate measures, especially the group responsible for the system software, who are taking shortcuts, hiding system errors, doing sloppy coding, and subcontracting work out to third parties without proper review of their qualifications. When the product is finally assembled, there is no time to test the full system in the actual client/user environment, so LifeDesign's quality-assurance team relies on computer simulations to test its operation. Because of the rushed schedule and a lack of information about the user context/environment, these simulations make many flawed assumptions; none of them assume erroneous dosage input by users. The product is delivered on time and represented as having full functionality. As a result, the client tells the therapists using the new system that it has an advanced failsafe mechanism that will prevent radiation overdoses from user error.

Two years into its use, dozens of patients receiving radiation from the system, including young children and others with curable diseases, have been poisoned by radiation overdoses. Those who are not already ill or dying from overdose-related effects must now live with the knowledge that they have a severely elevated chance of developing fatal bladder or kidney cancers from the overdoses. An external review of the system by federal regulators and litigators reveals that not only was there no guaranteed failsafe mechanism, but the warning system that did exist was so buggy that it often would not trigger at all, or would behave so erratically that most users dismissed it whenever it did. Lawsuits are piling up, the media is on the story, and everyone is pointing fingers: at the radiation therapists, at the client, and especially at LifeDesign.
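The development team's objection, that software of this kind can only warn rather than enforce, can be made concrete with a minimal sketch. The code below is purely illustrative and is not drawn from the case; the function names, the 5% tolerance, and the hardware ceiling are assumptions. It shows why a confirmation prompt has no independent ground truth to check against: the prescription and the dose are both operator input, so the "failsafe" ultimately reduces to asking the same person to type YES.

```python
# A purely illustrative sketch (not from the case): why a warning prompt is a
# weak failsafe against operator error. All names and limits are assumptions.

MACHINE_MAX_CGY = 1000.0  # hypothetical hardware ceiling; safe doses vary by patient


def read_float(prompt: str) -> float:
    """Read a number from the operator, re-prompting on bad input."""
    while True:
        try:
            return float(input(prompt))
        except ValueError:
            print("Not a number, try again.")


def enter_dose() -> float:
    # The prescription itself is operator-entered, so the software has no
    # independent source of truth for what the "correct" dose is.
    prescribed = read_float("Prescribed dose from chart (cGy): ")
    dose = read_float("Dose to deliver (cGy): ")

    # The only hard check available is a crude physical limit.
    if not 0 < dose <= MACHINE_MAX_CGY:
        raise ValueError("Dose outside machine limits")

    # Everything else is a warning the operator can click through.
    if abs(dose - prescribed) > 0.05 * prescribed:
        answer = input("WARNING: dose differs from entered prescription. "
                       "Type YES to proceed anyway: ")
        if answer.strip().upper() != "YES":
            return enter_dose()
        # A rushed or habituated operator types YES, and the overdose proceeds.
    return dose
```

Note that in the scenario the simulations never exercised the branch in which the operator's input is wrong, which is exactly the branch the contract's failsafe language was about.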
Question 1:1: Which of the various pressures identified by Berenbach and Broy at the start of this section contributed to the outcome of this case?
Question 1:2: Who were the various stakeholders whose interests were ethically significant here? What were the interests of each stakeholder? Whose interests should have taken precedence in the minds of the relevant actors, and why?
Question 1:3: Who at LifeDesign had the power to prevent this outcome? The sales manager? The design team manager? The manager of the software group? The individual members of that group? The quality-assurance team? The CEO? Go through the list and discuss what each person, at each level, could have done to effectively prevent this outcome.
Question 1:4: Put yourselves in the shoes of the software engineers employed on this project. Discuss together how the outcome of this case would affect you, personally and professionally. How would you feel about your friends, family, neighbors and mentors learning that you were involved in the scandal, and that your work was implicated in the suffering and deaths of many innocent people? What would you say to them to explain yourself? Would any explanation be adequate?
EXERCISE 2: PRIVACY BY DESIGN (OR NOT): GOOGLE STREET VIEW

Along with intellectual property/copyright concerns, privacy is one of the most commonly discussed issues of ethical and legal concern with software. There are many definitions of what constitutes privacy: they include control over one's personal information; the right to be forgotten, to be left alone, or to have a measure of obscurity; the integrity of the context in which your personal information is used; and the ability to form your own identity on your own terms. Each of these, along with many other potential definitions, captures something important about privacy. There is also increasing debate about the extent to which new technologies are changing our expectations of privacy, or even how much we value it. Regardless, privacy is in many contexts a legally protected right and, in all contexts, among those interests that stakeholders may legitimately expect to be acknowledged and respected. The pressures that Web 2.0, Big Data, cloud computing, and other technological advances are putting on privacy will continue to make headlines for the foreseeable future, and software engineers will continue to struggle to balance the legitimate desire for expanding software functionality with the ethical requirements of privacy protection. This is complicated by the spread of dual-use technologies with open-ended and adaptable functionalities, and of technologies that offer a scaffold upon which third-party apps can be built. Among the most famous cases to reveal these challenges is Google Street View:

In 2007, Google launched its Street View feature, which displays searchable aerial and street-view photographs of neighborhoods, city blocks, stores, and even individual residences. From the very beginning, privacy concerns with Google's technology were evident; it did not take long for people to realize that the feature displayed photographs of unwitting customers leaving adult bookstores and patients leaving abortion clinics, children playing naked, adults sunning themselves topless in their backyards, and employees playing hooky from work. Moreover, it was recognized that the display of these photos was being used by burglars and other criminals to identify ideal targets. Although Google did initially think to remove photos of some sensitive locations, such as domestic violence shelters, it was at first very difficult for users to request removal of photos that compromised their privacy. After an initial outpouring of complaints and media stories on its privacy problems, Google streamlined the user process for requesting image removal. This still presupposed, however, that users were aware of the breach of their privacy.

In 2010, Street View became the center of a new privacy scandal: it was discovered that software used in Google vehicles doing drive-by photography had been collecting personal data from unencrypted Wi-Fi networks, including SSIDs, device identifiers, medical and financial records, passwords, and email content. Initially Google claimed that this data had not been collected; later it said that only fragments of such data had been retained; eventually it conceded that complete sets of data had not only been collected but stored. At one point Google blamed the breaches on a single rogue engineer, though it was later learned that he had communicated with his superiors about the Wi-Fi data collection.
As of 2012, 12 countries had launched formal investigations of Google for breaches of privacy or wiretap laws, and at least 9 had determined that their laws were violated. [16] In the U.S., Google was fined $25,000 for obstructing the investigation into the matter. More recently, Google settled a lawsuit brought by 38 states over the breaches for $7 million (a tiny fraction of its profits). As part of the settlement, Google acknowledged its culpability in the privacy breaches and promised to set up an annual privacy week for its employees, along with other forms of privacy training and education.
Question 2:1: What forms of harm did members of the public suffer as a result of Google Street View images? What forms of harm could they have suffered as a result of Google's data-collection efforts?
Question 2:2: What institutional and professional choices might have been responsible for Google Street View's violations of public privacy?
Question 2:3: What ethical strategies from your earlier reading in this unit could Google engineers and managers have used to produce a more optimal balance of functionality and ethical design? Could one or more Google superprofessionals have prevented the privacy breaches, and if so, how?
Question 2:4: What can Google do now to prevent similar privacy issues with its products in the future?
EXERCISE 3: ANTI-PATTERNS, PROFESSIONALISM AND ETHICS

Anti-patterns are engineering or business habits, techniques, and solutions that are generally considered substandard, likely to backfire or generate more problems, unreliable, or otherwise indicative of poor professional conduct. The term, then, represents the opposite of what are known as best practices. Hundreds of anti-patterns have been named. Some of them are exclusive to software engineering, while others are general to business and other social institutions. Here are just some of the common anti-patterns that can affect software engineers (a short code sketch illustrating two of them follows the list):

Analysis Paralysis (a risk-averse pattern of unending analysis/discussion that never moves forward to the actual decision phase)
Blind Coding/Blind Faith (implementing a bug fix or subroutine without ever actually testing it)
Boat Anchor (a totally useless piece of software or hardware that you nevertheless keep in your design, often due to its initial cost)
Bystander Apathy (everyone can see impending disaster, but no one is motivated to do anything about it)
Cut and Paste Programming (largely self-explanatory; reusing/cloning code)
Death March (a project everyone knows is doomed to fail but is ordered to keep working on anyway)
Design by Committee (no intelligent vision guiding and unifying the project)
Error Hiding (overriding the display of error messages with exception handling, so neither the user nor tech support can actually see them)
Escalation of Commitment (adopting a misguided goal or poor strategy, then completely refusing to rethink it despite clear evidence of its failure)
Improbability Factor (refusing to expend resources on a fix on the assumption that the problem is unlikely to actually occur in use)
Input Kludge (failure of a software program to anticipate and handle invalid or incorrect user input)
Gold Plating (gilding the lily; continuing to develop a product or adding features that offer insufficient value to justify the effort)
Golden Hammer (using the same favored solution at every opportunity, whether or not it is the appropriate tool for the job)
Lava Flow (old code, often undocumented and with its function poorly understood, that is therefore left in)
Moral Hazard (insulating decision makers from the risks/negative consequences of their decisions)
Mushroom Management (keeping non-management employees in the dark and information-starved)
Software Bloat (successive iterations of software using more and more memory/power/other resources with little or no added functionality)
Spaghetti Code/Big Ball of Mud (unstructured, messy code not easily modified or extended)
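To make two of these patterns concrete, here is a minimal, hypothetical sketch; the database object, function names, and age limits are invented for illustration and do not come from the exercises above. The first pair of functions shows Error Hiding and Input Kludge as they typically appear; the second pair shows the same operations with the errors surfaced and the input validated.

```python
# Hypothetical sketch of two anti-patterns; all names are invented for illustration.

import logging

logger = logging.getLogger(__name__)


def save_record_error_hiding(db, record):
    """Error Hiding: the handler swallows the failure, so neither the user
    nor tech support ever learns the record was not saved."""
    try:
        db.insert(record)  # db is a stand-in for any storage API with insert()
    except Exception:
        pass  # anti-pattern: the failure silently disappears


def parse_age_input_kludge(raw: str) -> int:
    """Input Kludge: assumes the user always types a sensible number."""
    return int(raw)  # crashes on "twelve", accepts -5 or 500 without question


# What handling the same situations responsibly might look like:

def save_record(db, record):
    try:
        db.insert(record)
    except Exception:
        logger.exception("Failed to save record %r", record)
        raise  # surface the failure instead of hiding it


def parse_age(raw: str) -> int:
    try:
        age = int(raw.strip())
    except ValueError:
        raise ValueError(f"Age must be a whole number, got {raw!r}")
    if not 0 <= age <= 130:
        raise ValueError(f"Age out of plausible range: {age}")
    return age
```

The contrast is worth keeping in mind for Question 3:1 below: the silently swallowed exception is a professional shortcut, but it is also what leaves users and support staff unable to detect or respond to a failure.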
Question 3:1: What makes the above anti-patterns not just potential sources of poor professional practice, but potential sources of unethical conduct as well? In general, how do these anti-patterns relate to the NSPE and ACM/IEEE-CS codes, and how do they show the connection between professional and ethical conduct?
Question 3:2: Which of these anti-patterns are easiest to see as causes of unethical engineering conduct? Which are more difficult to connect to lapses of ethics? Are there any that you think aren't connected to ethics at all? Discuss this among your group.
Question 3:3: Which of these anti-patterns would you think are among the most difficult to avoid in software engineering practice, and why?
Question 3:4: Construct your own fictional case study using these anti-patterns. Create an engineering scenario in which at least three of these anti-patterns interact and lead to an outcome that is profoundly unethical. Describe the harm that results, and then identify at least two interventions or measures that members of the organization could and should have taken to prevent that outcome.