Question
Fred and Tamara, a married couple in their 30s, are applying for a business loan to help them realize their long-held dream of owning and operating their own restaurant. Fred is a highly promising graduate of a prestigious culinary school, and Tamara is an accomplished accountant. They share a strong entrepreneurial desire to be 'their own bosses' and to bring something new and wonderful to their local culinary scene. Outside consultants have reviewed their business plan and assured them that they have a very promising and creative restaurant concept and the skills needed to implement it successfully; the consultants tell them they should have no problem getting a loan to get the business off the ground.

For evaluating loan applications, Fred and Tamara's local bank loan officer relies on an off-the-shelf software package that synthesizes a wide range of data profiles purchased from hundreds of private data brokers. As a result, it has access to information about Fred and Tamara's lives that goes well beyond what they were asked to disclose on their loan application. Some of this information is clearly relevant to the application, such as their on-time bill payment history. But a lot of the data used by the system's algorithms is of a sort that no human loan officer would normally think to look at, or have access to: inferences from their drugstore purchases about their likely medical histories, information from online genetic registries about health risk factors in their extended families, data about the books they read and the movies they watch, and inferences about their racial background. Much of the information is accurate, but some of it is not.

A few days after they apply, Fred and Tamara get a call from the loan officer saying their loan was not approved. When they ask why, they are told simply that the loan system rated them as 'moderate-to-high risk.'
When they ask for more information, the loan officer says he doesn't have any, and that the software company that built the loan system will not reveal any specifics about the proprietary algorithm or the data sources it draws from, or even whether that data was validated. In fact, they are told, not even the system's designers know what data led it to reach any particular result; all they can say is that, statistically speaking, the system is 'generally' reliable. Fred and Tamara ask if they can appeal the decision, but they are told there is no means of appeal: the system would simply process their application again using the same algorithm and data, and would reach the same result.
Questions:
1) What ethically significant harms, as defined in Part One, might Fred and Tamara have suffered as a result of their loan denial? (Make your answers as full as possible; identify as many kinds of possible harm done to their significant life interests as you can think of).
2) What sort of ethically significant benefits, as defined in Part One, could come from banks using a big-data driven system to evaluate loan applications?
3) Beyond the impacts on Fred and Tamara's lives, what broader harms to society could result from the widespread use of this particular loan evaluation process?
4) Could the harms you listed in questions 1 and 3 have been anticipated by the loan officer, the bank's managers, and/or the software system's designers and marketers? Should they have been anticipated, and why or why not?
5) What measures could the loan officer, the bank's managers, or the employees of the software company have taken to lessen or prevent those harms?