Question
It seems that just as quickly as Artificial Intelligence systems show promise in transforming how we work, live, drive, and even how we are treated by law enforcement, scholars and others question the ethics surrounding these autonomous decision-making systems. The ethics of AI focuses on whether such systems make decisions that discriminate against people on the basis of race, religion, sex, or other criteria.
AI's profound bias problems have become public in recent years, thanks to researchers such as Joy Buolamwini and Timnit Gebru, authors of a 2018 study showing that face-recognition algorithms identified white males nearly every time but recognized black women only about two-thirds of the time. The consequences of that flaw can be serious if the algorithms lead law enforcement to discriminate when identifying suspects, or if doctors rely on them to decide whom to treat.
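For concreteness, here is a minimal sketch in Python of the kind of disaggregated evaluation behind such findings: accuracy is computed per demographic subgroup rather than in aggregate, so a gap like the one Buolamwini and Gebru reported cannot hide behind a good overall average. The function name and the labels are illustrative, not taken from the study itself.

from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    # Count correct predictions separately for each subgroup.
    correct = defaultdict(int)
    total = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        total[group] += 1
        correct[group] += int(truth == pred)
    return {g: correct[g] / total[g] for g in total}

# Hypothetical labels: 1 = face correctly identified, 0 = missed.
y_true = [1, 1, 1, 1, 1, 1]
y_pred = [1, 1, 1, 1, 0, 0]
groups = ["white_male", "white_male", "white_male",
          "black_female", "black_female", "black_female"]
print(accuracy_by_group(y_true, y_pred, groups))
# {'white_male': 1.0, 'black_female': 0.333...} -- the gap is the bias

A model evaluated only on its overall accuracy here would score 67 percent and look mediocre but usable; the per-group breakdown is what exposes the discrimination.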
The challenge for developers is to remove bias from AI, which is complicated because a system's behavior depends on the data used to train it. Training data must be vast, diverse, and representative of the population so that the AI system learns from a strong sample, as the sketch below illustrates.
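One practical way to act on that point is to audit a training set before training. The sketch below (hypothetical group labels, target shares, and tolerance, not any standard library API) compares the demographic composition of a dataset against target population proportions and flags under-represented groups.

from collections import Counter

def representation_gaps(group_labels, population_share, tolerance=0.05):
    # Flag groups whose share of the training data falls short of
    # their share of the population by more than `tolerance`.
    counts = Counter(group_labels)
    n = len(group_labels)
    gaps = {}
    for group, target in population_share.items():
        actual = counts.get(group, 0) / n
        if target - actual > tolerance:
            gaps[group] = (actual, target)
    return gaps

# Hypothetical dataset: 90% group A, 10% group B,
# audited against a population that is 50/50.
labels = ["A"] * 90 + ["B"] * 10
print(representation_gaps(labels, {"A": 0.5, "B": 0.5}))
# {'B': (0.1, 0.5)} -- group B is badly under-sampled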
Use this forum to discuss two examples of situations where bias can skew the data, causing an AI system to discriminate against certain groups of people. How can fairness be built into AI systems? Are the advantages that AI brings to a system worth the bias, if left uncorrected?
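As a starting point for the "how can fairness be built in" question, fairness can be made measurable. The sketch below (hypothetical loan-approval predictions) computes demographic parity, one common fairness criterion: the rate of favorable predictions should be similar across groups. It is only one of several competing fairness definitions, and which one is appropriate depends on context.

def demographic_parity_difference(y_pred, groups, favorable=1):
    # Rate of favorable predictions per group, and the largest gap.
    rates = {}
    for g in set(groups):
        preds = [p for p, grp in zip(y_pred, groups) if grp == g]
        rates[g] = sum(p == favorable for p in preds) / len(preds)
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical predictions: 1 = loan approved, 0 = denied.
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap, rates = demographic_parity_difference(y_pred, groups)
print(rates)  # {'A': 0.75, 'B': 0.25}
print(gap)    # 0.5 -- a large gap suggests the system favors group A

A check like this can be run during development and monitored after deployment, turning "is the system fair?" from an abstract debate into a number that can trigger review when it drifts.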