Question:
An interesting case of friendly competition between humans and machines is the Supreme Court Forecasting Project 2002: http://wusct.wustl.edu/
It compared the accuracy of two different ways of predicting the outcomes of Supreme Court cases in the USA: the informed opinions of 83 legal experts versus a computer algorithm.
They predicted, in advance, the votes of each of the nine individual justices for every case before the Supreme Court in 2002. The same algorithm was used to predict the outcomes of all cases, while each legal expert predicted only the cases within his or her area of expertise.
The computer algorithm seemed very reductionist: it took into account only six simple factors, such as the issue area of the case or whether the petitioner argued that a law or practice was unconstitutional.
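As a purely illustrative sketch, a predictor of this reductionist kind amounts to a handful of if/else rules over coarse case features. The feature names, branches, and thresholds below are hypothetical and are not the project's actual model:

```python
# Hypothetical, illustrative sketch of a reductionist rule-based predictor.
# The features and branching rules are invented for illustration only.

def predict_vote(case: dict) -> str:
    """Predict a single justice's vote ("affirm" or "reverse") from a few
    coarse case features, in the spirit of a small classification tree."""
    if case["lower_court_direction"] == "liberal":
        # Hypothetical branch: reverse liberal lower-court rulings,
        # unless the petitioner argues unconstitutionality.
        if case["petitioner_argues_unconstitutional"]:
            return "affirm"
        return "reverse"
    else:
        if case["issue_area"] == "economic activity":
            return "affirm"
        return "reverse"

example_case = {
    "issue_area": "civil rights",
    "lower_court_direction": "liberal",
    "petitioner_argues_unconstitutional": False,
}
print(predict_vote(example_case))  # -> "reverse"
```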
Both predictions were posted publicly on a website prior to the announcement of each of the Court’s decisions. There was a lot of suspense.
The experts lost the game: the computer correctly predicted 75% of the Supreme Court decisions, while the experts collectively got only 59.1% of their predictions right. Note that all the decisions were binary (affirm/reverse), so the experts did only 9.1% better than what could be achieved by a toss of a coin.
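For concreteness, the quoted figures can be checked directly against the coin-flip baseline (a trivial arithmetic sketch using the percentages above):

```python
# Compare the reported accuracies to random guessing on a binary outcome.
algorithm_accuracy = 0.75    # fraction of decisions the algorithm predicted correctly
experts_accuracy   = 0.591   # collective accuracy of the legal experts
coin_flip_baseline = 0.50    # expected accuracy of a coin toss on affirm/reverse

print(f"Algorithm beat chance by {algorithm_accuracy - coin_flip_baseline:.1%}")  # 25.0%
print(f"Experts beat chance by {experts_accuracy - coin_flip_baseline:.1%}")      # 9.1%
```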
Why can human experts who have access to detailed information about a case turn out to be such bad predictors? What is it about human decision-making that allows a simple computer algorithm to outperform the collective wisdom of people with years of education and experience behind them? If human decisions are biased, can these biases be fixed?
Step by Step Answer: