Decision Weights in Prospect Theory

Overview

It's clear that humans are irrational, but how irrational are they? After some research into behavioural economics, I became very interested in Prospect Theory (see Chapter 29 of Thinking, Fast and Slow). A very interesting part of Prospect Theory is that it is not the true probabilities that are used in the calculation of expected value:

$$EV = \sum_i q_i z_i$$

Here, the $q_i$'s are not the probabilities of the outcomes $z_i$; they come from another probability measure, called decision weights, that humans actually use to weigh outcomes. Using a change of measure, we can express the relationship between the actual probabilities $p_i$ and the decision weights:

$$EV = \sum_i q_i z_i = \sum_i \frac{q_i}{p_i}\, p_i z_i$$

My interest is in this change of measure.
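
The post doesn't commit to any particular weighting function, but to make the change of measure concrete, here is a minimal sketch in Python. The one-parameter weighting function from Tversky and Kahneman (1992) is used purely as a stand-in; it is an assumption of the sketch, not a claim about what people actually do.

```python
# A sketch of the change of measure q/p, using the Tversky-Kahneman (1992)
# one-parameter weighting function as an assumed stand-in for decision weights.

def decision_weight(p, gamma=0.61):
    """T-K probability weighting: overweights small p, underweights large p.
    gamma = 0.61 is their published estimate for gains."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

for p in (0.01, 0.05, 0.25, 0.50, 0.75, 0.95, 0.99):
    w = decision_weight(p)
    print(f"p = {p:.2f}   q = w(p) = {w:.3f}   q/p = {w / p:.2f}")

# Small probabilities get ratios well above 1 (the possibility effect);
# large probabilities get ratios below 1 (the certainty effect).
```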

The Setup

Suppose you have two choices:

  1. Lottery A: a 1% chance to win $10 000
  2. Lottery B: a 99% chance to win $101

Which would you prefer?

Well, under the real-world probability measure, these two choices are (essentially) equal: .99 x $101 ≈ .01 x $10 000 = $100. Thus a rational agent would be indifferent between the two options. But a human would have a preference: they would see one as more valuable than the other. Thus, writing $q_{.01}$ and $q_{.99}$ for the decision weights a human attaches to the 1% and 99% chances:

$$q_{.01} \cdot 10000 \;\gtrless\; q_{.99} \cdot 101$$

rewritten:

$$\frac{q_{.01}}{.01}\,(.01 \cdot 10000) \;\gtrless\; \frac{q_{.99}}{.99}\,(.99 \cdot 101)$$

and dividing both sides by the (equal) real-world expected values:

$$\frac{q_{.01}}{.01} \;\gtrless\; \frac{q_{.99}}{.99}$$

What's left to do is determine the direction of the first inequality.
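
For intuition about which direction to expect, we can plug an off-the-shelf weighting function into the chain above. Again, the Tversky-Kahneman form is only an assumption; the Mechanical Turk experiment below is what actually settles the direction empirically.

```python
# Evaluating the inequality chain under the assumed T-K weighting function.

def decision_weight(p, gamma=0.61):
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

q_01, q_99 = decision_weight(0.01), decision_weight(0.99)

print(q_01 * 10_000)  # ~553:  weighted value of Lottery A
print(q_99 * 101)     # ~92:   weighted value of Lottery B
print(q_01 / 0.01)    # ~5.53: change of measure at p = .01
print(q_99 / 0.99)    # ~0.92: change of measure at p = .99

# Under this assumed weighting, q(.01)/.01 > q(.99)/.99: the long shot is
# overweighted, so a T-K agent would pick Lottery A.
```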

Mechanical Turk it.

So I created combinations of probabilities and prizes, all with equal real-world expected value, and asked Turkers to pick which one they preferred. Example:

[Figure: an example HIT, asking Turkers to choose between a 50% chance to win $200 and an 80% chance to win $125.]

Again, notice that .5 x $200 = .8 x $125 = $100. The original HIT data and the Python scripts that generate it are in the repo, along with the MTurk results. Each HIT received responses from 10 Turkers.
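
The actual generation scripts are in the repo; the sketch below only illustrates the idea. The probability grid is my guess (chosen so that the $200/$125 example above falls out), not necessarily the grid the HITs used.

```python
# Sketch: fix one real-world expected value, derive each lottery's prize
# from its probability, then pair the lotteries up into HIT questions.
from itertools import combinations

EV = 100  # common real-world expected value, in dollars
probs = [0.01, 0.02, 0.05, 0.10, 0.25, 0.50, 0.75, 0.80, 0.95, 0.99]

lotteries = [(p, round(EV / p)) for p in probs]  # (chance, prize)

for (p_a, z_a), (p_b, z_b) in combinations(lotteries, 2):
    print(f"Lottery A: {p_a:.0%} chance of ${z_a:,}  vs  "
          f"Lottery B: {p_b:.0%} chance of ${z_b:,}")
```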

Note: The Turking cost me $88.40; if you'd like to give back over Gittip, that would be great =)

Note: I called the first choice Lottery A and the second choice Lottery B.

Analysis

Below is a slightly inappropriate heatmap of the choices people made. If everyone were rational, and hence indifferent between the two choices, the proportions should hover around 0.5. This is clearly not the case.

[Figure: heatmap of the fraction of Turkers choosing Lottery A, indexed by Lottery A's and Lottery B's probabilities.]
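
If you'd like to rebuild a figure like this from the raw results, a pivot of choice proportions is all it takes. A minimal sketch, assuming a hypothetical CSV with columns prob_a, prob_b, and choice; the repo's actual schema may differ:

```python
# Sketch: heatmap of the fraction choosing Lottery A per probability pair.
import matplotlib.pyplot as plt
import pandas as pd

df = pd.read_csv("mturk_results.csv")  # hypothetical file name
df["chose_a"] = (df["choice"] == "A").astype(float)

grid = df.pivot_table(index="prob_a", columns="prob_b",
                      values="chose_a", aggfunc="mean")

plt.imshow(grid.values, vmin=0, vmax=1, cmap="RdBu_r")
plt.xticks(range(len(grid.columns)), grid.columns)
plt.yticks(range(len(grid.index)), grid.index)
plt.xlabel("Lottery B probability")
plt.ylabel("Lottery A probability")
plt.colorbar(label="fraction choosing Lottery A")
plt.show()
```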

What else do we see here?

  1. As expected, people are risk averse: every point in the lower diagonal is one where Lottery A had a higher probability of success than B. The matrix shows that most points there are greater than 50%, i.e. people chose the safer bet more often.
  2. The exception to the above point is that the 1% lottery is chosen more often than the 2% lottery. This is an instance of the possibility effect: people treat 1% and 2% as about equally rare, so they pick the one with the larger payoff.

FAQ

  1. Why did I ask the Turkers to deeply imagine winning $50 before answering the question? This was to offset a potential anchoring effect: if a Turker's first choice had a prize of $10 000, then any other prize would have looked pitiful, as the anchor had been set at $10 000. By having them imagine winning $50 (lower than any prize), any prize they later saw would appear better than this anchor.

  2. Next steps? I'd like to try this again, with more control over the Turkers (e.g., a more diverse set of Turkers).

This data is mirrored and can be queried via API here.
