Research

Working Papers

The Problem of Reputation Reliability in Online Freelance Markets — with Marina Sandomirskaia

HSE Working Paper WP BRP 260/EC/2023

This paper explains how the problem of reputation credibility may arise in online freelance markets, where clients often complain about the quality of completed work irrespective of the price and the worker's rating. We develop a dynamic signaling model in which low-skilled freelancers purchase falsified reputation, focusing on a semi-separating equilibrium in every period. The main result is that when the cost of purchasing reputation is high, only the maximum rating is bought, because low-skilled freelancers need to be chosen by clients in order to recoup their losses. When the cost is low, a variety of ratings is observed, but the reputation mechanism is not credible and adds little information beyond prices.

Work in Progress

Can I Trust the AI? Delegating Decisions Under Uncertainty About Preference Alignment — with Vincent Lenglin

The increasing integration of AI into decision-making raises an important question: do individuals perceive human and artificial agents as making different types of errors when deciding on their behalf? While prior research has focused on delegation behavior and trust, individuals' underlying beliefs about decision errors are rarely measured directly. This study elicits subjective beliefs about the likelihood and nature of errors made by human and AI decision-makers and examines how these beliefs shape willingness to delegate. In a within-subject experiment, participants choose between making decisions themselves, delegating to another human, or delegating to an AI (ChatGPT) in a risky lottery setting with potential preference misalignment. Willingness to pay (WTP) for each option is elicited using a multiple price list. Participants also report their beliefs about each agent's likelihood of selecting different options, allowing us to link perceived error patterns to delegation preferences. The study provides direct evidence on how beliefs about human versus AI errors influence delegation decisions.
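The multiple price list (MPL) elicitation mentioned above can be sketched in a few lines: the subject accepts or rejects each price in an ascending list, and WTP is bracketed by the switch point. The function, price grid, and values below are illustrative assumptions for exposition, not the study's actual implementation.

```python
# Illustrative sketch of WTP inference from a multiple price list (MPL).
# All names and values here are hypothetical, not the study's design.

def wtp_from_mpl(prices, choices):
    """Infer willingness to pay from MPL rows.

    prices  : ascending list of prices offered, one per row.
    choices : list of booleans, True = subject accepts paying that price
              for the option (e.g., delegating the decision to the AI).

    WTP lies between the last accepted and the first rejected price;
    we return the midpoint of that bracket, a common convention.
    """
    if len(prices) != len(choices):
        raise ValueError("each price row needs a choice")
    # MPL analysis assumes monotone choices: a single switch from
    # accept to reject. Flag any switch back as inconsistent.
    for earlier, later in zip(choices, choices[1:]):
        if later and not earlier:
            raise ValueError("non-monotone choices: multiple switch points")
    accepted = [p for p, c in zip(prices, choices) if c]
    rejected = [p for p, c in zip(prices, choices) if not c]
    if not accepted:
        return 0.0                 # rejects even the lowest price
    if not rejected:
        return float(prices[-1])   # accepts every price offered
    return (accepted[-1] + rejected[0]) / 2.0

# Example: subject accepts paying up to 3, rejects 4 and above,
# so WTP is bracketed in (3, 4) and the midpoint is 3.5.
rows = [1, 2, 3, 4, 5]
picks = [True, True, True, False, False]
print(wtp_from_mpl(rows, picks))  # -> 3.5
```

The midpoint convention is one of several ways to resolve the interval-censored WTP; interval-regression treatments of the same data are also common.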

An Experimental Investigation of Algorithm Delegation for Choice Tasks — with Fabrice Le Lec and Vincent Lenglin

Whether individuals are willing to delegate decisions to algorithms is central to understanding the economic implications of artificial intelligence. This question bears directly on how AI may reshape economic behavior, organizational efficiency, and market outcomes. In this paper, we provide experimental evidence on individuals' attitudes toward algorithmic delegation, shedding light on the behavioral foundations of algorithmic adoption. Our contribution is threefold. First, we study delegation in choice tasks—decisions reflecting preferences under risk—rather than the judgment or prediction tasks that dominate the existing literature. This distinction matters because delegation in choice contexts raises concerns about autonomy and responsibility rather than factual accuracy. Second, we elicit preferences across three decision modes: self-decision, delegation to another human, and delegation to an algorithm, allowing us to distinguish algorithm-specific aversion from general delegation aversion. Finally, we hold performance accuracy constant across all conditions, isolating attitudes toward the source of delegation from differences in expected performance. Our results are unambiguous: we find no statistically significant preference for self-decision relative to either human or algorithmic delegation. Taken together with prior evidence, our results suggest that experimentally documented algorithm aversion primarily reflects pessimistic beliefs about algorithmic performance rather than intrinsic resistance to delegating decisions to algorithms.