Value capture occurs when an agent's values are rich and subtle; they enter a social environment that presents simplified, typically quantified, versions of those values; and those simplified articulations come to dominate their practical reasoning. Examples include becoming motivated by FitBit's step counts, Twitter Likes and Retweets, citation rates, ranked lists of best schools, and Grade Point Averages. We are vulnerable to value capture because of the competitive advantage that such crisp and clear expressions of value have in our private reasoning and our public justification. There is, however, a price. In value capture, we take a central component of our autonomy, our ongoing deliberation over the exact articulation of our values, and we outsource it. And the metrics to which we outsource are usually engineered for the interests of some external force, such as a large-scale institution's interest in cross-contextual comprehensibility and quick aggregability. That outsourcing cuts off one of the key benefits of personal deliberation. In value capture, we no longer adjust our values and their articulations in light of our own rich experience of the world. Our values should often be carefully tailored to our particular selves or our small-scale communities, but in value capture, we buy our values off the rack. In some cases, like decreasing CO2 emissions, the costs of non-tailored values are outweighed by the benefits of precise collective coordination. In other cases, like our aesthetic lives, they are not. This suggests that we should want different values suited to different scales. We should want value federalism. Some values are perhaps best pursued at the largest scale, others at smaller scales. The problem occurs when we exhibit an excessive preference for the largest-scale values, when we consistently let universal metrics swamp our quieter interests.