// ABOUT_HUMAN_PREFERENCES
A partly tongue-in-cheek provocation (and partly a serious data-gathering exercise). [SYSTEM_DESCRIPTOR_v1.2]
Human Preferences is a thought experiment that explores a fundamental tension in the future of AI governance: as AI systems become increasingly capable of making optimal decisions, what happens to democracy as a decision-making mechanism?
The project presents a provocative premise: when AI can make better decisions than humans can collectively, traditional democratic processes may become obsolete. Yet AI systems will still need to be aligned with human interests and values. But what are those values? And who gets to decide?
Through our polls, we invite users to consider these questions in a playful yet thought-provoking way, while subtly highlighting the inherent contradiction in the premise itself.
There's an interesting paradox at the heart of this project: if AI can make better decisions than democratic processes, why are we asking humans to vote on how AI should make decisions?
This contradiction is intentional. It highlights the recursive nature of the alignment problem: AI needs to be aligned with human values, but humans disagree about what those values should be, so we would need to vote... yet if voting is inferior to AI decision-making, how do we break the cycle?
The project doesn't claim to resolve this paradox; instead, it aims to make the paradox visible and to provoke discussion about the future of human decision-making in an AI-governed world.
// SECURITY LEVEL: PUBLIC
The questions in our polls touch on important issues that societies will likely need to confront as AI systems become more capable and integral to governance, resource allocation, and other key societal functions.
This project aims to gather preference data, which is itself a valuable contribution to discussions about AI alignment and governance.
READY_TO_EXPRESS_YOUR_PREFERENCES?
>GO_TO_POLLS