User choice in security-sensitive systems is never free. Every exposed preference adds attack surface, policy complexity, and long-term maintenance cost.
Users do not make decisions in a vacuum. They make them under pressure, with partial context, and against adversaries using deception, coercion, dark patterns, compromised defaults, and stolen sessions. Building for “choice” without building for abuse is wishful thinking.
Microsoft’s UCPD (UserChoice Protection Driver) is a useful example. Under pressure to support genuine default-choice expectations (especially in the EEA) while preventing silent hijacking by malware and third-party software, Microsoft introduced an undocumented kernel-level filter driver that can block write operations even when a tool appears to have valid permissions. Independent analysis suggests that user-driven changes made through supported Windows flows are often permitted, while many scripted or third-party attempts are blocked. That likely prevents one class of abuse, but it also disrupts legitimate admin workflows. The burden did not disappear; it moved: both Microsoft and operators now spend ongoing effort maintaining, updating, and securing the control surface around user choice.
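The observed behaviour can be captured as a toy policy model. This is a sketch, not the real driver (UCPD's internals are undocumented), and the key and flow names are hypothetical: the point is that for protected settings, valid permissions alone are no longer sufficient; the write must also originate from a supported, user-mediated flow.

```python
# Toy model of a choice-protection write filter. All names here
# (PROTECTED_KEYS, APPROVED_FLOWS, the key strings) are illustrative
# assumptions, not actual Windows identifiers.

PROTECTED_KEYS = {"http_default_handler", "pdf_default_handler"}
APPROVED_FLOWS = {"settings_app"}  # supported, user-mediated paths

def filter_write(key: str, caller_flow: str, has_permissions: bool) -> bool:
    """Return True if the write is permitted, False if blocked."""
    if key not in PROTECTED_KEYS:
        return has_permissions  # unprotected settings: a normal permission check
    # Protected settings: permissions are necessary but not sufficient.
    # This mirrors the observed "valid permissions but still blocked"
    # behaviour for scripted or third-party write attempts.
    return has_permissions and caller_flow in APPROVED_FLOWS

print(filter_write("http_default_handler", "settings_app", True))  # user-driven flow
print(filter_write("http_default_handler", "admin_script", True))  # scripted attempt
```

The asymmetry is the whole design: the same write, with the same permissions, succeeds or fails depending on provenance. That is also why legitimate admin tooling breaks, since from the filter's vantage point a management script and malware look alike.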
This pattern appears elsewhere. Office macros are a classic case: users were asked to make a trust decision with “Enable Content”, and attackers repeatedly used that path for initial access, forcing stricter defaults for internet-origin files. Browser default hijacking is another: adware and malware abused flexible default-setting paths, and platforms responded with stronger, user-mediated controls and protection layers that must now be maintained indefinitely. OAuth consent flows repeat the same lesson in modern systems: visible prompts alone are not enough when users are under pressure and attackers can mimic legitimacy, so ecosystems need scope minimisation, publisher trust signals, and risk-based guardrails around user choice.
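A risk-based guardrail around consent, of the kind the OAuth example calls for, can be sketched in a few lines. The scope names, verdict strings, and policy thresholds below are illustrative assumptions, not any provider's actual API; the idea is simply that high-risk scopes plus weak publisher trust should change the outcome, rather than every request getting the same prompt.

```python
# Hypothetical risk-based consent guardrail. HIGH_RISK_SCOPES and the
# verdict values are invented for illustration.

HIGH_RISK_SCOPES = {"mail.read_write", "files.read_all"}

def consent_verdict(scopes: set, publisher_verified: bool) -> str:
    """Return 'allow', 'prompt', or 'block' for a consent request."""
    risky = scopes & HIGH_RISK_SCOPES
    if risky and not publisher_verified:
        return "block"   # high-risk scopes from an unverified publisher
    if risky:
        return "prompt"  # verified publisher, but the user still decides
    return "allow"       # low-risk scopes: minimise friction

print(consent_verdict({"profile.read"}, publisher_verified=False))
print(consent_verdict({"mail.read_write"}, publisher_verified=True))
```

Note that the user prompt survives only in the middle tier: scope minimisation and publisher trust signals do the filtering before the user is ever asked, which is exactly the "guardrails around user choice" the ecosystem examples converge on.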
A suckless approach avoids distorting the whole user experience: keep trusted paths small, explicit, and inspectable. Expose supported extension points instead of undocumented enforcement layers. Fewer moving parts reduce attack surface and make legitimate automation easier to reason about, test, and maintain.
When full transparency is not possible, trade-offs between security, usability, and operability must be explicit. Design for user intent, not just user input. Build guardrails that are visible, reversible, and proportionate to risk, with supported automation paths for managed environments.
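One way to make "visible, reversible, and proportionate to risk" concrete is a settings store that journals every change and scales friction with risk. Everything below (the risk levels, the journal, the confirmation flag) is an illustrative design sketch, not a specific product's API.

```python
# Sketch of a visible, reversible, risk-proportionate settings change.
from dataclasses import dataclass, field

@dataclass
class SettingsStore:
    values: dict = field(default_factory=dict)
    journal: list = field(default_factory=list)  # visible audit trail

    def change(self, key: str, new, risk: str, confirmed: bool = False):
        # Proportionate friction: only high-risk changes demand
        # explicit confirmation; low-risk ones stay low-ceremony.
        if risk == "high" and not confirmed:
            raise PermissionError(f"{key}: high-risk change requires confirmation")
        self.journal.append((key, self.values.get(key)))  # record for reversal
        self.values[key] = new

    def undo(self):
        # Reversible: any change can be rolled back from the journal.
        key, old = self.journal.pop()
        self.values[key] = old

store = SettingsStore()
store.change("default_browser", "BrowserB", risk="high", confirmed=True)
store.undo()
```

The journal doubles as the transparency mechanism: managed environments get a supported automation path (call `change` with `confirmed=True` from tooling), while users and auditors can see and reverse what happened.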
If user control is a requirement, informed choice must be preserved while attackers are forced to work harder. The hidden cost remains: user choice creates permanent engineering and operational burden, and systems that ignore that cost usually fail both users and defenders.