Henry Fraser

AI threats to democracy

Today's post is going to be a little different. It's an experiment in 'lean' writing. I'm going to compile a taxonomy of threats to democracy posed by AI, both near and long term... But I'm going to do it incrementally and iteratively: I'll start with some bullet points and gradually refine them, sharing updates as I go. At first I'm not going to include references to existing work, but over time I'll fill these in and gloss important work on each head of risk.


The list

  • Algorithmic curation of information / Socio-technical organisation of information production and consumption

    • ranking content by engagement or click-through produces unintended dynamics in which information gains visibility (see the sketch after this list)

    • filter bubbles and echo chambers

    • ease of spread of misinformation, which too readily moves up the hierarchy of visibility

    • vulnerability of algorithmic curation to gaming by bots, including for the purposes of state-sponsored disinformation

    • outrageification and polarisation, including pushing people to extremes by failing to expose them to sufficiently diverse points of view

  • Targeted political and consumer advertising

  • State surveillance enhanced with facial recognition, voice recognition and data processing power

  • Location tracking by both state and non-state actors: threatens freedom of association

  • Automated decision-making: administrative decisions, law enforcement decisions and so on, made without sufficient explainability, accountability or opportunities for appeal

  • Availability of powerful and inexpensive autonomous weapons to non-state actors

  • Reactionary measures

    • For example, concern about 'harmful' or violent content gaining prominence online through the unintended operation of curation algorithms can lead to overreaching, state-sanctioned algorithmic filtering and censorship
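
To make the first point above a little more concrete, here is a minimal sketch (in Python) of what purely engagement-weighted ranking can look like. The `Post` fields, the weights and the scoring rule are my own illustrative assumptions, not a description of any real platform's algorithm; the point is simply that when the ranking signal is clicks and shares, accuracy never enters the calculation, so outrage-bait can float to the top of a feed.

```python
# Hypothetical sketch of engagement-based ranking.
# Fields, weights and the scoring rule are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class Post:
    title: str
    clicks: int        # click-throughs
    shares: int        # reshares / retweets
    accuracy: float    # 0.0-1.0, how well-sourced the content is (ignored by the ranker)

def engagement_score(post: Post) -> float:
    # Rank purely on engagement signals; accuracy plays no role.
    return post.clicks + 3 * post.shares

posts = [
    Post("Careful policy analysis", clicks=120, shares=10, accuracy=0.9),
    Post("Outrage-bait rumour", clicks=400, shares=150, accuracy=0.2),
]

# The low-accuracy, high-outrage item rises to the top of the feed.
for post in sorted(posts, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):>6.0f}  {post.title}")
```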


