Abstract: Cases such as Cambridge Analytica or the use of AI by the Chinese government suggest that the use of artificial intelligence (AI) creates some risks for democracy. This paper analyzes these risks by using the concept of epistemic agency and argues that the use of AI risks influencing the formation and revision of beliefs in at least three ways: the direct, intended manipulation of beliefs; the type of knowledge offered; and the creation and maintenance of epistemic bubbles. It then suggests some implications for research and policy.
Excited to share the news that over the next few years I will take up two guest professorships:
The Horizon Europe-funded ERA Chair (https://bit.ly/3HtXE97) at the Institute of Philosophy of the Czech Academy of Sciences in Prague, where I will help to set up a new international research center, the Center of Environmental and Technology Ethics – Prague (CETE-P) (https://bit.ly/3Bw4APe).
Guest professor at WASP-HS (https://bit.ly/3Wh1whB) and Uppsala University, where I will have the pleasure of working on our exciting research project “AI Design Futures” with Amanda Lagerkvist, Magnus Strand, and Virginia Dignum.
All this will involve the recruitment of researchers at various levels, including PhD vacancies, during the next year(s). I will keep you updated.