San Raffaele School of Philosophy 2020 - SRSP
Program
13.45 – Opening address – Roberto Mordacci (Dean of the Faculty of Philosophy, Vita-Salute San Raffaele University)
Chair: Francesca De Vecchi (Vita-Salute San Raffaele University)
14.00 – 15.00
Helena De Preester (University College Ghent and Ghent University)
Life is what you fill your attention with – the war for attention and the role of digital technology
Chair: Claudia Bianchi (Vita-Salute San Raffaele University)
15.00 – 16.00
Viviana Patti (University of Turin)
Language resources and automatic tools for analyzing and countering misogyny in social media
16.00 – 16.15 Break
Chair: Francesca Forlè (Vita-Salute San Raffaele University)
16.15 – 17.15
José Luis Martí (Pompeu Fabra University, Barcelona)
Artificial Intelligence and Collective Intelligence: how new technologies need to strengthen our democracies
Chair: Roberta Sala (Vita-Salute San Raffaele University)
17.15 – 18.15
Damiano Palano (Catholic University of the Sacred Heart, Milan)
Partisans and bubbles. Polarization in a fragmented public sphere
18.15 – 18.30
Conclusion and final discussion
Abstracts
Helena De Preester (University College Ghent and Ghent University)
Life is what you fill your attention with – the war for attention and the role of digital technology
Digital technology is the primary infrastructure of the attention economy, in which human attention is treated as a scarce commodity. When information is abundant, attention is the limiting factor in its consumption. Algorithmic governmentality (Rouvroy & Berns, 2013) exploits knowledge of what we pay attention to in order to commodify our desires. Hypernudging (Yeung, 2017), the use of nudges in the context of big data, influences user choices by steering attention in particular directions. Attention is thus the primary means of manipulating users: as these systems take control of attention mechanisms, the subject is gradually excluded from decisions about how to think and behave, from how to vote to how to spend their time. At the same time, people seem to be on a quest to recapture control of their own attention. There are at least two ways in which this recapturing of control can happen: education and attention practices.
Rouvroy, A. & Berns, Th. (2013). ‘Algorithmic governmentality and prospects of emancipation’, Réseaux, 177(1), pp. 163-196.
Yeung, K. (2017). ‘“Hypernudge”: Big Data as a mode of regulation by design’, Information, Communication & Society, 20(1), pp. 118-136.
Viviana Patti (University of Turin)
Language resources and automatic tools for analyzing and countering misogyny in social media
In recent years, hateful language, and in particular online hatred against women, has spread rapidly on social media platforms, becoming a serious social problem that needs to be monitored. The seminar illustrates artificial intelligence and computational linguistics techniques that can be applied to monitoring hate speech online, with particular emphasis on the development of linguistic resources and automatic tools to analyze, detect and counter misogynistic and sexist discourse. The phenomenon of online hatred will be examined from a multilingual perspective, through corpora of Twitter texts for Italian, English and Spanish and models developed in evaluation campaigns for the automatic detection of misogynistic speech. We will discuss the specificities of misogynistic hatred online, the traits misogyny and sexism share with other forms of online hate speech, and expressions of intersectional hate speech, in which different types of discrimination and hatred intersect and interact (e.g., the intersection of gender and ethnic discrimination). Since misogynistic and sexist language in social media is often expressed, and must be interpreted, against the background of widespread stereotyping and gender discrimination, two main lines of reflection will be proposed. The first concerns the challenges of applying finer-grained semantic grids of analysis in order to identify underlying phenomena such as prejudice and unintended bias, or more subtle forms of abuse such as micro-aggressions. The second concerns how computational linguistics methods can support concrete interventions against online hate speech, through the development of positive counter-narratives that raise awareness of toxic online discourse and empower the people targeted to express themselves online and offline. Finally, we will discuss the challenges of applying automatic hate speech detection to content moderation, with particular emphasis on the risks posed by unrecognized irony or by reclaimed uses of slurs.
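Purely as an illustration of what the automatic detection of misogynistic speech involves, the sketch below shows a minimal supervised text-classification baseline of the kind commonly used as a starting point in such evaluation campaigns. It assumes scikit-learn is available; the tiny training set and the example message are invented for this illustration and are not drawn from the corpora or campaigns discussed in the seminar.

```python
# Minimal sketch of a supervised misogyny-detection baseline.
# Assumes scikit-learn; the toy texts and labels below are invented
# for illustration and do not come from the corpora mentioned in the abstract.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative training set: 1 = misogynistic, 0 = not misogynistic.
texts = [
    "women should stay silent online",
    "great talk on language resources today",
    "she only got the job because she is a woman",
    "looking forward to the seminar on hate speech detection",
]
labels = [1, 0, 1, 0]

# Word n-gram TF-IDF features feeding a linear classifier,
# a common baseline setup for text classification tasks like this one.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), lowercase=True),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)

# Classify a new, unseen message.
print(model.predict(["no one should listen to women in tech"]))
```

Systems submitted to the evaluation campaigns are typically far more sophisticated (for instance, neural models trained on large annotated Twitter corpora), but the basic loop is the same: train on annotated examples, then classify unseen messages.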
José Luis Martí (Pompeu Fabra University, Barcelona)
Artificial Intelligence and Collective Intelligence: how new technologies need to strengthen our democracies
We are constantly reading about spectacular advances in the field of artificial intelligence. Even if real machine intelligence may still seem a faraway goal, in the coming years we are going to witness an important leap in the ways AI combines with other technologies, such as machine learning, big data, and VR, producing deep and consequential changes in our lives and societies. Democracies cannot and should not be kept apart from these changes. We have all already seen ways in which AI is affecting, mostly negatively, our democracies. But there is a whole field of study that analyzes the several ways in which AI may actually strengthen, rather than undermine, our collective intelligence, which is the basis of democracy. Whatever the future of democracy is, it will have to remain human, and it will inevitably depend on technology. Hybrid (human-machine) mechanisms of public decision-making already exist, and they will continue to develop, changing forever not only the way we organize politics and law in our societies but also our collective identity as a demos and the image we have of ourselves as a common mind. The transition carries many costs and threats. The good news is that, precisely because we are still living in the early years of this new era, the decisions we make today may have very positive (or very negative) effects on our future. That is why it is so crucial to undertake this debate right now. And we cannot afford to fail.
Damiano Palano (Catholic University of the Sacred Heart, Milan)
Partisans and bubbles. Polarization in a fragmented public sphere
The paper considers the role of partisanship in the context of a fragmented public sphere. So-called ‘post-truth’ can in fact be understood as a consequence of a ‘fragmented’ public sphere. This scenario differs substantially from that of ‘party democracy’, the protagonist of a significant part of the twentieth century, and from that of ‘audience democracy’ (Manin). The emergence of new media entails a series of consequences, including the fragmentation of the audience into a plurality of self-referential segments and politically polarized ‘bubbles’. Taking such developments into account, the paper seeks to construct the ‘ideal type’ of a ‘bubble democracy’, marked by mistrust of institutions, fragmentation of the audience, disintermediation, homophilic tendencies, and polarization. Finally, the paper considers the risks of partisanship in this context.