Peer-Reviewed Articles
The “attention economy” refers to the tech industry’s business model that treats human attention as a commodifiable resource. The libertarian critique of this model, dominant within tech and philosophical communities, claims that the persuasive technologies of the attention economy infringe on the individual user’s autonomy; the solutions it proposes therefore focus on safeguarding personal freedom by expanding individual control. While this pushback is important, current societal debates on the ethics of persuasive technologies remain informed by a particular understanding of attention, one rarely posited explicitly yet assumed as the default. We step away from a negative analysis in terms of external distractions and aim for positive answers, turning to Buddhist ethics to formulate a critique of persuasive technology from a genuinely ethical perspective. We offer input for further philosophical inquiry on attention as practice and attention ecology, and put forward comfort/effort and individualism/collectivism as two remaining central tensions in need of further research.
The impact of social-media technologies (SMTs) on digital well-being has become an increasingly important puzzle for ethicists of technology. In this article, we explain why individualised theories of digital well-being (DWB) can only solve part of this puzzle. While an individualised conception of DWB is useful for understanding online self-regulation, we contend that we must seek a greater understanding of how SMTs connect us. To build such an account, we draw on the conceptual resources of Confucian ethics. In contrast to the individualised conceptions of human flourishing found in the Western tradition, Confucian thinkers emphasise that individuals cannot flourish alone but need wider social structures (partner, family, society, nation). Strands of Confucian ethics not only explain how individuals are defined by the roles they take up in relationships, but also make practical suggestions for how these roles can be judiciously cultivated. We conclude our essay by identifying the Confucian notions that seem to hold the most promise for the design of future SMTs.
Social media technologies (SMTs) are routinely identified as a strong and pervasive threat to digital well-being (DWB). Extended screen-time sessions, chronic distraction via notifications, and fragmented workflows have all been blamed on these technologies, which are said to ruthlessly undermine our ability to exercise quintessential human faculties. One reason SMTs can do this is that they powerfully affect our emotions. Nevertheless, (1) how social media technologies affect our emotional life and (2) how these emotions relate to our digital well-being remain unexplored. Remedying this is important because ethical insights into (1) and (2) open up the possibility of designing social media technologies in ways that actively reinforce our digital well-being. In this article, we examine the way social media technologies facilitate online emotions through their emotional affordances. This has important consequences for evaluating the ethical implications of today’s social media platforms, as well as for how we design future ones.
** Winner of the Eindhoven University of Technology Postdoctoral Article Award **
Global lockdowns during the COVID-19 pandemic have offered many people first-hand experience of how their daily online activities threaten their digital well-being. This article begins by critically evaluating the current approaches to digital well-being offered by ethicists of technology, NGOs, and social media corporations. My aim is to explain why digital well-being needs to be reimagined within a new conceptual paradigm. After this, I lay the foundations for such an alternative approach, one that shows how current digital well-being initiatives can be designed in more insightful ways. This new conceptual framework aims to transform how philosophers of technology think about this topic, as well as offering social media corporations practical guidance for designing their technologies in ways that will improve the digital well-being of users.
The COVID-19 pandemic has transformed the domains of work, education, medicine, and leisure. It has also precipitated a spike in concern regarding our digital well-being. Prominent lobbying groups, such as the Center for Humane Technology, have responded to this concern by offering a set of ‘Digital Well-Being Guidelines during the COVID-19 Pandemic’. These guidelines seek to build on the many academic insights into digital well-being gained over the last decade. In this article, I evaluate (1) the Center for Humane Technology’s approach, comparing it with (2) character-based strategies and (3) approaches that redesign online architecture. I argue that each of these approaches needs to be integrated into a complete theory of digital well-being.
Value-sensitive design theorists propose a range of values that should inform how future social robots are designed. This article explores a new value, digital well-being, and proposes that the next generation of social robots should be designed to facilitate this value in those who use these machines. To do this, I explore how the morphology of social robots is connected to digital well-being, arguing that we need to decide whether tomorrow's social robots should be designed as embodied or disembodied. After weighing the merits of both approaches, I suggest there may be persuasive reasons why disembodied social robots could be better aligned with the value of digital well-being.
Self-care app companies have recently begun employing artificial intelligence (AI) to improve the functionality of their products. This use of AI has already transformed – and often enhanced – how many users experience online self-care. By dramatically narrowing the gap between offline and online self-care techniques, apps that incorporate AI have come close to replicating much of the face-to-face, personalised input of self-care gurus. Nevertheless, using AI to mimic and replace human agents raises a cluster of interconnected ethical concerns. This article surveys the benefits that AI-enabled self-care products offer and assesses the ethical challenges they must overcome.
Book Chapters
Improving digital well-being is increasingly viewed as a key challenge for the tech industry, largely driven by the complaints of online users. Recently, the demands of NGOs and policy makers have further motivated major tech companies to devote practical attention to this topic. While their initial response has been to focus on limiting screentime, self-care app makers have long pursued an alternative agenda, one that assumes that certain kinds of screentime can play a role in actively improving our digital lives. This chapter examines whether there is a tension in the very idea of spending more time online to improve our digital well-being. First, I break down what I suggest can usefully be viewed as the character-based techniques that self-care apps currently employ to cultivate digital well-being. Second, I examine the new and pressing ethical issues that these techniques raise. Finally, I suggest that the current emphasis on reducing screentime to safeguard digital well-being could be supplemented by employing techniques from the self-care app industry.