Algorithms hugely impact our consumption of news, media and much more, yet very little is known about how they work.
Automated decision-making systems have crept into every corner of our lives: they impact the news we see, the songs we're recommended, the products we buy and the way we travel.
At the heart of these systems lie algorithms: computerised instruction sets that operate over data to produce controlled outcomes. Until recently, these algorithms operated with very little scrutiny.
When it comes to news, algorithms can determine what content comes top of your search, what advertising is targeted at you, and what is and isn't allowed to exist on a platform through automated moderation.
Despite their ubiquity, algorithms can cause harm. Automated decision-making can discriminate on the basis of race, sex, age, class and more. These systems have also been exploited by individuals and groups to spread misinformation.
Many news algorithms operate in closed, proprietary systems shrouded in secrecy, aptly described as 'black boxes'. To properly assess the potential and risks of automated decision-making in news and media, researchers and the public need access to information about how these systems work in practice. That requires transparency.
Most of us will have encountered collaborative filtering, a prevalent content recommendation algorithm popularised by Netflix. Collaborative filtering makes recommendations by extrapolating from shared qualities between items and/or users, directing audiences with messages like 'people similar to you enjoyed this film, so you should also enjoy this film'. More data improves the accuracy of these predictions.
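As a rough illustration of the technique, here is a minimal sketch of user-based collaborative filtering. The ratings matrix, film titles and cosine-similarity weighting are illustrative assumptions, not the system Netflix or any other platform actually runs.

```python
# A minimal sketch of user-based collaborative filtering with NumPy.
# The ratings matrix and film titles are made up for illustration.
import numpy as np

# Rows are users, columns are films; 0 means "not yet rated".
ratings = np.array([
    [5, 4, 0, 1],   # the user we want recommendations for
    [4, 5, 5, 1],   # a very similar user who loved Film C
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)
films = ["Film A", "Film B", "Film C", "Film D"]

def cosine_similarity(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def recommend(user_idx, ratings, top_n=1):
    target = ratings[user_idx]
    # Weight every other user's ratings by their similarity to the target user.
    weights = np.array([
        cosine_similarity(target, other) if i != user_idx else 0.0
        for i, other in enumerate(ratings)
    ])
    scores = weights @ ratings / (weights.sum() + 1e-9)
    # Only suggest films the target user hasn't rated yet.
    unseen = np.where(target == 0)[0]
    return sorted(unseen, key=lambda j: scores[j], reverse=True)[:top_n]

for j in recommend(user_idx=0, ratings=ratings):
    print(f"People similar to you enjoyed {films[j]}")
```

More users and more ratings sharpen the similarity weights, which is why platforms holding enormous preference datasets can predict our tastes so effectively.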
The volume of user preference data collected by platforms such as Spotify, Facebook and YouTube is now so vast that serendipity is all but absent. These platforms instead resemble what marketing researchers Aron Darmody and Detlev Zwick describe as 'highly personalised worlds that are algorithmically emptied of irrelevant choice'.
As algorithmically enabled social media platforms become embedded in our cultural fabric, many users are developing their own beliefs about how algorithms work. Not all of these beliefs are correct. Two commonly encountered ones are that our devices are listening to us all the time, and that we collectively exist in filter bubbles.
We've all heard stories along the lines of: 'I was discussing a fairly niche topic with friends and shortly afterwards I started receiving targeted advertisements about that very topic.'
This idea has been widely debunked (the data transfer requirements alone make it implausible), but the pervasiveness of the belief speaks to a deeper concern: that algorithms know more about us than we feel they should. The fact that many believe our phones are listening to us is partly psychological, but also a testament to the power of modern algorithmic recommendation systems.
The reality is our devices don't need to listen in to our conversations to know how to profile us and target advertising. It's much easier than that: we give away thousands of data points each day by sharing our location, click-throughs, text comments, status updates, and broader web-surfing behaviours.
A lot of this data isn't even captured inside the app we're using: web trackers mean our ordinary internet-surfing activity can be made available to big social media platforms. If we appear predictable, it is because our phones and the platforms behind them can figure out an awful lot about us through our everyday activity.
Fear around algorithms has led to other exaggerations and moral panics, particularly around so-called filter bubbles and echo chambers. Coined by activist Eli Pariser, the filter bubble hypothesis suggests that as platform algorithms tune towards our specific tastes, those systems will self-reinforce us into 'bubbles' where we eventually only encounter material that confirms our pre-existing world views.
The bulk of research on this idea has so far failed to find evidence that such bubbles exist. Most evidence suggests we are consuming a wider variety of content in the modern digital news era.
While platforms attempt to optimise their algorithms and drive specific user experiences, there are also external actors that attempt to exploit such algorithms for their own gain. For instance, YouTube's search ranking has been gamed by highly active niche entrepreneurs who were able to gain exceptional levels of visibility by posting inflammatory and controversial content.
On Twitter, coordinated inauthentic activity has been observed around major elections, with the agents behind these campaigns exploiting the networked and algorithmic structure of the platform, purposely 'entangl[ing] orchestrated action with organic activity' to boost their visibility.
The myths and harms of algorithms point to a bigger issue: consumers do not know much about how these systems function.
What is being done to address these problems?
One response is the algorithmic audit, which creates artificial user personas that can be made to interact programmatically with various algorithmic systems and to track, at scale, whether specific demographic traits lead to specific forms of discrimination.
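As a sketch of how such an audit might be set up, the code below runs two hypothetical personas against a simulated platform and compares the advertising each one is shown. The query_platform function is a stand-in assumption; a real audit would drive authenticated browser or API sessions for each persona.

```python
# A minimal sketch of a "sock puppet" algorithm audit: artificial personas
# query a system repeatedly, and we compare what each persona is shown.
import random
from collections import Counter

PERSONAS = [
    {"name": "persona_a", "age": 22, "location": "Sydney"},
    {"name": "persona_b", "age": 67, "location": "Sydney"},
]

def query_platform(persona, query):
    """Hypothetical stand-in for a real persona-driven platform session."""
    ads = ["travel", "insurance", "gaming", "retirement planning"]
    # Simulated skew: older personas are shown more finance-related ads.
    weights = [1, 3, 1, 3] if persona["age"] > 50 else [3, 1, 3, 1]
    return random.choices(ads, weights=weights, k=5)

def audit(query, trials=200):
    """Run each persona many times and tally what it was shown."""
    results = {}
    for persona in PERSONAS:
        shown = Counter()
        for _ in range(trials):
            shown.update(query_platform(persona, query))
        results[persona["name"]] = shown
    return results

for name, counts in audit("retirement news").items():
    print(name, counts.most_common(2))
```

Because the two personas differ in only one trait, any systematic difference in what they are shown points to that trait driving the algorithm's behaviour.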
Critical simulation seeks to replicate aspects of algorithms (such as how Instagram uses machine vision to profile images uploaded by users), but in an open way. Doing so exposes the inner workings of such systems, subjecting them to scrutiny and experimentation.
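One way to picture critical simulation is to stand an open, pretrained image classifier in for a platform's proprietary machine vision. The sketch below uses torchvision's ResNet-18 purely as an approximation; it is not Instagram's actual model, and the image path is illustrative.

```python
# A minimal sketch of critical simulation: an open, pretrained vision model
# approximates the kind of image profiling a platform might run, so the
# labelling process itself can be inspected and experimented with.
import torch
from PIL import Image
from torchvision.models import resnet18, ResNet18_Weights

weights = ResNet18_Weights.DEFAULT
model = resnet18(weights=weights)
model.eval()
preprocess = weights.transforms()

def profile_image(path, top_k=3):
    """Return the model's top-k labels (and confidences) for an image."""
    img = Image.open(path).convert("RGB")
    batch = preprocess(img).unsqueeze(0)
    with torch.no_grad():
        probs = model(batch).softmax(dim=1)[0]
    top = probs.topk(top_k)
    labels = weights.meta["categories"]
    return [(labels[int(i)], float(p)) for p, i in zip(top.values, top.indices)]

print(profile_image("uploaded_photo.jpg"))  # illustrative file path
```

Because every step is open, researchers can probe how changes to an image shift the labels it receives, something that is impossible against a sealed proprietary system.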
Citizen science 'data donation' tools look back at platforms. Unlike the audit, data donations rely on users volunteering their authentic profile information. An Australian study is analysing how Google personalises search results, and another examines how Facebook's algorithmic advertising model works.
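A data donation pipeline might look something like the sketch below: a browser extension records which ads a volunteer was shown, strips direct identifiers, and prepares the record for a research server. The field names and the collection endpoint are hypothetical, not those of the Australian studies mentioned above.

```python
# A minimal sketch of a data donation payload: the volunteer's account ID is
# replaced with a salted one-way hash before anything leaves their machine.
import hashlib
import json
from datetime import datetime, timezone

def anonymise(record, salt="research-project-salt"):
    """Swap the volunteer's account ID for an irreversible pseudonym."""
    donor = hashlib.sha256((salt + record.pop("account_id")).encode()).hexdigest()
    return {"donor": donor, **record}

observed_ad = {
    "account_id": "volunteer_1234",           # illustrative values
    "advertiser": "Example Travel Co",
    "targeting_hint": "interested in: travel, 25-34",
    "seen_at": datetime.now(timezone.utc).isoformat(),
}

payload = json.dumps(anonymise(observed_ad))
print(payload)
# A real plugin would then send this payload to the study's collection
# endpoint, e.g. requests.post("https://example-research.org/donate", data=payload)
```

Because donations come from real accounts rather than artificial personas, they capture the personalisation an actual user experiences, which is exactly what made these tools a target for platform pushback.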
Those methods seek to bring transparency to otherwise opaque algorithmic processes. But some approaches have been met with resistance. In 2021, researchers at AlgorithmWatch and New York University (NYU) were threatened with legal action by Facebook over alleged violations of the platform's terms of service, for developing plugins that allowed users to anonymously donate advertising-related data. In the case of the NYU project, several researchers had their personal Facebook accounts suspended.
A way forward from this impasse is the recently tabled US Platform Transparency and Accountability Act. The Act has yet to pass and would only provide coverage within the United States. It proposes that platforms be required to make certain data available to researchers and to provide basic computational tools to support transparency research.
It suggests appointing the US National Science Foundation to act as the arbiter of these data requests, though it is not clear whether non-US citizens would be able to make requests. Given the global reach of these platforms, internationalising any data access model would be highly beneficial.