Researching Social Media Algorithms: Dangerous Data

How come Twitter shows you the cutest cat video EVER each time you log in? How much does Facebook make from advertising? How do the algorithms used by social media companies impact society?
As part of the Open Content Curator Internships, we were given the opportunity to design open resources of our choice. Before starting this role I watched a documentary, 'Computer Says No', on the ethical problems with the machine learning algorithms many companies use to make hiring and redundancy decisions. The documentary also covered the wider applications of technology in decision making and featured the story of a young girl drawn into a dangerous online community on social media.
After attending an intern event led by Lilinaz Rouhani and Dr Vicki Madden on EDI in tech, and a focus group on Digital Safety, I decided that I wanted to make a resource related to this topic. They talked about data bias in many forms and gave examples of its huge consequences for marginalised groups in areas of life from credit cards to healthcare. I first learned about data bias when it was covered in an optional data science course I took in the first year of my degree. I'd really like to see more on this included in the maths curriculum, because so many students will go on to become statisticians or to work in tech, where they will be working with data and the computer programs that collect it.
There have been countless applications of machine learning developed in my lifetime, but social media, and specifically content recommendation algorithms, is the one I chose to focus on. Almost all of the information I see online is presented to and organised specifically for me by an algorithm, and I wanted to better understand the effects this has on me and on others.
I collaborated with Amy Yin, ISG's Digital Safety Intern, who clarified and explained points I didn't yet understand, and answered my confused questions! I also received content suggestions and input from Megan Thomson and Dr Vicki Madden, who I got to meet through this role. One of my favourite parts of this internship has been being able to meet and learn from people from a wide variety of interesting subject areas. Before starting here I would have guessed that everyone would have similar computer science backgrounds, but actually I don't think I've met any two people who took the same path into their roles.
The resource takes the form of a quiz PowerPoint, because I thought this was the format I'd be most likely to engage with if I came across it. The resource can be read through or used as a presentation to a school class. It includes brief descriptions of algorithms and machine learning in general, then focuses specifically on applications to social media. The main body of the resource covers the dangers of unchecked use of automated decision making, and the contribution of social media systems to societal bias, polarisation and the promotion of dangerous content online.
View the detailed resource description or download it here!
‘Social Media Algorithms: Dangerous Data’ was created by Alyssa Heggison, with guidance and input from Amy Yin, Megan Thomson and Dr Vicki Madden at The University of Edinburgh Information Services Group. The resource contents are available under a CC BY-SA 4.0 licence unless otherwise stated.
Header Image: Cropped version of ‘Neural Network’ by Rick Bolin via Flickr, CC BY-NC-SA 2.0, https://flic.kr/p/bqA6Gz