The purpose of this guide is to highlight the issue of algorithmic secrecy and to explain how the goals of engagement-focused social media algorithms conflict with the public interest.
Thesis:
The lack of transparency and accountability in social media algorithms gives tech companies unchecked power to prioritize questionable incentives that threaten democracy and damage public health.
Research Questions:
How do social media algorithms work?
What role do algorithms play in the spread of misinformation?
What type of content do people most often engage with?
How do social media companies make money?
Why aren't social media algorithms available for public viewing?
How can algorithms impact democracy?
How is the brain impacted by algorithmic content?
Since algorithms first became a part of social media, the world of online media has changed dramatically. This drastic change has taken place without public knowledge of how social media algorithms work. Although no major social media algorithm is available for public viewing, there has been extensive research on the negative effects these algorithms have on users. This research paper seeks to explain how social media algorithms work and to show why they are so dangerous in their current state. The methodology included searching SVC databases, university websites, EBSCO, and corporate financial statements; thirteen articles were collected and annotated for this paper. The lack of transparency means the algorithms themselves cannot be directly cited as a source of bias, yet this paper is able to show that the type of media these companies are incentivized to promote can undermine both democracy and public health. Unregulated algorithms have already proven to have a negative impact on users and will continue to do so if nothing is done. Social media algorithms need to be transparent in how they work so that their effects can be further researched and regulations established. Social media users should advocate for transparent algorithms so they can make better decisions about their health and about which platforms they choose to use.
Algorithms can be used for good and are now common across most industries. Their use on social media is a fairly recent development: the algorithm shows each user the content they are most likely to interact with. Algorithm-driven content is the choice for platforms and users alike. Users like it because algorithms are great at surfacing videos that keep them entertained, and platforms like it because algorithms drive engagement, which brings in ad revenue. No transparency is required of social media algorithms, which means there can be no meaningful regulation of them. Keeping users continuously engaged with content has known harmful effects, including addiction.
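The engagement-driven ranking described above can be illustrated with a minimal, purely hypothetical sketch. The actual platform algorithms are secret (the central concern of this guide), so every field name and weight below is invented for illustration only; the point is that a feed ordered by predicted engagement optimizes for interaction, not accuracy or user well-being.

```python
# Hypothetical engagement-based feed ranking. Real platform algorithms
# are proprietary and far more complex; all weights here are invented.

def engagement_score(post):
    """Score a post by predicted engagement, not by accuracy or benefit."""
    return (post["likes"] * 1
            + post["comments"] * 3       # comments signal stronger engagement
            + post["shares"] * 5         # shares spread content the furthest
            + post["watch_seconds"] * 0.1)

def rank_feed(posts):
    """Order a user's feed by engagement score, highest first."""
    return sorted(posts, key=engagement_score, reverse=True)

posts = [
    {"id": "calm_news",    "likes": 50, "comments": 2,  "shares": 1,  "watch_seconds": 30},
    {"id": "outrage_clip", "likes": 40, "comments": 30, "shares": 20, "watch_seconds": 90},
]
feed = rank_feed(posts)
print([p["id"] for p in feed])  # the higher-engagement post leads the feed
```

In this toy example the provocative clip outranks the calmer post despite having fewer likes, because comments, shares, and watch time dominate the (invented) scoring. Without transparency, the public cannot inspect what the real weights reward.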
The video below highlights how ingrained algorithms are in our daily lives and explains why the lack of regulation should worry the public. The presenter speaks of the harmful impacts that AI algorithms have on people belonging to underrepresented groups, as the datasets an AI program is given may not include them. They talk about creating more representative datasets by putting a "nutrition label" on them to show the biases that AI may inadvertently perpetuate. Their work has caused those looking to create AI, and those who sell data, to be more conscious of how they use and collect data. This resource is important to the topic because the speaker explains algorithmic secrecy, the main concept in this guide.
Citation:
Chmielinski, Kasia. “Why AI Needs a ‘Nutrition Label.’” TED, 2024, www.ted.com/talks/kasia_chmielinski_why_ai_needs_a_nutrition_label?language=en.
All Content CC-BY.