Why a social scientist should learn Machine Learning and Artificial Intelligence
Never did I think that I would one day be staring at a dark-themed Atom window, ferociously typing lines of Python code on a Friday night. Owing to the several ugly C minuses marked on my 11th-grade mathematics papers (gosh, I hate statistics), I had always thought of myself as more of an “arts and humanities” person. History, politics, society: yum. Hard science, computers, programming: yuck.
Such an aversion to science is also socially constructed. Under my education system, an artificial watershed separated the “humanities” subjects from the “science” subjects. Students, having chosen their track, had their fate sealed and would never consider setting foot on the opposite side. Unsurprisingly, I rarely wandered outside the humanities field. I got myself a social sciences degree with a specialization in international relations under political studies. So predictable.
Well, this article is not a review of education systems of any form. It is, however, my reasoning for crossing the footbridge thinly hung across this watershed. Why is a political/social sciences student choosing to learn coding, machine learning (ML), and, eventually, artificial intelligence (AI)?
The primary reason is obviously money-driven, given the comparatively ominous career outlook for most humanities majors. But that would end the article too soon.
The secondary (more important) reason is that I sense an urgent need for more socially minded people in the making of the coming — potentially the most disruptive — Fourth Industrial Revolution.1 Sparked by the surge of AI practically everywhere, the latest Revolution sees billions of people linked and aided by ever-evolving mobile devices, the Internet of Things, robotics, biotechnology, etc. Klaus Schwab, the founder of the World Economic Forum, predicted that our ways of living, working, and maintaining relational networks will be fundamentally altered, and it is up to us whether the change is for better or for worse.2
Wait, that means the fate of mankind is in the hands of computer scientists, engineers, or whoever designs the AI algorithms? The answer is: technically yes, but no. Yes, we rely on talented designers to whip up less of a Terminator-rogue-AI catastrophe and more of a self-driving utopia. But this response treats the development of AI purely as a matter of science, as if scientific experiments gone wrong were the only possible cause of doom.
The truth is, we have a lot more to worry about, and the challenges are already stamping on our doormat. In terms of AI, or any other data-enabled technology, the threat is more political than technological. And that’s why socially minded people must join the chat.
AI is inherently political
Sorry, how is AI political? Aren’t real-world, data-powered algorithms more neutral than humans’ disappointingly biased decision-making? This is where things get interesting.
One indisputable truth under my discipline is that “everything is politics.”3 This is even truer for AI.
Don’t believe me? Let me use my favorite “level of analysis” model in political science for an examination.4
Hold up, for those who don’t know what that is, let’s imagine a microscope. To investigate an alien object, we place it under lenses of different magnifications and scrutinize its micro and macro attributes. The same goes for the study of politics or society. Waltz puts it nicely: “[a]s men live in states, so states exist in a world of states.”5 We, therefore, use three levels — the individual, the societal/state, and the international — when probing complex phenomena.
All good? Let’s start with the most minuscule yet equally influential level:
The individual level
“The root of all evil is man, and thus he is himself the root of the specific evil, […].”6
Kenneth Waltz, Man, the State and War, 2001 ed.

Human nature is, unfortunately, the major reason why AI is not neutral like Newton’s laws of motion, and this very political-ness is captured by, you guessed it, “real-world data.” Author Ivana Bartoletti notably challenged the “sanctity of data,” the conventional belief that data is simply objective information reflective of reality.7 But from our consumption of online news to the “likes” we send on social media, nothing is apolitical. The former reveals our party inclinations and predisposed positions. The latter informs our personality, interests, social networks, and even psychological make-up.8 For example, if you want to test how much Google knows about you, follow these steps:
- Open a blank Chrome tab, click on the round profile icon at the top right corner
- Click “Manage Your Google Account”
- Click “Data & Personalization” on the left-hand side column
- Scroll down to “Ad Settings”, click on it to see your ad personalization
Boom, there you go. If you have had the setting on for a few years, your ad profile should be quite impressive, or alarmingly precise. These “super profiles” Facebook and Google create are the results of years-long tracking of our “everyday life” on the web,9 and when aggregated and compared with other people’s, our socioeconomic status, sexual orientation, racial profile, or political convictions can all be deduced.
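To make the aggregation idea concrete, here is a minimal, entirely hypothetical sketch of how scattered signals can add up to a profile. The topics, keywords, and page titles below are invented, and real ad systems use far richer signals, but the principle — accumulating many small, seemingly innocuous observations — is the same:

```python
from collections import Counter

# Hypothetical topic lexicon -- real profiling systems infer interests
# from far richer signals (clicks, dwell time, location, purchases).
TOPIC_KEYWORDS = {
    "politics": {"election", "senate", "policy"},
    "fitness": {"workout", "protein", "marathon"},
    "parenting": {"stroller", "daycare", "toddler"},
}

def build_ad_profile(browsing_history):
    """Count topic keyword hits across a user's page titles."""
    profile = Counter()
    for title in browsing_history:
        words = set(title.lower().split())
        for topic, keywords in TOPIC_KEYWORDS.items():
            profile[topic] += len(words & keywords)
    return profile

history = [
    "Senate passes new election policy",
    "Best protein shakes after a workout",
    "Election night live coverage",
]
profile = build_ad_profile(history)
print(profile.most_common(1))  # politics dominates this toy history
```

Each page title contributes only a word or two, yet after a handful of visits the dominant interest is unmistakable — which is why years of such signals make a profile “alarmingly precise.”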
Now, that is not the most unsettling part. Machine learning entails that an algorithm be trained on data, improved by feedback, and then make predictions or even decisions in our lives.10 It turns out that biases can be woven into almost every step of the way. Researchers at the University of Washington found that Google’s Jigsaw hate speech detector flagged African American vernacular English as offensive more often than white-aligned language.11 Author Virginia Eubanks discovered the tendency of algorithms to view low-income families as less deserving of public housing programs.12 Twitter users fed Microsoft’s AI chatbot Tay with misogynistic, racist, and heavily skewed statements, successfully corrupting it within a day.13 Because each of our individual lives is inherently political, so is the data fed to the algorithms. How neutral is it, when the churned-out AI predictions perpetuate the worst of humanity?
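A toy sketch can illustrate how labeling bias becomes model bias. The “training data” below is invented, and this word-counting scorer is nothing like a real hate speech classifier, but the mechanism it shows is the one the Washington researchers described: if annotators over-label dialect terms as offensive, the model inherits that bias:

```python
from collections import Counter

# Invented training data: imagine annotators over-labeled dialect
# markers ("yall", "finna") as offensive, regardless of actual content.
training = [
    ("yall going to the game", 1),
    ("finna head home now", 1),
    ("you are all going to the game", 0),
    ("about to head home now", 0),
]

toxic_counts, clean_counts = Counter(), Counter()
for text, label in training:
    (toxic_counts if label else clean_counts).update(text.split())

def toxicity_score(text):
    """Fraction of words seen more often in 'toxic' than 'clean' examples."""
    words = text.split()
    flagged = sum(toxic_counts[w] > clean_counts[w] for w in words)
    return flagged / len(words)

# Two harmless sentences with the same meaning; the dialect version
# scores higher purely because of the skewed labels it was trained on.
print(toxicity_score("yall finna head to the game"))
print(toxicity_score("you are all about to head to the game"))
```

Nothing in the scorer itself is prejudiced — the bias arrives with the labels and is faithfully reproduced at prediction time, which is exactly why “trained on real-world data” is no guarantee of neutrality.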
AND I HAVEN’T EVEN GONE TO THE NEXT LEVEL:
The societal/state level
Kenneth N. Waltz gave a beautiful analogy: water, though chemically consistent, behaves differently depending on its container. Steam can power an engine, or, sealed in and heated to extreme temperatures, become the instrument of a destructive explosion.14
Kenneth Waltz, Man, the State and War, 2001 ed.

Exchange “water” for “data” in our context, and it becomes clear how the political nature of data ineluctably encroaches on the concepts we narrowly perceive as political: democracy, authoritarianism, party politics, populism, etc.
How? By channeling data into algorithms and having these algorithms make decisions for us, we are slowly changing the fabric of domestic politics. One of the most contentious debates surrounds the rise of populism in Western democracies and the algorithmic amplification of opinions through News Feeds on social media platforms and news websites.15 Author Eli Pariser coined the term “filter bubble” to describe how Google’s, Facebook’s, and Netflix’s omnipotent algorithms sift out information that doesn’t fit our consumption pattern, resulting in personalized bubbles in which each piece of information presented to us just agrees with us more.16 Inquiries into the linkage between social media “echo chambers” and personal political stances abound, some finding the tie weak as of yet,17 others uncovering a certain level of ideological segregation,18 and still others wary of the “black box” calculations the algorithms are making.19 Bartoletti forebodes a grim domestic political scenario in which all-mighty algorithmic propaganda machines deteriorate democracies by discouraging discussion and debate, should no controls over their transparency and usage be implemented now.20 The whole Cambridge Analytica debacle, in which leaked Facebook user data was allegedly put to work in political campaigns, brought the issue a lot closer to home.21
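The filter-bubble mechanism Pariser describes can be sketched in a few lines. The articles and tags below are made up, and real recommenders are vastly more sophisticated, but the feedback loop — rank content by similarity to what you already clicked — is the same:

```python
# Invented mini-catalog of articles, each tagged by topic and leaning.
articles = {
    "tax cuts explained": {"economy", "right"},
    "minimum wage debate": {"economy", "left"},
    "why the left is wrong": {"opinion", "right"},
    "why the right is wrong": {"opinion", "left"},
}

def rank_feed(click_history):
    """Rank unseen articles by tag overlap with previously clicked ones."""
    liked_tags = set().union(*(articles[a] for a in click_history))
    unseen = [a for a in articles if a not in click_history]
    return sorted(unseen, key=lambda a: -len(articles[a] & liked_tags))

# A single right-leaning click is enough to push the opposing
# viewpoint to the bottom of the feed.
print(rank_feed(["tax cuts explained"]))
```

Every click the user makes on the re-ranked feed reinforces the same tags, so the next ranking is even narrower — a personalized bubble, built with no malicious intent anywhere in the code.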
There is also the problem of facial recognition and biometric data misuse, which gives governments (authoritarian or not) a firmer grip on us. But hey, all that for a safer society? My two cents are that “just because we can do it doesn’t mean that we should.” And when it involves normative judgments as such, ethical and philosophical deliberations are welcomed.
Still with me? Please don’t click away. Because, please.
While the political change at the domestic level is gradual and subtle, the diplomatic language spoken among high-ranking officials has been dramatically rewritten:
The international level
“With many sovereign states, with no system of law enforceable among them, with each state judging its grievances and ambitions […] conflict […] is bound to occur.”22
Kenneth Waltz, Man, the State and War, 2001 ed.

The quote above describes an anarchic view of the international system, whereby states are always insecure about their relative power positions against other states.23 This is probably self-explanatory and frequently seen in daily life. During a pillow fight, my brother takes away the largest pillow; I, being 80% certain of an imminent attack, reach for the harder memory foam ones, which makes him feel the need to build a pillow fort. The rest is history.
This perspective comes under what we call “realism” in international relations. While this article finds it unnecessary to reiterate the century-old debate on paradigms,24 it certainly cannot eschew addressing the “security dilemma” — as vividly shown above — in the current three-way AI competition between the United States, China, and European Union.
President Vladimir Putin best captured this race: whoever leads in AI “will be the ruler of the world.”25 Not one bit of that is untrue, as more than 30 national AI strategies have been rolled out.26 The Center for Data Innovation ranked the US as the current leader of the game, with China catching up quickly and the EU dragging its feet.27 There is an essential difference between the US and Chinese models:28 The United States acknowledged the tremendous force wielded by Silicon Valley and let commerce take the wheel. China, instead, took the matter into the government’s own hands and invested generously in military AI. Scholar Michael Horowitz juxtaposed these two approaches to AI and forecasted scenarios in which the diffusion of AI might influence how tomorrow’s wars are fought.29
Pretty intense, huh? While science fiction is enthralled by Terminator-style human extinction, what we actually face is malign humans behind algorithms facing other malign humans behind algorithms. And the casualties could be just as traumatic.
Up to this point, the message should be unambiguous. Nothing about AI is apolitical. Facial recognition technology on our phones is by all means a milestone in human history, but so is its use in mass surveillance or targeted autonomous killer drones.30 This particular technology, trained on our data and used against us, creates a ripple effect in the political realm. If the social, political, or other humanities-centered experts-to-be do not pick up the discussion, we will soon lose the advantage of “nipping it in the bud.”
If AI is political, does it make it…bad?
Don’t get me wrong, this is not an anti-technology or anti-development piece. I choose to join the conversation not because I believe AI to be the end of mankind, nor as a job-interview line. It is because I also see a future where meticulously examined and thoroughly accountable algorithms reinvent how we approach social welfare programs, human rights gatekeeping, invisible human trafficking, smart city management, and sustainability.
For instance, Wood for Trees31 is using big data technology and algorithms to help not-for-profit charities achieve maximum social impact like never before. The Traffik Analysis Hub32 partnership program harnesses IBM’s computing power to extrapolate undetected patterns of illicit financial flows and human trafficking routes from layers of complex data. Deep Genomics33 complements the practice of medicine with predictions of how genetic mutations behave, informing new drug testing.
Am I tired of listing examples? Yes. But trust me when I say that the list goes on. To attain all the amazing things we wish for, we need more socially minded people to not just “talk the talk” but also “walk the walk.” We need to know the algorithms well enough to keep biases at bay, to have enough background knowledge to recognize the trade-off between operability and transparency, and to be skilled enough coders to build tools society could benefit from. I can write bitter opinion pieces all day, but none of them solves the problem the way knowing how to change the code does.34
So why am I starting my machine learning journey? It’s because I think I have a thing or two to say about how I want a future society to be, and I’d much rather code my own visions than leave it up to a machine.
In the following posts, we will review key concepts in this emerging technology while providing more analysis on the potential political or social impacts. Follow us on this self-learning journey and explore the intersection of politics and AI/ML.
Recommended reading list
Eli Pariser, The Filter Bubble: What the Internet Is Hiding from You, New York: The Penguin Press, 2011.
Ivana Bartoletti, An Artificial Revolution: On Power, Politics and AI, London: The Indigo Press, 2020.
Nathaniel Persily and Joshua A. Tucker (editors), Social Media and Democracy: The State of the Field, Prospects for Reform, Cambridge: Cambridge University Press, 2020.
Virginia Eubanks, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor, New York: Picador, 2019.
References
- Klaus Schwab, The Fourth Industrial Revolution: what it means, how to respond (2016), World Economic Forum. ↩︎
- Ibid. ↩︎
- Insert your favorite author, philosopher, thinker, from George Orwell to Thomas Mann, they’ve all said something similar. It’s free real estate. ↩︎
- See its usage in Kenneth N. Waltz, Man, the State and War (2001, revised edition), New York: Columbia University Press, and J. David Singer, The Level-of-Analysis Problem in International Relations (1961), World Politics, Vol. 14 (1), pp. 77–92. Also, shout out to my political sciences professors. I did retain some knowledge after all. ↩︎
- Waltz, Man, the State and War, p.6. ↩︎
- Ibid, p.3. The original quote was a synopsis of several philosophers’ beliefs on human nature and its link to wars. We’re not exactly talking about war, and I’m not saying that all of human nature is pure evil. The idea is that since human nature is complicated and political, so is information about our behavior and thoughts. ↩︎
- Ivana Bartoletti, An Artificial Revolution: On Power, Politics and AI (2020), London: The Indigo Press, ch.2. ↩︎
- Andrew Hutchinson, What Does Facebook Know About You Really? (2019), Social Media Today. ↩︎
- Rebecca Bellan, Germany Bans Facebook From Super Profiling. Here’s What That Means For The U.S. (2020), Forbes; Matt Southern, Google Under Fire for Using “Super Profiles” to Serve Targeted Ads (2017), Search Engine Journal; Simon Van Dorpe, German strike at Facebook business model faces new court hurdle (2021), Politico. ↩︎
- See Ajay Agrawal, Joshua Gans, and Avi Goldfarb, Prediction Machines: The Simple Economics of Artificial Intelligence (2018), Boston, MA: Harvard Business Review Press. They provide detailed yet straightforward explanations on how algorithms work with neat graphics. Highly recommended for anyone who wishes to grasp at least conceptually how ML/AI works. ↩︎
- Devin Coldewey, Racial bias observed in hate speech detection algorithm from Google (2019), TechCrunch. ↩︎
- Virginia Eubanks, We created poverty. Algorithms won’t make that go away (2018), The Guardian. ↩︎
- James Vincent, Twitter taught Microsoft’s AI chatbot to be a racist asshole in less than a day (2018), The Verge. ↩︎
- The full quote: “Water running out of a faucet is chemically the same as water in a container, but once the water is in a container, it can be made to ‘behave’ in different ways. It can be turned into steam and used to power an engine, or, if the water is sealed in and heated to extreme temperatures, it can become the instrument of a destructive explosion.” See Waltz, Man, the State and War, p.80. ↩︎
- Swathi Meenakshi Sadagopan, Feedback loops and echo chambers: How algorithms amplify viewpoints (2019), The Conversation. ↩︎
- Eli Pariser, The Filter Bubble: What the Internet Is Hiding from You (2011), New York: The Penguin Press. ↩︎
- Shelley Boulianne, Karolina Koc-Michalska, and Bruce Bimber, Right-wing populism, social media and echo chambers in Western democracies (2020), New Media & Society, Vol. 22 (4), pp. 683–699; Eytan Bakshy, Solomon Messing and Lada A. Adamic, Exposure to ideologically diverse news and opinion on Facebook (2015), Science, Vol. 348 (6239), pp. 1130–1132. ↩︎
- Matteo Cinelli, Gianmarco De Francisci Morales, Alessandro Galeazzi, Walter Quattrociocchi, and Michele Starnini, The echo chamber effect on social media (2021), Proceedings of the National Academy of Sciences of the United States of America (PNAS). ↩︎
- Pablo Barberá, Social Media, Echo Chambers, and Political Polarization (2020), in Social Media and Democracy, edited by Nathaniel Persily and Joshua A. Tucker, New York: Cambridge University Press, pp. 34–55. ↩︎
- Bartoletti, An Artificial Revolution, ch.3. ↩︎
- Alex Hern, Cambridge Analytica: how did it turn clicks into votes? (2018), The Guardian. ↩︎
- Waltz, Man, the State and War, p.160. ↩︎
- So millennial. ↩︎
- Fine, it is I who don’t want to address it. There’s no way I could do justice to it. Apologies to my professors. ↩︎
- James Vincent, Putin says the nation that leads in AI ‘will be the ruler of the world’ (2017), The Verge. ↩︎
- Alexandra Mousavizadeh, Alexi Mostrous, and Alex Clark, The arms race: A groundbreaking new index ranking 54 countries (2019), Tortoise. ↩︎
- See the Center for Data Innovation, Who Is Winning the AI Race: China, the EU, or the United States? (2019, 2021 update). ↩︎
- Kelley M. Sayler, Artificial Intelligence and National Security (August 26, 2020, R45178 — Version: 9), Congressional Research Service. ↩︎
- Michael Horowitz, Artificial Intelligence, International Competition, and the Balance of Power (2018), Texas National Security Review, Vol. 1 (3), pp.36–57. ↩︎
- This is not fiction. Human Rights Watch has launched campaigns against the usage of autonomous weapons, or “killer robots”. See Human Rights Watch, Killer Robots (accessed 2021). True events have also occurred in the last few years, see Vivek Wadhwa, Killer Flying Robots Are Here. What Do We Do Now? (2021), Foreign Policy. ↩︎
- Wood for Trees, About us (accessed 2021). ↩︎
- Traffik Analysis Hub, Our Mission (accessed 2021). ↩︎
- Deep Genomics, AI Workbench (accessed 2021). ↩︎
- A note for me to go back to my Python learning. ↩︎