CV | Portfolio

I am a sociotechnical researcher in AI, examining how society shapes AI technologies and vice versa. On the one hand, I use data science and computational methods to examine artifacts composing AI systems; on the other, I adopt sociological and anthropological lenses to unearth the human practices, habits, and beliefs around human-AI interaction. I’m interested in interrogating the nature of a synthetic social fabric, encompassing the values, norms, institutions, and structures stemming from the social connections of humans and machines.

Currently, I am a research assistant at the TRACE Lab at the Cambridge University Centre for Human-Inspired AI (CHIA). Working with Dr. Umang Bhatt, I contribute to the AI Externalities project, which tests the second-order effects of AI use on human-human interaction. As AI becomes part of our daily workflows, how does our AI usage affect our subsequent interactions with other human beings? I design a range of tasks across multiple domains (e.g., law, medicine, math, finance) where AI use may have tremendous sociopolitical or ethical consequences.

Areas of Interest

My fascination with digital technologies, AI above all, began with my very first research job as a Laidlaw undergraduate fellow in 2020. Majoring in political science at the University of Hong Kong at the time, I was looking for emerging trends that might upend the geopolitical chessboard, and AI surfaced as a key piece. Through the Laidlaw Foundation scholarship, I worked with the UCL European Institute to catalogue human rights, environmental, and privacy impact assessments of AI across 11 countries granted EU adequacy decisions. I then visualized the comparative analysis in Tableau and presented it at an industry-academic workshop.

Ever since, I have journeyed through multiple disciplines to study the same phenomenon: the mutual shaping of AI and society. Here, I list selected highlights of my work organized into broad and shifting areas of interest. Their methodological and theoretical dispositions draw from all the disciplines I have dabbled in: social and political science, diplomacy and international organization studies, science and technology studies (STS), digital anthropology, computational social science, and human-AI interaction.

Civic Technologies

My interest in civic technologies, i.e., the creation and adoption of digital artifacts for civic purposes, underlies my most passionate focus. Having spent much of my master's degree and my first formal job studying the unending battle between regulatory authorities and big tech companies, I was looking for reasons for optimism. Are all tech products created for profit first, or could we imagine alternatives that treat users as humans rather than consumers? My quest for better alternatives brought me from International Geneva to Bethnal Green, home of the London College of Political Technology (Newspeak House).

I was part of the 2024-25 cohort of Newspeak House fellowship candidates, studying the evolution and status quo of civic technologies under the guidance of talented faculty. I was primarily interested in using digital technologies to widen access to civic participation, especially for marginalized communities typically excluded from pertinent policy discussions. I co-founded Not A Stranger, a migrant-led digital narrative initiative, to amplify UK-based migrants' voices amidst growing anti-immigration sentiment. I built both the initiative's website and the Immigration Email Drafter, an LLM tool that helps migrants write emails to their representatives in whatever style or language they find comfortable.
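To give a flavor of the idea behind such a tool: the sketch below shows one hypothetical way an LLM email drafter could assemble its instruction prompt from a user's chosen style and language. This is not the Immigration Email Drafter's actual implementation; the function name, parameters, and prompt wording are all illustrative assumptions.

```python
# Hypothetical sketch only: NOT the actual Immigration Email Drafter code.
# It shows how a style- and language-aware prompt for an LLM might be built.

def build_draft_prompt(issue, representative, style="formal", language="English"):
    """Compose an instruction prompt asking an LLM to draft a constituent email."""
    return (
        f"Draft an email to {representative} about the following issue:\n"
        f"{issue}\n\n"
        f"Write in a {style} style, in {language}. "
        "Keep the tone respectful and ask for a concrete response."
    )

# Example: a user who prefers a personal tone and writes in Spanish.
prompt = build_draft_prompt(
    issue="Delays in visa renewal are separating families.",
    representative="my local MP",
    style="personal",
    language="Spanish",
)
```

The resulting string would then be sent to whichever LLM backend the tool uses; the point is that style and language live in the prompt, not in the model.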

AI increasingly makes up our civic infrastructure, mediating social interactions and political processes. I became interested in its effects on the quality of civic discourse and co-hosted the Democratic AI event series to experiment with AI-facilitated mini-publics (workshops 1 and 2). I also edited a multimedia publication, The Political Technologist, alongside other fellows to document our learnings and observations during the fellowship.

Algorithmic Investigations

Algorithmic systems are increasingly embedded in the decision-making processes guiding and shaping our civic life; nevertheless, their impacts on individuals, communities, and populations remain difficult to measure. I understand “algorithmic investigations” as broader than algorithmic audits. Unlike a technical or governance audit, whose focus is limited to technical artifacts (datasets, algorithms, code scripts, etc.) or stakeholders (developers, AI companies, deployers, etc.), an algorithmic investigation begins with the specific individuals and communities who experience the effects of these data-driven systems. Such an investigation may incorporate auditing methods (e.g., metric-based evaluations, code inspection, document reviews), but it also grounds itself heavily in the lived experience of communities through interviews, ethnography, and participatory research.
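As a minimal illustration of what a metric-based evaluation can look like, the sketch below compares false positive rates across demographic groups, one common disparity check in algorithmic audits. The data are synthetic and the code is illustrative, not drawn from any of the investigations described here.

```python
# Illustrative sketch of a metric-based evaluation: comparing false
# positive rates (FPR) across groups. All data below are synthetic.

def false_positive_rate(labels, preds):
    """FPR = FP / (FP + TN) over paired ground-truth labels and predictions."""
    fp = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 1)
    tn = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 0)
    return fp / (fp + tn) if (fp + tn) else 0.0

def fpr_disparity(records):
    """Per-group FPRs and the largest gap between any two groups.

    records: list of (group, true_label, predicted_label) tuples.
    """
    groups = {}
    for group, y, p in records:
        ys, ps = groups.setdefault(group, ([], []))
        ys.append(y)
        ps.append(p)
    rates = {g: false_positive_rate(ys, ps) for g, (ys, ps) in groups.items()}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap

# Synthetic example: a classifier that wrongly flags group "B" more often.
records = [
    ("A", 0, 0), ("A", 0, 0), ("A", 0, 1), ("A", 1, 1),
    ("B", 0, 1), ("B", 0, 1), ("B", 0, 0), ("B", 1, 1),
]
rates, gap = fpr_disparity(records)  # A: 1/3, B: 2/3, gap: 1/3
```

A gap like this is only a starting point: in a community-grounded investigation, it prompts interviews and participatory work to understand what the disparity means for the people affected.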

After critically examining existing algorithmic auditing methods in my master’s thesis, I decided to get my hands dirty by transitioning from a policy analyst to an actual auditor. I have so far investigated:

  • Machine learning-driven (ML-driven) ride-hailing apps (presented at TiCTEC 2025);
  • Smart vape detectors with sound anomaly detection (featured in Tech Policy Press (TPP) with a full report on Present Moment Enterprises);
  • Open-source facial recognition models (internal work);
  • Social welfare fraud detection algorithms (internal work).

Each of these was done in collaboration with grassroots organizations serving marginalized communities. I have also co-authored a community-oriented algorithmic auditing methodology, which aims to provide a range of methods, from no-code to technical, with which independent auditors and community members can investigate discriminatory or unjust algorithmic harms.

AI in Practice

Algorithmic and AI systems don’t exist in a vacuum; instead, they are constantly “in the making,” shaping and shaped by the humans who build and use them. My thoughts on “AI in Practice” are heavily influenced by work in STS, critical AI/data/algorithm studies, and feminist technoscience studies (FTS). These fields provide a wide range of approaches to understanding the human practices around AI systems through ethnography, critical reflection, and case studies.

In my master’s thesis, I typologized existing ethics-based auditing methods according to their purpose, targets, involved stakeholders, timing, and techniques. I critiqued their failure to capture how AI developers and users interact with AI systems in the real world, re-inscribing meanings and ways of acting and co-producing socioethical consequences unforeseen by rigid conceptions of statistical fairness. Through a six-month ethnographic study, I documented how developer practices encoded specific values and social imaginations into an LLM-based system. I published my findings, along with the “ethnographic audit trail” method I developed, in AI and Ethics. I became convinced that engineering socially inclusive systems requires intervening at their conception. Later, I collaborated with the Equiano Institute to produce a framework for integrating ethnographic methods into AI design, development, and deployment, which I presented at the inaugural 2025 Paris Participatory AI Research & Practice Symposium.

Digital Diplomacy

My time in International Geneva, a colloquial term for the United Nations’ policymaking and negotiation arms based in Geneva, Switzerland, cultivated my interest in digital diplomacy. As a field of study, I examined the constellation of international organizations (IOs), non-governmental organizations (NGOs), state actors, and transnational companies that collectively influence digital policymaking across privacy, human rights online, cybersecurity, e-commerce, and more. I contributed to multiple DiploFoundation projects to this end: I authored and co-designed the Geneva Digital Passport, a nifty pocketbook introducing the 20 most salient digital policy topics and technologies discussed in the UN ecosystem, and I curated the information page on the EU Digital Services Act package during and immediately after the EU Parliamentary debates, alongside a range of other content.

As a practice, I explored how digital technologies could be used by diplomats and the UN in diplomatic functions, peace mediation, and policy negotiations. I created an AI Toolbox for Diplomats to demonstrate what an AI-assisted diplomat might look like with commercially available AI tools (back in 2023). I also managed the Geneva Engage initiative, which evaluated how effectively Geneva-based IOs, NGOs, and embassies conveyed their policy work to worldwide audiences through their social media activities. I was the lead coordinator, data analyst, and host for the 9th Geneva Engage Award, where 200 UN officials, ambassadors, and NGO representatives gathered for a ceremonial evening celebrating their yearlong achievements. Finally, I served as Diplo’s representative to the UN DPPA CyberMediation Network during 2022 and 2023.

Critical Technology Studies

My encounter with Bruno Latour’s Actor-Network Theory in a digital anthropology class during my graduate studies marked a turning point in my academic pursuits: by reasserting the ability of artefacts to shape human agency, Latour opened me up to a world where human-machine systems co-produce sociopolitical consequences that influence our social fabric. While this analytical orientation has seeped into much of my work, two pieces deserve particular highlight.

In an academic-industry applied research project during my master’s, I worked with Umoja Lab, a social enterprise piloting cryptocurrency-based humanitarian aid delivery in countries recovering from recent disasters. While decentralized finance (DeFi) promised to deconstruct the colonial project of humanitarian aid, its technological means were often less examined. In a team of four, we conducted a systematic review of critical technology studies, feminist, and postcolonial literature to create an evaluation framework assessing the decolonial potential of Umoja Lab’s pilots in Haiti, Ecuador, and Vanuatu. Our results, reported in “DeFi to Decolonize Aid”, won the Human Rights and Humanitarianism Track Prize at the Geneva Graduate Institute for research excellence and impact.

In a class project, I wrote a research brief based on a detailed review of FTS literature on how information and communication technologies (ICTs) are designed with gendered inscriptions; I further summarized strategies for countering harmful imaginations of gender in ICT design.