About me

I am a third-year Computer Science PhD student at the University of Maryland, College Park (UMD), fortunately advised by Prof. Hal Daumé III and Prof. Zubin Jelveh, who care about and foster conceptual creativity. The broad theme of my PhD research is incorporating insights from law and moral philosophy to audit for, and enable, fairer attribution of responsibility in collaborative human-AI systems. The first category of “responsibility” is “burden”, e.g., 1/ decision subjects’ burden of bearing unfavorable and potentially erroneous decisions, and 2/ decision makers’ burden of potentially suffering more when AI assistance is given than when they make decisions alone. The second category of “responsibility” is “blame”, e.g., when a human-AI system decides or generates something wrong, how do we fairly determine whom to blame?

Regarding research methods, through my PhD and internship work I have gained experience in, and continue to refine, three distinct but complementary methods:
1/ formulating new arguments based on primary legal sources (Law);
2/ designing human-subjects experiments and statistically analyzing their results (HCI);
3/ scraping online text, building classification models, fine-tuning language models against harmful content, and measuring the quality of such fine-tuning (technical NLP/ML).

Funding and Awards

  1. DOJ/NIJ Graduate Research Fellowship 2023 ($166,500 over 3 years; topic: Operationalizing the Individual versus Group Fairness Dichotomy for Recidivism Risk Assessment: US Legal Challenges and Technical Proposals)
  2. Funded Proposal: Effort-aware Fairness (approx. $98,500 for one year; project conceptualized and proposal drafted by me, then refined and submitted with Donald Braman, Furong Huang, and my PhD advisors to NIST-NSF TRAILS)
  3. Dean’s Fellowship ($5,000 over 2 years)

Publications

  1. Tin Nguyen, Jiannan Xu, Aayushi Roy, Hal Daumé III, and Marine Carpuat. Towards Conceptualization of “Fair Explanation”: Disparate Impacts of anti-Asian Hate Speech Explanations on Content Moderators. EMNLP 2023. 6-minute pre-recorded presentation and poster.
  2. Navita Goyal, Connor Baumler, Tin Nguyen, and Hal Daumé III. The Impact of Explanations on Fairness in Human-AI Decision Making: Protected vs. Proxy Features. IUI 2024.