
Collaged portraits of Meryl Ye, Hemant Gouni, Shrey Tiwari, Alexandra Li and Xiaoyuan Owen Wu.

November 05, 2025

Five S3D Ph.D. Students Receive Fellowships

By Aaron Aupperlee

The Software and Societal Systems Department (S3D) recently awarded fellowships to five of its graduate students. The fellowships cover a portion of tuition and provide a stipend. Students were selected by S3D faculty and staff on the fellowship committee.

Meryl Ye and Hemant Gouni received Sansom Graduate Fellowships.

Ye, who is pursuing a Ph.D. in societal computing, investigates AI sycophancy — where AI systems prioritize user flattery and validation over accuracy. Research shows that sycophantic chatbots increase user overconfidence and extremism, yet users prefer them and view them as unbiased.

Ye's proposed work aims to reduce users' preference for sycophantic AI and to test its social implications. Building on her current work with AI literacy interventions, she plans to investigate whether motivating accuracy reduces preference for sycophantic AI and whether sycophantic chatbots weaken motivation for human relationships. Ye aims to use experimental and computational methods to develop intervention frameworks that ensure AI systems complement rather than substitute for human judgment and social interaction.

Gouni, who is pursuing a Ph.D. in software engineering, is developing a new foundation for information flow reasoning that enhances the usability and scalability of security verification. Traditional systems for verifying confidentiality and integrity struggle with complexity, duplication and poor modularity.

Gouni's proposed structural information flow framework unifies these properties, showing that declassification naturally aligns with noninterference rather than violating it. This approach simplifies specifications, supports modular reasoning and reduces syntactic burden. Future work aims to demonstrate that confidentiality and integrity can function synergistically, extend the framework to complex language features, and validate its cognitive usability through user studies, advancing both secure programming and human-centered system design.

Shrey Tiwari and Alexandra Li received Hima and Jive Fellowships.

Tiwari, who is pursuing a Ph.D. in software engineering, aims to improve the correctness and reliability of date/time computations in software, an area notorious for subtle, high-impact bugs.

Building on prior work analyzing hundreds of real-world date/time bugs in open-source Python projects, Tiwari's research investigates whether AI coding assistants can overcome these challenges or perpetuate them. By systematically testing large language models (LLMs) on realistic date/time programming tasks and applying differential fuzzing, the work exposes hidden logical flaws in AI-generated code. The overarching goal is to develop tools, benchmarks and best practices that help both human and AI programmers write more robust, temporally accurate software systems.

Li, who is pursuing a Ph.D. in societal computing, examines how international and domestic university students perceive, experience and respond to digital scams, focusing on how cultural, linguistic and institutional factors shape vulnerability.

Li's work will include a large-scale survey of about 1,000 students across U.S. universities to compare scam encounters, reporting behaviors and the effectiveness of university and government support. By identifying how students assess communication legitimacy and access trusted resources, the research seeks to develop more inclusive, culturally aware and effective anti-scam education and policy interventions. Ultimately, Li aims to advance digital safety frameworks benefiting both student populations and other at-risk communities.

Xiaoyuan Owen Wu, who is pursuing a Ph.D. in societal computing, received the Presidential Fellowship. Wu researches how to make LLMs safer and more trustworthy in digital security and privacy contexts.

Wu's work pursues two goals. The first is to understand how users interpret and rely on signals such as consistency, confidence and accuracy when evaluating LLM responses, and to identify when and how these signals should be presented to improve decision-making. The second is to examine whether LLMs can learn and respect users' individual privacy preferences through personalization techniques like prompt engineering and memory-based profiles. Together, these studies will inform the design of LLM systems that better support user trust, accuracy and privacy protection.