Undergraduate and graduate students in UC Santa Barbara’s Computer Science Department have received numerous prestigious accolades from top academic conferences and organizations in the past few months, ranging from best-paper awards to highly competitive fellowships. Department leaders see the national recognitions bestowed on their students as confirmation of the interdisciplinary, supportive, and solutions-driven efforts of the faculty and staff in computer science, the College of Engineering, and the university.
“These major awards are proof that our department is shaping the future of computing with our outstanding research and education programs,” said Tevfik Bultan, department chair. “Our students at all levels engage in world-leading research in a collaborative and energetic environment, tackle significant problems of societal importance, and make impactful contributions in all frontiers of computer science.”
Aesha Parekh and Samhita Honnavalli, both members of the university’s Natural Language Processing (NLP) Group, were recognized as two of the top fifty undergraduate researchers in the nation by the Computing Research Association (CRA), whose membership includes more than two hundred computer science and computer engineering departments in the United States. Parekh was named a finalist for the CRA’s 2022 Outstanding Undergraduate Researcher Award, distinguishing her among the top ten undergraduate researchers in the nation. Honnavalli received an honorable mention, placing her among the top fifty undergraduate researchers.
The pair became involved with the NLP Group and its co-director, computer science associate professor William Wang, through the department’s Early Research Scholars Program. Overseen by Diba Mirza, an associate teaching professor, the year-long research apprenticeship program provides undergraduate students with their first research experience. Parekh and Honnavalli analyzed how gender and seniority biases manifest in text generated by natural language models, and they designed experiments to quantify bias against African-American English in Generative Pre-trained Transformer 2 (GPT-2), a popular language model used for text generation and language translation.
Their graduate student mentor, Sharon Levy, also received an honor of her own: an Amazon Fellowship to support her research in NLP. The fourth-year PhD student says that the fellowship enables her to focus on academics and provides the freedom and time to pursue her work on responsible artificial intelligence (AI).
“I am developing new methodologies to detect and mitigate harmful information in machine learning models,” said Levy, who is advised by William Wang. “My work is important because the adoption of these models without prior intervention can leave users vulnerable to the spread of societal biases and misinformation.”
PhD students Yuke Wang and Shlomi Steinberg received graduate fellowships from NVIDIA, the company that invented the graphics processing unit (GPU) and is a world leader in computer graphics, scientific computing, and artificial intelligence. They were two of only ten graduate students worldwide selected by NVIDIA in 2022 to receive up to $50,000 in funding to support research that could lead to major advances in accelerated computing and its applications. Wang’s work could improve the performance, efficiency, and user experience of next-generation deep-learning (DL) systems that require powerful computational support.
“This fellowship provides a major boost for my research in GPU-based high-performance computing for deep-learning applications,” said Wang, a fourth-year graduate student who is advised by computer science assistant professor Yufei Ding. “The award will allow me to explore new software and hardware features of modern GPUs to accelerate deep-learning workloads in numerous real-world applications, such as autonomous driving and content suggested to e-commerce and social media users.”
Steinberg, who is advised by Lingqi Yan, an assistant professor of computer science, will develop models and computational tools for physical light transport, the computational discipline that studies the simulation of partially coherent light in complex environments. He simulates on a computer the physics of light as it behaves in real-world scenarios and environments.
“I am delighted to be a recipient of the graduate fellowship,” said Steinberg, whose work could be applied to realistic computer-graphics renderings in movies and multimedia, as well as to radar, WiFi, and cellular signals. “To receive research support from the company that revolutionized computer graphics is both humbling and inspiring.”
M-Lab, an open-source project that provides measurement data to consumers about their internet performance, awarded one of its prestigious research fellowships to Sanjay Chandrasekaran, a third-year PhD student. Founded and led by Code for Science & Society, Google, and academic researchers, M-Lab selected its three 2022 fellows based on the potential of their proposed research to improve internet performance.
“This fellowship means a lot to me because it reinforces that my project and the direction of my research are meaningful and impactful,” said Chandrasekaran, who earned his bachelor’s degree from Carnegie Mellon University and is advised at UCSB by computer science assistant professor Arpit Gupta.
As an M-Lab fellow, Chandrasekaran will focus on understanding the relationship between the user’s quality of experience (QoE) and the metrics that can be collected through active and passive measurements.
“The motivation for my research is to one day be able to knowledgeably schedule network traffic in real time based on the needs of any application,” he explains. “Demystifying the relationship between what we can observe in the network and a user’s QoE when using any application is a critical step toward solving this problem.”
First-year PhD student Gyuwan Kim, who is advised by William Wang, was awarded Best Paper at the Simple and Efficient NLP Workshop during the Association for Computational Linguistics’ Conference on Empirical Methods in Natural Language Processing (EMNLP), one of the top conferences for publishing research in NLP and AI. Kim co-wrote the paper, titled “Length-Adaptive Transformer: Train Once with Length Drop, Use Anytime with Search,” with Kyunghyun Cho, an associate professor at New York University.
“The most exciting aspect about receiving the award is that it increases the chances that researchers in the same field will read my paper,” said Kim, whose research focuses on improving the efficiency and robustness of NLP models. “The increased exposure of our paper could also lead to future collaborations and push our work in new directions.”
Kim and Cho worked to improve transformer models, which are widely used in NLP and machine learning but are often computationally expensive. Their paper proposed a new framework that achieves a superior trade-off between accuracy and efficiency and can operate under any computational budget.