Finding the right subject area to study isn’t always straightforward. This was the case for Karthik Narasimhan, who initially started his undergraduate journey in a completely different major. Once he took the leap to pursue a Master of Science (MS) degree in Artificial Intelligence (AI) and Machine Learning (ML) at Drexel's College of Computing & Informatics (CCI), he knew it was exactly the right move to set him on a life-changing academic trajectory.
Narasimhan completed his study titled “Extractive and Abstractive Text Summarization of US Supreme Court Opinions” as his Drexel capstone project during the Fall ‘22 and Winter ‘23 quarters — his most treasured scholarly accomplishment to date. He presented a poster on his research at the University’s Emerging Graduate Scholars Conference on April 20.
Narasimhan also put his knowledge to the test in real-world application while at Drexel. From June to September 2022, he interned with Oasis-X, an AgTech startup in Philadelphia. But before attending Drexel, Narasimhan had to do some soul-searching.
After enrolling as a philosophy major at Arizona State University, Narasimhan eventually dropped out of college altogether to figure out what he truly wanted to study. And, after a break of several years, he completed his undergraduate degree in computer science at Empire State University in May 2021.
We talked with Narasimhan to discuss his exciting research here at Drexel CCI and his next steps after earning his master’s in AI & ML in spring 2023.
CCI: What is the main concept behind your paper, “Extractive and Abstractive Text Summarization of US Supreme Court Opinions”?
Karthik Narasimhan: The paper addresses the challenge of abstractive text summarization for legal case judgments, specifically US Supreme Court opinions, using natural language processing (NLP) techniques. Abstractive summaries matter because they help laypeople without legal training understand court judgments. However, the long length of these opinions and the limited availability of human-written summaries for them mean that current language models struggle to generate accurate summaries. To address this, I propose a dual transfer learning approach combining the LEGAL-BERT and BART language models to improve summary generation, while acknowledging that further research is needed to enhance their performance in this domain.
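To make the extractive/abstractive distinction concrete: an extractive summarizer selects sentences verbatim from the source document, whereas an abstractive one (like the BART-based approach the paper proposes) generates new text. The sketch below is not from Narasimhan's paper; it is a minimal, hypothetical illustration of the extractive idea, scoring each sentence by the document-wide frequency of its words.

```python
from collections import Counter
import re

def extractive_summary(text: str, num_sentences: int = 2) -> str:
    """Pick the top-scoring sentences, in their original order.

    Each sentence is scored by the average document-wide frequency
    of its words -- a classic, very simple extractive heuristic.
    """
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    freq = Counter(re.findall(r'\w+', text.lower()))
    scored = []
    for i, s in enumerate(sentences):
        words = re.findall(r'\w+', s.lower())
        score = sum(freq[w] for w in words) / max(len(words), 1)
        scored.append((score, i, s))
    top = sorted(scored, reverse=True)[:num_sentences]
    # Re-sort by original position so the summary reads coherently.
    return ' '.join(s for _, _, s in sorted(top, key=lambda t: t[1]))

opinion = (
    "The court held that the statute applies to digital records. "
    "The statute was enacted in 1990. "
    "Counsel raised several procedural objections at trial. "
    "The court applied the statute broadly to the records at issue."
)
print(extractive_summary(opinion, num_sentences=2))
```

An abstractive system replaces this selection step with a sequence-to-sequence model that rewrites the content in its own words, which is what makes legal opinions (long inputs, few reference summaries) such a hard target.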
CCI: What has inspired your studies at Drexel?
KN: While completing my undergraduate degree in computer science, I learned about machine learning online after wondering how Netflix and Spotify recommended content to their users. I also read the book The Master Algorithm by Pedro Domingos, which explains the major concepts within ML for a broad audience. Toward the end of my undergraduate degree, I decided that I wanted to study AI in graduate school. I specifically sought out master's programs in AI, and when I found the website for Drexel's MS in AI and ML program, I knew it was right for me.
CCI: How has Drexel prepared you to conduct your research?
KN: I learned many of the fundamentals of language modeling that were directly applicable to this project from taking the Natural Language Processing course at Drexel. I also applied techniques in exploratory data analysis, statistical analysis and data visualization learned from other data science courses at Drexel. More broadly, though, I significantly improved my research presentation skills through extensive practice delivering final project presentations.
CCI: What's your next step?
KN: My original intention was to move forward with research in abstractive summarization of US Supreme Court opinions by building a large dataset of US Supreme Court opinion-syllabus pairs, but ChatGPT with GPT-4 can now generate accurate and customizable summaries of US Supreme Court opinions. So instead, I am exploring further research in the burgeoning field of generative large language models like GPT-4 and LLaMA, and I am open to collaboration with researchers outside of computer and information sciences. There are many use cases for these models that have yet to be invented, and I am excited to see what the future holds for them.