Q & A with Yue Zhang: Have AI Chatbots Earned Our Trust?

Millions are turning to AI-driven large language models (LLMs) like ChatGPT, Gemini, and Copilot for tasks ranging from writing assistance to cybersecurity. Despite their growing role as trusted assistants, questions remain about whether these programs — and the companies behind them — have truly earned our trust. A new survey from Drexel University's College of Computing & Informatics (CCI) explores the benefits and risks of LLMs in enhancing privacy and security.

The survey was conducted by CCI faculty members Yue Zhang, PhD, and Eric Sun, PhD, who head the College's Security and Privacy Analytics Laboratory. It reviewed 281 papers on LLMs, privacy and security, and found that a majority of them (144), most published in the past year, focus on vulnerabilities and weaknesses within LLMs and the security and privacy risks they could pose — suggesting that this is an area of growing concern.

While LLMs can improve code security and support cybersecurity monitoring, they also present vulnerabilities and privacy risks. Zhang recently shared some insights with Drexel University’s News Blog about LLM security and what people should know about the technology before using it.


Read more on Drexel’s News Blog
