CS seminar: Attack Large Language Models

This event is in the past.

When:
November 18, 2024
11:30 a.m. to 12:20 p.m.
Where:
M. Roy Wilson State Hall
5143 Cass Ave (Room #1216)
Detroit, MI 48202
Event category: Seminar
In-person

Speaker

Yuguang Yao, Michigan State University

Abstract

OpenAI GPT-4o? Claude 3.5 Sonnet? Llama 3? Which LLM are you using? Whichever it is, it is not fun when you get a rejection: "Sorry, I am an AI assistant and cannot help you due to regulations." Safety alignment makes it harder for such LLMs to comply with requests involving political, sexual, health, or racial information. In this talk, I will detail some explorations around jailbreaking LLMs and VLLMs (Visual Large Language Models). Can we attack an LLM to get whatever we want? How about a VLLM? How about embodied AI agents built on foundation models?

Bio

Yuguang Yao is a final-year Ph.D. candidate in the OPTML group in the Department of Computer Science and Engineering at Michigan State University. He has published in top conferences including NeurIPS, ICLR, and CVPR. He has also served as a chair of the Adversarial Machine Learning: Frontiers workshop at ICML 2022, ICML 2023, and NeurIPS 2024. He loves working out and benches 240 lbs.

Contact

Lori Smith
lorismith@wayne.edu

Cost

Free