
This event has passed.
Large Language Model (LLM) Jailbreaking and Prompt Hijacking
February 24, 6:30 pm-7:30 pm
Free
Join Root66Tulsa to learn how Large Language Models (LLMs) like GPT-4, Claude, and Gemini can be manipulated through jailbreaking and prompt hijacking, and to explore the techniques and risks associated with these attacks. Open to all UTulsa students.