阅读疯子 ydfz.cn » Tomshardware » Researchers jailbreak AI chatbots with ASCII art -- ArtPrompt bypasses safety measures to unlock malicious queries
首页
刷新
搜索
管理
展开/收起
Tomshardware
March 08, 2024
Researchers have developed ArtPrompt, a new way to circumvent the safety measures built into large language models (LLMs). According to their research paper, chatbots such as GPT-3.5, GPT-4, Gemini, Claude, and Llama2 can be induced to respond to queries they are designed to reject using ASCII art prompts generated by their tool.
固定链接 ' Researchers jailbreak AI chatbots with ASCII art -- ArtPrompt bypasses safety measures to unlock malicious queries '
提交:
March
7
, 2024, 7:50am CST
← Intel to get $3.5 billion infusion from U.S. gov't to make chips for military: Report
AMD updates FreeSync minimum requirements — 144Hz or higher refresh rate needed for 1080p and 1440p displays →
未登陆
登录