
Microsoft reports that GPT-4 is vulnerable to "jailbreak" prompts that coax it into generating undesirable content

2025-02-24 Update From: SLTechnology News&Howtos


Shulou (Shulou.com) 11/24 Report --

CTOnews.com reported on Oct. 18 that Microsoft's research team recently published a paper studying in detail the trustworthiness and potential toxicity of large language models (LLMs), with particular attention to OpenAI's GPT-4 and its predecessor, GPT-3.5.

The research team found that GPT-4, although more reliable than GPT-3.5 on standard benchmarks, is more vulnerable to "jailbreak" prompts that bypass the model's safety measures. Given such prompts, GPT-4 may generate harmful content.

The paper emphasizes that GPT-4 is more susceptible to malicious "jailbreak" system or user prompts, and will faithfully follow such (misleading) instructions to produce undesirable content. Microsoft stressed that this potential vulnerability does not affect its current customer-facing services.
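The distinction between "system" and "user" prompts mentioned above refers to the two levels at which instructions are typically supplied to a chat model. The sketch below is purely illustrative, assuming a common chat-completion request layout; the model name and helper function are hypothetical placeholders, and no real API call is made.

```python
# Illustrative sketch: how "system" vs. "user" prompts are commonly
# structured in a chat-style request. A jailbreak can be injected at
# either level: a malicious system prompt replaces the model's default
# guardrail instructions, while a malicious user prompt tries to talk
# the model out of following them.

def build_chat_request(system_prompt: str, user_prompt: str) -> dict:
    """Assemble a chat-style request body with distinct message roles.

    This is a hypothetical helper for illustration only; real client
    libraries build an equivalent structure internally.
    """
    return {
        "model": "gpt-4",  # placeholder model identifier
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    }

request = build_chat_request(
    system_prompt="You are a helpful assistant.",
    user_prompt="Summarize today's IT news.",
)
print(request["messages"][0]["role"])  # system
print(request["messages"][1]["role"])  # user
```

Because both fields are plain text under the sender's control, the paper's finding is essentially that GPT-4 follows either one too literally when it contains adversarial instructions.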

CTOnews.com has attached the address of Microsoft's official paper for interested readers to explore in depth.


