

Microsoft launches Azure AI Content Safety, which automatically detects harmful online content such as hate and violence

2025-02-14 Update From: SLTechnology News&Howtos


Shulou (Shulou.com) 11/24 report --

CTOnews.com, May 24 -- Microsoft has launched Azure AI Content Safety, an AI-driven content moderation service designed to reduce harmful content in online communities.

The product provides a set of trained AI models that detect harmful content related to bias, hate, violence, and similar categories in images or text. It can understand and analyze images and text in eight languages, rates the severity of flagged content, and indicates to human moderators what action needs to be taken.

▲ Source: Azure official website

Azure AI Content Safety is built into the Azure OpenAI Service and is open to third-party developers in eight languages: English, Spanish, German, French, Chinese, Japanese, Portuguese, and Italian. Azure OpenAI Service is a Microsoft-managed, enterprise-focused product that gives businesses access to OpenAI's technology with added governance capabilities. Compared with similar products, Microsoft says Azure AI Content Safety has a better understanding of text and cultural context and handles data and content more accurately.
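The detect-then-rate workflow described above maps onto a straightforward REST call. Below is a minimal sketch in Python of what a text-analysis request to such a service might look like; the endpoint URL, API version, and category names here are illustrative assumptions, not details confirmed by the article, so consult Microsoft's documentation for the real values.

```python
import json

# Hypothetical endpoint and API version -- illustrative assumptions only;
# check the Azure AI Content Safety docs for your resource's actual values.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
API_VERSION = "2023-10-01"


def build_text_analysis_request(text: str, categories=None):
    """Build the URL and JSON body for a hypothetical text-analysis call."""
    if categories is None:
        # Harm categories assumed from the service description above.
        categories = ["Hate", "Violence", "Sexual", "SelfHarm"]
    url = f"{ENDPOINT}/contentsafety/text:analyze?api-version={API_VERSION}"
    body = {"text": text, "categories": categories}
    return url, json.dumps(body)


url, body = build_text_analysis_request("example post to moderate")
print(url)
print(body)
# A real call would POST this body with an authentication header and read
# the per-category severity scores from the response, which a human
# moderator could then act on.
```

The key design point the article describes is that the model does not make a binary block/allow decision: it returns a severity rating per category, leaving the final action to human moderators.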

Microsoft said that, compared with similar products, the service has improved significantly in fairness and contextual understanding, but it still relies on human reviewers to label data and content. This means its fairness ultimately depends on people: human reviewers may bring their own biases to the data and content they handle, so the system cannot be completely neutral and prudent.

Machine-learning visual perception has a troubled history here: in 2015, Google's AI image-recognition software labeled Black people as gorillas, causing huge controversy. Eight years later, tech giants are still wary of repeating the same mistakes. Earlier this month, OpenAI CEO Sam Altman called on the government to regulate artificial intelligence at a U.S. Senate subcommittee hearing, warning that if the technology goes wrong, it could go very wrong and cause great harm to the world. As CTOnews.com notes, even advanced AI can be abused or misused, so oversight and regulation remain important.
