

UGC products are frequently taken off the shelves: how can platforms escape the content review dilemma?

2025-02-22 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/01 Report--

A popular "grass-planting" (product recommendation) app recently vanished from Android app stores over content violations, arguably the hottest topic in the industry of late.

As a C2C grass-planting platform, the app not only let users share recommendation lists online but also allowed purchase links in posts, so consumers could take in a recommendation and buy the product on the spot. The combination of the two made it a perfect traffic-funneling tool.

In fact, the platform itself is just a simple content-sharing community where anyone can publish anything. For a C2C UGC platform, as long as users can publish content, the risk of that content getting out of control is always there. Comment sections, user nicknames, private messages, forums... any of them can be abused by bad actors to advertise prohibited goods, funnel traffic to illicit services, or run scams, until the platform's content spirals out of control.

The picture shows: violation information

The problem of out-of-control content is really a subset of anti-fraud risk control, so this article looks at the current state of enterprise content governance from two angles: first, how the black and gray market uses UGC platforms to publish illegal information and run traffic-funneling scams; second, what kinds of solutions enterprises adopt when they run into these problems.

I. Why is it hard for enterprises to identify the mass of prohibited information published by the black market?

In the Internet's black and gray market, whether for "wool party" bonus hunters or paid-posting "water armies," holding accounts on each platform is the first step into the "workplace," so registering accounts in bulk has become the source of all malicious behavior. This black industry chain now has a clear upstream-midstream-downstream division of labor. First, midstream account merchants buy a batch of mobile numbers from upstream card dealers, then use SMS-code-receiving platforms, "cat pools" (GSM modem pools), CAPTCHA-solving platforms, and other automated tools to register fake accounts in bulk; the accounts are then sold downstream for profit-driven abuse such as bonus hunting, engagement fraud, and spreading prohibited content.

Picture from the network: a "cat pool" (GSM modem pool)

The picture shows: code receiving/coding platform

It is precisely such batches of accounts that feed prohibited content onto content platforms. Many platforms filter them at the registration/login stage with security protection policies or real-name authentication. But platform risk control versus the black market is a continuous game. Once attackers discover that advertisements and other illegal content posted by freshly registered accounts are quickly detected and banned, they first let the account live as a "normal person" for a while; only after the platform's monitoring genuinely treats it as normal do they tear off the disguise and start doing harm. This practice is called "account warming."

"Account warmers" use group-control hardware plus automation scripts to drive many accounts to post, comment, or like in batches, all to mimic the behavior of normal individual users and slip past the platform's risk-control rules.

Picture from network: group control scene

According to continuous monitoring by Threat Hunter, newly registered accounts sell for 0.1-10 yuan depending on how much profit a given platform offers. After half a year or more of "warming," the price multiplies to 1-100 yuan or higher. If the account "carries a real name," the price climbs further: simply buying real-name information from a "material merchant" and binding it to the account can double the selling price again, to 10-200 yuan or more (WeChat accounts, for example).

The picture shows: Account sales monitored by Threat Hunter Anti-Fraud Intelligence Monitoring Platform

These "warmed" accounts are then bought by downstream black-market operators for advertising, marketing, and spreading illegal information, which is how black accounts end up on content platforms: by impersonating normal accounts, they slip past the platform's protection rules.

After that, by constantly varying how sensitive words are written (e.g., "WeChat" becomes the homophone "威信," or the abbreviation "VX"), the black market bypasses the platform's keyword filters, then uses automated scripts to publish illegal content in bulk.

The picture shows: violation information

In reality, though, life is no longer so easy for the black market, thanks to the crackdown measures platforms have adopted.

After being hit by black-market abuse, many content platforms immediately tighten content review, intercept malicious accounts through risk-control policies, or set up review teams and build data models to crack down on platform cheating.

II. Basic measures to prevent the occurrence of illegal content

It has become the norm for regulators to tighten supervision and for content platforms to improve self-review. Under strong supervision, an enterprise may strengthen its own review capability or work with third parties to improve content security, but the underlying logic is fairly fixed: a risk-control system plus a review system.

1. Use an anti-fraud risk-control system to intercept black-market accounts at the source

Machine review and manual review are both ways of managing content after a violation has already occurred.

In the course of confronting the black market, however, we found that if a black account can be intercepted as it enters the platform, before it publishes illegal content, a large share of the violation problem is solved.

Accounts that publish illegal content on a platform are usually built from phone numbers the black market buys on card-dealing platforms and then registers in bulk with automated tools. If the platform can identify and intercept them at the moment of registration or login through an account-identification system, it can stop the harm before it starts.

In building an anti-fraud system, one can accumulate large numbers of the fake mobile numbers and malicious IP resources the black market uses by monitoring the core nodes of its industry chain. When the black market registers accounts with these resources, the platform can intercept the risky account at registration or login, or downgrade its privileges.
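The intercept-or-downgrade decision at registration time can be sketched as a simple scoring check. Everything below (list contents, prefixes, thresholds, function names) is an illustrative assumption, not any platform's real policy or real intelligence data:

```python
# Sketch of registration-time risk screening against accumulated blocklists.
# All values here are illustrative placeholders, not real threat intelligence.

VIRTUAL_NUMBER_PREFIXES = {"170", "171", "165"}   # example prefixes favored by bulk registrars
MALICIOUS_IPS = {"203.0.113.7", "198.51.100.23"}  # example IPs seen in prior abuse

def registration_risk(phone: str, ip: str) -> str:
    """Return 'block', 'downgrade', or 'allow' for a registration attempt."""
    score = 0
    if phone[:3] in VIRTUAL_NUMBER_PREFIXES:
        score += 2          # number likely sourced from a code-receiving platform
    if ip in MALICIOUS_IPS:
        score += 2          # IP previously observed registering fake accounts
    if score >= 4:
        return "block"      # intercept outright
    if score >= 2:
        return "downgrade"  # admit, but restrict posting until trust is earned
    return "allow"
```

A "downgrade" outcome reflects the soft option mentioned above: rather than rejecting a borderline registration, the account is let in with reduced privileges so a false positive costs little.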

This layer of protection addresses illegal content at the source. For the fish that slip through the net, a supplementary content review mechanism basically solves the content security problems of most platforms.

2. Supplement with a content review mechanism to filter illegal content

Use machine review to filter out violating content

At present, violating information appears on platforms in four main forms: text, images, video, and audio.

Text is relatively cheaper to handle than the other three content types. A platform can maintain its own dynamic text library, continuously collecting and updating sensitive vocabulary, or contract a security service provider for access to an external one. The logic is the same either way: filter out illegal text through a keyword lexicon, then follow up with manual handling or direct deletion.
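A minimal sketch of that lexicon-filtering logic, assuming a hand-maintained word list (the terms and function names below are placeholders):

```python
# Minimal sketch of lexicon-based text filtering. The word list is a
# placeholder; real platforms maintain a continuously updated library
# or pull one from a security vendor.

BANNED_TERMS = {"add my wechat", "casino", "replica watches"}  # illustrative only

def find_violations(text: str, lexicon=BANNED_TERMS) -> list:
    """Return the banned terms that appear in the text (case-insensitive)."""
    lowered = text.lower()
    return [term for term in lexicon if term in lowered]

def review_text(text: str) -> str:
    """'flag' hands the post to deletion or manual review; 'pass' publishes it."""
    return "flag" if find_violations(text) else "pass"
```

For large lexicons, a multi-pattern matcher such as an Aho-Corasick automaton does the same job in a single pass over the text instead of one scan per term.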

Images, video, and audio, unlike text, are usually handled with artificial intelligence and machine-learning techniques. Building a complete intelligent review system is relatively costly, so for such content platforms tend to buy the services of a security vendor.

The picture shows: intelligent technology involved in machine audit

Take image review as an example. Violations in images come in two types:

One is that the image content itself is illegal, for example, it contains prohibited material, weapons, or scenes of violence;

The other is illegal information deliberately added to the image after the fact, for example, a promotional QR code, WeChat ID, or phone number overlaid on the picture.

Both types can be identified with third-party intelligent recognition technology. As long as the service provider keeps training the model on newly collected image data and iterates it quickly, recognition accuracy can keep improving.
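One common building block in such systems, matching uploads against a library of known violating images, can be sketched under the assumption that an upstream step has already computed a 64-bit perceptual hash (e.g., pHash) for each image; near-duplicate detection then reduces to a small Hamming distance between hashes. The hash values and threshold below are illustrative:

```python
# Sketch of known-bad image matching. Assumes an upstream step has already
# computed a 64-bit perceptual hash for each image; near-duplicate violating
# images then show up as a small Hamming distance between hashes.

KNOWN_BAD_HASHES = {0xF0F0F0F0F0F0F0F0, 0x123456789ABCDEF0}  # example values

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two 64-bit hashes."""
    return bin(a ^ b).count("1")

def matches_known_bad(img_hash: int, threshold: int = 6) -> bool:
    """True if the hash is within `threshold` bits of any known violating image."""
    return any(hamming(img_hash, bad) <= threshold for bad in KNOWN_BAD_HASHES)
```

Perceptual hashes tolerate small crops, recompression, and watermarks, which is exactly how black-market reposters try to dodge exact-hash filters; genuinely novel violating images still require a trained classifier.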

Supplement with manual review to reduce the misjudgment rate

Using artificial intelligence to identify illegal content is, to a large extent, about saving manpower. But however much the industry hypes AI, at this stage the technology is simply not intelligent enough to work without human assistance.

The picture shows: manual audit + machine audit complementary audit mechanism

Take the phrase "scan code," for example: it circulates in near-homophone variant spellings (rendered literally, "stone horse," "less horse," and so on), and AI cannot accurately identify every variant. Moreover, sensitive-content filters must not be set too strictly, or normal content will be killed by mistake.
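One partial mitigation is to normalize known variant spellings back to canonical form before keyword matching. The mapping below is a small illustrative sample; in practice such lists grow from the variants reviewers keep encountering:

```python
# Sketch: rewrite known variant spellings to canonical form before matching.
# The mapping is illustrative only; real lists are maintained by review teams.

VARIANT_MAP = {
    "威信": "微信",  # homophone spelling of "WeChat"
    "vx": "微信",    # common abbreviation variant
}

def normalize(text: str) -> str:
    """Lower-case the text and rewrite known variants to canonical form."""
    out = text.lower()
    for variant, canonical in VARIANT_MAP.items():
        out = out.replace(variant, canonical)
    return out

def hits_keyword(text: str, keyword: str = "微信") -> bool:
    """Check whether the canonical keyword appears after normalization."""
    return keyword in normalize(text)
```

Because the variant space is open-ended, normalization only catches spellings already on the list, which is precisely why the article argues manual review stays essential.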

In such cases, manual review is an essential link.

On top of the machine review mechanism, different platforms can set different review policies, labeling violating content by how confidently it can be judged and then handling each label accordingly.

For example, content determined to be violating can be labeled [high risk] and directly deleted or banned; content suspected of violating can be labeled [medium risk] and met with a text warning; content that hits only a small number of rules, whose nature cannot be determined accurately, can be labeled [low risk] and routed to manual review to avoid accidental kills.
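The three-tier policy above can be sketched as a small routing function; the confidence thresholds and action names are assumptions for illustration, not any platform's actual settings:

```python
# Sketch of tiering machine-review results per the high/medium/low-risk
# policy described above. Thresholds and action names are example values.

def route(confidence: float, hit_count: int) -> tuple:
    """Map a machine-review result to (risk_label, action)."""
    if confidence >= 0.9:
        return ("high", "delete_and_ban")   # clearly violating
    if confidence >= 0.6:
        return ("medium", "warn_user")      # suspected violation
    if hit_count > 0:
        return ("low", "manual_review")     # a few hits, nature unclear
    return ("clean", "publish")
```

Routing only the ambiguous low-risk tier to humans keeps the manual queue small while still giving borderline content a second look.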

The manual review process is critical for every content platform. The larger content platforms even build their own review operations, with teams ranging from a few dozen to thousands of people reviewing platform content.

Written at the end

In practice, different platforms present content in different ways, and the final solution and rule settings vary from platform to platform. But whatever the platform, one thing must be recognized: black-market methods keep evolving, and new forms of content distribution will keep appearing to dodge interception strategies. Only by paying continuous attention to content governance and constantly iterating protection technology can the business-security side stay ahead in this cat-and-mouse game.
