
Example Analysis of Logistic Regression Limitations in Python

This article presents an example-based analysis of the limitations of logistic regression in Python. It is meant to be easy to understand and to the point; I hope it helps resolve your doubts as you work through it.

1. The limitations of logistic regression

In logistic regression classification, a linear function of the inputs is passed through the sigmoid function and the result is used to classify, which amounts to drawing a dividing line on the plot. In the situation shown in the figure below, however, no straight line can completely separate the two classes, no matter how it is drawn.

But if we transform the input features, perfect classification becomes possible. For example:

Create a new feature x1, the distance to (0, 0), and another feature x2, the distance to (1, 1). The new features for the four points can then be computed and plotted on the coordinate system shown in the right-hand figure below. After this transformation, the transformed data can be fed into a logistic regression that separates it completely.
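To make this concrete, here is a minimal sketch in Python of the transformation just described. It assumes the four XOR-style points (0, 0), (0, 1), (1, 0), (1, 1) with labels 0, 1, 1, 0 as a stand-in for the data in the figure: a plain logistic regression cannot separate the raw points, but after replacing each point with (distance to (0, 0), distance to (1, 1)) it can.

```python
# Minimal sketch: XOR-style points that a straight line cannot separate,
# then the distance-based feature transformation described above.
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])

# A logistic regression on the raw coordinates: a linear boundary can get
# at most 3 of these 4 points right.
raw_model = LogisticRegression().fit(X, y)
print("accuracy on raw features:", raw_model.score(X, y))

# New features: distance to (0, 0) and distance to (1, 1).
x1_new = np.linalg.norm(X - np.array([0.0, 0.0]), axis=1)
x2_new = np.linalg.norm(X - np.array([1.0, 1.0]), axis=1)
X_new = np.column_stack([x1_new, x2_new])

# On the transformed features the two classes become linearly separable.
new_model = LogisticRegression().fit(X_new, y)
print("accuracy on transformed features:", new_model.score(X_new, y))
```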

Although it is not easy to find such a transformation by hand, we can learn it with logistic regression itself: use a first logistic regression to produce the first transformed feature x1, a second logistic regression to produce the second transformed feature x2, then take these two as new inputs to a third logistic regression, which completes the classification.

Therefore, we can adjust the parameters so that, for inputs x1 and x2, each point's probability of belonging to the two classes (really just a number between 0 and 1; call it a probability for now) comes out as shown in the figure below. The point in the upper-left corner, for example, has probabilities (0.73, 0.05) for the two classes, and the other points likewise each get a pair of probabilities. Plotting these values on the axes completes the feature transformation; feeding the transformed results into a new logistic regression then completes the classification.
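Below is a minimal sketch of cascading three logistic regression units in this way. The weight values are hand-picked assumptions (roughly an OR-like unit, an AND-like unit, and a final unit that fires for "OR but not AND"); they are not taken from the article, which only gives the probabilities in the figure, but they show how the first two units' outputs become the new features for the third.

```python
# Minimal sketch: three logistic regression units wired together.
# All weight and bias values are illustrative assumptions.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_unit(x, w, b):
    # One logistic regression unit: linear function followed by sigmoid.
    return sigmoid(np.dot(x, w) + b)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)

for x in X:
    h1 = logistic_unit(x, w=np.array([20.0, 20.0]), b=-10.0)   # first transformed feature
    h2 = logistic_unit(x, w=np.array([20.0, 20.0]), b=-30.0)   # second transformed feature
    out = logistic_unit(np.array([h1, h2]), w=np.array([20.0, -20.0]), b=-10.0)
    print(x, "->", round(float(out), 3))   # close to 0, 1, 1, 0: the XOR pattern
```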

2. The introduction of deep learning

It can be seen that each logistic regression unit acts not only as a receiver that takes in input data, but also as a sender, passing its own output on as input to other logistic regression units.

When multiple logistic regression units are connected together in this way, the result is called a neural network, and each logistic regression unit is a neuron. This way of learning is called deep learning.

Here is an example:

Suppose the initial input data are 1 and -1 and all the weights are known. For example, given the weights from the two inputs into the two first-layer neurons, the linear combinations, passed through the sigmoid function, come out to 0.98 and 0.12. Likewise, knowing all the remaining weights (parameters), we can eventually obtain the two final outputs, 0.62 and 0.83.

When the initial inputs are 0 and 0, the same computation yields the outputs 0.51 and 0.85. So whatever the input is, a series of transformations through a series of parameters always turns it into data with completely different characteristics.
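As a sketch of this forward computation, the first-layer weights and biases below are assumptions chosen so that the inputs 1 and -1 reproduce the 0.98 and 0.12 mentioned above; the article does not list the remaining weights, so the later layers that would produce 0.62 and 0.83 are only indicated in a comment.

```python
# Minimal sketch of one forward step. W1 and b1 are assumed values that
# happen to reproduce the 0.98 / 0.12 outputs quoted in the text.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([1.0, -1.0])                 # initial input data

W1 = np.array([[1.0, -2.0],               # assumed weights into the two
               [-1.0, 1.0]])              # first-layer neurons
b1 = np.array([1.0, 0.0])                 # assumed biases

a1 = sigmoid(W1 @ x + b1)
print(np.round(a1, 2))                    # [0.98 0.12]

# Each following layer repeats the same pattern with its own weights:
# a_next = sigmoid(W_next @ a1 + b_next)
```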

Therefore, the whole network can be regarded as a single function. More generally, as shown in the figure below, each circle is a neuron; the layer at the front that receives the input is called the input layer, the last layer, whose outputs are not fed to any further neurons, is called the output layer, and everything in between is a hidden layer. When each neuron is connected to all the neurons in the next layer, as in the figure below, the network is called a fully connected neural network.

3. The calculation method of deep learning

In deep learning, the computation is usually carried out with matrix operations.

More generally:

That is, multiply this layer's weights by the values passed in from the previous layer, add the bias terms, and pass the whole thing through the sigmoid function; the result is this layer's output, which the next layer takes as its input. The same operation is repeated for every layer, all the way to the output layer.
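A minimal sketch of this layer-by-layer matrix computation, with illustrative layer sizes and randomly initialized weights (both assumptions, since the figure's actual values are not given), might look like this:

```python
# Minimal sketch: forward pass as repeated sigmoid(W @ a + b).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, weights, biases):
    a = x
    for W, b in zip(weights, biases):
        a = sigmoid(W @ a + b)            # this layer's output, fed to the next layer
    return a                              # output-layer activations

rng = np.random.default_rng(0)
layer_sizes = [2, 3, 3, 2]                # input layer, two hidden layers, output layer
weights = [rng.standard_normal((n_out, n_in))
           for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [rng.standard_normal(n_out) for n_out in layer_sizes[1:]]

print(forward(np.array([1.0, -1.0]), weights, biases))
```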

4. The loss function of a neural network

For a sample, the loss function is shown in the following figure:

For example, suppose the input is a sample of the digit "1" with 256 pixels, i.e., 256 features. Feeding it into the neural network produces a 10-dimensional output vector with a probability value in each dimension; say the probability of "1" is 0.8 and the probability of "2" is 0.1. The actual label is "1", so in the target vector only ŷ1 is 1 and all the other components are 0. Computing the cross-entropy between these two vectors and summing, as in the formula in the figure above, gives C, the loss for this sample.

For the whole training set, the losses of all the samples are computed and summed.
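A minimal sketch of this loss, using illustrative probability values that echo the 0.8 / 0.1 example above (in the article's notation, ŷ is the one-hot label):

```python
# Minimal sketch: cross-entropy between a one-hot label and the network's
# 10-dimensional output, summed over samples for the total loss.
import numpy as np

def cross_entropy(target_one_hot, predicted_probs):
    eps = 1e-12                            # avoid log(0)
    return -np.sum(target_one_hot * np.log(predicted_probs + eps))

y_pred = np.array([0.8, 0.1, 0.01, 0.01, 0.01, 0.02, 0.01, 0.02, 0.01, 0.01])
y_true = np.zeros(10)
y_true[0] = 1.0                            # the sample's label is "1"

print(cross_entropy(y_true, y_pred))       # C, the loss for this one sample

# Total loss over the training set: sum the per-sample losses.
# L = sum(cross_entropy(y_true_i, y_pred_i) for each sample i)
```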

That is all the content of "Example Analysis of Logistic Regression Limitations in Python". Thank you for reading! I hope what has been shared here is helpful; if you want to learn more, you are welcome to follow the industry information channel.
