How to use maximum likelihood estimation to calculate logistic regression parameters

2025-02-27 Update From: SLTechnology News&Howtos


Shulou (Shulou.com) 06/03 Report --

This article explains in detail how to use maximum likelihood estimation to calculate the parameters of logistic regression. The content is shared as a reference; I hope you will gain some understanding of the relevant knowledge after reading it.

1. Maximum likelihood estimation

Maximum likelihood estimation selects the set of parameter values that maximizes the probability of the observed experimental results.

a. If the distribution of X is discrete with distribution law P{X = x} = p(x; θ), where θ is the parameter to be estimated (we assume its set of possible values is known), let X1, X2, ..., Xn be a sample from X. The joint distribution law of X1, X2, ..., Xn is:

∏_{i=1}^{n} p(x_i; θ)    (1)

Let x1, x2, ..., xn be a sample value of X1, X2, ..., Xn. Then the probability that X1, ..., Xn take the values x1, ..., xn, that is, the probability of the event {X1 = x1, ..., Xn = xn}, is:

L(θ) = L(x1, ..., xn; θ) = ∏_{i=1}^{n} p(x_i; θ)    (2)

Here, because the sample values are known, (2) is a function of θ alone; it is called the likelihood function of the sample.

Maximum likelihood estimation: given the sample values x1, ..., xn, select the parameter value θ̂ that makes this probability reach its maximum; θ̂ is then the maximum likelihood estimate. That is, choose θ̂ such that:

L(x1, ..., xn; θ̂) = max_θ L(x1, ..., xn; θ)

θ̂ depends on x1, ..., xn; it is written θ̂(x1, ..., xn) and called the maximum likelihood estimate of the parameter θ.

b. If the distribution of X is continuous with probability density f(x; θ), where the form of f is known and θ is the parameter to be estimated, then the joint density of X1, ..., Xn is:

∏_{i=1}^{n} f(x_i; θ)    (3)

Let x1, ..., xn be a sample value of the corresponding X1, ..., Xn. Then the probability that the random point (X1, ..., Xn) falls in a small neighborhood of (x1, ..., xn) with side lengths dx1, ..., dxn is approximately:

∏_{i=1}^{n} f(x_i; θ) dx_i    (4)

The maximum likelihood estimate is the value of θ that makes the probability (4) largest. Since the factor ∏_{i=1}^{n} dx_i does not depend on θ, the likelihood function is taken to be:

L(θ) = L(x1, ..., xn; θ) = ∏_{i=1}^{n} f(x_i; θ)    (5)

c. To find the maximum likelihood estimate of the parameters:

(1) Write the likelihood function:

L(θ) = ∏_{i=1}^{n} p(x_i; θ)  (discrete case)  or  L(θ) = ∏_{i=1}^{n} f(x_i; θ)  (continuous case)    (6)

Here, n is the number of samples, and the likelihood function represents the probability of the n samples (events) occurring simultaneously.

(2) Take the logarithm of the likelihood function:

ln L(θ) = Σ_{i=1}^{n} ln f(x_i; θ)

(3) Take the partial derivative of the log-likelihood with respect to each parameter and set it to 0, obtaining the log-likelihood equations.

(4) Solve the parameters from this system of equations.
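As an illustration of steps (1)-(4), here is a minimal Python sketch (the function names and the choice of an exponential density f(x; λ) = λ·e^(−λx) are my own assumptions, not from the article). For this density the log-likelihood equation n/λ − Σ x_i = 0 has the closed-form solution λ̂ = n / Σ x_i:

```python
import math

def exp_log_likelihood(lam, xs):
    # Steps (1)-(2): log of L(lam) = prod of lam * exp(-lam * x_i)
    return sum(math.log(lam) - lam * x for x in xs)

def exp_mle(xs):
    # Steps (3)-(4): d/dlam [n*ln(lam) - lam*sum(x)] = n/lam - sum(x) = 0
    return len(xs) / sum(xs)

xs = [0.5, 1.2, 0.8, 2.0, 1.5]
lam_hat = exp_mle(xs)
# The closed-form solution should beat nearby values of lambda.
assert all(exp_log_likelihood(lam_hat, xs) >= exp_log_likelihood(lam_hat + d, xs)
           for d in (-0.1, 0.1))
print(lam_hat)  # 5/6 = 0.8333...
```

The assertion checks numerically that the analytic solution really is a local maximum of the log-likelihood.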

d. Example:

Suppose X has probability density f(x; θ) with unknown parameter θ, and x1, ..., xn is a sample value from X; find the maximum likelihood estimate of θ.

Solution: write out the probability density of X, form the likelihood function L(θ) = ∏_{i=1}^{n} f(x_i; θ), take logarithms, set d ln L(θ)/dθ = 0, and solve for θ̂.

2. Logistic regression

Logistic regression, despite its name, is not regression but classification. It is a classification strategy derived from linear regression: when y takes only two values (for example, 0/1) and linear regression cannot fit the data well, logistic regression is used to classify it.

Here the logistic function (S-shaped, or sigmoid, function) is:

g(z) = 1 / (1 + e^(−z))    (7)

Therefore, the estimation (hypothesis) function can be obtained:

h_θ(x) = g(θ^T x) = 1 / (1 + e^(−θ^T x))    (8)
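Equations (7) and (8) can be sketched in a few lines of Python (a minimal illustration; the function names and the plain-list representation of θ and x are my own choices):

```python
import math

def g(z):
    # (7): the logistic (sigmoid) function, 1 / (1 + e^{-z})
    return 1.0 / (1.0 + math.exp(-z))

def h(theta, x):
    # (8): hypothesis h_theta(x) = g(theta^T x)
    return g(sum(t * xi for t, xi in zip(theta, x)))

print(g(0))                        # 0.5
print(h([1.0, -2.0], [3.0, 1.0]))  # g(1*3 + (-2)*1) = g(1) ≈ 0.731
```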

Here, our aim is to find a set of parameter values θ such that h_θ(x) closely matches the class values of the training samples.

Since binary classification closely resembles the binomial (Bernoulli) distribution, we take h_θ(x) to be the probability that a single sample has class value 1:

P(y = 1 | x; θ) = h_θ(x),  P(y = 0 | x; θ) = 1 − h_θ(x)    (9)

These can be written as one general probability formula:

p(y | x; θ) = (h_θ(x))^y (1 − h_θ(x))^(1−y)    (10)

According to the principle of maximum likelihood estimation, we can estimate θ from the m training samples by maximizing the value of the likelihood function:

L(θ) = ∏_{i=1}^{m} p(y^(i) | x^(i); θ) = ∏_{i=1}^{m} (h_θ(x^(i)))^(y^(i)) (1 − h_θ(x^(i)))^(1−y^(i))    (11)

Here L(θ) is the probability of the m training samples occurring simultaneously. Taking the logarithm gives:

ℓ(θ) = ln L(θ) = Σ_{i=1}^{m} [ y^(i) ln h_θ(x^(i)) + (1 − y^(i)) ln(1 − h_θ(x^(i))) ]    (12)
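The log-likelihood (12) translates directly into code. A minimal Python sketch (function and variable names are mine; X is a list of feature lists, y a list of 0/1 labels):

```python
import math

def log_likelihood(theta, X, y):
    # (12): sum over m samples of y*ln(h) + (1-y)*ln(1-h)
    total = 0.0
    for xi, yi in zip(X, y):
        # h_theta(x_i) = sigmoid(theta^T x_i)
        p = 1.0 / (1.0 + math.exp(-sum(t * v for t, v in zip(theta, xi))))
        total += yi * math.log(p) + (1 - yi) * math.log(1 - p)
    return total

# With theta = 0, every h is 0.5, so the log-likelihood is m * ln(0.5).
print(log_likelihood([0.0, 0.0], [[1, 2], [1, -1]], [1, 0]))  # 2*ln(0.5) ≈ -1.386
```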

We use stochastic gradient ascent to maximize ℓ(θ); the iteration rule is:

θ := θ + α ∇_θ ℓ(θ)    (13)

Taking the partial derivative of ℓ(θ) with respect to each component θ_j, we get:

∂ℓ(θ)/∂θ_j = Σ_{i=1}^{m} ( y^(i) − h_θ(x^(i)) ) x_j^(i)    (14)

Therefore, the iterative algorithm of stochastic gradient ascent is:

Repeat until convergence {
    for i = 1 to m {
        θ_j := θ_j + α ( y^(i) − h_θ(x^(i)) ) x_j^(i)   (for every j)    (15)
    }
}
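The loop above can be sketched in Python (a minimal illustration; the variable names, the toy data, and a fixed epoch count standing in for "until convergence" are my own assumptions):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sgd_logistic(X, y, alpha=0.1, epochs=200):
    """Stochastic gradient ascent with update rule (15):
    theta_j := theta_j + alpha * (y_i - h_theta(x_i)) * x_ij
    """
    theta = [0.0] * len(X[0])
    for _ in range(epochs):            # "repeat until convergence" (fixed epochs here)
        for xi, yi in zip(X, y):       # for i = 1 to m
            err = yi - sigmoid(sum(t * v for t, v in zip(theta, xi)))
            theta = [t + alpha * err * v for t, v in zip(theta, xi)]
    return theta

# Toy data with a bias feature x0 = 1; the class is 1 when x1 > 0.
X = [[1, -2.0], [1, -1.0], [1, 1.0], [1, 2.0]]
y = [0, 0, 1, 1]
theta = sgd_logistic(X, y)
preds = [1 if sigmoid(sum(t * v for t, v in zip(theta, xi))) > 0.5 else 0
         for xi in X]
print(preds)  # [0, 0, 1, 1]
```

Because the update adds (rather than subtracts) the gradient, this is ascent on the log-likelihood, matching the article's maximization of (12).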

Think about:

The crux of finding the maximum likelihood parameters is step c: take the partial derivative with respect to each parameter, set each partial derivative to 0, and solve the resulting system of equations. Because the number of parameters is not fixed in advance and may be large, solving the system of equations directly is very difficult. Therefore, we use stochastic gradient ascent to approximate the maximizing parameter values instead.

Note:

(a) The simplification of formula (14) relies on the derivative of g(z):

g'(z) = g(z) (1 − g(z))    (16)

(b) [Figure omitted: the S-shaped curve of the logistic function g(z).]
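Identity (16) is easy to check numerically; a minimal Python sketch of my own (not from the article), comparing the analytic derivative against a central difference:

```python
import math

def g(z):
    return 1.0 / (1.0 + math.exp(-z))

# (16): g'(z) = g(z) * (1 - g(z)); verify against a central difference
for z in (-2.0, 0.0, 1.5):
    numeric = (g(z + 1e-6) - g(z - 1e-6)) / 2e-6
    analytic = g(z) * (1 - g(z))
    assert abs(numeric - analytic) < 1e-6
```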

That concludes this explanation of how to use maximum likelihood estimation to find logistic regression parameters. I hope the above content is helpful and lets you learn more. If you think the article is good, please share it for more people to see.
