What is the Python moment and moment generating function
2025-01-18 Update From: SLTechnology News&Howtos
This article explains what moments and the moment generating function are, with Python examples. The methods introduced are simple, fast, and practical.
Skewness
A question worth asking: are expectation and variance enough to describe a distribution? If the answer were yes, there would be no need for other descriptive quantities. In fact, these two alone cannot describe a distribution completely.
Let's look at two distributions. The first is the exponential distribution:

f(x) = e^(−x) if x ≥ 0, and f(x) = 0 if x < 0

The second distribution's density curve is the mirror image of this one about the line x = 1. It therefore has the same expectation and variance as the original: E(X) = 1 and Var(X) = 1.
We plot the two density curves, as shown in the figure below:
Even with identical expectation and variance, the two density curves are clearly different: the first concentrates its mass on the left with a long tail to the right, while the mirrored curve concentrates its mass on the right with a long tail to the left. To express this feature of a distribution, we introduce a new descriptive quantity, skewness, defined as follows:
Skew(X) = E[(X − μ)³]

(Strictly speaking, skewness is usually standardized by dividing by σ³; since σ = 1 for both distributions here, the values coincide.)
For the two distributions above, the first (right-tailed) curve has skewness 2, and the mirrored curve has skewness −2. Clearly, different skewness can produce very different distributions, even when expectation and variance are the same.
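These skewness values can be checked numerically. This is only a quick sketch: the sample size and random seed are arbitrary choices, and the estimates are approximate.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=200_000)  # Exp(1): skewness is 2
y = 2 - x                                     # mirror image about x = 1: skewness is -2

def third_central_moment(s):
    # since sigma = 1 for these distributions, this equals the skewness
    return np.mean((s - s.mean()) ** 3)

print(third_central_moment(x))  # close to 2
print(third_central_moment(y))  # close to -2
```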
The plotting code is as follows:

```python
from scipy.stats import expon
import numpy as np
import matplotlib.pyplot as plt

rv = expon(scale=1)
x1 = np.linspace(0, 20, 100)
x2 = np.linspace(-18, 2, 100)
y1 = rv.pdf(x1)
y2 = rv.pdf(2 - x2)  # mirror image about the line x = 1
plt.fill_between(x1, y1, 0.0, color="green")
plt.fill_between(x2, y2, 0.0, color="coral", alpha=0.5)
plt.xlim([-6, 8])
plt.title("two distributions")
plt.xlabel("RV")
plt.ylabel("f(x)")
plt.show()
```

Moment
Observe the definitions of variance and skewness:
Var(X) = E[(X − μ)²]

Skew(X) = E[(X − μ)³]
Both are expectations of functions of X. The only difference lies in the form of the function, namely the power of (X − μ): the variance uses the second power, the skewness the third.
These descriptive quantities all belong to a family called moments. Like variance and skewness, expectations of powers of (X − μ) are called central moments: E[(X − μ)^k] is the k-th central moment, written μ_k, where k = 2, 3, 4, …
There is another kind, the moment about the origin, which is the expectation of a power of X: E[X^k] is the k-th origin moment, written μ′_k, where k = 1, 2, 3, …
Expectation is the first-order origin moment:

E(X) = E(X¹)
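The definitions above can be illustrated by estimating moments from samples. A small sketch (the sample size and seed are arbitrary choices), using the Exp(1) distribution, whose k-th origin moment is k! and whose first three central moments are 0, 1, 2:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.exponential(scale=1.0, size=500_000)
mu = x.mean()

# k-th origin moment E[X^k]; theoretical values for Exp(1): 1, 2, 6
origin = {k: np.mean(x ** k) for k in (1, 2, 3)}
# k-th central moment E[(X - mu)^k]; theoretical values: 0, 1, 2
central = {k: np.mean((x - mu) ** k) for k in (1, 2, 3)}

print(origin)
print(central)
```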
Moment generating function
Beyond center (expectation), spread (variance), and skewness, higher-order moments describe other features of a distribution. Moments play an important role in statistics; for example, the method of moments is an important approach to parameter estimation. However, by definition, computing a moment requires taking the expectation of a power of X, which involves an integral that is often not easy. This motivates the moment generating function, a powerful tool for obtaining moments.
Before looking at the moment generating function, let's review power series. A power series is a weighted sum of the powers of x (1, x, x², x³, …):

∑_{i=0}^{∞} a_i x^i

where the a_i are constants.
Power series are an important tool in mathematics. Their beauty is that analytic functions can be written in power-series form. For example, the trigonometric function sin(x) can be written as:

sin(x) = x − x³/3! + x⁵/5! − x⁷/7! + …
Decomposing an analytic function into a power series is its Taylor expansion; we won't delve into the specifics here. xⁿ is a very simple function: it is infinitely differentiable and easy to differentiate. This feature makes power series easy to work with, so writing an analytic function as a power series turns something complex into something simple.
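The Taylor expansion above can be reproduced symbolically. A small illustration using SymPy's `series` function:

```python
import sympy as sp

x = sp.symbols('x')

# Taylor expansion of sin(x) around 0, up to (but not including) x**8
s = sp.series(sp.sin(x), x, 0, 8).removeO()
print(s)  # the polynomial x - x**3/6 + x**5/120 - x**7/5040
```

Note that 3! = 6, 5! = 120, and 7! = 5040, matching the series written above.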
Power series are used widely in mathematics: in analysis, in differential equations, and in complex functions. It has to be said that mathematicians are very good at reusing a thoroughly studied tool.
If we regard the x in a power series f(x) = a₀ + a₁x + a₂x² + … as a random variable X and take expectations, then by the linearity of expectation:

E(f(X)) = a₀ + a₁E(X) + a₂E(X²) + a₃E(X³) + …

So we can compute the expectation of f(X) through the moments.
Conversely, can we obtain the moments from an analytic function? Consider the following exponential function, written in power-series form:
e^(tx) = 1 + tx + (tx)²/2! + (tx)³/3! + (tx)⁴/4! + …
Once again, we regard x as a random variable X and calculate the expectations on both sides, that is,
E(e^(tX)) = 1 + tE(X) + t²E(X²)/2! + t³E(X³)/3! + t⁴E(X⁴)/4! + …
Even when the distribution of the random variable is fixed, the value of E(e^(tX)) still varies with t, so it is a function of t. We call it M(t): the moment generating function. Differentiating the series form of M(t) and setting t = 0 makes every higher-order term in t vanish, leaving only E(X), that is,
M′(0) = E(X)
This is the first moment. Taking higher-order derivatives and setting t = 0 yields the higher-order moments:
M^(r)(0) = E(X^r)
Interestingly, the factorials produced by repeated differentiation exactly cancel the factorials in the power-series coefficients, which is why the formula above takes such a clean form. Thus every moment can be obtained by differentiating the power-series form of the moment generating function. Compared with integration, differentiation is an easy operation.
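This differentiation recipe can be verified symbolically. A sketch using the Exp(1) distribution, whose moment generating function is M(t) = 1/(1 − t) for t < 1 and whose r-th origin moment is r!:

```python
import sympy as sp

t = sp.symbols('t')
M = 1 / (1 - t)  # MGF of the exponential distribution with rate 1

# r-th derivative of M at t = 0 should equal E[X^r] = r!
moments = [sp.diff(M, t, r).subs(t, 0) for r in (1, 2, 3, 4)]
print(moments)  # [1, 2, 6, 24]
```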
Properties of moment generating function
One side of the moment generating function is the power series, which we have discussed at length. The other side is its analytic form as the expectation of an exponential function. That is,
M(t) = E[e^(tX)] = ∫_{−∞}^{∞} e^(tx) f(x) dx
Once we know the specific form of f(x), we can compute the moment generating function by this integral and then obtain the moments of every order. Of course, moments can also be computed directly from their definition, but in many cases the exponential integral above can reuse existing results, making the moment generating function easy to obtain. Getting moments via the moment generating function is then much more convenient.
The moment generating function is defined via expectation, so some properties of expectation lead to interesting results.
Property 1: if the moment generating function of X is M_X(t) and Y = aX + b, then

M_Y(t) = e^(bt) M_X(at)
(Writing the expectation of e^(tY) in exponential form easily proves this: E[e^(t(aX+b))] = e^(bt) E[e^((at)X)] = e^(bt) M_X(at).)
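Property 1 can be checked by Monte Carlo. A sketch with X ~ Exp(1), whose exact MGF is M_X(s) = 1/(1 − s) for s < 1; the constants a, b, t below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.exponential(scale=1.0, size=400_000)
a, b, t = 0.5, 1.0, 0.3  # arbitrary; note a*t < 1 so the MGF exists

# M_Y(t) estimated directly from samples of Y = aX + b
mgf_y = np.mean(np.exp(t * (a * x + b)))
# e^{bt} * M_X(at) using the known exact MGF of Exp(1)
exact = np.exp(b * t) / (1 - a * t)

print(mgf_y, exact)  # the two values should be close
```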
Property 2: if X and Y are independent random variables with moment generating functions M_X and M_Y respectively, then for the random variable Z = X + Y,

M_Z(t) = M_X(t) M_Y(t)
(This follows because the expectation of a product of independent random variables equals the product of their expectations.)
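Property 2 can also be checked by Monte Carlo. A sketch with independent X, Y ~ Exp(1), so Z = X + Y has the known MGF 1/(1 − t)² (that of a Gamma(2, 1) distribution); the sample size, seed, and t are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.exponential(scale=1.0, size=400_000)
y = rng.exponential(scale=1.0, size=400_000)
t = 0.3  # arbitrary; t < 1 so all the MGFs exist

m_z = np.mean(np.exp(t * (x + y)))                        # M_Z(t) from samples
m_x_m_y = np.mean(np.exp(t * x)) * np.mean(np.exp(t * y))  # M_X(t) * M_Y(t)
exact = 1 / (1 - t) ** 2                                   # known MGF of Z

print(m_z, m_x_m_y, exact)  # all three should be close
```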
Exercise: derive the moment generating function of the Poisson distribution.