How to implement a Kalman filter in Python

2025-04-05 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/02 Report--

This article explains how to implement a Kalman filter in Python. The material is simple and clear and easy to follow; work through it step by step to study how Kalman filtering can be done in Python.

The essence of the Kalman filter is an iterative least-squares procedure that produces state estimates for a time series. Suppose the process to be estimated is as follows:

x[k+1] = A[k]*x[k] + B[k]*u[k] + w[k]    // state equation
z[k]   = H[k]*x[k] + v[k]                // measurement equation
p(w) ~ N(0, Q)
p(v) ~ N(0, R)
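As a quick illustration, the process model above can be simulated directly to generate synthetic data. This is a minimal sketch, not part of the article's implementation; the scalar values chosen for A, B, H, Q, and R and the fixed random seed are illustrative assumptions.

```python
import numpy

rng = numpy.random.default_rng(0)    # fixed seed, for reproducibility

# illustrative scalar instance of the model above (assumed values)
A, B, H = 1.0, 0.0, 1.0
Q, R = 0.5, 0.5

x = 0.0
u = 0.0                              # no control input in this sketch
states, measurements = [], []
for k in range(100):
    w = rng.normal(0.0, numpy.sqrt(Q))   # process noise, w ~ N(0, Q)
    v = rng.normal(0.0, numpy.sqrt(R))   # measurement noise, v ~ N(0, R)
    x = A * x + B * u + w                # state equation
    z = H * x + v                        # measurement equation
    states.append(x)
    measurements.append(z)
```

Each step propagates the state through the model, then produces a noisy observation of it; the filter's job is to recover the state sequence from the measurement sequence.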

Here w and v are noise terms drawn from normal distributions with mean 0 and covariance matrices Q and R respectively. u is the control input applied to x, z is the measured value, and the indices k and k+1 denote values at different times. A, B, and H are the corresponding coefficient matrices. The Kalman filter splits the estimation of the process into two parts: one infers a prior value from the model, called the time update; the other corrects that prior with the measured value, called the measurement update. The core equations are as follows:

// time update
xb[k+1] = A[k]*x[k] + B[k]*u[k]
Pb[k+1] = A[k]*P[k]*transpose(A[k]) + Q[k]
// measurement update
K[k] = Pb[k]*transpose(H[k]) * inverse(H[k]*Pb[k]*transpose(H[k]) + R[k])
x[k] = xb[k] + K[k]*(z[k] - H[k]*xb[k])
P[k] = (I - K[k]*H[k]) * Pb[k]

Here xb is the prior estimate, Pb is the prior error covariance matrix, and P is the posterior error covariance matrix. Each time update uses the previous posterior estimate to produce the prior estimate xb for the next time step, along with the prior error covariance for the next step. Each measurement update first computes the Kalman gain K, then combines the measurement z with the prior estimate xb to produce the current posterior estimate x, and finally gives the current posterior error covariance.
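Read directly off the core equations, one full predict/correct cycle can be sketched as a standalone function. This is a minimal sketch assuming the control term B*u is zero; `kalman_step` is a hypothetical helper written for illustration, not part of the article's class.

```python
import numpy

def kalman_step(x, P, z, A, H, Q, R):
    """One Kalman cycle: time update, then measurement update (B*u assumed zero)."""
    # time update: propagate the previous posterior to a prior
    xb = A @ x
    Pb = A @ P @ A.T + Q
    # measurement update: gain, posterior estimate, posterior error covariance
    K = Pb @ H.T @ numpy.linalg.inv(H @ Pb @ H.T + R)
    x_post = xb + K @ (z - H @ xb)
    P_post = (numpy.identity(len(x)) - K @ H) @ Pb
    return x_post, P_post
```

Note how the gain behaves: with a very uncertain prior (large Pb), K approaches the identity and the posterior follows the measurement; with a very certain prior, K shrinks toward zero and the measurement is largely ignored.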

The two update steps combine the prior estimate (the system state inferred from past data and the model) with a possibly noisy measurement, giving the most likely state (distribution) of the system. The strength of the method is that when neither the measurement nor the control is accurate enough, it yields the best estimate obtainable by combining the two sources of data. A simple Python implementation and its results follow for reference.

import numpy


class Kalman_Filter:
    def __init__(self, A, H, Q, R, z, B=None, impulse=None):
        self._A = A
        self._H = H
        self._Q = Q
        self._R = R
        self._z = z
        self.m = len(z)        # number of time steps
        self.n = len(z[0])     # state dimension
        self._identity = numpy.identity(self.n)
        if B is None:
            # no control: zero matrix (zero scalar in the 1D case)
            self._B = numpy.zeros([self.n, self.n]) if self.n > 1 else 0.0
        else:
            self._B = B
        if impulse is None:
            self._impulse = numpy.zeros([self.m, self.n])
        else:
            self._impulse = impulse

    def _kalman(self, xb, Pb, z, impulse):
        # measurement update
        tmp = numpy.matmul(Pb, self._H.T)
        K = numpy.matmul(tmp, numpy.linalg.inv(numpy.matmul(self._H, tmp) + self._R))
        x = xb + numpy.matmul(K, z - numpy.matmul(self._H, xb))
        P = numpy.matmul(self._identity - numpy.matmul(K, self._H), Pb)
        # time update
        xb = numpy.matmul(self._A, x) + numpy.matmul(self._B, impulse)
        Pb = numpy.matmul(numpy.matmul(self._A, P), self._A.T) + self._Q
        return x, xb, Pb

    def _kalman1d(self, xb, Pb, z, impulse):
        # measurement update (scalar case)
        tmp = Pb * self._H
        K = tmp / (self._H * tmp + self._R)
        x = xb + K * (z - self._H * xb)
        P = (1 - K * self._H) * Pb
        # time update
        xb = self._A * x + self._B * impulse
        Pb = self._A * P * self._A + self._Q
        return x, xb, Pb

    def get_filtered_data(self, xb, Pb):
        xx = []
        for i in range(self.m):
            if self.n == 1:
                x, xb, Pb = self._kalman1d(xb, Pb, self._z[i][0], self._impulse[i][0])
            else:
                x, xb, Pb = self._kalman(xb, Pb, self._z[i], self._impulse[i])
            xx.append(x)
        return xx


# ===== test =====
import matplotlib.pyplot

t = numpy.linspace(0, 10, 100)   # time axis

# ----- 2D case -----
A = numpy.array([[1, 0.1], [0, 1]])
H = numpy.array([[1, 0], [0, 1]])
Q = 0.5 * numpy.array([[1, 0], [0, 1]])
R = 0.5 * numpy.array([[1, 0], [0, 1]])
noise = numpy.random.randn(2, 100)
real = numpy.vstack((10 * numpy.sin(t), 10 * numpy.cos(t)))  # true values
z = real + noise                                             # measured values
kf = Kalman_Filter(A, H, Q, R, z.T)
xb = numpy.array([0.0, 10.0])
Pb = numpy.array([[1.0, 0.0], [0.0, 1.0]])
x = kf.get_filtered_data(xb, Pb)
fig = matplotlib.pyplot.figure(figsize=(10.24, 7.68))
matplotlib.pyplot.plot(t, z.T, 'r')
matplotlib.pyplot.plot(t, real.T, 'g')
matplotlib.pyplot.plot(t, x, 'b')
matplotlib.pyplot.show()

# ----- 1D case -----
A = 1
H = 1
Q = 0.5
R = 0.5
B = -1   # correction based on feedback
noise = numpy.random.randn(1, 100)
real = 10 * numpy.exp(-t)
z = real + noise
kf1 = Kalman_Filter(A, H, Q, R, z.T)              # without feedback
kf2 = Kalman_Filter(A, H, Q, R, z.T, B, noise.T)  # with feedback correction
xb = 10
Pb = 1
x1 = kf1.get_filtered_data(xb, Pb)
x2 = kf2.get_filtered_data(xb, Pb)
fig = matplotlib.pyplot.figure(figsize=(10.24, 7.68))
matplotlib.pyplot.subplot(3, 1, 1)   # first panel: without feedback correction
matplotlib.pyplot.plot(t, z.T, 'r')
matplotlib.pyplot.plot(t, real.T, 'g')
matplotlib.pyplot.plot(t, x1, 'b')
matplotlib.pyplot.subplot(3, 1, 2)   # second panel: with feedback correction
matplotlib.pyplot.plot(t, z.T, 'r')
matplotlib.pyplot.plot(t, real.T, 'g')
matplotlib.pyplot.plot(t, x2, 'b')
matplotlib.pyplot.subplot(3, 1, 3)   # third panel: compare the two results
matplotlib.pyplot.plot(t, x1, 'r')
matplotlib.pyplot.plot(t, real.T, 'g')
matplotlib.pyplot.plot(t, x2, 'b')
matplotlib.pyplot.show()

The results are shown below. In most cases the blue line (the Kalman-filtered result) is closer to the green line (the true value) than the red line (the measurement). The second figure compares correcting with external feedback based on the measurements against no correction: with feedback, the estimate is better at some of the larger values, and the oscillation around zero is more even. But feedback does not improve the result everywhere.
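The claim that the filtered output sits closer to the truth than the raw measurements can also be checked numerically, for example by comparing root-mean-square errors. This is a hedged sketch: it reuses the article's 1D parameters (A = H = 1, Q = R = 0.5) but applies an inline rewrite of the scalar filter, not the class above, on an assumed sine signal with a fixed random seed.

```python
import numpy

rng = numpy.random.default_rng(1)
t = numpy.linspace(0, 10, 100)
real = 10 * numpy.sin(t)             # assumed true signal
z = real + rng.normal(size=100)      # noisy measurements

A, H, Q, R = 1.0, 1.0, 0.5, 0.5      # 1D parameters as in the article
xb, Pb = 0.0, 1.0                    # initial prior estimate and covariance
est = []
for zk in z:
    K = Pb * H / (H * Pb * H + R)    # Kalman gain
    x = xb + K * (zk - H * xb)       # posterior estimate
    P = (1 - K * H) * Pb             # posterior error covariance
    xb, Pb = A * x, A * P * A + Q    # time update for the next step
    est.append(x)

rmse_meas = numpy.sqrt(numpy.mean((z - real) ** 2))
rmse_filt = numpy.sqrt(numpy.mean((numpy.array(est) - real) ** 2))
```

With these settings the filtered RMSE should come out below the measurement RMSE, quantifying the visual impression from the plots.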

Thank you for reading. That covers how to implement a Kalman filter in Python; hopefully this article has deepened your understanding, though the specifics should be verified in practice.
