Shulou (Shulou.com) — SLTechnology News & Howtos, updated 2025-01-16
This article is a beginner's introduction to simultaneous localization and mapping (SLAM): what it is, how graph-based SLAM works, and how to set up a simple 2D simulation. The content is kept simple and clear so it is easy to learn and understand.
What is SLAM?
Simultaneous localization and mapping (SLAM) is the problem of building a model of the environment (a map) while simultaneously estimating the state of a robot moving within it. In other words, SLAM gives you a way to track the robot's location in the world in real time while identifying landmarks such as buildings, trees, and rocks. Beyond localization, we also want to model the robot's environment, so that we have a notion of the objects and landmarks around the robot and can use the map data to keep the robot on the right path as it moves through the world. The core difficulty is that the robot may lose track of its own position because of motion uncertainty, and there is no pre-existing map to correct against: the robot is building the map at the same time as it localizes within it. This is exactly the problem SLAM solves.
How SLAM works:
Simultaneous localization and mapping (SLAM) collects information from the robot's sensors and motions over time, then uses those measurements and motions to reconstruct a map of the world. Here we localize the robot in a 2D grid world. Graph-based SLAM turns this into a simplified estimation problem by extracting the raw sensor measurements and replacing them with edges in a graph, which can then be treated as virtual measurements. Suppose the robot starts at x0 = 0 and y0 = 0 (to keep this example simple, we ignore orientation). Now assume the robot moves 10 units to the right in the x direction. In an ideal world, x1 would be exactly x0 + 10, i.e. x1 = x0 + 10, and likewise y1 would equal y0.
As shown in the figure, the robot's displacement in the x direction is 10:
But as Kalman filters and other robotics techniques teach us, the location is actually uncertain. So rather than assuming the robot moved exactly 10 units to the right in our x-y coordinate system, we should understand that after the motion update x1 = x0 + 10, its actual position follows a Gaussian distribution centered on (10, 0): the robot is most likely there, but it may also be somewhere else.
As the figure shows: after the motion update, the Gaussian is centered on the robot's position.
This is expressed mathematically as a Gaussian over the x variable: instead of demanding x1 = x0 + 10 exactly, we express the constraint as a Gaussian, exp(-(x1 - x0 - 10)^2 / (2σ^2)), which reaches its peak when the two sides agree. So if you take x1 - x0 - 10, square it, and wrap it in a Gaussian, you get a probability distribution relating x1 and x0. We can do the same conversion for y: according to our motion, y did not change, so the Gaussian keeps y1 as close to y0 as possible.
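This constraint can be sketched numerically. A minimal example (not from the article: `gaussian` is the standard 1-D density, and `sigma = 2.0` is an arbitrary noise choice) showing that the motion constraint peaks exactly at x1 = x0 + 10 and falls off as the robot drifts:

```python
import math

def gaussian(x, mu, sigma):
    # standard 1-D Gaussian probability density
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / math.sqrt(2 * math.pi * sigma ** 2)

x0 = 0.0
intended_motion = 10.0
sigma = 2.0  # motion noise (arbitrary choice for illustration)

# the constraint is maximal when x1 = x0 + 10 exactly...
print(gaussian(10.0, x0 + intended_motion, sigma))
# ...and smaller if the robot actually ended up at x1 = 12
print(gaussian(12.0, x0 + intended_motion, sigma))
```

Maximizing this density over x1 (given x0) is what it means to "satisfy" the motion constraint as well as possible.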
The product of these two Gaussians is our constraint, and the goal is to maximize the likelihood of position x1 given that x0 is (0, 0). This is what Graph SLAM does: it defines probabilities using a sequence of such constraints. Suppose we have a robot moving through a space. Graph SLAM records its initial position (0, 0), called the "initial constraint", and then collects many relative constraints that relate each robot pose to the previous one; these are the relative motion constraints. Likewise, every time the robot sees a landmark, it records a relative measurement constraint between its pose and that landmark. Graph SLAM then uses all of these constraints to find the most likely configuration of the robot path and landmark locations, and that is the mapping process.
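The "collect constraints, then solve for the most likely configuration" idea can be sketched with the information-matrix form commonly used for Graph SLAM (`omega` and `xi` are conventional names; this three-pose 1-D example is an illustration, not the article's full implementation):

```python
import numpy as np

# Three 1-D poses x0, x1, x2. Each constraint adds strength to the
# information matrix (omega) and information vector (xi).
omega = np.zeros((3, 3))
xi = np.zeros(3)

# initial constraint: x0 = 0
omega[0, 0] += 1.0

# motion constraint: x1 - x0 = 10
omega[[0, 1], [0, 1]] += 1.0   # strengthen the diagonal entries (0,0) and (1,1)
omega[0, 1] += -1.0
omega[1, 0] += -1.0
xi[0] += -10.0
xi[1] += 10.0

# motion constraint: x2 - x1 = 5
omega[[1, 2], [1, 2]] += 1.0
omega[1, 2] += -1.0
omega[2, 1] += -1.0
xi[1] += -5.0
xi[2] += 5.0

# best estimate of all poses at once: mu = omega^-1 * xi
mu = np.linalg.inv(omega) @ xi
print(mu)  # approximately [0, 10, 15]
```

Landmark sightings would add rows/columns for each landmark in exactly the same way, which is why solving once recovers both the path and the map.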
Setting up the environment:
We will generate a 2D grid world with landmarks, then generate data by placing a robot in that world and letting it move and sense over a certain number of time steps. The instantiated robot collects data as it moves and senses, and our SLAM function will take this data as input. So let's first create this data and explore how it represents our robot's motion and sensor measurements.
SLAM inputs:
In addition to the data, our slam function takes:
N: the number of time steps the robot will move and sense.
num_landmarks: the number of landmarks in the world.
world_size: the size of your world (w / h).
motion_noise: the noise associated with motion; the update confidence for motion should be 1.0/motion_noise.
measurement_noise: the noise associated with measurement/sensing; the update weight for measurement should be 1.0/measurement_noise.
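The noise-to-strength relationship in the last two inputs follows directly (a trivial sketch; the helper name `constraint_strengths` is ours, for illustration only):

```python
def constraint_strengths(motion_noise, measurement_noise):
    # noisier inputs get proportionally weaker constraints when
    # filling in the information matrix
    return 1.0 / motion_noise, 1.0 / measurement_noise

motion_strength, measurement_strength = constraint_strengths(2.0, 2.0)
print(motion_strength, measurement_strength)  # -> 0.5 0.5
```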
The setup code:

import numpy as np
from helpers import make_data

# slam should use the following inputs
# (please feel free to change these input values and see how it responds!)

# world parameters
num_landmarks = 5        # number of landmarks
N = 20                   # time steps
world_size = 100.0       # size of world (square)

# robot parameters
measurement_range = 50.0 # range at which we can sense landmarks
motion_noise = 2.0       # noise in robot motion
measurement_noise = 2.0  # noise in the measurements
distance = 20.0          # distance by which robot (intends to) move each iteration
                         # (the value was missing in the source; 20.0 is a typical choice)

# make_data instantiates a robot and generates random landmarks for a given
# world size and a given number of landmarks
data = make_data(N, num_landmarks, world_size, measurement_range,
                 motion_noise, measurement_noise, distance)
Let's write the two main functions that let the robot move around a 2D map, sense landmarks, and measure the distances to them:
move: attempts to move the robot by dx, dy; fails if the move would leave the world.
sense: returns the x and y distances to the landmarks within the visible range.
class robot:
    # move function
    def move(self, dx, dy):
        x = self.x + dx + self.rand() * self.motion_noise
        y = self.y + dy + self.rand() * self.motion_noise
        if x < 0.0 or x > self.world_size or y < 0.0 or y > self.world_size:
            # the move would take the robot outside the world, so fail
            return False
        else:
            self.x = x
            self.y = y
            return True

    # sense function
    def sense(self):
        measurements = []
        for landmark_index, landmark in enumerate(self.landmarks):
            landmark_distance_x = landmark[0]
            landmark_distance_y = landmark[1]
            random_noise = self.rand()
            cal_dx = self.x - landmark_distance_x + random_noise * self.measurement_noise
            cal_dy = self.y - landmark_distance_y + random_noise * self.measurement_noise
            is_not_in_measurement_range = self.measurement_range == -1
            # record the landmark if range is unlimited (-1) or it is within range
            # (the original snippet is truncated here; this completion follows its logic)
            if is_not_in_measurement_range or \
               (abs(cal_dx) <= self.measurement_range and abs(cal_dy) <= self.measurement_range):
                measurements.append([landmark_index, cal_dx, cal_dy])
        return measurements
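Since the class above depends on helpers not shown here (self.rand, self.landmarks), here is a self-contained sketch of the same move logic, with a uniform noise term standing in for self.rand():

```python
import random

def try_move(x, y, dx, dy, world_size=100.0, motion_noise=2.0):
    # returns the new (x, y) position, or None if the move leaves the world;
    # (random.random() * 2 - 1) stands in for the class's self.rand() helper
    nx = x + dx + (random.random() * 2.0 - 1.0) * motion_noise
    ny = y + dy + (random.random() * 2.0 - 1.0) * motion_noise
    if nx < 0.0 or nx > world_size or ny < 0.0 or ny > world_size:
        return None
    return nx, ny

random.seed(0)
print(try_move(50.0, 50.0, 10.0, 0.0))   # roughly (60, 50), jittered by noise
print(try_move(99.0, 99.0, 10.0, 10.0))  # None: would leave the 100x100 world
```

The noisy result of each move is exactly why the motion constraints in Graph SLAM are Gaussians rather than exact equalities.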