Many people who are new to this topic do not know how to use a GNN (graph neural network) to improve the accuracy of ETAs (estimated times of arrival). This article summarizes DeepMind's approach; I hope it answers the question for you.
How GNNs improve the accuracy of ETAs
The core idea is:
The road network is cut into segments, each segment is treated as a node, and the relative positions of the segments form the edges between nodes, yielding subgraphs of the road network called Supersegments. A GNN model is then used to estimate the travel time of each Supersegment.
Cutting roads into Supersegments
DeepMind cuts the road network into individual segments, and a Supersegment is composed of several adjacent segments. In other words, a Supersegment contains multiple segments, each with a specific length and corresponding speed characteristics. Each segment corresponds to a node; edges connect segments that follow one another on the same road or that meet at an intersection.
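To make this concrete, here is a minimal Python sketch of such a representation; all class names, fields, and values are my own illustration, not DeepMind's actual data structures:

```python
from dataclasses import dataclass, field

@dataclass
class Segment:
    """One road segment: a node in the Supersegment graph."""
    segment_id: int
    length_m: float          # segment length in metres
    avg_speed_kmh: float     # historical average speed feature

@dataclass
class Supersegment:
    """A sampled road-network subgraph: segments plus adjacency edges."""
    segments: list                               # nodes of the graph
    edges: list = field(default_factory=list)    # (from_id, to_id) pairs

    def add_edge(self, a: int, b: int):
        # Two segments are connected when one directly follows the
        # other on the same road, or when they meet at an intersection.
        self.edges.append((a, b))

# Example: three consecutive segments of one road.
ss = Supersegment(segments=[
    Segment(0, length_m=120.0, avg_speed_kmh=45.0),
    Segment(1, length_m=80.0,  avg_speed_kmh=30.0),
    Segment(2, length_m=200.0, avg_speed_kmh=50.0),
])
ss.add_edge(0, 1)
ss.add_edge(1, 2)
```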
The system has a dedicated module (the route analyser) that processes large amounts of road information and builds the Supersegments.
The model is then used to predict the travel time of each Supersegment.
Prediction model
DeepMind also tried several models:
Giving each Supersegment its own fully connected network works well, but Supersegment lengths vary, so a separate model would have to be trained for every Supersegment, which is clearly unrealistic at scale. An RNN can handle variable-length sequences, but road networks have a complex structure that this kind of model struggles with.
So the final choice was a graph neural network, treating the local road network as a graph. In other words, a Supersegment is in effect a road-network subgraph sampled according to traffic density.
This also means that segment lengths are not uniform: roads are divided according to traffic density.
A GNN can handle not only consecutive stretches of road but also a variety of complex structures, such as intersections. Exploiting this, DeepMind found through experiment that performance improves when a Supersegment's scope is extended to adjacent roads rather than just the main route (capturing, for example, how congestion in a side street can back up onto a main road).
By considering multiple intersections, the model can account for turns, merging delays, and the total time spent in stop-and-go traffic.
And no matter how long a Supersegment is (made up of two segments or of hundreds), the same GNN model can handle it.
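To illustrate why a single set of weights can serve graphs of any size, here is a toy message-passing sketch in Python; the architecture, feature dimension, and sum-readout are my own assumptions, not DeepMind's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 4  # node feature size: e.g. length, speed, time-of-day encodings

# Shared parameters -- the same weights serve graphs of any size,
# which is why one GNN covers Supersegments of 2 or 200 segments.
W_self = rng.normal(size=(D, D)) * 0.1
W_nbr  = rng.normal(size=(D, D)) * 0.1
w_out  = rng.normal(size=D) * 0.1

def gnn_travel_time(X, A, steps=2):
    """Toy GNN: X is (n, D) node features, A is (n, n) adjacency."""
    H = X
    for _ in range(steps):
        msgs = A @ H                                    # aggregate neighbours
        H = np.maximum(0.0, H @ W_self + msgs @ W_nbr)  # ReLU update
    return float(np.sum(H @ w_out))   # sum-readout -> predicted travel time

# Works unchanged for a 3-node and a 100-node Supersegment (path graphs).
X3, A3 = rng.normal(size=(3, D)), np.eye(3, k=1) + np.eye(3, k=-1)
X100, A100 = rng.normal(size=(100, D)), np.eye(100, k=1) + np.eye(100, k=-1)
print(gnn_travel_time(X3, A3), gnn_travel_time(X100, A100))
```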
From research to production
DeepMind found that GNNs are very sensitive to the training curriculum, mainly because the graph structures seen during training differ so widely: the number of nodes per graph within a single batch ranges from two to more than a hundred.
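One standard way to batch graphs of wildly different sizes is to merge each batch into a single block-diagonal super-graph; the sketch below shows the trick, though the article does not say whether DeepMind batches this way:

```python
import numpy as np

def batch_graphs(graphs):
    """Stack several (X, A) graphs into one disconnected super-graph.

    Node counts can differ wildly (2 .. 100+ per graph, as in the
    article), which is what makes training batches so heterogeneous.
    """
    Xs, As = zip(*graphs)
    n_total = sum(x.shape[0] for x in Xs)
    X = np.concatenate(Xs, axis=0)
    A = np.zeros((n_total, n_total))
    offset = 0
    for a in As:
        n = a.shape[0]
        A[offset:offset + n, offset:offset + n] = a  # block diagonal
        offset += n
    return X, A
```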
I have a doubt here: doesn't every Supersegment sample from the same set of nodes? Couldn't you estimate the travel time of each individual segment and then splice together the total time according to the user's start and end points? Even if the global road network is too large to compute this for every segment, it should still be possible to identify the commonly used segments from historical records. So we will have to wait for the paper to see how this is actually handled.
DeepMind also tried several techniques:
Novel reinforcement learning techniques applied in a supervised setting; an exponentially decaying learning-rate schedule, used after a predefined training phase to stabilize the model's parameters; and ensembling, to see whether the run-to-run variance of training could be reduced.
The original text here is rather muddled, so I have simply listed the techniques involved; it is unclear to me whether they were used in combination or tried in turn.
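For the exponential-decay schedule specifically, a minimal sketch might look like this (all constants are illustrative, not DeepMind's values):

```python
def exp_decay_lr(step, base_lr=1e-3, decay_rate=0.96, decay_steps=1000,
                 warmup_steps=5000):
    """Hold the learning rate fixed for a predefined phase, then decay.

    Mirrors the idea in the article: after an initial training phase,
    an exponentially decaying schedule helps the weights settle.
    """
    if step < warmup_steps:
        return base_lr
    return base_lr * decay_rate ** ((step - warmup_steps) / decay_steps)
```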
In the end, the most successful solution was to use MetaGradient to dynamically adjust the learning rate during training, letting the system effectively learn its own optimal learning-rate schedule. This finally produced results stable enough for the new architecture to be applied in production.
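The article gives no implementation details for MetaGradient here. As a rough flavour of learning a learning rate online, the sketch below uses hypergradient descent (a related but simpler technique) on a toy quadratic problem; everything in it is illustrative:

```python
import numpy as np

# Toy problem: minimise f(w) = 0.5 * ||w||^2, whose gradient is w.
w = np.array([5.0, -3.0])
lr, beta = 0.01, 1e-4        # beta: learning rate *of* the learning rate
g_prev = np.zeros_like(w)

for step in range(200):
    g = w.copy()                         # gradient of the toy loss
    # Hypergradient idea: d(loss)/d(lr) at this step is approximately
    # -g . g_prev, so nudge lr in the descent direction for lr itself.
    lr += beta * float(g @ g_prev)
    w = w - lr * g
    g_prev = g

print(f"final lr={lr:.4f}, loss={0.5 * float(w @ w):.6f}")
```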
Custom loss functions for model generalization
DeepMind found that a linear combination of multiple loss functions (properly weighted) can greatly improve the model's ability to generalize and avoid overfitting. Specifically, the multi-loss objective uses: a regularization factor (L2) on the model weights; L2 and L1 losses on the global traversal time; and Huber and negative log-likelihood (NLL) losses for each node in the graph.
Although this did not improve the training metrics, it generalized much better to test sets and to end-to-end experiments.
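Assembled as a weighted sum, the objective might look roughly like the sketch below; the Gaussian form of the NLL term and all weighting factors lam are my assumptions:

```python
import numpy as np

def huber(err, delta=1.0):
    a = np.abs(err)
    return np.where(a <= delta, 0.5 * err**2, delta * (a - 0.5 * delta))

def combined_loss(node_pred, node_sigma, node_true, weights,
                  lam=(1e-4, 1.0, 1.0, 1.0, 1.0)):
    """Weighted sum of the loss terms named in the article.

    node_pred/node_sigma: per-segment mean and std-dev predictions;
    node_true: observed per-segment travel times; weights: model params.
    """
    total_pred, total_true = node_pred.sum(), node_true.sum()
    reg  = np.sum(weights ** 2)                    # L2 on model weights
    l2_g = (total_pred - total_true) ** 2          # L2, global traversal time
    l1_g = np.abs(total_pred - total_true)         # L1, global traversal time
    hub  = huber(node_pred - node_true).mean()     # Huber, per node
    nll  = (np.log(node_sigma)                     # Gaussian NLL, per node
            + (node_true - node_pred) ** 2 / (2 * node_sigma ** 2)).mean()
    return (lam[0] * reg + lam[1] * l2_g + lam[2] * l1_g
            + lam[3] * hub + lam[4] * nll)
```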
At present, DeepMind is still exploring whether the MetaGradient technique can also vary the composition of the loss function during training, guided by the goal of reducing travel estimate errors. Early experiments with this idea have shown good results.
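As a toy flavour of what adapting the loss composition could look like, the sketch below learns the mixing weight between two invented loss terms by descending a held-out validation error; none of these details come from the article:

```python
# Toy MetaGradient flavour: learn the mixing weight `a` between two
# training losses by following the gradient of a validation error.
# Losses, validation target, and constants are all invented here.
w, a = 2.0, 0.5
lr, meta_lr = 0.1, 0.05
for step in range(100):
    g1, g2 = w - 1.0, w + 1.0                    # grads of the two loss terms
    w_new = w - lr * (a * g1 + (1 - a) * g2)     # inner training step
    # Chain rule: validation loss is 0.5 * w_new**2, so
    # d(val)/da = w_new * d(w_new)/da = w_new * (-lr) * (g1 - g2).
    dval_da = w_new * (-lr) * (g1 - g2)
    a = min(1.0, max(0.0, a - meta_lr * dval_da))
    w = w_new
print(f"learned mixing weight a={a:.3f}, w={w:.3f}")
```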
After reading the above, have you grasped how GNNs can improve the accuracy of ETAs? Thank you for reading!