This article explains how to understand Java depth-first traversal and breadth-first traversal algorithms. The content is straightforward and easy to follow; please read along to learn how both traversals work and how to implement them.
Depth-first traversal
The main idea is to start from an unvisited vertex V in the graph and follow one path as far as it will go; then back up from the node at the end of that path to the most recent node that still has unvisited neighbours, and walk to the end of another path from there. This process repeats recursively until all the vertices have been visited. Its characteristic is "don't turn back until you hit a wall": finish one road first, then continue along another.
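As an illustration of this idea on a graph (rather than a tree), here is a minimal sketch of recursive depth-first traversal over an adjacency list; the class name, the println used as the "processing" step and the adjacency-list representation are assumptions made only for this example:

import java.util.List;

public class GraphDfs {
    private final List<List<Integer>> adj;   // adj.get(v) holds the neighbours of vertex v
    private final boolean[] visited;

    public GraphDfs(List<List<Integer>> adj) {
        this.adj = adj;
        this.visited = new boolean[adj.size()];
    }

    /** start a depth-first traversal from vertex v */
    public void dfs(int v) {
        visited[v] = true;
        System.out.println("visit vertex " + v);   // process the vertex
        for (int next : adj.get(v)) {
            if (!visited[next]) {                  // go as deep as possible before backtracking
                dfs(next);
            }
        }
    }
}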
A tree is a special case of a graph (a connected acyclic graph is a tree). Let's first look at how to traverse a tree with depth-first traversal.
1. We start traversing from the root node 1. Its adjacent nodes are 2, 3 and 4; we traverse node 2 first, then node 5 (a child of 2), then node 9 (a child of 5).
2. At this point one path has been walked to the end (9 is a leaf node with no more nodes to traverse), so we back up from 9 to its parent 5 and check whether 5 has any unvisited children besides 9. It does not, so we back up to 2; 2 has no children other than 5, so we back up to 1. Node 1 still has node 3 unvisited besides 2, so the depth-first traversal continues from node 3.
3. Similarly, from node 3 we go down through 6 to 10; 10 is a leaf, so we back up to 6, which has no children other than 10, and then back up again and find that 3 has child 7 besides 6, so 7 is traversed next.
4. We back up from 7 to 3 and then to 1, and find that 1 still has node 4 unvisited, so we traverse down through 4 to 8. At this point the traversal is complete.
The complete traversal order of the nodes is 1, 2, 5, 9, 3, 6, 10, 7, 4, 8 (in the original figure, the blue numbers on the nodes indicate the order in which they are visited).
If you followed the traversal above, it is not hard to see that this is exactly the pre-order traversal of the tree. In fact, pre-order traversal, in-order traversal and post-order traversal are all forms of depth-first traversal.
So how do we implement depth-first traversal? There are two forms: recursive and non-recursive. Next, taking a binary tree as an example, let's see how to implement depth-first traversal both recursively and non-recursively.
1. Recursive implementation
The recursive implementation is relatively simple. Since this is a pre-order traversal, we process the current node first, then traverse the left subtree and then the right subtree; for the left and right children we recurse in the same way until we pass a leaf and reach an empty (null) node, which is the recursion's termination condition. The code is as follows:
public class Solution {

    private static class Node {
        /** node value */
        public int value;
        /** left child */
        public Node left;
        /** right child */
        public Node right;

        public Node(int value, Node left, Node right) {
            this.value = value;
            this.left = left;
            this.right = right;
        }
    }

    /** process the current node (here simply printing its value as an example) */
    private static void process(Node treeNode) {
        System.out.println(treeNode.value);
    }

    public static void dfs(Node treeNode) {
        if (treeNode == null) {
            return;
        }
        // process the current node
        process(treeNode);
        // traverse the left subtree
        dfs(treeNode.left);
        // traverse the right subtree
        dfs(treeNode.right);
    }
}
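As mentioned above, in-order and post-order traversal are also depth-first traversals. A small sketch, reusing the Node class and the process method from the code above, shows that they differ only in where the current node is processed:

/** in-order traversal: left subtree, current node, right subtree */
public static void inOrderDfs(Node treeNode) {
    if (treeNode == null) {
        return;
    }
    inOrderDfs(treeNode.left);
    process(treeNode);
    inOrderDfs(treeNode.right);
}

/** post-order traversal: left subtree, right subtree, current node */
public static void postOrderDfs(Node treeNode) {
    if (treeNode == null) {
        return;
    }
    postOrderDfs(treeNode.left);
    postOrderDfs(treeNode.right);
    process(treeNode);
}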
Recursion is expressive and easy to understand, but if the tree is too deep it can easily cause a stack overflow, so let's focus on the non-recursive implementation.
2. Non-recursive implementation
Look carefully at the characteristics of depth-first traversal. For a binary tree it is a pre-order traversal (first the current node, then the left subtree, then the right subtree), so we have the following idea:
1. For each node, process the current node first, then push its right child onto the stack, and then push its left child (this way the left child is popped and traversed first, which meets the depth-first traversal requirement).
2. Pop the node at the top of the stack; if it exists, repeat step 1 on it; when the stack is empty, the traversal ends.
Let's take a binary tree as an example and see how to implement DFS with a stack.
The overall idea is fairly clear: use a stack to hold the nodes that still need to be traversed; after popping a node, push any of its children that have not yet been traversed, and if there are none, keep backtracking (popping). With that, the stack-based depth-first traversal of a binary tree is not hard to write:
/**
 * use a stack to implement dfs
 * @param root
 */
public static void dfsWithStack(Node root) {
    if (root == null) {
        return;
    }
    Stack<Node> stack = new Stack<>();
    // push the root node onto the stack
    stack.push(root);
    while (!stack.isEmpty()) {
        Node treeNode = stack.pop();
        // process the current node
        process(treeNode);
        // push the right child first
        if (treeNode.right != null) {
            stack.push(treeNode.right);
        }
        // then push the left child, so it is popped and traversed first
        if (treeNode.left != null) {
            stack.push(treeNode.left);
        }
    }
}
You can see that the implementation of depth-first traversal with stack is actually not complicated, and you don't have to worry about stack overflow caused by recursion.
Breadth-first traversal
Breadth-first traversal means starting from an unvisited node of the graph, first traversing all of that node's adjacent nodes, and then traversing the adjacent nodes of each of those nodes in turn.
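On a general graph (where a node may be reachable along several paths), breadth-first traversal needs a queue plus a visited marker so that no node is enqueued twice. A minimal sketch over an adjacency list, with the println standing in for whatever processing is needed:

import java.util.ArrayDeque;
import java.util.List;
import java.util.Queue;

public class GraphBfs {
    /** breadth-first traversal of a graph given as an adjacency list, starting from vertex start */
    public static void bfs(List<List<Integer>> adj, int start) {
        boolean[] visited = new boolean[adj.size()];
        Queue<Integer> queue = new ArrayDeque<>();
        visited[start] = true;
        queue.offer(start);
        while (!queue.isEmpty()) {
            int v = queue.poll();
            System.out.println("visit vertex " + v);   // process the vertex
            for (int next : adj.get(v)) {
                if (!visited[next]) {                  // mark on enqueue so it is not added twice
                    visited[next] = true;
                    queue.offer(next);
                }
            }
        }
    }
}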
The breadth-first traversal of the tree described above proceeds layer by layer, and the value of each node matches its traversal order. That is why breadth-first traversal is also called level-order traversal: first the first layer (node 1), then the second layer (nodes 2, 3, 4), then the third layer (nodes 5, 6, 7, 8), and finally the fourth layer (nodes 9, 10).
Depth-first traversal uses a stack, while breadth-first traversal is implemented with a queue. Let's take a binary tree as an example and see how to implement breadth-first traversal with a queue.
With the queue in mind, the code is not hard to write:
/**
 * use a queue to implement bfs
 * @param root
 */
private static void bfs(Node root) {
    if (root == null) {
        return;
    }
    Queue<Node> queue = new LinkedList<>();
    queue.add(root);
    while (!queue.isEmpty()) {
        Node node = queue.poll();
        System.out.println("value = " + node.value);
        Node left = node.left;
        if (left != null) {
            queue.add(left);
        }
        Node right = node.right;
        if (right != null) {
            queue.add(right);
        }
    }
}
Exercises
Next, let's look at a few LeetCode problems that can be solved with DFS and BFS:
LeetCode 104 and 111: given a binary tree, find its maximum / minimum depth.
For example, given the binary tree [3, 9, 20, null, null, 15, 7]:
  3
 / \
9  20
   / \
  15  7
Then its minimum depth is 2 and its maximum depth is 3.
The idea: this problem is relatively simple and is just a variation of depth-first traversal. Recursively obtain the maximum / minimum depth of the left and right subtrees, adding one to the depth on each recursive call. Note that for the minimum depth, a node with only one child must take its depth from the non-empty subtree, otherwise a null child would incorrectly count as a complete path. The code is not hard to write:
/**
 * leetcode 104: find the maximum depth of the tree
 * @param node
 * @return
 */
public static int getMaxDepth(Node node) {
    if (node == null) {
        return 0;
    }
    int leftDepth = getMaxDepth(node.left) + 1;
    int rightDepth = getMaxDepth(node.right) + 1;
    return Math.max(leftDepth, rightDepth);
}

/**
 * leetcode 111: find the minimum depth of the tree
 * @param node
 * @return
 */
public static int getMinDepth(Node node) {
    if (node == null) {
        return 0;
    }
    // if one subtree is empty, the minimum depth must come from the other subtree
    if (node.left == null) {
        return getMinDepth(node.right) + 1;
    }
    if (node.right == null) {
        return getMinDepth(node.left) + 1;
    }
    int leftDepth = getMinDepth(node.left) + 1;
    int rightDepth = getMinDepth(node.right) + 1;
    return Math.min(leftDepth, rightDepth);
}
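As a quick check, a small sketch (assuming the Node class from the DFS section above) that builds the example tree [3, 9, 20, null, null, 15, 7] and prints both depths:

public static void main(String[] args) {
    // build the example tree [3, 9, 20, null, null, 15, 7]
    Node root = new Node(3,
            new Node(9, null, null),
            new Node(20, new Node(15, null, null), new Node(7, null, null)));
    System.out.println(getMaxDepth(root));   // prints 3
    System.out.println(getMinDepth(root));   // prints 2
}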
LeetCode 102: given a binary tree, return the node values obtained by level-order traversal (that is, visit all nodes layer by layer, from left to right). For example, given the binary tree [3, 9, 20, null, null, 15, 7]:
  3
 / \
9  20
   / \
  15  7
Return its level-order traversal result:
[[3], [9,20], [15,7]]
Idea: obviously this problem is a variation of breadth-first traversal. You only need to add the nodes of each layer to the same sub-list during the breadth-first traversal. The crux is that before traversing the nodes of one layer, you must record in advance how many nodes that layer contains (that is, the current number of elements in the queue), because BFS is implemented with a queue and the left and right children keep being enqueued while the layer is being traversed. Keep that in mind!
With that in mind, the code is not hard to write:
Java code
/**
 * leetcode 102: level-order traversal of a binary tree, using bfs
 * @param root
 */
private static List<List<Integer>> bfsWithBinaryTreeLevelOrderTraversal(Node root) {
    if (root == null) {
        // the root node is empty, so the binary tree does not exist; return an empty list
        return Arrays.asList();
    }
    // the final level-order traversal result
    List<List<Integer>> result = new ArrayList<>();
    Queue<Node> queue = new LinkedList<>();
    queue.offer(root);
    while (!queue.isEmpty()) {
        // records the nodes of one layer
        List<Integer> level = new ArrayList<>();
        int levelNum = queue.size();
        // traverse the nodes of the current layer
        for (int i = 0; i < levelNum; i++) {
            Node node = queue.poll();
            // the left and right children of the dequeued node join the queue; since levelNum
            // was recorded before they were enqueued, they will not be traversed as part of
            // the current layer
            if (node.left != null) {
                queue.add(node.left);
            }
            if (node.right != null) {
                queue.add(node.right);
            }
            level.add(node.value);
        }
        result.add(level);
    }
    return result;
}
Python code
class Solution:
    def levelOrder(self, root):
        """
        :type root: TreeNode
        :rtype: List[List[int]]
        """
        res = []  # nested list that holds the final result
        if root is None:
            return res
        from collections import deque
        que = deque([root])  # queue that holds the nodes to be processed
        while len(que) != 0:
            lev = []  # list that holds the values of the nodes in this layer
            thislevel = len(que)  # the number of nodes in this layer
            while thislevel != 0:
                head = que.popleft()  # pop the node at the front of the queue
                # the left and right children of that node join the queue
                if head.left is not None:
                    que.append(head.left)
                if head.right is not None:
                    que.append(head.right)
                lev.append(head.val)  # the value of the dequeued node goes into the current layer
                thislevel -= 1
            res.append(lev)
        return res
It is obvious to use BFS for this problem, but in fact you can also use DFS; if you can handle it with DFS in an interview, it will be a nice highlight.
How do we do it with DFS? We know that DFS can be implemented recursively. All you have to do is add a "level" parameter to the recursive function; whenever a node belongs to a given layer, put it into the list for that layer. The code is as follows:
private static final List<List<Integer>> TRAVERSAL_LIST = new ArrayList<>();

/**
 * leetcode 102: level-order traversal of a binary tree, using dfs
 * @param root
 * @param level
 */
private static void dfs(Node root, int level) {
    if (root == null) {
        return;
    }
    if (TRAVERSAL_LIST.size() < level + 1) {
        TRAVERSAL_LIST.add(new ArrayList<>());
    }
    List<Integer> levelList = TRAVERSAL_LIST.get(level);
    levelList.add(root.value);
    // traverse the left subtree
    dfs(root.left, level + 1);
    // traverse the right subtree
    dfs(root.right, level + 1);
}
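For completeness, a small hypothetical wrapper showing how the recursion would be kicked off; for the example tree above it leaves [[3], [9, 20], [15, 7]] in TRAVERSAL_LIST:

private static List<List<Integer>> levelOrderWithDfs(Node root) {
    // reset the shared list so the method can be called more than once
    TRAVERSAL_LIST.clear();
    // the root node sits on level 0
    dfs(root, 0);
    return TRAVERSAL_LIST;
}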
The application of DFS and BFS in search engines
We use search engines like Google and Baidu almost every day. Do you know how these search engines work? To put it simply, there are three steps:
1. Web page crawling
The search engine crawls the page through the crawler, obtains the page HTML code and stores it in the database.
2. Preprocessing
The indexing program carries out text extraction, Chinese word segmentation and (inverted) indexing of the crawled page data for use by the ranking program.
3. Ranking
After the user enters the keywords, the ranking program calls the index database data, calculates the correlation, and then generates the search results page in a certain format.
Let's focus on the first step, web crawling.
The general operation of this step is as follows: give the crawler a set of starting pages. We know that a page usually contains many hyperlinks; after the crawler has fetched a page, it parses out all the hyperlinks in that page and then fetches those pages in turn, so it can keep discovering new pages by following hyperlinks.
In the end this forms a graph, so the question becomes how to traverse that graph, and obviously it can be done depth-first or breadth-first.
With breadth-first traversal, the crawler first fetches all the starting pages of the first layer and then fetches the hyperlinks in each page in turn. With depth-first traversal, it first fetches starting page 1, then follows the links inside that page as far as they go, and only afterwards fetches starting page 2.
In fact, crawlers use both depth-first and breadth-first strategies. For example, among the starting pages some are more important (have a higher weight), so a depth-first traversal is done on those pages first, and afterwards a breadth-first traversal is done on the other starting pages (of the same weight).
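A minimal sketch of the breadth-first part of such a crawler, assuming hypothetical helpers downloadPage(url) and extractLinks(html) (stubbed below) that fetch a page and parse its hyperlinks; the visited set keeps the crawler from fetching the same URL twice:

import java.util.Collections;
import java.util.HashSet;
import java.util.LinkedList;
import java.util.List;
import java.util.Queue;
import java.util.Set;

public class BfsCrawler {

    public static void crawl(String seedUrl, int maxPages) {
        Queue<String> queue = new LinkedList<>();
        Set<String> visited = new HashSet<>();
        queue.offer(seedUrl);
        visited.add(seedUrl);
        int crawled = 0;
        while (!queue.isEmpty() && crawled < maxPages) {
            String url = queue.poll();
            String html = downloadPage(url);          // hypothetical helper: fetch and store the page
            crawled++;
            for (String link : extractLinks(html)) {  // hypothetical helper: extract the hyperlinks
                if (visited.add(link)) {              // add() returns false if the link was seen before
                    queue.offer(link);
                }
            }
        }
    }

    /** hypothetical helper, stubbed here: fetch the page at url and return its HTML */
    private static String downloadPage(String url) {
        return "";
    }

    /** hypothetical helper, stubbed here: parse the hyperlinks out of the HTML */
    private static List<String> extractLinks(String html) {
        return Collections.emptyList();
    }
}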
Thank you for reading. That is the content of "how to understand Java depth-first traversal and breadth-first traversal algorithms"; after studying this article, I believe you have a deeper understanding of both traversals and how to implement them.