2025-03-27 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/01 Report--
This article mainly explains how to get started with Java sorting algorithms. The methods introduced here are simple, fast, and practical; interested readers may wish to take a look and follow along.
Insertion sort
The basic idea of insertion sort: at each step, one element to be sorted is inserted, according to its key, into the group of elements already sorted, until all elements have been inserted. Here we introduce three concrete insertion sorts: direct insertion sort, Shell sort, and binary insertion sort.
1. Direct insertion sort
**The idea of direct insertion sort:** when inserting the i-th element (i >= 1), the first i elements V[0], …, V[i-1] are already ordered. Compare the i-th element with V[i-1], …, V[0] in turn until the insertion position is found, then insert V[i] there, shifting the elements after that position one slot backward. The search for the insertion position is a sequential search. Direct insertion sort is a stable sorting algorithm; it is implemented as follows:
```java
/**
 * Title: direct insertion sort (insertion sort family); performance depends on the initial sequence
 * Description: repeatedly insert a new record into the ordered prefix, extending the ordered region to the whole array
 * Time complexity: best case O(n), average case O(n^2), worst case O(n^2)
 * Space complexity: O(1)
 * Stability: stable
 * Internal sort (all data elements stay in memory during sorting)
 * @author rico
 * @created May 20, 2017, 10:40:00
 */
public class StraightInsertionSort {
    public static int[] insertSort(int[] target) {
        if (target != null && target.length != 1) {   // array to be sorted is non-null and longer than 1
            for (int i = 1; i < target.length; i++) {
                for (int j = i; j > 0; j--) {         // insert the new element into the ordered prefix
                    if (target[j] < target[j - 1]) {  // still smaller than its predecessor: swap backward
                        int temp = target[j];
                        target[j] = target[j - 1];
                        target[j - 1] = temp;
                    } else {
                        break;
                    }
                }
            }
        }
        return target;
    }
}
```
2. Shell sort
**The idea of Shell sort:** suppose the sequence to be sorted consists of n elements. First take an integer gap < n and split the sequence into gap subsequences, each containing the elements whose indices are gap positions apart; directly insertion-sort each subsequence, then shrink gap and repeat, until gap reaches 1 and the whole sequence is sorted in a final insertion pass.
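As a small side illustration (the helper below is hypothetical and not part of the original code), the gap shrink rule gap = gap/3 + 1 used in this article's implementation produces the following kind of sequence:

```java
public class GapSequenceDemo {
    // Reproduce the gap sequence gap = gap/3 + 1 for a sequence of length n,
    // as a space-separated string (helper name is illustrative only).
    public static String gaps(int n) {
        StringBuilder sb = new StringBuilder();
        int gap = n;
        do {
            gap = gap / 3 + 1;                    // same shrink rule as the Shell sort below
            sb.append(gap).append(gap > 1 ? " " : "");
        } while (gap > 1);                        // stop once the final pass with gap = 1 is reached
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(gaps(20));             // prints "7 3 2 1"
    }
}
```

Note how the sequence always ends in 1, which guarantees the last pass is a plain direct insertion sort over an almost-ordered array.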
```java
/**
 * Title: Shell sort (insertion sort family); performance depends on the initial sequence
 * Description: directly insertion-sort the gap subsequences whose elements are gap positions apart,
 * and keep shrinking gap. Early on, gap is large, each subsequence has few elements, and sorting is fast;
 * in the later stages gap becomes small and each subsequence has more elements, but most of them
 * are already nearly ordered, so sorting is still fast.
 * Time complexity: between O(n) and O(n^2)
 * Space complexity: O(1)
 * Stability: unstable
 * Internal sort (all data elements stay in memory during sorting)
 * @author rico
 * @created May 20, 2017, 10:40:00
 */
public class ShellSort {
    public static int[] shellSort(int[] target) {
        if (target != null && target.length != 1) {
            int gap = target.length;                  // interval between elements of each subsequence
            do {
                gap = gap / 3 + 1;                    // keep shrinking the gap down to 1
                for (int i = gap; i < target.length; i++) {
                    if (target[i] < target[i - gap]) {
                        int temp = target[i];         // value to be inserted
                        int j = i - gap;
                        do {
                            target[j + gap] = target[j];   // move the larger element backward
                            j = j - gap;                   // compare with the previous element
                        } while (j >= 0 && target[j] > temp);  // termination condition of the backward scan
                        target[j + gap] = temp;       // insert the value at the proper position
                    }
                }
            } while (gap > 1);
        }
        return target;
    }
}
```
3. Binary insertion sort
**The idea of binary insertion sort:** when inserting the i-th element (i >= 1), the first i elements V[0], …, V[i-1] are already ordered. Use binary search to find the insertion position among V[0], …, V[i-1], then insert V[i] there, shifting the elements after the position backward. Unlike direct insertion sort, binary insertion sort significantly reduces the number of key comparisons, but the number of moves stays the same. Therefore binary insertion sort has the same O(n^2) time complexity as direct insertion sort, but since it reduces comparisons, it is still somewhat better in practice. Binary insertion sort is a stable sorting algorithm; it is implemented as follows:
```java
/**
 * Title: binary insertion sort (insertion sort family); performance depends on the initial sequence
 * Description: find the insertion position by binary search, then insert directly; the difference from
 * direct insertion sort is that binary search is faster than sequential search
 * Time complexity: comparisons are significantly reduced, but the number of moves is unchanged,
 * so it is still O(n^2), like direct insertion sort, just with a better constant
 * Space complexity: O(1)
 * Stability: stable
 * Internal sort (all data elements stay in memory during sorting)
 * @author rico
 * @created May 25, 2017, 12:03:23
 */
public class BinaryInsertSort {
    public static int[] binaryInsertSort(int[] target) {
        if (target != null && target.length > 1) {
            for (int i = 1; i < target.length; i++) {
                int temp = target[i];                 // value to be inserted
                if (temp < target[i - 1]) {           // only search when the prefix order is violated
                    int left = 0, right = i - 1;
                    while (left <= right) {           // binary search for the insertion position
                        int mid = (left + right) / 2;
                        if (target[mid] > temp) {
                            right = mid - 1;          // narrow the search interval to the left half
                        } else {
                            left = mid + 1;           // equal keys go to the right: keeps stability
                        }
                    }
                    for (int j = i; j > left; j--) {  // shift target[left..i-1] one slot backward
                        target[j] = target[j - 1];
                    }
                    target[left] = temp;              // insert at position left
                }
            }
        }
        return target;
    }
}
```

Selection sort
**The basic idea of selection sort:** on each pass (say pass i, i = 0, 1, …, n-2), select the smallest of the n-i elements not yet sorted as the i-th element of the ordered sequence; after n-1 passes all elements are ordered. Here we introduce two concrete selection sorts: direct selection sort and heap sort.
1. Direct selection sort
**The idea of direct selection sort:** on the first pass, select the minimum among R[0] … R[n-1] and swap it with R[0]; on the second pass, select the minimum among R[1] … R[n-1] and swap it with R[1]; on the i-th pass, select the minimum among R[i-1] … R[n-1] and swap it with R[i-1]; on the last pass, select the smaller of R[n-2] and R[n-1] and swap it with R[n-2]. After n-1 passes in total, the sequence is ordered from small to large. Direct selection sort is an unstable sorting algorithm; it is implemented as follows:
```java
/**
 * Title: direct selection sort (selection sort family); independent of the initial sequence
 * Description: on each pass (say pass i, i = 0, 1, ...), select the smallest of the n-i elements
 * not yet sorted as the i-th element of the ordered sequence
 * Time complexity: best case O(n^2), average case O(n^2), worst case O(n^2)
 * Space complexity: O(1)
 * Stability: unstable
 * Internal sort (all data elements stay in memory during sorting)
 * @author rico
 * @created May 20, 2017, 10:40:00
 */
public class StraightSelectSort {
    public static int[] selectSort(int[] target) {
        if (target != null && target.length != 1) {
            for (int i = 0; i < target.length - 1; i++) {
                int min_index = i;                    // index of the minimum of the unsorted tail
                for (int j = i + 1; j < target.length; j++) {
                    if (target[min_index] > target[j]) {
                        min_index = j;
                    }
                }
                if (min_index != i) {                 // cause of instability: a long-range swap
                    int min = target[min_index];
                    target[min_index] = target[i];
                    target[i] = min;
                }
            }
        }
        return target;
    }
}
```
2. Heap sort
**The core of heap sort is the heap adjustment algorithm.** First, build the initial heap from the input data using the sift-down routine shiftDown(); then repeatedly swap the heap top with the heap tail, shrink the heap by one element, and re-adjust it into a heap, until only one element remains. Heap sort is an unstable sorting algorithm; it is implemented as follows:
```java
/**
 * Title: heap sort (selection sort family), ascending order via a max-heap; independent of the initial sequence
 * Description: first adjust the given sequence into a max-heap, then repeatedly swap the heap top
 * with the heap tail and narrow the heap, until the heap shrinks to a single element
 * Time complexity: O(nlgn)
 * Space complexity: O(1)
 * Stability: unstable
 * Internal sort (all data elements stay in memory during sorting)
 * @author rico
 * @created May 20, 2017, 9:48:06
 */
public class HeapSort {
    public static int[] heapSort(int[] target) {
        if (target != null && target.length > 1) {
            // build the initial max-heap, starting from the last non-leaf node
            int pos = (target.length - 2) / 2;
            while (pos >= 0) {
                shiftDown(target, pos, target.length - 1);
                pos--;
            }
            // heap sort: swap the top with the tail, then re-adjust the shrunken heap
            for (int i = target.length - 1; i > 0; i--) {
                int temp = target[i];
                target[i] = target[0];
                target[0] = temp;
                shiftDown(target, 0, i - 1);
            }
        }
        return target;
    }

    /**
     * @description sift the element at start down so that [start, end] is a max-heap again
     * @param target the array backing the heap
     * @param start  the root of the subtree to adjust
     * @param end    the last index still inside the heap
     */
    private static void shiftDown(int[] target, int start, int end) {
        int i = start;
        int j = 2 * start + 1;            // left child of i
        int temp = target[i];             // value being sifted down
        while (j <= end) {
            if (j < end && target[j] < target[j + 1]) {   // pick the larger child
                j = j + 1;
            }
            if (target[j] <= temp) {      // neither child is larger: heap property restored
                break;
            } else {
                target[i] = target[j];    // promote the larger child
                i = j;
                j = 2 * j + 1;
            }
        }
        target[i] = temp;
    }
}
```

Exchange sort
**The basic idea of exchange sort:** compare pairs of elements in the sequence and swap their positions according to the result, moving records with larger keys toward the tail of the sequence and records with smaller keys toward the front.
1. Bubble sort
**The idea of bubble sort:** compare adjacent elements and swap them according to the result, moving records with larger keys toward the tail of the sequence and records with smaller keys toward the front. On each pass, smaller elements drift forward while the largest remaining element naturally sinks to the end, so it is the largest element that gets fixed in place last on each pass; this is the "bubbling". Bubble sort is a stable sorting algorithm; it is implemented as follows:
```java
/**
 * Title: bubble sort (exchange sort family); usually refers to the optimized version,
 * which adds at most one extra pass; performance depends on the initial sequence
 * Description: larger elements slowly "bubble" toward the top of the sequence (the last position)
 * through exchanges, and the maximum is fixed in place last; hence the name bubble sort
 * Time complexity: best case O(n) (optimized version), average case O(n^2), worst case O(n^2)
 * Space complexity: O(1)
 * Stability: stable
 * Internal sort (all data elements stay in memory during sorting)
 * @author rico
 * @created May 20, 2017, 10:40:00
 */
public class BubbleSort {
    /**
     * @description naive bubble sort: always makes n-1 passes
     */
    public static int[] bubbleSort(int[] target) {
        if (target != null && target.length != 1) {
            int n = target.length;
            // each pass moves smaller elements forward; the largest remaining element sinks to the tail
            for (int i = 0; i < n - 1; i++) {
                for (int j = n - 1; j > i; j--) {
                    if (target[j] < target[j - 1]) {
                        int temp = target[j];
                        target[j] = target[j - 1];
                        target[j - 1] = temp;
                    }
                }
            }
        }
        return target;
    }

    /**
     * @description optimized bubble sort: stops early once a pass makes no exchange
     */
    public static int[] optimizeBubbleSort(int[] target) {
        if (target != null && target.length != 1) {
            int n = target.length;
            for (int i = 0; i < n - 1; i++) {
                boolean exchange = false;
                for (int j = n - 1; j > i; j--) {
                    if (target[j] < target[j - 1]) {
                        int temp = target[j];
                        target[j] = target[j - 1];
                        target[j - 1] = temp;
                        exchange = true;
                    }
                }
                if (!exchange) {     // elements from i to n-1 are already ordered: return directly
                    return target;
                }
            }
        }
        return target;
    }
}
```
2. Quick sort
**The idea of quick sort:** one pass splits the data to be sorted into two independent parts, where every element of one part is smaller than every element of the other (the partition process). Each part is then quick-sorted in the same way (the recursion process), so the whole procedure runs recursively until the entire sequence is ordered. Quick sort is an unstable sorting algorithm.
```java
/**
 * Title: quick sort (exchange sort family), currently the most widely used sorting algorithm;
 * a recursive algorithm whose performance depends on the initial sequence
 * Description: quick sort consists of two processes, partition and recursion:
 * "partition" splits the sequence into two subsequences around a pivot element;
 * "recursion" quick-sorts each subsequence separately.
 * In terms of average running time, quick sort is the best of the internal sorting methods:
 * it is fast on large data sets, but on very small ones it can be slower than
 * simple methods such as direct selection sort.
 * Quick sort depends on the original sequence, so its time complexity ranges from O(nlgn) to O(n^2).
 * Time complexity: best case O(nlgn), average case O(nlgn), worst case O(n^2)
 * Space complexity: O(lgn), the stack space consumed by recursion
 * Any element can be chosen as the pivot
 * Stability: unstable
 * Internal sort (all data elements stay in memory during sorting)
 * @author rico
 * @created May 20, 2017, 10:40:00
 */
public class QuickSort {
    /**
     * @description quick sort (recursive algorithm): partition, then sort each side
     */
    public static int[] quickSort(int[] target, int left, int right) {
        if (right > left) {                                   // recursion terminates on ranges of size <= 1
            int base_index = partition(target, left, right);  // final position of the pivot after partition
            quickSort(target, left, base_index - 1);          // quick-sort the first subsequence, pivot excluded
            quickSort(target, base_index + 1, right);         // quick-sort the second subsequence, pivot excluded
        }
        return target;
    }

    /**
     * @description partition the range [left, right] around its first element
     * @param target the sequence being sorted
     * @param left   start of the range
     * @param right  end of the range
     * @return the final position of the pivot element after the partition
     */
    public static int partition(int[] target, int left, int right) {
        int base = target[left];          // value of the pivot element
        int base_index = left;            // position the pivot should finally occupy
        for (int i = left + 1; i <= right; i++) {
            if (target[i] < base) {
                base_index++;             // one more element belongs left of the pivot
                if (base_index != i) {    // equal means everything before i is already < base: no swap needed
                    int temp = target[base_index];
                    target[base_index] = target[i];
                    target[i] = temp;
                }
            }
        }
        // put the pivot element in its final place
        target[left] = target[base_index];
        target[base_index] = base;
        return base_index;
    }
}
```

Merge sort
**Merge sort consists of two processes: "divide" and "merge".** "Divide" means splitting the original sequence into two halves and recursively sorting each half; "merge" means combining the ordered halves back into the original sequence. Merge sort is a typical recursive algorithm and conceptually the simplest sorting algorithm. Compared with quick sort, merge sort does not depend on the initial sequence and is a stable sorting algorithm, but it needs auxiliary storage as large as the original sequence.
```java
import java.util.Arrays;

/**
 * Title: merge sort, conceptually the simplest sorting algorithm; a recursive algorithm
 * that does not depend on the initial sequence
 * Description: merge sort consists of "divide" and "merge": divide the original sequence into
 * two halves and recursively sort each half, then merge the ordered halves back into the original sequence.
 * Its main drawback is the auxiliary array as large as the array being sorted.
 * Because it is independent of the original sequence, its best, average, and worst cases coincide.
 * Time complexity: best case O(nlgn), average case O(nlgn), worst case O(nlgn)
 * Space complexity: O(n)
 * Stability: stable
 * Internal sort (all data elements stay in memory during sorting)
 * @author rico
 * @created May 20, 2017, 10:40:00
 */
public class MergeSort {
    /**
     * @description merge sort (recursive algorithm): divide, sort each half, then merge
     * @param target the sequence to be sorted
     * @param left   start of the range to sort
     * @param right  end of the range to sort
     */
    public static int[] mergeSort(int[] target, int left, int right) {
        if (right > left) {                              // recursion terminates on ranges of size <= 1
            int mid = (left + right) / 2;
            mergeSort(target, left, mid);                // merge-sort the first half
            mergeSort(target, mid + 1, right);           // merge-sort the second half
            return merge(target, left, mid, right);      // merge the sorted halves into the original sequence
        }
        return target;
    }

    /**
     * @description two-way merge of the sorted runs target[left..mid] and target[mid+1..right]
     * @param target holds both runs and receives the merge result
     * @param left   position of the first element of the first run
     * @param mid    position of the last element of the first run
     * @param right  position of the last element of the second run
     */
    public static int[] merge(int[] target, int left, int mid, int right) {
        // needs an auxiliary array as large as the array being sorted
        int[] temp = Arrays.copyOf(target, target.length);
        int s1 = left;        // scan pointer into the first run
        int s2 = mid + 1;     // scan pointer into the second run
        int index = left;     // store pointer into the target array
        while (s1 <= mid && s2 <= right) {    // compare pairwise while neither run is exhausted
            if (temp[s1] <= temp[s2]) {       // <= keeps equal keys in order: stability
                target[index++] = temp[s1++];
            } else {
                target[index++] = temp[s2++];
            }
        }
        while (s1 <= mid) {                   // copy the rest of the first run, if any
            target[index++] = temp[s1++];
        }
        while (s2 <= right) {                 // copy the rest of the second run, if any
            target[index++] = temp[s2++];
        }
        return target;
    }
}
```
P.S.: Merge sort and quick sort are both typical recursive algorithms, so they are relatively easy to understand and implement. For recursive thinking and an in-depth analysis, see the blog post "Algorithm Design Methods: the Connotation and Classical Applications of Recursion".
Distribution sort (radix sort)
The basic idea of distribution sort: trade space for time. No key comparisons are made during sorting; instead, extra space is used to "distribute" and "collect" the elements, so the time complexity can reach linear order O(n). Radix sort consists of two processes: first, distribute the elements of the target sequence into buckets (the distribution process); then put the elements of each bucket back in first-in-first-out order (the collection process). This is repeated d times in total, where d is the number of digits of the elements.
```java
/**
 * Title: radix sort (distribution sort family); does not depend on the initial sequence
 * Description: instead of sorting by comparing elements, it uses "distribution + collection":
 * first distribute the elements of the target sequence into the buckets,
 * then put the elements of each bucket back in first-in-first-out order, and repeat per digit.
 * Time complexity: O(d * (n + r)); d is generally small relative to n
 * Space complexity: O(r * n) with this simple bucket layout
 * Stability: stable
 * Internal sort (all data elements stay in memory during sorting)
 * @author rico
 * @created May 20, 2017, 10:40:00
 */
public class RadixSort {
    /**
     * @description distribution + collection
     * @param target array to be sorted (non-negative integers)
     * @param r      radix (number of buckets; 10 for decimal digits, as assumed here)
     * @param d      number of digits of the largest element
     * @param n      number of elements to be sorted
     */
    public static int[] radixSort(int[] target, int r, int d, int n) {
        if (target != null && target.length != 1) {
            int[][] bucket = new int[r][n];   // r buckets in total; each can hold at most n elements
            int digit;                        // digit of the current element: selects its bucket
            int divisor = 1;                  // divisor for each round: 1, 10, 100, ...
            int[] count = new int[r];         // actual number of elements stored in each bucket
            for (int i = 0; i < d; i++) {     // one distribution + collection pass per digit
                // distribution: drop each element into the bucket of its current digit
                for (int ele : target) {
                    digit = (ele / divisor) % 10;        // extract the digit for this round (clever!)
                    bucket[digit][count[digit]++] = ele; // put the element in its bucket, bump the count
                }
                // collection: read the buckets back in first-in-first-out order
                int index = 0;                // write position in the target array
                for (int j = 0; j < r; j++) {
                    for (int k = 0; k < count[j]; k++) {
                        target[index++] = bucket[j][k];
                    }
                    count[j] = 0;             // empty the bucket for the next round
                }
                divisor *= 10;                // move on to the next higher digit
            }
        }
        return target;
    }
}
```

Summary
1. Direct insertion sort vs. binary insertion sort vs. Shell sort
These three methods all belong to the insertion sort family. Compared with the sequential search that direct insertion sort uses to find the insertion position, binary insertion sort finds the position by binary search, so in terms of locating the insertion position it is faster. In fact, binary insertion sort only reduces the number of key comparisons relative to direct insertion sort; the number of moves stays the same. Therefore both have the same O(n^2) time complexity, but because comparisons are reduced, binary insertion sort still performs somewhat better. Shell sort can be seen as an optimization of direct insertion sort: it divides the elements into gap subsequences whose members are gap positions apart, directly insertion-sorts each subsequence, and keeps shrinking the interval gap until all elements lie in a single sequence. This preserves efficiency because early on, while gap is large, each subsequence has few elements and sorts quickly; in the later stages gap becomes small and each subsequence has more elements, but most of them are already nearly ordered, so sorting remains fast. As a result, Shell sort is more efficient than direct insertion sort and binary insertion sort, but it is not stable.
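The claim that binary insertion only saves comparisons, not moves, can be checked empirically. The sketch below is an editor-added illustration (method names and counting logic are not from the original article): it counts key comparisons made by each variant on the same random input.

```java
import java.util.Random;

public class ComparisonCount {
    // Count key comparisons made by direct insertion sort (sequential backward scan).
    static long directComparisons(int[] a) {
        long c = 0;
        for (int i = 1; i < a.length; i++) {
            int temp = a[i], j = i - 1;
            while (j >= 0) {
                c++;                                   // one key comparison: a[j] vs temp
                if (a[j] > temp) { a[j + 1] = a[j]; j--; } else break;
            }
            a[j + 1] = temp;
        }
        return c;
    }

    // Count key comparisons made by binary insertion sort (binary search for the slot).
    static long binaryComparisons(int[] a) {
        long c = 0;
        for (int i = 1; i < a.length; i++) {
            int temp = a[i], left = 0, right = i - 1;
            while (left <= right) {
                int mid = (left + right) / 2;
                c++;                                   // one key comparison: a[mid] vs temp
                if (a[mid] > temp) right = mid - 1; else left = mid + 1;
            }
            for (int j = i; j > left; j--) a[j] = a[j - 1];   // moves are unchanged
            a[left] = temp;
        }
        return c;
    }

    public static void main(String[] args) {
        int[] base = new Random(1).ints(1000, 0, 100000).toArray();
        long d = directComparisons(base.clone());
        long b = binaryComparisons(base.clone());
        System.out.println(b < d);   // prints "true": binary search needs far fewer comparisons
    }
}
```

On random input of length n, the sequential scan makes on the order of n^2/4 comparisons while binary search makes about n lg n, which is why the binary variant wins on comparisons even though both remain O(n^2) overall because of the moves.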
2. Direct selection sort vs. heap sort
These two methods belong to the selection sort family; their common idea is to select one extreme element on each pass and put it in its final front/back position until the sequence is ordered. Unlike direct selection sort, heap sort does not select the extreme by brute force but by continually re-adjusting the heap. Therefore heap sort is more efficient than direct selection sort, but both are unstable.
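The instability of direct selection sort comes precisely from its long-range swap. A minimal demonstration (the record-with-tag setup below is an editor-added illustration, not from the original article) shows two equal keys changing their relative order:

```java
public class StabilityDemo {
    // Sort keys with direct selection sort while dragging a tag array along,
    // so the final tag order reveals what happened to records with equal keys.
    static String selectSortTags(int[] keys, String[] tags) {
        for (int i = 0; i < keys.length - 1; i++) {
            int min = i;
            for (int j = i + 1; j < keys.length; j++) {
                if (keys[j] < keys[min]) min = j;
            }
            int k = keys[i]; keys[i] = keys[min]; keys[min] = k;      // long-range swap of keys
            String t = tags[i]; tags[i] = tags[min]; tags[min] = t;   // same swap applied to tags
        }
        return String.join(" ", tags);
    }

    public static void main(String[] args) {
        // Two records share key 2; "2a" precedes "2b" in the input.
        int[] keys = {2, 2, 1};
        String[] tags = {"2a", "2b", "1"};
        System.out.println(selectSortTags(keys, tags));   // prints "1 2b 2a"
    }
}
```

The swap that brings 1 to the front carries "2a" past "2b", so the two equal keys come out in reversed order; a stable sort such as direct insertion sort would have kept "2a" first.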
3. Bubble sort vs. quick sort
These two methods belong to the exchange sort family; their common idea is the exchange of elements. In bubble sort, adjacent elements are compared and the smaller one is swapped forward (so the larger ones naturally sink backward). Quick sort works around a pivot, swapping smaller and larger elements to its two sides. Therefore quick sort is more efficient than bubble sort, but it is not stable.
4. Merge sort vs. quick sort
Both are typical recursive algorithms, so they are easy to understand and implement. Compared with quick sort, merge sort is not only stable but also independent of the original sequence (regardless of the input order, its time complexity is always O(nlgn)); however, it needs space as large as the original sequence. Quick sort, in contrast, is generally faster than the other efficient sorting algorithms (including merge sort), and its space complexity is only the O(lgn) of the recursion stack. In addition, the implementations reveal the following differences and connections between the two recursive algorithms:
The two share the same recursion termination condition, and their structures are mirror images: merge sort divides first and merges afterwards, while quick sort partitions first and recurses afterwards. The core of merge sort lies in merging ordered subsequences; the core of quick sort lies in partitioning the original sequence.
5. Summary
Direct insertion sort, direct selection sort, and bubble sort are the basic sorting methods. Their average time complexity is O(n^2) and their implementations are relatively simple; they are very effective for small element sequences.
Quick sort, heap sort, and merge sort are the efficient sorting methods, with average time complexity O(nlgn). Among them, quick sort is the most widely used efficient sorting algorithm, but it is unstable. Merge sort is the only one whose running time is independent of the initial sequence, always O(nlgn), and it is a stable sorting algorithm, but its space complexity is O(n). Heap sort's time complexity is always O(nlgn) with O(1) space, and it is also unstable. These methods are effective for large element sequences.
The efficiency of Shell sort lies between the basic methods and the efficient methods, and it is an unstable sorting algorithm. Each algorithm has its own strengths and specific usage scenarios. Although radix sort has linear time complexity, in practice its cost is not much lower than quick sort's, and its applicability is comparatively narrow.
Therefore, in practical applications, we should make the most appropriate choice according to the characteristics of the actual task and of the various sorting algorithms.
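Whichever algorithm you choose, it is worth validating the implementation against a trusted reference. The sketch below is an editor-added, self-contained example (the inlined insertion sort and the helper names are illustrative, not the article's classes): it cross-checks a hand-written sort against the JDK's `Arrays.sort` on random inputs.

```java
import java.util.Arrays;
import java.util.Random;

public class SortCheck {
    // The sort under test: direct insertion sort, restated inline so this harness
    // compiles on its own; substitute any of the article's sorts here.
    static int[] insertSort(int[] a) {
        for (int i = 1; i < a.length; i++) {
            int temp = a[i], j = i - 1;
            while (j >= 0 && a[j] > temp) {   // shift larger elements backward
                a[j + 1] = a[j];
                j--;
            }
            a[j + 1] = temp;                  // drop the value into its slot
        }
        return a;
    }

    // Run `trials` random arrays of length `size` through both sorts and compare results.
    static boolean agreesWithJdk(int trials, int size, long seed) {
        Random rnd = new Random(seed);        // fixed seed keeps the check reproducible
        for (int t = 0; t < trials; t++) {
            int[] a = rnd.ints(size, 0, 1000).toArray();
            int[] expected = a.clone();
            Arrays.sort(expected);            // the trusted reference
            if (!Arrays.equals(insertSort(a), expected)) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(agreesWithJdk(100, 50, 42L));   // prints "true"
    }
}
```

A harness like this catches off-by-one and boundary bugs (empty arrays, duplicates, already-sorted input) far faster than eyeballing a single example.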
At this point, you should have a deeper understanding of how to get started with Java sorting algorithms; why not try them out in practice!