Data structures and algorithm tutorial


Saturday, September 26, 2020

How to solve Kruskal's algorithm with a practical example



Kruskal's Algorithm

 This tutorial will help you understand how Kruskal's algorithm works. Additionally, you'll find working samples of Kruskal's algorithm in C, C++, Java, and Python.

Kruskal's algorithm is a minimum spanning tree algorithm that takes a graph as input and finds the subset of the edges of that graph which

  •  forms a tree that includes every vertex, and
  •  has the minimum total weight among all the trees that can be formed from the graph.
 

How Kruskal's Algorithm Works


Kruskal's algorithm belongs to the category of greedy algorithms, which make the locally optimal choice at each step in the hope of finding a globally optimal solution.

We start from the edge with the lowest weight and keep adding edges until we reach our goal.

The steps for implementing Kruskal's algorithm are as follows:

  •  Sort all edges from low weight to high.
  •  Take the edge with the lowest weight and add it to the spanning tree. If adding the edge would create a cycle, discard this edge.
  •  Keep adding edges until all vertices are covered.


Kruskal's Algorithm Pseudocode

Any minimum spanning tree algorithm needs a way to check whether adding an edge creates a cycle.

The most common way to check this is the Union-Find algorithm.

The Union-Find algorithm divides the vertices into clusters and lets us check whether two vertices belong to the same cluster, and so decide whether adding an edge would create a cycle.

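The pseudocode figure is not reproduced here; a standard formulation of the algorithm is:

```
KRUSKAL(G):
  A = {}                                  // edges of the MST
  for each vertex v in G.V:
    MAKE-SET(v)
  sort the edges of G.E by non-decreasing weight
  for each edge (u, v), taken in that order:
    if FIND-SET(u) != FIND-SET(v):        // no cycle is created
      A = A + {(u, v)}
      UNION(u, v)
  return A
```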
 

Example of Kruskal's algorithm with solution

 

Step-01:

 Sort all the edges from low weight to high weight.

 
Step-02:

  •  Take the edge with the lowest weight and use it to connect the vertices of the graph.
  •  If adding an edge creates a cycle, reject that edge and choose the next lowest-weight edge.


Step-03:

 Keep adding edges until all the vertices are connected and a Minimum Spanning Tree (MST) is obtained.

Minimum Spanning Tree Analysis-


  •  The edges are maintained as a min-heap.
  •  The next edge can be obtained in O(log E) time if the graph has E edges.
  •  Reconstruction of the heap takes O(E) time.
  •  So, Kruskal’s Algorithm takes O(E log E) time.
  •  The value of E can be at most O(V²).
  •  Since log E = O(log V²) = O(2 log V), O(log V) and O(log E) are the same.


 
Special Case-

  •  If the edges are already sorted, then there is no need to construct a min-heap.
  •  So, the time for deletions from the min-heap is saved.
  •  In this case, the time complexity of Kruskal’s Algorithm = O(E + V)

SOLVE PROBLEMS ON KRUSKAL’S ALGORITHM

 Follow these steps to solve the problem.

Steps to solve KRUSKAL’S ALGORITHM

 Now follow the other two steps.

 Step 5 and 6

Last Step 7:-

 Since all the vertices are now connected / included in the MST, we stop.

Weight of the MST

= Sum of all edge weights

= 10 + 25 + 22 + 12 + 16 + 14

= 99 units

Time Complexity for Kruskal’s Algorithm

C program for Kruskal’s Algorithm

#include <stdio.h>

#define MAX 30

typedef struct edge {
  int u, v, w;
} edge;

typedef struct edge_list {
  edge data[MAX];
  int n;
} edge_list;

edge_list elist;

int Graph[MAX][MAX], n;
edge_list spanlist;

void kruskalAlgo();
int find(int belongs[], int vertexno);
void applyUnion(int belongs[], int c1, int c2);
void sort();
void print();

// Applying Kruskal's algorithm
void kruskalAlgo() {
  int belongs[MAX], i, j, cno1, cno2;
  elist.n = 0;

  for (i = 1; i < n; i++)
    for (j = 0; j < i; j++) {
      if (Graph[i][j] != 0) {
        elist.data[elist.n].u = i;
        elist.data[elist.n].v = j;
        elist.data[elist.n].w = Graph[i][j];
        elist.n++;
      }
    }

  sort();

  for (i = 0; i < n; i++)
    belongs[i] = i;

  spanlist.n = 0;

  for (i = 0; i < elist.n; i++) {
    cno1 = find(belongs, elist.data[i].u);
    cno2 = find(belongs, elist.data[i].v);

    if (cno1 != cno2) {
      spanlist.data[spanlist.n] = elist.data[i];
      spanlist.n = spanlist.n + 1;
      applyUnion(belongs, cno1, cno2);
    }
  }
}

int find(int belongs[], int vertexno) {
  return (belongs[vertexno]);
}

void applyUnion(int belongs[], int c1, int c2) {
  int i;

  for (i = 0; i < n; i++)
    if (belongs[i] == c2)
      belongs[i] = c1;
}

// Sorting the edges by weight (bubble sort)
void sort() {
  int i, j;
  edge temp;

  for (i = 1; i < elist.n; i++)
    for (j = 0; j < elist.n - 1; j++)
      if (elist.data[j].w > elist.data[j + 1].w) {
        temp = elist.data[j];
        elist.data[j] = elist.data[j + 1];
        elist.data[j + 1] = temp;
      }
}

// Printing the result
void print() {
  int i, cost = 0;

  for (i = 0; i < spanlist.n; i++) {
    printf("\n%d - %d : %d", spanlist.data[i].u, spanlist.data[i].v, spanlist.data[i].w);
    cost = cost + spanlist.data[i].w;
  }

  printf("\nSpanning tree cost: %d", cost);
}

int main() {
  n = 6;

  // 6x6 symmetric adjacency matrix; 0 means no edge
  Graph[0][0] = 0;
  Graph[0][1] = 4;
  Graph[0][2] = 4;
  Graph[0][3] = 0;
  Graph[0][4] = 0;
  Graph[0][5] = 0;

  Graph[1][0] = 4;
  Graph[1][1] = 0;
  Graph[1][2] = 2;
  Graph[1][3] = 0;
  Graph[1][4] = 0;
  Graph[1][5] = 0;

  Graph[2][0] = 4;
  Graph[2][1] = 2;
  Graph[2][2] = 0;
  Graph[2][3] = 3;
  Graph[2][4] = 4;
  Graph[2][5] = 2;

  Graph[3][0] = 0;
  Graph[3][1] = 0;
  Graph[3][2] = 3;
  Graph[3][3] = 0;
  Graph[3][4] = 3;
  Graph[3][5] = 0;

  Graph[4][0] = 0;
  Graph[4][1] = 0;
  Graph[4][2] = 4;
  Graph[4][3] = 3;
  Graph[4][4] = 0;
  Graph[4][5] = 3;

  Graph[5][0] = 0;
  Graph[5][1] = 0;
  Graph[5][2] = 2;
  Graph[5][3] = 0;
  Graph[5][4] = 3;
  Graph[5][5] = 0;

  kruskalAlgo();
  print();
  return 0;
}

Sunday, September 20, 2020

greedy algorithm with example


 Introduction to the greedy algorithm with an example


In algorithm design, there is no silver bullet for all computational problems. Different problems require the use of different kinds of techniques. A good programmer uses all of these techniques depending on the type of problem. Some commonly used techniques are:

  •  Divide and conquer
  •  Random algorithms
  •  Greedy algorithms (strictly speaking, a technique rather than a single algorithm)
  •  Dynamic programming

 

What Is a “greedy algorithm”?


A greedy algorithm, as the name suggests, always makes the choice that seems best at that moment. This means it makes a locally optimal choice in the hope that this choice will lead to a globally optimal solution.

How do you decide which choice is optimal?


Suppose you have an objective function that must be optimized (maximized or minimized) at some point. A greedy algorithm makes greedy choices at every step to ensure that the objective function is optimized. A greedy algorithm has only one shot at computing the optimal solution, so it never goes back and reverses a decision.

Advantages and disadvantages of greedy algorithms, with an example:


  •  It's pretty easy to come up with a greedy algorithm (or even multiple greedy algorithms) for a problem.
  •  Running-time analysis for greedy algorithms is generally much easier than for other techniques (like divide and conquer). For the divide and conquer technique, it is often not clear whether the approach is fast or slow, because at each level of recursion the size of the sub-problems becomes smaller while the number of sub-problems increases.
  •  The hard part is that for greedy algorithms you have to work much harder to settle the correctness issues. Even with the right algorithm, it is hard to prove why it is correct. Proving that a greedy algorithm is correct is more of an art than a science; it involves a lot of creativity.


Note: Most of the greedy algorithms one encounters in DAA (data structures and algorithms) are not correct. An example is described later in this article.


Greedy Algorithm Example

Solution:-

  • First, create an empty solution-set = { }.
  • The available coins are 5, 2, and 1.
  • sum = 0
  • While sum ≠ 28, do the following.
  • Select the largest coin C from the coins such that sum + C ≤ 28.
  • If no such coin exists, return an empty solution.
  • Else, sum = sum + C.
  • Add C to solution-set.

 

For the first 5 iterations, the solution set contains five $5 coins. After that, we get one $2 coin and finally one $1 coin.

Where to use Greedy algorithms?


A problem must have these two components for a greedy algorithm to work:

 It has optimal substructure. The optimal solution for the problem contains optimal solutions to the sub-problems.

 It has the greedy-choice property (hard to prove its correctness!). If you make the choice that seems best at the moment and solve the remaining sub-problems later, you still reach an optimal solution. You will never have to reconsider your earlier choices.
 

Thursday, September 17, 2020

heap sort algorithm and its Performance


Heap Sorting Algorithm

The heap sort algorithm sorts an array using a heap, and a heap is a binary tree stored in an array. Each node of the binary tree corresponds to an element of the array. Since a node has zero, one, or two child nodes, for the i-th element of the array (counting from 1), the 2i-th and (2i + 1)-th elements are its left and right children respectively.

The following figure shows an example of a heap in which the numbers in the nodes are equal to their positions in an array of 12 elements. The root is the first element of the array, and its left and right children are the second and third elements, respectively. The sixth element has only a left child, and the seventh has none.

Heap sort algorithm
min heap


There are two kinds of heap: min-heap and max-heap. In a min-heap, parent nodes are smaller than their child nodes (the root node is the smallest), while in a max-heap it is the opposite (the root node is the largest). We will be using the max-heap property for the heapsort algorithm in this article.


Heapsort Algorithm

The strategy is as follows:

i) transform the array into a max-heap;

ii) take the root, which is the maximum number;

iii) keep the remaining array a max-heap;

iv) repeat ii) and iii).

Here is the code for step iii), which is also used in step i). This "Max-Heapify" function takes two inputs: an array and an integer. The function compares the node at the given index (the input integer) with its two children. If the node is smaller than either of the children, it is swapped with the larger of the two child nodes.
 

Heap sort Algorithm

Although one call to Max-Heapify does not fix the whole heap, by running the function on all nodes except the leaves (the end nodes of the heap), we can turn any array into a max-heap.
 

Heap sort

We are now able to implement Heapsort.

Heap sort algorithm

The following figure explains how heapsort treats an array of 12 elements:

i) first, we turn the array into a max-heap;

ii) take the root, 12, and swap it with the last element, 3;

iii) run Max-Heapify at the root on the remaining array of 11 elements (the affected nodes are shown in dark blue);

iv) repeat ii) and iii). In the end, we get an array sorted in ascending order. Heapsort works in place, storing only a constant amount of data outside the input array.

solution for Heap sort algorithm


 

heap sort time complexity


We can analyze the cost of heapsort by looking at its sub-functions Max-Heapify and Build-Max-Heap.

The cost of Max-Heapify is O(lg n). Comparing a node with its two child nodes costs Θ(1), and in the worst case we recurse ⌊log₂n⌋ levels down. Alternatively, the cost of Max-Heapify can be expressed in terms of the height h of the heap as O(h).


heap sort time complexity

The cost of Build-Max-Heap is O(n). Since a heap of n elements has a depth of ⌊lg n⌋ and at any height h there are at most ⌈n / 2ʰ⁺¹⌉ nodes apart from the leaves, we can derive the cost as the sum of the Max-Heapify costs O(h) over all heights, shown left.

The total cost of heapsort is O(n lg n), because it calls Build-Max-Heap (O(n)) once and Max-Heapify (O(lg n)) (n − 1) times.


Check Performance For heap sort algorithm

The following table compares different sorting algorithms. As expected, heapsort keeps up with the other O(n lg n) cost algorithms, though it is not as cheap as quicksort, since a higher constant is hidden in the O notation.




While heapsort does not beat quicksort as a sorting algorithm, the heap as a data structure offers many different uses, and one of the most notable is priority queues. With this heapsort algorithm as an introduction to heaps, we will later see how heaps are applied to an efficient priority queue algorithm.


Saturday, September 12, 2020

Binary Search Algorithm And Binary Search program in c programming

In this tutorial, you will learn how the binary search algorithm works. You will also find practical examples of binary search in C, C++, Java, and Python.

A binary search is a search algorithm that finds the position of an element in a sorted array.

In this approach, the element is always searched for in the middle of a portion of the array.

Binary search Algorithm working


The binary search algorithm can be implemented in two ways, described below.

  •  Iterative method
  •  Recursive method

The recursive method follows the divide and conquer approach.

The general steps for both methods are described below.

  •  The array in which the search is to be performed is:
binary search in c

Let x = 4 be the element to search for.
  • Set two pointers, low and high, at the lowest and the highest positions respectively.
Binary search algorithm


  • Find the middle element of the array, i.e. arr[(low + high) / 2] = 6.
binary search in C


  • If x == arr[mid], return mid. Else, compare the element to search for with arr[mid].
  • If x > arr[mid], compare x with the middle element of the elements on the right side of mid. This is done by setting low = mid + 1.
  • Otherwise, compare x with the middle element of the elements on the left side of mid. This is done by setting high = mid - 1.
Binary search algorithm in data structure


  • Repeat steps 3 through 6 until low meets high.
Mid element in binary search


  • x = 4 is found.

binary search program in c with Recursive Method



#include <stdio.h>

int binarySearch(int array[], int x, int low, int high) {
  if (high >= low) {
    int mid = low + (high - low) / 2;

    if (array[mid] == x)
      return mid;


    if (array[mid] > x)
      return binarySearch(array, x, low, mid - 1);

   
    return binarySearch(array, x, mid + 1, high);
  }

  return -1;
}

int main(void) {
  int array[] = {3, 4, 5, 6, 7, 8, 9};
  int n = sizeof(array) / sizeof(array[0]);
  int x = 4;
  int result = binarySearch(array, x, 0, n - 1);
  if (result == -1)
    printf("Not found");
  else
    printf("Element is found at index %d", result);
}

binary search program in c with Iterative Method

#include <stdio.h>

int binarySearch(int array[], int x, int low, int high) {
  // Repeat until the pointers low and high meet each other
  while (low <= high) {
    int mid = low + (high - low) / 2;

    if (array[mid] == x)
      return mid;

    if (array[mid] < x)
      low = mid + 1;

    else
      high = mid - 1;
  }

  return -1;
}

int main(void) {
  int array[] = {3, 4, 5, 6, 7, 8, 9};
  int n = sizeof(array) / sizeof(array[0]);
  int x = 4;
  int result = binarySearch(array, x, 0, n - 1);
  if (result == -1)
    printf("Not found");
  else
    printf("Element is found at index %d", result);
  return 0;
}

binary search Time Complexity

Time complexities
  •  Best case complexity: O(1)
  •  Average case complexity: O(log n)
  •  Worst case complexity: O(log n)

Space complexity


The space complexity of binary search is O(1) for the iterative method, and O(log n) for the recursive method because of the call stack.



Where We Use Binary Search Algorithm

With the linear search algorithm, even when the items are sorted we do not use that fact and end up with the same complexity. In such a case, binary search comes to our rescue. Binary search exploits the fact that the items are in ascending order.


linear search algorithm and its program in c

In this tutorial, you will study the linear search algorithm. You will also find working examples of linear search in C, C++, Java, and Python.

Linear search is the simplest search algorithm: it searches for an item in a list in sequential order. We start at one end and check each item until the item we want is found or the list is exhausted.

How does the linear search algorithm work?

  • The following steps are followed to find an element k = 1 in the list below.

linear search algorithm


  • Start from the first element and compare k with each element x.

 

Linear Search in c

  • If x == k, return the index.
linear search

 Linear Search Algorithm

 Linear Search Program In C



#include <stdio.h>

int search(int array[], int n, int x) {
  
  // Going through the array sequentially
  for (int i = 0; i < n; i++)
    if (array[i] == x)
      return i;
  return -1;
}

int main() {
  int array[] = {2, 4, 0, 1, 9};
  int x = 1;
  int n = sizeof(array) / sizeof(array[0]);

  int result = search(array, n, x);

  (result == -1) ? printf("Element not found") : printf("Element found at index: %d", result);
}

Otherwise, return not found.

Complexities For linear search Algorithm


Time complexity: O(n)

Space complexity: O(1)


Use Of Linear Search Algorithm

Let's say you want to search for an element (x) in a given array (arr).

The first and foremost algorithm that would strike a person is to iterate through each element of the array and check whether it is equal to the search element (x).


 This type of algorithm is known as a linear search algorithm in data structures.


Friday, September 11, 2020

Radix Sort Algorithm in C and Radix Sort Program

In this tutorial, you will learn how radix sort works. You will also find working examples of radix sort in C, C++, Java, and Python.

Radix sort sorts items by first grouping the individual digits of the same place value, then ordering the items in ascending/descending order.

Suppose we have an array of 7 elements. First, we sort the items based on the value of the units place. Next, we sort the items based on the value of the tens place. This process continues up to the most significant place.

Let the initial array be [121, 432, 564, 23, 1, 45, 788]. It is sorted according to radix sort as shown in the figure below.



radix sort in c program


How does the Radix sort algorithm work?



  •  Find the largest element in the array, i.e. max. Let X be the number of digits in max. X is needed because we have to go through all the significant places of all the elements.

 In the array [121, 432, 564, 23, 1, 45, 788], the largest number is 788. It has 3 digits. Therefore, the loop must go up to the hundreds place (3 passes).



  •  Now go through each significant place one by one.
radix sort in c

 Use any stable sorting technique to sort the digits at each significant place. We use counting sort for this.

 Sort the elements based on the digits at the units place (X = 0).

  • Now sort the elements based on the digits at the tens place.
radix sort in the data structure


  • Finally, sort the elements based on the digits at the hundreds place.
Radix Sort Algorithm at the hundreds place

Time complexity of radix sort

Since radix sort is a non-comparative algorithm, it has advantages over comparative sorting algorithms.
  • For radix sort, which uses counting sort as an intermediate stable sort, the time complexity is O(d(n + k)).

  • Here, d is the number of digits and O(n + k) is the time complexity of counting sort.


  • Thus, radix sort has linear time complexity, which is better than the O(n log n) of comparative sorting algorithms.


  • If we take numbers with very large digit counts, or numbers in other bases such as 32-bit and 64-bit numbers, radix sort can still run in linear time, but the intermediate counting sort takes a lot of space.


  • This makes radix sort space-inefficient. This is the reason radix sort is not used in software libraries.

Radix Sort Program in C


#include <stdio.h>

// Return the largest element of the array
int getMax(int array[], int n) {
  int max = array[0];
  for (int i = 1; i < n; i++)
    if (array[i] > max)
      max = array[i];
  return max;
}

// Stable counting sort on the digit selected by place
void countingSort(int array[], int size, int place) {
  int output[size];
  int count[10] = {0};  // digits are always in the range 0-9

  for (int i = 0; i < size; i++)
    count[(array[i] / place) % 10]++;

  for (int i = 1; i < 10; i++)
    count[i] += count[i - 1];

  for (int i = size - 1; i >= 0; i--) {
    output[count[(array[i] / place) % 10] - 1] = array[i];
    count[(array[i] / place) % 10]--;
  }

  for (int i = 0; i < size; i++)
    array[i] = output[i];
}

void radixsort(int array[], int size) {
  // Get maximum element
  int max = getMax(array, size);

  // Apply counting sort to sort elements based on each place value
  for (int place = 1; max / place > 0; place *= 10)
    countingSort(array, size, place);
}

void printArray(int array[], int size) {
  for (int i = 0; i < size; ++i) {
    printf("%d  ", array[i]);
  }
  printf("\n");
}

int main() {
  int array[] = {121, 432, 564, 23, 1, 45, 788};
  int n = sizeof(array) / sizeof(array[0]);
  radixsort(array, n);
  printArray(array, n);
  return 0;
}


Applications For Radix sort Algorithm


Radix sorting is implemented in
  •  DC3 algorithm (Kärkkäinen-Sanders-Burkhardt) while creating an array of suffixes.
  •  places where there are numbers in large ranges.

shell sort algorithm in C, C ++, Java, and Python.

In this tutorial, you will learn how the shell sort algorithm works. You will also find working examples of shell sort in C, C++, Java, and Python.

Shell sort is an algorithm that first sorts items that are far apart from each other and successively reduces the interval between the items to be sorted. It is a generalized version of insertion sort.

In shell sort, items at a specific interval are sorted. The interval between elements is gradually reduced based on the sequence used. The performance of shell sort depends on the type of sequence used for a given input array.

Some of the optimal sequences used are:

  •  Shell's original sequence: N/2, N/4, …, 1
  •  Knuth's increments: 1, 4, 13, …, (3ᵏ − 1)/2
  •  Sedgewick's increments: 1, 8, 23, 77, 281, 1073, 4193, 16577, …, 4ᵏ⁺¹ + 3·2ᵏ + 1
  •  Hibbard's increments: 1, 3, 7, 15, 31, 63, 127, 255, 511, …
  •  Papernov & Stasevich increments: 1, 3, 5, 9, 17, 33, 65, …
  •  Pratt: 1, 2, 3, 4, 6, 9, 8, 12, 18, 27, 16, 24, 36, 54, 81, …


How does the Shell sort Algorithm work?



  •  Suppose we need to sort the following array.

shell sort sequence

 

  • We use Shell's original sequence (N/2, N/4, …, 1) as the intervals in our algorithm.

In the first loop, since the size of the array is N = 8, the elements located at an interval of N/2 = 4 are compared and swapped if they are out of order.

  •  The 0th element is compared with the 4th element.
  •  If the 0th element is greater than the 4th, the 4th element is first stored in the variable temp, then the 0th element (i.e. the greater element) is stored in the 4th position, and the element stored in temp is placed in the 0th position.
Shell Sort algorithm


This process continues for all remaining items.

rearrange the elements in shell sort

In the second loop, an interval of N/4 = 8/4 = 2 is taken, and again the elements lying at these intervals are sorted.

shell sort algorithm

You might be confused now.
shell sort in the data structure


The elements at the 4th and 2nd positions are compared. The elements at the 2nd and 0th positions are also compared. All array elements at the current interval are compared.
  • The same process continues for the remaining elements.
shell sort in the data structure


  • Finally, when the interval is N/8 = 8/8 = 1, the array elements lying at an interval of 1 are sorted. The array is now completely sorted.

shell sort in the data structure

Shell sort example in C


#include<stdio.h> 

// Shell sort
void shellSort(int array[], int n) {
  // Rearrange elements at each n/2, n/4, n/8, ... intervals
  for (int interval = n / 2; interval > 0; interval /= 2) {
    for (int i = interval; i < n; i += 1) {
      int temp = array[i];
      int j;
      for (j = i; j >= interval && array[j - interval] > temp; j -= interval) {
        array[j] = array[j - interval];
      }
      array[j] = temp;
    }
  }
}

// Print an array
void printArray(int array[], int size) {
  for (int i = 0; i < size; ++i) {
    printf("%d  ", array[i]);
  }
  printf("\n");
}

// Driver code
int main() {
  int data[] = {9, 8, 3, 7, 5, 6, 4, 1};
  int size = sizeof(data) / sizeof(data[0]);
  shellSort(data, size);
  printf("Sorted array: \n");
  printArray(data, size);
}

Time Complexity For Shell Sort Algorithm


Shell sort is an unstable sorting algorithm because it does not examine the elements lying between the intervals.

Time complexity

  •  Worst case complexity: less than or equal to O(n²)
 The worst case complexity for shell sort is always less than or equal to O(n²).

 According to Poonen's theorem, the worst-case complexity of shell sort is Θ(N (log N)² / (log log N)²), or Θ(N (log N)² / log log N), or Θ(N (log N)²), or something in between.

  •  Best case complexity: O(n log n)
 When the array is already sorted, the total number of comparisons for each interval (or increment) is equal to the size of the array.
  •  Average case complexity: O(n log n)
 It is around O(n^1.25).

The complexity depends on the chosen intervals. The above complexities differ for the different increment sequences chosen. The best increment sequence is unknown.

  • Space complexity:

The space complexity of shell sort is O(1).

Shell Sort Algorithm Applications


Shell sort is used when:
  •  calling the stack is expensive; the uClibc library uses this sort.
  •  recursion would exceed a limit; the bzip2 compressor uses it.
  •  insertion sort does not perform well when nearby items are far apart. Shell sort helps reduce the distance between nearby items, so there will be fewer swaps to perform.