
Complete Guide on Time Complexity in Data Structure

Updated: Jan 23



Time Complexity

Time complexity is a measure of how the performance of an algorithm changes as the size of the input data increases. It is typically expressed using big O notation, which describes the upper bound of the time complexity. For example, an algorithm that has a time complexity of O(n) will take longer to run as the size of the input data increases, but the increase in time will be directly proportional to the size of the input data.
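For instance, summing the elements of an array touches each element exactly once, so the running time grows linearly with the input size. A minimal sketch (the function name `sumElements` is illustrative, not from the original article):

```cpp
#include <vector>

// Summing n elements performs n additions: the work grows
// linearly with the input size, i.e. O(n).
long long sumElements(const std::vector<int>& data) {
    long long total = 0;
    for (int value : data) {  // runs once per element -> n iterations
        total += value;
    }
    return total;
}
```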


Asymptotic notations

Asymptotic notations are used to express the behaviour of an algorithm as the size of the input data increases, without considering the exact input values. The most commonly used asymptotic notations are:

  • Big O notation: O(f(n)) describes the upper bound of an algorithm's time complexity and gives an idea of how the running time increases as the size of the input data increases. It is used to express the worst-case time complexity.

  • Big Omega notation: Ω(f(n)) describes the lower bound of an algorithm's time complexity and gives an idea of the best-case time complexity.

  • Big Theta notation: Θ(f(n)) describes the tight bound of an algorithm's time complexity: the running time grows at the same rate as f(n), up to constant factors. It applies when the upper and lower bounds coincide.

  • Little o notation: o(f(n)) describes a strict (non-tight) upper bound of an algorithm's time complexity: the running time grows strictly slower than f(n) as the size of the input data increases.

  • Little omega notation: ω(f(n)) describes a strict (non-tight) lower bound of an algorithm's time complexity: the running time grows strictly faster than f(n) as the size of the input data increases.


Questions

Find the time complexity of the following code snippets

1.
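The original snippet is not reproduced in the text; based on the explanation that follows, it is a pair of nested loops, which might look like this (the function name `nestedCount` is illustrative):

```cpp
// Two nested loops, each running n times: the body executes n * n times.
long long nestedCount(int n) {
    long long count = 0;
    for (int i = 0; i < n; i++) {      // outer loop: n iterations
        for (int j = 0; j < n; j++) {  // inner loop: n iterations per i
            count++;                   // executes n * n times in total
        }
    }
    return count;
}
```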



The inner loop and the outer loop both execute n times. So for a single value of i, j loops n times; for n values of i, j loops a total of n * n = n^2 times. So the time complexity is O(n^2).



2.
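Again, the snippet itself is not shown; a loop that halves i on every iteration, as the explanation below describes, might look like this (the function name `halvingSteps` is illustrative):

```cpp
// i is halved on every iteration: n, n/2, n/4, ..., 1,
// so the loop body runs about log2(n) times.
int halvingSteps(int n) {
    int steps = 0;
    for (int i = n; i > 1; i /= 2) {
        steps++;  // one step per halving -> O(log n) steps
    }
    return steps;
}
```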


In this case, after each iteration the value of i is halved. So the values of i form the series n, n/2, n/4, ..., 1, which contains about log2(n) terms. So the time complexity is O(log n).


Why is time complexity important?

Time complexity is important because it helps us understand the efficiency of an algorithm. Knowing the time complexity of an algorithm can help us identify the most efficient and effective solution for a given problem.


What is the difference between time complexity and space complexity?

The main difference between time complexity and space complexity is that time complexity measures the amount of time it takes for an algorithm to complete, while space complexity measures the amount of memory an algorithm requires to complete.


What are the common time complexities?

Some common time complexities include:

  • O(1): Constant time complexity

  • O(log n): Logarithmic time complexity

  • O(n): Linear time complexity

  • O(n log n): Log-linear time complexity

  • O(n^2): Quadratic time complexity

  • O(2^n) or O(c^n): Exponential time complexity


What is the time complexity of quick sort?

Quick sort is O(n log n) in the best and average cases. In the worst case, the time complexity is O(n^2). Quick sort is often considered the fastest general-purpose sorting algorithm in practice because of its O(n log n) performance in the best and average cases.


Time Complexity of Searching algorithms

Let us now dive into the time complexities of some Searching Algorithms and understand which of them is faster.


Time Complexity of Linear Search:

Linear Search follows sequential access. The time complexity of Linear Search in the best case is O(1). In the worst case, the time complexity is O(n).


Time Complexity of Binary Search:

Binary Search is the faster of the two searching algorithms. However, for smaller arrays, linear search does a better job. The time complexity of Binary Search in the best case is O(1). In the worst case, the time complexity is O(log n).



Data Structures

A data structure organizes and stores data in a computer so that it can be accessed and modified efficiently. Some common data structures include arrays, linked lists, stacks, queues, trees, and graphs. Each data structure has its own advantages and use cases, and choosing the right one for a given problem can greatly affect the performance and efficiency of a program.



Types of Data Structure -

  • Linear

A data structure where elements are stored in a linear or sequential fashion, meaning that the elements are arranged one after another in a specific order and are traversed sequentially.

Examples - array, stack, queue, linked list, etc.


  • Non-Linear

A data structure where elements are not stored in a linear or sequential fashion, meaning that the elements are not arranged in a specific order and can be accessed in multiple ways.

Examples - trees and graphs.


Sorting

  • Bubble sort: A simple sorting algorithm that repeatedly steps through the list to be sorted, compares each pair of adjacent items and swaps them if they are in the wrong order.


  • Selection sort: A simple sorting algorithm that repeatedly selects the minimum element from the unsorted portion of the list and appends it to the sorted portion.


  • Insertion sort: A simple sorting algorithm that builds the final sorted list one item at a time by comparing each new item to the items already in the list and inserting it in the correct position.


  • Merge sort: A divide-and-conquer sorting algorithm that divides the list into two halves, recursively sorts each half, and then merges the two sorted halves back into a single sorted list.


  • Quick sort: A divide-and-conquer sorting algorithm that selects a "pivot" element from the list and partitions the other elements into two sub-lists, according to whether they are less than or greater than the pivot.


  • Heap sort: A comparison-based sorting algorithm that builds a heap from the data, then repeatedly extracts the maximum (or minimum) element from the heap and moves it to the end of the sorted array.


These are some well-known algorithms which can be implemented using arrays, vectors, or other data structures.



Searching

The process of finding a specific element or value in a given data structure, such as an array or linked list. The goal of searching is to determine if the element or value is present in the data structure, and if so, its location or index.


Types -

  • Sequential / Linear Search

It is a simple searching algorithm that iterates through a given data structure, such as an array or linked list, comparing each element to the search key until a match is found or the end of the data structure is reached.


  • Interval

A class of searching algorithms designed for sorted data structures. Instead of checking every element, they repeatedly narrow the interval in which the search key can lie; binary search is the classic example.


Linear Search –

An algorithm that starts at one end and checks each element of a list until the desired element is found; otherwise, the search continues to the end of the data set.




Binary Search –

Binary search is a more efficient searching algorithm that takes advantage of the ordered nature of the data structure and repeatedly divides the search interval in half. It can only be applied on a sorted array.
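A sketch of binary search under the same assumptions as above (illustrative function name, -1 means "not found", input must already be sorted):

```cpp
#include <vector>

// Binary search on a sorted array: compare the key with the middle
// element and discard half of the remaining interval each step.
int binarySearch(const std::vector<int>& sorted, int key) {
    int lo = 0, hi = static_cast<int>(sorted.size()) - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;  // avoids overflow of (lo + hi)
        if (sorted[mid] == key) return mid;
        if (sorted[mid] < key) lo = mid + 1;  // key is in the right half
        else hi = mid - 1;                    // key is in the left half
    }
    return -1;  // interval is empty: key not present, O(log n) steps
}
```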



Jump Search

Algorithm that improves on linear search by skipping over some elements in the data structure, thus reducing the number of comparisons needed. It works by jumping a fixed number of elements, called the "jump size," at a time, instead of comparing every element one at a time.
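One common choice of jump size is the square root of the array length; a sketch under that assumption (the function name `jumpSearch` is illustrative, and the input must be sorted):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Jump search on a sorted array: jump ahead in fixed-size blocks
// (here sqrt(n)), then linearly scan the block that may hold the key.
int jumpSearch(const std::vector<int>& sorted, int key) {
    int n = static_cast<int>(sorted.size());
    if (n == 0) return -1;
    int step = std::max(1, static_cast<int>(std::sqrt(n)));
    int prev = 0;
    // Skip whole blocks while the block's last element is too small.
    while (prev < n && sorted[std::min(prev + step, n) - 1] < key) {
        prev += step;
    }
    // Linear scan inside the one candidate block.
    for (int i = prev; i < std::min(prev + step, n); i++) {
        if (sorted[i] == key) return i;
    }
    return -1;
}
```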




Hash Table Search

Hash table search is a search algorithm that uses a hash function to calculate the index of an element in an associative array, and then compares the element at that index with the search key.



There are many more –

Tree-based, Interval, Fibonacci etc


Backtracking

It involves exploring all possible solutions to a problem by incrementally building a solution and then undoing or "backtracking" the choices that lead to a dead-end. This is done by recursively trying out different solutions, and undoing the solutions that are not valid.


Recursion

It is a technique in which a function calls itself, either directly or indirectly, in order to solve a problem or complete a specific task. A function that uses recursion is called a recursive function.
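The classic example is factorial, where the function reduces the problem to a smaller instance of itself until it reaches a base case:

```cpp
// Recursive factorial: the function calls itself on a smaller input
// until it reaches the base case n <= 1.
long long factorial(int n) {
    if (n <= 1) return 1;         // base case stops the recursion
    return n * factorial(n - 1);  // recursive case: n! = n * (n-1)!
}
```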


Linked-list

A linked list is a data structure that consists of a sequence of elements, called nodes, each of which stores a reference (or "link") to the next node in the list. Linked lists can be used to implement other data structures such as stacks, queues, and associative arrays.


Types –

  • Singly-linked list – A linked list in which each node contains a reference (or "link") to the next node in the list, but not the previous one.

  • Doubly linked list - A linked list in which each node contains a reference to both the next and previous nodes in the list.

  • Circular linked list – A linked list in which the last node in the list points back to the first node, creating a circular loop.

  • Circular Doubly linked list - A type of linked list in which each node contains a reference to both the next and the previous nodes in the list and the last node in the list points back to the first node and the first node points to the last node, creating a circular loop in both directions.
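A minimal sketch of the first variant, a singly-linked list, assuming a plain `Node` struct and head insertion (names are illustrative; memory cleanup is omitted for brevity):

```cpp
// Minimal singly-linked list: each node stores a value and a pointer
// to the next node; pushFront inserts at the head in O(1).
struct Node {
    int value;
    Node* next;
};

Node* pushFront(Node* head, int value) {
    return new Node{value, head};  // new node links to the old head
}

int length(const Node* head) {
    int n = 0;
    for (const Node* p = head; p != nullptr; p = p->next) n++;
    return n;
}
```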


Stack –

A stack is a data structure that follows the Last In First Out (LIFO) principle, meaning that the last element added to the stack is the first one to be removed. Stacks can be implemented using various data structures such as arrays, linked lists, and dynamic arrays.




Queue –

A data structure that follows the First In First Out (FIFO) principle, meaning that the first element added to the queue is the first one to be removed. Queues can be implemented using various data structures such as arrays, linked lists, and dynamic arrays.



Binary Tree –

A data structure that consists of nodes, where each node can have at most two children, which are referred to as the left child and the right child. The topmost node in the tree is called the root. The left and right subtrees of a node are also binary trees.



A binary tree can be traversed in three ways:

  1. In-order traversal: Visit left subtree, then root, then right subtree

  2. Pre-order traversal: Visit root, then left subtree, then right subtree

  3. Post-order traversal: Visit left subtree, then right subtree, then root
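The first of these, in-order traversal, can be sketched as a short recursive function (the node struct and names are illustrative):

```cpp
#include <vector>

// Binary tree node and an in-order traversal: left subtree,
// then root, then right subtree.
struct TreeNode {
    int value;
    TreeNode* left;
    TreeNode* right;
};

void inorder(const TreeNode* node, std::vector<int>& out) {
    if (node == nullptr) return;
    inorder(node->left, out);    // 1. visit left subtree
    out.push_back(node->value);  // 2. visit root
    inorder(node->right, out);   // 3. visit right subtree
}
```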


Binary Search Tree

A type of binary tree that is used for searching and sorting elements. In this, for each node, the value of the node in the left subtree is less than the value of the node, and the value of the node in the right subtree is greater than the value of the node.
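A sketch of insertion and lookup that maintains this ordering property (struct and function names are illustrative; duplicates are simply ignored):

```cpp
// Binary search tree: smaller keys go left, larger keys go right.
struct BstNode {
    int key;
    BstNode* left = nullptr;
    BstNode* right = nullptr;
};

BstNode* bstInsert(BstNode* root, int key) {
    if (root == nullptr) return new BstNode{key};
    if (key < root->key) root->left = bstInsert(root->left, key);
    else if (key > root->key) root->right = bstInsert(root->right, key);
    return root;  // duplicate keys are ignored
}

bool bstContains(const BstNode* root, int key) {
    while (root != nullptr) {
        if (key == root->key) return true;
        // The ordering property lets us discard one subtree per step.
        root = (key < root->key) ? root->left : root->right;
    }
    return false;
}
```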


Heap

A specialized tree-based data structure that satisfies the heap property, which is a specific ordering of elements such that for any node in the tree, its value is either greater than or equal to (for a max heap) or less than or equal to (for a min heap) the values of its children.


Types –


  • Max Heap: In which the key present at the root node must be greatest among the keys present at all of its children. The same property must be recursively true for all sub-trees in that Binary Tree.


  • Min Heap: In which the key present at the root node must be minimum among the keys present at all of its children. The same property must be recursively true for all sub-trees in that Binary Tree.


Hashing


Hashing is a technique for efficiently storing and retrieving data by using a hash function to map each key to an index in an array or table called a hash table. The hash function takes an input (or "key") and produces an output (or "hash value") that is used to index into the hash table; different keys can collide on the same index, and the table must resolve such collisions.
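In C++ this is what `std::unordered_map` does internally; a small lookup helper illustrates the idea (the function name `lookupAge` and the -1 "not found" convention are illustrative):

```cpp
#include <string>
#include <unordered_map>

// std::unordered_map hashes each key to a bucket, giving
// average O(1) insertion and lookup.
int lookupAge(const std::unordered_map<std::string, int>& ages,
              const std::string& name) {
    auto it = ages.find(name);  // hash the key, probe its bucket
    return it != ages.end() ? it->second : -1;  // -1 means "not found"
}
```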


Dynamic Programming –

A method of solving problems by breaking them down into smaller subproblems and storing the solutions to these subproblems in order to avoid redundant computation.


Dynamic programming can be used in two ways:


  • Top-down approach: This approach starts with the problem and breaks it down into smaller subproblems, solving each subproblem recursively and storing the solutions in a table for later use. This approach is also known as memoization.


  • Bottom-up approach: This approach starts with the smallest subproblems and solves them first, then uses these solutions to solve larger subproblems, eventually leading to the solution of the original problem. This approach is also known as tabulation.
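The bottom-up (tabulation) approach can be illustrated with Fibonacci numbers, where each value is computed once from the two already-stored smaller subproblems (the function name `fibTab` is illustrative):

```cpp
#include <vector>

// Bottom-up (tabulation) Fibonacci: solve the smallest subproblems
// first and build up, so each value is computed exactly once -> O(n).
long long fibTab(int n) {
    if (n < 2) return n;
    std::vector<long long> table(n + 1);
    table[0] = 0;  // smallest subproblems
    table[1] = 1;
    for (int i = 2; i <= n; i++) {
        table[i] = table[i - 1] + table[i - 2];  // reuse stored results
    }
    return table[n];
}
```

Without the table, a naive recursive version recomputes the same subproblems exponentially many times; storing the results is exactly the redundancy that dynamic programming removes.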


Graph –

A graph is a non-linear data structure that consists of a set of vertices (or nodes) and a set of edges that connect them. Graphs are used to model many types of relationships and processes in physical, biological, social and information systems.


Types –


  • Undirected graph - In this, edges do not have a direction and can be traversed in both directions.


  • Directed graph - In this, edges have a direction and can only be traversed in one direction. A directed graph with no cycles is called a directed acyclic graph (DAG).


Questions –


What is a data structure in C++?

A data structure is a way of organizing and storing data in a computer program, in order to make it more efficient to access, modify, and manipulate. Some examples of data structures in C++ include arrays, linked lists, stacks, queues, trees, and graphs.


What is the difference between an array and a linked list in C++?

An array stores its elements in contiguous memory, which allows O(1) access by index but makes insertion and deletion in the middle O(n), since elements must be shifted. A linked list stores elements in separate nodes connected by pointers, so access by position is O(n), but insertion or deletion at a known node is O(1).


What is sorting in C++?

Sorting is the process of arranging the elements of a collection in a specific order, such as ascending or descending. Sorting can be applied to different types of data structures, such as arrays, linked lists, and trees.


What is the difference between stable and unstable sorting algorithms?

A stable sorting algorithm preserves the relative order of elements that compare equal, while an unstable one may reorder them. For example, merge sort and insertion sort are stable; typical implementations of quick sort and heap sort are not.


What is the time complexity of linear search and binary search?

The time complexity of linear search is O(n) and the time complexity of binary search is O(log n), where n is the number of elements in the collection. Linear search is therefore not efficient for large datasets; binary search scales much better but requires the data to be sorted.


What is the difference between iterative and recursive searching algorithms?

An iterative searching algorithm uses a loop to move through the data, while a recursive one calls itself on a progressively smaller portion of the data until a base case is reached. The two are often interchangeable (binary search can be written either way); recursive versions can be more concise, but each call consumes stack space, which iterative versions avoid.


How is backtracking used in C++?

Backtracking is used in C++ to solve problems that involve finding all possible solutions, such as in combinatorial problems, graph theory, and solving mazes. It can be implemented using recursion and the use of a stack to keep track of the current state of the problem.


How does backtracking improve the time complexity of a problem?

Backtracking improves the time complexity of a problem by pruning branches of the search tree that are not promising, i.e., that cannot lead to a valid solution. This is done by keeping track of the constraints and making sure they are not violated while exploring different options.



What is the difference between backtracking and depth-first search (DFS)?

Both explore possibilities in a depth-first manner. Depth-first search is a traversal of an existing tree or graph that visits every reachable node, whereas backtracking incrementally builds candidate solutions and abandons (prunes) a partial candidate as soon as it can no longer lead to a valid solution.

How is recursion used in C++?

Recursion is used in C++ to solve problems that can be broken down into smaller instances of the same problem, such as in mathematical calculations, tree traversals, and backtracking. It can be implemented using a recursive function, where the function calls itself with different input until a base case is reached, at which point the function stops calling itself and the recursive calls return their results.



How does recursion affect the memory usage of a program in C++?

Recursion can affect the memory usage of a program in C++ by consuming stack memory for each recursive call. Each call of the function adds a new stack frame to the call stack, and if the recursion is deep and not properly handled, it can lead to a stack overflow error which is caused by the stack memory being exhausted.





