
Big O Notation

This is part 3 of our Data Structures and Algorithms series. This post focuses on Big O Notation.

What is Big O Notation?

At its core, Big O Notation is a mathematical way to describe the upper bound of an algorithm’s growth rate as input size increases.

In simpler terms, it helps you answer:

  • How fast is this algorithm?
  • How much memory will it use?
  • What happens when the input size grows?

It focuses on worst-case scenarios, which is what you want to optimize against most of the time.

Here’s a quick tabular visualisation of common complexities, from best to worst:

| Big O Notation | Name | Performance | Note |
| --- | --- | --- | --- |
| O(1) | Constant time | 🚀 Excellent | No matter how large the input, the algorithm takes a constant amount of time. This is the ideal scenario. |
| O(log n) | Logarithmic time | ⚡ Great | The time taken increases logarithmically with the input size. Binary search is a classic example. |
| O(n) | Linear time | 👍 Fair | The time taken increases linearly with the input size. Simple (linear) search is a typical example. |
| O(n log n) | Linearithmic time | 😐 Decent | Slower than linear time but far better than quadratic. Algorithms with this complexity often divide the input at each step. Merge sort and quicksort (on average) are examples. |
| O(n²) | Quadratic time | 🚩 Bad | The time taken is proportional to the square of the input size. Algorithms with nested loops often fall here. Bubble sort, insertion sort, and selection sort are examples. |
| O(2ⁿ) | Exponential time | ❌ Horrible | The time taken doubles with each additional input element, so these algorithms are rarely practical for large inputs. The naive recursive calculation of the nth Fibonacci number is an example. |
| O(n!) | Factorial time | 💀 Avoid | The time taken grows factorially with the input size. Brute-force permutation algorithms often have factorial time complexity. |
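To make these classes concrete, here's a minimal sketch of functions that land in a few of the rows above. The function names are illustrative, not from the post:

```python
def constant_time(items):          # O(1): one lookup, regardless of size
    return items[0]

def linear_search(items, target):  # O(n): may have to scan every element
    for i, value in enumerate(items):
        if value == target:
            return i
    return -1

def binary_search(items, target):  # O(log n): halves the search range each step
    lo, hi = 0, len(items) - 1     # requires a sorted input
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

def all_pairs(items):              # O(n²): nested loops over the same input
    return [(a, b) for a in items for b in items]
```

Doubling the input leaves `constant_time` unchanged, adds one step to `binary_search`, doubles the work in `linear_search`, and quadruples it in `all_pairs`.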

Let’s connect this to how operations perform on real data structures.

Data Structures and Time Complexity

Let’s look at three foundational data structures:

  • Arrays
  • Linked Lists
  • Binary Search Trees (BST)

For each, we'll consider four common operations:

  • Access
  • Search
  • Insert
  • Delete

Then we'll compare their average and worst-case time complexities.

| Operation | Arrays | Linked Lists | Binary Search Trees |
| --- | --- | --- | --- |
| Access | O(1) | O(n) | O(log n) avg / O(n) worst |
| Search | O(n) | O(n) | O(log n) avg / O(n) worst |
| Insert | O(n) | O(1)* | O(log n) avg / O(n) worst |
| Delete | O(n) | O(1)* | O(log n) avg / O(n) worst |

*O(1) assumes you already hold a reference to the insertion/deletion point (e.g. the head); finding an arbitrary position first costs O(n).
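The access row is worth seeing in code: an array indexes directly into memory, while a linked list must walk node by node. A minimal sketch (the `Node`/`LinkedList` names are illustrative):

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

class LinkedList:
    def __init__(self):
        self.head = None

    def push_front(self, value):   # O(1): insert at the head
        node = Node(value)
        node.next = self.head
        self.head = node

    def access(self, index):       # O(n): walk from the head
        node = self.head
        for _ in range(index):
            node = node.next
        return node.value

arr = [10, 20, 30]
arr[2]            # O(1): direct indexing

lst = LinkedList()
for v in (30, 20, 10):
    lst.push_front(v)
lst.access(2)     # O(n): traverses two links first
```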

How to Choose the Right Data Structure

When deciding which data structure to use, consider the operations you'll perform most frequently.

When to Use Arrays

Arrays are optimal when:

  • You need fast access (think dashboards, UI updates).
  • Insertion/deletion isn’t frequent or isn’t a bottleneck.
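Python's built-in `list` is array-backed, so it shows these tradeoffs directly, a quick sketch:

```python
data = [1, 2, 3, 4]
data[1]              # O(1): read by index
data.append(5)       # amortized O(1): grow at the end
data.insert(0, 0)    # O(n): every existing element shifts right
```

Front insertions are the expensive case; if they dominate your workload, an array is the wrong tool.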

When to Use Linked Lists

Linked lists shine when:

  • You need frequent insertions or deletions.
  • Access patterns aren’t predictable or sequential.
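In Python you rarely build a linked list by hand; `collections.deque` is a doubly-linked structure that delivers the O(1) end operations the table promises. A brief sketch:

```python
from collections import deque

queue = deque([2, 3])
queue.appendleft(1)   # O(1): insert at the front
queue.append(4)       # O(1): insert at the back
queue.popleft()       # O(1): remove from the front
```

Note that indexing into the middle of a deque is still O(n), matching the linked-list access row above.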

When to Use Binary Search Trees

BSTs offer a balanced tradeoff:

  • Faster searches than linear structures (on average).
  • Efficient insert/delete (if kept balanced).
  • Can degrade to O(n) if unbalanced.
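Those three bullets fall out of how a BST works: search and insert each follow a single root-to-leaf path, which is short when the tree is balanced and long (a chain) when it isn't. A minimal unbalanced-BST sketch, with illustrative names:

```python
class BSTNode:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def insert(root, key):
    # Walk down one path and attach a new leaf: O(height).
    if root is None:
        return BSTNode(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root

def search(root, key):
    # Follow one root-to-leaf path: O(height).
    while root is not None and root.key != key:
        root = root.left if key < root.key else root.right
    return root is not None

root = None
for k in (8, 3, 10, 1, 6):
    root = insert(root, k)
search(root, 6)   # True
search(root, 7)   # False
```

Inserting already-sorted keys (1, 3, 6, 8, 10) would make every node a right child, turning the tree into a chain and both operations into O(n), which is exactly the degradation the last bullet warns about.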

Pro Tip: Self-balancing trees like AVL or Red-Black Trees help avoid the worst-case O(n) scenario.

Conclusion

Mastering Big O notation is crucial for any software engineer. It helps you make informed decisions about which algorithms and data structures are best suited for your needs. By understanding time and space complexities, you can build more efficient software that scales well as the input size grows. Whether you’re solving algorithmic challenges, building systems, or optimizing code, the knowledge of Big O notation will be invaluable in helping you write efficient, scalable, and maintainable code.

As a software engineer, it’s vital to practice using Big O notation to evaluate the efficiency of algorithms and data structures. While many programming languages offer built-in data structures and algorithms, understanding the underlying mechanics will help you write more efficient code. Practicing with coding challenges and understanding the worst-case scenarios for each algorithm will prepare you for real-world performance bottlenecks.

Remember, the choice of data structure or algorithm depends on your application’s specific requirements, and by leveraging Big O analysis, you can ensure that your solution is optimized for performance. The next time you face a problem, think about how to approach it with the most efficient algorithm or data structure.

Live discussion

Want to see a live discussion on the topic? Check out our YouTube recording:

Stay tuned!

Our journey through the Data Structures and Algorithms series ends here! In the next installment, we'll delve into other topics.

This post is licensed under CC BY 4.0 by the author.