Computational complexity of calculating mean vs median?
$begingroup$
Suppose we have a list $L$ consisting of $N$ numbers (which may include repetitions).
I am curious which is more computationally intensive to calculate: the mean or the median?
Naively, I would suppose that calculating the mean involves summing the $N$ numbers and then dividing by $N$, hence it has linear $O(N)$ complexity.
Computing the median would seem to require some sorting algorithm (https://en.wikipedia.org/wiki/Sorting_algorithm); the best comparison-based sorts run in $O(N \log N)$ time.
Hence, for general $N$, is the median more computationally intensive to calculate? Is my reasoning correct?
Thanks for any help.
algorithms time-complexity
$endgroup$
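The two naive approaches described in the question can be sketched in Python (illustrative only; the function names are my own):

```python
def mean(xs):
    # One pass over the data: O(N) additions, then one division.
    return sum(xs) / len(xs)

def median_by_sorting(xs):
    # Sorting dominates the cost: O(N log N) comparisons
    # with a comparison-based sort.
    s = sorted(xs)
    n = len(s)
    mid = n // 2
    if n % 2 == 1:
        return s[mid]
    return (s[mid - 1] + s[mid]) / 2

print(mean([3, 1, 4, 1, 5]))               # 2.8
print(median_by_sorting([3, 1, 4, 1, 5]))  # 3
```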
$begingroup$
en.wikipedia.org/wiki/Quickselect
$endgroup$
– Pseudonym
Jul 22 at 5:21
$begingroup$
Note that computing the mean of a large number of floating-point numbers needs more than $O(N)$, typically $O(N \log N)$.
$endgroup$
– Simon Richter
Jul 22 at 13:22
$begingroup$
Do you want worst case performance or average (over the set of initial permutations of $L$) performance? For instance, quickselect is $O(N)$ on average, but $O(N^2)$ in the worst case.
$endgroup$
– Eric Towers
Jul 22 at 17:45
asked Jul 22 at 5:11
yoyostein
2 Answers
$begingroup$
You can find the median in linear time using the deterministic linear-time selection algorithm (median of medians). There are also faster randomized algorithms such as quickselect and Floyd–Rivest.
The two tasks are really incomparable, since computing the mean requires arithmetic (mainly addition) whereas computing the median requires comparisons.
$endgroup$
edited Jul 22 at 14:53 · answered Jul 22 at 6:36 – Yuval Filmus
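The randomized quickselect idea linked in the comments can be sketched as follows (this is illustrative, not the deterministic median-of-medians algorithm; expected $O(N)$, worst case $O(N^2)$):

```python
import random

def quickselect(xs, k):
    # Return the k-th smallest element (0-indexed) of xs.
    # Expected O(N): each level of recursion keeps only the
    # side of the partition that contains index k.
    pivot = random.choice(xs)
    lo = [x for x in xs if x < pivot]
    eq = [x for x in xs if x == pivot]
    hi = [x for x in xs if x > pivot]
    if k < len(lo):
        return quickselect(lo, k)
    if k < len(lo) + len(eq):
        return pivot
    return quickselect(hi, k - len(lo) - len(eq))

def median(xs):
    n = len(xs)
    if n % 2 == 1:
        return quickselect(xs, n // 2)
    return (quickselect(xs, n // 2 - 1) + quickselect(xs, n // 2)) / 2

print(median([7, 1, 5, 3, 9]))  # 5
```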
$begingroup$
The median of medians is not generally the same as the median. It is sufficiently close for use in algorithms like quicksort to guarantee an $O(n \log n)$ time complexity, but it can deviate quite a bit from the actual median.
$endgroup$
– Dreamer
Jul 22 at 13:57
$begingroup$
Right, the problem is that Wikipedia doesn't have a page for the linear-time selection algorithm based on median of medians. Let me assure you, however, that the exact median can be found in linear time.
$endgroup$
– Yuval Filmus
Jul 22 at 14:54
$begingroup$
@YuvalFilmus this might be what you're looking for.
$endgroup$
– ryan
Jul 23 at 21:36
$begingroup$
@ryan Right, that should be a separate article. Hopefully someone will make it so in the future.
$endgroup$
– Yuval Filmus
Jul 23 at 22:10
$begingroup$
Just to sketch quickly how to find the median in (expected) linear time: say you have a million items, indexed 0 .. 999,999; then the median is the average of items #499,999 and #500,000 in sorted order.
Run the quicksort algorithm, but after each partitioning step, don't recurse into both halves; only recurse into the half (or, rarely, the two halves) containing elements #499,999 and #500,000.
The mean would be $O(n)$ if you just added all the values and divided by $n$. The problem is that you get rounding errors. At the extreme, you could get a result that is less than the minimum or greater than the maximum of all values (especially if all items are equal to the same value $x$; due to rounding errors it's quite unlikely that your result is exactly $x$).
A reasonably precise method for large $n$ is this: add the numbers in pairs. Say $b_0 = a_0 + a_1$, $b_1 = a_2 + a_3$, etc. Then $c_0 = b_0 + b_1$, $c_1 = b_2 + b_3$, and so on, until only one number is left. Since the intermediate results are smaller than with sequential addition, the rounding errors are smaller, so you get a better approximation of the mean.
If that approximation is still not good enough: if the mean you calculated is $A$, you then calculate the mean of the residuals $a_i - A$. This is more precise since the values involved are smaller (the sum should in theory be 0 but isn't, due to rounding errors), and you add that mean to $A$ to get a better result.
It's still linear time, but a bit slower than just adding all the numbers.
$endgroup$
answered Jul 23 at 21:03 – gnasher729
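The pairwise-summation and correction steps described in this answer can be sketched as follows (a minimal recursive formulation, written for illustration):

```python
def pairwise_sum(xs):
    # Recursively add the two halves. Intermediate sums stay smaller
    # than in a left-to-right sum, so floating-point rounding errors
    # accumulate less.
    n = len(xs)
    if n == 1:
        return xs[0]
    mid = n // 2
    return pairwise_sum(xs[:mid]) + pairwise_sum(xs[mid:])

def refined_mean(xs):
    # First estimate A, then average the residuals a_i - A
    # (which should sum to ~0) and use that as a correction.
    a = pairwise_sum(xs) / len(xs)
    correction = pairwise_sum([x - a for x in xs]) / len(xs)
    return a + correction

print(refined_mean([0.1] * 10))  # very close to 0.1
```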
Thanks for contributing an answer to Computer Science Stack Exchange!