

Problem : problem

Code :

```cpp
class Solution
{
public:
    vector<int> majorityElement(vector<int>& nums)
    {
        srand(time(NULL));
        int c = 0, n = nums.size();
        vector<int> ans;
        while ((n - c) > (n / 3))
        {
            int i = rand() % n, f = 0;
            int x = nums[i];
            for (int j = 0; j < n && x != INT_MAX - 1; j++)
            {
                if (nums[j] == x)
                {
                    nums[j] = INT_MAX - 1;
                    f++;
                }
            }
            // cout << x << " " << f << " ";
            if (f > (n / 3))
                ans.push_back(x);
            c += f;
        }
        return ans;
    }
};
```

I want to know the time complexity of my algorithm. It runs within 16 ms on LeetCode, but is it really O(n) time and O(1) space?


Yes. It is obvious that your algorithm uses $$$O(1)$$$ space (ignoring the compulsory extra vector that you have to create as a return value).

Your algorithm uses $$$O(n)$$$ time (if properly implemented). Since the sample space remains the same throughout, we can say that each trial is independent. Furthermore, each trial has a binary outcome (fail/success).

Let $$$X$$$ represent the number of trials needed to get all the majority elements. Obviously, we can have at most 2 such elements. So let's assume that we have the worst case (i.e. there are exactly 2 of them with the lowest frequency possible).

Let $$$p$$$ be the probability of success on a single trial, i.e. that we find a new majority element. Since each majority element occupies more than a third of the array, $$$p \approx \frac{1}{3}$$$ in the worst case.

Then $$$X$$$ follows a negative binomial distribution.

Since $$$E(X)=\frac{r}{p}$$$, where $$$r$$$ is the number of successes required, we have $$$E(X)=\frac{2}{p}=6$$$.
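To sanity-check that expectation, here is a small Monte Carlo simulation (my own sketch, not part of the OP's solution; the function name and seed are arbitrary) of a negative binomial variable with $$$r=2$$$ and $$$p=\frac{1}{3}$$$:

```cpp
#include <random>

// Estimate E(X) empirically: count Bernoulli(p = 1/3) trials needed to
// collect r = 2 successes, averaged over many independent runs.
double expectedTrials(int runs = 200000) {
    std::mt19937 gen(12345);                      // fixed seed, reproducible
    std::bernoulli_distribution success(1.0 / 3.0);
    long long total = 0;
    for (int i = 0; i < runs; ++i) {
        int successes = 0, trials = 0;
        while (successes < 2) {                   // stop at r = 2 successes
            ++trials;
            if (success(gen)) ++successes;
        }
        total += trials;
    }
    return static_cast<double>(total) / runs;     // should land near r/p = 6
}
```

With this many runs the average lands very close to 6, matching $$$E(X)=\frac{r}{p}$$$.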

Since each of those (on average) 6 iterations performs a linear scan of the array, your algorithm takes $$$O(6 \cdot n)=O(n)$$$ time on average. Therefore, your algorithm's time complexity is linear, which was what we wanted to prove.

Edit: This proof only works if the solution limits the number of iterations.

What happens in the case that we don't have any majority elements?

Oops. In that case, using the same kind of analysis, the expected running time becomes $$$\frac{2n^2}{3}$$$: the loop has to mark off about $$$\frac{2n}{3}$$$ elements one at a time, each with a full $$$O(n)$$$ scan. So the OP's solution actually has quadratic time complexity. It is not difficult to repair, though: just limit the number of iterations to $$$\le 12$$$. We could use some other number, but 12 already gives us enough confidence that we will find all majority elements (if there are any). Thanks for pointing that out!
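For concreteness, here is one way that repair might look (my own rewrite, not the OP's exact code; I also switched from rand() to the C++11 <random> facilities, and the cap of 12 is the number discussed above):

```cpp
#include <climits>
#include <random>
#include <vector>

// Capped randomized majority search: at most 12 random draws, so the
// no-majority case stays O(n) instead of degrading to O(n^2).
std::vector<int> majorityElement(std::vector<int> nums) {
    if (nums.empty()) return {};
    std::mt19937 gen(std::random_device{}());
    const int n = nums.size();
    std::uniform_int_distribution<int> pick(0, n - 1);
    std::vector<int> ans;
    int marked = 0;
    for (int iter = 0; iter < 12 && n - marked > n / 3; ++iter) {
        const int x = nums[pick(gen)];
        if (x == INT_MAX - 1) continue;           // hit an already-counted slot
        int freq = 0;
        for (int &v : nums)                       // count and mark occurrences
            if (v == x) { v = INT_MAX - 1; ++freq; }
        if (freq > n / 3) ans.push_back(x);       // a genuine majority element
        marked += freq;
    }
    return ans;
}
```

Each draw misses a given majority element with probability at most $$$\frac{2}{3}$$$, so 12 draws miss it with probability at most $$$(\frac{2}{3})^{12} \approx 0.8\%$$$, and in practice far less.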

I am not an expert in probability and statistics and I did not understand a word you said up there, but it's good to know it's linear in time. Can you explain what you said above with an example, like an array of size 5 or 6, just for understanding? Thanks, by the way.

The thing is, if the array contains any majority elements, they will be found within 6 iterations on average. Hence, limiting your loop to about 10 to 12 iterations should suffice. In this sense, your idea is correct (it has linear time complexity) but your implementation is wrong. As MSchallenkamp mentioned, the actual time complexity of your implementation is not linear; it is $$$O(n^2)$$$. I have also done a separate analysis above.

There is no "example" for proving time complexity analysis btw. You just have to use math (probability in this case).

Thanks man, I will try to understand the probability part. However, after adding the condition that the number of iterations must not exceed 12, the runtime remained the same. Any specific reason for that?

```cpp
class Solution
{
public:
    vector<int> majorityElement(vector<int>& nums)
    {
    }
};
```

As I said, the input arrays in LeetCode's tests are too small. Try it on your own computer with, say, an array of 100000 elements or more.

To see why your original implementation was incorrect, use a large array of unique numbers.

ok thanks.

By the way, MSchallenkamp has hinted at how to make your algorithm even better — by trimming away the elements that you have already checked. That way, your algorithm will make 2 passes of the array on average. You can use the binomial distribution to see why this is true.
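Sketched out, the trimming idea might look like this (my own illustration, not MSchallenkamp's code; std::partition swaps each counted value out of the active prefix so later scans get shorter):

```cpp
#include <algorithm>
#include <random>
#include <vector>

// Randomized majority search that shrinks the scanned range: once a value
// has been counted, its occurrences are moved past `end` and never visited
// again, so each pass is cheaper than the last.
std::vector<int> majorityElementTrim(std::vector<int> nums) {
    if (nums.empty()) return {};
    std::mt19937 gen(std::random_device{}());
    const int n = nums.size();
    int end = n;                                   // active range is nums[0, end)
    std::vector<int> ans;
    for (int iter = 0; iter < 12 && end > n / 3; ++iter) {
        std::uniform_int_distribution<int> pick(0, end - 1);
        const int x = nums[pick(gen)];
        // keep values != x in front; occurrences of x land in [mid, end)
        auto mid = std::partition(nums.begin(), nums.begin() + end,
                                  [x](int v) { return v != x; });
        const int freq = end - static_cast<int>(mid - nums.begin());
        if (freq > n / 3) ans.push_back(x);        // majority relative to full n
        end -= freq;                               // trim the counted value away
    }
    return ans;
}
```

Note that the majority test still compares against the original $$$n/3$$$, not the shrunken range.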

Sir I'm green not purple all this is like magic to me xd....

On a side note, your algorithm is technically incorrect due to the use of the rand() function to generate indices. Why? Note the maximum value of the rand() function: RAND_MAX is only guaranteed to be at least 32767, so rand() % n cannot reach every index of a sufficiently large array.

What can be done to cure it? Also does this mean that leetcode has weak test cases for this question??

I wouldn't say that the test cases are weak. I would just say that you got lucky that the input arrays are not large enough.

How would I handle it if the input arrays were large? Can I use rand() in that case?

Obviously not. I shall not give you the fix directly, but you can google how to generate random numbers in C++11.

Gotcha

I see you are also suffering from negative contribution just like me but that's not gonna stop us ...

Let's consider an array with no repeated numbers. In order to complete the outer loop we must mark off at least two-thirds of the elements. Each mark requires one full pass of the inner loop, taking $$$n$$$ time, so we spend at least $$$\frac{2}{3} n^2$$$ time in the worst case, ignoring misses. If we factor in misses, each draw has a $$$\frac{1}{3}$$$ or better chance of finding some unmarked element, so we can expect fewer than $$$3 \cdot \frac{2n}{3} = 2n$$$ misses in total.
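To see that bound in numbers, here is a small instrumented rerun of the OP's loop on an all-distinct array (my own harness with an arbitrary fixed seed; it counts inner-loop steps rather than wall time):

```cpp
#include <climits>
#include <cstdlib>
#include <vector>

// Count the work the original loop performs on n distinct values: every
// success marks exactly one element with a full O(n) scan, and the loop
// needs about 2n/3 successes before it can terminate.
long long workOnUniqueArray(int n) {
    std::vector<int> nums(n);
    for (int i = 0; i < n; ++i) nums[i] = i;      // all elements distinct
    std::srand(1);                                // fixed seed, arbitrary
    long long steps = 0;
    int marked = 0;
    while (n - marked > n / 3) {
        const int x = nums[std::rand() % n];
        if (x == INT_MAX - 1) { ++steps; continue; }  // a "miss", O(1)
        for (int j = 0; j < n; ++j) {             // full marking scan, O(n)
            ++steps;
            if (nums[j] == x) nums[j] = INT_MAX - 1;
        }
        ++marked;                                 // distinct values: f is 1
    }
    return steps;                                 // at least (2/3) * n^2
}
```

The successful scans alone contribute $$$\frac{2}{3}n^2$$$ steps, dwarfing the misses; compare that with roughly $$$6n$$$ steps for the capped version on the same input.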