
### Ninjo's blog

By Ninjo, 11 months ago

Motivation:
In Google Code Jam 2019, something interesting happened to me during the qualification round.
The first problem, Foregone Solution, is simple: given a number N, print two numbers A and B such that A + B = N and neither A nor B contains the digit 4.

We have 3 test sets:

- $1 < N < 10^5$
- $1 < N < 10^9$
- $1 < N < 10^{100}$

My numerical solution for the first two test sets: https://ideone.com/MZwocZ. I got TLE on the second test set ($N < 10^9$).

When I tried generating A and B randomly, https://ideone.com/LUFyhI, I got AC (can anyone explain why that happens?).
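The randomized AC solution presumably works something like this (a minimal Python sketch, not the actual ideone submission; the names `has_four` and `foregone_random` are mine):

```python
import random

def has_four(x: int) -> bool:
    """Check whether the decimal representation of x contains the digit 4."""
    return '4' in str(x)

def foregone_random(n: int) -> tuple:
    """Repeatedly pick a random split A + B = n until neither part contains a 4."""
    while True:
        a = random.randint(1, n - 1)
        b = n - a
        if not has_four(a) and not has_four(b):
            return a, b

a, b = foregone_random(999999999)
```

For 9-digit numbers each attempt succeeds with probability bounded well away from zero, so only a handful of retries are expected on average, which would explain the AC on test set 2.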

Topic:
That made me wonder. We use algorithms to get better, faster, optimal solutions: for example, with binary search on $N$ elements we can find any element in 1 step in the best case and in $\lg N$ steps in the worst case.

But if we search for an element randomly, isn't it also possible to find it in just 1 step, or in fewer than $\lg N$ steps, or maybe only after as many as $N$ steps?

So why don't we use both randomization and binary search? For example, spend $\frac{\lg N}{2}$ steps on randomization and $\frac{\lg N}{2}$ steps on binary search. Or alternate: 1 step of randomization, 1 step of binary search, 1 step of randomization, 1 step of binary search, and so on.

The answer is that with randomization we might not find the number within those $\lg N$ steps, or we might need more than $\lg N$ steps to find it.

But during the randomization steps we can update the interval for binary search, and during the binary search steps we can update the interval for randomization. That shrinks the search space and increases the probability of finding the target quickly.
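The interleaving idea above can be sketched like this (a hypothetical illustration; `interleaved_search` is my name for it, not an established algorithm):

```python
import random

def interleaved_search(arr, target):
    """Alternate one random probe and one binary-search step on a sorted array.
    Both kinds of probes shrink the same interval [lo, hi]."""
    lo, hi = 0, len(arr) - 1
    use_random = True
    while lo <= hi:
        # random probe on odd steps, classic midpoint on even steps
        mid = random.randint(lo, hi) if use_random else (lo + hi) // 2
        use_random = not use_random
        if arr[mid] == target:
            return mid
        if arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1  # target not present
```

Since every second step halves the current interval, the worst case is still only about $2 \lg N$ steps; the random probes just add a chance of getting lucky earlier.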

The idea I want to discuss: is it better to use randomization before or during some algorithms, or not?

• +14

 » 11 months ago, # |   +7
This algorithm is actually genius lol. But there is no use for it in binary search and similar examples, since it costs more time to generate the random numbers and there is no speedup on average. So it can only slow the algorithms down and make them less predictable in terms of speed.
 » 11 months ago, # | ← Rev. 2 →   +47
There are a few algorithms that use this technique. The one I think is the coolest solves this problem: given a sorted linked list, find the index of a certain element. That is, list[1] <= list[next[1]] <= list[next[next[1]]] <= ... <= list[next[...next[1]]]; what is the value of it such that list[it] = Q?

Both the purely randomized and the purely incremental approach take an expected n/2 = O(n) steps, but the following algorithm has an expected running time of O(sqrt(n)):

    it = 1
    while it has been improved:
        if list[next[it]] <= Q: it = next[it]
        rnd = a random number between 1 and n
        if list[rnd] <= Q and list[rnd] > list[it]: it = rnd

It is expected O(sqrt(n)) because after ~sqrt(n) steps, it is fewer than sqrt(n) incremental steps away from the answer.
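The pseudocode above, made runnable as a Python sketch (storing the list as parallel arrays `values`/`nxt` plus a head index is my assumed representation, not something specified in the comment):

```python
import random

def find_in_sorted_list(head, nxt, values, q):
    """Search a sorted singly linked list for value q, combining incremental
    steps with random jumps; expected O(sqrt(n)) comparisons."""
    n = len(values)
    it = head
    improved = True
    while improved:
        improved = False
        # incremental step: follow the list while the next value still fits
        if nxt[it] is not None and values[nxt[it]] <= q:
            it = nxt[it]
            improved = True
        # random jump: accept any node that is <= q but ahead of us
        rnd = random.randrange(n)
        if values[rnd] <= q and values[rnd] > values[it]:
            it = rnd
            improved = True
    return it if values[it] == q else None
```

The loop terminates because `values[it]` never decreases and the incremental step always moves forward; when neither step improves, `it` sits on the last element that is <= q.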
 » 11 months ago, # | ← Rev. 3 →   0
This algorithm actually has a terrible running time. Almost all numbers contain the digit $4$ when written in decimal. If we randomly generate a number in $[0, 10^n)$, the probability of it containing no $4$ is roughly the same as choosing $n$ digits in $[0, 9]$ with none of them being a $4$. Therefore the probability is $0.9^n$. This is fine for $n = 9$, but for $n = 100$ the probability is already quite low, and it gets even worse for $n = 10^6$ (which can be solved in $O(n)$).
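To put numbers on the $0.9^n$ estimate (a small illustrative snippet, not part of the original comment):

```python
def p_no_four(n: int) -> float:
    """Probability that none of n independently uniform decimal digits is a 4."""
    return 0.9 ** n

print(p_no_four(9))    # about 0.387: a handful of retries suffice
print(p_no_four(100))  # about 2.7e-05: tens of thousands of expected retries
```

So the randomized approach is practical for the $10^9$ test set but hopeless at $10^{100}$ and beyond.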
 » 11 months ago, # |   +14
I've also noticed that a randomized solution should pass the first two subtasks. I was surprised that the problemsetters apparently hadn't noticed this (the last subtask was worth only one point).
•  » » 11 months ago, # ^ |   +12
Google's problem author mentioned the randomized solution in the Analysis tab; that was the intended solution for subtask 2.