__Motivation:__

In the Google Code Jam 2019 qualification round, something interesting happened to me.

The first problem, Foregone Solution, is simple: given a number N, print two numbers A and B such that A + B = N and neither A nor B contains the digit 4.

We have 3 test sets:

1 < N < $$$10^5$$$

1 < N < $$$10^9$$$

1 < N < $$$10^{100}$$$

My numerical implementation for the first two test sets: https://ideone.com/MZwocZ. I got `TLE` on the second test set (N < $$$10^9$$$).

When I tried to generate A and B randomly (https://ideone.com/LUFyhI), I got `AC`.

(Can anyone explain why that happens?)

Yes, I know it should be solved using strings instead, but I'm talking about this case ^
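A minimal sketch of that randomized approach (an assumed reconstruction, not the exact submitted code): keep picking a random A and check that both A and B = N - A avoid the digit 4.

```cpp
#include <cassert>
#include <random>
#include <utility>

// Does x contain the digit 4 in decimal? (x >= 0)
bool hasFour(long long x) {
    for (; x > 0; x /= 10)
        if (x % 10 == 4) return true;
    return false;
}

// Randomized sketch: pick A uniformly in [1, N-1] until neither A nor
// B = N - A contains a 4. A random d-digit number avoids the digit 4 with
// probability about 0.9^d, so for N up to 10^9 only a handful of retries
// are expected.
std::pair<long long, long long> solveRandomized(long long N, std::mt19937_64 &rng) {
    std::uniform_int_distribution<long long> dist(1, N - 1);
    while (true) {
        long long A = dist(rng);
        long long B = N - A;
        if (!hasFour(A) && !hasFour(B)) return {A, B};
    }
}
```

For N = $$$10^9$$$ each try succeeds with probability roughly $$$0.9^{18} \approx 0.15$$$ (both numbers must avoid a 4), so the expected number of tries is a small constant.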

__Topic:__

That made me wonder. We use algorithms to get better, faster, optimal solutions: for example, with N elements, binary search finds an element in 1 step in the best case and in ($$$lg N$$$) steps in the worst case.

But the question is: if we probe for the element randomly, isn't it also possible to find it in just 1 step, or in fewer than ($$$lg N$$$) steps, or maybe only after N steps?

So why don't we use both randomization and binary search? For example, $$$\frac{lg N}{2}$$$ steps of randomization and $$$\frac{lg N}{2}$$$ steps of binary search. Or why not alternate: 1 step of randomization, 1 step of binary search, 1 step of randomization, 1 step of binary search, and so on?

The answer is that with random probes we might not find the number within those ($$$lg N$$$) steps, or we might need more than ($$$lg N$$$) steps to find it.

But during the randomization steps we can update the interval for binary search, and during the binary search steps we can update the interval for randomization. This shrinks the search space and increases the probability of finding our target quickly.
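As a concrete sketch of this hybrid (the name `hybridSearch` and the details are my own illustration), alternate one random probe with one midpoint probe, where every probe, of either kind, shrinks the same interval:

```cpp
#include <cassert>
#include <random>
#include <vector>

// Sketch of the hybrid idea: alternate a random probe with a midpoint
// (binary search) probe. Every probe shrinks the same interval [lo, hi],
// so the two strategies cooperate instead of starting over.
// Returns the index of `target` in the sorted vector `a`, or -1 if absent.
int hybridSearch(const std::vector<int> &a, int target, std::mt19937 &rng) {
    int lo = 0, hi = (int)a.size() - 1;
    bool randomTurn = true; // alternate: random probe, then midpoint probe
    while (lo <= hi) {
        int i;
        if (randomTurn) {
            i = std::uniform_int_distribution<int>(lo, hi)(rng);
        } else {
            i = lo + (hi - lo) / 2;
        }
        if (a[i] == target) return i;
        if (a[i] < target) lo = i + 1;
        else hi = i - 1;
        randomTurn = !randomTurn;
    }
    return -1;
}
```

Note that even purely random probes on a shrinking interval still give an expected O($$$lg N$$$) number of steps, just with a worse constant (roughly $$$2 \ln N$$$ comparisons versus $$$\log_2 N$$$, the same analysis as search in a random binary search tree), so the hybrid cannot beat plain binary search on average.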

The idea I want to discuss: is it better to use randomization before/during certain algorithms or not?

http://www.cplusplus.com/articles/EywTURfi

this algorithm is actually genius lol

But there is no use for it in binary search and similar examples, since generating the random numbers costs extra time and there is no speedup on average. So it can only slow such algorithms down and make them less predictable in terms of speed.

There are a few algorithms that use this technique. One of them, which I think is the coolest, solves this problem:

Given a sorted linked list, find the index of a certain element.

What is the value of `it` such that `list[it] = Q`? Both the randomized and the incremental approach have an expected running time of `O(n/2) = O(n)`, but there is an algorithm with an expected running time of `O(sqrt(n))`: after ~`sqrt(n)` random probes, `it` is expected to be fewer than `sqrt(n)` incremental steps away from the answer.

As for the randomized Foregone Solution approach: it actually has a terrible running time for large N. Almost all numbers contain the digit $$$4$$$ when written in decimal. If we randomly generate a number in $$$[0, 10^n)$$$, the probability of it not containing a $$$4$$$ is roughly the same as choosing $$$n$$$ digits from $$$[0, 9]$$$ with none of them being a $$$4$$$, so the probability is $$$0.9^n$$$. This is OK for $$$n = 9$$$ ($$$0.9^9 \approx 0.39$$$), but for $$$n = 100$$$ the probability is already only about $$$2.7 \cdot 10^{-5}$$$, and it would get even worse for $$$n = 10^6$$$ (which can be solved in $$$O(n)$$$).
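Coming back to the linked-list problem, here is a sketch of that `O(sqrt(n))` search (an assumed reconstruction, not the original code), under the model that we can read `list[i]` at any index but can only advance `it` one position at a time:

```cpp
#include <cassert>
#include <cmath>
#include <random>
#include <vector>

// Assumed reconstruction of the O(sqrt(n)) search. Model: the values are
// sorted; we may *read* list[i] at any index i (a random probe), but the
// pointer `it` can only advance one position at a time (an incremental
// step). Strategy: spend ~sqrt(n) random probes improving the starting
// point, then walk forward. The best of sqrt(n) random probes is expected
// to land within about n / sqrt(n) = sqrt(n) positions of the answer, so
// the total expected work is O(sqrt(n)).
int findIndex(const std::vector<int> &list, int Q, std::mt19937 &rng) {
    int n = (int)list.size();
    if (n == 0) return -1;
    std::uniform_int_distribution<int> anyPos(0, n - 1);
    int it = 0; // start at the head, improve with probes whose value is <= Q
    int probes = (int)std::sqrt((double)n) + 1;
    for (int k = 0; k < probes; ++k) {
        int j = anyPos(rng);
        if (list[j] <= Q && list[j] > list[it]) it = j;
    }
    while (it < n && list[it] < Q) ++it; // incremental walk to the answer
    return (it < n && list[it] == Q) ? it : -1;
}
```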

I've also noticed that the randomized solution should pass the first two subtasks. I was surprised that the problemsetters apparently hadn't noticed this fact (the last subtask was worth only one point).

Google's problem author mentioned the randomized solution in the Analysis tab; it was the intended solution for subtask 2.