By Vladithur, history, 10 months ago,

You can also find video editorials for problems B-D on ak2006's Youtube channel!

Hint 1
Hint 2
Solution
Solution codes

Hint 1
Hint 2
Solution
Solution codes

Hint
Solution
Solution codes

Hint 1
Hint 2
Hint 3
Solution
Solution codes

#### 1632E2 — Distance Tree (hard version)

Hint 1
Hint 2
Hint 3
Hint 4
Solution
Solution codes

P. S. Solution codes will be published a little later.

P. P. S. Do not forget to evaluate the problems in the announcement.

UPD: Solution codes have been posted.

• +228

 » 10 months ago, # | ← Rev. 2 →   +9 damn, that was fast!
•  » » 10 months ago, # ^ |   +40 We tried our bfast :)
•  » » » 10 months ago, # ^ |   +36 Problems were really good, but I am stupid.
•  » » » 10 months ago, # ^ |   0 Authors, please look at this solution for problem C. I have written a brute-force solution (determined $b'$ with a loop). Is there a possibility that the solution can get TLE for some cases? If not, then why?
•  » » » » 10 months ago, # ^ |   +1 Notice that $b' \le 2b$, since $a < b$. This means that your first loop will run at most $b$ times, and your second loop will run at most $2b$ times, so at most $3b$ times in total. Since $\sum b \le 10^6$ over all test cases, your solution will run well within the time limit and won't TLE.
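The bound above can be turned into code. Below is a sketch (my reconstruction, not the commenter's actual submission; the name `minOps` is made up) of the two-loop brute force: increment $a$ until it is a submask of $b$, or increment $b$ until it is a supermask of $a$, then apply the OR once, and compare against plain incrementing.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Sketch of the brute force discussed above (minOps is a hypothetical name).
// First loop: at most b iterations; second loop: at most 2b iterations,
// since b' <= 2b. So roughly 3b steps per test in the worst case.
int minOps(int a, int b) {
    int ans = b - a;                      // just increment a up to b
    int a1 = a;
    while ((a1 | b) != b) a1++;           // make a1 a submask of b
    ans = min(ans, (a1 - a) + 1);         // +1 for the single OR operation
    int b1 = b;
    while ((a | b1) != b1) b1++;          // make b1 a supermask of a
    ans = min(ans, (b1 - b) + 1);
    return ans;
}
```

For example, for a = 5, b = 8 the two loops give 4 and 6 operations respectively, so plain incrementing (3 operations) wins.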
 » 10 months ago, # |   +2 Thanks for the fast editorial!!
 » 10 months ago, # |   +20 Order of Speed Cf editorial > light > bolt
 » 10 months ago, # |   +16 somebody reads tags
 » 10 months ago, # |   0 someone mind explaining the 2nd question?
•  » » 10 months ago, # ^ |   0 Note that the highest bits can never disappear.
•  » » 10 months ago, # ^ |   0 Hi, so basically it says: for any permutation of the numbers, the minimum of the max XORs you can get is 2^k. I.e., for the range 0-9, for any permutation you cannot go below 8 = 2^3; for the range 0-22 you cannot go below 16 = 2^4. So the nearest 2^k is the value you need. You can try doing it manually for smaller ranges, e.g. 0-5, and then increase the range till 15 or 17. The largest bump you get is when you XOR 2^k and (2^k)-1, e.g. 7^8 or 15^16. You need to avoid this, therefore you need to place the 0 appropriately.
•  » » » 10 months ago, # ^ |   0 thanks!!
•  » » » 10 months ago, # ^ |   0 But if we want to avoid 2^k and (2^k)-1, why don't we just separate them? And why is the minimum of the max 2^k? I can't understand it; I am new.
•  » » » » 10 months ago, # ^ | ← Rev. 2 →   +1 Yes, if you need to avoid 2^k and (2^k)-1 being adjacent, you need to separate them, i.e. put 0 between them. E.g. for 0,1,2,3,4 you can rearrange as 1,2,3,0,4. Although this is how I did it, and not exactly what the editorial suggested, I guess the idea is the same.

To see why 2^k is the min of the max, you can try it yourself manually with smaller ranges, or consider this example: the range 0-18. Here the highest power of 2 is 16 (2^4), binary 10000. If you XOR it with any number between 0 and 15, you end up with a number >= 16:

```
16^2 = 10010 = 18
16^5 = 10101 = 21
16^0 = 10000 = 16
```

(do some yourself). So here the concept of the highest set bit comes into play: for 16 it's the 5th bit from the right (10000), and it is present in all numbers from 16 to 18 in the sequence. So if you XOR any number in 16-18 with any number in 0-15, the result is always >= 16, because for those smaller numbers (0-15) the 5th bit is always 0:

```
17 - 10001
 6 - 00110
xor- 10111 = 23
```

You have to minimize this. Why? Because in any permutation there will be at least one number among 16-18 adjacent to a number in 0-15, whose XOR will blow up the value (e.g. 1,2,3,4,**17**,5,6,...). Now, interestingly, if you XOR 16^17, 17^18 or 16^18, the values are small, because they share the 5th bit:

```
16 - 10000
18 - 10010
xor- 00010 = 2
```

So for a very simple solution, put them serially: 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18. The highest value you then get is from 15^16, so just put 0 between them: 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,0,16,17,18, and it should work. Try doing the XORs manually and you will recognize the pattern.
•  » » » » » 10 months ago, # ^ |   +3 I sincerely thank you for such a detailed explanation. I totally understand it. I will study hard and make progress every day. Thank you.
•  » » 10 months ago, # ^ |   0 If you understood why the minimum cost of the construction is 2^k, then the problem is solved. I hope you got that. (It's because you have to place ONLY 0 adjacent to the smallest number which has the kth bit set, to make the XOR = 2^k. If you place any other number, the XOR will always be more than 2^k.)
•  » » » 8 months ago, # ^ |   0 Can we place 0, 2^k, and then all the remaining elements? Is this permutation valid?
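The arrangement discussed in this thread can be sketched as follows (a hypothetical helper, assuming the goal is a permutation of 0..n-1 whose maximum adjacent XOR is the highest power of two below n; the function names are mine):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Write 1..2^k-1, then 0, then 2^k..n-1, where 2^k is the highest power of
// two below n. The only pair straddling the power-of-two boundary is (0, 2^k),
// whose XOR is exactly 2^k; every other adjacent pair stays below that.
vector<int> buildPermutation(int n) {          // permutation of 0..n-1, n >= 2
    int k = 0;
    while ((1 << (k + 1)) < n) ++k;
    vector<int> p;
    for (int v = 1; v < (1 << k); ++v) p.push_back(v);
    p.push_back(0);
    for (int v = (1 << k); v < n; ++v) p.push_back(v);
    return p;
}

int maxAdjacentXor(const vector<int>& p) {
    int best = 0;
    for (size_t i = 0; i + 1 < p.size(); ++i)
        best = max(best, p[i] ^ p[i + 1]);
    return best;
}
```

For n = 10 (the 0-9 range from the comments above) this yields 1,...,7,0,8,9 with maximum adjacent XOR 8 = 2^3, matching the claimed optimum.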
 » 10 months ago, # |   +2 Fast editorial and good problems!
 » 10 months ago, # |   0 Very curious about the bonus in the tutorial. So hard!
 » 10 months ago, # | ← Rev. 2 →   +1 I'm quite confused about D. Can anyone hack me for the following method, as the system tests won't be completed for quite a while? I have observed that for each $r$, at most one $l$ can make $[l,r]$ a bad segment, so I used binary search too. If such an $l$ can be found, I check whether there is already a point in $[l,r]$. If there isn't, I mark a point at $r$ using a BIT, as it is optimal to put the point at the right-most position; otherwise there is no need to do ans++, we can reuse former points. Each marked point is changed into a big prime.

upd: it turned out that when I was using a sparse table to store range-wise gcds, I brainlessly put the loop over $2^j$ in the inner loop. And I (un)fortunately ACed it now.
 » 10 months ago, # |   0 Problem B: please check My Submission. I don't know why this construction of the permutation is incorrect. Thank you!
•  » » 10 months ago, # ^ |   0 try 9
•  » » » 10 months ago, # ^ |   0 Totally missed this case. Thanks for the comment
•  » » 10 months ago, # ^ |   0 Spoiler

```cpp
// int p = pow(2, int(log2(n-1)));
int p = pow(2, (int)(log2(n-1)));
```
•  » » » 10 months ago, # ^ |   0 Thanks.
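The `pow`/`log2` fix above works, but floating point can be avoided entirely. A small integer-only alternative (my suggestion, not the commenter's code; the helper name is made up):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Largest power of two <= n-1, computed by doubling: no rounding surprises
// from pow() or log2() on boundary values.
int highestPowerOfTwo(int n) {   // assumes n >= 2
    int p = 1;
    while (2LL * p <= n - 1) p *= 2;
    return p;
}
```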
 » 10 months ago, # |   +2 I failed to solve C
 » 10 months ago, # | ← Rev. 2 →   +3 Giving hint is a good thing for beginner contestant like me and thanks for the fast editorial.
 » 10 months ago, # |   0 Love the contests more, when tutorial comes this fast.
 » 10 months ago, # | ← Rev. 4 →   +20 Simpler (in my opinion) solution for D in O(n log^2 A). We'll replace all elements with huge, distinct prime numbers. Let's keep a stack of GCDs of suffixes of the current prefix, but keep it compressed: all suffixes with the same GCD are compressed into one element. Notice that the GCD can decrease at most log A times (each time it at least halves), so we have at most log A elements in the stack. After moving to the current prefix we recalculate the GCDs for all blocks of the previous prefix, and we also add the new one-element suffix. Then we compress the new stack (you can also do it while recalculating GCDs). Now let's look through the stack. If there's a block with GCD x, and the suffix of length x is in this block, then we replace the current element (you don't need to actually replace it; you can just add 1 to the answer when you clear the stack) and clear the stack (all GCDs of suffixes which include the current element become 1, but their lengths are more than 1). If there's no such block, we don't replace the element and don't clear the stack. This solution ran in 124 ms on the round's tests.

Update: it seems this is actually an O(n log A) solution. We have n different suffix start points, and for each suffix there are at most log decreases, so the total number of GCD decreases is O(n log A). But we still need to do the compression, because otherwise we would do up to n operations per prefix (if the gcd of a suffix is not decreasing, we still spend O(1) inside gcd). So the solution above has complexity O(n log A).
•  » » 10 months ago, # ^ |   0 Sorry that I still don't understand your idea well. Can you share your submission? Many thanks!
•  » » » 10 months ago, # ^ |   0 144614536. Here is the implementation of zer0brain's approach.
•  » » » 10 months ago, # ^ | ← Rev. 3 →   0 Code

```cpp
#include <bits/stdc++.h>
using namespace std;

int Gcd(int x, int y) {
    while (y > 0) { int z = x; x = y; y = z % y; }
    return x;
}

int main() {
    ios_base::sync_with_stdio(0); cin.tie(0); cout.tie(0);
    int n;
    cin >> n;
    // (gcd, first, last): the range of suffix starts sharing this gcd
    vector<tuple<int, int, int>> b;
    int cnt = 0;
    for (int i = 0; i <= n - 1; ++i) {
        int x;
        cin >> x;
        vector<tuple<int, int, int>> nb;
        b.emplace_back(x, i, i);
        for (auto p : b) {
            int y = Gcd(x, get<0>(p));
            if (!nb.empty() && get<0>(nb.back()) == y)
                get<2>(nb.back()) = get<2>(p);
            else
                nb.emplace_back(y, get<1>(p), get<2>(p));
        }
        b.swap(nb);
        for (auto p : b) {
            if (get<1>(p) <= i - get<0>(p) + 1 && i - get<0>(p) + 1 <= get<2>(p)) {
                ++cnt;
                b.clear();
                break;
            }
        }
        cout << cnt << ' ';
    }
}
```

But I think that aakib_alam's implementation is also correct.
•  » » 10 months ago, # ^ |   0 If implemented carefully, you can achieve O(n log A) time complexity. Let's store all blocks in a plain array, in increasing order of gcd and suffix start. Then, for each new element, we do two operations. The first operation is to update all gcds by taking the gcd with the new element, from the last suffix to the first (in reverse order). The second operation is to remove duplicates from the array by shifting (it works in O(length of the array) in the worst case).

Consider the second operation: its total time complexity is O(n log A), because the largest possible length of the array is log A, and the number of elements we remove from the array is not greater than the number of elements we add to it.

Now consider the first operation. We compute gcd log A times. We need the following fact: computing gcd(a[0], a[1], a[2], ..., a[n-1]) in the manner "g = 0, then for each i: g = gcd(g, a[i])" has complexity O(n + log(max(a))). This is because each time g is updated it at least halves, so g can decrease at most log(max(a)) times. Using this fact, if we keep the last gcd result of the previous block, the first operation is O(log A + log A), where the first log is the length of the array and the second log is the total running time of the gcds. In the end, the two operations give O(n log A + n log A) = O(n log A).
•  » » » 10 months ago, # ^ | ← Rev. 2 →   0 Yes, I think the idea is right and this will work in O(n log A).

Update: I wrote this and submitted my old submission and this one at the same time, but I got 124 ms on both. Maybe I wrote something wrong, but it feels like the optimization shouldn't give a huge performance boost, because for that you need specialized big tests, and they don't usually appear in problems (only when authors want to kill one of the wrong solutions that works on a random testset).
•  » » » » 10 months ago, # ^ |   0 Perhaps without this change it's still n log A, and we just don't know how to show it. Or making such a test is really hard. Oh, I started thinking about it just now, and here is what looks like a proof.

Consider the 'life' of a single element. It's pushed into the array, repeatedly hit by gcd (sometimes with a change, sometimes without), and removed from the array. Pushes and removals are clearly O(n log A). Now the most interesting part. To estimate the hits without change, imagine no element of the array ever changes: that is at most n log A hits in total. BUT! When gcd(a, b) = b, the gcd does exactly one division and gets remainder 0 straight away, so these hits cost exactly O(n log A) in total. To estimate the gcd hits with change, imagine that during its 'life' each element changes the maximum number of times: the maximum is log A changes, and the number of elements is n. So again O(n log A). Perhaps I'm missing something.
•  » » » » » 10 months ago, # ^ |   0 Yes, I just understood that the solution without the optimization is still O(n log A). I will update the solution.
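The fact used in this thread, that folding gcd over an array with an accumulator costs O(n + log A) division steps in total, can be checked empirically with a small counter (my own sketch; the names are hypothetical):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Fold gcd over the array while counting Euclid division steps. Every division
// that changes g at least halves it, and a division that leaves g unchanged
// (g divides x) terminates after a single step, hence O(n + log(max a)) total.
pair<int, long long> foldGcdCountingSteps(const vector<int>& a) {
    int g = 0;
    long long steps = 0;
    for (int x : a) {
        int p = g, q = x;
        while (q > 0) { int r = p % q; p = q; q = r; ++steps; }
        g = p;
    }
    return {g, steps};
}
```

For an array of n equal values the counter shows exactly n steps, matching the "one division per unchanged hit" argument above.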
 » 10 months ago, # |   -22 codeforces should be renamed mathsforces :)
•  » » 10 months ago, # ^ |   0 LOL
 » 10 months ago, # |   0 That's fast. btw When will you guys update the problems with difficulty ratings?
•  » » 10 months ago, # ^ |   0 That usually takes many days
•  » » » 10 months ago, # ^ |   0 How are they decided btw?
•  » » » » 10 months ago, # ^ |   0 Estimated by the rating of everyone who solved the problem
 » 10 months ago, # |   +15 Editorials with hints are a gem :D
 » 10 months ago, # |   +20 I'll cope with my inability to solve C in the contest by watching the new eps of Demon Slayer and AOT T__T
 » 10 months ago, # |   +8 Video Editorial if anyone is looking for video format
 » 10 months ago, # |   0 Solved C for the first time in many contests. Felt happy! Nice contest Team :-)
•  » » 10 months ago, # ^ |   0 Can you tell us your approach?
•  » » » 10 months ago, # ^ | ← Rev. 2 →   0 So I considered the minimum of four possibilities:

- Let v = b-a.
- Let u = abs(b-(b|a))+1.
- Let x be the number of operations to increase a until a|b = b. Hence x+1 operations in total, including 1 operation for performing the OR.
- Let y be the number of operations to increase b until a|b = b. Hence y+1 operations in total, including 1 operation for performing the OR.

My answer is min(u,v,x+1,y+1). Let me know if my approach was clear enough to understand.
•  » » » » 10 months ago, # ^ |   0 Thanks
•  » » » » 10 months ago, # ^ |   0 Can you please explain why you didn't increment a and b simultaneously? Why was it enough to increment only one of them?
•  » » » » » 10 months ago, # ^ |   0 The only reason was that I was concerned the code would run into TLE, hence I excluded this from my approach. By intuition, I felt increasing one of them was sufficient.
•  » » » » 10 months ago, # ^ |   0 The logic of statement 2 (Let u = abs(b-(b|a))+1) is not clear to me. It would be nice if you could give the details.
•  » » » » » 10 months ago, # ^ |   0 In statements 3 and 4, my idea was "increment until the OR operation yields the same number". On the contrary, in statement 2, my idea was "apply the OR operation and then keep incrementing until a=b". Hence, the +1 is for the OR operation and abs(b-(b|a)) is the count of increment operations. Hope this makes it clear.
•  » » » » » » 10 months ago, # ^ |   0 Thank you
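The four candidates described in this thread can be written out directly. A sketch (my reconstruction from the comment, reusing the names u, v, x, y; `minOpsFour` is a made-up name):

```cpp
#include <bits/stdc++.h>
using namespace std;

// v: only increments; u: OR first, then b++ up to a|b;
// x (resp. y): increment a (resp. b) until a|b = b holds, then one OR.
int minOpsFour(int a, int b) {
    int v = b - a;
    int u = ((a | b) - b) + 1;
    int x = 0;
    for (int a1 = a; (a1 | b) != b; ++a1) ++x;
    int y = 0;
    for (int b1 = b; (a | b1) != b1; ++b1) ++y;
    return min(min(u, v), min(x + 1, y + 1));
}
```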
 » 10 months ago, # | ← Rev. 2 →   +12 There was an unexpected thing in problem C: instead of reading the operation as b = b + 1, I read it as a = a - 1. Then I solved the problem with a DP algorithm and it passed the system tests ?_? I was really shocked when I discussed it with my friends after the contest. Is this really a solution to this problem, or is it just luck? If so, can someone prove the correctness of this solution? This is my solution: 144542918. Sorry for my bad English.
•  » » 10 months ago, # ^ | ← Rev. 3 →   0 Lmao, glad to see I'm not the only one who read b++ as a-- :p. I looped from 0 to b-a and checked if you could obtain b in i operations. Though similarly to you, I have no idea why it works xd
•  » » 10 months ago, # ^ |   +8 We have to make $a$ a submask of $b$, then apply the OR operation once. Making $a$ a submask of $b$ can be done by incrementing a some number of times, or by incrementing b some number of times. The number of times we would increment b is also the number of times we would decrement a. This is why a-- works like b++ in this problem.
 » 10 months ago, # |   +6 I solved problem C slightly differently: I used binary search. Firstly, we can see $a|b \ge \max(a,b)$. So if $b>a$, then $a|b \ge b$. Now we can see that after we perform the third type of operation, $a \ge b$. Once this has happened, increasing a will not be optimal, so we will increase b by 1. This means that after the one operation of type 3 we will use operations of type 2 only ($b=b+1$).

So this leaves us with two types of operation sequences:

Type 1 -> (a++ a++ a++ a++ ... a|b b++ b++ .....)
Type 2 -> (b++ b++ b++ b++ ... a|b b++ b++ ...)

Once we know this, we can simply binary search to find the minimum number of operations required to make $a=b$. My code using this approach: 144558213
•  » » 10 months ago, # ^ |   +6 When you wrote

So this leaves us with two types of operation sequences:
Type 1 -> (a++ a++ a++ a++ ... a|b b++ b++ .....)
Type 2 -> (b++ b++ b++ b++ ... a|b b++ b++ ...)

how did you reach the conclusion that we will only increase a or only increase b before the operation of type 3? It is clear that after a type 3 operation we only need to increase b.
•  » » » 10 months ago, # ^ |   0 I don't have a proof for this, but I can try to explain with an example. Suppose we increase the number a x times and increase the number b once. In this case the OR gives (a+x)|(y+1). As opposed to this, if we simply increase the number a x times, then the new a after the OR operation is (a+x)|y. I don't have a proof, but (a+x)|y <= (a+x)|(y+1).
•  » » » » 10 months ago, # ^ |   0 (a+x)|y <= (a+x)|(y+1) it seems to be wrong, check this case: a+x=2 and y=1
•  » » » 10 months ago, # ^ |   0 I think this is the logic: our aim is to match the bit values in corresponding positions of a and b.

Proof for Type 1: consider the highest bit which is set in a but not in b. This can only be set in b by incrementing b. But we can avoid setting that bit in b by making it 0 through incrementing a. Example: let the binary representation of a = 01110 and b = 10000. Here it is clear that using the operation a|b is good only after incrementing a up to 10000; doing a|b = 11110 earlier would cost more b++ operations. And doing b++ operations in between, while incrementing a up to 10000, would be of no use.

Proof for Type 2: let's take the example a = 01010 and b = 10101. If we do a|b first, a would be 11111, after which we need to do b++ for b to match 11111. But we can reduce the operations by first incrementing b to 11000 (why? we save the later b++ operations by making the third highest bit 0). Then a|b = 11010, and then we increment b up to 11010.
•  » » 10 months ago, # ^ | ← Rev. 2 →   0 I solved this problem considering two cases:

- a++ a++ ... until a|b=b
- b++ b++ ... until a|b=b

This takes O(b). Though it has passed all tests, I'm not sure why I don't have to consider the case where both a and b increase before operation 3. My solution: 144589486
•  » » 10 months ago, # ^ |   0 Good observation about the 2 types of operations. But I feel there is no need for binary search; just iterating the number of operations from 0 to (b - a) is good enough? Code like:

```cpp
int main(int argc, char* argv[]) {
    int T;
    cin >> T;
    for (int t = 0; t < T; t++) {
        int a, b;
        cin >> a >> b;
        int ans = b - a;
        for (int i = 0; i <= (b - a); i++) {
            int times1 = i + 1 + (((a + i) | b) - b);
            int times2 = i + 1 + (((b + i) | a) - (b + i));
            ans = min(ans, min(times1, times2));
        }
        cout << ans << endl;
    }
    return 0;
}
```
 » 10 months ago, # |   0 In problem D, how do we reach O(n log n) using a segment tree? (I can't understand the editorial.) Could anyone help me?
 » 10 months ago, # |   0 C was "Strange Test" for me
 » 10 months ago, # |   0 Hints in Editorials >>>. The questions were pretty interesting. Loved the contest.
 » 10 months ago, # |   +9 More cheating today. We need to ban the cheaters' accounts; this kind of account makes CP unfair. What's more, I find that most of the accounts which submitted their code for E1 at 01:59 are suspected of cheating.
•  » » 10 months ago, # ^ |   0 Congrats for becoming pink!
•  » » » 10 months ago, # ^ |   0 uha thank you . orz orz
 » 10 months ago, # |   +24 I read $C$'s editorial as:" It is optimal to have either a′=a or b′=b. Bonus: Prove it. "
 » 10 months ago, # |   0 For the third problem, the bonus to solve it in O(log(b)): the only thing we care about is how the ones in a match up against the ones in b, and since a is smaller than b we can show that the iteration does not exceed the number of bits in b. Now, in every iteration we check the bit state in a and in b; if they mismatch, then we shift it to the closest one in the other number, and we can do that via the sum of a geometric sequence in O(1). After we match all the ones, it's clear that we need to take the minimum shifting of the two numbers. And at last we check: if (a == b), then we don't have to apply the OR operation; otherwise, after applying it, we can guarantee that a is equal to b.
 » 10 months ago, # | ← Rev. 2 →   +1 Problem C: it can be easily solved using SOS DP. I couldn't come up with the idea during the contest, but I got it in the upsolving phase. Code:

```cpp
#include <bits/stdc++.h>
using namespace std;
int main(){
    ios_base::sync_with_stdio(0); cin.tie(0);
    int T;
    cin >> T;
    while (T--) {
        int a, b;
        cin >> a >> b;
        int ans = 1e9;
        vector<int> dp(2*b + 10, 1e9);
        for (int i = a; i  // ... (snippet truncated in the original comment)
```
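Since the snippet above is cut off, here is a hedged reconstruction of how a submask (SOS-style) DP can solve C (my own sketch, not necessarily the commenter's code; `solveSOS` and `f` are made-up names): for each candidate b' in [b, 2b], find via a submask DP the smallest a' >= a that is a submask of b'; that candidate then costs (a'-a) + (b'-b) + 1, compared against the no-OR answer b-a.

```cpp
#include <bits/stdc++.h>
using namespace std;

// f[m] = smallest value >= a that is a submask of m (INT_MAX if none).
// Computed in increasing mask order: every proper submask m^(1<<j) < m,
// so it has already been processed.
int solveSOS(int a, int b) {
    int bits = 1;
    while ((1 << bits) <= 2 * b) ++bits;
    int lim = 1 << bits;
    vector<int> f(lim, INT_MAX);
    for (int m = 0; m < lim; ++m) {
        if (m >= a) f[m] = m;
        for (int j = 0; j < bits; ++j)
            if (m >> j & 1) f[m] = min(f[m], f[m ^ (1 << j)]);
    }
    int ans = b - a;                       // never use the OR at all
    for (int b1 = b; b1 <= 2 * b; ++b1)    // b' > 2b is never needed
        if (f[b1] != INT_MAX)
            ans = min(ans, (f[b1] - a) + (b1 - b) + 1);
    return ans;
}
```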
•  » » 10 months ago, # ^ |   0 Why up to 2*b?
•  » » » 10 months ago, # ^ | ← Rev. 2 →   +4 You don't want to check numbers greater than $2b$, because that would mean doing $b := b + b$, and since $a < b$, $a | b < 2b$ always.

•  » » 10 months ago, # ^ |   0 Can you please explain your approach?

•  » » » 10 months ago, # ^ |   0 Well, if you iterate over a number (let it be $b'$), you want to find another number that is a submask of $b'$ and as near as possible to $a$.

•  » » » » 10 months ago, # ^ |   0 thanks

 » 10 months ago, # |   +14 Can anybody prove or disprove my solution for problem D? My solution: Spoiler

We know that there exist 1 or 0 bad segments starting from each index, and no bad segment lies completely inside another one.

Each time we select the first segment, erase its last element (put some good random number there), and continue from that one.

For finding the first segment, we can start from the front of the array and iterate until the gcd of the elements becomes less than or equal to their count (j). If it's equal, then that's the first segment; otherwise we go backward and find the last place (i) where gcd[i,j] <= j-i+1, erase [1,i), and continue computing starting from i.

I used testers and could not find any counter cases for it. Here is the submission: 144576517

•  » » 10 months ago, # ^ |   0 I've solved it exactly this way during the contest, however I was sure it was O(n^2). It seems the proof has a lot to do with the fact that when we go back, the gcd can decrease at most log times. Maybe if I get some time, I'll try to prove it.

•  » » 10 months ago, # ^ |   0 "No bad segment lies completely in another one": proof. Suppose there is a bad segment $[l_j,r_j]$ which lies completely inside another one $[l_i,r_i]$. Then $r_j-l_j=\gcd(a_{l_j},\ldots,a_{r_j})\ge \gcd(a_{l_i},\ldots,a_{r_i})=r_i-l_i$, so $r_j-l_j\ge r_i-l_i$: contradiction.

"Each time we select the first segment and erase the last element (put some prime number) and we continue from that one": proof that it is an optimal strategy. Consider an optimal sequence of numbers $a_{i_1},a_{i_2},\ldots,a_{i_k}$ that we change.
Denote by $[a_{l},a_{r}]$ the leftmost segment (by its left end point) containing $a_{i_j}$. We have to prove that if we replace $a_{i_j}$ in the optimal sequence by the right end point of $[a_{l},a_{r}]$, that is $a_r$, our sequence stays optimal. We prove it by showing that each bad segment containing $a_{i_j}$ also contains $a_r$: its left end point is $\le a_l$, because $a_l$ was the leftmost one, and its right end point is $\ge a_r$, because otherwise that bad segment would lie completely inside another one, $[a_{l},a_{r}]$. It means that such a segment contains $a_r$.

•  » » » 10 months ago, # ^ |   0 Yes, but what about the case where we don't find any bad segment in the go-back step? Then we cannot move our left pointer entirely to the right.

•  » » » » 10 months ago, # ^ |   0 I don't quite get what you mean by "we don't find any bad segments in the go-back step".

•  » » » » » 10 months ago, # ^ |   0 I might be misunderstanding something, but as far as I understand, as long as gcd > r - l + 1 we increase our r. The moment the gcd becomes smaller than r - l + 1, we have to build a new gcd, and we do that by going back from r. Now if we manage to find a bad segment this way, then everything is fine, as we'll be able to move our l pointer to r + 1. But what if we only find a segment [new_l, r] such that l < new_l < r and gcd(new_l, ..., r) > r - new_l + 1 and gcd(new_l - 1, ..., r) < r - new_l + 2? Then we have to continue analyzing from l = new_l and r = r + 1.

 » 10 months ago, # |   0 Is it possible to solve problem C using dp and bitmask?

•  » » 10 months ago, # ^ |   +2 For finding the first submask >= a and less than b, for every x >= b and x < (1 << (max_bit in b + 1))

•  » » 10 months ago, # ^ |   0 Finally I implemented my idea correctly!

 » 10 months ago, # | ← Rev. 2 →   +14 I don't know why this solution works for C. The answer is one of: 1. b-a, or 2. increase a until a is a subset of b (i.e. a|b = b) and then do the OR operation, or 3. increase b until b is a superset of a (i.e.
a&b = a) and then do the OR operation. I just found it intuitively correct during the contest and checked it on various test cases before submitting; I have no clue about the proof. Bonus Task 3: Prove it :)

•  » » 10 months ago, # ^ |   +1 I also used the same approach, but I'm also not able to prove this. If you get any proof in the future, then please do share.

 » 10 months ago, # |   0 Could someone please explain why we are iterating from 21 in the first solution of C? Thanks

•  » » 10 months ago, # ^ | ← Rev. 2 →   0 Because 10^6 is less than 2^21. Thus, reducing ms/complexity.

 » 10 months ago, # |   0 Why are there so few submissions for C? It was the easiest question in this round.

•  » » 10 months ago, # ^ | ← Rev. 3 →   0 Nice solution. I struggled a lot, also after watching streams, but your code was the easiest to understand. Thanks!! And congrats on green.

•  » » 10 months ago, # ^ |   0 Can you explain why it is enough to increment only a or b, but not both of them simultaneously? I can't understand this.

•  » » » 10 months ago, # ^ |   0 You can see this comment, which proves a'=a or b'=b: https://codeforces.com/blog/entry/99442?#comment-882489

 » 10 months ago, # | ← Rev. 3 →   +1 C upsolved in "$O(\log b)$". Converted the numbers to 1-0-strings and did string manipulations first. (144595427, Perl). Solution

```
A_2: 0100|0|0[0]1011(0)101 <- lowest bit
B_2: 0100|1|0[1]1001(1)100
----------a---b---c--d----
```

We find the rightmost 0/1 column which is not preceded by a 1/0 column somewhere. In this case it's column b, because column d is preceded by column c. Then we can discard all bits to the left of b, but we set a flag if we discarded any 0/1 column.

```
A_2: ...[0]1011(0)101 <- lowest bit
B_2: ...[1]1001(1)100
---------b---c--d----
flag=1 ('a' was discarded)
```

Now we can either 1) increment B (without its highest bit) up to A and apply OR, or 2) increment A up to B without its lower bits (i.e. $2^{length(B)-1}$) and then apply OR if we have flag=1 or B has any of its lower bits set (i.e. it is not a power of 2).
Instead of incrementing, we subtract and take the minimum of these two results.

•  » » 10 months ago, # ^ |   0 Oh wow, Perl, that's nice.

 » 10 months ago, # |   0 Can someone help me understand C more? In this equation: (a′−a)+(b′−b)+((a′|b′)−b′)+1, I understand that the first term is the number of steps required to turn a into a′, the second term is the number of steps required to turn b into b′, and the "1" is the step where we turn a′ into b′ using (a′|b′). But what about the third term, ((a′|b′)−b′)? Can someone explain it more?

•  » » 10 months ago, # ^ |   0 We are not turning a′ into b′ using a′|b′; rather we are turning it into some value >= b′. ((a′|b′)−b′) tells us the number of moves we still have to do (using the increment operation) to bring b′ up to a′|b′.

•  » » » 10 months ago, # ^ |   0 OK, now I understand. Thanks a lot!

 » 10 months ago, # | ← Rev. 2 →   +1 Problem C. Can someone prove that the optimal solution can always be achieved by one of the two procedures below (i.e. one of these two procedures always represents an optimal solution)?

Procedure 1: (a++) ... (a++) (a|b)
Procedure 2: (b++) ... (b++) (a|b)

Why is this correct?

 » 10 months ago, # |   0 Really cool problems.

 » 10 months ago, # |   +8 I feel sorry for kids studying in school 179

 » 10 months ago, # |   +10 My solution to E1, which is somewhat different from the editorial (although it shares some ideas): think about computing the answer for a particular $x$. Similar to the editorial, we will see if we can achieve an answer of $ans$. This can be done with binary search or the two pointers method (although actually the former TLE'd for me). :(

We consider the nodes with depth greater than $ans$. Unlike the editorial, which deduces where we should add the edge $(1, u)$, my solution tries all nodes (i.e. $1 \leq u \leq n$). The new distance to some node $v$ with depth greater than $ans$ is now $x + dist(u, v)$. We can achieve $ans$ if the maximum of this expression over all such $v$ is $\leq ans$.
In other words, if the max $dist(u, v)$ is $\leq ans - x$.

How do we quickly query this max value? In precomputation, calculate the distance from each node to every other node. If we're adding the edge $(1, u)$, we only want to consider the distances from $u$ to nodes $v$ with $depth[v] > ans$. So sort the distance array (with source node $u$) by the nodes' depth. Now we can build a suffix maximum array over it, which, together with knowing at which index each depth starts in this array, allows us to answer the queries above.

•  » » 10 months ago, # ^ |   0 Hi robinz62, may I have a look at your code following the idea you mentioned here? I hold the same idea, however it returns TLE on the 6th test; here is my code. The time complexity I used is O(n * n log n), which should be very close to O(n^2) while n <= 3000, but the submission result proves that's not true (at least for problem E1 here...).

•  » » » 10 months ago, # ^ |   0 I think the timing might be a bit tight, maybe also depending on the language. As I mentioned, my solution also TLE'd when I used binary search instead of two pointers. Here is my submission: https://codeforces.com/contest/1632/submission/144598040. It's probably a bit hard to read; you can ignore the random comments to myself, where I was just thinking. $depth$ is the depth of the nodes, rooted at 0. $dist[u]$ is the array of distances from $u$ to everyone else. $st[u]$ is the same information as $dist$ but sorted by the depths (given that name because originally I used segment trees, until I realized that wasn't necessary).

•  » » 10 months ago, # ^ |   0 Great explanation! Thanks a lot!

 » 10 months ago, # |   +20 Really good contest, my respect to the authors.

 » 10 months ago, # |   0 Can anyone please prove why increasing either a or b (and then checking the minimum-operations condition) works in problem C? Why do we skip the case where both a and b are increased?

 » 10 months ago, # |   0 In the editorial of D, what does this line mean?
"In our case, this is easy to do because our segments are not nested."

•  » » 10 months ago, # ^ |   +1 I meant that it is easy to remove all segments that have $r$ in them, since we can keep deleting the first one while the condition holds.

Why are the segments not nested? Suppose they are: $[l_2, r_2]$ is inside $[l_1, r_1]$. We have $\gcd(a_{l_2} \ldots a_{r_2}) \ge \gcd(a_{l_1} \ldots a_{r_1})$ and $r_2 - l_2 + 1 \le r_1 - l_1 + 1$. Since the segments are bad, this is possible only when both inequalities hold with equality. This means the segment lengths are equal => they cannot be nested.

•  » » » 10 months ago, # ^ |   0 Got it, thanks a lot.

 » 10 months ago, # |   +11 Here's another (harder) solution for E: instead of calculating the answer for $X$, calculate the maximum $X$ for each answer, then take a suffix max.

Finding the maximum $X$ is just finding the node that has minimum distance to all nodes with $f(v) \ge ans$; this is easy with an $O(N^2)$ brute force by iterating $ans$ from $N$ down to $1$ and having addition events for nodes.

Solving it in $O(N)$ just needs an extra observation: all optimal nodes form a chain up to the root, so you start at the node with the largest $f(v)$ and, whenever it's better to consider the parent, do that. You can maintain the distances of nodes inside and outside the subtrees using sets.

 » 10 months ago, # |   +34 Problem C. Proof of Bonus 2: a'=a or b'=b.

We know that once we apply the third operation, we have to increase $b$. Let's see how many of these operations are required after the third operation. Let's say a' = (a|b). We would have to add $2^i$ for every bit $i$ where $a'$ is 1 and $b$ is 0. This also means that the $i_{th}$ bit of $a$ is 1 and of $b$ is 0. So we have to look at the bits where $a$ is set and $b$ is not.

Let's start greedily. We will look for the highest such bit; say it is the $x_{th}$ bit. Now we have two ways to handle it.
We can either increase $a$ till its $x_{th}$ bit becomes 0, or we can increase $b$ till its $x_{th}$ bit becomes 1. Let's go with the first way. (The second can be explained in a similar way.) Now, to make the $x_{th}$ bit in $a$ 0, we would have to keep increasing it till all bits from $0$ to $x$ become 1, and then we increase once more to make all these bits 0, and some bit $j$ becomes 1, for $j>x$. Now if the $j_{th}$ bit is already set in $b$, we have found our solution without using the second operation. Else, if the $j_{th}$ bit was not set in $b$, we again have a similar problem, now with $x=j$. WLOG, this time let's choose to increase $b$. So we have to increase $b$ till its $j_{th}$ bit becomes 1. In doing so, we will first most definitely reach a state in which the bits from $0$ to $j-1$ in $b$ become a superset of those bits in the original $a$ (the $a$ we were given). (One such example is when all these bits become 1.) So we see that we could originally just have increased $b$ to make $b$ a superset of $a$, which would have taken fewer operations, instead of first increasing $a$. Following this logic, we can see that it's best to either just increase $a$ or just increase $b$ before doing the third operation.  » 10 months ago, # | +6 The idea of problem D is similar to 475D.  » 10 months ago, # | 0 The time limit for B is too tight. Even with O(N), I got over 700 ms, when the time limit is 1 second. •  » » 10 months ago, # ^ | +5 I copied your code and submitted it again. Only 46 ms, I don't know why. •  » » » 10 months ago, # ^ | 0 GNU C11 vs GNU C++17 :) •  » » » » 9 months ago, # ^ | 0 Yes. I now realised that C is much slower than C++ here.  » 10 months ago, # | 0 Problem is perfect but i am dummy  » 10 months ago, # | 0 [1632D — New Year Concert] You can also use the fact that for a prefix, there are at most $O(\log A)$ different suffix gcd values. This leads to another way to find the bad segments. Does anybody have the code with this method? Thanks.  » 10 months ago, # | ← Rev.
3 → 0 I'm done with the bitmask problems :( I can't solve these ever. How can I improve on this type of problems?  » 10 months ago, # | ← Rev. 2 → +1 1632D - New Year Concert. How is it solved with a segment tree and two pointers in $O(n (\log n + \log A))$? For a query on the segment tree, there are $O(\log n)$ $\gcd$ computations, so each query costs $O(\log n\log A)$, and the total complexity should be $O(n\log n\log A)$? Similarly, the code with a segment tree and binary search in the tutorial may have complexity $O(n\log^2 n\log A)$, although we can do the binary search on the segment tree itself to decrease it to $O(n\log n\log A)$ as well. The complexity of the method with ST (sparse table) is ok. •  » » 10 months ago, # ^ | ← Rev. 2 → 0 Oh, the complexity of the segment tree with two pointers seems not accurate either. The preparation of the segment tree does $O(n\log n)$ gcd computations and costs $O(n\log n\log A)$.  » 10 months ago, # | 0 I have a doubt in the solution code of problem C for(int a1 = a; a1 < b; a1++) { int b1 = 0; for(int i = 21; i >= 0; i--) { if((b >> i) & 1) { b1 ^= (1 << i); } else { if((a1 >> i) & 1) { b1 ^= (1 << i); break; } } } ans = min(ans, a1 - a - b + (a1 | b1) + 1); } Why have they chosen the inner loop to range from 21 to 0? Can somebody please explain it to me? •  » » 10 months ago, # ^ | 0 a and b are limited to 1e6, which means they have at most 20 bits. •  » » » 10 months ago, # ^ | 0 ooooo... ok i get it, Ty  » 10 months ago, # | 0 Can anybody explain how to solve E1?  » 10 months ago, # | 0 Can someone please explain this line in the problem C editorial: "If current bit of a′ is 1 and b is 0, set the current bit of b′ to 1 and stop." Why do we stop here? •  » » 10 months ago, # ^ | 0 In order to ensure that a1 | b1 is minimal: after the break position, all lower bits of b1 are 0, and 0 | n = n. What's more, in this situation b1 is definitely greater than b.  » 10 months ago, # | 0 Problem C's answer is so obscure that I spent 1.5 hours understanding the way of constructing b1!!!
But I learnt a lot from the process of understanding.  » 10 months ago, # | 0 A general doubt: my code's output on Codeforces differs from local output in 1632B — Roof Construction. https://codeforces.com/contest/1632/submission/144622647 On Codeforces for n = 9: 1 2 3 0 4 5 6 7 8. Locally I get (which is correct) for n = 9: 1 2 3 4 5 6 7 0 8. I couldn't find any early termination and I don't employ global variables. There are also no uninitialized variables, looking at the code. I'm guessing it to be compiler specific? (Tried both C++14 and C++17.) •  » » 10 months ago, # ^ | 0 int bit = log(n-1)/log(2) + 1 ; This is prone to rounding errors since floating-point numbers are used. Instead use a simple log2(int) function based on bit counting. •  » » » 10 months ago, # ^ | 0 Bruh, I laughed so hard when it got accepted just like that by changing to log2(n). Although I hadn't faced any problem when using log(n)/log(2) on LeetCode. Might be judge specific.  » 10 months ago, # | 0 B bonus solution is 2*(n-2). •  » » 10 months ago, # ^ | 0 Too small; it seems to work only for $n = 3$. •  » » » 10 months ago, # ^ | 0 no, it is working for all  » 10 months ago, # | 0 Can someone help me with problems C and D? For C, I ran the sample code and checked that each time in the answer either a' = a or b' = b, but I don't know why; I know that to minimize the OR we need to fix one of the numbers and get the minimized counterpart (a' | b'). For D, I think there is a hint of DP there, because prefixes are used: for example, a segment of length l could be optimized via the previous segment when S1 ∩ S2 ≠ ∅, where S1 is the first segment and S2 is another.  » 10 months ago, # | 0 For problem A shouldn't the complexity be O(1) (instead of O(n))? We compute the answer purely by looking at the length of the string (the n parameter), which must give us YES or NO in constant time. •  » » 10 months ago, # ^ | 0 You still need to read the input.
» 10 months ago, # | 0 Why is my C solution working?  » 10 months ago, # | +3 What's the use of giving contests, when all you see is maths and bits? Great problems?  » 10 months ago, # | 0 For problem C (Strange Test) the tutorial says that if the kth bit of a' is 1 and in b it is 0, we should make the b' bit 1. But isn't that counterproductive if our goal is to reduce a'|b'? Instead, if I make the b' bit 0, then the a'|b' bit will also be 0 and b' will still be greater than or equal to b (because that bit was anyway 0 in b). Can somebody explain this to me? This line to be specific: If current bit of a′ is 1 and b is 0, set the current bit of b′ to 1 and stop.  » 10 months ago, # | 0 Is there a way to download the CF test cases? This submission on D is wrong, but I can't debug it. The idea is that I store the (prime, power) pairs and count in how many last entries they appear. Then I create a map with GCDs and I propagate it downward. If I find the pair where the number of last occurrences == GCD, it is bad. If the smallest number of occurrences is >= the value of GCD it is also bad. I tried generating some small random test cases, but couldn't repro the issue. •  » » 10 months ago, # ^ | +3 •  » » » 10 months ago, # ^ | 0 Thanks!  » 10 months ago, # | +1 Can anybody help me with the proof for C that a'=a or b'=b? I am not able to get it.  » 10 months ago, # | ← Rev. 2 → 0 My approach to C: make $a$ a submask of $b$, that is, $(a | b) = b$. 1) We can do $a = a+1$, and keep doing that until $(a | b) = b$. 2) We can do $b = b+1$ (which, in effect, is the same as $a = a-1$), and keep doing it until $(a | b) = b$. Explanation: if the $ith$ bit of $a$ is on, but in $b$ it is off, then $a$ is not a submask of $b$. Either you turn off the $ith$ bit of $a$ (by doing $a = a+1$ enough times), or you turn on the $ith$ bit of $b$ (by doing $b = b+1$ enough times). Yes, you might turn on other bits in the process, but you can always turn them off by doing these operations enough times.
Now if $x =$ the number of steps to turn on the $ith$ bit of $b$ by doing $b$++, and $y =$ the number of steps to turn off the $ith$ bit of $a$ by doing $a$--, you can actually observe $x = y$. Code  » 10 months ago, # | ← Rev. 2 → +10 A proof for bonus $2$ in $C$ (it also proves why it is always optimal to only increase $a$ until it is a submask of $b$, or only increase $b$ until it is a supermask of $a$): Some observations: $x+y$ is equivalent to finding two masks $m_1$ (its set bits are not set in $x$) and $m_2$ (its set bits are set in $x$) where $x+y=x+m_1-m_2$. Any application of $b:=b+1$ after $a:=a|b$ is replaceable by $b:=b+((a|b)-b)$ first and then $a:=a|b$, and the cost will be the same. So, we can always find an equivalent solution with $a:=a|b$ being the last applied operation. Suppose the set of bits set only in $a$ is $set_a$, the bits set only in $b$ are $set_b$, and the bits set in both $a$ and $b$ are $set_{ab}$. Our final goal is to make $set_a$ empty. Scenario $1$: If we apply $a:=a+1$ only, this means we want to unset all the bits belonging to $set_a$ in $a$. If the $MSB$ of such bits is the $i^{th}$ bit, it can be seen that it is enough to find a $k^{th}$ bit ($k>i$) which is in $set_b$, then do the following: For all the bits with index $j$ in $set_a$: if the $j^{th}$ bit is not set in $b$, set it. Apply Scenario $1$ on the bits whose $MSB$ is $j$. If the $MSB$ of bits in $set_a$ is the $i^{th}$ bit, apply scenario $2$ on the bits from $i$ to $j+1$, and for all the bits with index$
•  » » 10 months ago, # ^ |   0 nice
» 10 months ago, # | ← Rev. 2 →   0 I don't understand why my div2B solution works: I use the highest power-of-two bit and loop down. Can someone give me a proof? https://codeforces.com/contest/1632/submission/144798711
•  » » 10 months ago, # ^ |   0 No matter how you arrange the permutation, some number with the max bit set ends up next to a number without it, so the answer is at least $2^k$, where $2^k$ is the highest bit. So you put all the numbers with the max bit to the left (adjacent pairs there share the top bit, so their XORs don't matter), put the lowest such number, $2^k$ itself, next to the 0 (that pair gives exactly the answer $2^k$), and then the rest, whose adjacent XORs stay below $2^k$ and don't matter.
» 10 months ago, # | ← Rev. 2 →   +19 Alternative solution for 1632E2 - Distance Tree (hard version) (probably someone explained it here). Horrible wall of text. Suppose we know which edge to add. Obviously, there is no point in walking over the edge twice. Therefore we can split all vertices into two groups: those which are shorter to reach using the edge we add, and those which are shorter to reach without passing the edge we add. Also, we know the maximum distance. Now, the key insight: we don't actually care about the shortest path; we just want to reach each vertex without exceeding the maximum distance. So let's make different groups instead: one group will be all those vertices which can be reached without exceeding the maximum distance (the answer), and the other group will be all other vertices, which require using our new edge. Also, it's easy to see that the first group is just all vertices at distance not exceeding the maximum distance (the answer) in the tree without the added edge. Now, suppose we still know which edge to add, and assume the second group is not empty: there is at least a single vertex which requires using the edge to be reached without exceeding the limit. Pick an arbitrary vertex from the second group and look at the path we take. If the edge does not come from vertex 1, then we pay the additional distance from 1 to one endpoint in order to jump to the other endpoint. It's easy to see that if the first endpoint were vertex 1, we would save this distance. This is true for an arbitrary vertex from the second group, thus one endpoint of the edge should be vertex 1. Since we have shown that we should add an edge (1, v), we also know that the shortest distance to vertices in the second group will be of the form weight_of_new_edge + distance(v, vertex_from_group). This means the maximum distance for the second group is the maximum distance from vertex v among vertices within the second group, plus the weight of the new edge.
Therefore we want to pick v such that the maximum distance among vertices within the second group is minimal. Even if the v in our answer is different, we can use the vertex with this property and it will also work (easy to prove by contradiction using the definition of the vertex we pick). If for some vertex v the maximum distance from v to all other vertices is minimal, then this vertex is called a "center" of the graph. It's a well-known notion. Also, it's known that a tree has one or two centers, and to find them you can pick an arbitrary vertex, find the vertex u1 with the largest distance from it, then find the vertex u2 with the largest distance from u1. The path from u1 to u2 is called a diameter because it's the largest simple path within the tree. The center is in the middle of this diameter, and it's easy to see that the maximum distance from the center is the distance from u1 to u2 divided by 2, rounded up. Using this knowledge we know the maximum distance within the second group, and we don't actually care where exactly vertex v is. Now it's easy to solve 1632E2 - Distance Tree (hard version). We use the following fact: whatever the edge weight is, there has to be an answer, so we can try every possible answer. Just brute-force the answer (limit): for each limit, find the first group and put all other vertices into the second group. Find u1 in O(n) using BFS, find u2 in O(n) using BFS; now we know the maximum distance from the center within the second group. Then, in O(n), iterate over all possible weights of the edge (1, v) and update the answers. We do O(n + n + n) work for each limit, and the number of limits in the worst case is O(n), so the total time complexity is O(n^2). Here is the part where I iterate over weights (here z is the limit, and m is the distance between u1 and u2): for i in range(n): ans[i] = min(ans[i], max(z, i + 1 + (m + 1) // 2)) Note, instead of iterating over the limit, it's easier to just add vertices to the first group one by one in increasing distance (in order of BFS). Here is the source code: 144641370. Hard version.
First, notice that if we do the first BFS from vertex 1 to find u1, then we always traverse vertices in the same order, and u1 is always the last vertex in the order of the same BFS in which we add vertices to the first group. This means we can run BFS from vertex 1 once and store the whole queue of vertices; u1 will always be the last item of this queue. Then, we can add vertices to the first group directly from this saved queue. With this optimization we have: BFS once, then in a loop: BFS from u1 (which is constant) to get u2, and a loop to update answers. O(n + n * (n + n)). Still O(n^2). But now let's use the fact that u1 is fixed. Let's also run BFS from u1 once and save the queue. Now u2 is the first vertex from the end of the queue which is still in the second group. Source code: 144641823. If we use the two pointers technique, the new complexity would be O(n + n + n * n), where n * n comes only from the loop that updates all answers. (no source code example) Time to optimize the loop mentioned above (part of the code). We can analyze when max(z, i + 1 + (m + 1) // 2) evaluates to z and when otherwise. With a little bit of algebra it's easy to see that it's equal to z in the range [0, z - (m + 1) // 2), and to the other value in the range [z - (m + 1) // 2, n). This leads to the code: for i in range(z - (m + 1) // 2): ans[i] = min(ans[i], z) for i in range(max(0, z - (m + 1) // 2), n): ans[i] = min(ans[i], i + 1 + (m + 1) // 2) The first operation we can do using a segment tree in O(log n). The second operation is tricky. Notice that each of its updates always has (i + 1) in it, so we just want to find out the lowest (m + 1) // 2 applied for index i. So, we can make a second segment tree which stores this info (minimum (m + 1) // 2). Now we have a solution in O(n + n + n * (log n + log n)), which is O(n log n). Here is the code: 144643752. Now, let's get rid of the segment trees. Notice how we use two pointers for u2, which means that the distance between u1 and u2 only decreases. In the code above it's m, and z is always increasing.
This means that the value of the expression z - (m + 1) // 2 is always non-decreasing. Also, the shape of the function we update looks like _/ if we plot it. So each new relaxation is shifted up, or right, or up and right (or not shifted). In any case, it can't be shifted left or down. Using this knowledge you can show that the shape in the end should be like this:  _/ __/ _/ Let's call a 'midpoint' the vertex where a horizontal line ends. All those 'midpoints' can be stored in a stack, and each time you add one you need to compare only with the last one. Then, suppose we are left with 3 midpoints as in the 'picture' above: we can iterate once between midpoints 1 and 2, then iterate once between midpoints 2 and 3, and so on; O(n) in total. Also, iterate once up to midpoint 1, and iterate once after the last midpoint. This solution's time complexity is O(n). Source: 144647029. And, in the end, it can be shown that those / parts can be ignored. Why? Well, for the optimal solution we never exceed the limit in the second group. So, in the last mentioned submission, the last for inside the solve function can be omitted. Source: 144832151. Edit: sorry, I have a mistake in the part about /. The source mentioned above without the additional for removes only the last / and passes the tests for some reason, but my claim was about all /. If I ignore all / I get WA2: 144858000. My claim is correct, but only with a big note: we need to keep all 'midpoints', because in my previous implementation I removed all midpoints which were aligned like this:  _/ _/ _/ But in the new version I should keep them all. Source: 144858814 Now if you look closer you'll see that you don't need the stack anymore. Source: 144860233
» 10 months ago, # |   0 In C, why is taking | only at the end optimal? In the general case we could apply x1, y1 operations of the 1st and 2nd kind, then take |, then again x2, y2 operations, and so on...
 » 10 months ago, # | ← Rev. 2 →   0 Would BFS work for problem C?
 » 10 months ago, # |   0 For problem C, I don't understand how to construct the optimal version of b prime
» 10 months ago, # |   0 "Analytical solution" for C, without increments: Spoiler unsigned int a; unsigned int b; cin >> a; cin >> b; int res = b - a; unsigned int bits = a & ~b; unsigned int mm = 1; while (mm <= bits) { mm *= 2; } int res2 = (a & (mm - 1)) - (b & (mm - 1)) + 1; res = min(res, res2); while (((b & mm) == 0) || ((a & mm) > 0)) { mm = mm << 1; } int res5 = mm - (a & (mm - 1)) + 1; res = min(res, res5); cout << res << endl; 1) Take the bits which are set in a and not set in b; call this value bits. 2) Take the smallest power of 2 which is strictly > bits; call it m (it has only one bit set). 3) Left-shift this bit (if necessary) until it is set in b and not set in a: ((m & b) > 0) and ((m & a) == 0). The answer is min(b - a, (a & (m - 1)) - (b & (m - 1)) + 1, // increment b until it is a superset of a; m - (a & (m - 1)) + 1 // increment a until it is a submask of b). 145072237
» 10 months ago, # | ← Rev. 2 →   0 @Vladithur Bonus 2: prove that it is optimal to have either a1=a or b1=b. My solution: https://codeforces.com/contest/1632/submission/145206245 My approach is that I took either a1=a or b1=b. If a1=a and b is changed to b1 using b++, this operation is actually making the lower bits of b a superset of the lower bits of a (maybe all bits of a, if the highest bit of a is not equal to the highest bit of b). Once all these bits of a are a subset of the set bits in b, we do the 3rd operation once to make a=b. If b1=b and a is changed to a1 using a++, this operation is actually making the higher bits of a (maybe all bits of a) a subset of the higher set bits of b. Once we get all the higher bits (maybe all bits of a) of a equal to a subset of the higher set bits in b, we do the 3rd operation once to make a=b. This is not trivial and was just a random thought which eventually led to AC. Please comment on this if you don't understand, want to provide a better proof, or want to discuss this further. Thanks!
 » 10 months ago, # |   0 Interestingly problem D can be implemented using $O(\log A)$ memory, no need to even store the array.
•  » » 10 months ago, # ^ |   0 How?
•  » » » 10 months ago, # ^ |   0 Check out my solution, it's based on the well known stack trick: 145373154
» 10 months ago, # |   0 Can someone explain, in the problem C solution, why it uses break when the ith bit is on in a1?
•  » » 9 months ago, # ^ | ← Rev. 2 →   0 Because you want to minimize a1 | b1. Let's say a1 = 0111 and b = 1011 (in binary). Now you start comparing bits from the left to create a new b1. For the first index, you have 0 (from a1) and 1 (from b). You have to keep this 1 in b1, because if you make it 0 then your b1 becomes less than b, and that's not possible (you can only add to b, not subtract). For the second index, you have 1 and 0. Notice that if you change the 0 into a 1, the OR value does not increase! Also notice that if you change this 0 to a 1 in b1, then you can safely make all the bits in b1 that come afterwards into 0s. Making that bit 1 guarantees that b1 is bigger than b, and making all the other bits 0 guarantees that a1 | b1 is minimized! That is why you use break: the rest of the bits will just remain 0! Hope that helps. I am a beginner, so any feedback would be greatly appreciated!
 » 9 months ago, # |   0 Thank you so much for the awesome problems and editorial! I really enjoyed Problem C!
» 2 months ago, # | ← Rev. 3 →   0 Hi, all. Could you provide the rationale behind problem C's editorial step of getting the optimal b' from a' and b?