74TrAkToR's blog

By 74TrAkToR, history, 3 months ago, translation, In English

Thanks for participating!


»
3 months ago, # |
  Vote: I like it +136 Vote: I do not like it

Nice.

Spoiler
»
3 months ago, # |
  Vote: I like it 0 Vote: I do not like it

Wow, so fast editorial, thanks!

»
3 months ago, # |
  Vote: I like it +36 Vote: I do not like it

As a tester, I solved problem E with complexity $$$O(100(n+q)\log n)$$$: write the formula as $$$f_i=a_ib_i/\gcd^2(a_i,b_i)$$$. Let's maintain the answer by keeping maximal segments of equal $$$a$$$-values together. It's easy to see that the segments only change $$$O(n+q)$$$ times. For a query, we handle the segments it crosses (i.e. those containing $$$l$$$ or $$$r$$$) separately, and the segments fully inside can be maintained with a segment tree.

Now, we essentially have to calculate $$$f(l,r,x)=\min_{l\le i\le r} x\cdot b_i/\gcd^2 (x,b_i)$$$ $$$O(n+q)$$$ times. Let's enumerate all divisors $$$d$$$ of $$$x$$$ and try each one as a candidate for $$$\gcd(x,b_i)$$$. It's easy to see that for a fixed $$$d$$$ we should select the minimum $$$b_i$$$ in the range that is a multiple of $$$d$$$. To maintain the smallest $$$b_i$$$ in a segment that is a multiple of a given $$$d$$$, we can use a sparse table or a segment tree for each $$$d$$$. The sum of sizes of these tables is at most $$$100n$$$, since numbers in $$$[1,50000]$$$ have at most 100 divisors.

177649266
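
A small, untested C++ sketch of the $$$f(l,r,x)$$$ part described above (my own illustration with assumed names, not the linked submission): for every $$$d$$$ we store the indices whose $$$b$$$-value is divisible by $$$d$$$ together with a sparse table of those values, so one evaluation is a divisor enumeration plus $$$O(1)$$$ range minimums.

    #include <bits/stdc++.h>
    using namespace std;

    // For every d, keep the indices i (0-based, increasing) with d | b[i] and a sparse
    // table of the corresponding b-values, so that "minimum b[i] on [l, r] divisible by d"
    // is two binary searches plus one range-min. Total size is the sum of d(b[i]) <= 100 n.
    struct DivisorRMQ {
        vector<vector<int>> pos;            // pos[d]: indices whose b-value is a multiple of d
        vector<vector<vector<int>>> table;  // table[d]: sparse table over the b-values at pos[d]

        DivisorRMQ(const vector<int>& b, int maxv) : pos(maxv + 1), table(maxv + 1) {
            for (int i = 0; i < (int)b.size(); i++)
                for (int d = 1; 1LL * d * d <= b[i]; d++)
                    if (b[i] % d == 0) {
                        pos[d].push_back(i);
                        if (d != b[i] / d) pos[b[i] / d].push_back(i);
                    }
            for (int d = 1; d <= maxv; d++) {
                int m = (int)pos[d].size();
                if (m == 0) continue;
                int lg = 32 - __builtin_clz(m);
                table[d].assign(lg, vector<int>(m));
                for (int i = 0; i < m; i++) table[d][0][i] = b[pos[d][i]];
                for (int j = 1; j < lg; j++)
                    for (int i = 0; i + (1 << j) <= m; i++)
                        table[d][j][i] = min(table[d][j - 1][i], table[d][j - 1][i + (1 << (j - 1))]);
            }
        }

        // Smallest b[i] with l <= i <= r and d | b[i]; INT_MAX if there is none.
        int minMultiple(int d, int l, int r) const {
            const auto& p = pos[d];
            int lo = lower_bound(p.begin(), p.end(), l) - p.begin();
            int hi = upper_bound(p.begin(), p.end(), r) - p.begin();
            if (lo >= hi) return INT_MAX;
            int j = 31 - __builtin_clz(hi - lo);
            return min(table[d][j][lo], table[d][j][hi - (1 << j)]);
        }
    };

    // f(l, r, x) = min over i in [l, r] of x * b[i] / gcd(x, b[i])^2, evaluated by trying every
    // divisor g of x as the candidate gcd; the minimum over all candidates is exact, because
    // each candidate only overestimates and g = gcd(x, b[i]) itself is among the candidates.
    long long f(const DivisorRMQ& rmq, int l, int r, int x) {
        long long best = LLONG_MAX;
        for (int d = 1; 1LL * d * d <= x; d++) {
            if (x % d) continue;
            for (int g : {d, x / d}) {
                int bi = rmq.minMultiple(g, l, r);
                if (bi != INT_MAX) best = min(best, 1LL * (x / g) * (bi / g));
            }
        }
        return best;
    }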

»
3 months ago, # |
  Vote: I like it +7 Vote: I do not like it

queryforces

»
3 months ago, # |
  Vote: I like it 0 Vote: I do not like it

nice questions

»
3 months ago, # |
  Vote: I like it +5 Vote: I do not like it

How do you solve C1? I understand that f(l,r) <= f(l,r+1), but I don't get how binary search helps here.

  • »
    »
    3 months ago, # ^ |
    Rev. 2   Vote: I like it +3 Vote: I do not like it

    Finally got it.

    Note: the maximum cost will always be sum(arr) - xor_sum(arr), because f(l,r) <= f(l,r+1). So we need to find the minimum segment length that achieves this cost.

    • For every index i, find the nearest index j for which f(i,j) == cost.
    • We can use binary search to find this position j, since f is non-decreasing in j (a small sketch is below).

    My solution is not clean, especially the binary search part, but still: 177667352
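
    A minimal, untested sketch of this idea (my own illustration, not the submission above), assuming a single query over the whole array, as in C1, and 1-indexed input:

        #include <bits/stdc++.h>
        using namespace std;

        int main() {
            int n;
            cin >> n;
            vector<long long> a(n + 1), ps(n + 1, 0), px(n + 1, 0);   // prefix sums and prefix xors
            for (int i = 1; i <= n; i++) {
                cin >> a[i];
                ps[i] = ps[i - 1] + a[i];
                px[i] = px[i - 1] ^ a[i];
            }
            auto f = [&](int l, int r) { return (ps[r] - ps[l - 1]) - (px[r] ^ px[l - 1]); };

            long long best = f(1, n);                 // maximum cost = sum(arr) - xor_sum(arr)
            int bl = 1, br = n;                       // shortest segment achieving it
            for (int l = 1; l <= n; l++) {
                if (f(l, n) != best) break;           // larger l can only give a smaller value
                int lo = l, hi = n;                   // binary search: smallest r with f(l, r) == best
                while (lo < hi) {
                    int mid = (lo + hi) / 2;
                    if (f(l, mid) == best) hi = mid; else lo = mid + 1;
                }
                if (lo - l < br - bl) { bl = l; br = lo; }
            }
            cout << bl << " " << br << "\n";
        }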

  • »
    »
    3 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Since $$$f(l,r) \le f(l,r+1)$$$, for any $$$1 \le i \le n$$$ the maximum value of $$$f(i,r)$$$ is attained at $$$r = n$$$, because the function is non-decreasing in $$$r$$$. The problem asks for the minimum segment, so we need to find the minimum $$$r$$$ with $$$i \le r \le n$$$ such that $$$f(i,r) = f(i,n)$$$; to find this minimum $$$r$$$ we can use binary search.

»
3 months ago, # |
  Vote: I like it +116 Vote: I do not like it

I think you should explain time complexities properly instead of ending the editorial with

Therefore, this solution will work quite quickly.

It might not be obvious to some of us.

»
3 months ago, # |
  Vote: I like it 0 Vote: I do not like it

Can someone explain problem D2? I can't understand the author's explanation.

  • »
    »
    3 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Same, I solved it but I can't understand the author's intended solution either.

»
3 months ago, # |
Rev. 3   Vote: I like it +20 Vote: I do not like it

Alternative solution for E (it's just worse than what's in the editorial, but whatever, it got AC):

To do range assign and point get on the array $$$a$$$, a VEB tree over "important" indices can be used to get O(log log n) updates and queries.

Divide the arrays into blocks of size R (a constant we will fix later). Then inside each block, compute, for every value $$$\leq 5 \cdot 10^4$$$, the smallest multiple of that value which occurs in the block.

Then to compute the answer when an entire block gets set to a value $$$v$$$, it's $$$\min(\texttt{smallest_multiple}[d] \cdot v / d^2 \enspace \vert \enspace d \vert v)$$$

When an update or query touches but doesn't cover an entire block, just brute force.

$$$O(Rq \log \log n + nqd/R)$$$ where $$$d$$$ is the largest number of factors a number can have, which is something like $$$128$$$ for numbers this small. I chose $$$R=400$$$ from trial and error instead of calculating what block size would make sense.

submission
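
A small, untested sketch of the per-block precomputation and the whole-block answer described above (my own code with assumed names, not the linked submission); one such array is kept per block:

    #include <bits/stdc++.h>
    using namespace std;

    const int MAXV = 50000;

    // smallestMultiple[v] = smallest element of `block` divisible by v (INT_MAX if none).
    // Built by enumerating the divisors of every element, so at most ~100 updates per element.
    vector<int> buildBlock(const vector<int>& block) {
        vector<int> smallestMultiple(MAXV + 1, INT_MAX);
        for (int x : block)
            for (int d = 1; 1LL * d * d <= x; d++)
                if (x % d == 0) {
                    smallestMultiple[d] = min(smallestMultiple[d], x);
                    smallestMultiple[x / d] = min(smallestMultiple[x / d], x);
                }
        return smallestMultiple;
    }

    // Contribution of a block whose a-values have all been assigned the value v:
    // min over the block of v * b_i / gcd(v, b_i)^2, taking every divisor d of v as the
    // candidate gcd and pairing it with the smallest b in the block divisible by d.
    long long blockAnswer(int v, const vector<int>& smallestMultiple) {
        long long best = LLONG_MAX;
        for (int d = 1; 1LL * d * d <= v; d++) {
            if (v % d) continue;
            for (int g : {d, v / d})
                if (smallestMultiple[g] != INT_MAX)
                    best = min(best, 1LL * (v / g) * (smallestMultiple[g] / g));
        }
        return best;
    }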

»
3 months ago, # |
  Vote: I like it 0 Vote: I do not like it

Still don't get A. If a[n] mod n != 0, why can we just check gcd(g, n)? I mean, if a[n] cannot be changed to n, then why is checking gcd(g, n) correct?

Sorry for my poor English in advance.

  • »
    »
    3 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Once you apply the operation on index N, you are sure that a[N] will contain only prime factors of N. Similarly for N-1. Now, since N and N-1 don't have any common prime factors, gcd(gcd(a[N],N), gcd(a[N-1],N-1)) divides gcd(N, N-1) = 1, so it is guaranteed to be 1.

    • »
      »
      »
      3 months ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      Got it! Thanks a lot

    • »
      »
      »
      3 months ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      I am still facing trouble understanding A.

      We can change a[i] --> gcd(a[i], i) = gn (let's say), so we know gn is in [1, i].

      The prime factors of gn are surely among the prime factors of i, but not vice versa.

      Therefore, I am unable to understand how we can directly check gcd(g, i); that looks as if we had changed the operation from a[i] --> gcd(a[i], i) to a[i] --> i.

      Please help me understand.

    • »
      »
      »
      2 months ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      Got it! Thanks a lot

  • »
    »
    3 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    It's simple: we need to find the minimum cost. We know that the gcd of two coprime numbers is always 1, so in the worst case we only need to operate on the last two indices, and hence the answer is at most 3.

    if (initial_gcd == 1) ans = 0;

    else if (gcd(initial_gcd, n) == 1) ans = 1;

    else if (gcd(initial_gcd, n-1) == 1) ans = 2;

    else ans = 3;

    initial_gcd --> gcd of the entire array

    // Code (C++): reads t test cases, as in the problem
    #include <bits/stdc++.h>
    using namespace std;

    int main() {
        int t; cin >> t;
        while (t--) {
            long long n, x, g = 0;
            cin >> n;
            for (int i = 0; i < n; i++) { cin >> x; g = __gcd(g, x); }
            if (g == 1) cout << 0 << "\n";                    // already coprime, cost 0
            else if (__gcd(g, n) == 1) cout << 1 << "\n";     // one operation on index n, cost 1
            else if (__gcd(g, n - 1) == 1) cout << 2 << "\n"; // one operation on index n-1, cost 2
            else cout << 3 << "\n";                           // both operations, cost 1 + 2 = 3
        }
    }
»
3 months ago, # |
  Vote: I like it +1 Vote: I do not like it

I still can't understand problem B, can somebody explain?

  • »
    »
    3 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Let us perform the operation from the statement once. What happened? We got rid of one bad pair. A bad pair is an adjacent pair like "01" or "10". We need to convert the original string into a string of the form 00000...0111111...1, and such a string has at most one bad pair. From here comes the solution: https://codeforces.com/contest/1732/submission/177681969

  • »
    »
    3 months ago, # ^ |
    Rev. 2   Vote: I like it 0 Vote: I do not like it

    My approach: if we apply the operation an even number of times, the i-th bit retains its original value.

    So, going from the beginning: if the i-th bit is '1', apply an operation only if the number of operations applied so far is even, and if the i-th bit is '0', apply an operation only if the number of operations applied so far is odd. This solves the problem.

    Solution

»
3 months ago, # |
  Vote: I like it 0 Vote: I do not like it

Fascinated by Problem C, lots of concepts and optimizations to learn.

»
3 months ago, # |
  Vote: I like it +8 Vote: I do not like it

In problem E, the formula answer_x = min(answer_(x/p) * p) is just plain wrong, even with d not equal to x.

Let n = 1, x = 4, b_1 = 8, p = 2. Then answer_x = lcm(x, b_1) / gcd(x, b_1) = lcm(4, 8) / gcd(4, 8) = 8 / 4 = 2. But answer_(x/p) = lcm(x/p, b_1) / gcd(x/p, b_1) = lcm(2, 8) / gcd(2, 8) = 8 / 2 = 4. So answer_x is not equal to p * answer_(x/p); in fact, answer_(x/p) > answer_x.

Please show me where I am wrong.

»
3 months ago, # |
  Vote: I like it 0 Vote: I do not like it

Can somebody please explain A? Once you are done calling me grey and stupid, if you still have time, please explain, with examples maybe? Thanks

  • »
    »
    3 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    For any number $$$x$$$, $$$gcd(x, x-1) = 1$$$, so it can be proved that you can always reach gcd $$$1$$$ by substituting $$$a_{n}$$$ with $$$gcd(a_n, n)$$$ (which is at most $$$min(a_n, n)$$$ and divides $$$n$$$) and $$$a_{n-1}$$$ with $$$gcd(a_{n-1}, n-1)$$$ (which divides $$$n-1$$$). The total cost of these operations is $$$3$$$, because the cost of the first operation is $$$1$$$ and the cost of the second operation is $$$2$$$.

    So first of all you check whether $$$gcd(a)$$$ (the $$$gcd$$$ of the entire array) is already $$$1$$$, which would be the best case (cost $$$0$$$). If not, try performing the first operation above, which costs only $$$1$$$; if the $$$gcd(a)$$$ is still $$$\ne 1$$$, check the second operation, which costs $$$2$$$; and if that also fails, then you know the answer will be $$$3$$$ for sure, as explained above. Let me know if I made myself clear!

  • »
    »
    3 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Here is another way of explaining it:

    1) If the gcd of the entire array is already 1, then we don't need any operation.

    2) If the gcd of the array is some value X > 1: say we perform one operation on some index i. After it, a[i] becomes gcd(a[i], i) <= i <= n <= 20 (n <= 20 is given in the problem statement), so the gcd of the array is at most 20.

    3) Now, any number up to 20 has at most 2 distinct prime factors, because 2*3*5 = 30 > 20.

    4) Each further operation can eliminate at least one prime factor, so at most 2 more operations are needed. Hence with at most 3 operations in total we can be sure the gcd of the array becomes 1.

    5) Since at most 3 operations suffice and n <= 20, we can brute force; the time complexity of the brute force is about n*(n-1)*(n-2), i.e. O(n^3). (A sketch is below.)
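
    A minimal brute-force sketch of point 5 (my own illustration, not the editorial's code), assuming the operation is a[i] := gcd(a[i], i) with cost n - i + 1, as described in this thread. Since three distinct operations would already cost at least 1 + 2 + 3 = 6 > 3, trying every choice of at most two operated indices is enough:

        #include <bits/stdc++.h>
        using namespace std;

        int main() {
            int t; cin >> t;
            while (t--) {
                int n; cin >> n;
                vector<long long> a(n + 1);
                for (int i = 1; i <= n; i++) cin >> a[i];
                // gcd of the array after applying the operation on indices i and j (0 = no operation)
                auto arrayGcd = [&](int i, int j) {
                    long long g = 0;
                    for (int k = 1; k <= n; k++) {
                        long long v = a[k];
                        if (k == i || k == j) v = __gcd(v, (long long)k);
                        g = __gcd(g, v);
                    }
                    return g;
                };
                auto cost = [&](int i) { return i ? n - i + 1 : 0; };
                long long best = LLONG_MAX;
                for (int i = 0; i <= n; i++)
                    for (int j = 0; j <= i; j++) {
                        if (i == j && i != 0) continue;   // operating twice on the same index is useless
                        if (arrayGcd(i, j) == 1) best = min(best, (long long)(cost(i) + cost(j)));
                    }
                cout << best << "\n";
            }
        }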

»
3 months ago, # |
  Vote: I like it +3 Vote: I do not like it

Managed to solve both C2 and D2, lost 16 rating.

»
3 months ago, # |
  Vote: I like it +35 Vote: I do not like it

The complexity analysis for D1 and D2 is not so obvious to me. Can someone maybe provide a better explanation?

»
3 months ago, # |
  Vote: I like it 0 Vote: I do not like it

Wasn't B's solution more straightforward if, for each $$$s_i \in s$$$, you check whether the number of operations performed so far has flipped the $$$i$$$-th bit or left it the same?

I think the editorial's solution is more complicated than this.

My submission: https://codeforces.com/contest/1732/submission/177702576

»
3 months ago, # |
Rev. 2   Vote: I like it -9 Vote: I do not like it

unordered_set/map and gp_hash_table are uphacked.

»
3 months ago, # |
Rev. 3   Vote: I like it 0 Vote: I do not like it

good

»
3 months ago, # |
  Vote: I like it +5 Vote: I do not like it

In the editorial of D1,

"Note that if k is not one of the largest divisors of x, then x/k becomes greater than q"

Why?

»
3 months ago, # |
  Vote: I like it +47 Vote: I do not like it

Does anyone have a correct proof of the solution to D1?

»
3 months ago, # |
  Vote: I like it 0 Vote: I do not like it

Can anybody explain problem C2? I don't understand what $$$A$$$ means in the editorial.

  • »
    »
    3 months ago, # ^ |
      Vote: I like it +2 Vote: I do not like it

    $$$A$$$ means the maximum value in array $$$a$$$.

    • »
      »
      »
      3 months ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      thus the value of the function will become smaller

      Can you explain this? How does the function value become smaller, when it always increases or stays the same if we add a new element?

      • »
        »
        »
        »
        3 months ago, # ^ |
        Rev. 4   Vote: I like it +2 Vote: I do not like it

        (My English may not be so good.) I think it might mean this:
        When we're dealing with the query $$$[L,R]$$$, a simple but slow way is to calculate the nearest $$$r$$$ for every $$$l \in [L,R]$$$. However, once the value of $$$[l,R]$$$ is less than the value of $$$[L,R]$$$, we can just stop and print the answer, because for every $$$i \in [l,R]$$$ no subsegment of $$$[i,R]$$$ can affect the answer, since we want to maximize the value first.
        Then consider how many left ends we check before we break: we break exactly when deleting the current first element $$$a_{now}$$$ makes the value of $$$[now+1,R]$$$ less than that of $$$[now,R]$$$, and the value drops because "at least one of the bits of $$$a_{now}$$$ occurs in $$$xor(now+1,r)$$$". The worst case is that the first $$$\log A$$$ elements of $$$[L,R]$$$ each have a single bit in their binary representation and these bits are pairwise distinct. So we get time complexity $$$O(q \log A \log n)$$$ if we use binary search.

        Upd: There's something I forgot just now(
        The worst case is actually about the first $$$\log A$$$ elements of $$$[L,R]$$$ that are $$$\ne 0$$$ ...... (the same condition as above).
        So we should check the first $$$\log A + 1$$$ elements that are $$$\ne 0$$$ to get the answer.
        If you're confused because of this, I'm sorry.
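
        One way (my own addition) to make the bit argument above precise: writing $$$X = xor(now+1, R)$$$ and $$$S = sum(now+1, R)$$$, we have $$$f(now, R) - f(now+1, R) = (a_{now} + S) - (a_{now} \oplus X) - (S - X) = a_{now} + X - (a_{now} \oplus X) = 2\,(a_{now} \,\&\, X)$$$, so deleting $$$a_{now}$$$ keeps the value unchanged exactly when $$$a_{now}$$$ shares no bits with $$$xor(now+1, R)$$$.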

»
3 months ago, # |
Rev. 2   Vote: I like it 0 Vote: I do not like it

C1 can be solved by brute force with the optimization of skipping zeros, in about O(15*15) = 225 operations per query, since any non-zero element is obliged to reduce the number of bits in the current xor. Thus C2 can be solved by adding a segment tree (to find the xor of a segment), in O((225 + log(n)) * n).

  • »
    »
    3 months ago, # ^ |
    Rev. 2   Vote: I like it +1 Vote: I do not like it

    xor(l, r) = xor(1, r) ^ xor(1, l - 1), so you don't need to use a segment tree. You can find the xor of any segment in O(1) with prefix xors.

»
3 months ago, # |
Rev. 4   Vote: I like it 0 Vote: I do not like it

great round!

»
3 months ago, # |
  Vote: I like it 0 Vote: I do not like it

What does the hack verdict 'Unexpected verdict' mean? Problem E's hacks have so many of this type.

  • »
    »
    3 months ago, # ^ |
      Vote: I like it +5 Vote: I do not like it

    In the system there are solutions that are marked as correct. If a hack is able to break one or more of those solutions (by either TLEing one of them, RTEing one of them, or making their output inconsistent/wrong) then the verdict of the hack will say Unexpected verdict.

    The times when I've triggered Unexpected verdict in the past, the most common reason is that I TLEd one of the Python solutions from the problem setters.

»
3 months ago, # |
Rev. 2   Vote: I like it +15 Vote: I do not like it

This is related to the complexity analysis of $$$D1$$$ and $$$D2$$$:

If we have arrays $$$arr_1$$$ and $$$arr_2$$$, each of them has $$$N$$$ distinct values, where $$$1\le N \le 10^5$$$ and $$$1\le arr_1[i], arr_2[i]\le A_{max}$$$.

Then for each $$$arr_1[i]$$$, we iterate over its positive multiples in ascending order until the first multiple that is not in $$$arr_2$$$. If $$$A_{max}=N$$$, then the total number of iterations can be bounded by $$$N\cdot \sum_{i=1}^{N}{\dfrac{1}{i}}=N\cdot H(N)=O(N\log N)$$$.

If $$$A_{max}$$$ is an arbitrary value larger than $$$N$$$, e.g., $$$10^{18}$$$, is there a way to prove the upper bound on the total number of iterations?

  • »
    »
    3 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Everyone is waiting for the time complexity analysis part.

»
3 months ago, # |
  Vote: I like it +1 Vote: I do not like it

Has anybody solved C1/C2 with a segment tree? I would like to see how the segments are merged and stored.

»
3 months ago, # |
  Vote: I like it 0 Vote: I do not like it

In C1 and C2, suppose I have a test like this: 0 8 2 0 with L = 1, R = 4.

My output was "2 2" and the jury's output was "1 1", and I got WA, even though the segments [2,2] and [1,1] have the same length and the same f(l,r).

Could anybody explain to me why I got WA, please :(

»
3 months ago, # |
  Vote: I like it 0 Vote: I do not like it

In D1, I keep getting TLE on the 35th test case... I went through the submissions of some people and they solved the problem in a similar way. Can someone please tell me what I am supposed to modify in my code? 177832843

  • »
    »
    3 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Okay, I got it... I forgot that the worst-case complexity of searching in an unordered_set is O(N). Stupid mistake!

»
3 months ago, # |
  Vote: I like it 0 Vote: I do not like it

Problem D2: I couldn't understand the last part. How can we recalculate these sets in the case of an add/remove operation? Can anyone explain? TIA

»
3 months ago, # |
  Vote: I like it 0 Vote: I do not like it

Hi, can someone please point me to a possible problem in this submission for problem D1? It gives the correct output on my local machine but WA on the CF submission. I am doing an n log n solution where I store the successive multiples of a number and binary search on the indices later. Thank you.

»
3 months ago, # |
  Vote: I like it 0 Vote: I do not like it

74TrAkToR, in the editorial of D2, shouldn't it be
"then we should already have added approximately $$$\sum\limits_{i = 1}^t\frac{x}{k_i}$$$"
in the second-to-last line, since we are iterating over the $$$t$$$ divisors of $$$x$$$ from $$${k_1}$$$ to $$${k_t}$$$?

»
3 months ago, # |
  Vote: I like it 0 Vote: I do not like it

Does anyone have a two-pointer solution/submission for C1?

»
3 months ago, # |
  Vote: I like it 0 Vote: I do not like it

Can someone please explain problem E: how do we compute the answers for each x in the blocks? Would appreciate it.

»
3 months ago, # |
  Vote: I like it +8 Vote: I do not like it

Tried hard on problem D2 and finally got it accepted. I can't believe it was a 2400-rated problem.

»
3 months ago, # |
  Vote: I like it +9 Vote: I do not like it

I have an alternative solution for D2 that uses sqrt optimization. Store a block of at most $$$\sqrt n$$$ delete operations. For every query, run through all the erased numbers in the current block; if $$$x$$$ divides an erased number $$$b$$$ from the block, just take the minimum of the previously stored answer and $$$b$$$.

If the block reaches size $$$\sqrt n$$$, clear the block and rerun D1's solution from scratch for all currently stored queries. The amortized complexity over queries is $$$O(n \sqrt n \log(n) + n \log(n)) = O(n \sqrt n \log(n))$$$. Because the clear-and-rebuild operation runs at most $$$\sqrt n$$$ times, its total cost is also $$$O(n \sqrt n \log n)$$$. Needed some gp_hash_table optimization to remove the $$$\log n$$$ factor and make it pass. 178023023

»
3 months ago, # |
Rev. 2   Vote: I like it +3 Vote: I do not like it

The time limit on problem E is way too tight. The overall complexity is still $$$\mathcal{O}(X\sqrt{X}\log X)$$$ where $$$X = 5 \times 10^4$$$. From the complexity perspective, the fancy $$$\mathcal{O}(X\log \log X)$$$ optimization in the precalculation does not seem necessary.

I've also tried an $$$\mathcal{O}(100n\log n)$$$ segment tree solution and it still can't pass the time limit.

»
3 months ago, # |
  Vote: I like it 0 Vote: I do not like it

Anyone got a solution to C1 using two pointers? Would help a lot! Thanks in advance!

»
3 months ago, # |
  Vote: I like it +1 Vote: I do not like it

In fact, I don't understand problem B or this tutorial o(╥﹏╥)o

»
3 months ago, # |
  Vote: I like it 0 Vote: I do not like it

For the C1 editorial, I think it should be: iterate over the left boundary and use binary search for the optimal right boundary.

»
2 months ago, # |
  Vote: I like it 0 Vote: I do not like it

183765246 I am getting a wrong answer; what could be the problem here?

»
2 months ago, # |
  Vote: I like it 0 Vote: I do not like it

My code for C2 gives a Runtime Error on submission but works fine with my compiler. Can anyone please check it? https://codeforces.com/contest/1732/submission/183797648

»
2 months ago, # |
  Vote: I like it 0 Vote: I do not like it

Man, I don't understand how problem B is rated 900 and is supposedly easier than A!! B should be rated at least 1200...

»
13 days ago, # |
  Vote: I like it 0 Vote: I do not like it

There needs to be a proof of the upper-bound time complexity for D1.