cdkrot's blog

By cdkrot, history, 6 years ago, In English

D2A author: 300iq, cdkrot, developer: 300iq

Tutorial is loading...

D2B author: isaf27, developer: cdkrot

Tutorial is loading...

D1A author: isaf27, developer: isaf27

Tutorial is loading...

Jury's solution (isaf27): 40973089

D1B author: 300iq, developer: Flyrise

Tutorial is loading...

D1C author: pashka, developer: cdkrot

Tutorial is loading...

D1D author: tourist, developers: qoo2p5, VArtem

Tutorial is loading...

The first solution: 40971595 and the second solution: 40971634.

D1E author: isaf27, developer: isaf27

Tutorial is loading...

Jury's solution (by isaf27): 40973023

D1F author: GlebsHP, developers: demon1999, PavelKunyavskiy

Tutorial is loading...

Credits to all jury members who contributed to this round and EJOI: tourist, PavelKunyavskiy, niyaznigmatul, 300iq, GlebsHP, pashka, qoo2p5, VArtem, demon1999, Flyrise, ifsmirnov, isaf27, yeputons, cdkrot.

»
6 years ago, # |
  Vote: I like it +210 Vote: I do not like it

Downvoted, thanks for explaining the "simple cases analysis" for Div1D

  • »
    »
    6 years ago, # ^ |
    Rev. 2   Vote: I like it +87 Vote: I do not like it

    Well, I also hoped to see a nice solution for AB-strings, one that doesn't require much case analysis and comes with a proof that it works.

    But sadly, my crude explanation without proof looks better than the current tutorial.

    I hope the authors are going to write a more complete tutorial!

    • »
      »
      »
      6 years ago, # ^ |
      Rev. 2   Vote: I like it -52 Vote: I do not like it

      We tried to improve it a bit.

    • »
      »
      »
      6 years ago, # ^ |
        Vote: I like it -16 Vote: I do not like it

      We tried to improve it a bit.

    • »
      »
      »
      6 years ago, # ^ |
      Rev. 3   Vote: I like it +95 Vote: I do not like it

      The Real Div1D Tutorial

      • »
        »
        »
        »
        6 years ago, # ^ |
          Vote: I like it -29 Vote: I do not like it

        Reader practice is a legitimate learning tool; it forces you to wholly understand the solutions.

        An example: https://codeforces.com/blog/entry/53168

        Even this post relies a lot on reader practice; you can see how effective it is, and now so many people are using bitset optimizations.

»
6 years ago, # |
Rev. 2   Vote: I like it -12 Vote: I do not like it

During the round, I noticed that in Div2 B it is written that "In one operation you can select some i (1 ≤ i ≤ n) and replace element ai with ai & x". Since it says "some", I read it as being able to choose as many i's as I like, so I thought the maximum answer should be 1 instead of 2 (maybe it's just a translation mistake).

However, I got tons of Wrong Answers because of this. So, is it a mistake... or am I just too weak in English?

SORRY FOR MY POOR ENGLISH

  • »
    »
    6 years ago, # ^ |
      Vote: I like it +6 Vote: I do not like it

    I literally understood "some" the same way you did!!

»
6 years ago, # |
  Vote: I like it +31 Vote: I do not like it

My Div1D solution:

40970859

I coded so many lines...

»
6 years ago, # |
  Vote: I like it +14 Vote: I do not like it

Some of the submissions in the editorial cannot be viewed.

»
6 years ago, # |
Rev. 12   Vote: I like it +42 Vote: I do not like it

In Div2E/Div1C, here is a simpler solution:

Let dp[i][j][k] be the minimum cost given that we are at position i, we have built j houses in [1, i], and k indicates whether we build a house at position i (k = 0 or 1).

For the transitions, it's clear that:

when k = 0, we can use dp[i][j][0] to update dp[i+1][j+1][1] and dp[i+1][j][0], depending on whether we are going to build a house at position (i+1);

when k = 1, we can use dp[i][j][1] to update dp[i+2][j+1][1] and dp[i+2][j][0], depending on whether we are going to build a house at position (i+2) (we cannot build at position (i+1) now, since position i already has a lovely house).

You can see the DP transitions in my code: 40965480

#define Mymin(a,b) if((b)<(a)) (a) = (b);  // the post originally said "Mymax", but the DP below is a minimization
int P( int pos , int goal ){               // cost of lowering hill `pos` down to height `goal`
	if( a[pos] <= goal ) return 0;
	return a[pos] - goal;
}
dp[1][0][0] = 0;                           // every other dp state starts at INF
dp[1][1][1] = 0;
int div2 = (n>>1) + (n&1);                 // maximum possible number of houses
For(i,n){                                  // i = 1..n
	Forx(j,0,div2){                        // j = 0..div2
		For0(k,2){                         // k = 0..1
			#define N dp[i][j][k]
			if( N == INF ) continue;       // unreachable state
			if(!k){
				Mymin(dp[i+1][j+1][1],N+P(i,a[i+1]-1));  // build at i+1: lower hill i below a[i+1]
				Mymin(dp[i+1][j][0],N);                  // do not build at i+1
			}else{
				Mymin(dp[i+2][j+1][1],N+P(i+1,min(a[i],a[i+2])-1));  // build at i+2: hill i+1 must go below both neighbours
				Mymin(dp[i+1][j][0],N+P(i+1,a[i]-1));                // no house at i+2 yet: just keep hill i+1 below hill i
			}
		}
	}
}

Complexity: O(n²)

I used this algorithm during the contest and got Accepted, and I think it's much clearer than the editorial.

SORRY FOR MY POOR ENGLISH

  • »
    »
    6 years ago, # ^ |
      Vote: I like it +16 Vote: I do not like it

    Yeah, that is another of jury solutions, moreover, it is the first solution I have prepared :)

    But I decided that the second solution explained in the editorial is simpler.

    • »
      »
      »
      6 years ago, # ^ |
        Vote: I like it -46 Vote: I do not like it

      Stop commenting man. No matter what you write you will get downvotes for sure.

  • »
    »
    6 years ago, # ^ |
      Vote: I like it +3 Vote: I do not like it

    In your solution, aren't we counting j houses in [1, i] rather than [1, i)? When k = 0 you increment j, i.e. dp[i+1][j+1][1]: the i-th element is not taken (since k = 0 for i), the (i+1)-th element is taken, and that is reflected in j being incremented.

  • »
    »
    6 years ago, # ^ |
    Rev. 4   Vote: I like it +6 Vote: I do not like it

    Why don't we update dp[i+1][j][0] from dp[i][j][1]?

    • »
      »
      »
      6 years ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      We cannot know the height of (i+1) if you do that.

      dp[i+1][j][0] = dp[i][j][1] + ????(something you don't know)

      • »
        »
        »
        »
        6 years ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

        So how do we transition from dp[i][j][1] to dp[i+2][j][0] then?

        • »
          »
          »
          »
          »
          6 years ago, # ^ |
            Vote: I like it 0 Vote: I do not like it

          Since we have built a house at position i, it's clear that we cannot build at position (i+1). If we are not going to build at position (i+2), then the cost is just that of making mountain (i+1) lower than mountain i.

          See in my code:

          Mymin(dp[i+1][j][0],N+P(i+1,a[i]-1));

          Are you clear now? :)

  • »
    »
    6 years ago, # ^ |
      Vote: I like it +11 Vote: I do not like it

    Thanks for your code and solution; I find it easier to understand than the tutorial. But the code would be more readable if you didn't use macros like "Forx" and "For". Anyway, thanks. :)

  • »
    »
    6 years ago, # ^ |
    Rev. 4   Vote: I like it 0 Vote: I do not like it

    if(!k) Mymin(dp[i+1][j+1][1],N+P(i,a[i+1]-1));

    For this part, we are updating the (i+1)-th position considering the i-th position's state, but why don't you also consider the (i+2)-th position's value, since it can be greater than or equal to the value at the i-th position? Shouldn't it be Mymin(dp[i+1][j+1][1],N+P(i,i+2,a[i+1]-1));?

    • »
      »
      »
      6 years ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      The cost of position i+2 is considered in the part

      Mymin(dp[i+2][j+1][1],N+P(i+1,min(a[i],a[i+2])-1));
      Mymin(dp[i+1][j][0],N+P(i+1,a[i]-1));
      

      What's more, if you write Mymin(dp[i+1][j+1][1],N+P(i,i+2,a[i+1]-1));, you will count the cost of position i+2 twice and get a wrong answer.

      • »
        »
        »
        »
        6 years ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

        I understood it later. Thanks for the explanation; it's much easier to understand than the editorial.

  • »
    »
    6 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Thanks for the explanation! I have a slight correction (please correct me if I'm wrong):

    when k = 1, we can use dp[i][j][1] to update dp[i+2][j+1][1] and **dp[i+2][j][0]**
    

    dp[i+2][j][0] should be dp[i+1][j][0]. This confused me a bit initially but might help others.

    • »
      »
      »
      6 years ago, # ^ |
        Vote: I like it +1 Vote: I do not like it

      It should be dp[i+2]. If you want to go to dp[i+1] from dp[i] with k = 1, it is annoying to account for the cost of i+1, because we do not know yet whether we are going to build at position i+2. So use dp[i+2], for convenience.

»
6 years ago, # |
  Vote: I like it 0 Vote: I do not like it

Editorial for Div. 2 B:

"Clearly, if it is possible then there are no more than 2 operations needed."

Can someone explain that? Thanks in advance.

  • »
    »
    6 years ago, # ^ |
    Rev. 3   Vote: I like it 0 Vote: I do not like it

    Because applying the AND operation to the same number twice is useless: (a&x)&x is equal to a&x.

    SORRY FOR MY POOR ENGLISH

  • »
    »
    6 years ago, # ^ |
    Rev. 2   Vote: I like it +3 Vote: I do not like it

    observation...

    if (value & x) = m

    then (m & x) = m and again (m & x) = m and so on ..

    (11000 & 10001) = (10000)

    (10000 & 10001) = (10000)

    (10000 & 10001) = (10000)

    ............. so on

    So from a value V, at most one different number can be obtained.

    So it is enough to check whether two equal numbers can be obtained within two operations.

    sorry for my poor English.

»
6 years ago, # |
Rev. 2   Vote: I like it +19 Vote: I do not like it

In the tutorial of Div1F, I guess it should be d ≥ dp[A] instead of d ≤ dp[A]. Also, O(2nn2) seems like it should be O(2^n·n^2) (wow, it seems to be a rendering bug on Codeforces).

»
6 years ago, # |
  Vote: I like it 0 Vote: I do not like it

Can someone explain Div2D more clearly?

  • »
    »
    6 years ago, # ^ |
      Vote: I like it +18 Vote: I do not like it

    Maybe I can help if you ask an exact question.

  • »
    »
    6 years ago, # ^ |
    Rev. 2   Vote: I like it +6 Vote: I do not like it

    Have you tried drawing out the graph with nodes named r1, c1, r2, c2? You will see that a connected component accurately represents a table.

    Now you have K components; how do you connect them in a fast way? Well, you can just do K-1 merges between them, so the answer is K-1. (By merge I mean: choose a cell (r, c) such that r and c are not in the same component, and add (r, c) to connect them.)

    Actually, vamaddur explained the intuition to me as a normal flood fill, but done in a smarter way.

    • »
      »
      »
      6 years ago, # ^ |
        Vote: I like it +1 Vote: I do not like it

      Submission for Reference

      Note that the comps part was just used for debugging. I can provide a more detailed explanation of my solution if someone requests.

    • »
      »
      »
      6 years ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      What do you mean by "drawing out the graph with nodes named r1,c1,r2,c2"?

      • »
        »
        »
        »
        6 years ago, # ^ |
          Vote: I like it +4 Vote: I do not like it

        So if we have elements in positions (r1, c1), (r1, c2), (r2, c1), where r1 ≠ r2 and c1 ≠ c2, then we can produce element (r2, c2).

        Draw edges between (r1, c1), (r1, c2), (r2, c1), and notice that (r2, c2) is in the same connected component.

»
6 years ago, # |
Rev. 2   Vote: I like it +1 Vote: I do not like it

Hey, I noticed something strange in Div2D/Div1B:

When you submit the O(n log n) DSU, it's actually faster than the O(n) DSU.

Strange, right?

My O(nlogn) — 40990303 — 124 ms (path compression)

My O(n) — 40990296 — 155ms (path comp + ranking)

I thought it must just be me. But I was curious, so I also edited vamaddur's solution to see if something similar would happen.

The same pattern happened, even with different implementations (he had some debugging stuff, etc):

O(n) — 40963207 — 483 ms (path comp + ranking)

My edit on his O(n) to O(nlogn) — 40990415 — 311ms (removed ranking, only path comp)

Now, of course I understand that complexity is not everything; there are constant factors. But I don't understand how even a bad constant factor in the O(n) solution could outweigh the log n factor of the slower DSU.

I think this is strange; maybe it has something to do with the data? Why does the O(n log n) version run faster than the linear one?
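
For reference, here is a minimal sketch of the two variants being compared (typical textbook implementations with illustrative names, not the exact submitted code):

#include <algorithm>
#include <vector>
using namespace std;

struct DSU {
    vector<int> p, r;
    bool byRank;                                   // toggles union by rank on or off
    DSU(int n, bool byRank) : p(n), r(n, 0), byRank(byRank) {
        for (int i = 0; i < n; i++) p[i] = i;
    }
    int find(int x) {                              // path compression in both variants
        return p[x] == x ? x : p[x] = find(p[x]);
    }
    void unite(int a, int b) {
        a = find(a); b = find(b);
        if (a == b) return;
        if (byRank) {                              // "path comp + ranking"
            if (r[a] < r[b]) swap(a, b);           // attach the lower-rank root under the higher-rank one
            p[b] = a;
            if (r[a] == r[b]) r[a]++;
        } else {                                   // path compression only
            p[b] = a;
        }
    }
};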

  • »
    »
    6 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    DSU is O(N log N), because the find-father part goes through at most log N different parents before arriving at the correct one.

    • »
      »
      »
      6 years ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      Path compression + ranking is actually O(n·a(n)), where a(n) means you keep taking log2 until you reach 1. In practice this is almost always considered O(1), because for n = 2^16, a(n) = 4, so for most inputs you deal with, a(n) won't be more than 4 or 5 operations.

      • »
        »
        »
        »
        6 years ago, # ^ |
        Rev. 2   Vote: I like it 0 Vote: I do not like it

        However, I don't think that's a reason to consider InverseAckermann(n) completely as a constant...

        • »
          »
          »
          »
          »
          6 years ago, # ^ |
            Vote: I like it 0 Vote: I do not like it

          It caps at 4 for this problem, so I just consider it 4, which should be smaller than log2(n).

  • »
    »
    6 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    I remember that path compression + ranking is not O(n)...

    Isn't it O(N * InverseAckermann(N))?

    BTW, maybe what makes ranking slower is the if-else statements and the extra RAM accesses for the rank array?

    • »
      »
      »
      6 years ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      My comment above.

      And yeah, I suspect something like that, but even if you take the maximum value a(4e5) = 4

      and compare O(4n) with O(n log n):

      log2(4e5) ≈ 18.6,

      so for the 4n version to be slower than the 18n one, the added constant factor from the if-else statements and memory accesses would have to make it at least 4 or 5 times slower.

      Now I'm wondering whether that is actually the case, or whether the data is just unbalanced. If it is actually the case, why not always just write the O(n log n) DSU with the better runtime?

      • »
        »
        »
        »
        6 years ago, # ^ |
        Rev. 2   Vote: I like it 0 Vote: I do not like it

        As far as I know, the penalty for a branch misprediction in the CPU is huge, and it's hard for the CPU to predict complex conditional jumps like the if-else statements used for ranking...

        (Sorry for the poor English... maybe the terminology isn't correct.)

        BTW, if we consider InverseAckermann(n) a constant because it's capped at 4, why can't I say that, since n is capped at 2e5, my algorithm is O(1)?

        • »
          »
          »
          »
          »
          6 years ago, # ^ |
          Rev. 2   Vote: I like it 0 Vote: I do not like it

          OK, implementing it with just if statements is still 155 ms.

          As for treating a(n) as a constant factor, that's typically done because it's easier to think about. So I just call it O(n).

          As for treating something as large as n as a constant factor, it might be useful; I'm not sure.

          • »
            »
            »
            »
            »
            »
            6 years ago, # ^ |
              Vote: I like it 0 Vote: I do not like it

            I think he means that the if statements are harder for the branch predictor to predict.

          • »
            »
            »
            »
            »
            »
            6 years ago, # ^ |
            Rev. 2   Vote: I like it 0 Vote: I do not like it

            Well, actually I mean that the if statement in the ranking DSU, if(rank[x]>rank[y]), is complex for the CPU (it uses two memory references), and it is harder than other branches (like loop conditions) to predict, because the value of that expression is almost random. So the conditional jump in the ranking DSU (whether it is an if or an if-else) could be the performance bottleneck.

            Another thing that increases the constant factor could be cache misses: since the rank array is over 1 MB, it can't fit into the cache entirely, which can be another reason.

            • »
              »
              »
              »
              »
              »
              »
              6 years ago, # ^ |
                Vote: I like it 0 Vote: I do not like it

              So would using the conditional operator remove this bottleneck?

              • »
                »
                »
                »
                »
                »
                »
                »
                6 years ago, # ^ |
                Rev. 2   Vote: I like it 0 Vote: I do not like it

                Maybe.

                Normally, the ?: operator can be compiled into conditional-move instructions, which run faster than conditional jumps when the expressions have no side effects, because no branch prediction is involved. So you can try the ?: operator to optimize that.

                Try this: prt[rank[x]>rank[y]?x:y]=rank[x]>rank[y]?y:x;

                But as for the RAM accesses... I have no idea how to optimize that.
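
                For instance, a merge written in this style could look like the sketch below (prt and rank are the array names from the comment above; this is only an illustration of the ?: idea, not the submitted code, and x and y are assumed to be roots already):

                int prt[400005], rank[400005];               // sizes are illustrative

                void unite(int x, int y) {                   // x and y must already be roots
                    if (x == y) return;
                    int big   = rank[x] > rank[y] ? x : y;   // larger-rank root becomes the parent
                    int small = rank[x] > rank[y] ? y : x;
                    prt[small] = big;                        // may compile to a conditional move instead of a jump
                    rank[big] += (rank[x] == rank[y]);       // rank grows only when both ranks are equal
                }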

»
6 years ago, # |
  Vote: I like it 0 Vote: I do not like it

Can someone explain Div2B more clearly? I don't understand why at most 2 operations suffice to obtain an equal pair.

  • »
    »
    6 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    First, it's obvious that you'll apply the operation to any element at most once (because (a&x)&x == a&x). Let's apply it to the value at position i. Suppose we couldn't find any element equal to a[i]&x in the array, so we apply the operation to some position j; this makes the count of operations used 2. Suppose we still couldn't find any element equal to a[j]&x in the array, but we do find a position k such that a[i]&x = a[k]&x. Then using the operation on position j was a waste, so at most 2 operations always suffice. The rest of the elements need not be tampered with.
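
    A minimal sketch of the resulting check (my own illustration of the 0 / 1 / 2 / -1 cases, not the jury code):

    #include <bits/stdc++.h>
    using namespace std;

    int main() {
        int n, x;
        scanf("%d %d", &n, &x);
        vector<int> a(n);
        map<int, int> cnt;                             // counts of the original values
        for (int &v : a) { scanf("%d", &v); cnt[v]++; }

        int ans = 3;                                   // 3 = "impossible" for now
        for (int v : a)
            if (cnt[v] >= 2) ans = min(ans, 0);        // two equal values already exist: 0 operations
        map<int, int> cntAnd;                          // counts of v & x over values that actually change
        for (int v : a) {
            int w = v & x;
            if (w == v) continue;                      // the operation would change nothing here
            if (cnt.count(w)) ans = min(ans, 1);       // one operation turns v into an existing value
            cntAnd[w]++;
        }
        for (auto &p : cntAnd)
            if (p.second >= 2) ans = min(ans, 2);      // two operations make two elements equal
        printf("%d\n", ans == 3 ? -1 : ans);
    }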

    • »
      »
      »
      6 years ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      How can you say that such a 'k' will definitely exist??

      • »
        »
        »
        »
        6 years ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

        If it doesn't exist, then the answer is -1. Else the answer is less than or equal to 2.

»
6 years ago, # |
  Vote: I like it 0 Vote: I do not like it

Which problems appeared in EJOI?

  • »
    »
    6 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    All problems starting from Div1B.

    There was also one more problem (which targeted marathon-style solutions) that was in EJOI.

  • »
    »
    6 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Div 1 B, C, D, E, F Source

»
6 years ago, # |
  Vote: I like it 0 Vote: I do not like it

HELP with Problem B. I don't understand the editorial at all.

  • »
    »
    6 years ago, # ^ |
      Vote: I like it +3 Vote: I do not like it

    Ask an exact question, please :).

    • »
      »
      »
      6 years ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      How is problem B a greedy one?

      • »
        »
        »
        »
        6 years ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

        I wouldn't call problem B greedy.

        The problem is about detecting which one of 4 cases happens.

        But yes, some cases are detected greedily, like the case for ans = 2.

»
6 years ago, # |
  Vote: I like it +8 Vote: I do not like it

My solution for div1D, finally:

  • let's append undeletable 'a' and 'b' at the end of the original strings (trying both ways to do it) and group together consecutive identical letters
  • if both compressed strings have equal lengths, it has an easy greedy solution
  • the optimal solution will reach this situation in at most 2 steps
  • there are only a few types of operations we want to do in these two steps; it seems "move the first group from one string to the other" and "swap approx. half of one string with approx. half of the other string" are the only ones needed (6 types in total)
  • brute-force these at most two steps, run the greedy afterwards if possible, and pick the best solution obtained this way

It has a large constant, but at least it should be easier to prove. It's still kind of a PITA to code.

»
6 years ago, # |
  Vote: I like it +3 Vote: I do not like it

Wow, Div1B has an amazing solution in my opinion. What a great problem! It is so easy to code, yet so hard to solve.

  • »
    »
    6 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Can you help me with this problem? If possible, with an elaborate explanation. Thanks!

    • »
      »
      »
      6 years ago, # ^ |
      Rev. 2   Vote: I like it 0 Vote: I do not like it

      The solution is above, so I'll try to explain how to get there intuitively.

      First, consider an empty n by m grid. Observe that the minimum number of elements is n+m-1; you get that number by making an L shape. Why is it minimal? Well, you must have at least one element in every column and every row, and in a way, they have to support each other. That is, simply making a diagonal can touch each column and row, but you won't really be able to produce anything unless you have elements sharing a column and a row.

      But if you do have 2 elements in the same row (say row 0), then they will support each other a lot! Suppose we have 2 elements in the same row, and suppose they're in columns 1 and 2. Then, if we put another element somewhere in column 1, we will obtain the element in the same row of column 2 automatically. So, if we can solve column 1, we can also solve column 2, and vice versa.

      So, in a way, row 0 links columns 1 and 2 together. Similarly, columns can link rows together. If we put an element in column 1, row 3, then solving row 0 will solve row 3 and vice versa.

      Thinking about this relationship, maybe you are inspired to draw these links as edges on a graph. Each element is a link, from row (whatever row it is) to column (whatever column it is). You end up with a bipartite graph. When two rows are in the same connected component, that means that for every solved element in the first row, there is a corresponding solved element in the second, and vice versa. If we can connect the entire graph, then we say that, if any element of a row is solved, the whole column is solved! And since we've connected the whole graph, we must have drawn connections to each column, so the whole graph must be solved!

      So, our goal is to make the described bipartite graph connected. To do this, count the number of connected components; then we want to add edges to connect them. We can always find an edge connecting 2 components here, so the answer is the number of connected components minus 1. To count them, we do a very simple DFS.

      Here is my solution for implementation details. http://codeforces.com/contest/1012/submission/41098552
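
      A minimal DSU sketch of this counting (rows are nodes 0..n-1, columns are nodes n..n+m-1, and each given element (r, c) adds an edge; illustrative code with illustrative array sizes, not the linked submission):

      #include <bits/stdc++.h>
      using namespace std;

      int par[400005];
      int findRoot(int v) { return par[v] == v ? v : par[v] = findRoot(par[v]); }

      int main() {
          int n, m, q;
          scanf("%d %d %d", &n, &m, &q);
          for (int i = 0; i < n + m; i++) par[i] = i;
          int components = n + m;
          for (int i = 0; i < q; i++) {
              int r, c;
              scanf("%d %d", &r, &c);                       // 1-based row and column of an existing element
              int a = findRoot(r - 1), b = findRoot(n + c - 1);
              if (a != b) { par[a] = b; components--; }     // an edge that merges two components
          }
          printf("%d\n", components - 1);                   // one purchase needed per remaining merge
      }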

»
6 years ago, # |
  Vote: I like it 0 Vote: I do not like it

Another solution to Div1C (Div2E) is to extend the solution that you found for k and use it to find the solution for k+1.

Let's define used[i] = 1 if the i'th hill contains a flat, otherwise used[i] = 0.

To extend the solution from k to k+1, we need to build one extra flat, so we can make one of the following transformations to one of the contiguous subsequences:
1- Transform 0 0 0 → 0 1 0. That is, build a new house if its neighbors are not occupied.

2- Transform 0 01010 0 → 0 10101 0. By this, we get an extra 1. Of course the length of this sequence is arbitrary.

Then you simply have to loop over the possible transformations and choose the transformation that minimizes the total cost.

However, keeping track of the total cost and the cost of the transformations turned out to be a little bit tedious, so the DP solution is definitely much simpler to code. Anyway, here is my submission if someone is interested: 41124194.

»
6 years ago, # |
Rev. 2   Vote: I like it 0 Vote: I do not like it

I wrote a slow but correct dynamic programming solution for Div1D, and I found that if the length of either string is large, the decision making seems to follow a regular pattern. You can find these regular patterns here (in function g(x, y, d), where x is the length of the first string, y is the length of the second string, and d indicates whether the first characters differ). Could anyone prove it or hack it?

»
4 years ago, # |
  Vote: I like it 0 Vote: I do not like it

Please help. What is the complexity of this?

https://codeforces.com/contest/1012/submission/91080831

I thought it was O(n^3), but it got AC.