neal's blog

By neal, 13 days ago, In English

Here are my approaches to the problems today:

1418A - Buying Torches

Since the second trade is the only way to get coal, we clearly need to perform the second trade $$$k$$$ times. So how many times do we need to do the first trade? We can see that in order to end up with enough sticks and coal by the end, we need to obtain $$$ky + k$$$ sticks ($$$ky$$$ to convert to coal and $$$k$$$ to save as sticks). Since we start with one stick already, we need $$$ky + k - 1$$$ additional sticks, and since the first trade really just gives us $$$x - 1$$$ new sticks each time, we'll need to make $$$\displaystyle \left \lceil \frac{ky + k - 1}{x - 1} \right \rceil$$$ first trades (reference to floor and ceiling functions for anyone unfamiliar).

For implementation details, note that for positive integers $$$a$$$ and $$$b$$$, $$$\displaystyle \left \lceil \frac{a}{b} \right \rceil = \left \lfloor \frac{a + b - 1}{b} \right \rfloor$$$. Code: 92851684
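In code, that identity becomes a small integer-only helper. A minimal sketch (the name ceil_div follows what commenters below call it, not necessarily the linked submission verbatim):

#include <cstdint>

// ceil(a / b) for positive integers, using only integer math.
int64_t ceil_div(int64_t a, int64_t b) {
    return (a + b - 1) / b;
}

For this problem the printed answer would then be ceil_div(k * y + k - 1, x - 1) + k, the + k accounting for the second trades.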

1418B - Negative Prefixes

We can think about the problem as follows: we want to order the $$$a_i$$$ to create the longest possible nonnegative prefix of $$$p_n, p_{n - 1}, \dots, p_1$$$ (in other words, the smallest possible $$$k$$$ such that $$$p_n \geq 0, p_{n - 1} \geq 0, \dots, p_k \geq 0$$$).

Notice that $$$p_n = a_1 + \dots + a_n$$$ is fixed. We can also see $$$p_{n - 1} = p_n - a_n$$$, $$$p_{n - 2} = p_n - a_n - a_{n - 1}$$$, etc. So we should make $$$a_n$$$ as small as possible (assuming it is unlocked), then $$$a_{n - 1}$$$, and so on. In other words the unlocked $$$a_i$$$ should be sorted in decreasing order from left to right. To prove this, you can use an exchange argument: if you consider an arrangement of the $$$a_i$$$ where two consecutive unlocked values are not in decreasing order, we can swap them with each other. This swap does not make any of the $$$p_i$$$ smaller (it can only make some $$$p_i$$$ bigger). Thus we can start with the optimal ordering and repeatedly apply swaps until the unlocked values are sorted, without making anything worse.
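As a concrete illustration, here's a minimal sketch of this ordering for a single test case (my own code, not the linked submission; locked[i] = 1 marks a locked position):

#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    cin >> n;
    vector<int> a(n), locked(n);
    for (auto &x : a) cin >> x;
    for (auto &x : locked) cin >> x;

    // Pull out the unlocked values and sort them in decreasing order.
    vector<int> unlocked;
    for (int i = 0; i < n; i++)
        if (!locked[i])
            unlocked.push_back(a[i]);
    sort(unlocked.rbegin(), unlocked.rend());

    // Write them back into the unlocked positions from left to right.
    int j = 0;
    for (int i = 0; i < n; i++)
        if (!locked[i])
            a[i] = unlocked[j++];

    for (int i = 0; i < n; i++)
        cout << a[i] << " \n"[i == n - 1];
}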

Code: 92797816

1418C - Mortal Kombat Tower

This is a DP where the state is the prefix of the bosses we've fought and the person whose turn it is next; the DP value is the number of hard bosses our friend had to fight, which we want to minimize. For each state we can simply consider both options of fighting one boss or fighting two bosses to progress to a future state.

See the code for details: 92798564. In this solution, dp[i][who] represents the minimum number of hard bosses for our friend given that we've finished the first i bosses and that it is who's turn to go next (0 is us, 1 is our friend). In the code I use a little trick that who * hard cancels out the hard bosses for us but includes them for our friend.
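For reference, a compact sketch of that DP (my own rewrite for a single test case, not the linked submission):

#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    cin >> n;
    vector<int> a(n);
    for (auto &x : a) cin >> x;

    const int INF = 1e9;
    // dp[i][who]: we've finished the first i bosses and it is who's turn
    // next (0 = us, 1 = our friend); value = min hard bosses for our friend.
    vector<array<int, 2>> dp(n + 1, {INF, INF});
    dp[0][1] = 0;  // our friend goes first

    for (int i = 0; i < n; i++)
        for (int who = 0; who < 2; who++) {
            if (dp[i][who] == INF) continue;
            // Fight one boss; who * a[i] counts hard bosses only for the friend.
            dp[i + 1][who ^ 1] = min(dp[i + 1][who ^ 1], dp[i][who] + who * a[i]);
            // Or fight two bosses.
            if (i + 1 < n)
                dp[i + 2][who ^ 1] = min(dp[i + 2][who ^ 1],
                                         dp[i][who] + who * (a[i] + a[i + 1]));
        }

    cout << min(dp[n][0], dp[n][1]) << '\n';
}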

1418D - Trash Problem

Note that we can move multiple piles together at the same cost as moving a single pile. This means that if we have a group of piles we want to collect together, as long as we bring them together somewhere between $$$\min(x_i)$$$ and $$$\max(x_i)$$$, the cost of doing this will be $$$\max(x_i) - \min(x_i)$$$.

In particular, since we can end up with two piles, we'll want to create two segments like the following in order to collect everything (* means pile, - means segment):

*----*-*-------*    *-----*---*

We want to minimize the combined length of the segments, but notice that this combined length is equal to $$$\max(x_i) - \min(x_i)$$$ minus the gap in between the segments. So we just need to maximize the gap, by taking the maximum value of $$$x_{i + 1} - x_i$$$.

In order to do this and handle queries, we can store all of the positions in a set and all of the gaps in a multiset. Then when we add/erase a position $$$x_i$$$, we only have to adjust three gaps: $$$x_{i + 1} - x_i$$$, $$$x_i - x_{i - 1}$$$, and $$$x_{i + 1} - x_{i - 1}$$$.

Code: 92847621. Be careful that when erasing from a multiset, ms.erase(key) removes ALL occurrences of key from the multiset. Instead, we want ms.erase(ms.find(key)).
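Here's a condensed sketch of that bookkeeping (my own rewrite of the idea, not the linked submission; it assumes distinct coordinates and valid queries, as the problem guarantees, and that the answer is printed initially and after each query):

#include <bits/stdc++.h>
using namespace std;

set<long long> positions;
multiset<long long> gaps;

// Insert coordinate p, splitting the gap between its neighbors.
void add_position(long long p) {
    auto it = positions.insert(p).first;
    if (it != positions.begin() && next(it) != positions.end())
        gaps.erase(gaps.find(*next(it) - *prev(it)));
    if (it != positions.begin())
        gaps.insert(p - *prev(it));
    if (next(it) != positions.end())
        gaps.insert(*next(it) - p);
}

// Erase coordinate p, merging the two gaps around it.
void remove_position(long long p) {
    auto it = positions.find(p);
    if (it != positions.begin())
        gaps.erase(gaps.find(p - *prev(it)));
    if (next(it) != positions.end())
        gaps.erase(gaps.find(*next(it) - p));
    if (it != positions.begin() && next(it) != positions.end())
        gaps.insert(*next(it) - *prev(it));
    positions.erase(it);
}

// With at most two piles nothing has to move; otherwise the answer is
// (max - min) minus the largest gap.
long long query() {
    if (positions.size() <= 2)
        return 0;
    return *positions.rbegin() - *positions.begin() - *gaps.rbegin();
}

int main() {
    int n, q;
    cin >> n >> q;
    for (int i = 0; i < n; i++) {
        long long x;
        cin >> x;
        add_position(x);
    }
    cout << query() << '\n';
    while (q-- > 0) {
        int type;
        long long x;
        cin >> type >> x;
        if (type == 1) add_position(x);
        else remove_position(x);
        cout << query() << '\n';
    }
}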

1418E - Expected Damage

Given $$$a$$$ and $$$b$$$, let's define big monsters as monsters with $$$d \geq b$$$, and small monsters as monsters with $$$d < b$$$. We will only take damage from monsters that come after the $$$a$$$-th big monster. In particular, if $$$a$$$ is larger than $$$big$$$ (the number of big monsters), we know the answer is immediately 0.

Otherwise, every big monster has a $$$\displaystyle 1 - \frac{a}{big}$$$ chance of doing damage (the chance it is not one of the first $$$a$$$ big monsters). For small monsters, they are equally likely to be in any of the $$$big + 1$$$ gaps before/between/after the big monsters. In the first $$$a$$$ gaps, they will not do any damage, and after that they will. So each small monster has a $$$\displaystyle 1 - \frac{a}{big + 1}$$$ chance of doing damage.

So the answer we want is $$$\displaystyle 1 - \frac{a}{big}$$$ times the sum of every big monster, plus $$$\displaystyle 1 - \frac{a}{big + 1}$$$ times the sum of every small monster. We can use prefix sums to quickly get the sum of all small monsters / all big monsters for each query. Code: 92809090
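For illustration, here's a sketch of the per-query computation with prefix sums (my own code; floating point purely for clarity: if the judge wants the value modulo a prime, as is typical for expected-value problems, the divisions become modular inverses):

#include <bits/stdc++.h>
using namespace std;

int main() {
    int n, m;
    cin >> n >> m;
    vector<long long> d(n);
    for (auto &x : d) cin >> x;
    sort(d.begin(), d.end());

    // prefix[i] = sum of the i smallest damage values.
    vector<long long> prefix(n + 1, 0);
    for (int i = 0; i < n; i++)
        prefix[i + 1] = prefix[i] + d[i];

    cout << fixed << setprecision(10);

    while (m-- > 0) {
        long long a, b;  // a >= 1 per the constraints
        cin >> a >> b;
        // Monsters with d >= b are big, the rest are small.
        int small = lower_bound(d.begin(), d.end(), b) - d.begin();
        int big = n - small;

        if (a > big) {
            cout << 0.0 << '\n';  // the shield never breaks
            continue;
        }

        long long bigSum = prefix[n] - prefix[small];
        long long smallSum = prefix[small];
        double expected = (1.0 - (double) a / big) * bigSum
                        + (1.0 - (double) a / (big + 1)) * smallSum;
        cout << expected << '\n';
    }
}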

1418F - Equal Product

Given a particular value for $$$x_1$$$, we can determine the range of $$$y_1$$$ that would be valid based on the two constraints $$$1 \leq y_1 \leq m$$$ and $$$\displaystyle \frac{L}{x_1} \leq y_1 \leq \frac{R}{x_1}$$$. Let's call it $$$[y_{low}, y_{high}]$$$.

Let's say there exists a valid $$$(x_2, y_2)$$$ that fits the desired constraints such that $$$x_1 y_1 = x_2 y_2$$$. Then if we write $$$\displaystyle \frac{x_2}{x_1} = \frac{a}{b}$$$ as a reduced fraction, we must have that $$$a > b$$$ and that $$$b$$$ is a factor of $$$x_1$$$. Moreover, since $$$x_1 y_1 = x_2 y_2$$$ means that $$$\displaystyle \frac{y_1}{y_2} = \frac{a}{b}$$$, we must also have that $$$a$$$ is a factor of $$$y_1$$$.

If given $$$x_1$$$ we can find $$$y_1$$$, $$$a$$$, and $$$b$$$ that satisfy the above constraints, we are almost done. The only remaining constraint is to ensure that $$$\displaystyle x_2 = x_1 \frac{a}{b} \leq n$$$. So in particular, given values for $$$x_1$$$ and $$$b$$$ (such that $$$b$$$ is a factor of $$$x_1$$$), we want to find the smallest value of $$$a$$$ such that $$$a > b$$$ and $$$a$$$ is a factor of at least one number in our desired $$$y$$$-range $$$[y_{low}, y_{high}]$$$.

In-contest, I then found two separate algorithms that should be fast in different scenarios and decided to use whichever of the two I guessed to be faster for each value of $$$x_1$$$. The first algorithm is to iterate over all of the factors of $$$x_1$$$ as $$$b$$$ and find all of the corresponding valid choices for $$$a$$$. Then loop over those choices for $$$a$$$ from smallest to largest; for each one it just takes a quick $$$O(1)$$$ check to determine whether any number in $$$[y_{low}, y_{high}]$$$ is divisible by $$$a$$$. This has a worst case of $$$O(n)$$$ per $$$x_1$$$ but often returns early (as soon as it finds a solution).
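For reference, that $$$O(1)$$$ check can look like this (a tiny helper of my own):

// True if some multiple of a lies in [lo, hi] (all values positive).
bool multiple_in_range(long long lo, long long hi, long long a) {
    return hi / a >= (lo + a - 1) / a;  // floor(hi/a) >= ceil(lo/a)
}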

In most cases where the first algorithm takes a long time, it's due to the range $$$[y_{low}, y_{high}]$$$ being very narrow. In this case, a separate algorithm is possible: iterate $$$y$$$ from $$$y_{low}$$$ to $$$y_{high}$$$, then iterate over the factors of $$$y$$$ as $$$a$$$, and then check whether an appropriate value for $$$b$$$ exists among the factors of $$$x_1$$$.

This was able to pass the tests in about half of the time limit: 92840659. I don't have a solid proof for the efficiency of this solution though. Maybe someone can find a way to hack it?

Instead, by using a segment tree, you can end up with a clean $$$O((n + m) \log^2 m)$$$ solution. The main idea is that since $$$y_{low}$$$ and $$$y_{high}$$$ are bounded by $$$[1, m]$$$, we are really doing a 2D range query on $$$y_{low}$$$, $$$y_{high}$$$, and $$$b$$$, where we want to find the smallest factor $$$a$$$ of any $$$y$$$ in $$$[y_{low}, y_{high}]$$$ such that $$$a > b$$$. We can do a sweep on $$$a$$$ and $$$b$$$ to handle these queries with a 1D seg tree instead (query minimum in range, update individual element). See the code for more details: 92847130

1418G - Three Occurrences

First let's try to count the number of subarrays where the number of times every integer occurs is a multiple of 3. We can think about creating a 'signature' for each prefix of the array, where the signature tells us how many times each value $$$a$$$ occurs in the prefix of the array, modulo 3. So if the prefix is [3, 2, 1, 2, 2], our signature should be [1, 0, 1], since 1 occurs once, 2 occurs three times (0 mod 3), and 3 occurs once.

Now a subarray satisfies our condition iff the prefix for its start and the prefix for its end have the same signature. But we don't have time to actually store every signature, so instead we'll compute a hash of every signature and count equal pairs using a hash map.

We need a hash which we can update quickly after updating a single element. One obvious option is the polynomial string hash, but there's something even better: generate a random 64-bit number for every index random[i], and then we define our hash as the sum of freq[i] * random[i] where freq[i] is in {0, 1, 2}. We can show that if two frequency arrays are different, their hashes collide with probability at most $$$\frac{1}{2^{63}}$$$.

Now the only thing left to handle is the "exactly thrice" condition. The way we do this is by iterating over the end point from left to right and keeping track of the last three locations of every value. We can then ensure that we only consider start points that are large enough so that our subarray will never contain more than three of any value. To remove invalid prefixes, we can simply subtract them out from our hash map. The final solution is very short and $$$O(n)$$$: 92855419
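Here's a condensed sketch of the whole approach (my own rewrite, not the linked submission; it assumes $$$1 \leq a_i \leq n$$$ as in the problem):

#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    cin >> n;
    vector<int> a(n);
    for (auto &x : a) cin >> x;

    // One random 64-bit number per value; the hash of a prefix is
    // sum of freq[v] * rnd[v] with freq[v] = (count of v) mod 3.
    // Unsigned overflow conveniently works mod 2^64.
    mt19937_64 rng(chrono::steady_clock::now().time_since_epoch().count());
    vector<uint64_t> rnd(n + 1);
    for (auto &x : rnd) x = rng();

    vector<int> freq(n + 1, 0);
    vector<deque<int>> last3(n + 1);      // last three occurrences of each value
    vector<uint64_t> prefix_hash(n + 1);
    unordered_map<uint64_t, int64_t> count;

    uint64_t hash = 0;
    int64_t answer = 0;
    int start = 0;                        // smallest valid start prefix
    count[prefix_hash[0] = hash]++;

    for (int i = 0; i < n; i++) {
        int v = a[i];
        hash -= freq[v] * rnd[v];
        freq[v] = (freq[v] + 1) % 3;
        hash += freq[v] * rnd[v];
        prefix_hash[i + 1] = hash;

        last3[v].push_back(i);
        if ((int) last3[v].size() > 3) {
            // A 4th occurrence of v: any subarray starting at or before
            // last3[v].front() would now contain four copies of v.
            int oldest = last3[v].front();
            last3[v].pop_front();
            while (start <= oldest)
                count[prefix_hash[start++]]--;
        }

        // Equal-signature start points among the still-valid prefixes give
        // subarrays where every value present occurs exactly three times.
        answer += count[hash];
        count[hash]++;
    }

    cout << answer << '\n';
}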

  • Vote: I like it
  • +358
  • Vote: I do not like it

»
13 days ago, # |
  Vote: I like it +33 Vote: I do not like it

I spent over an hour debugging A. I had the formula pretty quickly, but that ceil() function call was ruining everything. Typecasting to (long double) finally gave the right answer for sample test cases 4 and 5.

Thanks for sharing this useful ceil_div() function. :)

  • »
    »
    13 days ago, # ^ |
      Vote: I like it +9 Vote: I do not like it

Me too. I don't know why the ceil function doesn't work: I cast the denominator to double, and the fourth and fifth examples of test 1 give me WA, but this should work, no? Could someone explain? I don't fully understand why these fail :c

    • »
      »
      »
      13 days ago, # ^ |
        Vote: I like it +12 Vote: I do not like it

double probably doesn't have enough precision. I think it would have worked if you used long doubles, but avoiding doubles when possible is always better :P
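To make the precision issue concrete (illustrative values of my own):

#include <cmath>
#include <cstdio>

int main() {
    long long a = 1000000000000000001LL, b = 10;  // true ceil(a/b) = 10^17 + 1
    // double has a 53-bit mantissa, so a rounds to exactly 10^18 first:
    printf("%lld\n", (long long) ceil((double) a / b));  // 100000000000000000, off by one
    printf("%lld\n", (a + b - 1) / b);                   // 100000000000000001, exact
}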

  • »
    »
    13 days ago, # ^ |
      Vote: I like it +6 Vote: I do not like it

Me too. Btw, can you help with how the formula $$$\left \lceil \frac{ky+k-1}{x-1} \right \rceil$$$ is arrived at? I could understand that $$$ky+k$$$ sticks are required and that in each trade we gain $$$x-1$$$ sticks except for the last trade. But why $$$\left \lceil \frac{ky+k-1}{x-1} \right \rceil$$$?

    • »
      »
      »
      13 days ago, # ^ |
        Vote: I like it +6 Vote: I do not like it

Initially you have one stick.

After each trade you will get (x-1) extra sticks.

After the first trade you have 1+(x-1) sticks. After the second trade you have 1+(x-1)+(x-2) = 1+2*(x-2) sticks. After the nth trade you have 1+n*(x-1) sticks --> equation 1

Total sticks required = k*y+k --> equation 2

Equating 1 and 2: n = (k*y+k-1)/(x-1)

      • »
        »
        »
        »
        13 days ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

        Got it. Thanks :)

      • »
        »
        »
        »
        13 days ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

Nice explanation! But can you please explain why they are taking mod: (k*y+k+1)%(x-1)?

        • »
          »
          »
          »
          »
          13 days ago, # ^ |
            Vote: I like it +3 Vote: I do not like it

Look carefully. They are not taking the mod; they are taking the ceiling value of the division (k*y + k - 1) / (x - 1).

        • »
          »
          »
          »
          »
          13 days ago, # ^ |
            Vote: I like it 0 Vote: I do not like it

The number of trades 'n' should be an integer value.

For some values of x, y, k the division is not exact (since we have (x-1) in the denominator).

If n is fractional, we take the ceiling value of n.

      • »
        »
        »
        »
        13 days ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

        Thanks for the explanation

      • »
        »
        »
        »
        11 days ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

        1 + (x-1) + (x-2) != 1 + 2*(x-2). See?

      • »
        »
        »
        »
        46 hours ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

        1+(x-1)+(x-2) = 1+2*(x-2)

        please go through the equation once again. RHS != LHS

        YOU CAN JUST PROVE IT THIS WAY

        PLEASE GO THROUGH MY EXPLANATION BELOW :

// to craft k torches, he needs to have k sticks and k coals
// initially he has 1 stick and 0 coal
// so he needs to generate k coals
// no of sticks required = ky
// now we already had one stick at the start of the game
// now he needs to have k sticks as well, so no of sticks additionally required = k-1
// so total no of sticks is ky+k-1
// in one iteration, you get additionally (x-1) sticks
// total number of iterations is (ky+k-1)/(x-1)
// answer should be the ceil of this value

  • »
    »
    13 days ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

That's the reason I never use ceil functions. To calculate ceil(n/m) I use floor((n+m-1)/m). C++ integer division actually floors this, so we can just write [ ceil(n/m) = (n+m-1)/m ].

    • »
      »
      »
      13 days ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      Easier version: ceil(n/m)=floor((n-1)/m)+1

  • »
    »
    13 days ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

You can even avoid the ceil function. Just add (denominator - 1) to the numerator; it will always give the ceiling value.

  • »
    »
    11 days ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

I am also trying to typecast with the inbuilt ceil function of Java, and it is still showing wrong answer. Can anyone help me, please? Thanks in advance.

»
13 days ago, # |
  Vote: I like it +21 Vote: I do not like it

Thanks a lot neal

»
13 days ago, # |
  Vote: I like it +14 Vote: I do not like it

Hey neal, why do you define your lambdas as "auto&&" instead of "auto"? Is there any difference in terms of the generated code?

  • »
    »
    13 days ago, # ^ |
    Rev. 2   Vote: I like it +20 Vote: I do not like it

    That's a good question. I don't have any particularly good reason, it's just how I'm used to doing it.

    If someone understands the tradeoffs between the two versions better, I'd be curious to hear it. For example does one version result in less overhead than the other when used as a function argument?

    • »
      »
      »
      13 days ago, # ^ |
        Vote: I like it +83 Vote: I do not like it

      dont know about overhead but its way over my head

    • »
      »
      »
      13 days ago, # ^ |
      Rev. 2   Vote: I like it -33 Vote: I do not like it

I think you must already know this, but if you use "auto &a" in a loop then you can mutate the values of the container you're iterating over, whereas if you use plain "auto a", the original container doesn't change. I think everyone knows this, but if someone doesn't then this might be helpful. Hope this helps someone!

    • »
      »
      »
      13 days ago, # ^ |
      Rev. 2   Vote: I like it +39 Vote: I do not like it

With auto lclosure = <lambda-expr>, you are constructing the object in-place. (You could also write auto lclosure{<lambda-expr>}.) With auto&& rclosure, you are creating a temporary object whose lifetime is then extended through the rvalue-reference on the left-hand side. They are two distinct objects. The reference induces additional overhead when calling the closure object, since it only stores the temporary object's address. (This indirection is practically negligible, but nonetheless unnecessary.)

While rclosure's type is an rvalue-reference, an expression consisting only of the variable name rclosure is always an lvalue of the underlying type. Therefore, in a templated function with a signature like sort(Func func), the template parameter is deduced as the type of the lambda expression, not that of the reference. Since the "reference-ness" of a parameter depends on the declaration rather than the call site, a copy of the actual closure object is created, which can be rather costly (cf. the memcpy instruction in this example).

      The C++ Standard Library always expects functors to be light-weight objects. In case your lambda-expression has large captures and you can't capture them by reference, you can wrap the functor within a reference using std::ref.

      auto&& also has different purpose as a so-called forwarding reference, when used as a parameter in a generic lambda expression.

      • »
        »
        »
        »
        11 days ago, # ^ |
          Vote: I like it +10 Vote: I do not like it

        So, for practical purposes, the two will always be equivalent, other than the slight inefficiency in "auto&&"? It's strange that the compiler doesn't optimize that away.

        In your example, I tried replacing the signature of f2 to "void f2(F&& f)" and that gets rid of the memcpy as well. Is there any disadvantage to doing that compared to using std::ref?

        • »
          »
          »
          »
          »
          11 days ago, # ^ |
            Vote: I like it +8 Vote: I do not like it

A local variable reference like this will surely be optimised away, since it points to an object whose address is fixed.

          There's no difference between std::ref and using a forwarding reference – performance-wise at least. However, you are making a decision whether the reference-ness is part of the signature or whether that shall be declared at call-site. In competitive programming, you are usually both developer and user of such a function and can switch between the alternatives at any time. But as a library designer, you have to decide what amount of control you give to the user and how much competency you expect from them. Either way you can create dangling references.

TL;DR: std::ref is what you have to use with the Standard Library. Forwarding references are IMO more user-friendly. Finally, there is no need for either of them if your functors are light-weight objects: prefer capturing by reference.

»
13 days ago, # |
  Vote: I like it +13 Vote: I do not like it

Problem C can also be solved greedily. Let's add the first number to the answer, and then we have the choice of changing turns. From there onwards, say we have x consecutive 1's; the answer for that block will be x/3 (e.g. for x=5 we can have the first two ones killed by us, then one cost of a skip by our friend, and the last two 1's by us again, giving 5/3 = 1). Then again, after reaching a 0 or the end, we have the choice of changing turns. Easy implementation, as sketched below: 92833708.
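A minimal sketch of this greedy (not the linked submission; a single pass over runs of 1's):

#include <bits/stdc++.h>
using namespace std;

int main() {
    int t;
    cin >> t;
    while (t-- > 0) {
        int n;
        cin >> n;
        vector<int> a(n);
        for (auto &x : a) cin >> x;

        // Our friend must fight the very first boss.
        long long ans = a[0];

        // Each maximal run of x consecutive 1s after that costs x / 3.
        int run = 0;
        for (int i = 1; i <= n; i++) {
            if (i < n && a[i] == 1) {
                run++;
            } else {
                ans += run / 3;
                run = 0;
            }
        }

        cout << ans << '\n';
    }
}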

»
13 days ago, # |
Rev. 8   Vote: I like it +71 Vote: I do not like it

Here is a non-hashing solution for G.

For some number X, let's assign each position a value (0 is just a filler value).

array: 0 0 X 0 0 X 0 0 0 0 X 0 0 0 0 X 0 0 0
value: 1 1 1 0 0 0 1 1 1 1 1 1 1 1 1 1 0 0 0

We assign 1s to the places behind an X, except for the 3rd-last X; then the value at an index will be 0 when the segment from that index to the endpoint has 3 or 0 occurrences of X.

Example for array {1,2,1,1,2,3,2,3,3,2}


We just need to maintain the sum of all values over all numbers X and count the number of 0s for each right endpoint. This can be done using a sweep line and a segment tree that can handle range sum and range count of 0.

Code: 92802723

  • »
    »
    13 days ago, # ^ |
      Vote: I like it +15 Vote: I do not like it

Nice solution! It makes full use of the structure of the problem: the prefix add/subtract lets us avoid discussing cases, and the important point is the invariant that the minimum possible value is never less than 0, while what we want to count is exactly the 0s. That is exactly why a segment tree can maintain it. I just hope that I will never compete with you.... :D

When will I be as smart as you, errorgorn? orz....

»
13 days ago, # |
  Vote: I like it +5 Vote: I do not like it

I used Ternary XOR in G instead of the hash neal mentioned: 92824456. It seems they are essentially identical, just another fancy name.

»
13 days ago, # |
  Vote: I like it 0 Vote: I do not like it

Can anybody explain how the greedy solution to problem C works?

»
13 days ago, # |
  Vote: I like it 0 Vote: I do not like it

https://codeforces.com/contest/1418/submission/92829770 I got Wrong answer on test case 2. Can anybody help me find where the fault is?

  • »
    »
    13 days ago, # ^ |
      Vote: I like it +1 Vote: I do not like it
    See this

    Correct ans : 1 Your ans : 2

    Explanation of correct ans:
    F1 F2 F1 F2 F2
    (F1 = weak friend) (F2 = you (strong one -_- ) )

»
13 days ago, # |
  Vote: I like it 0 Vote: I do not like it

I tried a binary search approach in A, but I am not able to understand why it is giving such unrealistic outputs. Here is the code. Can anyone please help?

  • »
    »
    13 days ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

comp = x + ((x - 1) * tmp);

Here (x - 1) * tmp can exceed the range of long. You need to somehow lower your high at the start.

    • »
      »
      »
      13 days ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

Oh... yes... thanks. Anyway, do you have any idea how I can lower the high at the start?

      • »
        »
        »
        »
        13 days ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

There's a way around such conditions: just start your high pointer from 1 and multiply it by two until it satisfies the good condition. The code would be something like

while (!good(x)) x *= 2;

Btw I learned it from the EDU binary search section, best content for bin_search imo.

  • »
    »
    13 days ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    In case you are still stuck, here is my binary search solution 92820069

»
13 days ago, # |
Rev. 2   Vote: I like it +3 Vote: I do not like it

My approach for C-

for(i=0;i<n;i++)
{
    if(a[i]==1)
        ans++;
    if(a[i+1]==1&&a[i+2]==1)
        i+=2;
    else if(a[i+1]==1)
        i++;
}

I got it with a simple logic!!

  • »
    »
    13 days ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

Your code might be accessing array elements out of bounds, which could lead to unpredictable results. Is 'a' a vector or an array?

    • »
      »
      »
      13 days ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

The input array has size n+5, so a[i+1] and a[i+2] will be 0; that's not a problem with my logic.

»
13 days ago, # |
  Vote: I like it +9 Vote: I do not like it

I have a problem on D.

If we can only move one pile at a time, how would we solve this?

Yesterday I misunderstood the problem, and failed to get the answer.

My solution is to maintain two such data structures, supporting add/del/get_median/get_max/get_min/get_result. For each operation, when adding, maintain these two structures.

I think adjusting (maintaining) these for each operation will not cost too much, but I can't prove it.

  • »
    »
    13 days ago, # ^ |
      Vote: I like it +3 Vote: I do not like it

Exactly the same happened to me. I thought the problem was something else, and my solution for that problem was ternary search + Binary Lifting + Binary Indexed Tree, so much more complicated than the solution for the actual problem. LOL

    • »
      »
      »
      13 days ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      Can you explain it? I can't get the solution...

      • »
        »
        »
        »
        13 days ago, # ^ |
        Rev. 4   Vote: I like it +5 Vote: I do not like it

        Well, the problem I had understood was that you move the piles directly, so the cost of moving all piles to position $$$x$$$ is $$$\sum_{i=1}^n |p_i - x|$$$. So, here we do the same, partition the array in two disjoint subarrays. The optimal solution for a subarray is always to move all piles to the median pile (i.e.: if there are $$$x$$$ elements, the $$$\lfloor\frac x 2\rfloor$$$-th element), and the cost of that would be the sum of absolute values of the differences; that is a convex function, so we can apply ternary search in order to find the optimal position in which to partition the array in two.

        And now, in order to do the operations in an easier way, we read the queries and compress all coordinates, and now an update becomes: activate a position, or deactivate a position.

        Now, we need a Data Structure that allow us:

        • activate/deactivate a position

        • get the sum of activated positions in a range

        This Data Structure could be a Binary-Indexed Tree (we use two of them, one for keeping the sum of activated positions, and the other one for keeping the amount of activated positions, cause we need to find the middle element of activated positions). It also could be used a Segment Tree. To find the middle element in a range, we do an Implicit Binary Search over the Segment Tree of amount of activated positions, or Binary Lifting with the BIT, to do it in $$$O(\log n)$$$ time.

        So final time complexity is $$$O(Q\cdot\log_{1.5} (n+Q)\cdot \log (n+Q))$$$.

Also, I think there might be an easier solution for this problem. I was thinking of another idea with range updates.

        Some details:

        We keep an array A[] that contains all coordinates sorted.

We keep another array B[]; if B[i] == 1, it means position i is activated, and deactivated if B[i] == 0

We keep another array C[], where if position i is deactivated, then C[i] == 0, and if position i is activated, then C[i] == A[i]

And now, a query $$$(0, x)$$$ (erase a pile from coordinate $$$x$$$) is just:

        p := position such that A[p] == x
        B[p] := 0
        C[p] := 0
        

a query $$$(1,x)$$$ (add a pile to coordinate $$$x$$$) is just:

        p := position such that A[p] == x
        B[p] := 1
        C[p] := A[p]
        

        And now, to get the middle element in range $$$[l; r]$$$ we need to find the first position $$$m$$$ such that the sum of B[] in range $$$[l\dots m]$$$ is equal to the sum of B[] in range $$$[m+1\dots r]$$$, and this can be found with binary search.

        • »
          »
          »
          »
          »
          13 days ago, # ^ |
            Vote: I like it +3 Vote: I do not like it

          I think I get what you say.

It should work. I wonder whether there exists another easier solution...

          Anyway, Thanks a lot!

»
13 days ago, # |
  Vote: I like it +3 Vote: I do not like it

Why don't the inbuilt ceil and floor functions work here?

  • »
    »
    13 days ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

Maybe you forgot to typecast to float or double first:

int x = ceil(a/b);          // incorrect
int x = ceil((double)a/b);  // correct

»
13 days ago, # |
Rev. 2   Vote: I like it -29 Vote: I do not like it

C can also be solved with a linear iterative DP: a single for loop and some base cases.

Define dp[i] as the maximum number of 1's you can collect up to index i. Then the answer would be (total ones - max element among dp[i]).

Transitions are straightforward :

Suppose we make 1 move and are currently at index i. There are 2 cases :

Our friend made 2 moves previously: So dp[i] = dp[i-3] + a[i]

Our friend made 1 move previously : So dp[i] = dp[i-2] + a[i]

Suppose we make 2 moves and are currently at index i(This means we covered index (i-1) also). There are 2 cases:

Our friend made 2 moves previously: So dp[i] = dp[i-4] + a[i] + a[i-1] (We made 2 moves which are a[i]+a[i-1]).

Our friend made 1 move previously: So dp[i] = dp[i-3] + a[i] + a[i-1]

Thats it! Finally dp[i] is the maximum of all 4 of these cases.

Notice the bottleneck here is dp[i-4] so we have to think for base cases 0,1,2,3.

These base cases can be thought of similarly.

Code
»
13 days ago, # |
  Vote: I like it 0 Vote: I do not like it

Another quick proof for B:

By the property of prefix sums, we all know that position 1 contributes the most, then position 2, and so on; hence position n contributes the least. So the first unlocked position has to contribute the maximum value possible to all the later prefix sums, which means the first unlocked position has to hold the greatest unlocked element possible... you see where this is going =).

»
13 days ago, # |
  Vote: I like it 0 Vote: I do not like it

I have a doubt in problem D. The logic is clear to me, but the problem is in the implementation.

When I implement the solution using the C++ STL function set.find(), my code is accepted: https://codeforces.com/contest/1418/submission/92862051 .

But when I try to do the same with lower_bound(set.begin(), set.end(), key), I get a TLE on test case 12: https://codeforces.com/contest/1418/submission/92871326.

I don't get it; both of my submissions are exactly the same except for the lower_bound vs find() part in the delif function. Also, both functions are logarithmic, and the problem says the element being removed will definitely be in the set, so both solutions should be accepted. Can anyone help?

  • »
    »
    13 days ago, # ^ |
      Vote: I like it +9 Vote: I do not like it

    set<> doesn't have random-access iterators, so doing lower_bound(S.begin(), S.end(), x) will cost linear time. As an alternative you can use S.lower_bound(x), and that will work in logarithmic time.
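A tiny illustration:

#include <bits/stdc++.h>
using namespace std;

int main() {
    set<int> S = {1, 4, 9};
    // O(n): set iterators are bidirectional, so std::lower_bound has to
    // advance them one step at a time.
    auto slow = lower_bound(S.begin(), S.end(), 5);
    // O(log n): the member function descends the tree directly.
    auto fast = S.lower_bound(5);
    cout << *slow << ' ' << *fast << '\n';  // both print 9
}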

  • »
    »
    13 days ago, # ^ |
      Vote: I like it +1 Vote: I do not like it

    You should use s.lower_bound(key) instead.

    For details check this link.

»
13 days ago, # |
Rev. 2   Vote: I like it 0 Vote: I do not like it

Where can I find all the STL structures and functions which are useful for competitive coding? Like in today's contest, I didn't know about multiset, or the next and prev functions for set iterators. Do we learn them by reading others' code, or is there some specific source? In the solution of problem D, there is something I am not able to find anywhere. I can understand its meaning... but I can't understand the syntax, and I don't know the name of this syntax type.

auto &&add_position = [&](int p) {
        auto it = positions.insert(p).first;
 
        if (next(it) != positions.end())
            gaps.insert(*next(it) - p);
 
        if (it != positions.begin()) {
            gaps.insert(p - *prev(it));
 
            if (next(it) != positions.end())
                gaps.erase(gaps.find(*next(it) - *prev(it)));
        }
    };

It would be very helpful if someone explains the syntax or provide the source. Thanks

  • »
    »
    13 days ago, # ^ |
    Rev. 2   Vote: I like it 0 Vote: I do not like it

That is a lambda function. You can learn language stuff at C++ Reference. Maybe it's better to read a book like Deitel's, or just search YouTube tutorials.

»
13 days ago, # |
  Vote: I like it 0 Vote: I do not like it

I was complaining that the correction made to the original problem A was incorrect and that the original was correct. I was so confident at the time, but later on when the test case explanations came, I was like...... like............. I can't describe it. Sorry for the disturbance during the contest, and thanks for the really interesting problems!!

»
13 days ago, # |
  Vote: I like it 0 Vote: I do not like it

Please tell me what's wrong in my greedy approach for C.

(*> If it's my turn, I choose one element and if the next element is 1, I choose it too, otherwise not.

(*> If it's my friend's turn, he chooses one element (if that is 1, increment ans by 1) and if the next element is 0, he chooses it too, otherwise not.

Code: V Clean. Pls read.

Thanks.

  • »
    »
    13 days ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

You can try this test case: 1 5 0 0 0 1 1. The answer should be 0, but I guess your logic will give 1.

  • »
    »
    13 days ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

Clearly, this approach will fail: you don't know what is coming next in the array, and a decision made on the current situation may turn out to be a bad one later. Try to make a few test cases and you will get it. This is a typical DP question where you have two choices for each state. You may want to look at the AtCoder educational DP contest; the first two problems are similar to this one.

»
13 days ago, # |
  Vote: I like it 0 Vote: I do not like it

Please check my code for the query loop. Why is it giving TLE?

Submission: https://codeforces.com/contest/1418/submission/92876461

»
13 days ago, # |
  Vote: I like it 0 Vote: I do not like it

What is the correct solution for problem C on 0 0 0 1 1, and why?

The problem statement is rather unclear on whether you are allowed to kill zero bosses on your turn.

  • »
    »
    13 days ago, # ^ |
    Rev. 2   Vote: I like it 0 Vote: I do not like it

    The solution to your test case is 0. Because your friend doesn't need to kill any hard boss at all. First he can kill the first boss and the array becomes

    [ 0 0 1 1]
    

    its our turn now, we will kill only one and then array becomes

    [ 0 1 1]
    

    friends turn now, he will kill only one and then array becomes

    [1 1]
    

    now our turn, we will kill both the bosses!

What you can observe is that the only time your friend has to kill a hard boss is when there are 3 or more continuous occurrences of hard bosses. Because we can only assist by killing 2 bosses, our friend must kill one.

For example, if at some point the array is [1,1,1], then your friend must kill at least one, and if the array is [1,1,1,1,1,1], then your friend must kill at least two. Actually, it's the length of the continuous sequence of ones divided by 3.

So for each continuous run of ones, divide its length by 3 and add it to the answer.

And at last, add one to that final answer if the first element in the array was '1', because the turn starts with our friend and he can't avoid it!

92879674. The code is really short and simple; hope it will clear your doubts!

    • »
      »
      »
      13 days ago, # ^ |
        Vote: I like it +3 Vote: I do not like it

      Because your friend doesn't need to kill any boss at all. First, he can kill the first boss

You said he will not kill, and then he killed the first boss (I think you meant he doesn't kill hard bosses).

      • »
        »
        »
        »
        13 days ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

Oh yeah.. since we only care about hard bosses, I thought it was clear from context. Sorry, I should have made it clearer!

    • »
      »
      »
      13 days ago, # ^ |
        Vote: I like it +6 Vote: I do not like it

Thanks. The correct strategy is:
0 0 0 1 1
F Y F Y Y
with F = friend, Y = you. So it means that there is no naive greedy solution.

»
13 days ago, # |
Rev. 4   Vote: I like it 0 Vote: I do not like it

Can anyone tell me why my code was wrong for problem C?

My DP approach was this: state i means the position my friend is at. I don't count what I did with bosses in the code; every time on my turn I kill 2 bosses, and I only count what my friend did. I tried to find out, from the initial moment, whether he should kill 1 boss or 2 bosses for the minimum points. Was this approach wrong? Then why?

UPDATE: Found the mistake and got AC. I'm glad to say that it is my first accepted DP solution with my own idea. Thanks to the authors for this type of educational problem.

»
13 days ago, # |
  Vote: I like it 0 Vote: I do not like it

Can someone explain the solution of D in an easier way?

»
13 days ago, # |
  Vote: I like it 0 Vote: I do not like it
  • »
    »
    13 days ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

It's because of this part: memset(dp,-1,sizeof(dp))

Every time you are filling the whole dp array of size 200001 with -1. Just fill the dp array from 0 to n with -1.

»
13 days ago, # |
Rev. 2   Vote: I like it 0 Vote: I do not like it

What's the meaning of '&&' in the solution of problem D?

Is the lambda expression an rvalue?

»
13 days ago, # |
  Vote: I like it 0 Vote: I do not like it

Can anybody help me out with C? What's wrong with my code? https://codeforces.com/contest/1418/submission/92891613

  • »
    »
    13 days ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

In the first if(!f) you should modify this line: if(i+1<n&&a[i+1]==0)i++;

Instead it should look like this: if(i+2<n&&a[i+1]==0&&a[i+2]==1)i++;

That's because your code outputs 1 for the test case 0 0 0 1 1 (the correct output for this particular case is 0). The modification is that after you kill one easy mob, you should only kill a 2nd mob if that mob is easy and the mob after it is hard.

    • »
      »
      »
      12 days ago, # ^ |
        Vote: I like it +1 Vote: I do not like it

I am really very thankful to you, brother. I am happy to see that my logic was not totally wrong :) Thank you once again.

»
13 days ago, # |
  Vote: I like it 0 Vote: I do not like it

Btw, does anybody know the proof of the equality floor((a+b-1)/b) == ceil(a/b)?

  • »
    »
    13 days ago, # ^ |
    Rev. 2   Vote: I like it 0 Vote: I do not like it

Let a = k*b + r with 0 <= r < b.

Then floor((a+b-1)/b) = floor((k*b + b + r - 1)/b) = k + floor((b+r-1)/b). If r = 0, the second part is 0 and we get k; otherwise the second part is 1 and we get k + 1. In both cases this equals ceil(a/b).

»
13 days ago, # |
  Vote: I like it 0 Vote: I do not like it

neal Can you please explain E? How is the chance of every big monster doing damage 1 - (a/big)?

  • »
    »
    13 days ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    It would be nice if anybody could explain :)

  • »
    »
    12 days ago, # ^ |
      Vote: I like it +3 Vote: I do not like it

Because the first 'a' big monsters don't do any damage.

  • »
    »
    12 days ago, # ^ |
      Vote: I like it +3 Vote: I do not like it

    The first $$$a$$$ big monsters don't do any damage. The probability that any particular big monster is one of the first $$$a$$$ big monsters is $$$\displaystyle \frac{a}{big}$$$.

»
13 days ago, # |
  Vote: I like it +8 Vote: I do not like it

I have a segment tree solution for problem G:

We can enumerate the right border from larger to smaller; suppose that the right border is $$$r$$$.

Now we need to know the number of left borders $$$l$$$ such that, for each color, there are exactly zero or three occurrences in the range $$$[l,r]$$$.

We can do the following operations:

For each color $$$c$$$, store all the indices $$$i$$$ with $$$a_i=c$$$ and $$$i\leq r$$$ in one vector $$$v_c$$$, in increasing order.

Now for each color $$$c$$$ we pick the last element, the third biggest element, and the fourth biggest element from $$$v_c$$$. Suppose they are $$$r$$$, $$$l$$$ and $$$ll$$$ (in particular, if no such element exists, the corresponding $$$l$$$, $$$r$$$ or $$$ll$$$ is $$$0$$$).

We increase all elements in the ranges $$$(ll,l]$$$ and $$$(r,n]$$$ by one.

The problem turns into: compute how many elements in the range $$$[1,r]$$$ are exactly equal to $$$n$$$!

Segment tree can help!

For each node , we store two informations:

  1. biggest element
  2. the number of biggest elements

Then simple push_down can work!!!

Total: $$$O(n \log n)$$$

Code

»
13 days ago, # |
  Vote: I like it 0 Vote: I do not like it

Why is it (k*y+k-1)/(x-1)? Why does it not work with (ky+k)/(x-1)?

  • »
    »
    13 days ago, # ^ |
    Rev. 3   Vote: I like it +6 Vote: I do not like it

Because n operations of the 1st kind give 1+n*(x-1) sticks, not n*(x-1) sticks. So you can subtract that extra 1 from the total number of sticks required (which is k*y+k).

»
13 days ago, # |
  Vote: I like it 0 Vote: I do not like it

Can someone tell me my fault? This code is for problem C: https://codeforces.com/contest/1418/submission/92904064

»
13 days ago, # |
  Vote: I like it +5 Vote: I do not like it

Wow, E is so beautiful! Although the EDU rounds are often difficult and full of naughty and lovely hackers, the problems always make me marvel at them. Thanks for neal's solution and the nice contest!

»
13 days ago, # |
Rev. 4   Vote: I like it 0 Vote: I do not like it

Why can't I use the ceil function?

import math
def Ans():
    N=int(input())
    for i in range(N):
        ans=0
        A=input().split()
        a=int(A[0])
        b=int(A[1])
        c=int(A[2])
        ans+=math.ceil((c+b*c-1)//(a-1))
        print(ans+c)
Ans()
»
13 days ago, # |
  Vote: I like it 0 Vote: I do not like it

Why do we add k before cout in A?

  • »
    »
    12 days ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

Because we should do k trades to transform k*y sticks into k coals, and we should print the total number of trades done.

»
12 days ago, # |
  Vote: I like it 0 Vote: I do not like it

Can someone explain to me the solution of G? How can we use a polynomial string hash in this problem?

»
12 days ago, # |
Rev. 2   Vote: I like it 0 Vote: I do not like it

Can anyone help me identify what's wrong with my approach to C?

// dp[i][0] denotes min coins spent if I kill the last boss
// dp[i][1] denotes min coins spent if my friend kills the last boss

int ans(int i, int j) {
    if (i < 0) return 1e9 + 7;
    if (i == 0) {
        return a[i]; // since my friend's turn is the first one
    }
    if (i == 1) {
        if (j == 0)
            return min(ans(0, 0), ans(0, 1));
        else
            return a[i] + min(ans(0, 0), ans(0, 1));
    }
    int& res = dp[i][j];
    if (res != -1) return res;
    if (j == 0)
        res = min(ans(i - 1, 1), ans(i - 2, 1));
    else if (j == 1)
        res = min(ans(i - 1, 0) + a[i], ans(i - 1, 0) + a[i - 1] + a[i]);
    return res;
}

Here's my submission: 92930622

»
12 days ago, # |
  Vote: I like it 0 Vote: I do not like it

For problem A I used Python. Why can't I use the built-in ceil() function? When I do that, I get the wrong answer on the test cases 10^9 10^9 10^9 (output 2000000002) and 2 10^9 10^9 (output 1000000002000000000). In a word, why does it give an error when I use ceil(k*y+k-1) instead of the ceil_div function given in the editorial?

»
12 days ago, # |
  Vote: I like it 0 Vote: I do not like it

Can you explain why, in problem G, the hashes collide with probability at most $$$1/2^{63}$$$?

»
11 days ago, # |
  Vote: I like it 0 Vote: I do not like it

It's cool how G is solvable with hashing! I see a lot of problems with some particular property that makes them hashable such that the hash provides an answer for a subarray. However, I don't really get how the hash accounts for the result of every subarray ending at some position. Since the hash is calculated over prefixes in a linear manner, it should only count subarrays beginning from the 0th index.

Can anyone explain the general idea here? Suppose I have my hash value and the prefix sums over it in a 'pre' array. Then how does it contribute to the answer?

»
11 days ago, # |
  Vote: I like it 0 Vote: I do not like it

neal In your solution of problem G, you are multiplying a 64-bit integer with freq[i]. Shouldn't it cause overflow?

  • »
    »
    11 days ago, # ^ |
    Rev. 2   Vote: I like it +3 Vote: I do not like it

    Yes, it will overflow. However, this is perfectly intended behavior. C++ unsigned types "wrap around" in their operations — that is, all operations are performed modulo (MAX + 1). In this case, because the numbers are of type uint64_t (unsigned long long), all operations are performed modulo 2^64, so we automatically have a hash mod 2^64.
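A tiny demonstration of that wrap-around:

#include <cstdint>
#include <cstdio>

int main() {
    uint64_t x = UINT64_MAX;  // 2^64 - 1
    x += 2;                   // wraps around modulo 2^64
    printf("%llu\n", (unsigned long long) x);  // prints 1
}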

»
10 days ago, # |
  Vote: I like it 0 Vote: I do not like it

https://codeforces.com/contest/1418/submission/93095427

Why is my greedy approach wrong for problem C? Please give me a case that fails the solution. Thank you.

»
8 days ago, # |
  Vote: I like it +8 Vote: I do not like it

Can someone prove that the hash collision probability for G is $$$1/2^{63}$$$?

  • »
    »
    3 days ago, # ^ |
      Vote: I like it +10 Vote: I do not like it

Let's denote by $$$u_i$$$ the $$$freq$$$ array corresponding to $$$a[1:i]$$$, and denote by $$$r$$$ the random array of integers we have constructed.

Then we will have a collision iff $$$u_i \cdot r = u_j \cdot r, (i\ne j)$$$ $$$\implies (u_i-u_j) \cdot r = 0$$$.

Let's denote the maximum value for a coordinate of $$$r$$$ by $$$M$$$.

Now, viewed in $$$n$$$-dimensional space, we have $$$M^n$$$ different vectors. Consider also that each vector $$$w_k = (u_i - u_j)$$$ defines a plane, and every vector in that plane will be perpendicular to $$$w_k$$$; each such plane will contain roughly $$$M^{n-1}$$$ vectors. We can have at most (roughly) $$$n^2$$$ such vectors $$$w_k$$$. So combining all this, we can have in total $$$(n^2)(M^{n-1})$$$ different vectors $$$r$$$ satisfying $$$(u_i-u_j) \cdot r = 0$$$ for some $$$i, j$$$.

So our probability of collision will be at most (as we have overcounted values in the union of planes) $$$p = n^2M^{n-1} / M^{n} = n^2 / M$$$

    • »
      »
      »
      3 days ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      Does this claim to show that the probability of collision per pair is $$$\frac{1}{2^{64}}$$$? That's not quite correct unfortunately since we're working mod $$$2^{64}$$$ and we have to consider the range of values of $$$u_i$$$ (see my comment below).

      • »
        »
        »
        »
        2 days ago, # ^ |
        Rev. 2   Vote: I like it 0 Vote: I do not like it

I forgot to take into account that we are working modulo $$$2^{64}$$$. Btw I didn't try to prove the collision probability to be $$$\frac{1}{2^{64}}$$$; I just wanted some intuition that as we increase the range of possible values, the probability of collision reduces to some acceptable limit.

Is there any fault in my reasoning that causes such a large difference between my answer and yours? Because if I ignore the effects of the modulo, then my collision probability would be smaller than yours.

  • »
    »
    3 days ago, # ^ |
    Rev. 3   Vote: I like it +1 Vote: I do not like it

    The main idea is that if two signatures are different, the difference in their hashes is a sum of a linear weighting of some of the random values, where the weights are either 1 / -1 or 2 / -2.

    If any of the weights are 1 or -1, then consider isolating that particular random value. If we decide all of the other random values except for that one, it's clear that there will be exactly one choice out of $$$2^{64}$$$ that gives a final result of 0 mod $$$2^{64}$$$.

    If all of the weights are 2 or -2, then we just divide by 2 and are back in the case above, except we're now working mod $$$2^{63}$$$.

    • »
      »
      »
      2 days ago, # ^ |
      Rev. 4   Vote: I like it 0 Vote: I do not like it

I think my collision probability (see my first comment) is much higher than yours because you have only calculated the collision probability for a single pair of signatures. We have to calculate the probability that, with a given random array, there will be at least one collision. What do you think?

      • »
        »
        »
        »
        47 hours ago, # ^ |
          Vote: I like it +1 Vote: I do not like it

        Yes, after getting the probability for a single pair of $$$\displaystyle \frac{1}{2^{63}}$$$, it's easy to get an upper bound for the overall probability of $$$\displaystyle \frac{\binom{n}{2}}{2^{63}}$$$, which is about 1.4e-8 for $$$n = 500,000$$$.

»
3 days ago, # |
  Vote: I like it 0 Vote: I do not like it

My code for Problem C using recursion:

int helper(vector<int>& arr, int i, int turn, vector<vector<int>>& dp) {
    int n = arr.size();
    if (i >= n) {
        return 0;
    }

    if (dp[i][turn] != -1) {
        return dp[i][turn];
    }


// friend's turn is denoted by 1:
if (turn) {
    int kill1 , kill2 ;
    kill2 = kill1 = INT_MAX ;

    int cost1 = arr[i] ;
    if (i + 1 < n ) {
       int cost2 = arr[i] + arr[i + 1] ;
       kill2 = cost2 + helper(arr , i + 2 , turn ^ 1 , dp) ;
    }

    kill1 = cost1 + helper(arr , i + 1 , turn ^ 1 , dp) ;

    return dp[i][turn] = min(kill2 , kill1) ;
} else {

    int kill1 , kill2 ;
    kill2 = kill1 = INT_MAX ;

    if (i + 1 < n ) {
       kill2 = helper(arr , i + 2 , turn ^ 1 , dp) ;
    }
    kill1 = helper(arr , i + 1 , turn ^ 1 , dp) ;

    return dp[i][turn] = min(kill1 , kill2) ;
}
return 0 ;

}

void mortalCombat() {
    int n;
    cin >> n;

vector<int> arr(n) ;
for (int i = 0 ; i < n ; i++) {
    cin >> arr[i] ;
}
vector<vector<int> > dp(n + 1 , vector<int>(2 , -1) ) ;
cout << helper(arr , 0 , 1 , dp);

}