
pabloskimg's blog

By pabloskimg, history, 3 years ago, In English

»
3 years ago, # |

A.

tag
hint 1
hint 2
hint 3
hint 4
hint 5
hint 6
hint 7
codes
  • »
    »
    3 years ago, # ^ |

    I also tried the greedy from hint 7 without a proof, and it runs significantly faster than without it. It would be interesting either to see it fail at some point (outside the bounds of this problem, of course), or to see a proof that it is correct.

    Some intuition about the greedy:

    Greedy intuition
»
3 years ago, # |

How to solve M?

  • »
    »
    3 years ago, # ^ |

    Maintain the suffix array of the reversed string (prefix array?) online with hashes and binary search (in a set with custom comparator). Now just count the total number of different substrings and subtract the number of substrings that appear exactly once. Both of these things are easy to calculate via the LCP of adjacent suffixes in the suffix array, so you can easily calculate the difference that each update causes in the answer.
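    The hash-plus-binary-search comparison described above can be sketched as follows (a minimal offline version, not the online solution itself; all names are ours, and the hash is assumed collision-free): two suffixes are compared in $$$O(\log n)$$$ by binary searching their LCP with polynomial hashes, and the number of distinct substrings then follows from LCPs of adjacent suffixes in sorted order.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Compare suffixes via polynomial hashing: binary search the LCP,
// then compare the first differing characters.
struct SuffixHasher {
    static constexpr uint64_t MOD = (1ULL << 61) - 1, B = 131;
    string s; vector<uint64_t> h, pw;
    static uint64_t mulmod(uint64_t a, uint64_t b) {
        __uint128_t c = (__uint128_t)a * b;
        uint64_t r = (uint64_t)(c >> 61) + (uint64_t)(c & MOD);
        return r >= MOD ? r - MOD : r;
    }
    SuffixHasher(const string& str) : s(str), h(s.size() + 1), pw(s.size() + 1) {
        pw[0] = 1; h[0] = 0;
        for (size_t i = 0; i < s.size(); i++) {
            h[i + 1] = (mulmod(h[i], B) + (uint64_t)s[i]) % MOD;
            pw[i + 1] = mulmod(pw[i], B);
        }
    }
    uint64_t get(int l, int r) const { // hash of s[l..r)
        uint64_t x = h[r], y = mulmod(h[l], pw[r - l]);
        return x >= y ? x - y : x + MOD - y;
    }
    int lcp(int i, int j) const { // LCP of suffixes starting at i and j
        int lo = 0, hi = (int)s.size() - max(i, j);
        while (lo < hi) {
            int mid = (lo + hi + 1) / 2;
            if (get(i, i + mid) == get(j, j + mid)) lo = mid; else hi = mid - 1;
        }
        return lo;
    }
    bool less(int i, int j) const { // is suffix i < suffix j?
        int l = lcp(i, j);
        if (i + l == (int)s.size()) return j + l < (int)s.size();
        if (j + l == (int)s.size()) return false;
        return s[i + l] < s[j + l];
    }
};

// Distinct substrings = sum over sorted suffixes of (length - LCP with previous).
long long distinctSubstrings(const string& s) {
    SuffixHasher H(s);
    vector<int> sa(s.size());
    iota(sa.begin(), sa.end(), 0);
    sort(sa.begin(), sa.end(), [&](int i, int j) { return H.less(i, j); });
    long long res = 0;
    for (size_t k = 0; k < sa.size(); k++) {
        int l = k ? H.lcp(sa[k - 1], sa[k]) : 0;
        res += (long long)(s.size() - sa[k]) - l;
    }
    return res;
}
```

    In the online solution the sorted order lives in a `std::set` with this comparator, so each insertion is a binary search rather than a full sort.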

    • »
      »
      »
      3 years ago, # ^ |

      Alternatively, you can create a trie and build the suffix (prefix?) array of that trie using the same deterministic doubling suffix array algorithm, with power-of-two ancestors on the trie playing the role of suffix offsets.

      • »
        »
        »
        »
        3 years ago, # ^ |

        This is my code using this idea, i.e. no hashing.

        I tried to add a few tests to break hash-based solutions, but of course it is very hard to rule them all out at an ICPC contest.

  • »
    »
    3 years ago, # ^ |

    Edit: Wrong solution, sorry. I got AC, but the time complexity is actually bad because of the rollbacks. Thanks to brunomont for the clarification.

    I solved it using a suffix automaton: here is my code. To solve it with this structure I had to figure out two things:

    1. How to remove the last inserted character from the suffix automaton
    2. How to keep the answer updated after each operation (insert and remove)

    For the first one, we can use stacks (I used five) to keep enough information to recreate the changes made when inserting the last character, and thus undo them.

    For the second one, keep in mind that each distinct substring is contained in exactly one state of the suffix automaton, and that the number of distinct substrings contained in a state $$$u$$$ $$$(u \not = 0)$$$ is equal to $$$len(u) - len(link(u))$$$.

    Every time we insert a character, a new state is created; call it $$$u$$$. If at the end of the insertion $$$link(u) \not = 0$$$, the substrings contained in the state $$$link(u)$$$ now occur more than once in the string, so we can add $$$len(link(u)) - len(link(link(u)))$$$ to the answer. However, we only have to add that value the first time $$$link(u)$$$ is assigned as a suffix link. To do this (and also to undo changes) we can add to each state a counter that stores how many times the state has been assigned as a suffix link when inserting a character (this counter must also be copied when creating a clone state in the insert algorithm).

    About suffix automaton: https://cp-algorithms.com/string/suffix-automaton.html
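    As a concrete illustration of the $$$len(u) - len(link(u))$$$ fact used above, here is the standard construction from the linked article, without the stacks/rollback machinery (a sketch; the struct and helper names are ours):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Minimal suffix automaton: each state u != root contributes
// len(u) - len(link(u)) distinct substrings.
struct SuffixAutomaton {
    struct State { int len = 0, link = -1; map<char, int> next; };
    vector<State> st;
    int last;
    SuffixAutomaton() { st.emplace_back(); last = 0; }
    void add(char c) {
        int cur = (int)st.size(); st.emplace_back();
        st[cur].len = st[last].len + 1;
        int p = last;
        while (p != -1 && !st[p].next.count(c)) { st[p].next[c] = cur; p = st[p].link; }
        if (p == -1) st[cur].link = 0;
        else {
            int q = st[p].next[c];
            if (st[p].len + 1 == st[q].len) st[cur].link = q;
            else {
                // Split q: the clone copies q's transitions and link.
                int clone = (int)st.size();
                State qq = st[q]; st.push_back(qq);
                st[clone].len = st[p].len + 1;
                while (p != -1 && st[p].next[c] == q) { st[p].next[c] = clone; p = st[p].link; }
                st[q].link = st[cur].link = clone;
            }
        }
        last = cur;
    }
    long long distinct() const {
        long long res = 0;
        for (size_t u = 1; u < st.size(); u++) res += st[u].len - st[st[u].link].len;
        return res;
    }
};

long long distinctViaSAM(const string& s) {
    SuffixAutomaton sam;
    for (char c : s) sam.add(c);
    return sam.distinct();
}
```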

    • »
      »
      »
      3 years ago, # ^ |

      Are you sure about the complexity of your solution? From what I understand, the suffix automaton has only an amortized $$$O(1)$$$ cost for adding a character, and the worst case is linear. So I don't see how this would not be quadratic when doing rollbacks.

      I tried your code on the following instance: $$$AAAA....AB-B-B-B-...$$$, and it took more than one minute, which would indicate the quadratic behavior.

      • »
        »
        »
        »
        3 years ago, # ^ |

        I have almost no experience with suffix automata, but without understanding much about the internals, I think you can make the solution work by using persistent arrays to store all of the SA's information. That way you can go back to the previous version by restoring the old root of each array in $$$O(1)$$$.

        There is an extra log factor both for memory and time, but that is probably ok.
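        A persistent array of the kind suggested here can be sketched as follows (path copying over a balanced tree of indices; the interface and names are ours): each update copies only an $$$O(\log n)$$$ path, and keeping an old root gives an $$$O(1)$$$ rollback to any earlier version.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Persistent array: get/set in O(log n), rollback in O(1) by
// restoring a previously saved root. Old versions stay intact
// because set() copies only the nodes along one root-to-leaf path.
// (No freeing; fine for a contest-style sketch.)
struct PersistentArray {
    struct Node { int val = 0; Node *l = nullptr, *r = nullptr; };
    int n; Node* root;
    PersistentArray(int n) : n(n), root(build(0, n)) {}
    Node* build(int lo, int hi) {
        Node* t = new Node();
        if (hi - lo > 1) { int mid = (lo + hi) / 2; t->l = build(lo, mid); t->r = build(mid, hi); }
        return t;
    }
    int get(int i) const { return get(root, 0, n, i); }
    int get(Node* t, int lo, int hi, int i) const {
        if (hi - lo == 1) return t->val;
        int mid = (lo + hi) / 2;
        return i < mid ? get(t->l, lo, mid, i) : get(t->r, mid, hi, i);
    }
    // Returns the root of the new version; the old root stays valid.
    Node* set(int i, int v) { return root = set(root, 0, n, i, v); }
    Node* set(Node* t, int lo, int hi, int i, int v) {
        Node* c = new Node(*t); // copy only the O(log n) path
        if (hi - lo == 1) { c->val = v; return c; }
        int mid = (lo + hi) / 2;
        if (i < mid) c->l = set(t->l, lo, mid, i, v);
        else c->r = set(t->r, mid, hi, i, v);
        return c;
    }
    void rollback(Node* oldRoot) { root = oldRoot; } // O(1)
};
```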

        • »
          »
          »
          »
          »
          3 years ago, # ^ |

          I'm not entirely sure I understand how that would work, but I think this instance would break it:

          $$$AAAAAA...A$$$

          $$$B--B--B--B--...$$$

          The idea is that each new $$$B$$$ would take time linear in the current string size, and I don't think memoization/persistence would help in this case.

      • »
        »
        »
        »
        3 years ago, # ^ |

        You are totally right. After thinking a lot about how to do the rollbacks, I lost track of the amortized complexity, and in the end I coded something that seemed to work. I submitted it without exhaustive testing, and the AC wrongly convinced me. Now I feel a little bad about getting the AC. Thanks for the clarification!

    • »
      »
      »
      3 years ago, # ^ |

      Is it possible that the time spent by your solution on removals is $$$O(n)$$$ in the worst case? The changes done by adding one character can be up to $$$O(n)$$$, and they all need to be reversed.

      If this is the case, then on an adversarial test your solution is forced to add and remove a character that triggers this behaviour over and over, which can take up to $$$O(N \cdot Q)$$$.

»
3 years ago, # |

How to solve G?

  • »
    »
    3 years ago, # ^ |

    I liked the problem, congratulations to the author. Here is the solution.

    Step 0: The probability that two cards are equal is way below the required precision, hence we may assume that the $$$2n$$$ cards are distinct.

    Step 1: Alice should place her cards in increasing order on the slots.

    Let $$$x_i$$$ be the number on the card Alice puts on the $$$i$$$-th slot. Assume that Bob's cards are fixed. Let $$$C(x_1, x_2,\dots, x_n)$$$ be the amount gained by Alice if Bob plays optimally. With a swapping argument (encoded in the following lemma) one can show that $$$C(x_1,x_2,\dots, x_n)$$$ is maximized when the $$$x_i$$$ are increasing.

    Lemma: If $$$i<j$$$ and $$$x_i>x_j$$$, then

    $$$C(x_1, x_2,\dots, x_i, \dots, x_j, \dots, x_n) \le C(x_1, x_2, \dots, x_{i-1}, x_j, x_{i+1}, \dots, x_{j-1}, x_i, x_{j+1}, \dots, x_n) \,.$$$

    proof. Let $$$y_i$$$ be the card put by Bob on the $$$i$$$-th slot in an optimal solution when Alice's cards are $$$x_1, x_2, \dots, x_{i-1}, x_j, x_{i+1}, \dots, x_{j-1}, x_i, x_{j+1}, \dots, x_n$$$. Let us split into $$$3$$$ cases:

    • If the $$$i$$$-th slot and the $$$j$$$-th slot go to the same person, Bob can achieve the same gain also when Alice puts the cards as $$$x_1, x_2, \dots, x_n$$$ by swapping $$$y_i$$$ and $$$y_j$$$.
    • If the $$$i$$$-th slot is won by Bob and the $$$j$$$-th slot is won by Alice, Bob can increase his gain when Alice puts the cards as $$$x_1, x_2, \dots, x_n$$$ by swapping $$$y_i$$$ and $$$y_j$$$.
    • If the $$$i$$$-th slot is won by Alice and the $$$j$$$-th slot is won by Bob, Bob gains the same (or more) when Alice puts the cards as $$$x_1, x_2,\dots, x_n$$$ by not changing anything.

    Step 2: Bob should place his cards greedily starting from the largest one.

    Let $$$x_i$$$ be the card placed by Alice on the $$$i$$$-th slot (thanks to the previous step we can assume that $$$x_i<x_{i+1}$$$ for each $$$i$$$). Let $$$y_1<y_2<\cdots<y_n$$$ be Bob's cards.

    Lemma: The optimal way for Bob to put his cards is to repeat the following move until possible (and then place arbitrarily the remaining cards):

    • Put the largest card he still has in his hand on the highest slot he can manage to win with this method.

    proof. It is sufficient to show that there is an optimal solution such that the largest card is placed as described. Let $$$j$$$ be the largest index such that $$$x_j < y_n$$$. Assume that an optimal solution for Bob is to place $$$y_{\sigma(i)}$$$ on the $$$i$$$-th slot ($$$\sigma$$$ is a permutation). One can prove (considering a small number of cases) that Bob's gain does not decrease if he swaps the positions of $$$y_n$$$ and $$$y_{\sigma(j)}$$$; thus we can assume that $$$y_n$$$ is in the correct position $$$j$$$, as desired.

    Step 3: All the possible card orderings are equally likely.

    Let $$$c_1<c_2<\dots <c_{2n}$$$ be the values on the $$$2n$$$ cards. Let $$$s_i$$$ be $$$0$$$ if $$$c_i$$$ belongs to Alice and $$$1$$$ if it belongs to Bob. It is clear that Alice's gain depends only on the string $$$s_i$$$ (since a card's value is used only to compare it to another card).

    It turns out that all $$$\binom{2n}{n}$$$ possible binary arrays $$$s_1,s_2,\dots, s_{2n}$$$ appear with equal probability. This is very intuitive and quite boring to prove; the idea is that, once we require the cards to be distinct, the random sampling described in the statement is equivalent to the following:

    • First we choose $$$2n$$$ distinct values out of $$$[1,10^{18}]$$$ (all $$$\binom{10^{18}}{2n}$$$ choices being equally likely).
    • Then we assign $$$n$$$ values out of these $$$2n$$$ to Alice and the remaining to Bob (all $$$\binom{2n}{n}$$$ choices being equally likely).

    Step 4: Solve the problem given a card ordering.

    Assume that we are given the binary string $$$s_1,s_2,\dots, s_{2n}$$$ with $$$n$$$ zeroes and $$$n$$$ ones. What is the answer to the problem? Thanks to what we observed in Steps 1 and 2, we can perform the following algorithm.

    We keep the amount gained by Alice in a variable $$$res$$$, initially $$$0$$$. We also keep a counter $$$q$$$ of how many of Bob's cards we have encountered but not yet used.

    We iterate from $$$i=2n$$$ down to $$$1$$$.

    • If $$$s_i=1$$$, we increment $$$q$$$.
    • If $$$s_i=0$$$ and $$$q>0$$$, we decrement $$$q$$$.
    • If $$$s_i=0$$$ and $$$q=0$$$, we add $$$p$$$ to $$$res$$$, where $$$p$$$ is such that $$$s_i$$$ is the $$$p$$$-th zero in the string $$$s_1,s_2,\dots, s_{2n}$$$.

    At the end of the algorithm, the variable $$$res$$$ contains the amount gained by Alice.
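    The scan above can be written like this (a sketch; the function name is ours, and $$$s$$$ is 0-indexed while $$$p$$$ keeps the 1-based rank of the zero among all zeroes):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Step 4 scan: s[i] = 1 if the i-th smallest card belongs to Bob,
// 0 if it belongs to Alice. Going from the largest card down,
// q counts Bob cards seen but not yet matched against an Alice card;
// an unmatched Alice card wins its slot p and contributes p.
long long aliceGain(const vector<int>& s) {
    int zeros = (int)count(s.begin(), s.end(), 0);
    long long res = 0;
    int q = 0, p = zeros; // p = rank of the current zero among all zeroes
    for (int i = (int)s.size() - 1; i >= 0; i--) {
        if (s[i] == 1) q++;
        else {
            if (q > 0) q--;   // Bob beats this card of Alice
            else res += p;    // Alice wins slot p
            p--;
        }
    }
    return res;
}
```

    For example, when Alice holds the two largest of four cards ($$$s = 1,1,0,0$$$ read from smallest to largest would instead be $$$0,0,1,1$$$ for Bob holding the largest), she wins both slots.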

    Step 5: Solve the problem for all possible card orderings via dynamic programming.

    It is well-known that, if we want to compute the expected value of a certain algorithm on a family of inputs, dynamic programming is often the solution.

    Let $$$din[z][o][q]$$$ be the sum of the amounts gained by Alice over all binary strings $$$s_1,s_2,\dots, s_{z+o}$$$ with $$$z$$$ zeroes and $$$o$$$ ones such that, if we perform the algorithm described in step 4, at the end the number of unused Bob's cards is $$$q$$$. It is easy to check that the value of $$$din[z][o][q]$$$ depends only on the values of $$$din[z-1][o][q+1]$$$, $$$din[z][o-1][q-1]$$$ (and possibly $$$din[z-1][o][0]$$$ if $$$q=0$$$).

    The answer to the problem is

    $$$\frac{\sum_{q=0}^n din[n][n][q]}{\binom{2n}{n}}\,.$$$

    The complexity of this solution is $$$O(n^3)$$$.
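    For small $$$n$$$, Steps 3 to 5 can be sanity-checked by brute force: enumerate all $$$\binom{2n}{n}$$$ equally likely ownership strings, run the Step 4 scan on each, and average (a sketch; the function name is ours):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Brute-force expected gain for Alice: average the Step 4 scan over
// all C(2n, n) bitmasks with exactly n ones (bit i = 1 means the
// (i+1)-th smallest card belongs to Bob).
double expectedGain(int n) {
    long long total = 0, strings = 0;
    for (int mask = 0; mask < (1 << (2 * n)); mask++) {
        if (__builtin_popcount(mask) != n) continue;
        strings++;
        long long res = 0;
        int q = 0, p = n; // p = rank of the current Alice card
        for (int i = 2 * n - 1; i >= 0; i--) {
            if (mask >> i & 1) q++;       // Bob card available
            else if (q > 0) { q--; p--; } // Bob beats this Alice card
            else res += p--;              // Alice wins slot p
        }
        total += res;
    }
    return (double)total / (double)strings;
}
```

    For $$$n=1$$$ Alice wins her single slot exactly when she holds the larger card, so the expected gain is $$$1/2$$$; this kind of check is enough to validate a fast $$$O(n^3)$$$ DP implementation on small inputs.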

»
3 years ago, # |

How to see the solutions of other people? pabloskimg

  • »
    »
    3 years ago, # ^ |

    You need to enable coach mode to see all of the submissions in such training contests. You have to be either 1900+ or 2100+ (I don't remember exactly) to enable it in the gym section.

»
3 years ago, # |

How to solve B?

  • »
    »
    3 years ago, # ^ |

    Consider every possible $$$k\ge 3$$$ such that $$$k$$$ divides $$$n$$$. For each such $$$k$$$, split the array into contiguous subarrays of size $$$k$$$, and check whether each subarray can be turned into a mountain. For the check, you can binary search over the known values to find the leftmost and rightmost known values of the current subarray, and do some $$$\mathcal{O}(1)$$$ case-work to decide whether the subarray is good.

    The final complexity would be $$$\displaystyle\sum_{k=3}^n\left\lfloor\frac n k \right\rfloor\cdot \log n = \mathcal{O}\left(n\cdot \log^2 n \right)$$$.

    • »
      »
      »
      3 years ago, # ^ |

      Use a sparse table; that improves it to $$$O(n \log n)$$$.
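      Presumably the idea is to precompute range minima/maxima so that each subarray check becomes $$$O(1)$$$ instead of a binary search. A standard sparse table sketch (names are ours):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Sparse table: O(n log n) build, O(1) range-minimum queries on
// half-open ranges [l, r). Works because min is idempotent, so two
// overlapping power-of-two windows cover any range.
struct SparseTable {
    vector<vector<int>> t; // t[j][i] = min of a[i .. i + 2^j)
    SparseTable(const vector<int>& a) {
        int n = (int)a.size(), LOG = 32 - __builtin_clz(max(n, 1));
        t.assign(LOG, a);
        for (int j = 1; j < LOG; j++)
            for (int i = 0; i + (1 << j) <= n; i++)
                t[j][i] = min(t[j - 1][i], t[j - 1][i + (1 << (j - 1))]);
    }
    int query(int l, int r) const { // min of a[l..r), requires l < r
        int j = 31 - __builtin_clz(r - l);
        return min(t[j][l], t[j][r - (1 << j)]);
    }
};
```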

»
3 years ago, # |

What's the concept behind J and B, and how do you solve them?

»
3 years ago, # |

What should be the output for the following test case for Problem N?

Testcase
»
3 years ago, # |

How to solve E?

»
2 years ago, # |

Could someone tell me how to solve F? Thanks!

»
17 months ago, # |

I have a question about Problem F: is the max cost with $$$k$$$ steps equal to the sum of the $$$k$$$ greatest numbers?