Geothermal's blog

By Geothermal, 5 weeks ago, In English

A — Weather Prediction

Store the three weather states in an array. Then, identify the element of the array equal to the input value, and print the next element in the array.

Time complexity: $$$O(1)$$$. Click here for my submission.
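As an illustration (not the linked submission itself), the lookup can be sketched in Python:

```python
def next_weather(today: str) -> str:
    # The weather cycles Sunny -> Cloudy -> Rainy -> Sunny,
    # so print the element after the input in this cyclic array.
    states = ["Sunny", "Cloudy", "Rainy"]
    return states[(states.index(today) + 1) % len(states)]
```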


B — Tap Dance

We can employ brute force here, checking each element of the string individually and printing No as soon as we find one that doesn't satisfy the given condition. Of course, if all elements of the string comply with the requirements, the answer is Yes.

To implement the solution faster, note that the given conditions are equivalent to S[i] != 'L' when $$$i$$$ is odd and S[i] != 'R' when $$$i$$$ is even, so you can implement each check using just one condition instead of three.

Time complexity: $$$O(N)$$$. Click here for my submission.
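A minimal Python sketch of the simplified check, using 1-indexed positions as in the statement:

```python
def tap_dance(s: str) -> str:
    # Odd (1-indexed) positions must not be 'L'; even positions must not be 'R'.
    for i, c in enumerate(s, start=1):
        if (i % 2 == 1 and c == 'L') or (i % 2 == 0 and c == 'R'):
            return "No"
    return "Yes"
```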


C — Attack Survival

Suppose player $$$i$$$ answers $$$C_i$$$ questions correctly. Then they lose $$$Q - C_i$$$ points from the other players' correct answers, leaving them with $$$K - (Q - C_i) = K - Q + C_i$$$ points in the end. To survive, they must finish with a positive score, i.e. $$$K - Q + C_i > 0$$$, which is equivalent to $$$C_i > Q - K$$$.

We can thus compute the value of $$$Q-K$$$ and the values of $$$C_i$$$ for each $$$i$$$. Then, for each $$$i$$$, print Yes if $$$C_i > Q-K$$$ and No otherwise.

Time complexity: $$$O(N)$$$. Click here for my submission.
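The whole solution fits in a few lines; here is a Python sketch (with `answers` listing, per question, the player who answered it correctly):

```python
def attack_survival(n: int, k: int, q: int, answers: list[int]) -> list[str]:
    # answers[j] is the 1-indexed player who answered question j correctly.
    c = [0] * (n + 1)
    for a in answers:
        c[a] += 1
    # Player i survives iff K - Q + C_i > 0, i.e. C_i > Q - K.
    return ["Yes" if c[i] > q - k else "No" for i in range(1, n + 1)]
```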


D — Powerful Discount Tickets

We'll employ a greedy approach: at each point in this process, apply a ticket to the item for which this ticket will decrease our cost the most. Because each subsequent ticket we apply to any given item will have less value than the last ticket we used on the same item, this approach is always optimal.

To implement this strategy, we'll employ a priority queue containing a single entry for each item, sorted by the amount a ticket would save us if used on that item. $$$M$$$ times, we pick the most valuable use of a ticket and subtract the savings from our total cost. Then, we compute the amount the next ticket will save us on the same item and insert this into the priority queue. (To save implementation time, you could alternatively insert every possible ticket for each item into the priority queue immediately; since each item's price becomes zero after the use of at most thirty tickets, this won't be too time-consuming. However, it does degrade the performance to $$$O(N \log^2 N)$$$.)

Time complexity: $$$O((N+M) \log N)$$$. The logarithmic factor is due to our priority queue operations. Click here for my submission.
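As a sketch of the greedy with Python's `heapq` (a min-heap, so prices are negated): rather than storing savings explicitly, this equivalent formulation simply halves the current maximum price $$$M$$$ times.

```python
import heapq

def min_total_cost(prices: list[int], m: int) -> int:
    # Max-heap via negation: using a ticket on the most expensive item
    # always saves the most, matching the greedy argument above.
    heap = [-p for p in prices]
    heapq.heapify(heap)
    for _ in range(m):
        top = -heapq.heappop(heap)
        heapq.heappush(heap, -(top // 2))  # one ticket halves the price (floor)
    return -sum(heap)
```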


E — Who Says a Pun?

To start, let's hash every substring of $$$S$$$. (Be careful to do this in $$$O(N^2)$$$, not $$$O(N^3)$$$, by using the shorter hashes to help compute the longer ones.) Then, we can iterate over each possible value $$$L$$$ of the answer and use brute force to check if two substrings of length $$$L$$$ satisfy our condition. To do so, we maintain a set containing hashes we've encountered so far and iterate over all positions of the string. At each position, we add the hash of the appropriate length ending at this position to our set and check if the hash starting at this position is in our set already. If so, $$$L$$$ is indeed a possible answer.

This has complexity $$$O(N^2 \log N)$$$, since we have $$$N$$$ values for $$$L$$$, $$$O(N)$$$ hashes at each length, and our set operations take $$$O(\log N)$$$. This is a little on the slow end, so to speed up our program, we use binary search rather than iterating over every possible $$$L$$$. It is easy to see that if we can achieve some $$$L$$$, we can also find an example for any lower value of $$$L$$$ (simply take prefixes of the length $$$L$$$ substring), so we can binary search on the answer, which speeds up the time complexity after hashing to $$$O(N \log^2 N)$$$.

Time complexity: $$$O(N^2)$$$, due to the hashing step. Click here for my submission.
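A Python sketch of the binary search; for brevity it stores the substrings themselves in a set, standing in for the hashes described above (the structure of the check is identical):

```python
def feasible(s: str, length: int) -> bool:
    # Is there a substring of this length occurring twice without overlap?
    seen = set()
    for i in range(length, len(s) - length + 1):
        seen.add(s[i - length:i])    # substrings ending at or before position i
        if s[i:i + length] in seen:  # a second, non-overlapping occurrence
            return True
    return False

def longest_repeated(s: str) -> int:
    lo, hi = 0, len(s) // 2  # feasibility is monotone in the length
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if feasible(s, mid):
            lo = mid
        else:
            hi = mid - 1
    return lo
```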


F — Xor Sum 3

Let's start with an observation: we really only care about the bits that occur an even number of times in the overall set. If a bit occurs an odd number of times in the overall set, it will occur an odd number of times in one of our two subsets and an even number of times in the other, so it will be counted once in our overall sum. On the other hand, a bit occurring an even number of times in the overall set could occur an even number of times in each subset, counting it zero times, or an odd number of times in each subset, counting it once.

Let's remove from every element of the data every bit that occurs an odd number of times in the overall set, leaving only the bits occurring an even number of times. We now want to find the maximum XOR-sum of any subset of the processed data. This is because any bit (occurring an even number of times in total) that appears in the XOR-sum of one of our subsets must also appear in the XOR-sum of the other, while any bit absent from one subset's XOR-sum is absent from the other's. Afterwards, we can double this XOR-sum (since each of its bits occurs in both subsets) and add back the values of the bits occurring an odd number of times to get our final answer.

We've thus essentially reduced this problem to identifying the maximum XOR-sum of any subset of our data. We can solve this problem by computing the linear basis of our data and iterating over the values it contains in decreasing order of significance, adding each value if it increases our sum. More details on using linear bases to solve XOR problems can be found in this article; I used the author's solution to Problem 3 to finish this problem.

Time complexity: $$$O(N \log A_i)$$$. Click here for my submission. Again, substantial credit goes to DrSwad, as I used code from his article to compute the maximum subset XOR after processing the data.
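A Python sketch of the full reduction, with a small linear-basis routine standing in for the article's code:

```python
def max_subset_xor(values: list[int]) -> int:
    # Greedy linear (XOR) basis: each kept vector has a distinct leading bit.
    basis = []
    for v in values:
        for b in basis:          # basis is kept sorted by decreasing value
            v = min(v, v ^ b)    # clear v's leading bit whenever b can
        if v:
            basis.append(v)
            basis.sort(reverse=True)
    ans = 0
    for b in basis:              # decreasing order of significance
        ans = max(ans, ans ^ b)  # take b only if it increases the sum
    return ans

def xor_sum_3(a: list[int]) -> int:
    total = 0
    for v in a:
        total ^= v  # bits set here occur an odd number of times overall
    reduced = [v & ~total for v in a]  # keep only the even-count bits
    return total + 2 * max_subset_xor(reduced)
```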


As usual, feel free to leave comments or discuss alternate solutions below!


»
5 weeks ago, # |

E may be solved using 2D dp also.

Hint

TC is $$$O(N ^ 2)$$$.

My submission

  • »
    »
    5 weeks ago, # ^ |

    I thought of the LCS problem at first sight and forgot the simple DP solution...

  • »
    »
    5 weeks ago, # ^ |

    How does setting lim = j-1 avoid the O(n^3) brute force while still giving correct output? For the string "ababa", dp[0][2] = 2, which is correct, but to compute dp[0][2] we computed dp[1][3] = 1, which is wrong. Can you please explain your solution in detail?

    • »
      »
      »
      5 weeks ago, # ^ |

      Maybe my solution helps you to understand the above solution easier: https://atcoder.jp/contests/abc141/submissions/7565470

      The factor min(j - i) in my code is similar to the lim above. It's there because we want to avoid overlapping: if we get an overlap, we just keep the longest possible substring.

    • »
      »
      »
      5 weeks ago, # ^ |

      The first thing to note is the dp transition. Note that from the state $$$[l1, l2]$$$ we move to state $$$[l1 + 1, l2 + 1]$$$. The difference $$$l2 - l1$$$ stays constant throughout the dp.

      The second thing to note is that for a fixed difference $$$l2 - l1$$$, if the answer for the state $$$[l1, l2]$$$ is say $$$p$$$ (ofc $$$p \le l2 - l1$$$), then the state $$$[l1 + 1, l2 + 1]$$$ cannot give an answer more than $$$p$$$. (Notice how, in the example you used, $$$[0, 2]$$$ gives the answer $$$p = 2$$$, but $$$[1, 3]$$$ also gives $$$2$$$ which is not more than $$$p$$$)

      To see why the second observation holds, notice that there can be only $$$2$$$ cases:

      Case 1
      Case 2

      So overall, the solution is correct (though I admit this could be made safer, as this approach works only for this particular problem, as done by Hepic_Antony_Skarlatos here).

»
5 weeks ago, # |

I believe E can be solved in $$$O(N \log N)$$$. First, I hashed the entire string, which takes $$$O(N)$$$ time. Then I did a binary search on the length, which is the answer to the problem.

For every length len, I could get the hash values of all substrings of length len in $$$O(N)$$$ time. And for every substring, I used an unordered map to store its first occurrence. If the same hash value occurred twice and the positions did not overlap, the answer could be len or larger.

  • »
    »
    5 weeks ago, # ^ |

    I did the same thing, but my solution failed one test case. Implementing double hashing gave me AC, though.

»
5 weeks ago, # |

Hi @Geothermal, can you please tell me why my solution fails on problem E? I am using a double hash, which should be even better than a single hash, and doing a binary search just as you did. Please, someone help me.

code

  • »
    »
    5 weeks ago, # ^ |

    A double hash can also result in a collision. You should have used the STL map.

    • »
      »
      »
      5 weeks ago, # ^ |

      What do you mean? I have used a map for storing pairs of hashes.

      • »
        »
        »
        »
        5 weeks ago, # ^ |

        The map you used might have had a collision, leading to two different strings being stored in the same cell, hence the wrong answer. This is very rare, but it is not impossible; that is why most of the test cases got AC. Try implementing the same solution using STL maps, or use chaining to resolve collisions.

        • »
          »
          »
          »
          »
          5 weeks ago, # ^ |

          How come some other coders got AC using only a single hash, but my double hash fails?

»
5 weeks ago, # |

I need help understanding problems D and E; I was not able to follow the explanations above. Can someone explain them in an easier way?

  • »
    »
    5 weeks ago, # ^ |

    For D, you can use a greedy approach: repeatedly use a discount on the priciest item, removing it from whatever data structure you use and then inserting it back with half its value. The data structure he used is a priority queue, but I think a set would work as well. Hope this helps.

  • »
    »
    4 weeks ago, # ^ |

    Just think about how you can reduce your cost the most: by halving the maximum element in your list.

»
5 weeks ago, # |

I would like to share my implementation for problem E, as it takes less time (680 ms) and considerably less space (256 KB) compared to Geothermal's submission (1020 ms, 685224 KB).

my submission

»
5 weeks ago, # |

Here is my Java solution for E, which uses a rolling hash and binary search and runs in $$$O(N \log N)$$$.

https://atcoder.jp/contests/abc141/submissions/7551803

»
5 weeks ago, # |

Problem E can also be solved using the Z-function. The complexity is $$$O(N^2)$$$. Link to my submission.

»
5 weeks ago, # |

Is the solution of F equal to (maximum subset XOR) + ((maximum subset XOR) ^ (total XOR))?