BledDest's blog

By BledDest, 11 months ago, In English

1849A - Morning Sandwich

Idea: BledDest

Tutorial
Solution (awoo)

1849B - Monsters

Idea: BledDest

Tutorial
Solution (Neon)

1849C - Binary String Copying

Idea: BledDest

Tutorial
Solution (vovuh)

1849D - Array Painting

Idea: BledDest

Tutorial
Solution (BledDest)

1849E - Max to the Right of Min

Idea: awoo

Tutorial
Solution (awoo)

1849F - XOR Partition

Idea: BledDest

Tutorial
Solution (BledDest)
  • Vote: I like it
  • +102
  • Vote: I do not like it

»
11 months ago, # |
  Vote: I like it +1 Vote: I do not like it

Is there any way to solve C as follows: 1) store all prefixes of the string, 2) store all suffixes of the string, 3) count the 1s and 0s and store the counts in a prefix array?

Now, when we get a query l, r, we create a temp string such that temp = prefix up to l-1 + a string of zeroes whose length is the number of zeroes in [l, r] + the same with ones + suffix[r+1].

And put the temp string into a set...

Will this work?

  • »
    »
    11 months ago, # ^ |
    Rev. 3   Vote: I like it +1 Vote: I do not like it

    The time complexity of constructing such a string is $$$O(n)$$$, so I think this will get TLE.

    However, I managed to solve this problem using string hashing, which makes it possible to calculate the hash of an operated temp string in $$$O(1)$$$.

    To avoid hash collision, use a double hash.

    Here's my submission 216020500. (Note that the time complexity is $$$O(m+n\log n)$$$ ,because we need a set to count all different hashes.)

    • »
      »
      »
      11 months ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      If I solve it using construction, and suppose the constructed string is temp,

      how can I use your hashing on this temp?

      • »
        »
        »
        »
        11 months ago, # ^ |
          Vote: I like it +4 Vote: I do not like it

        Your approach of using std::string and concatenating the four precomputed strings PREFIX+ZEROS+ONES+SUFFIX is still $$$O(n)$$$. (Concatenation takes $$$O(\text{len})$$$ time.)

        To solve the problem more efficiently, you need to manually implement a hash function.

        Let's say the hash function $$$f(s)=\displaystyle\sum_{i=1}^{\ell}s[i]\times b^{\ell-i} \pmod M$$$, where $$$b$$$ and $$$M$$$ are constants.

        If you already have the hash of string $$$s_1$$$ and $$$s_2$$$, then $$$f(s_1+s_2)=f(s_1)\cdot b^{\operatorname{len}(s_2)}+f(s_2)\pmod M$$$, which means we can concat two strings in $$$O(1)$$$ time (precalculate all the powers of $$$b$$$).

        So: 1) calculate the hash of each prefix, 2) calculate the hash of each suffix, 3) calculate the hash of each run of zeroes and of ones.

        Then you can calculate the hash of an operated string in $$$O(1)$$$ by concatenating the four parts using the method above.
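
        A minimal sketch of this $$$O(1)$$$ concatenation (the constants and function names here are illustrative; a real solution would precompute all powers of b instead of the loop below):

```cpp
#include <cstdint>
#include <string>

// Polynomial hash as described above; b and M are illustrative constants.
constexpr uint64_t B = 131, M = 1'000'000'007ULL;

// Hash of a whole string, computed with Horner's rule.
uint64_t hash_of(const std::string& s) {
    uint64_t h = 0;
    for (unsigned char c : s) h = (h * B + c) % M;
    return h;
}

// f(s1 + s2) = f(s1) * b^len(s2) + f(s2)  (mod M).
// With the powers of B precomputed, this is O(1) per concatenation.
uint64_t concat_hash(uint64_t h1, uint64_t h2, size_t len2) {
    uint64_t p = 1;
    for (size_t i = 0; i < len2; i++) p = p * B % M;  // = B^len(s2)
    return (h1 * p + h2) % M;
}
```

        For example, concat_hash(hash_of("ab"), hash_of("cd"), 2) equals hash_of("abcd").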

  • »
    »
    11 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    I implemented the same thing but got memory limit exceeded on test 4.

    • »
      »
      »
      11 months ago, # ^ |
      Rev. 2   Vote: I like it +1 Vote: I do not like it

      You didn't precompute the prefixes.

      You used the substr() function for getting the prefix and suffix; its time complexity is O(size of substring), which in the worst case runs up to N, so somewhere it will lead to TLE.

      I was talking about how we can store the prefixes beforehand. For example:

      ~~~~~
      map<int, string> prefix_at_index;
      string pf = "";
      for (int i = 0; i < s.length(); i++) {
          pf.push_back(s[i]);
          prefix_at_index[i] = pf;
      }
      ~~~~~

      Something like this...

      Only log(n) complexity to get a prefix and a suffix.

      • »
        »
        »
        »
        11 months ago, # ^ |
        Rev. 2   Vote: I like it +1 Vote: I do not like it

        Storing all prefixes would also lead to MLE:

        What is the total length of all prefixes of a string length $$$n$$$? It is $$$1 + 2 + 3 + 4 + 5 + \dots + n$$$, which is equal to $$$\displaystyle\frac{n(n + 1)}{2}$$$.

        If $$$n = 200\,000$$$, $$$\displaystyle\frac{n(n + 1)}{2} = \frac{200\,000 \cdot 200\,001}{2} = 20\,000\,100\,000$$$.

        The total length of all prefixes would be $$$20\,000\,100\,000$$$ and since each character is $$$1$$$ byte, your code would need to store $$$20\,000\,100\,000$$$ bytes in memory. $$$20\,000\,100\,000$$$ bytes is about $$$20\,000$$$ megabytes, which is more than the memory limit of $$$256$$$ megabytes.
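
        The arithmetic above can be checked directly (the function name is mine):

```cpp
#include <cstdint>

// Total bytes needed to store every prefix of a length-n string,
// at 1 byte per character: 1 + 2 + ... + n = n(n+1)/2.
uint64_t total_prefix_bytes(uint64_t n) {
    return n * (n + 1) / 2;
}
```

        For n = 200000 this gives 20000100000 bytes, far above the 256 MB limit.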

  • »
    »
    10 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Excuse the necroposting, but I used the same approach.

    Let the rolling hash H[n] = s[0] + b * s[1] + ... + b^n * s[n] be the hash of string s.

    Precompute the rolling hash for a string containing all 0s and all 1s separately, i.e.:

    Let H_0[n] = '0' + b * '0' + ... + b^n * '0'

    Let H_1[n] = '1' + b * '1' + ... + b^n * '1'

    Let h = H[n].

    To process the query [L..R], assume you know the number of zeros and ones in [L..R] via prefix sums.

    1. Get rid of substring s[L..R]
    2. Add the sorted substring of 000...111 to h.
    3. insert h into set.

    Step 1: h = h - H[R] + H[L-1]

    Use some algebra for step 2, and you'll find the following: Step 2: h = h + b^L * H_0[num_zeros] + b^(L+num_zeros) * H_1[num_ones]

    Step 3: insert into set.

    Use double hashing to avoid collisions. My submission link: link

    Though I recommend taking a look at linlexiao's comment, which has cleaner code.
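
    A sketch of this per-query hash update, with a single modulus for brevity (the comments above recommend a double hash); the struct and member names are mine:

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Per-query hash of the "sorted [l, r]" string, as described above.
struct SortedRangeHasher {
    static constexpr uint64_t B = 131, MOD = 1'000'000'007ULL;
    // pw[i] = B^i; pref[i] = hash of s[0..i-1] with s[j] at power B^j;
    // ones[i] = number of '1' among s[0..i-1];
    // H0[i] / H1[i] = hash of i zeroes / i ones starting at power B^0.
    std::vector<uint64_t> pw, pref, ones, H0, H1;

    explicit SortedRangeHasher(const std::string& s) {
        size_t n = s.size();
        pw.assign(n + 1, 1); pref.assign(n + 1, 0); ones.assign(n + 1, 0);
        H0.assign(n + 1, 0); H1.assign(n + 1, 0);
        for (size_t i = 0; i < n; i++) {
            pw[i + 1] = pw[i] * B % MOD;
            pref[i + 1] = (pref[i] + (uint64_t)s[i] * pw[i]) % MOD;
            ones[i + 1] = ones[i] + (s[i] == '1');
            H0[i + 1] = (H0[i] + (uint64_t)'0' * pw[i]) % MOD;
            H1[i + 1] = (H1[i] + (uint64_t)'1' * pw[i]) % MOD;
        }
    }

    // Hash of s after sorting s[l..r] (0-based, inclusive), computed in O(1).
    uint64_t query(size_t l, size_t r) const {
        uint64_t h = pref.back();
        // step 1: remove the old hash contribution of s[l..r]
        h = (h + MOD - (pref[r + 1] + MOD - pref[l]) % MOD) % MOD;
        // step 2: add the zeroes, then the ones, shifted to start at power B^l
        uint64_t o = ones[r + 1] - ones[l];
        uint64_t z = (r - l + 1) - o;
        h = (h + H0[z] * pw[l]) % MOD;
        h = (h + H1[o] * pw[l + z]) % MOD;
        return h;
    }
};
```

    For instance, for s = "0110", query(0, 3) equals the full-string hash of "0011", and query(1, 2) leaves the hash unchanged since "11" is already sorted.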

»
11 months ago, # |
  Vote: I like it +14 Vote: I do not like it

Hello everybody!

The competition was very interesting, but despite this, I will still get negative points.

Thanks for the competition: adedalic, awoo, BledDest, Neon and MikeMirzayanov

»
11 months ago, # |
  Vote: I like it +2 Vote: I do not like it

Any problems similar to C?

»
11 months ago, # |
  Vote: I like it 0 Vote: I do not like it

Is there any way to solve C by constructing a new string for each query? I am getting memory limit exceeded.

  • »
    »
    11 months ago, # ^ |
    Rev. 2   Vote: I like it 0 Vote: I do not like it

    Highly doubt it. Constructing a new string is O(n) (not accounting for any sorting you might want to do), so if you do that for each query, that's too slow.

  • »
    »
    11 months ago, # ^ |
    Rev. 2   Vote: I like it 0 Vote: I do not like it

    If you construct a new string each query (and store all of them), your code will store $$$200\,000 \cdot 200\,000 = 40\,000\,000\,000 = 4 \cdot 10^{10}$$$ bytes of data (each character is $$$1$$$ byte, each string has $$$200\,000$$$ characters, there are $$$200\,000$$$ strings).

    $$$4 \cdot 10^{10}$$$ bytes is about $$$4 \cdot 10^{4} = 40\,000$$$ megabytes which is quite a bit larger than the memory limit of $$$256$$$ megabytes.

  • »
    »
    10 months ago, # ^ |
    Rev. 2   Vote: I like it 0 Vote: I do not like it

    Instead, you can construct a new hash for every query. Let (L, R) be my query; then Hash[1 to (L - 1)] and Hash[(R + 1) to N] remain unchanged. So we only need to reconstruct Hash[L to R], which is pretty straightforward:
    Hash[L to R] = PrimePowerSum[L to (L + zeroes in this range - 1)] + 2 * PrimePowerSum[(L + zeroes in this range) to R].

  • »
    »
    10 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    you can use string hash to do what you want to do

»
11 months ago, # |
  Vote: I like it 0 Vote: I do not like it

Did anyone solve B using a priority queue with a custom comparator, without storing any negative values, just a pure comparator thing?

  • »
    »
    11 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    I tried to use a comparator and a priority queue but got TLE.

  • »
    »
    11 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    I solved it using a comparator.

    The idea is that instead of using the actual values a_i, we can use (a_i mod k), because we simply keep subtracting k from each value until it reaches a value less than or equal to 0.

    For example, suppose we have: n = 3, k = 4, a = [6, 3, 1000000000].

    The simplest approach is to keep subtracting 4 from the largest number until another number becomes larger than it. First we subtract 4 from the 3rd element until it is no longer the largest: a = [6, 3, 4]. Then we subtract 4 from the 1st element until it is no longer the largest: a = [2, 3, 4], then (*) a = [2, 3, 0], then a = [2, -1, 0], then a = [-2, -1, 0].

    One way to optimize this is to go immediately to the state (*) where each element at position i equals (a_i mod k).

    One approach is to handle zeroes individually and then use a PriorityQueue like this: PriorityQueue<Integer> pq = new PriorityQueue<>((a,b) -> (arr[a] == arr[b]) ? a - b : arr[b] - arr[a]); to handle the rest of the indices.
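
    A sketch of this idea in C++, using std::stable_sort instead of a priority queue (all names are mine). A remainder of 0 is treated as k, which covers the zeroes handled individually above: such monsters die first.

```cpp
#include <vector>
#include <numeric>
#include <algorithm>

// Order in which the monsters die, based on the "a_i mod k" observation:
// only the last partial hit matters, so sort by remainder, descending,
// with ties broken by index (stable sort keeps the original order).
std::vector<int> death_order(const std::vector<long long>& a, long long k) {
    int n = (int)a.size();
    std::vector<long long> r(n);
    for (int i = 0; i < n; i++) {
        r[i] = a[i] % k;
        if (r[i] == 0) r[i] = k;  // a multiple of k behaves like health k
    }
    std::vector<int> idx(n);
    std::iota(idx.begin(), idx.end(), 0);
    std::stable_sort(idx.begin(), idx.end(),
                     [&](int x, int y) { return r[x] > r[y]; });
    return idx;  // 0-based indices in the order the monsters die
}
```

    On the example above, death_order({6, 3, 1000000000}, 4) returns the indices 2, 1, 0, matching the hand trace (the 3rd monster dies first, then the 2nd, then the 1st).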

  • »
    »
    11 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Yep, but the solution above is simpler.

»
11 months ago, # |
  Vote: I like it +29 Vote: I do not like it

What is that simpler solution for problem F that most of the participants wrote?

  • »
    »
    11 months ago, # ^ |
      Vote: I like it +12 Vote: I do not like it

    I'll try to explain my simpler solution.

    The basic idea is that, if I have at least 3 numbers with a common prefix in the binary representation, then no matter how I partition the sets, at least 2 of those will be in the same set, and all bits of the common prefix will be 0 upon xorring those numbers. So the idea is to find the longest prefix such that there are at least 3 elements with that prefix.

    n = 2 is a corner case, which can be solved independently. Now assume n >= 3.

    It is guaranteed that there exists some B such that right-shifting all elements by B will result in 3 or more equal elements, let's find the smallest such B.

    Since I have picked the smallest such B, one can observe that there won't be more than 4 equal elements upon right shifting every element by B. (If there were 5 or more elements, then B-1 also would've worked.)

    After making some cases manually, [VERY IMPORTANT:] one can observe that the minimum xor will be obtained by xorring the elements that become equal after right bitshifting by B. (because any other 2 numbers will have the xor >= $$$2^B$$$)

    So, the solution is simple:

    • find B,

    • then make groups, such that all elements of the group become equal upon right shifting by B (size of all such groups <= 4)

    • within each group, try every way of partitioning them into 2 sets. (can actually be narrowed down to much fewer cases, but this much is enough for a complete solution)

    Here is my solution with some slightly ugly casework at the end, but one can get the general idea from it: https://codeforces.com/contest/1849/submission/216361206
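
    A brute-force sketch of just the "find B" step described above (the function name is mine; it assumes n >= 3, so such a B always exists):

```cpp
#include <vector>
#include <cstdint>
#include <algorithm>

// Smallest shift B such that right-shifting every element by B produces
// at least 3 equal values. Brute force over all bit positions: O(64 n log n).
int smallest_shift(const std::vector<uint64_t>& a) {
    for (int B = 0; B < 64; B++) {
        std::vector<uint64_t> v;
        for (uint64_t x : a) v.push_back(x >> B);
        std::sort(v.begin(), v.end());
        for (size_t i = 0; i + 2 < v.size(); i++)
            if (v[i] == v[i + 2]) return B;  // three equal shifted values
    }
    return 64;  // unreachable for n >= 3 (everything eventually shifts to 0)
}
```

    The remaining steps (grouping elements equal after the shift and trying partitions within each group of size at most 4) follow the casework in the linked submission.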

»
11 months ago, # |
Rev. 3   Vote: I like it +47 Vote: I do not like it

Problem E can also be solved using the trick from 1156E - Special Segments of Permutation. Firstly, precalculate the "borders" in which $$$p_i$$$ is the minimum or the maximum element using monotonic stacks. Let's fix the maximum element of the segment. As in the problem mentioned above go to the closest end of the maximum border. If it is to the left of $$$p_i$$$ just calculate the number of good segments ($$$minpos(l, r) < maxpos(l, r)$$$) directly. Otherwise calculate the number of bad segments ($$$minpos(l, r) \geq maxpos(l, r)$$$) and subtract it from the total number of segments. You can see the details in my submission (with comments!): 216141125

  • »
    »
    11 months ago, # ^ |
    Rev. 2   Vote: I like it +4 Vote: I do not like it

    Can you elaborate on the complexity of your algorithm? It doesn't seem obvious why it isn't quadratic in the worst case.

    UPD: I followed the link to the similar problem you provided; it is explained very well there. It is indeed a very educational problem, thanks!

»
11 months ago, # |
  Vote: I like it 0 Vote: I do not like it

For problem C, I tried creating a prefix sum array of the number of inversions in the string, and for each range inserting the number of inversions into a set. At the end, I printed the number of elements in the set. Does anyone know why this doesn't work?

»
11 months ago, # |
Rev. 3   Vote: I like it +13 Vote: I do not like it

Why in problem E, in the author's solution, does the line len += it->first - prv->first; (in the first while loop) subtract prv->first from it->first and not from me->first? As I understand it, in the case it->second == 0, we should first subtract the value it->first - me->first from len and then add the value it->first - prv->first.

More surprisingly, both versions get accepted: 216151073 and 216150501.

  • »
    »
    11 months ago, # ^ |
      Vote: I like it +18 Vote: I do not like it

    The line len += it->first - prv->first is completely redundant in the min stack: since we are continuously removing minimums, it is impossible for it->second to be 0. Hence, you can even delete that part completely and still get AC.

    • »
      »
      »
      11 months ago, # ^ |
      Rev. 2   Vote: I like it +19 Vote: I do not like it

      Oh, that's true. I initially had the solution for min > max, so I hastily moved the lines around to work for the actual problem. That's why it's like that.

      UPD: Apparently, it was just as redundant in the first version.

»
11 months ago, # |
  Vote: I like it 0 Vote: I do not like it

Can someone explain C in another way? I don't really understand the editorial's solution.

  • »
    »
    11 months ago, # ^ |
    Rev. 2   Vote: I like it +7 Vote: I do not like it

    When we try to sort [l, r], if str[l] to str[mid] are all '0' and str[mid+1] to str[r] are all '1', then the string doesn't change at all.

    Let pos_0[i] denote the position of the first '0' at position i or to its left, and pos_1[i] denote the position of the first '1' at position i or to its right.

    So for [l, r]: if pos_0[r] < pos_1[l], then the string doesn't change at all; otherwise, it changes.

    But we can see that if there are several '0's just before position l and several '1's just after position r (that is, str[ll] to str[l-1] are all '0' and str[r+1] to str[rr] are all '1'), then sorting [l, r] and sorting [ll, rr] produce exactly the same string.

    Indeed, pos_0[r] = pos_0[rr] (because str[r+1] to str[rr] are all '1') and pos_1[l] = pos_1[ll]. So if sorting [l1, r1] and sorting [l2, r2] give different strings, then { pos_1[l1], pos_0[r1] } is different from { pos_1[l2], pos_0[r2] }.

    My English is not very good. Please forgive my grammar errors and poor typesetting.
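
    The pairs argument above can be sketched like this (function and array names are mine; the string is 0-indexed internally, queries are 1-based inclusive ranges):

```cpp
#include <string>
#include <vector>
#include <set>
#include <utility>

// Count distinct resulting strings: two queries give the same result iff
// they share the pair (pos_1[l], pos_0[r]), or are both degenerate
// (segment already sorted, i.e. pos_0[r] < pos_1[l]).
long long count_distinct(const std::string& s,
                         const std::vector<std::pair<int, int>>& queries) {
    int n = (int)s.size();
    std::vector<int> pos1(n + 2), pos0(n + 2);
    pos1[n + 1] = n + 1;                   // first '1' at i or to the right
    for (int i = n; i >= 1; i--)
        pos1[i] = (s[i - 1] == '1') ? i : pos1[i + 1];
    pos0[0] = 0;                           // first '0' at i or to the left
    for (int i = 1; i <= n; i++)
        pos0[i] = (s[i - 1] == '0') ? i : pos0[i - 1];
    std::set<std::pair<int, int>> keys;
    for (auto [l, r] : queries) {
        if (pos0[r] < pos1[l])
            keys.insert({0, 0});           // degenerate: string unchanged
        else
            keys.insert({pos1[l], pos0[r]});
    }
    return (long long)keys.size();
}
```

    For example, for s = "0110" with queries (1,4), (2,3), (1,1), the last two queries leave the string unchanged, so there are 2 distinct results.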

»
11 months ago, # |
Rev. 3   Vote: I like it 0 Vote: I do not like it

Why not add the editorial to "Contest Materials" ?

Upd: Added.

»
11 months ago, # |
Rev. 2   Vote: I like it 0 Vote: I do not like it

I am having a hard time understanding problem C. While I can see that my approach isn't optimized at all, I would have expected either a TLE or an MLE, but instead I got WA on test 2. My approach: for each of the m queries, sort the substring [l, r], store the result in an unordered set, and finally print the size of the set. I understand that if there is no change in the string after applying the op, adding the ti string (where ti == s) to the set means I have only added the original string, and that is counted once since the set stores unique values. What am I understanding wrong here? My submission that got WA: 215938621

  • »
    »
    11 months ago, # ^ |
    Rev. 2   Vote: I like it -10 Vote: I do not like it

    Deleted comment

    • »
      »
      »
      11 months ago, # ^ |
      Rev. 3   Vote: I like it 0 Vote: I do not like it

      Brother, I am very stupid. I still don't understand what this means. Is there some part of that statement that needs deciphering? In my code, each copy (ti) of the string s is affected by only one op, that is sort(ti.begin()+l-1, ti.begin()+r-1). Is adding the string's copy to the set counted as another operation? If yes, then how does that matter? Thanks

      • »
        »
        »
        »
        10 months ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

        Ignore my previous comment. Your code gives a wrong answer because you are sorting the string in the wrong way. Instead of doing sort(ti.begin()+l-1, ti.begin()+r-1), you should do sort(ti.begin()+l-1, ti.begin()+r). You can see it yourself if you think for a few seconds.

        suppose you want to sort [l, r] = [1, 2] for some string. sort(ti.begin()+l-1,ti.begin()+r-1) will reduce to sort(ti.begin(), ti.begin()+1) which is wrong.

»
11 months ago, # |
  Vote: I like it 0 Vote: I do not like it

I tried double hashing for problem C; I'm still sometimes getting 1 more than the answer.

216112844

»
11 months ago, # |
  Vote: I like it 0 Vote: I do not like it

Can anyone explain why I got TLE in B? After reducing all healths mod K to within 1 to K, I loop through the array and store the list of all monsters with the same health in a map, using health as the key. Then I iterate over the keys from K down to 1 while outputting in list order. Isn't it supposed to be O(n), since accessing a map is O(1) and pushing to an array is amortized O(1)? Isn't using stable sort as in the tutorial O(n log n)? I'm a beginner, so if anything is wrong, please let me know.

  • »
    »
    11 months ago, # ^ |
    Rev. 2   Vote: I like it 0 Vote: I do not like it

    Firstly, the standard map gives you access to a value by key in O(log n); unordered_map gives O(1), but it can degrade to O(n) on bad input. However, this isn't the reason for your TLE; the reason is the limit on K in the problem. K can be 10^9, which means your solution will do up to 10^9 iterations at the end, and that is far too much for 2 seconds.

    My recommendation for you: iterate over all pairs in the map using the construction for (auto el : your_map) { /* do something with el: el.first is the key, el.second is the value */ }
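
    The recommended range-for, completed into a small self-contained example (the function name is mine):

```cpp
#include <map>

// Visit every (key, value) pair of a std::map in ascending key order.
int sum_values(const std::map<int, int>& m) {
    int total = 0;
    for (const auto& el : m)   // el.first is the key, el.second is the value
        total += el.second;
    return total;
}
```

    Iterating the map this way touches only the n entries that actually exist, instead of looping over all possible keys up to K.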

»
11 months ago, # |
  Vote: I like it +6 Vote: I do not like it

For what it's worth, for problem E there is another (overly complicated, but perhaps more straightforward) solution.

Let $$$[l_i, r_i]$$$ be the range where $$$p_i$$$ is the maximum element, $$$a_i$$$ be the position of the closest element to the left of $$$p_i$$$ which is less than $$$p_i$$$, or $$$a_i = 0$$$ if it does not exist. Now, let's fix $$$i$$$ and look at the subarrays where $$$p_i$$$ is the maximum element, i.e. subarrays $$$p_l, \dots, p_r$$$ such that $$$l_i \le l \le i$$$ and $$$i \le r \le r_i$$$.

Here comes the main observation: the subarray $$$p_l, \dots, p_r$$$ is good ($$$minpos_{l,r} < i$$$) iff $$$\min(a_i, \dots, a_r) \ge l$$$. The proof should be trivial so I will omit it. Therefore, for fixed $$$r$$$ we have $$$\max(0, \min(a_i, \dots, a_r) - l_i + 1)$$$ good subarrays.

In order to get rid of the $$$\max(0, \dots)$$$ part let's find the rightmost index $$$r_i'$$$, such that $$$r_i' \le r_i$$$ and $$$\min(a_i, \dots, a_{r_i'}) \ge l_i$$$. Since the array $$$a$$$ can be calculated beforehand, this part can be done in $$$O(\log n)$$$ using a sparse table or some other data structure. Then, the number of good subarrays containing $$$p_i$$$ is

$$$ \sum_{r=i}^{r_i'} (\min(a_i, \dots, a_r) - l_i + 1) = \sum_{r=i}^{r_i'} \min(a_i, \dots, a_r) - (r_i' - i + 1) \cdot (l_i - 1). $$$

Subtracting the $$$(r_i' - i + 1) \cdot (l_i - 1)$$$ part is easy, let's focus on the sum. We have the following problem: given an array $$$a_1, \dots, a_n$$$ and $$$n$$$ queries of the form $$$(l, r)$$$, for each query find $$$\sum_{i=l}^r \min(a_l, \dots, a_i)$$$. There must be an easier way to solve it, but here is what I came up with. Let's answer these queries offline by iterating $$$l$$$ from right to left.

We need to support 2 kinds of operations on the array $$$b$$$ which is initially empty: 1) append an element to the left of $$$b$$$, 2) query $$$\sum_{i=1}^{r} \min(b_1, \dots, b_i)$$$. We can use a segment tree for that. Operation 1 can be done using range assignment: when we are appending $$$a_i$$$ all we need is to assign $$$a_i$$$ on the range $$$[i, q_i)$$$, where $$$q_i$$$ is the closest position of an element less than $$$a_i$$$ to the right of $$$a_i$$$ (or $$$n + 1$$$, if it does not exist). And operation 2 is simply a range sum query.

That's it, not the easiest implementation, but it felt very natural to come up with. The final complexity is $$$O(n \log n)$$$ and my submission is 215978080.

»
11 months ago, # |
Rev. 3   Vote: I like it 0 Vote: I do not like it

Can someone please explain: in C, why can't we just sort the array on each iteration? Won't that be n log n?

(m queries in total, and then a log factor for storing.)

This gives WA on the 2nd tc.

C++

~~~~~
int main() {
    long t, m, n;
    string s;
    cin >> t;
    while (t > 0) {
        cin >> n >> m;
        cin >> s;
        long l, r;
        unordered_set<string> tempstore;
        for (int i = 0; i < m; i++) {
            cin >> l >> r;
            l--;
            r--;
            string temp = s;
            if (l <= r) sort(temp.begin() + l, temp.begin() + r);
            tempstore.insert(temp);
        }
        cout << tempstore.size() << endl;
        t--;
    }
    return 0;
}
~~~~~

Python

~~~~~
t = int(input())
while t > 0:
    n, m = map(int, input().split())
    strs = set()
    s = input()
    for i in range(m):
        l, r = map(int, input().split())
        l -= 1
        r -= 1
        temp = s[:l] + ''.join(sorted(s[l:r+1])) + s[r+1:]
        strs.add(temp)
    print(len(strs))
    t -= 1
~~~~~

I understood why I got TLE in Python, but why a WA in the C++ solution?

TLE reason: sorting is n log n per query, while inserting into the set is only about log n of work, so the extra factor of n is what matters here and causes the TLE (as we know, n > log n).

  • »
    »
    11 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    I had the exact same doubt. This was my original question. Did you understand how the solution works?

    • »
      »
      »
      10 months ago, # ^ |
        Vote: I like it +10 Vote: I do not like it

      sort is [begin;end) in C++ STL, so if (l<=r) sort(temp.begin()+l,temp.begin()+r); should be if (l<=r) sort(temp.begin()+l,temp.begin()+r + 1); (or just not decrement it beforehand).
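
      A tiny illustration of the half-open convention (the helper name is mine):

```cpp
#include <algorithm>
#include <string>

// std::sort takes a half-open range [first, last): the element at `last` is
// NOT included. To sort the 1-based inclusive range [l, r] of a string:
std::string sort_range(std::string s, int l, int r) {
    std::sort(s.begin() + (l - 1), s.begin() + r);  // note: +r, not +(r-1)
    return s;
}
```

      With the off-by-one version (+r-1), sorting [1, 2] of "10" would sort only the first character and leave the string as "10" instead of "01".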

»
11 months ago, # |
  Vote: I like it +12 Vote: I do not like it

Problem E was a very interesting problem. My initial idea was to use a lazy segtree for an NlogN solution, but it turns out that the O(N) solution is much cleaner and simpler.

https://codeforces.com/contest/1849/submission/216349588

  • »
    »
    10 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    I did it with divide and conquer, and an ordered multiset when conquering each segment. Time: O(n log^2 n); it passed in 2932 ms :))))
»
11 months ago, # |
  Vote: I like it 0 Vote: I do not like it

For problem C, how does one come to this conclusion: if rgl > lfr, then the changed segment is degenerate (meaning the string does not change at all), i.e., the sort operation yields the original sequence. Is this some math theorem/lemma/logic?

  • »
    »
    11 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    We store in those arrays the index ranges that would actually be sorted. Say the range 2-3 is the one to be sorted, i.e., index 2 has the first '1' on its right and index 3 has the first '0' on its left.

    Here are a few examples:

    indexes: 0 1 2 3 4 5 6 7

    string:  0 0 1 0 1 1 1 1

    Now if we sort indices 0-6, 1-5, or 0-4, we get the same result.

    Similarly, "degenerate" here means there is no '0' to the right of the current range such that sorting would produce a different string. Consider: if we take indices 0-1, even if we sort them we get the same string, because the next '1' is to the right of r; vice versa for indices 6-7. Now rgl > lfr would hold if such a situation occurs in a bigger string, where the out-of-order '0' and '1' lie outside the window we sort, but the logic remains the same.

»
11 months ago, # |
Rev. 5   Vote: I like it 0 Vote: I do not like it

I've tried C with string hashing. My idea was basically: for every query we first subtract the hash of the segment (l, r) from the full string's hash, and then add the "sorted" hash of (l, r) by counting the number of 1s and 0s in (l, r) using prefix sums. I am able to do this in O(1) because I precompute the powers of "p" from 1 to n, the prefix sums, and the prefix hashes of the string.

Although I think my logic is sound, my solution fails on test case 4: link. I am not able to recognize my mistake; any help would be appreciated.

My implementation is based on this source: https://cp-algorithms.com/string/string-hashing.html

  • »
    »
    11 months ago, # ^ |
    Rev. 3   Vote: I like it 0 Vote: I do not like it

    The problem with my solution turned out to be the hashing method, as I had suspected. Unfortunately, the source I followed doesn't contain many suggestions on avoiding collisions.

    To our rescue, one of the comments suggests using double hashing to avoid collisions.

    I also found a very nice blog on this: https://codeforces.com/blog/entry/60445

  • »
    »
    11 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    I used the same approach and my solution gives WA on test 2.

»
11 months ago, # |
  Vote: I like it 0 Vote: I do not like it

Can someone help me find a test case for my submission, to figure out where I am going wrong?

Any help is much appreciated, thanks in advance :)

»
11 months ago, # |
  Vote: I like it 0 Vote: I do not like it

Why do I think D is easier than C?

»
11 months ago, # |
  Vote: I like it +3 Vote: I do not like it

Can anyone help me understand the editorial of problem E? I tried, but I failed to understand it.

»
11 months ago, # |
  Vote: I like it 0 Vote: I do not like it

Has anybody done D with DP? If yes, could you please explain and share the approach?

  • »
    »
    10 months ago, # ^ |
    Rev. 11   Vote: I like it 0 Vote: I do not like it

    I upsolved it with DP. Note that coloring a $$$0$$$ doesn't affect any other elements. Also, coloring a $$$1$$$ or $$$2$$$ may affect other $$$0$$$'s and other nonzero elements beside it. So, a possible strategy is to color some nonzero elements first, then color the $$$0$$$'s that are unaffected in the end.

    The idea is to consider contiguous segments of nonzero elements such that to their right is either a boundary or a $$$0$$$, and to their left is either a boundary or a $$$0$$$. If there is a $$$2$$$ in a segment, then we may color the $$$2$$$ once and use the $$$2$$$nd operation on it twice, which affects nonzeroes in both directions until hitting a boundary or a $$$0$$$. Otherwise, we can color from either end of the segment: if we color the right end, we choose the left elements and may affect a zero to its left; if we color the left end, we choose the right elements and may affect a zero to its right. So, there are $$$2$$$ or $$$3$$$ optimal ways to color a segment.

    Suppose these segments are stored as a tuple $$$(l_i, r_i, t_i)$$$ where $$$t_i = 0$$$ if the segment is only composed of $$$1$$$'s and $$$t_i = 1$$$ if there exists a $$$2$$$ in the segment.

    Let $$$dp_{i,j}$$$ be defined as the maximum number of zeroes you can affect processing the first $$$i$$$ segments such that the $$$i$$$th segment is colored the $$$j$$$th way. If $$$j=0$$$, then we color the right end; if $$$j=1$$$, then we color the left end; and if $$$j=2$$$, then we color the two if it is possible.

    Transitions are pretty straightforward but messy. You have to consider if a segment contains a $$$2$$$, if it is right along a boundary, and what if consecutive segments are only one $$$0$$$ apart. Here's my submission: 216927717.

    Let $$$m$$$ be the number of segments and $$$c$$$ be the number of zeroes. The answer is $$$m + c - \max{(dp_{m-1,0}, dp_{m-1,1}, dp_{m-1,2})}$$$.

    But yeah, editorial solution is so much simpler.

»
11 months ago, # |
  Vote: I like it 0 Vote: I do not like it

There is another solution to F, a bit hard to code but easy to understand. Let's binary search on the answer; now we need to check whether the answer is <= X. Then go from left to right, maintaining a binary trie. On each level of the trie, check the following: if the i-th bit is set in X, then all of the numbers with which the XOR would have that bit set have to be in a different component; otherwise we can ignore the bigger component. Since we can unite components in $$$O(sz)$$$, and the sum of all sizes is $$$O(NlogMAX)$$$, one check works in $$$O(NlogMAX)$$$. Since the DFS constant is really high, use DSU.

Working code: https://codeforces.com/contest/1849/submission/216600784

»
11 months ago, # |
  Vote: I like it 0 Vote: I do not like it

Can someone help me find the issue in my code? 216602874 For each l[i], r[i], I subtract the hash of the section from the hash of the string and add the hash of the sorted section, which is calculated using the formula for the sum of a geometric series. I add the final hash to a set and output the size of the set as the answer.

  • »
    »
    11 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    The issue with the code appears to be the GSS function and the bigInt (__int128) type, as far as I understand. In the GSS function, when overflow occurs, division by r-1 gives a wrong result. And the problem with the __int128 type is that the hash stored in it is computed erroneously if GSS returns a negative number. So instead I used the unsigned long long type.

»
11 months ago, # |
  Vote: I like it 0 Vote: I do not like it

For problem E, I have a cool idea, but unfortunately my poor ability can't make it happen. Is there anyone willing to take a look and tell me if this is impossible?

My idea is: for each index $$$i$$$, we can use a monotonic stack to solve $$$Lmin_i,Rmin_i$$$, meaning that when the interval left endpoint is in $$$[Lmin_i,i]$$$, the right endpoint is in $$$[i,Rmin_i]$$$, the minimum value of this interval is $$$a[i]$$$. The maximum value is similar, and can be described by $$$Lmax,Rmax$$$.

Then we try to construct a rectangle $$$A$$$ on a two-dimensional plane. Its lower left coordinate is $$$(Lmin, i)$$$, and its upper right coordinate is $$$(i, Rmin)$$$.

That is to say, this rectangle spans $$$[Lmin_i,i]$$$ on the x-axis and $$$[i,Rmin_i]$$$ on the y-axis. The area of this rectangle is the number of intervals whose minimum value is $$$a[i]$$$.

If we add another rectangle $$$B$$$ in the same way using $$$Lmax_j, Rmax_j$$$, then the area of the intersection of rectangles $$$A$$$ and $$$B$$$ is the number of intervals whose minimum value is $$$a[i]$$$ and whose maximum value is $$$a[j]$$$.

If we additionally require $$$i<j$$$, then the sum over all such intersecting rectangle pairs is the answer to this problem.

So what I think is that index $$$i$$$ goes from left to right, first query the total area of all $$$Lmin,Rmin$$$ rectangles within the range of $$$Lmax_i, Rmax_i$$$ rectangle, add it to the answer, and then add a $$$Lmin_i,Rmin_i$$$ rectangle.

The key to the problem is this task of dynamically adding rectangles and solving total area.

My idea may be naive, please don't laugh at me hard XD

»
10 months ago, # |
  Vote: I like it 0 Vote: I do not like it

Can someone explain problem E to me in some other way?

»
10 months ago, # |
  Vote: I like it 0 Vote: I do not like it

Can I use string hashing to solve problem C?

  • »
    »
    9 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Yes, I think it may work. If you use string hashing you can simply calculate the hash of each resulting string with some math. I would not recommend it, though. It's better to solve easier problems the intended way instead of forcing your way through with advanced data structures.

»
9 months ago, # |
  Vote: I like it 0 Vote: I do not like it

C and D should be swapped in my opinion

»
9 months ago, # |
  Vote: I like it 0 Vote: I do not like it

In fact, I wrote B with a priority_queue, and with this:

~~~~~
if (max_monster.health == second_max_health)
    num_hits = 1;
else
    num_hits = (max_monster.health - second_max_health + k - 1) / k;
max_monster.health -= num_hits * k;
if (max_monster.health > 0) {
    monsters.push(max_monster);
} else {
    death_order.push_back(max_monster.index);
}
~~~~~

but sad to say, it still gets TLE.

I guess it isn't O(n log n).

»
6 months ago, # |
  Vote: I like it +3 Vote: I do not like it

https://codeforces.com/contest/1849/submission/239744150 For problem E, does anyone know what this verdict means? I believe my code is correct, and it passed 35 test cases.

  • »
    »
    6 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Same problem bro. There is an error in generating test 36.

  • »
    »
    6 months ago, # ^ |
    Rev. 3   Vote: I like it +3 Vote: I do not like it

    It looks like the issue was caused by a test generator submitted during the hacking phase (it was generating the test too slowly). I've rewritten that generator in C++ and rejudged all submissions receiving that verdict, so the problem should be fixed.

    UPD: somehow that issue still persists, I'm not sure what is happening but hopefully I can resolve it

    UPD2: okay, now it definitely works.

»
5 months ago, # |
  Vote: I like it 0 Vote: I do not like it

In problem D, why can't I first BFS from all the 2s, then from all the 1s, and count the remaining unvisited indices? Please give a test case where this fails.