SomethingNew's blog

By SomethingNew, 7 weeks ago, In English

Sorry for the not-so-good samples and the late editorial. It was our first round, so it was very stressful for us, but we hope you liked the problems despite this!

1719A - Chip Game

Hint 1
Hint 2
Hint 3
Tutorial

1719B - Mathematical Circus

Hint 1
Hint 2
Hint 3
Tutorial

1719C - Fighting Tournament

Hint 1
Hint 2
Hint 3
Tutorial

1718A2 - Burenka and Traditions (hard version)

Hint 1
Hint for A1 1
Hint for A1 2
Hint 2
Hint 3
Tutorial

1718B - Fibonacci Strings

Common hint
The way of mathematicians
The way of programmers
Tutorial

1718C - Tonya and Burenka-179

Hint 1
Hint 2
Hint 3
Hint 4
Tutorial

1718D - Permutation for Burenka

Hint 1
Hint 2
Hint 3
Tutorial

1718E - Impressionism

Hint 1
Hint 2
Hint 3
Tutorial

1718F - Burenka, an Array and Queries

Hint
Tutorial

Please rate the problems; it will help us make better problems next time!

Problem Feedback
 
 
 
 
+129

»
6 weeks ago, # |
  Vote: I like it +25 Vote: I do not like it

In Div1D, how is it easy to see that suitable $$$d$$$ will form a contiguous segment? Can somebody please elaborate?

  • »
    »
    6 weeks ago, # ^ |
    Rev. 2   Vote: I like it +66 Vote: I do not like it

    I think it's by far the hardest part of the entire problem, so the editorial's claim that it's easy is quite baffling.

    But here's how someone explained it to me: by Hall's theorem, for there to be a matching between $$$S$$$ and the intervals, you need that for every set of intervals $$$I$$$, the number of elements of $$$S$$$ inside these intervals is at least the number of intervals (let's say $$$x(I) \ge N(I)$$$). In other words, your added value $$$d$$$ is valid if and only if it is inside the union of intervals for every set $$$I$$$ such that $$$x(I) < N(I)$$$.

    Now the trick is to realize that sets of intervals whose union is not a contiguous segment don't matter -- if a set whose union is two or more disjoint parts has $$$x(I) < N(I)$$$, then definitely one of the parts violates the condition as well, and that part alone would be a more restrictive condition on the added value $$$d$$$. Therefore the valid values of $$$d$$$ are in the intersection of some contiguous segments, which is a contiguous segment.

    • »
      »
      »
      6 weeks ago, # ^ |
      Rev. 2   Vote: I like it +28 Vote: I do not like it

      Wow!! Really nice proof. My approach didn't utilize this observation, it might be interesting to you.

      Approach: the problem is equivalent to this: we have $$$k$$$ intervals, $$$k-1$$$ points, and one query point, and we have to check whether we can perfectly match them or not.

      The idea is to find those intervals, removing which we can perfectly match the given $$$k-1$$$ points and remaining $$$k-1$$$ intervals.

      Let's match the intervals greedily:

      • Sort the intervals according to left endpoints

      • Sort the points in increasing order

      • Move from left to right, and for the current point, say $$$x$$$, insert into a set the right endpoints of all intervals whose left endpoint is $$$\le x$$$ (can be done using a pointer).

      • Match the point with the segment having the smallest $$$r$$$ value.

      • Also, for each point, we can calculate what it would be matched to if the segment chosen in the previous step were removed ($$$-1$$$ for points that don't have such an option).

      In the end, the set should have exactly one spare segment.

      Let $$$seg_i$$$ be the segment allotted to the $$$i$$$-th point $$$(1 \le i \le k - 1)$$$ and $$$seg_k$$$ be the only spare segment.

      We can be sure that we can match $$$seg_k$$$ to $$$d$$$ ensuring that the remaining segments still have a perfect match.

      Let's make a DP where $$$dp_i$$$ denotes whether we can remove $$$seg_i$$$ such that the remaining segments perfectly match the given $$$k-1$$$ points. Initially, $$$dp_k = true$$$.

      • Iterate from $$$k-1$$$ down to $$$1$$$; for each $$$seg_i$$$ we know the segment it would be replaced with (i.e., the segment that would be matched to point $$$i$$$) if $$$seg_i$$$ didn't exist; call it $$$nxt_i$$$.
      • $$$nxt_i$$$ is guaranteed to be greater than $$$i$$$, because that segment would only have been allotted to some later point.
      • $$$dp_i = dp_{nxt_i}$$$

      Finally, we have all the segments in which $$$d$$$ can lie. Since this approach does not rely on these segments forming a contiguous section, we can answer queries offline (using a sweepline) or online (using a segment tree).

      Code
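The greedy matching step described above can be sketched as follows (my own illustration, not the linked code; the function name and interface are assumptions):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Sketch of the greedy step: sweep points in increasing order, keep the
// intervals whose left end is <= current point in a set ordered by right
// end, and match each point to the interval with the smallest right end.
// Returns the matched interval index for each point, or {} on failure.
vector<int> greedyMatch(vector<pair<long long, long long>> seg,
                        vector<long long> pts) {
    int k = seg.size();
    vector<int> order(k);
    iota(order.begin(), order.end(), 0);
    sort(order.begin(), order.end(),
         [&](int a, int b) { return seg[a].first < seg[b].first; });
    sort(pts.begin(), pts.end());
    set<pair<long long, int>> avail;  // (right end, interval index)
    vector<int> match;
    int ptr = 0;
    for (long long x : pts) {
        while (ptr < k && seg[order[ptr]].first <= x) {
            avail.insert({seg[order[ptr]].second, order[ptr]});
            ++ptr;
        }
        auto it = avail.begin();
        // smallest right end already ended before x: no interval fits x
        if (it == avail.end() || it->first < x) return {};
        match.push_back(it->second);
        avail.erase(it);
    }
    return match;
}
```

Using a set ordered by right endpoint makes "the segment with the smallest $$$r$$$" an $$$O(\log k)$$$ query, so the whole matching runs in $$$O(k \log k)$$$.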

      • »
        »
        »
        »
        6 weeks ago, # ^ |
          Vote: I like it +13 Vote: I do not like it

        "Transposed" version of your solution can be used to prove that "suitable $$$d$$$ will form a contiguous segment" as well (and as a bonus, with a few standard tricks it can be implemented in $$$O(n \alpha(n))$$$, faster not only in theory but in practice).

        • Store $$$S$$$ in a set, sort segments in the order of increasing right ends.
        • Iterate through segments.
        • Greedily assign to the current segment the first point to the right of its left end, removing it from the set.
        • If such a point can't be assigned (either it's beyond the right end of the segment or there's no such point), we know that another point must be added somewhere to the left of the right end of this segment, so we get an upper bound $$$R$$$ on $$$d$$$ and we pretend to match this segment with $$$R$$$.
        • If such a situation happens a second time, even the "optimal" $$$d$$$ wasn't enough to save $$$S$$$, so the answer is no for every $$$d$$$.
        • Repeat the same from the other direction, sorting segments in order of decreasing left ends, to find a lower bound $$$L$$$. We don't have to continue after finding the first problematic segment, since if $$$L$$$, being optimal in its own greedy way, isn't good enough, then $$$R$$$ couldn't have been good enough either.
        • If we determined that there is at least some solution for $$$d$$$, imagine adding $$$d = L$$$ and again matching segments with points from the left, as we did looking for $$$R$$$. Without this additional point there was a segment that had no matching point, but with $$$L$$$ it must have a match -- either it matched $$$L$$$ itself, or some previous segment did and every segment after shifted its match one point left, and some point eventually "carried over" to the problematic segment.
        • If we shift the added point from $$$L$$$ to some position $$$x, L < x < R$$$, up until this point segments will be matched to the same points as without $$$x$$$. The segment that will try to match $$$x$$$ was able to match both a point to the left (when we added $$$L$$$, which might be this "point to the left" of $$$x$$$ too) and a point to the right (when we added no point) (except if it is the problematic segment, then there is no suitable point to the right, but $$$x < R$$$ so $$$x$$$ satisfies the right end of this segment), so it can match $$$x$$$, and every next segment will match the same point as with $$$L$$$ added. So every such $$$x$$$ is good too.
        Tricks for better complexity
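The first pass above (finding the upper bound $$$R$$$) might look roughly like this; a hedged sketch under assumed names, not the commenter's implementation:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Sketch of the pass that finds the upper bound R on the added point d:
// process segments by increasing right end, each taking the smallest unused
// point >= its left end. The first segment left without a point fixes
// R = (its right end); a second failure means no d can help at all.
// Returns {ok, R}; ok == false means the answer is "no" for every d.
pair<bool, long long> findUpperBound(vector<pair<long long, long long>> seg,
                                     multiset<long long> pts) {
    sort(seg.begin(), seg.end(),
         [](const auto& a, const auto& b) { return a.second < b.second; });
    bool usedVirtual = false;  // already "spent" the added point d?
    long long R = LLONG_MAX;
    for (auto [l, r] : seg) {
        auto it = pts.lower_bound(l);
        if (it != pts.end() && *it <= r) {
            pts.erase(it);       // normal greedy match
        } else if (!usedVirtual) {
            usedVirtual = true;  // pretend d was matched here, so d <= r
            R = r;
        } else {
            return {false, 0};   // second failure: hopeless
        }
    }
    return {true, R};
}
```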
      • »
        »
        »
        »
        6 weeks ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

        Can we also just go in order of decreasing right endpoint and match every segment to its rightmost available value? I feel like this strategy works and is simpler.

      • »
        »
        »
        »
        6 weeks ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

        Neat! Eventually it needs to be looked at in a way similar to this because we still need a way to construct the interval of valid $$$d$$$, even if we know it's an interval.

        And it's easy to show it's an interval from your solution: $$$dp_i = dp_{nxt_i}$$$, and obviously $$$i$$$ and $$$nxt_i$$$ have at least one common point by the definition of $$$nxt_i$$$.

»
6 weeks ago, # |
  Vote: I like it +27 Vote: I do not like it

For 1718A2, I think the observations (segments of length 1 and 2) are pretty simple and obvious. The more interesting part is how to build a solution from that. I worked on it for about an hour and did not even solve A1.

The editorial answers this with "this amount can be calculated by dynamic programming or greedily." Thanks a lot!

  • »
    »
    6 weeks ago, # ^ |
    Rev. 2   Vote: I like it +1 Vote: I do not like it

    Yeah, the editorial should definitely elaborate on that. And some of these observations aren't even needed for D1.

    My DP approach: the optimal solution for an array of $$$n$$$ elements can be constructed by taking the optimal solution for the first $$$k$$$ elements (for some $$$k < n$$$) and then applying the XOR chain (overlapping segments of size 2) to the rest of the elements. We need to try each value of $$$k$$$, however, which leads to an $$$O(n^2)$$$ solution, maintaining both an array of optimal times and an array of accumulated XOR-chain results for each possible starting index. This doesn't work in D1.

    Greedy approach: maintain a running XOR while also storing each intermediate result in a set. If a result appears twice, i.e., $$$r \oplus a_i \oplus a_{i + 1} \oplus \cdots \oplus a_{i + k} = r$$$, then the subarray $$$a_i, \ldots, a_{i + k}$$$ has a XOR sum of 0. We don't need to know where the subarray starts ($$$i$$$); we just need to detect when it ends (when the running XOR is not a new value), so we can update the savings counter and reset the running XOR chain. You can check out just how simple it is over here: https://codeforces.com/contest/1719/submission/168640804

    That being said, the observation that a running XOR will encounter an old element if a subarray has a XOR sum of 0 should definitely be included in the editorial, in my opinion, since that's essential to the greedy approach (unless the authors had a different greedy approach in mind).
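The greedy described in the parent comments can be sketched like this (an illustration only, not the linked submission):

```cpp
#include <bits/stdc++.h>
using namespace std;

// answer = n - (max number of disjoint subarrays with XOR 0).
// Keep a running XOR plus the set of prefix values seen since the last
// reset; when the running XOR repeats a value, a zero-XOR subarray just
// ended: count it and reset, which keeps the counted subarrays disjoint.
long long minSeconds(const vector<long long>& a) {
    unordered_set<long long> seen = {0};
    long long cur = 0, saved = 0;
    for (long long x : a) {
        cur ^= x;
        if (seen.count(cur)) {   // some subarray ending here has XOR 0
            ++saved;
            seen = {0};
            cur = 0;
        } else {
            seen.insert(cur);
        }
    }
    return (long long)a.size() - saved;
}
```

Resetting the set after each detected zero-XOR subarray is exactly what keeps the subarrays disjoint.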

  • »
    »
    6 weeks ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    I'm quite the opposite: I couldn't do anything with either version during the contest, but once I was told the hint about segments of length 1 and 2, I solved the hard version instantly.

    • »
      »
      »
      6 weeks ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      Well, the above code is fairly simple, but actually I still do not really understand why this works.

      What is the key idea we can see here instantly?

      • »
        »
        »
        »
        6 weeks ago, # ^ |
        Rev. 2   Vote: I like it 0 Vote: I do not like it

        The key idea is to notice that the extra XOR that arrives at $$$a_i$$$, before we turn $$$a_i$$$ into zero with one operation, can be represented as the XOR of a subarray ending at index $$$i-1$$$ (possibly empty), where this subarray corresponds to consecutive operations of size 2 leading up to index $$$i$$$. So, to make $$$a_i$$$ already 0 before reaching it, we have to find a subarray ending at index $$$i$$$ with XOR sum equal to 0 (i.e., find the maximum $$$j$$$ such that $$$pref_i \oplus pref_j = 0$$$).

      • »
        »
        »
        »
        6 weeks ago, # ^ |
        Rev. 3   Vote: I like it +8 Vote: I do not like it

        As far as I understood:
        You can always use $$$n$$$ seconds to turn all elements to zero by just taking all indices consecutively, and you want to improve this result. How can you do it?
        You notice that there's no sense in using a range of size greater than 2, given the formula for the number of seconds. What can you do with this?
        If you take two numbers $$$a_i$$$ and $$$a_{i+1}$$$ and XOR both with $$$a_i$$$, you can get two results:
        1) $$$[0,0]$$$: it means $$$a_i = a_{i+1}$$$, so you saved 1 second here!
        2) $$$[0,X]$$$: now you can XOR the next pair $$$[X, a_{i+2}]$$$ with $$$X$$$, and so on, until you get a $$$[0,0]$$$ pair or reach the end of the array and turn the last element to zero with a range of size 1. Either way you saved 1 second!
        What do both situations have in common? The XOR of all elements in the covered range equals 0.
        So now you can convert the problem into counting disjoint subarrays with XOR 0 (every such subarray is 1 saved second), and the answer will be $$$n -$$$ (the count of such subarrays).

»
6 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

"Unable to parse markup [type=CF_MATHJAX]" in problem D.

»
6 weeks ago, # |
  Vote: I like it +10 Vote: I do not like it

Can anyone share their Div1 C solution using a segment tree that passes for all factors of $$$n$$$, as claimed in the editorial? I used a segment tree but got TLE.

  • »
    »
    6 weeks ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Here is mine: 168744644.

    I believe it is $$$O(n\sqrt{n} + q\sqrt{n}\log{n})$$$ or something like that.

»
6 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

Another approach for Div2 C, based on the editorial one.

Let's calculate the next greater element and the previous greater element. Then, for each $$$a_i$$$, we can calculate how many rounds the athlete would win for infinite $$$k$$$. If a previous greater element exists, he won't win a single round. Otherwise, the answer equals the distance between the athlete and his next greater element.

When answering queries, one should notice that an athlete starts to fight only after $$$i-1$$$ rounds — this is the time he has to wait to get to the second index and start fighting.

Time complexity: $$$O(n)$$$

https://codeforces.com/contest/1719/submission/168605787
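The next-greater-element precomputation used above can be done with a monotonic stack; a sketch (illustration only, not the linked submission):

```cpp
#include <bits/stdc++.h>
using namespace std;

// nxt[i] = index of the first element to the right of i that is strictly
// greater than a[i], or n if there is none. Classic monotonic stack, O(n).
vector<int> nextGreater(const vector<int>& a) {
    int n = a.size();
    vector<int> nxt(n, n);
    stack<int> st;  // indices whose next greater element is not found yet
    for (int i = 0; i < n; ++i) {
        while (!st.empty() && a[st.top()] < a[i]) {
            nxt[st.top()] = i;
            st.pop();
        }
        st.push(i);
    }
    return nxt;
}
```

The previous-greater-element array is computed symmetrically by scanning from the right.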

»
6 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

Oh, wow, am I the only one who actually started filling in Sprague–Grundy values for Div2 A (Chip Game) to realize and prove that they form a binary chessboard? I felt that this was more straightforward (albeit requiring knowledge of the Sprague–Grundy theorem) than the number-theoretic observations.

Is it planned for solutions to be posted later? I'm curious to see whether the authors' solution for Div2E/Div1B actually passes some of the really nasty but valid input test cases.

»
6 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

I solved problem C (Div2) in $$$O(n)$$$ time complexity:

submission : 168668282

  • »
    »
    6 weeks ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Me too; my submission: 168676244

  • »
    »
    6 weeks ago, # ^ |
    Rev. 3   Vote: I like it +1 Vote: I do not like it

    There are $$$2$$$ conditions in this problem.

    I use $$$tmp$$$ to denote the number of steps needed to bring the biggest value to the front of the array.

    And array $$$cnt_i$$$ is used to count the number of victories of the $$$i$$$-th player.

    If $$$k \lt tmp$$$

    There are two subcases in this condition:

    • $$$val_{id}$$$ is the biggest number ($$$val_{id} = n$$$): then the answer should be $$$cnt_{id} + k - tmp$$$, because this player will win after round $$$k - tmp - 1$$$

    • Otherwise: the answer is $$$cnt_{id}$$$.

    Otherwise

    There are also two subcases:

    • If $$$val_{id}$$$ is the biggest number in the prefix:

    I use array $$$r$$$ to mark, for each element, the first element on its right that has a bigger value (you can compute the array $$$r$$$ with a stack in $$$O(n)$$$ at the beginning).

    The answer in this case is $$$(id \ne 1) + \min(r_{id} - id - 1,\ k - id + 1)$$$

    • Otherwise: the answer is $$$0$$$.

    In this way, I can answer each query in $$$O(1)$$$, using $$$O(n)$$$ time to precompute $$$tmp$$$ and the arrays $$$cnt$$$ and $$$r$$$.

    • »
      »
      »
      2 weeks ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      I think I have exactly the same approach but I'm still getting TLE. Could you give it a glance?

      Solution

      • »
        »
        »
        »
        2 weeks ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

        Hello sir, I just read your code. You use a.erase(it) on a vector, which has a time complexity of $$$O(n)$$$, so the total time complexity becomes $$$O(n^2)$$$. That is what causes the TLE.

»
6 weeks ago, # |
Rev. 2   Vote: I like it +3 Vote: I do not like it

Is there an English editorial for this contest?

Edit: The issue has been fixed. Thanks.

»
6 weeks ago, # |
  Vote: I like it +1 Vote: I do not like it

Any proof for Div1 B / Div2 E of why greedily choosing the letter with maximum frequency for the current last block works?

  • »
    »
    6 weeks ago, # ^ |
      Vote: I like it +5 Vote: I do not like it

    You could maybe search more on the Zeckendorf representation; that's the formal name for the Fibonacci-sum representation of a positive integer.

  • »
    »
    6 weeks ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Assume the answer is YES with $$$n+1$$$ blocks total. Then we can prove that the count of the letter with the third-highest frequency can't be greater than $$$F_n$$$. If the last block used the letter with the second-highest frequency, then we can prove (because $$$F_n = F_{n-1} + F_{n-2}$$$) that the next two blocks would both have to come from the letter with the maximum frequency, which leads to a contradiction. Therefore the last block has to come from the letter of maximum frequency.

    The parts I didn't prove can be done by messy but routine bounding.

»
6 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

I'm sorry, but I really do not understand the editorial of Div2 D, especially the "$$$n -$$$ (the maximum number of disjoint sub-segments with a XOR of 0)" part.

»
6 weeks ago, # |
Rev. 2   Vote: I like it +36 Vote: I do not like it

Another way to understand the solution to Div1 B is Zeckendorf's theorem, which states that each positive integer can be uniquely represented as a sum of distinct Fibonacci numbers, no two of which are consecutive.

We can apply this representation directly to the frequency of each letter and solve the problem.

And due to this uniqueness of the representation, the greedy solution will also always work.
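For illustration (my own sketch, not part of the editorial), the greedy Zeckendorf decomposition takes the largest Fibonacci number that still fits:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Zeckendorf representation: repeatedly take the largest Fibonacci number
// <= n (n >= 1). The chosen Fibonacci numbers come out pairwise
// non-consecutive, which is the uniqueness property used above.
vector<long long> zeckendorf(long long n) {
    vector<long long> fib = {1, 2};  // F_2, F_3, ...
    while (fib.back() <= n)
        fib.push_back(fib[fib.size() - 1] + fib[fib.size() - 2]);
    vector<long long> parts;
    for (int i = (int)fib.size() - 1; i >= 0 && n > 0; --i)
        if (fib[i] <= n) {
            parts.push_back(fib[i]);
            n -= fib[i];
        }
    return parts;
}
```

For example, $$$100 = 89 + 8 + 3$$$, and no two of these are consecutive Fibonacci numbers.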

»
6 weeks ago, # |
  Vote: I like it -35 Vote: I do not like it

If you write hints, write useful things in the spoilers instead of wasting our time:

The programmer does not need a second hint, he wrote a greedy solution after the first one

The only hint for this problem is: don't try to solve it, it's not worth it.

»
6 weeks ago, # |
  Vote: I like it +20 Vote: I do not like it

Div2 (A–F) video editorial in Chinese:

Bilibili

»
6 weeks ago, # |
  Vote: I like it +5 Vote: I do not like it

I remember now that there was a problem similar to Div2 D on USACO, except that it asked to count the number of subsegments whose sum is divisible by some integer $$$k$$$, if I remember correctly. Similar observations to this problem, but not quite the same.

»
6 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

Div2C:

How can we do this faster? It seems we have an array of integers and want to know the frequency of a given integer in the range $$$[1, k]$$$.

»
6 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

Does anyone have the DP approach to Div2 D1? Also, is there a mistake in "Hint for A1 2"? Should it be $$$a_i = v$$$?

»
6 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

Hello there! A question to the authors of the round. I see that there are a lot of problems about Buryatia in this round. Is one of you from it, or is it just a meme? Thanks.

»
6 weeks ago, # |
Rev. 2   Vote: I like it 0 Vote: I do not like it

During the contest my solution to problem B was accepted, but after the contest it shows that the time limit was exceeded on test case 5. Was that a bug?

  • »
    »
    6 weeks ago, # ^ |
      Vote: I like it +1 Vote: I do not like it

    Nope, not a bug. It looks like you failed the system tests.

    To put it short: the test cases run on your code during the contest are called pretests. These are normally not too big, but they are certainly meant to be strong. After the round is over, there are system tests, which are basically extra cases added on top of the pretests to make sure that your solution is indeed efficient/correct.

»
6 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

I don't understand why my solution for problem Div2 B gets the message "Jury has the answer but participant has not". How is that possible?

Also, in the explanation of the same problem, what is the sum the author refers to? ("but the degree of occurrence of 2 in this sum is")

»
6 weeks ago, # |
Rev. 2   Vote: I like it 0 Vote: I do not like it

Can anyone debug my code and tell me what I'm doing wrong in Div2 C? Here is my approach (two pointers): I iterate over the given permutation with $$$i$$$ and check how many matches the $$$i$$$-th element can win on its right. When I find that the $$$i$$$-th element cannot win any more matches, because I found an element $$$b_j$$$ such that $$$b_j > b_i$$$, then $$$b_j$$$ beats $$$b_i$$$ and wins 1 match from its left, and we check how many matches $$$b_j$$$ can win going further; the process then repeats. I also store the number of matches played before the $$$i$$$-th element could fight. My submission: https://codeforces.com/contest/1719/submission/168725054

»
6 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

Hi everyone! Can someone help me solve this problem?

Given an array $$$a$$$ of $$$n \cdot m$$$ integers partitioned into $$$n$$$ arrays of $$$m$$$ integers each, find the maximum sum obtainable by selecting any number of integers from $$$a$$$ such that no more than $$$k$$$ consecutive elements are selected anywhere in $$$a$$$.

»
6 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

My comment will be published soon

»
6 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

Could someone help me with problem C? My logic was on par with the editorial, but there might be something wrong with the code.

#include <bits/stdc++.h>

using namespace std; 

/*

*/

void solve () {
	int n, q; 
	cin >> n >> q; 
	vector<int> arr(n+1), arr2; 
	for (int i = 1; i <= n; i++){
		cin >> arr[i]; 
	}
	arr2 = arr; 
	int answers[n+1]; 
	int ma = arr[1]; 
	memset(answers, 0, sizeof(answers)); 
	for (int i =2;i <= n;i++) {
		if (ma > arr[i]) {
			answers[ma] ++; 
		}else {
			answers[arr[i]]++; 
			ma = arr[i]; 
		}
	}

	while (q--) {
		int i, k; 
		cin >> i >> k; 
		if (i <= k+1) {
			int as = k - i + 1;
			if (answers[arr[i]] == 0) {
				cout << 0 << endl; 
				continue; 
			}

			if (i == 1) {
				cout << min (answers[arr[i]], as) << endl; 
				continue; 
			}
			
			int ans = 1+min(answers[arr[i]]-1, as); 
			if (arr[i] == n) {
				ans = 1+as; 
			}
			cout << ans << endl; 
		}else {
			cout << 0 << endl; 
		}
	}
}

int main () {
	ios::sync_with_stdio(false);
	cin.tie(0);

	int t;
	cin >> t;
	while (t--) {
		solve(); 
	}
	
	return 0; 
}

»
6 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

Can anyone please tell me my mistake? I am getting a wrong answer on test case 4 in problem C: 168914912

  • »
    »
    6 weeks ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Take a look at Ticket 16054 from CF Stress for a counter example.

    The strongest participant is already at the front of the line, hence they will win all rounds. However, your code prints the number of wins as $$$total\_rounds + 1$$$ while there have only been $$$total\_rounds$$$ fights.

»
6 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

For 1719B, i.e., Mathematical Circus: I made an array $$$A$$$ storing all numbers from $$$1$$$ to $$$n$$$, and another array $$$B$$$ storing all numbers from $$$1+k$$$ to $$$n+k$$$. Then I look for numbers in the two lists that are divisible by 4, with the condition that if $$$A_i$$$ is selected, then $$$A_i+k$$$, i.e. $$$B_i$$$, will not be considered. I count the number of such numbers; if it equals $$$n/2$$$, the answer is true, because we get $$$n/2$$$ numbers divisible by 4, and I pair each of them with any of the remaining elements of $$$A$$$ to form the required pairs. This approach works on the sample input and passes pretest 1 but fails pretest 2. Can anyone please help me figure out the issue?

»
6 weeks ago, # |
Rev. 4   Vote: I like it +18 Vote: I do not like it

I have a solution with bitset in $$$O(\frac{nC\log n}{p(k)}+\frac{nC\log n}{w}+\frac{qC}{w}+\frac{2^kC}{w})$$$.

We just calculate the bitset for every subset of the first $$$k$$$ primes. For larger primes, we can use the trick called "猫树分治" (I don't know how to translate it into English; maybe "cat tree divide and conquer").

Since std::bitset has an extremely small constant, we can solve this problem by this weird method.

Edit:

$$$p(k)$$$ is the $$$k$$$-th prime. I can't prove the exact bound, but it has a small constant.

Edit2:

I think $$$k=14$$$ or $$$k=15$$$ would be the fastest, but that takes too much memory. It runs in 1668 ms with $$$k=13$$$.

»
6 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

For C, here is my code: 168953583. Can anybody please tell me a test on which my code won't work?

»
6 weeks ago, # |
  Vote: I like it -8 Vote: I do not like it
»
6 weeks ago, # |
Rev. 9   Vote: I like it 0 Vote: I do not like it

Hello SomethingNew, can you please explain, for problem 1718A2 (hard version): what do "disjoint sub-segments with XOR 0" actually mean?

I know it is a very noob question, and you could say "just google it"; I have searched and kind of understood it, but when I applied that limited knowledge to my own test cases/examples for this problem, I failed.

For example, in case you want to know, the array I am confused about:

1 3 2 1 2 3 1 

  Prefix XOR:  1 2 0 1 3 0 1

I am not clearly able to find the "disjoint sub-segments with XOR 0" here.

  • »
    »
    6 weeks ago, # ^ |
    Rev. 2   Vote: I like it 0 Vote: I do not like it

    If $$$prefix\_xor_i = prefix\_xor_j$$$ (for $$$i < j$$$), then $$$a_{i+1} \oplus a_{i+2} \oplus \ldots \oplus a_j = 0$$$; "disjoint sub-segments" simply means disjoint subarrays. In your example, the prefix XOR equals $$$0$$$ after the 3rd and the 6th elements, so $$$[1, 3, 2]$$$ and $$$[1, 2, 3]$$$ are two disjoint sub-segments with XOR $$$0$$$.

»
6 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

Can someone share pretest 2? I did exactly what the statement says, but it fails on pretest 2.

»
6 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

What does the line "There is an answer in which segments of length 2 does not intersect with segments of length 1" in the editorial of 1718A2 mean?

»
6 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

In Div2 C, how am I supposed to answer a query in logarithmic time?

I tried storing the number of wins up to the first $$$m$$$ rounds in a grid. Since we're simulating $$$n$$$ rounds, this takes $$$O(n^2)$$$ space, which causes an MLE.

Can somebody please help me out with this?

»
5 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

Div1 A–D solutions for Chinese readers: https://www.cnblogs.com/jklover/p/16595927.html

»
5 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

I'm still unable to understand why the greedy solution works in Div1 B. I noticed that any integer $$$x$$$ with $$$x \gt F_{i}$$$ cannot be represented as a sum of non-consecutive Fibonacci numbers less than $$$F_{i}$$$, so it's always advantageous to take a $$$c_{j}$$$ such that $$$c_{j} \ge F_{i+1}$$$ if we are trying to represent $$$F_{i}$$$. But why does taking the maximum $$$c_{j}$$$ always lead to the answer? Can someone explain?

»
5 weeks ago, # |
Rev. 3   Vote: I like it 0 Vote: I do not like it

A2. Burenka and Traditions (hard version): https://codeforces.com/contest/1718/submission/169707649. I just can't understand why this gives TLE, although it's $$$O(n)$$$.

»
5 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

I don't think the editorial of 1E is understandable, but my poor English doesn't allow me to write a whole editorial.

Still, I can give a proof of the time complexity. Divide the connected components of the two graphs into groups, such that each component in a group has the same number of left vertices. Then we only need to consider matchings between vertices whose connected components are in the same group.

Now consider only the components in the $$$i$$$-th group. Let $$$x_i$$$ be the number of left vertices in each component, $$$y_i$$$ the number of such components in graph $$$a$$$, and $$$z_i$$$ the number of edges of graph $$$b$$$ in this group. If $$$x_i=0$$$, each component is made up of only right vertices, and this situation is easy to deal with.

Otherwise $$$x_i\ne0$$$. Consider a component in $$$a$$$ and a vertex $$$u$$$ in it; we are looking for a vertex to match with $$$u$$$. For an edge in $$$b$$$, suppose it belongs to component $$$S$$$; then the edge contributes to the running time only when we try to match $$$u$$$ with some left vertex in $$$S$$$. So the time complexity of dealing with this group is $$$O(x_iy_iz_i)$$$, and the total time is $$$\sum x_iy_iz_i\le(\sum x_iy_i)(\sum z_i)=O(n^2m)$$$. Hence we can solve the whole problem in $$$O(nm\sqrt{nm})$$$ time.

»
5 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

I got WA on test 2, case 275 of 1718B. Could somebody please tell me why I am wrong? I'm using std algorithms, so there shouldn't be an implementation mistake or anything like that. Here is my submission: 170024965

»
5 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

Coincidentally, I noticed that Div2 C this time is very similar to Round 569 Div2 C. XD

»
4 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

Stuck on Fibonacci Strings, 1718B (link: https://codeforces.com/problemset/problem/1718/B), for the past few days; I can't figure out why it's going wrong. Any help would be really nice.

Submission: https://codeforces.com/problemset/submission/1718/170409697 Test 9: WA on case 136, found YES instead of NO.

»
45 hours ago, # |
Rev. 2   Vote: I like it 0 Vote: I do not like it

.