We will hold AtCoder Grand Contest 056. This contest counts for GP30 scores, and this is the final AGC of this year.

- Contest URL: https://atcoder.jp/contests/agc056
- Start Time: http://www.timeanddate.com/worldclock/fixedtime.html?iso=20211204T2100&p1=248
- Duration: 180 minutes
- Number of Tasks: 6
- Writer: maroonrk
- Tester: maspy, yamunaku
- Rated range: 1200 -

The point values will be 300-900-900-1600-1600-1600.

We are looking forward to your participation!

Good luck on the contest, starting in 50 minutes.

Fun fact: For all AGC problems with score>=1600 this year, the number of accepted solutions is no more than 1.

The scores of the problems are quite strange.

AGC is really hard; I can't even solve any of the problems.

I can't even solve a single problem!

AtCoder has great problems, but the difficulty estimation is so bad it's not even fun. In every ABC I took part in, the first 5 problems got >700-800 solves and the 6th had like <100; come on, how come yesterday's F and E are both weighted 500 points??? On the other hand, in AGC I have never solved a single problem, and the same holds today. ARC is the only balanced one, but considering the contest duration, I really don't find AGC/ABC fun anymore; I just feel like I'm wasting 2-3 hours of my life every weekend, although in practice I tend to learn a lot from them.

Well, they can't make everyone happy. I have heard many LGMs/GMs/IMs saying they like AGC/AtCoder the way it is, and they seem to be the targeted contestants, so I guess you won't be seeing any changes in AGC.

Speaking of ABC, I find it educational enough, and I get to learn new things from almost every ABC, so the points don't matter much to me.

- AtCoder fan

I never requested changes; I just stated my opinion on how it currently is. It may or may not be right, but again, it's my personal opinion.

When it comes to learning from ABC, I'd argue 5/8 problems are just as bad as a regular Div. 3 round on Codeforces, and the rest can't match CF Edu 70% of the time. Doing ABCs feels like solving CodeChef Longs: a couple of trivial ad-hoc problems, and a couple of trivial implementation problems that fall apart once you've encountered the specific rare technique required for the problem, e.g. FFT/NTT/automata/Gray codes...

I participated in AGC054 (solved 1 problem), AGC055 (solved 0 problems) and AGC056 (solved 1 problem). Problem A in AGC contests does not seem to be unreasonably hard.

I suspect that your main problem is that you overestimate the problems' difficulty and give up too early. Also, your comment was posted long before the contest was over; you still had a lot of time to work on solving at least problem A.

Problem C:

Why does it pass when I change unordered_set to set???!! Did the problem setters intentionally kill unordered_set in the test "01-027"?

https://atcoder.jp/contests/agc056/submissions/27696901

https://atcoder.jp/contests/agc056/submissions/27696734
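For context on the set vs. unordered_set difference: std::set guarantees O(log n) per operation, while std::unordered_set can be pushed to O(n) per operation by inputs whose keys all collide under the default hash, and problem setters sometimes include such anti-hash tests on purpose. A common defense, sketched here as my own illustration (not code from either submission), is a randomized splitmix64-based hash:

```cpp
#include <chrono>
#include <cstdint>
#include <unordered_set>

// splitmix64-based hash with a run-time random seed, so collision-heavy
// tests cannot be prepared against a fixed hash function.
struct SplitMix64Hash {
    static uint64_t splitmix64(uint64_t x) {
        x += 0x9e3779b97f4a7c15ULL;
        x = (x ^ (x >> 30)) * 0xbf58476d1ce4e5b9ULL;
        x = (x ^ (x >> 27)) * 0x94d049bb133111ebULL;
        return x ^ (x >> 31);
    }
    size_t operator()(uint64_t x) const {
        static const uint64_t seed =
            std::chrono::steady_clock::now().time_since_epoch().count();
        return splitmix64(x + seed);
    }
};

using SafeSet = std::unordered_set<uint64_t, SplitMix64Hash>;
```

Because the hash depends on a run-time seed, an adversary cannot precompute a colliding test case against it.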

I solved A, but my rank is only 5 places higher than people who didn't solve anything.

Some feedback on the contest:

Overall, it was a very good contest with an amazing problem. Thanks for the contest!

Feedback on your feedback on C:

It's cool, but this idea about the lower bound also appeared in 1450E - Capitalism (and remembering it helped me solve the problem fast).

But the contest was still amazing, thanks a lot to maroonrk!

In problem C, I can see from your code and also the editorial that we make a graph where we add an edge of weight $$$1$$$ between adjacent $$$v_i$$$'s and an edge of weight $$$0$$$ between each pair given in the problem. Then we find the shortest path from $$$v_0$$$ to all the $$$v_i$$$'s.

But I can't understand what the shortest path from $$$v_0$$$ to $$$v_i$$$ has to do with maximizing the $$$v_i$$$ lexicographically. Can you please explain this "cow technique": how are the inequalities and the shortest paths equivalent?

The cost-$$$0$$$ edges force certain $$$v_i$$$s to be equal, and the cost-$$$1$$$ edges allow increments from both the left and right until they meet in the middle. Intuitively, if you graph the resulting $$$v_i$$$, you can see that from left to right, the graph will increase unless forced to decrease by one or more equality constraints, matching the greedy nature of lexicographic ordering.
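As a concrete sketch of the shortest-path step (my own generic illustration, not the actual submission): since every edge has weight 0 or 1, the distances can be computed in linear time with 0-1 BFS, using a deque instead of a priority queue.

```cpp
#include <deque>
#include <utility>
#include <vector>

// 0-1 BFS: shortest paths in a graph whose edges all have weight 0 or 1.
// adj[u] holds pairs (v, w) with w in {0, 1}. Runs in O(V + E).
std::vector<int> zero_one_bfs(
        const std::vector<std::vector<std::pair<int, int>>>& adj, int src) {
    const int INF = 1 << 30;
    std::vector<int> dist(adj.size(), INF);
    std::deque<int> dq;
    dist[src] = 0;
    dq.push_back(src);
    while (!dq.empty()) {
        int u = dq.front();
        dq.pop_front();
        for (auto [v, w] : adj[u]) {
            if (dist[u] + w < dist[v]) {
                dist[v] = dist[u] + w;
                // Weight-0 neighbors go to the front, weight-1 to the back,
                // keeping the deque ordered by distance.
                if (w == 0) dq.push_front(v); else dq.push_back(v);
            }
        }
    }
    return dist;
}
```

In the setting described here, the nodes would be the indices of the $$$v_i$$$'s, with weight-1 edges between adjacent indices and weight-0 edges for the given pairs; the distance from $$$v_0$$$ then caps (and, per the discussion below, realizes) the value of each $$$v_i$$$.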

Can you please explain your solution for C?

It is exactly the one described in the editorial.

The magic moment was when I noticed that the distance provides not only a lower bound but also a construction.

Not at all, an easy contest!!!!!!

An easy construction for A

It's easy to observe that this construction works for all n (>= 5).

My solution for A

Randomly generate a bunch of 5x5, 6x6 and 7x7 blocks and cherry-pick a few of them with interesting properties. I used the following set of blocks for construction:

Keep placing 5x5 blocks (5 connected components each) in the bottom right corner; this reduces the size of the grid by 5 at each step. The remaining part to handle in the top left corner will be between 5x5 and 9x9. Handle each of these remaining cases separately using blocks from the available set. For example, if N=8, then we can use a 3x3 block (1 connected component) and a 5x5 block (7 connected components).

Sample output for N=18

Code: https://atcoder.jp/contests/agc056/submissions/27694896

Edit: C++ source of the random blocks generator/validator (sizes from 3x3 to 8x8)

My solution for A: We need only 6 blocks: 6x6, 7x7, 8x8, 9x9, 10x10 and 11x11. Generate them manually. The 6x6 is given to us, so we need to manually generate only 5 blocks. How do we build an NxN grid using these 6 blocks? Suppose N is 19, which is equal to 6 + 6 + 7. First generate an NxN grid of dots only. Then overwrite it with a 6x6 block at coordinate (0,0), another 6x6 block at coordinate (6,6), and finally a 7x7 block at coordinate (12,12). The resulting NxN grid will satisfy the criteria.

Code: https://atcoder.jp/contests/agc056/submissions/27696613
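The size-decomposition step of this solution can be sketched as follows (my own illustration; `block_sizes` is a hypothetical helper, not taken from the linked submission): greedily peel off 6x6 blocks until the remainder itself fits one of the manually generated sizes.

```cpp
#include <vector>

// Splits N (>= 6) into block sizes from {6, ..., 11}: peel off size-6
// blocks until the remainder lands in the [6, 11] range.
std::vector<int> block_sizes(int n) {
    std::vector<int> sizes;
    while (n > 11) {
        sizes.push_back(6);
        n -= 6;
    }
    sizes.push_back(n);  // 6 <= n <= 11 holds here
    return sizes;
}
```

For N = 19 this yields {6, 6, 7}, matching the example above.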

How do you manually generate a valid 11x11 block? This is the most difficult part of your solution, and you provide no explanation for it. Did you rely on some additional observations and/or heuristics?

Long explanation

It is a manual process with the following general rule, starting from a valid 6x6 block:

To arrive at a valid 7x7 block, I first added a '#' at the new coordinate (6,6):

Now there are 19 '#'s instead of 21. I need to add 2 more while keeping the constraints that there are 7 connected components and that each row and column has only 3 '#'s. I need to trade 1 '#' for 2 '#'s, thereby gaining 1 extra '#'; repeating this twice grows the count from 19 '#'s to 21 '#'s. First, I removed (5,5) and added (5,6) and (6,5):

Now there are 20 '#'s. Next, I removed (2,4) and added (2,6) and (6,4). (Note the pattern in the x,y coordinates; it is chosen so that the number of '#'s per row and per column remains unchanged.)

Now there are 21 '#'s, which can potentially fulfill the "3 cells per row and column" requirement; in this case it is fulfilled. However, another requirement is broken: the number of connected components is 8, not 7. I need to make another trade, but this time a 1-for-1 trade. I removed (2,6) and (4,5), and added (2,5) and (4,6) instead (again, note the pattern in the x,y coordinates):

Now all requirements for the 7x7 block are fulfilled: there are 7 connected components.

Sorry, there are a lot of human decisions here about which '#'s to trade. That's why it took me a long time to submit my solution.

Similar logic works for growing from 7x7 to 8x8, and so on, all the way from 10x10 to 11x11. Use the previously generated 7x7 as the starting point when generating the 8x8.
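Since every intermediate grid has to be re-checked after each trade, a small validator helps. The following is my own sketch (hypothetical helper names, not the commenter's code), checking exactly 3 '#'s per row and column and counting 4-connected components:

```cpp
#include <string>
#include <utility>
#include <vector>

// Counts 4-connected components of '#' cells via iterative flood fill.
int count_components(const std::vector<std::string>& g) {
    int n = g.size(), comps = 0;
    std::vector<std::vector<bool>> seen(n, std::vector<bool>(n, false));
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j) {
            if (g[i][j] != '#' || seen[i][j]) continue;
            ++comps;
            std::vector<std::pair<int, int>> st{{i, j}};
            seen[i][j] = true;
            while (!st.empty()) {
                auto [x, y] = st.back();
                st.pop_back();
                const int dx[] = {1, -1, 0, 0}, dy[] = {0, 0, 1, -1};
                for (int d = 0; d < 4; ++d) {
                    int nx = x + dx[d], ny = y + dy[d];
                    if (0 <= nx && nx < n && 0 <= ny && ny < n &&
                        g[nx][ny] == '#' && !seen[nx][ny]) {
                        seen[nx][ny] = true;
                        st.push_back({nx, ny});
                    }
                }
            }
        }
    return comps;
}

// A block is valid if every row and column has exactly 3 '#'s
// and the number of connected components matches the target.
bool valid_block(const std::vector<std::string>& g, int target) {
    int n = g.size();
    for (int i = 0; i < n; ++i) {
        int row = 0, col = 0;
        for (int j = 0; j < n; ++j) {
            row += g[i][j] == '#';
            col += g[j][i] == '#';
        }
        if (row != 3 || col != 3) return false;
    }
    return count_components(g) == target;
}
```

For the 7x7 block described above, one would check it with valid_block(g, 7).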

The editorial for D is about as useful to me as a sequence of random characters. Maybe after some time I would be able to parse and accept it, but it gives absolutely no intuition and looks ridiculous. Can somebody explain their reasoning on this problem? Maybe dario2994? (You said that the solution seems easy to guess, despite the fact that the solution from the editorial looks ridiculous, so I suppose you must have had some intuition behind what you were doing.)

An intuitive way is to think about how Alice can win if $$$L=R$$$ and there is a specific subset whose sum is $$$L$$$. Then Alice needs to pair the numbers to make sure that she can take this subset; otherwise it's impossible for her to win.

Then we can think about what happens if the numbers are of the form $$$x + y\times \varepsilon$$$: Alice still needs to take numbers from the pairs, while Bob can now choose arbitrary numbers from every pair. So it's not hard to guess the solution.

My claim is that if Alice can win then she can win with the following strategy. Sort the array $$$A$$$.

Choose two special indices $$$i\not= j$$$. Then pair the remaining indices into $$$\frac n2-1$$$ adjacent pairs. For example, if $$$n=10$$$ and $$$i=3$$$, $$$j=8$$$, then the pairs are $$$(1,2)$$$, $$$(4,5)$$$, $$$(6,7)$$$, $$$(9,10)$$$. Let the pairs be $$$(l_k, r_k)$$$.

If $$$L \le A_i + A_{l_1} + A_{l_2} + \cdots + A_{l_{n/2-1}} \le A_i + A_{r_1} + A_{r_2} + \cdots + A_{r_{n/2-1}} \le R$$$, then Alice can win. Indeed, she begins by choosing $$$A_i$$$ and then takes exactly one element from each pair (and $$$A_j$$$ will be taken by Bob).

One has to check all possible $$$O(n^2)$$$ strategies (one for each choice of $$$i\not=j$$$).
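A brute-force check of these candidate strategies could be sketched as follows (my own illustration of the sufficient condition stated above; `alice_can_win` is a hypothetical name, and this naive version runs in $$$O(n^3)$$$ rather than the achievable $$$O(n^2)$$$):

```cpp
#include <algorithm>
#include <vector>

// Checks the sufficient condition: for some special indices i != j,
// pairing the remaining sorted elements adjacently gives
// L <= A[i] + (sum of smaller pair elements) and
//      A[i] + (sum of larger pair elements) <= R.
// n is assumed even.
bool alice_can_win(std::vector<long long> a, long long L, long long R) {
    std::sort(a.begin(), a.end());
    int n = a.size();
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j) {
            if (i == j) continue;
            std::vector<long long> rest;
            for (int k = 0; k < n; ++k)
                if (k != i && k != j) rest.push_back(a[k]);
            long long lo = a[i], hi = a[i];
            for (int k = 0; k + 1 < (int)rest.size(); k += 2) {
                lo += rest[k];      // smaller element of the adjacent pair
                hi += rest[k + 1];  // larger element of the adjacent pair
            }
            if (L <= lo && hi <= R) return true;
        }
    return false;
}
```
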

Why, if Alice can win, can she win with such a simple strategy? No idea. Am I even sure that this is correct? Nope, but it gets accepted.

Oh wow, that makes total sense, thanks. Actually, I had already written code that does almost exactly that, but I made one stupid mistake in my reasoning, so I missed it.

I wrote exactly the same solution and am struggling to prove why it works.

After some analysis of small cases, I propose the following "proof".

Let us prove that Alice can win with dario2994's strategy by induction on n.

Suppose that Alice's winning strategy does not exist after choosing the first number.

Name the remaining numbers a_1, a_2, ..., a_(2t+1).

It is simple to find Bob's strategy if Alice has to declare, after choosing the first number, one number that she will not choose at all in this game: he just has to either keep choosing the maximum or keep choosing the minimum.

Now, if Bob's strategy is the same regardless of which number Alice declares she will not choose, then we are done. Hence, we need to examine the case when, if Alice declares she will not choose a_k, Bob can keep choosing the minimum to win, and if Alice declares she will not choose a_(k+1), Bob can keep choosing the maximum to win.

If we expand this condition out in terms of R and L, there is one number (either a_k or a_(k+1)) that does not appear, and excluding this number, the condition will be of the form (sum of the (2p-1)-th numbers) < L < R < (sum of the (2p)-th numbers). Bob should choose that one number not appearing in the condition, and then (hopefully) mathematical induction completes the proof. (I think the condition above means that if Alice then chooses i and avoids j, then Bob wins by choosing the maximum if i < j, and the minimum otherwise.)

I apologize for my inability to write LaTeX, and I hope it is readable.

You can reduce the problem to the minimax game on absolute values described in the editorial.

Notice that $$$\lvert x \rvert = \max(x,-x)$$$, so the problem can be described as finding the value of nested min/max functions of some functions $$$P_i(X,a_0,a_1,...,a_{n-1})$$$ in which the coefficient of each of $$$X, a_i$$$ is $$$\pm 1$$$.

Observe that when some $$$a_i$$$ increases or decreases by $$$1$$$, the result increases or decreases by $$$1$$$, because each $$$P_i(X,a_0,a_1,...,a_{n-1})$$$ increases or decreases by $$$1$$$ and the result is invariant modulo 2. Also observe that adding $$$2$$$ equal values $$$a_n = a_{n+1}$$$ to $$$(a_i)$$$ does not change the game: whenever an additional move gives an advantage to one player, the other can cancel it by making exactly the same move.

Now we want to build the state $$$(a_0,a_1,...,a_{n-1})$$$ from some state with value $$$0$$$. The absolute value is always $$$\geq 0$$$, so we measure how close the result can get to $$$0$$$, i.e. we tighten the upper bound on the result.

Since we want the minimum of the maximum of the result, we need to start at Bob's turn with $$$n$$$ odd. A state with value $$$0$$$ is a single $$$X$$$; we add $$$\frac{n-1}{2}$$$ pairs of equal values to get $$$n$$$ numbers while minimizing the number of adjustments needed to arrive at the state $$$(a_i)$$$. We get that by sorting the array $$$(a_0,a_1,...,X)$$$, pairing adjacent numbers, and, for each pair not containing $$$X$$$, adding two copies of one of its elements.

The rest is to prove it; you can write a program to check it, use induction, etc. I got a proof related to coloring the line, but it's quite complicated.

The problems were amazing!! :+1:

B was too difficult for me though :)

My solution for A

For a given n, take the sequence that is all dots except for the first 4 characters, which are "#.##". If you place the n cyclic shifts of this sequence as the n rows (row i shifted by i), you get n+2 components, and the conditions are satisfied.

e.g for n=5 you get

which has 7 components

Hence, for a given n >= 8, you can do this construction for n-3 and place a 3x3 block of '#'s in the top 3 rows. This gives n components in total. E.g. to get the solution for n=8 you do

This gives a solution for all n >= 8. For n=6 you can steal the example given in the sample. For n=7 you can randomly generate something that works: I wrote a checker that tests whether a given grid is correct, then created grids by appending 7 random shuffles of "###...." until I got something that worked. I ended up with
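The cyclic-shift construction is easy to sanity-check programmatically. Below is my own sketch of such a checker (not the commenter's actual code), building the shifted grid and counting 4-connected components; for n=5 it confirms the 7 components mentioned above.

```cpp
#include <string>
#include <utility>
#include <vector>

// Row i is the pattern "#.##" (padded with dots to length n),
// cyclically shifted right by i positions.
std::vector<std::string> shifted_grid(int n) {
    std::string base = "#.##" + std::string(n - 4, '.');
    std::vector<std::string> g(n, std::string(n, '.'));
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j)
            if (base[j] == '#') g[i][(j + i) % n] = '#';
    return g;
}

// 4-connected flood-fill component count over '#' cells.
int components(const std::vector<std::string>& g) {
    int n = g.size(), cnt = 0;
    std::vector<std::vector<bool>> seen(n, std::vector<bool>(n, false));
    for (int si = 0; si < n; ++si)
        for (int sj = 0; sj < n; ++sj) {
            if (g[si][sj] != '#' || seen[si][sj]) continue;
            ++cnt;
            std::vector<std::pair<int, int>> st{{si, sj}};
            seen[si][sj] = true;
            while (!st.empty()) {
                auto [x, y] = st.back();
                st.pop_back();
                const int dx[] = {1, -1, 0, 0}, dy[] = {0, 0, 1, -1};
                for (int d = 0; d < 4; ++d) {
                    int nx = x + dx[d], ny = y + dy[d];
                    if (0 <= nx && nx < n && 0 <= ny && ny < n &&
                        g[nx][ny] == '#' && !seen[nx][ny]) {
                        seen[nx][ny] = true;
                        st.push_back({nx, ny});
                    }
                }
            }
        }
    return cnt;
}
```

Each row has 3 '#'s by construction, and since every row uses a distinct shift, each column also ends up with exactly 3.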

A bit of a different view of problem B, which has worked for me for most problems about segments, maxima and the like. Our DP is just $$$dp[i][j]$$$: the answer for the segment $$$[i,j]$$$ when we look only at maxima of subsegments of $$$[i,j]$$$. How do we recalculate it?

Let's call an element big if it is maximal on every segment containing it. Surely there are some big elements. Suppose we fix a set $$$S$$$ of elements which will be big, and denote by $$$F(S)$$$ the number of sequences for which the elements of $$$S$$$ are big. Then it is easy to calculate $$$F(S)$$$: the big elements split $$$[i,j]$$$ into some subsegments, and $$$F(S)$$$ is just the product of the $$$dp$$$ values of these subsegments. The only condition on the big elements is that two different big elements can't be contained in one segment.

But $$$dp[i][j] =\sum_S(-1)^{|S|+1}F(S)$$$ by the inclusion-exclusion principle, and this can be easily calculated in $$$O(n)$$$ with another DP.

An alternate approach to E:

We can view the problem as generating several pieces of cheese on a circle. Each time we pick a piece of cheese and check it against the next mouse: it is eaten with probability 1/2 (if the mouse hasn't eaten anything yet) and travels along an arc otherwise.

Notice that the order in which we pick the cheese to check doesn't matter. So we can work arc by arc rather than cheese by cheese, which can be done with DP.