Hi all,

The third contest of the 2020-2021 USACO season will be running from February 26th to March 1st this weekend. Good luck to everyone! Please wait until the contest is over for everyone before discussing problems here.



Thanks for the link! xiaowuc1, you should add it to the announcements if there isn't a policy reason not to!

why not

Thank you guys!

What was it previously?

It used to be C++11 only, not sure about the Python/Java versions but I think they were also several years old.

Ah, I remember last week I looked at a USACO page and it had `C++` and `C++11`, so I can only imagine how old `C++` was, haha.

AFAIK, the `C++` option was `C++98`. I hope they haven't removed it though; there have been times where `C++11` was too slow, but my program passed in `C++98`. Does anyone know if that option is still available?

Edit: I guess those were anomalies/luck

I'm surprised; I thought the newer versions on CF were faster since they use newer compilers. But I'm not really sure.

Is there any place where the method for calculating your score is described in detail (including how approximation works in that calculation)?

I'm not sure if it's described anywhere, but it's pretty easy to determine experimentally (by just looking at past USACO results) what the formula is. If you got $$$\frac{a}{x}$$$ test cases on problem 1, $$$\frac{b}{y}$$$ test cases on problem 2, and $$$\frac{c}{z}$$$ test cases on problem 3, your score is $$$1000(\frac{a}{x} + \frac{b}{y} + \frac{c}{z})/3$$$.
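As a quick sanity check of that formula, here is a tiny helper; the function name and the integer-fraction interface are just illustrative, not anything official:

```cpp
// Hypothetical helper mirroring the formula above: a/x, b/y, c/z are the
// fractions of test cases passed on problems 1-3; the total is scaled so
// that a perfect contest is worth 1000 points.
double usacoScore(int a, int x, int b, int y, int c, int z) {
    return 1000.0 * (double(a) / x + double(b) / y + double(c) / z) / 3.0;
}
```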

Thank you! It really makes sense!

I think the contest is over (even for someone who started at the last minute, the clock has now run for the full four hours). Can anyone confirm?

Yes, it's over (you can't access the contest page on the website anymore).

Can anyone explain their solutions to Plat 2 & 3?

Problem recap: For a graph $$$G$$$, define $$$f_G(a, b)$$$ as the boolean value of whether there is a walk from vertex $$$1$$$ to vertex $$$a$$$ using exactly $$$b$$$ edges. Given a connected graph $$$G$$$ (possibly with self-loops), we want to find a graph $$$G'$$$ (possibly with self-loops but no parallel edges) on the same set of vertices which satisfies $$$f_G(a, b) = f_{G'}(a, b)$$$ for all $$$a$$$ and $$$b$$$. Problem 2 asks to minimize the number of edges in $$$G'$$$, and problem 3 asks to count the number of such graphs $$$G'$$$.

First, note that the graph is undirected. This means that we can always walk back and forth along an edge to increase our distance by 2, so $$$f_G(a, b) \implies f_G(a, b+2)$$$. Thus, for each vertex $$$a$$$, we actually only care about the shortest distance to reach it with either parity. We will represent each vertex $$$a$$$ as an ordered pair $$$(x, y)$$$ where $$$x$$$ is the minimum distance from vertex $$$1$$$ to vertex $$$a$$$, and $$$y$$$ is the minimum distance from vertex $$$1$$$ to vertex $$$a$$$ of the opposite parity as $$$x$$$. Note that $$$x < y$$$, and $$$y$$$ may be equal to $$$\infty$$$. All we need are for these ordered pairs to match up between $$$G$$$ and $$$G'$$$.
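These pairs can be computed with a single BFS over $$$(vertex, parity)$$$ states. A minimal sketch (0-indexed, so the problem's vertex $$$1$$$ is vertex $$$0$$$ here; `parityBFS` is a name I made up):

```cpp
#include <array>
#include <queue>
#include <utility>
#include <vector>

const long long INF = 1e18;

// BFS over (vertex, parity) states. Returns dist[v][p] = length of the
// shortest walk from vertex 0 to v whose length has parity p, or INF if
// no such walk exists. adj is an undirected adjacency list; self-loops
// are fine (a self-loop at v simply connects (v, p) to (v, p ^ 1)).
std::vector<std::array<long long, 2>> parityBFS(const std::vector<std::vector<int>>& adj) {
    int n = adj.size();
    std::vector<std::array<long long, 2>> dist(n, {INF, INF});
    dist[0][0] = 0;
    std::queue<std::pair<int, int>> q;
    q.push({0, 0});
    while (!q.empty()) {
        auto [v, p] = q.front();
        q.pop();
        for (int u : adj[v]) {
            if (dist[u][p ^ 1] == INF) {  // each of the 2n states is visited once
                dist[u][p ^ 1] = dist[v][p] + 1;
                q.push({u, p ^ 1});
            }
        }
    }
    return dist;
}
```

Each vertex's pair is then $$$x = \min(dist[v][0], dist[v][1])$$$ with $$$y$$$ the other value; in a bipartite graph the off-parity value stays at INF.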

Let's consider the problem where we're given a collection of these ordered pairs, and we want to add some edges to make the ordered pairs actually reflect the shortest paths. Note that a vertex has shortest path $$$d$$$ iff there is no edge to a vertex with shortest-path less than $$$d-1$$$, and for each vertex, there is at least one edge to a vertex with shortest path exactly $$$d-1$$$. Let's formalize this.

Because the graph is undirected, we can only add an edge between two vertices $$$v_1=(x_1, y_1)$$$ and $$$v_2=(x_2, y_2)$$$ if $$$x_1$$$ and $$$x_2$$$ are close, and $$$y_1$$$ and $$$y_2$$$ are close; otherwise the edge will form a shorter path than is allowed. More precisely, we can derive these constraints:

Therefore, we can have the following types of edges:

A graph is valid iff it uses only edges of the above form, and each vertex $$$(x,y)$$$ has at least one edge to a vertex with value $$$x-1$$$ and at least one edge to a vertex with value $$$y-1$$$ (or $$$y = \infty$$$).

Note that some $$$y = \infty$$$ iff the graph is bipartite, in which case all of the $$$y$$$ are $$$\infty$$$. This case is easy to solve for both parts: we can only add edges when $$$\lvert x_1 - x_2 \rvert = 1$$$, and each vertex must have at least one edge to a smaller $$$x$$$.

Otherwise, $$$y \ne \infty$$$, and we can think about this as a collection of $$$2n$$$ constraints, one for each $$$x$$$ and one for each $$$y$$$. Each edge satisfies 2 of these constraints: type 2 edges satisfy both constraints for the right-side vertex, type 3 edges satisfy the $$$y$$$ constraint on the left vertex and the $$$x$$$ constraint on the right vertex, and type 4 edges satisfy the $$$y$$$ constraint on both vertices. Thus, we can think of this as an edge-cover problem in the graph of constraints. The graph of constraints is very well structured: all edges connect two constraints whose vertices satisfy $$$x_1 + y_1 = x_2 + y_2$$$, and within a group of equal $$$x+y$$$, the graph essentially forms "layers", with these vertices and edges. For $$$x+y = 2g+1$$$, the group looks like:

- $$$y$$$-constraints of vertices $$$(g,g+1)$$$
- $$$x$$$-constraints of vertices $$$(g,g+1)$$$
- $$$y$$$-constraints of vertices $$$(g-1,g+2)$$$
- $$$x$$$-constraints of vertices $$$(g-1,g+2)$$$

Then, we can solve problem 2 using a greedy algorithm (going backwards through this chain), and we can solve problem 3 using an $$$O(n)$$$-state, $$$O(n^2)$$$-transition DP based on the number of unfulfilled constraints at each level (this requires some combinatorics/PIE).

One edge case to handle is that the root's $$$x$$$-constraint is automatically satisfied.

Any chance you could share your code for #3? I finished my solution shortly after the contest ended, and I’d like to stress test it against a correct solution in order to figure out whether I still have any bugs left to fix.

https://ideone.com/qaYE9T

Thank you!

UPD: I think my code is correct after stress testing it against the solution above. In case anyone's curious, here are my solutions to the Platinum problems:

Problem 1: https://pastebin.com/B48yzp4B

Problem 2: https://pastebin.com/kmLvisir

Problem 3: https://pastebin.com/CAPxzb4m

I'm fairly certain my P3 code is (much) nastier than it needs to be, but ah well. (That said, it's only slightly longer than ecnerwala's solution above.) I can't guarantee that my P3 solution is fully correct, but my generator isn't finding any countercases. (The other two solutions received AC in-contest.) As a bonus, my P2 code includes commented out versions of several incorrect greedy solutions I tried :p

Why does the greedy strategy work in problem 2? After constructing the meta graph, I couldn't figure out how to best cover things. On an unrelated note, is there a "flowy" solution to P2? In contest, I proved that, aside from the cliques formed by nodes of the type $$$(x, x+1)$$$, the graph is bipartite (I might be wrong though). Is there a way to model this as a flow problem? Min edge cover can be reduced to max matching, so without those cliques it's just max matching on a bipartite graph.

Yes, it is just max matching, but general flow algorithms would be too slow; you want to use the structure of this graph. I'll talk about this in terms of max matching, but the same arguments work if you directly think about edge cover.

Why the greedy max matching works: the graph is organized into layers, where each layer is connected to the previous and next, and furthermore the bottom-most layer is a clique. Then, if you start with the top-most layer, those vertices might as well be matched to arbitrary vertices in the next layer; there's nothing else for them to be matched to, and "saving" the next layer's vertex isn't worth it, because it's worth at most one other matching (this is the matroid-like property of bipartite max matching in general). The vertices in each layer are totally symmetric, so there are no real tradeoffs inside a layer we need to think about. Thus, you match as many as you can, and then move on to the next layer. At the bottom, the max matching of the leftover vertices in the clique is obviously $$$\lfloor v/2 \rfloor$$$.
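As a sketch of this argument (not the actual contest solution), suppose the layers are given top-down just as sizes, with consecutive layers completely connected and the bottom layer additionally a clique; the top-down greedy then reads:

```cpp
#include <algorithm>
#include <vector>

// Sketch of the greedy above on a simplified layered graph: layers[0] is the
// top-most layer, consecutive layers are completely connected, and the
// bottom-most layer is additionally a clique. Returns the size of the
// matching produced by the top-down greedy.
long long layeredMatching(const std::vector<long long>& layers) {
    long long matched = 0, leftover = 0;  // leftover = unmatched vertices carried down
    for (long long sz : layers) {
        long long m = std::min(leftover, sz);  // match carried vertices into this layer
        matched += m;
        leftover = sz - m;                     // this layer's unmatched remainder
    }
    matched += leftover / 2;  // bottom layer is a clique: pair leftovers internally
    return matched;
}
```

For example, layers of sizes 2, 3, 4 give 2 cross edges, then 1 more, then 1 clique edge among the 3 leftover bottom vertices, for a matching of size 4.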

Thanks for all of this! I understand all of these comments.

When I read the analysis of P2 from Benq, I understood everything except two cases and the explanation of the quadratic DP (I haven't read approach 2 yet).

From your explanation, I can't picture what greedy I need.

Can someone explain how it should work in this task?

when understanding a subtask is harder than an entire problem

383 on USACO Gold

Modern Art 3

On an unrelated note, how hard was this gold contest? This is my first time participating in USACO Gold, and I'm curious how bad my 383 is :D

Can anyone please explain their solutions to Gold Problem 2, and Problem 1 as well? I solved Problem 1, but I'm not sure about my approach because I used a hashing solution.

Here is my solution to problem 2:

Firstly, remove all runs of equal elements and replace each by a single copy of that element. For example, if $$$2,2,2$$$ is a run in the array, just replace it by $$$2$$$, because we can always include the other $$$2$$$'s while painting. Let $$$dp[i][j]$$$ denote the minimum number of moves to paint the subarray from $$$j$$$ to $$$i$$$. Notice that if the first and last elements of this range are equal, we can just paint from either $$$j$$$ to $$$i-1$$$ or from $$$j+1$$$ to $$$i$$$, since in the move where we paint the $$$i^{\text{th}}$$$ element we can paint the $$$j^{\text{th}}$$$ element too. Otherwise, we iterate over all values $$$j \leq mid \leq i-1$$$ and take the minimum of $$$dp[mid][j] + dp[i][mid+1]$$$, since we can always find a $$$mid$$$ where it is optimal to split the array and solve the two halves separately. The time complexity for calculating the DP is $$$O(n^3)$$$. Here's my implementation during the contest
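Here is a sketch of that DP with the run-collapsing folded in; this is my reconstruction rather than the linked contest code, written 0-indexed with the split as $$$dp[l][m] + dp[m+1][r]$$$:

```cpp
#include <algorithm>
#include <climits>
#include <vector>

// Sketch of the interval DP above. Runs of equal elements are collapsed
// first; dp[l][r] = minimum strokes to paint a[l..r]. If a[l] == a[r],
// the stroke painting one endpoint extends to cover the other, so
// dp[l][r] = dp[l][r-1]; otherwise try every split point.
int minStrokes(std::vector<int> a) {
    a.erase(std::unique(a.begin(), a.end()), a.end());  // collapse equal runs
    int n = a.size();
    if (n == 0) return 0;
    std::vector<std::vector<int>> dp(n, std::vector<int>(n, 0));
    for (int i = 0; i < n; i++) dp[i][i] = 1;
    for (int len = 2; len <= n; len++) {
        for (int l = 0, r = len - 1; r < n; l++, r++) {
            if (a[l] == a[r]) {
                dp[l][r] = dp[l][r - 1];
            } else {
                dp[l][r] = INT_MAX;
                for (int m = l; m < r; m++)
                    dp[l][r] = std::min(dp[l][r], dp[l][m] + dp[m + 1][r]);
            }
        }
    }
    return dp[0][n - 1];
}
```

For instance, $$$1,2,1$$$ needs 2 strokes (paint $$$1$$$ across everything, then $$$2$$$ in the middle), while $$$1,2,1,2$$$ needs 3.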

Problem 1:

Moves become

You lose if the number $$$f(m)$$$ of piles with $$$m$$$ stones is always even.

Try all $$$k$$$ and check if you can reach a losing configuration for the opponent: you can if all $$$f(m)$$$ are even except either $$$f(1)$$$ (you can fix it in $$$f(1)$$$ ways) or both $$$f(x)$$$ and $$$f(x+1)$$$ (you can fix it in $$$f(x+1)$$$ ways).

I got these observations, but I still don't know what the optimal time complexity is. I implemented an $$$O(n \cdot \sqrt{VALMAX})$$$ solution but got TLE. My idea was that a pile changes its "group" at most $$$2\sqrt{VALMAX}$$$ times.

There are at most $$$n / k + 1$$$ nonzero values of $$$f(m)$$$ for each $$$k$$$. Hence, the complexity is $$$O(n \log n)$$$ (maybe $$$O(n \log^2 n)$$$ because of binary search).
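That harmonic bound is easy to check numerically: summing the per-$$$k$$$ bound $$$n/k + 1$$$ over all $$$k$$$ gives roughly $$$n \ln n + n$$$ total work.

```cpp
// Total work if, for each k from 1 to n, we touch at most n/k + 1 groups:
// the sum is the harmonic series, i.e. about n ln n + n, hence O(n log n).
long long totalStates(long long n) {
    long long total = 0;
    for (long long k = 1; k <= n; k++) total += n / k + 1;
    return total;
}
```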

Well shit, I didn't think about grouping the piles with the same number of stones. That would have made it $$$O(n \log n)$$$... Instead, I considered each number independently, which I still think is $$$O(n \cdot \sqrt{VALMAX})$$$.

What's your hashing solution to problem 1, if you don't mind sharing?

Can someone explain the gold P3 solution? I tried to find some observations, but nothing really worked :(

Did you find the observation that it was a fractal? While I didn't get the solution, I was thinking of some log-base-3 decomposition to find the parts where the diagonal intersects a region that has 1's. Then, as you split your diagonal into those parts, if the line lies along a known part of a smaller square (such as its main diagonal), we can add those 1's to the total.

If you can get the answer for a prefix of a diagonal, you can use a technique similar to prefix sums. Now let's work on finding the answer to a prefix.

Like Tomg mentioned, it's a fractal. First, if $$$x+y$$$ is odd for the prefix $$$(x, y)$$$, the answer is always $$$0$$$. I imagined the fractal as a bunch of "blocks," each with a certain "level," so a block of level $$$i$$$ is made up of 9 blocks of level $$$i-1$$$. Now imagine you had some function to get the answer for some prefix of a diagonal if the prefix started on the edge of some block (call this $$$f(x)$$$, where $$$x$$$ is the distance from the origin). Then to answer some query, the diagonal intersects the sides of at most 2 blocks of level $$$i-1$$$ (for which you can use $$$f(x)$$$ to find the answer), and is in the middle of exactly one block of level $$$i-1$$$. Now you can recurse on the block that it's in the middle of. The total number of recursion levels is $$$\log_3(10^{18})$$$, because that's the highest level that exists with coordinates $$$\leq 10^{18}$$$ (the size of a block's edge triples at each level). But there's still a problem: how do you find $$$f(x)$$$?

At this point, I just printed out some of the small (non-zero) values, noticed they were powers of $$$3$$$, put the $$$\log_3(x)$$$ for small values of $$$x$$$ into OEIS, and found this sequence. It seemed likely because the description seemed very related to the problem at hand, so I just went with it. I used this definition, `a(n) = a(floor(n/3)) + (n mod 3) mod 2`, as it would solve for what I needed easily. If anyone knows how to prove this formula, lmk. Sorry for the not very clean solution.

The final complexity is $$$O(q \cdot \log^2(10^{18}))$$$, which should be fast enough (I stress tested this against a slow solution, but I'm not in gold, so I couldn't submit in contest).
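Taking the quoted recurrence as given (the proof question is a separate matter), it is tiny to implement:

```cpp
// The recurrence quoted above, taken as given: a(0) = 0 and
// a(n) = a(floor(n/3)) + ((n mod 3) mod 2). Since (d mod 2) is 1 exactly
// when the ternary digit d equals 1, this counts the 1-digits in the
// ternary representation of n.
long long a(long long n) {
    if (n == 0) return 0;
    return a(n / 3) + (n % 3) % 2;
}
```

This reproduces the sequence $$$0,1,0,1,2,1,0,1,0,\dots$$$ mentioned in the replies.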

Unfortunately, any outside resources, including OEIS, are no longer allowed in USACO.

The idea would be: since this sequence is kind of like a fractal itself, we basically start with: 0,1,0. This is where the ((n mod 3) mod 2) part comes from. Then, since our sequence continues as 0,1,0,1,2,1,0,1,0, we could apply the same fractal principle we have been using in the grid to our sequence. The triplets (0,1,0), (1,2,1) and (0,1,0) "map" back to (0,1,0). For example, floor(4/3)=1, and the value at the index 1 of the sequence is 1, so we add 1 to ((4 mod 3) mod 2) to get 2.

Hopefully this makes things more clear.

I meant to ask why the answer for some edge of a block corresponds to that sequence, not why the formula corresponds to the sequence. Without OEIS, it's not hard to notice the pattern (as you said, it follows the same generic pattern as the 2-D fractal), but I don't know why that formula gives the answer for an edge, why the answers for an edge are powers of three, or in general why this works.

Have you tried drawing out the grid? If you do, it should be clear that as you go along the edge, the number of 1's along the diagonal is 1,0,3,0,1,... This is because as we move along the edge, at first we only intersect the square with 5 1's, which is why we have 1,0,3,0,1. After we are done with this first small square, our diagonal intersects 3 of these small squares, leading to 3,0,9,0,3. Then, we yet again intersect one small square (the rest of the diagonal is within regions consisting of only 0's). This pattern now repeats, except as 3,0,9,0,3,0,9,0,27,0,9,0,3,0,9,0,3, because we have moved onto a larger "level" of the grid.

You can simply do a DP over the ternary representation to count all $$$dx$$$ such that $$$x+dx$$$ and $$$y+dx$$$ have the same parity at every position of their representations (you just need to maintain the carry on both $$$x$$$ and $$$y$$$, as well as whether $$$dx$$$ is currently bigger than $$$d$$$).
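My reading of that digit condition from this thread (not from the official statement) is that a cell $$$(x, y)$$$ holds a $$$1$$$ iff every pair of corresponding ternary digits has equal parity; a brute-force check reproduces the 5-ones base block described earlier:

```cpp
// Inferred digit condition (my reading of the thread, not the official
// statement): cell (x, y) holds a 1 iff at every position the ternary
// digits of x and y have the same parity (both in {0, 2}, or both 1).
bool isOne(long long x, long long y) {
    while (x > 0 || y > 0) {
        if ((x % 3) % 2 != (y % 3) % 2) return false;
        x /= 3;
        y /= 3;
    }
    return true;
}
```

In a $$$3 \times 3$$$ block exactly 5 cells qualify, and in a $$$9 \times 9$$$ block $$$5^2 = 25$$$ do; the condition also forces $$$x + y$$$ to be even, matching the observation that odd $$$x+y$$$ gives $$$0$$$.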

Results are now out!

http://www.usaco.org/index.php?page=feb21results

wow that's insane...orz vrooooom