Today at 19:00 MSK

We can discuss problems here after the contest


Short SRM blog :D

What a bloody systest :( Btw, what is the common error for 250 Div.1???

Here is the test I used to challenge everyone:

{60, 59, 25, 27, 74, 47, 65, 79};

{67, 104, 33, 63, 143, 53, 66, 151};

UPD. Drawing it will make you understand pretty easily

:o Tricky 250 :P. Well played, that test causes my solution to fail :P

It's not tricky, it just has shitty samples. I'm sure you'd dismiss it as "okay, that A was easy, let's move along" here on CF, because there's a difference between a random whocaresaboutwhatIputhere sample and a manual one intended to avoid failing many people who can solve the problem but hurry (which isn't surprising, considering it's blind coding with strict time penalties).

Hmmm... It's true that it had shitty samples, but it doesn't happen to me often that I come up with a wrong solution, and that was the case here, even if it was an easy-to-fix mistake, so I will stick to my opinion :P.

I think it's really a matter of psychology/experience with programming competitions and how TC problems (IIRC just recently) don't conform to it.

It's easy to make small mistakes when coding, especially with easy standard problems with easy short implementations, where we don't tend to pay attention to every bracket. The very reason why multiple samples / pretests / full feedback / no time penalties / subtasks / you name it exist is to decrease the impact of these small bugs on the final result — you're either free to test your code more, or submit it and get it... checked at the cost of small penalties. The more contests you do, the more you're able to utilise this.

When there are 5 samples, 2 of which are trivial and the other 3 random, passing them actually gives you hope that your solution is correct. You don't just miss possible bugs, you're told to not look for any (so to speak). Gee, what a surprise when 70% of all submissions fail.

To reply to Mimino's profile quote on TC, the game is rigged against us :D

Talking more generally, I fully agree; I also dislike the high number of failing solutions. In the last half a year it has happened 3 or 4 times that I submitted the easy and the medium hoping to get AC and both of them failed :P.

And maybe it is just the format of the contest? I am also not a huge fan of it, but I am used to it. It is called "TopCoder", so it should be focused on finding the best coder, right? But if someone makes stupid bugs often, maybe he isn't actually a good coder? I am sure you will find a small bug in short code fast, but when you write a solution for an ACM problem and it has 300-400 lines of code, sometimes it takes a lot of time to find even a stupid thing like a missed `ll` somewhere in this code.

You mentioned contests without penalties. When I participate in such contests (for example, at Hackerrank), I often don't even compile my code locally if the program isn't complicated. Why should I do it if the rules don't punish me for CE/TL/RE/WA? And someone could say "why didn't he get a penalty for it, in ACM he would already have a huge penalty time for all that dirty debugging, and your rules are strange". But we are not at an ACM ICPC contest, we are at Hackerrank:)

In general I prefer Codeforces: it has challenges during the contest, a wide range of possible strategies, more ACM-style problems, and a lot of other advantages. But I don't want TC to become "like CF" just because "it is sad to fail because of a small bug". We already have Codeforces:) At CF one should expect relatively strong pretests (and when they are weak, you'll see it by the number of challenges), but I don't think this works for TC.

Well these programming contests should be fun. You know what isn't fun? Being overly penalised for small mistakes that you can't fully avoid. Maybe for you, a "good coder" is someone who can take a complete set of ideas and write a perfectly correct implementation of them on paper, and then make some tests and blind-challenge others, but for me, it's moved much more into the thinking part and much less into the blind coding (while pretending to not be blind-coding) part.

Also, Topcoder is the company's(?) name. The track SRMs are in is called "Algorithm".

Also, what I'm complaining about isn't a long-term trend, it's a recent one. I looked at a few consecutive summer SRMs (and TCO rounds, even if they make it slightly worse) and div1 250 never had below 50% accuracy; in fact, it's often been over 75%. In the last 10 SRMs, it's been below 40% in 4 cases. That's not one outlier, that's almost half the time (we don't have better statistics for what I'd call a recent trend, but it goes months back)! It's not that TC format is bad, it's that it broke recently and it's not good.

Some more statistics: there are 17 pages in the "Problem Archive" for div1 250s; only the first page contains problems with accuracy up to 40%, the next one up to 50%, and the increase slows down quickly.

Maybe the samples are deliberately bad? They have some wrong solutions (that they want to fail only at systests) in mind.

You mean like all solutions that have a small bug that can be easily fixed? That's not a good idea.

Not too long ago, I used characters A instead of Y (as what's supposed to appear on the input) and it still passed all samples. Why even have them, why not make it fully blind coding? Because that's what it effectively is.

Also, my short experience as TC problemsetter includes: nobody cares about the content of tests.

From my experience as a TC participant, their testcases are weak even less often than testcases at CF:) And usually they are well-prepared.

Well they're not this time, and not several times before, as evidenced by the number of failing solutions. Of course, if we don't consider the possibility that we're total idiots and just don't see that simple stuff immediately.

What is the connection between the number of failing solutions and the quality of testcases? I am not saying anything about samples, I am talking about the full dataset: it does not allow a bad solution to pass in most cases. Compared to some sites like CodeChef, where the overall quality of problems is very low and the overall quality of testcases is also horrible, it is a huge advantage of TopCoder.

Oh, you're talking about all tests? This discussion has been about samples, because they're the part that affects thinking during the contest.

I don't really pay attention to systests (apart from when using them to debug in practice), but yeah, they're probably good.

`Also, my short experience as TC problemsetter includes: nobody cares about the content of tests.`

My less short experience as TC problemsetter: everyone cares about the content of tests (especially samples).

Then I suppose your recent problem TheKingsArmy (div1) had only a single non-trivial sample with the intention of making almost everyone fail?

(Don't take this as an attack, I'm just curious about why stuff happens.)

No, it was a surprise for me, as well as (and even more so) the statistics for the d1 easy / d2 medium. For some problems (mostly when they are pretty unusual) it's hard to predict how people will solve them. But I agree that such numbers are too low.

Usually we add a couple of non-trivial samples to a problem. Sometimes we add a single huge testcase, which is good when it is not easy to test time limits on your own.

For the problem you mentioned, adding a huge random test would probably have changed the statistics radically.

Anyway, it's not correct to say that nobody cares about test content.

I challenged 3 people with {1, 5, 9, 13, 17, 21}; {5, 9, 13, 17, 21, 25}.

Killer systest :o

Everyone in my room was playing the "troll ffao" game during challenge phase: As soon as I clicked the button to challenge someone, that person was already challenged u.u

More like standard TC systest recently. The samples on 250 were incredibly weak as usual, because my silly mistake was caught by systest 4 and passed the samples comfortably.

And the samples for the 500 are weak as well...

I haven't noticed "The order of soldiers does not have to be preserved.", but the samples can't save me. >_<

Please share the idea of how to solve the 500 problem.

Hint: a composition of 2 reflections is a parallel shift.

Reflecting the x-coordinate and the y-coordinate in parallel?

Reflecting the x coordinate is not a central reflection.

A composition of 2 reflections (as described in the problem statement) is a parallel shift by some vector (namely, twice the vector between the reflection centers).
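The hint can be verified in a few lines: reflecting a point p through a center c gives 2c - p, so composing two central reflections shifts every point by twice the vector between the centers, independently of the point. A small sketch (the points here are arbitrary, not taken from the problem):

```python
def reflect(p, c):
    """Point reflection of p through center c: p -> 2c - p."""
    return (2 * c[0] - p[0], 2 * c[1] - p[1])

def double_reflect(p, c1, c2):
    """Reflect through c1, then through c2."""
    return reflect(reflect(p, c1), c2)

# Composing the two reflections shifts every point by 2 * (c2 - c1).
c1, c2 = (1, 2), (4, 0)
shift = (2 * (c2[0] - c1[0]), 2 * (c2[1] - c1[1]))
for p in [(0, 0), (3, -5), (7, 11)]:
    q = double_reflect(p, c1, c2)
    assert q == (p[0] + shift[0], p[1] + shift[1])
```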

How do you prove the correctness of the 250 solution?

Thanks.

My solution: while true, find the leftmost endpoint of an interval that isn't covered by a promoted one, then find the interval that starts before it and has the rightmost endpoint, and promote that interval.

This greedy is completely obvious. We need to promote one of the intervals that start before the leftmost end (otherwise, the interval that the leftmost end belongs to would remain uncovered), and it's best to take the one that covers as many other intervals as possible. It's also easy to code.
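The greedy above can be sketched as follows. The exact SRM statement isn't quoted in this thread, so this assumes the task is: given intervals, promote as few of them as possible so that every interval is contained in a promoted one; the function name is made up.

```python
def min_promoted(intervals):
    """Greedy from the comment above: repeatedly take the uncovered interval
    with the leftmost start, and promote the interval that starts no later
    and reaches furthest right (it necessarily contains that interval)."""
    ivs = sorted(intervals)
    covered = [False] * len(ivs)
    promoted = 0
    for i, (l, r) in enumerate(ivs):
        if covered[i]:
            continue
        # Among intervals starting at or before l, pick the one reaching
        # furthest right; (l, r) itself qualifies, so the set is non-empty.
        best = max((iv for iv in ivs if iv[0] <= l), key=lambda iv: iv[1])
        promoted += 1
        for j, (lj, rj) in enumerate(ivs):
            if best[0] <= lj and rj <= best[1]:
                covered[j] = True
    return promoted

assert min_promoted([(1, 3), (2, 5), (6, 8)]) == 3   # no containments at all
assert min_promoted([(1, 10), (2, 3), (4, 5)]) == 1  # (1, 10) contains the rest
```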

Oh, I see. I did the same thing by sorting the intervals by left endpoint and doing a DP. I should have realised there was a simpler solution.

Could you please explain your dp? Thanks

As I specified, we sort the intervals in ascending order by left endpoint. Now let dp[i][j] = the minimum number of marked intervals needed among (i, i+1, ..., j).

For some dp[i][j], first check whether there is one interval that covers all the others in the range (i, j); if so, dp[i][j] = 1. If there isn't, minimize dp[i][j] as follows:

`dp[i][j] = min(dp[i][k] + dp[k+1][j]), for each k from i to j-1`

The answer will be in dp[1][n].
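Under the same reading of the problem as above (every interval must be contained in a marked one; the statement isn't quoted in the thread), the DP can be sketched like this, 0-indexed and memoized:

```python
from functools import lru_cache

def min_marked(intervals):
    """Interval DP from the comment above (inclusive ranges over the
    left-sorted list): dp(i, j) = 1 if one interval in the range contains
    the whole range's span, otherwise the best split into two sub-ranges."""
    ivs = sorted(intervals)
    n = len(ivs)

    @lru_cache(maxsize=None)
    def dp(i, j):
        lo = min(iv[0] for iv in ivs[i:j + 1])
        hi = max(iv[1] for iv in ivs[i:j + 1])
        # One marked interval suffices iff some interval spans [lo, hi].
        if any(iv[0] <= lo and hi <= iv[1] for iv in ivs[i:j + 1]):
            return 1
        return min(dp(i, k) + dp(k + 1, j) for k in range(i, j))

    return dp(0, n - 1)

assert min_marked([(1, 3), (2, 5), (6, 8)]) == 3
assert min_marked([(1, 10), (2, 3), (4, 5)]) == 1
```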

{{0, 2}, {7, 4}, {4, 2}, {7, 0}, {2, 2, 10}, {7, 2, 2}}

Could anyone tell me why the result of this test case in div1-500 is "impossible"?

From (0, 7) -> (4, 7) by using (2, 7)

From (2, 4) -> (2, 0) by using (2, 2).

=> So the correct answer must be "possible"?

You can't use a tower to move only part of the soldiers; when you use any tower, all soldiers are moved at the same time.

Thanks. I didn't read the statement carefully. The test data is quite weak: my wrong solution passed all pretests, and passed the first 140 tests in the practice room.

lolwut? OMG, I thought that you can move any soldier as many times as you want separately...

That problem can also be solved. Just determine which point can go to which points and check whether there exists a perfect matching in the created graph :)). Another observation is that if p can be transformed to q and q to r, then p can also be transformed to r, so checking that matching is pretty trivial, because we can divide the vertices into equivalence classes and check that all classes have the same number of points from the original and desired sets :D.

I can't get how to check if point p can be transformed to point q.

The only idea I have is solving a set of 2 linear equations with 3 variables. And the solution must be integers. But I don't know how this can be done.
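For the non-degenerate case this can be reduced to two variables (the third shift vector is a combination of the other two, as the comments below this one work out), and then Cramer's rule decides integrality directly. A minimal sketch; the function name and variable names are mine:

```python
def is_integer_combination(v1, v2, p):
    """Does p = m*v1 + n*v2 have an integer solution (m, n)?
    Assumes v1 and v2 are linearly independent 2-D integer vectors."""
    (a, b), (c, d) = v1, v2
    det = a * d - b * c
    assert det != 0, "degenerate case (parallel vectors) must be handled separately"
    # Cramer's rule: m = (p.x*d - p.y*c) / det, n = (a*p.y - b*p.x) / det;
    # p is an integer combination iff both numerators are divisible by det.
    return (p[0] * d - p[1] * c) % det == 0 and (a * p[1] - b * p[0]) % det == 0

assert is_integer_combination((2, 1), (1, 1), (3, 2))      # (3,2) = (2,1) + (1,1)
assert not is_integer_combination((2, 0), (0, 2), (1, 1))  # odd target coordinates
```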

The problem I am going to describe is that of determining whether the translation (tx, ty) can be obtained as a combination of several shifts along the sides of a triangle. The reduction from the original problem to this one is described somewhere above in this thread.

Notice that any two sides of the triangle suffice, because the third one can always be expressed as their difference and we are not interested in minimizing the number of translations.

Suppose the triangle is degenerate. Then the problem is essentially one-dimensional and we are interested in whether our point (tx, ty) can be reached from (0, 0) using the smallest step at our disposal (viz. the greatest common divisor of the lengths of our two triangle sides, though be careful not to introduce fractional numbers). That is, of course, provided the corresponding vector is parallel to the line that our triangle vertices are on.

Otherwise, the affine transformation given by the inverse of the matrix with columns (a, b) and (c, d) maps the triangle (0, 0), (a, b), (c, d) to (0, 0), (1, 0), (0, 1) and preserves point-line incidence, so all that is left is to check whether the image of (tx, ty) has integer coordinates.

The general question is this: can you represent a vector v by an integer combination of vectors from a set S?

Since we're only dealing with two dimensions (though we can do n-d with this), we should be able to represent the whole set of obtainable vectors by choosing at most two appropriate combinations from S. This is intuitive since we can do this if we're allowed to take linear combinations, but note that we're dealing with something more restricting here.

If we had linear combinations, we'd do Gaussian elimination. So we'll do something similar here. The idea is to apply Euclid's algorithm to each dimension in order. Here's what I mean: as long as there are at least two vectors with non-zero x-coordinate, we apply Euclid's algorithm to two such vectors until one x-coordinate becomes zero. Once there is only one such vector (or maybe none) we proceed to y (excluding the vector with non-zero x, if one was found, from further consideration).

In the end, since there are only two dimensions, we'll have at most two non-zero vectors. Since the operations we applied are reversible, the two vectors obviously represent all the possible integer combinations of the initial set. Also, at most one of the vectors has a non-zero x-coordinate, so it's easy to check whether a vector v is a combination of those two vectors (the x-coordinate must come from the one with non-zero x, and the rest comes from the one with zero x).
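The elimination above can be sketched as a small integer-lattice membership test; the function name and the list-based bookkeeping are mine:

```python
from math import gcd

def in_lattice(vectors, v):
    """Is v an integer combination of the given 2-D integer vectors?
    Euclid-style elimination as described above: shrink x-coordinates
    until at most one vector has non-zero x, then combine the rest by gcd."""
    work = [list(w) for w in vectors if w != (0, 0)]
    while True:
        nz = [w for w in work if w[0] != 0]
        if len(nz) < 2:
            break
        a, b = nz[0], nz[1]
        if abs(a[0]) < abs(b[0]):
            a, b = b, a
        q = a[0] // b[0]          # one Euclid step: strictly shrinks |a.x|
        a[0] -= q * b[0]
        a[1] -= q * b[1]
        work = [w for w in work if w != [0, 0]]
    u = next((w for w in work if w[0] != 0), None)  # the lone non-zero-x vector
    g = 0                                           # gcd of the remaining y's
    for w in work:
        if w[0] == 0:
            g = gcd(g, abs(w[1]))
    if u is not None:
        if v[0] % u[0] != 0:
            return False
        ry = v[1] - (v[0] // u[0]) * u[1]  # y left over after matching x via u
    elif v[0] != 0:
        return False
    else:
        ry = v[1]
    return ry % g == 0 if g else ry == 0

assert in_lattice([(4, 0), (6, 2)], (2, 2))       # (2,2) = -1*(4,0) + 1*(6,2)
assert not in_lattice([(4, 0), (6, 2)], (2, 0))
```

Each Euclid step only subtracts an integer multiple of one vector from another, which is invertible over the integers, so the reduced pair generates exactly the same lattice as the original set.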

In the comments above you can see good analytic solutions (honestly, I am not so good at math, so it took some time for me to understand them, but now it seems that I've got it; thank you, guys:) ).

During the contest I used a naive brute force. I'll describe it here. This solution is ugly, but requires less math :D Let's assume that you have already fixed the unpaired mirroring operation, or you don't have unpaired mirroring operations at all. Now you want to solve a*v1 + b*v2 + c*v3 = v4, where a, b, c are integers, a + b + c = 0, and v1, v2, v3, v4 are vectors. Looking at the constraints, the first idea is "I see 10^6 instead of 10^9, which means there is a naive solution with brute force". The trick is: if there is no solution with at least one "relatively small" coefficient, then there are no solutions at all. Let's assume that a is small (and later just copy-paste the same solution for b and c). In this case you can try all possible integer values of a, and for a fixed value you get a new equation b*v2 + c*v3 = v5, where b + c = A. Now you can set v6 = v2 - v3 and get an even easier equation: A*v2 - X*v6 = v5. This is a linear equation with a single variable X:) You just have to check that the values of X are integer and equal for both coordinates.

But there are actually 2 variables if you look more attentively.

How can I see systests?

In the web Arena (Java), in practice: Practice Options -> Run System Test.

Thanks, I got it now after 7 past contests.

Any idea on Div2 1000, please?

see here ;-)

What was the asymptotic complexity of the intended solution for Div1-900? I came up with 5^11*22 + 1000000*22 simple operations, but it seems to be too slow (I didn't try, though).

My solution with the same running time passed (in 1.969 ms). https://ideone.com/HGequz

It should be something like O(m*n*2^n + n*4^n).