By abacabadabacaba, history, 5 months ago

Google Code Jam 2019 Round 2 will begin at 14:00 UTC. Top 1000 participants will advance to Round 3 and win a t-shirt.

Note that there will be no Distributed Code Jam this year.

Let's discuss the problems after the round is over.

• +92

 » 5 months ago, # |   +39 One thing I hate about google code jam — before every contest I spend 10 minutes trying to figure out how to enter. This time I could not figure it out. Does anyone have a link to the elusive contest dashboard?
•  » » 5 months ago, # ^ |   0 Even I am trying to figure out the same. Every time before the contest they seem to show a countdown. This time it's just a link to past problems.
•  » » » 5 months ago, # ^ |   +32 There's a Compete now button on the page: https://codingcompetitions.withgoogle.com/codejam I think that should be okay
•  » » 5 months ago, # ^ |   +32
 » 5 months ago, # |   0 "Has begun"Round 2 in 10 minutes HmmmmmIs this a sign that I am going to load the dashboard for 10 minutes before I can start?
 » 5 months ago, # | ← Rev. 2 →   0 Anyone here getting "round loading..."? It seems like my webpage is not loading... EDIT: it worked after I switched from Chrome to Firefox.
•  » » 5 months ago, # ^ |   0 Posting a picture doesn't work. Is this the same issue? I didn't try switching browsers, but it just started working after a few minutes.
•  » » » 5 months ago, # ^ |   0 Yes; I only waited 2 minutes, so I don't know if it would have started working. But you still lose a few minutes waiting, though...
 » 5 months ago, # |   +3 What is the intended solution for A? I had a $O(n^3 \log n)$ solution that got TLE.
•  » » 5 months ago, # ^ |   +8 Check the "critical weight ratio" between every two molecules in $O(n^2)$. The answer is the number of different ratios, so use a set or something to get $O(n^2 \log n)$.
•  » » 5 months ago, # ^ | ← Rev. 2 →   0 Suppose the two metals have weight 1 and $m$. For every pair of molecules we can find an $m' > 0$ such that at $m = m'$ the two molecules have the same weight (if no such $m'$ exists, their order is fixed); i.e., when $m$ crosses $m'$, the ordering of that molecule pair changes. Count the number of intervals of $(0, \infty)$ separated by the $m'$ values, which is the number of distinct $m'$ values plus 1.
•  » » 5 months ago, # ^ |   0 Represent each molecule as a point; now we want to know the number of different directions of type "\" (i.e. with negative slope) parallel to some segment between points.
 » 5 months ago, # | ← Rev. 2 →   +207 Is it just me, or is this the worst Code Jam round to ever exist? Two math tasks (well, actually one, since I'm pretty sure you can't solve C-small without A-small), and in addition one fine-tune-until-it-works interactive task. Got 467th with A-small, B and C-small for a t-shirt, which is all I needed from this round.
•  » » 5 months ago, # ^ | ← Rev. 3 →   +50 Well, I think there are probably worse rounds than this (in particular, I personally think last year's Round 2 was worse than this year's); actually I didn't think it was that bad. I am not sure about B, since I didn't have time to pass it (though, as you said, I believe it's just fine-tuning some constants). I think A and C are just glorified standard problems (C is just finding the fraction with the smallest denominator between two fractions), but D is actually not bad (at least I think there is some idea involved, though it's not hard, and I was frustrated at some point due to missing stupid cases in my code). I don't really expect much from GCJ qualification-style rounds like this anymore, so it really wasn't as bad as I expected.
•  » » 5 months ago, # ^ |   +46 Actually, C-small is not all that mathy. You can just check all answers between (1, 1) and (200, 200), each in linear time. Don't even have to prove it to get it working.
•  » » » 5 months ago, # ^ |   +18 Fair enough, I hadn't realized that. I guess I just solved the non-existing subtask for O(N): in A-small I computed, for every permutation, the equation C1 * y < x < C2 * y; and I copy-pasted that code into C-small and iterated over x, finding the smallest y for it (if any) in O(1).
•  » » » 5 months ago, # ^ |   +5 Why not (1,1) to (100,100), as mentioned in the question?
•  » » » » 5 months ago, # ^ |   +8 The inputs (atom counts) are guaranteed to be in [1,100]. This does not mean that the outputs (atom weights) have to be in the same range. What happens for the input with molecules (1,99), (2,1), and (1,100), in this order?
•  » » » » 5 months ago, # ^ |   +13 Well, the fraction $\frac{a + c}{b + d}$ is definitely between $\frac{a}{b}$ and $\frac{c}{d}$, so we have a 2x upper estimate. Turns out it is a tight bound, as the least denominator fraction between $\frac{98}{99}$ and $\frac{99}{100}$ is $\frac{197}{199}$. See Farey sequence or Stern-Brocot tree for formal or visual explanation.
•  » » » » » 5 months ago, # ^ |   +32 "Not all that mathy" "See Farey sequence or Stern-Brocot tree for formal or visual explanation." You can choose only one.
•  » » » » » » » 5 months ago, # ^ |   0 Nah, I choose both. To get test set C1 accepted, you just need a bit of intuition on the bound. To understand what is going on, and to solve C2, yeah, math will help.
•  » » » » » » » » 5 months ago, # ^ |   0 Everything is simple if you know it in advance. I read about Farey sequences: even Farey himself only conjectured, without proof, this simple (a+c)/(b+d) mediant expansion.
•  » » » » » » » » » 5 months ago, # ^ |   +31 "Everything is simple if you know it in advance." True. Regarding C1: in fact, it gets judged immediately, so you can just try some bound which fits into the time limit, and correct the bound if you get WA/TL.
•  » » 5 months ago, # ^ | ← Rev. 3 →   +64 I agree. When I saw how straightforward, classical and purely mathematical C was, I couldn't believe my eyes. I never had time to learn about continued fractions because they seemed useless in decent contests. PS: This round was complete shit, but then I remembered last year's Round 3, so I guess this is not really the worst in history, but definitely near the top. The quality has been pathetic lately in GCJ.
•  » » 5 months ago, # ^ |   +23 +1. The hard part of solving C boils down to finding the least-denominator fraction between a/b and c/d. This is done with continued fractions / Stern-Brocot and is not nice. I ended up copying code to do this part. Here is a recent(ish) task which requires the same technique. I really hope that the solution for problem B is nice, because otherwise fine-tuning is, indeed, not fun either.
•  » » » 5 months ago, # ^ | ← Rev. 2 →   +241 You don't need this stuff if the segment is rational. Here is an elementary solution for that case. Suppose we are solving the problem for $[\frac{p_1}{q_1}, \frac{p_2}{q_2}]$. We also allow fractions like $\frac{1}{0}$ as positive infinity. There are three cases:
1. $\frac{p_1}{q_1} \ge 1$. Subtract $\left\lfloor \frac{p_1}{q_1} \right\rfloor$ from both numbers. Obviously, the solution is the same up to adding it back.
2. $\frac{p_1}{q_1} < 1 < \frac{p_2}{q_2}$. Then $\frac{1}{1}$ is the answer! Note that it minimizes both parts of the fraction.
3. $\frac{p_2}{q_2} \le 1$. Solve for $[\frac{q_2}{p_2}, \frac{q_1}{p_1}]$. This is the same problem (up to inverting the answer), but now we need to minimize the numerator, not the denominator. But this doesn't matter, because step 2 optimizes both!
Full code:
    rat find_best(rat l, rat r) {
        if (l.x >= l.y) {
            ll d = l.x / l.y;
            rat res = find_best({l.x - d * l.y, l.y}, {r.x - d * r.y, r.y});
            res.x += res.y * d;
            return res;
        }
        if (r.x > r.y) {
            return {1, 1};
        }
        rat res = find_best({r.y, r.x}, {l.y, l.x});
        return {res.y, res.x};
    }
In fact, this does the same thing: we need to find the first place where the continued fractions differ, and cut the answer there. There is a much harder solution which constructs this fraction from the other side; it's harder to understand and has complexity problems in the easiest implementation. Here the complexity is obviously O(log n), because it's equivalent to two Euclid algorithms running in parallel. The main problem with this solution is that it can't do anything when we don't know the borders in advance but can only check whether a point is to the left of our segment, inside it, or to the right. The harder solution can handle that case too.
•  » » » » 5 months ago, # ^ | ← Rev. 2 →   +5 sorry, ignore it
•  » » » » » 5 months ago, # ^ |   0 2/11, 17/35 < 1/2
•  » » » » 5 months ago, # ^ |   +8 "Obviously, solution is same up to adding it back" — how is this obvious? If all you want is to find a rational number between those 2, then yes, but if you want to minimize either its denominator or it numerator, then it definitely doesn't look obvious (minimizing y in x/y doesn't imply minimizing qy in p/q+x/y=(py+qx)/qy. Second problem: why doesn't subtracting p1/q1 repeatedly from p2/q2 overflow at some point? This is definitely not trivial to see either.
•  » » » » » » 5 months ago, # ^ |   +8 You may have misread this; notice that the thing we subtract is always an integer and thus doesn't cause overflow.
•  » » » » » » 5 months ago, # ^ |   +8 Ohh shit you're right. That clears both my concerns. Sorry
•  » » » » 5 months ago, # ^ |   0 What happens for $0/1$ and $1/2$ in your solution? I know it's not hard to find the wanted fraction in that case, but I wanted to ask because you mentioned it's the full code and I'm not sure if I'm missing some point.
•  » » » » » 5 months ago, # ^ |   0 It becomes $[2/1, 1/0]$ after applying the third rule, then $[0/1, 1/0]$ after applying the first; here the answer is 1/1. Going back, it's converted to 3/1, and then to 1/3. Seems to be correct.
•  » » » » 5 months ago, # ^ |   +5 How is it that when you are minimizing the numerator you are allowed to do step 1? For me step 1 only really makes sense when you want to minimize the denominator.
•  » » » » » 5 months ago, # ^ |   +10 The fact is, the answer is the same if you minimize the numerator with the denominator as tie-breaker (it is quite easy to see why).
•  » » » » » » 5 months ago, # ^ |   0 Could you please explain more?
•  » » » » » » » 5 months ago, # ^ |   +20 Suppose the fraction in [l; r] with the smallest denominator is a/b and the one with the smallest numerator is c/d. Note that c/d <= c/b <= a/b, hence c/b is a fraction with both the smallest numerator and the smallest denominator in this segment.
 » 5 months ago, # |   +3 Damn, I spent the entire time trying to figure out why my A wasn't working. Anyone have the solution? Here's what I tried: find every ratio which would make A and B equal for every possible pair A,B of molecules, and perturb it by a very small amount in each direction, then try all of those slopes. But it didn't work... I hope it wasn't just due to not enough precision. https://gist.github.com/VitamintK/bf43a1191f0278ceca2d8cd89ec1fc18
•  » » 5 months ago, # ^ |   -8 The answer is the number of different positive ratios + 1.
•  » » » 5 months ago, # ^ |   0 damn, it was that simple? Just for the sake of closure, do you have any idea why my solution didn't work? It's just doing some extra work on top of that. Or is it probably just floating point imprecision?
 » 5 months ago, # | ← Rev. 2 →   0 I got A small+large, C small and D small for 519th. For C large, I reduced the problem to finding a point with minimum x and y such that m1 <= x/y <= m2, but I wasn't able to get a good solution when m1 and m2 are fractions with sizes up to 1e9. Can anyone share the idea for this?
•  » » 5 months ago, # ^ | ← Rev. 3 →   +5 https://codeforces.com/blog/entry/65500?#comment-496352
UPD: well, a more detailed explanation is that we want to find a point with the smallest $x$ which is inside an angle whose sides are two rays from the origin (say, the lower one is $y/x=p_1/q_1$ and the other one is $y/x=p_2/q_2$). One can see that the point $(q_1 + q_2, p_1 + p_2)$ works for us, so the upper bound on $x$ is $q_1 + q_2$. Now we can find the smallest such $x$ using binary search and comparing the numbers of lattice points below these lines, and this is why I dropped the link.
UPD2: don't read my comment, it contains a complicated solution compared to the other comments here, which are more related to your question
•  » » » 5 months ago, # ^ | ← Rev. 2 →   +11 Wait, I thought about binary search, but if an integer point between the lines exists for x = m, it may not exist for x = m+1, right? Is there some modification to the procedure? Edit: all right, I see. In the binary search, we just compare the lattice-point counts using the code linked. Leaving this here for anyone else having the same doubts. Thanks!
•  » » » » 5 months ago, # ^ |   0 I also calculated m1 and m2 in A small. For C large, I was trying to find the first x such that ceil(x/m1) != ceil(x/m2). I did this using binary search. The final answer would then be, according to me, x, ceil(x/m2). But I kept getting WA; is there any logical flaw?
 » 5 months ago, # |   +19 C large seems to be classic? Is that well-known? I would be very disappointed if that happens.
•  » » 5 months ago, # ^ |   +18 Yes here
•  » » 5 months ago, # ^ |   +26 I was thinking of the LCA of Stern-Broccoli trees.
 » 5 months ago, # |   +24 Did anyone else not notice that there are actually 4 tasks? Contransmutation was hidden because of the scrolling.
•  » » 5 months ago, # ^ | ← Rev. 4 →   0 I didn't, until 30 minutes had passed. Although, I couldn't solve it anyway haha. But really, they clearly shouldn't have changed their interface.
 » 5 months ago, # |   +27 If anyone cares about B – here is my "solution" (seriously, how is this 23 points):
- 50 days — put 5 tokens "100" into each of 10 vases and forget about them
- 10 days — have a look at how many tokens are in the remaining 10 vases
- 20 days — put 4 tokens "100" into each of 5 vases (the ones which have the maximum amount of tokens from the previous query) and forget about them
- 5 days — have a look at how many tokens are in the remaining 5 vases
- 9 days — put 3 tokens "100" into each of 3 vases (the ones which have the maximum amount of tokens from the previous query) and forget about them
- 2 days — have a look at how many tokens are in the remaining 2 vases
- 3 days — put 3 tokens "100" into 1 vase (the one which has the maximum amount of tokens from the previous query) and forget about it
- 1 day — put a token "100" into the remaining vase
•  » » 5 months ago, # ^ | ← Rev. 2 →   +34 My solution is even simpler: for the first 60 days, put a "1" into the first 12 vases, so each got 5 tokens. Then ask the 8 other vases and find the minimum; that is where we put our vote. Into all the other vases, put "1"s. The overall idea was that the first days will not give us any information about the distribution, so there is no point in asking. Let's just rule out more than half of the vases ourselves, then ask the remaining part what the distribution is, and place the other votes using that knowledge.
•  » » » 5 months ago, # ^ |   +15 Gg I wanted to code this and submit in the last few minutes but failed to do so by a few minutes.
•  » » » 5 months ago, # ^ |   -7 Yeah, I got too scared and decided to overkill it a bit, by doing this approach multiple times.I still don't know what they expected us to do for 23 points — prove that this approach will have enough probability of working?
•  » » » » 5 months ago, # ^ | ← Rev. 2 →   +67 Are you saying it’s too easy? If you look at the scoreboard, lots of people struggled with it. The problem is pretty open-ended and there are many different things you could try which may not work.Also, it’s not like your solution is that “simple”.
•  » » » » » » 5 months ago, # ^ | ← Rev. 2 →   +64 It was by far the hardest problem for me. It seems like you stumbled upon the intuitive strategy quickly, but it was not at all clear for me that you should separate the vases and not treat them equally, or that picking your vase at the beginning is not likely to work.I agree that the hard part is not really algorithmic, but that doesn’t mean it will be trivial as a problem.
•  » » » » » » » 5 months ago, # ^ |   0 "...it was not at all clear for me that you should separate the vases and not treat them equally, or that picking your vase at the beginning is not likely to work." Same here, actually. I implemented the solution of picking one vase, polluting all the others, and just putting one of my tokens in the "picked" one on the last day. That passed 10 out of the first 20 sample sets. Then I tried still picking that vase, but instead of putting 5.1 tokens in each, putting 4 instead, querying all vases and then topping up the smallest. Same percentage. I reckoned this is because I reduced the pollution size by 1 token, and the query is so early that if I do my final query around day 76, it can be completely incorrect by the final day. From this, the only logical choice I could see was reducing the problem by making some vases irrelevant and thus making the final query as late as possible. So I think I went through all of the things you mentioned, and I'm genuinely curious where you got stuck, or whether one of these transitions wasn't as obvious as it seemed to me.
•  » » » » » » 5 months ago, # ^ |   +84 Maybe there is just one path forward. I wrote 300 lines and didn't get more than 75%.I think that you dislike this task for wrong reason. You guessed intended solution right away. I don't want to use word "correct" solution, because I don't see how this stupid thing is better than anything else.I don't understand people saying "you could write generator and check if this idea is correct". Yes, I can. I even did it. My idea wasn't correct. So, is it a guessing contest?I hate this kind of tasks.
•  » » » » » » » 5 months ago, # ^ |   +15 I have hated these kinds of tasks ever since IOI 2010. But they're so common at this point that I think I got tired of complaining about them, as long as they don't let contestants wander off somewhere with no chance of actually solving the task. If I take the task "Languages" from IOI 2010, for example, you're describing exactly my thoughts on that. But that task was actually open-ended, and you could do pretty much anything, whereas here I felt that your options were very limited by the types of queries. It's very possible I just accidentally stumbled upon the solution and still can't come up with any other approaches to this problem because of tunnel vision; so I'd really like to hear your or ksun48's thought process while solving this task, and then I'll concede that I'm wrong and there were indeed multiple paths to solving this, with no way to know in advance what would actually work. In which case I'll withdraw my complaint that the task gave too many points and convert it into a complaint that such a task exists, for exactly the same reasons you mentioned.
•  » » » » » » » 5 months ago, # ^ |   +87 My impression is that this is a marathon task rather than an algorithm task.
•  » » » » » » » » » 5 months ago, # ^ |   +13 I'm really curious whether the organizers have a proven solution for it. Running it and seeing it work doesn't prove anything about the variance or the distribution of the probability. I've only once set a problem based on a randomized algorithm, and I had to make the limits significantly looser than the apparent success rate of my source, up to a level where I could prove the probability of my program getting AC. I did that only because I consider it the setter's duty to make sure that a provable approach works. I can't see how one can model this sort of behavior up to the level of proving that the probability of the algorithm failing on more than 10% of the inputs is really low. And I'm more certain that this problem shouldn't exist than they can be of their solution's correctness, if there is no such proof.
•  » » » » » » » » » 5 months ago, # ^ |   -8 There was a fun probabilistic problem in GCJ round 1* about 4 years ago: you have some random permutations generated by 2 generators, where one of them is uniformly random; determine which generator generated which permutation, with at least [%] accuracy.The solution was to run each generator locally, notice that the non-uniform one prefers permutations with specific properties (like $P_i = x$) and use these properties to estimate a probability that a permutation was produced by the non-uniform generator. That was a very "hey, as long as it works" solution, but interesting nonetheless.
•  » » » » » » » » » 5 months ago, # ^ |   0 Sounds interesting, yet inappropriate. This pretty much supports my point that these guys don't seem to care about proofs. In my world, it generally matters more to do a proof than to come up with lucky approaches that sometimes work because that makes the difference between blindly following your intuition and thinking and getting out of your comfort zone. You can make up all sorts of problems that still require some decent idea and nice intuition, but that doesn't make them algorithmic puzzles, especially not if the solution is not clearly a correct one
•  » » » » » » » » » 5 months ago, # ^ |   0 Sure. In serious rounds, for a yolo problem like this, I would require proof that the intended solution can't fail depending on implementation (picking slightly different properties). If it's just round 1A and solving everything except this is enough to pass, it's not a big deal.
•  » » » » » » » » » 5 months ago, # ^ |   +59 I don't think the situation is as complicated as you make it out to be. For a given program, when you run it on a test case, it will succeed with some fixed probability $p$, independently of the other test cases (assuming the judge RNG isn't very bad). So, as mentioned in the editorial, the number of successes among 250 test cases has a binomial distribution, with parameters 250, p, and the probability of WA is $f(p) = \sum_{k=0}^{224} {250 \choose k} p^k (1-p)^{250-k}$. In the limit $250 \to \infty$, by LLN $f(p)$ would tend to 0 for $p > 0.9$, and 1 for $p < 0.9$. Of course we are not in this limit, and so there is a non-negligible probability of getting AC with $p < 0.9$ or WA with $p > 0.9$. For reference, $f(0.89) = 0.65$, and $f(0.91) = 0.25$.You might wonder how the judges would prove that their solution has $p > 0.9$. Maybe they could do this by explicitly analysing their solution, using some bounds, maybe dynamic programming; I don't know. But in practice there's no need to. If you run your solution on $n$ test cases, and observe a success rate $\hat p$, then $\hat p$ has expectation $p$ and standard deviation $\sqrt{p(1-p)/n}$ (for reference, this is $0.3\%$ for $p = 0.9$, $n=10^4$). If you observe $\hat p = 0.95$, which they seem to have done, then you probably have better things to worry about (by CLT, the probability of $\hat p$ being more than say 10 standard deviations from $p$ should be very small). Indeed they claim they have a solution with $\hat p > 0.98$ (for reference, $f(0.98) < 10^{-10}$).Of course, the situation would be entirely different if there was a possibility of constructing malicious test data.Anyway, this problem seems to be more or less orthogonal to most of competitive programming, but that doesn't make it a bad problem in itself.
•  » » » » » » » » » 5 months ago, # ^ |   -23 Alright, so I haven't worked much with CLT yet, but I pretty much understood your point, and I'm sure enough of 2 things: That you can't conclude anything practically, not even if you have a perfect RNG (which I believe is a decent assumption) "By CLT" — as you, yourself, mentioned this works for N -> inf. I'm not saying that I wouldn't be willing to bet all my money for that — I'd do that for Goldbach Conjecture as well. Practice doesn't prove anything, and any practical experiment you'd do is shockingly independent from future ones, so there's really no way to correctly, formally conclude anything this way. You're basically saying that in practice there's no need to prove. I sort of agree with that on the contestants' side. On the setters' side though, it's their duty to prove the solutions they have. Why? Because if they were free to author anything that happens to work nothing would be fun anymore. When thinking about a problem, you're reasoning and sometimes letting go ideas that seem unprovable, because you must have certain mechanism of choosing which directions of thinking not to pursue.
•  » » » » » » » » » 5 months ago, # ^ |   +38 I don't think this is a fair comparison. In the case of estimating a fixed probability, running more trials makes you increasingly "confident" in the result, no matter what your priors are.With something like the Goldbach Conjecture or Riemann Hypothesis, this is not exactly true. I wouldn't say that verifying some large amount of finite cases can quantitatively push your probability that the Goldbach Conjecture or Riemann Hypothesis is true. There could be some legitimate reason why the first counterexample is very large.However, if the probability for this problem is actually $p = 0.8$, it actually means that our trials were very "unlucky" and more trials would most likely have convinced us of this.
•  » » » 5 months ago, # ^ |   0 Mine was overloading 16 vases for the first 95 turns and looking at the other 4 vases' sizes, trying to resolve the tie on day 100.
•  » » » 5 months ago, # ^ | ← Rev. 2 →   0 I used the same approach with checking the percentage. You were lucky enough to pick exactly 12 vases. If you picked fewer than 12 or more than 15 vases, the algorithm wouldn't reach 90%+ accuracy.
•  » » » » 5 months ago, # ^ |   +5 I wasn't lucky. I never tried to use their generator. When I finished writing the code to choose the best value, I just made 4 or 5 submissions with different constants. And 12 vases passed.
•  » » 5 months ago, # ^ |   +3 The only logical way to solve this task (even for the first submission) was to write a wrapper for the console interaction, then write two implementations of the wrapper interface (a real console interaction and a fake one with test generation), which allows you to generate 1 mln tests on the fly and check them. And then start writing complete bullshit like yours, looking at the percentage of successful attempts, until it reaches 91%+.
 » 5 months ago, # |   +106 The current GCJ interface made me wonder for quite a while: what could be the rationale for such design? I followed the thread on Google Groups where the organizers answered some questions about it, but I still don't get it.Maybe Google is already captured by robots? And they use the contest to try their devious designs on us?..At least the problems themselves are still made by humans, and interesting enough to take part in spite of the strange interface. Thanks to the setters for holding their ground!
•  » » 5 months ago, # ^ |   +21 If I open the scoreboard in Chrome iOS, in landscape mode I can see exactly 0 scores. Because the non-scrolling top part takes all the screen and no amount of zooming or scrolling can help me.
•  » » » 5 months ago, # ^ |   +13 I too can assemble a list of what is wrong for me in the current interface, but after following the thread linked above, I see there is no point in doing so here.
•  » » 5 months ago, # ^ | ← Rev. 2 →   +20 Every freaking time I have to modify the CSS to get more space for the problem statements. It really amazes me how a company like Google can suck that much at designing UI.
•  » » » 5 months ago, # ^ |   0 Perhaps one point from the thread is worth repeating here: if you want the criticisms to be heard by the Google staff, take the discussion to their platform.
•  » » » » 5 months ago, # ^ |   +8 This has already been mentioned in that thread. If it hadn't, I don't think I would care to take it there after reading a response like this: "Round overview should be on its own page, not tucked above the scoreboard." "I disagree. Why? I mean, that may be good, but it also means one more page. Right now there are 2 pages total (for the competition experience): one that is general to the round and one that is problem-specific. That seems sound and simple to understand, which is important for many users, especially new ones."
•  » » 5 months ago, # ^ |   -8 James Damore's case at Google could provide some insight into the 'what'. Specifically, it seems like midwits at (among other places) middle management focusing on non-issues to hide the fact that they're midwits; this GCJ rebuild 2: Electric Boogaloo, a commitment to social justice and the recent comedy with Google's AI Advisory Board (it was dismantled about as quickly as it was formed, after a protest) are just some examples I've noticed. As Google got big, it became less of a place of excellence.
 » 5 months ago, # |   +46 Did anyone face issues with bias in the randomness for P2 (the interactive one)? My solution worked on the local grader every time I ran it with a random seed. I submitted it 5-6 times but all gave WA. To fix this, I just generated a random permutation to shuffle the vases before outputting. This shouldn't theoretically make any difference, but with this I got AC, which leads me to believe that there is some issue with the randomness. AC Code: https://pastebin.com/hwUYZHdp WA Code: https://pastebin.com/UMiNBkkv
•  » » 5 months ago, # ^ |   0 I had the same problem, but I added the permutation only after your comment. Now I have AC in practice and feel extremely frustrated, because I didn't qualify despite having a correct solution to B. I think this situation is dishonest, because the test was apparently generated manually, while the problem says the test is generated randomly.
•  » » » 5 months ago, # ^ | ← Rev. 2 →   0 I really don't think that the test was generated manually. I would say it's because of a fault in the randomness: it is pseudo-random, not truly random. Also, you could try to appeal the decision. Maybe that might help.
•  » » 5 months ago, # ^ |   0 I don't know whether this was due to randomness, but my solution was supposed to pass about 20% of the time, and I made around 30 submits, WA...
•  » » 5 months ago, # ^ | ← Rev. 3 →   0 Had the same problem. My solution gets AC 9 / 10 times locally (using the testing tool with a random seed), but unfortunately the data they used for testing was one of those 1 / 10 bad seeds. After the competition I added a random permutation of all the urns, and on the server I now get AC around 50% of the time. So they used a really unfortunate seed for me. Oh well, I got < 1000 so I still qualified for the next round. But I still feel a bit cheated on problem B.
 » 5 months ago, # |   +41 Me during the contest...
 » 5 months ago, # |   +50 Feels bad to solve nothing :(
 » 5 months ago, # | ← Rev. 2 →   +19 I followed pretty much the exact editorial solution for New Elements Part 2, but I kept getting WA in contest. Can someone check out my code? https://ideone.com/gOO7Xn
And congrats to everyone who qualified for Round 3!
•  » » 5 months ago, # ^ |   +5 Where do we find editorials?
•  » » » 5 months ago, # ^ |   0 It's the Analysis tab right next to the problem.
•  » » 5 months ago, # ^ |   0 Alternatively, I would appreciate it if someone either (a) gave a breaking case, or (b) shared the test data from GCJ so I can find the breaking case.
•  » » » 5 months ago, # ^ |   +6 I just ran a few tests, so I don't know why your program gives WA. Here are some cases:
Case 1:
5
2 7
1 10
5 1
5 3
8 3
Yours: 10 4, Right: 5 2
Case 2:
4
2 4
8 1
6 10
7 7
Yours: 10 3, Right: 4 1
Hope it helps
 » 5 months ago, # |   -18 What is a commercial address? I don't understand.
 » 5 months ago, # |   +5 Can someone clarify the solution to problem D? The simple cycle case seems incorrect in the analysis. Consider a simple cycle with lead as one of the nodes. (The second transformation just pushes lead to a useless node.) You would not create an unbounded amount of lead in this case as you would push the amounts around the cycle of metals without creating more metal.If anything, perhaps I'm misunderstanding the editorial or problem? I'm very confused at this problem as my solution matches my brute force solution. But neither pass the data. :(
•  » » 5 months ago, # ^ |   +10 Yes, $Q'_L$ (as in the editorial) can have one cycle that contains vertex 1. It should be handled. The rest of the editorial looks fine to me.
•  » » » 5 months ago, # ^ |   +8 Thank you for the help! I figured it out. My issue is I reduced my metagraph for no reason (removed parallel arcs). My sim also did this, so both were wrong in the same way. RIP t-shirt... :(The editorial seems to just be missing explanations of the special cases (cycles that don't branch).
 » 5 months ago, # |   +16 During the contest I tried to test my solution locally using the testing tool, but it just hung without any output. I thought I had misunderstood the IO format (because it's not easy to understand for interactive problems, and something is different every time). In the end I gave up and started to work on problem C. But after the contest I found that the test script hung when it tried to print to stderr around test case 200. I'm not sure why; maybe there is some buffer limit for subprocess.PIPE? I modified the script a little (I removed stderr=stderr_pipe on line 35 in interactive_runner), and it started to work. With a working testing tool I was quickly able to find correct parameters for my solution.
•  » » 5 months ago, # ^ | ← Rev. 3 →   +5 I also had that hanging issue; I just re-downloaded the tool and it worked afterwards, so maybe they had fixed something in the meantime.
 » 5 months ago, # |   +8 Is there any way to get the test data? I'm getting WA on D but I couldn't find any mistake.
•  » » 5 months ago, # ^ |   0 Ok , found it.
•  » » » » 5 months ago, # ^ |   0 Wait, did you find your mistake or the test data? If the test data, where can we find it?
•  » » » » 5 months ago, # ^ |   0 Found my mistake, not the testdata
•  » » 5 months ago, # ^ |   +44 In case anyone is interested, here is a variation where 3. and 4. are slightly simplified:
1. As above, run backwards BFS from lead to find metals that produce lead. Delete the rest.
2. Run forwards BFS from metals with non-zero initial amount to find metals that can occur. Delete the rest (and their edges).
3. For any non-lead metal that cannot be produced by anything else, push its grams forward, then delete it. Check if the resulting metals can still be produced by keeping track of in-degrees.
4. If there is still a node that produces more than one metal, the answer is UNBOUNDED. Otherwise it is the total number of grams left.
•  » » 5 months ago, # ^ |   0 Where are the official solutions located? I cannot find them.
•  » » » 5 months ago, # ^ |   +3 There is a bar above labeled 'Show round overview'.
•  » » » 5 months ago, # ^ |   +11 Tell this to the Code Jam UI designers: "analysis tab".