### kostka's blog

By kostka, 3 months ago. This is just a reminder that the second round of this year's Code Jam takes place today (5/16) at 14:00 UTC.

The 1,000 highest-ranked contestants advance to the third round, and you will receive the famous Code Jam tee if you finish within the top 1,000.

https://g.co/codejam

Good luck, have fun!

Comments (64)
 » The round starts in 45 minutes.
 » 3 months ago, # | ← Rev. 5 →   Sorry, just a little misunderstanding.
•  » » I think in this context tee means T-shirt :) (https://www.urbandictionary.com/define.php?term=tee)
•  » » » Oh, thanks, I didn't know that.
•  » » You should suggest that Google produce a famous Code Jam tea as a reward for the contestants.
•  » » » I don't have any contacts at Google, so there is no way I can suggest it to them (except by email).
•  » » » » There's always the Codejam Google Group.
 » The constraints for B were a bit unfortunate; I think it would have been preferable to include a second visible set with positive values of $X$.
•  » » 3 months ago, # ^ | ← Rev. 2 →   It would also have been helpful to have a test case in D where some of the numbers were not 1... (I was super lucky in the sense that I was solving a problem quite similar to D just before the contest, and that I managed to catch an error 5 minutes before the end of the contest without actually constructing additional test cases.)
 » How to solve B2?
•  » » 3 months ago, # ^ | ← Rev. 3 →   Divide the vertices into two sets: 1) those with x < 0 (for those vertices, set x = -x); 2) those with x > 0. Sort both sets in non-descending order. Now add vertices to the graph (which initially contains only vertex 1), recording the time of each addition (initially 0). If the value of the next vertex from set 1 is less than or equal to the number of currently added vertices (initially 1), add it (increasing the time by 1 if its value equals the number of added vertices); otherwise, add the next vertex from the other set (setting the time to that vertex's value). Continue until both sets are empty. My AC solution:

```cpp
// #pragma comment(linker, "/stack:200000000")
#pragma GCC optimize("Ofast")
//#pragma GCC target("sse,sse2,sse3,ssse3,sse4,avx,avx2")
#include <bits/stdc++.h>
#define fi first
#define se second
#define all(x) x.begin(), x.end()
#define sz(x) (int)(x).size()
#define LL long long int
#define ULL unsigned LL
#define LD long double
#define PB push_back
#define PF push_front
#define FORi(i, a, b) for(int i = a; i < b; ++i)
#define FORd(i, a, b) for(int i = a; i > b; --i)
using namespace std;
using pii = pair<int, int>;

struct Edge { int id, from, to; };

int main(){
    ios_base::sync_with_stdio(false);
    cin.tie(NULL); cout.tie(NULL);
    mt19937 rng(chrono::steady_clock::now().time_since_epoch().count());
    clock_t start_t = clock();
    int t, n, m, x;
    cin >> t;
    FORi(ti, 0, t){
        cin >> n >> m;
        set<pii> after, time;
        FORi(i, 1, n){
            cin >> x;
            if (x > 0){
                time.insert({x, i});
            } else {
                after.insert({-x, i});
            }
        }
        vector<map<int, Edge>> edg(n);
        FORi(i, 0, m){
            Edge e;
            e.id = i;
            cin >> e.from >> e.to;
            --e.from; --e.to;
            edg[e.from][e.to] = e;
            swap(e.from, e.to);
            edg[e.from][e.to] = e;
        }
        vector<int> parent(n, -1), at(n, 0), ans(m, 1e6);
        parent[0] = 0;
        for(auto e : edg[0]){
            parent[e.second.to] = 0;
        }
        int timer = 0, connected = 1, v;
        while(!time.empty() || !after.empty()){
            if (!after.empty() && after.begin()->first <= connected){
                timer += after.begin()->first == connected++;
                v = after.begin()->second;
                after.erase(after.begin());
            } else {
                timer = time.begin()->first;
                connected++;
                v = time.begin()->second;
                time.erase(time.begin());
            }
            assert(parent[v] != -1);
            at[v] = timer;
            ans[edg[parent[v]][v].id] = at[v] - at[parent[v]];
            for(auto e : edg[v]){
                if (parent[e.second.to] == -1){
                    parent[e.second.to] = v;
                }
            }
        }
        assert(connected == n);
        cout << "Case #" << ti+1 << ":";
        for(auto y : ans){
            cout << ' ' << y;
        }
        cout << '\n';
    }
    return 0;
}
```
•  » » The important idea is that, for any list of (numerical) latencies, there always exists a way to set the edge weights to meet the latency requirement: for an edge $(x, y)$, the weight is $|latency(x) - latency(y)|$ if the latencies differ, or any nonzero number otherwise. The rest is as gultai4ukr described.
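A minimal sketch of that weight assignment (the function name and data representation are my own, not from the problem):

```cpp
#include <cstdlib>
#include <utility>
#include <vector>

// Sketch of the observation above (names are mine): given an activation
// time t[v] for every vertex, edge (u, v) gets weight |t[u] - t[v]| when
// the times differ, or any positive value (here 1) when they are equal.
std::vector<long long> weightsFromTimes(const std::vector<long long>& t,
                                        const std::vector<std::pair<int, int>>& edges) {
    std::vector<long long> w;
    for (const auto& e : edges) {
        long long d = std::llabs(t[e.first] - t[e.second]);
        w.push_back(d != 0 ? d : 1);  // equal times: any nonzero weight works
    }
    return w;
}
```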
 » Me waking up at 14:06 UTC (thus, those who wear shirts are shameful)
 » WTF, B has two samples? Don't do things like that...
•  » » What's wrong with having to test your solution yourself?
•  » » » What's wrong is putting the second sample at the end of the statement, way below the first sample, so that some people could have missed it (and some did).
•  » » » » +1
•  » » » » LOL, I didn't notice it either, until now.
•  » » » » » Same here. I submitted assuming that sample 1 contained cases with X > 0.
•  » » » » Breaking news: According to Google UI/UX team working on the GCJ website, this is a great decision that will attract more people due to how user-friendly it is.
•  » » » » I did, and it cost me the qualification because of a stupid mistake when copy-pasting part of the code. This is so frustrating.
•  » » » » I still don't get it. I also didn't notice it until now, but I don't see how it affected the competition.
•  » » 3 months ago, # ^ | ← Rev. 3 →   And no sample for D-hard. I would understand this from some random competition, but I always treated GCJ as the best competition in the world, and now this... EDIT: well, it's obviously a small detail in the whole competition, but next time I would be happy to see useful samples. It makes the competition less random. (I had a different opinion a few years ago, btw.) It's just hard to imagine the organizers deciding that it's better not to provide a test for D-hard.
 » Could someone explain why uncommenting line 41 in this source code gives WA for problem C? https://pastebin.com/JkaeuBRi
•  » » The third argument of std::unique must be ==, not <; maybe that's why.
•  » » » 3 months ago, # ^ | ← Rev. 2 →   Wow, OK. Good to know. Thanks! Edit: What a combo-breaker for the sort -> unique paradigm; I wonder why they chose it like that.
•  » » » » Because unique has a weaker contract on the data.
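A small example of the intended sort -> unique combination; the two algorithms use different comparison kinds, and passing a less-than comparator to unique violates its precondition (it expects an equivalence relation), which can silently produce wrong results:

```cpp
#include <algorithm>
#include <vector>

// sort orders the range with operator< ; unique then removes *adjacent*
// duplicates, comparing with operator== (its default, or a user-supplied
// equality predicate). A less-than comparator breaks unique's contract.
std::vector<int> sortedUnique(std::vector<int> v) {
    std::sort(v.begin(), v.end());                      // comparator: <
    v.erase(std::unique(v.begin(), v.end()), v.end());  // predicate: ==
    return v;
}
```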
•  » » 3 months ago, # ^ | ← Rev. 3 →   I was mistaken, please ignore.
 » 3 months ago, # | ← Rev. 3 →   Does anyone have an idea what's wrong with my solution for C? I've tested it with tens of thousands of random inputs against Scott Wu's solution and I can't find a case that makes it fail. Code:

```cpp
// wormhole.cpp : This file contains the 'main' function. Program execution begins and ends there.
#include <iostream>
#include <algorithm>
#include <map>
#include <cstdlib>
using namespace std;
#define int long long

int gcd(int a, int b) {
    while (b) { int x = a % b; a = b; b = x; }
    return a;
}

int t, n;
pair<int, int> p[105], pc[105];

signed main() {
    cin >> t;
    for (int c = 1; c <= t; c++) {
        cout << "Case #" << c << ": ";
        cin >> n;
        for (int i = 0; i < n; i++) {
            cin >> p[i].first >> p[i].second;
        }
        if (n <= 4) { cout << n << '\n'; continue; }
        int mx = -1;
        for (int a = 0; a < n; a++) {
            for (int b = 0; b < n; b++) {
                if (a == b) { continue; }
                int dx = p[b].first - p[a].first, dy = p[b].second - p[a].second;
                int g = gcd(abs(dx), abs(dy));
                dx /= g; dy /= g;
                copy_n(p, n, pc);
                for (int i = 0; i < n; i++) {
                    int x = pc[i].first, y = pc[i].second;
                    pc[i].first = dy * x - dx * y;
                }
                map<int, int> cnt;
                for (int i = 0; i < n; i++) {
                    cnt[pc[i].first]++;
                }
                int one = 0, odd = 0, even = 0;
                int total = 0;
                for (auto& x : cnt) {
                    if (x.second == 1) { one++; }
                    else {
                        if (x.second & 1) { odd++; } else { even++; }
                        total += x.second;
                    }
                }
                if (odd & 1) { one++; even++; odd--; total--; }
                if (one == 0 && even > 1) { total--; }
                else { total += min(one, 2ll); }
                mx = max(mx, total);
            }
        }
        cout << mx << '\n';
    }
}
```

Edit: I got it to work after reading the analysis, but I'm still not exactly sure why my solution was wrong.
•  » » Probably overflow.
•  » » » Testing cases with large coordinates also doesn't seem to work...
•  » » Random tests are shit in this problem, you won't get any special combinations of collinear points.
•  » » » 3 months ago, # ^ | ← Rev. 3 →   I also ran random tests with small coordinates to try to increase the chances of those. I figured that with 7 points there aren't that many combinations, and using points in a 20x20 square should hit at least some of them. Manual testing didn't work either.
•  » » I think this is wrong: `if (one == 0 && even > 1) { total--; }` Try this (the answer is 6):

```
1
6
0 0
1 0
100 10
110 10
1000 15
1100 15
```
•  » » » 3 months ago, # ^ | ← Rev. 3 →   That was it, thank you!
 » Can someone help me understand the following case for the third problem (Wormhole in One)?

```
1
7
0 0
0 2
0 3
0 4
0 5
8 8
11 11
```

I see accepted solutions printing $6$, but my solution outputs $7$, and here's how: connect points $(0, 0) - (0, 5)$; connect points $(0, 3) - (8, 8)$; connect points $(0, 4) - (11, 11)$. Here is a figure to visualize the scenario; the same-colored points are connected. Now, from the point $(11, 11)$, we hit the ball straight up. It first goes to $(0, 4)$ because they are connected, from there to $(0, 5)$, from there to $(0, 0)$ because of the wormhole, to $(0, 2)$, then $(0, 3)$, which teleports it to $(8, 8)$, finally visiting all the holes. What am I missing here? Did I misunderstand the problem?
•  » » 3 months ago, # ^ | ← Rev. 2 →   The ball gets stuck at $(0,2)$ for your construction. When a ball enters a hole that is not connected to another hole, it stops.
•  » » 3 months ago, # ^ | ← Rev. 2 →   The ball stops at $(0, 2)$. Note that they are holes, so once the ball hits a hole, it stops unless that hole is connected to another hole via a link.
•  » » » Thanks FieryPhoenix and ll931110, my bad. This simple misunderstanding ruined the contest for me; I should have re-read the statement instead of trying to debug a correct solution to a different problem.
 » Have you noticed the second sample test in problem B? strawpoll
•  » » Lol, it took me a minute to realise what you were talking about, because I really had no idea about the second one. RIP me.
•  » » I did, but pretty late.
 » 3 months ago, # | ← Rev. 2 →   Well, here is my screencast with some commentary. I tried to solve all the problems and ended up in 525th place: https://www.youtube.com/watch?v=kvzvsyUZp7Q
 » 3 months ago, # | ← Rev. 2 →   My solution to D-large was similar to my recent problem Giant Penguin. At first, note that we can change $CC$ to $(CC)$ by adding additional brackets with zero $l_i, r_i$ and infinite $p_i$. This way, we can change $CCCCC$ to $(((CC)C)C)C$, so our "bracket tree" becomes binary. In this binary tree, we can find a subtree with size between $\frac{1}{3}n$ and $\frac{2}{3}n$. Note that if we delete the two border vertices of the segment corresponding to this subtree, we get two independent problems of size $\leq \frac{2}{3}n$, so we can build something similar to a "centroid decomposition," as in the problem I mentioned before.
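As a small illustration of the first step, here is a sketch of that left-fold binarisation (the string representation and helper name are my own):

```cpp
#include <string>

// Sketch of the binarisation step described above (string form is an
// assumption): fold n sibling C's left to right, wrapping the running
// prefix in extra brackets with l = r = 0 and p = infinity, so every
// node of the resulting bracket tree has at most two children.
std::string binarise(int n) {
    if (n == 1) return "C";
    std::string s = "CC";        // two siblings already form a binary node
    for (int i = 2; i < n; ++i)
        s = "(" + s + ")C";      // CC -> (CC)C -> ((CC)C)C -> ...
    return s;
}
```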
•  » » What is your username there? I was just curious to look at your solution. Thanks in advance for replying.
•  » » » 3 months ago, # ^ | ← Rev. 2 →   CCCiq. Don't thank me (I've got insider info) *_*
 » 3 months ago, # | ← Rev. 2 →   Why did I get WA instead of RE for an index-out-of-bounds issue in problem B? Such a misleading verdict cost me a lot of time.
•  » » C++ arrays may contain garbage values outside the declared bounds. As far as I know, declaring an array in C++ simply allocates a chunk of memory (unlike Java, where arrays are objects and out-of-bounds access throws an exception), so if you don't go too far out of bounds, nothing may even notice. I remember that in problem B of round 1A (Pascal Walk) I had an int array and went as far as index 1000 in the first test case, but got AC (lol), so it is just something to pay attention to.
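A small sketch of the contrast (helper name is mine): raw C++ array access is unchecked, undefined behavior past the end, while std::vector::at gives the Java-style checked behavior:

```cpp
#include <stdexcept>
#include <vector>

// Raw-array access in C++ is unchecked: reading past the end is undefined
// behavior and often just returns garbage (hence WA instead of RE).
// std::vector::at performs the bounds check that Java does implicitly.
bool outOfBoundsIsCaught() {
    std::vector<int> v(10, 0);
    try {
        (void)v.at(10);  // index 10 is out of range for size 10
    } catch (const std::out_of_range&) {
        return true;     // caught immediately, unlike raw v[10]
    }
    return false;
}
```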
•  » » » In fact, different compilers give different verdicts on the same issue. I once got RTE for exceeding the memory limit. But I wish the GCJ compiler were advanced enough to catch index-out-of-bounds issues. I wasted 45 minutes only to find I hadn't declared enough memory.
•  » » » » Btw, have you stress-tested your code? I mean, it's unlikely you would get AC with out-of-bounds behavior, so...
•  » » » » » Nope. I got AC after declaring enough memory.
 » My screencast, without much commentary, where I learn that non-AC verdicts on sample tests don't count toward the penalty, and that trying to read numbers forever from an empty stream under their constraints causes MLE, not TLE.
 » I don't like hidden verdicts :(. I failed A's second test case because of integer overflow and lost my shot at a t-shirt. What's wrong with immediate feedback?
•  » » 3 months ago, # ^ | ← Rev. 5 →   Me too, I strongly dislike it, as I always write buggy code. A set instead of a multiset in B, and 1e9 instead of $\sqrt{2 \cdot 10^{18}}$ in A, were enough to fail the hidden tests in both and to knock me out of qualification, although I had plenty of time.
•  » » » Though I don't like hidden verdicts either, A could easily have been tested by trying out max values (I myself caught about 3 bugs related to this), so I think it's kind of unfair to blame the hidden verdicts for that.
•  » » 3 months ago, # ^ | ← Rev. 2 →   Who said immediate feedback is the only correct way? Each contest (or rather, each website) has its own way. It's nice that there are different variants of contests, so it doesn't get boring with the same ICPC format everywhere.
•  » » » Yes. There's nothing like the awesome AtCoder, where you can see all the tests. Then Yandex, where you can see the number of the first failed test. Then Code Jam and Codeforces, though Codeforces is a bit better since the pretests are diverse. Then the worst, Topcoder, where I can't even guarantee a positive score. IT'S A PERSONAL OPINION.
•  » » I actually miss the days when many more competitive programming contests evaluated the skill of testing your own implementation. In the long run, I feel it certainly helped me write working code on the first try more often. So for the last 15 minutes of this round, for example, instead of focusing on D-small, I specifically tested my solutions (a one-line max test for A, multiple manual tests for B-large since B-small is useless for testing that, etc.). Every contest has different rules, and you have to be able to adapt your strategy to each of them.
 » That moment when your A is 1 character away from being correct, and you miss an opportunity to make round 3 :(
•  » » I know that feel, bro: screencast. Fixed one char and got AC 50 seconds after the end.
•  » » » That screencast is so sad yet so op. You should get a t-shirt just for being the closest to making round 3 without actually making it, with video proof. RIP. At least you can probably make a failure-meme template out of it lol.
 » Another way to think about D-large: construct the weighted directed graph $G$ that is naturally implied by the problem. $G$ has treewidth $2$. In other words, there exists a tree $\mathcal{T}$ (a "tree decomposition") such that:

- its vertices are subsets ("bags") of $V(G)$ of size at most $3$,
- for any $uv \in E(G)$, there exists a bag containing both $u$ and $v$,
- each vertex $v \in V(G)$ occurs in bags forming a subtree of $\mathcal{T}$.

For instance, the example test: (the graph on the left consists of bronze and blue edges (actually, pairs of edges); blue edges move between adjacent characters, while bronze edges move between matching parentheses; the tree decomposition is on the right.)

Now the main idea of the algorithm boils down to a centroid decomposition of $\mathcal{T}$. In each recursive call, we find a centroid bag of $\mathcal{T}$, run (at most) $3$ Dijkstras from and $3$ Dijkstras to each vertex in this bag, answer the queries we can, and carefully recurse with the remaining queries. We still solve the problem in $O(n \log^2 n + q \log n)$ time.

I figured it might be worth mentioning this method, as it works for any graph with bounded treewidth ("looking kind of like a thick tree").
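The recursion repeatedly needs a centroid of a tree; here is a generic sketch of that primitive (helper name is mine, treating the decomposition's bags as plain tree nodes):

```cpp
#include <algorithm>
#include <vector>

// Generic centroid search: returns a vertex whose removal leaves every
// component with size at most n/2. adj is an adjacency list of a tree.
int findCentroid(const std::vector<std::vector<int>>& adj) {
    int n = (int)adj.size();
    std::vector<int> par(n, -1), order, sz(n, 1);
    order.push_back(0);
    for (int i = 0; i < (int)order.size(); ++i)   // BFS: parents precede children
        for (int v : adj[order[i]])
            if (v != par[order[i]]) { par[v] = order[i]; order.push_back(v); }
    for (int i = n - 1; i > 0; --i)               // accumulate subtree sizes
        sz[par[order[i]]] += sz[order[i]];
    for (int v = 0; v < n; ++v) {
        int worst = n - sz[v];                    // the component "above" v
        for (int u : adj[v])
            if (par[u] == v) worst = std::max(worst, sz[u]);
        if (worst <= n / 2) return v;             // centroid condition
    }
    return 0;  // unreachable for a valid tree
}
```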
•  » » 3 months ago, # ^ | ← Rev. 3 →   Why complicate things by introducing treewidth when you can just say that removing any pair of matching parentheses disconnects the graph, so you can take the centroid of the tree on the left and forget about the right one? Of course I am a big fan of treewidth, but I don't see any profit here. EDIT: OK, I understand now: removing any pair of matching parentheses does disconnect the graph, but not in the same way the left tree does, because siblings of the removed vertex stay connected. Now I see the point.
 » Posted here is the code I wrote for Wormhole in One. It works on every test I ran, it follows the logic of the analysis, and it still doesn't pass. Can anyone spot the mistake? I would love to know what it was, so I can fix it next time, because I can't seem to find any error. (The code is a bit dirty because I've tried some fixes, but it should still work.) Code:

```python
def main():
    N = int(input())
    holes = []
    for i in range(N):
        a = [int(x) for x in input().split()]
        holes.append(tuple(a))
    if N <= 4:
        return N
    slants = {}
    actual_lines = {}
    for cur_hole in holes:
        for j in holes:
            if j == cur_hole:
                continue
            if cur_hole[0] == j[0]:
                slant = 'inf'
            else:
                slant = float(cur_hole[1] - j[1]) / float(cur_hole[0] - j[0])
            slants.setdefault(slant, set())
            slants[slant].add(cur_hole)
            slants[slant].add(j)
            if slant == 'inf':
                y = cur_hole[0]
            else:
                y = cur_hole[1] - slant * cur_hole[0]
            yy = (slant, y)
            #actual_lines.setdefault(yy, set([]))
            #actual_lines[yy].add(cur_hole)
            #actual_lines[yy].add(j)
    #for i in actual_lines:
    #    if len(actual_lines[i]) % 2:
    #        slants[i] ^= 3
    a = slants[max(slants, key=lambda x: len(slants[x]) + (len(slants[x]) % 2 == 0))]
    a = len(a) + (len(a) % 2 == 0)
    return min(N, max(a + 1, 4))

num_of_cases = int(input())
for p in range(num_of_cases):
    output = main()
    print("Case #{0}: {1}".format(p + 1, output))
```