
### Una_Shem's blog

By Una_Shem, history, 3 weeks ago

Hi Codeforces!

Colleagues from Google asked us to share the announcement. Join, it'll be fun!

Hello!

Google’s team programming competition Hash Code is back for another year of challenging developers around the world to solve a Google engineering problem. Think you could optimize the layout of a Google Data Center? Or how about perfecting video streaming on YouTube?

If you’re up for the challenge, sign up to compete by February 17 at g.co/hashcode.

Hash Code takes place over 2 rounds:

• an Online Qualification Round: Thursday, February 20, from 17:30 to 21:30 UTC: Compete in this virtual round from wherever you'd like, including from a Hash Code hub. Hubs allow teams from the same community (e.g. a university or coding club) to compete side by side in a fun and exciting environment.

• a Final Round at Google Ireland: Saturday, April 25: Top scoring teams from the Online Qualification Round are invited to our Dublin office to vie for cash prizes and the title of Hash Code 2020 champion.

Whether you’ve just started coding or you’ve been participating in programming contests for years, Hash Code is a great chance to flex your coding muscles, get a glimpse into software engineering at Google, and have some fun. Take a look at previous Hash Code problem statements to see the engineering challenges participants have tackled in the past.

Register by Feb 17 at g.co/hashcode

• +208

 » 3 weeks ago, # |   +171 I'd be more supportive of this if Google didn't try to pull political stunts in their programming contests, such as creating Google Code Jam for Women. Either this suggests that Google thinks women can't compete on a level playing field with men, or that Google has found a legal way to artificially bias hiring practices towards women; I find both of these ideas extremely unappealing.
•  » » 3 weeks ago, # ^ |   -20 What now, you don't want a contest only for some country because it's discriminatory? There aren't many women in STEM and events/contests like that are meant to help the situation. I understand complaining about unfair interviews and getting a job, but a contest with the goal of popularizing programming for some underrepresented group of people? Come on.
•  » » » 3 weeks ago, # ^ |   +761 Bruh, women are much smarter than men. That's why they don't waste their lives on competitive programming.
•  » » » » 3 weeks ago, # ^ | ← Rev. 3 →   +16 Upvoted. My all-time favorite comment.
•  » » » » 3 weeks ago, # ^ |   -124 Dang, had to choose between upvoting this and keeping the upvote count at 69. Guess what, 69 didn't last.
•  » » » » » 3 weeks ago, # ^ |   +21 We can still make it 420.
•  » » » » » 3 weeks ago, # ^ |   -17 shit I thought I replied to antontrygubO_o :|
•  » » » » 2 weeks ago, # ^ |   -27 That's discrimination against men and untrue. Fact: The vast majority of famous scientists of all centuries are not women.
•  » » » 3 weeks ago, # ^ |   -15 I'm not trying to bring discrimination into this or anything, but I would say, yes, it is pretty silly to artificially restrict participants of an online contest to a geographic location. Why take extra effort to exclude people who might want to participate? (Of course, this doesn't include ICPC regionals, for example, where your geographic region matters; in that case it determines the spot at WF that you are competing for.) In the case of high school contests it makes sense to restrict participation, because you can't expect high school students to be at the same level as college/post-college IGMs who have been competing for years. But that difference in what a person has time to learn isn't present across age/gender/country of residence.
•  » » » » 3 weeks ago, # ^ | ← Rev. 2 →   +31 You only addressed the first sentence of my comment. In particular, my second sentence is the answer to your question "Why take extra effort to exclude people who might want to participate?". Your logic says it's stupid to organize a competition only for some region of a country (let's say, the championship of New York) or for a single school. The goal of those is also to popularize something in that place.
•  » » » » » 3 weeks ago, # ^ |   -28 The only reason you should exclude people from participating is if the trait that causes them to be excluded is relevant to the contest. The college I attend holds a programming contest in order to select ICPC teams. High school students and students from other universities can join, but they are ranked unofficially, since they aren't competing for a spot on an ICPC team. In this case, location is relevant to what you are competing for. It would be silly to exclude people from participating in an online contest on a global platform where geography isn't relevant, in the same way it would be silly to limit the contest to a race or gender when those aren't relevant either.
•  » » 3 weeks ago, # ^ |   +60 I agree I'm a female and I think it's embarrassing
•  » » 3 weeks ago, # ^ |   +272 There is no rule in competition "Google Code Jam for Women" that bans male participants.
•  » » 3 weeks ago, # ^ | ← Rev. 3 →   +26 There is a girls-only contest in CCPC (China CPC), and I have seen no complaints about that. There is also the CGMO (China Girls Mathematical Olympiad); its problems are quite hard (above provincial-level contests), and there are no complaints there either.
•  » » » 3 weeks ago, # ^ |   +55 Nobody questions authority in China ¯\_(ツ)_/¯
•  » » » » 3 weeks ago, # ^ |   +7 That's not exactly the case: the computer federation that hosts the contests DOES get questions about other contests it holds, and it DOES change things according to them. I think the bias did not come from the contest host, but from the stereotypes of other people, who think that computer geeks must be bald males wearing glasses, probably with curly hair, hackers staring at their screens all day who don't know how to communicate with others, etc. So the problem is not with Google (or the authority, or whoever holds the contests); it's with these stereotypes. Only if we change these views can we restore fairness in computing.
•  » » » » » 3 weeks ago, # ^ |   +1 My score 4/7.
•  » » » » » 3 weeks ago, # ^ |   +25 bald and probably with curly hair do not make sense together.
•  » » » » » » 3 weeks ago, # ^ |   0 It's just a general impression of how people see geeks.
•  » » » » » » 3 weeks ago, # ^ |   +124 (this even looks kinda good, but you can find some truly awful combinations of bald + curly hair)
•  » » » » » » » 3 weeks ago, # ^ |   +46 I know nothing about awful combinations. But there are some legendary!
•  » » 3 weeks ago, # ^ | ← Rev. 2 →   -86 Women can not compete in chess and competitive programming as good as men indeed — the leaderboard on this site and ELO ratings show that. Code Jam for Women has nothing to do with politics either. It is just a way to popularize Google's brand and hire more women to gain more labor resources. Google's capital allows them to do whatever they want, e.g. infringe on the rights of workers.
•  » » » 3 weeks ago, # ^ | ← Rev. 2 →   +123 Women can not compete in chess and competitive programming as good as men indeed — leaderboard on this site and ELO ratings shows that. It shows nothing of the sort. If 1% of people who do competitive programming are women (seems accurate), then obviously there are not going to be many in the top 100. But that doesn't mean that it's harder for women to get into the top 100; it just means there are fewer women. I don't think there is a conclusive way to prove or disprove what you just said. The Codeforces leaderboard doesn't show gender (and it shouldn't), and I doubt you looked at 40000 profiles manually and tried to figure out each one's gender. So my guess is that you are talking out of your ass.
•  » » » » 3 weeks ago, # ^ |   -16 It shows the current state — that is not "nothing of the sort". In the current state of the field, women do not perform as brightly as men. One of the reasons is that there are too few women, as you said yourself. I have no interest in discussing either the cognitive or the physical capabilities of women. But I'm open to discussing workers' rights.
•  » » » » » 3 weeks ago, # ^ |   +85 You said that "women can not compete ... as good as men". I'm sorry, but it really is hard to interpret that as anything other than "the average woman is not as good as the average man". But I'm open to discuss workers' rights. Cool, but what the hell does any of this have to do with workers' rights???
•  » » » » » » 3 weeks ago, # ^ |   -30 Ok, I see. Probably I should have said women do not compete as well as men. I made a couple of points in my first comment, replying to SecondThread, but you decided to discuss women.
•  » » 3 weeks ago, # ^ |   +28 We should stop discrimination, contests should be rated for everyone regardless of their sex
•  » » » 3 weeks ago, # ^ |   0 yelllsss!!
•  » » 3 weeks ago, # ^ |   +3 Honestly it's Google showing that they're woke — lip service, no real effect. They don't need to artificially bias hiring practices, discrimination West-style is only forbidden against specific groups.
•  » » 3 weeks ago, # ^ |   +25 I'd be more supportive of this if Google didn't try to pull political stunts in their programming contests Says the one writing political stuff in an unrelated post
•  » » » 3 weeks ago, # ^ |   -26 You also reacted to political stuff. See, this is the shitty thing about political activism: it tends to make one want to respond, so it propagates into unrelated things like programming. Unfortunately, it always ends either with flaming or a circlejerk.
•  » » » » 3 weeks ago, # ^ |   +14 Then write a new thread, rather than spamming Hash Code announcement. You also reacted to political stuff. I didn't say anything bad about that ;)
•  » » » » » 3 weeks ago, # ^ |   0 I didn't say anything bad about that ;) ??? You said "Says the one writing political stuff in an unrelated post"; it's about reacting to (Google's) political stuff, and it doesn't sound like you're praising it or being sarcastic. Tbh, it doesn't belong on CF in any form, but I can understand the dude's desire to vent.
•  » » » » » » 3 weeks ago, # ^ |   +29 "in an unrelated post"
•  » » » » » » » 3 weeks ago, # ^ |   -36 Your (or my) posts are also unrelated to this blog, so that still applies.
•  » » » » » » » » 3 weeks ago, # ^ |   +30 nice try though
•  » » » » » » » » » 3 weeks ago, # ^ |   -28 I'm glad you admit that I'm right.
•  » » 7 days ago, # ^ |   +78 Let me give you my two cents as a woman who participated in the latest "Code Jam for Women" competition. I've met loads of female competitive programmers here in Brazil and discussed why there aren't as many women in competitive programming. Most of them said it is a pretty hostile environment. Guys choose guys over higher-rated girls, or refuse to teach girls as much because they think they won't learn as fast as other dudes. Because of that, girls are much less likely to ask questions and learn from their peers. What I liked about the Code Jam for Women is that it gave some of the less experienced girls a place to ask questions about previous competitions, discuss with people similar to themselves, and be heard. I had never seen them as a group study as hard for anything else. I normally can get people to respect me enough to hear my ideas (in most cases), but the challenge is with girls at the beginning, who face this silly artificial barrier before becoming coders. Code Jam for Women helps with that.
•  » » 8 hours ago, # ^ |   -10 my personal opinion is: they just marry
 » 3 weeks ago, # | ← Rev. 2 →   +76 It won't be funny if you give us a problem about moving some units on a plane... It'd be the final fall of Hash Code.
•  » » 3 weeks ago, # ^ |   +80 organizers quickly adding the third dimension
•  » » 3 weeks ago, # ^ |   +60 Care to elaborate?
•  » » » 3 weeks ago, # ^ |   +58 It's very common in optimization/approximation contests to order units to move (taxis, soldiers, workers) and Radewoosh always complains about it. I agree with him that it gets boring and organizers should avoid that.
•  » » » 3 weeks ago, # ^ |   0 A few examples (including the contest from the announcement).
 » 3 weeks ago, # |   -24 It seems there is political behavior from Google that is even more shameful and hostile: I can not represent my country.
 » 3 weeks ago, # | ← Rev. 2 →   +139 I think this is a good place to share my thoughts. I'm the greatest anti-fan of Hash Code's formula. A few reasons:

-- One problem for four people for four hours? Definitely the wrong proportions. Do you remember the eliminations to Pizza/Deadline24/Marathon24? The last one had five problems for three people. That was something. You had to show your strategic skill: where to spend more time, where to just write something fast and let it run in the background, whether to take a problem yourself or give it to a teammate, etc. In the current formula, it always ends with very small differences. In Marathon, fighting for some epsilons didn't make sense, as you could switch problems and gain much more.

-- You are not solving the problem, you are solving the testcases. Well, on Deadline/Marathon you were solving them too, but they had $10$ files with several testcases each. Hash Code offers about two seriously hard tests. If the organizers still think that we are solving the problem: come on, you want to compare solutions on two tests? For example, TopCoder (which is the most famous in the optimization world thanks to marathon matches) runs solutions on $2000$-$5000$ tests. OK, you won't force us to run on that many, but still, giving us $10$ files with several testcases each would sometimes force the best teams to analyze some testcase deeply (as you like it so much) -- to find out how it's generated, and maybe what they should focus on. Currently, teams HAVE to focus on the properties of these two testcases, not on the whole problem. And again, it makes everybody fight for small differences in the same places instead of trying to find some other way to gain points.

-- The problems (at least in the eliminations) are just boring. There is no room for nice ideas; everybody ends up with almost the same greedy (and has to fight for small differences). A few times I won a problem in the eliminations to Pizza/Deadline, and every time I won thanks to some great idea which nobody else came up with. deadline24 2018: firstly, cut everything like that, and then play with merging these beams. pizza 2018: make long (almost) diagonal pillars (ok, in the drawing they aren't so long, sorry for that), and then play with building one on top of the other. I was very proud of myself for coming up with these (still rather simple) ideas. What can we come up with on Hash Code? "Huh, maybe send some worker to the nearest workplace on the plane"? xd

I guess that even if the problems were more interesting, the aura around good ideas would still be killed by the fact that everybody in the team has to think about the same problem, and then more teams would again come up with the same thing. Also, if you think that it's a bad idea to give such a non-standard and complicated problem when there's only one problem in a contest -- you know what to do: give us more problems. In particular, (the number of problems) $\geq$ (the number of people in one team) should hold; then you can rest assured that the best teams will advance to the finals. With more than one problem you can experiment and give us many different types of problems (and then one of them can be boring if you like such ones; it won't hurt so much).

-- The problems could be better prepared. Here again there is the issue of solving the testcases instead of the problem, but there's more. A typical situation in a short contest: ok, I finished my greedy algorithms, maybe I found something smarter, what now? Everybody should know the answer — metaheuristics like hill climbing and simulated annealing, or upgrading the greedy to beam search (it depends on the problem, of course). And what turns out on Hash Code? The limits are so high that you won't be able to use any metaheuristic efficiently (at least I've always had this feeling). Everybody, do you understand it? They organize a contest to see who's the best at optimization problems and force us to do only some simple greedy shit xd. Of course, giving us only small data would also be bad. Do you have any ideas about what would be good? Maybe $10$ files with several testcases of various sizes and parameters? We'll never know...

To sum everything up: maybe each of these defects alone wouldn't hurt so much (though it would still hurt), but all of them together make the formula of this contest the worst possible ever. I always wonder how it is possible that Google, the company which organizes Code Jam, made such a bad contest as Hash Code. Just look here, at the section "Scoring" -> "How is my score for a particular problem determined?". I guess that people thought about this section several times longer than about the whole Hash Code.

"Join, it'll be fun!" -- yeah, but what about being professional and making a competition instead of a chaotic game for kids?

Dear Hash Code team, not everything is lost. I still believe in you. You can still fix your contest and make it make sense.

Best regards,
Mateusz Radecki a.k.a. Radewoosh
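For readers unfamiliar with the metaheuristics mentioned above, the common skeleton they share can be sketched as a minimal, problem-agnostic simulated annealing loop. All names here are illustrative; this is not code from any actual Hash Code solution.

```python
import math
import random

def simulated_annealing(initial, neighbor, score, steps=10_000,
                        t0=1.0, t1=0.01, seed=0):
    """Generic simulated annealing for maximization: always accept an
    improving neighbor, accept a worsening one with probability
    exp(delta / temperature); temperature cools geometrically t0 -> t1."""
    rng = random.Random(seed)
    cur = initial
    cur_score = score(cur)
    best, best_score = cur, cur_score
    for step in range(steps):
        t = t0 * (t1 / t0) ** (step / steps)   # geometric cooling schedule
        cand = neighbor(cur, rng)
        cand_score = score(cand)
        delta = cand_score - cur_score
        if delta >= 0 or rng.random() < math.exp(delta / t):
            cur, cur_score = cand, cand_score
            if cur_score > best_score:
                best, best_score = cur, cur_score
    return best, best_score

# Toy usage: maximize -(x - 3)^2 over integers, stepping x by ±1.
best, s = simulated_annealing(
    0,
    lambda x, rng: x + rng.choice([-1, 1]),
    lambda x: -(x - 3) ** 2,
    steps=2000, seed=1,
)
```

Hill climbing is the special case where worsening moves are never accepted; beam search instead keeps the top-k partial states at each step.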
•  » » 3 weeks ago, # ^ |   +9 I completely disagree with your opinion about the Hash Code elimination round problems. A simple greedy probably lets you qualify, but if you try something simple in upsolving you will see that you are far from the first-place result. I also don't like that there are only 4-5 tests. Maybe they do so because all tests are handmade or taken from some "real" data. The Deadline/Marathon eliminations were very cool, but that doesn't mean the one-problem format is worse. They are different: the one-problem format is more about team interaction, while the several-problems format is much more individual. HC/SA worked on almost all Hash Code problems; you are just wrong about that.
•  » » » 3 weeks ago, # ^ |   +23 I remember talking with Errichto after the eliminations they won (2018 if I'm not mistaken) to learn what they did. Guess what. Just a greedy, with parameters and decisions chosen by analyzing the structure of the test (as there are different amounts of points for different tests, usually one is the most important). Maybe people had a working SA, but we won't know who was the best, as there are O(1) tests. I think that many tests on Deadline/Marathon were also handmade, so still, it's possible to do more.
•  » » » » 3 weeks ago, # ^ |   +2 Ok, maybe the 2018 elimination was bad. I don't remember well, but it seems it was an issue with the test structure.
•  » » » 3 weeks ago, # ^ |   +10 "In my opinion it makes a perfect sense to have quals and finals in the same format." -- I totally agree, but I don't want to speak too much about the finals, as I've never been to a Hash Code final. I've heard the problems tend to be a bit better than in the eliminations. Comparing with the eliminations of Pizza/Deadline24/Marathon24, I guess it'd be nice to see both the eliminations and the finals of Hash Code with multiple tasks for ~5 hours. I'm not suggesting 24-hour finals, as the competition is definitely about optimization, not about writing bots and so on, but imo there is a better way to run the competition for teams.
•  » » 3 weeks ago, # ^ |   +28 You are basically saying "I enjoy formats of different optimization contests, so this one is bad because it is not the same".If all you have to do is to write one greedy, why are you not able to advance?
•  » » » 3 weeks ago, # ^ |   +12 No, I really tried to show the defects that I see. I wrote about the small differences. Are you sure that the teams are really sorted according to the strength of their programs? Look here. Yes, Past Glory probably had a better solution than the others, but look at the eliminations: $Second = First \cdot 0.998$. To show my point: how many of the CF rounds you won would you still have won if they consisted of only one problem, let's say C? You'd have to fight for milliseconds, while with a whole problemset you can show how good you are without worrying that small mistakes disqualify you.
•  » » » » 3 weeks ago, # ^ |   +20 Yeah, I understand you, but it is still part of the format. I don't think it is legit to compare relative differences. For example, in the finals there was a testcase where you could get either $2^{19}$ or $\varepsilon$. All top teams got $2^{19}$, so the relative difference (on the other testcases) became smaller. I feel like my comment is messy; what I want to say is that if you compare differences to some baseline instead of total scores, the relative difference will increase. Still, I agree that many changes from the 2nd-place to the 1st-place solution feel insignificant and kinda random, but maybe that's what makes this format unique. The 1st-place team somehow made these changes and won, right? Most of your comment is about you not liking some problems and not liking that there is one problem instead of many. From my point of view it really is "I enjoy formats of different optimization contests, so this one is bad because it is not the same".
•  » » » » 3 weeks ago, # ^ | ← Rev. 4 →   +33 About this one: yeah, Past Glory had a better solution. On top of the required knapsack and some greedy algos (the teams in 2nd-6th also had that part), they had hand-made patterns, which are kind of a constructive solution for the given tests. While I don't consider the task from the 2019 finals an example of the greatest Hash Code problems, I can't agree with your accusations. If teams easily get X (let's say X = 6M) and then fight for the last Y points, then even if Y << X, it doesn't say anything about the quality of the problem. It's just a nonlinear scaling of the score with solution quality.
•  » » 3 weeks ago, # ^ | ← Rev. 2 →   -10 In Deadline, the first greedy that comes to mind gets at least a top-30 score (more likely top-10). That's how I qualified for 3 of 4 Deadlines (failed the acm-task the time I didn't qualify). All tests in Deadline are more or less random, and you don't have to write separate solutions for specific tests. Hash Code is harder because: it is better known; there are tests where you must write a separate solution; and you can't always know for each test whether your score is fine or whether you can get more points — you can only estimate it, not compare to other teams' scores. As I never qualified for the Hash Code finals, I should say it is more balanced (stronger people are more likely to get higher scores).
 » 2 weeks ago, # |   +10 I agree with some of those complaints about the format of Google Hash Code (having participated in the qualification 4 times already). I think there is one simple solution that would improve things considerably — have a few hard inputs that are only revealed during the last hour of the competition. This would favour more generic solutions over the ones tailor made for the specific inputs. Also, this would add way more action and tension in the end, compared to the current scoreboard freeze during the last hour.
 » 9 days ago, # |   0 Can anyone tell me: is it necessary to do the sample question given by Hash Code? And which types of questions should I practice the most at this time, please?
 » 9 days ago, # |   +10 registration extended to Feb 19th, 11:00 UTC.
 » 9 days ago, # |   -25 How many problems do we get to solve in the "online qualification round" and in the "final round"? #HashCode
 » 7 days ago, # |   0 I'm looking for a team -- Python preferred, C++ is ok. Anyone got a spot on their team, or looking for a teammate?
 » 6 days ago, # | ← Rev. 2 →   +25 Teams in the top, can you share your scores on the tests? Here are ours (team Errecto: Errichto, kostka, Chameleon2460, kabuszki).
A — 21
B — 5,822,900
C — 5,690,369
D — 5,034,770
E — 5,237,345
F — 5,348,248
total 27,133,653
We think that D can be improved quite a bit more.
•  » » 6 days ago, # ^ | ← Rev. 2 →   +51 A — 21 B — 5,822,900 C — 5,689,622 D — 5,107,115 E — 5,216,590 F — 5,348,248 total 27,184,696
•  » » » 6 days ago, # ^ |   +30 Lol, we got 98% of the max possible score on D at the beginning and decided to concentrate on other tests; now we have 70k less than you on this test
•  » » » » 6 days ago, # ^ |   0 Same
•  » » » 6 days ago, # ^ | ← Rev. 7 →   0 a
•  » » » 6 days ago, # ^ |   +39 how did your team get such a high score in D?
•  » » » » » 6 days ago, # ^ |   +41 Perhaps intentionally, there are 15,000 books in exactly 2 libraries each, and 63,600 books in exactly 3 libraries each. If the former are variables and the latter are clauses, then we have a 3-SAT instance. Then we did local search on the number of satisfied clauses. Probably a perfect score can be achieved with some SAT solver as well.
•  » » » » » » 6 days ago, # ^ |   0 Our team also found the 3-SAT modeling, but doing such a 'good' local search was the difficult part... and congratulations to your team!
•  » » » » » » 6 days ago, # ^ |   +50 Well, our local search is very simple: flip a random variable, keep that change if the number of satisfied clauses does not decrease, wait for as long as you wish :)
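The local search described in the comment above ("flip a random variable, keep the change if the number of satisfied clauses does not decrease") can be sketched in a few lines. The clause encoding below is hypothetical, not the actual Hash Code data; a clause is a list of `(variable, is_positive)` literals.

```python
import random

def count_satisfied(clauses, assign):
    # A clause is satisfied if at least one of its literals matches
    # the current assignment.
    return sum(
        any(assign[v] == pos for v, pos in clause)
        for clause in clauses
    )

def local_search(num_vars, clauses, iterations=100_000, seed=0):
    """Flip a random variable; keep the flip if the number of satisfied
    clauses does not decrease, otherwise revert it."""
    rng = random.Random(seed)
    assign = [rng.random() < 0.5 for _ in range(num_vars)]
    best = count_satisfied(clauses, assign)
    for _ in range(iterations):
        v = rng.randrange(num_vars)
        assign[v] = not assign[v]
        cur = count_satisfied(clauses, assign)
        if cur >= best:
            best = cur
        else:
            assign[v] = not assign[v]  # revert the flip
    return assign, best
```

Accepting equal-scoring flips lets the search walk across plateaus, which is what makes this dumb-looking loop surprisingly effective on satisfiable instances; recomputing `count_satisfied` from scratch is the slow part, and a real solution would update it incrementally per flipped variable.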
•  » » 6 days ago, # ^ | ← Rev. 4 →   0 We are not really contenders for the top, but we got: A-21 B-5,822,900 C-5,688,788 D-5,039,450 E-5,102,090 F-5,345,656 total 26,998,905.
I guess E was our weak point; did you guys use some special algorithm for E?
•  » » 6 days ago, # ^ |   -33 A — 21 B — 5,822,900 C — 1,421,219 D — 4,815,200 E — 3,748,621 F — 1,253,555 total 17,061,516 We think that everything can be improved quite a lot!
•  » » » 6 days ago, # ^ |   +13 I guess besides A and B :)
•  » » 6 days ago, # ^ | ← Rev. 4 →   +3 Probably not even close to the top after the unfreeze, but:
A — 21
B — 5,822,900
C — 5,689,820
D — 5,028,530
E — 5,121,579
F — 5,348,248
Total 27,011,098
Team circus_wolves (nmakeenkov, linjek).
In the last hour we did only slightly better (total was 27,009,965), because we concentrated on task C.
•  » » 6 days ago, # ^ | ← Rev. 2 →   +16 A — 21
B — 5,822,900
C — 5,645,747
D — 4,812,015
E — 4,601,441
F — 5,317,660
Total: 26,199,784
I guess we were derailed the moment we only tried improving C and F, since there was a huge margin. But D and E seem to have been better to focus on.
•  » » 6 days ago, # ^ | ← Rev. 4 →   +14 We had:
- A: 21
- B: 5,822,900
- C: 5,689,822
- D: 5,028,790
- E: 5,034,292
- F: 5,317,660
Total — 26,893,485
With tusg25 and zeyunow
•  » » 6 days ago, # ^ | ← Rev. 4 →   +9 With bardek and DamianS we have:
A -- 17
B -- 5,822,900
C -- 5,690,448
D -- 5,028,530
E -- 5,215,303
F -- 5,340,296
total -- 27,097,494
•  » » » 13 hours ago, # ^ |   0 I think your score in A can be improved quite a lot!
•  » » » » 11 hours ago, # ^ |   0 I disagree -- only by 4 points :P
•  » » 6 days ago, # ^ | ← Rev. 2 →   0 A-21 B-5,822,900 C-5,686,950 D-5,028,530 E-5,131,048 F-5,345,656 Total-27,015,105
•  » » 6 days ago, # ^ |   +4 A — 21 B — 5,822,900 C — 5,666,123 D — 5,028,790 E — 5,222,266 F — 5,330,120 total 27,070,220
I hope we qualify (((((((
•  » » 6 days ago, # ^ |   +14 TÜ Randomized algorithms (me and semjon.00) A — 21 B — 5 822 900 C — 5 689 822 D — 5 028 790 E — 5 236 142 F — 5 348 248 Total — 27 125 923
 » 6 days ago, # |   +19 What are your predictions for the cutoff?
 » 6 days ago, # |   +3 Team NZEC_ final Score: A — 21 B — 5,822,900 C — 5,645,747 D — 4,843,540 E — 4,613,390 F — 5,240,161 Total Score — 26,165,759
 » 6 days ago, # |   +15 Schedule shows that results of the qualification round will be published on Feb 25. I get that you need to check for cheating but will we not even get some "preliminary" results before? Or is scoreboard "unfrozen" on Feb 25?(We got a huge increase in the last hour and are now anxious :D)
 » 6 days ago, # |   +29 Is the live streaming dead?
•  » » 6 days ago, # ^ |   +3 +
•  » » 6 days ago, # ^ |   0 +
•  » » 6 days ago, # ^ |   +27 †
 » 6 days ago, # | ← Rev. 2 →   0 Team q1w2e3
A: 21
B: 5,822,900
C: 5,645,747
D: 4,841,980
E: 4,945,508
F: 5,289,663
Total: 26,545,819
 » 6 days ago, # | ← Rev. 2 →   0 Pumpkin Patch (WhaleVomit, PlasmaVortex):
A — 21
B — 5,822,900
C — 5,467,966
D — 4,815,395
E — 4,854,966
F — 5,202,696
Total — 26,163,944
 » 6 days ago, # |   +1 What is extended round?
•  » » 6 days ago, # ^ |   +6 Upsolving
 » 6 days ago, # |   +10 Team tratata: A – 21; B – 5,822,900; C – 5,679,708; D – 5,067,140; E – 5,205,284; F – 5,348,248; Total score — 27,123,301.
•  » » 6 days ago, # ^ |   +1 several people seem to achieve this same score on F. is that optimal? what's the idea behind it? our team only got 5,001,254 on that particular case.
 » 6 days ago, # |   +13 Who are the members of the *code team who secured first place?
 » 6 days ago, # |   +13 A 21 B 5,822,900 C 5,690,468 D 5,104,515 E 5,211,028 F 5,308,034 Total 27,136,966 Members: snuke smeke sigma425 koyumeishi
 » 6 days ago, # |   +10 Metaheuristics in vacuum (MrDindows, Furko)
B — 5,822,900
C — 5,690,378
D — 5,038,215
E — 5,210,736
F — 5,345,656
Total score 27,107,906
 » 6 days ago, # | ← Rev. 2 →   +10 A: 21
B: 5,822,900
C: 5,689,997
D: 5,033,665
E: 4,938,403
F: 5,345,656
Total Score 26,830,642
Team: Little_Piplup dlwocks31 diordhd gratus907 coffeetea
Several tweaks / random parameters / UNINTENDED BUGS / weird pieces of code :(
 » 6 days ago, # | ← Rev. 2 →   +10 A — 21
B — 5,822,900
C — 5,689,502
D — 5,043,025
E — 5,095,422
F — 5,317,660
Total — 26,968,530 (#121 overall)
from the #9 team of the last final round.
Congratulations to the finalists!
 » 6 days ago, # |   -19 A — 21 B — 5,822,900 C — 5,467,996 D — 4,839,510 E — 3,600,887 F — 5,042,544 Total: 24,773,828 Team: me and Polarity, but it's basically me solving them all. I still don't understand why my method failed so badly at E...
•  » » 6 days ago, # ^ |   0 Did you perhaps forget to sort the lists for libraries by score?
 » 6 days ago, # |   +31 We did pretty badly, but at least we got 5,690,828 from C (with a set-cover IP formulation), which seems to be the best result posted here so far. Actually, I think it is optimal.
•  » » 6 days ago, # ^ |   +161 5690870
•  » » » 6 days ago, # ^ |   +113
 » 6 days ago, # | ← Rev. 3 →   -49 Joined in the last 90 mins, wrote a stupid O(L^2) heuristic, and ended up with a score of 26,791,720. No idea what happened XD. Well, it's Hash Code: a stupid solution gets a decent score.
•  » » 6 days ago, # ^ | ← Rev. 2 →   +3 What is your team name? As far as I can see, the highest score in the contest was 27,203,691 ( < 27,791,720). Update: I just read that you solved it in 90s. I thought it was 90 mins at first. Seriously, you destroyed everyone in 90s?
•  » » » 6 days ago, # ^ | ← Rev. 2 →   -15 Sorry, it is 26,791,720 :). Just trying to say the scoring algorithm is kind of strange. Friends of mine wrote some brilliant solutions but unfortunately received lower scores. I'm 100% sure I don't deserve the score.
•  » » » » 6 days ago, # ^ | ← Rev. 3 →   -36 S̶t̶i̶l̶l̶,̶ ̶y̶o̶u̶ ̶w̶r̶o̶t̶e̶ ̶t̶h̶a̶t̶ ̶s̶o̶l̶n̶ ̶i̶n̶ ̶t̶h̶e̶ ̶9̶0̶s̶?̶ Duration also updated later.What was your soln/heuristics?
•  » » » » » 6 days ago, # ^ | ← Rev. 3 →   -19 I just did a sort with a weighting function involving the sign-up time, and a greedy to scan the best books in the remaining time. (The idea is stupid; it was 4:00am here and my brain was not functioning properly, but that's the only possible thing you can code in 90 mins, I guess.) It ran for ~5 mins on case D (the slowest) and got the score I stated above. I am now trying to optimize some kind of knapsack; hope it can get a better score.
•  » » » » » » 6 days ago, # ^ |   0 what was the exact heuristic? I tried something similar but wasn't able to consistently beat my original greedy solution :(
•  » » 6 days ago, # ^ |   0 what heuristic?
 » 6 days ago, # |   +17 A — 21 B — 5,822,900 C — 5,689,822 D — 5,037,435 E — 4,985,816 F — 5,340,296 Total: 26,876,290
Does anyone have a good solution idea for E?
 » 6 days ago, # |   +8 For any team who got more than 5M points on E: how did you do that? Our team's local search ended up getting a terrible score for E (less than 4M), and we only got 4.6M after doing some random greedy.
•  » » 6 days ago, # ^ | ← Rev. 3 →   +13 Greedily picking the next library, scoring each library as sum(book_scores) / library.signUpTime, and then greedily taking its highest-scoring untaken books gave us 5,034,357 points. Seems to be much less than what lots of people got, though.
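A rough sketch of this adaptive greedy, with hypothetical `Library` fields (the struct and names are assumptions; the score of a candidate library is the sum of its still-untaken book scores divided by its sign-up time, recomputed at every step):

```cpp
#include <algorithm>
#include <utility>
#include <vector>

// Hypothetical types; field names are assumptions based on the comment.
struct Library {
    int signUpTime;
    int shipPerDay;         // books scanned per day once signed up
    std::vector<int> books; // ids of the books this library holds
};

// Greedy from the comment: repeatedly pick the library maximizing
// sum(scores of its untaken books) / signUpTime, then take its best
// untaken books, capped by the days left after its signup finishes.
std::vector<std::pair<int, std::vector<int>>> greedyPlan(
        const std::vector<Library>& libs,
        const std::vector<int>& bookScore, int days) {
    std::vector<std::pair<int, std::vector<int>>> plan;
    std::vector<bool> taken(bookScore.size(), false), used(libs.size(), false);
    int day = 0;
    while (day < days) {
        int best = -1;
        double bestVal = 0;
        for (size_t i = 0; i < libs.size(); i++) {
            if (used[i] || day + libs[i].signUpTime >= days) continue;
            long long sum = 0;
            for (int b : libs[i].books) if (!taken[b]) sum += bookScore[b];
            double val = (double)sum / libs[i].signUpTime;
            if (best == -1 || val > bestVal) { best = (int)i; bestVal = val; }
        }
        if (best == -1) break;
        used[best] = true;
        day += libs[best].signUpTime; // signups happen one after another
        long long cap = (long long)(days - day) * libs[best].shipPerDay;
        std::vector<int> avail;
        for (int b : libs[best].books) if (!taken[b]) avail.push_back(b);
        std::sort(avail.begin(), avail.end(),
                  [&](int a, int b) { return bookScore[a] > bookScore[b]; });
        if ((long long)avail.size() > cap) avail.resize(cap);
        for (int b : avail) taken[b] = true;
        plan.push_back({best, avail});
    }
    return plan;
}
```

Recomputing the per-library sums each step makes this quadratic in the number of libraries, which is why variants cache or only partially refresh the sums on the big datasets.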
•  » » » 6 days ago, # ^ |   +39 After you get the final order of libraries, you can improve the result using maximum vertex-weighted matching in a bipartite graph. That gives about 5,210,736 points. With some heuristics for book scores based on the number of libraries containing each book, I got 5,223,111.
•  » » » 6 days ago, # ^ |   +77 We used the following way to decide the order of the libraries. Assume we have some order of the libraries; let's try to quickly (in $O(N)$) estimate the total weight of books we can take. For each library, we can calculate the number of books it can scan, (number of days after signup) * (number of books that can be scanned per day), and just assume it takes its best books. Using precalculated prefix sums, calculate the sum of scores each library takes. At this point, we simply ignore that multiple libraries may count the same book, and count such books multiple times. Now we have a quick way to estimate how good a given order is, so we can use hill climbing or simulated annealing to find a good order. After the order is decided, use min-cost max-flow to assign books to libraries (as mentioned above). We got 5,236,142 points for E this way. About 200k of it came from this approach to library ordering (before that we had a simple greedy ordering + max flow).
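The $O(N)$ estimate could be sketched like this, assuming each library's book scores have been pre-sorted descending with prefix sums computed once (the `Lib` struct and field names are hypothetical):

```cpp
#include <algorithm>
#include <vector>

// Hypothetical per-library data: prefix[k] is the total score of the
// library's k best books (prefix[0] = 0), precomputed once.
struct Lib {
    int signUp;
    int perDay;
    std::vector<long long> prefix;
};

// O(N) estimate of an ordering's value: each library scans
// (days left after its signup) * perDay of its own best books,
// deliberately ignoring that two libraries may count the same book.
long long estimateOrder(const std::vector<Lib>& libs,
                        const std::vector<int>& order, int days) {
    long long total = 0;
    int day = 0;
    for (int i : order) {
        day += libs[i].signUp;  // signups are sequential
        if (day >= days) break; // this and later libraries scan nothing
        long long cap = (long long)(days - day) * libs[i].perDay;
        long long k = std::min(cap, (long long)libs[i].prefix.size() - 1);
        total += libs[i].prefix[k];
    }
    return total;
}
```

A hill-climbing loop would then repeatedly swap two positions in `order`, keep the swap if `estimateOrder` improves, and revert it otherwise.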
•  » » » » 6 days ago, # ^ |   0 Did you run min-cost max-flow on the whole dataset or split it into parts? For us, running min-cost max-flow on the whole dataset didn't finish on anything past dataset A, and all of our splits gave bad scores.
•  » » » » » 6 days ago, # ^ |   0 The whole dataset. We have a fairly efficient implementation. It didn't finish for C, but finished for D, E and F (at least after compiling with -O2 :P).
•  » » » » » » 6 days ago, # ^ |   0 Would it be possible to share that implementation with me?
•  » » » » » » 6 days ago, # ^ |   +28 And there's no assignment to do for C anyway since all books are scanned immediately.
•  » » » » » 5 days ago, # ^ |   +8 There is a thing called "ZKW min-cost flow" which works really well in practice. I don't know the details, but basically it prunes by considering all paths of the same distance and simultaneously pushing flows of the same value, like in Dinic's. This finishes in about 0.5-1s per iteration. Probably the number of distinct rewards is small, so one could also compress the nodes with the same scores and reduce the graph. I didn't try it; maybe I should have.
•  » » » » 8 hours ago, # ^ |   +2 How do you set up the graph for min-cost max-flow? I created a bipartite graph to maximize the number of books scanned, but I couldn't think of any way to take the value of each book into account. That second part seems really essential; our implementation gave almost exactly the same scores as greedily assigning books, since we didn't think about costs.
•  » » » » » 46 minutes ago, # ^ | ← Rev. 3 →   0 The value of each book is exactly what the "min-cost" part is for. The flow network consists of the following nodes: a source, a sink, a node for each book, and a node for each library. And the edges: from the source to each book, an edge with capacity 1 and cost equal to -1 times the book's value; from each book to each library it is in, an edge with capacity 1 and cost 0; from each library to the sink, an edge with capacity equal to the number of books it can scan ((days after signup) * (books per day)) and cost 0. Actually, I guess we should have just "min-cost flow" here; maximizing flow isn't what we care about. But we were in a hurry and it worked pretty well ¯\_(ツ)_/¯.
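A compact illustration of this network, using a textbook successive-shortest-path min-cost max-flow. This is not the commenter's implementation, and it is far too slow for the real datasets, where something like ZKW or a tuned solver is needed:

```cpp
#include <algorithm>
#include <climits>
#include <deque>
#include <vector>

// Textbook successive-shortest-path min-cost max-flow; SPFA handles the
// negative source->book costs. Sketch only, not tuned for large inputs.
struct MCMF {
    struct E { int to, cap, cost, flow; };
    std::vector<E> e;
    std::vector<std::vector<int>> g;
    explicit MCMF(int n) : g(n) {}
    void addEdge(int u, int v, int cap, int cost) {
        g[u].push_back((int)e.size()); e.push_back({v, cap, cost, 0});
        g[v].push_back((int)e.size()); e.push_back({u, 0, -cost, 0});
    }
    std::pair<int, int> solve(int s, int t) { // returns {flow, cost}
        int flow = 0, cost = 0, n = (int)g.size();
        while (true) {
            std::vector<int> dist(n, INT_MAX), pre(n, -1);
            std::vector<bool> inq(n, false);
            std::deque<int> q = {s};
            dist[s] = 0;
            while (!q.empty()) {
                int u = q.front(); q.pop_front(); inq[u] = false;
                for (int id : g[u]) {
                    const E& ed = e[id];
                    if (ed.cap - ed.flow > 0 && dist[u] + ed.cost < dist[ed.to]) {
                        dist[ed.to] = dist[u] + ed.cost;
                        pre[ed.to] = id;
                        if (!inq[ed.to]) { inq[ed.to] = true; q.push_back(ed.to); }
                    }
                }
            }
            if (dist[t] == INT_MAX) break;
            int push = INT_MAX; // bottleneck along the shortest path
            for (int v = t; v != s; v = e[pre[v] ^ 1].to)
                push = std::min(push, e[pre[v]].cap - e[pre[v]].flow);
            for (int v = t; v != s; v = e[pre[v] ^ 1].to) {
                e[pre[v]].flow += push;
                e[pre[v] ^ 1].flow -= push;
            }
            flow += push; cost += push * dist[t];
        }
        return {flow, cost};
    }
};

// Build the network exactly as described: node 0 = source, 1 = sink,
// 2..2+B-1 = books, then one node per library. libCap[j] should be
// (days after signup) * (books per day) for library j.
MCMF buildBookNetwork(const std::vector<int>& bookValue,
                      const std::vector<std::vector<int>>& libBooks,
                      const std::vector<int>& libCap) {
    int B = (int)bookValue.size(), L = (int)libBooks.size();
    MCMF f(2 + B + L);
    for (int b = 0; b < B; b++) f.addEdge(0, 2 + b, 1, -bookValue[b]);
    for (int j = 0; j < L; j++) {
        for (int b : libBooks[j]) f.addEdge(2 + b, 2 + B + j, 1, 0);
        f.addEdge(2 + B + j, 1, libCap[j], 0);
    }
    return f;
}
```

For example, with book values {10, 7, 5}, library 0 holding books {0, 1}, library 1 holding books {1, 2}, and both capacities equal to 1, the cheapest flow of value 2 has cost -17: books 0 and 1 get scanned.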
•  » » 5 days ago, # ^ |   0 Got 4.8 by changing the comparator to this:

bool cmp(lib l1, lib l2) {
    // cnt is the number of books
    ll val1 = l1.cnt * l1.shiping * l2.signup;
    ll val2 = l1.signup * l2.cnt * l2.shiping;
    //ll val1 = l1.signup, val2 = l2.signup;
    return val1 > val2;
}
•  » » 5 days ago, # ^ |   0 It turned out that there was a bug in our team's local search implementation. After fixing it, we got approximately 5.1M on E and minor improvements on C, D and F as well.
 » 6 days ago, # | ← Rev. 2 →   -7 A 21 B 5,822,900 C 5,645,747 D 4,837,950 E 4,569,203 F 5,305,796 Total 26,181,617 Team: Int elligence. What were your approaches for D and E?
 » 4 days ago, # |   +18 It was really fun to participate in Hash Code for the first time. Learned a lot. We made a time-lapse video while participating; check it out: https://youtu.be/7mzgaE9xiFE After watching the video, some of our mistakes were quite evident. If you could also point out a few mistakes we made, that would be really helpful. Thanks! Looking forward to enhancing my problem-solving skills.
 » 32 hours ago, # |   0 Was anybody contacted regarding the finals?
 » 19 hours ago, # |   0 Anyone got e-mail regarding selection for world finals?
•  » » 18 hours ago, # ^ |   +3 Yeah we got the e-mail yesterday (#25).
•  » » » 17 hours ago, # ^ |   0 Great, congrats! We were actually ranked 60th and were hoping we'd somehow get through, but I guess we'll have to try next year.
•  » » 18 hours ago, # ^ |   0 Yes. 31st place.
 » 16 hours ago, # |   0
 » 13 hours ago, # |   0 My team recently participated in Google's Hash Code 2020. It was my first time, yet we ranked 115th in India and 1127th internationally. Here is my Hash Code 2020 experience: https://www.linkedin.com/pulse/hashcode-2020-experience-jai-kumar-dewani My LinkedIn post: https://www.linkedin.com/posts/jai-dewani_hashcode-hashcode2020-activity-6636693461091352576-ntpE