
By VladaMG98, history, 3 years ago

Could some of you who participated in Google Hash Code 2017 be kind enough to share some of your ideas/solutions to the problem?

• +46

 » 3 years ago, # | ← Rev. 3 →   +42
Idea 1 (too slow): while we can still add something, compute for each (server, video) pair how the score would change if we added that video to that server, and greedily pick the pair with the best improvement.
Idea 2: after adding a (video, server) pair, the "improvement score" changes only for pairs (video, other server), so we can maintain the pairs in a set and avoid recomputing everything each time.
Idea 3: the cost of adding a video is not constant — the bigger the video, the fewer videos we can add afterwards. So divide the improvement of each pair by the size of the video.
Idea 4: add some random constants and brute-force them (by hand or not). This changes the results a bit.
Idea 5: on the smaller tests, instead of always taking the single best pair, pick randomly among the top pairs and repeat many times.
Implementing all of this gives 2.63-2.65M. If I remember correctly, the first 3 ideas alone give 2.63M.
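This is not the commenter's actual code, but ideas 1-3 can be sketched roughly as follows (the data layout — `requests`, `cache_lat`, `dc_lat` — is a hypothetical simplification of the real input format). A lazy-deletion heap keyed by gain-per-size stands in for the "set of improvements":

```python
import heapq

def greedy_fill(cache_count, cache_cap, video_sizes, requests, cache_lat, dc_lat):
    """Greedy caching sketch.
    requests: list of (video, endpoint, request_count)
    cache_lat[endpoint][cache] -> latency, or None if unreachable
    dc_lat[endpoint] -> datacenter latency
    """
    placed = [set() for _ in range(cache_count)]  # videos assigned to each cache
    free = [cache_cap] * cache_count

    def gain(cache, video):
        # Latency saved if `video` is added to `cache`, given current placement.
        total = 0
        for v, e, n in requests:
            if v != video or cache_lat[e][cache] is None:
                continue
            best = dc_lat[e]  # best latency endpoint e currently gets for v
            for c in range(cache_count):
                if v in placed[c] and cache_lat[e][c] is not None:
                    best = min(best, cache_lat[e][c])
            total += n * max(0, best - cache_lat[e][cache])
        return total

    # Min-heap keyed by -(gain / size): idea 3, gain normalised by video size.
    heap = []
    for c in range(cache_count):
        for v in range(len(video_sizes)):
            g = gain(c, v)
            if g > 0:
                heapq.heappush(heap, (-g / video_sizes[v], c, v))

    while heap:
        key, c, v = heapq.heappop(heap)
        if v in placed[c] or video_sizes[v] > free[c]:
            continue
        g = gain(c, v)  # idea 2: the stored key may be stale, so re-check
        if g <= 0:
            continue
        if -g / video_sizes[v] > key + 1e-9:
            # Gain decreased since insertion: reinsert with the fresh key.
            heapq.heappush(heap, (-g / video_sizes[v], c, v))
            continue
        placed[c].add(v)
        free[c] -= video_sizes[v]
    return placed
```

Since gains only decrease as the placement grows, the lazy re-check keeps the heap consistent without recomputing every pair after each placement.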
•  » » 3 years ago, # ^ |   0 Maybe you can provide some code? It would be great. It's so unclear for me how to implement these ideas to get decent run time.
•  » » » 3 years ago, # ^ | ← Rev. 2 →   +10 Our code, which scored max points on "Videos worth spreading"
•  » » 3 years ago, # ^ |   0 I've implemented the first 3 ideas and they gave me only 2.22M. Story of my life.
•  » » 3 years ago, # ^ |   0 Can you write your scores on individual test cases? Thanks :)
•  » » » 3 years ago, # ^ |   0 1024846 516403 499991 610759 
 » 3 years ago, # |   0 Did anyone try a local optimization after the greedy step, and what were the results? We tried it, without much success. The state transition was to remove a random video from some cache and try adding one or more videos to it. This gave very little improvement, as score recalculation was slow, and perhaps our greedy solution was a deep local optimum. Or maybe it was just an implementation issue.
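The local-optimisation move described above might look like the sketch below (hypothetical helper names; `score_fn` recomputes the objective from scratch, which is exactly the slow part the commenter mentions). Remove a random video, greedily refill the cache with other videos that now fit, and keep the move only if the score improves:

```python
import copy
import random

def local_search(placed, free, sizes, score_fn, steps=200, seed=0):
    """placed: list of sets of video ids per cache; free: remaining capacity
    per cache; score_fn(placed) -> objective value to maximise."""
    rng = random.Random(seed)
    best_score = score_fn(placed)
    for _ in range(steps):
        cand, cfree = copy.deepcopy(placed), list(free)
        c = rng.randrange(len(cand))
        if not cand[c]:
            continue
        removed = rng.choice(sorted(cand[c]))
        cand[c].remove(removed)
        cfree[c] += sizes[removed]
        # Greedily add any other videos that now fit into this cache.
        for v in range(len(sizes)):
            if v != removed and v not in cand[c] and sizes[v] <= cfree[c]:
                cand[c].add(v)
                cfree[c] -= sizes[v]
        s = score_fn(cand)
        if s > best_score:  # accept only strict improvements
            placed, free, best_score = cand, cfree, s
    return placed, best_score
```

With a full rescore per move this is slow, which matches the commenter's experience; an incremental score update per affected endpoint would be the natural optimisation.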
 » 3 years ago, # |   +10 Solving for a single cache, assuming all others are fixed, is simple: compute the gain in overall latency should you add video X (for all videos X), and then solve a 0/1-knapsack problem. Our solution: shuffle the caches and solve the knapsacks one by one. Overall (with some tiny optimisations/randomisations here and there): 2.578M
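The per-cache subproblem is a textbook 0/1 knapsack; a minimal sketch (not taken from the linked repository) where `gains[v]` is the latency saved by adding video `v` to this cache, holding all other caches fixed:

```python
def knapsack_for_cache(capacity, sizes, gains):
    """Pick a subset of videos maximising total gain within `capacity`.
    Returns (best_gain, list of chosen video indices)."""
    n = len(sizes)
    dp = [0] * (capacity + 1)          # dp[w] = best gain with capacity w
    choice = [[False] * (capacity + 1) for _ in range(n)]
    for v in range(n):
        for w in range(capacity, sizes[v] - 1, -1):  # reverse: each video once
            cand = dp[w - sizes[v]] + gains[v]
            if cand > dp[w]:
                dp[w] = cand
                choice[v][w] = True
    # Backtrack to recover the chosen set.
    chosen, w = [], capacity
    for v in range(n - 1, -1, -1):
        if choice[v][w]:
            chosen.append(v)
            w -= sizes[v]
    return dp[capacity], list(reversed(chosen))
```

Running this for each cache in shuffled order, recomputing `gains` against the assignments already made, matches the strategy described above.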
•  » » 3 years ago, # ^ |   0 could you share your code please?
•  » » » 3 years ago, # ^ |   0 Sure, it's on my GitHub: https://github.com/PetarV-/HashCode2017
•  » » » » 3 years ago, # ^ |   0 Thanks a lot !
•  » » 3 years ago, # ^ |   0 Hi PetarV, can you please explain what you mean by "assuming all others are fixed"?
•  » » » 3 years ago, # ^ |   0 Assuming you've fully decided which videos go in all the other caches. (Of course, in reality, while executing this algorithm you usually haven't, and that's why the solution obtained at the end is only approximate.)
•  » » » » 3 years ago, # ^ | ← Rev. 2 →   0 Thanks for your reply. So if I have the list of all videos with their weights and values, should I fill the caches one by one, assuming there are no other caches? How will the order of the caches affect the result?
•  » » » » » 3 years ago, # ^ |   0 Sounds about right. To be fully precise, "assuming that there are no other caches" is incorrect, since you do take into account all the assignments already made to other caches when computing the videos' utilities (e.g. if some cache already has video X, and serves it to everyone better than the cache you're currently filling, then X has zero value with respect to this knapsack). The order in which this is done is clearly important: it should be simple to construct a case with only two caches where the choice of which cache to fill first significantly impacts the result. You can think of optimising each cache as a local ("greedy") step, which might not yield the best result globally. Lastly, I don't think "should" is the best word to use. This is indeed the strategy we used, and we managed to (barely) get into the finals (ranked 53rd). As some other people have said, it seems possible to outperform us with simpler randomised greedy strategies (that do not involve solving several dynamic programming problems as a subtask).
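The utility computation described here (a video is worth zero to a cache if some other cache already serves it better everywhere) can be sketched as follows; the data layout is an assumption for illustration, not the commenter's code:

```python
def video_gain(video, cache, requests, current_best, cache_lat):
    """Latency saved by putting `video` into `cache`, holding all other
    caches fixed.
    requests: list of (video, endpoint, request_count)
    current_best[e][v]: best latency endpoint e already gets for video v
                        (datacenter or some already-filled cache)
    cache_lat[e]: dict mapping cache id -> latency for endpoint e
    """
    saved = 0
    for v, e, n in requests:
        lat = cache_lat[e].get(cache)
        if v == video and lat is not None:
            # If another cache already serves v at least this fast,
            # max(0, ...) clamps the contribution to zero.
            saved += n * max(0, current_best[e][v] - lat)
    return saved
```

These gains are exactly the knapsack values for the cache currently being filled, which is why filling order matters: earlier assignments shrink the gains available to later caches.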
•  » » » » » » 3 years ago, # ^ |   0 Thanks a lot!
•  » » » » » » 3 years ago, # ^ |   0 So in this case I can't put the same video in multiple caches, right?
•  » » » » » » » 3 years ago, # ^ |   0 Of course you can, if it provides a benefit to any user with respect to the (partial) solution you have built up so far. If it doesn't, there's really no need to (the algorithm will assign it a value of zero).
 » 3 years ago, # |   +31 Two of the input files can be solved exactly:
In "me_at_the_zoo", the limits are small, so a formulation as a mixed-integer program can be solved by an appropriate MIP solver in a few seconds.
In "trending_today", every endpoint is connected to every cache server and all the latencies are the same. Also, the total size of all videos equals the total capacity of all cache servers. This reduces the problem to fitting each video into exactly one cache server, which is a variation of the bin-packing problem. To solve it, consider the videos in decreasing size order and solve a subset-sum problem for each cache server (one server after another).
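The per-server subset-sum step for "trending_today" might be sketched like this (my own illustrative code, not the commenter's): for one cache, find a subset of the remaining videos whose total size gets as close to the capacity as possible, taking videos in decreasing size order as suggested.

```python
def fit_one_cache(capacity, sizes, remaining):
    """Subset-sum sketch: pick indices from `remaining` whose sizes sum
    as close to `capacity` as possible. Returns the chosen indices."""
    reachable = {0: None}  # load -> last video used to reach it
    parent = {0: None}     # load -> previous load before that video
    for v in sorted(remaining, key=lambda v: -sizes[v]):
        # Snapshot the current loads so each video is used at most once.
        for w in sorted(reachable, reverse=True):
            nw = w + sizes[v]
            if nw <= capacity and nw not in reachable:
                reachable[nw] = v
                parent[nw] = w
    chosen, w = [], max(reachable)  # fullest achievable load
    while reachable[w] is not None:
        chosen.append(reachable[w])
        w = parent[w]
    return chosen
```

Because total video size equals total cache capacity here, filling each server exactly full (when achievable) leaves the remaining videos packable into the remaining servers.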
•  » » 3 years ago, # ^ |   0 What's the optimal score for "me at the zoo"?
•  » » » 3 years ago, # ^ |   0 516557