VladaMG98's blog

By VladaMG98, history, 7 years ago, In English

Could some of you who participated in Google Hash Code 2017 be kind enough to share some of your ideas/solutions to the problem?


»
7 years ago, # |

Idea 1 (too slow): while we can still add something, calculate for each (server, video) pair how the score would change if we added that video to that server, and greedily commit the pair with the best improvement.

Idea 2: after we commit a (video, server) pair, the improvement only changes for the pairs (same video, other server), so we can keep all pairs in a set ordered by improvement and update just the affected ones instead of recalculating everything each time.

Idea 3: the cost of adding a video is not constant. The bigger the video, the fewer videos we can add afterwards. So let's divide the improvement of each pair by the size of the video.

Idea 4: let's add random constants and tune them by brute force (by hand or otherwise). This changes the results a bit.

Idea 5: on the smaller tests, instead of always taking the best pair we can pick randomly among the top few pairs and repeat the whole process many times, keeping the best run.

Implementing this gives 2.63-2.65M; if I remember correctly, the first three ideas alone give you 2.63M. A code sketch follows below.
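A minimal sketch of how the first three ideas can be implemented (a reconstruction for illustration, not the original code; it assumes the input/output format of the 2017 qualification statement). Instead of eagerly updating the affected (video, other server) pairs in a set, it uses the equivalent lazy trick: since improvements only ever decrease as videos get placed, pop the top pair, recompute its score, and push it back if it has gone stale.

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    // input format of the 2017 qualification statement
    int V, E, R, C, X;
    scanf("%d %d %d %d %d", &V, &E, &R, &C, &X);
    vector<int> sz(V);
    for (int v = 0; v < V; v++) scanf("%d", &sz[v]);

    vector<int> dcLat(E);                     // latency from endpoint to datacenter
    vector<unordered_map<int, int>> lat(E);   // lat[e][c] = latency from endpoint e to cache c
    for (int e = 0; e < E; e++) {
        int K;
        scanf("%d %d", &dcLat[e], &K);
        while (K--) {
            int c, l;
            scanf("%d %d", &c, &l);
            lat[e][c] = l;
        }
    }

    vector<vector<pair<int, int>>> reqs(V);   // reqs[v] = (endpoint, request count)
    for (int i = 0; i < R; i++) {
        int v, e, n;
        scanf("%d %d %d", &v, &e, &n);
        reqs[v].push_back({e, n});
    }

    // best[v][e] = lowest latency at which endpoint e currently gets video v
    vector<unordered_map<int, int>> best(V);
    for (int v = 0; v < V; v++)
        for (auto [e, n] : reqs[v]) best[v][e] = dcLat[e];

    // Idea 1: improvement from putting video v into cache c, given current placements
    auto gain = [&](int c, int v) {
        long long g = 0;
        for (auto [e, n] : reqs[v]) {
            auto it = lat[e].find(c);
            if (it != lat[e].end() && it->second < best[v][e])
                g += 1LL * n * (best[v][e] - it->second);
        }
        return g;
    };

    // Idea 3: rank pairs by gain divided by video size
    priority_queue<tuple<double, int, int>> pq;   // (gain/size, cache, video)
    for (int c = 0; c < C; c++)
        for (int v = 0; v < V; v++)
            if (sz[v] <= X) {
                long long g = gain(c, v);
                if (g > 0) pq.push({(double)g / sz[v], c, v});
            }

    vector<int> cap(C, X);
    vector<vector<int>> chosen(C);
    while (!pq.empty()) {
        auto [key, c, v] = pq.top();
        pq.pop();
        if (sz[v] > cap[c]) continue;
        long long g = gain(c, v);
        if (g <= 0) continue;
        double fresh = (double)g / sz[v];
        if (fresh + 1e-9 < key) {       // Idea 2 (lazy form): entry went stale, re-insert
            pq.push({fresh, c, v});
            continue;
        }
        cap[c] -= sz[v];                // commit the currently best pair
        chosen[c].push_back(v);
        for (auto [e, n] : reqs[v]) {   // update the current best latencies
            auto it = lat[e].find(c);
            if (it != lat[e].end())
                best[v][e] = min(best[v][e], it->second);
        }
    }

    printf("%d\n", C);
    for (int c = 0; c < C; c++) {
        printf("%d", c);
        for (int v : chosen[c]) printf(" %d", v);
        printf("\n");
    }
    return 0;
}
```

Seeding the heap with all (server, video) pairs is the expensive part; in practice one would only seed pairs where some endpoint connected to the server actually requests the video.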

  • »
    »
    7 years ago, # ^ |

Maybe you can provide some code? It would be great. It's unclear to me how to implement these ideas with a decent running time.

  • »
    »
    7 years ago, # ^ |

I've implemented the first 3 ideas and they gave me only 2.22M. Story of my life.

  • »
    »
    7 years ago, # ^ |

Can you share your scores on the individual test cases? Thanks :)

»
7 years ago, # |

Did anyone try to do a local optimization after the greedy step, and what were the results?

We tried it, without much success. The state transition was to remove a random video from some cache and then try adding one or more videos in its place. This gave very little improvement: score recalculation was slow, and perhaps our greedy solution was already a deep local optimum. Or maybe it was just an implementation issue.
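For illustration, a skeleton of that kind of local search (a sketch, not the commenters' actual code; `score()` and the instance sizes are placeholders, and real code would also respect cache capacities). Recomputing the score from scratch on every move is exactly what makes this slow.

```cpp
#include <bits/stdc++.h>
using namespace std;

int C = 100, V = 1000;            // placeholder instance sizes
vector<vector<int>> caches;       // caches[c] = videos currently stored there

long long score() { return 0; }   // placeholder for the full score recomputation

int main() {
    caches.assign(C, vector<int>(1, 0));
    mt19937 rng(12345);
    long long best = score();
    for (int iter = 0; iter < 100000; iter++) {
        int c = rng() % C;
        if (caches[c].empty()) continue;
        // transition: evict one random video from cache c...
        int i = rng() % caches[c].size();
        int evicted = caches[c][i];
        caches[c].erase(caches[c].begin() + i);
        // ...and try a random video in its place (adding several is a variant)
        caches[c].push_back(rng() % V);
        long long s = score();
        if (s >= best) {
            best = s;                    // accept any non-worsening move
        } else {
            caches[c].pop_back();        // undo: restore the evicted video
            caches[c].push_back(evicted);
        }
    }
    printf("%lld\n", best);
    return 0;
}
```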

»
7 years ago, # |

Solving for a single cache, assuming all the others are fixed, is simple: compute the gain in overall latency from adding video X (for every video X), and then solve a 0/1 knapsack problem.

Our solution: shuffle the caches and solve the knapsacks one by one (a sketch of the knapsack step is below). Overall, with some tiny optimisations/randomisations here and there: 2.578M.
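A minimal sketch of the per-cache 0/1 knapsack (my own illustration, not necessarily PetarV's exact code). `gain[v]` is assumed precomputed against the current contents of all other caches; videos already served better elsewhere simply get gain 0. With the real limits the `take` table gets large, so in practice you would first restrict to candidate videos with positive gain.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Returns the videos to place in one cache of capacity X, maximising total gain.
vector<int> fillCache(int X, const vector<int>& sz, const vector<long long>& gain) {
    int V = sz.size();
    vector<long long> dp(X + 1, 0);              // dp[w] = best gain at capacity w
    vector<vector<bool>> take(V, vector<bool>(X + 1));
    for (int v = 0; v < V; v++)
        for (int w = X; w >= sz[v]; w--)
            if (dp[w - sz[v]] + gain[v] > dp[w]) {
                dp[w] = dp[w - sz[v]] + gain[v];
                take[v][w] = true;
            }
    vector<int> picked;                          // reconstruct the chosen set
    for (int v = V - 1, w = X; v >= 0; v--)
        if (take[v][w]) { picked.push_back(v); w -= sz[v]; }
    return picked;
}

int main() {
    // toy instance: capacity 10; videos with (size, gain) = (5,10), (4,7), (6,12)
    vector<int> sz = {5, 4, 6};
    vector<long long> gain = {10, 7, 12};
    for (int v : fillCache(10, sz, gain)) printf("%d ", v);  // prints "2 1" (total gain 19)
    printf("\n");
    return 0;
}
```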

  • »
    »
    7 years ago, # ^ |

Could you share your code, please?

  • »
    »
    7 years ago, # ^ |

    Hi PetarV,

Can you please explain what you mean by "assuming all others are fixed"?

    • »
      »
      »
      7 years ago, # ^ |

      Assuming you've fully decided which videos go in all the other caches.

(Of course, in reality, while executing this algorithm you usually haven't decided yet, which is why the solution obtained at the end is only approximate.)

      • »
        »
        »
        »
        7 years ago, # ^ |

Thanks for your reply.

So if I have the list of all videos with their weights and values, should I fill the caches one by one, assuming there are no other caches?

How will the order of the caches affect the result?

        • »
          »
          »
          »
          »
          7 years ago, # ^ |

Sounds about right. To be fully precise, "assuming that there are no other caches" is incorrect, since you do take into account all the assignments already made to other caches when you compute the utilities of the videos (e.g. if some cache already has video X and serves it to everyone at a better latency than the cache you're currently filling, then X will have zero value with respect to this knapsack).

The order in which this is done clearly matters: it should be simple to construct a case with only two caches where choosing which cache to fill first significantly impacts the result. You can think of optimising each cache as a local ("greedy") step, which might not yield the best results globally.

Lastly, I don't think "should" is the best word to use. This is indeed the strategy we used, and we managed to (barely) get into the finals (ranked 53rd). As some other people have said, it seems that it's possible to outperform us with simpler randomised greedy strategies (that do not involve solving several dynamic programming problems as a subtask).
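To make the utility computation concrete, a small sketch (my own naming and code, offered as illustration): the value of adding one video to the cache being filled, given everything placed so far. `curBest[e]` is the lowest latency at which endpoint `e` currently gets the video, over the datacenter and every cache filled so far; `toCache[e]` is the latency from `e` to the cache being filled.

```cpp
#include <bits/stdc++.h>
using namespace std;

long long utility(const vector<pair<int,int>>& reqs,      // (endpoint, request count)
                  const unordered_map<int,int>& curBest,
                  const unordered_map<int,int>& toCache) {
    long long u = 0;
    for (auto [e, n] : reqs) {
        auto it = toCache.find(e);
        if (it == toCache.end()) continue;                // endpoint not connected
        long long saved = (long long)curBest.at(e) - it->second;
        if (saved > 0) u += n * saved;                    // only strict improvements count
    }
    return u;
}

int main() {
    // toy check: 100 requests improve from latency 300 to 100; 50 requests are
    // already served at 80, better than this cache's 120, so they contribute 0
    vector<pair<int,int>> reqs = {{0, 100}, {1, 50}};
    unordered_map<int,int> curBest = {{0, 300}, {1, 80}};
    unordered_map<int,int> toCache = {{0, 100}, {1, 120}};
    printf("%lld\n", utility(reqs, curBest, toCache));    // 100 * 200 = 20000
    return 0;
}
```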

          • »
            »
            »
            »
            »
            »
            7 years ago, # ^ |

So in this case I can't put the same video in multiple caches, right?

            • »
              »
              »
              »
              »
              »
              »
              7 years ago, # ^ |

Of course you can, if it provides a benefit to some user with respect to the (partial) solution you have built up so far.

If it doesn't, there's really no need to (the algorithm will assign it a value of zero).

»
7 years ago, # |

Two of the input files can be solved exactly:

In "me_at_the_zoo", the limits are small, a formulation as a mixed-integer-program can be solved by an appropriate MIP-solver in a few seconds.

In "trending_today", every endpoints is connected to every cache sever and all the latencies are the same. Also the total size of all videos is the total size of all cache servers.

This reduces the problem to fitting each video into exactly one cache server, which is a variation of bin packing. To solve it, consider the videos in decreasing order of size and solve a subset-sum problem for each cache server, one server after another (sketched below).
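A sketch of that packing loop (my own code, on a toy instance standing in for the real input; it relies on the property argued above that an exact fill of each server exists):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    // toy instance: C servers of capacity X, video sizes already sorted in
    // decreasing order, total size == C * X as in "trending_today"
    int C = 3, X = 10;
    vector<int> sz = {6, 5, 5, 4, 4, 3, 2, 1};
    vector<bool> used(sz.size(), false);

    for (int c = 0; c < C; c++) {
        // subset-sum over the still-unplaced videos; par[w] remembers which
        // video first reached total size w, so the chosen set can be rebuilt
        vector<char> reach(X + 1, 0);
        vector<int> par(X + 1, -1);
        reach[0] = 1;
        for (int i = 0; i < (int)sz.size(); i++) {
            if (used[i]) continue;
            for (int w = X; w >= sz[i]; w--)
                if (reach[w - sz[i]] && !reach[w]) { reach[w] = 1; par[w] = i; }
        }
        if (!reach[X]) { printf("server %d: no exact fill\n", c); continue; }
        printf("server %d:", c);
        for (int w = X; w > 0; w -= sz[par[w]]) {   // walk the parent chain
            used[par[w]] = true;
            printf(" %d", par[w]);
        }
        printf("\n");
    }
    return 0;
}
```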