giorgosgiapis's blog

By giorgosgiapis, history, 2 years ago, In English

Round 1B of Google Code Jam just finished. What are your thoughts/comments on this round? I found the problems a bit unbalanced in difficulty, which resulted in a lot of ties.

Tags gcj
  • Vote: I like it
  • +58
  • Vote: I do not like it

»
2 years ago, # |
Rev. 2   Vote: I like it +10 Vote: I do not like it

Read the first problem wrong 3 times, which resulted in 3 penalty attempts and a solve time of 29 minutes. I also had a '<' instead of a '>' in problem B, which took me another 20 minutes to find. Guess I shouldn't have stayed up till 5 AM to participate in the Kickstart round. Fortunately, I did qualify, albeit ranked 1500th.

  • »
    »
    2 years ago, # ^ |
      Vote: I like it +1 Vote: I do not like it

    Same situation here, but the first problem statement cost me the whole contest and I failed to qualify.

»
2 years ago, # |
  Vote: I like it 0 Vote: I do not like it

In the 2nd task, why are 1e9 DP solutions (n*p*p) passing?

  • »
    »
    2 years ago, # ^ |
      Vote: I like it -18 Vote: I do not like it

    P is only 100, so it's not 1e9; it's 1e3 * 1e2 * 1e2 = 1e7.

    • »
      »
      »
      2 years ago, # ^ |
        Vote: I like it +5 Vote: I do not like it

      t*n*p*p ~ 1e9

      • »
        »
        »
        »
        2 years ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

        I used DP to solve it in t * n * p, but their t * n * p * p passed; I think the test cases don't all have maximal n and p. Or maybe the TL was just a bit high.

  • »
    »
    2 years ago, # ^ |
      Vote: I like it +1 Vote: I do not like it

    That would be roughly 1e9 operations across all the test cases, which should pass given the 5 sec TL.
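
    To spell out the arithmetic (using the limits quoted elsewhere in this thread: $$$T = 100$$$, $$$N = 1000$$$, $$$P = 100$$$):

    $$$T \cdot N \cdot P^2 = 10^2 \cdot 10^3 \cdot 10^4 = 10^9 \text{ DP transitions}, \qquad T \cdot N \cdot P = 10^7 \text{ input numbers}$$$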

  • »
    »
    2 years ago, # ^ |
      Vote: I like it +1 Vote: I do not like it

    You do not need to maintain all 100 possible ending spots, only the largest and the smallest one. You can prove that ending a customer on an item that is not the largest or the smallest is never optimal, which makes the DP noticeably faster; see the sketch below.
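
    Here is a minimal sketch of that two-state DP (my own illustration, untested against the judge; minPresses and lo/hi are my names, and I assume the per-customer minimum and maximum pressures are precomputed and the starting pressure is 0):

    #include <bits/stdc++.h>
    using namespace std;
    
    // Minimum presses keeping only two states per customer: the cheapest
    // way to finish at that customer's lowest or highest required pressure.
    long long minPresses(const vector<long long>& lo, const vector<long long>& hi) {
        int n = lo.size();
        // endLow / endHigh: best cost so far if we leave customer 0 at lo[0] / hi[0].
        long long endLow  = (hi[0] - lo[0]) + hi[0];  // visit hi[0] first, finish at lo[0]
        long long endHigh = (hi[0] - lo[0]) + lo[0];  // visit lo[0] first, finish at hi[0]
        for (int i = 1; i < n; i++) {
            long long span = hi[i] - lo[i];
            // To finish customer i at lo[i] we must touch hi[i] first, and vice versa.
            long long newLow  = span + min(endLow  + abs(lo[i - 1] - hi[i]),
                                           endHigh + abs(hi[i - 1] - hi[i]));
            long long newHigh = span + min(endLow  + abs(lo[i - 1] - lo[i]),
                                           endHigh + abs(hi[i - 1] - lo[i]));
            endLow = newLow;
            endHigh = newHigh;
        }
        return min(endLow, endHigh);
    }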

  • »
    »
    2 years ago, # ^ |
      Vote: I like it -23 Vote: I do not like it

    P = 100, N = 1000 and T = 100, which makes the input 1e3 * 1e2 * 1e2 = 1e7 numbers. I have rarely seen people give 1e7 numbers as input. Input is at most 1e6 numbers in a standard situation, which makes it 1e6 * 1e2 = 1e8... which will easily pass under the 5 sec TL.

    • »
      »
      »
      2 years ago, # ^ |
        Vote: I like it +9 Vote: I do not like it

      "Input is at most 1e6 numbers in a standard situation."
      Shouldn't this be specified?

      "I have rarely seen people give 1e7 numbers as input."
      That does not mean a test case with 1e7 numbers violates the constraints.

      TBH this situation is not very simple. Yes, as a participant it is my fault that I did not ask for clarification (not sure if one would have been provided), but such an ambiguity should not exist in the first place.

      IMO, the Code Jam team should either update their test cases and rejudge solutions, or consider all AC solutions and take the earliest attempt with the highest sum instead of the last one. They could also dismiss those affected as a small fraction and ignore them, but that would be exercising authority rather than delivering justice.

  • »
    »
    2 years ago, # ^ |
    Rev. 2   Vote: I like it 0 Vote: I do not like it

    I don't know if it was actually $$$10^9$$$ or not, but in some cases a DP with plain arrays can easily pass a $$$2s$$$ time limit. This could be due to the small constant factor of arrays.

»
2 years ago, # |
  Vote: I like it +8 Vote: I do not like it

Has anyone solved problem C using any probabilistic methods? I tried using "00000001" with the set bit uniformly distributed over all 8 indices; I observed this operation would either increase or decrease the count of set bits. (I tried it but figured it won't work: when the judge's hidden input has very few set bits, the probability of getting a hit is very low.)

  • »
    »
    2 years ago, # ^ |
      Vote: I like it +13 Vote: I do not like it

    I just queried 00000000 and then kept querying strings with the first $$$N_i$$$ bits set to 1, until we got $$$0$$$ or $$$8$$$ as a response. I did not prove it, but it felt like a good strategy. :P

  • »
    »
    2 years ago, # ^ |
      Vote: I like it +52 Vote: I do not like it

    I just spammed guesses. Whenever you get $$$d$$$ set bits as the answer to the previous query, send in a random $$$8$$$-bit guess with $$$d$$$ ones.

»
2 years ago, # |
  Vote: I like it 0 Vote: I do not like it

How do you solve C1? Can anyone provide a solution with a short explanation?

  • »
    »
    2 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    If Ni > 4 you can reduce it to Ni <= 4 by choosing V = 11111111: flipping all 8 bits turns Ni set bits into 8 - Ni, regardless of the rotation (e.g. Ni = 6 becomes 2).

    And for any case where Ni < 4, I keep choosing a random V with Ni set bits until Ni = 0.

    • »
      »
      »
      2 years ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      I did something similar, but how were you handling the cases when Ni became a small value like 1 or 2?

      • »
        »
        »
        »
        2 years ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

        Here is my solution.
        I handled the case where Ni equals 2 in the function case_2(); the logic is easy to understand. I just generate a random string with 2 set bits until Ni becomes something different from 2 (1, 3 or 4). This works because r is chosen uniformly at random.

    • »
      »
      »
      2 years ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      Can you please provide your solution too?

  • »
    »
    2 years ago, # ^ |
    Rev. 2   Vote: I like it 0 Vote: I do not like it

    While the response d != 0, output a random bitset with d ones, and hope that: 1. it's the same bitset as the hidden one up to rotation; 2. the correct rotation will be chosen.

    The chance of the first being correct is pretty high (more than 1/12, I think), and the chance of the second is at least 1/8.

    So overall it's a pretty good probabilistic algorithm.

    Here's an inefficient example of such an algorithm:

    #include <bits/stdc++.h>
    #define ll long long
    
    using namespace std;
    
    vector<int> choose(int k) {
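        // Sample k distinct positions uniformly from {0,...,7}, without replacement.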
        vector<int> res;
        vector<int> opt = {0,1,2,3,4,5,6,7};
        for(int i = 0; i < k; i ++) {
            int cur = (rand()%opt.size());
            res.push_back(opt[cur]);
            vector<int> nxt;
            for(int j = 0; j < cur; j ++) {
                nxt.push_back(opt[j]);
            }
            for(int j = cur + 1; j < opt.size(); j ++) {
                nxt.push_back(opt[j]);
            }
            opt = nxt;
        }
        return res;
    }
    
    int main() {
        ios_base::sync_with_stdio(false);
        cin.tie(0);
        srand(time(0));
        int t;
        cin >> t;
        for(int ti = 1; ti <= t; ti ++) {
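            // First query: all zeros, only to learn how many bits are currently set.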
            cout << "00000000" << endl;
            for(int i = 0; i < 299; i ++) {
                int x;
                cin >> x;
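                // x = number of set bits the judge reports; 0 means the record reached zero.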
                if(x == 0) { break; }
                string nxt = "00000000";
                vector<int> pos = choose(x);
                for(int j : pos) {
                    nxt[j] = '1';
                }
                cout << nxt << endl;
            }
        }
        return 0;
    }
    
    • »
      »
      »
      2 years ago, # ^ |
        Vote: I like it +10 Vote: I do not like it

      I was outputting not a random bitset, but the one starting with d ones followed by (8-d) zeroes. I thought there was no need to make it random, since there will be a shift anyway. Still passed C1.

      • »
        »
        »
        »
        2 years ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

        Yep, you are right! Implementing excessive/unneeded solutions was today's theme for me ;) (LIS on A)

»
2 years ago, # |
Rev. 2   Vote: I like it +48 Vote: I do not like it

It was like a Div. 3 contest up until C2. While I managed to get into the top 150, it wasn't a fun round for me...

»
2 years ago, # |
  Vote: I like it +18 Vote: I do not like it

I think the second problem has bad constraints: N * P^2 solutions pass, and it was quite risky to submit such a solution given the time limit. Also, the editorial does not mention this solution, so I suppose it wasn't intended to pass.

»
2 years ago, # |
  Vote: I like it 0 Vote: I do not like it

I have a tech question for someone way more familiar with C++ than I am.

I had been using the variable name data for a long time. However, today I got a compilation error, which was fixed after I renamed the variable data to d.

Here is the code that fails to compile.
Here is the compilation error message.

I cannot reproduce this locally using C++17. Removing either the first or the second line makes it compile. Can anyone explain what exactly is failing, and why I might not be able to reproduce it locally? Thanks!

  • »
    »
    2 years ago, # ^ |
      Vote: I like it +20 Vote: I do not like it

    I think it's because your compiler version is different. However, I haven't been able to find a common compiler on godbolt that does not reproduce this error (including recent enough versions of GCC, i.e., >= 8.1). Are you sure you're compiling with flags like -std=c++17 or --std=c++17 or --std=gnu++17? It compiles fine with C++14 and earlier. The reason is that C++17 added a function to the standard library called std::data and C++14 doesn't have it, so there is an ambiguity when you use the name data while doing using namespace std;.
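
    Here is a minimal repro of that ambiguity (a hypothetical standalone file, not the original submission; with -std=c++14 it builds, with -std=c++17 it should fail):

    #include <iterator>  // in C++17 this declares std::data
    #include <vector>
    using namespace std;
    
    vector<int> data;  // our own global named 'data'
    
    int main() {
        data.push_back(1);  // C++17 error: unqualified 'data' is ambiguous with std::data
        return 0;
    }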

    • »
      »
      »
      2 years ago, # ^ |
        Vote: I like it +16 Vote: I do not like it

      I'll blame my particular compiler (g++.EXE (tdm64-1) 5.1.0), an old version of tdm-gcc, which does compile this successfully while specifying --std=c++17.

      If I test g++ in Windows Subsystem for Linux (g++ (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0), it indeed errors out.

      I guess I expected the compiler to either raise a compile error or complain about --std=c++17, but maybe in my old compiler version C++17 support was experimental and not yet feature-complete.

      Anyway, thanks a lot for looking into this!

»
2 years ago, # |
  Vote: I like it +19 Vote: I do not like it

My experience:
Solve A, B, and C1 and somehow manage to clear the cutoff.
Look at the constraints of B and overthink a lot: "T*N*P*P = 10^9, will never pass under 5 sec in Java."
Resubmit B after modifying the solution.
Realize after the contest that the first solution passes as well.
Get kicked out. :)

»
2 years ago, # |
  Vote: I like it +2 Vote: I do not like it

I enjoyed the problems.

Problem 1 analysis: I think the section labelled "Test Set 2" should be labelled "Test Set 3": https://codingcompetitions.withgoogle.com/codejam/round/000000000087711b/0000000000acd59d#analysis

(I can't find where to report this to the Google folks)

»
2 years ago, # |
  Vote: I like it +3 Vote: I do not like it

Regarding the alternative solution in the problem C analysis: in the step of using V equal to the $$$i^{th}$$$ instruction in P[k-1] appended with $$$2^{k-1}$$$ zeros (when the assumption that the left half equals the right half turns out to be invalid), what is the proof that this will definitely make the two halves equal at some point in time?

  • »
    »
    2 years ago, # ^ |
      Vote: I like it +29 Vote: I do not like it

    If we have a sequence P[k-1] which we know takes any 2^(k-1)-bit record and makes it reach 0 at some point (no matter how each instruction in the sequence is rotated), we can append 0's to each element of that sequence to make both halves equal at some point. This is true because if we take the xor of both halves, we can treat it as an imaginary smaller record. Each step of our sequence (with appended 0's) affects this smaller record the same way it would affect a normal record of that size. (Cyclic rotations don't matter, the same way they don't matter for P[k-1].)

    If the xor of both halves reaches 0 at some point (which it must, as described above), then at that moment both halves are equal.

    • »
      »
      »
      2 years ago, # ^ |
      Rev. 3   Vote: I like it 0 Vote: I do not like it

      NVM, just got it, thanks!

    • »
      »
      »
      2 years ago, # ^ |
      Rev. 4   Vote: I like it 0 Vote: I do not like it

      I got it now, thanks a lot tbuzzelli!

      So the solution description can be rephrased as follows:

      Take an initial value $$$IV$$$ with $$$2^k$$$ bits ($$$k>0$$$). Let $$$L$$$ be $$$IV[2^k-1:2^{k-1}]$$$, let $$$R$$$ be $$$IV[2^{k-1}-1:0]$$$, let $$$LR$$$ be $$$L$$$ $$$xor$$$ $$$R$$$, and let $$$P[k-1]$$$ be a sequence that resets any $$$2^{k-1}$$$-bit initial value at some step regardless of the rotations. Some observations regarding the $$$i^{th}$$$ instruction $$$INS_i$$$ in $$$P[k-1]$$$:

      1. Applying $$$INS_i+INS_i$$$ to $$$IV$$$ is equivalent to applying $$$INS_i$$$ to $$$L$$$ and to $$$R$$$ separately (with the same rotation). However, this has no effect on $$$LR$$$ ($$$+$$$ here denotes concatenation).
      2. Applying $$$INS_i+0's$$$ to $$$IV$$$ is equivalent to applying $$$INS_i$$$ to $$$LR$$$.

      When $$$L=R$$$, $$$LR$$$ is $$$0$$$, and applying the $$$INS_i+INS_i$$$ instructions one by one will make $$$L=0$$$ and $$$R=0$$$ at the same time. $$$L=R$$$ must hold (and applying the $$$INS_i+INS_i$$$ instructions one by one will then make $$$IV=0$$$ accordingly) either before applying $$$INS_1+0's$$$, or after $$$INS_1+0's$$$, or after $$$INS_2+0's$$$, ..., or after $$$INS_i+0's$$$.
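
      To make the construction concrete, here is a minimal sketch (my own illustration, not the official analysis code; build() and all names are mine) that turns the two observations into code: P[k] starts with a "doubled" copy of P[k-1] and then alternates each $$$INS_i+0's$$$ with another doubled copy:

      #include <bits/stdc++.h>
      using namespace std;
      
      // Builds the instruction sequence that zeroes any 2^k-bit record,
      // no matter how the judge rotates each instruction.
      vector<string> build(int k) {
          if (k == 0) return {"1"};          // a 1-bit record is cleared by xoring with 1
          vector<string> prev = build(k - 1);
          string zeros(1 << (k - 1), '0');
          vector<string> res;
          // INS_i + INS_i: acts on both halves at once, zeroes IV whenever L == R
          auto doubled = [&]() {
              for (auto &s : prev) res.push_back(s + s);
          };
          doubled();                          // catches L == R holding initially
          for (auto &s : prev) {
              res.push_back(s + zeros);       // INS_i + 0's: advances LR = L xor R toward 0
              doubled();                      // catches L == R holding right after
          }
          return res;
      }
      
      The length satisfies $$$|P[k]| = (|P[k-1]|+1)^2 - 1$$$, so build(3) produces $$$2^8-1 = 255$$$ instructions, matching the $$$(2^8-1)$$$-operation sequence discussed in the comments below.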

»
2 years ago, # |
  Vote: I like it +4 Vote: I do not like it

The first problem statement cost me a lot: I was getting frustrated about why I kept getting WA while the total number of submissions was increasing at a fast rate. To qualify I then had to solve all the questions (at least till 3A), which I couldn't. So now it all comes down to the final Round 1C; hoping for the best.

»
2 years ago, # |
  Vote: I like it +26 Vote: I do not like it

An amazing fact: Problem C can be solved within just $$$16$$$ operations!

And my sad story: I insisted on working on this strategy but failed to finish before the end of the contest :( I thought the "300 operations" was just a red herring (or just there for the ease of the first subtask) and missed the regular $$$(2^8-1)$$$-operation solution.

I want to share this solution with the community because I found it mainly by case work. I don't know whether:

  1. it only works for $$$k = 8 = 2^3$$$ bits or also for other $$$k$$$'s, because in most steps there are only 2 cases to consider;
  2. there is further insight or any theory to explain it (which may lead to a simpler thinking process and simpler coding; I thought it may have some connection with higher-order differences);
  3. $$$16$$$ is the minimum number of operations needed to achieve the goal.
Solution
  • »
    »
    2 years ago, # ^ |
      Vote: I like it +21 Vote: I do not like it

    Your solution is probably the same as the $$$(2^8-1)$$$-operation solution with optimizations. Many of the operations in the $$$(2^8-1)$$$-operation solution can be skipped if you know the number of 1s in the record.

    • »
      »
      »
      2 years ago, # ^ |
        Vote: I like it +8 Vote: I do not like it

      Wow, the living problem author! Thank you for your painful problem! :(

      I think allowing operations to be skipped may lead to a completely different solution, because the results of operations are cumulative. It is difficult to say whether, after skipping some operations, you can still enumerate all potential target states. Moreover, this may break the induction hypothesis. I'd rather consider this strategy a heuristic modification of the first "state search + pruning" solution.

      Well, when there is a complete $$$(2^8-1)$$$-operation sequence, it is easy to guess that any simpler strategy is trying to form a subsequence of it. I'm interested in whether there is a better way to achieve or explain this strategy, but for now it mostly looks like special case work.

      • »
        »
        »
        »
        2 years ago, # ^ |
        Rev. 2   Vote: I like it 0 Vote: I do not like it

        You are right. It is very hard to tell which operations can be skipped without breaking the induction hypothesis.

        What I originally meant is that the instructions your solution outputs are likely a subsequence of the $$$(2^8-1)$$$-operation sequence (up to rotation), with the same intention of forcing the record into some known good state.

        For example, in the $$$(2^8-1)$$$-operation sequence there is exactly one instruction with an odd number of 1s in it, i.e. 10000000.

        This is the same as your first instruction if you know the number of 1s in the record is odd initially. The purpose of this instruction is to force the number of 1s to be even. You can check that the instructions before and after it are completely the same.

        This way, you can remove half of the $$$(2^8-1)$$$-operation sequence.

        • »
          »
          »
          »
          »
          2 years ago, # ^ |
            Vote: I like it 0 Vote: I do not like it

          It's a good point. I checked the operation sequence and found that the $$$128$$$-th term is $$$10000000$$$. Checking the number of set bits in advance can indeed save half of the operations.

          Then I guessed there would also be good operations at the $$$64$$$-th and $$$128$$$-th positions, and I found $$$11000000$$$. In my solution, this operation is used to make the number of set bits even within both the odd-indexed and the even-indexed bits, and I use it when both the odd-indexed and the even-indexed bits contain an odd number of set bits. That information may not be deducible from the total number of 1's alone.

          So now it looks like an enumeration strategy: if before $$$11000000$$$ there is an even number of set bits in the odd-indexed and even-indexed positions, then after it there is an odd number, and vice versa.

          In my solution I spend a $$$10101010$$$ and tell from the result whether $$$11000000$$$ is needed. So my solution may be using extra instructions to tell, from the total number of set bits, whether the "core" instruction that toggles the difference between some sets of bits needs to be applied.

          If for any $$$(2^k-1)$$$-operation sequence the middle operation is a core operation, then my solution indeed follows the above strategy: it successively checks whether the instructions $$$10000000$$$, $$$11000000$$$, $$$10100000$$$, $$$11110000$$$, $$$10001000$$$, $$$11001100$$$, $$$10101010$$$ and $$$11111111$$$ should be applied. If this is true, then my solution looks like a binary search strategy, compared to the $$$(2^8-1)$$$-operation complete search strategy.

          Then a few more questions:

          1. What is a core operation? I think it should be an operation in the sequence which is the only one to toggle the difference between two specific sets of bits, but I have no proof of this yet. If this is true, then the binary search strategy works.
          2. What is the minimum number of operations needed to check whether a core operation should be applied?
  • »
    »
    2 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    I also had a sad story on this problem :)

    My BFS-like solution (basically what is described in the problem analysis) uses $$$13$$$ operations.

    Most states end up being redundant: they can be handled in the same way as some other state that contains them. Removing these redundant states and printing the distances, I get this nice little chart. With some work, maybe this could even be turned into a human-understandable algorithm...

    chart

    The states furthest from the zero state are $$$12$$$ operations away; adding $$$1$$$ more operation to get the initial value of $$$n$$$ gives $$$13$$$ operations.

»
2 years ago, # |
  Vote: I like it 0 Vote: I do not like it

I was able to solve two questions, A and B, but still got a rank of 4k.

»
2 years ago, # |
  Vote: I like it +20 Vote: I do not like it

Finished debugging my full solution to C one minute after the contest ended, which meant a rank below the top 2500 instead of one in the top 150... Guess I just have bad luck or something.

I agree that the difficulty was quite unbalanced. Other than the (very hard) last subtask of C, all the problems were quite easy. If you can't solve that very last subtask, you have to solve everything else AND be fast enough to qualify.

  • »
    »
    2 years ago, # ^ |
      Vote: I like it -8 Vote: I do not like it

    I managed to qualify, but I am very impressed that you actually solved C by yourself. It took me more than an hour after reading the official solution to C2 to understand it, and I was convinced that only people who had seen it before could solve it.

»
2 years ago, # |
  Vote: I like it +4 Vote: I do not like it

"But 99.3% isn't so bad, and if we see an unlucky failure with this method, we can easily get another independent try by changing our code's random seed, since we have control over that source of randomness."

The above paragraph is from the problem C1 analysis. Can someone explain it? I can't understand it.