deltixlab's blog

By deltixlab, 3 years ago, In English
Prepared by Vladik.

Prepared by AleXman111.

Prepared by netman.

Prepared by Vladik.

Prepared by AleXman111.

Prepared by netman.

Prepared by Vladik.

Prepared by 74TrAkToR.

Vote: I like it +33 Vote: I do not like it
»
3 years ago, # |
  Vote: I like it -21 Vote: I do not like it

purplecrayon didn't comment, scary

»
3 years ago, # |
Rev. 4   Vote: I like it +48 Vote: I do not like it

Not trying to be hateful towards the setters and testers who put a lot of effort into making this round as good as it was, but the pretests for problem A were not so great: they didn't test the boundaries of the constraints, causing many people to get Time Limit Exceeded on the main system tests.

Other than that, great round!

  • »
    »
    3 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    they didn't test the boundaries of the constraints, causing many people to get Time Limit Exceeded on the main system tests.

    Can you please elaborate more?

    • »
      »
      »
      3 years ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      The problem states that the maximum limit for m is 10^9, but none of the cases in the pretests had m set to its maximum value (10^9), so many solutions passed the pretests but failed the main tests (which do have cases where m = 10^9).

      • »
        »
        »
        »
        3 years ago, # ^ |
          Vote: I like it -31 Vote: I do not like it

        If you put every possible combination in pretests, what's even the point of having system tests?

        • »
          »
          »
          »
          »
          3 years ago, # ^ |
            Vote: I like it +35 Vote: I do not like it

          Of course not every possible combination should be in the pretests, but the basic corner cases for the problem should be (e.g. minimum constraints, maximum constraints, unusual input, special cases...), which is what makes pretests strong.

        • »
          »
          »
          »
          »
          3 years ago, # ^ |
          Rev. 2   Vote: I like it +14 Vote: I do not like it

          So you want to say that pretests exist just to scam people? Think before you type something :/

          • »
            »
            »
            »
            »
            »
            3 years ago, # ^ |
              Vote: I like it -22 Vote: I do not like it

            Pretests exist to check for corner cases and smaller-sized inputs, that's it. I didn't imply anything about scamming people, think carefully before you accuse someone of something.

            • »
              »
              »
              »
              »
              »
              »
              3 years ago, # ^ |
                Vote: I like it -25 Vote: I do not like it

              Bruh, just stop saying crap.

              • »
                »
                »
                »
                »
                »
                »
                »
                3 years ago, # ^ |
                  Vote: I like it -20 Vote: I do not like it

                Follow your own advice

                • »
                  »
                  »
                  »
                  »
                  »
                  »
                  »
                  »
                  3 years ago, # ^ |
                    Vote: I like it 0 Vote: I do not like it

                  That explains your contribution.

        • »
          »
          »
          »
          »
          3 years ago, # ^ |
            Vote: I like it 0 Vote: I do not like it

          To everyone who's downvoting: at least let me know, what's the point of having system tests?

          • »
            »
            »
            »
            »
            »
            3 years ago, # ^ |
              Vote: I like it 0 Vote: I do not like it

            Pretests exist because of the limited capacity of the servers. During the contest, if there were a lot of tests, the judge might just give up under the immense load of submissions. Hence pretests are a subset of the actual tests that make sure your solution is correct, but with much less load, so that the server can return results quickly.

            • »
              »
              »
              »
              »
              »
              »
              3 years ago, # ^ |
                Vote: I like it +9 Vote: I do not like it

              So how should system tests be designed so that when code passes pretests but fails system tests, people don't say that the pretests were weak?

      • »
        »
        »
        »
        3 years ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

        I agree that the pretests were weak, but didn't people see that the constraint for m is abnormally huge? Or maybe they forgot to look closely since they were speedforcing?

        • »
          »
          »
          »
          »
          3 years ago, # ^ |
            Vote: I like it +3 Vote: I do not like it

          I agree, some people may lack concentration when focusing on speed.

  • »
    »
    3 years ago, # ^ |
    Rev. 2   Vote: I like it +85 Vote: I do not like it

    Maybe an unpopular opinion, but I think that weak pretests don't necessarily lower the quality of the round.

    • »
      »
      »
      3 years ago, # ^ |
        Vote: I like it -58 Vote: I do not like it

      yeah but it lowers your dck. Motherfcker stop giving your shit opinions. Asshole. Will choke you with big D... .

  • »
    »
    3 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    But the data of Problem D is so annoying!

»
3 years ago, # |
Rev. 2   Vote: I like it -23 Vote: I do not like it

.

»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it

In problem E, if the test case is just 4 2, then the good permutations are {{1, 2}, {1, 3, 2}, {1, 3, 4}, {1, 4, 2}, {1, 4, 3}, {2, 1}, {2, 3}, {2, 4, 1}, {2, 4, 3}, {3, 1, 2}, {3, 1, 4}, {3, 2}, {3, 4}, {4, 1, 2}, {4, 1, 3}, {4, 2, 1}, {4, 2, 3}, {4, 3}}, making a total of 18, and the sum of switched-on lights over all combinations is 48. Then shouldn't the answer be 48/18? But I checked against others' code and that is not correct. I am sure I am making some blunder; can anybody please help?

  • »
    »
    3 years ago, # ^ |
      Vote: I like it +39 Vote: I do not like it

    Shitt!! These events are not equiprobable. I just multiplied by their probabilities and it ACed. Sadly I was debugging this for 1hr in the contest. Sorry for my silly question.

»
3 years ago, # |
  Vote: I like it +163 Vote: I do not like it

We apologize to all the participants who did not find some of the statements completely clear :(

We tried to answer all your questions as quickly as possible and to minimize the impact of the statements that were not immediately clear.

We will do our best to make our next round better.

  • »
    »
    3 years ago, # ^ |
      Vote: I like it +8 Vote: I do not like it

    Yup, good job.

  • »
    »
    3 years ago, # ^ |
    Rev. 2   Vote: I like it -15 Vote: I do not like it

    Your previous round was great.

    What happened this time?

    • »
      »
      »
      3 years ago, # ^ |
        Vote: I like it +47 Vote: I do not like it

      It's hard for me to find an answer to your question. Indeed, I have invested much more myself in this round than in any of the other rounds that I have authored. I really wanted to do something awesome that the community would appreciate, but at some stage, something seemed to go wrong.

      I will personally do my best to make the next Deltix round amazing, even if I am not the author of it.

      • »
        »
        »
        »
        3 years ago, # ^ |
          Vote: I like it -20 Vote: I do not like it

        What would you say about the FSTs on D even after 37 pretests? :/ (I lost a chance at purple because of that :sob:)

        • »
          »
          »
          »
          »
          3 years ago, # ^ |
            Vote: I like it +39 Vote: I do not like it

          I looked at your solution. It seems that you are using some kind of greedy algorithm. While preparing this problem, I wrote all the greedy algorithms that I could come up with and they all did not pass pretests. I am sorry for your unpleasant experience.

          • »
            »
            »
            »
            »
            »
            3 years ago, # ^ |
              Vote: I like it +7 Vote: I do not like it

            Yes, I agree. Tbh I wasn't even expecting it to pass either. But the fact that it passed pretests was very surprising to me as well. Anyway, it's just part of the game. Thanks for the contest. Hoping to see you with another set of bugaboos soon. :)

            • »
              »
              »
              »
              »
              »
              »
              3 years ago, # ^ |
                Vote: I like it +45 Vote: I do not like it

              Thank you, glad to hear something positive

»
3 years ago, # |
  Vote: I like it +9 Vote: I do not like it

The TL in problem F is too tight; I have a solution with the same time complexity, but it just couldn't pass.

About 2e8 operations in 2 seconds is just too tight.

»
3 years ago, # |
Rev. 5   Vote: I like it +15 Vote: I do not like it

D's pretests were hilariously weak. My pure brute force with bitset went up to test 127 and I have no idea how.

Edit: OK, so I got it to pass. I think this would be hard to hack imo. (and someone hacked :p)

  • »
    »
    3 years ago, # ^ |
      Vote: I like it +9 Vote: I do not like it

    With some optimization I was able to pass the system tests.

    Submission

    Feel Free to hack ^_^

  • »
    »
    3 years ago, # ^ |
      Vote: I like it -10 Vote: I do not like it

    I think it was easier to solve D with a greedy than with other methods. I just figured out the best indices for putting 1s and then greedily replaced some 1s with 0s in order to get the best solution. However, it would be quite interesting if someone could find a counter case for my solution. Here's my submission 117911118

    PS: Its runtime was better than the bitset solution's :)

    • »
      »
      »
      3 years ago, # ^ |
        Vote: I like it +8 Vote: I do not like it

      Check this test case: 6 3 2 101 101 101 010 010 010

      Your solution gives: 010

      According to the problem it requires a subset of maximum size, so the answer should be 101.

      Feel free to correct me if I am wrong.

  • »
    »
    3 years ago, # ^ |
    Rev. 3   Vote: I like it +14 Vote: I do not like it

    It is even funnier because test 127 was sunzh's hack (and it was the only hack on this problem during the contest).

    (I also failed on test 127, same as the only other person from my room who passed pretests on D.)

  • »
    »
    3 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    sharath101, why did you use bitset? Since m < 61 we can use long long, right? Or is bitset faster than long long?

    • »
      »
      »
      3 years ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      I used a bitset of length n, since there are n friends.

»
3 years ago, # |
  Vote: I like it +16 Vote: I do not like it

Can anyone please tell me how to solve E using linearity of expectation?

»
3 years ago, # |
  Vote: I like it +28 Vote: I do not like it

Although this is a coding contest, it really makes me doubt my English skills.

»
3 years ago, # |
  Vote: I like it +3 Vote: I do not like it

Asking for hack: my submission for D

My solution is totally wrong but I can't hack it.

»
3 years ago, # |
  Vote: I like it +3 Vote: I do not like it

What was TC #8 of test 2 for C? I couldn't find a test case where my solution fails and didn't understand how to test my solution for an edge case.

»
3 years ago, # |
  Vote: I like it +111 Vote: I do not like it

Shittiest contest ever. Problems were shit with no new ideas. The hardest part of the contest was understanding the statements :\

  • »
    »
    3 years ago, # ^ |
    Rev. 2   Vote: I like it -64 Vote: I do not like it

    C has a really nice solution, though you are right about the statements part lol.

    Although you might have solved C in a few minutes.

    • »
      »
      »
      3 years ago, # ^ |
        Vote: I like it +28 Vote: I do not like it

      Well, to be honest, I think C is quite a boring problem. Although I didn't think the statement of C was bad during the contest, it seemed like just another basic stack problem anyway.

      • »
        »
        »
        »
        3 years ago, # ^ |
        Rev. 2   Vote: I like it -16 Vote: I do not like it

        Any help with the intuition for the stack, please?
        I don't know if I am missing some silly point, but I don't get how someone could think of a stack for this problem during a contest.

        I particularly mean some intuition or a proof that using a stack will always yield an answer (whenever one exists).

        • »
          »
          »
          »
          »
          3 years ago, # ^ |
            Vote: I like it +15 Vote: I do not like it

          The editorial also gives a stack solution. I think a simple greedy approach leads to it.

        • »
          »
          »
          »
          »
          3 years ago, # ^ |
            Vote: I like it 0 Vote: I do not like it

          Just use a vector and pop_back.

      • »
        »
        »
        »
        3 years ago, # ^ |
          Vote: I like it +39 Vote: I do not like it

        When the problem is way below your level, it's extremely rare for you to find it interesting. It's just a matter of different perspectives here.

        • »
          »
          »
          »
          »
          3 years ago, # ^ |
            Vote: I like it +3 Vote: I do not like it

          No, it's because it's readforces, not a genuinely hard problem.

          • »
            »
            »
            »
            »
            »
            3 years ago, # ^ |
              Vote: I like it +38 Vote: I do not like it

            I'm not here to argue with you about whether C is good/interesting or not (it's trivial to me, if you really want to know).

            The point I want to make is: when a person finds something nice about a problem, let him enjoy it. It's unnecessary to ruin his fun when you're at a much higher level.

    • »
      »
      »
      3 years ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      C is nothing new.

  • »
    »
    3 years ago, # ^ |
      Vote: I like it +17 Vote: I do not like it

    Your CF Round #684 was infinitely worse, just stupid implementation crap. Except for problem C, all the other problems were very good.

  • »
    »
    3 years ago, # ^ |
      Vote: I like it +5 Vote: I do not like it

    Problems A, B, C were trivial, just implementation. The statement of C would have been unclear without the picture, but with the picture and the example it was clear. The idea of hosting a combined Div. 1 + Div. 2 round is weird: obviously 1k+ people rated 2100+ will beat all the Div. 2 participants. That's why people think not taking part is better than competing in a contest with a 1000+ rank offset.

  • »
    »
    3 years ago, # ^ |
      Vote: I like it +10 Vote: I do not like it

    Your opinion is wrong. I understood the statements, so you're just making excuses for getting a negative delta.

    • »
      »
      »
      3 years ago, # ^ |
        Vote: I like it +6 Vote: I do not like it

      I wrote some toxic trolling shit and it got +15

      wannahavealife just wrote “yes” and got -13

      #SayNoToRatism

»
3 years ago, # |
  Vote: I like it +7 Vote: I do not like it

I am a newbie in CP and I got TLE on problem A. Can someone tell me why I got TLE and how I can optimize my solution? https://codeforces.com/contest/1523/submission/117914318

»
3 years ago, # |
  Vote: I like it +3 Vote: I do not like it

Can anyone help me get a better intuition for how the solution of C works?

  • »
    »
    3 years ago, # ^ |
      Vote: I like it +7 Vote: I do not like it

    The intuition for the stack: suppose we are inside the innermost of some nested lists; if the current value cannot extend that inner list any further, we have to go back to the next outer list. That is exactly the behaviour of a stack.

    At any time the stack will look like this (each entry is the prefix of the current item):
    {
    1.3.2
    1.3
    1
    }

    1. The first item will always be "1" (so a1 will always be 1).
    2. Now, for each ai:

      2.1 Extract the last number at the top of the stack; for (1.3.2) it is 2. Let's call it temp.
      2.2 If ai == temp + 1 (here, ai == 3), it means we can continue the current list, like 1.3.3: just remove 1.3.2 from the stack and push 1.3.3.
      2.3 Else if ai == 1, it means we can't increase 1.3.2, but we can go one level deeper, like 1.3.2.1 (think about why this always works :) ): just create 1.3.2.1 and push it onto the stack.
      2.4 Else we need to leave the current list, so just pop (the stack never becomes empty if a solution exists) and repeat from 2.1.

    My solution (a minimal sketch of the same idea is below)
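
    For reference, here is a minimal C++ sketch of the same vector-as-stack idea (my own illustrative code, assuming the usual input format of t test cases, each with n followed by the n numbers):

        #include <bits/stdc++.h>
        using namespace std;

        int main() {
            int t;
            scanf("%d", &t);
            while (t--) {
                int n;
                scanf("%d", &n);
                vector<long long> path; // current item, e.g. {1,3,2} represents "1.3.2"
                while (n--) {
                    long long a;
                    scanf("%lld", &a);
                    if (a == 1) {
                        path.push_back(1);                  // go one level deeper: x.y -> x.y.1
                    } else {
                        // a must continue some enclosing list: pop levels until
                        // the last number can be incremented to a
                        while (!path.empty() && path.back() + 1 != a) path.pop_back();
                        if (!path.empty()) path.back() = a; // x.y -> x.(y+1)
                    }
                    for (size_t i = 0; i < path.size(); i++)
                        printf("%lld%c", path[i], i + 1 == path.size() ? '\n' : '.');
                }
            }
            return 0;
        }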

    • »
      »
      »
      3 years ago, # ^ |
      Rev. 2   Vote: I like it 0 Vote: I do not like it

      Is this a valid list?

      1

      1.1

      1.2

      2

      1.3

      I just want to confirm: can we add 1.3 (a deeper level of 1) after 2? And is 2 lexicographically greater than 1.3? I know it's a basic-level doubt, but I am a bit confused, please help. Thanks.

      • »
        »
        »
        »
        3 years ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

        This is not a valid list; according to the question it should be in lexicographical order:

        1, 1.1, 1.2, 2 (at this point you have left the list (1.1, 1.2)), 1.3 (so you cannot write 1.3 here; see the picture in the statement for clarity)

      • »
        »
        »
        »
        3 years ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

        No 1.3 is not lexicographically smaller than 2

        • »
          »
          »
          »
          »
          3 years ago, # ^ |
            Vote: I like it 0 Vote: I do not like it

          Then why can't we add 1.3 after 2 if 1.3 is lexicographically greater?

          • »
            »
            »
            »
            »
            »
            3 years ago, # ^ |
              Vote: I like it 0 Vote: I do not like it

            He typoed; 1.3 is lexicographically smaller than 2. They are compared by their first characters, then the next, and so on.

            It's just how an outline works! For example, when you have something like:

            1. Do subtasks 1.1, 1.2, 1.3
            2. Do subtasks 2.1, 2.2, 2.3, but 2.3 has 2 parts: 2.3.1 and 2.3.2.
            3. Do subtask 3.1
            4. is a task in itself.
»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it

Can anybody explain why my solution for A is giving TLE? My approach is the same as in the editorial.

  • »
    »
    3 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    check is O(n), because the whole string is passed to the function by value (it gets copied, not passed by reference). Try changing bool check(string s, int i) into bool check(string &s, int i).

  • »
    »
    3 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    You are making a very simple mistake.

    Whenever you call the check function, a new string is created for it, which takes O(length of the string) time to copy all characters from the original string.
    So there are 3 things you can do:
    => use & (pass by reference) when defining the check function;
    => define the string globally;
    => call check only when needed.

    So I made this change in your code and it is Accepted. Submission: 117923870
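
    For illustration, a minimal sketch of the pass-by-reference version (the body below is just a placeholder condition, not the logic of the original submission):

        #include <string>
        using std::string;

        // The old signature, bool check(string s, int i), copied the whole string on
        // every call. Passing by const reference avoids the O(n) copy:
        bool check(const string &s, int i) {
            // placeholder body: "exactly one of the two neighbours is alive";
            // substitute whatever condition the original check() tested
            bool left  = i > 0 && s[i - 1] == '1';
            bool right = i + 1 < (int)s.size() && s[i + 1] == '1';
            return left != right;
        }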

»
3 years ago, # |
  Vote: I like it +4 Vote: I do not like it

In problem F, I also needed a priority queue to determine the correct order of processing the states, therefore my solution had an extra log factor. How do you avoid this?

  • »
    »
    3 years ago, # ^ |
    Rev. 2   Vote: I like it +34 Vote: I do not like it

    Found the answer in ecnerwala's solution: the states F(mask, x) are always visited in increasing order of x for the same mask, so we can just iterate over mask after each quest and check if "x" should be increased to visit appropriate F() states.

»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it

Problem D's test cases were weak (a brute force was accepted up to test >100), but not so weak that a brute force could crack the system testing. :))

»
3 years ago, # |
  Vote: I like it -12 Vote: I do not like it

good editorial

»
3 years ago, # |
  Vote: I like it +25 Vote: I do not like it

Not sure if this has been pointed out before, but it looks like CF's judging servers are non-deterministic with respect to TLEs?

This submission 117921507 and this submission 117923079 contain exactly the same code, with the exception of a comment on the last line. The first one ACed in 2.4 seconds, the second one TLEd. Does your code run slower when CF's servers are under load?

»
3 years ago, # |
Rev. 2   Vote: I like it +3 Vote: I do not like it

This is becoming the most downvoted editorial ಥ‿ಥ Why all those downvotes?

  • »
    »
    3 years ago, # ^ |
      Vote: I like it -13 Vote: I do not like it

    The round was bad. Most people solved only A, B and C, because D was hard.

    • »
      »
      »
      3 years ago, # ^ |
        Vote: I like it -8 Vote: I do not like it

      This by itself doesn't mean the round was bad.

      Also, it makes more sense to downvote the announcement if you're unsatisfied with the round, not the editorial.

»
3 years ago, # |
  Vote: I like it +5 Vote: I do not like it

Just curious, was it your intention to fail the 3^N check solutions for D? I FSTed it with TLE on test 126, but afterwards got AC by just using fewer trials. I feel like it's not necessarily a bad thing to force n*2^n, I just wish the constraints had made it slightly clearer.

»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it

In Problem C is it ideal to use the stack data structure? You can't iterate through a stack without deleting everything and you need to iterate to print out the corresponding line.

You could keep an additional string and delete the last two characters each time you pop but that only works if you only have single digit numbers.

Or you could copy your stack each time you print, but it seems simpler to just use an array.

»
3 years ago, # |
  Vote: I like it +22 Vote: I do not like it

Today, English IQ mattered more than spatial or memory IQ.

  • »
    »
    3 years ago, # ^ |
    Rev. 2   Vote: I like it 0 Vote: I do not like it

    I'm not sure whether it has to do with English...

    Anyway, in C I just skipped all the gibberish and figured out the true statement from the picture and the examples. Not the first time I had to do this.

»
3 years ago, # |
Rev. 2   Vote: I like it +39 Vote: I do not like it

In Problem E this Sentence:

  • "Notice that for a state where p lights are turned on the probability of getting to that state is $$$\frac{p!}{n \cdot (n - 1) \cdot (n - 2) \dots (n - p - 1)}$$$. "

Shouldn't the formula be $$$\frac{p!}{n \cdot (n - 1) \cdot (n - 2) \dots (n - p + 1)}$$$? $$$+1$$$ instead of $$$-1$$$? For $$$p=1$$$ we get $$$\frac{1!}{n }$$$ with $$$+1$$$ which should be right.

»
3 years ago, # |
Rev. 2   Vote: I like it 0 Vote: I do not like it

Me after getting AC on D with a dumb brute force: OMG I'm gonna be CM!

Me after systests: how to stop crying :sob:

»
3 years ago, # |
  Vote: I like it +37 Vote: I do not like it

In the editorial for E it says "Now for a new value of p we need to calculate the number of fitting states. Without the condition about having a distance of k between the lights this number would be equal to the number of ways to split the lights into p groups, which is C(n−1,p−1)." So let's say n=3 and p=1. Then we get C(n-1,p-1) = C(2,0) = 1. But there are three ways to choose a single lamp to turn on, so there would be three fitting states. Am I not understanding this editorial correctly, or is there a mistake in it?

»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it

Can anyone explain why this happens? Question 1: suppose the time limit per test is 1 second and there are 1000 test cases; how much time does Codeforces give for running the whole program, 1 second or 100 seconds? Question 2: I tried to add some overhead (a 1e9-iteration loop) but the execution time does not change. Why? 117926727 and 117927902

  • »
    »
    3 years ago, # ^ |
      Vote: I like it +5 Vote: I do not like it

    1) 1 second. 2) It might be because the compiler recognizes that the a and b variables don't do anything, or it sees the for loop and concludes that the only thing the loop does is change the variable a to the value 10.
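
    A quick way to see the effect (my own toy example, not the linked submissions): at -O2 a compiler will typically delete the first loop below entirely, while volatile forces it to keep the second one.

        #include <cstdio>

        int main() {
            // The compiler can prove this loop has no observable effect,
            // so it may remove it completely and it costs ~0 time.
            long long a = 0;
            for (int i = 0; i < 1000000000; i++) a += i;

            // 'volatile' makes every read/write of b observable,
            // so this loop really runs ~1e9 iterations.
            volatile long long b = 0;
            for (int i = 0; i < 1000000000; i++) b = b + i;

            printf("%lld\n", (long long)b);
            return 0;
        }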

»
3 years ago, # |
  Vote: I like it +5 Vote: I do not like it

FST Problem D :(

»
3 years ago, # |
Rev. 2   Vote: I like it +30 Vote: I do not like it

The editorial for D seems a little too minimalist, but maybe it's just me. I can't follow the first paragraph, much less the second.

  • »
    »
    3 years ago, # ^ |
      Vote: I like it +3 Vote: I do not like it

    For anyone else having problems, here is an explanation I could understand, from yash_daga in the announcement post:

    "Randomised solution for D:

    Randomly take a friend and assume it to be the part of the final subset now compare all other friends with this subset and use submask dp in the end to find maximum size subset. I took 25 random friends."

    • »
      »
      »
      3 years ago, # ^ |
      Rev. 3   Vote: I like it 0 Vote: I do not like it

      BTW I found ksun48's solution super clean and similar to the idea in the tutorial: 117886225

      UPD: Don't know why rerunning it cannot pass the time limit though.

      UPD: here is my ACed solution: 118024463

»
3 years ago, # |
  Vote: I like it +56 Vote: I do not like it

From this editorial we discovered that the probability of the contestant being hit by a falling meteorite is $$$10^{-6}$$$. But how did you calculate this probability?

  • »
    »
    3 years ago, # ^ |
      Vote: I like it +40 Vote: I do not like it

    I think it roughly equals (number of CF users hit by a meteorite fall) / (number of meteorite falls)

  • »
    »
    3 years ago, # ^ |
      Vote: I like it +32 Vote: I do not like it

    For each participant, they rolled a 10^6-sided die using random.org. If it came up as a 1, then they used their advanced space technology to locate the contestant and hit them with a meteorite.

»
3 years ago, # |
  Vote: I like it +5 Vote: I do not like it

The editorial for task E doesn't really explain why the proposed result is the expected value that needs to be calculated. Is it true that the expected number of lights turned on is $$$1$$$ + the expected number of moves after which the machine is still not broken? Although I got an AC, I'm still not convinced why it works...

»
3 years ago, # |
Rev. 3   Vote: I like it +62 Vote: I do not like it

Problem E

Why is the editorial so unclear? Or am I just that stupid? I don't understand why $$$C(n−(k−1)⋅(p−1),p−1)$$$ is right for the number of arrangements of p lights with minimum distance at least k. We can choose $$$p$$$ lights out of $$$n-(k-1)(p-1)$$$ and then insert $$$k-1$$$ extra lights after each of the first $$$p-1$$$ chosen ones, so $$$C(n-(k-1)(p-1), p)$$$ I understand. Is it a typo?

And why, by summing these values multiplied by $$$C(n, p)$$$, do we get the expected value? The expected value is a sum of values multiplied by their probabilities. We have the number of arrangements of p lights with minimum distance at least k, and we need to place the $$$(p+1)$$$-th light to destroy this condition and finish the device. I don't understand how this is taken into account.

  • »
    »
    3 years ago, # ^ |
    Rev. 2   Vote: I like it +203 Vote: I do not like it

    I think some of the formulas in the editorial have off-by-one errors, as I had slightly different formulas when upsolving. In any case, I can try to explain the idea.

    First, we can rephrase $$$\text{E[lights turned on at the end]}$$$ as $$$\text{E[number of turns the machine works for before breaking]}$$$, and split that into $$$\text{E[number of turns the machine works for and is one turn away from breaking]} + 1$$$. Now we want to count the aforementioned quantity.

    We can count more generally the number of states the machine can be in, such that no subarray of size $$$k$$$ has more than one light turned on. Say the machine has worked for $$$p$$$ turns, meaning it has $$$p$$$ on lights. In order for the $$$k$$$ condition to not be violated, there must be at least $$$k - 1$$$ off lights in between on lights. We can think of this as a stars and bars problem: our $$$p$$$ lights function as bars separating the remaining $$$n - p$$$ off lights into $$$p + 1$$$ groups, where the middle $$$p - 1$$$ groups must each have at least $$$k - 1$$$ off lights. If we start by putting $$$k - 1$$$ off lights into each of the middle groups, then we are left with $$$n - p - (k - 1) \cdot (p - 1)$$$ off lights to distribute into $$$p + 1$$$ groups, where groups may be empty. Applying stars and bars (specifically theorem two in the link above) gives us $$$\binom{n - (k - 1) \cdot (p - 1)}{p}$$$.

    Now, not all of these states can lead to the machine breaking in the next turn. But each state contributes some amount to the answer. The machine starts at the state of $$$n$$$ off lights, then each turn it moves to another state by turning on some light, until it breaks. So the number of turns the machine works for is the number of states on the path it takes. Each state thus contributes $$$+1$$$ to the answer for the probability that it is on a path. The probability that a state with $$$p$$$ on lights is visited is $$$\frac{p!}{n \cdot (n - 1) \cdot (n - 2) \cdot \dots \cdot (n - p + 1)}$$$. Editorial says $$$n - p - 1$$$ but I believe that's a typo. This comes from the fact that we can pick any of the $$$p$$$ lights first, then any of the $$$p - 1$$$ lights second, and so on. Similar logic is applied to the denominator.

    So our final answer is thus $$$1 + \sum_{p=1}^n \binom{n - (k - 1) \cdot (p - 1)}{p} \cdot \frac{p!}{n \cdot (n - 1) \cdot (n - 2) \cdot \dots \cdot (n - p + 1)}$$$.
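
    To make the final formula concrete, here is a minimal sketch that evaluates it with modular arithmetic. The modulus, the input format (t test cases, each with n and k) and the factorial table size are assumptions made for illustration, not taken from the statement:

        #include <bits/stdc++.h>
        using namespace std;
        typedef long long ll;

        const ll MOD = 1000000007;   // assumed modulus
        const int MAXN = 100005;     // assumed bound on n
        ll f[MAXN], inv_f[MAXN];

        ll pw(ll a, ll e) { ll r = 1; a %= MOD; for (; e; e >>= 1, a = a * a % MOD) if (e & 1) r = r * a % MOD; return r; }
        ll C(ll a, ll b) { return (b < 0 || b > a) ? 0 : f[a] * inv_f[b] % MOD * inv_f[a - b] % MOD; }

        int main() {
            f[0] = 1;
            for (int i = 1; i < MAXN; i++) f[i] = f[i - 1] * i % MOD;
            inv_f[MAXN - 1] = pw(f[MAXN - 1], MOD - 2);
            for (int i = MAXN - 2; i >= 0; i--) inv_f[i] = inv_f[i + 1] * (i + 1) % MOD;

            int t;
            scanf("%d", &t);
            while (t--) {
                ll n, k, ans = 1;
                scanf("%lld %lld", &n, &k);
                // p! / (n(n-1)...(n-p+1)) equals 1 / C(n, p), so each term is
                // C(n - (k-1)(p-1), p) / C(n, p).
                for (ll p = 1; n - (k - 1) * (p - 1) >= p; p++)
                    ans = (ans + C(n - (k - 1) * (p - 1), p) * pw(C(n, p), MOD - 2)) % MOD;
                printf("%lld\n", ans);
            }
            return 0;
        }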

    • »
      »
      »
      3 years ago, # ^ |
        Vote: I like it +3 Vote: I do not like it

      Thank you!

    • »
      »
      »
      3 years ago, # ^ |
      Rev. 4   Vote: I like it +21 Vote: I do not like it

      Do I understand this right: the $$$1+...$$$ in the final answer comes from the fact that there is a 100% chance to end in an ending state, i.e. one with 2 lights within $$$k$$$ consecutive lamps (we didn't count those states yet, so we are not overcounting this way)?

      Edit: Thank you very much for the explanation! It explains critical aspects that were left out of the editorial and it helps a lot. I was able to write my own solution from scratch and to understand and reconstruct the proof for it with your help!

    • »
      »
      »
      3 years ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      Thanks, nice write-up. But the most unclear moment for me is the one you explained the least (or I'm just not smart): why is this sum the expected value? Let a good state be a state such that no subarray of size k has more than one light turned on. Let $$$PG_p$$$ be the probability of being in a good state with p lights turned on, and $$$PT_{p+1}$$$ the probability of being in a terminated state with p+1 lights turned on. Then the answer is $$$E = \sum_{p=1}^{n - 1} (p + 1) PT_{p + 1}$$$. From a good state with p lights we can move to a good state or to a terminated state with $$$p+1$$$ lights, with probabilities $$$trg_p$$$ and $$$trt_p = 1 - trg_p$$$ respectively. Then $$$E = \sum_{p=1}^{n - 1} (p+1)PG_p \cdot trt_p = \sum_{p=1}^{n-1}PG_p + \sum_{p=1}^{n-1} \left( p \cdot PG_p - trg_p \cdot PG_p - p \cdot PG_p \cdot trg_p \right)$$$, so why is $$$\sum_{p=1}^{n-1} \left( p \cdot PG_p - trg_p \cdot PG_p - p \cdot PG_p \cdot trg_p \right) = 1$$$?

      • »
        »
        »
        »
        3 years ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

        I'm not too sure where the right hand side of your equation, $$$\sum_{p=1}^{n-1} PG_p + \sum_{p=1}^{n-1} p \cdot PG_p - trg_p \cdot PG_p - p \cdot PG_p \cdot trg_p$$$, comes from. I think it might help to not think of expected value as the canonical definition of $$$\sum_{p=1}^n p \cdot \text{P[machine terminates with } p \text{ on lights]}$$$, but instead think of it as a contribution of each state to the answer. One way of considering that is as counting states on paths as per the last paragraph in my comment. Another way that might be more straightforward is _Bishop_'s comment below, where we split the expected value further into sum of probabilities that the machine is still working after each turn with linearity of expectation.

      • »
        »
        »
        »
        3 years ago, # ^ |
          Vote: I like it +5 Vote: I do not like it

        I'll leave one comment. I didn't rephrase the expected value; instead I calculated the probability of terminating with p+1 lights using the following logic. Say we know the process didn't terminate at a state with p lights; then it splits into (n-p) new states with (p+1) lights. Some of them are terminating, and some of them are not. But we know the formula for the non-terminating ones! Thus, we take the number of non-terminating states with p lights on, multiply by (n-p), and then subtract the number of non-terminating states with (p+1) lights on. All we have to do then is multiply by the probability of such a state, factorial(n-p-1)/factorial(n), and multiply by the value of this state, (p+1). 118170317

        • »
          »
          »
          »
          »
          3 years ago, # ^ |
            Vote: I like it 0 Vote: I do not like it

          Thanks, this problem has so many smart approaches) But you have some inaccuracy. First, you count not the number of non-terminating states, but the number of non-terminating states together with the order of turning on the lights, which is what makes the transition to $$$p+1$$$ lights a simple multiplication by $$$n-p$$$. Second, factorial(n-p-1)/factorial(n) isn't a probability; it's the number of ways to select $$$p+1$$$ lights from $$$n$$$ in order.

          • »
            »
            »
            »
            »
            »
            3 years ago, # ^ |
              Vote: I like it 0 Vote: I do not like it

            Yes, I include the order of the lights in the state as well. And factorial(n-p-1)/factorial(n) is exactly the probability of reaching that particular state.

    • »
      »
      »
      3 years ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      Thank you for the clarification!

    • »
      »
      »
      3 years ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      Your explanation is very clear, and I feel that the official explanation is very irresponsible (for me personally).

      I had been troubled by the official one for a long time. Thank you again!

  • »
    »
    3 years ago, # ^ |
      Vote: I like it +50 Vote: I do not like it

    Let $$$X$$$ be the random variable denoting the number of lights turned on when the $$$\textbf{process terminates}$$$. Now, let us try to figure out the value of $$$P(X \geq d)$$$. It is easy to see that for $$$d \geq 2$$$, the value of $$$P(X \geq d)$$$ is nothing but the probability of arriving at a sequence of $$$(d-1)$$$ elements such that each pair of adjacent elements is at least $$$k$$$ apart. And $$$P(X \geq 1) = 1$$$. Once this is done, we can write
    $$$E(X) = P(X \geq 1) + P(X \geq 2) + \dots + P(X \geq n)$$$ since $$$X$$$ is a discrete random variable.
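
    For completeness, the identity used in the last line is the standard tail-sum formula for a non-negative integer-valued random variable: $$$E(X) = \sum_{d \geq 1} d \cdot P(X = d) = \sum_{d \geq 1} \sum_{j = 1}^{d} P(X = d) = \sum_{j \geq 1} \sum_{d \geq j} P(X = d) = \sum_{j \geq 1} P(X \geq j)$$$, where swapping the order of summation is valid because all terms are non-negative.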

»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it

Need help with the solution of problem C. Getting WA on test case 2.

Solution link: 117905624

»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it

D is also solvable without SOS dp, using what's maybe considered bitset cheese: https://codeforces.com/contest/1523/submission/117930426

»
3 years ago, # |
  Vote: I like it +10 Vote: I do not like it

Not sure how my solution worked for D, but it is not randomized and falls well under the time limit.

Solution
  • »
    »
    3 years ago, # ^ |
      Vote: I like it +22 Vote: I do not like it

    Hacked with a generator I stole from someone else :P

»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it

I also wrote it with a bitset. I see that someone said it can be hacked; I just want to get the hack data so that I can improve my code. See here: [submission:117940279]

»
3 years ago, # |
  Vote: I like it +11 Vote: I do not like it

Could you please tell me what "submask" means? I just can't understand this word.

»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it

  • »
    »
    3 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    That's true for Prob. D as well :(

  • »
    »
    3 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    I'm guessing you did that. If so, that is the reason why you are green

    • »
      »
      »
      3 years ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      But the editorial also says that you need to notice that 1 2 1 2 1 2 works for all cases.

      How is that noticing done? Isn't it done by trial and error? I tried trial and error too; sometimes it works, sometimes it doesn't.

      Isn't writing a simple brute-force/recurrence solution (which takes at most 10 minutes) and running it to find the sequence the best way?

      How did you notice the 121212 thing? Is there a logical procedure to notice it?

      • »
        »
        »
        »
        3 years ago, # ^ |
        Rev. 2   Vote: I like it +10 Vote: I do not like it

        N is even and the max number of operations is 5000, with n <= 1000. It is important to note that when you do operation 1, the sum replaces the number at the larger position, and for operation 2, the difference replaces the number at the smaller one.

        Since N can be as small as 2, the task must be solvable for just 2 numbers, meaning you can apply exactly what you did for n=2 to all 500 pairs of numbers in the max case (n=1000), and since n is even you won't have to worry about an incomplete pair. Therefore, for n = 2 it must be doable in at most 10 operations (500*10=5000).

        After coming up with 1 2 1 (trial and error or algebra, not hard to get), notice that the 2nd number is now negative but in the 1st position, while the 1st number is still positive but in the 2nd position.

        e.g.
        5 7 (original pair)
        5 12 (operation 1)
        -7 12 (operation 2)
        -7 5 (operation 1)

        We are here now; how do we get -5 -7?

        This is like: we got from a b to -b a (we are on the right track, since we managed to negate the second number but the positions got switched). How do we get to -a -b? To do that, you need to negate the second number (which is currently a) and switch the positions of the two numbers, and 1 2 1 does exactly what we need...

        2 1 2 also does the required task, which is how 121121 or 121212 is obtained (at least for me).

  • »
    »
    3 years ago, # ^ |
    Rev. 2   Vote: I like it 0 Vote: I do not like it

    I agree with you that writing the brute force here is a totally acceptable way and not necessarily a sign of being green. Actually I got lucky and managed to find a solution manually, but while doing so I was questioning the validity of such a method and thinking of coding the brute force.

»
3 years ago, # |
  Vote: I like it +3 Vote: I do not like it

Ah, could anybody tell me why my solution for D gets WA on test 80? 117914897

I found some AC codes that are just like mine.

  • »
    »
    3 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    I can tell you why: some hackers will look at your code and make a set of data according to your random process. To avoid that, I recommend you use a less predictable random seed, or just srand((unsigned)time(0)*time(0));. Wish you good luck.
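
    A common alternative (my suggestion, not code from either submission) is an mt19937 seeded from a high-resolution clock:

        #include <bits/stdc++.h>
        using namespace std;

        // The seed depends on the current time in nanoseconds, so a hacker cannot
        // predict which permutation of candidates the solution will try.
        mt19937 rng(chrono::steady_clock::now().time_since_epoch().count());

        int main() {
            vector<int> order(10);
            iota(order.begin(), order.end(), 0);
            shuffle(order.begin(), order.end(), rng); // unpredictable order of candidates
            for (int x : order) printf("%d ", x);
            printf("\n");
            return 0;
        }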

    • »
      »
      »
      3 years ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      thanks.

      I think my solution is not good enough.

      I found that most of these AC codes got WA now.

»
3 years ago, # |
Rev. 2   Vote: I like it +5 Vote: I do not like it

Can you prove to me how the editorial solves problem B?

  • »
    »
    3 years ago, # ^ |
    Rev. 2   Vote: I like it +3 Vote: I do not like it

    Let us take an array of just 2 elements, say {a, b}. Think of it this way. We can convert it to {-a, -b} in 6 moves. How?

    Look carefully at these operations:

    (I will be denoting the first element as x and second as y, because their values will be changing)

    x = x + y; now we have {a + b, b}

    y = y - x; {a + b, -a}

    x = x + y; {b, -a}

    So we can see that these 3 operations got us from {a, b} to {b, -a}. It is clear enough now, that applying these 3 operations again on some {x, y} will get us from {x, y} to {y, -x}, i.e. {-a, -b}.

    I hope it was clear. If it wasn't, please let me know!
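
    Here is a minimal sketch that prints such a sequence for every adjacent pair. It assumes that "1 i j" replaces a_i with a_i + a_j, that "2 i j" replaces a_j with a_j - a_i, and that the output is the number of operations followed by the operations themselves; double-check the statement for the exact format:

        #include <bits/stdc++.h>
        using namespace std;

        int main() {
            int t;
            scanf("%d", &t);
            while (t--) {
                int n, x;
                scanf("%d", &n);
                for (int i = 0; i < n; i++) scanf("%d", &x); // the values themselves are not needed
                printf("%d\n", 3 * n);                       // 6 operations per pair of positions
                for (int i = 1; i <= n; i += 2) {
                    // (a,b) -> (b,-a) -> (-a,-b): the triple 1,2,1 applied twice
                    for (int rep = 0; rep < 2; rep++)
                        printf("1 %d %d\n2 %d %d\n1 %d %d\n", i, i + 1, i, i + 1, i, i + 1);
                }
            }
            return 0;
        }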

  • »
    »
    3 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Here's the simulation for the first pair ($$$a_1$$$, $$$a_2$$$). Similarly we apply the same six operations to the rest of the pairs . The number of actions is $$$n / 2$$$ pairs * $$$6$$$ operations $$$ = n * 3 $$$.

    Spoiler
»
3 years ago, # |
Rev. 2   Vote: I like it +11 Vote: I do not like it

Thanks for the brilliantly written problem statements and the excellent pretests! /s

»
3 years ago, # |
  Vote: I like it +1 Vote: I do not like it

Can anyone post some resources on how to solve problem D? I didn't get the slightest idea!

»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it

I misread problem A and thought the solution should be O(n). Can someone tell me why my (probably) O(n) solution takes the same time as the O(n^2) ones? 117909301

»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it

I hope your team can provide the model solutions (reference code); it would leave a deeper impression on readers.

»
3 years ago, # |
  Vote: I like it +44 Vote: I do not like it

Understanding problem statement C

»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it
  • »
    »
    3 years ago, # ^ |
      Vote: I like it +3 Vote: I do not like it

    hacked

    • »
      »
      »
      3 years ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      Is there any way to solve problem D without randomization?

      • »
        »
        »
        »
        3 years ago, # ^ |
          Vote: I like it +8 Vote: I do not like it

        Technically yes; 117932090 is an $$$O(n \cdot 2^p)$$$ solution with bitset with the max test taking about 2 seconds. I don't think there's a deterministic solution with a better complexity though.

        • »
          »
          »
          »
          »
          3 years ago, # ^ |
            Vote: I like it 0 Vote: I do not like it

          You can also try $$$O(2^{2p})$$$, since there are at most $$$2p$$$ columns that have at least $$$\frac{n}{2}$$$ ones, but it would probably be a bit too slow to pass. Incomparable complexity though.

        • »
          »
          »
          »
          »
          3 years ago, # ^ |
            Vote: I like it 0 Vote: I do not like it

          Isn't the complexity $$$O(n \cdot 2^\text{usefuls.size()})$$$ and usefuls.size() can go up to $$$2p$$$ ? (unless the if(other.count() >= threshold) is pruning significantly, in which case can you explain why?)

          • »
            »
            »
            »
            »
            »
            3 years ago, # ^ |
              Vote: I like it +3 Vote: I do not like it

            That program works in $$$O(n \cdot masks)$$$, where $$$masks$$$ is the number of masks which are submasks of at least $$$\frac{n}{2}$$$ friends. If $$$masks>2^{p+1}$$$, then the total number of submasks over all friends would have to be greater than $$$\frac{n}{2} \cdot 2^{p+1}=n \cdot 2^p$$$, which is impossible because each friend has at most $$$2^p$$$ submasks.
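
            For anyone unfamiliar with the bound "each friend has at most $$$2^p$$$ submasks": the standard loop below (a textbook snippet, not taken from the linked submission) enumerates every submask of a given mask, and for a mask with $$$p$$$ set bits it visits exactly $$$2^p$$$ values.

                #include <cstdio>

                int main() {
                    int mask = 0b10110; // example mask with 3 set bits -> 2^3 = 8 submasks
                    // Classic submask enumeration: visits every sub with (sub & mask) == sub.
                    for (int sub = mask; ; sub = (sub - 1) & mask) {
                        printf("%d\n", sub);
                        if (sub == 0) break; // 0 is also a submask; stop after printing it
                    }
                    return 0;
                }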

      • »
        »
        »
        »
        3 years ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

        I've got a solution: Submission

        Not sure whether it's correct or not.

        I brute-forced at most ~30 bits. I think my solution checks at most $$$2\cdot 2^{P}$$$ possible combinations, but I can't prove it.

    • »
      »
      »
      3 years ago, # ^ |
        Vote: I like it 0 Vote: I do not like it
      • »
        »
        »
        »
        3 years ago, # ^ |
          Vote: I like it +11 Vote: I do not like it

        Hacked. It would be interesting to see how many solutions from the contest would fail after rejudging on all of the hacks that were created after the contest. But we will probably never know.

        BTW I used a really simple test case: n/2 rows with the first 15 bits set and n/2 rows with the last 15 bits set. It would most probably fail even on just n/2 rows with the first 15 bits set. Strange that such a test case wasn't part of the 120+ system tests.

        • »
          »
          »
          »
          »
          3 years ago, # ^ |
            Vote: I like it 0 Vote: I do not like it

          Yeah, I knew it was going to get hacked. I thought of shuffling the bits myself, but I was eventually mistaken, as the probability of failure is high. I just hoped that it would pass, and it did xD. I am really lucky!

»
3 years ago, # |
Rev. 4   Vote: I like it -8 Vote: I do not like it

[DELETED]

»
3 years ago, # |
  Vote: I like it -9 Vote: I do not like it

Problems A, B, and C have difficulty levels of 800, 1100, and 1600: increments of 300 and 500. Seems fine. Problem D has a difficulty level of 2400: an increment of 800! I know the contest had huge prizes, but please think about Div. 2 as well next time.

  • »
    »
    3 years ago, # ^ |
      Vote: I like it +3 Vote: I do not like it

    The problem ratings are unknown before the contest, and are calculated after the contest based on how many people solved them.

    So we can assume the problemsetters try to make the set as balanced as possible, but sometimes the results are not exactly as expected.

»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it

Can anyone explain why in the second test case of Problem D, we cannot include currencies 2 and 4?

Input 5 5 4 11001 10101 10010 01110 11011

Output 10001

Currencies 2 and 4 are also liked by ceil(5/2) = 3 friends. If we include those currencies (11011), we would have a larger set of 4 currencies than the expected answer of 2 currencies. I am missing something, can anyone point out what it is?

  • »
    »
    3 years ago, # ^ |
    Rev. 2   Vote: I like it +1 Vote: I do not like it

    Only the fifth person likes all the currencies of 11011. So the set 11011 is liked by only one person, and $$$1<3=\left \lceil{5/2}\right \rceil $$$. So 11011 is not a valid set of currencies.

    Everyone in the group needs to like every chosen currency!

    • »
      »
      »
      3 years ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      Thanks for the clarification. I assumed that the ceil(n/2) people can be different for each currency.

»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it

I have a solution for problem A with a time complexity of O(n): for each lamp we store the distance to the closest alive lamp on its left and on its right. After that, we loop through all lamps; if a lamp was originally off, the minimum of the two distances is at most the number of iterations, and the two distances are different (so that the two sides would not turn it on at the same time), then we turn that lamp on.

My solution
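
A minimal sketch of this idea (my own code, not the linked submission; it assumes the rule discussed above: alive cells stay alive, and a dead cell becomes alive when exactly one of its two neighbours is alive):

    #include <bits/stdc++.h>
    using namespace std;

    int main() {
        int t;
        cin >> t;
        while (t--) {
            long long n, m;
            string s;
            cin >> n >> m >> s;
            const long long INF = LLONG_MAX / 2;
            vector<long long> L(n, INF), R(n, INF); // distance to the nearest alive cell on each side
            for (long long i = 0; i < n; i++) {
                if (s[i] == '1') L[i] = 0;
                else if (i > 0 && L[i - 1] < INF) L[i] = L[i - 1] + 1;
            }
            for (long long i = n - 1; i >= 0; i--) {
                if (s[i] == '1') R[i] = 0;
                else if (i + 1 < n && R[i + 1] < INF) R[i] = R[i + 1] + 1;
            }
            for (long long i = 0; i < n; i++)
                // a dead cell comes alive iff the two "waves" reach it at different times
                // and the nearer alive cell is within m steps
                if (s[i] == '0' && L[i] != R[i] && min(L[i], R[i]) <= m) s[i] = '1';
            cout << s << "\n";
        }
        return 0;
    }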

»
3 years ago, # |
  Vote: I like it +29 Vote: I do not like it

Isn't problem D just this? I wrote the exact same solution.

»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it

I'm receiving a strange verdict in Problem C:

wrong answer Last number in the line does not match the one that Vasily had (test case 2)

But all the numbers in my output seem to match the given numbers.

Submission: https://codeforces.com/contest/1523/submission/118022778

  • »
    »
    3 years ago, # ^ |
    Rev. 2   Vote: I like it +3 Vote: I do not like it

    Test 1, Testcase 2:

    Input gives last numbers: 1 1 1 2 2 1 2 1 2

    Your solution gives last numbers: 1 1 1 1 1 2 2 2 2

    Positions 4, 5, 6 and 8 of the list are wrong.

»
3 years ago, # |
Rev. 2   Vote: I like it +18 Vote: I do not like it

TL for problem H is so tight that solutions handling queries online with RMQ constructed in $$$\mathcal O(n)$$$ time can hardly pass :(

»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it

What is the solution for G?

»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it

How to solve G?

  • »
    »
    3 years ago, # ^ |
      Vote: I like it +19 Vote: I do not like it

    Added editorial for task G. Sorry for the delay.

»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it

I coded the approach in editorial for Problem D, https://codeforces.com/contest/1523/submission/118136038 .

I am getting TLE on test case 136 which seems to have been added after the contest, as there were only 127 tests for solutions submitted during the contest.

What optimisations should I make to pass the newly added test case? Thanks.

»
3 years ago, # |
  Vote: I like it +53 Vote: I do not like it

The editorial solution for problem D is pretty short, so I want to list my steps to solve this task (I have also seen them in many other solutions), including explanations of the steps. I will try to emphasize the steps that were problems for me, in the hope of helping people who are intimidated by this problem (as I was).

  • We will make a randomized solution. Since we need to find $$$\lceil \frac{n}{2} \rceil$$$ friends who share a common set of currencies, we can just pick about 20 random friends, look at their favourites and only work with those. Instead of picking 20 random friends, you can also shuffle all friends and then take the first 20 of them. This will fail with probability $$$1:2^{20}$$$, which is very low.

  • Now we have chosen one person; let's call him Carl. Carl likes at most 15 currencies. To work with the masks more easily we will compress them: we ignore every currency that is not liked by Carl. This can be done easily with a vector bitPos which stores the bit positions of all the currencies which Carl likes. We will need it again later to decompress the answer. In the next steps we only consider Carl's currencies and ignore all the other ones. See this example (Example 2 from the problem description):

  • Now we want to create an array A[1<<15]. For a mask of currencies, the value A[mask] shall be the number of people (Carl included) who like exactly the currencies in mask:

  • Now we have A[mask] and want to calculate B[mask]. For a mask of currencies, the value B[mask] tells us the number of people who like at least the currencies in mask; they can like even more currencies. So it is the number of people such that mask is a submask of their favourites. See this example for A and B:

  • How do we calculate B[mask]? It follows that $$$B[mask] = \sum\limits_{supmask \supseteq mask} A[supmask]$$$. There are several algorithms, which are better explained on other sites, like an O(3^n) algorithm here or an O(n*2^n) algorithm with a very thorough explanation here. Just be careful: those sites show how to calculate the sums over submasks, while we need the sum over supermasks! The algorithms can easily be adapted for this, e.g. by inverting all bitmasks, by simply reversing the array A[] (and reversing it again at the end), or by flipping the treatment of the bits, as explained here. This is the most difficult algorithmic part of this bugaboo (at least it was for me), but also the one you will learn the most from. A minimal sketch of this step is given right after this list.

  • Now we have B[mask]. We can check all masks for B[mask] >= roundUp(people/2); the one with the highest number of set bits is our answer for Carl. Then we decompress our mask with bitPos and can return it. Don't forget: we only tested for Carl, so we now need to repeat this for the other people.
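
If the linked explanations of the supermask sum feel abstract, here is a minimal, self-contained sketch of the $$$O(P \cdot 2^P)$$$ superset-sum step (generic illustrative code, not copied from my submission):

    #include <bits/stdc++.h>
    using namespace std;

    // Given A[mask] = number of people who like exactly the currencies in `mask`
    // (already compressed to Carl's P <= 15 currencies), return B[mask] =
    // number of people who like at least the currencies in `mask`.
    vector<int> supersetSums(vector<int> B, int P) {   // B starts as a copy of A
        for (int bit = 0; bit < P; bit++)
            for (int mask = 0; mask < (1 << P); mask++)
                if (!(mask >> bit & 1))
                    B[mask] += B[mask | (1 << bit)];   // add the count of the superset with this bit
        return B;
    }

    int main() {
        // tiny demo with P = 3: one person likes {0,1} (mask 011), one likes {1,2} (mask 110)
        int P = 3;
        vector<int> A(1 << P, 0);
        A[0b011]++; A[0b110]++;
        vector<int> B = supersetSums(A, P);
        printf("%d %d\n", B[0b010], B[0b011]); // prints "2 1": both like currency 1, only one likes {0,1}
        return 0;
    }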

Here is my commented submission: 118139752

I hope it helps. Feel free to ask me questions or correct my mistakes!

  • »
    »
    3 years ago, # ^ |
      Vote: I like it +5 Vote: I do not like it

    Appreciate your detailed explanation! I would like to ask whether it is a good idea to set the number of iterations to exactly $$$ k $$$, let's say $$$ k = 30 $$$, instead of $$$min(k, (int) like.size()) $$$.

    For test case #144 there are only two friends, which means there are two iterations if you take the minimum value. Another similar case is #162 with 4 iterations. I sometimes failed these two cases, only with the verdict "wrong answer The contestant's answer is either too small".

    Test Case #144
    2 41 2
    00000000000000000000000000000000000000011
    00000000000000000000000000000000000000001
    
    Test Case #162
    4 5 5
    11000
    11000
    00111
    00111
    
    • »
      »
      »
      3 years ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      You need to distinguish 2 ways to obtain random people. You can either choose 20 people at random:

          for(int attempt = 0; attempt < 20; attempt++)
          {
              long long tempans = tryPerson(uniform_int_distribution<int>(0, n-1)(rng));
          }
      

      Or you can shuffle first and then chose the 20 first people:

          shuffle(like.begin(), like.end(), rng);
          for(int attempt = 0; attempt < min(20, (int) like.size()); attempt++)
          {
              long long tempans = tryPerson(attempt);
          }
      

      If there are, e.g., only 2 people, then you still need 20 tests in the first solution. With only two checks a failure could happen, e.g. we test person 1 twice. But in the second solution 2 checks will be enough, because we can guarantee that we checked ALL people.

      You can also modify the first solution to mark people you already checked and not repeat them; then 2 checks would be enough there too.

      • »
        »
        »
        »
        3 years ago, # ^ |
          Vote: I like it +5 Vote: I do not like it

        I figured out what I did wrong: I misplaced the shuffle call inside the for-loop. Learned a lot from you. Thank you very much!

  • »
    »
    3 years ago, # ^ |
      Vote: I like it +6 Vote: I do not like it

    Thank you mateee... And definitely the super set part was really tough to get...

  • »
    »
    3 years ago, # ^ |
      Vote: I like it +5 Vote: I do not like it

    Thank you soooo much :)

  • »
    »
    3 years ago, # ^ |
    Rev. 3   Vote: I like it +5 Vote: I do not like it

    I have a question; what is a supermask?

    I know what a submask is :P

    UPD: I think I understood what a supermask is from the solution. Thank you so much for your effort <3

    • »
      »
      »
      3 years ago, # ^ |
      Rev. 2   Vote: I like it +4 Vote: I do not like it

      It seems that you understood already, but just for the record: If $$$m$$$ is a submask of $$$M$$$ then $$$M$$$ is a supermask of $$$m$$$ and vice versa. You can also see this in the example, if we calculate e.g. B[100]=A[111]+A[110]+A[101]+A[100]. B is the sum of all its supermasks in A.

»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it

Am I the only one who needs a proof for 1523C - Compression and Expansion?

»
3 years ago, # |
  Vote: I like it +3 Vote: I do not like it

Solution to H with an extra log in complexity: 118228547. Yes, the pragmas are super important.

Both in precomputation and in answering queries, I have classic states "if we make up to $$$2^e$$$ jumps, how far can we get for each $$$k$$$?". I precompute for each possible $$$e$$$, but when answering a query, I remember the largest number of jumps that's not enough to reach $$$r$$$ and how far we can get for each $$$k$$$ for this specific number of jumps (it's very binsearch-like). Incrementing a state by $$$2^e$$$ jumps is easy in $$$O(k^2 \log)$$$, where the log comes from a Fenwick tree. In total, we precompute $$$O(n \log)$$$ times and for each query, we also update states $$$O(n \log)$$$ times, where these logs come from $$$2^e$$$ jumps for which we precompute. In total, it's $$$O(n k^2 \log^2)$$$.

The key optimisation is choosing the right order of data in memory! Instead of Fenwick trees on arrays of integers for each $$$2^e$$$ and each $$$k$$$, we put whole chunks of $$$31$$$ values (max. distance for each $$$k$$$) into Fenwick trees for each $$$2^e$$$. Complexity is the same since a query on one tree is $$$O(k \log)$$$ now, but the operations done inside the trees are very simple: given an input chunk $$$in$$$ and an output chunk $$$out$$$, for each $$$k$$$ from $$$0$$$ to $$$30$$$, do $$$out_k = \max(out_k, in_k)$$$. This can be vectorised easily. Any overhead is only $$$O((n+q)k^2 \log)$$$. Bam, 6 times faster code!

In fact, if we use chunks of $$$32$$$ shorts, that's 64 bytes, perfectly fitting in cache lines and ~30% faster.
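
For the curious, the merge operation described above is literally this loop (a schematic fragment, not my actual submission); once the 31 values of a chunk are contiguous in memory, the compiler auto-vectorises it easily:

    #include <algorithm>

    // One merge step on the chunks described above: chunk[k] holds the furthest
    // position reachable for a given k, and merging two chunks is an element-wise max.
    const int K = 31;
    void mergeChunk(int *out, const int *in) {
        for (int k = 0; k < K; k++)
            out[k] = std::max(out[k], in[k]); // plain loop, trivially SIMD-friendly
    }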

»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it

For problem A I've coded exactly what the editorial says but I get TLE. Please help me find the problem. Here is my submission: 118242830

  • »
    »
    3 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it
    1. You didn't consider if a[i] is 0 or 1 for the first two if-statements.
    2. You don't need to iterate $$$m$$$ times if $$$m > n$$$, because after $$$n$$$ times under this circumstance, there will be no changes. Hence, you just need $$$m = min(m, n)$$$ iterations.
    3. You don't need to put each character into vector<char> a; a[i] is the same as s[i].
    4. You can use a temp string t copied from s, perform in-place modification on t and swap them after each iteration. Output s at the end.
    5. You can use a bitwise XOR cool trick to check if t[i] should be 0 or 1.
    Spoiler
    • »
      »
      »
      3 years ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      I'm coming from Python, so I thought strings are immutable. Also, yeah, I forgot to check whether the cell was actually dead or not. I made the changes and it's accepted now. Thanks!!!

      • »
        »
        »
        »
        3 years ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

        In C++, you can assume everything's mutable — if you're wrong, the compiler will tell you.

»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it

Can somebody please tell me why my solution for D only barely passes with complexity O(iter*(3^p+np))? Submission: 118593142. It takes 2979 ms. Can you please suggest some improvements to the code to make it run faster? Thanks.

»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it

How should we process the DP states in F? The best I can do is probably 2^n*n*m^2, because of the need to recalculate all F states after each visited quest. How do we avoid this? Also, I've seen some people using a priority queue to determine the correct order of states. How do you use it?

»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it

Can anyone take a look at my code for problem C (Compression and Expansion)? I think I did the same thing as in the editorial but I'm not getting AC.

Solution link: https://codeforces.com/contest/1523/submission/120891492

»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it

G can also be solved in $$$O((n+m) \log^2 n)$$$ time although in practice it barely passes. You can find the first segment with length at least $$$k$$$ which is inside some given segment $$$[l,r]$$$ in $$$O(\log n)$$$ online with $$$O((n+m) \log^2 n)$$$ preprocessing of a fixed set of segments (all the segments given in the input) using fractional cascading, sparse tables, and some clever coordinate transformation tricks. I had to switch to a different technique when $$$k$$$ is small to pass. Code: 121348828 Probably using $$$O(n)$$$ RMQ would be even faster.

»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it

How do we prove in H that minimizing $$$cnt$$$ always has higher priority than jumping further to the right after some $$$t$$$ steps?

»
20 months ago, # |
  Vote: I like it 0 Vote: I do not like it

I didn't understand the second problem properly; can anyone explain it to me, please?