vkgainz's blog

By vkgainz, history, 4 months ago, In English

Comment if you're taking! It will be held on January 22nd, 1 PM CT, on Kattis. Hopefully the online mirror will be up too; my college didn't register (Go Bears -_-), so I'll have to take it there as well.

  • Vote: I like it
  • +84
  • Vote: I do not like it

»
4 months ago, # |
  Vote: I like it 0 Vote: I do not like it

Auto comment: topic has been updated by vkgainz (previous revision, new revision, compare).

»
4 months ago, # |
  Vote: I like it -7 Vote: I do not like it

Will be taking! Our team name is "We are dumb"!

»
4 months ago, # |
  Vote: I like it +42 Vote: I do not like it

Open mirror link: https://naq21-open.kattis.com/standings

I’m curious: are others planning to take the contest under one-PC or three-PC rules?

»
4 months ago, # |
  Vote: I like it 0 Vote: I do not like it

Goberts

»
4 months ago, # |
  Vote: I like it 0 Vote: I do not like it

Does B require finding the minimum distance between a ray and a line segment in 3D? If so, can someone share their code for that?
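Not from any official solution, but here is one possible sketch of that primitive (all names are mine). The distance from a point on the segment to the ray is convex in the segment parameter, so a ternary search combined with a closed-form point-to-ray distance works:

```python
import math

def sub(u, v): return (u[0] - v[0], u[1] - v[1], u[2] - v[2])
def add(u, v): return (u[0] + v[0], u[1] + v[1], u[2] + v[2])
def mul(u, k): return (u[0] * k, u[1] * k, u[2] * k)
def dot(u, v): return u[0] * v[0] + u[1] * v[1] + u[2] * v[2]

def point_ray_dist(x, a, d):
    """Distance from point x to the ray a + s*d with s >= 0."""
    s = max(0.0, dot(sub(x, a), d) / dot(d, d))  # clamp s to the ray
    return math.dist(x, add(a, mul(d, s)))

def ray_segment_dist(a, d, p, q):
    """Min distance between ray a + s*d (s >= 0) and segment pq.

    g(t) = dist(p + t*(q-p), ray) is convex on [0, 1], so ternary
    search on t converges to the minimum.
    """
    lo, hi = 0.0, 1.0
    for _ in range(200):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if point_ray_dist(add(p, mul(sub(q, p), m1)), a, d) < \
           point_ray_dist(add(p, mul(sub(q, p), m2)), a, d):
            hi = m2
        else:
            lo = m1
    return point_ray_dist(add(p, mul(sub(q, p), lo)), a, d)
```

For contest use, a fully closed-form clamped solution (as in segment–segment distance) avoids the iteration, but the ternary search is harder to get wrong.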

»
4 months ago, # |
  Vote: I like it +11 Vote: I do not like it

Here are the aggregated standings.

»
4 months ago, # |
  Vote: I like it 0 Vote: I do not like it

Problem F: I tried to come up with a DP but couldn't. In the end I was thinking of something related to suffix automata, but I don't think that's the way.

Do I need to know any special trick or algorithm to solve it? Thanks in advance for the help.

  • »
    »
    4 months ago, # ^ |
    Rev. 2   Vote: I like it +13 Vote: I do not like it
    nope :0
»
4 months ago, # |
  Vote: I like it +3 Vote: I do not like it

Any hint on problem F?

  • »
    »
    4 months ago, # ^ |
      Vote: I like it +31 Vote: I do not like it

    For a tuple (A,B,C,D,E), we first compute pre_v[i][c] and suf_v[i][c]: the number of occurrences of vowel c in prefix/suffix i. Then we use these to compute pre[i][x][y] and suf[i][x][y]: the number of pairs (vowel x, consonant y) in prefix/suffix i.

    Then, to compute the answer, we apply a bit of inclusion-exclusion. Our answer is #(A!=C,C!=E)-#(A!=C,A==E)+#(A!=C,A==E,B==D)-#(A!=C,C!=E,B==D). Each part can be computed with two nested loops over vowels and consonants, summing up with pre and suf.

    The total time complexity is $$$O(n \cdot \#\text{vowels} \cdot \#\text{consonants})$$$.

    • »
      »
      »
      4 months ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      Thank you! I found that adding another loop over vowels still fits in the time limit, and it makes the math much easier.

  • »
    »
    4 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Lol I overkilled problem F with a bitmask dp :D

»
4 months ago, # |
  Vote: I like it +8 Vote: I do not like it

For problem D, why doesn't this work?

I numbered the variables from 1 to 100 and built a matrix with one row per equation. In a row, the jth column contains the power to which variable j must be raised so that the product of all terms equals 1. For example, the equation $$$b^2c=1/(a^2b)$$$, i.e. $$$a^2b^3c=1$$$, translates to the row $$$[2,3,1,0,0,0,...]$$$. I then found the RREF of this $$$n \times 100$$$ matrix and checked whether it had a row with only one nonzero element.
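In case floating-point error in the elimination is the culprit, here is a sketch of the RREF step in exact rational arithmetic with `fractions.Fraction` (the matrix layout follows the description above; function and variable names are mine):

```python
from fractions import Fraction

def rref(mat):
    """Reduce a matrix of Fractions to reduced row echelon form in place."""
    rows, cols = len(mat), len(mat[0])
    r = 0
    for c in range(cols):
        # find a pivot in column c at or below row r
        piv = next((i for i in range(r, rows) if mat[i][c] != 0), None)
        if piv is None:
            continue
        mat[r], mat[piv] = mat[piv], mat[r]
        inv = Fraction(1) / mat[r][c]
        mat[r] = [x * inv for x in mat[r]]
        # clear column c in every other row
        for i in range(rows):
            if i != r and mat[i][c] != 0:
                f = mat[i][c]
                mat[i] = [a - f * b for a, b in zip(mat[i], mat[r])]
        r += 1
        if r == rows:
            break
    return mat

# Example: rows are exponent vectors, e.g. a^2 b^3 c = 1 -> [2, 3, 1]
m = [[Fraction(x) for x in row] for row in [[2, 3, 1], [0, 1, 1]]]
rref(m)
# A row with exactly one nonzero entry would pin down one variable.
solo = [row for row in m if sum(x != 0 for x in row) == 1]
```

With exact fractions ruled out, the remaining suspects would be the model itself (e.g. whether a single-nonzero row is the right acceptance condition for the problem) rather than numerics.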

»
4 months ago, # |
  Vote: I like it +8 Vote: I do not like it

Will the tests be publicly available?

  • »
    »
    4 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Judge data and solution slides will be posted in a day or so.

»
4 months ago, # |
  Vote: I like it -25 Vote: I do not like it

How to change my CF handle?

»
4 months ago, # |
  Vote: I like it 0 Vote: I do not like it

Does this sound about right for L?

Basically, you can do a sqrt trick and separate the graph into light nodes (degree $$$\leq \sqrt{n}$$$) and heavy nodes (all the others). First, for each heavy node you do a BFS to mark its distance to all other nodes; this runs in $$$O(n \sqrt{n})$$$. Then, for each node $$$u$$$ and each heavy node $$$v$$$, you compute how many neighbors of $$$u$$$ are at distance 2 from $$$v$$$, and store these in a table (I think 1GB of memory is enough).

Now, for each light node, we can use the following strategy: keep track of the "contribution" of each of its neighbors, where the "contribution" is how many nodes are adjacent to that neighbor but not to the node itself (we can use a set for the queries). We can then use simple counting to find swords centered on the node.

We do essentially the same thing for heavy nodes, but we use our precomputed table to find the contributions instead of doing it naively.

Total runtime: $$$O(n \sqrt{n} + n \log n)$$$.

  • »
    »
    4 months ago, # ^ |
    Rev. 3   Vote: I like it +5 Vote: I do not like it

    I just used bitsets to solve it in $$$O(nm/64)$$$.

    I fixed the middle edge, then used the .count() method to count neighbours and common neighbours, and finished with some permutations and combinations.

  • »
    »
    4 months ago, # ^ |
      Vote: I like it +33 Vote: I do not like it

    Our solution: if you enumerate the edge (B,D) and compute the answer, the only part you need to subtract is when A is the same as C, E, or F. In that case, it forms a cycle of 3 nodes. We can enumerate all such cycles in $$$O(n^{1.5})$$$, then subtract their contribution to get the answer.

    • »
      »
      »
      4 months ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      Somehow my teammate’s O(nm) heuristic passed, so I’m suspecting test data wasn’t particularly strong?

    • »
      »
      »
      4 months ago, # ^ |
        Vote: I like it +1 Vote: I do not like it

      Is "enumerate all cycles in O(n^1.5)" standard? How do you do it?

      • »
        »
        »
        »
        4 months ago, # ^ |
          Vote: I like it 0 Vote: I do not like it
      • »
        »
        »
        »
        4 months ago, # ^ |
          Vote: I like it +43 Vote: I do not like it

        It probably should be $$$\mathcal{O}(m^{1.5})$$$, as a complete graph has $$$\mathcal{O}(n^3)$$$ cycles.

        Enumerating all cycles in $$$\mathcal{O}(m \sqrt{m} \log n)$$$ is easy. Call a vertex heavy if it has at least $$$\sqrt{m}$$$ neighbours, and light otherwise.

        First, loop over light vertices and pairs of their neighbours, and check if those two neighbours are connected. This takes $$$\mathcal{O}(\sum_{i \in L} deg(i)^2 \log n)$$$ time where $$$L$$$ is the set of light vertices. Since $$$deg(i) \leq \sqrt{m}$$$ for $$$i \in L$$$ and $$$\sum deg(i) \leq 2m$$$, this is $$$\mathcal{O}(m \sqrt{m} \log n)$$$.

        Now, we have enumerated over all cycles except those with three heavy vertices. Loop over heavy vertices, and for every heavy vertex, loop over all pairs of its heavy neighbours, and check if they are connected. Since there are at most $$$m / \sqrt{m} = \sqrt{m}$$$ heavy vertices, this takes at most $$$\mathcal{O}(\sqrt{m}^3 \log n) = \mathcal{O}(m \sqrt{m} \log n)$$$ time.

        • »
          »
          »
          »
          »
          4 months ago, # ^ |
          Rev. 2   Vote: I like it +46 Vote: I do not like it

          This can be done in an even cleaner way by simply doing the following (for now, assume the degrees of all vertices are distinct): for every vertex $$$x$$$, let $$$L$$$ be the list of neighbours of $$$x$$$ with degree > $$$deg[x]$$$. Simply iterate over all pairs $$$(p_1, p_2) \in L, p_1 \neq p_2$$$, and check if $$$(x, p_1, p_2)$$$ form a triangle.

          The proof is the same as you described: consider two cases, vertices with degree $$$< \sqrt{m}$$$ and vertices with degree $$$\geq \sqrt{m}$$$. To handle vertices with equal degrees, I just added the vertex index as a tiebreaker.
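A minimal sketch of this degree-ordering idea (function and variable names are mine): orient every edge toward the endpoint with the larger (degree, index) pair; each triangle is then counted exactly once, at its smallest vertex in that order.

```python
def count_triangles(n, edges):
    """Count triangles in O(m * sqrt(m)) by orienting each edge toward
    the endpoint with the larger (degree, index) pair."""
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    key = lambda v: (deg[v], v)  # vertex index breaks degree ties
    out = [[] for _ in range(n)]
    for u, v in edges:
        if key(u) < key(v):
            out[u].append(v)
        else:
            out[v].append(u)
    edge_set = {(min(u, v), max(u, v)) for u, v in edges}
    cnt = 0
    for x in range(n):
        nbrs = out[x]  # the list L from the comment above
        for i in range(len(nbrs)):
            for j in range(i + 1, len(nbrs)):
                a, b = nbrs[i], nbrs[j]
                if (min(a, b), max(a, b)) in edge_set:
                    cnt += 1
    return cnt
```

Every out-list has length $$$O(\sqrt{m})$$$, which is where the $$$O(m \sqrt{m})$$$ bound comes from.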

          • »
            »
            »
            »
            »
            »
            4 months ago, # ^ |
              Vote: I like it +20 Vote: I do not like it

            That’s quite cool, thanks!

            • »
              »
              »
              »
              »
              »
              »
              4 months ago, # ^ |
                Vote: I like it +33 Vote: I do not like it

              Another version.

              For each vertex $$$u$$$ we're going to count all triangles involving $$$u$$$. We choose the faster of two algorithms:

              1. Iterate all pairs of neighbors of $$$u$$$ and look up in a hash map whether they form an edge. Runtime: $$$(\deg u)^2$$$
              2. Iterate all $$$m$$$ edges and check whether both nodes are neighbors of $$$u$$$. Runtime: $$$m$$$

              We can see that the cutoff between the two algorithms is at $$$\deg u = \sqrt{m}$$$ (use the first when $$$\deg u < \sqrt{m}$$$), and because of this, the ratio $$$\displaystyle \frac{\text{work done}}{\deg u} \leq \sqrt{m}$$$, so the total work is $$$O(m \sqrt{m})$$$.

              • »
                »
                »
                »
                »
                »
                »
                »
                4 months ago, # ^ |
                  Vote: I like it +30 Vote: I do not like it

                Here's a nice idea to remove the hash map and vastly improve the constant factor (hash maps have a terrible constant factor).

                Let's queue up the queries for "is $$$(a, b)$$$ an edge?" instead of answering them right away. Since these queries are $$$(a, b)$$$ pairs, if we have $$$k$$$ queries we can 2-pass radix sort them in $$$O(n + k)$$$. Then we can two-pointer sweep with the real edge list and answer all the queries in $$$O(m + k)$$$.

                Now, since we would rather not use $$$m^{1.5}$$$ memory, we can queue up the queries and process them in batches whenever we accumulate more than $$$m$$$ of them.
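A small sketch of the batched-query idea (names are mine; a comparison sort stands in for the 2-pass radix sort, adding a log factor that the radix version avoids):

```python
def answer_edge_queries(edges, queries):
    """For each (a, b) query, report whether {a, b} is an edge,
    without a hash map: sort both lists of normalized pairs and
    sweep them together with two pointers."""
    es = sorted((min(a, b), max(a, b)) for a, b in edges)
    # process queries in sorted order but answer in input order
    order = sorted(range(len(queries)),
                   key=lambda i: (min(queries[i]), max(queries[i])))
    ans = [False] * len(queries)
    j = 0
    for i in order:
        q = (min(queries[i]), max(queries[i]))
        while j < len(es) and es[j] < q:
            j += 1
        ans[i] = j < len(es) and es[j] == q
    return ans
```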

                • »
                  »
                  »
                  »
                  »
                  »
                  »
                  »
                  »
                  3 months ago, # ^ |
                    Vote: I like it 0 Vote: I do not like it

                  Oh that’s a really cool idea too. Thanks!