Alex_2oo8's blog

By Alex_2oo8, history, 5 years ago, In English

Greetings Codeforces Community!

Did you enjoy our servings in CodeChef's September Lunchtime? Hopefully, you achieved an improved rating and a bunch of mouthwatering ACs.

Now, CodeChef's October Long Challenge is just around the corner. We urge everyone from our community to take part in this contest, running from the 4th of October till the 14th of October. It will be an occasion to put your exceptional programming skills to the test.

The problem statements will be available in English, Hindi, Bengali, Russian, Mandarin and Vietnamese.

Do join your fellow programmers and actively participate in this thrilling contest! Accompanying me on the problem setting panel are:

Contest Details:

  • Start Date & Time: 4th October 2019 (1500 hrs) to 14th October 2019 (1500 hrs), Indian Standard Time (GMT+5:30). Check your timezone.

  • Contest link: http://bit.ly/2o7Pdty

  • Registration: You just need a CodeChef handle to participate. If you are interested but do not have a CodeChef handle, please register in order to participate.

  • Prizes: The top 20 performers in the Indian category and the top 10 performers in the Global category will get CodeChef laddus, with which the winners can claim cool CodeChef goodies. The first to solve each problem (except the challenge problem) gets 100 laddus. Know more here: https://goodies.codechef.com/

(For those who have not yet received their previous winnings, please send an email to [email protected])

Good Luck! Hope to see you participating! Happy Programming!


»
5 years ago, # | +22

Tbh I didn't really enjoy the last Long Challenge.

I think it was more advanced math than algorithms and data structures.

I sincerely hope this time the questions won't lean more towards math than algorithmic thinking.

No offense please, this is just my opinion.

  • »
    »
    5 years ago, # ^ | +30

    I became familiar with hard math more through long contests than through short ones. Long contests are good for the kind of problems that require studying tough theory, and tbh data structure problems are often just annoying to implement.

»
5 years ago, # | 0

Rating prediction for div1 is here and for div2 is here.

»
5 years ago, # | +4

How to solve BACREP, anyone?

»
5 years ago, # | 0

How to solve CNNCT2?

  • »
    »
    5 years ago, # ^ | +12

    Matroid intersection: an independent set in the first matroid is a set of edges such that even after removing them the graph $$$G_1$$$ remains connected, and similarly for the graph $$$G_2$$$.

    Some additional implementation details involve bridge trees and so on.
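
    For concreteness, a minimal sketch of the independence oracle such a solution needs (the name `is_independent` is my own; the matroid intersection machinery itself and the bridge-tree details are not shown):

    ```python
    from collections import defaultdict, deque

    # A set S of edge indices is independent in this matroid iff the graph
    # stays connected after deleting the edges in S.
    def is_independent(n, edges, removed):
        """n vertices (0-indexed), edges: list of (u, v) pairs,
        removed: set of edge indices to delete."""
        adj = defaultdict(list)
        for idx, (u, v) in enumerate(edges):
            if idx not in removed:
                adj[u].append(v)
                adj[v].append(u)
        seen, queue = {0}, deque([0])
        while queue:                       # plain BFS from vertex 0
            u = queue.popleft()
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        return len(seen) == n              # still connected?
    ```

    Each oracle call is $$$O(V + E)$$$ with plain BFS; presumably the bridge trees mentioned above help answer these connectivity queries faster.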

  • »
    »
    5 years ago, # ^ | Rev. 3 | +37

    I went the "linear algebra" way, in the spirit of Kirchhoff's matrix tree theorem.

    The problem is equivalent to finding the largest subset of indices such that the edges with these indices form forests in both graphs (i.e. no cycles). If the size of this subset is $$$d$$$, then the answer to the original problem is $$$2(n-1) - d$$$. Let $$$A$$$ be the oriented ($$$1$$$ and $$$-1$$$) vertex-edge incidence matrix of $$$G_1$$$, and $$$B$$$ the one of $$$G_2$$$. The edges with indices $$$I$$$ form a forest in $$$G_1$$$ iff the columns of $$$A$$$ with indices $$$I$$$ are linearly independent, and the same goes for $$$G_2$$$ and $$$B$$$.

    For any subset of indices $$$I$$$, if either $$$A_I$$$ or $$$B_I$$$ is linearly dependent, then the principal minor $$$M_I = \det(A_I^TB_I)$$$ of $$$A^TB$$$ is guaranteed to be zero (because the rank of $$$A_I$$$ or $$$B_I$$$ is strictly less than $$$|I|$$$).

    The converse is not always true: for linearly independent $$$A_I$$$, $$$B_I$$$ we have $$$\det(A_I^TB_I) = 0$$$ iff the span of the columns of $$$B_I$$$ ($$$\dim = |I|$$$) has nonzero intersection with the null space of $$$A_I^T$$$ ($$$\dim = |V|-|I|$$$).

    Let's apply a random non-degenerate linear transformation $$$F$$$ to the columns of $$$B$$$. After such a transformation, for linearly independent $$$A_I$$$, $$$B_I$$$ the probability of a nonzero intersection (i.e. of $$$\det(A_I^TFB_I) = 0$$$) will be sufficiently small.

    The sum of the principal minors of order $$$d$$$ is (up to a sign) equal to the coefficient at $$$\lambda^{|E|-d}$$$ of the characteristic polynomial of $$$A^TFB$$$. If the coefficient at $$$\lambda^{|E|-d}$$$ is nonzero, then at least one of the minors of order $$$d$$$ is nonzero, i.e. there is a set of indices $$$I$$$ such that the edges with these indices form forests in both graphs.

    However, it can happen that some minors are nonzero but their sum still cancels out. To fix this we can replace $$$1$$$ and $$$-1$$$ in the incidence matrix $$$A$$$ by $$$x_i$$$ and $$$-x_i$$$. Now all coefficients of the characteristic polynomial become polynomials themselves (in $$$x_1, \ldots, x_{|E|}$$$), and we can compare those polynomials to $$$0$$$ by substituting random values for the $$$x_i$$$.

    The algorithm:

    • We are given the two oriented vertex-edge incidence matrices $$$A$$$ and $$$B$$$ ($$$|V|\times|E|$$$).

    • Generate a random non-degenerate diagonal $$$|E|\times|E|$$$ matrix $$$X$$$ with entries modulo $$$P$$$.

    • Generate a random non-degenerate $$$|V|\times|V|$$$ matrix $$$F$$$ with entries modulo $$$P$$$ (with high probability any random matrix is non-degenerate).

    • Find the characteristic polynomial $$$\chi(\lambda)$$$ of the $$$|E|\times|E|$$$ matrix $$$XA^TFB$$$. This can be done in $$$O(|E|^3)$$$: reduce the matrix to Hessenberg form, then use Laplace's expansion to get a recurrence for the polynomial.

    • With good probability, the largest $$$d$$$ such that the coefficient of $$$\chi(\lambda)$$$ at $$$\lambda^{|E|-d}$$$ is nonzero is exactly the size of the largest subset of indices forming forests in both graphs.
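
    For concreteness, a compact Python sketch of this pipeline, under stated assumptions: the prime $$$P = 998244353$$$ and the function names are my own choices, and the matrix products are written naively:

    ```python
    import random

    P = 998244353  # hypothetical choice of a large prime modulus

    def charpoly_mod(M, p=P):
        """Characteristic polynomial det(lambda*I - M) mod p in O(k^3).
        Returns coefficients c[0..k] with c[k] = 1."""
        k = len(M)
        M = [[v % p for v in row] for row in M]
        # Step 1: similarity-reduce M to upper Hessenberg form.
        for j in range(k - 2):
            piv = next((i for i in range(j + 1, k) if M[i][j]), None)
            if piv is None:
                continue
            M[j + 1], M[piv] = M[piv], M[j + 1]              # row swap ...
            for row in M:                                    # ... plus the
                row[j + 1], row[piv] = row[piv], row[j + 1]  # column swap
            inv = pow(M[j + 1][j], p - 2, p)
            for i in range(j + 2, k):
                f = M[i][j] * inv % p
                for c in range(k):                   # row operation
                    M[i][c] = (M[i][c] - f * M[j + 1][c]) % p
                for r in range(k):                   # inverse column op
                    M[r][j + 1] = (M[r][j + 1] + f * M[r][i]) % p
        # Step 2: Laplace expansion along the last column gives a
        # recurrence for the char polys p_m of the leading m x m blocks.
        polys = [[1]]                                # p_0 = 1
        for m in range(1, k + 1):
            pm = [0] + polys[m - 1]                  # lambda * p_{m-1}
            for c in range(m):
                pm[c] = (pm[c] - M[m - 1][m - 1] * polys[m - 1][c]) % p
            prod = 1                                 # subdiagonal product
            for i in range(m - 1, 0, -1):
                prod = prod * M[i][i - 1] % p
                coef = M[i - 1][m - 1] * prod % p
                for c in range(i):
                    pm[c] = (pm[c] - coef * polys[i - 1][c]) % p
            polys.append(pm)
        return polys[k]

    def max_common_forest(n, edges1, edges2):
        """Largest d such that some d edge indices form forests in both
        graphs (with high probability); the answer is then 2*(n-1) - d."""
        e = len(edges1)
        def incidence(edges):
            A = [[0] * e for _ in range(n)]
            for i, (u, v) in enumerate(edges):
                A[u][i], A[v][i] = 1, P - 1          # oriented +-1 mod P
            return A
        A, B = incidence(edges1), incidence(edges2)
        x = [random.randrange(1, P) for _ in range(e)]   # diagonal of X
        F = [[random.randrange(P) for _ in range(n)] for _ in range(n)]
        AtF = [[sum(A[r][i] * F[r][c] for r in range(n)) % P
                for c in range(n)] for i in range(e)]
        M = [[x[i] * sum(AtF[i][r] * B[r][j] for r in range(n)) % P
              for j in range(e)] for i in range(e)]
        chi = charpoly_mod(M)
        return max(k for k in range(e + 1) if chi[e - k])
    ```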

»
5 years ago, # | -18

In problem Bacterial Reproduction, I got a different verdict for the exact same solution when I resubmitted it. Can anyone explain how this happened, and can Alex_2oo8 look into it? The only test case that failed due to TLE in one submission got accepted when I resubmitted the same code. Links to solutions: Submission 1, Submission 2

»
5 years ago, # | 0

How to solve JIIT?

  • »
    »
    5 years ago, # ^ | Rev. 3 | +26

    First of all, let's simplify the problem to a one-dimensional analogue.

    Let's look at the operation in a slightly different way: instead of choosing a cell $$$(i, j)$$$ and adding ones to each number in the $$$i$$$-th row and the $$$j$$$-th column, we independently choose a row $$$i$$$ and a column $$$j$$$ and add the ones. It's easy to see that if we added ones an odd number of times to $$$x$$$ rows and did the same to $$$y$$$ columns, the resulting matrix will have $$$x \cdot (m - y) + y \cdot (n - x)$$$ odd numbers.

    Great, now we have a simpler problem: we are given an array of zeros, and at each step we choose an element and xor it with $$$1$$$. We make $$$Q$$$ steps. We want to find the number of ways to get an array with $$$x$$$ zeros, for each $$$0 \leq x \leq n$$$.

    This looks like a simple dynamic programming problem. We can solve it in $$$\mathcal{O}(n \cdot Q)$$$, but that's too slow. So let's use a standard trick: rewrite the transitions as a matrix and raise it to the $$$Q$$$-th power with fast exponentiation. The complexity becomes $$$\mathcal{O}(n^3 \cdot \log Q)$$$, which is also too slow.
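
    For reference, a minimal sketch of that straightforward $$$\mathcal{O}(n \cdot Q)$$$ DP, useful as a brute-force check (the modulus and the name `count_ways` are my own; states are indexed by the number of ones, so an array with $$$x$$$ zeros is `ways[n - x]`):

    ```python
    MOD = 10**9 + 7  # hypothetical modulus

    def count_ways(n, q):
        """ways[x] = number of length-q flip sequences on n zeros that
        end with exactly x elements equal to one."""
        ways = [0] * (n + 1)
        ways[0] = 1
        for _ in range(q):
            nxt = [0] * (n + 1)
            for x in range(n + 1):
                if not ways[x]:
                    continue
                if x < n:   # flip one of the n-x zeros to a one
                    nxt[x + 1] = (nxt[x + 1] + ways[x] * (n - x)) % MOD
                if x > 0:   # flip one of the x ones back to a zero
                    nxt[x - 1] = (nxt[x - 1] + ways[x] * x) % MOD
            ways = nxt
        return ways
    ```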

    To speed up this algorithm, let's explore the properties of the transition matrix (call it $$$M$$$). The transition matrix looks like $$$\begin{bmatrix} 0 & 1 & 0 & 0 & \cdots \newline n & 0 & 2 & 0 & \cdots \newline 0 & n-1 & 0 & 3 & \cdots \newline \vdots & & \ddots & \ddots \newline 0 & \cdots & \cdots & 1 & 0 \end{bmatrix}$$$

    And this matrix is great, because its eigenvalues are $$$\{-n, -(n-2), \cdots, n-2, n\}$$$. So we know that $$$M = P \cdot J \cdot P^{-1}$$$ (its Jordan form), where $$$J$$$ is defined as $$$\begin{bmatrix} -n & 0 & 0 & 0 & \cdots \newline 0 & -(n-2) & 0 & 0 & \cdots \newline 0 & 0 & 0 & \ddots & \cdots \newline \vdots & & \ddots & n-2 & 0 \newline 0 & \cdots & \cdots & 0 & n \end{bmatrix}$$$ and $$$P$$$ is the change-of-basis matrix whose columns are the corresponding eigenvectors. From the equation $$$M \cdot P = P \cdot J$$$ we can compute the matrix $$$P$$$ efficiently. We use the fact that the last row of $$$P$$$ contains only $$$1$$$s, and the second-to-last row of $$$P$$$ contains the eigenvalues in increasing order. To compute the value in a cell $$$(i, j)$$$ of the matrix $$$P$$$, let's compare the values of the cells $$$(i+1, j)$$$ of the matrices $$$M \cdot P$$$ and $$$P \cdot J$$$ (they must be equal). After some pen-and-paper work we get a formula that depends on the values $$$P_{i+1, j}$$$, $$$P_{i+2, j}$$$ and some known values in the matrices $$$M$$$ and $$$J$$$. It can also be proven that $$$P^{-1} = P \cdot \frac{1}{2^n}$$$.

    After all these steps, raising $$$P \cdot J \cdot P^{-1}$$$ to the $$$Q$$$-th power is simple: just power the diagonal matrix $$$J$$$. So the total complexity is $$$\mathcal{O}(n^2)$$$.

    How to come up with this solution? Well, reducing it to a one-dimensional problem and solving that by fast exponentiation of a transition matrix is rather standard and obvious. The next step (looking at the eigenvalues) is just a guess: the matrix $$$M$$$ looks simple, so it should have a pretty Jordan form. And it worked! I checked it with SymPy and found out that it looks great. The last remaining part (calculating the matrix $$$P$$$) was inspired by the fact that the matrices $$$M$$$ and $$$J$$$ have a lot of zero entries, so the value in a cell shouldn't depend on many other values in these matrices.
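
    For what it's worth, a quick SymPy experiment of that kind for a small $$$n$$$: it confirms the claimed spectrum and, with the normalization above (last row of $$$P$$$ all ones, eigenvalues in increasing order), also the identity $$$P^{-1} = P \cdot \frac{1}{2^n}$$$:

    ```python
    import sympy as sp

    n = 5
    M = sp.zeros(n + 1, n + 1)
    for x in range(n + 1):           # state = number of ones in the array
        if x > 0:
            M[x, x - 1] = n - x + 1  # a zero was flipped to a one
        if x < n:
            M[x, x + 1] = x + 1      # a one was flipped back to a zero

    print(sorted(M.eigenvals()))     # [-5, -3, -1, 1, 3, 5]

    # Columns of P: eigenvectors scaled so the last row is all ones,
    # ordered by increasing eigenvalue.
    cols = [vecs[0] / vecs[0][-1]
            for lam, _, vecs in sorted(M.eigenvects(), key=lambda t: t[0])]
    P = sp.Matrix.hstack(*cols)
    print(P * P == 2**n * sp.eye(n + 1))  # True, i.e. P^{-1} = P / 2^n
    ```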

    • »
      »
      »
      5 years ago, # ^ | Rev. 3 | 0

      I just want to say that if we have already found all the eigenvalues of a tridiagonal matrix and they all happen to be different, then we can always calculate $$$P$$$ and $$$P^{-1}$$$ efficiently:

      • Solve the $$$n$$$ equations $$$(M - \lambda_iI)v_i = 0$$$ to find the right eigenvectors $$$v_i$$$, i.e. the columns of $$$P$$$.

      • The equations are tridiagonal, so each solve costs $$$O(n\log MOD)$$$ (the extra $$$\log MOD$$$ comes from having to invert values).

      • To find $$$P^{-1}$$$, solve the $$$n$$$ equations $$$u_i^T(M - \lambda_iI) = 0$$$.

      • Since all eigenvalues are different, $$$u_i$$$ and $$$v_i$$$ are unique up to proportionality, so the rows of $$$P^{-1}$$$ equal $$$\alpha_iu_i^T$$$ for some values of $$$\alpha_i$$$.

      • To fix the scaling between $$$u_i$$$ and $$$v_i$$$, find the normalizing coefficients $$$\alpha_i$$$ from $$$\alpha_iu_i^Tv_i = 1$$$.
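
      A minimal sketch of one such solve (a hypothetical helper; it assumes the superdiagonal is nonzero, $$$n \ge 2$$$, and a prime modulus $$$p$$$); the left eigenvectors $$$u_i$$$ are found the same way on the transposed matrix:

      ```python
      # (M - lam*I) v = 0 for a tridiagonal M: fix v[0] = 1, then row i
      # determines v[i+1]; the last row holds automatically when lam is an
      # exact eigenvalue.  a[i] = M[i][i-1], b[i] = M[i][i], c[i] = M[i][i+1].
      def tridiag_eigenvector(a, b, c, lam, p):
          n = len(b)
          v = [1] + [0] * (n - 1)
          # row 0: (b[0] - lam) * v[0] + c[0] * v[1] = 0
          v[1] = -(b[0] - lam) * pow(c[0], p - 2, p) % p
          for i in range(1, n - 1):
              # row i: a[i]*v[i-1] + (b[i] - lam)*v[i] + c[i]*v[i+1] = 0
              v[i + 1] = -(a[i] * v[i - 1] + (b[i] - lam) * v[i]) \
                         * pow(c[i], p - 2, p) % p
          return v  # each solve: O(n log p) due to the modular inverses
      ```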