### Igor_Parfenov's blog

By Igor_Parfenov, history, 14 months ago,

1823A - A-characteristic

Editorial
Solution C++
Solution Python

1823B - Sort with Step

Editorial
Solution C++
Solution Python

1823C - Strongly Composite

Editorial
Solution C++
Solution Python
Notes

1823D - Unique Palindromes

Editorial
Solution C++
Solution Python
Notes

1823E - Removing Graph

Editorial
Solution C++
Solution Python

1823F - Random Walk

Editorial
Solution C++
Solution Python
• +106

 » 14 months ago, # |   +9 Editorial comes out so fast!
•  » » 14 months ago, # ^ |   +14 Comment comes out so fast!
 » 14 months ago, # |   +42 The implementation of F is wrong, you forgot to reduce modulo 998244353. It's causing unexpected verdict in hacks.
•  » » 14 months ago, # ^ |   0 I noticed that some hacks got the "unexpected verdict" result; is that the reason?
•  » » 14 months ago, # ^ |   +24 Yes, I apologize
 » 14 months ago, # |   0 Problem B was cool.
•  » » 14 months ago, # ^ |   0 Yeah BC were easy but nice
•  » » 11 months ago, # ^ |   0 Can you explain the logic behind the editorial?
 » 14 months ago, # |   +6 Are you using palindromic tree for checker of problem D?
•  » » 14 months ago, # ^ |   +20 Yes
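For readers curious how a checker like this can count distinct palindromic substrings in linear time, here is a minimal palindromic tree (Eertree) sketch in Python. This is an illustrative implementation written for this thread, not the author's actual checker; all class and function names are invented.

```python
class Eertree:
    """Palindromic tree: one node per distinct palindromic substring."""

    def __init__(self):
        # node 0: imaginary root (length -1), node 1: empty-string root (length 0)
        self.len = [-1, 0]
        self.link = [0, 0]          # suffix links; both roots point to the imaginary root
        self.to = [dict(), dict()]  # transitions: to[v][c] = node for c + palin(v) + c
        self.s = []
        self.last = 1               # node of the longest palindromic suffix so far

    def _get_link(self, v, i):
        # walk suffix links until the character before the suffix palindrome matches s[i]
        while i - self.len[v] - 1 < 0 or self.s[i - self.len[v] - 1] != self.s[i]:
            v = self.link[v]
        return v

    def add(self, c):
        """Append character c; return True iff a new distinct palindrome appeared."""
        self.s.append(c)
        i = len(self.s) - 1
        v = self._get_link(self.last, i)
        if c in self.to[v]:
            self.last = self.to[v][c]
            return False
        # create a node for the new palindrome c + palin(v) + c
        self.len.append(self.len[v] + 2)
        self.to.append(dict())
        if self.len[-1] == 1:
            self.link.append(1)     # single characters link to the empty root
        else:
            u = self._get_link(self.link[v], i)
            self.link.append(self.to[u][c])
        self.to[v][c] = len(self.len) - 1
        self.last = len(self.len) - 1
        return True


def count_distinct_palindromes(s):
    t = Eertree()
    return sum(t.add(ch) for ch in s)
```

For example, `count_distinct_palindromes("abaab")` is 5 (a, b, aba, aa, baab). Each `add` creates at most one node, which is why a checker built on this runs in linear time.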
 » 14 months ago, # |   +3 Okay, I don't think I could have realistically proved that $chain[x] = \left\lfloor \frac{x}{\ell} \right\rfloor$ within the contest time limit, but... I think I could definitely have observed the pattern if I had just bothered to generate the first 100 Sprague-Grundy values or so and eyeballed them. Everything else was in place except this single component of the solution >_> I hope I can learn to do this for future Sprague-Grundy problems now.

Anyway, I really enjoyed these problems a lot! I even had fun thinking about F briefly, until I was assured that it's definitely beyond my capabilities. These are nice combinations of standard tools that still require additional creative effort to apply correctly.

Also, I especially appreciate that the implementations are very smooth once the correct idea is understood. For example, the problemsetter could've easily done the jerk move of requiring the program to print the swap sequence in B or the elements of the maximum-length array in C, which would not have really made the problems harder to figure out, but would have been unnecessarily more tedious.
•  » » 14 months ago, # ^ |   +9 Very similar problem here
 » 14 months ago, # |   +6 I don't know about Sprague-Grundy theory. Is there anyone to teach me?
•  » » 14 months ago, # ^ |   +27
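Since comments above mention learning Sprague-Grundy theory by generating values and eyeballing them, here is a minimal sketch of that workflow in Python. It brute-forces Grundy values for a plain subtraction game (a single pile of $n$ tokens; one move removes between $l$ and $r$ tokens). This is an illustration of the technique, not problem E itself; staring at the printed table reveals the classical closed form $G(n) = \lfloor (n \bmod (l+r)) / l \rfloor$.

```python
def mex(values):
    """Smallest non-negative integer not present in `values`."""
    m = 0
    s = set(values)
    while m in s:
        m += 1
    return m

def grundy_table(n_max, l, r):
    """Grundy values of the subtraction game: remove k tokens, l <= k <= r."""
    g = [0] * (n_max + 1)
    for n in range(1, n_max + 1):
        moves = [g[n - k] for k in range(l, min(r, n) + 1)]
        g[n] = mex(moves)           # mex of an empty move set is 0 (losing position)
    return g

if __name__ == "__main__":
    l, r = 2, 5
    g = grundy_table(40, l, r)
    print(g)
    # eyeballed pattern, then asserted: G(n) == (n % (l + r)) // l
    assert all(g[n] == (n % (l + r)) // l for n in range(41))
```

This "compute a table, guess the formula, assert it on a larger range" loop is exactly the in-contest tactic the comment above wishes it had used.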
 » 14 months ago, # | ← Rev. 2 →   +8 Thanks so much for providing Python solutions~
 » 14 months ago, # |   +1 I don't understand why $cycle[x]$ equals $chain[x]$ for $x$ up to $r+l-1$. Could anyone please explain it to me? Thanks
•  » » 14 months ago, # ^ | ← Rev. 5 →   +6 Suppose $chain[x] = \lfloor x/l \rfloor$ is correct. Then when $l \le x \le l + r - 1$, $cycle[x] = \operatorname{mex}(chain[(x-r)^+], \ldots, chain[x-l]) = \operatorname{mex}(0, \ldots, \lfloor (x-l)/l \rfloor) = \lfloor x/l \rfloor = chain[x]$, and when $x < l$, $cycle[x] = chain[x] = 0$.

Edit: The editorial actually has a typo: the mex should start from $chain[(x-r)^+]$, not $chain[0]$.
•  » » » 14 months ago, # ^ |   0 I got it, thanks
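If it helps anyone else, the identity above is easy to check numerically. Here is a throwaway Python sketch, assuming the editorial's $chain[x] = \lfloor x/l \rfloor$ and the mex formula for cycles, confirming $cycle[x] = chain[x]$ for all $x \le l + r - 1$:

```python
def mex(vals):
    """Smallest non-negative integer not in `vals`."""
    m = 0
    s = set(vals)
    while m in s:
        m += 1
    return m

def check(l, r):
    # cycle[x] = mex(chain[(x-r)^+], ..., chain[x-l]) with chain[y] = y // l
    for x in range(l + r):                  # all x <= l + r - 1
        opts = [y // l for y in range(max(x - r, 0), x - l + 1)]
        cycle = mex(opts)                   # empty range (x < l) gives mex() = 0
        assert cycle == x // l, (l, r, x)

for l in range(1, 8):
    for r in range(l + 1, 12):
        check(l, r)
print("cycle[x] == chain[x] for all x <= l + r - 1 in every tested (l, r)")
```

The key point the check makes visible: for $l \le x \le l + r - 1$, the arguments $y$ run over consecutive values down to at most $l-1$, so the floors $\lfloor y/l \rfloor$ cover $0, 1, \ldots, \lfloor (x-l)/l \rfloor$ with no gaps, and the mex is the next value, $\lfloor x/l \rfloor$.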
 » 14 months ago, # |   0 I thought we needed to consider $l = r$ for E and was stuck for a long time. Now I am curious: is there a pattern in this case?
•  » » 14 months ago, # ^ |   0 Did you find any pattern for $l = r$?
 » 14 months ago, # |   +11 1823F - Random Walk has another solution. Root the tree at $t$ and consider the following question: suppose we are at a vertex $v$ at the given moment. What is the expected number of times we will visit it from now on, including the current time?

Well, if we go into any of its $\deg(v) - 1$ children, we must visit it again. Otherwise, if we go up to its parent, the probability we visit it again is exactly $1 - \frac{1}{d(v)}$, where $d(v)$ is the depth of $v$ in the tree. I think the adedalic solution covers an approach to seeing this, but the short answer is induction on the length of the path between $t$ and $v$.

Hence, if $e(v)$ is the expected number of visits to $v$ given that we are there at the current moment, it satisfies $e(v) = 1 + \left(\frac{\deg(v) - 1}{\deg(v)} + \frac{1}{\deg(v)}\cdot \left(1 - \frac {1}{d(v)}\right)\right)e(v).$ (For some reason the displaystyle math environment is acting weird for me.) Simplifying this gives $e(v) = \deg(v)\cdot d(v).$

However, this doesn't take into account the probability that $v$ is ever visited! If $v$ is on the path from $s$ to $t$, then it is always visited. Otherwise, suppose that the first time the path from $v$ to $t$ intersects the path from $s$ to $t$ is at vertex $u$. Since $u$ is always visited, the probability that $v$ is ever visited is the probability that a random walk from $u$ on the bamboo/path from $v$ to $t$ visits $v$ before $t$. But by the same induction, this is exactly $\frac{d(u)}{d(v)}$. Hence, if $e'(v)$ is the true expected number of visits to $v$, we have $e'(v) = e(v) \cdot \frac{d(u)}{d(v)} = \deg(v) \cdot d(u)$. Finally, if $v = t$, we have $e'(v) = 1$. This is fairly easy to calculate: 203820888
•  » » 14 months ago, # ^ |   +5 Why, if we go up to its parent, is the probability that we visit it again exactly $1 - \frac{1}{d(v)}$, where $d(v)$ is the depth of $v$ in the tree? Can you explain this to me?
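A quick way to gain confidence in the $e'(v) = \deg(v)\cdot d(v)$ claim is to check it on a tiny case. The sketch below (my own construction, not the commenter's code) solves the standard first-step visit-count system $c(i) = [i = s] + \sum_{j \in N(i),\, j \ne t} c(j)/\deg(j)$ exactly over the rationals on the bamboo 1-2-3-4 with $s = 1$, $t = 4$, where every vertex lies on the s-t path:

```python
from fractions import Fraction

def expected_visits(adj, s, t):
    """Exact expected visit counts for a random walk from s absorbed at t.

    Solves c(i) = [i == s] + sum_{j in N(i), j != t} c(j) / deg(j)
    by Gauss-Jordan elimination over the rationals."""
    n = len(adj)
    A = [[Fraction(0)] * n for _ in range(n)]
    b = [Fraction(0)] * n
    for i in range(n):
        A[i][i] = Fraction(1)
        b[i] = Fraction(1 if i == s else 0)
        for j in adj[i]:
            if j != t:
                A[i][j] -= Fraction(1, len(adj[j]))
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        inv = 1 / A[col][col]
        A[col] = [x * inv for x in A[col]]
        b[col] *= inv
        for r in range(n):
            if r != col and A[r][col] != 0:
                f = A[r][col]
                A[r] = [x - f * y for x, y in zip(A[r], A[col])]
                b[r] -= f * b[col]
    return b

# bamboo 1-2-3-4, zero-indexed as 0-1-2-3, with s = 0 and t = 3
adj = [[1], [0, 2], [1, 3], [2]]
c = expected_visits(adj, s=0, t=3)
deg = [len(a) for a in adj]
dist = [3, 2, 1, 0]            # d(v): distance from v to t
for v in range(3):             # every non-t vertex here lies on the s-t path
    assert c[v] == deg[v] * dist[v]
assert c[3] == 1
print(c)                       # [3, 4, 2, 1] as Fractions
```

Here $d(v)$ is the distance from $v$ to $t$; the solver returns $[3, 4, 2, 1]$, matching $\deg(v)\cdot d(v)$ on the path vertices and $e'(t) = 1$. The same function can be pointed at any small tree to test the off-path $\deg(v)\cdot d(u)$ case as well.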
 » 14 months ago, # | ← Rev. 2 →   0 The first equation of F may seem obvious; here is my attempt at deriving it rigorously, where $x_{\tau}$ is the position of the chip at time $\tau$, with initial $\tau = 1$:

$E(c(i)) = E\left(\sum_{\tau=1}^\infty 1_{x_\tau = i}\right)$

$= \sum_{\tau=2}^\infty P(x_\tau = i) + P(x_1 = i)$

$= \sum_{\tau=2}^\infty \sum_{j=1}^n P(x_\tau = i \mid x_{\tau-1} = j)\, P(x_{\tau-1} = j) + 1_{i=s}$

$= \sum_{\tau=1}^\infty \sum_{j \in N(i),\, j \neq t} \frac{1}{|N(j)|} P(x_\tau = j) + 1_{i=s}$

$= \sum_{j \in N(i),\, j \neq t} \frac{E(c(j))}{|N(j)|} + 1_{i=s}$

Notice the special $1_{i=s}$ at the end. Also, if $i = t$, we need to change the definition so that $|N(t)| = 1$. Imagine that the chip then goes into some other absorbing state, so $t$ is visited only once.
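One zero-algebra sanity check of this equation is a seeded Monte Carlo run (illustrative only, not the author's method): on the path 1-2-3-4 with $s = 1$, $t = 4$, hand-solving the resulting system gives $E(c(1)) = 3$, and a simulation reproduces it:

```python
import random

def simulate_visits(adj, s, t, target, trials, seed=0):
    """Average number of visits to `target` over random walks from s absorbed at t."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        v = s
        while True:
            if v == target:
                total += 1
            if v == t:          # t is absorbing: counted at most once, then stop
                break
            v = rng.choice(adj[v])
    return total / trials

# path 1-2-3-4, zero-indexed as 0-1-2-3, with s = 0 and t = 3
adj = [[1], [0, 2], [1, 3], [2]]
est = simulate_visits(adj, s=0, t=3, target=0, trials=100_000)
# hand-solving the visit-count system for this path gives E(c(0)) = 3
assert abs(est - 3) < 0.1
print(round(est, 2))
```

With 100k trials the standard error is well under 0.01, so the assertion is a comfortable margin; bump `trials` for tighter agreement.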
 » 14 months ago, # | ← Rev. 3 →   0 For problem F, we have

$E(c(u)) = \sum_{(u, v)} E(c(v)) \cdot P(v, u),$

where $P(v, u)$ is the probability that the chip at $v$ moves to $u$ (plus an extra $1$ when $u = s$, handled in the code below). So $E(c(u))$ and $E(c(v))$ depend on each other, but the problem is set on a tree, so we can assume

$E(c(u)) = free(u) + E(c(p)) \cdot g(u),$

where $free(u)$ is a constant coefficient and $g(u)$ is the "familiarity" rate with the parent $p$. Our goal is to find $free(u)$ and $g(u)$. How can we do that? We have

$E(c(u)) = \sum_{(u, v)} (free(v) + E(c(u)) \cdot g(v)) \cdot P(v, u) + E(c(p)) \cdot P(p, u)$

$\iff \left(1 - \sum_{(u, v)} g(v) \cdot P(v, u)\right) \cdot E(c(u)) = \sum_{(u, v)} free(v) \cdot P(v, u) + E(c(p)) \cdot P(p, u),$

where $v$ ranges over the children of $u$. We run one DFS to calculate $free(u)$ and $g(u)$, and then another DFS to calculate the final answer.

Short solution (using a modular-arithmetic type `mint`):

```cpp
function<pair<mint, mint>(int, int)> DFS = [&](int u, int p) {
    if (u == t) return f[u] = {1, 0};
    mint freedom = u == s ? 1 : 0;
    mint parent = 0, adjacent = 0;
    for (int v : graph[u])
        if (v == p) parent += mint(1) / graph[p].size();
        else {
            pair<mint, mint> child = DFS(v, u);
            freedom += v == t ? mint(0) : child.first / graph[v].size();
            adjacent += v == t ? mint(0) : child.second / graph[v].size();
        }
    mint k_const = 1 - adjacent;
    freedom /= k_const;
    parent /= k_const;
    return f[u] = {freedom, parent};
};
function<void(int, int)> Calculate = [&](int u, int p) {
    dp[u] = f[u].first;
    if (p > 0) dp[u] += dp[p] * f[u].second;
    if (u == t) return;
    for (int v : graph[u]) if (v != p) Calculate(v, u);
};
```
 » 14 months ago, # |   0 Very fun contest! Both problem D and E have quite interesting solutions (I didn't attempt problem F).
 » 14 months ago, # |   0 Thank you for task E, it is very nice!