shiven's blog

By shiven, history, 3 years ago, In English

My submission to today's problem F apparently uses 0 KB of memory, and I was able to find a few other similar cases as well. Even ignoring every other obvious factor, the submission stores at least 4e5 integers, so it is nowhere near 0 KB even after any reasonable rounding.

Probably a bug (or Kotlin sponsorship perks? :p) worth looking into.


By shiven, history, 4 years ago, In English

Welcome to my first blog!

I recently came across problem E of Round #646 and solved it using an approach that doesn't seem to be mentioned in the tags, so I felt it would be a good idea to share it! Let's jump straight into the solution.

I'll refer to nodes that are initially marked 1 and need to become 0 as type $$$OZ$$$ nodes, and nodes that need to change from 0 to 1 as type $$$ZO$$$ nodes.

An intuitive idea that is guaranteed to work is to greedily select the node with the lowest cost and make all possible exchanges in its subtree first. So we sort the nodes in non-decreasing order of cost and process them in that order.

To achieve that, we need to efficiently query the number of nodes of types $$$ZO$$$ and $$$OZ$$$ in a subtree, and update the counts every time we shuffle. We can use an Euler tour to flatten the tree into an array. To keep track of the subtree of any node, we store the in and out times of each node during the DFS (say, in $$$tin$$$ and $$$tout$$$ respectively). The subtree of any node $$$P$$$ consists of all nodes $$$I$$$ such that $$$tin[I] \ge tin[P]$$$ and $$$tout[I] \le tout[P]$$$.
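As a rough sketch of that step (the names $$$adj$$$, $$$tin$$$, $$$tout$$$ and the exact timestamp convention below are placeholders of mine, not necessarily what the linked implementation uses), the DFS could look like this. With this convention each node occupies a single position, so the subtree of $$$P$$$ is exactly the contiguous range $$$[tin[P], tout[P]]$$$:

```cpp
// Sketch of the Euler tour that flattens the tree (placeholder names).
#include <bits/stdc++.h>
using namespace std;

vector<vector<int>> adj;   // adjacency list of the tree
vector<int> tin, tout;     // entry time / last position inside the subtree
int timer_ = 0;

void dfs(int u, int parent) {
    tin[u] = timer_++;                 // position of u in the flattened array
    for (int v : adj[u])
        if (v != parent) dfs(v, u);
    tout[u] = timer_ - 1;              // last flattened position in u's subtree
}
```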

We can maintain two cumulative frequency arrays to count the number of nodes of types $$$OZ$$$ and $$$ZO$$$ in any subtree. When we process any node $$$P$$$’s subtree, we count the nodes of both types; let’s call the counts $$$C_{OZ}$$$ and $$$C_{ZO}$$$. The maximum number of exchanges we can make in this subtree is $$$k = \min(C_{OZ}, C_{ZO})$$$, incurring a cost of $$$2 \cdot cost[P] \cdot k$$$, which we add to the answer.
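For concreteness, here is a sketch of one such counting structure. I write it directly as a Fenwick tree, since the counts will need to support updates shortly; the names `Fenwick`, `oz` and `zo` are placeholders of mine, not taken from the actual submission:

```cpp
// A standard Fenwick tree (BIT) over the Euler order; one instance per node type.
struct Fenwick {
    int n;
    vector<long long> bit;
    Fenwick(int n) : n(n), bit(n + 1, 0) {}
    void add(int pos, long long delta) {        // point update at Euler position pos
        for (int i = pos + 1; i <= n; i += i & -i) bit[i] += delta;
    }
    long long prefix(int pos) const {           // sum over positions [0, pos]
        long long s = 0;
        for (int i = pos + 1; i > 0; i -= i & -i) s += bit[i];
        return s;
    }
    long long range(int l, int r) const {       // sum over positions [l, r]
        return prefix(r) - (l ? prefix(l - 1) : 0);
    }
};

// Counts of both types inside P's subtree:
//   long long C_OZ = oz.range(tin[P], tout[P]);
//   long long C_ZO = zo.range(tin[P], tout[P]);
```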

However, we need to update the counts when we perform any exchange. An important observation here is that if we process a node $$$P$$$ before $$$Q$$$ (because it had a lower cost), and $$$Q$$$ lies in the subtree of $$$P$$$, then we can simply ignore $$$Q$$$ when we encounter it: all the exchanges possible in its subtree have already been performed at a lower cost. How do we efficiently check whether any of a node’s ancestors has already been processed? We create another Fenwick tree and, once we process a subtree, mark its Euler-tour segment using range updates and point queries. In my implementation, this tree is named $$$color$$$.
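A minimal sketch of that marking structure (the name $$$color$$$ is from my code; the rest of the interface shown here is only illustrative):

```cpp
// Fenwick tree used the other way around: range updates, point queries.
// markRange marks the Euler segment of a processed subtree; query(tin[Q]) > 0
// then means that Q, or one of its ancestors, has already been processed.
struct ColorBIT {
    int n;
    vector<int> bit;
    ColorBIT(int n) : n(n), bit(n + 1, 0) {}
    void pointAdd(int pos, int delta) {
        for (int i = pos + 1; i <= n; i += i & -i) bit[i] += delta;
    }
    void markRange(int l, int r) {              // add 1 to every position in [l, r]
        pointAdd(l, 1);
        if (r + 1 < n) pointAdd(r + 1, -1);
    }
    int query(int pos) const {                  // value at a single position
        int s = 0;
        for (int i = pos + 1; i > 0; i -= i & -i) s += bit[i];
        return s;
    }
};
```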

With descendants of processed nodes ruled out, it becomes very easy to update the counts after each shuffle. We first convert our cumulative frequency arrays for the $$$OZ$$$ and $$$ZO$$$ counts into Fenwick trees. Then, once we perform $$$k$$$ exchanges in node $$$P$$$’s subtree, we add $$$-k$$$ to both the $$$ZO$$$ and the $$$OZ$$$ tree at node $$$P$$$’s position in the Euler tour. This works because any segment of $$$P$$$’s subtree that is queried for the count of $$$ZO$$$ or $$$OZ$$$ nodes will also include $$$P$$$ itself. Remember that we have ruled out the nodes inside the subtree of $$$P$$$ from further processing, so we won’t need detailed information about exactly where or how the shuffling occurred. Any node that still cares about a segment of $$$P$$$’s subtree must also care about $$$P$$$, because it can only be an ancestor of $$$P$$$. The negative value added at $$$P$$$’s position then prevents overcounting by cancelling the exchanges that have already been performed.
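Putting the pieces together, the processing loop could look roughly like this. It is a sketch under the assumptions above: `cost` holds each node's cost, `oz`, `zo` and `color` are the trees sketched earlier, already initialised over the Euler order, and the function name and signature are mine rather than from the actual submission:

```cpp
// Process nodes in non-decreasing order of cost, as described above.
long long solve(int n, const vector<long long>& cost,
                const vector<int>& tin, const vector<int>& tout,
                Fenwick& oz, Fenwick& zo, ColorBIT& color) {
    vector<int> order(n);
    iota(order.begin(), order.end(), 0);
    sort(order.begin(), order.end(),
         [&](int a, int b) { return cost[a] < cost[b]; });

    long long answer = 0;
    for (int p : order) {
        if (color.query(tin[p]) > 0) continue;        // an ancestor was processed earlier
        long long cOZ = oz.range(tin[p], tout[p]);    // unmatched OZ nodes in p's subtree
        long long cZO = zo.range(tin[p], tout[p]);    // unmatched ZO nodes in p's subtree
        long long k = min(cOZ, cZO);                  // exchanges we can perform here
        answer += 2LL * cost[p] * k;
        oz.add(tin[p], -k);                           // cancel the k matched OZ nodes ...
        zo.add(tin[p], -k);                           // ... and the k matched ZO nodes
        color.markRange(tin[p], tout[p]);             // p's subtree is now fully handled
    }
    return answer;
}
```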

The complexity does pick up an extra factor of $$$\log(n)$$$, but it performs pretty close to the solution mentioned in the editorial, at least for the given constraints.

And that’s it! Here’s my implementation. Let me know if there’s anything I can improve, or if you have any doubts.
