Can anyone please help me with a problem?
Problem: You are given an undirected connected graph and Q queries; in each query you will be given a vertex 'v', and the output for each query should be the size of the largest connected component after removal of vertex 'v'.
number of nodes <= 10^5
queries <= 10^5
Note: Queries are independent.
Root the tree arbitrarily (this assumes the graph is a tree). Let $$$sz[v]$$$ be the size of the subtree of node $$$v$$$, including $$$v$$$ itself (to make the implementation easier). The answer for query $$$v$$$ is $$$\max\big(\max_{c \in \mathrm{children}(v)} sz[c],\; n - sz[v]\big)$$$, because cutting off node $$$v$$$ gives you a tree for each child and another tree for the rest of the nodes. And you can precompute this for every node to avoid $$$O(NQ)$$$ runtime. Total complexity: $$$O(N + Q)$$$.
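A minimal sketch of this precomputation, assuming the input is a tree given as $$$n-1$$$ edges over 1-indexed vertices (the input format here is my assumption):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n, q;
    cin >> n >> q;
    vector<vector<int>> adj(n);
    for (int i = 0; i < n - 1; i++) {
        int a, b;
        cin >> a >> b;
        --a; --b;
        adj[a].push_back(b);
        adj[b].push_back(a);
    }

    // BFS from root 0; reversing the BFS order visits children before
    // parents, and it avoids stack overflow at n = 10^5 (unlike recursion).
    vector<int> par(n, -1), order;
    order.reserve(n);
    vector<char> seen(n, 0);
    queue<int> bfs;
    bfs.push(0);
    seen[0] = 1;
    while (!bfs.empty()) {
        int v = bfs.front(); bfs.pop();
        order.push_back(v);
        for (int to : adj[v])
            if (!seen[to]) { seen[to] = 1; par[to] = v; bfs.push(to); }
    }

    vector<int> sz(n, 1), ans(n, 0);
    for (int i = n - 1; i > 0; i--) {     // i > 0 skips the root
        int v = order[i], p = par[v];
        sz[p] += sz[v];
        ans[p] = max(ans[p], sz[v]);      // largest child subtree so far
    }
    for (int v = 0; v < n; v++)
        ans[v] = max(ans[v], n - sz[v]);  // the part "above" v; 0 for the root

    while (q--) {
        int v;
        cin >> v;
        cout << ans[v - 1] << '\n';
    }
}
```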
If nodes get removed after each query, then I think you'd need the Euler Tour Technique + a Segment Tree w/ Lazy Propagation to subtract from all nodes in the subtree of $$$v$$$, and maybe something else for the ancestors of $$$v$$$, which would be a lot more complicated. I'm not too sure about this though, so maybe someone knows a better (and simpler!) way of doing this.

If nodes get removed after each query, maybe you could just solve the queries in reverse, and add nodes instead of removing them?
Approach 1. Compute the low function on the graph's DFS tree (similar to https://cp-algorithms.com/graph/cutpoints.html). When removing node V, the subtree of a child of V becomes disconnected from the rest of the graph iff low[child] >= depth[V] (with a strict >, you would miss children whose subtrees reach back only to V itself). The largest connected component is then max(disconnected_subtree_sizes, N - 1 - sum_of_disconnected_subtree_sizes), where the -1 accounts for V itself being removed. You can compute the answer for each node offline and answer each query in O(1), for a total of O(n+m).
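A sketch of Approach 1 (it assumes a simple graph, so skipping the parent by vertex is safe; the recursion can go n deep, which needs a large enough stack at n = 10^5):

```cpp
#include <bits/stdc++.h>
using namespace std;

int n, m, timer_;
vector<vector<int>> adj;
vector<int> tin, low, sz, ans;

// Classic cut-point DFS, extended with subtree sizes. The subtree of a child
// `to` gets disconnected by removing v exactly when low[to] >= tin[v].
void dfs(int v, int parent) {
    tin[v] = low[v] = timer_++;
    sz[v] = 1;
    int cut = 0, best = 0;  // total / largest size of split-off subtrees
    for (int to : adj[v]) {
        if (to == parent) continue;         // assumes no parallel edges
        if (tin[to] != -1) {
            low[v] = min(low[v], tin[to]);  // back edge
        } else {
            dfs(to, v);
            sz[v] += sz[to];
            low[v] = min(low[v], low[to]);
            if (low[to] >= tin[v]) {        // subtree of `to` splits off
                cut += sz[to];
                best = max(best, sz[to]);
            }
        }
    }
    // Everything not split off (and not v itself) stays in one component.
    // For the DFS root, cut == n - 1, so this reduces to the largest child.
    ans[v] = max(best, n - 1 - cut);
}

int main() {
    cin >> n >> m;
    adj.assign(n, {});
    for (int i = 0; i < m; i++) {
        int a, b;
        cin >> a >> b;
        --a; --b;
        adj[a].push_back(b);
        adj[b].push_back(a);
    }
    tin.assign(n, -1); low.assign(n, 0);
    sz.assign(n, 0); ans.assign(n, 0);
    dfs(0, -1);                             // the graph is connected
    int q;
    cin >> q;
    while (q--) { int v; cin >> v; cout << ans[v - 1] << '\n'; }
}
```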
Approach 2. You can also solve the problem with offline dynamic connectivity (https://cp-algorithms.com/data_structures/deleting_in_log_n.html) in O((n+m)·log(m)·log(n)): for each node in turn, simulate removing all of its incident edges while maintaining a disjoint set union structure (with rollbacks) that tracks component sizes.
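And a hedged sketch of Approach 2 as offline dynamic connectivity: time t means "node t is the one removed", each edge (u,v) is alive at every time except t=u and t=v, and a rollback DSU (no path compression) tracks the maximum component size while we walk the segment tree over the timeline. The input format is again my assumption.

```cpp
#include <bits/stdc++.h>
using namespace std;

// DSU without path compression, supporting rollback and tracking the
// current maximum component size.
struct RollbackDSU {
    vector<int> par, sz;
    int mx = 1;
    vector<array<int, 3>> hist;  // (root kept, root attached, previous mx)
    RollbackDSU(int n) : par(n), sz(n, 1) { iota(par.begin(), par.end(), 0); }
    int find(int x) { while (par[x] != x) x = par[x]; return x; }
    void unite(int a, int b) {
        a = find(a); b = find(b);
        if (a == b) { hist.push_back({-1, -1, mx}); return; }
        if (sz[a] < sz[b]) swap(a, b);
        hist.push_back({a, b, mx});
        par[b] = a; sz[a] += sz[b]; mx = max(mx, sz[a]);
    }
    void rollback() {
        auto [a, b, pm] = hist.back(); hist.pop_back();
        mx = pm;
        if (a != -1) { sz[a] -= sz[b]; par[b] = b; }
    }
};

int n, m;
vector<vector<pair<int, int>>> segtree;  // edges attached to segtree nodes
vector<int> ans;

void addEdge(int node, int l, int r, int ql, int qr, pair<int, int> e) {
    if (ql > qr || qr < l || r < ql) return;
    if (ql <= l && r <= qr) { segtree[node].push_back(e); return; }
    int mid = (l + r) / 2;
    addEdge(2 * node, l, mid, ql, qr, e);
    addEdge(2 * node + 1, mid + 1, r, ql, qr, e);
}

void dfs(int node, int l, int r, RollbackDSU& dsu) {
    for (auto [a, b] : segtree[node]) dsu.unite(a, b);
    // At time l, node l is an isolated size-1 component here; for n >= 2
    // that never changes the maximum over the remaining nodes.
    if (l == r) ans[l] = dsu.mx;
    else {
        int mid = (l + r) / 2;
        dfs(2 * node, l, mid, dsu);
        dfs(2 * node + 1, mid + 1, r, dsu);
    }
    for (size_t i = 0; i < segtree[node].size(); i++) dsu.rollback();
}

int main() {
    cin >> n >> m;
    segtree.assign(4 * n, {});
    ans.assign(n, 0);
    for (int i = 0; i < m; i++) {
        int u, v;
        cin >> u >> v;
        --u; --v;
        if (u > v) swap(u, v);
        // edge (u,v) is alive at every time except t = u and t = v;
        // empty intervals are rejected inside addEdge
        addEdge(1, 0, n - 1, 0, u - 1, {u, v});
        addEdge(1, 0, n - 1, u + 1, v - 1, {u, v});
        addEdge(1, 0, n - 1, v + 1, n - 1, {u, v});
    }
    RollbackDSU dsu(n);
    dfs(1, 0, n - 1, dsu);
    int q;
    cin >> q;
    while (q--) { int v; cin >> v; cout << ans[v - 1] << '\n'; }
}
```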