## 1088A - Ehab and another construction problem

Well, the constraints allow a brute-force solution, but here's an *O*(1) solution:

If *x* = 1, there's no solution. Otherwise, just print *x* - *x*%2 and 2.

Code link: https://pastebin.com/LXvuX8Ez

Time complexity: *O*(1).

## 1088B - Ehab and subtraction

Let *s* be the set of numbers in the input (sorted and distinct). In the *i*^{th} step, *s*_{i} is subtracted from every element that is bigger or equal, while all smaller elements are already 0. Thus, the answer in the *i*^{th} step is *s*_{i} - *s*_{i - 1} (where *s*_{0} = 0).

Code link: https://pastebin.com/bpz1YxBe

Time complexity: *O*(*n* *log*(*n*)).

## 1088C - Ehab and a 2-operation task

The editorial uses 0-indexing.

Both solutions make *a*_{i} = *i*.

#### First solution, n adds and 1 mod

First, let's make *a*_{i} = *x*·*n* + *i* (for some *x*). Then, we mod the whole array by *n* (making *a*_{i} = *i*). If the "add update" changed only one index, we could just add *i* + *n* - *a*_{i}%*n* to index *i*. The problem is that the update adds to a whole prefix: if we make *a*_{i} = *x*·*n* + *i* and then update an index *j* > *i*, *a*_{i} will be ruined. Just start from the back of the array!

Code link: https://pastebin.com/dBfhNBL8

#### Second solution, 1 add and n mods

**Note**: for any *a*, *b*, if *b* > *a*, *a*%*b* = *a*. Additionally, if *b* ≤ *a* < 2·*b*, *a*%*b* = *a* - *b*.

Let's add 5·10^{5} to the whole array, loop over *a*_{i} (in order), and mod prefix *i* with *a*_{i} - *i*. Why does this work? Notice that *a*_{i}%(*a*_{i} - *i*) = *a*_{i} - (*a*_{i} - *i*) = *i* (the second note). Also, *a*_{i} won't be changed afterwards (the first note).

Code link: https://pastebin.com/L6suPC1f

Time complexity: *O*(*n*).

## 1088D - Ehab and another another xor problem

This problem is particularly hard to explain :/ I recommend the simulation.

Let's build *a* and *b* bit by bit from the most significant to the least significant (assume the bits found so far are stored in *curA* and *curB*). Then, at the *i*^{th} step, *a*⊕*curA* and *b*⊕*curB* have all bits from the most significant down to the (*i* + 1)^{th} set to 0. Notice that whether *x* is greater or less than *y* is judged by the most significant bit in which they differ (the one that has a 1 is bigger).

Let's query with (*curA*⊕2^{i}, *curB*) and (*curA*, *curB*⊕2^{i}). In each query, the two compared numbers can only differ in the *i*^{th} bit (or a bit less significant). Now, if the results of the queries are different, *a* and *b* have the same value in this bit, and this value can be determined by the answers of the respective queries (1 if the second query's answer is 1, 0 otherwise). If the queries give the same result, *a* and *b* must differ in this bit. How do we know which of them has the 1 and which has the 0? The greater between them (after setting the processed bits to 0) has the 1 and the other has the 0. The trick is to keep track of which one is greater. Before all queries, we send (0, 0) to learn it. Every time they differ in a bit, the greater may change, and it simply changes to the (common) answer of the 2 queries we just sent! In other words, at the moment we sent the queries, we learned that after making *a* and *b* equal in this bit, some other bit becomes the most significant bit in which they differ, and we also learned who is greater below this bit (the answer of this query). Thus, we keep that answer for the future, so when that bit comes, we don't need additional queries.

**Simulation for an example**

Code link: https://pastebin.com/b9zgKuJ6

Time complexity: *O*(*log*(*n*)).

## 1088E - Ehab and a component choosing problem

Assume you already chose the components. Let the sum of nodes in the *i*^{th} component be *b*_{i}. Then, the expression in the problem is equivalent to *average*(*b*_{1}, *b*_{2}, ..., *b*_{k}). Assume we only bother about the fraction maximization problem and don't care about *k*. Then, it'll always be better to choose the component with the maximum *b*_{i} and throw away the rest! This is because of the famous inequality:

*max*(*b*_{1}, *b*_{2}, ..., *b*_{k}) ≥ *average*(*b*_{1}, *b*_{2}, ..., *b*_{k}) and the equality only occurs if all *b*_{i} are equal!

This means that the maximum value of the fraction is simply the maximum sum of a sub-component in the tree. To calculate it, let's root the tree at node 1, and calculate *dp*[*node*], the maximum sum of a sub-component that contains node. Now, I'll put the code, and explain it after.

```
void dfs(int node,int p,bool f)
{
    dp[node]=a[node];
    for (int u:v[node])
    {
        if (u!=p)
        {
            dfs(u,node,f);
            // extend node's component with u's component only if it helps
            dp[node]+=max(dp[u],0LL);
        }
    }
    if (f)
        ans=max(ans,dp[node]); // first pass: record the maximum sum
    else if (dp[node]==ans)
    {
        // second pass: cut off a maximal component greedily
        dp[node]=0;
        k++;
    }
}
```

*ans* denotes the maximum sub-component sum.

First, we call *dfs*(1, 0, 1). We calculate the *dp* of all the children of *node*. For every child *u*, we extend the component of *node* with the component of *u* if *dp*[*u*] > 0, and do nothing otherwise.

Now, we solved the first half of our problem, but what about maximizing *k*? Notice that all components you choose must have a sum of weights equal to *ans* (because the equality occurs if and only if all *b*_{i} are equal). You just want to maximize their count. Let's calculate our *dp* again. Assume *dp*[*node*] = *ans*. We have 2 choices: either mark the *node* and its component as a component in the answer (but then other nodes won't be able to use them because the components can't overlap), or wait and extend the component. The idea is that there's no reason to wait. If we extend the component with some nodes, they won't change the sum, and they may even have another sub-component with maximal sum that we're merging to our component and wasting it! Thus, we'll always go with the first choice, making *dp*[*node*] = 0 so that its parent can't use it, and increasing *k* :D

Code link: https://pastebin.com/8pCrTfuP

Time complexity: *O*(*n*).

## 1088F - Ehab and a weird weight formula

First, let's reduce the problem to ordinary MST. We know that each edge {*u*, *v*} adds ⌈*log*_{2}(*dist*(*u*, *v*))⌉·*min*(*a*_{u}, *a*_{v}) to *w*. In fact, it also adds 1 to *deg*_{u} and *deg*_{v}. Thus, the problem is ordinary MST on a complete graph where each edge {*u*, *v*} has weight (⌈*log*_{2}(*dist*(*u*, *v*))⌉ + 1)·*min*(*a*_{u}, *a*_{v}) + *max*(*a*_{u}, *a*_{v})!

Let the node with the minimum weight be *m*. Let's root the tree at it.

**Lemma**: for every node *u* and a child *v*, *a*_{v} > *a*_{u}. In simpler words, the weights increase as we go down the tree.

**Proof**: by contradiction. Assume *a*_{v} ≤ *a*_{u}. Then, the condition in the problem (that every node has an adjacent node with less weight) can't be satisfied for *v* through its parent *u*. Therefore, *v* must have a child *k* such that *a*_{k} < *a*_{v}. However, the condition then isn't satisfied for *k* either, so *k* needs another child, and that child needs another child, etc. (the tree would be infinite), which is clearly a contradiction.

From that, we know that the weights decrease as we go up the tree and increase as we go down.

Back to the MST problem. From Kruskal's algorithm, we know that the minimal edge incident to every node will be added to the MST (because the edges are sorted by weight). Let's analyze the minimal edge incident to a node *u*, and let its other end be *v*. Except for node *m*, ***v* will be an ancestor of *u***. Why? Assume we fix the distance part of the weight and just want to minimize *a*_{v}. We'll keep going up the tree (it's never optimal to go down, since the weights increase) until we reach the desired distance. Now, since the minimal edge incident to every node will be added to the MST (by Kruskal's algorithm), and these edges are distinct (because, otherwise, you'd be saying that *u* is an ancestor of *v* and *v* is an ancestor of *u*), THEY ARE THE MST. Now, the problem just reduces to finding the minimal edge incident to every node and summing them up (except for *m*). To do that, we'll fix ⌈*log*_{2}(*dist*(*u*, *v*))⌉ (let it be *k*), and get the 2^{k}-th ancestor with the well-known sparse-table (binary lifting).

Code link: https://pastebin.com/vzJqh8si

Time complexity: *O*(*n* *log*(*n*)).