### mohammedehab2002's blog

By mohammedehab2002, history, 2 months ago

### 1364A - XXXXX

Let's start with the whole array. If every element in it is divisible by $x$, the answer is $-1$; if its sum isn't divisible by $x$, the answer is $n$. Otherwise, we must remove some elements. The key idea is that removing an element that is divisible by $x$ doesn't benefit us at all, but once we remove an element that isn't, the sum is no longer divisible by $x$. So let the first non-multiple of $x$ be at index $l$, and the last one be at index $r$. We must remove either the prefix all the way up to $l$ or the suffix all the way back to $r$, and we'll clearly remove whichever is shorter.

Alternatively, we can notice that this means the answer is either a prefix or a suffix, so we can simply bruteforce them all.
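A minimal sketch of the prefix/suffix idea (the helper name and return convention are mine): it returns the length of the longest subarray whose sum isn't divisible by $x$, or $-1$.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Longest subarray with sum not divisible by x, or -1 if none exists.
int longestSubarray(const vector<long long>& a, long long x) {
    int n = a.size();
    long long total = accumulate(a.begin(), a.end(), 0LL);
    if (total % x != 0) return n;           // the whole array already works
    int l = -1, r = -1;                     // first/last non-multiple of x
    for (int i = 0; i < n; i++)
        if (a[i] % x != 0) { if (l == -1) l = i; r = i; }
    if (l == -1) return -1;                 // every element is divisible by x
    // drop the prefix ending at l (l+1 elements) or the suffix starting at r
    return n - min(l + 1, n - r);
}
```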

### 1364B - Most socially-distanced subsequence

TL;DR: the answer contains the first element, the last element, and all the local minima and maxima, where a local minimum is an element less than its 2 adjacent elements, and a local maximum is an element greater than its 2 adjacent elements.

Let's look at the expression in the problem for 3 numbers. If $a>b$ and $b>c$, or if $a<b$ and $b<c$, then $|a-b|+|b-c|=|a-c|$, so it's never optimal to use $a$, $b$, and $c$ in a row: you can use just $a$ and $c$, keep the same value, and get a shorter subsequence. If you keep erasing such $b$'s from the original permutation, you'll end up with the first element, the last element, and the local minima and maxima. You can see that erasing any of them would decrease the expression, so this is the optimal answer.
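The erasing process above can be sketched directly (the helper name is mine): keep the endpoints and every local extremum.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Keeps the first element, the last element, and every local extremum.
vector<int> compress(const vector<int>& p) {
    int n = p.size();
    vector<int> res;
    for (int i = 0; i < n; i++) {
        bool keep = (i == 0 || i == n - 1);
        if (!keep && p[i] > p[i-1] && p[i] > p[i+1]) keep = true; // local max
        if (!keep && p[i] < p[i-1] && p[i] < p[i+1]) keep = true; // local min
        if (keep) res.push_back(p[i]);
    }
    return res;
}
```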

### 1364C - Ehab and Prefix MEXs

The key observation is: if for some index $i$, $a_i \neq a_{i-1}$, then $b_i$ must be equal to $a_{i-1}$, since it's the only way to even change the prefix $MEX$. We can use this observation to fill some indices of $b$. Now, how do we fill the rest? Something special happens if we avoid using any element from $a$ again: if we look at the first $i$ numbers in $b$, $a_i$ will be excluded, so $MEX(\{b_1, b_2, \ldots, b_i\}) \le a_i$. Now we need to make it as big as possible. The logical thing to do is to fill the rest of $b$ with the numbers not in $a$, in increasing order. It turns out this construction always satisfies the conditions. Indeed, if we look at the first $i$ elements of $b$, every number less than $a_i$ will be present, because $a_i \le i$ and we added the missing numbers in increasing order.
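The construction can be sketched as follows (the function name is mine; the input is assumed to be a valid 0-indexed prefix-MEX array):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Given a valid prefix-MEX array a, returns one possible b.
vector<int> recoverB(const vector<int>& a) {
    int n = a.size();
    vector<int> b(n, -1);
    vector<bool> used(2 * n + 2, false);
    set<int> inA(a.begin(), a.end());
    int prev = 0; // MEX of the empty prefix
    for (int i = 0; i < n; i++) {
        if (a[i] != prev) { b[i] = prev; used[prev] = true; } // forced position
        prev = a[i];
    }
    // fill free positions with unused numbers not in a, in increasing order
    int cur = 0;
    for (int i = 0; i < n; i++) {
        if (b[i] != -1) continue;
        while (inA.count(cur) || used[cur]) cur++;
        b[i] = cur;
        used[cur] = true;
    }
    return b;
}
```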

### 1364D - Ehab's Last Corollary

The common idea is: if the graph is a tree, you can easily find an independent set with size $\lceil\frac{n}{2}\rceil$ by bicoloring the vertices and taking the vertices of the more frequent color. Otherwise, the graph is cyclic. Let's get a cycle that doesn't have any edges "cutting through it." In other words, it doesn't have any pair of non-adjacent vertices connected by an edge. If its length is at most $k$, print it. Otherwise, take every other vertex (take a vertex, leave a vertex) and you'll end up with a big enough independent set. How do we find such a cycle?
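The tree case can be sketched like this (the helper name is mine): 2-color the tree with a BFS and return the more frequent color class, which is an independent set of size at least $\lceil\frac{n}{2}\rceil$.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Independent set of size >= ceil(n/2) in a tree via 2-coloring.
vector<int> treeIndependentSet(int n, const vector<pair<int,int>>& edges) {
    vector<vector<int>> adj(n + 1);
    for (auto& e : edges) {
        adj[e.first].push_back(e.second);
        adj[e.second].push_back(e.first);
    }
    vector<int> color(n + 1, -1);
    queue<int> q;
    q.push(1);
    color[1] = 0;
    while (!q.empty()) {
        int u = q.front(); q.pop();
        for (int v : adj[u])
            if (color[v] == -1) { color[v] = color[u] ^ 1; q.push(v); }
    }
    vector<int> side[2];
    for (int v = 1; v <= n; v++) side[color[v]].push_back(v);
    return side[0].size() >= side[1].size() ? side[0] : side[1];
}
```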

### First solution

Let's do a dfs in our graph. The first time we hit a node that has a back-edge, we take the back-edge that goes to the deepest possible node to close our cycle. This cycle can't have any edges crossing it because none of our node's ancestors has a back-edge (by definition.)

### Second solution

Let's get any cycle in the graph. Now, let's iterate over the edges that don't belong to the cycle. Whenever we meet one that "crosses the cycle," we use it to cut the cycle into 2 smaller cycles and keep either of them. When we finish, we'll have our desired cycle.

### 1364E - X-OR

The common idea is: if we find the index that contains $0$, we can query it with every element in $p$ and finish in $n$ queries (if you didn't do that, pleaaase share your solution.) How to get this index?

### First solution

Let's try to make a magic function that takes an index $i$ and tells us $p_i$. Assume you have an array $z$ such that $z_j$ is some index in the permutation that has a $0$ in the $j^{th}$ bit. Building our magic function with it turns out to be very easy. We'll just return $query(i,z_0)$&$query(i,z_1)$&$\ldots$&$query(i,z_{10})$. Why does that work? If $p_i$ has a $1$ in the $j^{th}$ bit, this expression will also have a $1$ because $p_i$ will make every single clause have a $1$. If it has a $0$, $query(i,z_j)$ will also have a $0$, making the whole expression have a $0$!

But how do we find $z$? This turns out to be very easy. We'll query random pairs of indices, see where the result has a $0$, and update $z$. We stop once we fill every index in $z$. This works quickly because for any bit, at least half the numbers from $0$ to $n-1$ will have a $0$.

Now we already have an $n\log(n)$ solution (call our magic function with every index,) but how do we make fewer queries? Let's carry an index $idx$ that's supposed to hold the index of $0$ in the end, and let $p_{idx}$ be stored in $val$. Initially, $idx$ is $1$, and $val$ can be found with our magic function. Now, let's iterate over the permutation. We'll query the current index, $i$, with $idx$. If the result isn't a subset of $val$, $p_i$ can't be $0$, so let's throw it in the trash. Otherwise, we'll make $idx$ equal to $i$ and use our magic function to update $val$.


### Second solution

Thanks to Utkarsh.25dec for this solution.

I'll describe a way to start with $n$ candidates to be $0$ and end up with $\sqrt{n}$ candidates. Let's query random pairs until we find a pair whose bitwise-or has at most $\frac{log(n)}{2}$ set bits. Take one of the 2 indices in the pair (let's call it $i$) and query it with every candidate you have, and take the bitwise-and of the results. That will give you $p_i$. Now, let's make the indices whose query result with $i$ equals $p_i$ (hence, whose value is a subset of $p_i$) our new candidates. Since $p_i$ has at most $\frac{log(n)}{2}$ ones, it has at most $\sqrt{n}$ subsets, and we have our desired result!

Now, to find the index of $0$, we'll just do this recursively until we have 2 candidates. We'll keep querying them with random indices until the results differ. The one giving a smaller result is our $0$.


### Third solution

Assume you have 2 candidates for $0$ called $a$ and $b$ such that one of them is the index of $0$ at the end of our algorithm, and we always know $(p_a|p_b)$. Let's iterate over our indices in a random order and try to update $a$ and $b$. Assume the current index is $c$. Let's query to get $(p_b|p_c)$. We have 3 cases:

• If $(p_a|p_b)<(p_b|p_c)$, $p_c$ can't be $0$, so we'll throw it away.
• If $(p_a|p_b)>(p_b|p_c)$, $p_a$ can't be $0$, so we'll throw it away and change $a$ to be $c$.
• Otherwise, $p_b$ can't be $0$ because that would imply $p_a=p_c$ (recall that $p$ is a permutation.) So we can throw it away and change $b$ to be $c$. But notice that we now don't know $(p_a|p_b)$, so we're gonna have to make one more query, since we need to keep track of it.

After we're done, we can narrow our 2 candidates down to 1 with the same way described in the previous solution.



By mohammedehab2002, 2 months ago

Hi everyone!

Codeforces round #649 will take place on Jun/13/2020 18:05 (Moscow time). It's rated for the second division, but, as usual, first division participants can take part out of competition.

The problems were created by me. I'd like to thank my forever orzable coordinator antontrygubO_o; my incredible army of testers dorijanlendvaj, 300iq, Osama_Alkhodairy, AmShZ, Taran_1407, TigranMec, _Aaryan_, Mohammad_Yasser, zoooma13, lavish315, Utkarsh.25dec, NOOBxCODER, and Laggy; and, of course, you-know-who for the amazing codeforces and polygon platforms.

This time, in an effort to kill type-races and because I'm lazy, you'll be given 5 problems and 2 hours to solve them.

UPD: the scoring distribution will be 750-1000-1500-2000-2500.

UPD: the editorial is out.

UPD: congratulations to the winners!


Good luck & Have fun :D


By mohammedehab2002, history, 5 months ago

Hi!

Codeforces round #628 (vintage Codeforces round #2) will take place on Mar/14/2020 17:35 (Moscow time). It's rated for the second division, but, as usual, first division participants can take part out of competition.

The problems were created by me. I'd like to thank, and orz, antontrygubO_o for coordinating the round; ramchandra, pajenegod, aryanc403, Taran_1407, Kuroni, mcdx9524, dorijanlendvaj, Andreasyan, 300iq, zoooma13, Osama_Alkhodairy, Mohammad_Yasser, and DeadPillow (special shout-out) for testing the round; and of course MikeMirzayanov for the great codeforces and polygon platforms.

You'll be given 6 problems and 2 hours to solve them.

UPD: the scoring distribution will be 500-750-1250-1750-2500-2750.

UPD: the editorial is out.

UPD: congratulations to the winners!


Good luck & Have fun!


By mohammedehab2002, history, 6 months ago

### 1325A - EhAb AnD gCd

$a=1$ and $b=x-1$ always work.

First AC: Sevlll

Bonus task: can you count the valid pairs?

### 1325B - CopyCopyCopyCopyCopy

Let the number of distinct elements in $a$ be called $d$. Clearly, the answer is limited by $d$. Now, you can construct your subsequence as follows: take the smallest element from the first copy, the second smallest element from the second copy, and so on. Since there are enough copies to take every element, the answer is $d$.

First AC: socho
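The argument above boils down to counting distinct values (the helper name is mine):

```cpp
#include <bits/stdc++.h>
using namespace std;

// The longest increasing subsequence across n copies of a is the number of
// distinct values in a: take the k-th smallest distinct value from copy k.
int longestAcrossCopies(vector<int> a) {
    sort(a.begin(), a.end());
    a.erase(unique(a.begin(), a.end()), a.end());
    return a.size();
}
```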

### 1325C - Ehab and Path-etic MEXs

Notice that there will be a path that passes through the edge labeled $0$ and the edge labeled $1$ no matter how you label the edges, so there's always a path with $MEX$ $2$ or more. If any node has degree 3 or more, you can distribute the labels $0$, $1$, and $2$ to edges incident to this node and distribute the rest of the labels arbitrarily. Otherwise, the tree is a bamboo, and it doesn't actually matter how you label the edges, since there will be a path with $MEX$ $n-1$ anyway.
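A sketch of the labeling (function name and edge-list format are mine): find a node of degree at least 3, give three of its incident edges the labels $0$, $1$, $2$, and label everything else arbitrarily.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Returns a label per edge (edges are 1-indexed pairs of nodes).
vector<int> labelEdges(int n, const vector<pair<int,int>>& edges) {
    vector<vector<int>> inc(n + 1); // indices of edges incident to each node
    for (int i = 0; i < (int)edges.size(); i++) {
        inc[edges[i].first].push_back(i);
        inc[edges[i].second].push_back(i);
    }
    vector<int> label(edges.size(), -1);
    int nxt = 0;
    for (int v = 1; v <= n; v++)
        if (inc[v].size() >= 3) {
            // labels 0, 1, 2 go to three edges sharing this node
            for (int j = 0; j < 3; j++) label[inc[v][j]] = nxt++;
            break;
        }
    for (int& l : label)
        if (l == -1) l = nxt++; // remaining labels in arbitrary order
    return label;
}
```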

### 1325D - Ehab the Xorcist

First, let's look at some special cases. If $u>v$ or $u$ and $v$ have different parities, there's no array. If $u=v=0$, the answer is an empty array. If $u=v \neq 0$, the answer is $[u]$. Now, the length is at least 2. Let $x=\frac{v-u}{2}$. The array $[u,x,x]$ satisfies the conditions, so the length is at most 3. We just need to figure out whether there's a pair of numbers $a$ and $b$ such that $a \oplus b=u$ and $a+b=v$. Notice that $a+b=a \oplus b+2*(a$&$b)$, so we know that $a$&$b=\frac{v-u}{2}=x$ (surprise surprise.) The benefit of getting rid of $a+b$ and looking at $a$&$b$ instead is that we can look at $a$ and $b$ bit by bit. If $x$ has a one in some bit, $a$ and $b$ must both have ones, so $a \oplus b=u$ must have a zero. If $x$ has a zero, there are absolutely no limitations on $u$. So, if there's a bit where both $x$ and $u$ have a one, that is to say if $x$&$u\neq0$, you can't find such $a$ and $b$, and the length will be 3. Otherwise, $x$&$u=0$, which means $x \oplus u=x+u$, so the array $[u+x,x]$ works. Can you see how this array was constructed? We took $[u,x,x]$, which more obviously works, and merged the first 2 elements, benefiting from the fact that $u$&$x=0$.

First AC: MiFaFaOvO
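The full case analysis fits in a few lines (the function name and return convention, $\{-1\}$ for "no array", are mine):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Shortest array with XOR u and sum v; {-1} means impossible, {} is empty.
vector<long long> shortestArray(long long u, long long v) {
    if (u > v || ((v - u) & 1)) return {-1};   // impossible
    if (u == 0 && v == 0) return {};           // empty array
    if (u == v) return {u};                    // length 1
    long long x = (v - u) / 2;
    if ((u & x) == 0) return {u + x, x};       // length 2: u ^ x == u + x here
    return {u, x, x};                          // length 3 always works
}
```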

### 1325E - Ehab's REAL Number Theory Problem

Notice that for each element in the array, if some perfect square divides it, you can divide it by that perfect square, and the problem won't change. Let's define normalizing a number as dividing it by perfect squares until it doesn't contain any. Notice that any number that has 3 different prime divisors has at least 8 divisors, so after normalizing, any element in the array will be $1$, $p$, or $p*q$ for some primes $p$ and $q$. Let's create a graph where the vertices are the prime numbers (and $1$,) and the edges are the elements of the array. For each element, we'll connect $p$ and $q$ (or $p$ and $1$ if it's a prime after normalizing, or $1$ with $1$ if it's $1$ after normalizing.) What's the significance of this graph? Well, if you take any walk from node $p$ to node $q$, multiply the elements on the edges you took, and normalize, the product you get will be $p*q$! That's because every node in the walk will be visited an even number of times, except $p$ and $q$. So the shortest subsequence whose product is a perfect square is just the shortest cycle in this graph!

The shortest cycle in an arbitrary graph takes $O(n^2)$ to compute: you take every node as a source, calculate the bfs tree, and look at the edges that go back to the root to close a cycle. That only finds the shortest cycle if the bfs source is contained in one. The graph in this problem has a special condition: you can't connect 2 nodes that are both greater than $\sqrt{maxA_i}$, because their product would be greater than $maxA_i$. That means every edge, and hence every cycle, in this graph has a node $\le\sqrt{maxA_i}$, so you only need to try these nodes as sources for your bfs.

Bonus task: try to prove the answer can't exceed $2\sqrt{maxAi}$.

### 1325F - Ehab's Last Theorem

Let $sq$ denote $\lceil\sqrt{n}\rceil$.

#### A solution using DFS trees

If you're not familiar with back-edges, I recommend reading this first.

Let's take the DFS tree of our graph. Assume you're currently in node $u$ in the DFS. If $u$ has $sq-1$ or more back-edges, look at the one that connects $u$ to its furthest ancestor. It forms a cycle of length at least $sq$. If $u$ doesn't have that many back-edges, you can add it to the independent set (if none of its neighbors was added.) That way, if you don't find a cycle, every node only blocks at most $sq-1$ other nodes, the ones connected to it by a back-edge, so you'll find an independent set!

First AC: imeimi

#### A solution using degrees

There's a pretty obvious greedy algorithm for finding large independent sets: take the node with the minimal degree, add it to the independent set, remove it and all its neighbors from the graph, and repeat. If at every step the node with the minimal degree has degree $<sq-1$, that algorithm finds our independent set. Otherwise, there's a step where EVERY node has degree at least $sq-1$. For graphs where every node has degree at least $d$, you can always find a cycle of length at least $d+1$. To do that, we'll first find a long path and then close a cycle. Take an arbitrary node as a starting point, and keep trying to extend your path: if some neighbor of the last node is not already on the path, extend the path with it and repeat. Otherwise, all of the last node's neighbors (at least $d$ of them) are on the path. Take the edge to the furthest one and you'll form a cycle of length at least $d+1$!

First AC: imeimi after only 11 minutes!

There are probably other solutions and heuristics. Can you share yours?


By mohammedehab2002, 14 months ago

## 1174A - Ehab Fails to Be Thanos

If all elements in the array are equal, there's no solution. Otherwise, sort the array. The sum of the second half will indeed be greater than that of the first half.

Another solution is to see if they already have different sums. If they do, print the array as it is. Otherwise, find any pair of different elements from different halves and swap them.
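The first solution is a few lines (the helper name and the empty-vector convention for "no solution" are mine):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Sort; the two halves then have different sums unless all elements are
// equal. Returns {} when there's no valid arrangement.
vector<int> rearrange(vector<int> a) {
    sort(a.begin(), a.end());
    int n = a.size();
    long long s1 = 0, s2 = 0;
    for (int i = 0; i < n / 2; i++) s1 += a[i];
    for (int i = n / 2; i < n; i++) s2 += a[i];
    return s1 == s2 ? vector<int>{} : a;
}
```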

## 1174B - Ehab Is an Odd Person

Notice that you can only swap 2 elements if they have different parities. If all elements in the array have the same parity, you can't do any swaps, and the answer will just be like the input. Otherwise, let's prove you can actually swap any pair of elements. Assume you want to swap 2 elements, $a$ and $b$, and they have the same parity. There must be a third element $c$ that has a different parity. Without loss of generality, assume the array is $[a,b,c]$. You'll do the following swaps:

• Swap $a$ and $c$: $[c,b,a]$.
• Swap $b$ and $c$: $[b,c,a]$.
• Swap $a$ and $c$: $[b,a,c]$.

In other words, you'll use $c$ as an intermediate element to swap $a$ and $b$, and it'll return to its original position afterwards! Since you can swap any pair of elements, you can always sort the array, which is the lexicographically smallest permutation.

Time complexity: $O(nlog(n))$.
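The whole solution reduces to one check (the helper name is mine): if both parities appear, any permutation is reachable, so sort; otherwise no swap is possible at all.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Lexicographically smallest array reachable by parity-mixed swaps.
vector<int> lexSmallest(vector<int> a) {
    bool hasEven = false, hasOdd = false;
    for (int x : a) (x % 2 ? hasOdd : hasEven) = true;
    if (hasEven && hasOdd) sort(a.begin(), a.end());
    return a;
}
```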

## 1174C - Ehab and a Special Coloring Problem

Let's call the maximum value in the array $max$. Let the number of primes less than or equal to $n$ be called $p$. Then, $max \ge p$. That's true because a distinct number must be assigned to each prime, since all primes are coprime to each other. Now, if we can construct an answer wherein $max=p$, it'll be optimal. Let's first assign a distinct number to each prime. Then, assign to every composite number the same number as any of its prime divisors. This works because for any pair of numbers $(i,j)$, $i$ is given the same number as one of its divisors and so is $j$, so if they're coprime (they don't share a divisor), they can't be given the same number!

Time complexity: $O(nlog(n))$.
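A sieve-style sketch of the assignment (the function name is mine): each number gets the index of its smallest prime factor among the primes $\le n$.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Returns a vector where entries 2..n hold the assigned values.
vector<int> assignValues(int n) {
    vector<int> val(n + 1, 0);
    int primes = 0;
    for (int i = 2; i <= n; i++) {
        if (val[i] == 0) {         // i is prime: give it a fresh value
            ++primes;
            for (int j = i; j <= n; j += i)
                if (val[j] == 0) val[j] = primes;
        }
    }
    return val;
}
```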

## 1174D - Ehab and the Expected XOR Problem

The main idea is to build the prefix-xor of the array, not the array itself, then build the array from it. Let the prefix-xor array be called $b$. Now, $a_l \oplus a_{l+1} \dots \oplus a_r=b_{l-1} \oplus b_{r}$. Thus, the problem becomes: construct an array such that no pair of numbers has bitwise-xor sum equal to 0 or $x$, and its length should be maximal. Notice that "no pair of numbers has bitwise-xor sum equal to 0" simply means "you can't use the same number twice". If $x \ge 2^n$, no pair of numbers less than $2^n$ will have bitwise-xor sum equal to $x$, so you can just use all the numbers from 1 to $2^n-1$ in any order. Otherwise, you can think of the numbers forming pairs, where each pair consists of 2 numbers with bitwise-xor sum equal to $x$. From any pair, if you add one number to the array, you can't add the other. However, the pairs are independent from each other: your choice in one pair doesn't affect any other pair. Thus, you can just choose either number in any pair and add them in any order you want. After you construct $b$, you can construct $a$ using the formula: $a_i=b_i \oplus b_{i-1}$.

Time complexity: $O(2^n)$.
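The pairing argument can be sketched as follows (the function name is mine): build the prefix-xor array by taking one number from each pair $\{i, i \oplus x\}$, then difference it back into the answer.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Longest array (elements < 2^n) with no subsegment-xor equal to 0 or x.
vector<int> longestArray(int n, int x) {
    vector<int> pref = {0};
    vector<bool> used(1 << n, false);
    used[0] = true;
    if (x < (1 << n)) used[x] = true;          // 0 is taken, so x is banned
    for (int i = 1; i < (1 << n); i++) {
        if (used[i]) continue;
        pref.push_back(i);
        used[i] = true;
        if (x < (1 << n)) used[i ^ x] = true;  // ban i's partner
    }
    vector<int> a;
    for (size_t i = 1; i < pref.size(); i++) a.push_back(pref[i] ^ pref[i - 1]);
    return a;
}
```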

## 1174E - Ehab and the Expected GCD Problem

Let's call the permutations from the statement good. For starters, we'll try to find some characteristics of good permutations. Let's call the first element in a good permutation $s$. Then, $s$ must have the maximal possible number of prime divisors. Also, every time the $gcd$ changes as you move along prefixes, you must drop only one prime divisor from it. That way, we guarantee we have as many distinct $gcd$s as possible. Now, there are 2 important observations concerning $s$:

Observation #1: $s=2^x*3^y$ for some $x$ and $y$. In other words, only $2$ and $3$ can divide $s$. That's because if $s$ has some prime divisor $p$, you can divide it by $p$ and multiply it by $4$. That way, you'll have more prime divisors.

Observation #2: $y \le 1$. That's because if $s=2^x*3^y$, and $y \ge 2$, you can instead replace it with $2^{x+3}*3^{y-2}$ (divide it by $9$ and multiply it by $8$), and you'll have more prime divisors.

Now, we can create $dp[i][x][y]$, the number of ways to fill a good permutation up to index $i$ such that its $gcd$ is $2^x*3^y$. Let $f(x,y)=\lfloor \frac{n}{2^x*3^y} \rfloor$; this is the number of multiples of $2^x*3^y$ less than or equal to $n$. Here are the transitions:

If your permutation is filled until index $i$ and its $gcd$ is $2^x*3^y$, you can do one of the following $3$ things upon choosing $p_{i+1}$:

• Add a multiple of $2^x*3^y$. That way, the $gcd$ won't change. There are $f(x,y)$ numbers you can add, but you already added $i$ of them, so: $dp[i+1][x][y]=dp[i+1][x][y]+dp[i][x][y]*(f(x,y)-i)$.

• Reduce $x$ by $1$. To do that, you can add a multiple of $2^{x-1}*3^y$ that isn't a multiple of $2^x*3^y$, so: $dp[i+1][x-1][y]=dp[i+1][x-1][y]+dp[i][x][y]*(f(x-1,y)-f(x,y))$.

• Reduce $y$ by $1$. To do that, you can add a multiple of $2^x*3^{y-1}$ that isn't a multiple of $2^x*3^y$, so: $dp[i+1][x][y-1]=dp[i+1][x][y-1]+dp[i][x][y]*(f(x,y-1)-f(x,y))$.

As for the base case, let $x=\lfloor log_2(n) \rfloor$. You can always start with $2^x$, so $dp[1][x][0]=1$. Also, if $2^{x-1}*3 \le n$, you can start with it, so $dp[1][x-1][1]=1$. The answer is $dp[n][0][0]$.

Time complexity: $O(nlog(n))$.
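The dp above can be sketched like this (all names are mine, not from a reference solution; verified by hand against small $n$):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Counts permutations of 1..n maximizing distinct prefix gcds, mod MOD.
long long countGood(long long n, long long MOD = 1000000007LL) {
    int X = 0;
    while ((2LL << X) <= n) X++;          // X = floor(log2(n))
    auto f = [&](int x, int y) {          // multiples of 2^x * 3^y up to n
        long long d = (1LL << x);
        for (int t = 0; t < y; t++) d *= 3;
        return n / d;
    };
    // cur[x][y] = number of ways for the current prefix length
    vector<array<long long,2>> cur(X + 1, {0, 0}), nxt(X + 1, {0, 0});
    cur[X][0] = 1;                                           // start with 2^X
    if (X >= 1 && (1LL << (X - 1)) * 3 <= n) cur[X - 1][1] = 1; // or 2^(X-1)*3
    for (long long i = 1; i < n; i++) {
        for (auto& row : nxt) row = {0, 0};
        for (int x = 0; x <= X; x++)
            for (int y = 0; y <= 1; y++) {
                long long ways = cur[x][y];
                if (!ways) continue;
                // keep the gcd: any unused multiple of 2^x * 3^y
                nxt[x][y] = (nxt[x][y] + ways * ((f(x, y) - i) % MOD)) % MOD;
                // drop one factor of 2
                if (x) nxt[x-1][y] = (nxt[x-1][y] + ways * ((f(x-1, y) - f(x, y)) % MOD)) % MOD;
                // drop the factor of 3
                if (y) nxt[x][0] = (nxt[x][0] + ways * ((f(x, 0) - f(x, y)) % MOD)) % MOD;
            }
        swap(cur, nxt);
    }
    return cur[0][0];
}
```

For instance, for $n=3$ the good permutations are $[2,1,3]$, $[2,3,1]$, $[3,1,2]$, and $[3,2,1]$, so the count is 4.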

## 1174F - Ehab and the Big Finale

Let $dep_a$ be the depth of node $a$ and $sz_a$ be the size of the subtree of node $a$. First, we'll query the distance between node 1 and node $x$ to know $dep_x$. The idea in the problem is to maintain a "search scope", some nodes such that $x$ is one of them, and to try to narrow it down with queries. From this point, I'll describe two solutions:

### HLD solution:

Assume that your search scope is the subtree of some node $u$ (initially, $u$=1). How can we narrow it down efficiently? I'll pause here to add some definitions. The heavy child of a node $a$ is the child that has the maximal subtree size. The heavy path of node $a$ is the path that starts with node $a$ and every time moves to the heavy child of the current node. Now back to our algorithm. Let's get the heavy path of node $u$. Assume its other endpoint is node $v$. We know that a prefix of this path contains ancestors of node $x$. Let the deepest node in the path that is an ancestor of node $x$ be node $y$ (the last node in this prefix). I'll now add a drawing to help you visualize the situation.

So, recapping, $u$ is the root of your search scope, $v$ is the endpoint of the heavy path starting from $u$, $x$ is the hidden node, and $y$ the last ancestor of $x$ in the heavy path. Notice that $y$ is $lca(x,v)$. Now, we know that $dist(x,v)=dep_x+dep_v-2*dep_y$. Since we know $dep_x$, and we know $dep_v$, we can query $dist(x,v)$ to find $dep_y$. Since all nodes in the path have different depths, that means we know $y$ itself!

Another way to find y

Now, if $dep_x=dep_y$, $x=y$, so we found the answer. Otherwise, we know, by definition, that $y$ is an ancestor of $x$, so it's safe to use the second query type. Let the answer be node $l$. We can repeat the algorithm with $u=l$! How long will this algorithm take? Note that $l$ can't be the heavy child of $y$ (because $y$ is the last ancestor of $x$ in the heavy path), so $sz_l \le \lfloor\frac{sz_y}{2} \rfloor$, since it's well-known that only the heavy child can break that condition. So with only 2 queries, we managed to cut down at least half of our search scope! So this algorithm does no more than $34$ queries (actually $32$ under these constraints, but that's just a small technicality).

### Centroid decomposition solution:

As I said, assume we have a search scope. Let's get the centroid, $c$, of that search scope. If you don't know, the centroid is a node that, if removed, breaks the tree down into components, each of size at most half the size of the original tree. Now, $c$ may or may not be an ancestor of $x$. How can we determine that? Let's query $dist(c,x)$. $c$ is an ancestor of $x$ if and only if $dep_c+dist(c,x)=dep_x$. Now, if $c$ is an ancestor of $x$, we can safely query the second node on the path between them. Let the answer be $s$, then its component will be the new search scope. What if $c$ isn't an ancestor of $x$? That means node $x$ can't be in the subtree of $c$, so it must be in the component of $c$'s parent. We'll make the component of $c$'s parent the new search scope! Every time, the size of our search scope is, at least, halved, so the solution does at most $36$ queries.


By mohammedehab2002, 14 months ago

Hi!

I'm back with not one, not two, but three contests, although I have no promises about when to expect them....

The first of them, codeforces round #563, will take place on Jun/03/2019 17:05 (Moscow time). It's rated for the second division, but, as usual, first division participants can take part out of competition.

I'm the problemsetter of the round. I'd like to thank KAN for coordinating the round (and his patience .. try coordinating ~20 problems), arsijo for helping with the preparation, Um_nik, opukittpceno_hhr, Aleks5d, Rifai, pllk, ulna, Ivan19981305, and PrianishnikovaRina for testing the round, and MikeMirzayanov for the great codeforces and polygon platforms.

In this round, you'll be given 6 problems and 2 hours to solve them.

UPD: I decided to drop the 3 seconds rule. The scoring distribution is 500-1000-1500-1750-2500-2500. That means you should probably read both E and F :D

Good luck & Have fun!

UPD: here's the editorial.

UPD: congratulations to the winners!


See you in the second round :D


By mohammedehab2002, 20 months ago

## 1088A - Ehab and another construction problem

Well, the constraints allow a brute-force solution, but here's an O(1) solution:

If x = 1, there's no solution. Otherwise, just print x - x%2 and 2.

Time complexity: O(1).

## 1088B - Ehab and subtraction

Let $s$ be the set of numbers in the input (sorted and distinct). In the $i$-th step, $s_i$ is subtracted from all greater or equal elements, and all smaller elements are already $0$. Thus, the answer in the $i$-th step is $s_i - s_{i-1}$ (with $s_0 = 0$).

Time complexity: O(nlog(n)).
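A minimal sketch (the helper name is mine): the answers for the $k$ steps are the gaps between consecutive distinct values, padded with zeros.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Answer for each of the k subtraction steps.
vector<long long> stepAnswers(vector<long long> a, int k) {
    sort(a.begin(), a.end());
    a.erase(unique(a.begin(), a.end()), a.end());
    vector<long long> res;
    long long prev = 0;
    for (long long v : a) {
        if ((int)res.size() == k) break;
        res.push_back(v - prev); // gap to the next distinct value
        prev = v;
    }
    res.resize(k, 0); // once everything is 0, every answer is 0
    return res;
}
```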

## 1088C - Ehab and a 2-operation task

The editorial uses 0-indexing.

Both solutions make $a_i = i$.

#### First solution, n adds and 1 mod

First, let's make $a_i = x \cdot n + i$ (for some $x$). Then, let's mod the whole array with $n$ (making $a_i = i$). If the "add update" changed only one index, we could just add $i + n - (a_i \bmod n)$ to index $i$. The problem is that the update changes a whole prefix: if we make $a_i = x \cdot n + i$ and then update an index $j > i$, $a_i$ will be ruined. Just start from the back of the array!

#### Second solution, 1 add and n mods

Note: for any $a$ and $b$, if $b > a$, then $a \bmod b = a$. Additionally, if $a \ge b > \frac{a}{2}$, then $a \bmod b = a - b$.

Let's add $5 \cdot 10^5$ to the whole array, loop over $a_i$ (in order), and mod prefix $i$ with $a_i - i$. Why does this work? Notice that $a_i \bmod (a_i - i) = a_i - (a_i - i) = i$ (the second note). Also, $a_i$ won't be changed afterwards (the first note).

Time complexity: O(n).

## 1088D - Ehab and another another xor problem

This problem is particularly hard to explain :/ I recommend the simulation.

Let's build $a$ and $b$ bit by bit from the most significant to the least significant (assume the bits found so far are stored in $curA$ and $curB$). Then, at the $i$-th step, $a \oplus curA$ and $b \oplus curB$ have all bits from the most significant down to the $(i+1)$-th set to $0$. Notice that whether $x$ is greater or less than $y$ is judged by the most significant bit in which they differ (the one that has a $1$ is bigger). Let's query with $(curA \oplus 2^i, curB)$ and $(curA, curB \oplus 2^i)$; the numbers compared in these queries can only differ in the $i$-th bit (or a bit less significant). Now, if the results of the queries are different, $a$ and $b$ have the same value in this bit, and this value can be determined by the answers of the respective queries ($1$ if the second query's answer is $1$, $0$ otherwise). If the queries give the same result, $a$ and $b$ must differ in this bit. How do we know which of them has a $1$ and which has a $0$? We know that the greater between them (after setting the processed bits to $0$) has a $1$ and the other has a $0$. The trick is to keep track of the greater between them. Before all queries, we send $(0, 0)$ to know the greater. Every time they differ in a bit, the greater may change. It'll simply change to the answer of the 2 queries we sent! In other words, we know, when we sent the queries, that after making $a$ and $b$ equal in this bit, some other bit became the most significant bit in which they differ. Also, we know who has a $1$ in this bit (the greater in this query). Thus, we'll keep the answer of this query for the future, so when this bit comes, we don't need additional queries.

Simulation for an example

Time complexity: O(log(n)).

## 1088E - Ehab and a component choosing problem

Assume you already chose the components. Let the sum of nodes in the $i$-th component be $b_i$. Then, the expression in the problem is equivalent to $average(b_1, b_2, \ldots, b_k)$. Assume we only bother about the fraction maximization problem and don't care about $k$. Then, it'll always be better to choose the component with the maximum $b_i$ and throw away the rest! This is because of the famous inequality:

$max(b_1, b_2, \ldots, b_k) \ge average(b_1, b_2, \ldots, b_k)$, and the equality only occurs if all $b_i$ are equal!

This means that the maximum value of the fraction is simply the maximum sum of a sub-component in the tree. To calculate it, let's root the tree at node 1, and calculate dp[node], the maximum sum of a sub-component that contains node. Now, I'll put the code, and explain it after.

```cpp
void dfs(int node,int p,bool f)
{
    dp[node]=a[node];
    for (int u:v[node])
    {
        if (u!=p)
        {
            dfs(u,node,f);
            dp[node]+=max(dp[u],0LL); // extend with a child's component only if it helps
        }
    }
    if (f)
        ans=max(ans,dp[node]); // first pass: compute the maximum sub-component sum
    else if (dp[node]==ans)
    {
        dp[node]=0; // second pass: cut this component off and count it
        k++;
    }
}
```


ans denotes the maximum sub-component sum.

First, we call dfs(1, 0, 1). For every node, we calculate the dp of all its children, and for every child $u$, we extend the component of $node$ with the component of $u$ if $dp[u] > 0$, and do nothing otherwise. Now we've solved the first half of our problem, but what about maximizing $k$? Notice that all components you choose must have a sum of weights equal to $ans$ (because the equality occurs if and only if all $b_i$ are equal). You just want to maximize their count. Let's calculate our dp again, this time with $f=0$. Assume $dp[node] = ans$. We have 2 choices: either mark the node and its component as a component in the answer (but then other nodes won't be able to use them, because the components can't overlap), or wait and extend the component. The idea is that there's no reason to wait. If we extend the component with some nodes, they won't change the sum, and they may even have another sub-component with maximal sum that we're merging into our component and wasting! Thus, we'll always go with the first choice, making $dp[node] = 0$ so that its parent can't use it, and increasing $k$ :D

Time complexity: O(n).

## 1088F - Ehab and a weird weight formula

First, let's reduce the problem to ordinary MST. We know that each edge $\{u, v\}$ adds $\lceil \log_2(dist(u, v)) \rceil \cdot \min(a_u, a_v)$ to $w$. In fact, it also adds $1$ to $deg_u$ and $deg_v$. Thus, the problem is ordinary MST on a complete graph where each edge $\{u, v\}$ has weight $(\lceil \log_2(dist(u, v)) \rceil + 1) \cdot \min(a_u, a_v) + \max(a_u, a_v)$!

Let the node with the minimum weight be $m$. Let's root the tree at it.

Lemma: for every node $u$ and a child $v$, $a_v > a_u$. In simpler words, the weights increase as we go down the tree.

Proof: the proof is by contradiction. Assume $a_v \le a_u$. Then, the condition in the problem (that every node has an adjacent node with less weight) isn't satisfied yet for $v$. Therefore, $v$ must have a child $k$ such that $a_k < a_v$. However, the condition isn't satisfied for $k$ either, so $k$ needs another child, and that child needs another child, etc. (the tree would be infinite), which is clearly a contradiction.

From that, we know that the weights decrease as we go up the tree and increase as we go down.

Back to the MST problem. From Kruskal's algorithm, we know that the minimal edge incident to every node will be added to the MST (because the edges are sorted by weight). Let's analyze the minimal edge incident to every node $u$. Let its other end be $v$. Except for node $m$, $v$ will be an ancestor of $u$. Why? Assume we fix the distance part and just want to minimize $a_v$. We'll keep going up the tree (it's never optimal to go down, since the weights will increase) until we reach the desired distance. Now, since the minimal edge incident to every node will be added to the MST (by Kruskal's algorithm), and they're distinct (because, otherwise, you're saying that $u$ is an ancestor of $v$ and $v$ is an ancestor of $u$), THEY ARE THE MST. Now, the problem just reduces to finding the minimal edge incident to every node and summing them up (except for $m$). To do that, we'll fix $\lceil \log_2(dist(u, v)) \rceil$ (let it be $k$), and get the $2^k$-th ancestor with the well-known sparse-table (binary lifting).

Time complexity: O(nlog(n)).


By mohammedehab2002, 20 months ago

Hi!

I'm back with a new contest, a new color, and a new batch of xor problems.

Codeforces round #525, rated for the second division, is taking place on Dec/04/2018 17:35 (Moscow time). As usual, first division participants can take part out of competition.

I'm the problemsetter of the round. I'd like to thank 300iq for the great effort coordinating the round, isaf27, cdkrot, BudAlNik, and gritukan for testing the round, scanhex for translating the statements to Russian, mahmoudbadawy for giving his opinions about the problems, and MikeMirzayanov for the great codeforces and polygon platforms.

Like my previous round, you'll be given 6 problems and 2 hours to solve them.

After the contest, I'll be on the community Discord server to discuss the problems.

UPD: the scoring distribution will be 500-1000-1500-2000-2500-3000.

UPD: something wrong happened and the editorial was deleted. I'll post it as soon as possible :(

UPD: the editorial has been re-written.

Good luck & Have fun!

UPD: congratulations to the winners!

Div.1+Div.2:-

Div.2:-


By mohammedehab2002, history, 2 years ago

#### 959A - Mahmoud and Ehab and the even-odd game

It's easy to see that if $n = 0$, the player to move loses. If $n$ is even, Mahmoud will choose $a = n$ and win. Otherwise, Mahmoud will have to choose $a < n$. $n$ is odd and $a$ is even, so $n - a$ is odd. Ehab will then subtract it all and win. Therefore, if $n$ is even, Mahmoud wins; otherwise, Ehab wins. $n = 1$ doesn't follow our proof, yet Ehab still wins at it because Mahmoud won't even be able to choose $a$.

Time complexity : $O(1)$.
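The parity rule can be cross-checked against a brute-force game search; the function names below are hypothetical, and the rules are as in the statement (Mahmoud subtracts an even $a$, Ehab an odd one, a player who can't move loses).

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def player_to_move_wins(n, mahmouds_turn):
    # Mahmoud may subtract any even a in [1, n], Ehab any odd a in [1, n];
    # the player to move wins iff some move sends the opponent to a losing state.
    moves = [a for a in range(1, n + 1) if a % 2 == (0 if mahmouds_turn else 1)]
    return any(not player_to_move_wins(n - a, not mahmouds_turn) for a in moves)

def winner(n):
    # the rule derived above: Mahmoud wins iff n is even (and n >= 2)
    return "Mahmoud" if n % 2 == 0 and n >= 2 else "Ehab"
```

The brute force agrees with the parity rule for every small $n$.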

Bonus task : If there were multiple integers, and each player can choose which integer to subtract from, who will win?

Solution

#### 959B - Mahmoud and Ehab and the message

It's easy to see that, for every word, the minimum cost of sending it is the minimum cost of sending any word in its group. For each group, we'll maintain the minimum cost of sending a word in it (let it be $cost_i$), and for each word, we'll maintain its group (let it be $group_i$). For every word $i$ in the message, we'll add $cost_{group_i}$ to the answer.

Time complexity : $O((n + m) \log(n) \cdot len)$.
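The bookkeeping is tiny once the groups are known; here is a sketch with assumed data shapes (dicts mapping each word to its group id and its cost):

```python
def min_message_cost(groups, costs, message):
    # groups: word -> group id; costs: word -> cost of sending that word.
    # First compute the cheapest cost inside each group.
    group_min = {}
    for word, g in groups.items():
        group_min[g] = min(group_min.get(g, float("inf")), costs[word])
    # Each message word can be replaced by the cheapest word in its group.
    return sum(group_min[groups[word]] for word in message)
```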

Bonus task : Try to solve the problem if the input was given as pairs of words that are synonyms (assuming synonymy is transitive).

Solution

### The first tree

For $n \ge 6$, you can connect nodes $2$, $3$, and $4$ to node $1$ and connect the rest of the nodes to node $2$. The real vertex cover is the set $\{1, 2\}$ of size $2$, while the found vertex cover will have size $\min(3, n - 3)$. As $n \ge 6$, that value will be $3$, which is incorrect.

For $n < 6$, the answer doesn't exist.

### The second tree

There are multiple ways to construct it. One easy way is the star tree: connect all the nodes to node $1$. The real and the found vertex cover will simply be $\{1\}$. Another easy way is a path: connect node $i$ to node $i + 1$ for all $1 \le i < n$. The real and the found vertex cover both have size $\lfloor \frac{n}{2} \rfloor$.

Time complexity : $O(n)$.
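Both constructions are easy to emit as edge lists; a sketch with hypothetical function names (1-indexed nodes, as in the editorial):

```python
def first_tree(n):
    # valid for n >= 6: nodes 2, 3, 4 hang off node 1; the rest hang off node 2
    assert n >= 6
    return [(1, v) for v in (2, 3, 4)] + [(2, v) for v in range(5, n + 1)]

def second_tree(n):
    # the star: every other node connected to node 1
    return [(1, v) for v in range(2, n + 1)]
```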

Bonus task : Try to find an elegant proof that the answer for n < 6 doesn't exist for the first tree.

Solution

#### 959D - Mahmoud and Ehab and another array construction task

Common things : let's call a number "ok" if it could be inserted into array $b$ as a new element without breaking any of the conditions (i.e. it should be coprime with all the previously inserted elements). Let's call the maximum number that could be inserted in the worst case $mx$. For each integer from $2$ to $mx$, we'll precompute its prime divisors with a sieve.

### First solution by me

Create an std::set that contains all the numbers from $2$ to $mx$. That set holds all the "ok" numbers and will be updated each time we insert a new element into array $b$. We'll insert the elements into array $b$ greedily one by one. At index $i$, let $cur$ be the minimum number in the set greater than or equal to $a_i$, i.e. std::lower_bound(a[i]). If $cur$ isn't equal to $a_i$, the lexicographically-greater condition is satisfied and we're no longer restricted to $a$, so, starting from index $i + 1$, we'll greedily choose $cur$ to be the first (minimum) number in the set instead. We'll insert $cur$ into $b$. Each time, we'll remove all the integers that aren't coprime with $cur$ from the set. To do that, we'll loop over the multiples of its precomputed prime divisors and remove them from the set.
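As a condensed sketch of this first solution (in Python rather than the C++ std::set it describes; `MX` and the function name are assumptions, and `MX` must be large enough that an "ok" candidate always exists):

```python
import bisect

MX = 2000  # assumed bound on inserted values for this small sketch

def build_b(a):
    # smallest-prime-factor sieve, used to factor cur quickly
    spf = list(range(MX + 1))
    for i in range(2, int(MX ** 0.5) + 1):
        if spf[i] == i:
            for j in range(i * i, MX + 1, i):
                if spf[j] == j:
                    spf[j] = i
    ok = list(range(2, MX + 1))        # sorted "ok" (still-coprime) candidates
    b, free = [], False
    for x in a:
        # lower_bound(a[i]) while restricted; the set minimum once free
        cur = ok[0] if free else ok[bisect.bisect_left(ok, x)]
        if cur != x:
            free = True                # b became lexicographically greater
        b.append(cur)
        # drop every candidate sharing a prime factor with cur
        primes, t = set(), cur
        while t > 1:
            primes.add(spf[t])
            t //= spf[t]
        ok = [v for v in ok if all(v % p for p in primes)]
    return b
```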

### Second solution by KAN

Let $used[i]$ indicate whether some prime is already a factor of one of the elements in $b$ (so we shouldn't use it). Each time we insert an element into $b$, we update $used$ by iterating over its precomputed prime divisors and marking them all used. We'll start inserting elements into $b$ greedily one by one. To check whether a number is "ok", we'll iterate over its precomputed prime divisors and check that none of them are used. While $a_i$ is "ok", we'll keep inserting it into $b$. Eventually we'll reach an integer that isn't "ok". In this case, we'll iterate naively until we find an integer that is "ok" and insert it into $b$. The lexicographically-greater condition is now satisfied and we can insert whatever we want (no restriction to $a$). Notice that, starting from now, $b$ will be sorted in increasing order; that's because if it's not, we can sort it and reach a better answer without breaking any of the conditions. The naive solution is to loop starting from $2$ until we find an "ok" integer for each $i$. However, as the array is sorted, we can loop starting from $2$ the first time and then loop starting from $b_{i-1} + 1$, saving a lot of loops that we're sure will fail!

Time complexity : $O(mx \log(mx))$. $mx$ has an order of $O(n \log(n))$ because the $n$-th prime is expected to be $O(n \log(n))$ and the number of primes less than $n$ is expected to be $\frac{n}{\ln(n)}$.

#### 959E - Mahmoud and Ehab and the xor-MST

For convenience, let $n$ be the label of the last node, not the number of nodes (i.e. $n = n_{input} - 1$).

Denote $lsb(x) = x \& (-x)$, the value of the least significant bit set to $1$ in $x$. The answer is $\sum_{u=1}^{n} lsb(u)$, which means that node $u$ is connected to node $u \oplus lsb(u)$ for all $1 \le u \le n$ (node $u$ is connected to node $u$ without that bit).

Formal proof

Now let's see how to calculate that quickly.

### Math solution

Let $f(x)$ be the number of integers $y$ such that $1 \le y \le n$ and $lsb(y) = x$; then the answer is $\sum_{x} x \cdot f(x)$. $f(i) > 0$ if and only if $i$ is a power of $2$, so this sum is equivalent to $\sum_{k=0}^{\lfloor \log_2(n) \rfloor} 2^k \cdot f(2^k)$. Basically, the first number $y$ such that $lsb(y) = x$ is $x$ itself, and then the period is $2x$. Take $4$ to see that: the integers $y$ such that $lsb(y) = 4$ are $\{4, 12, 20, 28, \ldots\}$. Therefore, $f(x) = \lfloor \frac{n - x}{2x} \rfloor + 1$ for $1 \le x \le n$ where $x$ is a power of $2$.
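With $f(x) = \lfloor \frac{n - x}{2x} \rfloor + 1$ as derived here, the whole answer fits in a few lines; `mst_weight` is a hypothetical name, and $n$ is the label of the last node as above.

```python
def mst_weight(n):
    # sum over powers of two x <= n of x * f(x), f(x) = (n - x) // (2x) + 1
    total, x = 0, 1
    while x <= n:
        total += x * ((n - x) // (2 * x) + 1)
        x *= 2
    return total
```

For example, with nodes $0..3$ the MST connects $1{\to}0$, $2{\to}0$, $3{\to}2$ with total weight $4$.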

### DP solution

Let's see how the sequence of $lsb(x)$ is constructed. We start with $\{1\}$, and at the $i$-th step, we copy the sequence, concatenate it to itself, and add $2^i$ in the middle.

Let $f(x) = \sum_{i=1}^{x} lsb(i)$. Let $dp[i] = f(2^i - 1)$.

You can see from the pattern above that $dp[i] = 2 \cdot dp[i - 1] + 2^{i - 1}$ for $i > 1$ (with the base case $dp[1] = 1$). Let's find a recurrence for $f(x)$. Denote $msb(x)$ as the value of the most significant bit set to $1$. The sum can be split into $2$ parts: the sum from $1$ to $msb(x)$ and the sum from $msb(x) + 1$ to $x$. You can see that in the second sum, $lsb(i)$ can never be equal to $msb(x)$, so we can remove that bit safely without affecting the answer. Removing that bit is like xoring with $msb(x)$, which makes the sum start at $1$ and end at $x \oplus msb(x)$, so it equals $f(x \oplus msb(x))$. Therefore, $f(x) = f(msb(x)) + f(x \oplus msb(x))$. The first part can be calculated with the help of our dp because $msb(x)$ is a power of $2$ ($f(2^i) = dp[i] + 2^i$), and the second part goes recursively. Basically, for each $i$ such that the $i$-th bit of $n$ is set to $1$, we add $dp[i] + 2^i$ to the answer.

Time complexity : $O(\log(n))$.

#### 959F - Mahmoud and Ehab and yet another xor task

Let's solve a simpler version of the problem. Assume the queries only ask you whether the answer is $0$ or positive, instead of the exact answer. We can answer all the queries offline. We can keep a set containing all the possible xors of subsequences and update it for each prefix. Initially, the set contains only $0$ (the xor of the empty subsequence). For each index $i$ in the array, we can update the set by adding $x \oplus a_i$ to the set for all $x$ in the set. The intuition behind it is that there's a subsequence with xor equal to $x$ (as $x$ is in the set), and if we add $a_i$ to it, its xor will be $x \oplus a_i$, so we should add it to the set. That's a slow way to update the set, but we have some observations:-

1. If $x$ is in the set and $y$ is in the set, $x \oplus y$ must be in the set. To see that, let $x$ be the xor of some elements and $y$ be the xor of other elements. $x \oplus y$ must be the xor of the non-common elements (because the common elements annihilate), so it must be in the set.
2. If $x$ is in the set and $y$ isn't in the set, $x \oplus y$ can't be in the set. This can be proved by contradiction. Assume $x \oplus y$ is in the set; then, by the first observation, $x \oplus (x \oplus y)$ must be in the set. This is equivalent to $y$, which we said isn't in the set. Therefore, $x \oplus y$ isn't in the set.

Basically, if $a_i$ is already in the set, we do nothing, because updating the set would do nothing but extra operations according to the first observation; and if $a_i$ isn't in the set, we don't waste a single operation without extending the set! That makes the total complexity $O(n + maxA_i)$ or $O((n + maxA_i)\log(maxA_i))$, depending on the implementation, because each element is added to the set exactly once.
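A minimal sketch of this set-growing step (Python set instead of the array/vector the editorial suggests; the function name is hypothetical):

```python
def xor_closure(a):
    # all xors achievable by subsequences of a; the set only grows
    # when a new element lies outside it
    s = {0}
    for x in a:
        if x not in s:
            s |= {x ^ y for y in s}
    return s
```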

To solve our problem, let's see the naive dynamic programming solution. Let $dp[i][x]$ be the number of subsequences of the first $i$ elements with xor $x$. Then $dp[i][x] = dp[i - 1][x] + dp[i - 1][x \oplus a_i]$. The intuition behind it is exactly the same as the intuition behind the set construction. Let's prove that $dp[i][x]$ is equal for all $x$ belonging to the set! Assume this holds for $i - 1$ and see what happens in the transition to $i$ (notice that it holds for $i = 0$). Let $j$ be the value of $dp[i - 1][x]$ for all $x$ belonging to the set. If $a_i$ is in the set and $x$ is in the set, $x \oplus a_i$ is in the set (observation #1). Therefore, $dp[i - 1][x] = j$ and $dp[i - 1][x \oplus a_i] = j$, which makes $dp[i][x] = 2j$ for all $x$ in the set. Notice that the set doesn't change, so $dp[i][x] = 0$ for all $x$ that aren't in the set. If $a_i$ isn't in the set, we have $3$ cases for $x$. If $x$ is in the set, $x \oplus a_i$ isn't in the set (observation #2); therefore, $dp[i][x] = j + 0 = j$. If $x$ is to be added to the set in this step, $x \oplus a_i$ is in the set; therefore, $dp[i][x] = 0 + j = j$. Otherwise, $dp[i][x] = 0$. To summarize, we'll maintain the set. For each element, if it's in the set, we'll just multiply $j$ by $2$; otherwise, we'll update the set. We'll then answer all the queries for that prefix (saying $0$ or $j$) depending on whether $x$ is in the set.

Time complexity : $O(n + maxA_i)$ if you implement the "set" with a vector and an array.

Bonus task : can you make this solution work online? Can you do that with $maxA_i < 2^{30}$?

Solution


By mohammedehab2002, 2 years ago

Hello codeforces!

I'm glad to announce that codeforces round #473 for the second division will take place on Tuesday April 3rd 16:05 UTC. As usual, first division participants can take part out of competition.

I'm the problemsetter and the editorialist of this round. I'd like to thank mahmoudbadawy for creating the testdata and testing the round, neckbosov, Livace, demon1999, and gritukan for testing the round, KAN and Ahmad_Elsagheer for giving their great opinions and thoughts and helping in round preparation, arsor for helping translate the problems, and MikeMirzayanov for the great codeforces and polygon platforms.

You'll be given 6 problems and 2 hours to solve them.

UPD : the scoring distribution will be 500-1000-1250-1750-2000-2500.

Good luck and Have fun!

UPD: congratulations to the winners!

Div.1:-

Div.2:-


By mohammedehab2002, history, 3 years ago

I want to change the registration time for a contest in a private group, or register all the people in the group. Does anybody know how to do this?


By mohammedehab2002, history, 3 years ago

862A - Mahmoud and Ehab and the MEX

One can see that in the final set all the elements less than $x$ should exist, $x$ shouldn't exist, and any element greater than $x$ doesn't matter. So we'll count the number of elements less than $x$ that don't exist in the initial set and add that to the answer. If $x$ exists, we'll add $1$ to the answer because $x$ should be removed.

Time complexity : $O(n + x)$.
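The counting above is a one-liner plus a membership check; a sketch with a hypothetical function name:

```python
def min_operations(nums, x):
    s = set(nums)
    ops = sum(1 for v in range(x) if v not in s)   # add every missing value < x
    if x in s:
        ops += 1                                   # and remove x itself
    return ops
```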

Solution link (me) : https://pastebin.com/ALfcu8Ab .

862B - Mahmoud and Ehab and the bipartiteness

The tree itself is bipartite, so we can run a dfs to partition the tree into the $2$ sets (called bicoloring). We can't add an edge between any $2$ nodes in the same set, and we can add an edge between every $2$ nodes in different sets. So let the number of nodes in the left set be $l$ and the number of nodes in the right set be $r$. The maximum number of edges that can exist is $l \cdot r$, but $n - 1$ edges already exist, so the maximum number of edges to be added is $l \cdot r - (n - 1)$.

Time complexity : $O(n)$.
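A sketch of the bicoloring count (BFS instead of dfs, which computes the same partition; names are hypothetical, nodes are 1-indexed):

```python
from collections import deque

def max_added_edges(n, edges):
    adj = [[] for _ in range(n + 1)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    color = [-1] * (n + 1)
    color[1] = 0
    q = deque([1])
    while q:                                # 2-color the tree from node 1
        u = q.popleft()
        for v in adj[u]:
            if color[v] == -1:
                color[v] = color[u] ^ 1
                q.append(v)
    l = sum(1 for u in range(1, n + 1) if color[u] == 0)
    return l * (n - l) - (n - 1)            # l*r minus the existing edges
```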

Solution link (me) : https://pastebin.com/w3bF7gKS .

862C - Mahmoud and Ehab and the xor

$n = 2$, $x = 0$ is the only case with answer "NO".

Let $pw = 2^{17}$.

First print $1, 2, \ldots, n - 3$ (the first $n - 3$ positive integers), and let their bitwise-xor sum be $y$. If $x = y$, you can add $pw$, $2 \cdot pw$, and $pw \oplus (2 \cdot pw)$; otherwise, you can add $0$, $pw$, and $pw \oplus x \oplus y$. We handled the case $x = y$ differently because if we added $0$, $pw$, and $pw \oplus x \oplus y$ in this case, it would be like adding $0$, $pw$, and $pw$; $pw$ appears twice, so we'd get a wrong answer.

Handle $n = 1$ (print $x$) and $n = 2$ (print $0$ and $x$) separately.
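The whole construction can be sketched as follows (hypothetical function name; `None` stands for the "NO" case, and $pw = 2^{17}$ as above):

```python
def construct_set(n, x):
    if n == 1:
        return [x]
    if n == 2:
        return None if x == 0 else [0, x]
    pw = 1 << 17
    res = list(range(1, n - 2))            # the first n - 3 positive integers
    y = 0
    for v in res:
        y ^= v
    if x == y:
        # three distinct numbers >= pw whose xor is 0
        res += [pw, 2 * pw, pw ^ (2 * pw)]
    else:
        # xor of the three added numbers is x ^ y, fixing the total to x
        res += [0, pw, pw ^ x ^ y]
    return res
```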

862D - Mahmoud and Ehab and the binary string

In the editorial, we suppose that the answer to a query is the number of correctly guessed positions, which is equal to $n$ minus the Hamming distance. The solutions in this editorial consider the answer of a query as $n$ minus the real answer, for convenience.

Common things : let $zero(l, r)$ be a function that returns the number of zeros in the interval $[l; r]$ minus the number of ones in it. We can find it in one query after a preprocessing query. The preprocessing query is $1111\ldots$; let its answer be stored in $all$. If we make a query with a string full of ones except for the interval $[l; r]$, which is full of zeros, and this query's answer is $cur$, then $zero(l, r) = cur - all$. That's because $all$ is the number of ones in the interval $[l; r]$ plus some trash, and $cur$ is the number of zeros in the interval plus the same trash.

Let's have a searching interval, initially $[1; n]$ (the whole string), and repeat the following until we reach our goal. Let $mid = (l + r) / 2$, and query to get $zero(l, mid)$. If it's equal to $mid - l + 1$, this interval is full of zeros, so we can print any index in it as the index with value $0$ and continue searching for an index with value $1$ in the interval $[mid + 1; r]$. If it's equal to $l - mid - 1$, this interval is full of ones, so we can print any index in it as the index with value $1$ and continue searching for a $0$ in the interval $[mid + 1; r]$. Otherwise, the interval contains both values, so we can continue searching for both in the interval $[l; mid]$. The searching interval's length is halved in every case, so we perform $O(\log(n))$ queries.

#### Second solution by me

Let's send $1111\ldots$ and let the answer be $ans1$; let's send $0111\ldots$ and let the answer be $ans0$. We now know the value at the first index ($1$ if $ans1 > ans0$, $0$ otherwise). We can then binary search for the first index where the non-found value exists: binary search on the first value $x$ where $zero(2, x) \cdot sign \ne x - 1$, where $sign$ is $1$ if the non-found bit value is $0$ and $-1$ otherwise.

Solution link (me) : https://pastebin.com/Bc6q7TKv .

862E - Mahmoud and Ehab and the function

Let's write f(j) in another way:-

Now we have $2$ sums. The first one is constant (doesn't depend on $j$). For the second sum, we can calculate all its possible values using the sliding window technique. Now we want a data structure that takes the value of the first sum and chooses the best second sum among all the choices.

Observation: we don't have to try all the possible values of the second sum to minimize the expression. If the first sum is $c$, we can try only the least value greater than $-c$ and the greatest value less than $-c$ ($-c$, not $c$, because we are minimizing $c + second$, not $c - second$), because the absolute value means the distance between the two values on the number line; any other value will be further than at least one of the chosen values. To do this, we can keep all the possible values sorted and try the elements numbered lower_bound(-c) and lower_bound(-c)-1, choosing the better (in short, we're trying only the values close to $-c$).

Now we have a data structure that, given the value of the first sum, gets us the minimum value of the expression in $O(\log(n))$. Next, we want to keep track of the value of the first sum.

Let the initial value be $c$. In each update, if the length of the updated interval is even, the sum won't change because $x$ will be added as many times as it's subtracted. Otherwise, $x$ will be added to $c$ or subtracted from $c$ depending on the parity of $l$ (the left bound of the interval).

Time complexity : $O(n + (m + q)\log(m))$.

Solution link (me) : https://pastebin.com/u828DjcS .

862F - Mahmoud and Ehab and the final stage

First, let's get rid of the LCP part.

Observation: $LCP(s_l, \ldots, s_r) = \min\limits_{i=l}^{r-1} LCP(s_i, s_{i+1})$. That lets us transform the LCP part into a minimization part by making an array $lcp$ where $lcp_i = LCP(s_i, s_{i+1})$. You can calculate it naively, and when an update happens at index $a$, you should update $lcp_a$ (if it exists) and $lcp_{a-1}$ (if it exists) naively.

Now the problem reduces to finding $a \le l \le r \le b$ that maximizes the value:-

$(r - l + 1) \cdot \min\limits_{i=l}^{r-1} lcp_i$. If we have a histogram where the $i$-th column has height $lcp_i$, then the size of the largest rectangle that fits in the columns from $l$ to $r - 1$ is $(r - l) \cdot \min\limits_{i=l}^{r-1} lcp_i$. That's close to our formula, not the same, but it's not a problem (you'll see how to fix it later). So, to get rid of the finding-$l$-and-$r$ part, we can make that histogram, and the answer for a query will be the largest rectangle in the subhistogram that contains the columns from $a$ to $b - 1$. One way to solve it is to try some heights $h$, see the maximum width $w$ we can achieve if $h$ were the height, and maximize with $h \cdot w$. To fix the slight difference in formulas, we'll just maximize with $h \cdot (w + 1)$!

Let $BU$ be a value that we'll choose later. We have $2$ cases for our largest rectangle's height $h$: either $h \le BU$ or $h > BU$. We will solve both problems separately.

For $h \le BU$, we can maintain $BU$ segment trees. Segment tree number $i$ has $1$ at index $x$ if $lcp_x \ge i$ and $0$ otherwise. When we query, it should get us the longest subsegment of ones in the query range. Let's see what we need for our merging operation. If we want the answer for the longest subsegment of ones in a range $[l; r]$, let $mid = (l + r) / 2$; then the answer is the maximum of the answer of $[l; mid]$, the answer of $[mid + 1; r]$, and the maximum suffix of ones in $[l; mid]$ added to the maximum prefix of ones in $[mid + 1; r]$. So we need to keep all this information in our node, and also the length of the interval. As it's a well-known problem, I won't go into more detail. Back to our problem: we can loop over all $h \le BU$. Let the answer for the query on range $[a; b - 1]$ in segment tree number $h$ be $w$. The maximum width of a rectangle of height $h$ in this range is $w$, and we'll maximize our answer with $h \cdot (w + 1)$.

For $h > BU$, let's call a column of height greater than $BU$ big. The heights we'll try are the heights of the big columns in the range; we don't have to try all the heights greater than $BU$. There are at most $\frac{tot}{BU}$ big columns (where $tot$ is the total length of the strings in the input). Let's keep them in a set. When an update happens, you should add the column to the set or remove it depending on its new height; the set's size can't exceed $\frac{tot}{BU}$ now. Let's see how to answer a query. Loop over the big columns in the range $[a; b - 1]$ only. If $2$ of them aren't consecutive, then the column after the first can't be big, and neither can the column before the second; that's because if it were big, it would be in our set. So we can use this observation: make a new histogram with only the big columns in the range, put a column with height $0$ between any non-consecutive two, and get the largest rectangle in this histogram, for example with the well-known stack method. The stack method will get us the maximum width $w$ we can achieve for a rectangle containing column number $i$, and we'll maximize with $lcp_i \cdot (w + 1)$.

Also, the answer for our main formula can be an interval of length one, which none of the above covers, so you should maintain another segment tree that gets the maximum length of a string in a range.

Maximize over all that we got, and you have the answer. Now it's time to choose $BU$; it's optimal in time to choose $BU$ near (reason in tfg's comment below).

Optimization: the longest-subsegment-of-ones problem is solved by $BU$ segment trees, each with $4$ integers per node. You can cut them down to $2$ integers (max prefix and suffix of ones) and make only one extra segment tree that holds the rest of the integers. That would divide the memory by $2$.

Time complexity :

Thanks to gritukan for making it harder and more interesting.

Solution link (gritukan) : https://pastebin.com/vQ4RJqh0 .


By mohammedehab2002, history, 4 years ago

766A - Mahmoud and Longest Uncommon Subsequence

If the strings are the same, any subsequence of $a$ is indeed a subsequence of $b$, so the answer is "-1". Otherwise, the longer string can't be a subsequence of the other (if they are equal in length and aren't the same, neither can be a subsequence of the other), so the answer is the maximum of their lengths.

Time complexity : $O(|a| + |b|)$.

Problem author : me.

Solution author : me.

766B - Mahmoud and a Triangle

#### First solution :-

Let $x$, $y$, and $z$ be the lengths of $3$ line segments such that $x \le y \le z$. If they can't form a non-degenerate triangle, line segments of lengths $x - 1$, $y$, and $z$, or of lengths $x$, $y$, and $z + 1$, can't form one either, so we don't need to try all the combinations. If we try $y$ as the middle one, we need to try the maximum $x$ that is less than or equal to $y$ and the minimum $z$ that is greater than or equal to $y$. The easiest way to do so is to sort the line segments and try every $3$ consecutive ones.

Time complexity : $O(n \log(n))$.
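The first solution fits in a few lines; a sketch with a hypothetical function name (a triple $(a, b, c)$ with $a \le b \le c$ is non-degenerate iff $a + b > c$):

```python
def can_form_triangle(lengths):
    a = sorted(lengths)
    # it suffices to check consecutive triples after sorting
    return any(a[i] + a[i + 1] > a[i + 2] for i in range(len(a) - 2))
```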

#### Second solution :-

Depending on the note from the first solution, if we try to generate a sequence such that, after sorting, every $3$ consecutive line segments form a degenerate triangle, it will be $1, 1, 2, 3, 5, 8, 13, \ldots$, which is the Fibonacci sequence. Fibonacci is a fast-growing sequence; $fib(45) = 1134903170$. Notice that Fibonacci maximizes $n$ with "NO" as the answer, which means the answer is indeed "YES" for $n \ge 45$. For $n < 45$, you can do the naive $O(n^3)$ solution or the first solution.

Let $x$ be the number that satisfies these inequalities:-

$fib(x) \le maxA_i$.

$fib(x + 1) > maxA_i$.

Time complexity : $O(x^3)$ or $O(x \log(x))$.

Problem author : me.

Solutions author : me.

766C - Mahmoud and a Message

Let $dp[i]$ be the number of ways to split the prefix of $s$ ending at index $i$ into substrings that fulfill the conditions. Let it be 1-indexed. Our base case is $dp[0] = 1$, and our answer is $dp[n]$. Now let's calculate it for every $i$. Let $l$ be the minimum possible index such that the substring from $l$ to $i$ satisfies the condition, and let $x$ be a moving pointer. At the beginning, $x = i - 1$, and it decreases. Every time we decrease $x$, we update $l$ depending on the current character: $l = \max(l, i - a_{s[x]})$. While $x$ is greater than or equal to $l$, we add $dp[x]$ to $dp[i]$. To find the longest substring, find the maximum $i - x$. To find the minimum number of substrings, there is an easy greedy solution: find the longest valid prefix, delete it, and repeat until the string is empty; the number of times this operation is repeated is the answer. Or see the dynamic programming solution in the code.

Time complexity : $O(n^2)$.
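A sketch of this dp (hypothetical function name; `a` maps each of the $26$ letters to its limit, the inner loop walks the last piece $s[j..i-1]$ leftwards while it stays valid, and reachability is tracked via the minimum-pieces array):

```python
def split_message(s, a):
    # returns (#splittings mod 1e9+7, longest substring used, min #substrings)
    n = len(s)
    MOD = 10 ** 9 + 7
    INF = float("inf")
    dp = [0] * (n + 1)
    dp[0] = 1
    longest = [0] * (n + 1)
    pieces = [INF] * (n + 1)
    pieces[0] = 0
    for i in range(1, n + 1):
        limit = INF
        for j in range(i - 1, -1, -1):          # last piece is s[j:i]
            limit = min(limit, a[ord(s[j]) - ord("a")])
            if i - j > limit:                   # piece too long for some letter
                break
            if pieces[j] < INF:                 # prefix s[0:j] is splittable
                dp[i] = (dp[i] + dp[j]) % MOD
                longest[i] = max(longest[i], longest[j], i - j)
                pieces[i] = min(pieces[i], pieces[j] + 1)
    return dp[n], longest[n], pieces[n]
```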

Try to find an $O(n)$ solution (I'll post a hard version of some problems on this blog soon).

Problem authors : me and mahmoudbadawy.

Solution authors : me and mahmoudbadawy.

766D - Mahmoud and a Dictionary

Let's build a graph containing the words. For every relation in the input, add a new edge with weight $0$ if they are equal and $1$ if they are opposites. If adding the edge doesn't make the graph cyclic, the relation is valid; otherwise, it may be valid or invalid, so we'll answer those offline. Check whether adding the edge makes the graph cyclic using union-find, like Kruskal's algorithm, and suspend answering the relations that would make it cyclic. Now we have a forest of trees. Let $cum[i]$ be the xor of the weights on the edges on the path from the root of the component of node $i$ to node $i$; calculate it using dfs. To find the relation between $2$ words $u$ and $v$, check whether they are in the same component using union-find. If they aren't, the answer is $3$; otherwise, the answer is $(cum[u] \oplus cum[v]) + 1$. Now, to answer the suspended relations, find the relation between the $2$ words and check whether it's the same as the input relation. Then answer the queries.

Time complexity : $O((n + m + q)\log(n) \cdot maxL)$ where $maxL$ is the length of the longest string, considering that union-find works in constant time.
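The union-find part can be sketched as a DSU carrying xor weights (an alternative to the separate-dfs approach above, folding $cum$ into the structure; the class name and the $0/1$ encoding, $0$ = synonyms, $1$ = antonyms, are assumptions):

```python
class ParityDSU:
    def __init__(self, n):
        self.par = list(range(n))
        self.rel = [0] * n          # parity from each node to its parent

    def find(self, u):
        if self.par[u] == u:
            return u
        p = self.par[u]
        root = self.find(p)
        self.rel[u] ^= self.rel[p]  # rel[p] is now the parity from p to root
        self.par[u] = root          # path compression
        return root

    def add(self, u, v, w):
        # w: 0 = synonyms, 1 = antonyms; returns False for an invalid relation
        ru, rv = self.find(u), self.find(v)
        if ru == rv:
            return (self.rel[u] ^ self.rel[v]) == w
        self.par[ru] = rv
        self.rel[ru] = self.rel[u] ^ self.rel[v] ^ w
        return True

    def relation(self, u, v):
        # 1 = synonyms, 2 = antonyms, 3 = unknown (different components)
        if self.find(u) != self.find(v):
            return 3
        return (self.rel[u] ^ self.rel[v]) + 1
```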

Solution author : me.

Wait for a hard version of this problem.

766E - Mahmoud and a xor trip

If we have an array $ans[i]$ representing the number of paths that make the $i$-th bit set to $1$, our answer will be $\sum_{i} ans[i] \cdot 2^i$.

Let $arr[i][x]$ be the binary value of the $x$-th bit of the number attached to node $i$ (just to make work easier).

There are $2$ types of paths from node $u$ to node $v$, where $u$ is less in depth than or equal to $v$: paths going down, which are paths with $lca(u, v) = u$, and other paths. Let's root the tree at node $1$ and dfs. Let $dp[i][x][j]$ be the number of paths going down from node $i$ that make the $x$-th bit's value equal to $j$. A path going down from node $i$ is a path going down from a child of $i$ with node $i$ concatenated to it, so let's calculate our dp. A path that isn't going down is a concatenation of $2$ paths going down from $lca(u, v)$. Now we can calculate $ans$. See the code for the formulas.

Time complexity : $O(n \log(a_i))$.

Problem author : me.

Solution author : me.