### mohamedeltair's blog

By mohamedeltair, history, 7 days ago

In the successive shortest path algorithm, in every iteration we find the path with the minimum cost (considering the cost of sending one unit of flow along it), and then we send the maximum flow that we can along this path.

My question is: why is it correct to send the maximum flow that we can along the found least-cost path? Imagine a scenario where we find the least-cost path and send one unit of flow along it. This may cause new paths to appear (through edges which previously had no residual capacity but now do). Can't one of these new paths be the new least-cost path, so that we should have sent the next unit of flow along it, not along the previously found least-cost path?
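For concreteness, here is a minimal sketch of the algorithm under discussion: successive shortest paths with Bellman-Ford (residual edges can carry negative costs), pushing the bottleneck flow along each found path. All names are mine; this is an illustration, not any particular submission.

```python
class MinCostFlow:
    def __init__(self, n):
        self.n = n
        self.graph = [[] for _ in range(n)]  # adjacency lists of edge indices
        self.edges = []                      # [to, capacity, cost]; edge i^1 is the reverse of edge i

    def add_edge(self, u, v, cap, cost):
        self.graph[u].append(len(self.edges)); self.edges.append([v, cap, cost])
        self.graph[v].append(len(self.edges)); self.edges.append([u, 0, -cost])

    def min_cost_flow(self, s, t):
        total_flow = total_cost = 0
        INF = float('inf')
        while True:
            # Bellman-Ford, since residual edges may have negative cost
            dist = [INF] * self.n
            dist[s] = 0
            parent_edge = [-1] * self.n
            for _ in range(self.n - 1):
                updated = False
                for u in range(self.n):
                    if dist[u] == INF:
                        continue
                    for idx in self.graph[u]:
                        v, cap, cost = self.edges[idx]
                        if cap > 0 and dist[u] + cost < dist[v]:
                            dist[v] = dist[u] + cost
                            parent_edge[v] = idx
                            updated = True
                if not updated:
                    break
            if dist[t] == INF:
                break  # no augmenting path is left
            # bottleneck: the maximum flow we can push along the found path
            push, v = INF, t
            while v != s:
                idx = parent_edge[v]
                push = min(push, self.edges[idx][1])
                v = self.edges[idx ^ 1][0]
            # augment along the path
            v = t
            while v != s:
                idx = parent_edge[v]
                self.edges[idx][1] -= push
                self.edges[idx ^ 1][1] += push
                v = self.edges[idx ^ 1][0]
            total_flow += push
            total_cost += push * dist[t]
        return total_flow, total_cost
```

Each iteration re-runs the shortest-path search on the residual graph, so the path found in iteration k+1 already takes into account the residual edges created in iteration k.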


By mohamedeltair, history, 5 months ago

The problem's editorial describes the solution as follows:

First let pr[i] be the XOR of the first i elements of the array (the prefix XOR). If we divide the array into k segments whose right boundaries are (in increasing order) i1, i2, …, ik (ik must be n), then the XOR-sum of any subset of segments can be represented as the XOR-sum of a subset of the pr[ij] values. Therefore, besides maximizing k, we want to guarantee that no non-empty subset of the pr[ij] has a zero XOR-sum. So we calculate the size of the basis of the numbers pr[1], pr[2], …, pr[n] (viewed as binary vectors over GF(2)), because the basis size is the maximum k such that we can pick k linearly independent pr[ij] values over GF(2) (i.e., no non-empty subset of them has a zero XOR-sum).

My question is: from the calculated basis size k, we know that we can choose k linearly independent (over GF(2)) pr[ij] values, but how do we guarantee that we can choose these values such that ik = n (since the last right boundary of our chosen segments has to be n)?
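For reference, the basis-size computation the editorial mentions is the standard XOR linear basis over GF(2); a minimal sketch (the function name and bit width are my own):

```python
def xor_basis_size(values, bits=30):
    """Size of the basis of `values` viewed as vectors over GF(2)."""
    basis = [0] * bits  # basis[b] = a basis vector whose highest set bit is b
    size = 0
    for v in values:
        for b in range(bits - 1, -1, -1):
            if not (v >> b) & 1:
                continue
            if basis[b] == 0:
                basis[b] = v   # v is independent of the current basis
                size += 1
                break
            v ^= basis[b]      # eliminate bit b and keep reducing
    return size
```

For example, the prefix XORs of [1, 2, 3] are [1, 3, 0]; the zero vector contributes nothing, so the basis size is 2.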


By mohamedeltair, history, 9 months ago

The problem is simply as the title says: 1 <= N <= 200 and 1 <= M <= N, and the number of permutations is required mod 1e9+7. I thought of multiple methods, but I am stuck on each of them:

1) For each position i, calculate ans[i], the number of permutations which have at least one increasing sub-array of length M starting at position i and no increasing sub-array of length M starting at any earlier position. The final answer is the sum of all ans[i]. But I can't calculate ans[i] properly (I am stuck on ensuring that the permutations counted in ans[i] contain no increasing sub-array of length M starting earlier than i).

2) Calculate ans[x] moving from x = 1 up to M, or from x = N down to M (since ans[1] = N! and ans[N] = 1). But I can't find a way to compute ans[x] from ans[x+1] or from ans[x-1].

3) Find the answer using dynamic programming with state dp[i][j][k][l], where i is the current position, j is the last number placed, k is the length of the longest increasing suffix, and l is a boolean indicating whether an increasing sub-array of length M has been constructed. But I can't ensure that the number I try to place at position i has not already been used at an earlier position.
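For stress-testing any of these approaches, a brute-force counter for small N is handy. A sketch, assuming the problem counts permutations of N containing an increasing contiguous sub-array of length at least M (which matches ans[1] = N! and ans[N] = 1):

```python
from itertools import permutations

def count_with_run(n, m):
    """Count permutations of 1..n containing an increasing contiguous
    sub-array of length >= m. Brute force, only viable for small n."""
    total = 0
    for p in permutations(range(1, n + 1)):
        run = best = 1
        for i in range(1, n):
            run = run + 1 if p[i] > p[i - 1] else 1
            best = max(best, run)
        if best >= m:
            total += 1
    return total
```

For N = 3, M = 2 this gives 5 (every permutation except 3, 2, 1), which is useful for checking a candidate dp before optimizing it.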


By mohamedeltair, history, 9 months ago

The problem I am referring to in this question is: Traffic Light

It is mentioned in the question that: "Formally, Mr. Ang can set the offsets for the traffic lights to OFFi so that for each traffic light, Mr. Ang can go through it in time range [OFFi + k × (Ai + Bi), OFFi + k × (Ai + Bi) + Ai) and must wait in time range [OFFi + k × (Ai + Bi) + Ai, OFFi + (k + 1) × (Ai + Bi)) for all integers k."

But I don't understand what the offsets actually mean here. If Mr. Ang leaves his home at time 0 and the offsets are absolute times by which he can control when each green light starts, then he could adjust the offsets so that he never waits at any traffic light, but this is not the case in the problem. Any help in understanding this is appreciated.


By mohamedeltair, history, 10 months ago

Here is the link of the problem: Bonus Money

My question is about sub-task 3. It is mentioned in the editorial of the problem that dp[n][r+xg] is a polynomial P(x), where g = lcm(1, 2, 3, ..., n) and 0 <= r < g. What is the general proof of this? And what is the significance of lcm(1, 2, 3, ..., n) that makes the expression polynomial? (That is, it is said specifically that dp[n][r+xg] is a polynomial, and not just dp[n][x].) I have looked through the comments on the editorial for a proof, but I only found small proofs for dp[1][x], dp[2][2x], dp[2][2x+1], and dp[3][6x], not a general proof that works for any n.


By mohamedeltair, history, 16 months ago

The problem

There are two types of queries in the problem:

1. flip the bits from the lth to the rth (inclusive) in the elements from index x to index y (inclusive)
2. answer the question: is the xth number equal to the yth number?

The elements are all initially zeros. N and Q (the numbers of elements and queries) are up to 1e5, and 0 <= l <= r <= 1e9.

An intuitive idea (if the number of bits were small) is:

1. for the 1st query type: XOR the elements in the range [x, y] with ((2^(r+1)-1) xor (2^l-1)) (this can be done easily with range-update structures)
2. for the 2nd query type: just check whether the xth element equals the yth element using a point query

But the number of bits is very large. I noticed from people's solutions that they hashed 2^x for every x that appears in the first query type (the l and r values) to represent them with smaller numbers. So any mask that was originally ((2^(r+1)-1) xor (2^l-1)) maps to another mask, (hash(r+1) xor hash(l)), and the original combination of 1e9+1 bits for any element maps to a combination over a smaller number of bits.

This type of solution has two sides for the second query type:

1. if the answer is Yes, it is guaranteed that this solution will output Yes.
2. if the answer is No, it may happen that two different combinations of 1e9+1 bits map to the same combination over the smaller number of bits, and hence the solution incorrectly outputs Yes.

What is the probability that the problem on the second side happens if (for example) the hash of any 2^x is a long long of the form ((rand() << 32) | rand())?

Note: rand() here is the C++ rand.
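The hashing scheme described above can be sketched as follows: each boundary value x gets a random 64-bit label, the flip mask for [l, r] becomes label(r+1) xor label(l), and a Fenwick tree supports range-XOR update with point query. (A sketch with my own names; I use Python's random.getrandbits in place of rand().)

```python
import random
from collections import defaultdict

class RangeXorPointQuery:
    """Fenwick tree: XOR a value onto a range, query one position."""
    def __init__(self, n):
        self.n = n
        self.tree = [0] * (n + 1)

    def _update(self, i, v):
        while i <= self.n:
            self.tree[i] ^= v
            i += i & (-i)

    def range_xor(self, l, r, v):   # 1-based, inclusive
        self._update(l, v)
        self._update(r + 1, v)

    def point_query(self, i):       # XOR of all masks applied to position i
        res = 0
        while i > 0:
            res ^= self.tree[i]
            i -= i & (-i)
        return res

n = 5
label = defaultdict(lambda: random.getrandbits(64))  # random label per boundary
bit = RangeXorPointQuery(n)

def flip(x, y, l, r):
    """Query type 1: flip bits l..r of elements x..y (hashed form)."""
    bit.range_xor(x, y, label[r + 1] ^ label[l])

def equal(x, y):
    """Query type 2: are elements x and y equal (with high probability)?"""
    return bit.point_query(x) == bit.point_query(y)
```

Two elements compare equal exactly when their accumulated 64-bit masks coincide, which is where the collision probability in the question comes from.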


By mohamedeltair, history, 21 months ago

As the title says, there are these two types of queries. The number of values n is <= 100000 and the number of queries is <= 200000. The solution that came to my mind uses a segment tree where each node is an AVL tree storing the elements of that node's interval in ascending order, and each node of any AVL tree stores the sum of the elements in the subtree rooted at that node.

So if I need to update an element (write a new value to it), I go to the segment tree nodes whose intervals contain the element's position (log(n) nodes) and update the AVL tree of each of these nodes (each update is log(n)), so (log(n))^2 in total.

And if I want the sum of the numbers less than x in the interval l to r, I run a query such that whenever a segment tree node is totally inside [l, r], I use that node's AVL tree to get the sum of its elements less than x in log(n), and the higher-level segment tree nodes return the sums reported by these lower-level nodes (so (log(n))^2 in total).

If the number of queries/updates is Q, the total complexity becomes Q*(log(n))^2. Am I on the right path, or is there something wrong with this approach?
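The query half of this idea can be sketched with a static merge sort tree, where each segment tree node stores its interval sorted together with prefix sums; this answers "sum of elements less than x in [l, r]" in O((log n)^2). It is only a sketch of the query side: supporting the point updates described above would need the per-node AVL trees instead of sorted arrays.

```python
import bisect

class MergeSortTree:
    """Static: sum of elements < x on a range, O((log n)^2) per query."""
    def __init__(self, arr):
        self.n = len(arr)
        self.sorted_ = [None] * (4 * self.n)  # sorted elements per node
        self.prefix = [None] * (4 * self.n)   # prefix sums of sorted_
        self._build(1, 0, self.n - 1, arr)

    def _build(self, node, lo, hi, arr):
        if lo == hi:
            self.sorted_[node] = [arr[lo]]
        else:
            mid = (lo + hi) // 2
            self._build(2 * node, lo, mid, arr)
            self._build(2 * node + 1, mid + 1, hi, arr)
            a, b = self.sorted_[2 * node], self.sorted_[2 * node + 1]
            self.sorted_[node] = sorted(a + b)
        ps = [0]
        for v in self.sorted_[node]:
            ps.append(ps[-1] + v)
        self.prefix[node] = ps

    def query(self, l, r, x, node=1, lo=0, hi=None):
        """Sum of elements < x in arr[l..r] (0-based, inclusive)."""
        if hi is None:
            hi = self.n - 1
        if r < lo or hi < l:
            return 0
        if l <= lo and hi <= r:
            k = bisect.bisect_left(self.sorted_[node], x)
            return self.prefix[node][k]
        mid = (lo + hi) // 2
        return (self.query(l, r, x, 2 * node, lo, mid)
                + self.query(l, r, x, 2 * node + 1, mid + 1, hi))
```

The bisect on a node's sorted array plays the role of the AVL descent, and the prefix sums replace the per-subtree sums.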


By mohamedeltair, history, 22 months ago

Regarding this problem in UVA: Food portion sizes. I know that the optimal S will be arr[i]/1, arr[i]/2, or arr[i]/3 for some 1 <= i <= n, where arr[i] is the food required by the ith person; that is, we try all of arr[i]/1, arr[i]/2, arr[i]/3 for every i and choose the one which gives the minimum result. I know that these three choices are the cheapest for the ith person, but why does choosing one of them, from one of the n persons, give the minimum total result over all people? What is the proof of this?


By mohamedeltair, history, 22 months ago

When the subgames are independent, dealing with them is easy: you calculate the Grundy number of each subgame, either by deriving a dynamic programming solution or by spotting a pattern the Grundy numbers follow and coding the pattern directly (useful when the limits are very high). Finally, you XOR the Grundy numbers.
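As a concrete instance of the independent case, take the classic subtraction game where a move removes 1 to 3 stones from a single pile: computing Grundy numbers by mex reveals the pattern g(n) = n mod 4, which can then be coded directly and XORed across piles (a standard textbook example, names are mine):

```python
def grundy_table(limit, max_take=3):
    """Grundy numbers of the 'remove 1..max_take stones' subtraction game."""
    g = [0] * (limit + 1)
    for n in range(1, limit + 1):
        reachable = {g[n - t] for t in range(1, min(max_take, n) + 1)}
        mex = 0
        while mex in reachable:
            mex += 1
        g[n] = mex
    return g

g = grundy_table(20)
# The table reveals the pattern g[n] == n % 4, so for huge piles we can use
# the pattern directly and XOR the per-pile Grundy values:
def first_player_wins(piles):
    xor = 0
    for p in piles:
        xor ^= p % 4
    return xor != 0
```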

But suppose a game like this: you have n piles, each with a positive number of stones. In a move you pick a number of stones to remove, and this number is subtracted from every pile that has at least that many stones. As you can see, one move affects more than one pile. How should one think in these situations?
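One practical way to start on such coupled games, where per-pile Sprague-Grundy no longer applies directly, is to treat the whole position as a single state and brute-force win/loss values for small cases, then hunt for a pattern. A sketch, assuming a player who cannot remove any stone loses:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def wins(piles):
    """True if the player to move wins. `piles` is a tuple of pile sizes;
    a move picks t >= 1 and subtracts t from every pile with >= t stones."""
    piles = tuple(p for p in piles if p > 0)
    if not piles:
        return False  # no stones left: the player to move loses
    for t in range(1, max(piles) + 1):
        nxt = tuple(sorted(p - t if p >= t else p for p in piles))
        if not wins(nxt):
            return True  # found a move into a losing position
    return False
```

For example, (1, 2) turns out to be a loss for the player to move: both t = 1 and t = 2 leave a single pile of 1, which the opponent takes.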


By mohamedeltair, history, 22 months ago

Regarding this problem: Cross Mountain Climb. What came to my mind is brute force and dp, but the constraints are just too high (hi and d can reach 1e9). Any hints?


By mohamedeltair, history, 2 years ago

This is a problem in a past contest and its link is:

http://codeforces.com/gym/101102/problem/K

The statement is: consider a directed graph G of N nodes containing all edges (u→v) such that u < v; after removing a given list of edges, find the lexicographically largest topological sort of the graph (nodes are numbered 1 to n).

I tried an approach which is :

First, I assume the graph starts with the initial ordering 1, 2, 3, …, n stored in an array named permutation, so initially permutation[1] = 1, permutation[2] = 2, …, permutation[n] = n.

Then this pseudocode:

```
loop from i = 2 to n {
    j = i  // the initial index of i in permutation
    while (j > 1 and the edge between permutation[j-1] and i is removed) {
        swap permutation[j] and permutation[j-1]  // permutation[j] = i before the swap, permutation[j-1] = i after it
        j = j - 1  // the new index of i in permutation
    }
}
```

The final answer is the contents of permutation. For example, if n = 3 and finally permutation[1] = 3, permutation[2] = 1, permutation[3] = 2, then the topological sort is 3, 1, 2.

But the code gets a wrong answer on some test which I can't view. What do you think is wrong with this approach?
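For comparison, the usual reference technique for this task is Kahn's algorithm driven by a max-heap: repeatedly output the largest node with in-degree zero. A sketch with my own names (it takes the remaining edges explicitly, so materializing the complete u < v graph minus the removed list is left to the caller):

```python
import heapq

def lex_largest_toposort(n, edges):
    """Lexicographically largest topological order of nodes 1..n.
    `edges` is a list of (u, v) pairs meaning u must come before v."""
    adj = [[] for _ in range(n + 1)]
    indeg = [0] * (n + 1)
    for u, v in edges:
        adj[u].append(v)
        indeg[v] += 1
    # max-heap via negated keys: always emit the largest available node
    heap = [-v for v in range(1, n + 1) if indeg[v] == 0]
    heapq.heapify(heap)
    order = []
    while heap:
        u = -heapq.heappop(heap)
        order.append(u)
        for w in adj[u]:
            indeg[w] -= 1
            if indeg[w] == 0:
                heapq.heappush(heap, -w)
    return order  # shorter than n if the graph has a cycle
```

With n = 3 and only the edge 1→2 remaining, this returns 3, 1, 2, matching the example above.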