I was thinking of this problem.

You have a matrix of size n by n (n<=10^5). You have to support q<=10^5 operations of 3 types (online):

Given (i, l, r, x), set a[i][l...r] to x

Given (l, r, x), set a[1...n][l...r] to x

Given (i, l, r), find the minimum of a[i][l...r]

Is there any way to support these operations efficiently?

I don't know whether this is correct, but here is my idea:

Change each query of the 2nd type into n queries of the 1st type, and use sqrt decomposition: divide the queries into K blocks and recalculate after each block. Finding the answer will then not be hard.

Sorry for my poor English.


When a query of the third type arrives, first update the i-th row through the second segment tree in O(log n), and then answer the query through the first tree.

We can maintain n sets instead of dynamic segment trees. In the i-th set we store a segment [l, r] if a[i][j1] == a[i][j2] for all l <= j1, j2 <= r and r - l is as large as possible.

nvm

Most likely, all elements are equal to zero at the beginning.

I think the easiest way is to swap columns and rows.

Now we could build a segment tree where, for each row and for the 'whole column' case, we store the pair (last value, query index). Query number j does one of the following:

assigns on [left...right] the pair (x, j) at index i;

assigns on [left...right] the 'full column' value (x, j).

P.S. It seems that you need to store a list of 'lazy' queries for each vertex, cleared after every update operation, so that you only update values that were not updated before.