Thanks for the participation!


In problem A, it is also necessary to add 4, since clicks are counted as a separate move

1883E: can anyone tell me why my code gives negative output on the first test case?

Submission: 229287818

Most likely overflow

Then it should give different output in PyPy than in C++, but it gives the same.

I think in PyPy that idea may be accepted (because PyPy has big integers), but in C++, if you want to solve with that idea, you must implement big integers yourself to avoid overflow. Or you can see my idea and my code here: 229357134

thanks


It's impossible to solve Div3D in Python without implementing custom Multiset :(

Use C with Classes

this was savage ;>

a list and sorting might work.

Create two lists, Left and Right. Sort both; if the upper bound of Right[0] exists in Left, print yes.

Right, but list deletion (and insertion) is O(n) per operation. Removals and inserts alone would bump you up to O(n^2): a recipe for TLE, especially in Python.

It is possible https://codeforces.com/contest/1883/submission/229319134

I skipped it during the contest, but did it just now using heaps. Let's be honest, it's clearly more difficult than just using sortedcontainers.

And it's just this problem. There may be problems where heaps do not suffice. Why should someone using Python be required to implement SortedList or to copy/paste 200-1000 lines just to run their code? Smh.

Well I wrote a solution using 2 sets and a map.

Submission Link

In C++, a set by default stores elements in sorted order. This is not the case with Python set.

Oh, I think you can coordinate-compress the left and right endpoints, then use a Fenwick tree to check whether there exists a range with right < current left or left > current right. This is my code: 229304991.

I LOVE YOUR TECHNIQUE!

thank you bro!!

Why is it impossible? I use 2 heaps to check the maximum left and the minimum right, and 2 dictionaries to count. Here is my code. https://codeforces.com/contest/1883/submission/229247926

Wow, your idea is very interesting! I have never seen this technique before.

Use two heapqs and a dictionary. Pop from the left and right heaps and check if the segment is in the dictionary; if it is not, pop again from the corresponding heap.

You can use map + set to replace the multiset. Pop from the set when map[thing] == 1.
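Both the heap and map+set suggestions are lazy-deletion tricks. Here is a Python sketch, assuming the Div3 D setting where we must report whether two stored segments are disjoint (i.e. the minimum right endpoint is smaller than the maximum left endpoint); all names are mine:

```python
import heapq
from collections import Counter

class LazyHeap:
    """Min-heap with deferred deletion: removed items stay in the
    heap until they surface at the top."""
    def __init__(self):
        self.heap = []          # actual heap storage (may hold stale items)
        self.count = Counter()  # multiplicity of live values

    def add(self, x):
        self.count[x] += 1
        heapq.heappush(self.heap, x)

    def remove(self, x):
        self.count[x] -= 1      # defer: just decrement the counter

    def top(self):
        # Drop stale entries whose live count has reached zero.
        while self.heap and self.count[self.heap[0]] == 0:
            heapq.heappop(self.heap)
        return self.heap[0] if self.heap else None

# Track min right endpoint and max left endpoint of stored segments.
min_right = LazyHeap()          # stores r
max_left = LazyHeap()           # stores -l so the top is the max l

def add_segment(l, r):
    max_left.add(-l)
    min_right.add(r)

def remove_segment(l, r):
    max_left.remove(-l)
    min_right.remove(r)

def has_disjoint_pair():
    # Two stored segments are disjoint iff min r < max l; since any
    # single segment has l <= r, the two extremes then necessarily
    # come from different segments.
    r, l = min_right.top(), max_left.top()
    return r is not None and l is not None and r < -l
```

Every element is pushed and popped at most once per deletion, so all operations are amortized O(log n).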

Actually, in G2, you don't need binary search. You can just sort the two arrays and pair them up greedily.

Can you explain in more detail please?

you can see my submission. link

Nice solution. Congratulations for CM

Thanks! ^_^

Can you explain your solution, please?

First, he paired up elements of $$$A$$$ with those of $$$B$$$ greedily after sorting them, while keeping track of elements of $$$B$$$ that are not paired with anyone. Then we can simply pair the value of the first element of $$$A$$$ (the one which is varying) with the largest unpaired element of $$$B$$$.

To make the order of array A irrelevant, do it for the last n-1 elements with a multiset, and then look at the first element for each value from 1 to m.

Div 1. B can be solved in linear time. https://codeforces.com/contest/1888/submission/229257846

Div2 D2/G2 can be solved in O(nlogn). If we increase a number then we need to check if it satisfies at only 1 index. If currently the number is at a[i], then we can increase a[i] till min(a[i+1], b[removals+i]) without changing the answer. Do some casework. My AC code link

I have done something similar but from a different angle. At each step i from 1 to m, I calculated the position of i in the sorted vector a. If that position is among the elements removed in the previous step, then the number of elements to be removed stays the same; otherwise I checked whether the new i is still less than b's element at that position, and if not, calculated how many more elements I would have to remove.

I traversed from 1 to m using binary search, since the answer for many consecutive steps will be the same.

Link — https://codeforces.com/contest/1888/submission/229293946

`int t=1; cin>>t;`

Why am I getting TLE in this code for problem F?

Div. 1 A can be solved with

`multiset`

in $$$\Theta(n\log n)$$$: submission

I am a beginner; I'm stuck on A and cannot proceed further. Can anyone tell me how to improve?

no, google search and figure it out yourself.

I don't know how these MONSTERS OF PROGRAMMING solve the tasks in 2 minutes, when I can spend all day searching for the right answers. I think you need to train and solve a lot of tasks, and one day you will solve the whole competition in 2 hours.

1888E. I'm curious about the cases where my code fails. Code

Maybe because of this line

`lli ind = upper_bound(record_time[rec].begin(), record_time[rec].end(), dist[u]) - record_time[rec].begin();`

When it's the source node, you can go to a neighbouring node at the same time, so it should be lower_bound.

My source node has distance 0.

I figured it out now. Thanks

Div3 D is really misleading. The question doesn't clearly describe the ability to pair l and r randomly.

The two codes below use very similar algorithms with different data structures; only the first is accepted:

implement using two multiset left,right:

implement using a multiset<pair<int,int>>

Actually no, it's not the same. If we have three segments [1,100], [2,3], [4,5], the second algorithm will answer "NO", but the right answer is "YES".

You are not free to pair them up randomly.

However, you can prove that if each pair of intervals has nonempty intersection, then there is an index that is contained in all of the intervals. So it is enough if you check for this.
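For intervals this is Helly's theorem in one dimension: pairwise intersection is equivalent to all intervals sharing a common point, i.e. the maximum left endpoint does not exceed the minimum right endpoint. A minimal sketch (function name is mine):

```python
def all_intersect(segments):
    """True iff every pair of [l, r] intervals intersects.

    By the 1D Helly property, pairwise intersection holds iff all
    intervals share a common point, i.e. max(l) <= min(r).
    """
    return max(l for l, r in segments) <= min(r for l, r in segments)
```

On the example above, `all_intersect([(1, 100), (2, 3), (4, 5)])` is False because [2,3] and [4,5] are disjoint, even though [1,100] meets both.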

Oh, I know my problem now ^-^. Thank you very much!

Me: "I can solve it, piece of cake." Div 3E: "Haha, I am E. Don't even think." Nice problems, and I personally like the experiment :)

Div1 D can also be solved using divide and conquer: https://codeforces.com/contest/1887/submission/229298128

If you don't mind, can you explain your solution because that's the method I tried with but couldn't come up with a good time complexity solution

My solution's a bit more complicated than the intended one.

For a single query, you can initially add $$$a_l$$$ to the left split and keep expanding the left split while there is a smaller element in the right split. Similarly, you can add $$$a_r$$$ to the right split and keep expanding the right split while there is a larger element in the left split.

If you had only prefix queries, you could sweep on the right index of the query and maintain $$$prv[i]$$$ = left index of the right split for the interval $$$[1, i]$$$. For suffix queries you can maintain $$$nxt[i]$$$ = right index of the left split for the interval $$$[i, n]$$$.

In the divide and conquer, while on the interval $$$[l, r]$$$, run a sweep to compute $$$prv[i]$$$ for $$$[mid + 1, r]$$$ and $$$nxt[i]$$$ for $$$[l, mid]$$$. This can be done in $$$O(n \log n)$$$ time, so by the master theorem the total complexity is $$$O(n \log^2 n)$$$.

When in the interval, process all queries that cover $$$mid$$$. The condition for there being no split is that the interval $$$[l_i, nxt[l_i]]$$$ contains an element greater than one in $$$[mid + 1, r_i]$$$ and the interval $$$[prv[r_i], r_i]$$$ contains an element smaller than one in $$$[l_i, mid]$$$. I used some segtrees to maintain this, which is also $$$O(n \log n)$$$. This is also $$$O(n \log^2 n)$$$ by the master theorem.

I think my implementation has a bunch of redundant code :clown:

Thanks, I understand your solution! The structure of the divide and conquer is similar to a disjoint sparse table, if I understood correctly, so I think it can be optimised to O(n log n) with the following observations:

1) To find prv(i) for [x, y] you can do it in O(n) time: prv(i) = prv(i - 1) or i, because if element i is smaller than the max from prv(i - 1), then the new prv must be i. Maintaining the maximum can be done with a separate pointer that only moves from x to y, which is O(n) time; the same holds for nxt.

2) Because a disjoint sparse table answers queries in O(1), you can answer each of your queries in O(log n): you only need to check the conditions from the last paragraph, which take O(log n) (or O(1) if you precompute them in O(n log n)).

So you get O(n log n) preprocessing plus O(log n) per query.

Feeling that 1B=1C=1D=1E. Small difficulty gap. Nice round.

I can't see the English solution to Div1C

Fixed

Oh my goodness, When I solve Dances (Easy Version), I accidentally mistook 'reorder' in the question for 'record', thinking that the task taught me to store it in a local array—I even thought the question was elaborately detailed!!! Now, I would like to know if there are any algorithms capable of solving this problem without the 'reorder' condition. I tried dp but it seems too difficult, I would be grateful if someone could share some thoughts with me.

if you cant reorder, then i think you can solve it with a dp with n^2 states.

dp(i,j) = max size of a good array (thats the same as minimizing operations) considering only A[i,...,n] and B[j,...,n].

you have to choose to pair ai with bk with smallest index such that k >= j and ai < bk. in this case you transition to 1 + dp(i+1, k+1). otherwise you can also choose not to use the current ai for pairings, transitioning to dp(i+1,j).

i hope you get the idea.
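A sketch of that recurrence (names are mine; the inner scan finds the smallest valid index, so this is deliberately unoptimised):

```python
from functools import lru_cache

def max_good_size(a, b):
    """dp(i, j): the largest number of pairs we can still form from
    a[i:] and b[j:], where each chosen a-element is paired, in order,
    with a b-element of index >= j that strictly exceeds it."""
    n, m = len(a), len(b)

    @lru_cache(maxsize=None)
    def dp(i, j):
        if i == n or j == m:
            return 0
        best = dp(i + 1, j)            # option 1: leave a[i] unpaired
        k = j                          # option 2: pair a[i] with b[k],
        while k < m and b[k] <= a[i]:  # the smallest index k >= j
            k += 1                     # with b[k] > a[i]
        if k < m:
            best = max(best, 1 + dp(i + 1, k + 1))
        return best

    return dp(0, 0)
```

Minimising removals then corresponds to maximising this pair count; the scan can be replaced with a precomputed "next greater from index j" table to keep the whole thing near O(n^2).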

wow! really nice dp You’ve made my day! Thanks a ton!

you can probably compute this DP in n (log n)² using 2d range trees, or some other data structure. the complexity of the 2d range trees can probably decrease to n log n using fractional cascading.

Thank you so much for explaining the concept of 2D range trees to me. It's my first time hearing about this data structure, and I am eager to try it out.

usually you can use segment trees instead of range trees. Anyway, I think this 2d range tree idea does not work to compute this DP; there are still going to be O(n²) points to visit, so the usual DP is probably the solution you wanted.

Ok, I got it~

Problem 1887C is awesome. I really like the idea of 1887C.

I agree, the idea in the editorial is very clever

Do you know what is the segtree solution that many other implementations used?


Prob E. AC 229306255 and WA 229307572.

My idea is to use log2. In AC, I use double and ceil, while in WA, I use long long and handle ceil in an alternative manner. Help me

Not sure if this is the case; I got WA on test 7 using double and ceil, and WA on test 13 using long double and ceil. Maybe the test data is made so that precision errors occur and affect your answer.

How did you finally avoid that?

I didn't :( This is how I solved it in the end. I don't think this problem intended us to use double/long double, though.

In Div3 C, I calculated the product of all elements in the array and, if it was divisible by k, printed "0". Why is this condition giving WA? When I removed this condition, it got accepted.

Possibly overflow and false positive

Can someone explain solution to 1883F? Why it works?

Never mind. I understood the difference between subarray and subsequence. Now the solution makes sense.

Why not rated -_-

In problem E, why do we get TLE if we go the naive way, multiplying by 2?

Not sure why tle, but I think it’s because you’re multiplying very large integers

Even if we are multiplying large numbers, i.e. a[0] >= 1e9 and the rest are 1, 32 multiplications times 1e5 elements shouldn't result in TLE, right?

Suppose the array size is 1e5 and a[0]=1e9, a[1]=1e9-1, a[2]=1e9-2, and so on. In cases like these the multiplications aren't limited to 32 bits; they go far beyond that, and the product becomes too large. I think this is where the time limit can be exceeded.

You are right. It is clear that the answer of a decreasing sequence containing $$$n$$$ numbers is the sum of $$$1,2,3,\ldots, n-1$$$ , because if you multiply $$$a_i$$$ by $$$2^k$$$ , you'll have to multiply $$$a_{i+1}$$$ by $$$2^{k+1}$$$ to meet the demand. So in this case, Your solution will be $$$O(n^2)$$$.
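If 1883E is the "repeatedly double elements until the array is non-decreasing, minimising operations" task (my reading of this thread; treat it as an assumption), the usual way to dodge the huge products is to track only the number of doublings applied to the previous element, never the products themselves. A sketch with names of my own:

```python
def min_doublings(a):
    """Minimum number of 'multiply some element by 2' operations to
    make a non-decreasing, tracking exponents instead of products."""
    total = 0
    prev_exp = 0                      # doublings already applied to a[i-1]
    for i in range(1, len(a)):
        # d = smallest integer (possibly negative) with a[i] * 2^d >= a[i-1];
        # since values fit in ~30 bits, each inner loop runs O(30) times.
        d = 0
        if a[i] < a[i - 1]:
            x = a[i]
            while x < a[i - 1]:
                x *= 2
                d += 1
        else:
            y = a[i - 1]
            while y * 2 <= a[i]:
                y *= 2
                d -= 1
        prev_exp = max(0, prev_exp + d)   # can't apply negative doublings
        total += prev_exp
    return total
```

For a strictly decreasing array of n elements this returns 1 + 2 + ... + (n-1), matching the count in the comment above, but in O(n) small-integer work rather than O(n^2) big-integer work.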

Does anyone have any good SortedList implementation?

I'm wondering is there any C++ implementation without using

`multiset`

? I tried using a vector and sorting after every update, but it leads to TLE.

I don't see why you would use a vector and sort after every update when that's literally what a multiset does. There are ways to implement this without multiset, using some other data structure, though why bother when multiset works.

Yes, you are right. I'm just curious if

`multiset`

is the only solution. It seems the answer is yes.

It's not; as I said, you can use some data structure like a segment tree instead, but it's pointless when multiset exists.

I used map; you can see my solution: https://codeforces.com/contest/1883/submission/229248799

Bruh, pushing into a vector and sorting requires O(n log n) every time. Let x = the number of elements you push one by one; then the time complexity is O(x·n log n).

Whereas a multiset keeps elements sorted with O(log n) insertion; the time complexity is O(x log n).

Yes exactly, thanks a lot!

Are you 邹必谦???

oh i thought it was personal talk, sry mb

https://codeforces.com/contest/1883/submission/229322842

Can anyone tell me where I am going wrong? Any help would be appreciated. Thanks in advance.

What if the segments are [1,5], [2,3], [4,7]? As 5 > 4 your code will give “NO” but there’s still the [2,3]

okay, got it, thanks

In 1883B, even when we do not remove all k characters but the frequencies work out, we still print "Yes"... why?

Let o be the number of characters that appear an odd number of times in the string. You have to delete at least max(0, o - 1) characters, since a palindrome can have at most one character with an odd count, appearing at the center; the others have to be symmetric and appear on both sides. If k >= max(0, o - 1), you can delete those max(0, o - 1) first, then delete in pairs; if 1 remains, you can remove a center or create a center by deleting one. Either way you can palindromize it.
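That counting argument fits in a couple of lines. A sketch, assuming the task reads "delete exactly k characters (0 <= k < |s|) so the rest can be rearranged into a palindrome" (treat that reading as an assumption):

```python
from collections import Counter

def can_palindromize(s, k):
    """True iff, after deleting exactly k characters of s, the remaining
    multiset of characters can be arranged into a palindrome."""
    odd = sum(1 for c in Counter(s).values() if c % 2 == 1)
    # A palindrome tolerates at most one odd-count character, and each
    # deletion fixes at most one surplus odd count; further deletions
    # can always be paired off without breaking the parity condition.
    return k >= odd - 1
```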

In F: "Note that a subarray suits us if $$$a_l$$$ is the leftmost occurrence of the number $$$a_l$$$ in the array and $$$a_r$$$ is the rightmost occurrence of the number $$$a_r$$$ in the array." How can we prove this?

If there's another $$$a_l$$$ appearing earlier, we can just start the subsequence there, and we would have at least 2 such subsequences; the same goes for $$$a_r$$$.

Thanks! Very beautiful observation :)

In problem Div3, F, I got TLE (submission 229270873) using unordered_map. I replaced it with map (submission 229332985) and it got accepted. Isn't unordered_map supposed to be faster?

Unordered structures are linear in the worst case. They use hash functions internally, so lookups are O(1) on average, but with many collisions a lookup takes O(n) operations. A regular map, on the other hand, is O(log n) in the worst case. Use a custom hash.

There is a fun solution of 1887C using something that is sometimes called Radewoosh trick (https://codeforces.com/blog/entry/62602).

First, let's use the idea of transition to differences array.

Suppose that we are given some array state after several operations. Let's call this STATE. Then, for every state that has ever occurred, we can find out whether it is lexicographically smaller than STATE or not: we process the change operations in the original order and maintain a Fenwick tree f, where f[i] = 0 if the value at index i in the current state equals the one in STATE and 1 otherwise. To check whether the current state is lexicographically greater than STATE, we just need to find the first 1 in the Fenwick tree and perform one comparison.

After that we can just change STATE to the random state that is lexicographically smaller than STATE. The expected number of such runs is $$$O(\log n)$$$, which gives an $$$O(n \log ^ 2 n)$$$ solution, fast enough to pass the tests.

My code: https://codeforces.com/contest/1888/submission/229293148.

Dang this is cool, thanks for sharing

Can someone tell me what's wrong with my code for 1883E? Am I missing something? I think my logic is correct. Thanks. 229351983

Take a look at Ticket 17116 from CF Stress for a counter example.

Div2 D2 can be solved in linear time with the following algorithm. First, you greedily find the minimum number of needed operations with your array 'a' having only n - 1 elements (greedily meaning: while b[j] <= a[i] we decrease 'i', and only then decrease 'j', over the previously sorted arrays). So we have found that solution in O(n).

My idea was that when you try to insert your m at position i (so the array stays sorted), the following can happen:

1) Because you shifted all elements left of m one more place to the left, their conditions may now be ruined by their new pairing with a.

2) Your m can be >= b[i + numOfOperations] (we paired every a[i] with b[i + numOfOperations] with our greedy method; check for yourself).

In both of those cases we need to increase numOfOperations for this m by 1, but let's look at it in reverse. Say our answer is (numOfOperations + 1) for m, so we 'predict' that one of these two conditions will always hold, and we need to find when it won't. We can deduce that if we start increasing m from 1 and reach a point where the first condition disturbs the pairing, it will also disturb it for all m >= the current one. But if the second condition happens, it just means we need to shift our search to between the next two elements of array a.

So the solution looks like this:

1) Run the greedy for the starting a and b.

2) In every interval between a[i] and a[i + 1], we can quickly check for which m-s our prediction was bad (neither of the 2 conditions happened), which is for all m-s between a[i] and b[i + numOfOperations].

3) If, at any moment while iterating i, we reach a point where the first condition happens, we end the iteration, because fixing the pairing from then on would require adding 1 to every solution, which we already did.

Time complexity: O(n)

thank you for the solution!

229293708 should give tle

How to solve for k=4 in Div 3 problem C, Raspberries?

The product can be made divisible by 4 in two cases: first, two or more even numbers, i.e. divisible by 2; and second, adding the minimum x to make one element divisible by 4.
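That casework can be sketched as follows (function name is mine; it assumes the operation is "add 1 to an element" and k = 4):

```python
def min_ops_for_k4(a):
    """Minimum number of +1 operations so the product of a is
    divisible by 4: either lift one element to a multiple of 4,
    or make (at least) two elements even."""
    # Case 1: one element becomes divisible by 4.
    to_mult_of_4 = min((4 - x % 4) % 4 for x in a)
    if len(a) < 2:
        return to_mult_of_4
    # Case 2: two elements become even; each odd element costs 1.
    odd_costs = sorted(x % 2 for x in a)
    return min(to_mult_of_4, odd_costs[0] + odd_costs[1])
```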

Ok, will try

In problem 1883F, why am I getting TLE? 229371473. It's O(n).

Ok, I got it. I was using unordered_map.

The rounds were beautiful, they helped me get to Specialist for the first time! Really glad :D

In Div3 E, you can simply calculate how many 2's are in every a[i] and then just calculate the difference. Here pre[i] is the number of 2's used earlier!

1883E: Can anyone tell me why I am getting a wrong answer? https://codeforces.com/contest/1883/submission/229379757

The value of a[i] gets too big! You need to count the 2's externally.

Problem D is a great one, it really confused me. At one moment I thought a treap or a splay tree was required.

Can anybody explain why 1883E gives TLE when using brute force, multiplying by 2 repeatedly? The maximum number of times 2 can be multiplied is about 32, so why TLE?

About Div1F, there are some details about the boundaries: there should be no $$$nxt_i$$$ equal to $$$r_1$$$, but the editorial didn't mention this. Please update the editorial.

I am a Python user and am wondering how I can solve 1883D (In Love), as Python doesn't have an inbuilt multiset.

you can implement it using set and map

maybe I play Genshin Impact too much...

tooooooooooooooooo much

Could anybody explain why this is correct? I mean, given a rigorous proof?

It's true because we have exactly one occurrence of the current subarray as a subsequence only when the boundaries of the subarray are fixed, which is the case only when they have no other occurrences outside the current subarray!

Thanks very much

Suppose i, j where a[i] == a[j] && i < j and we claim that a[j] is a suitable left boundary.

Then there is a contradiction, as we can build another subsequence that is the same by swapping a[i] with a[j]. In other words, if a[j]...a[R] is the subarray, there exists another identical subsequence in A where a[j] can be swapped with a[i]. Hope this helps :)

Thank you
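The leftmost/rightmost-occurrence observation discussed above turns into a short counting routine: sweep once, counting first occurrences seen so far, and add that count at every last occurrence. A sketch (names mine):

```python
def count_unique_subarrays(a):
    """Count subarrays a[l..r] where l is the first occurrence of a[l]
    and r is the last occurrence of a[r]; by the observation above,
    exactly these occur exactly once as a subsequence."""
    first, last = {}, {}
    for i, x in enumerate(a):
        first.setdefault(x, i)   # remember only the first index
        last[x] = i              # keep overwriting: ends at the last index
    firsts_upto = 0              # first-occurrence indices seen so far
    total = 0
    for i, x in enumerate(a):
        if first[x] == i:
            firsts_upto += 1
        if last[x] == i:
            total += firsts_upto  # any first occurrence l <= r works
    return total
```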

In G2 (Dances, Hard Version), how to prove the line "changing one element of array a cannot worsen the answer by more than 1"? :")

We can always just delete this element and the smallest in b.

1883F: Anyone know why my solution TLEs when I use unordered_map but gets AC when I use map?

Submission: 229484801

That task got an anti-test for `unordered_` structures. The hash table inside the structure gives $$$O(1)$$$ on average, but that test was built specifically to cause hash collisions. Those collisions raise the asymptotics up to $$$O(n)$$$, which is the worst case.

A normal `map` doesn't use a hash table and is $$$O(\log n)$$$ always. That technically increases the asymptotics, but a lower constant and the impossibility of "unlucky cases" make it work way faster compared to `unordered_map`. You can find more info in the comments under the announcement post.

In problem 1883E, can anyone tell me why my code is getting runtime error?

My submission

HELP PLS:

https://codeforces.com/contest/1887/submission/229544478

DIV 1 D PROBLEM

I DONT KNOW WHY THIS FAILS AT TESTCASE 29

Take a look at Ticket 17106 from CF Stress for a counter example.

Can someone please explain the test case on which my code fails on E?

submission

Div3 E can be solved easily using logarithms and their properties. It's more optimal and even simpler to understand; the code for it is as follows:

Thanks for such a nice contest. :D

Could someone explain why the answer for the test case "3 / 2 2 3" for problem 1883F (You Are So Beautiful) is 3 and not 4?

There are 4 different subsequences ([2,2], [2,3], [2,2,3], [3]).

The subsequence `[2,3]` isn't unique. We can choose indexes `2,4` or `3,4` (similarly with the subsequence `[3]`).

In this case we have the suitable arrays `[3,2,2]`, `[2,2]`, `[2,2,3]` and `[3,2,2,3]`.

Could anyone please tell me what is wrong with my submission? Submission: 229277838

This part, I believe, can lead to wrong answers.

229682104

In problem E: why is this code giving a wrong answer on test case 13, anyone?

Why am I getting TLE? Can anyone please tell me the reason behind the TLE?

`while(t--){ /* code of problem F (You Are So Beautiful) */ }`

Seems correct to me. Try not using "unordered" sets and maps; their internal hash implementation is not that great, which can cause collisions resulting in TLE. Either use an ordered map or an unordered map with a custom hash. See this for more info.

ya I got that...btw thanks

Can anyone please look into my code and tell me why I am getting WA on Div3.E? I used log_2 to calculate how many twos are required. Code

There seems to be a precision issue.

Check these two submissions (in C++):

The only difference is that in one place we take log(a/b), which got accepted,

and in the other case we do log(a) - log(b) (which you have also done):

https://codeforces.com/contest/1883/submission/231450996

https://codeforces.com/contest/1883/submission/231446838

In part F, using unordered_map resulted in TLE. I had to use a map for it to work

In case anyone like me finds the implementations of others hard for C: code

Can someone tell me why we can't just delete bad subarrays from the total in the question You Are So Beautiful?

Code:

```
#pragma GCC optimize("Ofast")
#pragma GCC target("avx,avx2,fma")
#pragma GCC optimization("unroll-loops")
#include <stdio.h>
#include <math.h>
#include "ext/pb_ds/assoc_container.hpp"
#include "ext/pb_ds/tree_policy.hpp"
// (the remaining #include lines lost their header names when the comment was posted)

using namespace std; using namespace __gnu_pbds;

template <class T> using ordered_set = tree<T, null_type, less<T>, rb_tree_tag, tree_order_statistics_node_update>;

template <class key, class value, class cmp = std::less<key>> using ordered_map = tree<key, value, cmp, rb_tree_tag, tree_order_statistics_node_update>;

// find_by_order(k) returns iterator to kth element starting from 0;
// order_of_key(k) returns count of elements strictly smaller than k;

#define inf 2e18
#define ll long long
#define FAIZAN ios_base::sync_with_stdio(false);cin.tie(NULL);cout.tie(NULL)

const ll mod = 1e9 + 7;

vector<int> sieve(int n) {int *arr = new int[n + 1](); vector<int> vect; for (int i = 2; i <= n; i++)if (arr[i] == 0) {vect.push_back(i); for (int j = 2 * i; j <= n; j += i)arr[j] = 1;} return vect;}

ll inv(ll i) {if (i == 1) return 1; return (mod - ((mod / i) * inv(mod % i)) % mod) % mod;}

ll mod_mul(ll a, ll b) {a = a % mod; b = b % mod; return (((a * b) % mod) + mod) % mod;}

ll mod_add(ll a, ll b) {a = a % mod; b = b % mod; return (((a + b) % mod) + mod) % mod;}

ll gcd(ll a, ll b) { if (b == 0) return a; return gcd(b, a % b);}

ll ceil_div(ll a, ll b) {return a % b == 0 ? a / b : a / b + 1;}

ll pwr(ll a, ll b) {a %= mod; ll res = 1; while (b > 0) {if (b & 1) res = res * a % mod; a = a * a % mod; b >>= 1;} return res;}

int main() { FAIZAN; int t; cin>>t; while(t--) { ll n; cin>>n; vector<ll> v(n); map<ll, vector<ll>> mp; for(int i=0; i<n; i++) { cin>>v[i]; mp[v[i]].push_back(i); }
// (the rest of main was cut off in the original paste)
}
```

Wonderful problems. Thank you!

Tutorial of 1883E needs correction.

"If a(i-1) >= a(i), then we say x(i) = x(i) - 1 and add 1 until a(i) * 2 < a(i-1)"

Instead of a(i) * 2 < a(i-1), it should be a(i) * 2 < a(i). Submission link: https://codeforces.com/contest/1883/submission/231445041

Alternate brainless $$$O((n + q)\sqrt{n})$$$ solution for div1-D:

Divide the array into square root blocks. Now for queries spanning only one or two square root blocks we can simply bruteforce in $$$O(\sqrt{n})$$$ time.

For queries spanning > 2 blocks, lets think of how we can efficiently check whether a split point exists within a square root block $$$B$$$ which lies completely within the query.

Let $$$B_l$$$ and $$$B_r$$$ be the left and right bounds of the block $$$B$$$. Let $$$mx(l, r) = max(a_l, a_{l + 1} \dots , a_r)$$$ and $$$mn(l, r)$$$ is similarly defined for the minimum.

Now, for a query $$$(l, r)$$$, a split point $$$S$$$ exists within block $$$B$$$ if $$$max(mx(l, B_l - 1), mx(B_l, S))$$$ < $$$min(mn(S + 1, B_r), mn(B_r + 1, r))$$$.

Note: Assume that $$$mx(l, B_l - 1) < mn(B_r + 1, r)$$$ and $$$mx(B_l, S) < mn(S + 1, B_r)$$$ from here onwards, as if that's not true, it's trivial to see that $$$S$$$ cannot be a split point.

Claim: For any query $$$(l, r)$$$, a split point is present in block $$$B$$$ if and only if there exists some $$$S$$$ such that the ranges $$$[mx(l, B_l - 1), mn(B_r + 1, r)]$$$ and $$$[mx(B_l, S), mn(S + 1, B_r)]$$$ have a nonempty intersection.

Proof: If there is no such $$$S$$$, then each $$$S$$$ falls under one of two cases.

Therefore, for every block, we need to create a structure which holds information about a set of ranges of size $$$p = O(\sqrt{n})$$$ and can efficiently answer queries of the form:

Given a range $$$[l, r]$$$ answer whether any range in the set intersects with $$$[l, r]$$$.

We can easily solve this subproblem online in $$$O(p + range)$$$ time with some precomputation.

Solution: Create a boolean array $$$a$$$ in which $$$a_i$$$ is true if some range in the given set covers $$$[i, i + 1]$$$. This array can be created very easily in $$$O(p + range)$$$ time, and it's a very standard task.

Now the answer to any query $$$[l, r]$$$ will be true if and only if some $$$a_i$$$ is true for some $$$i \in [l, r)$$$. This can be answered in $$$O(1)$$$ with prefix sums.
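A minimal sketch of that structure (names are mine; like the construction above, it detects overlaps of positive length, so touching at a single point does not count):

```python
def build_intersection_oracle(ranges, size):
    """Preprocess a fixed set of [l, r] ranges (0 <= l <= r <= size).
    Returns query(l, r): does any stored range overlap [l, r] in a
    segment of positive length? O(p + size) build, O(1) per query."""
    diff = [0] * (size + 1)
    for l, r in ranges:            # range [l, r] covers unit segments
        if l < r:                  # [l, l+1], ..., [r-1, r]
            diff[l] += 1
            diff[r] -= 1
    pref = [0] * (size + 1)        # pref[i+1] = covered units among 0..i
    run = 0
    for i in range(size):
        run += diff[i]
        pref[i + 1] = pref[i] + (1 if run > 0 else 0)

    def query(l, r):
        # [l, r] overlaps something iff some unit [i, i+1], l <= i < r,
        # is covered by a stored range.
        return pref[r] - pref[l] > 0

    return query
```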

So, we prebuild this structure for each block in $$$O(n\sqrt{n})$$$ time and $$$O(n\sqrt{n})$$$ memory.

Now each query is answered by firstly brute forcing over the blocks at the edge, and for each block $$$B$$$ which lies completely in the query, we query the structure corresponding to $$$B$$$ to check if the range $$$[mx(l, B_l - 1), mn(B_r + 1, r)]$$$ intersects with anything.

Finally, we achieve a total time complexity of $$$O((n + q)\sqrt{n})$$$ and consume $$$(O(n\sqrt{n}))$$$ memory.

Code: link

Problem F : 232780368

https://codeforces.com/contest/1883/submission/233002798

O(NlogN) solution for G2.

Great round

I solved 1883D - In Love by converting a[i] to log(a[i]) and binary searching for the minimum k such that a[i] + k >= a[i-1]. That's so much simpler, but I love your technique!