Thank you for taking part in our contest. We know it leaves a lot to be desired, but we believe you can find something interesting among our problems.

## A. Nene's Game

Idea: Otomachi_Una

**Tutorial**

Obviously, a person at place $$$p$$$ will be kicked out if and only if $$$p \ge a_1$$$.

Therefore, the answer is $$$\min(n,a_1-1)$$$.

**Solution**

```
#include<bits/stdc++.h>
using namespace std;
#define ll long long
#define MP make_pair
mt19937 rnd(time(0));
int a[105];
void solve(){
    int q,k,n;
    cin>>k>>q;
    for(int i=1;i<=k;i++) cin>>a[i];
    for(int i=1;i<=q;i++){
        cin>>n;
        cout<<min(a[1]-1,n)<<' ';
    }
    cout<<endl;
}
int main(){
    ios::sync_with_stdio(false);
    int _;cin>>_;
    while(_--) solve();
}
```

## B. Nene and the Card Game

Idea: Otomachi_Una

**Hint**

The number of pairs on each side should be the same.

**Tutorial**

For each color (in the following text, "this point" refers to the point someone got by playing a card with this color):

If you have both cards of this color in your hand, you will be able to get this point.

If Nene has both cards, you will not be able to get this point.

If you have only one card of this color, you cannot get this point if Nene uses the following strategy:

When you play one of your paired cards, Nene also plays one of her paired cards; otherwise, Nene holds the card with the same color, so she can play it and get this point.

Therefore, the answer is the number of pairs in your hand.

**Solution**

```
#include<bits/stdc++.h>
using namespace std;
const int MAXN=4e5+5;
int cnt[MAXN];
void solve() {
    int n,ans=0;
    scanf("%d",&n);
    fill(cnt+1,cnt+n+1,0);
    for(int i=1,a;i<=n;++i) scanf("%d",&a),ans+=(++cnt[a]==2);
    printf("%d\n",ans);
}
signed main() {
    int T;
    scanf("%d",&T);
    while(T--) solve();
    return 0;
}
```

## C. Nene's Magical Matrix

Idea: Otomachi_Una

**Hint**

When $$$n=3$$$, the optimal matrix is the following:

```
1 2 3
2 2 3
3 3 3
```

**Tutorial**

The optimal matrix would be:

```
1 2 3 ... n
2 2 3 ... n
3 3 3 ... n
...........
n n n ... n
```

Construction method:

```
for i in [n, n-1, ..., 1]:
set the i-th column to [1, 2, ..., n];
set the i-th row to [1, 2, ..., n];
```

This takes exactly $$$2n$$$ operations.

**Proof**

For the final matrix, we define $$$f(x)$$$ as the number of elements greater than or equal to $$$x$$$. The sum of all elements in the matrix is $$$\sum_{i=1}^n f(i)$$$, because an element with value $$$x$$$ is counted exactly $$$x$$$ times in this sum.

Now, we prove that $$$f(x) \le n^2-(x-1)^2$$$:

Let's rewrite the problem to make it a little simpler:

You have an $$$n\times n$$$ matrix. In each operation, you can paint exactly $$$x-1$$$ cells white and $$$n-(x-1)$$$ cells black in a row or a column. Prove that there will be at most $$$n^2-(x-1)^2$$$ black cells.

Try to strengthen the conclusion by stating that:

For any matrix of size $$$n\times m$$$, each operation can paint a row into $$$x$$$ white cells and $$$m-x$$$ black cells, or a column into $$$y$$$ white cells and $$$n-y$$$ black cells. No matter how we paint, the final matrix will have at most $$$nm-xy$$$ black cells.

We will prove this by induction.

If $$$x=m$$$ or $$$y=n$$$, the conclusion holds.

Otherwise, if the last operation is to paint a row, then this row has exactly $$$m-x$$$ black cells. And, by induction, other rows will contain at most $$$(n-1)m-x(y-1)$$$ black cells. Painting a column in the last step is similar.
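Written out, the induction step is just this bookkeeping:

```latex
\underbrace{(m-x)}_{\text{last painted row}} + \underbrace{\bigl[(n-1)m - x(y-1)\bigr]}_{\text{other rows, by induction}}
= m - x + nm - m - xy + x = nm - xy .
```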

Then, we have proven the conclusion above.

Since the construction above maximizes each $$$f(x)$$$, it is the optimal answer.

**Solution**

```
#include<bits/stdc++.h>
using namespace std;
#define ll long long
#define MP make_pair
mt19937 rnd(time(0));
void solve(){
    int n;cin>>n;
    int ans=0;
    for(int i=1;i<=n;i++) ans+=(2*i-1)*i;
    cout<<ans<<' '<<2*n<<endl;
    for(int i=n;i>=1;i--){
        cout<<"1 "<<i<<' ';
        for(int j=1;j<=n;j++) cout<<j<<' ';
        cout<<endl;
        cout<<"2 "<<i<<' ';
        for(int j=1;j<=n;j++) cout<<j<<' ';
        cout<<endl;
    }
}
int main(){
    ios::sync_with_stdio(false);
    int _;cin>>_;
    while(_--) solve();
}
```

## D. Nene and the Mex Operator

Idea: Otomachi_Una

**Hint**

What is the answer when $$$a_i = 0$$$?

**Tutorial**

When all $$$a_i = 0$$$, the sum can reach $$$n^2$$$ by making every $$$a_i = n$$$ in the end. Construction:

```
//! a function which sets a_1 ... a_k into k.
function solve(k):
if k is 1:
operate [1,1]
return
end if
solve(k-1);
for i in [k-2, ..., 1]:
operate [1,i] //! this sets a_1 ... a_i into 0
solve(i)
//! here, a should be [i, i, ..., i, i+1, i+2, ..., k-1, 0]
end for
//! here, a should be [1, 2, 3, ..., k-1, 0]
operate [1,k]
return
```

Here, `solve(k)` will take about $$$2^k$$$ operations.

Since doing operation $$$[l,r]$$$ will make $$$a_l,\cdots,a_r \le r-l+1$$$, if for all $$$l\le i\le r$$$, $$$a_i$$$ is included in at least one of the operations and $$$a_{l-1},a_{r+1}$$$ are not, the optimal strategy will be setting $$$a_i = r-l+1$$$ for $$$i\in[l,r]$$$ using the construction above.

Finally, we can use DFS or DP to determine whether each element is included in operations.

The number of operations used will not exceed $$$2^n$$$.

**Solution**

```
#include<bits/stdc++.h>
using namespace std;
int n,a[20],cnt[20];
vector <array<int,2>> I;
void oper(int l,int r) {
    fill(cnt,cnt+n+1,0);
    for(int i=l;i<=r;++i) if(a[i]<=n) ++cnt[a[i]];
    int mex=0;
    while(cnt[mex]) ++mex;
    for(int i=l;i<=r;++i) a[i]=mex;
    I.push_back({l,r});
}
void build(int l,int r) {
    if(l==r) {
        if(a[l]) oper(l,r);
        return;
    }
    build(l,r-1);
    if(a[r]!=r-l) oper(l,r),build(l,r-1);
}
void solve() {
    scanf("%d",&n),I.clear(),memset(a,0,sizeof(a));
    for(int i=0;i<n;++i) scanf("%d",&a[i]);
    int cur=0,ans=0;
    for(int s=0;s<(1<<n);++s) {
        int tmp=0;
        for(int l=0;l<n;++l) {
            if(s&(1<<l)) {
                int r=l;
                while(r+1<n&&(s&(1<<(r+1)))) ++r;
                tmp+=(r-l+1)*(r-l+1);
                l=r;
            } else tmp+=a[l];
        }
        if(tmp>ans) ans=tmp,cur=s;
    }
    for(int l=0;l<n;++l) if(cur&(1<<l)) {
        int r=l;
        while(r+1<n&&(cur&(1<<(r+1)))) ++r;
        build(l,r),oper(l,r),l=r;
    }
    printf("%d %d\n",ans,(int)I.size());
    for(auto i:I) printf("%d %d\n",i[0]+1,i[1]+1);
}
signed main() {
    solve();
    return 0;
}
```

## E. Nene vs. Monsters

idea: Otomachi_Una

**Hint1**

If three consecutive monsters have energy levels $$$0,x,y\ (x,y>0)$$$, the monster with energy level $$$y$$$ will "die" (reach energy level $$$0$$$) eventually.

**Hint2**

If four consecutive monsters have energy levels $$$0,x,y,z\ (x,y,z>0)$$$, what will happen to the monster $$$z$$$?

**Hint3**

If four consecutive monsters have energy levels $$$x,y,z,w\ (x,y,z,w>0)$$$, how many rounds of spells must be used to make at least one of these monsters die?

**Tutorial**

If four consecutive monsters have energy levels $$$x,y,z,w\ (x,y,z,w>0)$$$ and none of them has died after $$$t$$$ rounds of spells, then $$$y$$$ will have received at least $$$t$$$ points of damage, $$$z$$$ at least $$$(t-1)+(t-2)+\cdots=O(t^2)$$$, and $$$w$$$ at least $$$O(t^3)$$$.

That is to say, let $$$V=\max_{i=1}^n a_i$$$, after $$$O(\sqrt[3]{V})$$$ rounds, at least one of $$$x,y,z,w$$$ will die.

So, we can simulate the process by brute force until there are no four consecutive alive monsters, and then the problem is reduced to the one described in Hint 2.

If four consecutive monsters have energy levels $$$0,x,y,z\ (x,y,z>0)$$$, then $$$x$$$ remains alive, $$$y$$$ eventually dies, and before dying it deals $$$D=(y-x)+(y-2x)+\cdots+(y\bmod x)$$$ damage to $$$z$$$. Therefore, $$$z$$$ remains alive if and only if $$$z>D$$$.

The time complexity is $$$O(n\sqrt[3]{V})$$$.

Bonus: Actually, it can be shown that after $$$O(\sqrt[k]{V})$$$ rounds, there will be no $$$k$$$ consecutive alive monsters. Taking $$$k$$$ bigger than $$$3$$$ can further reduce the time complexity, but it is harder to implement and gains little in actual performance.

**Solution**

```
#include<bits/stdc++.h>
using namespace std;
#define ll long long
#define MP make_pair
mt19937 rnd(time(0));
const int MAXN=2e5+5;
int n,a[MAXN];bool b[MAXN];
bool check(){
    a[n+1]=a[1],a[n+2]=a[2],a[n+3]=a[3];
    for(int i=1;i<=n;i++)
        if(a[i]&&a[i+1]&&a[i+2]&&a[i+3]) return true;
    return false;
}
void solve(){
    cin>>n;
    for(int i=1;i<=n;i++) cin>>a[i];
    if(n==2){
        while(a[1]&&a[2]){
            a[2]=max(0,a[2]-a[1]);
            a[1]=max(0,a[1]-a[2]);
        }
        b[1]=(a[1]>0);b[2]=(a[2]>0);
    }else if(n==3){
        while(a[1]&&a[2]&&a[3]){
            a[2]=max(0,a[2]-a[1]);
            a[3]=max(0,a[3]-a[2]);
            a[1]=max(0,a[1]-a[3]);
        }
        b[1]=(!a[3]&&a[1]);
        b[2]=(!a[1]&&a[2]);
        b[3]=(!a[2]&&a[3]);
    }else{
        while(check()){
            for(int i=1;i<=n;i++) a[i%n+1]=max(0,a[i%n+1]-a[i]);
        }
        for(int i=1;i<=n;i++) b[i]=false;
        auto attack=[&](ll x,ll y){
            ll k=x/y;
            return (2*x-(k+1)*y)*k/2;
        };
        for(int p=1;p<=n;p++)
            if(a[p]&&a[p%n+1]) a[p%n+1]=max(0,a[p%n+1]-a[p]);
            else break;
        for(int i=1;i<=n;i++) if(!a[i]&&a[i%n+1]){
            b[i%n+1]=true;
            b[(i+2)%n+1]=(a[(i+2)%n+1]>attack(a[(i+1)%n+1],a[i%n+1]));
        }
    }
    int cnt=0;
    for(int i=0;i<=n;i++) if(b[i]) cnt++;
    cout<<cnt<<endl;
    for(int i=1;i<=n;i++) if(b[i]) cout<<i<<' ';
    cout<<endl;
}
int main(){
    ios::sync_with_stdio(false);
    int _;cin>>_;
    while(_--) solve();
}
```

## F. Nene and the Passing Game

idea: Otomachi_Una

**Hint**

Two people $$$i$$$ and $$$j\ (i<j)$$$ can pass the ball to each other directly if and only if $$$[i+l_i,i+r_i]\cap[j-r_j,j-l_j]\neq \varnothing$$$

**Tutorial**

According to the hint above, we can build the following graph:

- There are $$$2n$$$ vertices in the graph. Vertex $$$i$$$ links to the vertices in $$$([n+i-r_i,n+i-l_i]\cup[n+i+l_i,n+i+r_i])\cap[n+1,2n]\cap \mathbb{Z}$$$.

That is, assuming vertices $$$1$$$ to $$$n$$$ are players, vertices $$$n+1$$$ to $$$2n$$$ are temporary spots, and player $$$i$$$ links to all the spots where his/her arm can reach.

Then, the answer will be the number of connected components in this graph that contain at least one vertex with index less than or equal to $$$n$$$.

But there's still a little problem with the solution. Two players $$$i, j\ (i<j)$$$ satisfying $$$[i+l_i,i+r_i]\cap[j+l_j,j+r_j]\neq \varnothing$$$ (that is, both players reaching out their "right arms") are incorrectly counted as connected.

To solve that, we can delete all the vertices $$$n+x$$$ such that $$$\forall i,\ x\not\in[i-r_i,i-l_i]$$$ or $$$\forall i,\ x\not\in[i+l_i,i+r_i]$$$ (that is, nobody's left/right arm can reach $$$x$$$). Finding such $$$x$$$ can be done easily in $$$O(n)$$$.

The last issue is that the graph contains $$$O(n^2)$$$ edges. But since we only care about connectivity, the operation "link $$$x$$$ to $$$[y,z]$$$" can be replaced by "link $$$x$$$ to $$$y$$$, and link $$$i$$$ to $$$i+1$$$ for all $$$i$$$ in $$$[y,z-1]$$$". After that, removing duplicate edges reduces the number of edges to $$$O(n)$$$.

Finally, counting connected components in a graph can be easily done in $$$O(n)$$$, so the time complexity is $$$O(n)$$$.

**Solution**

```
#include<bits/stdc++.h>
using namespace std;
#define ll long long
#define MP make_pair
const int MAXN=2e6+5;
int n,tot,le[MAXN],ri[MAXN];
int fa[MAXN<<1],s[MAXN],t[MAXN],pre[MAXN],suf[MAXN];
inline int find(int x){
    while(x^fa[x]) x=fa[x]=fa[fa[x]];
    return x;
}
void solve(){
    cin>>n;
    for(int i=1;i<=2*n+1;i++) fa[i]=i;
    for(int i=0;i<=n+1;i++) s[i]=t[i]=pre[i]=suf[i]=0;
    for(int i=1;i<=n;i++){
        cin>>le[i]>>ri[i];
        s[max(1,i-ri[i])]++;s[max(0,i-le[i])+1]--;
        t[min(n+1,i+le[i])]++;t[min(n,i+ri[i])+1]--;
    }
    tot=n;
    for(int i=1;i<=n;i++){
        s[i]+=s[i-1],t[i]+=t[i-1];
        if(s[i]&&t[i]) suf[i]=pre[i]=++tot;
    }
    suf[n+1]=0;
    for(int i=1;i<=n;i++) pre[i]=(pre[i]?pre[i]:pre[i-1]);
    for(int i=n;i>=1;i--) suf[i]=(suf[i]?suf[i]:suf[i+1]);
    for(int i=1;i<=n;i++){
        int l=max(1,i-ri[i]),r=max(0,i-le[i]);
        if(l<=r){
            l=suf[l],r=pre[r];
            if(l&&r&&l<=r) for(int i=find(l);i<r;i=find(i)) fa[i]=i+1;
        }
        l=min(n+1,i+le[i]),r=min(n,i+ri[i]);
        if(l<=r){
            l=suf[l],r=pre[r];
            if(l&&r&&l<=r) for(int i=find(l);i<r;i=find(i)) fa[i]=i+1;
        }
    }
    for(int i=1;i<=n;i++){
        int l=max(1,i-ri[i]),r=max(0,i-le[i]);
        if(l<=r){
            l=suf[l],r=pre[r];
            if(l&&r&&l<=r) fa[find(i)]=find(l);
        }
        l=min(n+1,i+le[i]),r=min(n,i+ri[i]);
        if(l<=r){
            l=suf[l],r=pre[r];
            if(l&&r&&l<=r) fa[find(i)]=find(l);
        }
    }
    int ans=0;
    for(int i=1;i<=tot;i++) if(fa[i]==i) ans++;
    cout<<ans<<'\n';
}
int main(){
    ios::sync_with_stdio(false);
    int _;cin>>_;
    while(_--) solve();
    return 0;
}
```

I didn't see the meaning of splitting E into E1 and E2; the difference is only $$$k=2$$$ and $$$k=3$$$... The pretests of both problems are so weak, and the time limit for E2 is tight. (upd: seems that `std::set` has too large a constant, and I should use a list.)

not tight.. my solution runs only 300ms during the testing.

Are you really Japanese?

Not exactly. E1 has a simpler solution. Brute force until there are no more than 2 consecutive alive monsters; then, of any remaining consecutive alive monsters, only the first one will survive. The time complexity is $$$O(n\sqrt{V})$$$. So you don't need to handle three consecutive monsters, which involves some math calculations. In my case, I wrote a binary search + matrix multiplication to deal with <=4 consecutive monsters, which is dumb.

True

In fact, E2 also has a simpler solution. Brute force until there are no more than 3 consecutive monsters, and it's easy to do that.

What's the difference between this and E2? If you can handle the <= 2 case, it will be easy to handle the <= 3 case too.

But it seems that many participants passed E1 but not E2.

Look at the weak pretests...

Not everyone solved them like in the tutorial. Some of them only make guesses following their intuition, and they can pass E1 but not E2. Like recursing on the continuous segments: if you recurse until the length of the segment is only 1, it is easy to hack, and the data would be like

`0 1 100000`

repeated. But if you recurse until the length is <=2 and then stop, it seems hard to hack, and even though the participants don't know the time complexity, it passes E1. When it comes to E2, the participants are not brave enough to guess that if they stop at length <=3 they will pass.

Very well explained. Thanks!

thx for the tutorial u r amazing guys and now i'm specialist

I became pupil, feels good!

Me too fam! recently became pupil again.

I became expert, feels good!

I became expert,not bad /(ㄒoㄒ)/~~

thank you for such an amazing round. I'm so happy because this was my best ever codeforces round. Solved 3 and placed 2.3k. Thank you so much Chinese boy

Thanks for the fast tutorial. It's interesting in 1956D - Nene and the Mex Operator to be able to change all values of any k consecutive elements to k

Yeah! Then bitmask to see which values we should keep, and convert all other ranges to their length squared! Incredibly cool problem.

Very cool problem indeed! I especially like the tower-of-hanoi-like construction of the k consecutive elements equal to k with MEX.

Can you share the link to that problem

How do we actually check which elements to keep and whom to add in the range?

You literally just check all $$$2^n$$$ possibilities by iterating over all subsets and checking the maximum value.

Have a look at this (starting from the main function, you can ignore all other functions as they’re related to the construction of the sequence of operations rather than the answer)

Oo damn. The tutorial said something about dp and I was stuck on that. Thanks

You can do DP as well: 256761916. I would not recommend it though, because it is quite complicated and error-prone, and checking all subsets is simply a less risky way (in contest).

ayush2321 avighnakc hashman I actually neither used bitmask nor dp; instead, I used a greedy approach where I always took the segment that got me the highest increase.

Here's the code for it:

Can you please explain your greedy solution?

Sure. Since I already know that for each segment of length k I can make its sum = k^2, I check all possible segments and take the segment that has the maximum increase = k^2 - original_sum.

Update this segment and then look again for other segments, and stop when there's no segment with a positive increase.

The complexity of this, I assume, is n^2 * (number_of_segments_changed + 1), which in total is less than n^3.

```
ll n; cin>>n;
vector<ll> a(n);
for(ll i=0;i<n;i++) cin>>a[i];
vector<ll> pref(n+1,0);
pref[0]=0; pref[1]=a[0];
for(ll i=2;i<=n;i++) pref[i]=pref[i-1]+a[i-1];
vector<pair<ll,ll>> ans;
ll i=0;
ll sol=0;
while(i<n) {
    ll flag=0;
    for(ll j=n;j>i;j--) {
        ll len=j-i;
        ll sum=pref[j]-pref[i];
        if(len*len>sum) {
            sol+=len*len;
            ll temp=j;
            while(temp>i+1) {
                ans.push_back({i+1,temp});
                ans.push_back({temp,temp});
                temp--;
            }
            ans.push_back({i+1,j});
            i=j;
            flag=1;
            break;
        }
    }
    if(flag==0) {
        sol+=a[i];
        i++;
    }
}
cout<<sol<<" "<<ans.size()<<ln;
for(auto it:ans){
    cout<<it.first<<" "<<it.second<<ln;
}
```

I almost used the same method but I am getting a wrong answer. Could you help me?

The thing we discussed here is only how to get the best sum, constructing the operations needed to achieve this answer is more complicated than what you did.

In the proof section of the editorial for C, is there a chance that you've mixed up the assignments?

This should be $$$x=m$$$ or $$$y=n$$$, right?

yes, updated

What is the intuition behind knowing that you need to construct this matrix in problem C?

1 2 3 ... n

2 2 3 ... n

3 3 3 ... n

...............

n n n ... n

It is a little easier to think about performing the operations in reverse. Initially you have

Then, using the permutation

`1, 2, 3, 4`

, perform the operation on the first row and column.

Using the permutation on the second row and column:

Notice that I didn't overwrite the previous values at `a[1][2]` and `a[2][1]`. This is because I'm performing the operations in reverse, so the one I perform first will be the one that lasts. Doing this on the third row and column gives:

and finally the fourth row (or column; in reality you only need $$$2n-1$$$ operations),

which gets you your answer.

But how can you arrive at the conclusion that you need to construct that matrix in the first place? Constructing the matrix itself is easy.

I arrived at that conclusion by brute forcing smaller cases ($$$n=3$$$ and $$$n=4$$$). It looked like a pattern arose, and I couldn't seem to improve it further, so yeah.

If you're interested, the editorial also proves that this answer is optimal.

During the round I tried to act greedily, I wanted to build a 4 * 4 table entirely from elements equal to 4. After looking at the cases, I realized that there are a maximum of 7 such elements. And then, using this logic, you can complete the corners and get a table one larger in size. (In my opinion, in such problems you need to write everything down on a piece of paper and you certainly don’t need to strictly prove everything)

There's no intuition, just guessing without proof. It's a bad problem

Though the proof in the editorial is quite clear, it's hard for everyone to actually think about it in a contest.

Define `g(x)` as the number of elements in the matrix equal to `x`.

Given `n`, it remains to prove that `g(x) >= 2*x - 1` for `x = 1 ... n`?

Then you can prove the recurrence `f(x+1) = f(x) - g(x) <= n^2 - x^2` from the assumption `f(x) <= n^2 - (x-1)^2`.

Unfortunately, such problems are only becoming more frequent :(

My thoughts: since we're replacing each row and column with permutations, we can simply give 1, 2, ..., n to every row (left -> right) and every column (up -> down); other permutations are equivalent. Then, for every a[i][j], the maximum possible value you can get is max(i, j), which corresponds to the matrix.

For me there is some intuition. Let me try to explain

You can try to think greedily: for some number $$$x$$$, how many times do you think it's possible for $$$x$$$ to appear in the final matrix? My intuition says it should appear at most $$$2n - 1$$$ times, because you can only set rows and columns to permutations of $$$1$$$ to $$$n$$$.

Now which number should you assign to $$$x$$$? Again my intuition says it should be the maximum number in the permutation, which is $$$n$$$.

Now since I used $$$2n - 1$$$ of my $$$n$$$'s, I'm left with $$$n^2 - 2n + 1$$$ spots in the matrix, which is equal to $$$(n-1)^2$$$, and my permutation cannot contain $$$n$$$ anymore. Notice that you're left with the same problem as above, but with $$$n = n - 1$$$.

My intuition says that if I follow this logic, my final matrix should look like:

The sum over all the elements of the matrix in problem C can be calculated by $$$\frac{n * (n + 1) * (4*n - 1)}{6}$$$, here’s the link to OEIS, although I can’t figure out why this is the exact sequence, I wonder if some of you guys would be so kind as to explain it to me.

This is the sequence because the optimal matrix is always

and $$$\displaystyle \sum_{i=1}^{n} i(2i-1) = \frac{n(n+1)(4n-1)}{6}$$$ (see WolframAlpha)

You can also derive this yourself by rewriting $$$\displaystyle \sum_{i=1}^{n} i(2i-1) = 2 \sum_{i=1}^{n} i^2 - \sum_{i=1}^{n} i = 2\frac{n(n+1)(2n+1)}{6} - \frac{n(n+1)}{2} = \frac{n(n+1)(4n-1)}{6}$$$

Thanks!

In problem E2 I am getting TLE on test 54 (there are 56 tests in total), very close to the optimal approach. Could anyone tell me how to resolve the TLE?

Submission link: https://codeforces.com/contest/1956/submission/256603484

code: (a very large template, omitted here; see the submission link above)

The link for editorial in contest announcement has a typo

So many people got E1 by simply simulating the process for a decent number of rounds (say 500). Was this one of the intended approaches for E1?

Simply simulating can be hacked using the testcase:

```
1 2 1e5 0 1 2 1e5 0 1 2 1e5 ...
```

it can be proven that simulating sqrt(2 * A) rounds is enough for E1 (and also for E2, but it's just too many to actually do). If you actually read the editorial, it says that for general k, O(A^(1/k)) rounds are needed.

Why are problems put with codeforc.es?

As per the editorial, we define f(x) as the number of elements greater than or equal to x. The sum of all elements in the matrix is ∑f(i), because an element with value x will be counted x times in the formula.

What is the motivation for stating this f(x)? I was thinking about how many times I can preserve n, then n-1, then n-2, and so on during the n operations, and that is how I figured out this construction.

The motivation is that it helps you prove your intuition. It’s not actually required to solve the problem without a proof.

Can someone please help me figure out task D. My solution is to iterate over a bitmask of length n, which will iterate over the assignment operations. Then I try to make the maximum mex on the segment and update the answer. At the end, I restore the answer.

This is my code: https://codeforces.com/contest/1956/submission/256538264

Thanks to the authors for a wonderful round!

Try this case :

2

0 0

your operations give a maximum answer of 2, but it is possible to get 4.

[0, 0]

l = 1, r = 2: the array becomes [1, 1]

then l = 1, r = 1: the array becomes [0, 1]

finally l = 1, r = 2: the array becomes [2, 2]

Ohhh, I get it. Thanks!

easy to cheese F in $$$O(n\log^2{n})$$$ (segtree + small-to-large): 256586729

what's your approach? there's tons of log^2 solutions to F but none of them should pass, crazy that yours does

Firstly, two nodes $$$i$$$ and $$$j$$$, $$$(i < j)$$$ have an edge iff:

I will iterate over $$$i$$$ in increasing order, and for each $$$i$$$, I will add edges to all $$$j$$$ such that:

How to add all such edges?

Sort indices of nodes in increasing order of their $$$j - l_j$$$ value and call this array $$$S$$$. Notice that an edge $$$(i, j)$$$ can be trivially found by checking if the minimum value of $$$j - r_j$$$ on a particular suffix of $$$S$$$ is $$$\leq i + r_i$$$. We will speed this process up using a segment tree.

Build a segment tree upon $$$S$$$, where you store the value $$$j - r_j$$$ at the location of index $$$j$$$. The segment tree needs to support two things:

It is easy to build such a segment tree.

Now, you just iterate over $$$i$$$ in increasing order, find an edge $$$i \to j$$$ if it exists, and if it does, then you will merge the smaller component into the larger component and update the component representative of all nodes in the smaller component in the segment tree.

I used some problem specific things to optimise the iterative segment tree, and some other bs to somehow get it to pass.

crazy

I added some asserts in your merge function and sum of all dsu movings is less than 2*N for all cases, so your solution is effectively nlogn for these testcases.

Not sure why. Perhaps weak test cases / hard to generate counterexample, but there's also a lot of structure to the queries: you're basically "adding points" above the diagonal and doing "merge queries" below the diagonal (look at the conditions relative to the point (i, i)), and they're sort of "reflected" across this line (in the opposite way of how you would usually view it).

Not sure what that means for the structure, would be very cool if you could prove some kind of bound on this process, seems hard though.

i will let you know if i find any provable bounds

I think a segment tree is overkill. Instead, we can use a priority queue: My code

yup, this relies on the observation that the order of "range merges" and "point add query" don't matter, since point add queries will only ever add points to the right of (i, i), while its query is always to the to the left of (i, i), so no need to do anything w/ dynamic data structures. only realized that this morning.

i suppose the whole 2d merging thing is a roundabout way to arrive at the editorial's first observation then, lol

i think problem F is very similar to this problem. Have no idea why its rating is 3000

Thank you for fast editorial

256520814: can anyone please help me with this? I am not able to understand why I am getting a wrong answer at test 3.

`1<<num` will overflow; `num` can be as high as 200000.

Why are you calculating duplicates that way, by the way? You can simply use a map.

What is optimal strategy to make strong tests for problem E?

Notice that the number of contestants is not that big, so the number of distinct solutions they may come up with is small, and during the contest they can be killed by hacks. But jokes apart, it is difficult for the jury to figure out before the contest which kinds of solutions are improbable, and that makes competitive programming so challenging and so weird sometimes due to weak test data.

I guess another question is how to construct a testcase such that it takes about V^(1/k) turns to have no k consecutive monsters alive. I think(?) some of the FST on E2 came from people not simulating enough steps.

Can someone please help me out with my code for E1/E2. (WA on tc 2)

Here is my submission 256685619

Consider the case: n = 5 and a[N] = {5,6,0,0,4}

With your approach, the answer is a[2] and a[5] while the real answer is only a[5]

The difference is because you didn't add

after checking that there are no four consecutive non-zero numbers.

Thanks for your time.

You're welcome! I've made the same mistake as you at first.

C was cray cray

Struggling to understand how the induction step works in the proof for problem C:

"Otherwise, if the last operation is to paint a row, then this row has exactly $$$m-x$$$ black cells. And, by induction, other rows will contain at most $$$(n-1)m-x(y-1)$$$ black cells. Painting a column in the last step is similar."

Would really appreciate it if someone could elaborate on this proof.

A very good competition, but D's output construction is hard.

can someone explain this line of the editorial of the problem E?

If four consecutive monsters have energy levels $$$x, y, z, w$$$ ($$$x, y, z, w > 0$$$) and they did not die after $$$t$$$ rounds of spells, then $$$y$$$ will receive at least $$$t$$$ points of damage, $$$z$$$ will receive at least $$$(t-1) + (t-2) + \ldots = O(t^2)$$$ damage, and $$$w$$$ will receive at least $$$O(t^3)$$$ damage.

That is to say, letting $$$V = \max(a_i)$$$ over $$$i = 1$$$ to $$$n$$$, after $$$O(V^{1/3})$$$ rounds, at least one of $$$x, y, z, w$$$ will die.

What does "at least $$$O(t^2)$$$" mean? What does it want to convey?

Cool problem D. I enjoyed the process of coming up with the idea to construct the sequence.

In problem D, what does the operation really do? I still don't understand T_T

It's just setting the segment $$$[L,R]$$$ in the array to its $$$MEX$$$

Is the solution to search over all subsets (2^N possibilities), change each subarray whose positions are "turned on" in the set to become the MEX, leave the turned-off positions as they are, and then take the one with the maximum sum?

I recommend you to read the editorial again but this is simply what happens:

The first fact: we can change all values in some subarray $$$[L,R]$$$ to $$$R-L+1$$$ (the size of that subarray), as we can set all its values to 0 and start building our new subarray.

Based on that fact, we check all the possible combinations: a run of consecutive ones in our mask means we try changing all the values in that subarray to its length, and otherwise we leave the value as it is. Now take the greatest sum and keep that state.

At the end, iterate again over the $$$mask$$$ which resulted in the greatest sum, and change each subarray of consecutive ones to its length.

This is done like the Tower of Hanoi problem: to create a $$$MEX$$$ of value $$$n$$$, we must first build the value $$$n-1$$$, and so on (try solving the Tower of Hanoi to get the idea).

The editorial here just applies the operations: if we have to perform an operation on $$$[L,R]$$$, it changes this subarray to its length by the simple $$$MEX$$$ algorithm. I hope this helps; if something isn't clear, please let me know.

Just to make it clear: the solve(k) of the editorial is just for producing the sequence of operations, right? If the problem only asked for the optimal value, we could just find it through all 2^n possibilities without having to use solve(k)?

Yes, exactly.

You could even find the optimal solution in $$$O(N^3)$$$ or even better in $$$O(N^2)$$$ using $$$DP$$$.

got AC, thanks dude :D

Good job! :3

The problem links are incorrect, for example https://codeforc.es/contest/1956/problem/E2 — there is a dot between "codeforc" and "es".

Auto comment: topic has been updated by Otomachi_Una (previous revision, new revision, compare).

In problem D, I used the online compiler ideone in its default setting and previously existing code from GeeksforGeeks for the sliding window technique. That's why the issue of similarity arose. I insist that this fact be understood and taken into account. I will take care of this issue from now on. [problem:D][submission:256537093]

Curious — how would one write a checker for D?

why should it be difficult?

If it is easy, then I don't know what data structure to maintain that computes range MEX and also does range assignments. Maybe I could find a way to compute range MEX with a segment tree or something, but I'm definitely not sure about range assignments.

Bro, you just need a 1D array of size 18 to do that.

damn, forgot about the constraints

Can someone explain one thing to me: my solution 257525583 for problem C does seem to work, but its output doesn't even match the sample test case. I was confused by this problem for 2 days trying to match the sample output, and then I realized it gets accepted without matching it.

In F's tutorial,

There are $$$2n$$$ vertices in the graph. Vertex $$$i$$$ links to vertices $$$([n+i-r_i,n+i-l_\color{blue}{1}\color{black}]\cup[n+i+l_i,n+i+r_i])\cap[n+1,2n]\cap Z$$$. Shouldn't it be $$$l_\color{red}{i}$$$, or did I misunderstand it?

Here's my perspective on problem B for this round: https://codeforces.com/contest/1956/problem/B "My initial approach:

For a card type with 2 copies in my hand, holding both guarantees one point. For a type where I hold only 1 copy: if my pair count (the number of types with 2 copies in my hand) exceeds the opponent's, then my score is the number of identical pairs plus the remaining single cards I can win on the field. Otherwise (if my pair count <= the opponent's), because I play first, I must start by placing my paired cards and only afterwards place single cards; in this scenario I can only score from pairs of identical cards.

Correct solution:

Note that I play first and both sides prioritize playing their pairs of identical numbers. Ultimately, only the score from pairs of identical cards needs to be output, since my pair count (the number of types with 2 copies) always remains <= the opponent's. Picture each side's cards as rows of letters, ABCDE over ABCDE: exchanges are only possible between cards of the same letter in the upper and lower rows. Whenever one side completes a pair of identical letters, the other side completes one simultaneously, so both sides always hold the same number of paired letters and single letters. Combining this observation with my initial conclusion: since the pair counts are equal and I play first, the single cards are a disadvantage for me, so I can only score the pairs of 2 that I hold."

It can be proven in E that if we brute-force simulate the array for $$$\log n$$$ rounds, there must be at least one zero in it. That was my intuition when I first saw the problem. It's not crucial to solving the problem, but I think it is fun.

Input: `2 1 2`

Output: `1 1 1 1 1 1 7 3 1 1 1 2 1 2 1 2 2 1 1 2`

Checker Log: `wrong answer Integer parameter [name=m] equals to 7, violates the range [0, 4] (test case 2)`

Why is this giving me the error?