### Enigma27's blog

By Enigma27, 18 months ago,

### 1208A — XORinacci

The sequence is $a$, $b$, $a\oplus b$, $a$, $b$, $a\oplus b$ $\cdots$

Since the sequence has a period of $3$, $f(i) = f(i \bmod 3)$: indeed, $f(3) = b \oplus (a \oplus b) = a$ and $f(4) = (a \oplus b) \oplus a = b$.

Code
#include <bits/stdc++.h>
using namespace std;

int main()
{
    int test, a, b, n;
    cin >> test;
    while(test--){
        cin >> a >> b >> n;
        switch(n % 3){
            case 0:
                cout << a << endl;
                break;
            case 1:
                cout << b << endl;
                break;
            default:
                cout << (a ^ b) << endl;
        }
    }
    return 0;
}


### 1208B — Uniqueness

After removing a sub-segment, a prefix and a suffix remain, possibly of length $0$. Let us fix a prefix that does not contain any duplicate elements and find the longest suffix we can keep without repeating any element. We can use a map/set to keep track of the elements.

Time complexity: $O(n^2 \cdot log(n))$

Code
#include <bits/stdc++.h>

using namespace std;

const int N = 2e3 + 5;
int a[N];

int main(){
    int n;
    scanf("%d", &n);
    for(int i = 0; i < n; ++i){
        scanf("%d", &a[i]);
    }

    int ans = n - 1;
    map<int, int> freq;
    for(int i = 0; i < n; ++i){
        // check that the kept prefix a[0..i-1] has no duplicates
        bool validPrefix = true;
        for(int j = 0; j < i; ++j){
            freq[a[j]]++;
            if(freq[a[j]] == 2){
                validPrefix = false;
                break;
            }
        }
        // extend the kept suffix to the left while all kept elements stay unique
        int min_index_suffix = n;
        for(int j = n - 1; j >= i; --j){
            freq[a[j]]++;
            if(freq[a[j]] == 1){
                min_index_suffix = j;
            }
            else break;
        }
        if(validPrefix){
            ans = min(ans, min_index_suffix - i);
        }

        freq.clear();
    }
    cout << ans << '\n';
    return 0;
}

Bonus

Try to solve this problem in $O(n \cdot \log(n))$.
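One possible direction for the bonus (a sketch under our own naming, not necessarily the intended reference solution): grow the kept prefix one element at a time and maintain, with a second pointer, the smallest valid start of the kept suffix. The suffix pointer only ever moves right, so with a `map` the whole scan is $O(n \cdot \log(n))$; an array or hash map over compressed values brings it close to $O(n)$. The helper name `minRemoval` is ours:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Minimum length of a subsegment to remove so the remaining elements are
// distinct. Two pointers: p = length of the kept prefix, j = start of the
// kept suffix; j is monotone, so the scan is O(n log n) with a map.
int minRemoval(const vector<int>& a){
    int n = a.size();
    map<int,int> cnt;                       // values currently kept (each once)
    int j = n;                              // kept suffix is a[j..n-1]
    while(j > 0 && !cnt.count(a[j-1])) cnt[a[j-1]] = 1, --j;

    int ans = j;                            // empty prefix: remove a[0..j-1]
    for(int p = 1; p <= n; ++p){
        int v = a[p-1];                     // element joining the prefix
        if(j < p){ cnt.erase(a[j]); ++j; }  // keep the suffix past the prefix
        while(cnt.count(v) && j < n){ cnt.erase(a[j]); ++j; }
        if(cnt.count(v)) break;             // duplicate inside the prefix itself
        cnt[v] = 1;
        ans = min(ans, j - p);              // removed segment is a[p..j-1]
    }
    return ans;
}
```

For example, `minRemoval({3, 1, 4, 1, 4, 9})` keeps the prefix $3,1,4$ and the suffix $9$, removing a segment of length $2$.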

### 1208C — Magic Grid

Divide the grid into four quadrants. Assign distinct integers to the first quadrant from $0$ to $(\frac{N^2}{4} - 1)$. Copy this quadrant to the other three. This way the XOR of each row and column becomes $0$, since every value appears an even number of times in each full row and column.

Now, to make the numbers distinct among the quadrants, multiply all numbers by $4$ and add $1$, $2$, and $3$ to the numbers in the $1^{st}$, $2^{nd}$ and $3^{rd}$ quadrants respectively. The XOR of each row and column still remains $0$, as $N/2$ is even (each added constant appears an even number of times in every row and column), while the elements become distinct and stay in the range $[0, N^2-1]$.

Another approach in this problem is to use a $4 \times 4$ grid given in the sample itself and replicate it in $N \times N$ grid by adding $16, 32, 48 \cdots$ to make the elements distinct.

Of course, there are multiple ways to solve the problem. These are just a few of them.
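The tiling idea can be sketched as follows. Since the sample grid itself is not reproduced here, the $4 \times 4$ base block below is built with the quadrant trick instead of being copied from the sample; `buildMagic` and `isMagic` are our own helper names:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Build an N x N magic grid (N divisible by 4) by tiling a 4x4 base block
// whose every row and column XORs to 0, adding 16 per block so entries stay
// distinct. Adding 16*block only touches high bits, so each row/column XOR
// stays 0 (16*block cancels over the 4 cells of each block segment).
vector<vector<int>> buildMagic(int n){
    int base[4][4];
    for(int i = 0; i < 2; i++)
        for(int j = 0; j < 2; j++){
            int f = 4 * (2 * i + j);                  // quadrant construction
            base[i][j]     = f + 1;  base[i][j+2]   = f + 2;
            base[i+2][j]   = f + 3;  base[i+2][j+2] = f;
        }
    vector<vector<int>> g(n, vector<int>(n));
    for(int i = 0; i < n; i++)
        for(int j = 0; j < n; j++){
            int block = (i / 4) * (n / 4) + (j / 4);  // block index, row-major
            g[i][j] = base[i % 4][j % 4] + 16 * block;
        }
    return g;
}

// Check: all entries distinct in [0, n^2) and every row/column XORs to 0.
bool isMagic(const vector<vector<int>>& g){
    int n = g.size();
    vector<bool> seen(n * n, false);
    for(auto& row : g) for(int v : row){
        if(v < 0 || v >= n * n || seen[v]) return false;
        seen[v] = true;
    }
    for(int i = 0; i < n; i++){
        int rx = 0, cx = 0;
        for(int j = 0; j < n; j++){ rx ^= g[i][j]; cx ^= g[j][i]; }
        if(rx != 0 || cx != 0) return false;
    }
    return true;
}
```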

Code
#include <bits/stdc++.h>
using namespace std;
#define int     long long
#define ld      long double
#define pii     pair<int ,int>
#define pld     pair<ld ,ld>
#define F       first
#define S       second
#define mod     1000000007
#define pb      push_back
#define mp      make_pair
#define all(x)  x.begin(),x.end()
#define mset(x) memset(x, 0, sizeof(x));
#define ios     ios_base::sync_with_stdio(false);cin.tie(0);cout.tie(0);

const int N = 1000;

int n, grid[N][N];

void run(){
    cin >> n;

    /*
    Divide the grid into quadrants as follows:
    1 | 2
    -----
    3 | 4
    */

    int fill = 0;
    for(int i = 0; i < n/2; i++){
        for(int j = 0; j < n/2; j++){
            grid[i][j]              = 4*fill + 1;   // 1st quadrant
            grid[i][j + n/2]        = 4*fill + 2;   // 2nd quadrant
            grid[i + n/2][j]        = 4*fill + 3;   // 3rd quadrant
            grid[i + n/2][j + n/2]  = 4*fill;       // 4th quadrant
            fill++;
        }
    }
    for(int i = 0; i < n; i++){
        for(int j = 0; j < n; j++){
            cout << grid[i][j] << " ";
        }
        cout << endl;
    }
}

signed main(){
    int tests = 1;
    // cin >> tests;
    for(int i = 1; i <= tests; i++) run();
    return 0;
}


### 1208D — Restore Permutation

##### Approach 1

Let us fill the array with numbers from $1$ to $N$ in increasing order.

$1$ will lie at the last index $i$ such that $s_{i} = 0$. Place $1$ there, remove index $i$ from consideration, and for all indices greater than $i$, reduce their $s$ values by $1$. Repeat this process for the numbers $2, 3, \ldots, N$; in the turn that places the number $i$, reduce the suffix values by $i$.

To find the last index with value zero, we can use a segment tree supporting range-minimum queries with lazy propagation, descending into the rightmost subtree whose minimum is $0$.

Time complexity: $O(N \cdot log(N))$

Code
#pragma GCC optimize("Ofast")
#include <iostream>
#include <vector>
#include <map>
#include <set>
#include <queue>
#include <algorithm>
#include <string>
#include <cmath>
#include <cstdio>
#include <iomanip>
#include <fstream>
#include <cassert>
#include <cstring>
#include <unordered_set>
#include <unordered_map>
#include <numeric>
#include <ctime>
#include <bitset>
#include <complex>

using namespace std;

typedef long long ll;

const ll INF = 1e16 + 239;
const int MAXN = 1e6 + 239;

ll a[MAXN];

namespace SegmentTree {
    int n;
    ll t[4 * MAXN];
    ll mod[4 * MAXN];

    void pull(int v) {
        t[v] = min(t[2 * v + 1], t[2 * v + 2]);
    }

    void apply(int v, ll val) {
        t[v] += val;
        mod[v] += val;
    }

    void push(int v) {
        if (mod[v] != 0) {
            apply(2 * v + 1, mod[v]);
            apply(2 * v + 2, mod[v]);
            mod[v] = 0;
        }
    }

    void build(int v, int l, int r) {
        if (l + 1 == r) {
            t[v] = a[l];
        } else {
            int m = (r + l) >> 1;
            build(2 * v + 1, l, m);
            build(2 * v + 2, m, r);
            pull(v);
        }
    }

    void add(int v, int l, int r, int ql, int qr, ll val) {
        if (r <= ql || qr <= l) {
            return;
        } else if (ql <= l && r <= qr) {
            apply(v, val);
        } else {
            push(v);
            int m = (r + l) >> 1;
            add(2 * v + 1, l, m, ql, qr, val);
            add(2 * v + 2, m, r, ql, qr, val);
            pull(v);
        }
    }

    // index of the rightmost zero (assumes the tree minimum is 0)
    int go_down(int v, int l, int r) {
        if (l + 1 == r) {
            return l;
        } else {
            push(v);
            int m = (r + l) >> 1;
            int res = -1;
            if (t[2 * v + 2] == 0) {
                res = go_down(2 * v + 2, m, r);
            } else {
                res = go_down(2 * v + 1, l, m);
            }
            pull(v);
            return res;
        }
    }

    void init(int n_) {
        n = n_;
        build(0, 0, n);
    }

    void add(int l, int r, ll val) {
        add(0, 0, n, l, r, val);
    }

    void add(int pos, ll val) {
        add(0, 0, n, pos, pos + 1, val);
    }

    int last_zero() {
        return go_down(0, 0, n);
    }
}

signed main() {
    ios_base::sync_with_stdio(false);
    cin.tie(0);

    int n;
    cin >> n;
    for (int i = 0; i < n; i++) {
        cin >> a[i];
    }
    SegmentTree::init(n);
    vector<int> ans(n, -1);
    for (int i = 1; i <= n; i++) {
        int pos = SegmentTree::last_zero();
        ans[pos] = i;
        SegmentTree::add(pos, INF);          // this index is now taken
        SegmentTree::add(pos + 1, n, -i);    // indices after pos lose value i
    }
    for (auto t : ans) {
        cout << t << ' ';
    }
    cout << endl;
}

##### Approach 2

For every $i$ from $N$ down to $1$, let $x$ be the value of $s_{i}$. It means that, for some $k$, the $k$ smallest unused numbers have sum $x$; the number at position $i$ is then the $(k+1)^{st}$ smallest unused number. We place it in the output permutation, mark it used, and continue moving left. This can be implemented using a BIT (Fenwick tree) with binary lifting.

Thanks to real.emerald for expressing the solution in the above words.

Code
#include <bits/stdc++.h>
using namespace std;

const int N = 2e5 + 5;
long long BIT[N], s[N];
int n;
int ans[N];

void update(int x, int delta){
    for(; x <= n; x += x & -x)
        BIT[x] += delta;
}

long long query(int x){
    long long sum = 0;
    for(; x > 0; x -= x & -x)
        sum += BIT[x];
    return sum;
}

// binary lifting on the BIT: largest num whose prefix sum is <= prefSum;
// num + 1 is then the (k+1)-st smallest unused number
int searchNumber(long long prefSum){
    int num = 0;
    long long sum = 0;
    for(int i = 21; i >= 0; --i){
        if((num + (1 << i) <= n) && (sum + BIT[num + (1 << i)] <= prefSum)){
            num += (1 << i);
            sum += BIT[num];
        }
    }
    return num + 1;
}

int main(){
    scanf("%d", &n);
    for(int i = 1; i <= n; ++i){
        update(i, i);
        scanf("%lld", &s[i]);
    }
    for(int i = n; i >= 1; --i){
        ans[i] = searchNumber(s[i]);
        update(ans[i], -ans[i]);   // mark ans[i] as used
    }
    for(int i = 1; i <= n; ++i){
        printf("%d", ans[i]);
        if(i < n){
            printf(" ");
        }
        else printf("\n");
    }
}


### 1208E — Let them slide

For every array $i$ from $1$ to $N$, let us maintain two pointers $L[i]$ and $R[i]$, representing the range of elements of the $i^{th}$ array that can reach the current column index $j$.

Initially, all $L[i]$ and $R[i]$ are set equal to $0$.

As we move from the $j^{th}$ column to the $(j+1)^{th}$, some $L[i]$ and $R[i]$ change. Specifically, the pointers change only for those arrays whose size is at least $\min(j, W-j-1)$.

Since the overall movement of $L[i]$ and $R[i]$ is bounded by $2 \cdot$ size_of_array($i$), the total number of pointer moves is of the order $O(\sum a[i])$. For every array, we need a range-max query over $(L[i], R[i])$.

We can use multisets / segment trees / deques to update the answer corresponding to an array whenever its $L[i]$, $R[i]$ change. This way we get complexity $O(N)$ or $O(N \cdot \log(N))$, depending on the implementation.
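For reference, the deque technique boils down to the classic monotonic-deque sliding-window maximum. Below is a sketch for a fixed window width (in the actual problem the window endpoints move according to the $L[i]$/$R[i]$ events, but the push/pop rules are identical); `windowMax` is our own helper name:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Sliding-window maximum in O(n): res[i] = max of a[i..i+k-1].
// The deque holds indices whose values decrease from front to back,
// so the front is always the maximum of the current window.
vector<int> windowMax(const vector<int>& a, int k){
    deque<int> dq;
    vector<int> res;
    for(int i = 0; i < (int)a.size(); i++){
        // new element dominates smaller ones at the back
        while(!dq.empty() && a[dq.back()] <= a[i]) dq.pop_back();
        dq.push_back(i);
        // drop the front once it slides out of the window [i-k+1, i]
        while(dq.front() <= i - k) dq.pop_front();
        if(i >= k - 1) res.push_back(a[dq.front()]);
    }
    return res;
}
```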

Code
#include <bits/stdc++.h>
using namespace std;

const int N = 1e6 + 5;
long long ans[N];
multiset<int> global_ms[N];
// per-column events: (value, array index) becoming available / unavailable
vector<pair<int,int>> add_element[N], remove_element[N];

int main(){
    int n, w;
    scanf("%d %d", &n, &w);
    for(int i = 0; i < n; ++i){
        int cnt;
        scanf("%d", &cnt);
        for(int j = 0; j < cnt; ++j){
            int x;
            scanf("%d", &x);
            // element j of array i can occupy columns [j, j + w - cnt]
            add_element[j].push_back(make_pair(x, i));
            remove_element[j + w - cnt].push_back(make_pair(x, i));
        }
        if(cnt < w){
            // an empty cell (value 0) is possible in columns [0, w-cnt-1]
            // (array pushed right) and [cnt, w-1] (array pushed left)
            add_element[0].push_back(make_pair(0, i));
            remove_element[w - 1 - cnt].push_back(make_pair(0, i));
            add_element[cnt].push_back(make_pair(0, i));
            remove_element[w - 1].push_back(make_pair(0, i));
        }
    }
    for(int i = 0; i < w; ++i){
        for(auto j : add_element[i]){
            int idx = j.second;
            int val = j.first;
            ans[i] -= (global_ms[idx].empty()? 0: *global_ms[idx].rbegin());
            global_ms[idx].insert(val);
            ans[i] += *global_ms[idx].rbegin();
        }

        if(i < w - 1){
            ans[i + 1] = ans[i];
            for(auto j : remove_element[i]){
                int idx = j.second;
                int val = j.first;
                ans[i + 1] -= (*global_ms[idx].rbegin());
                global_ms[idx].erase(global_ms[idx].find(val));
                ans[i + 1] += (global_ms[idx].empty()? 0: *global_ms[idx].rbegin());
            }
        }
        printf("%lld ", ans[i]);
    }
    printf("\n");
}


### 1208F — Bits And Pieces

The idea is to first fix some $a[i]$ and try to get the bits which are off in $a[i]$ from some $2$ elements to the right of $i$. Since we need to maximize the value, we try to get the higher bits first. What we need, then, is: for every number $x$ from $0$ to $2^{21}-1$, the $2$ rightmost positions such that the bits present in $x$ are also present in the elements at those positions. This can be computed by iterating over submasks (slow) or with SOS-DP (fast).

Once we have these positions for every $x$, fix some $a[i]$ and iterate over the bits which are off in $a[i]$, from the highest to the lowest. Let's say the mask of bits collected so far is $w$ and we are considering the $y^{th}$ bit. We can take this bit if both stored positions for $w \mid 2^{y}$ lie to the right of $i$; otherwise we cannot.

The final answer is the maximum of $a[i] \mid w$ over all $i$ from $1$ to $N$.

Time complexity: $O((M+N) \cdot \log M)$, where $M$ is the maximum element of the array.
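As a minimal illustration of the SOS-DP step, here is the same bit-by-bit folding with plain counts instead of the pair of rightmost positions; `supersetCounts` is our own helper name:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Superset SOS-DP sketch: f[mask] = how many array values v satisfy
// (v & mask) == mask, i.e. contain mask as a submask.
// O(M log M) with M = 2^B, versus O(3^B) for iterating submasks directly.
vector<int> supersetCounts(const vector<int>& a, int B){
    vector<int> f(1 << B, 0);
    for(int v : a) f[v]++;                  // base: each value counts for itself
    for(int b = 0; b < B; b++)              // fold in one bit at a time
        for(int mask = 0; mask < (1 << B); mask++)
            if(!((mask >> b) & 1))
                f[mask] += f[mask | (1 << b)];
    return f;
}
```

The solution keeps, per mask, the two rightmost qualifying indices instead of a count, but the folding order and the `mask` / `mask | (1 << b)` relation are the same.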

Code
#include <bits/stdc++.h>

#define ll          long long
#define pb          push_back
#define pii         pair<int,int>
#define vi          vector<int>
#define vii         vector<pii>
#define mi          map<int,int>
#define mii         map<pii,int>
#define all(a)      (a).begin(),(a).end()
#define x           first
#define y           second
#define sz(x)       (int)x.size()
#define endl        '\n'
#define hell        1000000007
#define rep(i,a,b)  for(int i=a;i<b;i++)
using namespace std;

int n,a[1000006],ans,bits;
pii dp[1<<21];

// record index idx as one of the two rightmost positions whose value contains
// mask as a submask; dp[mask] = (second-rightmost, rightmost), -1 if absent
void add(int mask,int idx){
    if(idx<0 || idx==dp[mask].y) return;
    if(idx>dp[mask].y){
        dp[mask].x=dp[mask].y;
        dp[mask].y=idx;
    }
    else if(idx>dp[mask].x){
        dp[mask].x=idx;
    }
}

void merge(int m1,int m2){ // fold the candidates of dp[m2] into dp[m1]
    add(m1,dp[m2].y);
    add(m1,dp[m2].x);
}

void solve(){
    memset(dp,-1,sizeof dp);
    cin>>n;
    rep(i,0,n){
        cin>>a[i];
        bits=max((int)log2(a[i]),bits);
    }
    bits++;
    rep(i,0,n) add(a[i],i);     // base cases of the SOS-DP
    rep(i,0,bits){              // superset SOS-DP, one bit at a time
        rep(mask,0,1<<bits){
            if(!((mask>>i)&1)) merge(mask,mask|(1<<i));
        }
    }
    rep(i,0,n){
        int cur=(1<<bits)-1-a[i];   // bits that are off in a[i]
        int opt=0;
        for(int j=bits-1;j>=0;j--){
            if((cur>>j)&1){
                // take bit j if two positions to the right of i carry opt|2^j
                if(dp[opt^(1<<j)].y!=-1 and dp[opt^(1<<j)].x>i){
                    opt^=(1<<j);
                }
            }
        }
        if(dp[opt].y!=-1 and dp[opt].x>i) ans=max(ans,a[i]^opt);
    }
    cout<<ans<<endl;
}

signed main(){
    ios::sync_with_stdio(false);
    cin.tie(0);
    cout.tie(0);
    int t=1;
    // cin>>t;
    while(t--){
        solve();
    }
    return 0;
}


### 1208G — Polygons

If we choose a polygon with side length $l$, it is profitable to choose polygons with side lengths as divisors of $l$ as well, because this will not increase the answer.

So our final set would be such that for every polygon with side length $l$, we would have polygons with side length as divisors of $l$ as well.

All polygons should share at least one common point in the final arrangement, say $P$; otherwise we can rotate them to obtain such a $P$. For a formal proof, please refer to this comment by orz.

Let us represent the points on the circle by their fractional distance from the point $P$: for a $k$-sided polygon these are $0, \frac{1}{k}, \frac{2}{k}, \ldots, \frac{k-1}{k}$.

Now the number of unique fractions over all the polygons is our answer, and it equals the sum of $\phi(l)$ over all side lengths $l$ of the polygons, because an $l$-sided polygon requires $\phi(l)$ extra points compared to those of its divisors.

One observation needed for the final solution is that $\phi(l) \ge \phi(d)$ for every divisor $d$ of $l$. So we can sort the side lengths by their $\phi$ values and take the smallest $k$ of them; this minimizes the number of points while keeping our set closed under divisors.

NOTE: We cannot consider a polygon of side length $2$. This can be handled easily.

Code
#include <bits/stdc++.h>
using namespace std;

int phi[1000006];

// sieve computing Euler's totient for all values up to n
void process_phis(int n){
    iota(phi, phi + n + 1, 0);
    for(int i = 2; i <= n; i++){
        if(phi[i] == i){ // i is prime
            phi[i] = i - 1;
            for(int j = 2 * i; j <= n; j += i){
                phi[j] = (phi[j] / i) * (i - 1);
            }
        }
    }
}

int main(){
    int n, k;
    cin >> n >> k;
    if(k == 1){
        cout << 3 << endl;
        return 0;
    }
    process_phis(n);
    k += 2; // phi(1) and phi(2) always enter the sum via the divisor closure
    sort(phi + 1, phi + n + 1);
    cout << accumulate(phi + 1, phi + k + 1, 0LL) << endl;
    return 0;
}

Code
/**
* This line was copied from template
* This is nk_sqrt_300.cpp
*
* @author: Nikolay Kalinin
* @date: Fri, 23 Aug 2019 22:28:23 +0300
*/
#include <bits/stdc++.h>

using namespace std;

#ifdef LOCAL
#define eprintf(...) fprintf(stderr, __VA_ARGS__)
#else
#define eprintf(...) 42
#endif

using ll = long long;
using ld = long double;
using D = double;
using uint = unsigned int;
template<typename T>
using pair2 = pair<T, T>;
using pii = pair<int, int>;
using pli = pair<ll, int>;
using pll = pair<ll, ll>;

#define pb push_back
#define mp make_pair
#define all(x) (x).begin(),(x).end()
#define fi first
#define se second

const int maxn = 100005;
const int infk = maxn;

const int BSZ = 700;

vector<int> gr[maxn];
int th[maxn][2];
int threshold[maxn];
int up[maxn], fastup[maxn];
bool isleaf[maxn];
int curcnt[maxn];
int was[maxn];
int curk, curkid;
int n;
int timer;
int c[maxn];
bool isinteresting[maxn];
vector<int> interesting;
pair2<int> qs[BSZ];
vector<int> at[2 * BSZ];
queue<int> q;
int path[maxn];
vector<int> ks;
int intid[maxn];
vector<int> order;
int newid[maxn];
vector<int> grinit[maxn];
int compk[2 * infk + 5];
vector<int> dead_ends[maxn]; // referenced below but never populated in this sketch

void leavedesc(int cur, int pr)
{
newid[cur] = order.size();
up[newid[cur]] = newid[pr];
order.pb(cur);
for (int i = 0; i < (int)grinit[cur].size(); i++)
{
if (grinit[cur][i] == pr)
{
swap(grinit[cur][i], grinit[cur].back());
grinit[cur].pop_back();
i--;
} else
{
leavedesc(grinit[cur][i], cur);
}
}
}

inline void computecolor(int cur)
{
if (was[cur] != timer) curcnt[cur] = 0;
int cntred = curcnt[cur];
int cntblue = (int)gr[cur].size() - cntred;
if (cntblue - cntred >= curk) c[cur] = 1;
else c[cur] = 0;
}

inline void pushtoparent(int cur)
{
if (cur == 0) return;
int par = fastup[cur];

int colorup = 1 - (curk >= th[cur][c[cur]]);

if (was[par] != timer)
{
was[par] = timer;
curcnt[par] = 0;
}
curcnt[par] += 1 - colorup;
}

int main()
{
scanf("%d%d", &n, &curk);
for (int i = 0; i < n - 1; i++)
{
int u, v;
scanf("%d%d", &u, &v);
u--, v--;
grinit[u].pb(v);
grinit[v].pb(u);
}
leavedesc(0, -1);

for (int i = 0; i < n; i++)
{
for (auto t : grinit[i]) gr[newid[i]].pb(newid[t]);
}

for (int i = 0; i < n; i++) scanf("%d", &c[newid[i]]);

for (int i = 0; i < n; i++) if (c[i] != -1) isleaf[i] = true;
int nq;
scanf("%d", &nq);
for (int lq = 0; lq < nq; lq += BSZ)
{
int rq = min(nq, lq + BSZ);
memset(isinteresting, 0, sizeof isinteresting);
ks.clear();
ks.pb(curk);
for (int q = lq; q < rq; q++)
{
int t;
scanf("%d", &t);
if (t == 1)
{
int v;
scanf("%d", &v);
v--;
v = newid[v];
isinteresting[v] = true;
qs[q - lq] = {v, -1};
} else if (t == 2)
{
int v, newc;
scanf("%d%d", &v, &newc);
v--;
v = newid[v];
isinteresting[v] = true;
qs[q - lq] = {v, newc};
} else
{
int newk;
scanf("%d", &newk);
qs[q - lq] = {-1, newk};
ks.pb(newk);
}
}

sort(all(ks));
ks.resize(unique(all(ks)) - ks.begin());
int kssz = ks.size();
ks.pb(infk);
int curcompkid = 0;
for (int i = -infk; i <= infk; i++)
{
compk[i + infk] = curcompkid;
if (curcompkid < kssz && ks[curcompkid] == i) curcompkid++;
}

isinteresting[0] = true;
interesting.clear();

for (int i = 0; i <= kssz; i++) at[i].clear();

for (int cur = 0; cur < n; cur++)
{
path[cur] = -1;
if (isinteresting[cur])
{
path[cur] = -2;
}
}

for (int cur = n - 1; cur >= 0; cur--)
{
int ret = -1;
if (isinteresting[cur])
{
intid[cur] = interesting.size();
interesting.pb(cur);
}
if (path[cur] == -1)
{
if (isleaf[cur])
{
if (c[cur] == 0) at[compk[-infk + infk]].pb(cur);
else at[compk[infk + infk]].pb(cur);
} else
{
curcnt[cur] = 0;
int cntred = 0;
int cntblue = gr[cur].size();
at[compk[cntblue - cntred + 1 + infk]].pb(cur);
}
} else if (path[cur] == -2)
{
ret = cur;
} else
{
ret = path[cur];
}
if (cur == 0) break;
if (ret == -1) continue;

if (path[up[cur]] == -1) path[up[cur]] = ret;
else if (path[up[cur]] >= 0)
{
isinteresting[up[cur]] = true;
fastup[path[up[cur]]] = up[cur];
fastup[ret] = up[cur];
path[up[cur]] = -2;
} else
{
fastup[ret] = up[cur];
}
}

timer++;
for (int i = 0; i <= kssz; i++)
{
for (auto t : at[i]) q.push(t);
while (!q.empty())
{
int cur = q.front();
q.pop();
if (was[cur] == timer) continue;
was[cur] = timer;
if (path[up[cur]] == -1)
{
curcnt[up[cur]]++;
int cntred = curcnt[up[cur]];
int cntblue = gr[up[cur]].size() - cntred;
if (cntblue - cntred + 1 <= ks[i]) q.push(up[cur]);
else at[compk[cntblue - cntred + 1 + infk]].pb(up[cur]);
} else if (path[up[cur]] == -2)
{
int id = intid[up[cur]];
} else { // in path
}
}
}

for (int cur = n - 1; cur >= 0; cur--) if (path[cur] != -1)
{
int cntchildren = gr[cur].size();
if (path[cur] == -2)
{
th[cur][0] = -infk;
th[cur][1] = +infk;
} else
{
th[cur][0] = cntchildren + 1;
th[cur][1] = cntchildren + 1;
th[cur][0] = min(th[cur][0], max(th[path[cur]][0], cntchildren - 1 - 1 + 1));
th[cur][1] = min(th[cur][1], max(th[path[cur]][1], cntchildren - 1 - 1 + 1));
for (int i = 0; i < (int)dead_ends[cur].size(); i++)
{
int cntred = i + 1;
int cntblue = cntchildren - cntred;
th[cur][0] = min(th[cur][0], max(dead_ends[cur][i], cntblue - cntred + 1));
th[cur][1] = min(th[cur][1], max(dead_ends[cur][i], cntblue - cntred + 1));

cntred = i + 1 + 1;
cntblue = cntchildren - cntred;
th[cur][0] = min(th[cur][0], max(max(th[path[cur]][0], dead_ends[cur][i]), cntblue - cntred + 1));
th[cur][1] = min(th[cur][1], max(max(th[path[cur]][1], dead_ends[cur][i]), cntblue - cntred + 1));
}
th[path[cur]][0] = th[cur][0];
th[path[cur]][1] = th[cur][1];
}
}

::curkid = lower_bound(all(ks), curk) - ks.begin();
for (int q = 0; q < rq - lq; q++)
{
timer++;
if (qs[q].fi == -1)
{
curk = qs[q].se;
::curkid = lower_bound(all(ks), curk) - ks.begin();
} else if (qs[q].se != -1) c[qs[q].fi] = qs[q].se;
else
{
for (auto v : interesting)
{
if (!isleaf[v]) computecolor(v);
pushtoparent(v);
}
printf("%d\n", c[qs[q].fi]);
}
}
}
return 0;
}




 » 18 months ago, # | ← Rev. 2 →   +10 In H, what is the easiest way to consider a vertex which has many children in the compressed tree? We have to calculate $the~number~of~such~children+ 1$ boundary values, how to do it without sorting the boundary values from the rest of the children?
•  » » 18 months ago, # ^ |   +27 I don't understand what you mean by $the\,number\,of\,such\,children+1$ boundary values. I don't compute any boundary value for an interesting vertex, I compute its color only when a query with fixed $k$ comes.btw, the tutorial is updated to a more complete version (someone posted a sketch instead of the tutorial first by mistake).
 » 18 months ago, # | ← Rev. 2 →   +7 I did a $O(n^2\log{n})$ solution for question B and it got TLE. I don't understand why it got TLE.My solution :https://codeforces.com/contest/1208/submission/59465481After contest I did a $O(n\log{n})$ solution. (binary search + two pointer)
•  » » 18 months ago, # ^ |   +8 Yes, same with me. Someone please tell why.
•  » » 18 months ago, # ^ | ← Rev. 3 →   0 Your first solution is $O(n^2\log^2{n})$: $O(log{N})$ is binsearch and $O(n^2\log{n})$ is checking.
•  » » » 18 months ago, # ^ |   +3 A few minutes after contest, I have replaced map by unordered_map, so the complexity of my program is $O(n^2logn)$. And I still have TLE on a later test case.Submission :https://codeforces.com/contest/1208/submission/59486971
•  » » » » 18 months ago, # ^ |   +14 Worst case lookup for an unordered_map is O(n). It's better to compress the array of input elements and use an array to hash.
•  » » » » » 18 months ago, # ^ |   +3 Or use this: https://codeforces.com/blog/entry/62393
•  » » » » » » » 18 months ago, # ^ |   0 I tried it after the contest and it didn't work for me, got TLE on 27 (Link to Submission)
•  » » » » » » » » 8 months ago, # ^ | ← Rev. 2 →   0 Same, even I got TLE at test case 27. Did you figure out how to solve it? I am using binary search over a window of size mid with an unordered_set (with custom hash) to check pairwise distinctness.
•  » » » » » » » » 7 months ago, # ^ | ← Rev. 2 →   0 Check my submission: https://codeforces.com/contest/1208/submission/89659269Got accepted using binary search over the possible length intervals and then simply checking if removal of any subsegment of that length can lead to a good answer.Time complexity is : O(n^2 log(n))?
•  » » » » » » » » » 4 months ago, # ^ |   0 You can also avoid that binary search step using two different maps.My Submission
•  » » » » » » » » » 4 months ago, # ^ |   0 Nice
•  » » » » 18 months ago, # ^ |   0
•  » » » 18 months ago, # ^ |   0 the "Ruler method" is O(N),i think it is ok,is there any bug in it?
•  » » » 14 months ago, # ^ | ← Rev. 2 →   0 But for n<=2000, the worst case time(for time complexity — n^2(log n)^2) is still less than 2 seconds,why is it giving TLE then?
•  » » 18 months ago, # ^ |   0 I also did the same and got TLE in TC 29. What could be the reason that it timed out ?
•  » » 10 months ago, # ^ |   0 what is wrong in this solution for problem B. (https://codeforces.com/contest/1208/submission/77475023) I keep getting wrong answer in test 54.
•  » » » 9 months ago, # ^ |   0 i am getting same wrong answer on test 54
 » 18 months ago, # |   +13 You can also reduce the runtime for B by $\log n$ using a constant-time map instead of a log-time one, so it's possible to solve in $O(n)$. (example: 59466996)
•  » » 18 months ago, # ^ |   0 the "Ruler method" is O(N),i think it is ok,is there any bug in it?
 » 18 months ago, # | ← Rev. 2 →   -15 So Problem C could have been a multiple of 2 as well, since all you need is to divide into 4 quadrants? Or use the following $2 \times 2$ matrix as a building block: [[0, 1], [2, 3]]
•  » » 18 months ago, # ^ |   +34 Apparently no, because we don't just need to divide the grid in $4$ quadrants but also add some number to each quadrant. If $N$ is of the form $4k+2$, the element would be added odd times in a row/col of a quadrant, which changes the xor value.Also, the $2$ x $2$ matrix you mentioned does not have xor of a row equal to that of a col.
•  » » » 18 months ago, # ^ | ← Rev. 2 →   +1 I used the $2\times 2$ matrix that mkagenius mentioned as a building block -- but it also relies on $N$ being a multiple of $4$. The 2x2 building block has xor $2$ in every column and $1$ in every row, so pairs of building blocks cancel out as long as you have an even number of building blocks.See submission 59457320 for an example.
•  » » » 18 months ago, # ^ |   0 you can use the 2*2 building block but not exactly like that the trick in that submission that in every column we do (2^k xor 2^k +1) so it will always be one it got accepted i don't know if he meant the same thing by building 2*2 matrix https://codeforces.com/contest/1208/submission/59518656
 » 18 months ago, # |   +19 As for D, I have something of an idea, but I'm not sure whether it's possible to come up with an efficient algo. So, for every index from n-1 to 0, let's say the value of the input array is x at a given index. So it means there are k smallest unused numbers whose sum is x. We simply put the k+1st number in the output permutation at this index, and continue (move left). The question is, can this be implemented efficiently?
•  » » 18 months ago, # ^ |   +21 Exactly, this is the 2nd approach. Please take a look and thanks for expressing it well.
•  » » » 18 months ago, # ^ |   +9 you put his comment in the editorial, Nice trick there
•  » » » 18 months ago, # ^ |   0 Oh, now I get it, but what's BIT?
•  » » » » 18 months ago, # ^ |   0 Binary Indexed Tree, https://cp-algorithms.com/data_structures/fenwick.html (other name for fenwick tree)
•  » » » » 18 months ago, # ^ |   +6 Binary Indexed Tree.It is a useful data structure for calculating prefix sums.
•  » » » » » 18 months ago, # ^ |   0 Thanks! I'll try to implement it, then (during the contest I didn't solve D, only got this idea)
•  » » » » » » 18 months ago, # ^ |   -6 your rating is reaching 1900 and you still don't know what BIT is?
•  » » » » » » » 18 months ago, # ^ |   +3 Knowing BIT is not necessary. It is useful, but not necessary. A segment tree can do everything a BIT can do.
•  » » » » » » » » 18 months ago, # ^ |   0 I know this but when you are at a process of learning segment tree, you search on the Internet and BIT is everywhere.Even when you don't plan to learn it.
•  » » » » » 18 months ago, # ^ | ← Rev. 3 →   +4 Second approach for D(about searching in BIT in O(logN)) has been well explained in this(Link) blog.
•  » » » » » » 18 months ago, # ^ |   +3 Wow, thanks a lot!
•  » » » » » » 18 months ago, # ^ |   0 i guess normal bs will work in this case, i saw tourist solution, he has implemented naively.but thnx for the link, learnt something new.
•  » » » » » » » 18 months ago, # ^ |   0 This is because N <= 2e5 so N(LogN)^2 is roughly 3e7 which will work easily in 2sec but what if the constraints are higher like N <= 1e6 then N(logN)^2 will be almost 4e8 which might not work in 2 sec.
•  » » » » » » 18 months ago, # ^ |   0 sorry i dont understand, what is the element of the BIT represent? and why always follow a pattern like 1 3 3 10 5...? thanks in advance
•  » » » » » » » 18 months ago, # ^ |   0 You can learn about Binary Indexed Tree from here BIT. It has the best explanation till now i have found online.
•  » » » » » » 18 months ago, # ^ |   0 can you explain why he has used update(-ans[i],ans[i])? i can't get it.
•  » » » » » » » 18 months ago, # ^ |   0 its BIT, example update (5,-5) means that we already use 5 and we need to remove it from our BIT... hope it helps
•  » » » » » » 18 months ago, # ^ |   0 very helpful, thanks
•  » » 18 months ago, # ^ |   0 Can you explain what you mean by unused here and the basic intution used here?
•  » » » 18 months ago, # ^ |   0 Here, the term "unused" means the numbers that have not yet been included in the permutation (while iterating from the last). When a number is used, you can replace it with 0, so that it is not counted for numbers larger than it that occur before it.
 » 18 months ago, # |   0 any Easy way to solve E ?
•  » » 18 months ago, # ^ |   +3 I think maybe segment tree can solve it.
•  » » » 18 months ago, # ^ |   0 how ?
•  » » 18 months ago, # ^ | ← Rev. 20 →   +8 The first optimization tip I didn't quite cotton on at first is, instead of producing the answer values one by one, to instead create an array as a difference map (opposite of prefix sum, i.e. $D_i = ans_i - ans_{i-1}$), then getting the final answer from its prefix sum. This allows the input arrays to be processed individually, greatly reducing memory usage and avoiding the need for sorting. (Be sure to add a check to skip the "boring" changeless middle part for the smaller arrays to avoid $O(nw)$ or worse)When processing the arrays, there are several ways you can use to get range maximums: Segment tree. Main difficulty is implementing one, but it's worth learning, as you can use it to solve other problems. Once you have one, it's pretty easy to use; just query for the range between your pointers. $O(\log n) \times O(n)$ queries = $O(n \log n)$ Sorted multiset/countmap. Really easy to use, just add and remove (if using a countmap, be sure to delete zero-count entries), then grab the maximum. Same asymptotics as segment tree, but tends to be faster for this problem, as the multiset tends to keep a smaller size compared to the segment tree. Deque. Fastest of these, with $O(n)$, but requires some strategic thinking to use here; details in spoiler. SpoilerWhen advancing the right pointer, pop (remove) all rightmost elements that are strictly less than the new element before adding it. When advancing the left pointer, pop the leftmost element if it is equal to the element lost. The range maximum is now always the leftmost element of the deque.Another tip is that whichever method you use, it can be helpful to think of the arrays as having fictitious "0" elements to its left and right (to represent the empty space). Whether you actually add those elements to the array is up to you, but be sure to adjust your math accordingly.
 » 18 months ago, # | ← Rev. 2 →   0 I am using unordered_map in B with binary search, but still getting tle, Complexity after using unordered map should be O(n^2log(n)) right?. Here my submission: https://codeforces.com/contest/1208/submission/59490232
•  » » 18 months ago, # ^ | ← Rev. 2 →   0 No. Worst case for lookup in Unorderd_map is O(N). Have a look at this(Click Here) blog.
•  » » 18 months ago, # ^ | ← Rev. 2 →   +1 Can be done in N*LOGN*LOGN https://codeforces.com/contest/1208/submission/59453068 using Binary Search and 2 pointers (USING MAP)
•  » » » 18 months ago, # ^ |   0 What is the logic behind the condition m.size()==n-val?
•  » » » » 18 months ago, # ^ |   0 After removing Val number of elements array should contain only distinct element and size of map should be equal to number of distinct elements , checking the same..
•  » » » » » 18 months ago, # ^ |   0 Your solution is N(logN)^2 as you are using map in check function.
•  » » » » » » 18 months ago, # ^ |   0 Yeah , my fault corrected :(
 » 18 months ago, # | ← Rev. 3 →   +3 For a $O(n$ $log$ $n)$ solution to problem B, it is possible to do it with Maximum Segment Tree + Two Pointers. It lead us to 31ms and 256KB.For example, my code: 59492725
•  » » 18 months ago, # ^ | ← Rev. 4 →   0 Yes but it can also be done with binary search and two pointer O(nlog n). It also leads to 31ms..it requires use unordered_map. example : 59494540
•  » » » 11 months ago, # ^ |   0 Is it possible to do problem B in n^2 complexity by iterating over possible start positions of the subsegment to remove and then finding the length of that subsegment using 2 pointer method so that the length is sufficient to avoid having duplicates in remaining portion of string?
 » 18 months ago, # |   0 Can someone help me understand error in my logic for E please?The submission is https://codeforces.com/contest/1208/submission/59494009 .I tried doing it with max heaps for each input array and using restrictions on possible positions reachable tried to implement them.
 » 18 months ago, # |   0 Can someone explain why i got TLE, and how to fix it?
•  » » 18 months ago, # ^ |   +3 Your time complexity is $O(n^2 \cdot \log(n))$, but due to the constant factor of unordered_map it is getting TLE. Try this test case: $n = 2000$, array $2000, 1999, \ldots, 6, 5, 4, 4, 4, 4$. And run it on custom invocation with big numbers.
 » 18 months ago, # |   0 Missed problems on Graphs :)
 » 18 months ago, # |   +161 What is the proof that all polygons in G should have a common point?
•  » » 18 months ago, # ^ |   0 Is G a well-known problem? I was surprised to see so many people getting it AC'ed in <10 minutes.
•  » » » 18 months ago, # ^ |   +5 I don't think that it's known, but it is definitely possible to see the trick pretty quickly and then coding is just a matter of 3 minutes. I think that difficulty-wise it is more like Div1C.
•  » » 18 months ago, # ^ |   +32 Most of the people did proof by AC. Even after getting WA on pretest 11 I was still quite confident in the approach and just looked for some special case/overflow/off-by-one.It is a really nice problem, but I am also very much interested in the proof.
•  » » 18 months ago, # ^ | ← Rev. 5 →   +221 The fact that in problem G all the polygons should contain the same vertex was in Saint Petersburg Lyceum 239 Mathematical Olympiad in 2011 as a problem.Zeroly, we will consider a single point and two diametrically opposite points as regular 1-polygon and 2-polygon as well.Firstly, all the polygons may be considered contained in a one big regular polygon $F$ with $N$ vertices inscribed in the same circle (otherwise the graph formed by the vertices and edges of our regular polygons is disconnected, and thus the number of vertices can be reduced). We will now prove by induction that for every positive integer $N$, if all the polygons are rotated so that they contain a common vertex, the total number of vertices shall not increase. The basis $N=1$ is obvious, so we shall now concentrate on the induction step from all numbers $1, 2, \ldots, N-1$ to $N$.Let $p$ be any prime dividing $N$. $F$ can now be separated into $p$ regular polygons $F_1, F_2, \ldots, F_p$ with $\frac{N}{p}$ vertices each. We are now able to highlight two types of our inscribed polygons: Ones that are completely contained in one of the $F_i$'s. The $k$-polygons that intersect with each of the $F_i$'s by a regular $\frac{k}{p}$ polygon. It's easy to prove that the first case takes place if and only if the multiplicity of the prime $p$ occurring in the prime factorization of $N$ is greater than in $k$, and the second takes place if and only if the multiplicities are the same. Since $k$ is divisor of $N$, exactly one of the cases takes place for each of the polygons.Let us now choose an arbitrary vertex $A_1$ of $F_1$ and rotate all polygons of the second type so that they contain $A_1$. It's easy to prove that since now there exist points $A_2, \ldots, A_p$ in polygons $F_2, \ldots, F_p$ respectively such that each of the polygons of the second type contains all the vertices $A_1, \ldots, A_p$. 
Now we will rotate each of the first-type polygons: if one is contained in $F_k$, we rotate it so that it contains $A_k$.

What happened after these two series of rotations? In each $F_k$ there are some regular polygons of the first type and some pieces of regular polygons of the second type that are regular polygons themselves. We rotated these polygons so that they still lie in $F_k$, but now they contain the point $A_k$. By the induction hypothesis, after the rotations the number of vertices of $F_k$ covered by our polygons does not increase. Summing this over all the $\frac{N}{p}$-vertex polygons $F_k$, we acquire the following: the total number of vertices of $F$ covered by all our polygons does not increase.

Finally, let us rotate the polygons of the first type so that they contain $A_1$ (we can rotate the polygons of the second type as well, but they will just stay still because they already have $A_1$ among their vertices). This does not increase the total number of vertices (but it can decrease it, since some vertices from polygons of the first type may glue together).
•  » » » 18 months ago, # ^ |   +31 Thanks for the proof. The one we thought of during the preparation of the problem was flawed. Fortunately, the assumption is still correct. We will be more careful with proofs in the future.
 » 18 months ago, # |   0 In F, the SOS DP is more like Sum over Supersets DP instead of Sum over Subsets DP. So shouldn't the inner loop go from Max (1 << bits) to 0 instead of 0 to Max?
•  » » 18 months ago, # ^ |   +19 The order you pass through bits doesn't matter.
•  » » » 18 months ago, # ^ | ← Rev. 2 →   0 Yeah, I worded that wrong; I was talking about iterating in this order. But I figured out why the original one works as well.
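The order-independence discussed above can be checked directly. A minimal sketch (the bit count `B` and the function names are made up for illustration): for a fixed bit, the superset-sum DP only touches the disjoint pairs (mask, mask | 1 << bit), so the per-bit pass over masks gives the same result in either direction.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Small illustrative bit count; the same argument works for any B.
const int B = 4;

// Sum over supersets, iterating masks in ascending order.
vector<long long> sosAscending(vector<long long> f) {
    for (int bit = 0; bit < B; ++bit)
        for (int mask = 0; mask < (1 << B); ++mask)
            if (!(mask >> bit & 1)) f[mask] += f[mask | (1 << bit)];
    return f;
}

// Same DP, iterating masks in descending order.
vector<long long> sosDescending(vector<long long> f) {
    for (int bit = 0; bit < B; ++bit)
        for (int mask = (1 << B) - 1; mask >= 0; --mask)
            if (!(mask >> bit & 1)) f[mask] += f[mask | (1 << bit)];
    return f;
}
```

Within one bit's pass, updating `f[mask]` reads only `f[mask | 1 << bit]`, whose own update for this bit is a no-op (it already has the bit set), so no ordering of masks can change the outcome.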
 » 18 months ago, # |   0 https://codeforces.com/contest/1208/submission/59496981 I am getting wrong answer on tc 54. Can't really figure out what's wrong.
•  » » 18 months ago, # ^ |   0 Same happened with me, still confused! https://codeforces.com/contest/1208/submission/59466861
•  » » » 9 months ago, # ^ |   0 I am also getting wrong answer on test 54... what should I do?
•  » » 18 months ago, # ^ |   0 Check this test case to find your mistake:
9
1 2 3 4 1 5 6 7 4
•  » » 18 months ago, # ^ |   0 Check these:
9
7 8 2 9 5 10 8 4 2

15
100 101 1 2 3 4 4 6 7 8 9 10 4 102 103
 » 18 months ago, # |   0 For the B bonus you can do it in O(N log N) with binary search on the segment size; for each candidate size you sweep a sliding window with a hashmap to count the number of distinct elements outside it.
 » 18 months ago, # |   +3 I think B can be solved in O(n) using a hashmap and two pointers. Am I wrong?
•  » » 18 months ago, # ^ |   0 Yes it can be solved in O(n): 59511426
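For reference, here is one possible O(n) two-pointer sketch along the lines of this thread (an illustration, not the linked submission): keep the longest duplicate-free suffix, and while extending a duplicate-free prefix, advance the suffix start so that the two parts stay disjoint. All names are made up.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Minimum length of a sub-segment whose removal leaves only distinct elements.
int minDeletion(const vector<int>& a) {
    int n = a.size();
    unordered_map<int, int> cnt;        // multiplicities inside the current suffix a[j..n-1]
    // j = start of the longest suffix with pairwise-distinct elements
    int j = n;
    while (j > 0 && cnt.find(a[j - 1]) == cnt.end()) { cnt[a[j - 1]] = 1; --j; }
    int ans = j;                        // option: delete the whole prefix [0, j)
    unordered_set<int> prefix;
    for (int i = 0; i < n; ++i) {
        if (prefix.count(a[i])) break;  // the prefix itself has a duplicate: stop
        prefix.insert(a[i]);
        // shrink the suffix until it starts after i and no longer contains a[i]
        while (j <= i || cnt[a[i]] > 0) {
            if (--cnt[a[j]] == 0) cnt.erase(a[j]);
            ++j;
        }
        ans = min(ans, j - (i + 1));    // delete the gap (i, j)
    }
    return ans;
}
```

Each index enters and leaves `cnt` at most once and `j` only moves right, so the total work is O(n) amortized (assuming the hash containers behave; see the discussion about hacked hash tables further down).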
 » 18 months ago, # |   0 May I ask why the first approach for problem D has complexity O(n log n)? It seems to me that for i = 1 to N we are doing a linear scan to decrease the elements accordingly, so shouldn't the complexity be quadratic? Thank you!
•  » » 18 months ago, # ^ | ← Rev. 2 →   +3 For each index from 1 to N, he uses a segment tree that handles both updating the values to the left of index i and finding the position for the number i. Since the segment tree does these operations in $O(\log n)$, we achieve an $O(n \log n)$ time complexity.
 » 18 months ago, # |   0 https://ideone.com/MdCQmv Problem B: I think my solution is O(n log n), but I got wrong answer on test 54. Could someone help me find out where I went wrong?
 » 18 months ago, # |   0 In problem C, I solved it by making the first column the first N even numbers, the second column the first N odd numbers, the third column the next N even numbers, and so on. So if N = 4 it will be like

0 1 8 9
2 3 10 11
4 5 12 13
6 7 14 15

I can see that for every column the XOR equals 0, but why is it also true for every row? Is there any proof of that?
•  » » 18 months ago, # ^ | ← Rev. 2 →   +9 As you can see, in each row $A_{2i-1} \oplus A_{2i}$ is always $1$, and since there is an even number of such pairs, the row XOR is $0$.
•  » » » 18 months ago, # ^ |   0 Thanks, I got it :)
•  » » 11 months ago, # ^ | ← Rev. 2 →   0 How did you arrive at the conclusion that you can arrange the matrix in this way to satisfy the problem conditions?
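The even/odd-column construction asked about in this thread can be verified mechanically. A sketch (the function name is made up): cell (i, j) holds $(j/2)\cdot 2n + 2i + (j \bmod 2)$, which reproduces the 4x4 example above, and for n divisible by 4 every row and every column XORs to 0.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Build the grid where column j holds consecutive evens or odds, then check
// that every row XOR and every column XOR is zero.
bool evenOddGridWorks(int n) {
    vector<vector<int>> g(n, vector<int>(n));
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j)
            g[i][j] = (j / 2) * 2 * n + 2 * i + (j % 2);
    for (int i = 0; i < n; ++i) {
        int rowXor = 0, colXor = 0;
        for (int j = 0; j < n; ++j) { rowXor ^= g[i][j]; colXor ^= g[j][i]; }
        if (rowXor != 0 || colXor != 0) return false;
    }
    return true;
}
```

For n = 6 the rows contain an odd number (three) of pairs each XORing to 1, so the check fails, which is why divisibility by 4 matters.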
 » 18 months ago, # |   0 Can someone explain the n log n approach for problem B? What do we do after binary searching the length we want to delete?
•  » » 18 months ago, # ^ | ← Rev. 2 →   0 The binary search is applied to the size of the sub-segment we want to delete. For example, for an array of size 5 we set lo = -1 and hi = 5, find mid, and check whether we can delete some sub-segment of size mid and have unique elements afterwards. If we can, we explore the range from lo to mid; otherwise we explore mid to hi. To check uniqueness for a fixed sub-segment size s, slide a window of size s with two pointers: the part to the left of the window must have unique elements, those elements must not appear to the right of the window, and the right part must have unique elements as well. This can be implemented with a map; you can see the implementation here 59494540
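A sketch of this check plus the surrounding binary search (all names are hypothetical; it assumes the values are first compressed to 0..n-1 so that plain arrays replace the map, as suggested below in this thread):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Can we delete some sub-segment of exactly `len` elements so that the
// remaining prefix + suffix has no duplicates? O(n) per call.
bool canDelete(const vector<int>& a, int len) {
    int n = a.size();
    if (len >= n) return true;
    vector<int> cnt(n, 0);
    int bad = 0;                               // values occurring >= 2 times outside the window
    auto add = [&](int v) { if (++cnt[v] == 2) ++bad; };
    auto rem = [&](int v) { if (cnt[v]-- == 2) --bad; };
    for (int i = len; i < n; ++i) add(a[i]);   // window starts at [0, len)
    if (bad == 0) return true;
    for (int i = 0; i + len < n; ++i) {        // slide the window to [i+1, i+1+len)
        add(a[i]);                             // a[i] joins the prefix
        rem(a[i + len]);                       // a[i+len] enters the window
        if (bad == 0) return true;
    }
    return false;
}

// Smallest deletable length: canDelete is monotone in len, so binary search.
int minDeletionLength(vector<int> a) {
    vector<int> vals(a);                       // compress values to 0..n-1
    sort(vals.begin(), vals.end());
    vals.erase(unique(vals.begin(), vals.end()), vals.end());
    for (int& x : a) x = lower_bound(vals.begin(), vals.end(), x) - vals.begin();
    int lo = 0, hi = (int)a.size();
    while (lo < hi) {
        int mid = (lo + hi) / 2;
        if (canDelete(a, mid)) hi = mid; else lo = mid + 1;
    }
    return lo;
}
```

Deleting a longer segment containing a working one can only remove more duplicates, which is the monotonicity the binary search relies on; the total cost is O(n log n) after compression.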
•  » » » 18 months ago, # ^ | ← Rev. 2 →   0 Ahh, thanks a lot bro, cool solution. But this is n log^2 n; is there an n log n solution as well?
•  » » » » 18 months ago, # ^ |   0 Yes: you can compress the elements, which takes N log N time, and then use a plain array instead of a map for the hashing in the check function of the binary search, which drops one log N factor from your solution. So the total time in the worst case is N log N for compressing the elements plus N log N for the binary search.
•  » » » 8 months ago, # ^ |   0 How do you determine the lower limit value lo=-1?
 » 18 months ago, # |   0 Nice contest. Level of questions was good.
 » 18 months ago, # |   0 Can someone explain B please? I'm not able to follow the editorial.
•  » » 18 months ago, # ^ |   0 you can see my explanation https://codeforces.com/blog/entry/69357?#comment-538454
 » 18 months ago, # |   +11 Problem H can be solved in $O(n \log n)$. The key is that $f_{i}-f'_{i} \in \{-1,0,1\}$, where $f'_{i}$ means what $f_i$ becomes if we don't consider one son of $i$. So we can use HLD to maintain the information simply. You can look at the fastest submission to learn more.
•  » » 18 months ago, # ^ |   +92 Have you ever considered using spaces and enters in your code?
•  » » 18 months ago, # ^ |   +10 How do you remove the 2nd log factor ($O(n \log^2 n)$ to $O(n \log n)$)? I think I see how to maintain $f_i'$ with linked lists instead of BST's, but how do you update heavy chains in $O(1)$ time?
•  » » » 18 months ago, # ^ |   +10 Maybe LCT can do this in $O(n \log n)$. And it's amortized, not $O(1)$ for each heavy chain.
•  » » 18 months ago, # ^ |   +11 You should put a link to the submission since it is not the fastest submission anymore.
•  » » 18 months ago, # ^ |   0 Could you explain what your solution is? I don't see what you are maintaining in the HLD/LCT.
 » 18 months ago, # |   0 Could someone give a better explanation of Approach 1 for problem D?
 » 18 months ago, # | ← Rev. 2 →   0 Can anyone please explain how we are using a segment tree for getting the index of the last 0 in problem D? Could not get it during the contest either. smh. TIA.
•  » » 18 months ago, # ^ |   0 Refer to this. It's quite helpful .
 » 18 months ago, # |   0 What is the time complexity in B? My solution is O(n) in Python.

n = int(input())
a = list(map(int, input().split()))
s, d = {}, {}
for q in range(n):
    s[a[q]] = q
    d[a[q]] = d.get(a[q], 0) + 1
q2, ans = 0, n - 1
for q1 in d:
    while d[q1] > 1:
        d[a[q2]] -= 1
        q2 += 1
f = set()
for q in range(n):
    ans = min(ans, q2 - q)
    if a[q] in f:
        break
    f.add(a[q])
    q2 = max(q2, s[a[q]] + 1, q + 1)
print(ans)
•  » » 18 months ago, # ^ |   -8 This is not O(n). You can check the time complexity of Python collections here: https://wiki.python.org/moin/TimeComplexity. Not having a for loop doesn't mean it's O(1). Python abstracts as much as it can, so some builtin structures actually have a high overhead, and it's important to understand the complexities of each one you use (otherwise you may think an algorithm is O(n) when it's actually O(n^2), like this one). Funnily enough, dict/set in Python are hash tables rather than balanced trees, so the worst-case complexities are O(n) instead of O(log n).
•  » » » 18 months ago, # ^ |   0 I'm sorry for my English, this is a Yandex translation. Yes, in the worst case dict/set operations take O(n), but on average everything happens in O(1), which is called amortized O(1). Therefore, considering the operation to be O(n) is impractical, and that means the solution works in O(n) multiplied by a constant.
•  » » » » 18 months ago, # ^ |   +12 I would not believe in these average cases hahah, too many TLEs from using hash tables instead of log structures; you will see. And these are not amortized: average case is not amortized, so don't go around saying that average is called amortized. You can search for the difference, but basically the average case considers a somewhat general input, while you can still hit the worst case on every operation for a specific input. Amortized is independent of the input: it basically takes the total time of all operations and divides by the number of operations. This is a hash table: the average case is expected O(1), but the worst case is still O(n). If you find some structure that works as a map/set/hash table with amortized O(1) insert/query/remove, please share; you will be one of the most important people of our generation xD
•  » » » » » 18 months ago, # ^ |   +7 It even says it in the link I sent: "The Average Case times listed for dict objects assume that the hash function for the objects is sufficiently robust to make collisions uncommon. The Average Case assumes the keys used in parameters are selected uniformly at random from the set of all keys." Assuming that your input is random is exactly what hash tables require. Since some cases are made by hand, one could just see how Python's hash function works and create a case where every number ends up with the same key. Of course this problem has a big time limit and O(n^2) is still accepted, but in other problems that's usually an issue. I'm not imposing my beliefs, this is how hash tables work; I'm just trying to help you so that in the future you understand that some TLE or MLE when using hash tables is expected on non-random input. Sad to see so many downvotes haha. People should check whether something makes sense instead of believing in magic data structures that can sort every input in O(n) xD
 » 18 months ago, # |   0 Here is my O(n) solution to 1208B — Uniqueness Code

#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    cin >> n;
    int arr[n];
    for (int i = 0; i < n; i++) cin >> arr[i];
    unordered_map<int, int> maxind; // stores maximum index of each element
    for (int i = 0; i < n; i++) maxind[arr[i]] = i;
    int longvsuff = 0; // longest valid suffix
    unordered_map<int, int> suff;
    for (int i = n - 1; i >= 0; i--) {
        if (!suff.count(arr[i])) {
            longvsuff = i;
            suff[arr[i]]++;
        } else {
            break;
        }
    }
    unordered_map<int, int> pre;
    int limit = 0;
    int finans = n - longvsuff;
    for (int i = 0; i
 » 18 months ago, # |   0 Can someone explain to me how fast bitset works? My binary search solution of n^2 (log n)^2 got TLE on the main tests, which is expected. But then I submitted the same idea using bitset, which got AC with a time complexity of n^2 times the bitset optimization. I want to know how big that speedup is. link to the solution
 » 18 months ago, # |   +1 Can you explain, please, why in problem C, after we multiply the numbers by 4 and do the additions (+1, +2, +3 in each zone), the XOR still remains 0 in each row and column?
 » 18 months ago, # | ← Rev. 2 →   0 Can anybody hack my solution for F or prove it's correct? here. The main idea is brute force.
 » 18 months ago, # | ← Rev. 3 →   +1 In the solution of problem C, "The XOR of each row and column would still remain 0 as N/2 is also even". How?
•  » » 18 months ago, # ^ |   +9 The main fact this relies on is that $a \oplus a = 0$ for any integer $a$, so in an array in which every number appears an even number of times the XOR is $0$. Also, XOR works bit by bit, so if every bit appears an even number of times in an array, the XOR is $0$.

After copying the numbers into the $4$ quadrants, each row/column becomes a duplicated array (first half equal to the second). This way the XOR is $0$ (each number appears $2$ times by being duplicated, and in such an array the XOR is $0$).

Now, multiplying by $4$ just shifts the numbers left by $2$ bits, so the XOR remains $0$. The last operation, adding $0,1,2,3$, just sets the last $2$ bits. Thinking of each row/column as a duplicated array as before, we only need to consider the last $2$ bits, because we proved that the rest contribute nothing to the XOR. If you look at how the additions are done, the last $2$ bits are the same for all numbers in the first half of a row/column ($n/2$ numbers), and the same for the second half ($n/2$ numbers). Because $n/2$ is even, the XOR of the last $2$ bits over each half is $0$, so the XOR of a row/column is $0$.
•  » » » 18 months ago, # ^ |   0 "by multiplying by 4 you just shift the numbers by 2 bits". How? Suppose the number in the first quadrant is 2; then in the second quadrant the number will be 8, but 2 xor 8 != 0.
•  » » » » 18 months ago, # ^ |   0 Multiply by 4 is done in all quadrants. All numbers get shifted by 2 bits.
•  » » » » » 18 months ago, # ^ |   0 If we multiply everything by 4, wouldn't the numbers exceed n^2 - 1?
•  » » » » » » 18 months ago, # ^ |   0 No, because the numbers used in the quadrants were between 0 and n*n/4 - 1.
•  » » » » » » » 18 months ago, # ^ |   0 Say the first quadrant for n=4 is {1 2} {3 4}; then how do you proceed after that? I see why we are shifting the bits by 2 and altering the last two bits an even number of times, but the numbers don't start from 1 if I multiply every quadrant by 4.
•  » » » » » » » » 18 months ago, # ^ |   0 The numbers used in the quadrant go from 0 to n*n/4 - 1, each exactly once. After multiplying by 4, the numbers range from 0 to n*n - 4. Now, changing the last 2 bits increases a number by at most 3, so the range becomes 0 to n*n - 1.
•  » » » » » » » » » 18 months ago, # ^ |   0 ohh finally got it thanks masterr
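The whole discussion in this thread can be verified mechanically. Below is an illustrative build of the editorial's construction (the function names are made up): fill one (n/2) x (n/2) quadrant with 0..n*n/4-1, copy it to the other three quadrants, multiply everything by 4, and add 0, 1, 2, 3 per quadrant, then check distinctness, the range [0, n^2-1], and zero XOR per row and column.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Editorial's quadrant construction; n must be a multiple of 4.
vector<vector<int>> magicGrid(int n) {
    int h = n / 2;
    vector<vector<int>> g(n, vector<int>(n));
    for (int i = 0; i < h; ++i)
        for (int j = 0; j < h; ++j) {
            int base = 4 * (i * h + j);       // quadrant value, already shifted by 2 bits
            g[i][j]         = base + 0;       // top-left quadrant
            g[i][j + h]     = base + 1;       // top-right quadrant
            g[i + h][j]     = base + 2;       // bottom-left quadrant
            g[i + h][j + h] = base + 3;       // bottom-right quadrant
        }
    return g;
}

// Check: values are a permutation of 0..n^2-1 and all row/column XORs are 0.
bool isMagic(const vector<vector<int>>& g) {
    int n = g.size();
    vector<int> seen(n * n, 0);
    for (int i = 0; i < n; ++i) {
        int rowXor = 0, colXor = 0;
        for (int j = 0; j < n; ++j) {
            rowXor ^= g[i][j];
            colXor ^= g[j][i];
            if (g[i][j] < 0 || g[i][j] >= n * n || seen[g[i][j]]++) return false;
        }
        if (rowXor != 0 || colXor != 0) return false;
    }
    return true;
}
```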
 » 18 months ago, # | ← Rev. 2 →   0 Can someone help me understand what's wrong with my logic in Problem B? My code failed in system test case 54. Here is my code ; https://codeforces.com/contest/1208/submission/59480026
•  » » 18 months ago, # ^ |   +20 Hack: 12 2 3 4 5 1 2 1 2 6 7 8 1 
 » 18 months ago, # |   +32 59495293 passes system tests(test 139 is an after-contest hack) even though it ignores i
 » 18 months ago, # |   0 A lot of contestants had a problem with unordered_map (1208B - Uniqueness) when they wrote a binary search, so they got TLE. I know how to fix it! You can just give an id to each number in your array. This is correct because MAX_ID can be at most 2000 (if we have 2000 different numbers). So, after that, we do arr[i] = id[arr[i]]; for each i. My solution which got TLE: https://codeforces.com/contest/1208/submission/59492817. My "OK" solution: https://codeforces.com/contest/1208/submission/59534462.
 » 18 months ago, # |   0 For the 2nd approach in D, why are we moving from the last position? Can this be proved somehow?
•  » » 18 months ago, # ^ |   +28 "If $s_i=x$, it means there are $k$ smallest unused numbers whose sum is $x$."Here "unused" means the elements $p_1,p_2,\dots,p_{i-1}$, so we need to check the array $s$ from $N$ to $1$. In other words, we need to move from the last position.
•  » » » 18 months ago, # ^ |   0 Understood. Thanks for addressing this
 » 18 months ago, # |   0 Can someone please tell me why we use a segment tree in D? Also, about removing the element whose value is 0: can't we update this value to some negative value instead of removing it? Please clarify.
•  » » 18 months ago, # ^ |   0 Here, using the segtree, you get the rightmost minimum position and place the current value there; i.e., if you are checking for X, you place X there. And yes, for removal you update the value with a very large number, not a small one.
•  » » » 7 months ago, # ^ |   0 Can you tell me how to find the rightmost element with value 0? I have written all the segment tree functions (build, range update, query) but can't figure out how to find the index of the last element with s[i] equal to 0.
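One common way to answer this (a sketch, not the editorial's exact code; all names are made up): keep the minimum of each node, and to find the rightmost zero descend from the root, preferring the right child whenever its minimum is 0. "Removing" an index is then a point update adding a huge value, so that index can never be the minimum again.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Segment tree with range-add and "rightmost index whose value is 0" query.
struct SegTree {
    int n; vector<long long> mn, lazy;
    SegTree(const vector<long long>& s) : n(s.size()), mn(4 * s.size(), 0), lazy(4 * s.size(), 0) {
        build(1, 0, n - 1, s);
    }
    void build(int v, int l, int r, const vector<long long>& s) {
        if (l == r) { mn[v] = s[l]; return; }
        int m = (l + r) / 2;
        build(2 * v, l, m, s); build(2 * v + 1, m + 1, r, s);
        mn[v] = min(mn[2 * v], mn[2 * v + 1]);
    }
    void push(int v) {
        for (int c : {2 * v, 2 * v + 1}) { mn[c] += lazy[v]; lazy[c] += lazy[v]; }
        lazy[v] = 0;
    }
    void add(int v, int l, int r, int ql, int qr, long long x) {  // s[i] += x on [ql, qr]
        if (qr < l || r < ql) return;
        if (ql <= l && r <= qr) { mn[v] += x; lazy[v] += x; return; }
        push(v); int m = (l + r) / 2;
        add(2 * v, l, m, ql, qr, x); add(2 * v + 1, m + 1, r, ql, qr, x);
        mn[v] = min(mn[2 * v], mn[2 * v + 1]);
    }
    void add(int ql, int qr, long long x) { add(1, 0, n - 1, ql, qr, x); }
    int rightmostZero(int v, int l, int r) {      // -1 when no zero exists
        if (mn[v] != 0) return -1;
        if (l == r) return l;
        push(v); int m = (l + r) / 2;
        return (mn[2 * v + 1] == 0) ? rightmostZero(2 * v + 1, m + 1, r)
                                    : rightmostZero(2 * v, l, m);
    }
    int rightmostZero() { return rightmostZero(1, 0, n - 1); }
};
```

Because the parent's minimum is the min of its children, whenever the root's minimum is 0 at least one child also has minimum 0, so the descent always reaches a zero leaf, choosing the rightmost one.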
 » 18 months ago, # |   0 Hi, could anybody help me figure out why I am getting wrong answer for Problem B: Solution. Thanks in advance.
 » 18 months ago, # | ← Rev. 2 →   -8 Can anyone tell me why this gets WA on test case 9? 59586904 I implemented it using binary search on a BIT but am still getting WA. Please help.
•  » » 18 months ago, # ^ |   +11 Overflow. Just change int res = 0 to ll res = 0 in the function query.
•  » » » 18 months ago, # ^ |   +3 thanx drastogi21
 » 18 months ago, # |   0 Hi! Can someone help me understand sliding window approach in question B? Thank you! :)
 » 18 months ago, # |   -8 Can anyone explain the solution to F? I am unable to understand the editorial.
 » 18 months ago, # |   0 I don't understand any solution of problem D, any help would be appreciated? thank you in advance
 » 18 months ago, # |   0 Why, in problem C, if I use the given example with N=4 as a building block and keep adding multiples of 16 (16, 32, 48, ...) to make the numbers distinct, can I be sure that the XOR of all rows and columns will be the same?
 » 18 months ago, # |   0 Hello!My code for 1208D is giving TLE on the first test case itself, but is running perfectly locally. Can anybody help?
•  » » 18 months ago, # ^ |   0 It was a bug. Thanks.
 » 18 months ago, # | ← Rev. 2 →   0 Hello,

So, I've solved 1208D — Restore Permutations using Fenwick trees and binary search, but it's giving TLE, and rightfully it should. This is what I've done:

1. Initialize the BIT with all 0s.
2. Do range updates on the BIT for each sum 1, 1+2, 1+2+3, ..., 1+2+3+...+n, by doing updates for the ranges [1,1],[2,2],[3,3],[4,4],...,[n,n].
3. Now, we're given the sum array. We go from i=n-1 to i=0, doing the following: find the sum inside the Fenwick tree by doing a binary search using the query method. We get the position of the sum inside the Fenwick tree; we know that this is the number that must be present at this position. So we deduct this value (say v) from every element after i, which can simply use the normal update function.
4. We repeat step 3 and get the answer array.

Now, the time complexities: O(n), O(n log n), O(n log n log n), O(n). So, O(n log n log n + n log n + n) = 2 x 10^5 x ~16 x ~16 + 2 x 10^5 x ~16 + 2 x 10^5 > 5.12 x 10^7. That is way too much. So, where am I wrong? What should I optimize more? Here is my submission
•  » » 18 months ago, # ^ |   +1 I also used BIT and binary search, and my complexity is $O(nlognlogn)$. Here is my solution: 59884969. Let's define a function called sum: $sum(n)=1+2+...+(n-1)+n=\frac{n(n+1)}{2}$. You can calculate the result from right to left (i from n to 1), because at position n we must have $a[n]=sum(res[n])$, since all numbers smaller than $res[n]$ are on its left side. Then you add this number to the BIT and decrement i. When you use binary search to find the number $res[i]$, the sum you get for the candidate number $mid$ is $sum(mid-1)-bit.query(mid)$, which means the sum of the arithmetic progression minus the sum of the numbers that appear on the right side of it and are smaller than $mid$. Then you can get $res[i]$ from that binary search. Here's a small trick: if the number $mid$ is already used and $cur==a[i]$, then you should set l=l+1, because in this case the answer must be greater than this value.
•  » » » 18 months ago, # ^ |   0 Hey! Thank you very much!
•  » » 5 months ago, # ^ |   0 Sorry for being late, but I was able to come up with the idea described in the editorial greedily: find the last index of 0 (there the smallest number not yet placed must go), then subtract that number from the indices after it, and follow the same process again. Can you elaborate on how to implement it using a BIT?
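A compact sketch of the BIT + binary-search idea discussed in this thread (hypothetical names; s is 1-indexed with s[0] unused). This variant searches for the largest candidate value whose implied prefix sum does not exceed s[i], which absorbs the "skip already-used values" trick mentioned above.

```cpp
#include <bits/stdc++.h>
using namespace std;

// BIT storing, at index v, the value v if v has already been placed to the
// right of the current position (0 otherwise).
struct BIT {
    int n; vector<long long> t;
    BIT(int n) : n(n), t(n + 1, 0) {}
    void add(int i, long long x) { for (; i <= n; i += i & -i) t[i] += x; }
    long long query(int i) { long long s = 0; for (; i > 0; i -= i & -i) s += t[i]; return s; }
};

// Going from position n down to 1: a candidate value v at position i would
// give s[i] = v*(v-1)/2 minus the sum of smaller values already placed to the
// right. That quantity is non-decreasing in v, so we binary-search the
// largest v whose implied sum is still <= s[i].
vector<int> restorePermutation(int n, const vector<long long>& s) {
    BIT bit(n);
    vector<int> res(n + 1, 0);
    for (int i = n; i >= 1; --i) {
        int lo = 1, hi = n;
        while (lo < hi) {
            int mid = (lo + hi + 1) / 2;
            long long cur = 1LL * mid * (mid - 1) / 2 - bit.query(mid - 1);
            if (cur <= s[i]) lo = mid; else hi = mid - 1;
        }
        res[i] = lo;        // for a valid input this candidate is still unused
        bit.add(lo, lo);    // mark lo as placed to the right from now on
    }
    return res;
}
```

Moving past an already-used value leaves the implied sum unchanged, so among equal sums the search lands on the last (unused) candidate; total cost is $O(n \log^2 n)$.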
 » 18 months ago, # | ← Rev. 2 →   +8 Nice contest
 » 18 months ago, # |   0 Can we implement Approach-1 for problem D using BIT? Thanks
 » 15 months ago, # |   0 Anyone implemented bruteforce approach for B?
 » 12 months ago, # |   0 The tutorial of problem H has a typo: in the second line, the right expression should be "Note that when k=−inf all internal vertices are blue."
 » 12 months ago, # | ← Rev. 3 →   0 In problem C, a hint to my way of solution is two observations:1) XOR of $4k, 4k+1, 4k+2, 4k+3$ is zero.2) XOR of $16k, 16k+4, 16k+8,$ and $16k+12$ is zero.
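Both observations are easy to confirm by brute force (the function name is made up): since $16k$ has zero low bits, $16k + c = 16k \mid c$ for $c < 16$, so each group's XOR reduces to $0 \oplus 1 \oplus 2 \oplus 3 = 0$ and $4 \oplus 8 \oplus 12 = 0$ respectively.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Check that any aligned block of four consecutive integers XORs to zero,
// and so does {16k, 16k+4, 16k+8, 16k+12}, for all k below `limit`.
bool observationsHold(int limit) {
    for (int k = 0; k < limit; ++k) {
        if (((4 * k) ^ (4 * k + 1) ^ (4 * k + 2) ^ (4 * k + 3)) != 0) return false;
        if (((16 * k) ^ (16 * k + 4) ^ (16 * k + 8) ^ (16 * k + 12)) != 0) return false;
    }
    return true;
}
```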
 » 10 months ago, # |   0 https://codeforces.com/contest/1208/submission/78571667 I tried to solve B using binary search, complexity O(n^2 log n), but it still gives TLE. I don't know why.
 » 8 months ago, # | ← Rev. 2 →   0 "Add 1, 2, and 3 to the numbers in 1st, 2nd and 3rd quadrants respectively. The XOR of each row and column would still remain 0 as N/2 is also even." Can someone please tell me why the XOR will not change? Thanks in advance.
 » 7 months ago, # |   0 Is there any problem with my binary search approach for problem B (Uniqueness)? It works for N=1958 but gives TLE for N=1500. Here is my solution: https://codeforces.com/problemset/submission/1208/90312551
 » 6 months ago, # |   0 Can someone please explain why this 90368815 for B isn't working? Two pointers should do the trick, but it doesn't pass.
 » 5 months ago, # |   0 My O(n^2 log n) solution for problem B gives TLE at test case 29: https://codeforces.com/problemset/submission/1208/95378891
•  » » 5 months ago, # ^ |   0 I'm not sure if this is the exact reason for it, but you are using unordered_set, which has a high constant factor and can be hacked to run in linear time per operation. I haven't really used unordered_set before; mostly I have just stuck to using set. See if changing to that helps in any case. I'm also not 100% familiar with the way that you did the binary search, but if you have used it in the past then I would expect it works fine.
•  » » » 5 months ago, # ^ | ← Rev. 2 →   0 Hey, thanks so much for coming around. I've used set and now it's TLE at test case 5. But didn't we expect that, since a BST just blows up access time from constant to logarithmic? https://codeforces.com/problemset/submission/1208/95397240 And yeah, the binary search style I used works fine: I learnt it from Codeforces EDU :) The same style of doing it worked well in the practice exercises given. Also, there is apparently an N log N solution (as per the editorial) which I spent a lot of time thinking about but couldn't get. If you get an idea for that, please include it in the reply as well. Truly, thanks again :) for reaching out.
•  » » » » 5 months ago, # ^ |   0 unordered_set is usually O(1), but there are cases which could make it O(n) if you aren't careful. It doesn't seem like the test case you TLE on is a hack case though, so this probably isn't the issue. O(n^2 log n) should definitely work for this question, so I'm not sure why your code TLEs specifically on this test case. The reason I suspected unordered_set is that the previous test case doesn't TLE even though the size of the list is approximately the same. An easy way to speed this up would be to compress the numbers and then use an array to store counts of numbers, but this doesn't seem necessary for this problem. I suggest trying to find another solution with the same complexity; I think the constant factor just might not be good enough.
 » 4 months ago, # |   0 There is an O(n) solution where we turn the array, say {1,2,3}, into {1,2,3,1,2,3}, i.e. append a copy, and then apply a sliding window to find the maximum contiguous segment with unique elements, but consider a window [p1, p2] only when:
1. p1 == 0
2. p2 == n-1
3. p1 <= n-1 && p2 >= n-1
4. p1 > n-1 ---> break
Runtime 32 ms: https://codeforces.com/contest/1208/submission/96241329