I would like to invite you all to the **International Coding Marathon 2019**, under the banner of Technex'19, IIT (BHU) Varanasi.

Technex is the annual techno-management fest of the Indian Institute of Technology (BHU), Varanasi, held from March 8 to 10, 2019.

International Coding Marathon is the flagship event of Byte The Bits, the set of programming events organized under Technex. ICM is brought to you by Mentor Graphics. It will take place on Friday, 8th March 2019, 21:00 IST. The contest will be held on Codechef.

The problemset has been prepared by hitman623, dhirajfx3, _shanky, drastogi21 and me (Enigma27). We would like to express our heartiest thanks to _hiccup and prakhar28 for their constant help in preparing the contest.

**Prizes -**

**Overall 1st — INR 15,000**

**Overall 2nd — INR 10,000**

**India 1st — INR 9,000**

**India 2nd — INR 5,000**

**IIT(BHU) 1st — INR 6,000**

**IIT(BHU) 2nd — INR 4,000**

**IIT(BHU) Freshmen 1st — INR 1,000**

Some of our previous contests:

ICM Technex 2017 and Codeforces Round #400 (Div. 1 + Div. 2, combined)

ICM Technex 2018 and Codeforces Round #463 (Div. 1 + Div. 2, combined)

Participants will have **2** hours **30** minutes to solve **7** problems. The scoring will be ICPC style.

Good luck everyone! Hope to see you on the leaderboard.

**UPD** — Contest begins in 1.5 hours

**UPD1** — Contest begins in 15 minutes

**UPD2** — Contest has ended. Congratulations to the winners. They will be contacted soon.

We deeply regret the inconvenience caused by problem ICM03. We will try to avoid such issues in the future.

Editorials will be uploaded soon. Feel free to discuss the problems in the comment section.

**UPD3** — The editorial is up.

**Array Got Some Moves**

The answer array has to have *N* elements and *K* distinct elements.

Thus, take the *K* distinct elements to be 1 through *K*, and set the remaining *N* - *K* elements to 1.

The sum is then (*K* * (*K* + 1) / 2) + *N* - *K*.
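In code, the formula is a one-liner. A minimal sketch (the function name and 64-bit types are my own, assuming the task asks for this minimum sum):

```cpp
#include <cstdint>

// Minimum sum of an N-element array with exactly K distinct values:
// use 1..K once each, and fill the remaining N-K positions with 1s.
int64_t minSum(int64_t n, int64_t k) {
    return k * (k + 1) / 2 + (n - k);
}
```

For example, N = 5 and K = 3 gives 1 + 2 + 3 plus two extra 1s, i.e. 8.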

**Bob and his strict Mom**

Let *P*[*i*] be the difference between the number of days Bob studied and the number of days he didn't, up to the *i*-th day.

Thus, we have to ensure that *P*[*i*] ≥ 0 for 1 ≤ *i* ≤ *N*.

Iterate over all *i* and greedily flip 0 to 1 whenever *P*[*i*] becomes negative.
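A sketch of this greedy, under an assumed input format (a string where '1' marks a day Bob studied, '0' a day he didn't; the function name is mine):

```cpp
#include <string>

// Greedy: keep the running balance P[i]; whenever it goes negative,
// flip the current 0 to a 1 (which changes its contribution by +2).
int minFlips(std::string s) {
    int flips = 0, bal = 0;                 // bal == P[i]
    for (char &c : s) {
        bal += (c == '1') ? 1 : -1;
        if (bal < 0) { c = '1'; bal += 2; ++flips; }
    }
    return flips;
}
```

Flipping exactly at the point of violation is enough, since any flip at or before position *i* raises all later prefixes equally.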

**Snorlax hates working out**

Let us consider an array *b* of *N* elements consisting of 0s and 1s, where 0 denotes that Snorlax doesn't work out on the *i*-th day and 1 that he does.

The window size is *K*.

Look at the *i*-th window: when we slide one window further, we subtract *b*[*i*] from the subarray sum and add *b*[*i* + *k*], effectively adding *b*[*i* + *k*] - *b*[*i*]. If this value is 1, we can surely say that *b*[*i*] is 0 and *b*[*i* + *k*] is 1, and if this value is -1, the reverse is true.

If this difference is 0, we can say that *b*[*i*] and *b*[*i* + *k*] are equal but can't tell the exact values.

If the absolute value of this difference is more than 1, array construction is impossible.

Note that we thus get relations between *b*[*i*], *b*[*i* + *k*], *b*[*i* + 2*k*], and so on; we just check whether these relations are consistent. If we know any one element of a group, we can figure out the rest of the group. The members of a group are those indices that have the same remainder on division by *k*.

At an index we must never derive contradictory values; that is, we can't end up concluding that *b*[*i*] is neither 0 nor 1 (if we do, the construction is impossible).

There is still one constraint we haven't enforced yet: the sum of the values in the 1st window. So far we have only looked at the differences between consecutive windows.

We enforce this constraint by setting to 1 those values that are still unknown and occur at the end of the 1st window. We pick from the end because we also have to minimise the number of 1s: the ending positions have the largest remainders, and it is easy to see that indices with a large remainder occur fewer times than those with a smaller remainder.

If we are unable to enforce the above constraint, then also making the array is impossible.

The complexity is *O*(*N*).
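Since the original statement isn't reproduced here, the sketch below assumes an interface: given *N*, *K* and the *N* - *K* + 1 sliding-window sums, it reconstructs a valid 0/1 array with the minimum number of 1s (or reports impossibility) using exactly the steps above: difference constraints, per-remainder propagation, then the first-window sum. All names and the I/O shape are illustrative.

```cpp
#include <vector>

// s[0..n-k] are the sums of the k-length sliding windows of an unknown
// 0/1 array b[1..n]. Returns b with the minimum number of 1s, or an
// empty vector if no valid array exists.
std::vector<int> reconstruct(int n, int k, const std::vector<int>& s) {
    std::vector<int> b(n + 1, -1);                 // -1 = unknown
    auto put = [&](int i, int v) {                 // forced value, conflict-checked
        if (b[i] != -1 && b[i] != v) return false;
        b[i] = v; return true;
    };
    // Step 1: window differences, s[i] - s[i-1] = b[i+k] - b[i].
    for (int i = 1; i + k <= n; ++i) {
        int d = s[i] - s[i - 1];
        if (d == 1)       { if (!put(i, 0) || !put(i + k, 1)) return {}; }
        else if (d == -1) { if (!put(i, 1) || !put(i + k, 0)) return {}; }
        else if (d != 0)  return {};               // |d| > 1: impossible
    }
    // Step 2: d == 0 means b[i] == b[i+k]; two sweeps propagate along chains.
    for (int i = 1; i + k <= n; ++i) {
        if (s[i] != s[i - 1]) continue;
        if (b[i] != -1 && b[i + k] != -1) { if (b[i] != b[i + k]) return {}; }
        else if (b[i] != -1) b[i + k] = b[i];
    }
    for (int i = n - k; i >= 1; --i)
        if (s[i] == s[i - 1] && b[i + k] != -1 && b[i] == -1) b[i] = b[i + k];
    // Step 3: enforce the first-window sum. A still-unknown chain is all-equal;
    // chains with larger remainder have fewer elements, so turn those on first.
    int have = 0;
    for (int i = 1; i <= k; ++i) have += (b[i] == 1);
    if (have > s[0]) return {};
    for (int i = k; i >= 1 && have < s[0]; --i)
        if (b[i] == -1) { for (int j = i; j <= n; j += k) b[j] = 1; ++have; }
    if (have != s[0]) return {};
    for (int i = 1; i <= n; ++i) if (b[i] == -1) b[i] = 0;
    return b;
}
```

For instance, with n = 5, k = 2 and window sums {1, 1, 1, 1}, every difference is 0, both residue chains are free, and setting only the remainder-2 chain to 1 yields 0 1 0 1 0 with just two 1s.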

**Yet Another Counting Problem**

Let us first represent each cell by the number 2^{d[i][j]}. For a grid to be a palindrome, the xor of all these values should have at most 1 set bit. For a grid to be divisible by 6, the sum of digits should be divisible by 3 and the grid should contain at least 2 even digits (the last digit must be even, and by the palindrome symmetry so must the first).

A corner case is when the grid consists of only a single element. [6] is the only single element grid possible.

Since *N* * *M* ≤ 100000, *min*(*N*, *M*) ≤ 320. Let's suppose *N* ≤ *M*; otherwise we can flip the grid diagonally and the answer won't change. Let's iterate over two rows *i* and *j*, and store prefix_sums, prefix_xors and prefix_even_nos for the columns, considering all rows from *i* to *j*. Then let us iterate over the rightmost column *k*.

Now, all *l* such that (*pre*_*even*[*k*] - *pre*_*even*[*l* - 1]) ≥ 2 are valid leftmost columns of the grid.

For the third condition we can use two pointers. For the first and second conditions, we can store the frequencies of the prefix xors, bucketed by prefix sum mod 3, in a [1<<10][3] array, and run an extra 9-iteration loop for the second condition.

So, overall complexity will be *O*(*min*(*N*, *M*) * *N* * *M* * 10). Refer to the code for better understanding.

**Code**

```
#include <bits/stdc++.h>
#define int long long
#define pb push_back
#define pii pair<int,int>
#define vi vector<int>
#define vii vector<pii>
#define mi map<int,int>
#define mii map<pii,int>
#define all(a) (a).begin(),(a).end()
#define x first
#define y second
#define sz(x) (int)x.size()
#define endl '\n'
#define hell 1000000007
#define rep(i,a,b) for(int i=a;i<b;i++)
using namespace std;
int n,m;
int ans;
vector<string> s;
vector<vi> colsum,colxor,coleven;
int prexor[1<<10][3],curxor[100005],cursum[100005],cureven[100005];
void solve(){
    cin>>n>>m;
    s.resize(n+1);
    rep(i,1,n+1){
        cin>>s[i];
        s[i]='#'+s[i];            // 1-indexed rows and columns
    }
    if(n>m){                      // transpose so that n = min(n,m); the answer is unchanged
        rep(i,1,n+1) s[i].resize(n+1);
        rep(i,1,n+1){
            rep(j,i,n+1){
                swap(s[i][j],s[j][i]);
            }
        }
        swap(n,m);
    }
    colsum.resize(m+1);
    colxor.resize(m+1);
    coleven.resize(m+1);
    rep(i,1,m+1){
        colsum[i].resize(n+1);
        colxor[i].resize(n+1);
        coleven[i].resize(n+1);
        rep(j,1,n+1){
            colsum[i][j]=(colsum[i][j-1]+s[j][i]-'0')%3;        // column digit-sum mod 3
            colxor[i][j]=(colxor[i][j-1]^(1<<(s[j][i]-'0')));   // parity of each digit's count
            coleven[i][j]=coleven[i][j-1]+((s[j][i]-'0')%2==0); // count of even digits
        }
    }
    rep(i,1,n+1){
        rep(j,i,n+1){             // fix the top row i and the bottom row j
            for(int k=0;k<(1<<10);k+=2) prexor[k][0]=prexor[k][1]=prexor[k][2]=0; // even xor states only (odd states would require digit 0)
            rep(k,1,m+1) curxor[k]=curxor[k-1]^colxor[k][j]^colxor[k][i-1];
            rep(k,1,m+1) cursum[k]=(cursum[k-1]+colsum[k][j]-colsum[k][i-1]+3)%3;
            rep(k,1,m+1) cureven[k]=cureven[k-1]+coleven[k][j]-coleven[k][i-1];
            int lo=0;
            rep(k,1,m+1){
                while(lo<k and cureven[k]-cureven[lo]>=2){ // two pointers: at least 2 even digits
                    prexor[curxor[lo]][cursum[lo]]++;
                    lo++;
                }
                ans+=prexor[curxor[k]][cursum[k]];                    // xor difference 0: every digit count even
                rep(l,1,10) ans+=prexor[curxor[k]^(1<<l)][cursum[k]]; // xor difference with exactly one set bit
            }
        }
    }
    rep(i,1,n+1){                 // single-cell grids: only "6" qualifies
        rep(j,1,m+1){
            if(s[i][j]=='6'){
                ans++;
            }
        }
    }
    cout<<ans<<endl;
}
signed main(){
    ios::sync_with_stdio(false);
    cin.tie(0);
    cout.tie(0);
    int t=1;
    // cin>>t;
    while(t--){
        solve();
    }
    return 0;
}
```

**Pika-Pika**

There are different ways to solve this problem. One is using a square-root trick as follows:

Run binary search over the maximum weight used. To check if a given maximum weight is valid we need to find the sum of weights from *U* to *V* with weight less than a given weight. This can be done by breaking the path in three parts : *U* to root, *V* to root and *LCA*(*u*, *v*) to root.

*sum*(*U*, *V*, *W*) = *sum*(*U*, *root*, *W*) + *sum*(*V*, *root*, *W*) - 2 * *sum*(*lca*, *root*, *W*) + (*w*[*lca*] < *W*) * *w*[*lca*]

So, we want to find the sum of weights less than *W* on a path from a node *U* to root.

Let’s divide the tree into blocks of size *B* and mark *n* / *B* special nodes such that any path from a node to root does not take more than *B* nodes until it reaches a special node.

One can do this by keeping a count of the nodes at each height modulo *B* and then taking the residue with the minimum count as *base*_*height*.

All nodes at a height *h* with *h* ≡ *base*_*height* (mod *B*) will be marked special. Hence there can be at most *n* / *B* special nodes (pigeonhole principle), and the distance from a node to its closest special node is less than *B*.

For each special node, store the weights in the path from root to this node in a sorted order. This can be done in a single dfs. This step will take *W* * (*n* / *B*) time if we sort the weights using counting sort (since the weights are bounded) where *W* is the upper bound for weight of nodes.

Then we can find the sum by moving up the tree until we reach a special node. For a special node, we can easily answer the query in *log*(*n*) using another binary search (upperbound/lowerbound). This step will take *B* + *log*(*n*) time per query.

Count of nodes can also be calculated along with the sum. We need to carefully handle the number of times nodes with maximum weight are used.

Overall time complexity is *O*(*W* * (*n* / *B*) + *q* * *log*(*W*) * (*B* + *log*(*n*))). An optimal value for *B* is close to 200.

Another solution is using persistent segment trees by keeping the information of root to a node in segment tree.

**Good Sequence**

The maximum length of *G* can be *K*. For length ≥ *K* + 1, the elements of *G* cannot all be distinct.

The second condition can be expressed as *G*[*i*] = (*G*[*i* - 1] * *F*[*i*]) % *K*.

Lets make a directed graph containing *K* vertices from 0 to *K* - 1. An edge from *X* to *Y* signifies that in the sequence *G*, *Y* can come after *X*.

For checking whether *Y* can come after *X*, we need *Y* = (*X* * *A*) % *K* for some *A*.

There exists some *A* if *gcd*(*X*, *K*) divides *Y*. This can also be written as *gcd*(*X*, *K*) divides *gcd*(*Y*, *K*). (Note that there can be multiple values of *A* but we have to find number of sequences for *G* and not for *F*, so it doesn't matter.)

Now, our task is to calculate the number of simple paths of length 1 to *K* in the graph and sum them up. Lets simplify the graph. Observe that both edges from *X* to *Y* and *Y* to *X* exist if and only if *gcd*(*X*, *K*) = *gcd*(*Y*, *K*). So, lets group the numbers from 0 to *K* - 1 such that in each group all the numbers have same gcd with *K*. Observe that the graph formed by these groups as components is a DAG. So now we can do DP to find the number of paths starting from a particular component.

Number of distinct values of *gcd*(*i*, *K*) are equal to the number of divisors of *K*. Since, *K* is upto 10^{12}, the number of divisors can be upto *M* = 7000. So, the final graph consists of at max *M* nodes and *C*(*M*, 2) edges.

The path starting from a component would look like :

some non-empty sequence of elements from this component + sequence from the child components

The first part is equal to the sum over *i* of *n* * (*n* - 1) * ... * (*n* - *i* + 1), where *n* is the number of elements in that component and *i* is the length of the chosen sequence from this part.

The second part would be the sum of dp values of all components reachable from current component and an extra +1 for ending at the current component only. The Final answer would be the sum of dp values of all components, as the path can start from any component.

For the first part, observe that *f*[*n*] % *mod* = *f*[*n* % *mod*] % *mod*. (The proof by induction is easy.)

Since, *mod* is upto 10^{6}, we can calculate the *f* values before hand, using the recurrence *f*[*n*] = *n* * (*f*[*n* - 1] + 1).

So, overall complexity would be *O*(*mod* + *M*^{2}) where *M* is the number of divisors of *K*.

**Code**

```
#include <bits/stdc++.h>
#define int long long
#define pb push_back
#define pii pair<int,int>
#define vi vector<int>
#define vii vector<pii>
#define mi map<int,int>
#define mii map<pii,int>
#define all(a) (a).begin(),(a).end()
#define x first
#define y second
#define sz(x) (int)x.size()
#define endl '\n'
#define hell 1000000007
#define rep(i,a,b) for(int i=a;i<b;i++)
using namespace std;
vi divisors;
int k,mod,a[10005],b[1000006],dp[10005],ans;
void solve(){
    divisors.clear();
    ans=0;
    memset(dp,0,sizeof dp);
    memset(a,0,sizeof a);
    cin>>k>>mod;
    b[0]=1;
    rep(i,1,mod) b[i]=(i*b[i-1]+1)%mod; // b[n] = f[n] + 1, where f[n] = n*(f[n-1]+1)
    for(int i=1;i*i<=k;i++){
        if(k%i==0){
            divisors.pb(i);
            if(i*i!=k) divisors.pb(k/i);
        }
    }
    sort(all(divisors));
    for(int i=sz(divisors)-1;i>=0;i--){
        a[i]=k/divisors[i];       // after the inclusion-exclusion below, a[i] = phi(k/divisors[i])
        dp[i]=1;
        rep(j,i+1,sz(divisors)){
            if(divisors[j]%divisors[i]==0){
                a[i]-=a[j];
                dp[i]=(dp[i]+dp[j])%mod;
            }
        }
        dp[i]=dp[i]*(b[a[i]%mod]-1+mod)%mod; // b[x]-1 = f[x]: sequences within this gcd class
        ans=(ans+dp[i])%mod;
    }
    cout<<ans<<endl;
}
signed main(){
    ios::sync_with_stdio(false);
    cin.tie(0);
    cout.tie(0);
    int t=1;
    cin>>t;
    while(t--){
        solve();
    }
    return 0;
}
```

Why isn't the contest organized on CF like in previous years?

Last year they faced some criticism over problem ideas (old ideas) and judging errors, so now they have shifted to Codechef, where this happens on a regular basis and so will not be a surprise.

And you see, the contest has had one problem removed and is now unrated.

We want to distribute the prizes and we do not have enough problems for a CF div1+div2 Round. Also, we had less time to prepare the contest.

Anyways, I still hope that you will enjoy the contest.

At least you could have involved demoralizer, he would have prepared the whole pset by himself. You all are noobs.

Well, if he is that good, he could still do it all on his own; then we can hold one on Codeforces as well. I think two days must be more than enough time, by his standard as well as yours, to do a Div1 + Div2. We would really appreciate any help as per your capability, that is, if you have any!

I'm accepting that I am a noob, but you are also a noob. And secondly, if you are from IIT BHU, why weren't you included in problem-setting? Because you are a noob.

Whether or not someone does problem-setting is his own wish. What if I declined because I wanted to participate in a contest my colleagues prepared?

Besides, by your logic, I am pretty sure that your friend demoralizer wasn't invited, so he must be a noob. So why are you suggesting a person who can supposedly hold a Div1 + Div2 on his own? This suggests that you are not only a noob, but mentally damaged as well.

I don't know who he is or why he's saying all that...

Sorry brother, not anything against you. That was just reply for what he was asking.

The Long Challenge still has 4 more days left, and this contest too is rated for all. What will happen if current Div2/Div1 contestants become Div1/Div2 after this contest while the Long is still going on?

What usually happens is that the rating changes of any short contest held during the Long Challenge come after the Long Challenge rating changes.

What was wrong with problem ICM03? I got AC and suddenly it got removed.

There was some problem with the author's solution.

How can I know whether my solution was correct or not??

Most likely it is incorrect, like the author's solution. It seems this problem couldn't be solved for such big constraints: maximal matching can be reduced to this problem in log iterations, and as far as I know there is no known algorithm for 10^5 edges.

Is there any solution for this problem if the number of edges were smaller?

I don't know of such a solution.

https://ideone.com/V4g0Jb is my solution. Can you explain how it is wrong?

Consider the following case:

```
5 4
1 2
2 3
3 4
4 5
1 2 3 4 5
```

The answer may be either 17 or 18, depending on which node you choose first, as there are multiple choices for the highest-degree node.

Thanks, that was helpful.

How to solve Pika-Pika?

It can be solved using persistent segment tree on Trees. We first make a frequency segment tree on all the numbers encountered from Root (say 1) to other nodes.

Then, to deal with a query (U, V), we can decompose the frequency of the elements lying on that path as: frequency till U + frequency till V - 2 * frequency till LCA + the value of the LCA.

Then the query can simply be answered by descending the segment tree: if the sum of values on the left <= X, take all elements on the left and go right with (X - sum of elements on the left); otherwise go left.

Complexity: O((N + Q) log N)

Code: http://p.ip.fi/bo0f
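The descend described above can be sketched in isolation on a plain (non-persistent) frequency tree. Everything here (struct name, interface) is illustrative; in the real solution the same walk runs on the difference of persistent roots along a path:

```cpp
#include <vector>
#include <cstdint>

// Segment tree over weight values 1..n, storing (total weight, count) per node.
// query(x) answers: how many elements can we take, greedily smallest-first,
// so that their total weight stays <= x.
struct WeightTree {
    int n;
    std::vector<int64_t> sum, cnt;
    WeightTree(int maxW) : n(maxW), sum(4 * maxW), cnt(4 * maxW) {}
    void add(int node, int lo, int hi, int w, int64_t c) {
        sum[node] += (int64_t)w * c; cnt[node] += c;
        if (lo == hi) return;
        int mid = (lo + hi) / 2;
        if (w <= mid) add(2 * node, lo, mid, w, c);
        else add(2 * node + 1, mid + 1, hi, w, c);
    }
    void add(int w, int64_t c) { add(1, 1, n, w, c); }
    int64_t query(int node, int lo, int hi, int64_t x) const {
        if (sum[node] <= x) return cnt[node];   // whole subtree fits
        if (lo == hi) return x / lo;            // take as many copies as fit
        int mid = (lo + hi) / 2;
        if (sum[2 * node] <= x)                 // all of the left fits:
            return cnt[2 * node] + query(2 * node + 1, mid + 1, hi, x - sum[2 * node]);
        return query(2 * node, lo, mid, x);     // otherwise stay left
    }
    int64_t query(int64_t x) const { return query(1, 1, n, x); }
};
```

With weights {2, 3, 3, 5}, query(8) takes 2 + 3 + 3 and returns 3 elements.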

Thanks! This is quite a good trick

It is equivalent to performing an implicit binary search over the maximum power of a pokemon that we can defeat. I solved it using an explicit binary search. Great idea, though.

Is Pika-Pika about flattening the tree and then using Mo's algorithm / sqrt-decomposition along with binary indexed tree to calculate the required prefix sum using binary search? Is O(n * sqrt(n) * logn) accepted for these constraints?

I solved it in O(n log^2 n) by parallel binary search.

Did you apply the parallelism on the flattened array? Then how do you cover the vertices that occur exactly twice in that range?

I have many many questions.

Is there some easy solution for the task with the tree, or just HLD with *O*(log^{3}) per query?

Also, I cannot understand why people add constraints like *n*·*m* ≤ 10^{5}. Did you think that the current version is so nice that you need to make it a little bit ugly?

And what is the solution for the sequence problem? It was the most interesting to me at this contest (at least before I found out what the expected solution is).

Sequence problem is really cool :)

First, note that if some G[i] is equal to 0, it must be the last element of the sequence. So, let ans(k) be the number of non-empty sequences of numbers 1, 2, ..., k-1; the answer to the problem will then be 2*ans(k) + 1, because to every sequence we can append 0 at the end, and the single-element sequence [0] accounts for the +1.

Now let's look at the sequence h[i] = gcd(g[i], k). It is not hard to see that h[i] is always divisible by h[i-1], and that h[i] can only be a divisor of k.

So, let dp[i] be the number of non-empty sequences of numbers 1, 2, ..., k-1 which end on a number x such that gcd(x, k) = i. We are only interested in those i that are divisors of k.

Let f[i] be the number of non-empty sequences of distinct numbers from 1, 2, ..., i. These values can be calculated with the formula f[i] = i * (f[i-1] + 1). Note that f[i + mod] % mod = f[i] % mod, so we are only interested in the first mod values of f, which can easily be precalculated in O(mod).

Now, how many numbers j are there such that gcd(j, k) = i? It can be shown that this count is phi(k / i). (As we are only interested in phi of the divisors of k, it can be calculated quickly enough.)

So now dp[i] = f[phi(k / i)] * (1 + S), where S denotes the sum of dp[j] over divisors j of i. The answer to the problem is the sum over all dp[i].

Complexity is something like O(K^2/3 + mod).

Thanks for this long and clear explanation! Task looks really good.

I have only one question (the part which was my only problem during the contest too): why is f[i] = i*(f[i-1]+1)? I do not understand the multiplication by i.

I believe it can be proven by induction, but I hope there is some nice combinatorial proof.

f[n] = n + n*(n-1)+n*(n-1)*(n-2)+...+n*..*1

The logic behind this formula is the following: first we choose how many numbers we use in the sequence (say k). In the first place we can choose any number, in the second any but the first, and so on. So there are n*(n-1)*...*(n-k+1) ways to choose an ordered group of k numbers.

Now, f[n] = n * (1 + (n-1) + (n-1)*(n-2)+...+(n-1)*..*1), which is equal to n*(f[n-1] + 1)
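Both forms of f are easy to cross-check for small n; a quick sketch (function names mine, no modulus):

```cpp
#include <cstdint>

// f via the recurrence f[n] = n * (f[n-1] + 1), with f[0] = 0.
int64_t fRec(int n) { return n == 0 ? 0 : (int64_t)n * (fRec(n - 1) + 1); }

// f as the explicit sum n + n(n-1) + ... + n! of falling factorials.
int64_t fSum(int n) {
    int64_t total = 0, term = 1;
    for (int k = 1; k <= n; ++k) {
        term *= (n - k + 1);   // falling factorial of length k
        total += term;
    }
    return total;
}
```

For n = 3 both give 3 + 6 + 6 = 15 (OEIS A007526, referenced below).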

Thanks for the explanation, now everything is perfectly clear. I had the first part of the formula, but I didn't see the recursive connection between n and n - 1. And the second thing was known to me from math classes, but not useful :D

https://oeis.org/A007526 might be a good reference :)

Thanks for your explanation, but just one question: why does f[i+mod] % mod = f[i] % mod?

You can see that f[mod] % mod = 0 (f[mod] has mod as a factor), f[mod+1] % mod = (mod+1)*(0+1) % mod = 1, and then it continues in the same manner.

By induction we can prove it: suppose that f[i] % mod = f[i + mod] % mod.

Now f[i + 1 + mod] % mod = (i + 1 + mod) * (f[i + mod] + 1) % mod = (i + 1 + mod) * (f[i] + 1) % mod = (i + 1) * (f[i] + 1) % mod = f[i + 1] % mod.
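The claim is also easy to confirm numerically; a tiny self-contained check (function name mine):

```cpp
#include <vector>
#include <cstdint>

// Compute f[i] = i * (f[i-1] + 1) modulo mod for i < len; the claimed
// periodicity f[i + mod] % mod == f[i] % mod can then be checked directly.
std::vector<int64_t> fMod(int len, int64_t mod) {
    std::vector<int64_t> f(len, 0);
    for (int i = 1; i < len; ++i)
        f[i] = (int64_t)i % mod * ((f[i - 1] + 1) % mod) % mod;
    return f;
}
```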

I have a mathematical explanation for the sequence problem, though I figured it out very late into the contest.

Let d be the order of an element x in Z_{K}. Then observe that the order of G[i] always divides the order of G[i-1] (by Lagrange's theorem: the order of a subgroup always divides the order of the group, and G[i] belongs to the group generated by G[i-1]), and in the operation G[i] = (G[i-1] * F[i]) % K the order can only decrease or remain equal. Also observe that if ord(G[i]) < ord(G[i-1]), then x * G[i] = G[i-1] can never hold, so this takes care of the distinctness of G[i]. Now we can do DP over the order of G[i] and use the fact that the number of elements of order x is phi(K/x), since the order of an element a in Z_{K} is K / gcd(a, K).

For the tree problem, we thought of two solutions: one using persistent segment trees and another using a square-root trick on the tree. Details can be found in the editorial.

The constraint n*m ≤ 10^{5} was not added after the problem was made. Also, we did not think that flipping the grid about the diagonal would make it ugly, but I agree that it could have been better. The solution for the sequence problem can be found in the editorial soon.

n*m< = 10^{5}was not added after the problem was made. Also, we did not think that flipping the grid about diagonal would make it ugly. But, I agree that it could have been better.The solution for the sequence problem can be found in the editorial soon.