103505A - Touch
Problem author : Gheal
Let $$$f(i)$$$ be the number of times that Glob touches the $$$i$$$-th element.
Hints:
$$$f(1)$$$ and $$$f(n)$$$ uniquely determine a solution.
$$$f(3)$$$ is constant, regardless of $$$f(1)$$$.
$$$f(n-2)$$$ is constant, regardless of $$$f(1)$$$.
$$$f(i+3)-f(i)=a[i+2]-a[i+1]$$$
Solution:
Lemma 1: $$$f(3)=a[2]-a[1]$$$, regardless of the value of $$$f(1)$$$. This follows from $$$a[1]=f(1)+f(2)$$$ and $$$a[2]=f(1)+f(2)+f(3)$$$. Similarly, $$$f(n-2)=a[n-1]-a[n]$$$.
Lemma 2: $$$f(i+3)-f(i)=a[i+2]-a[i+1]$$$, since $$$a[i+1]=f(i)+f(i+1)+f(i+2)$$$ and $$$a[i+2]=f(i+1)+f(i+2)+f(i+3)$$$.
Using these two lemmas, one can find the values of $$$f(3k)$$$ and $$$f(n-2-3k)$$$ relatively easily. Therefore, if $$$n+1$$$ is not a multiple of $$$3$$$, then either $$$f(1)$$$ or $$$f(n)$$$ is known, and the rest of the solution can be computed accordingly.
To solve for $$$n+1=3k$$$, we will need to write $$$f(x)$$$ as $$$a \cdot f(1) + b$$$; $$$a,b \in \mathbb{Z}$$$:
$$$f(1)=1 \cdot f(1) + 0$$$;
$$$f(2)=a[1]-f(1)=-1 \cdot f(1) + a[1]$$$;
$$$f(3)=a[2]-f(1)-f(2)=a[2]-a[1]= 0 \cdot f(1) + a[2]-a[1]$$$;
$$$f(4)=a[3]-f(2)-f(3)=a[3]-a[2]+f(1)= 1 \cdot f(1) + a[3]-a[2]$$$;
$$$f(5)=a[4]-f(3)-f(4)=a[4]-a[3]-f(1)= -1 \cdot f(1) + a[4]-a[3]$$$;
$$$f(6)=a[5]-f(4)-f(5)=a[5]-a[4]= 0 \cdot f(1) + a[5]-a[4]$$$;
$$$\ldots$$$
A crucial observation is that the coefficients of $$$f(1)$$$ follow the repeating pattern $$$[1,-1,0,1,-1,0,1,-1,0,\ldots]$$$. Thus, we can start by "guessing" that $$$f(1)=0$$$ and computing the other values of $$$f$$$ accordingly. Using this guess, it is possible to find the minimum valid value of $$$f(1)$$$:
$$$f_{min}(1)=-\min(0, f(4), f(7), \ldots)$$$, where the values of $$$f$$$ are those computed under the guess $$$f(1)=0$$$.
The final step is to set $$$f(1)$$$ as $$$f_{min}(1)$$$ and check whether this solution is valid.
Time complexity: $$$O(N)$$$ per testcase
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
const ll NMAX = 3e5 + 5;
ll h[NMAX], dp[NMAX]; // dp[i] = f(i+1), 0-based
void testcase() {
    ll n;
    cin >> n;
    for (ll i = 0; i < n; i++) {
        cin >> h[i];
        dp[i] = 0;
    }
    dp[n] = 0;
    if (n % 3 == 2) { // n+1 is a multiple of 3: guess f(1)=0, then shift
        ll fmin = 0;
        for (ll i = 1; i < n; i++) {
            dp[i] = h[i - 1] - dp[i - 1] - (i >= 2 ? dp[i - 2] : 0);
            if (i % 3 == 0) // positions where the coefficient of f(1) is +1
                fmin = min(fmin, dp[i]);
        }
        fmin = -fmin; // minimum valid f(1)
        for (ll i = 0; i < n; i++)
            dp[i] += (i % 3 == 0 ? fmin : (i % 3 == 1 ? -fmin : 0));
    }
    else {
        // Lemma 1 + Lemma 2: propagate the known chains from both ends
        for (ll i = 2; i < n; i += 3)
            dp[i] = (i > 2 ? dp[i - 3] : 0) + h[i - 1] - h[i - 2];
        for (ll i = n - 3; i >= 0; i -= 3)
            dp[i] = (i + 3 < n ? dp[i + 3] : 0) + h[i + 1] - h[i + 2];
        // fill the remaining positions using a[i] = f(i-1) + f(i) + f(i+1)
        for (ll i = 0; i < n; i++) {
            if (i % 3 != 2 && i % 3 != n % 3)
                dp[i] = h[i] - (i ? dp[i - 1] : 0) - (i + 1 < n ? dp[i + 1] : 0);
        }
    }
    // validate the candidate solution
    for (ll i = 0; i < n; i++) {
        if (dp[i] < 0 || dp[i] > h[i]) {
            cout << "NO\n";
            return;
        }
        if (h[i] != (dp[i] + (i ? dp[i - 1] : 0) + (i + 1 < n ? dp[i + 1] : 0))) {
            cout << "NO\n";
            return;
        }
    }
    cout << "YES\n";
    for (ll i = 0; i < n; i++)
        cout << dp[i] << ' ';
    cout << '\n';
}
int main() {
    ios_base::sync_with_stdio(false);
    cin.tie(0);
    ll t;
    cin >> t;
    while (t--)
        testcase();
    return 0;
}
103505B - In Heaters
Problem author : cadmiumky
Are sponges all that necessary? Is there a way in which we can ignore their Y-coordinate?
The Y-coordinate is useless -- if we can calculate how many vapors are generated before a time T at some column we can solve the problem. But first: how can you tell how many vapors you generated at a time T?
Assuming you solved the previous question: let's take the formula $$$(t + c) - t + 1$$$, representing how many vapors a sock generates in total. Is there a way to extend this formula to "deindividualise" the two indexes (i.e., $$$t+c$$$ and $$$t$$$), such that the formula no longer depends on the exact pair formed by them, but on a more general sum? And assuming you did this to calculate how many vapors are generated before a very distant time, such as $$$10^9$$$, what could you add to this formula to "rewrite" it according to the actual time $$$T$$$ (the time before which the queries are to be solved) you are currently calculating?
Adding socks may now be a problem, since we would have to make our data structure dynamic -- but can we just add them all to the data structure in advance and, until they are actually added, simply ignore them (and their corresponding value in any sum)?
First, by the nature of the problem, we are dealing with parallel, independent lanes (which we would have liked to merge somehow), so we will consider that we can do the following:
Maintain a system that could at least answer queries without any updates
Maintain a structure that allows adding socks in such a manner that queries can still be answered in reasonable time
Let us now focus on only one lane. Considering that after any update we can determine the time at which the cell above that lane is eroding, we could maintain some sort of priority queue to get the minimum time.
Now, we must split our solution into three parts:
What are sponges?
First, observe that the way sponges work allows us to disregard their Y-coordinate: if some set of vapors would be absorbed by a sponge, it really doesn't matter where the sponge is positioned, since the same set would (or at least could) be absorbed sooner or later. These vapors should then be excluded when calculating the time at which the ceiling should take a visit to the dentist. Better yet, if $$$L_i$$$ is the amount of vapor that needs to accumulate in the ceiling above column $$$i$$$ to ruin it, then $$$L_i = W + C_i$$$, where $$$C_i$$$ is the amount that can be accumulated in the sponges on that column. Assuming we can calculate $$$L_i$$$ easily (which we can), we should then proceed to the next step: finding the time at which the ceiling falls at some point.
How do you count vapors?
The vapors that reach the ceiling at time $$$X+H$$$ are generated at time $$$X$$$ (by the way vapors move). We will keep ignoring the requirement that they must be touching the ceiling to be counted, and simply calculate the number of vapors generated before some time $$$X$$$. Let this number be denoted as $$$T(X)$$$.
If we only had to calculate how many vapors are generated at one exact time $$$X$$$, the question would be easy: we could use difference arrays and prefix sums. But now we must also sum those that were generated earlier.
Let us consider some segment in the difference array. Let $$$l$$$ be the position where the difference array increases by 1 and $$$r$$$ the position where it decreases by 1 (i.e., in the input, $$$l$$$ would be $$$t_i$$$ and $$$r$$$ would be $$$t_i + c_i$$$, for some $$$i$$$ on the line we are currently on). Then:
Let's state the obvious: a generation time greater than $$$r$$$ gets no contribution from that segment, every time between $$$l$$$ and $$$r$$$ contributes 1 vapor, and every time before $$$l$$$ contributes nothing. Moreover, if we consider that we cannot surpass some very distant time $$$T$$$, then it is convenient to give $$$l$$$ a "weight" of $$$T - l$$$ and $$$r$$$ a "weight" of $$$-(T - r)$$$. Then, the sum of these two weights is $$$(T - l) - (T - r) = r - l$$$, the total contribution of the whole segment.
Generally, any position $$$x$$$ in the difference array gets a corresponding weight equal to $$$T - x$$$ (with the sign given by the difference array). Everything else is calculated similarly to the classical difference-array algorithm.
A data structure in which we can maintain the socks is a Fenwick tree, indexed by the sorted order of the event times of the modified difference array described earlier. Now, getting $$$T(X)$$$ requires us to:
Get $$$A_x$$$ = the sum of all the weights on indexes whose event time is not later than $$$X$$$
Get $$$B_x$$$ = the sum of all the normalized weights (i.e. the normalized weight of some index would be -1 if the weight was negative and 1 otherwise).
(the $$$x$$$ subscript denotes that these values depend on the lane for which we are currently calculating, to avoid confusion)
The problem with this weight system is that, for a segment that has started but not yet ended by time $$$X$$$, $$$A_x$$$ alone counts all the vapors it would ever generate up to the distant time $$$T$$$, even though some of those vapors will not have had the time to be generated by time $$$X$$$. We need $$$B_x$$$ to redefine the weight of every such index from $$$T - value$$$ (where the constant $$$T$$$ is the very distant time defined earlier) to $$$X - value$$$. Doing this can be expressed as the following equation: $$$T(X) = A_x + (X + 1 - T) \cdot B_x$$$.
Now, being able to calculate the number of vapors generated before some time, we can binary search for the first time at which we surpass the limit of the current lane.
How do you process updates?
At first, we consider all future socks to be inactive (i.e., their contributions to both $$$A_x$$$ and $$$B_x$$$ are 0). The socks that are present initially are forcefully added to their lanes right away. Then, when a sock is added by a query, we activate its corresponding indexes, in both the Fenwick tree for calculating $$$A_x$$$ and the one for calculating $$$B_x$$$.
#include <bits/stdc++.h>
using namespace std;
const int coordmax = 1e5 + 5;
const int timemax = 2e5 + 5;
using pii = pair<int, int>;
#define lsb(x) (x & (-x))
class AIB { // Fenwick tree
    vector<int> tree;
    int len = 0;
public:
    void update(int poz, int val) {
        while (poz <= len) {
            tree[poz] += val;
            poz += lsb(poz);
        }
    }
    int query(int poz) {
        int sum = 0;
        while (poz > 0) {
            sum += tree[poz];
            poz -= lsb(poz);
        }
        return sum;
    }
    void reset(int clen) {
        tree = vector<int>(clen + 1, 0);
        len = clen;
    }
};
multiset<int> positions; // candidate falling times, one per lane
int lastwettime[coordmax];
int offset[coordmax];
int globaloffset;
AIB sponsock[coordmax];  // weighted sums (A_x)
AIB snormsock[coordmax]; // normalized +1/-1 sums (B_x)
vector<pii> sortsock[coordmax]; // (event time, sock event index) per lane
struct sock {
    int x, vpon, vnorm, aibindex;
};
vector<sock> socks;
void addsock(int x, int c, int t) {
    // start event: weight timemax - t, end event: weight -(timemax - t - c)
    sortsock[x].push_back(pii{t, socks.size()});
    socks.push_back(sock{x, timemax - t, 1, -1});
    sortsock[x].push_back(pii{t + c, socks.size()});
    socks.push_back(sock{x, -(timemax - t - c), -1, -1});
}
void initline(int line) {
    sort(sortsock[line].begin(), sortsock[line].end());
    sponsock[line].reset(sortsock[line].size());
    snormsock[line].reset(sortsock[line].size());
    for (int i = 0; i < (int)sortsock[line].size(); i++) {
        auto x = sortsock[line][i];
        socks[x.second].aibindex = i + 1;
    }
}
void insertsock(int index) { // activate a previously reserved sock event
    auto x = socks[index];
    sponsock[x.x].update(x.aibindex, x.vpon);
    snormsock[x.x].update(x.aibindex, x.vnorm);
}
int T(int poz, int time) { // vapors generated on lane poz up to `time`
    int i = distance(sortsock[poz].begin(),
                     upper_bound(sortsock[poz].begin(), sortsock[poz].end(), pii{time, (int)1e9}));
    return sponsock[poz].query(i) + (time + 1 - timemax) * snormsock[poz].query(i);
}
void getnewtime(int poz) { // recompute the falling time of lane poz
    int limitval = globaloffset + offset[poz];
    int temp, limittime = 0;
    for (int i = 17; i >= 0; i--) { // binary lifting over the time axis
        temp = min(timemax - 1, limittime + (1 << i));
        if (T(poz, temp) < limitval)
            limittime = temp;
    }
    limittime++;
    if (lastwettime[poz] != -1)
        positions.erase(positions.find(lastwettime[poz]));
    lastwettime[poz] = limittime;
    positions.insert(limittime);
}
vector<tuple<int, int, int, int>> queries;
int main() {
    ios_base::sync_with_stdio(0);
    cin.tie(0);
    int n, m, q, x, c, t, y, H, type;
    cin >> n >> m >> q >> globaloffset >> H;
    for (int i = 0; i < n; i++) {
        cin >> x >> c >> t;
        addsock(x, c, t);
    }
    for (int i = 0; i < m; i++) {
        cin >> x >> y >> c;
        offset[x] += c;
    }
    for (int i = 0; i < q; i++) {
        cin >> t >> x >> y >> c;
        queries.push_back({t, x, y, c});
        if (t == 1)
            addsock(x, y, c); // reserve the sock's indexes now, activate later
    }
    for (int i = 1; i < coordmax; i++) {
        lastwettime[i] = -1;
        initline(i);
    }
    for (int i = 0; i < 2 * n; i++) {
        insertsock(i); // only the initial socks start out active
    }
    for (int i = 1; i < coordmax; i++) {
        getnewtime(i);
    }
    auto temp = *(positions.begin());
    if (timemax - temp <= 2) { // no lane ever reaches its limit
        temp = -H - 1;
    }
    cout << temp + H << '\n';
    int sindex = 2 * n;
    for (auto tup : queries) {
        tie(type, x, c, t) = tup;
        if (type == 1) {
            insertsock(sindex++); // activate the two reserved events
            insertsock(sindex++);
        }
        else
            offset[x] += t;
        getnewtime(x);
        temp = *(positions.begin());
        if (timemax - temp <= 2) {
            temp = -H - 1;
        }
        cout << temp + H << '\n';
    }
}
103505C - The revenge of GrandPaPà
Problem author : tibinyte
Subtask 1 : $$$N \le 9$$$
In this subtask, we can just generate all the permutations using a backtracking algorithm, and for each one use a searching algorithm to find the length of the maximal subsequence with consecutive values. This way we get an $$$O(N! \cdot N)$$$ time complexity and ace this subtask.
Subtask 2 : $$$N \le 500$$$
Let's first assume we know how to calculate $$$f(N,K)$$$, where $$$f(N,K)$$$ is the number of permutations of length $$$N$$$ with no sequence of consecutive elements exceeding length $$$K$$$. Thus, our answer can be written as $$$f(N,K) - f(N,K-1)$$$. It is easy to see why this is correct: the subtraction leaves us only with the permutations whose maximum such length is exactly $$$K$$$. Let's dig further into calculating $$$f(N,K)$$$. Consider the identity permutation $$$P = [1,2,3,4,\ldots,N]$$$. Any permutation is nothing but a rearrangement of some subsegments of this permutation. Since we don't want the length of any subsegment to exceed $$$K$$$, we will only "cut" this permutation into "buckets" of lengths smaller than or equal to $$$K$$$. Let's look at such a partition for $$$N = 6, K = 3$$$.
$$$P = [1,2,3,4,5,6]$$$, buckets = $$$[[1,2,3], [4,5,6]]$$$.
Look at the first bucket: if we want to make a valid permutation, it should never be immediately to the left of the second bucket. The only way to make a good permutation with these buckets is to permute them as follows: $$$[[4,5,6], [1,2,3]]$$$. Let's write a formal definition of a good permutation of the buckets: a permutation $$$P$$$ of the buckets is good if and only if there is no $$$i$$$ such that $$$P_i = P_{i+1} - 1$$$, considering the buckets as being numbered from $$$1$$$ to $$$|P|$$$ from left to right. Great! Now notice how the number of valid permutations for a set of buckets depends only on $$$|P|$$$, and not on the elements themselves. Double great!! It is easy to see that every set of buckets is nothing but a way of partitioning $$$N$$$ into some numbers. For the above bucket split ($$$N=6, K=3$$$), we have the partition $$$6 = 3+3$$$. Call $$$g(i,j)$$$ the number of ways of writing $$$i$$$ as a sum of $$$j$$$ numbers (order matters, since the buckets are cut from left to right), each in the range $$$[1,K]$$$. This can easily be computed via dynamic programming in $$$O(N^2)$$$. This may be too slow, because we need to calculate the answer for every $$$K$$$, but it is a start. Now let's focus on the number of ways to permute $$$|P|$$$ buckets.
We will continue solving the mentioned "subproblem". It turns out that the number of good permutations $$$P$$$ ending in $$$1$$$ is equal to the number ending in $$$2$$$, ..., equal to the number ending in $$$|P|-1$$$. The number of permutations ending in $$$|P|$$$ is different, but we can handle this. The proof of this observation is hard, so we will just rely on the symmetry of the given problem. Now, let $$$DP(i,c)$$$ be the number of permutations of length $$$i$$$ respecting the above restriction, where $$$c = true$$$ if the permutation ends in $$$i$$$ and false otherwise. The transitions collapse into the derangement-style recurrence used by the reference implementation: with $$$q(0)=1$$$, $$$q(1)=0$$$ and $$$q(i)=(i-1) \cdot (q(i-1)+q(i-2))$$$, the total number of good permutations of $$$i$$$ buckets is $$$Q(i)=q(i)+q(i-1)$$$.
Now we can conclude the MAGIC formula that will solve our problem: $$$f(N,K) = \sum_{j=1}^{N} g(N,j) \cdot Q(j)$$$, where $$$Q(j)$$$ is the number of good permutations of $$$j$$$ buckets defined above.
Since we need the answer for every $$$K$$$, we get a time complexity of $$$O(N^3)$$$, which will pass this subtask and score $$$60$$$ points.
Subtask 3 : $$$N \le 2000$$$
Notice how the only thing that stops us from getting a full solution is the recalculation of $$$g$$$, which is done $$$N$$$ times, each time in $$$O(N^2)$$$. Let's build two DP arrays: $$$cnt1(j,i)$$$ = the number of ways to write $$$i$$$ as a sum of $$$j$$$ numbers in the range $$$[1,\sqrt{N}-1]$$$, and $$$cnt2(j,i,t)$$$ = the number of ways to write $$$i$$$ as a sum of $$$j$$$ numbers such that every number fits in the range $$$[\sqrt{N},\sqrt{N}+t]$$$. Notice how $$$j \le \sqrt{N}$$$ for every $$$cnt2$$$ value. We can calculate these arrays in $$$O(N^2 \cdot \sqrt{N})$$$. For the first $$$\sqrt{N}-1$$$ values of $$$K$$$ we will calculate the answer like before, in $$$O(N^2)$$$. Then, for each $$$X$$$ in the range $$$[\sqrt{N},N]$$$ we will do some smart merging of $$$cnt1$$$ and $$$cnt2$$$. Let's iterate through the sum $$$s$$$ that the partitions in $$$cnt2$$$ contribute and through the number $$$c$$$ of elements in such a partition. Interleaving the $$$c$$$ large parts with the $$$j$$$ small parts gives a binomial coefficient, so we conclude the formula: $$$f(N,X) = \sum_{s} \sum_{c} cnt2(c, s, X-\sqrt{N}) \cdot \sum_{j} cnt1(j, N-s) \cdot \binom{j+c}{c} \cdot DP(j+c)$$$, where $$$DP(m)$$$ denotes the number of good permutations of $$$m$$$ buckets.
We will need to optimize the way of calculating the formula, since it takes $$$O(N^2\sqrt{N})$$$ time to calculate. It turns out we can use prefix sums to calculate that sum really fast. Let's modify $$$cnt1$$$: $$$cnt1(j,i,t)$$$ = the number of ways to write $$$i$$$ as a sum of $$$j$$$ small numbers, if we will additionally add exactly $$$t$$$ numbers that are in the range $$$[\sqrt{N},N]$$$. This way, all the coefficients of the equation depend only on $$$cnt1$$$, and we can fold their values into $$$cnt1$$$. This means that $$$cnt1(j,i,t)$$$ becomes $$$cnt1(j,i,t) \cdot \binom{j+t}{t} \cdot DP(j+t)$$$, and the inner sum over $$$j$$$ reduces to a sum over a single array.
Since the contributed sum and the number of large parts are fixed in the inner term, we can calculate the last sum in $$$O(1)$$$ by keeping prefix sums over $$$cnt1$$$. This way we get $$$O(N \cdot \sqrt{N})$$$ calculation time for every $$$X$$$ in the range $$$[\sqrt{N}, N]$$$, which amounts to $$$O(N^2 \cdot \sqrt{N})$$$ overall. For the first $$$\sqrt{N}-1$$$ terms we can use the idea from the previous subtask, for another $$$O(N^2 \cdot \sqrt{N})$$$. Thus, our final time complexity is $$$O(N^2 \sqrt{N})$$$, which is enough to get 100 points.
Notes from the author
GrandPaPà is happy, hope you are too :). Also, please rate this editorial, since I'm not sure that everything was explained properly. Your feedback means a lot to me, since I can further improve my problemsetting. Have a nice day, and remember that you are the best ;)
#include <bits/stdc++.h>
using namespace std;
int magic[2001][2001][46], dp[2001][2001], prec[2001][2001], cnt[2001], ans[2001], MOD;
// dp[j][i] = number of ways to write i as an ordered sum of j numbers in [st, dr]
void solve(int n, int termeni, int st, int dr) {
    for (int i = 0; i <= n; ++i) {
        dp[0][i] = 1;
    }
    for (int j = 1; j <= termeni; ++j) {
        for (int i = 1; i < st; ++i) {
            dp[j][i] = 0;
        }
        for (int i = st; i <= n; ++i) {
            if (i > dr) {
                dp[j][i] = dp[j - 1][i - st] - dp[j - 1][i - dr - 1];
                if (dp[j][i] < 0) {
                    dp[j][i] += MOD;
                }
            }
            else {
                dp[j][i] = dp[j - 1][i - st];
            }
            dp[j][i] += dp[j][i - 1]; // keep each row prefix-summed while iterating
            if (dp[j][i] >= MOD) {
                dp[j][i] -= MOD;
            }
        }
    }
    for (int i = 1; i <= n; ++i) {
        dp[0][i] = 0;
    }
    // undo the prefix sums to recover the plain counts
    for (int j = 1; j <= termeni; ++j) {
        for (int i = n; i >= 1; --i) {
            dp[j][i] -= dp[j][i - 1];
            if (dp[j][i] < 0) {
                dp[j][i] += MOD;
            }
        }
    }
}
int main() {
    std::ios_base::sync_with_stdio(false);
    cin.tie(nullptr);
    int n;
    cin >> n >> MOD;
    int lim = sqrt(n) + 1;
    // cnt[i] = q(i), the derangement-style recurrence q(i) = (i-1)(q(i-1)+q(i-2))
    cnt[0] = 1;
    cnt[1] = 0;
    for (int i = 2; i <= n; ++i) {
        cnt[i] = (1ll * (cnt[i - 1] + cnt[i - 2]) * (i - 1)) % MOD;
    }
    // after this loop, cnt[i] = q(i) + q(i-1) = Q(i), the good bucket permutations
    for (int i = n; i >= 1; --i) {
        cnt[i] += cnt[i - 1];
        if (cnt[i] >= MOD) {
            cnt[i] -= MOD;
        }
    }
    // prec[i][j] = C(i, j) via Pascal's triangle, later multiplied by Q(i)
    prec[0][0] = 1;
    for (int i = 1; i <= n; ++i) {
        for (int j = 0; j <= i; ++j) {
            if (j == 0 || j == i) {
                prec[i][j] = 1;
            }
            else {
                prec[i][j] = prec[i - 1][j] + prec[i - 1][j - 1];
                if (prec[i][j] >= MOD) {
                    prec[i][j] -= MOD;
                }
            }
        }
    }
    for (int i = 1; i <= n; ++i) {
        for (int j = 0; j <= i; ++j) {
            prec[i][j] = (1ll * prec[i][j] * cnt[i]) % MOD;
        }
    }
    for (int i = 1; i <= n; ++i) {
        if (i < lim) {
            // small K: direct computation of f(N, i)
            solve(n, n, 1, i);
            for (int j = 1; j <= n; ++j) {
                ans[i] += (1ll * dp[j][n] * cnt[j]) % MOD;
                if (ans[i] >= MOD) {
                    ans[i] -= MOD;
                }
            }
            if (i == lim - 1) { // MAGIC !!
                // magic[j][i][t] = prefix sums over j of cnt1(j, i) * C(j+t, t) * Q(j+t)
                for (int t = 0; t <= lim; ++t) {
                    magic[0][0][t] = cnt[t];
                }
                for (int j = 1; j <= n; ++j) {
                    for (int i = 1; i <= n; ++i) { // note: shadows the outer i
                        for (int t = 0; t <= lim && j + t <= n; ++t) {
                            if (dp[j][i] != 0) {
                                magic[j][i][t] = (1ll * dp[j][i] * prec[j + t][t]) % MOD;
                            }
                        }
                    }
                }
                for (int j = 1; j <= n; ++j) {
                    for (int i = 0; i <= n; ++i) {
                        for (int t = 0; t <= lim; ++t) {
                            magic[j][i][t] += magic[j - 1][i][t];
                            if (magic[j][i][t] >= MOD) {
                                magic[j][i][t] -= MOD;
                            }
                        }
                    }
                }
            }
        }
        else {
            // large K: dp becomes cnt2 (parts in [lim, i]), merged with magic
            solve(n, lim, lim, i);
            for (int sum = 0; sum <= n; ++sum) {
                for (int cati = 0; cati * lim <= sum; ++cati) {
                    ans[i] += (1ll * magic[n][n - sum][cati] * dp[cati][sum]) % MOD;
                    if (ans[i] >= MOD) {
                        ans[i] -= MOD;
                    }
                }
            }
        }
    }
    // ans[i] holds f(N, i); take differences to get "maximum length exactly i"
    for (int i = n; i >= 1; --i) {
        ans[i] -= ans[i - 1];
        if (ans[i] < 0) {
            ans[i] += MOD;
        }
    }
    for (int i = 1; i <= n; ++i) {
        cout << ans[i] << ' ';
    }
}
Actually, an $$$\mathcal{O}\left(n^2\right)$$$ solution is available for 103505C - The revenge of GrandPaPà.
my submission: 144068682