A
Problem author: wuhudsm, adventofcode
#include <bits/stdc++.h>
using namespace std;
void test_case() {
    int a, b, c, n; cin >> a >> b >> c >> n;
    // Spend the largest coins first: 100s, then 10s, then 1s.
    c = min(c, n / 100); n -= c * 100;
    b = min(b, n / 10);  n -= b * 10;
    a = min(a, n);       n -= a;
    if (n == 0) cout << "YES\n";
    else cout << "NO\n";
}

int32_t main() {
    ios::sync_with_stdio(0); cin.tie(0);
    int t; cin >> t;
    while (t--) test_case();
}
We greedily use the $$$100$$$ coins as much as possible first, then the $$$10$$$ coins as much as possible, and finally the $$$1$$$ coins. Using a larger coin whenever possible is never worse, since any amount it covers would otherwise require strictly more smaller coins.
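The greedy can be cross-checked against a brute force over how many $$$100$$$s and $$$10$$$s are spent (a sketch; the function names are illustrative, not from the solution above):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Greedy from the solution: spend 100-value coins first, then 10s, then 1s.
// a, b, c = available 1-, 10-, 100-value coins; n = amount to pay exactly.
bool canPayGreedy(int a, int b, int c, int n) {
    int h = min(c, n / 100); n -= h * 100;  // hundreds used
    int t = min(b, n / 10);  n -= t * 10;   // tens used
    return n <= a;                          // the rest must be covered by 1s
}

// Exhaustive check over every possible number of 100s and 10s.
bool canPayBrute(int a, int b, int c, int n) {
    for (int h = 0; h <= c; h++)
        for (int t = 0; t <= b; t++) {
            int rest = n - 100 * h - 10 * t;
            if (rest >= 0 && rest <= a) return true;
        }
    return false;
}
```

Running both over small ranges confirms they always agree, which is exactly the exchange argument above: any $$$100$$$ the greedy spends could otherwise only be replaced by ten $$$10$$$s or the equivalent in $$$1$$$s.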
#ifndef LOCAL
#pragma GCC optimize("O3,unroll-loops")
#pragma GCC target("avx,avx2,sse,sse2,sse3,sse4,popcnt,fma")
#endif
#include <bits/stdc++.h>
using namespace std;
#ifdef LOCAL
#include "../debug.h"
#else
#define dbg(...) "11-111"
#endif
int32_t main() {
    ios_base::sync_with_stdio(0); cin.tie(0);
    int T; cin >> T;
    while (T--) {
        int n; cin >> n;
        vector<int> a(n), b(n);
        long long ans = LLONG_MIN, sum = 0, sa = 0, sb = 0;
        for (auto &i : a) cin >> i;
        for (auto &i : b) cin >> i, sum += i;
        ans = 2LL * a.back() - sum;  // 64-bit multiply to avoid overflow
        for (int i = n - 1; i >= 0; i--) {
            sa += a[i], sb += b[i];  // suffix sums of a and b from index i
            ans = max(ans, min((sa - a.back()) - (sb - b[i]), sa - (sum - b.back())));
        }
        cout << ans << '\n';
    }
    return 0;
}
Consider enumerating $$$i_a$$$ from $$$1$$$ to $$$n$$$.
For a fixed $$$i_a$$$, Bob has only two possible optimal options: choose $$$i_b=1$$$ or $$$i_b=i_a+1$$$ (if $$$i_a \neq n$$$). So Bob achieves $$$\min(a_{i_a}+\dots+a_n-(b_1+\dots+b_{n-1}),\ a_{i_a}+\dots+a_{n-1}-(b_{i_a+1}+\dots+b_n))$$$.
Denote by $$$score_i$$$ the answer for fixed $$$i_a=i$$$; the final answer is $$$\max_i score_i$$$.
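The formula above can be transcribed directly with suffix sums (a sketch using 1-based arrays; `bestScore` is an illustrative name, not from the solution above):

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

// score_i = min( a_i+...+a_n   - (b_1+...+b_{n-1}),
//                a_i+...+a_{n-1} - (b_{i+1}+...+b_n) )  for i < n,
// and score_n uses only the first option. Returns max over i of score_i.
ll bestScore(const vector<ll>& a, const vector<ll>& b) {
    int n = (int)a.size() - 1;          // a[1..n], b[1..n]; index 0 unused
    vector<ll> sufA(n + 2, 0), sufB(n + 2, 0);
    for (int i = n; i >= 1; i--) {
        sufA[i] = sufA[i + 1] + a[i];
        sufB[i] = sufB[i + 1] + b[i];
    }
    ll allBprefix = sufB[1] - b[n];     // b_1 + ... + b_{n-1}
    ll ans = sufA[n] - allBprefix;      // i = n: only the first option
    for (int i = 1; i < n; i++) {
        ll opt1 = sufA[i] - allBprefix;             // Bob picks i_b = 1
        ll opt2 = (sufA[i] - a[n]) - sufB[i + 1];   // Bob picks i_b = i + 1
        ans = max(ans, min(opt1, opt2));
    }
    return ans;
}
```

This matches the solution code above, which computes the same two options with running suffix sums in a single backward loop.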
#include <bits/stdc++.h>
using namespace std;
#define int long long
void test_case() {
    int n; cin >> n;
    string s; cin >> s;
    // Pass 1: all-equal runs. If the run ending at index i has length
    // `same` >= 3, there are `same - 2` all-equal super palindromes ending at i.
    int same = 1, cnt = 0;
    for (int i = 1; i < n; i++) {
        if (s[i] != s[i - 1]) same = 0;
        same++;
        if (same >= 3) cnt += same - 2;
    }
    // Flip every even index: alternating runs become all-equal runs.
    for (int i = 0; i < n; i += 2) s[i] = '1' + '0' - s[i];
    // Pass 2: a run of length `same` ending at i now contributes
    // (same - 1) / 2 odd-length alternating super palindromes ending at i.
    same = 1;
    for (int i = 1; i < n; i++) {
        if (s[i] != s[i - 1]) same = 0;
        same++;
        cnt += (same - 1) / 2;
    }
    cout << cnt << '\n';
}

int32_t main() {
    ios::sync_with_stdio(0); cin.tie(0);
    int t; cin >> t;
    while (t--) test_case();
}
The key observation is that super palindromes are exactly the strings of length $$$\geq 3$$$ whose characters are all equal, and the strings of odd length $$$\geq 3$$$ that alternate between $$$0$$$ and $$$1$$$.
After that, we only need to enumerate $$$i$$$ from $$$1$$$ to $$$n$$$ and count the super palindromes that end at index $$$i$$$.
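Concretely, counting at each $$$i$$$ only needs the lengths of the maximal all-equal run and the maximal alternating run ending there. Here is an equivalent single-pass sketch of the two-pass solution above (assuming the characterization of super palindromes stated; the helper name is illustrative):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Counts substrings ending at each index i that are super palindromes:
// all-equal of length >= 3 (there are max(0, same - 2) of them ending at i),
// or alternating of odd length >= 3 (there are (alt - 1) / 2 of them).
long long countSuper(const string& s) {
    long long cnt = 0;
    int same = 1, alt = 1;  // run lengths ending at the current index
    for (size_t i = 1; i < s.size(); i++) {
        if (s[i] == s[i - 1]) { same++; alt = 1; }
        else                  { alt++; same = 1; }
        if (same >= 3) cnt += same - 2;
        cnt += (alt - 1) / 2;
    }
    return cnt;
}
```

For example, `"0000"` contains three all-equal super palindromes (`000` twice and `0000`), and `"01010"` contains four alternating ones (three of length $$$3$$$ and one of length $$$5$$$).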
D
Problem author: KitasCraft
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
const int MAXN = 2e5 + 2;
int p[MAXN];
bitset<MAXN> used;
int main() {
    int t;
    cin >> t;
    while (t--) {
        ll n, sum = 1, k = 1;
        cin >> n;
        for (int i = 1; i <= n; i++) {
            used.reset(i);
            p[i] = 0;
        }
        // Greedily fill the prefix with 1, 2, ... while its sum fits.
        while (sum <= n * (n + 1) / 4) {
            used.set(k);
            p[k] = k;
            k++;
            sum += k;
        }
        k--;
        // Spread the remaining difference r over the last prefix elements.
        ll r = n * (n + 1) / 4 - k * (k + 1) / 2, rmax = n;
        while (r) {
            p[k] = min(p[k] + r, rmax);
            used.set(p[k]);
            used.reset(k);
            r -= min(r, rmax - k);
            k--; rmax--;
        }
        // Fill the suffix with the unused numbers in increasing order.
        int point_p = 1, point_q = 1;
        while (point_p <= n && point_q <= n) {
            while (p[point_p]) point_p++;
            while (used.test(point_q)) point_q++;
            p[point_p] = point_q;
            used.set(point_q);
        }
        for (int i = 1; i <= n; i++) cout << p[i] << ' ';
        cout << endl;
    }
}
Let $$$p$$$ be the lexicographically smallest balanced permutation of length $$$n$$$ and $$$x$$$ be the integer such that $$$1 \leq x < n$$$ and $$$p_1+p_2+\dots+p_x=p_{x+1}+p_{x+2}+\dots+p_n$$$. Let's take a look at the lexicographically smallest (not necessarily balanced) permutation of length $$$n$$$, which is $$${1,2,\dots,n}$$$.
Let $$$m$$$ be the largest value such that $$$1+2+\dots+m \leq \frac{1+2+\dots+n}{2}$$$.
Notice that since $$$m$$$ is as large as possible, $$$1+2+\dots+m+(m+1) > (m+2)+(m+3)+\dots+n$$$. This means $$$x < m+1$$$ for any balanced permutation of length $$$n$$$ (no matter how you rearrange the permutation, $$$p_1+p_2+\dots+p_{m+1} \geq 1+2+\dots+m+(m+1)$$$). It is also worth noting that $$$m < n$$$. This will be useful later.
Since $$$p$$$ is the lexicographically smallest balanced permutation of length $$$n$$$, we want to greedily shove as many small numbers as possible into the prefix of $$$p$$$, then sort the prefix from smallest to largest, making $$$x$$$ as large as possible. Since $$$x < m+1$$$, it is possible that $$$x = m$$$ in the optimal solution, and we can prove that it actually is.
Assume $$$p = {1,2,\dots,n}$$$ (in other words, $$$p_i = i$$$ for all $$$1 \leq i \leq n$$$). Focus on the first $$$m$$$ numbers in the permutation. Let $$$r = \frac{1+2+\dots+n}{2} - (1+2+\dots+m)$$$ (which means $$$r \geq 0$$$). Since $$$1+2+\dots+m+(m+1) > \frac{1+2+\dots+n}{2}$$$, we get $$$r < m+1$$$, which means $$$r \leq m$$$. So, to make sure $$$p_1 + p_2 + \dots + p_m = p_{m+1} + p_{m+2} + \dots + p_n$$$, we can just add $$$1$$$ to all $$$p_j$$$ where $$$m - r + 1 \leq j \leq m$$$, and $$$p_1,p_2,\dots,p_m$$$ will still be a possible prefix of the permutation, since $$$p_m \leq n$$$ and $$$p_1 < p_2 < \dots < p_m$$$ will still be satisfied after the changes (because $$$m < n$$$ and $$$p_1 < p_2 < \dots < p_{m-r} < p_{m-r+1} < \dots < p_m$$$ before the changes). Now that we know $$$x = m$$$, how can we make $$$p$$$ as small lexicographically as possible?
Let's focus on the first $$$m$$$ numbers of $$$p$$$. Set $$$p_i := i$$$ for all $$$1 \leq i \leq m$$$; we will modify this later. We need to add a total of $$$r$$$ to these first $$$m$$$ numbers by spreading the addition. To keep $$$p$$$ as small as possible, we greedily add value to the last few numbers of $$${p_1,p_2,\dots,p_m}$$$ so that the total added is exactly $$$r$$$ and $$${p_1,p_2,\dots,p_m}$$$ is lexicographically smallest. In other words, we can use this procedure to construct the first $$$m$$$ numbers of the permutation:
- Set $$$p_i := i$$$ for all $$$1 \leq i \leq m$$$.
- Let $$$j = m$$$. While $$$r > 0$$$: add $$$\min(r,k-j)$$$ to $$$p_j$$$, decrease $$$r$$$ by $$$\min(r,k-j)$$$, and decrease $$$j$$$ by $$$1$$$, where $$$k$$$ is the maximum integer such that $$$k \leq n$$$ and $$$k$$$ does not currently appear in $$${p_1,p_2,\dots,p_m}$$$.
For the last $$$n-m$$$ numbers, notice that we can just fill these with all the numbers that are not used in the first $$$m$$$ numbers, since the sum of the numbers will automatically be equal to $$$p_1+p_2+\dots+p_m = \frac{1+2+\dots+n}{2}$$$. Do note that to make sure $$$p$$$ is as small as possible, $$$p_{m+1} < p_{m+2} < \dots < p_n$$$ need to be satisfied.
Time Complexity: $$$O(n)$$$
It turns out that there is an unexpected DFS solution to this problem that some of the participants have used, although I don't really know how that solution works :].
Bonus question: Can you guess which values of $$$n$$$ result in no possible balanced permutation?
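A small checker for the definition can help stress-test the construction or explore the bonus question (an illustrative helper, not part of the solution):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Returns true iff some split point x (1 <= x < n) makes the prefix sum
// equal to the suffix sum, i.e. the sequence p is balanced.
bool isBalanced(const vector<long long>& p) {
    long long total = accumulate(p.begin(), p.end(), 0LL), pre = 0;
    for (size_t x = 0; x + 1 < p.size(); x++) {
        pre += p[x];
        if (pre * 2 == total) return true;  // prefix sum equals half the total
    }
    return false;
}
```

For instance, $$${1,2,3}$$$ is balanced ($$$1+2=3$$$), while no permutation of length $$$2$$$ can be, since the total $$$3$$$ is odd.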
E1 and E2
Problem author: Error_Yuan
#include<bits/stdc++.h>
using namespace std;
#define int long long
#define INF (int)(1e16)
#define N 510000
#define K 51
int a[N], cnt[N], pref[N], dp[K][N], n, k;
void calc(int i, int l, int r, int vl, int vr) {
    if (r < l) return;
    int mid = (l + r) / 2;
    int v = vl, mn = INF;
    for (int j = vr; j >= vl; j--) {
        int sum = pref[mid] - pref[j - 1];
        int need = cnt[mid] - cnt[j - 1];
        if (i % 2 == 0) {
            // Move this part's 1s to the left end: positions j, j+1, ...
            need = (j + j + need - 1) * need / 2;
            sum = sum - need;
        } else {
            // Move this part's 1s to the right end: ..., mid-1, mid.
            need = (mid + mid - need + 1) * need / 2;
            sum = need - sum;
        }
        if (dp[i - 1][j - 1] + sum < mn) {
            mn = dp[i - 1][j - 1] + sum;
            v = j;
        }
    }
    dp[i][mid] = mn;
    // Decision monotonicity: recurse with the optimum v as a bound.
    calc(i, l, mid - 1, vl, v);
    calc(i, mid + 1, r, v, vr);
}

void test_case() {
    cin >> n >> k;
    for (int i = 1; i <= n; i++)
        cin >> a[i];
    // Build the prefix xor array in place.
    for (int i = 2; i <= n; i++)
        a[i] ^= a[i - 1];
    // cnt = prefix count of 1s, pref = prefix sum of i * a_i.
    for (int i = 1; i <= n; i++)
        cnt[i] = cnt[i - 1] + a[i], pref[i] = pref[i - 1] + i * a[i];
    for (int i = 1; i < n; i++)
        dp[0][i] = INF;
    dp[0][0] = 0;
    int ans = INF;
    for (int i = 1; i <= k; i++) {
        calc(i, 1, n - 1, 1, n - 1);
        // a[n] (= prexor_n) cannot change; account for the parity of i.
        if (i + (a[n] ^ (i & 1)) <= k)
            ans = min(ans, dp[i][n - 1]);
    }
    cout << ans << '\n';
}

int32_t main() {
    int t; cin >> t;
    while (t--) test_case();
}
Considering the prefix xor array of $$$a$$$, we find that the original operation is equivalent to swapping $$$prexor_{i-1}$$$ and $$$prexor_i$$$ $$$(1<i<n)$$$.
In fact, it is a variation of this.
Afterwards, the constraint of "at most $$$2$$$" is equivalent to having at most one continuous block $$$1\dots1$$$ in $$$prexor$$$.
Because $$$prexor_n$$$ cannot be changed, if $$$prexor_n=1$$$ we merge all $$$1$$$s into one suffix; otherwise, we merge all $$$1$$$s into the vicinity of the middle $$$1$$$.
Read the solution for the Easy Version first.
In this version, we need to divide all $$$1$$$s into at most $$$\lfloor \frac{k}{2} \rfloor$$$ groups and merge the $$$1$$$s of each group in the optimal way.
Denote by $$$pos$$$ the positions of all $$$1$$$s; we can calculate the cost of merging $$$pos_l,\dots,pos_r$$$ in $$$O(1)$$$. Denote it by $$$cost(l,r)$$$.
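One classic way to get such an $$$O(1)$$$ cost, assuming $$$cost(l,r)$$$ is the minimum number of adjacent swaps needed to make the $$$1$$$s at $$$pos_l,\dots,pos_r$$$ consecutive: with $$$q_i = pos_i - i$$$, making the positions consecutive is the same as making all $$$q$$$ values equal, which is cheapest at the median of $$$q$$$; prefix sums of $$$q$$$ then answer each query in $$$O(1)$$$. A sketch under that assumption (the struct name is illustrative; the actual solution splits each group into left and right parts instead):

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

// pos[1..m]: sorted positions of the 1s (pos[0] unused).
// q[i] = pos[i] - i turns "make pos[l..r] consecutive" into
// "make q[l..r] all equal", which is cheapest at the median of q[l..r].
struct MergeCost {
    vector<ll> q, pq;  // reduced positions and their prefix sums
    MergeCost(const vector<ll>& pos) {
        int m = (int)pos.size() - 1;
        q.assign(m + 1, 0); pq.assign(m + 1, 0);
        for (int i = 1; i <= m; i++) {
            q[i] = pos[i] - i;
            pq[i] = pq[i - 1] + q[i];
        }
    }
    // Minimum adjacent swaps to gather 1s number l..r into one block.
    ll cost(int l, int r) const {
        int mid = (l + r) / 2;  // index of a median of q[l..r]
        ll left  = (ll)(mid - l + 1) * q[mid] - (pq[mid] - pq[l - 1]);
        ll right = (pq[r] - pq[mid]) - (ll)(r - mid) * q[mid];
        return left + right;
    }
};
```

For example, with $$$1$$$s at positions $$$1,3,5$$$, gathering all three into one block (say at $$$2,3,4$$$) takes $$$2$$$ swaps.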
Now we have a simple DP: $$$dp_{i,j}=\min_{0 \leq t <i}(dp_{t,j-1}+cost(t+1,i))$$$. This is $$$O(n^2k)$$$. How do we optimize it?
We notice that $$$cost(l+1,r+1)+cost(l,r) \leq cost(l+1,r)+cost(l,r+1)$$$ holds here. By the quadrangle inequality, decision monotonicity holds, so we can optimize it to $$$O(nk \log n)$$$ using classic techniques (a decision stack or divide and conquer).
The code above divides every group into left and right parts to make the calculations easier.
To learn the classic techniques:
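For reference, a minimal divide-and-conquer skeleton that fills one DP layer under decision monotonicity; the cost here is a hypothetical convex placeholder (squared segment length), and the real solution would plug in $$$cost(l,r)$$$ instead:

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
const ll INF = 1e18;

// Hypothetical cost satisfying the quadrangle inequality: squared length.
ll cost(int l, int r) { return (ll)(r - l) * (r - l); }

// Computes cur[i] = min over j <= i of dpPrev[j-1] + cost(j, i) for all
// i in [l, r], knowing the optimal j lies in [optl, optr]. Decision
// monotonicity keeps the total work at O(n log n) per layer.
void dcLayer(int l, int r, int optl, int optr,
             const vector<ll>& dpPrev, vector<ll>& cur) {
    if (l > r) return;
    int mid = (l + r) / 2, best = optl;
    ll mn = INF;
    for (int j = optl; j <= min(mid, optr); j++) {
        if (dpPrev[j - 1] == INF) continue;
        ll val = dpPrev[j - 1] + cost(j, mid);
        if (val < mn) { mn = val; best = j; }
    }
    cur[mid] = mn;
    dcLayer(l, mid - 1, optl, best, dpPrev, cur);  // left half: opt <= best
    dcLayer(mid + 1, r, best, optr, dpPrev, cur);  // right half: opt >= best
}
```

One call per layer, $$$k$$$ layers, gives the $$$O(nk \log n)$$$ bound mentioned above.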
#include <bits/stdc++.h>
using namespace std;
#define int long long
#define MOD (int)(1e9+7)
#define N 110000ll
#define K 20ll
#define B 850000006ll
struct segtree {
    vector<int> sum, lk, lb;  // lazy linear map on each node: x -> lk * x + lb
    segtree() {}
    segtree(int n) {
        sum = vector<int>(4 * n + 1, 0);
        lk = vector<int>(4 * n + 1, 1);
        lb = vector<int>(4 * n + 1, 0);
    }
    void push(int v, int l, int r) {
        if (lk[v] == 1 && lb[v] == 0) return;
        sum[v] = (lk[v] * sum[v] % MOD + (r - l + 1) * lb[v] % MOD) % MOD;
        if (l != r) {
            lk[v * 2 + 0] = lk[v] * lk[v * 2 + 0] % MOD;
            lb[v * 2 + 0] = (lk[v] * lb[v * 2 + 0] % MOD + lb[v]) % MOD;
            lk[v * 2 + 1] = lk[v] * lk[v * 2 + 1] % MOD;
            lb[v * 2 + 1] = (lk[v] * lb[v * 2 + 1] % MOD + lb[v]) % MOD;
        }
        lk[v] = 1, lb[v] = 0;
    }
    void upd(int v, int l, int r, int k, int x) {  // point assign
        push(v, l, r);
        if (k < l || r < k) return;
        if (l == r) {
            sum[v] = x;
            return;
        }
        int mid = (l + r) / 2;
        upd(v * 2 + 0, l, mid, k, x);
        upd(v * 2 + 1, mid + 1, r, k, x);
        sum[v] = (sum[v * 2 + 0] + sum[v * 2 + 1]) % MOD;
    }
    void apply(int v, int l, int r, int ql, int qr, int k, int b) {  // range x -> kx + b
        push(v, l, r);
        if (qr < l || r < ql) return;
        if (ql <= l && r <= qr) {
            lk[v] = k;
            lb[v] = b;
            push(v, l, r);
            return;
        }
        int mid = (l + r) / 2;
        apply(v * 2 + 0, l, mid, ql, qr, k, b);
        apply(v * 2 + 1, mid + 1, r, ql, qr, k, b);
        sum[v] = (sum[v * 2 + 0] + sum[v * 2 + 1]) % MOD;
    }
    int get(int v, int l, int r, int ql, int qr) {  // range sum
        push(v, l, r);
        if (qr < l || r < ql) return 0;
        if (ql <= l && r <= qr) return sum[v];
        int mid = (l + r) / 2;
        return (get(v * 2 + 0, l, mid, ql, qr) +
                get(v * 2 + 1, mid + 1, r, ql, qr)) % MOD;
    }
};

int a[N];
segtree p[K], pp;  // p[j]: per-bit probabilities; pp: p_i * (1 - p_i)

int inv(int x) {   // modular inverse (MOD is prime)
    return x == 1 ? 1 : MOD - MOD / x * inv(MOD % x) % MOD;
}

int32_t main() {
    ios::sync_with_stdio(0); cin.tie(0);
    int n, q; cin >> n >> q;
    for (int i = 0; i < K; i++)
        p[i] = segtree(n);
    pp = segtree(n);
    for (int i = 0; i < n; i++) {
        cin >> a[i];
        for (int j = 0; j < K; j++)
            if (a[i] & (1 << j)) p[j].upd(1, 0, n - 1, i, 1);
    }
    // B = 1/20 mod MOD. Operation 2 maps p -> (1-2B)p + B and
    // p(1-p) -> (1-2B)^2 p(1-p) + B(1-B).
    int pk = (1 - 2 * B % MOD + MOD) % MOD, pb = B;
    int ppk = pk * pk % MOD, ppb = (B - B * B % MOD + MOD) % MOD;
    while (q--) {
        int t; cin >> t;
        if (t == 1) {
            int i, x; cin >> i >> x; i--;
            for (int j = 0; j < K; j++)
                p[j].upd(1, 0, n - 1, i, (x >> j) & 1);
            pp.upd(1, 0, n - 1, i, 0);  // value is now deterministic
        } else if (t == 2) {
            int l, r; cin >> l >> r; l--; r--;
            for (int j = 0; j < K; j++)
                p[j].apply(1, 0, n - 1, l, r, pk, pb);
            pp.apply(1, 0, n - 1, l, r, ppk, ppb);
        } else if (t == 3) {
            int l, r; cin >> l >> r; l--; r--;
            if (l == r) { cout << 0 << '\n'; continue; }
            int ev = 0;
            int sumpp = pp.get(1, 0, n - 1, l, r);
            for (int j = 0; j < K; j++) {
                int sump = p[j].get(1, 0, n - 1, l, r);
                // sum over pairs: S * (len - S) - sum_i p_i(1 - p_i), S = sum_i p_i
                int cur = (sump * (r - l + 1 - sump + MOD) % MOD - sumpp + MOD) % MOD;
                (ev += cur * (1 << j) % MOD) %= MOD;
            }
            (ev *= inv(r - l + 1) * inv(r - l) % MOD * 2 % MOD) %= MOD;
            cout << ev << '\n';
        }
    }
}
Firstly, by linearity of expectation, we can calculate the answer for each bit independently.
The formula for the expected value on a segment for one bit is $$$\frac{\sum_{l\leq i\neq j\leq r} p_i(1-p_j)}{(r - l + 1) \cdot (r - l) / 2}$$$, where $$$p_i$$$ is the probability that $$$a_i$$$ is set to $$$1$$$.
To calculate this, we just need to maintain $$$\sum_{l\leq i\leq r} p_i$$$ and $$$\sum_{l\leq i\leq r} p_i(1-p_i)$$$. Let $$$po_i$$$ be $$$p_i$$$ before the second operation and $$$b$$$ the probability of a bit changing ($$$\frac{1}{20}$$$). Then $$$p_i = po_i(1-b) + (1-po_i)b = (1-2b)po_i + b$$$, and consequently $$$p_i(1-p_i) = (1-2b)^2 po_i(1-po_i) + b(1-b)$$$, so both maintained sums transform linearly.
Now, to maintain $$$\sum_{l\leq i\leq r} p_i$$$ and $$$\sum_{l\leq i\leq r} p_i(1-p_i)$$$, we just need to apply a linear function on a segment, which we can do using a segment tree. This is already fast enough with the C++20 compiler. We can also notice that $$$p_i(1-p_i)$$$ is the same for all bits, so we can maintain only one segment tree for it. With this optimization the solution passes with other compilers as well.
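A quick numeric sanity check of these two linear maps, in plain floating point rather than the modular field (function names are illustrative):

```cpp
#include <bits/stdc++.h>
using namespace std;

// One application of operation 2 to a single probability:
// p' = p(1-b) + (1-p)b = (1-2b)p + b.
double stepP(double p, double b) { return (1 - 2 * b) * p + b; }

// The tracked product p(1-p) transforms linearly as well:
// p'(1-p') = (1-2b)^2 * p(1-p) + b(1-b).
double stepQ(double q, double b) {
    return (1 - 2 * b) * (1 - 2 * b) * q + b * (1 - b);
}
```

For $$$p=0.3$$$ and $$$b=0.05$$$ the map gives $$$p'=0.32$$$, and applying `stepQ` to $$$p(1-p)$$$ indeed matches $$$p'(1-p')$$$, confirming that the second segment tree only needs the coefficients $$$(1-2b)^2$$$ and $$$b(1-b)$$$.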
The final complexity is $$$\mathcal{O}(n \log n \log A)$$$.
Thanks for the clear editorial ^^ enjoyed E2
How will we see the rating changes of this round?
You can check the TheForces website for the data and rating changes from the TheForces rounds you participate in.
Do note that this is not an official CF rating, so your TheForces rating will be independent of the CF rating.
It's not even opening.. lol
just reload it i guess
Can someone explain, for problem C, how to count those types of strings ending at index $$$i$$$?