

The tutorials for problems B & E were written by 1am, D by The-Legend, F by Qumair, J & K by Motarack and M by abo_od303. Thank you guys for also reviewing the rest of the tutorial!

It’s enough to try all segments of size *K* that either start at the start of a range or end at the end of a range. Suppose the optimal segment starts inside some range and ends inside another (possibly ranges of zeroes). Then shifting the segment in one direction increases the answer while shifting it in the other decreases it, until a boundary of a range is reached; if both ranges hold the same amount of money, shifting does not change the answer at all. So an optimal segment aligned with a range boundary always exists. This can be implemented with a sorted array of the ranges: for each range starting at *l*_{i} we binary search for *l*_{i} + *K* - 1, and the total of all the ranges in between comes from a prefix summation of (*r*_{i} - *l*_{i} + 1) × *v*_{i}. The same process is repeated for the ends of all ranges.

**Complexity per test-case:** *O*(*N* × log(*N*)).

```
#include <bits/stdc++.h>
using namespace std;

struct node {
    int a, b, c;
    node() {}
    node(int a, int b, int c) : a(a), b(b), c(c) {}
    bool operator<(const node& n) const {
        return a < n.a; // compare by left endpoint only
    }
};

int const N = 1e5 + 10;
int t, n, x;
long long cs[N];
node a[N];

int main() {
    scanf("%d", &t);
    while (t-- != 0) {
        scanf("%d %d", &n, &x);
        for (int i = 1; i <= n; ++i)
            scanf("%d %d %d", &a[i].a, &a[i].b, &a[i].c);
        ++n;
        a[n] = node(-1, -1, 0); // sentinel range
        sort(a + 1, a + n + 1);
        for (int i = 1; i <= n; ++i)
            cs[i] = 1ll * (a[i].b - a[i].a + 1) * a[i].c + cs[i - 1];
        long long best = 0;
        for (int i = 1, nw; i <= n; ++i) {
            nw = a[i].a + x - 1;
            // only the first field matters for the comparison
            int idx = upper_bound(a + 1, a + n + 1, node(nw, 0, 0)) - a;
            --idx;
            if (nw >= a[idx].b)
                best = max(best, cs[idx] - cs[i - 1]);
            else
                best = max(best, cs[idx - 1] - cs[i - 1] + 1ll * (nw - a[idx].a + 1) * a[idx].c);
        }
        for (int i = n, nw; i > 0; --i) {
            nw = max(0, a[i].b - x + 1);
            int idx = upper_bound(a + 1, a + n + 1, node(nw, 0, 0)) - a;
            --idx;
            if (nw <= a[idx].a)
                best = max(best, cs[i] - cs[idx - 1]);
            else
                best = max(best, cs[i] - cs[idx] + 1ll * (a[idx].b - nw + 1) * a[idx].c);
        }
        printf("%lld\n", best);
    }
    return 0;
}
```

This problem simply requires implementation with basic modulus and division calculations. Please observe the following image:

```
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
#define se second
#define fi first
#define pb push_back

const int N = 1001;
ll fans[N];

int main() {
    int t, n;
    ll x;
    scanf("%d", &t);
    while (t--) {
        scanf("%lld%d", &x, &n);
        memset(fans, 0, sizeof fans);
        if (n == 1) {
            cout << x << endl;
            continue;
        }
        for (int i = 1; i <= n && x; i++, x--)
            fans[i]++;
        if (x > 0) {
            ll temp = x / (n - 1), rem = x % (n - 1);
            if (temp & 1) {
                fans[1] += temp / 2 + 1, fans[n] += temp / 2;
                for (int i = 2; i < n; i++)
                    fans[i] += temp;
                for (int i = 2; i <= n && rem; i++, rem--)
                    fans[i]++;
            } else {
                fans[1] += temp / 2, fans[n] += temp / 2;
                for (int i = 2; i < n; i++)
                    fans[i] += temp;
                for (int i = n - 1; i >= 0 && rem; i--, rem--)
                    fans[i]++;
            }
        }
        for (int i = 1; i <= n; i++) {
            printf("%lld", fans[i]);
            if (i != n)
                printf(" ");
        }
        puts("");
    }
    return 0;
}
```

The maximal number *m* which is less than *n* is *n* - 1, so the number of flipped bits will be exactly the number of different bits between *n* and *n* - 1 which is equal to the number of 1 bits in *n* XOR (*n* - 1).

**Complexity per test-case:** *O*(1).

```
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

int n;

int main(int argc, char* argv[]) {
    int t;
    scanf("%d", &t);
    while (t--) {
        scanf("%d", &n);
        printf("%d\n", __builtin_popcount(n ^ (n - 1)));
    }
    return 0;
}
```

Let us use these terminologies:

- Each side of a 1 × 1 cell we will call an *edge*.
- Each line parallel to one of the big *n* × *m* rectangle's sides and equal to its length we will call a *line*.

Our solution depends on these two observations about the selected *edges*:

- They should be parallel to each other.
- They should be parallel to the shorter side of the big *n* × *m* rectangle.

Assume *n* ≤ *m* (otherwise swap the two numbers to satisfy that), and that the shorter side *n* is vertical.

We have *m* + 1 vertical *lines* of length *n*. If we take an *edge* from one of them, we must not take the edge directly below or above it. But how do we select the *edges*?

We should select them alternately. Why? Suppose we skip two consecutive *edges* on a *line*: then there are two cells not yet covered by any *edge*, so on the next *line* we would have to take two consecutive *edges* to cover them, which violates the problem's constraints (edges should not touch).

So we keep taking alternate *edges*: on one *line* we take ⌊*n* / 2⌋ of its *n* *edges*, on the next ⌈*n* / 2⌉, and so on.

Notice that we have to start by taking ⌊*n* / 2⌋ from the first *line*, not ⌈*n* / 2⌉, since we want as many ⌊*n* / 2⌋-*lines* as possible compared with ⌈*n* / 2⌉-*lines* to minimize the number of *edges* in the required group.

Here's an example for a 5 × 6 grid:

```
#include <bits/stdc++.h>
using namespace std;

long long n, m, tmp, ans;
int T;

int main(int argc, char** argv) {
    cin >> T;
    while (T--) {
        cin >> n >> m;
        if (n > m)
            swap(n, m);
        tmp = m / 2 + 1;
        ans = tmp * (n / 2) + (m + 1 - tmp) * ((n + 1) / 2);
        cout << ans << endl;
    }
    return 0;
}
```

There are many different solutions for this problem that can pass; let's discuss the one where you calculate all pairs and subtract the incorrect ones. For every cell, we need to know how many cells have a Manhattan distance of 1 from it. It is clear that such a pair has the same values in *n* - 1 dimensions, with only one dimension differing by an absolute difference of 1.

So for every cell, keep *n* - 1 dimensions fixed and decrease the value of the remaining dimension by 1; this gives one non-repeated pair. For each cell there are *n* ways to do this. Since the number of cells is the product of the sizes of all dimensions in the input, we multiply this product by *n* to get the number of pairs.

Notice that not all these pairs are valid, because we are counting pairs matched with a cell that has 0 as one of its values. All we must do now is subtract these invalid pairs. Fix one dimension with value 0: the number of valid cells wrongly matched with this invalid one is the product of all dimension sizes *a*_{i} except the fixed one. Therefore, for each fixed *i*, subtract the product of all dimension sizes except *a*_{i} from our number of pairs. This gives the final answer.

**Complexity per test-case:** *O*(*N*).

```
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
#define se second
#define fi first
#define pb push_back

const int N = 1e5 + 5, M = 1e9 + 7;
ll v[N], L[N], R[N];

int main() {
    int t;
    cin >> t;
    while (t--) {
        int n;
        scanf("%d", &n);
        long long fans = 0, sum = 1;
        for (int i = 1; i <= n; i++) {
            scanf("%lld", &v[i]);
            sum = (sum * v[i]) % M;
        }
        fans = (sum * n) % M;
        L[0] = R[n + 1] = 1;
        for (int i = 1; i <= n; i++) {
            L[i] = (L[i - 1] * v[i]) % M;
        }
        for (int i = n; i > 0; i--) {
            R[i] = (R[i + 1] * v[i]) % M;
        }
        for (int i = 1; i <= n; i++) {
            long long temp = (L[i - 1] * R[i + 1]) % M;
            fans = (fans - temp + M) % M;
        }
        printf("%lld\n", fans);
    }
    return 0;
}
```

Let’s define a frequency array for the original array and a visited array for the values. Now for every number *x* from 1 to 10^{6} with non-zero frequency (it exists in the array), loop over the multiples of *x*: if a multiple *y* of *x* is not visited yet, change *y* into *x*, since *x* is the smallest divisor of *y* present in the array.

**Complexity per test-case:** *O*(*M* × log(*M*)), where *M* = 10^{6}, by the harmonic-sum bound on iterating over multiples.

```
#include <bits/stdc++.h>
using namespace std;

int const N = 1e6 + 100;
int const sz = 1e5 + 100;
int t, n, a[sz], freq[N], vis[N];

long long summation() {
    long long sum = 0;
    for (int i = 1; i < N; ++i) {
        if (freq[i]) {
            for (int j = i; j < N; j += i)
                if (freq[j] && !vis[j])
                    sum += 1ll * freq[j] * i, vis[j] = 1;
        }
    }
    return sum;
}

int main() {
    scanf("%d", &t);
    while (t--) {
        scanf("%d", &n);
        for (int i = 0; i < n; ++i) {
            scanf("%d", &a[i]);
            ++freq[a[i]];
        }
        printf("%lld\n", summation());
        for (int i = 0; i < n; ++i)
            freq[a[i]] = vis[a[i]] = 0;
    }
    return 0;
}
```

Notice that the positions of the characters don't matter: for example, if you have character *ch* repeated *F*_{ch} times, then no matter where they are in the string they add the same value, equal to *F*_{ch} × (*F*_{ch} - 1) / 2 × *weight*(*ch*). So we are only interested in *F*_{ch}.

Suppose that we have determined which characters we will change. Then all of them should be changed to the same character *ch*_{1}, and for any other group of equal characters *ch*, either all of them or none of them should be changed into *ch*_{1}, except for one character *ch*_{2} of which we may change only a part; check the proof below.

Now if we fix *ch*_{1} and *ch*_{2}, we can calculate a knapsack-like DP over all other characters (we know that we change either all or none of each group of equal characters).

**Complexity per test-case:** *O*(*K* × *A*^{3}) (*O*(*A*^{2}) for iterating over (*ch*_{1}, *ch*_{2}), and *O*(*K* × *A*) for knapsack) where *A* is equal to the number of different characters in the string which is 26.

Let's say we increased the frequency of two characters *ch*_{1} and *ch*_{2}, and let's say we increased the frequency of *ch*_{1} by *a* and *ch*_{2} by *b*, let's define *W*_{1}(*i*) = weight of adding *i* instances of *ch*_{1} and *W*_{2}(*i*) similarly.

Now let's say we want to add one instance of *ch*_{1} and remove one instance of *ch*_{2}. If *W*_{1}(*a* + 1) - *W*_{1}(*a*) > *W*_{2}(*b*) - *W*_{2}(*b* - 1) then our operation increases the total weight. Also, since *W*_{1}(*a* + 2) - *W*_{1}(*a* + 1) > *W*_{1}(*a* + 1) - *W*_{1}(*a*) > *W*_{2}(*b*) - *W*_{2}(*b* - 1) > *W*_{2}(*b* - 1) - *W*_{2}(*b* - 2), we can keep adding a single instance of *ch*_{1} and removing a single instance of *ch*_{2}, increasing the weight each time, until *b* = 0.

Now if *W*_{1}(*a* + 1) - *W*_{1}(*a*) < *W*_{2}(*b*) - *W*_{2}(*b* - 1) then similarly *W*_{2}(*b* + 1) - *W*_{2}(*b*) > *W*_{1}(*a*) - *W*_{1}(*a* - 1), so we add instances of *ch*_{2} and remove instances of *ch*_{1} instead.

That proves that it's always optimal to increase the frequency of only a single character. Similarly, assume we have taken *a* of *ch*_{1} and *b* of *ch*_{2} such that there are still characters left of both; then from the above it is optimal to take all of *ch*_{1} or all of *ch*_{2} instead of some of both, proving that we should decrease the frequency of some characters to 0, except for at most one character for which we don't have enough *k* left to take all of it.

Written by Motarack

```
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

const int N = 5000 + 5, M = 26;
int freq[M], n, k, from, to;
ll dp[N][M], res;
char st[100000 + 5];

ll find(int f, int ch) {
    return f * (f - 1LL) / 2 * (ch + 'a');
}

ll calc(int ch, int rem) {
    if (ch == from || ch == to)
        return calc(ch + 1, rem);
    if (ch == 26) {
        int t = min(rem, freq[from]);
        rem -= t;
        return find(freq[from] - t, from) + find(freq[to] + (k - rem), to);
    }
    ll& ret = dp[rem][ch];
    if (ret != -1)
        return ret;
    ret = find(freq[ch], ch) + calc(ch + 1, rem);
    if (rem - freq[ch] >= 0)
        ret = max(ret, calc(ch + 1, rem - freq[ch]));
    return ret;
}

int main(int argc, char* argv[]) {
    int t;
    scanf("%d", &t);
    while (t--) {
        scanf("%d%d%s", &n, &k, st);
        memset(freq, 0, sizeof freq);
        for (int i = 0; i < n; ++i)
            ++freq[st[i] - 'a'];
        res = 0;
        for (int c1 = 'a'; c1 <= 'z'; ++c1)
            for (int c2 = 'a'; c2 <= 'z'; ++c2) {
                if (c1 == c2)
                    continue;
                memset(dp, -1, sizeof dp);
                to = c1 - 'a';
                from = c2 - 'a';
                res = max(res, calc(0, k));
            }
        printf("%lld\n", res);
    }
    return 0;
}
```

Find the maximum value for *a*_{i} + *a*_{2 × n - i - 1} for all *i* < 2 × *n* - *i* - 1.

**Complexity per test-case:** *O*(*N*).

```
#include <bits/stdc++.h>
using namespace std;
const int N = 1000;
int n, h[N + N], res;
int main(int argc, char* argv[]) {
int t;
scanf("%d", &t);
while (t--) {
scanf("%d", &n);
n += n;
for (int i = 0; i < n; ++i)
scanf("%d", h + i);
res = 0;
for (int i = 0; i < n; ++i)
res = max(res, h[i] + h[n - i - 1]);
printf("%d\n", res);
}
return 0;
}
```

The minimum difference is either 0 or 1: splitting *x* equally over the *n* parts gives each part ⌊*x* / *n*⌋, and we have *x* mod *n* left over, which is less than *n*, so we can give one extra unit to each of the last *x* mod *n* parts. Note that if ⌊*x* / *n*⌋ equals 0 then there is no strictly positive solution, so print -1.

**Complexity per test-case:** *O*(*N*).

```
#include <bits/stdc++.h>
using namespace std;

int T, x, n, d, mod;

int main(int argc, char** argv) {
    cin >> T;
    while (T--) {
        scanf("%d %d", &x, &n);
        d = x / n;
        if (d == 0) {
            puts("-1");
            continue;
        }
        mod = x % n;
        printf("%d", d);
        for (int i = 1; i + mod < n; ++i)
            printf(" %d", d);
        for (int i = 0; i < mod; ++i)
            printf(" %d", d + 1);
        puts("");
    }
    return 0;
}
```

Let *a*_{i}, *b*_{i} and *v*_{i} be the *i*th bits of *a*, *b* and *v* respectively. Let's iterate from the most significant bit to the least significant bit, and each time check the following:

- If *v*_{i} = 1 and *a*_{i} = *b*_{i} = 0 then we can take the whole segment [*a*, *b*], as the cost of every number in it will be less than *v*.
- If *v*_{i} = 0 and *a*_{i} = *b*_{i} = 1 then we can't take any number, as the cost of every number is greater than *v*.
- If *v*_{i} = *a*_{i} = *b*_{i} = 0 we don't have enough information, so we go to the next bit.
- If *v*_{i} = *a*_{i} = *b*_{i} = 1 we don't have enough information here either, so we set *a*_{i}, *b*_{i} and *v*_{i} to 0 and go to the next bit (note that flipping the bits won't affect the answer, because the *OR* of any set of numbers before and after the flipping remains the same except for the flipped bit, and that bit is the same for *v* and all numbers in [*a*, *b*], so it doesn't matter).
- If *v*_{i} = *a*_{i} = 0 and *b*_{i} = 1 then we can't include any number with its *i*th bit set. Let *c* = the number with all bits after the *i*th bit set to 1 and the rest 0; then the answer for (*a*, *b*, *v*) equals the answer for (*a*, *c*, *v*), so we replace *b* with *c* and go to the next bit.
- If *v*_{i} = 1, *a*_{i} = 0 and *b*_{i} = 1: if all the bits to the right of the *i*th bit in *v* are 1s then we can take the segment [*a*, *b*] as our answer; otherwise let *c* = the number with all bits after the *i*th bit set to 1 and the rest 0, then we can either take the segment [*a*, *c*] as an answer or take the answer of the problem (*c* + 1, *b*, *v*) with the *i*th bit of all the numbers set to 0 (note that we don't benefit from taking the answer of the problem (*c*, *b*, *v*) because *c* OR (*c* + 1) is already larger than *v*). We take the max of those two as our answer.

Cases where *a*_{i} = 1 and *b*_{i} = 0 are impossible, because all the bits to the left of the *i*th bit are always 0 and *b* ≥ *a*. See the code below for a better understanding of how the algorithm works.

**Complexity per test-case:** *O*(log(*b*)), since each call recurses into at most one sub-problem with a smaller bit index.

```
#include <bits/stdc++.h>
using namespace std;
#define ll long long

ll go(ll a, ll b, ll v, int i = 59) {
    if (i == -1)
        return 1;
    bool ai = a >> i & 1, bi = b >> i & 1, vi = v >> i & 1;
    if (vi && !ai && !bi)
        return b - a + 1;
    if (!vi && ai && bi)
        return 0;
    if (!vi && !ai && !bi)
        return go(a, b, v, i - 1);
    if (vi && ai && bi)
        return go(a & ~(1ll << i), b & ~(1ll << i), v & ~(1ll << i), i - 1);
    if (!vi && !ai && bi)
        return go(a, (1ll << i) - 1, v, i - 1);
    if (v == (1ll << (i + 1)) - 1)
        return b - a + 1;
    ll c = (1ll << i) - 1;
    return max(c - a + 1, go((c + 1) & ~(1ll << i), b & ~(1ll << i), v & ~(1ll << i), i - 1));
}

int main() {
    int t;
    scanf("%d", &t);
    while (t--) {
        ll a, b, v;
        scanf("%lld%lld%lld", &a, &b, &v);
        printf("%lld\n", go(a, b, v));
    }
}
```

Let's say that θ is the angle by which Lux will rotate clockwise, and an optimal θ is an angle that kills the maximum number of soldiers. Notice that there could be an infinite number of optimal θs, but for at least one of them a soldier intersects either the left border of the laser or the left half of the laser's base (the segment through the point (0, 0) of length *z*). This is true because, starting from any optimal θ, we can keep increasing θ until such a soldier appears.

Now for each soldier let's figure out if he can be that chosen one; this can be done by calculating the corresponding θ and counting the number of dead soldiers for that θ.

Let *d* be the distance from the soldier to the point (0, 0). If *d* ≤ *z* the soldier will intersect the base, otherwise he will intersect the left border (imagine a circle centered at (0, 0) with radius *z*).

Let *u* be the vector resembling the left side of the base and *v* a vector from (0, 0) to our soldier. If *d* ≤ *z* then *u* = *v* (in terms of direction); otherwise we can get *u* by rotating *v* counter-clockwise by θ_{2} = arccos(*z* / *d*) (the `acos(z / l)` in the code below).

```
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
#define f(i, x, n) for (int i = x; i < (int)(n); ++i)

int const N = 1000;
double const eps = 1e-7;

struct V {
    double x, y;
    V() {}
    void sc() { scanf("%lf%lf", &x, &y); }
    V(double a, double b) : x(a), y(b) {}
    V operator+(V o) { return V(x + o.x, y + o.y); }
    V operator-(V o) { return V(x - o.x, y - o.y); }
    double L() { return sqrt(x * x + y * y); }
    V N() {
        double l = L();
        return V(x / l, y / l);
    }
    V rot(double th) { return V(x * cos(th) - y * sin(th), x * sin(th) + y * cos(th)); }
    V operator*(double z) { return V(x * z, y * z); }
    double operator*(V o) { return x * o.x + y * o.y; }
    double operator|(V o) { return x * o.y - o.x * y; }
    void pr() { printf("%lf %lf\n", x, y); }
} p[N];

int gcd(int a, int b) { return b ? gcd(b, a % b) : a; }

int main() {
    int t;
    cin >> t;
    while (t--) {
        bool dn = false;
        int n, m;
        double z;
        scanf("%d%d%lf", &n, &m, &z);
        z /= 2.0;
        f(i, 0, n) p[i].sc();
        f(i, 0, n) {
            V dv = p[i];
            double l = dv.L();
            V c;
            if (l <= z)
                c = dv.N();
            else {
                double th = acos(z / l);
                c = dv.rot(th).N();
            }
            int ins = 1;
            f(j, 0, n) if (i != j) {
                V d = p[j];
                double t = abs(d * c);
                if (t - eps <= z && (d | c) > -eps)
                    ++ins;
            }
            if (ins >= m) {
                printf("Yes\n");
                dn = true;
                break;
            }
        }
        if (!dn)
            printf("No\n");
    }
}
```

The problem can be solved using Dynamic Programming. The *DP* state is *dp*[*i*][*j*][*p*_{1}][*p*_{2}][*p*_{3}][*p*_{4}][*p*_{5}], where *i* and *j* mean we are at the *i*^{th} row and *j*^{th} column, and the colors of the last 5 colored cells are *p*_{1} (the earliest) through *p*_{5} (the latest). Only the last 5 colored cells affect the result if we move column by column, making sure no two adjacent cells have the same color.

Now the *p*_{i} values can reach 10^{5} each, but many different coloring patterns give the same result: for example, the colors 5, 9, 5, 2, 1 for *p*_{1}, ..., *p*_{5} give the same answer as the colors 0, 1, 0, 2, 3. Mapping the colors to smaller numbers in a smart way guarantees *p*_{1} < 1, *p*_{2} < 2, *p*_{3} < 3, *p*_{4} < 4, *p*_{5} < 5 while giving the same result. This way we only need to try colors from 0 to 4; the rest, from 5 to *k*, all give the same answer since they get the same mapping, so in the DP we only need to iterate over *min*(*k*, *n* + 1) colors.

**Complexity per test-case:** *O*(*N*^{2} × *M* × *N*!), the extra *N* is from the loop inside the DP.

```
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

const int N = 10000 + 1, M = 5 + 1, MOD = 1000000007;
int dp[M][N][2][3][4][5];
int n, m, k, lim;
int p[5], vl[6];

void fix() {
    memset(vl, 0, sizeof vl);
    for (int i = 0, idx = 1; i < 5; ++i) {
        if (vl[p[i]] != 0)
            p[i] = vl[p[i]] - 1;
        else
            vl[p[i]] = idx, p[i] = idx++ - 1;
    }
}

int calc(int i, int j, int p2, int p3, int p4, int p5) {
    if (i == n)
        return calc(0, j + 1, p2, p3, p4, p5);
    if (j == m)
        return 1;
    int& ret = dp[i][j][p2][p3][p4][p5];
    if (ret != -1)
        return ret;
    ret = 0;
    int lft = (n == 5 ? 0 : (n == 4 ? p2 : (n == 3 ? p3 : (n == 2 ? p4 : p5))));
    for (int val = 0; val < lim; ++val) {
        if (val == lft && j != 0)
            continue;
        if (val == p5 && i != 0)
            continue;
        p[0] = p2, p[1] = p3, p[2] = p4, p[3] = p5, p[4] = val;
        fix();
        ret = (ret + (val == 5 ? k - 5LL : 1LL) * calc(i + 1, j, p[1], p[2], p[3], p[4])) % MOD;
    }
    return ret;
}

int main(int argc, char* argv[]) {
    int t;
    scanf("%d", &t);
    while (t--) {
        scanf("%d%d%d", &n, &m, &k);
        lim = min(k, 6);
        for (int i = 0; i < n; ++i)
            for (int j = 0; j < m; ++j)
                memset(dp[i][j], -1, sizeof dp[i][j]);
        printf("%d\n", calc(0, 0, 0, 0, 0, 0));
    }
    return 0;
}
```

It's obvious that we need to traverse the maximum number of edges. Let's denote *u* as the starting node and *v* as the finishing node. The only edges that we cannot traverse are the edges on the path directed from *v* to *u*, so we need to calculate the sum of coins on this path and subtract it from the total number of coins on all edges. Calculating the sum of coins on a path can be done easily by finding the lowest common ancestor (LCA) of nodes *u* and *v*; the sum on that path is: *cost*_{1}(*u*) - *cost*_{1}(*lca*) + *cost*_{2}(*v*) - *cost*_{2}(*lca*), where *cost*_{1}(*x*) is the cost of the directed path from the root to *x* and *cost*_{2}(*x*) is the cost of the directed path from *x* to the root, both of which can be pre-calculated with a DFS.

**Complexity per test-case:** *O*((*N* + *Q*) × log(*N*)).

```
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
typedef pair<int, int> pp;

int const N = 1e5 + 10, oo = 1e9;
int mod = oo + 7;
ll const OO = 1e18;
int t, n, q;
vector<vector<ll> > from_root; // 64-bit: path sums can overflow int
vector<vector<int> > dp;
vector<int> dep;
vector<vector<pair<int, pp> > > adj;

void dfs(int u, int p = 0, ll cs1 = 0, ll cs2 = 0) {
    from_root[0][u] = cs1;
    from_root[1][u] = cs2;
    dp[u][0] = p;
    for (auto v : adj[u]) {
        if (v.first == p)
            continue;
        int a = v.second.first, b = v.second.second;
        dep[v.first] = dep[u] + 1;
        dfs(v.first, u, cs1 + a, cs2 + b);
    }
}

int lca(int a, int b) {
    if (dep[a] > dep[b])
        swap(a, b);
    for (int i = 18; i > -1; i--)
        if (dep[b] - (1 << i) >= dep[a])
            b = dp[b][i];
    if (b == a)
        return a;
    for (int i = 18; i > -1; i--) {
        if (dp[a][i] != dp[b][i]) {
            a = dp[a][i];
            b = dp[b][i];
        }
    }
    return dp[a][0];
}

int main() {
    scanf("%d", &t);
    while (t--) {
        scanf("%d", &n);
        dp = vector<vector<int> >(n + 1, vector<int>(19));
        from_root = vector<vector<ll> >(2, vector<ll>(n + 1));
        dep = vector<int>(n + 1);
        adj = vector<vector<pair<int, pp> > >(n + 1);
        ll sumAll = 0;
        for (int i = 0; i < n - 1; i++) {
            int u, v, a, b;
            scanf("%d%d%d%d", &u, &v, &a, &b);
            adj[u].push_back({ v, { a, b } });
            adj[v].push_back({ u, { b, a } });
            sumAll += (ll)a + b; // avoid int overflow in the addition
        }
        dfs(1);
        scanf("%d", &q);
        for (int j = 1; j < 19; j++)
            for (int i = 1; i <= n; i++)
                if (dp[i][j - 1])
                    dp[i][j] = dp[dp[i][j - 1]][j - 1];
        while (q--) {
            int u, v;
            scanf("%d%d", &u, &v);
            int lc = lca(u, v);
            printf("%lld\n", sumAll - from_root[1][v] + from_root[1][lc] - from_root[0][u] + from_root[0][lc]);
        }
    }
    return 0;
}
```

Hello Codeforces,

I would like to invite you all to participate in the **2018 ACM Amman Collegiate Programming Contest (AmmanCPC 2018)**. The contest was originally held on the 5th of May, and it will launch in Codeforces Gym on Saturday, 2 June 2018 10:00 UTC.

The problems were prepared by Kudo. (Ali Fadel), Vendetta. (Ibraheem Tuffaha) and The-Legend (Abdulmajeed Alzoubi).

Thanks to Flerynn (Yosef Darwish), M.Dawwas (Mohammad Abu Dawwas), abo_od303 (Abdalrhman Ali) and Qumair (Fares Obaid) for sharing the ideas of four problems, to Motarack (Motasem AL-Kayed), abo_od303 (Abdalrhman Ali) and SAFWAN.K (Safwan Olaimat) for testing some of the problems and to Ownography (Eyad Hajja) for reviewing problem statements.

Also a big thanks to Um_nik (Alex Danilyuk) for sharing the solution idea for a problem, and to Hasan0540 (Hasan Alhamsh) and justHusam (Husam Musleh), as the contest wouldn't have been possible without their help. :)

The duration of the contest is 5 hours, and the registration will be open 6 hours before the start of the contest.

Good luck to everyone, and I wish you all Accepted solutions.

**UPD:** Tutorial

Due to symmetry, the answer for `n m` is equal to the answer for `-n m`, so work with the absolute value of *n*.

We are looking for the probability of starting at position 0 and ending at *n* positions to the right, using *m* moves.

A move goes Right with probability 1/2 and Left with probability 1/2. We can imagine a binary string of instructions of length *m* (executed one by one), where 0 means go left and 1 means go right. Because of this discrete uniform distribution, every unique binary string of length *m* has a probability of 1/2^{m}, as we have 2^{m} different strings.

So the answer is the number of such strings with *x* - *y* = *n*, where *x* is the number of 1's and *y* is the number of 0's. With some work on paper you can find that *x* = (*m* + *n*) / 2. Now the probability will be *mCx* / 2^{m}, where *mCx* means *m* Choose *x*. You can handle the output format using the multiplicative inverse.

If *m* - *n* is odd or negative then the answer is 0.
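The computation above can be sketched as follows, assuming the output is required as a fraction modulo 10^{9} + 7 (the modulus and all names here are illustrative, not taken from the actual statement):

```cpp
#include <vector>

// Sketch: probability of ending n steps to the right after m moves,
// returned as C(m, x) / 2^m modulo MOD, where x = (m + n) / 2.
const long long MOD = 1000000007; // assumed modulus

long long power(long long b, long long e, long long mod) {
    long long r = 1; b %= mod;
    for (; e > 0; e >>= 1) {
        if (e & 1) r = r * b % mod;
        b = b * b % mod;
    }
    return r;
}

long long walkProbability(long long n, long long m) {
    if (n < 0) n = -n;                        // symmetry: answer(n) == answer(-n)
    if (n > m || (m - n) % 2 != 0) return 0;  // impossible: parity or range
    long long x = (m + n) / 2;                // number of Right moves
    std::vector<long long> f(m + 1, 1);       // factorials 0..m
    for (long long i = 2; i <= m; ++i) f[i] = f[i - 1] * i % MOD;
    long long c = f[m] * power(f[x], MOD - 2, MOD) % MOD
                       * power(f[m - x], MOD - 2, MOD) % MOD;
    // divide by 2^m via the multiplicative inverse
    return c * power(power(2, m, MOD), MOD - 2, MOD) % MOD;
}
```

For instance, one move ending one step right has probability 1/2, which is the inverse of 2 modulo 10^{9} + 7.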

Vendetta. solution: https://ideone.com/N8WC9O

justHusam solution: https://ideone.com/oAPb42

**Complexity:** *O*(*N*) for pre-calculating factorials and *O*(log(*MOD*)) per test case for the multiplicative inverse.

We need *n* numbers that sum up to *n* × *a* so that their average is equal to *a*.

To make the numbers as distinct as possible, we want to include as many small distinct numbers as possible, so we take the numbers 1, 2, 3, ..., *x* such that their summation does not exceed *n* × *a*; we can find *x* using Binary Search.

Why is it better to choose smaller numbers? Because if we replace a small number with a bigger one, we get closer to *n* × *a* and thus have fewer chances to use more distinct numbers.

A problem arises when no summation from 1 to *x* equals exactly *n* × *a*; then we choose a summation less than *n* × *a*, and what's left equals *n* × *a* - *x* × (*x* + 1) / 2, which will be distributed over the *n* - *x* unused numbers (because we already used *x* numbers).

Since values must be strictly positive, we must be able to give each of the *n* - *x* remaining numbers a value of at least 1, so *n* × *a* - *x* × (*x* + 1) / 2 must be at least *n* - *x*; this is the Binary Search condition that makes *x* a valid answer.
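The binary search just described can be sketched like this (the helper name is illustrative; feasibility is monotone because n·a − x(x+1)/2 − (n − x) is decreasing in x):

```cpp
// Sketch: largest x such that taking the distinct values 1..x leaves a
// remainder n*a - x*(x+1)/2 that can still be split into positive values
// over the other n - x numbers, i.e. n*a - x*(x+1)/2 >= n - x.
long long maxDistinct(long long n, long long a) {
    long long lo = 0, hi = n, ans = 0;
    while (lo <= hi) {
        long long x = (lo + hi) / 2;
        long long used = x * (x + 1) / 2;          // 1 + 2 + ... + x
        if (used <= n * a && n * a - used >= n - x) {
            ans = x;       // x is feasible, try a larger one
            lo = x + 1;
        } else {
            hi = x - 1;
        }
    }
    return ans;
}
```

For example, with *n* = 3 and *a* = 2 the total is 6 and the answer 1, 2, 3 uses three distinct values.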

Vendetta. solution: https://ideone.com/VNyo8h

justHusam solution: https://ideone.com/KChjCF

**Complexity:** *O*(log(*N*)) per test case for the Binary Search.

The answer is the number of possible values for *a* multiplied by the number of possible values for *b*.

We can see that we can use any value for *b* from 1 to *m* - 1, thus *m* - 1 possible values for *b*. As for *a*, we can only use a value *x* that has a multiplicative inverse with respect to the modulus *m*, which means *x* and *m* are coprime, and the count of positive numbers less than *m* that are coprime with *m* is given by the totient function. So the final answer is (*m* - 1) × *Totient*(*m*).
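The totient pre-calculation can be sketched with a standard sieve (the bound and names here are assumptions for illustration):

```cpp
#include <vector>

// Sieve-based Euler totient for all values up to `limit`.
// The final answer for a modulus m would then be (m - 1) * phi[m].
std::vector<long long> totientSieve(int limit) {
    std::vector<long long> phi(limit + 1);
    for (int i = 0; i <= limit; ++i) phi[i] = i;
    for (int i = 2; i <= limit; ++i)
        if (phi[i] == i)                       // i is prime (untouched so far)
            for (int j = i; j <= limit; j += i)
                phi[j] -= phi[j] / i;          // multiply by (1 - 1/i)
    return phi;
}
```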

Vendetta. solution: https://ideone.com/Vz5u8p

**Complexity:** *O*(*M* × log(log(*M*))) for pre-calculating the totient with a sieve and *O*(1) per test case.

justHusam solution 1: https://ideone.com/yXLv7v

justHusam solution 2: https://ideone.com/s4EdLm

Since the number of nodes is so low, you can work with all the different subsets of nodes.

One solution is a minimization Dynamic Programming with a state holding only a mask of which nodes are already included in the spanning tree; try to include any un-included node that connects directly to one of the included nodes, until all the important nodes are in the tree.

Another, slower solution is to enumerate all the subsets of nodes that include all the important nodes, build an MST for each of them, and take the minimum.
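The slower subset-enumeration solution can be sketched like this (all names are illustrative; `important` is a bitmask of the nodes that must be connected, and edges are `(w, u, v)` with 0-based nodes):

```cpp
#include <vector>
#include <algorithm>
#include <numeric>
#include <tuple>

struct DSU {
    std::vector<int> p;
    DSU(int n) : p(n) { std::iota(p.begin(), p.end(), 0); }
    int find(int x) { return p[x] == x ? x : p[x] = find(p[x]); }
    bool unite(int a, int b) {
        a = find(a); b = find(b);
        if (a == b) return false;
        p[a] = b; return true;
    }
};

// Try every subset containing all important nodes, run Kruskal restricted
// to that subset, and keep the cheapest tree that connects the whole subset.
long long cheapestTree(int n, std::vector<std::tuple<int, int, int>> edges, int important) {
    std::sort(edges.begin(), edges.end());
    const long long INF = 1e18;
    long long best = INF;
    for (int mask = 0; mask < (1 << n); ++mask) {
        if ((mask & important) != important) continue; // must keep important nodes
        DSU d(n);
        long long cost = 0;
        int parts = __builtin_popcount(mask);          // components in the subset
        for (auto [w, u, v] : edges) {
            if (!(mask >> u & 1) || !(mask >> v & 1)) continue;
            if (d.unite(u, v)) { cost += w; --parts; }
        }
        if (parts == 1) best = std::min(best, cost);   // subset fully connected
    }
    return best; // INF if the important nodes can't be connected
}
```

On a triangle with edges (0,1) of weight 1, (0,2) of weight 5 and (1,2) of weight 2, connecting the important nodes 0 and 2 through node 1 costs 3, cheaper than the direct edge.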

Vendetta. solution: https://ideone.com/g6gcKd

**Complexity:** *O*(2^{N} × *N*) per test case.

**Complexity:** *O*(2^{N} × *M*) for the other slower solution per test case.

justHusam solution: https://ideone.com/uXfZUV (A bit different solution)

Minimize when the condition mentioned in the statement applies :)

Vendetta. solution: https://ideone.com/bF0axw

justHusam solution: https://ideone.com/LxK1PD

**Complexity:** *O*(*N*) per test case.

Since the values are small, as well as the grid size, we can make, for every value, a 2D array of its occurrences over the cells of the grid, and build a 2D cumulative summation for each of these arrays; this way we can get the frequency of a number in a sub-rectangle in *O*(1).

For every query, loop over each number from 1 to 500 and find its frequency in that sub-rectangle; as we accumulate the frequencies, the median is the element that makes the running sum exceed half the size of the rectangle.

The solution above gets AC, but to make it faster we can additionally do a cumulative summation over the values themselves, and then binary search for the point where the sum exceeds half the size of the sub-rectangle instead of looping.
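A minimal sketch of the construction and the linear-scan query (small assumed bounds and illustrative names; coordinates are 1-based inclusive):

```cpp
#include <vector>

// cnt[v][i][j] holds, after the 2D prefix sums, how many cells with value v
// lie in the top-left i-by-j sub-rectangle.
struct GridMedian {
    int n, m, maxv;
    std::vector<std::vector<std::vector<int>>> cnt;
    GridMedian(const std::vector<std::vector<int>>& g, int maxv)
        : n(g.size()), m(g[0].size()), maxv(maxv),
          cnt(maxv + 1, std::vector<std::vector<int>>(n + 1, std::vector<int>(m + 1, 0))) {
        for (int i = 0; i < n; ++i)
            for (int j = 0; j < m; ++j)
                cnt[g[i][j]][i + 1][j + 1] = 1;            // mark occurrences
        for (int v = 1; v <= maxv; ++v)                     // 2D prefix sums per value
            for (int i = 1; i <= n; ++i)
                for (int j = 1; j <= m; ++j)
                    cnt[v][i][j] += cnt[v][i-1][j] + cnt[v][i][j-1] - cnt[v][i-1][j-1];
    }
    // frequency of value v inside rows [r1, r2], columns [c1, c2]
    int freq(int v, int r1, int c1, int r2, int c2) const {
        return cnt[v][r2][c2] - cnt[v][r1-1][c2] - cnt[v][r2][c1-1] + cnt[v][r1-1][c1-1];
    }
    int median(int r1, int c1, int r2, int c2) const {
        int size = (r2 - r1 + 1) * (c2 - c1 + 1), run = 0;
        for (int v = 1; v <= maxv; ++v) {
            run += freq(v, r1, c1, r2, c2);
            if (2 * run > size) return v;   // first value passing half the size
        }
        return -1;
    }
};
```

The faster variant would additionally keep `cnt` cumulative over `v`, so `median` becomes a binary search over values instead of the linear scan.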

Vendetta. solution: https://ideone.com/3EZ1SY

**Complexity:** per test case: *O*(*N*^{2} × *Max*(*A*_{i})) for construction and *O*(*Max*(*A*_{i})) per query.

justHusam solution: https://ideone.com/WEkeVH

**Complexity:** per test case: *O*(*N*^{2} × *Max*(*A*_{i})) for construction and *O*(log(*Max*(*A*_{i}))) per query.

Notice that *abcd* is a cyclic quadrilateral, which means its vertices all lie on a single circle.

A property for cyclic quadrilateral is that its opposite angles are supplementary. https://en.wikipedia.org/wiki/Cyclic_quadrilateral#Characterizations

With some work on paper we can find that the two triangles whose areas are given are similar, thus the similarity ratio can be calculated from the areas: *ratio* = *sqrt*(*S*_{1} / *S*_{2}) (the ratio of the areas of similar triangles equals the square of the similarity ratio), from which we can find the corresponding sides and then *x* and *y*.

Vendetta. solution: https://ideone.com/yKYgb9

justHusam solution 1: https://ideone.com/u8xofZ

justHusam solution 2: https://ideone.com/XNvyX9

**Complexity:** *O*(log(*x*)) per test case, since *sqrt*(*x*) can be computed in *O*(log(*x*)).

Keep track of the number of different opposite characters, and at the end of each query add 1 to the answer if the number of different opposite characters is zero.

Vendetta. solution: https://ideone.com/C6X94O

justHusam solution: https://ideone.com/YYrF7f

**Complexity:** *O*(1) per query.

Read problem statement carefully to handle the cases where both teams have the same number of goals overall.

Vendetta. solution: https://ideone.com/NWtBvZ

justHusam solution: https://ideone.com/c7m35z

**Complexity:** *O*(1) per test case.

You can solve this problem by brute forcing on the numbers from *b* down to *a* until you find a solution, this solution is valid and won't cause a Time Limit Exceeded.

Why no TLE? Because the biggest gap between any two consecutive 9-digit primes (half of the given number) is less than 320, so you will loop at most around 320 times before the second half of the number is prime, and the GCD between a prime and any other number is 1 unless the other number is a multiple of that prime. Even if it is, the next prime below it will work, because if the first half were a multiple of both primes it couldn't have only 9 digits (it would need 18 digits if both primes have 9 digits). https://en.wikipedia.org/wiki/Prime_gap#Numerical_results

Vendetta. solution: https://ideone.com/bLV0UI

**Complexity:** *O*(*Max*(*g*_{n})) per test case where *g*_{n} is the *n*^{th} prime gap.

Show me your implementation powers :P

Vendetta. solution: https://ideone.com/0bLR7c

**Complexity:** per test case.

justHusam solution: https://ideone.com/aBKduB

**Complexity:** per test case.

Consider each bit on its own since the AND operation is bit-wise.

For each bit, notice that any sub-array that has a zero bit will add nothing to the answer.

For each bit, a maximal run of ones of length *X* contains *X*(*X* + 1)/2 sub-arrays of only ones, and each of them will add 2^{W} to the answer, where *W* is the weight (position) of the bit.
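The per-bit counting can be sketched as follows (the function name is illustrative; values are assumed to fit in 31 bits):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Sum of AND over all sub-arrays, counted bit by bit: a maximal run of
// `run` consecutive elements with bit w set contributes run*(run+1)/2
// sub-arrays, each adding 2^w to the answer.
long long subarrayAndSum(const vector<long long>& a) {
    long long ans = 0;
    for (int w = 0; w < 31; ++w) {
        long long run = 0;
        for (size_t i = 0; i <= a.size(); ++i) {
            if (i < a.size() && (a[i] >> w & 1)) ++run;
            else { ans += run * (run + 1) / 2 << w; run = 0; }
        }
    }
    return ans;
}
```

For `{3, 1, 2}` the six sub-array ANDs are 3, 1, 2, 1, 0, 0, summing to 7.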

justHusam solution: https://ideone.com/PL7s1U

Vendetta. solution: https://ideone.com/23iukT

**Complexity:** *O*(*Nlog*(*Max*(*A*_{i}))).

You can construct the array from only one known element.

justHusam solution: https://ideone.com/IDqwCe

**Complexity:** *O*(*N*).

All numbers are less than 10^{9} + 7, so the sum of any 2 numbers is either less than 10^{9} + 7 (the modulo changes nothing) or at least 10^{9} + 7 but less than 2 × (10^{9} + 7) (the modulo subtracts 10^{9} + 7 exactly once).

Sort the elements and for each element use binary search to handle each of the cases above to find the best answer.
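A sketch of the sort + binary-search approach, assuming the task is to maximize (*A*_{i} + *A*_{j}) *mod* (10^{9} + 7) over all pairs *i* ≠ *j* (the function name is illustrative):

```cpp
#include <bits/stdc++.h>
using namespace std;

const long long MOD = 1e9 + 7;

// Maximize (a[i] + a[j]) % MOD, i != j, all a[k] < MOD. Either the sum
// stays below MOD (for each element, binary-search the largest partner
// keeping the sum < MOD), or it wraps around once (then the two overall
// largest elements give the best wrapped sum).
long long bestPairSum(vector<long long> a) {
    sort(a.begin(), a.end());
    int n = a.size();
    long long ans = (a[n - 1] + a[n - 2]) % MOD;  // wrap-around case
    for (int i = 0; i < n; ++i) {
        // largest a[j] with a[i] + a[j] < MOD
        int j = lower_bound(a.begin(), a.end(), MOD - a[i]) - a.begin() - 1;
        if (j == i) --j;  // can't pair an element with itself
        if (j >= 0) ans = max(ans, a[i] + a[j]);
    }
    return ans;
}
```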

justHusam solution: https://ideone.com/1mSW7q

Vendetta. solution: https://ideone.com/V2wqJT

**Complexity:** *O*(*NLog*(*N*)).

To make things easier, use the following concept: let's make a function *F*(*N*) that finds the answer for the range [0, *N*]; then the answer for the range [*L*, *R*] is *F*(*R*) - *F*(*L* - 1).

Make a frequency array of the original string, then find how many times the string was fully repeated in the range [0, *N*]; the number of full copies is ⌊(*N* + 1) / |*S*|⌋, where |*S*| is the length of the original string (the range [0, *N*] contains *N* + 1 characters).

If (*N* + 1) *mod* |*S*| is not zero then we have extra characters that don't make a full copy; we can make 26 frequency arrays, one for each letter of the alphabet, and do a prefix sum to get the number of times a specific letter appears in the first (*N* + 1) *mod* |*S*| letters of the original string.
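A sketch of *F* with per-letter prefix sums (0-indexed; `countPrefix(n, c)` counts letter `c` in the first `n` characters of the infinite repetition; names are illustrative):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Count occurrences of a letter inside s repeated infinitely.
// pre[i][c] = number of occurrences of letter c in s[0..i-1].
struct RepeatCounter {
    string s;
    vector<array<long long, 26>> pre;
    RepeatCounter(const string& str) : s(str), pre(str.size() + 1) {
        pre[0].fill(0);
        for (size_t i = 0; i < s.size(); ++i) {
            pre[i + 1] = pre[i];
            pre[i + 1][s[i] - 'a']++;
        }
    }
    // Letter c in the first n characters: full copies plus a partial copy.
    long long countPrefix(long long n, char c) {
        long long full = n / (long long)s.size();
        long long rest = n % (long long)s.size();
        return full * pre[s.size()][c - 'a'] + pre[rest][c - 'a'];
    }
    long long countRange(long long l, long long r, char c) {  // [l, r]
        return countPrefix(r + 1, c) - countPrefix(l, c);
    }
};
```

For example, in `"abcaabca..."` the range [0, 5] is `abcaab` and contains 3 copies of `a`.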

justHusam solution: https://ideone.com/BbGc7w

**Complexity:** *O*(26*N* + *Q*).

Meet in the middle: divide the dice into 2 groups of equal size, brute force each part on its own, and store the resulting products (mod 10^{9} + 7) in 2 vectors *V*_{1}, *V*_{2}.

For each element in *V*_{1}, we must find how many elements in *V*_{2} satisfy the equality below.

Let's call the element from *V*_{1} *A*, the element from *V*_{2} *B*, and the required value *X*; we want to find the value of *B*:

*A* × *B* ≡ *X* (*mod* 10^{9} + 7).

Multiply both sides by *A*^{ - 1}:

*B* ≡ *X* × *A*^{ - 1} (*mod* 10^{9} + 7).

where *A*^{ - 1} is the inverse of *A* with respect to the mod 10^{9} + 7.

Since all values of *V*_{2} are below 10^{9} + 7, then there's only one value for *B* in *V*_{2} that satisfies the equality (because all values *B* + *K*(10^{9} + 7) satisfies the equality for any natural number *K*).

So we will calculate the value of *B* and check how many times it appears in *V*_{2}.
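Since 10^{9} + 7 is prime, the inverse can be computed with fast exponentiation via Fermat's little theorem; a minimal sketch:

```cpp
#include <bits/stdc++.h>
using namespace std;

const long long MOD = 1e9 + 7;

// Binary (fast) exponentiation: b^e mod m in O(log e).
long long modpow(long long b, long long e, long long m) {
    long long r = 1 % m;
    b %= m;
    while (e > 0) {
        if (e & 1) r = r * b % m;
        b = b * b % m;
        e >>= 1;
    }
    return r;
}

// Fermat's little theorem: for prime MOD, a^(MOD-2) is a's inverse.
long long inverse(long long a) { return modpow(a, MOD - 2, MOD); }
```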

justHusam solution: https://ideone.com/pFtGym

Vendetta. solution: https://ideone.com/udraup

**Complexity:** *O*(6^{*N*/2} × *log*(10^{9} + 7)) per test case.

**Note:** *O*(6^{N}) gives TLE because 6^{14} is approximately 78 × 10^{9}.

Find the number of palindrome sub-strings for each string; using an algorithm slower than *O*(*L*^{2}) gives TLE, where *L* is the length of the string.
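One *O*(*L*^{2}) way to count palindrome sub-strings is expanding around each of the 2*L* - 1 centers; a sketch (the function name is illustrative):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Count palindromic sub-strings in O(L^2) by expanding around each of
// the 2L-1 centers (a single character, or a gap between two characters).
int countPalindromes(const string& s) {
    int n = s.size(), cnt = 0;
    for (int c = 0; c < 2 * n - 1; ++c) {
        int l = c / 2, r = c / 2 + c % 2;
        while (l >= 0 && r < n && s[l] == s[r]) { ++cnt; --l; ++r; }
    }
    return cnt;
}
```

For `"aaa"` the palindromic sub-strings are `a`, `a`, `a`, `aa`, `aa`, `aaa`, so the count is 6.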

The problem now reduces to RMQ (Range Maximum Query); we can use data structures like Sparse Table or Segment Tree, but Sparse Table is preferred as it answers queries in *O*(1) and runs faster than Segment Tree here.
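A minimal Sparse Table for range-maximum queries (*O*(*NLog*(*N*)) build, *O*(1) query), as a sketch:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Sparse table for range-maximum queries: t[j][i] holds the max of the
// block of length 2^j starting at i; a query [l, r] is the max of two
// overlapping power-of-two blocks.
struct SparseTable {
    vector<vector<int>> t;
    SparseTable(const vector<int>& a) {
        int n = a.size(), lg = 32 - __builtin_clz(n);
        t.assign(lg, vector<int>(n));
        t[0] = a;
        for (int j = 1; j < lg; ++j)
            for (int i = 0; i + (1 << j) <= n; ++i)
                t[j][i] = max(t[j - 1][i], t[j - 1][i + (1 << (j - 1))]);
    }
    int query(int l, int r) {  // max on [l, r], inclusive
        int j = 31 - __builtin_clz(r - l + 1);
        return max(t[j][l], t[j][r - (1 << j) + 1]);
    }
};
```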

We still have a problem: the queries are given as strings, not indices. To solve this we can map the strings to their indices, but using a map of strings gives TLE since the factor for comparing strings isn't small; instead we can use Hashing or a Trie to get the index of a string fast.

Hashing will take *O*(*L* + *log*(*N*)) per query to hash and look for the hash value in a map of hash values.

Trie will take only *O*(*L*) per query to track the string in the Trie, but Trie needs pre-construction of *O*(*LN*).

Sparse Table with Hashing solution: https://ideone.com/iVZNHd (Running Time: 1621 ms)

Segment Tree with Hashing solution: https://ideone.com/UPsA6o (Running Time: 2042 ms)

**Complexity:** *O*(*NL*^{2} + *NLog*(*N*) + *Q*(*L* + *Log*(*N*))).

Sparse Table with Trie solution: https://ideone.com/pn4haB (Running Time: 1716 ms)

**Complexity:** *O*(*NL*^{2} + *NLog*(*N*) + *NL* + *QL*).

**Note:** You don't need to worry about collisions in hashing since you don't need to use a modulus at all: the max hash value will be 4^{30}, which is approximately 10^{18} and fits into `long long`.

Construct 2 arrays, *MX* and *MN* where *MX*_{i} is the maximum value in the range [1, *i*] and *MN*_{i} is the minimum value in range [*i*, *N*].

For each element *X*, if the max value before it is less than *X* and the min value after it is more than *X* then add one to the answer.
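The two sweeps can be sketched as follows (the function name is illustrative):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Count positions that are greater than every element before them and
// smaller than every element after them, via prefix-max and suffix-min.
int countSpecial(const vector<int>& a) {
    int n = a.size();
    vector<int> mx(n), mn(n);
    mx[0] = a[0];
    for (int i = 1; i < n; ++i) mx[i] = max(mx[i - 1], a[i]);
    mn[n - 1] = a[n - 1];
    for (int i = n - 2; i >= 0; --i) mn[i] = min(mn[i + 1], a[i]);
    int ans = 0;
    for (int i = 0; i < n; ++i) {
        bool okL = (i == 0) || mx[i - 1] < a[i];
        bool okR = (i == n - 1) || mn[i + 1] > a[i];
        if (okL && okR) ++ans;
    }
    return ans;
}
```

For `{1, 3, 2, 4}` only positions 0 and 3 qualify, so the count is 2.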

justHusam solution: https://ideone.com/9SzqJb

**Complexity:** *O*(*N*).

Count the number of ones on the border, let's call it *Y*.

There is a total of *X* = 2*N* + 2*M* - 4 border cells; the answer is *X* - *Y*.

There is no solution when the total number of ones in the whole image is less than *X*, since then we can't cover the whole border with ones.

justHusam solution: https://ideone.com/wS5TPl

**Complexity:** *O*(*NM*).

Construct a graph: add an edge from cell *i* to cell *i* + 1 and from cell *i* to cell *j*, where cell *j* has the same color as cell *i* and no cell between *i* and *j* has that color.

Using dynamic programming, *dp*_{i} = 1 + *min*(*dp*_{i + 1}, *dp*_{*next*_{i}}): try to jump to the next cell or to the first cell to the right that has the same color.
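A bottom-up sketch of the DP, building *next*_{i} by scanning from the right (names are illustrative):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Minimum jumps from the first to the last cell when from cell i you may
// step to i + 1 or jump to the nearest cell to the right with the same
// color: dp[i] = 1 + min(dp[i + 1], dp[next[i]]).
int minJumps(const vector<int>& color) {
    int n = color.size();
    vector<int> nxt(n, -1), dp(n, 0);
    unordered_map<int, int> seen;  // color -> nearest index to the right
    for (int i = n - 1; i >= 0; --i) {
        if (seen.count(color[i])) nxt[i] = seen[color[i]];
        seen[color[i]] = i;
    }
    for (int i = n - 2; i >= 0; --i) {  // dp[n - 1] = 0
        dp[i] = 1 + dp[i + 1];
        if (nxt[i] != -1) dp[i] = min(dp[i], 1 + dp[nxt[i]]);
    }
    return dp[0];
}
```

For colors `{1, 2, 1, 2, 1}` the best path is 0 → 2 → 4, i.e. 2 jumps.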

justHusam solutions:

BFS: https://ideone.com/P2yACH

DP top-down: https://ideone.com/UnBst5

DP bottom-up: https://ideone.com/ysugUh

**Complexity:** *O*(*N*).

To be able to find this solution you need to work it out on paper; let's suppose we want to find the answer for 3 numbers, *A*, *B* and *C*.

The terms of those 3 will be:

*A* + *B* + *C* + *AB* + *AC* + *BC* + *ABC*

Take *C* as a common factor out of the terms that contain *C*:

*A* + *B* + *AB* + *C*(1 + *A* + *B* + *AB*)

Notice that if *X* = *A* + *B* + *AB* then:

*X* + *C*(1 + *X*)

Now take B as a common factor as well:

*A* + *B*(1 + *A*) + *C*(1 + *A* + *B*(1 + *A*))

Suppose the answer for the problem is *X*, we can notice that if we add a new number *Y*, the answer will be *X* + *Y*(1 + *X*).

Using dynamic programming, *dp*_{i} = *dp*_{i + 1} + *A*_{i} × *dp*_{i + 1}: the DP either skips element *i* or multiplies it into every sub-set counted so far. We have to subtract 1 from the answer, since the DP considers the empty sub-set a valid sub-set, which adds one to the wanted answer.
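Equivalently, the recurrence collapses to the closed form ∏(1 + *A*_{i}) - 1; a sketch (taking the answer modulo 10^{9} + 7 is an assumption here, as the statement isn't quoted):

```cpp
#include <bits/stdc++.h>
using namespace std;

const long long MOD = 1e9 + 7;  // assumed modulus

// Sum of products over all non-empty sub-sets: prod(1 + A_i) - 1,
// since expanding the product enumerates every sub-set exactly once
// (the empty sub-set contributes the 1 that we subtract).
long long subsetProductSum(const vector<long long>& a) {
    long long ans = 1;
    for (long long x : a) ans = ans * ((1 + x) % MOD) % MOD;
    return (ans - 1 + MOD) % MOD;
}
```

For `{1, 2, 3}`: (1+1)(1+2)(1+3) - 1 = 23, matching 1+2+3+2+3+6+6 from the expansion above.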

Vendetta. solutions:

Math: https://ideone.com/rYmHOD

DP: https://ideone.com/IDWlkQ

**Complexity:** *O*(*N*).

If there is more than one letter with an odd frequency then we can't make any palindrome.

Since the limits are quite small, we can brute force and build all the palindromes. The first half of a palindrome is the same as the second half reversed, so it's enough to generate the first halves only: make a string containing half of the frequency of each letter in the original string, and use `next_permutation` to check how many different permutations you can get.

There's also another, faster solution if the constraints were bigger. To count how many different ways we can arrange half of the letters to form the first half of the palindrome, the answer is (*N*/2)! / ((*f*_{a}/2)! × (*f*_{b}/2)! × ... × (*f*_{z}/2)!), where *f*_{c} is the frequency of letter *c*: to eliminate repeated permutations we divide by the factorial of half of the frequency of each letter, basic counting math.
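The counting formula can be sketched directly; with *N* ≤ 20 the factorials fit in plain `long long` with no modulus (the function name is illustrative):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Number of distinct palindromes formed from all letters of s:
// 0 if more than one letter has an odd frequency, otherwise
// H! / prod((freq[c]/2)!) where H = sum of freq[c]/2 (the half length).
long long countPalindromeArrangements(const string& s) {
    array<int, 26> freq{};
    for (char c : s) freq[c - 'a']++;
    int odd = 0, half = 0;
    for (int f : freq) { odd += f & 1; half += f / 2; }
    if (odd > 1) return 0;
    auto fact = [](int n) { long long r = 1; for (int i = 2; i <= n; ++i) r *= i; return r; };
    long long ans = fact(half);
    for (int f : freq) ans /= fact(f / 2);
    return ans;
}
```

For `"aabb"` the two palindromes are `abba` and `baab`.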

justHusam `next_permutation` solution: https://ideone.com/zaXvKc

**Complexity:** *O*((*N*/2)! × *N*).

**Note:** *O*(*N*!) gives TLE because 20! is approximately 2.4 × 10^{18}.

Vendetta. math solution: https://ideone.com/0BNI2r

**Complexity:** *O*(*N*).

Codeforces (c) Copyright 2010-2018 Mike Mirzayanov
