Hi!

I posted a video about my competitive programming setup on Algorithms Conquered. It's a very short video which gives you an overview of my setup. Check it out if you're interested!


Hi,

I recently posted a tutorial on the Sieve of Eratosthenes. Less experienced participants often see it as just a tool for computing primes or prime factorisations, but the general idea of "iterating over multiples" can be very powerful. I will make follow-up videos on some of the following problems, which don't really have anything to do with primes:

Video Link: Click Here

Subscribe to the channel to see more tutorials!
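For reference, the "iterating over multiples" pattern looks like this (a minimal smallest-prime-factor sieve sketch; the function name and the SPF formulation are my own, not necessarily what the video uses):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Classic "iterate over multiples" pattern: for every prime i, visit all
// multiples of i. Visiting multiples of every i from 1 to n costs
// n/1 + n/2 + ... + n/n = O(n log n); restricting the outer loop to primes,
// as below, gives the usual O(n log log n) sieve.
vector<int> smallest_prime_factor(int n) {
    vector<int> spf(n + 1, 0); // spf[0] and spf[1] stay 0 (no prime factor)
    for (int i = 2; i <= n; i++) {
        if (spf[i] != 0) continue;      // i is composite, already marked
        for (int j = i; j <= n; j += i) // iterate over multiples of the prime i
            if (spf[j] == 0) spf[j] = i;
    }
    return spf;
}
```

For example, `smallest_prime_factor(20)[12]` evaluates to `2`, and repeatedly dividing by `spf` factorises any number up to `n` in O(log n).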

Hey all,

As always, we hope you’re doing well. We’re reaching out to announce that we’re now opening sign ups for UFDS 2023. This program is designed to train students for ZCO, INOI and IOITC. If you're a school student in India, this post could be of interest to you. Last year, we had all 10 gold medallists, 16 silver medallists and 17 bronze medallists at the INOI present in our server. Part of the program is to surround you with others who will attempt the contest, so you can all help each other.

Like last year (and the year before), if you join the program, we’ll help you by:

- Compiling useful practice problem sets to introduce you to a variety of different techniques
- Selecting problems to practice with
- Providing hints to solve problems in practice
- Discussing problems after a contest
- Recommending similar problems (if you’d like to reinforce a trick/observation)
- Providing debugging assistance
- Finding tutorials for classical techniques
- Hosting sessions to teach/discuss classical techniques (we now have a list of topics we’d like to cover in order, but we’re usually also happy to teach other things as necessary)
- Hosting sessions to teach/discuss specific problems

Most importantly, joining the server gives you access to sample ZCO and INOI sets for you to practice on. The majority of participants rate this as the most important element of their preparation.

One of the biggest issues with ZCO preparation is the extremely large gap between learning a programming language and qualifying through ZCO. This year, there is a separate program designed specifically for ZCO, instead of a combined program. Our hope is that this will better serve students struggling to qualify through ZCO. The only criterion for this program is that you know a programming language (ideally C++, since all instruction will be in C++). You will be given some resources and a couple of weeks to learn C++, although no sessions will be offered on this.

After ZCO, more personalised mentoring for INOI and IOITC will be offered to students in the program who demonstrate an extraordinary level of effort, achievement and willingness to help others during the ZCO preparation process.

If you’re interested in joining, please fill out this form. You will obtain the invite link immediately, but please note that you will only be given roles to view the server once your form submission has been reviewed and approved. In particular, UFDS 2023 is only open to students intending to participate in ZIO, ZCO, INOI and IOITC in 2023, although it does not matter whether or not you have attended past versions of ICO and UFDS. If you have any questions, please message ak2006 on Codeforces or Discord (ak2006#5194).

Mentors this year include kshitij_sodani aryan12 smjleo arujbansal vishesh312 niloyroot ak2006 VivaanGupta mynameismon. To be clear, these mentors have qualified for IOITC at least once and/or are INOI medallists (with the exception of a few in charge of the overall structure and our website). Also, the mentors listed are mostly involved with the ZCO part of the program for now, which is the first stage of the olympiad. For the INOI part, we will most likely have a somewhat different set of highly qualified mentors.

Regards,

UFDS Mentoring 2023 Team

Hi Codeforces!

I am delighted to announce the launch of my platform, CP Drills (https://www.cpdrills.com/).

demoralizer says, *"CP Drills contains a good collection of handpicked problems with various charts and graphs, which are very beneficial for measuring progress. The UI is lite and pretty. A great app overall!"*

Thank you demoralizer for reviewing the problems and providing feedback.

CP Drills consists of training modules for you to improve at competitive programming. We currently feature sections on *speed training* and *dynamic programming*, with many more such as *skill training*, *graph theory*, *brute force*, *binary search*, and *data structures* to come. *Skill training* is a special section which will recommend to you the next problem you should solve, based on your performance.

You might have often heard the term *speedforces*. *Speedforces* refers to contests where a large jump in difficulty between two consecutive problems causes a large part of the ranklist to be decided by speed on the earlier problems. This happens more often than you would think. Even if two participants solve the same set of problems, one of them could end up an Expert and the other a Candidate Master purely because of speed. Regardless, it is always beneficial to solve problems faster, as it leaves you with more time to attempt harder ones.

You can use our problem selector to give you problems according to your needs. Currently, we have options to choose CF tags for problems as well as their contest type. You can then use the stopwatch to save the time you took to solve the problem. On saving, refresh the page, and your analytics graphs will be updated. We currently support analytics on the average time taken across ratings, the variation of average time taken per rating across months, and the number of problems solved per rating every month.

We currently feature a section on dynamic programming as part of our topic practice module. This is a curated set of handpicked problems, some of which are reviewed by demoralizer. We have problem sets for competitors of all skill levels from Beginners to Candidate Masters. Just like the speed training section, every other section also contains analytical graphs.

So, what are you waiting for? Go ahead and sign up on CP Drills now! In case you are interested in helping curate problems for the other topic sections, please comment below on this blog post.

Please reach out to info.cpdrills@gmail.com for any issues! Make sure you check your spam folder for the verification email in case you cannot find it.

Let's look at the following two problems:

1. *Given an array $$$A$$$ of $$$N$$$ elements where $$$0 \le A_i \le X$$$ for some positive integer $$$X$$$, find all possible subset sums.*

2. *Given an array $$$A$$$ of $$$N$$$ elements where $$$0 \le A_i \le X$$$ for some positive integer $$$X$$$, for all possible subset sums, calculate the minimum number of elements required to achieve that sum.*

Thanks to socho for improving the tutorial!

This is a classical dynamic programming problem which can be solved in $$$O(N^2 \times X)$$$ (the sum of elements is bounded by $$$N \times X$$$) and hence that solution won't be discussed in much detail here. We can create an array $$$\it{dp}$$$ of size $$$X + 1$$$ and say that $$$\it{dp}_a$$$ is $$$1$$$ if it's possible to form a sum of $$$a$$$ using some elements from the array and $$$0$$$ otherwise. Then, we try to include every element $$$A_i$$$ and set all $$$\it{dp}_a$$$ for which $$$\it{dp}_{a - A_i}$$$ is $$$1$$$, to $$$1$$$.
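That classical transition can be sketched as follows (a hypothetical standalone helper; the name `subset_sums` is mine). With $$$S$$$ the sum of the elements, it runs in $$$O(N \times S)$$$:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Classical subset-sum DP: dp[a] == 1 iff some subset of A sums to a.
// With S = sum of elements, this runs in O(N * S) time and O(S) memory.
vector<int> subset_sums(const vector<int>& A) {
    int S = accumulate(A.begin(), A.end(), 0);
    vector<int> dp(S + 1, 0);
    dp[0] = 1; // the empty subset forms sum 0
    for (int x : A)
        for (int a = S; a >= x; a--) // iterate downwards so each element is used at most once
            if (dp[a - x]) dp[a] = 1;
    return dp;
}
```

For example, `subset_sums({1, 2, 4})` marks every sum from 0 to 7 as achievable.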

However, if the sum of elements is bounded by $$$N$$$, it turns out that this problem can be solved in $$$O(N\sqrt{N})$$$. Formally, we have the following constraint:

So, what is the maximum number of distinct elements our array can have if its sum is bounded by $$$N$$$? Since we are trying to maximise the count, it is optimal for us to consider the smallest possible elements, starting from $$$1$$$. We want to find the largest $$$P$$$ such that the sum of the first $$$P$$$ natural numbers does not exceed $$$N$$$. This sum is given by the formula $$$\frac{P (P + 1)}{2}$$$. Solving $$$\frac{P (P + 1)}{2} = N$$$ for $$$P$$$, we get $$$P \approx \sqrt{2N}$$$.

It's okay for $$$P$$$ to be slightly bigger, as we want a rough estimate only.

Therefore, we have at most $$$\sqrt{2N}$$$ distinct values in the array, which allows us to develop an $$$O(N\sqrt{N})$$$ algorithm.

Let's retain the definition of $$$\it{dp}$$$. Our transitions will change. We can process elements in pairs of $$$(W_i, K_i)$$$ where $$$W_i$$$ and $$$K_i$$$ correspond to the value and frequency of element i of the compressed array. How do we update the $$$\it{dp}$$$ array? To understand this, first take a look at what an array looks like if the $$$i^{\it{th}}$$$ element is $$$i \mathbin{\%} W_i$$$. Consider $$$W_i = 4$$$.

So, for each $$$W_i$$$, let $$$T$$$ be any value smaller than $$$W_i$$$ and greater than or equal to $$$0$$$ $$$(0 \le T \le W_i - 1)$$$. We see that array indices with the same value of $$$T$$$ are exactly $$$W_i$$$ elements apart. Let's consider indices with the same value of $$$T$$$ together. Let's set $$$T = 1$$$ and look at the indices where $$$T = 1$$$ appears in the array above. Call this array $$$B$$$.

Recall that the $$$i^{\it{th}}$$$ index of the $$$\it{dp}$$$ array stores whether we can form a sum of $$$i$$$. Since the same $$$T$$$ values occur $$$W_i$$$ elements apart, going from one occurrence of $$$T$$$ to the next uses a single copy of $$$W_i$$$. Since we have exactly $$$K_i$$$ copies of $$$W_i$$$, we can form sum $$$i$$$ if $$$\it{dp}_{i - W_i \times Y}$$$ is $$$1$$$ for some $$$Y$$$ $$$(0 \le Y \le K_i)$$$. In other words, if the sum of all such $$$\it{dp}$$$ values is positive, then a sum of $$$i$$$ is achievable and we set $$$\it{dp}_i$$$ to $$$1$$$. We can maintain a variable which stores the sum of the last $$$K_i$$$ such $$$\it{dp}$$$ values. For example, for the ninth index, if we have at least two copies of $$$W_i = 4$$$, then we want at least one of $$$\it{dp}_5$$$ or $$$\it{dp}_1$$$ to be $$$1$$$.

For every $$$W_i$$$, for every index $$$P$$$ in the range $$$[0, W_i - 1]$$$, we loop over all multiples of $$$W_i$$$ starting at $$$P$$$. That is a total of $$$\frac{N}{W_i}$$$ multiples per $$$P$$$. Since $$$P$$$ assumes exactly $$$W_i$$$ different values, the cost of processing one distinct value is $$$O(W_i \times \frac{N}{W_i}) = O(N)$$$. Since we do this for at most $$$\sqrt{2N}$$$ distinct values, the final complexity is $$$O(\sqrt{N} \times N) = O(N\sqrt{N})$$$. We require $$$O(N)$$$ memory. Take a look at the code below to understand this better.

```
#include <bits/stdc++.h>
using namespace std;

int main() {
    int N;
    cin >> N;
    // Sum of elements <= N implies that every element is <= N
    vector<int> freq(N + 1, 0);
    for (int i = 0; i < N; i++) {
        int x;
        cin >> x;
        freq[x]++;
    }
    vector<pair<int, int>> compressed;
    for (int i = 1; i <= N; i++) {
        if (freq[i] > 0) compressed.emplace_back(i, freq[i]);
    }
    vector<int> dp(N + 1, 0);
    dp[0] = 1;
    for (const auto &[w, k] : compressed) {
        vector<int> ndp = dp;
        for (int p = 0; p < w; p++) {
            int sum = 0; // sum of the last k dp values with remainder p modulo w
            for (int multiple = p, count = 0; multiple <= N; multiple += w, count++) {
                if (count > k) {
                    sum -= dp[multiple - w * count];
                    count--;
                }
                if (sum > 0) ndp[multiple] = 1;
                sum += dp[multiple];
            }
        }
        swap(dp, ndp);
    }
    cout << "Possible subset sums are:\n";
    for (int i = 0; i <= N; i++) {
        if (dp[i] > 0) cout << i << " ";
    }
}
```

This successfully solves the first problem mentioned at the start of the post. Let's now take a look at how to modify our approach to solve the second one as well. Before proceeding, make sure you understand the tutorial up to this point. If you want to try the second problem yourself first, you may do so here: 95E - Lucky Country. Here's my code for it: 137173086.

This time, we want to find the minimum number of elements in a subset, for each subset sum. $$$\it{dp}_i$$$ should now store the minimum number of elements required to form a subset sum of $$$i$$$, with $$$\it{dp}_i = \infty$$$ if it is impossible to do so. How do we transition between states? This time, it is optimal to take the minimum among the last $$$K_i$$$ $$$\it{dp}$$$ values (remember that the last $$$K_i$$$ values refers to the last $$$K_i$$$ indices with value $$$T$$$, as shown for the first problem, i.e. array $$$B$$$). We can do so by maintaining the minimum over the last $$$K_i$$$ $$$\it{dp}$$$ values with indices belonging to array $$$B$$$. This essentially becomes a sliding window minimum problem, and the time complexity remains the same. At this point, you might be wondering what value to push into the deque so that we can accurately find the minimum while iterating over multiples. Let $$$L$$$ be the number of times we have added $$$W_i$$$ to $$$P$$$. We can simply insert $$$\it{dp}_{B_j} - L$$$. Once we have the minimum, we add back the current value of $$$L$$$. This is the same idea used to solve the problem 95E - Lucky Country mentioned above. Take a look at the code snippet below.

```
vector<int> dp(N + 1, INF); // INF is any value larger than a possible answer, e.g. 1e9
dp[0] = 0;
for (const auto &[w, k] : compressed) {
    vector<int> ndp = dp;
    for (int i = 0; i < w; i++) {
        // sliding window minimum over the last k dp values with remainder i modulo w
        deque<pair<int, int>> window; // {dp value minus number of steps, step index}
        for (int j = i, mul = 0; j <= N; j += w, mul++) {
            while (!window.empty() && window.front().second < mul - k)
                window.pop_front();
            if (!window.empty()) ndp[j] = min(ndp[j], window.front().first + mul);
            while (!window.empty() && window.back().first >= dp[j] - mul)
                window.pop_back();
            window.emplace_back(dp[j] - mul, mul);
        }
    }
    swap(ndp, dp);
}
```

Check out my YouTube channel Algorithms Conquered to learn many more interesting algorithms!

I just posted a tutorial on Sparse Tables to my YouTube channel here: Sparse Tables.

Since there are already quite a few tutorials on it, I wanted to explain it in the shortest time possible while still keeping it easy to understand, so my actual explanation is around 7 minutes long. Do check it out!

In computer science, 2-satisfiability, 2-SAT or just 2SAT is a computational problem of assigning values to variables, each of which has two possible values, in order to satisfy a system of constraints on pairs of variables. It is a special case of the general Boolean satisfiability problem, which can involve constraints on more than two variables, and of constraint satisfaction problems, which can allow more than two choices for the value of each variable. But in contrast to those more general problems, which are NP-complete, 2-satisfiability can be solved in polynomial time. -Wikipedia

If you have not heard of propositional logic before, please read about it here. Wikipedia links have been provided wherever a term is used for the first time in this tutorial. The whole algorithm is based on graph theory, so it will help to be familiar with directed graphs and strongly connected components beforehand.

By representing our formula in the 2-Conjunctive Normal Form, we can find an assignment of values for our variables in $$$O(N + M)$$$ where we have $$$N$$$ variables and $$$M$$$ clauses or report that the proposition isn't satisfiable. Simply put, a formula in the 2-CNF is an *AND of ORs* where each clause contains exactly two literals.

Thanks to Monogon for providing valuable suggestions and helping with the proof.

$$$\lnot x$$$ is the complement of $$$x$$$. If $$$x$$$ is $$$\it{true}$$$, $$$\lnot x$$$ is $$$\it{false}$$$. If $$$x$$$ is $$$\it{false}$$$, $$$\lnot x$$$ is $$$\it{true}$$$.

$$$(x \lor y)$$$ stands for $$$x$$$ *OR* $$$y$$$ (disjunction). It results in $$$\it{true}$$$ only if at least one of $$$x$$$ and $$$y$$$ is $$$\it{true}$$$.

$$$(x \land y)$$$ stands for $$$x$$$ *AND* $$$y$$$ (conjunction). It results in $$$\it{true}$$$ only if $$$x$$$ and $$$y$$$ are both assigned $$$\it{true}$$$.

$$$(x \Rightarrow y)$$$ stands for $$$x$$$ *implies* $$$y$$$ (implication). In order for it to result in $$$\it{true}$$$, if $$$x$$$ is $$$\it{true}$$$, $$$y$$$ *has* to be $$$\it{true}$$$. Otherwise, $$$y$$$ could be $$$\it{true}$$$ or $$$\it{false}$$$.

$$$(x \iff y)$$$ stands for ($$$x$$$ *implies* $$$y$$$) *AND* ($$$y$$$ implies $$$x$$$). It results in $$$\it{true}$$$ only if both $$$x$$$ and $$$y$$$ have the same value.

2-CNF consists of a bunch of *OR* clauses combined with *AND*. Each *OR* clause must result in $$$\it{true}$$$ for a proposition to be satisfied.
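To make this concrete, here is a tiny brute-force satisfiability checker (exponential in the number of variables, for intuition only; the `Clause` representation is my own): a 2-CNF formula is satisfied exactly when every clause has at least one true literal.

```cpp
#include <bits/stdc++.h>
using namespace std;

// (literal u) OR (literal v); su/sv == false means the literal is negated
struct Clause { int u; bool su; int v; bool sv; };

// Try all 2^n assignments: O(2^n * m). The 2-SAT algorithm below does
// the same job in O(n + m).
bool satisfiable_bruteforce(int n, const vector<Clause>& clauses) {
    for (int mask = 0; mask < (1 << n); mask++) {
        // a literal is true when its variable's bit matches its sign
        auto lit = [&](int var, bool sign) { return (((mask >> var) & 1) != 0) == sign; };
        bool ok = true;
        for (const auto& c : clauses)
            ok &= lit(c.u, c.su) || lit(c.v, c.sv);
        if (ok) return true;
    }
    return false;
}
```

For instance, $$$(x_0 \lor x_1) \land (\lnot x_0 \lor \lnot x_1)$$$ is satisfiable (set exactly one variable to true), while $$$(x_0 \lor x_0) \land (\lnot x_0 \lor \lnot x_0)$$$ is not.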

We want to create a *directed* graph of implications where each node is a literal, i.e. a variable or its complement.

Let's consider a disjunction of two literals $$$x$$$ and $$$y$$$ to see what information we can deduce from it. We know that: $$$(x \lor y) \iff (\lnot x \Rightarrow y)$$$ (Implication Law).

This becomes an edge $$$(\lnot x, y)$$$ in our graph. An edge $$$(u, v)$$$ in this tutorial is *directed* from $$$u$$$ to $$$v$$$.

We must also add an edge $$$(\lnot y, x)$$$ to our graph.

An edge $$$(\lnot y, x)$$$ refers to the clause $$$(\lnot y \Rightarrow x)$$$.

$$$(x \lor y) \iff (y \lor x)$$$ (Commutative Property)

$$$(y \lor x) \iff (\lnot y \Rightarrow x)$$$ (Implication Law)

Finally, $$$(x \lor y) \iff ((\lnot x \Rightarrow y) \land (\lnot y \Rightarrow x))$$$.

Since our formula is a conjunction of clauses of the form $$$(x \lor y)$$$, we will be adding edges of this type only.

**Do keep in mind that if there exists an edge $$$(x, y)$$$ in the graph, there also exists an edge $$$(\lnot y, \lnot x)$$$**.

Here is what the graph looks like for the following proposition in 2-CNF:

Shown above is the implication graph and its condensation graph.

After we have added all the required edges, we want to find all the strongly connected components in the graph. The condensation graph (the graph obtained by compressing each SCC into a single node while retaining edges between components) is acyclic. Visualise this as a graph where SCCs are connected to each other with edges directed from left to right (as shown in the diagrams above).

What happens when for any variable $$$x$$$, $$$\lnot x$$$ lies in the same SCC as $$$x$$$? There is no valid assignment of truth values. Setting $$$x$$$ to $$$true$$$ would imply that $$$\lnot x$$$ has to be $$$true$$$ and setting $$$\lnot x$$$ to $$$true$$$ would imply that $$$x$$$ has to be $$$true$$$. In both cases, this leads to a contradiction. Hence, no answer exists.

When that is not the case, we can assign values to all variables.

Let $$$id[v]$$$ represent the index of the SCC (in any valid topological ordering) that contains $$$v$$$. In the topological ordering, for any two nodes $$$u$$$ and $$$v$$$, if $$$id[u]$$$ < $$$id[v]$$$, the SCC containing $$$u$$$ lies to the left of the SCC containing $$$v$$$. If they are equal, both nodes lie in the same SCC. If it's the opposite, $$$u$$$'s SCC lies to the right of $$$v$$$'s SCC.

For each variable $$$x$$$:

1. If $$$id[x]$$$ > $$$id[\lnot x]$$$, let's set $$$x$$$ to $$$\it{true}$$$ and $$$\lnot x$$$ to $$$\it{false}$$$.

2. If $$$id[x]$$$ < $$$id[\lnot x]$$$, let's set $$$x$$$ to $$$\it{false}$$$ and $$$\lnot x$$$ to $$$\it{true}$$$.

An important observation is that all literals in an SCC have the same truth value.

This way, we can satisfy our proposition in linear time.

The implication graph is set up in a way that our proposition is satisfied if we don't have a case where some node $$$u$$$ which has been assigned $$$true$$$ can reach some node $$$v$$$ which has been assigned $$$false$$$.

If some variable $$$x$$$ is assigned true, we know that $$$id[x]$$$ > $$$id[\lnot x]$$$ because of how we construct the answer. Keep in mind that if $$$x$$$ can reach $$$y$$$, $$$\lnot y$$$ can reach $$$\lnot x$$$ in our implication graph.

Let's prove that we don't have two nodes $$$u$$$ and $$$v$$$ such that there is a path from $$$u$$$ to $$$v$$$ and $$$u$$$ is assigned $$$true$$$ while $$$v$$$ is assigned $$$false$$$. Assume, for contradiction, that such a pair exists. What does this mean? $$$id[\lnot u] < id[u]$$$ because $$$u$$$ has been assigned $$$true$$$, and $$$id[v] < id[\lnot v]$$$ because $$$v$$$ has been assigned $$$false$$$. By our assumption, $$$id[\lnot u] < id[u] \leq id[v] < id[\lnot v]$$$. However, since there exists a path from $$$u$$$ to $$$v$$$, there has to exist a path from $$$\lnot v$$$ to $$$\lnot u$$$. That would imply that $$$id[\lnot u] \ge id[\lnot v]$$$. This contradicts our previous inequality.

Keep in mind that:

**1. Forcing a variable to be $$$true$$$**

If we want to force $$$x$$$ to be true, it is equivalent to adding a clause $$$(x \lor x)$$$.

**2. Exactly one of the variables is $$$true$$$**

This is equivalent to $$$(x \lor y) \land (\lnot x \lor \lnot y)$$$.

**3. At least one of the variables is $$$true$$$**

This is just a clause $$$(x \lor y)$$$.

**4. Both variables must have the same value**

This is equivalent to $$$(\lnot x \lor y) \land (x \lor \lnot y)$$$.

After we figure out the clauses that need to be added and set up the graph adjacency list, we need to extract all strongly connected components. Kosaraju's algorithm or Tarjan's algorithm is commonly used for this; I use Kosaraju's algorithm in my implementation below. We want to give each SCC an ID and store, for each node, which SCC it belongs to. Variables can be numbered from $$$1$$$ to $$$N$$$ and their complements from $$$N + 1$$$ to $$$2N$$$, so $$$i + N$$$ is $$$i$$$'s complement.

```
struct two_sat {
    int n;
    vector<vector<int>> g, gr; // gr is the reversed graph
    vector<int> comp, topological_order, answer; // comp[v]: ID of the SCC containing node v
    vector<bool> vis;
    two_sat() {}
    two_sat(int _n) { init(_n); }
    void init(int _n) {
        n = _n;
        g.assign(2 * n, vector<int>());
        gr.assign(2 * n, vector<int>());
        comp.resize(2 * n);
        vis.resize(2 * n);
        answer.resize(2 * n);
    }
    void add_edge(int u, int v) {
        g[u].push_back(v);
        gr[v].push_back(u);
    }
    // For the following three functions, (i, f) describes a literal:
    // if 'f' is true, the literal is variable i; otherwise it is i's complement.
    // At least one of them is true
    void add_clause_or(int i, bool f, int j, bool g) {
        add_edge(i + (f ? n : 0), j + (g ? 0 : n));
        add_edge(j + (g ? n : 0), i + (f ? 0 : n));
    }
    // Only one of them is true
    void add_clause_xor(int i, bool f, int j, bool g) {
        add_clause_or(i, f, j, g);
        add_clause_or(i, !f, j, !g);
    }
    // Both of them have the same value
    void add_clause_and(int i, bool f, int j, bool g) {
        add_clause_xor(i, !f, j, g);
    }
    // Topological sort
    void dfs(int u) {
        vis[u] = true;
        for (const auto &v : g[u])
            if (!vis[v]) dfs(v);
        topological_order.push_back(u);
    }
    // Extracting strongly connected components
    void scc(int u, int id) {
        vis[u] = true;
        comp[u] = id;
        for (const auto &v : gr[u])
            if (!vis[v]) scc(v, id);
    }
    // Returns true if the given proposition is satisfiable and constructs a valid assignment
    bool satisfiable() {
        fill(vis.begin(), vis.end(), false);
        for (int i = 0; i < 2 * n; i++)
            if (!vis[i]) dfs(i);
        fill(vis.begin(), vis.end(), false);
        reverse(topological_order.begin(), topological_order.end());
        int id = 0;
        for (const auto &v : topological_order)
            if (!vis[v]) scc(v, id++);
        // Constructing the answer
        for (int i = 0; i < n; i++) {
            if (comp[i] == comp[i + n]) return false;
            answer[i] = (comp[i] > comp[i + n] ? 1 : 0);
        }
        return true;
    }
};
```

Try to solve this problem on your own.

This problem is very direct. Each topping represents a variable. For a topping $$$x$$$ and preference '$$$+$$$', we use $$$x$$$ while adding a clause. For preference '$$$-$$$' we use $$$\lnot x$$$ in our clause.

```
#include <bits/stdc++.h>
using namespace std;

struct two_sat {
    int n;
    vector<vector<int>> g, gr;
    vector<int> comp, topological_order, answer;
    vector<bool> vis;
    two_sat() {}
    two_sat(int _n) { init(_n); }
    void init(int _n) {
        n = _n;
        g.assign(2 * n, vector<int>());
        gr.assign(2 * n, vector<int>());
        comp.resize(2 * n);
        vis.resize(2 * n);
        answer.resize(2 * n);
    }
    void add_edge(int u, int v) {
        g[u].push_back(v);
        gr[v].push_back(u);
    }
    // At least one of them is true
    void add_clause_or(int i, bool f, int j, bool g) {
        add_edge(i + (f ? n : 0), j + (g ? 0 : n));
        add_edge(j + (g ? n : 0), i + (f ? 0 : n));
    }
    // Only one of them is true
    void add_clause_xor(int i, bool f, int j, bool g) {
        add_clause_or(i, f, j, g);
        add_clause_or(i, !f, j, !g);
    }
    // Both of them have the same value
    void add_clause_and(int i, bool f, int j, bool g) {
        add_clause_xor(i, !f, j, g);
    }
    void dfs(int u) {
        vis[u] = true;
        for (const auto &v : g[u])
            if (!vis[v]) dfs(v);
        topological_order.push_back(u);
    }
    void scc(int u, int id) {
        vis[u] = true;
        comp[u] = id;
        for (const auto &v : gr[u])
            if (!vis[v]) scc(v, id);
    }
    bool satisfiable() {
        fill(vis.begin(), vis.end(), false);
        for (int i = 0; i < 2 * n; i++)
            if (!vis[i]) dfs(i);
        fill(vis.begin(), vis.end(), false);
        reverse(topological_order.begin(), topological_order.end());
        int id = 0;
        for (const auto &v : topological_order)
            if (!vis[v]) scc(v, id++);
        for (int i = 0; i < n; i++) {
            if (comp[i] == comp[i + n]) return false;
            answer[i] = (comp[i] > comp[i + n] ? 1 : 0);
        }
        return true;
    }
};

signed main() {
    int N, M;
    cin >> N >> M;
    two_sat solver(M);
    while (N--) {
        int topping1, topping2;
        char preference1, preference2;
        cin >> preference1 >> topping1 >> preference2 >> topping2;
        solver.add_clause_or(topping1 - 1, preference1 == '+', topping2 - 1, preference2 == '+');
    }
    if (!solver.satisfiable()) {
        cout << "IMPOSSIBLE";
        return 0;
    }
    for (int i = 0; i < M; i++)
        cout << (solver.answer[i] ? '+' : '-') << " ";
}
```

Though this problem asks only whether it is possible to place the cards in such a way that no two cards are showing the same picture, try to find a valid arrangement as well.

If $$$N$$$ was small enough, we could add constraints between each pair of cards. This would be $$$O(N^2)$$$. We cannot do that here.

For each integer $$$x$$$ from $$$1$$$ to $$$2N$$$, we want to store the indices of all the cards which have $$$x$$$ on any side, along with which side they have it on. Now our problem is the following: add implications in such a way that at most one card from each list is turned to the side which has that integer on it.

We can solve this problem for each list of indices separately. What implications do we have? Let's consider a list $$$C$$$ of size $$$M$$$, for an arbitrary integer $$$x$$$. Let's make use of $$$M$$$ additional variables $$$\it{pref}_i$$$ in our 2-SAT. $$$\it{pref}_i$$$ tells us whether we have already displayed the integer $$$x$$$ on a card in the range $$$[0, i - 1]$$$ of this list. We have the following logical implications:

1. If $$$C_i$$$ is $$$\it{true}$$$, $$$\it{pref}_{i + 1}$$$ must be $$$\it{true}$$$.

2. If $$$\it{pref}_i$$$ is $$$\it{true}$$$, $$$\it{pref}_{i + 1}$$$ must be $$$\it{true}$$$.

3. If $$$\it{pref}_i$$$ is $$$\it{true}$$$, $$$C_i$$$ must be $$$\it{false}$$$.

This way, we can solve this problem in linear time. See the code below to understand this better.

```
#include <bits/stdc++.h>
using namespace std;

#define sz(x) (int) (x).size()

struct two_sat {
    int n;
    vector<vector<int>> g, gr;
    vector<int> comp, topological_order, answer;
    vector<bool> vis;
    two_sat() {}
    two_sat(int _n) { init(_n); }
    void init(int _n) {
        n = _n;
        g.assign(2 * n, vector<int>());
        gr.assign(2 * n, vector<int>());
        comp.resize(2 * n);
        vis.resize(2 * n);
        answer.resize(2 * n);
    }
    void add_edge(int u, int v) {
        g[u].push_back(v);
        gr[v].push_back(u);
    }
    // Adds the implication (literal i) => (literal j), plus its contrapositive
    void add_edge(int i, bool f, int j, bool g) {
        add_edge(i + (f ? 0 : n), j + (g ? 0 : n));
        add_edge(j + (g ? n : 0), i + (f ? n : 0));
    }
    // At least one of them is true
    void add_clause_or(int i, bool f, int j, bool g) {
        add_edge(i + (f ? n : 0), j + (g ? 0 : n));
        add_edge(j + (g ? n : 0), i + (f ? 0 : n));
    }
    // Only one of them is true
    void add_clause_xor(int i, bool f, int j, bool g) {
        add_clause_or(i, f, j, g);
        add_clause_or(i, !f, j, !g);
    }
    // Both of them have the same value
    void add_clause_and(int i, bool f, int j, bool g) {
        add_clause_xor(i, !f, j, g);
    }
    void dfs(int u) {
        vis[u] = true;
        for (const auto &v : g[u])
            if (!vis[v]) dfs(v);
        topological_order.push_back(u);
    }
    void scc(int u, int id) {
        vis[u] = true;
        comp[u] = id;
        for (const auto &v : gr[u])
            if (!vis[v]) scc(v, id);
    }
    bool satisfiable() {
        fill(vis.begin(), vis.end(), false);
        for (int i = 0; i < 2 * n; i++)
            if (!vis[i]) dfs(i);
        fill(vis.begin(), vis.end(), false);
        reverse(topological_order.begin(), topological_order.end());
        int id = 0;
        for (const auto &v : topological_order)
            if (!vis[v]) scc(v, id++);
        for (int i = 0; i < n; i++) {
            if (comp[i] == comp[i + n]) return false;
            answer[i] = (comp[i] > comp[i + n] ? 1 : 0);
        }
        return true;
    }
};

const int MXN = 5e4 + 1, INF = 1e9 + 5;
two_sat solver;
vector<pair<int, int>> cards[2 * MXN];

void solve() {
    int N;
    cin >> N;
    for (int i = 0; i < N; i++) {
        int x, y;
        cin >> x >> y;
        x--, y--;
        cards[x].emplace_back(i, 1);
        cards[y].emplace_back(i, 0);
    }
    int use = N;
    solver.init(3 * N);
    for (int i = 0; i < 2 * N; i++) {
        if (cards[i].empty()) continue;
        for (int j = 0; j < sz(cards[i]); j++) {
            int cur = cards[i][j].first, val = cards[i][j].second;
            solver.add_edge(cur, val, use, 1); // If current true, prefix from next has to be true
            if (j > 0) {
                solver.add_edge(use - 1, 1, cur, val ^ 1); // If prefix found, current cannot be true
                solver.add_edge(use - 1, 1, use, 1); // If prefix found, next prefix also true
            }
            use++;
        }
    }
    cout << (solver.satisfiable() ? "possible" : "impossible") << "\n";
    for (int i = 0; i < 2 * N; i++)
        cards[i].clear();
}

signed main() {
    ios_base::sync_with_stdio(false);
    cin.tie(nullptr);
    int TC = 1;
    cin >> TC;
    while (TC--) solve();
}
```

Hi!

I posted a new tutorial on Persistent Segment Tree on my YouTube channel. I go over the main idea behind persistency and solve CSES Range Queries and Copies. Thanks for watching!

Video: Majority Element Range Queries

This can be used to solve the recent problem 1514D - Cut and Stick in $$$O((N + Q) \log N)$$$ (the bonus complexity mentioned in the editorial).

See my code: 113593731

Hey! I made a video tutorial on Parallel Binary Search which also serves as an editorial for the problem Meteors from POI. Check it out here!

All the code in the video can be found here.

**Edit: Re-uploaded with increased volume.**

I made a short tutorial on answering range queries offline with Mo's algorithm on my YouTube channel here: Offline Range Queries with Mo's Algorithm

I explain how to use Mo's algorithm in general and solve the problem 221D - Little Elephant and Array using it in the video.

Improving runtime: CP-Algorithms — Tips for improving runtime

Alternative sorting order for Mo's algorithm: An alternative sorting order for Mo's algorithm by gepardo

Here are some practice problems (feel free to recommend some in the comments):

- SPOJ — Range Distinct Queries

- Hackerearth — Range Inversion Queries
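For reference, the general skeleton of Mo's algorithm can be sketched as follows (a generic version that maintains the number of distinct values in the window, as in the SPOJ problem above; the names and the query format are my own):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Mo's algorithm skeleton: answer offline inclusive [l, r] queries (0-indexed)
// by sorting them into sqrt(N)-sized blocks of l and moving a [curL, curR]
// window one step at a time. Total time: O((N + Q) * sqrt(N)) window moves.
vector<int> distinct_in_ranges(const vector<int>& a, vector<array<int, 3>> queries) {
    int n = a.size(), block = max(1, (int)sqrt((double)n));
    for (int i = 0; i < (int)queries.size(); i++) queries[i][2] = i; // remember original order
    sort(queries.begin(), queries.end(), [&](const auto& x, const auto& y) {
        if (x[0] / block != y[0] / block) return x[0] / block < y[0] / block;
        return x[1] < y[1]; // within a block, sort by right endpoint
    });
    map<int, int> cnt;
    int distinct = 0, curL = 0, curR = -1;
    auto add = [&](int i) { if (cnt[a[i]]++ == 0) distinct++; };
    auto rem = [&](int i) { if (--cnt[a[i]] == 0) distinct--; };
    vector<int> ans(queries.size());
    for (const auto& q : queries) {
        while (curR < q[1]) add(++curR); // grow window before shrinking it
        while (curL > q[0]) add(--curL);
        while (curR > q[1]) rem(curR--);
        while (curL < q[0]) rem(curL++);
        ans[q[2]] = distinct;
    }
    return ans;
}
```

Only the `add`/`rem` lambdas change from problem to problem; everything else is boilerplate.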

Hey everyone,

I had written a blog post on the first three problems from the AtCoder Educational DP Contest some time ago. I have decided to make video editorials for most of the problems from Problem K — Stones onwards.

Here is the post on the first three problems:

Editorial For The First Three Problems

Here are the video editorials (still being made):

Problem K — Stones

Problem Z — Frog 3 + Convex Hull Trick

I made a short video tutorial on the main idea behind Binary Lifting (a cool technique for answering tree-related queries) and answering lowest common ancestor queries in $$$O(\log N)$$$ with it. Binary lifting can also be used to answer other queries such as path sum, min, max, etc.

Euler Tours give us a lot of information about trees. One thing they can tell us is whether some node $$$u$$$ is an ancestor of some node $$$v$$$. Here is a very nice problem related to this: 1328E - Tree Queries

I explain the version involving an Euler Tour. In my opinion, it is shorter to code than some other versions involving binary search and lifting nodes to the same depth. You don't need to know what Euler Tours are; I explain that in the video. You just need a thorough understanding of trees and depth-first search.

Check it out here: Binary Lifting Introduction

Here's how I like to implement it: My Implementation

Hey guys, I made a video on the Convex Hull Trick (a great dynamic programming optimization). I explain the general idea behind it and how to implement and apply it when:

1. Our slopes are monotonic and our query values are monotonic

2. Our slopes are monotonic

I basically explain it through the last problem of the AtCoder DP contest (links to everything are available in the video description), so this video also serves as an editorial for Frog 3.

I do recommend that you first read the description as this video does have prerequisites.

Here is the video: Convex Hull Trick Video

You can read more about CHT here: CP-Algorithms Convex Hull Trick and Li Chao Trees

I don't go into dynamic CHT or Li Chao Trees but you can check the video description for a tutorial on Li Chao Trees by radoslav11 which is a great tutorial. I was easily able to learn how Li Chao Trees work from it.

Given $$$Q$$$ ranges of the form $$$[L_i, R_i]$$$, find for each point $$$x \in [1, N]$$$ the number of ranges that contain that point.

$$$1 \le N, Q \le 10^7$$$

$$$1 \le L_i \le R_i \le 10^7$$$

One solution is to loop over each element for each range but this takes $$$O(NQ)$$$ time. We can do better.

A difference array lets us perform multiple range updates when we only need the final array after all updates are applied. Each update of an arbitrary range takes $$$O(1)$$$; only when we need to produce the final answer do we perform a single $$$O(N)$$$ pass, giving $$$O(N)$$$ time and space overall.

Let $$$N = 5$$$. Let's create an array $$$\it{diff}$$$ of size $$$N + 2$$$ which is initially filled with zeroes.

Let $$$Q = 4$$$. Let's calculate the answer given these ranges — $$$[1, 3], [4, 5], [3, 4], [1, 5]$$$

Now, instead of iterating over each element of our array and adding the values, we can simply add the update value to index $$$L$$$ of our difference array and subtract it from the index $$$R + 1$$$ of our difference array. We need a difference array of size $$$N + 2$$$ because we subtract the update value from $$$R + 1$$$. It is possible for $$$R$$$ to be equal to $$$N$$$.

Our $$$\it{diff}$$$ array (indices $$$0$$$ to $$$N + 1$$$) after each query, adding $$$1$$$ at index $$$L$$$ and $$$-1$$$ at index $$$R + 1$$$:

- After $$$[1, 3]$$$: $$$[0, 1, 0, 0, -1, 0, 0]$$$

- After $$$[4, 5]$$$: $$$[0, 1, 0, 0, 0, 0, -1]$$$

- After $$$[3, 4]$$$: $$$[0, 1, 0, 1, 0, -1, -1]$$$

- After $$$[1, 5]$$$: $$$[0, 2, 0, 1, 0, -1, -2]$$$

Finally, we will run a loop from $$$2$$$ to $$$N$$$ (size of the array) and add $$$\it{diff}_{i - 1}$$$ to $$$\it{diff}_i$$$.

When we add the update value at index $$$L$$$, subtract it at index $$$R + 1$$$, and then take prefix sums, every element inside the range receives the update value exactly once. The subtraction at index $$$R + 1$$$ ensures that the value added at index $$$L$$$ does not spill over to elements outside the update range.

We can also perform range increment queries this way. It is not necessary for us to add a fixed value of $$$1$$$ to a range. It can be any arbitrary value.

Code:

```
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n = 5; // Size of the array
    vector<int> elements{0, 1, 1, 1, 1, 1}; // 1-based indexing
    // n + 2 because index 0 is unused and we need one extra slot for r + 1 when r == n.
    vector<int> diff(n + 2, 0);
    int updateValue = 10;
    int l = 2, r = 5;
    diff[l] += updateValue;
    diff[r + 1] -= updateValue;
    for (int i = 1; i <= n; i++) {
        diff[i] += diff[i - 1]; // prefix sum recovers the per-index update
        elements[i] += diff[i];
    }
    for (int i = 1; i <= n; i++) cout << elements[i] << " ";
    return 0;
}
```

I highly recommend you try this problem based on this concept to understand it better:

295A - Greg and Array

My solution:

83554391

**Feel free to share problems :)**

Take a look at the tasks here: Contest Link

The main thing to note in this problem is that the frog, from a position **i** can jump to only **i + 1** or **i + 2**. This simplifies the problem.

We will create an array **dp of size n (the total number of stones)**. **dp[i]** will store the minimum cost to reach position **i**. An array **jumps of size n** will store the height of each stone.

For our base cases, we will set **dp[0] = 0** and **dp[1] = abs(jumps[1] - jumps[0])**, since there is only one way to reach the second stone (0-based indexing): jumping from the first stone.

Now, to get our final answer, **dp[n-1]**, we need to solve all of our subproblems. To solve for any given position **i**, we need to have calculated the best we could do for positions **i - 1** and **i - 2**. Why do we only need to consider positions **i - 1** and **i - 2**? Because the frog can only jump one or two stones ahead of its current position, i.e. to reach position **i** it had to have been at either **i - 1** or **i - 2**.

We can simply do this in **O(n)**.

For any given position **i >= 2**, we can solve the subproblem this way:

```
dp[i] = min(dp[i-1] + abs(jumps[i] - jumps[i-1]), dp[i-2] + abs(jumps[i] - jumps[i-2]))
```

**dp[i]** will be the minimum cost to reach that position. To calculate it, we take the minimum cost of reaching positions **i - 1** and **i - 2**, add the cost of jumping from each to position **i**, and keep the smaller total.

Problem Link

Solution Link

This problem is slightly different from the first one. This time, we are given an additional variable **k** telling us the maximum size of a jump. **k** in the last problem was basically fixed to 2 as from a stone **i** we could only jump to **i + 1** and **i + 2**.

Now, how do we modify our solution to handle this new case where **k** can be greater than 2?

We will still create an array **dp (all values set to INFINITY) of size n (the number of stones we have)**. **dp[i]** will denote the minimum cost to reach **stone i**.

Our base case is again **dp[0] = 0** (0-based indexing), as we start off on the first stone.

We use a nested loop:

```
for (int i = 0; i < n; i++) { // i is the stone the frog is currently at.
    for (int j = i + 1; j <= i + k; j++) { // j is a potential stone to jump to.
        if (j < n) // The target stone must not be out of bounds.
            // Keep the cheapest known total cost to reach stone j.
            dp[j] = min(dp[j], dp[i] + abs(stones[j] - stones[i]));
    }
}
```

Here, the first for loop denotes that the frog is currently at **stone[i]**. The second for loop basically calculates the best we can do if we jump from **stone[i] to stone[i+1]**, **stone[i+2], stone[i+3] ... stone[i+k]**.

We are performing the **min()** operation because if the cost of jumping from **i to j** is higher than a cost we have already found, we do not want to update **dp[j]** to a higher cost.

We perform this for every position the frog could be at, i.e. all positions **0, 1, 2, ..., n-1**.

After performing these operations (solving all the subproblems), our final answer is stored at **dp[n-1]** (0 based indexing).

Problem Link

Solution Link

We are given three types of activities and gain a different number of points for each activity on each day. The only restriction is that we cannot do the same activity on two consecutive days.

For example, if we did activity A yesterday, we cannot do activity A today but we can do it tomorrow.

To solve this problem, we will maintain a 2D array, **dp[n][3]**.

dp[i][0] will store the maximum points we can gain by doing activity A on day i.

dp[i][1] will store the maximum points we can gain by doing activity B on day i.

dp[i][2] will store the maximum points we can gain by doing activity C on day i.

Our base cases will be:

```
// If you swim in the sea on the first day, you cannot get more than a[0] points.
dp[0][0] = a[0];
// If you catch bugs in the mountains on the first day, you cannot get more than b[0] points.
dp[0][1] = b[0];
// If you do homework at home on the first day, you cannot get more than c[0] points.
dp[0][2] = c[0];
```

For every day **i**, as we cannot do the activity we did on the previous day,

```
// If we do activity A today, we have to have done activities B or C on the previous day.
dp[i][0] = a[i] + max(dp[i - 1][1], dp[i - 1][2]);
// If we do activity B today, we have to have done activities A or C on the previous day.
dp[i][1] = b[i] + max(dp[i - 1][0], dp[i - 1][2]);
// If we do activity C today, we have to have done activities A or B on the previous day.
dp[i][2] = c[i] + max(dp[i - 1][1], dp[i - 1][0]);
```

By computing these three values for each day **i**, we can calculate the best we can do by the end of the last day.

As we can end by doing activities A, B, or C, our answer will be the maximum points gained on doing activities A, B, or C on the last day.

Time complexity: **O(n)**

Problem Link

Solution Link
