How can I write math in my blog now? The old functionality (using dollar signs) no longer seems to work, e.g.

*a* = *b* + *c*

**Update:** It's fixed :)



This post is motivated by a problem I recently saw, Problem G of NCPC 2007. This is a standard problem that I'm sure many of you have seen before, but the general topic of partially ordered sets is not too well known.

Let *S* be a set of elements and ≤ be a partial ordering on the set. That is, for some elements *x* and *y* in *S* we may have *x* ≤ *y*. The only properties that ≤ must satisfy are reflexivity (*x* ≤ *x*), antisymmetry (if *x* ≤ *y* and *y* ≤ *x*, then *x* = *y*), and transitivity (if *x* ≤ *y* and *y* ≤ *z*, then *x* ≤ *z*). Note that because it is a *partial* ordering, not all *x* and *y* are comparable.

An example of a partially ordered set (poset) is the set of points (*x*, *y*) in the Cartesian plane with the operator (*x*_{1}, *y*_{1}) ≤ (*x*_{2}, *y*_{2}) iff *x*_{1} ≤ *x*_{2} and *y*_{1} ≤ *y*_{2} (e.g. NCPC 2007 problem).

We define a **chain** *C* to be a subset of *S* such that the elements of *C* can be labelled *x*_{1}, *x*_{2}, ..., *x*_{n} such that *x*_{1} ≤ *x*_{2} ≤ ... ≤ *x*_{n}. A **partition** *P* is a set of chains where each element occurs in exactly one chain.

We define an **antichain** *A* to be a subset of *S* such that for any *x* and *y* in *A* we have neither *x* ≤ *y* nor *y* ≤ *x*. That is, no two elements of an antichain are comparable.

We can define the width of a poset in two ways, which is the result of Dilworth's Theorem. One definition is the size of the maximum antichain; the other is the size of the minimum partition. It is easy to see that any partition must have at least as many chains as the size of the maximum antichain because every element of the antichain must be in a different chain. Dilworth's Theorem tells us that there exists a partition of exactly that size and can be proved inductively (see Wikipedia for proof).

So how does one calculate the width of a poset? To solve this problem in general, we can use maximum matching on a bipartite graph. Duplicate each element *x* in *S* as *u*_{x} and *v*_{x}, and if *x* ≤ *y* (for *x* ≠ *y*) then add the edge (*u*_{x}, *v*_{y}). If you compute the maximum matching on this graph, this is equivalent to partitioning the elements into chains, where if the edge (*u*_{x}, *v*_{y}) is chosen then *x* and *y* are adjacent in the same chain. In particular, if the size of the matching is *m* and |*S*| = *n*, then the number of chains in the partition is *n* - *m*. Notice that this gives a bijection between partitions and matchings, so the maximum matching gives the minimum partition, which we know is equal to the width by Dilworth's Theorem.
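As a sanity check on the construction above, here is a sketch of the chain-partition computation using Kuhn's augmenting-path algorithm for the maximum matching (the class and function names are mine, not standard; `leq[x][y]` is assumed to encode the partial order):

```cpp
#include <vector>
using namespace std;

// Minimum chain partition via maximum bipartite matching on the split
// graph u_x -> v_y. leq[x][y] is true iff x <= y in the poset.
struct ChainPartition {
    int n;
    vector<vector<int>> adj;   // adj[x] lists y != x with x <= y
    vector<int> matchTo;       // matchTo[y] = x matched to v_y, or -1
    vector<bool> used;

    ChainPartition(const vector<vector<bool>>& leq)
        : n(leq.size()), adj(n), matchTo(n, -1) {
        for (int x = 0; x < n; x++)
            for (int y = 0; y < n; y++)
                if (x != y && leq[x][y]) adj[x].push_back(y);
    }

    // Try to find an augmenting path starting from u_x.
    bool augment(int x) {
        for (int y : adj[x]) {
            if (used[y]) continue;
            used[y] = true;
            if (matchTo[y] == -1 || augment(matchTo[y])) {
                matchTo[y] = x;
                return true;
            }
        }
        return false;
    }

    int minChains() {
        int m = 0;
        for (int x = 0; x < n; x++) {
            used.assign(n, false);
            if (augment(x)) m++;
        }
        return n - m;   // each matched edge merges two chains into one
    }
};
```

For example, on the divisibility poset over {1, 2, 3, 4}, the chains {1, 2, 4} and {3} give a minimum partition of size 2, matching the maximum antichain {2, 3}.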

So now that we can calculate the width of a poset, we can just apply that to our problem, right? Not quite.

The bounds on our problem are up to 20,000 so we can't use maximum matching. Luckily, points in the Cartesian plane are more structured than general posets, and we can use the other definition of width (maximum antichain) to solve this problem more efficiently. Consider iterating over the points sorted in order by *x*. We maintain a set of pairs (*a*, *b*) which indicates that there is an antichain of size *b* that ends at *y*-value *a* (among all points that have already been processed). Thus for any future point (*x*, *y*) we can insert (*y*, *b* + 1) into the set as long as *y* < *a*.

Notice, however, that if we have two points in the set (*a*, *b*) and (*c*, *d*) such that *c* ≤ *a* and *d* ≤ *b* then the latter is redundant (it is always worse) and we can remove it. In this way we keep the set small and only need to check a single element whenever we insert, which is (*a*, *b*) with minimal *a* such that *a* > *y*. All of this can be done with a C++ set, for example. At the end, the largest antichain we recorded is indeed the maximum one, and we are done.
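The sweep above can be sketched as follows. Rather than storing the (*a*, *b*) pairs explicitly, this version keeps the equivalent information as a vector of subsequence tails, since after sorting by *x* (ties broken by increasing *y*) the maximum antichain is exactly the longest strictly decreasing subsequence of *y*-values (the helper name is mine):

```cpp
#include <vector>
#include <algorithm>
#include <utility>
using namespace std;

// Maximum antichain among points under (x1,y1) <= (x2,y2) iff x1 <= x2
// and y1 <= y2. After sorting by x (y ascending for ties), this is the
// longest strictly decreasing subsequence of y, computed here as the
// longest strictly increasing subsequence of -y in O(n log n).
int maxAntichain(vector<pair<int,int>> pts) {
    sort(pts.begin(), pts.end());
    vector<int> tail;   // tail[k] = smallest -y ending a run of length k+1
    for (auto& p : pts) {
        int v = -p.second;
        auto it = lower_bound(tail.begin(), tail.end(), v);
        if (it == tail.end()) tail.push_back(v);  // extend longest run
        else *it = v;                             // improve an existing run
    }
    return tail.size();
}
```

Breaking *x*-ties by ascending *y* ensures at most one point per *x*-value can be chosen, which is required since points sharing an *x*-coordinate are comparable.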

A recent Google Code Jam problem uses these ideas.

Hey everyone,

There are no CodeForces rounds coming up, so I thought I would set up a training round in the new Gym. The problems are from our local contest which I helped organize last October. We used it to select our team for the ACM-ICPC. There is a large range of problem difficulties, so I am sure everyone will have interesting problems to solve.

It will be held on Saturday, 1/28 at 8am Moscow time (4-hour contest) **in the CodeForces Gym**. I hope you enjoy the problems!

It's been some time since my last blog post, so this is the first one of the new year. Although I didn't participate in Round 102, I worked through several of the problems, and this post is motivated by Problem D. It is about a variation on our favorite algorithmic game, Nim. There is already an editorial for the round, but I intend to use this post as future reference in case I come upon such problems again. If you don't see the reduction from the problem to the Nim variation, please read the editorial.

The normal many-pile version of Nim is played as follows. There are *n* piles of stones, and two players take turns removing stones. On any turn, a player may remove as many stones as desired from a single pile. The player who cannot make a move loses.

A very nice and classical result is that a position *a*_{1}, *a*_{2}, ..., *a*_{n} is a losing position if and only if *a*_{1} ⊕ *a*_{2} ⊕ ... ⊕ *a*_{n} = 0, where ⊕ is the XOR operator. To prove this, we simply realize that from any losing position, if we change the value of one pile, the XOR will no longer be zero (thus a winning position). And from any winning position, we can look at the highest order bit which does not XOR to zero, pick a pile in which that bit is 1, change it to 0, and adjust the rest of the bits in that pile to make the XOR zero.
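In code, the classical criterion is a one-liner (the helper name is mine):

```cpp
#include <vector>
using namespace std;

// A Nim position is losing exactly when the XOR of all pile sizes is zero.
bool nimLosing(const vector<int>& piles) {
    int x = 0;
    for (int p : piles) x ^= p;
    return x == 0;
}
```

For instance, piles (1, 2, 3) form a losing position since 1 ⊕ 2 ⊕ 3 = 0, while (1, 2, 4) do not.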

The variation in Problem D lies in the fact that a player may remove stones from up to *k* different piles on each turn rather than just a single one. As such, the winning and losing positions change. We begin by defining the operation *b*_{i}(*x*) which equals the *i*-th bit of *x* (always 0 or 1). Then define the *p*-XOR of the values *a*_{1}, *a*_{2}, ..., *a*_{n} to be the sequence whose *i*-th digit is

(*b*_{i}(*a*_{1}) + *b*_{i}(*a*_{2}) + ... + *b*_{i}(*a*_{n})) mod *p*.

That is, for each digit in the binary expansion, we add up all the values and take the result modulo *p*. If we choose *p* = 2, for example, we get the sequence of bits resulting from the normal XOR of the values.

It turns out that in this variation of Nim, a position *a*_{1}, *a*_{2}, ..., *a*_{n} is a losing position if and only if the (*k* + 1)-XOR of *a*_{1}, *a*_{2}, ..., *a*_{n} is all zeros. The proof of this result is a little less nice, so I will just outline the idea. From any losing position, consider the highest order bit affected among any of the piles. This bit always goes from 1 to 0 (or else that pile would be getting bigger). Since at most *k* piles are affected, the (*k* + 1)-XOR of that bit can no longer be 0, leaving us in a winning position.

From any winning position, we again look at the highest order bit which does not (*k* + 1)-XOR to 0. Let's say those bits add up to *m* modulo *k* + 1. Choose *m* piles which have that bit set to 1 and change them to 0, as well as zeroing out all of the lower order bits in those piles. Then move on to the next bit of lower order. If we can fix the (*k* + 1)-XOR of that bit using just the piles we have selected so far (by changing some 0's to 1's), then do so and move on. Otherwise, choose as many additional piles as necessary to change, noting that this will be at most *k* - *m*. If we continue this process we can set the (*k* + 1)-XOR of all bits to be 0 while choosing at most *k* piles.
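The resulting losing-position test (this variation is sometimes called Moore's Nim) is straightforward to write down; a sketch, with a helper name of my choosing:

```cpp
#include <vector>
using namespace std;

// Losing-position test for Nim where up to k piles may be reduced per
// turn: for every bit position, sum that bit over all piles modulo k+1;
// the position is losing iff every such sum is zero.
bool nimKLosing(const vector<long long>& piles, int k) {
    for (int bit = 0; bit < 62; bit++) {
        int count = 0;
        for (long long p : piles) count += (p >> bit) & 1;
        if (count % (k + 1) != 0) return false;
    }
    return true;
}
```

With *k* = 1 this reduces to the ordinary XOR criterion, which is a useful consistency check.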

It is useful to understand Nim and its variations very well because game problems often reduce to them. If you know what you're trying to reduce to, it can often be a lot easier to find the reduction. Then knowing how to solve a simple algorithmic game is all that's left!

This blog post is motivated by an interesting problem I solved today: Timus 1846. I knew about segment trees previously for solving range-minimum query, but was not fully aware of the obvious generalizations of this nice data structure. Here I will describe the basics of a segment tree and some applications.

The problem that a segment tree can solve is the following. We are given an array of values *a*[0], *a*[1], ..., *a*[*N* - 1]. Assume without loss of generality that *N* = 2^{n}; we can generally pad the computations accordingly. Also, consider some associative binary function *f*. Examples of *f* include sum, min, max, or gcd (as in the Timus problem). Segment trees allow each of the following two operations in *O*(log *N*) time:

- compute *f*(*a*[*i*], *a*[*i* + 1], ..., *a*[*j*]) for *i* ≤ *j*; and
- update *a*[*x*] = *v*.

So how does a segment tree work? It's quite simple, in fact. We build a single two-dimensional array *t*[0...*N* - 1][0..*n*] where *t*[*x*][*y*] = *f*(*a*[*x*], *a*[*x* + 1], ..., *a*[*x* + 2^{y} - 1]) where *x* is divisible by 2^{y} (i.e. most of the entries are ignored, but it is easiest to represent this way). We can initialize the entries by the following procedure:

- *t*[*x*][0] = *a*[*x*]; and
- *t*[*x*][*y*] = *f*(*t*[*x*][*y* - 1], *t*[*x* + 2^{y - 1}][*y* - 1]) for *y* = 1, ..., *n*,

where the second step uses the associative property of *f*. The logic is that *t*[*x*][*y* - 1] computes *f* over the range [*x*, *x* + 2^{y - 1}) and *t*[*x* + 2^{y - 1}][*y* - 1] computes *f* over the range [*x* + 2^{y - 1}, *x* + 2^{y}) so when we combine them we get the corresponding computation of *f* for *t*[*x*][*y*]. Now we can describe how each of the two operations is implemented (at a high level).

We'll start with computing *f*(*a*[*i*], *a*[*i* + 1], ..., *a*[*j*]). It's pretty easy to see that this amounts to representing the interval [*i*, *j*] as disjoint intervals [*i*_{1}, *j*_{1}];[*i*_{2}, *j*_{2}];...;[*i*_{k}, *j*_{k}] where each of these is of the form [*x*, *x* + 2^{y} - 1] where *x* is divisible by 2^{y} (i.e. it shows up in the table *t* somewhere). Then we can just combine them with *f* (again using the associative property) to get the result. It is easy to show that *k* = *O*(*logN*) as well.

Now for updating *a*[*x*] = *v*, notice that there are only a few terms in the segment tree which get affected by this change. Obviously, *t*[*x*][0] = *v*. Also, for *y* = 1, ..., *n*, only the entry *t*[*x* - (*x*&(2^{y} - 1))][*y*] will update. This is because *x* - (*x*&(2^{y} - 1)) is the only value divisible by 2^{y} (notice the last *y* bits are zero) that also covers *x* at level *y*. Then it suffices to use the same formula as in the initialization to recompute this entry in the tree.

Here's a somewhat crude implementation of a segment tree. Note that the value of IDENTITY should be such that *f*(*IDENTITY*, *x*) = *x*, e.g. 0 for sum, + ∞ for min, - ∞ for max, and 0 for gcd.

```cpp
void init() {
    for (int x = 0; x < N; x++) t[x][0] = a[x];
    for (int y = 1; y <= n; y++)
        for (int x = 0; x < N; x += (1<<y))
            t[x][y] = f(t[x][y-1], t[x+(1<<(y-1))][y-1]);
}

void set(int x, int v) {
    t[x][0] = a[x] = v;
    for (int y = 1; y <= n; y++) {
        int xx = x-(x&((1<<y)-1));
        t[xx][y] = f(t[xx][y-1], t[xx+(1<<(y-1))][y-1]);
    }
}

int get(int i, int j) {
    int res = IDENTITY, h = 0;
    j++;
    while (i+(1<<h) <= j) {
        while ((i&((1<<(h+1))-1)) == 0 && i+(1<<(h+1)) <= j) h++;
        res = f(res, t[i][h]);
        i += (1<<h);
    }
    while (i < j) {
        while (i+(1<<h) > j) h--;
        res = f(res, t[i][h]);
        i += (1<<h);
    }
    return res;
}
```

One possible application is letting *f* just be the sum of all arguments. This leads to a data structure that allows you to compute sums of ranges and do updates, but is slightly less efficient than a Binary Indexed Tree. The most well-known application seems to be the range-minimum query problem, where *f* is the minimum of all arguments. This also allows you to compute the least-common ancestor of nodes in a tree.

Lastly, in Timus 1846, we can use it with *f* as the GCD function. Every time we remove a number we should update the corresponding entry to zero (effectively removing it from the GCD computation). Then the answer at each step is *f*(*a*[0], *a*[1], ..., *a*[*k*]) which can be computed directly from the segment tree.

Today I have become one :D

This was a pretty good SRM for me, tantalizingly close to red now. I'll talk about the easy (decent) and medium (really cool), but I didn't have time to take a look at the hard.

We observe immediately that if *n* is composite, it takes only 1 move, so all of those cases are done. Then when *n* is prime, we can just use 1 and *n* - 1 (which is even), so it takes 2 moves and we're done, right? Almost! For *n* = 2, the answer is indeed 2 because 1 is not prime, but for *n* = 3, the answer is 3 because 2 is prime. Those are the only two interesting cases, so now we're really done.
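The full case analysis fits in a few lines. This is a sketch under my reading of the summary above (split *n* into the fewest parts, none of which is prime); the function names are mine:

```cpp
// Trial-division primality test; fine for the bounds discussed here.
bool isPrime(long long n) {
    if (n < 2) return false;
    for (long long d = 2; d * d <= n; d++)
        if (n % d == 0) return false;
    return true;
}

int minMoves(long long n) {
    if (!isPrime(n)) return 1;  // n itself is a valid single non-prime part
    if (n == 2) return 2;       // 1 + 1
    if (n == 3) return 3;       // 1 + 1 + 1, since 2 is prime
    return 2;                   // 1 + (n - 1), and n - 1 is even and >= 4
}
```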

This is a very nice problem in my opinion because it's not obvious where to start, but once you figure it out it's quite simple. First, let's consider the values in *C*. If they are all positive or all negative, the answer is - 1. Otherwise, there is a finite sequence because for values + *a* and - *b* we can never have a sequence of length *ab* as it would have to be both positive and negative.

Now, we want to find the longest sequence *a*_{1}, *a*_{2}, ..., *a*_{n} satisfying the constraints of the problem. First, we'll note that if *N* is the maximum value of *n* that works, then *n* ≤ *N* all work and *n* > *N* all do not work, so we can binary search on the answer. I don't have a tight bound on the maximum but in the contest I just used 1000000 which is definitely high enough due to the argument above (I'm curious as to a proof of a better one -- it definitely exists because the algorithm does not run in time if the answer is too high).

The way the constraints are specified makes them tricky to work with, so instead we'll consider *p*[*i*] = *a*_{1} + *a*_{2} + ... + *a*_{i} for *i* = 0, 1, ..., *n* in which case the constraints can be written as *p*[*i* + *C*[*j*]] - *p*[*i*] > 0 (note this works for both positive and negative *C*[*j*]). This induces a directed graph of constraints on *n* + 1 nodes where an edge from *u* to *v* means *p*[*u*] < *p*[*v*], which reduces the problem to verifying whether this graph of constraints can be satisfied. This is then equivalent to checking for cycles. We can do this in *O*(*n* * |*C*|) time by running a topological sort; alternatively, notice that if there is a cycle then there must be one through 0, so we can just do a BFS from 0. Thus the total running time is *O*(*N* * *logN* * |*C*|) where *N* is the maximum possible answer.
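A sketch of the feasibility check for a fixed *n*, using the cycle-detection formulation above with an iterative three-color DFS (the function name is mine; the binary search over *n* would wrap around this):

```cpp
#include <vector>
#include <utility>
using namespace std;

// Nodes 0..n are the prefix sums; an edge u -> v encodes p[u] < p[v].
// A sequence of length n exists iff the constraint graph is acyclic.
bool feasible(int n, const vector<int>& C) {
    vector<vector<int>> adj(n + 1);
    for (int i = 0; i <= n; i++)
        for (int c : C) {
            int j = i + c;
            if (0 <= j && j <= n) adj[i].push_back(j);  // p[i] < p[j]
        }
    vector<int> color(n + 1, 0);  // 0 = unseen, 1 = on stack, 2 = done
    for (int s = 0; s <= n; s++) {
        if (color[s]) continue;
        vector<pair<int,int>> stk = {{s, 0}};  // (node, next edge index)
        color[s] = 1;
        while (!stk.empty()) {
            auto& [u, idx] = stk.back();
            if (idx < (int)adj[u].size()) {
                int v = adj[u][idx++];
                if (color[v] == 1) return false;  // back edge: cycle
                if (color[v] == 0) { color[v] = 1; stk.push_back({v, 0}); }
            } else {
                color[u] = 2;
                stk.pop_back();
            }
        }
    }
    return true;
}
```

For example, *C* = {+2, -1} (pairs sum positive, single elements negative) is infeasible for *n* = 2 but trivially feasible for *n* = 1.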

Today's contest had some pretty interesting problems; I particularly liked Problem C and Problem D. I'm going to describe my solution to D which I think is quite nice (and results in simple code). I don't doubt that there are even easier solutions to the problem, so please enlighten me if you have one :).

Let us denote the sequence by *a*_{1}, ..., *a*_{n} and assume it has been sorted already. We can note immediately that there are some easy cases we can rule out. If *n* is odd, by a parity argument it is impossible. Moreover, let *m* = *a*_{n} - *a*_{1}, i.e. the range of the values. If *m* = 0 it is impossible since *n* ≥ 3, and if *m* > *n* / 2 it is impossible since we cannot get from *a*_{1} to *a*_{n} and back going around the circle. Now that these cases are out of the way we move on to the actual algorithm.

It's natural to transform the input we're given into a form that's easier to work with, i.e. a histogram. As such, we define *c*_{i} to be the number of values *a*_{j} with *a*_{j} - *a*_{1} = *i* for *i* = 0, 1, ..., *m*. Now we replace each *a*_{i} with *a*_{i} - *a*_{1} so that they are all in the range [0, *m*].

A nice observation we can make about any valid sequence is the following. Consider the positions of any 0 and any *m* in the circle. There are two paths from 0 to *m*. We observe that if any valid sequence exists, then there exists one for which, along each of these paths, the value never decreases twice in a row (that is, we can think of each path as "increasing" with some oscillation). Suppose this does happen in our sequence, i.e. we have something like

0, 1, ..., *i* - 1, *i*, *i* - 1, ..., *j* + 1, *j*, *j* + 1, ..., *m* - 1, *m*

where *j* < *i* - 1 so *i*, *i* - 1, ..., *j* + 1, *j* is the decreasing sequence. Then we note that *j* + 1 appeared somewhere between 0 and *i*, so we can take the values *j*, *j* + 1 and move them directly after the *j* + 1 that was between 0 and *i* (it is easy to see that the sequence is still valid). Repeating this whenever we have a decrease of two or more gives us paths that never decrease twice in a row.

Now with our transformation and observation, we can come up with the method of checking whether or not the histogram can produce a valid sequence. To do so, first place a 0 on the circle. Then we have *L* = *c*_{0} - 1 zeros left to place. Note, however, that each zero must be adjacent to a one. So in fact there must be at least *L* + 2 ones to "hide" all of the zeros from the rest of the numbers (one on each side and one for each additional zero). If there are fewer than that many, it is impossible. After placing the remaining zeros and ones arbitrarily (but alternating) on each side of the initial 0, we see that there are *L* = *c*_{1} - (*L* + 2) ones remaining. By the same argument, we know how many twos there must be (again, *L* + 2). We can continue like this until we get to *m*. If there are *L* remaining values of *m* - 1, we need *c*_{m} = *L* + 1 to complete the circle (one to be the "end" and one for each additional *m* - 1). If this is true, then it is possible and otherwise it is not.

Note that the above placement strategy works because of the observation we made (we know we can use the small numbers greedily) and the fact that it doesn't matter how many of each number we put in each path as long as they end in the same value.

```cpp
int a[100100], c[100100];

int main() {
    int n;
    cin >> n;
    for (int i = 0; i < n; i++) cin >> a[i];
    sort(a, a+n);
    int m = a[n-1]-a[0];
    if (n % 2 == 1 || m > n/2 || m == 0) {
        cout << "NO" << endl;
        return 0;
    }
    memset(c, 0, sizeof(c));
    for (int i = 0; i < n; i++) c[a[i]-a[0]]++;
    bool poss = true;
    int L = c[0]-1;
    for (int i = 1; i < m; i++) {
        if (c[i] < L+2) { poss = false; break; }
        c[i] -= (L+2);
        L = c[i];
    }
    poss = poss && (c[m] == L+1);
    cout << (poss ? "YES" : "NO") << endl;
    return 0;
}
```

Tutorial of Codeforces Beta Round #94 (Div. 2 Only)

In my quest to regain my red color on Topcoder this morning, I failed once again in SRM 523. I'll present a quick summary of the solutions here; the problem set was actually quite easy, but with some tricks and edge cases.

- *dp*[*i*][0] = 1, and for *j* ≥ 1:
- *dp*[*i*][*j*] = *dp*[*i*][*j* - 1] (position *j* is empty);
- *dp*[*i*][*j*] = *dp*[*i*][*j*] + *ways*[*j*] * *dp*[*i* - 1][*j*];
- *dp*[*i*][*j*] = *dp*[*i*][*j*] + *dp*[*i*][*j* - *L* - 1] * *ways*[*L*] * *dp*[*i* - 1][*L*] for 1 ≤ *L* ≤ *j* - 1.

Consider two paths starting from the same cell which see sets of letters

This is the first post in my Codeforces blog! I plan to use this as a place where I can write down new algorithms I learned, problems that I found interesting, or stupid mistakes I made during contests.

Today, the topic will be the Z Algorithm, which I learned as a result of failing to solve Problem B of Beta Round 93 (http://codeforces.com/contest/126/problem/B). There are some other solutions like binary search + hashing, but this one is quite nice. Anyway, first, a description of the algorithm and why it works; it is simple and makes a lot of sense (as all good algorithms are).

Given a string *S* of length *n*, the Z Algorithm produces an array *Z* where *Z*[*i*] is the length of the longest substring starting from *S*[*i*] which is also a prefix of *S*, i.e. the maximum *k* such that *S*[*j*] = *S*[*i* + *j*] for all 0 ≤ *j* < *k*. Note that *Z*[*i*] = 0 means that *S*[0] ≠ *S*[*i*]. For easier terminology, we will refer to substrings which are also a prefix as prefix-substrings.

The algorithm relies on a single, crucial invariant. As we iterate over the letters in the string (index *i* from 1 to *n* - 1), we maintain an interval [*L*, *R*] which is the interval with maximum *R* such that 1 ≤ *L* ≤ *i* ≤ *R* and *S*[*L*...*R*] is a prefix-substring (if no such interval exists, just let *L* = *R* = - 1). For *i* = 1, we can simply compute *L* and *R* by comparing *S*[0...] to *S*[1...]. Moreover, we also get *Z*[1] during this.

Now suppose we have the correct interval [*L*, *R*] for *i* - 1 and all of the *Z* values up to *i* - 1. We will compute *Z*[*i*] and the new [*L*, *R*] by the following steps:

- If
*i*>*R*, then there does not exist a prefix-substring of*S*that starts before*i*and ends at or after*i*. If such a substring existed, [*L*,*R*] would have been the interval for that substring rather than its current value. Thus we "reset" and compute a new [*L*,*R*] by comparing*S*[0...] to*S*[*i*...] and get*Z*[*i*] at the same time (*Z*[*i*] =*R*-*L*+ 1). - Otherwise,
*i*≤*R*, so the current [*L*,*R*] extends at least to*i*. Let*k*=*i*-*L*. We know that*Z*[*i*] ≥*min*(*Z*[*k*],*R*-*i*+ 1) because*S*[*i*...] matches*S*[*k*...] for at least*R*-*i*+ 1 characters (they are in the [*L*,*R*] interval which we know to be a prefix-substring). Now we have a few more cases to consider. - If
*Z*[*k*] <*R*-*i*+ 1, then there is no longer prefix-substring starting at*S*[*i*] (or else*Z*[*k*] would be larger), meaning*Z*[*i*] =*Z*[*k*] and [*L*,*R*] stays the same. The latter is true because [*L*,*R*] only changes if there is a prefix-substring starting at*S*[*i*] that extends beyond*R*, which we know is not the case here. - If
*Z*[*k*] ≥*R*-*i*+ 1, then it is possible for*S*[*i*...] to match*S*[0...] for more than*R*-*i*+ 1 characters (i.e. past position*R*). Thus we need to update [*L*,*R*] by setting*L*=*i*and matching from*S*[*R*+ 1] forward to obtain the new*R*. Again, we get*Z*[*i*] during this.

The process computes all of the *Z* values in a single pass over the string, so we're done. Correctness is inherent in the algorithm and is pretty intuitively clear.

We claim that the algorithm runs in *O*(*n*) time, and the argument is straightforward. We never compare characters at positions less than *R*, and every time we match a character *R* increases by one, so there are at most *n* comparisons there. Lastly, we can only mismatch once for each *i* (it causes *R* to stop increasing), so that's another at most *n* comparisons, giving *O*(*n*) total.

Simple and short. Note that the optimization *L* = *R* = *i* is used when *S*[0] ≠ *S*[*i*] (it doesn't affect the algorithm since at the next iteration *i* > *R* regardless).

```cpp
int L = 0, R = 0;
for (int i = 1; i < n; i++) {
    if (i > R) {
        L = R = i;
        while (R < n && s[R-L] == s[R]) R++;
        z[i] = R-L;
        R--;
    } else {
        int k = i-L;
        if (z[k] < R-i+1) z[i] = z[k];
        else {
            L = i;
            while (R < n && s[R-L] == s[R]) R++;
            z[i] = R-L;
            R--;
        }
    }
}
```

One application of the Z Algorithm is for the standard string matching problem of finding matches for a pattern *T* of length *m* in a string *S* of length *n*. We can do this in *O*(*n* + *m*) time by using the Z Algorithm on the string *T* Φ *S* (that is, concatenating *T*, Φ, and *S*) where Φ is a character that matches nothing. The indices *i* with *Z*[*i*] = *m* correspond to matches of *T* in *S*.
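Putting this into a self-contained form, here is a sketch of the matching procedure, using `'\x01'` as a stand-in for Φ (assuming it occurs in neither string; the function names are mine):

```cpp
#include <string>
#include <vector>
using namespace std;

// Standard Z array: z[i] = length of the longest prefix of s that also
// begins at s[i].
vector<int> zArray(const string& s) {
    int n = s.size();
    vector<int> z(n, 0);
    int L = 0, R = 0;
    for (int i = 1; i < n; i++) {
        if (i > R) {
            L = R = i;
            while (R < n && s[R-L] == s[R]) R++;
            z[i] = R-L; R--;
        } else {
            int k = i-L;
            if (z[k] < R-i+1) z[i] = z[k];
            else {
                L = i;
                while (R < n && s[R-L] == s[R]) R++;
                z[i] = R-L; R--;
            }
        }
    }
    return z;
}

// All starting indices in S where pattern T occurs, via Z on T + sep + S.
vector<int> findMatches(const string& T, const string& S) {
    string combined = T + '\x01' + S;
    vector<int> z = zArray(combined), res;
    int m = T.size();
    for (int i = m + 1; i < (int)combined.size(); i++)
        if (z[i] == m) res.push_back(i - m - 1);  // position within S
    return res;
}
```

The separator guarantees no Z value in the concatenation exceeds *m*, so *Z*[*i*] = *m* characterizes full matches.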

Lastly, to solve Problem B of Beta Round 93, we simply compute *Z* for the given string *S*, then iterate *i* from 1 to *n* - 1. If *Z*[*i*] = *n* - *i* then we know the suffix from *S*[*i*] is a prefix, and if the largest *Z* value we've seen so far is at least *n* - *i*, then we know some substring inside also matches that prefix. That gives the result.

```cpp
int maxz = 0, res = 0;
for (int i = 1; i < n; i++) {
    if (z[i] == n-i && maxz >= n-i) {
        res = n-i;
        break;
    }
    maxz = max(maxz, z[i]);
}
```
