### snacache's blog

By snacache, history, 3 years ago,

Does somebody know where the next World Finals will be held? Why have they taken so long to announce it? As far as I know, the host of the next competition is always announced formally at the closing ceremony, but this wasn't the case at Beijing 2018.

Has this happened before? I am just asking out of curiosity :)

Update

There hasn't been an official announcement, but Bill Poucher said (kinda) that it will be held in Porto!

You can read his "announcement" here

Second update

ICPC has made an official announcement on ICPCNews! You can see the Facebook post here

Yay!

• +94

By snacache, history, 3 years ago,

The objective of this post is NOT understanding FFT or Karatsuba, or any other algorithm that performs fast convolutions; it's about understanding one of their uses

Hi Codeforces community! I wanted to write this blog entry to help coders who would like to see a nice example of convolutions used in a problem which, at first sight, has little to no relation to polynomial multiplication (of course it has one, it's a convolution after all)

The problem is as follows: You are given two strings S and P, with lengths n and m respectively, where m ≤ n

Among all substrings T of S with length m, we have to find the one that maximizes the number of positions with the same character, i.e. positions i where T[i] = P[i]

For the sake of simplicity, let us assume that our strings consist only of the letters a and b. For example, if S = baabab and P = bba, let's calculate the "similarity" between P and each substring of S of length 3.

s(baa, bba) = 2

s(aab, bba) = 0

s(aba, bba) = 2

s(bab, bba) = 1

We can see that there are two substrings which maximize the similarity (any answer will be fine)

A naive approach has O(n·m) time complexity and uses O(n + m) memory
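As a point of reference, the naive approach might be sketched like this (a minimal sketch; the function name and the 0-indexed return value are my own choices):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Naive O(n*m) scan: for every start position j, count matching
// characters between s[j..j+m-1] and p, and keep the best one.
int bestMatchNaive(const string& s, const string& p) {
    int n = s.size(), m = p.size();
    int best = 0, bestCount = -1;
    for (int j = 0; j + m <= n; j++) {
        int cnt = 0;
        for (int i = 0; i < m; i++)
            cnt += (s[j + i] == p[i]);
        if (cnt > bestCount) { bestCount = cnt; best = j; }
    }
    return best;   // start position of the most similar substring
}
```

On the example above, bestMatchNaive("baabab", "bba") returns 0 (the substring baa, with similarity 2).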

This should be OK if 1 ≤ n, m ≤ 1000. What about 1 ≤ n, m ≤ 10^5? We have to find a faster way to apply the similarity function to every substring T of S

First, let's restate the formula above in a more mathematical way. If we map the characters {a, b} to {0, 1}, we can apply the following formula:

s1(j) = Σ_{i = 0}^{m - 1} S[j + i]·P[i]

This will give us the number of 1's (originally, b's) that match.

Let Sc and Pc be the complements of S and P (0 becomes 1, and vice versa). To compute the number of 0's that match, we apply:

s0(j) = Σ_{i = 0}^{m - 1} Sc[j + i]·Pc[i]

Now what? This doesn't improve the complexity at all

Let's remember how the convolution f of two discrete functions g and h is defined:

f(x) = Σ_i g(i)·h(x - i)

Now we're talking! This looks pretty similar to the formulas previously explained.

Next we will try to turn our formulas into a single convolution. First we will view our arrays S and P as two functions g and h whose domain is [0, n - 1] and whose range is {0, 1}.

But if we simply take g = S and h = P, the formula is incorrect: this convolution, for each x, applies the similarity formula but with P reversed!

This is our first issue, which we can easily solve by reversing P ourselves. Let's call Pr the reversed version of P, so that Pr(i) = P(m - 1 - i). We can see that

f(x) = Σ_i S(i)·Pr(x - i)

is equivalent to

f(x) = Σ_i S(i)·P(m - 1 - x + i)

Our second issue is that m ≤ n. We can easily solve this issue (again) by adding some trailing zeroes to Pr, which is equivalent to adding leading zeroes to P, until m = n

But, what is f(x) exactly, and how can this help us to improve the time complexity?

We saw that f(x) = Σ_{i = 0}^{x} S(i)·Pr(x - i), which is the same as taking a prefix of S and (this is the tricky part) a suffix of P, each of length x + 1

If we take x = m - 1, we can see that this applies the similarity function to the whole pattern P and the first substring of S, which starts at 0. If we take x > m - 1, then we apply the function to P and the (x - m + 1)-th substring of S. And if we take x < m - 1, we apply the function to a substring that goes out of the bounds of S.

Let's check our example S = baabab and P = bba. First, let's map each character: S = 100101 and P = 110.

Then Pr = 011, and let's add some trailing zeroes: Pr = 011000.

Let's calculate the convolution f

f(0) = 0

f(1) = 1

f(2) = 1

f(3) = 0

f(4) = 1

f(5) = 1

Great! f computes the number of 1's that match. What about the number of 0's? Let's apply the same procedure to Sc and Pc

Sc = 011010 and Pc = 001

Prc = 100000

fc(0) = 0

fc(1) = 1

fc(2) = 1

fc(3) = 0

fc(4) = 1

fc(5) = 0

Fine! Now, the answer is max(f(i) + fc(i)) where i is in [m - 1, n - 1]

With this approach, we can solve the problem with O(n log n) time complexity, using an algorithm like FFT to perform the convolution, or maybe Karatsuba's algorithm
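Putting the whole thing together for the two-letter alphabet, a sketch could look like this (the naive convolve below stands in for an FFT-based multiplication, and the function names are my own):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Naive convolution f(x) = sum_i g(i) * h(x - i); in a real solution
// this is the part you would replace with FFT (or Karatsuba).
vector<long long> convolve(const vector<long long>& g, const vector<long long>& h) {
    vector<long long> f(g.size() + h.size() - 1, 0);
    for (size_t i = 0; i < g.size(); i++)
        for (size_t j = 0; j < h.size(); j++)
            f[i + j] += g[i] * h[j];
    return f;
}

// Start position (0-indexed) of the substring of s of length |p|
// with the highest similarity to p, for the alphabet {a, b}.
int bestMatchStart(const string& s, const string& p) {
    int n = s.size(), m = p.size();
    vector<long long> S(n), Sc(n), Pr(n, 0), Prc(n, 0);
    for (int i = 0; i < n; i++) {
        S[i] = (s[i] == 'b');      // map a -> 0, b -> 1
        Sc[i] = 1 - S[i];          // complement of S
    }
    for (int i = 0; i < m; i++) {  // reversed p, padded with zeroes to length n
        Pr[i] = (p[m - 1 - i] == 'b');
        Prc[i] = 1 - Pr[i];        // padding stays 0 in both arrays
    }
    vector<long long> f = convolve(S, Pr);    // matching 1's (b's)
    vector<long long> fc = convolve(Sc, Prc); // matching 0's (a's)
    int best = m - 1;
    for (int x = m - 1; x < n; x++)
        if (f[x] + fc[x] > f[best] + fc[best]) best = x;
    return best - (m - 1);         // x = m - 1 corresponds to start 0
}
```

On the worked example, bestMatchStart("baabab", "bba") returns 0, which matches the similarity table at the top.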

And what if the alphabet size is not 2?

For each character in the alphabet, we apply the convolution. Let f_{a_i} be the convolution between S and P where S[j] = 1 if S[j] == a_i, else S[j] = 0 (the same applies to P), for every character a_i in the alphabet a

The answer is max_j Σ_{a_i ∈ a} f_{a_i}(j), with j in [m - 1, n - 1]

This solution has O(|a|·n log n) time complexity, where |a| is the size of the alphabet
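A sketch of this general-alphabet version (again with a naive convolution standing in for FFT; the names are mine): one 0/1 convolution per distinct character, summed position by position.

```cpp
#include <bits/stdc++.h>
using namespace std;

// f(x) = sum_i g(i) * h(x - i); a stand-in for an FFT-based convolution.
vector<long long> convolve(const vector<long long>& g, const vector<long long>& h) {
    vector<long long> f(g.size() + h.size() - 1, 0);
    for (size_t i = 0; i < g.size(); i++)
        for (size_t j = 0; j < h.size(); j++)
            f[i + j] += g[i] * h[j];
    return f;
}

// One convolution per distinct character of s: match[x] accumulates,
// over all characters c, the aligned positions where both strings have c.
int bestMatchStartAnyAlphabet(const string& s, const string& p) {
    int n = s.size(), m = p.size();
    vector<long long> match(n, 0);
    set<char> alphabet(s.begin(), s.end());
    for (char c : alphabet) {
        vector<long long> S(n), Pr(n, 0);
        for (int i = 0; i < n; i++) S[i] = (s[i] == c);
        for (int i = 0; i < m; i++) Pr[i] = (p[m - 1 - i] == c);
        vector<long long> f = convolve(S, Pr);
        for (int x = m - 1; x < n; x++) match[x] += f[x];
    }
    int best = m - 1;
    for (int x = m - 1; x < n; x++)
        if (match[x] > match[best]) best = x;
    return best - (m - 1);   // convert index x back to a start position
}
```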

And what now?

Here are some problems which can be solved using this approach

MAXMATCH

K inversions

Hope you like this post, and that it helps you solve some convolution problems!

• +59

By snacache, 3 years ago,

Hi Codeforces community!

I write on behalf of the team "Super Pollos", a Venezuelan team who made it to third place at the ACM ICPC Latin American Regional Contest — South America North region. With this placement, the team earned its first qualification to the ACM ICPC World Finals, which most of us know will be held in the city of Beijing!

Recently we have been facing a very stressful situation. We are really short of money and time, and as the date of the competition gets closer, the ticket prices get higher and higher!

We don't have American visas, so our trip cannot make stops in the USA, and this makes the air tickets even more expensive!

We need at least three tickets to Beijing, one for each contestant, obviously! But we wish to have 4 tickets, in order to make this trip possible for our coach too!

We have been looking at air ticket prices, and a Caracas–Beijing round-trip ticket that doesn't make any stop in the USA (or in a country which requires a transit visa) has an average price of $3000. In order to make this trip, our cost estimate is $10000-$12000, taking into consideration that, obviously, we are not able to buy the tickets right now (as of January 29th), and the costs increase as the days pass. For that matter, we created a crowdfunding campaign, which, as of January 29th, has raised $1170.

NOTE

The funding goal is $20000, which is an over-estimation. The campaign states that we need five tickets for the Venezuelan delegation, but in the worst case we NEED three tickets to participate in the competition.

Please, if you want to contribute to our cause, you can donate through that link, and you can also share it on your social networks. That would help us a lot!

Thank you very much. If we raise more money than needed for the trip, we will use it to contribute to the improvement of competitive programming in our country (Venezuela): doing training camps, teaching CP in order to prepare for the IOI and ACM, etc.!

• +34

By snacache, history, 4 years ago,

Hello Codeforces Community!

I have been solving some interesting bitmask problems. In some problems, we have to generate all possible subsets of a set. Obviously, if our set is a bitmask with all its bits on, we can iterate from 0 to 2^n - 1

However, if we need to generate all the subsets from another subset (subset-ception), we can do it with the following procedure:

for(subset = set; subset > 0; subset = (subset - 1)&set)

(The code above ignores the empty subset)

Generating all the subsets of all possible subsets has O(3^n) complexity. I can intuitively confirm this, because we can represent each (set, subset) pair as a string with 3 possible values: 1 and 0 at position i mean that object i is or isn't in the subset, respectively, while 2 means that object i is not present in the set at all.

However, mathematically:

Σ_{k = 0}^{n} C(n, k)·2^k = 3^n

Amazing! I cannot find a way to prove this though. Do you know how to prove this interesting property?
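As a quick numerical sanity check of the 3^n count (the wrapper name is mine), the loop above can be used to count every (set, subset) pair:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Counts all (set, subset) pairs for an n-element universe using the
// subset = (subset - 1) & mask trick; the total should equal 3^n.
long long countPairs(int n) {
    long long total = 0;
    for (int mask = 0; mask < (1 << n); mask++) {
        total++;                                  // the empty subset
        for (int subset = mask; subset > 0; subset = (subset - 1) & mask)
            total++;                              // each nonempty subset
    }
    return total;
}
```

For example, countPairs(3) gives 27 = 3^3.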

Update: Solved, thanks! It was very simple :(

• +47

By snacache, history, 4 years ago,

Hi everyone!

Suppose we have a weighted graph with N vertices, and two vertices S and T are given. We have to remove some edges from the graph in order to separate the given vertices (after removing the edges, there should be no path from S to T). The cost is the sum of the removed edges' weights. Find the minimum cost to achieve this.

In case of a tie (more than one way to remove edges obtaining the minimum cost), remove the minimum number of edges.

I know that the minimum cost of the cut can be found via max flow, but how can we handle the ties? Sure, we can find the cut-set, but the cut-set's cardinality isn't always the answer.

Is there a good way to handle these cases?

Thanks!

• +16

By snacache, 4 years ago,

Hi everyone! I was recently doing some FFT-related problems, and I found a very interesting one.

It's from the CERC 2015 (problem F)

This problem involves many things from number theory, and one of them is doing some convolutions mod 10^6 + 3. After several WA verdicts, I found my error: the FFT produces many overflows after doing the convolution.

I know that FFT was one of the expected solutions, but how can these issues be handled? Is there a way to apply FFT with modular arithmetic?

Thanks!

• +26

By snacache, history, 5 years ago,

Hi everyone! I was training with some regional contests, and today we (my team) picked the Greater New York Regional Contest 2015.

We managed to solve all but one problem. Here is the link

The statement lacks constraints, but the official test data contains cases with at most 600 nodes.

We tried a brute force with no success (obviously, since there could be up to 2^n possible states).

Can you help me? I read the official solution but I really couldn't understand it, however it doesn't seem very complicated. Maybe I'm missing something.

If you want to check it out, here is the official solution

Thanks!

• 0

By snacache, history, 5 years ago,

Hi everybody!

I recently was in a programming contest. There were 20 tasks, and one of them was as follows:

To summarize: you have C chairs, and P pairs of chairs are given. Each pair P(i, j) denotes the probability of changing from chair i to chair j. You are sitting in one of these chairs, and every second you have to move to another one.

Then you have Q queries, each with two integers Ci, s, where Ci is the chair you are initially seated in, and you have to tell the chair you are most likely to be seated in after s seconds (after s changes).

1 ≤ Ci ≤ 50

1 ≤ s ≤ 100000000

At first, I thought this was a simple problem, solvable using matrix exponentiation: first create a matrix M, where M_{ij} = P(i, j). The answer should be in row Ci of the resulting matrix M^s. But the problem has the following important requirement:

You have to output the probability as an irreducible fraction. Worst of all, you have to output ONLY the very last digits of the numerator and denominator...

For example: 15/23 -> 5/3, 19/33 -> 9/3

I tried using Java's BigInteger, doing reductions with gcd, but it was way too slow.

This is a very interesting requirement (yet a very tough one too, at least for me). Any help will be very appreciated :)

(The contest has already ended, I'm not cheating, and I struggled a lot of hours with this task :( )
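For completeness, the matrix-exponentiation skeleton I had in mind can be sketched like this (a sketch only: doubles stand in for the exact fractions, which are the actual difficulty of the task):

```cpp
#include <bits/stdc++.h>
using namespace std;
using Matrix = vector<vector<double>>;

// Plain O(n^3) square-matrix product.
Matrix multiply(const Matrix& A, const Matrix& B) {
    int n = A.size();
    Matrix C(n, vector<double>(n, 0.0));
    for (int i = 0; i < n; i++)
        for (int k = 0; k < n; k++)
            for (int j = 0; j < n; j++)
                C[i][j] += A[i][k] * B[k][j];
    return C;
}

// M^s by binary exponentiation: O(n^3 log s) overall, so s up to 1e8
// is no problem for a 50x50 transition matrix.
Matrix matpow(Matrix M, long long s) {
    int n = M.size();
    Matrix R(n, vector<double>(n, 0.0));
    for (int i = 0; i < n; i++) R[i][i] = 1.0;   // identity matrix
    while (s > 0) {
        if (s & 1) R = multiply(R, M);
        M = multiply(M, M);
        s >>= 1;
    }
    return R;
}
```

Row Ci of matpow(M, s) then holds the distribution after s moves; the open question is how to do this with exact, reduced fractions when only the last digits are required.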

• +28

By snacache, history, 6 years ago,

Hi everybody!

I was solving some ACM regional problems from Live Archive and I got stuck really badly with this task:

https://icpcarchive.ecs.baylor.edu/index.php?option=com_onlinejudge&Itemid=8&category=519&page=show_problem&problem=4002

There is a family of dragons represented as a rooted tree. Every dragon has a hatchyear and a deathyear, and they can have children. So, the task is to find for every dragon the deepest descendant that lives during its lifetime (between its hatchyear and deathyear).

I tried some things without any success, always receiving TLE or WA verdict.

However, I made a naive solution doing a DFS from every dragon, looking for the deepest descendant, and much to my surprise it was accepted O.o. The number of dragons can be as large as 50000, so in the worst case I'm pretty sure that N DFS traversals shouldn't work (but they did)

Although I could "solve" this task, I really want to know if there is a solution that actually makes sense with such constraints. I thought about some queries with segment trees or some interval data structure, but nothing really useful came to my mind :(

Thanks!

• +5

By snacache, 6 years ago,

Hi everybody! I have been training these last months and I have learned a lot of things.

However, last weekend I came across a problem that goes like this:

There are N positions and N tokens (numbered from 1 to N, all distinct). There is a machine that moves the token in position i to position P_i. Initially the tokens are in the order 1, 2, 3, ..., N. The machine has a button that moves the tokens to their new positions. After pushing the button for the first time, compute the number of times the button must be pushed in order to get all the tokens back to their initial positions (modulo 10^9 + 7).

For example, N = 4 and the P_i's are: 4 1 2 3. The answer is 4:

Initially — 1 2 3 4

Push 1: — 2 3 4 1 (Token 1 goes to position 4, Token 2 goes to position 1, ...)

Push 2: — 3 4 1 2

Push 3: — 4 1 2 3

Push 4: — 1 2 3 4

The maximum value of N is 10^6

I tried finding the disjoint cycles and computing the LCM of their lengths, but got TLE. Because of the modulo 10^9 + 7, the way I compute the LCM is by obtaining the prime factorization of the lengths, keeping track of the maximum power of each prime factor, and multiplying them together.

After receiving lots of Time Limit Exceeded verdicts I looked for help on the internet and found that this is a common abstract algebra problem, called the "order of a permutation": the answer is the LCM of the lengths of the disjoint cycles :(

I don't know if my method is too slow, or if there are many test cases and the time limit is extremely strict... To compute the cycle lengths I use a simple ad hoc algorithm: iterate over all the P_i's, and if some position is not visited, follow its cycle until reaching a position already visited. I used fast input to handle the large test data, but it didn't work...

Thanks! :)

UPDATE: Here is my code

UPDATE 2: Finally accepted! Doing the factorization with smallest prime factors and handling the modular arithmetic better were the key to passing the time limit! Thanks all, especially user Jacob for pointing that out :)
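Based on the updates, a sketch of the accepted idea might look like this (a smallest-prime-factor sieve factorizes each cycle length, the maximum exponent per prime is kept, and the product is taken mod 10^9 + 7; the function name is mine):

```cpp
#include <bits/stdc++.h>
using namespace std;
const long long MOD = 1000000007LL;

// LCM of the cycle lengths of permutation p (0-indexed), modulo MOD.
long long orderOfPermutation(const vector<int>& p) {
    int n = p.size();
    // Smallest-prime-factor sieve up to n (cycle lengths never exceed n).
    vector<int> spf(n + 1, 0);
    for (int i = 2; i <= n; i++)
        if (spf[i] == 0)
            for (int j = i; j <= n; j += i)
                if (spf[j] == 0) spf[j] = i;
    vector<int> maxPow(n + 1, 0);
    vector<bool> visited(n, false);
    for (int i = 0; i < n; i++) {
        if (visited[i]) continue;
        int len = 0;                          // walk one disjoint cycle
        for (int j = i; !visited[j]; j = p[j]) { visited[j] = true; len++; }
        while (len > 1) {                     // factorize the cycle length
            int q = spf[len], e = 0;
            while (len % q == 0) { len /= q; e++; }
            maxPow[q] = max(maxPow[q], e);    // keep max exponent per prime
        }
    }
    long long ans = 1;                        // multiply the LCM mod MOD
    for (int q = 2; q <= n; q++)
        for (int e = 0; e < maxPow[q]; e++)
            ans = ans * q % MOD;
    return ans;
}
```

On the example from the post (P = 4 1 2 3, i.e. the 0-indexed permutation {3, 0, 1, 2}), this returns 4, since there is a single cycle of length 4.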