diego_v1's blog

By diego_v1, 9 years ago, In English

Problem 1 — Guard Mark:

The "2 ≤ N ≤ 20" constraint immediately suggests a DP + Bitmask solution. Let DP[m] be the maximum safety factor we can achieve with mask m. The transition will be as follows: DP[m + 2b] = min(Str[b], DP[m] - W[b]), where Str[b] and W[b] are the strength and weight of cow b, respectively, and b is a bit that's off in the current mask m. Initially, DP[0] = INF and DP[i] =  - INF for all i in the range [1, 2N - 1]. Our answer will be the maximum among all DP[m] such that the sum of heights of all the cows included in mask m is  ≥ H.

Problem 2 — Marathon:

Let D[i] be the distance to go from checkpoint i - 1 to checkpoint i, and S[i] be the distance we save by skipping checkpoint i (that is, D[i] + D[i + 1] minus the direct distance from checkpoint i - 1 to checkpoint i + 1). Naturally, D[1] will be zero, as well as S[1] and S[N].

Let querydist(l, r) be the sum of all D[i] in the range [l, r], and querysave(l, r) be the maximum among all S[i] in the range [l, r]. Then, to answer a "Q L R" query, we compute querydist(L + 1, R) - querysave(L + 1, R - 1). We can handle these queries with a segment tree that keeps two pieces of satellite data in each node: the sum of D and the maximum of S.

Finally, when we receive an update query that moves checkpoint i, we need to recompute the affected values, namely D[i] and D[i + 1], and S[i - 1] through S[i + 1].
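
For concreteness, here's a rough sketch of such a segment tree (sum of D and maximum of S as the two satellite values), with a tiny hard-coded example instead of real input; I'm assuming Manhattan distances between 2D checkpoints, and the helper names are mine:

#include <bits/stdc++.h>
using namespace std;

struct SegTree {
    int n;
    vector<long long> sumD, maxS;
    SegTree(int n) : n(n), sumD(4 * n, 0), maxS(4 * n, 0) {}
    void update(int node, int l, int r, int pos, long long d, long long s) {
        if (l == r) { sumD[node] = d; maxS[node] = s; return; }
        int mid = (l + r) / 2;
        if (pos <= mid) update(2 * node, l, mid, pos, d, s);
        else update(2 * node + 1, mid + 1, r, pos, d, s);
        sumD[node] = sumD[2 * node] + sumD[2 * node + 1];
        maxS[node] = max(maxS[2 * node], maxS[2 * node + 1]);
    }
    // returns {sum of D[i], max of S[i]} over [ql, qr]
    pair<long long, long long> query(int node, int l, int r, int ql, int qr) {
        if (ql > qr || qr < l || r < ql) return {0, 0};
        if (ql <= l && r <= qr) return {sumD[node], maxS[node]};
        int mid = (l + r) / 2;
        auto a = query(2 * node, l, mid, ql, qr);
        auto b = query(2 * node + 1, mid + 1, r, ql, qr);
        return {a.first + b.first, max(a.second, b.second)};
    }
};

long long manhattan(pair<long long, long long> a, pair<long long, long long> b) {
    return llabs(a.first - b.first) + llabs(a.second - b.second);
}

int main() {
    // tiny hard-coded example with N = 4 checkpoints (coordinates are made up), 1-based
    vector<pair<long long, long long>> p = {{0, 0}, {0, 0}, {1, 2}, {3, 1}, {4, 4}};
    int N = 4;
    SegTree st(N);
    for (int i = 1; i <= N; i++) {
        long long d = (i > 1) ? manhattan(p[i - 1], p[i]) : 0;
        long long s = (i > 1 && i < N)
            ? manhattan(p[i - 1], p[i]) + manhattan(p[i], p[i + 1]) - manhattan(p[i - 1], p[i + 1])
            : 0;
        st.update(1, 1, N, i, d, s);
    }
    // answer a "Q L R" query: distance from L to R, skipping the single best
    // checkpoint strictly between them
    int L = 1, R = 4;
    long long ans = st.query(1, 1, N, L + 1, R).first - st.query(1, 1, N, L + 1, R - 1).second;
    printf("%lld\n", ans);   // prints 8 for this data
}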

Problem 3 — Cow Jog:

The first thing we must think about is "When do two cows cross each other if they are on the same lane?". They will cross when one cow's starting point is behind the other cow's, but its finishing point is farther than the other cow's. That is, if we represent the cows' covered points with a segment, one cow's segment will be completely contained within the other cow's segment.

Now, how do we know exactly how many lanes we will need? Well, let L[i] be the lane that cow i runs in. Initially, we set all L[i] to 0, and then we process the cows in increasing order of starting point in the following way: suppose we're currently processing cow c, and its segment is completely contained within the segments of some already-processed cows. Among all those cows, let m be the maximum of their L values; then L[c] = m + 1. If the current cow's segment is not contained in any other segment, then m is zero.

Another thing we must deal with is how to compute those values efficiently. We can do it with a BIT or a segment tree after compressing the finishing points of all cows (a BIT is probably the easiest way). Since the cows are processed by starting point, when we process cow c we know that every segment we've encountered so far has a starting point lower than the current cow's, so all we need to worry about is their ending points and the lanes they were assigned. We can answer range maximum queries with a BIT that stores the maximum lane number for each segment ending point. Let query(x) be the maximum lane number over all segments whose ending point is equal to or higher than x, and update(x, l) be the routine that records lane l at ending point x in the BIT. Then, to process a cow c whose (compressed) ending point is r, all we must do is update(r, query(r) + 1).

Finally, our answer will be query(1).
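
A minimal sketch of this approach, assuming the input is N and T followed by each cow's position and speed; the BIT works over reversed indices so that query(x) covers all ending points ≥ x, and the cows are explicitly sorted by starting point in case the input isn't:

#include <bits/stdc++.h>
using namespace std;

int M;                      // number of distinct (compressed) ending points
vector<int> bitmax;         // 1-based BIT holding maximum lane numbers

// prefix-max BIT over the reversed coordinate M - x + 1, so a "prefix" there
// is exactly the set of ending points >= x in the original order
void update(int x, int lane) {
    for (int i = M - x + 1; i <= M; i += i & -i) bitmax[i] = max(bitmax[i], lane);
}
int query(int x) {
    int res = 0;
    for (int i = M - x + 1; i > 0; i -= i & -i) res = max(res, bitmax[i]);
    return res;
}

int main() {
    int n; long long T;
    scanf("%d %lld", &n, &T);
    vector<pair<long long, long long>> seg(n);   // (start, end) of each cow's segment
    vector<long long> ends;
    for (int i = 0; i < n; i++) {
        long long pos, speed;
        scanf("%lld %lld", &pos, &speed);
        seg[i] = {pos, pos + speed * T};
        ends.push_back(seg[i].second);
    }
    sort(ends.begin(), ends.end());
    ends.erase(unique(ends.begin(), ends.end()), ends.end());
    M = ends.size();
    bitmax.assign(M + 1, 0);

    sort(seg.begin(), seg.end());                // process by increasing starting point
    for (auto &s : seg) {
        int r = lower_bound(ends.begin(), ends.end(), s.second) - ends.begin() + 1;
        update(r, query(r) + 1);                 // one more than the deepest containment so far
    }
    printf("%d\n", query(1));                    // the highest lane number used
}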

By diego_v1, 9 years ago, In English

Recently, there have been a couple of blog entries about macros to trace/debug variables in C++. I found them quite interesting, but none of them does exactly what I'm looking for. I'm not very familiar with templates (I've actually never used them), so maybe what I'm about to ask isn't even possible.

I'd like to have a macro/template/anything in my program that allows me to do something like this...

int A[3];
for(int i = 0; i < 3; i++) A[i] = i;
for(int i = 0; i < 3; i++) trace(A[i]);

And I'd like to receive this as output...

A[0] = 0
A[1] = 1
A[2] = 2

When using the macros posted in the previous entries by other coders, I get this...

A[i] = 0
A[i] = 1
A[i] = 2
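
I assume the macros in those entries boil down to the preprocessor's stringizing operator, something roughly like this (a sketch, not their exact code):

#include <iostream>

// #x turns the macro argument into a string literal at preprocessing time,
// so the *text* "A[i]" is printed rather than the current value of i
#define trace(x) std::cerr << #x << " = " << (x) << std::endl

int main() {
    int A[3];
    for (int i = 0; i < 3; i++) A[i] = i;
    for (int i = 0; i < 3; i++) trace(A[i]);   // prints "A[i] = 0", "A[i] = 1", "A[i] = 2"
}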

Is it possible to actually get the array name and index when using tracing/debugging macros with arrays?

I'd like to receive any kind of help/information on the matter. Thanks in advance!

By diego_v1, 10 years ago, In English

Today, when I solved problem C from Contest #260 (455C - Civilization), I wrote a routine to find the center of each tree before proceeding to the queries and the DSU.

Later, after getting AC, I read other coders' solutions, and it seems that finding the centers wasn't necessary for this problem. From what I've seen in their code, there appears to be a theorem that states something like the following: if the diameter of a tree is d, there will always be at least one node v such that the farthest node from v is no farther than (d + 1) / 2 (with integer division, i.e. ⌈d / 2⌉).

Is there such a theorem? I'd be grateful if someone could enlighten me on the matter.

By diego_v1, 10 years ago, In English

I've been trying to come up with an efficient data structure for the following problem for a couple of hours... without luck.

We have an array of numbers. Initially, all of them are 0. Then we have two types of operations...

  • update(l, r, v): Add v to all elements in the range [l, r]
  • query(l, r): Return the number of positive numbers in the range [l, r]

I thought about a segment tree where each node stores the minimum and maximum value found in its range; then, in a query, if we reach a node whose minimum value is > 0, we know that all numbers in that range are positive (and if its maximum is ≤ 0, none of them are). But that data structure would visit every node in the tree in the worst case, so I was wondering if there's something more efficient.
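
For concreteness, here's a minimal sketch of that idea (lazy range-add, with each node's min and max used to cut off fully positive or fully non-positive subtrees); as said, a single query can still touch a large part of the tree:

#include <bits/stdc++.h>
using namespace std;

struct Node { long long mn, mx, lazy; };
int n;
vector<Node> t;

void push(int v) {                       // propagate the pending range-add to the children
    for (int c : {2 * v, 2 * v + 1}) {
        t[c].mn += t[v].lazy;
        t[c].mx += t[v].lazy;
        t[c].lazy += t[v].lazy;
    }
    t[v].lazy = 0;
}

void update(int v, int l, int r, int ql, int qr, long long val) {
    if (qr < l || r < ql) return;
    if (ql <= l && r <= qr) {
        t[v].mn += val; t[v].mx += val; t[v].lazy += val;
        return;
    }
    push(v);
    int mid = (l + r) / 2;
    update(2 * v, l, mid, ql, qr, val);
    update(2 * v + 1, mid + 1, r, ql, qr, val);
    t[v].mn = min(t[2 * v].mn, t[2 * v + 1].mn);
    t[v].mx = max(t[2 * v].mx, t[2 * v + 1].mx);
}

int query(int v, int l, int r, int ql, int qr) {
    if (qr < l || r < ql) return 0;
    if (t[v].mn > 0) return min(r, qr) - max(l, ql) + 1;   // everything here is positive
    if (t[v].mx <= 0) return 0;                            // nothing here is positive
    push(v);                                               // mixed node: recurse (worst case O(n))
    int mid = (l + r) / 2;
    return query(2 * v, l, mid, ql, qr) + query(2 * v + 1, mid + 1, r, ql, qr);
}

int main() {
    n = 10;
    t.assign(4 * n, {0, 0, 0});
    update(1, 1, n, 2, 5, 3);                 // add 3 on [2, 5]
    update(1, 1, n, 4, 7, -3);                // add -3 on [4, 7]
    printf("%d\n", query(1, 1, n, 1, 10));    // prints 2 (positions 2 and 3 are positive)
}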

Can anyone come up with a better idea?

By diego_v1, 10 years ago, In English

I recently installed Linux with the latest version of Code::Blocks, and the way it auto formats and indents the code is HORRIBLE. I can't possibly code efficiently with an IDE like that. It worked perfectly under Windows, but its behaviour under Linux is disastrous. I tried tweaking all the options of the editor, and none of them fixes the issue.

The issue is like this...

Let's suppose I type if(k==0), press Enter, and then press the open brace key. Under Windows, it would auto format itself to this...

if(k==0)
{
    |
}

NOTE: character '|' marks the position of the cursor.

But under Linux, it auto formats into this useless thing...

if(k==0)
{|}

That means I need to press Enter, then Up, then Right, then Enter, and finally Tab to format it correctly...

if(k==0)
{
    |
}

Basically, I have to press FIVE freaking keys after opening the braces to format the code correctly, whereas under Windows, I had to press NONE. And that's a lot of wasted time if you're in a contest.

If anyone knows how to correct Code::Blocks' behaviour under Linux, please let me know. I'd greatly appreciate it.

By diego_v1, 11 years ago, In English

I just encountered a weird issue with problem 333D.

Check these two submissions:

Accepted: 4201145

TLE: 4201157

The only difference between these two submissions is that the first one says if(A[i][k]>B&&A[j][k]>B) while the other says if(min(A[i][k],A[j][k])>B). How can two statements that are practically identical be the difference between getting accepted in 1122 ms and getting TLE at over 3000 ms?!?!

Another coder suggested that maybe the std::min function is slow, so I changed it to a statement of the form a<b?a:b, and it still gets TLE, so the issue is not the std::min function.

Another TLE: 4201326

I'm puzzled...

By diego_v1, 11 years ago, In English

I was wondering about problem C from Div. 1 of Codeforces Round #177 (problem link: 288C - Polo the Penguin and XOR operation). Why does the greedy solution work?

The greedy solution is as follows:

  • For every index i from 0 to N (in increasing order), pick an unused number x such that i ^ x (i xor x) is maximum.

I know it works because I got accepted with that. I was hoping someone could prove it.
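
For reference, the greedy I mean looks roughly like this (a sketch, not my actual submission): keep the unused numbers 0..N in a binary trie with a counter per node, and for each i extract the unused x that maximizes i xor x. I'm assuming the output is the maximum sum followed by the permutation, and that 20 bits cover the constraints.

#include <bits/stdc++.h>
using namespace std;

const int BITS = 20;                       // assuming N < 2^20; raise if needed

struct Trie {
    vector<array<int, 2>> nxt;
    vector<int> cnt;
    Trie() : nxt(1, {0, 0}), cnt(1, 0) {}

    void insert(int x) {
        int v = 0;
        for (int b = BITS - 1; b >= 0; b--) {
            int d = x >> b & 1;
            if (!nxt[v][d]) {
                nxt[v][d] = nxt.size();
                nxt.push_back({0, 0});
                cnt.push_back(0);
            }
            v = nxt[v][d];
            cnt[v]++;
        }
    }

    // remove and return the unused y that maximizes x ^ y
    int takeMaxXor(int x) {
        int v = 0, y = 0;
        for (int b = BITS - 1; b >= 0; b--) {
            int d = (x >> b & 1) ^ 1;      // prefer the opposite bit if still available
            if (!nxt[v][d] || cnt[nxt[v][d]] == 0) d ^= 1;
            v = nxt[v][d];
            cnt[v]--;
            y |= d << b;
        }
        return y;
    }
};

int main() {
    int n;
    scanf("%d", &n);
    Trie trie;
    for (int i = 0; i <= n; i++) trie.insert(i);

    long long sum = 0;
    vector<int> p(n + 1);
    for (int i = 0; i <= n; i++) {         // increasing order of i, as described above
        p[i] = trie.takeMaxXor(i);
        sum += (long long)(i ^ p[i]);
    }
    printf("%lld\n", sum);
    for (int i = 0; i <= n; i++) printf("%d ", p[i]);
    printf("\n");
}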

By diego_v1, 11 years ago, In English

I always suffer from overkill. I mean, using a complex solution in a problem that doesn't require it. It's like killing a fly with a bazooka.

A clear example is the problem "Ladder" (279C). The intended solution is to compute, for each element i, the largest j (j ≥ i) such that the range [i, j] is non-decreasing (call it Inc[i]), and the largest k (k ≥ i) such that the range [i, k] is non-increasing (call it Dec[i]). Then, when a query L, R arrives, just check that Dec[Inc[L]] >= R.
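
For reference, a minimal sketch of that simpler approach (I'm recalling the per-query output as "Yes"/"No"):

#include <bits/stdc++.h>
using namespace std;

int main() {
    int n, m;
    scanf("%d %d", &n, &m);
    vector<long long> a(n + 1);
    for (int i = 1; i <= n; i++) scanf("%lld", &a[i]);

    // Inc[i] / Dec[i] = furthest index j >= i such that a[i..j] is
    // non-decreasing / non-increasing
    vector<int> Inc(n + 1), Dec(n + 1);
    Inc[n] = Dec[n] = n;
    for (int i = n - 1; i >= 1; i--) {
        Inc[i] = (a[i] <= a[i + 1]) ? Inc[i + 1] : i;
        Dec[i] = (a[i] >= a[i + 1]) ? Dec[i + 1] : i;
    }
    while (m--) {
        int l, r;
        scanf("%d %d", &l, &r);
        // a ladder first goes up as far as Inc[l], then must go down to at least r
        puts(Dec[Inc[l]] >= r ? "Yes" : "No");
    }
}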

Well, I don't usually realize those things (especially during contests, where I don't have much time to think of a better solution), and I end up using more complex tools (very frequently segment trees).

For the problem "Ladder" (279C), I instead used a segment tree that stores, for every range [L, R], the number of increasing and decreasing consecutive elements in that range (if a[i] > a[i-1], then a[i] is an increasing element, and if a[i] < a[i-1], then a[i] is a decreasing element). When a query arrives, I binary search within that range for the rightmost element k such that the range (L, k] has 0 decreasing elements. Then I check the range (k, R]: if it has 0 increasing elements, the range is a ladder; otherwise it isn't. In the worst case I end up doing Q·log(N) range queries on a segment tree, whereas the problem can be solved with O(N) preprocessing and O(1) per query.

For the problem "Dima and Staircase" (272C), I also used Segment Trees (Yeah, I use them nearly everywhere), when all that was needed were two variables.

Anyway, I was wondering if any of you suffer from the same problem too.

By diego_v1, 11 years ago, In English

I personally believe weighted rating is fairer than the standard IOI rating, because solving a difficult task should earn you more points than solving an easier one, even if both tasks have the same number of test cases.

That's why I decided to see what the scores for past USACO contests would have looked like if that system had been used.

It's not exactly the same system USACO used to employ, because I didn't cap a problem's weight at 400 (e.g., btree's weight in the November contest comes out to 519).

Check it out and let me know what you think.

USACO Weighted Ratings
