mine for 1e15 is TLE, but 1e10 accepted with 93ms

Very quietly I take my leave

As quietly as I came here;

Quietly I wave good-bye

To the rosy clouds in the western sky.

hello world! :)

Because you're so WEN3 that you needn't think about it.

Same result.. WA

Try submitting under G++ 14; double is more precise there. I guess the solution lost precision in (thisAns * v[i].first).

MS C++ makes me sad now :)

Thank you for clarifying.

You are right. It's almost the same as your solution. And again, this AC double solution was submitted under GNU G++. In fact, it produced better assembly code for the function f(...). The same code submitted under MS C++ gets WA 71:

But if we rework the f(...) function a bit with the Kahan summation approach, we get AC even on MS C++. See this attempt:

The main problem is in this line:

double add = T * a[i] - b[i];

UPD. In test 71, T*a[i] - b[i] ~ 10^14. Then we take the sum of 100000 such numbers, which is about 10^19 and does not fit into double precision. The lower bits of the final value are simply truncated, and this leads your binary search to a smaller value than the test requires. So no magic here, just a bit of luck for GNU users :) I suppose GNU keeps all intermediate computations in floating-point registers (which are 10 bytes or wider) with no writes to RAM.
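For reference, a minimal sketch of the Kahan trick mentioned above. These are generic summation helpers of my own, not the actual submission's f(...):

```cpp
#include <vector>

// Plain left-to-right summation: once the running sum is huge, the
// low-order bits of each small addend are rounded away.
double naive_sum(const std::vector<double>& xs) {
    double sum = 0.0;
    for (double x : xs) sum += x;
    return sum;
}

// Kahan (compensated) summation: keep the rounded-away part in `c`
// and feed it back into the next addition.
double kahan_sum(const std::vector<double>& xs) {
    double sum = 0.0, c = 0.0;
    for (double x : xs) {
        double y = x - c;    // re-inject the previously lost bits
        double t = sum + y;  // low bits of y may be rounded off here
        c = (t - sum) - y;   // recover exactly what was rounded off
        sum = t;
    }
    return sum;
}
```

With xs = {2^53, 1.0, 1.0}, naive_sum loses both small addends and returns 2^53, while kahan_sum returns the exact 2^53 + 2.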

Thx, correct explanation; that puts the fire out :)

But anyway, there are submissions that use double (not long double), which is 8 bytes, and they get AC. Example:

Different compilers generate different machine code, i.e. different sequences of assembly instructions, and they process floating-point numbers differently. Try this code under MS C++ and GNU G++ 14 (you can use the Run option):

#include <iostream>
int main() {
  std::cout << sizeof(long double) << std::endl;
  return 0;
}
It turns out that MS C++ prints 8 (bytes), while GNU 14 prints 12 (bytes). It means that under MS C++ long double is less precise than under GNU C++ 14. Moreover, you could think about computing the triangle area with more precision in problem B; in that case plain double would be enough.

I actually think the difference between 1e14 and 99999997522874.859000000 is not very small, and certainly not as small as something like 1e-5 (it is 2477125.14063 to be exact).
Even then, here is the same verdict after the suggested modification: WA

How does that fit with the fact that under GNU G++ 14 everything is OK? :)

My team submitted this code under MS C++ for problem B at minute 22 of Round 2 and got WA 4.

The same code submitted under GNU G++ 14 got AC:

IDK what's wrong with MS C++ on Codeforces, but with problem A my team lost 1390 points and, ironically, 100 places. :P

I think it's because of the <= with doubles in the check function. Try changing (need <= have) to (need < have + EPS) (EPS = 1e-6).

This is an accuracy error with doubles; it's not safe to compare with >= when using doubles. Try changing (ans >= 1e14) to (ans > 1e14 - EPS), where EPS is a small number like 1e-5. Here 99999997522874.859000000 is very close to 1e14 but not equal to it, so it's good to use EPS when comparing doubles.
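The suggested tolerant comparison can be sketched like this (the helper names and the relative variant are mine, not from any submission; note that an absolute EPS like 1e-5 is vanishingly small next to values around 1e14, which is why a relative tolerance is often the safer form at these magnitudes):

```cpp
#include <algorithm>
#include <cmath>

const double EPS = 1e-6;

// a <= b with an absolute tolerance, as suggested above.
bool leq_abs(double a, double b) { return a < b + EPS; }

// a <= b with a relative tolerance: the allowed slack scales with the
// magnitude of the inputs, so it still behaves sensibly around 1e14.
bool leq_rel(double a, double b) {
    return a <= b + EPS * std::max({1.0, std::fabs(a), std::fabs(b)});
}
```

With the relative version, 1e14 compares as "not greater" than 99999997522874.859, which is the behavior the comment above is after.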

Strange behavior with my submission:

Submitted under MS C++ during the round and got WA 71:

The same code (only casting to double in printf) submitted under GNU G++ 14 got AC:

How is it possible?!

I understand the frustration, and I have experienced this multiple times first hand where the problem is just not solvable in Java because of the problem setter's mistakes. As a contestant, I definitely would have been upset by this as well.

On the other hand, I think it's a dangerous precedent to set to remove hacks that are difficult for a particular language to handle. I think the ideal solution is that problem setters should learn from this and set lower bounds or make sure precision is not as big of an issue (for example, there is no need for me to set a_i,b_i <= 10^5, and I could have set a_i,b_i <= 100).

Anyway, I think the most similar situation is in Round 284, where denormal numbers caused some submissions to TLE. In that case, the decision was to keep the contest rated. I feel that situation is the closest to this one, so that's why I think the results should still stand.

You're fine only with GNU C++. With MS C++ I also had precision problems using the same code.

Hello... Why am I not rated in this contest? I solved problem A.

I think the upper limit is 1e10, because the worst case I can think of is this: you have n = 1e5 devices, the charger is as fast as possible without letting the devices run forever (p = 99999), each device uses the minimum amount of power (a[i] = 1) and has the maximum starting power (b[i] = 1e5). Now the time needed will be 1e10, because you originally have 1e10 power in total (the sum of b[i]) and each second you lose one unit of power in total (you lose 1e5, one per device, and regain 99999 from the charger), so it will take 1e10 seconds to run out of power.
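The arithmetic above can be checked directly. All the numbers come from the comment; the helper function itself is just my illustration:

```cpp
#include <cstdint>

// Worst case sketched above: n devices, each draining a = 1 per second,
// each starting with b = 1e5 stored, charger rate p = n - 1.
int64_t worst_case_seconds() {
    const int64_t n = 100000, a = 1, b = 100000, p = n - 1;
    int64_t total_stored = n * b;     // 1e10 units of power overall
    int64_t net_drain = n * a - p;    // lose n, regain p: net 1 per second
    return total_stored / net_drain;  // seconds until everything is empty
}
```

This yields 1e10 seconds, matching the claimed upper bound for the binary search.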

Always use setprecision.

I had to face a similar problem.



Can't understand which statement caused the overflow, if any.

I think big integers in C++ are a different situation. If you need big integers, then the whole problem is about big integers and handling hundreds or thousands of digits, and you know up front that you can't use plain C++ there. But here, the problem wasn't meant to be about double's precision at 10^10. Even lewin said that this result was unexpected, so they didn't design the test to break double's precision.

In C++, everyone knows that it can't handle big integers, because there is simply no built-in datatype for that. But in Java, as you can see from the comments and submissions, most people didn't know that the precision of double is not enough.

On the other hand, I get your PoV, and you are mostly right, but I still think that this problem was unfair to Java users as a Div2 C.

Thanks for the explanation :). My solution was also getting stuck in an infinite loop even after the answer had been calculated correctly, so I used the clock to break out of the loop. Anyway, it failed system tests due to the 1e9 bound. Solution

Yeah, it was precision error, changing double to long double gave me AC.

Did you find your mistake? I have WA on test 4 too.

It works, but when I set the upper bound of the binary search to 1e18 instead of 1e12, it gives me a WA verdict. 26445116

I WAed on the same test case, but my issue is that my INF is too large. Haha.

For Div2-C / Div1-A
This is the AC code and this is the WA code.
The only difference is that for the -1 case, in the AC code I have:


and in the WA code I have:

if(ans >= (1e14))

Can someone please point out what my mistake is here? Is there some kind of overflow that prevents my binary search from converging on a solution?
The WA solution gives the answer 99999997522874.859000000 on test 74; that is, it does converge on a value.

Similar problem for me. Got WA using MS C++ (26438473) and AC using GNU G++ (26438539). This is really bad behaviour.

Sure, we should take the minimum of such values over all triples of consecutive points. Besides, we should also satisfy the condition that the answer is less than or equal to half of any edge of the polygon (so as not to have intersections).

Finally, I managed to solve this problem (26441610) in C# by summing up like a segment tree. Now I'm really interested in whether we could hack this or not, and in how we should manage real numbers!
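"Summing up like a segment tree" presumably refers to pairwise summation, which cuts the rounding-error growth from O(n) to O(log n) because each input passes through only O(log n) additions. The C# code isn't shown here, so this C++ sketch is my reconstruction of the idea:

```cpp
#include <cstddef>
#include <vector>

// Pairwise ("segment-tree style") summation: add the two halves
// recursively, so every value participates in only O(log n) additions.
double pairwise_sum(const std::vector<double>& a, size_t lo, size_t hi) {
    if (hi - lo == 1) return a[lo];
    size_t mid = lo + (hi - lo) / 2;
    return pairwise_sum(a, lo, mid) + pairwise_sum(a, mid, hi);
}

double pairwise_sum(const std::vector<double>& a) {
    return a.empty() ? 0.0 : pairwise_sum(a, 0, a.size());
}
```

Unlike Kahan summation, this needs no extra compensation variable, at the cost of recursion (or an explicit tree of partial sums).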

Thanks, I got it)

Why isn't there a fixed value for infinity, like 1e10 or something else (limits), in problem C?

you are Google itself ;)

For the following test it is clear that the total sum of b[i] decreases by 1 every second (as p - sum(a) == -1). The total initial sum of b[i] is 10^10, thus we can see that the answer will be 10^10, and that this is the worst case.

n = 100 * 1000;
p = 100 * 1000 - 1;
print(n, p)
for i in range(n):
	print(1, 100 * 1000)

Can someone look at my solution for Div1 B? I'm trying to use binary search, moving the points closer to each other (perpendicularly) and then computing the cross product to see if the polygon is still convex.

Please help:

Thank you so much! Using relative error, it turns out that I can use an upper bound of up to 1e16. The only thing still not clear is why 1e9 is too small. I mean, how do you find an upper bound for the answer in this problem?

Had you been a contestant trying to solve this problem in Java today, and not the setter, you would have failed to solve it, right?

Thanks for the information. I just wanted to know if this is supposed to happen. I thought that CF ignores all solutions that fail the samples, not just the first test. So the answer to my first question is clarified. Can anyone answer the second? :)

To your first question: yes, it is normal; only incorrect submissions on the first sample/pretest are not counted. I don't claim to judge whether this is fair or not; however, there are cases where it is not trivial to know if a solution passes the samples, say in constructive problems with many possible answers, so making all submissions that fail on samples not count would be a big departure from current policy.

A participant might send a wrong code, print some debug info, or their solution can be wrong in CF system (e.g. small MAX_RAND). These mistakes aren't usually penalized thanks to friendly checking of the first test. Doing the same for other samples would help mostly if a participant is lazy or doesn't have time to run his solution on those tests — these arguments aren't that strong. I think the current system is ok.

I tried to use BigDecimal, but it is too slow and TLEs. Maybe there is some way to be very careful with the precision of the BigDecimals and make it work, but I have not seen any solution like that so far. meijun has this submission, though, 26437993, which uses streams somehow and passes in Java, but I don't really see how one could intuitively know that this should work.

Is it normal for a submission failing the second pretest (which was part of the samples in Div1 D) to be counted as a wrong submission? I only just saw that I have -50 on that problem because it failed the second sample (of course I could have made sure it works, but I sent it about 2 minutes before the end and, as I was running out of time, I didn't check all the samples).

Also, in the first problem: I used custom invocation to test my first attempt at Div1 A on a test with N = 100,000 and saw it running in 2200 ms, so I decided to resubmit with lower precision (170 iterations of the binary search instead of 200). The new source ran in custom invocation in 1930 ms, but on the final tests it took just 300 ms, which makes me wonder: does custom invocation run the code at the same speed as system testing does?

In a problem with big integers, would you want all big tests to be removed for C++ submissions?

You choose a language yourself and it's up to you to know its weaknesses. A solution is good if and only if it passes all created tests — not if it passes all tests except for those hard for this language.

I'm the one to blame for the same number of points for two last problems. I solved the last problem quickly but struggled with the other one a bit. I even proposed to swap those two problems... (fortunately, it didn't happen).

And btw, I'm surprised by the precision issues in Java. Is there any fix for that? Maybe we can simulate float values with big integers in Java (if necessary)?

Manchester Vs Chelsea.... The awesomest 1...

Awesome rounds... and contests... fallen in love with codeforces <3

I don't find it quite fair to leave the results as they are.

The fact that the same algorithm can pass in C++ and not in Java is pretty unfair. It should be equally easy to get AC in every language. Even the sentence "We managed to find a Java solution that works" shows that this was pretty hard for Java programmers. We couldn't have known that Java's double misbehaves at big values, so this test is pretty cruel.

Personally, I lost 71 rating points instead of gaining something like 30 because of this test case. I'd be happy if this test case could be ignored for Java users, since a lot of people got WA on it while using the right approach.

Stress testing isn't a good strategy for hacking.

  1. look at the binary search loop (it's now a for, not a while)

  2. look at the infinity condition (it now uses relative error instead of absolute error; the same could be done for the first loop)

Can you please point out what edit I need to make in my solution?

Lol, such a simple solution for Div2 C, and I spent an hour solving it in a much more complex way.

Iterate a fixed number of times (like 100-200), or use relative error.
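The fixed-iteration pattern recommended here sidesteps the infinite loops that `hi - lo > eps` tests cause at large magnitudes: after k halvings the interval length is (hi - lo) / 2^k regardless of any epsilon. A sketch, where the predicate is a stand-in of mine, not the actual problem's check:

```cpp
#include <cmath>

// Binary search on reals with a fixed iteration count.  A for loop, not a
// while loop, so it always terminates; no epsilon comparison is needed.
// `ok` must be a monotone predicate: false ... false true ... true.
template <class Pred>
double bisect(double lo, double hi, Pred ok, int iters = 200) {
    for (int i = 0; i < iters; i++) {
        double mid = (lo + hi) / 2;
        if (ok(mid)) hi = mid; else lo = mid;
    }
    return hi;  // smallest value where `ok` holds, up to (hi-lo)/2^iters
}
```

For example, bisect(0, 4, [](double x){ return x * x >= 2.0; }) converges to sqrt(2) to within the limits of double precision.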

I checked some solutions from the top of the leaderboard, and they fail if compiled with -ffloat-store flag. Nothing new, though.

How do we resolve this issue?

Probably because you lose precision and can't get a difference of 1e-5 when the numbers get too big, so it creates an infinite loop.

It fails with 1e9 because it is too small, and it fails with 1e11 because you use the == operator. Never use that with doubles.

Interesting. I didn't by any means imply that results should be changed in any way — jury answer is clearly the right one.

I tried to debug what's going on, but I still don't understand what happens there :( Apparently the function f(t) = sum(a*t - b)/t - p is very non-monotonic around t = 10^10, and it jumps around 0 like crazy even with small changes of t. I'm not sure why the same doesn't happen in C++; maybe it gets magically optimized to use higher-precision arithmetic.

It makes sense that summing many doubles is less precise than multiplying a long by a double. Why it only matters in Java is less clear to me.
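The truncation under discussion is easy to demonstrate directly: around 1e19, consecutive doubles are 2048 apart (the 53-bit mantissa cannot resolve finer steps at that exponent), so adding anything smaller than half that gap is a no-op.

```cpp
// Near 1e19 a double's unit in the last place is 2048, so an addend of
// 1000 is below half an ulp and is silently rounded away.
bool small_addend_absorbed() {
    double big = 1e19;
    return big + 1000.0 == big;  // true at this magnitude
}
```

By contrast, adding a full multiple of the ulp, such as 4096, does change the value, which is why sums of 1e5 terms of size ~1e14 can drift by thousands.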

Can someone explain why solutions for Div2 C got TLE when submitted under MS C++ 2010 and got Accepted when submitted under GNU G++14? Here's my code under MS, TLE on test 66 (26436248), and exactly the same code under GNU G++14 Accepted in 78 ms (26436769)!

Huge minus to the authors' karma for the Div2 C / Div1 A problem. They decided not to accommodate Java's double precision and didn't give enough time for a BigDecimal solution. So if you use C++ and long double you are fine; if Java, please fail on test 71. BTW: a solution with a hardcoded value for such "tests" receives "Accepted":

But the correct answer will always be between from and to, won't it? I thought about the iteration count too, but I found it better to go with relative error, since that is what the judge checks.

My mistake was using 1e14 as upper bound instead of 1e13. Bad luck.

Test 71 was a hack. It actually caused an unexpected verdict in our Java solution, but since all the other C++ solutions agreed (in particular, the one that used long doubles), we decided to mark the Java solution as incorrect.

We managed to find a Java solution that works. We believe the problem is that the comparison A <= B when A and B are very large may not be precise. We're not sure why it only shows up in Java.

More specifically, the inequality we want to check is sum(a)*t - sum(b) - p*t <= 0, over the devices that need to be charged. This may be imprecise, so we can compute it as sum(a*t - b - p*t/n) <= 0 instead. This seems to agree with all of our other solutions right now.
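A sketch of this rearrangement (the Device struct and the function are my own illustration, not the setters' code): distributing p*t evenly across the k deficit terms keeps every summand, and therefore every partial sum, small, instead of comparing two quantities that can each reach ~1e19.

```cpp
#include <vector>

// One device: drain rate `a` (power per second) and initial charge `b`.
struct Device { double a, b; };

// Can all devices stay on until time t with a charger adding p per second?
// Naive form: sum(a)*t - sum(b) <= p*t, where both sides can be huge and
// lose low bits.  Rearranged form: fold p*t into the sum, one k-th per
// deficit term, so intermediate values stay moderate.
bool feasible(double t, double p, const std::vector<Device>& devs) {
    std::vector<Device> need;
    for (const Device& d : devs)
        if (d.a * t > d.b) need.push_back(d);  // runs dry before t unless charged
    if (need.empty()) return true;
    double k = (double)need.size();
    double s = 0.0;
    for (const Device& d : need)
        s += d.a * t - d.b - p * t / k;        // each term is small
    return s <= 0.0;
}
```

This predicate is monotone in t, so it plugs straight into the binary search discussed throughout the thread.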

Unfortunately, I don't think we will change the results, but next time I'll lower the bounds of such a problem more, so that precision issues like this are less likely to come up.

Probably something is wrong with the checker; lots of people got WA on Div2 C with Java, so they'll check it. It may take some time.

26431601 has WA 28.
I have a variable ans, the initial value of which does not matter.
After some attempts:
1. ans is not defined -- WA 28.
2. ans = INF, INF = 1e9 + 7 -- WA 28.
3. ans = 1e9 -- WA 33.
4. ans = 2e9 -- AC.
5. ans = 0 -- AC.
Can someone explain to me WHY?

I cannot believe I passed the Div2 C system tests. I have no idea how precision works here; I just kept adjusting parameters until I passed the pretests.

Just out of curiosity, given that you use Java yourself: is the reason you didn't see this issue when writing the problem that your reference solutions did not use binary search? Or is test 71 a hack case that was simply never seen before?

isn't the range of double much larger than that?

I am really not able to understand why I am getting TLE on test 74. Please help me!!! Code: 26437499. UPD: It gets accepted when the upper bound is 1e10 but fails when the upper bound is 1e12 or 1e18; I don't know why.

Can someone explain to me why my solution for Div2 C fails with a binary search upper bound of 1e11 or 1e9 but gets AC with 1e10?

In Div2 C, what goes wrong when we compare the answer to the upper bound in the binary search to detect the infinity case? 26437983

how much time will it take to update ratings?

In two problems, double gave WA and long double (after the contest) gave AC; I hope that's not intended.

I think you just have to print answer with greater precision.

For each triple of adjacent vertices A, B, and C: find the distance between point B and the line AC, and divide that distance by 2. Choose the minimum.
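That distance is conveniently computed with a cross product: |AC x AB| is twice the area of triangle ABC, and dividing by |AC| leaves the height from B. A small sketch (the Pt struct is mine):

```cpp
#include <cmath>

struct Pt { double x, y; };

// Distance from point B to the infinite line through A and C.
// The cross product of AC and AB equals twice the signed triangle area;
// its absolute value divided by |AC| is the height dropped from B.
double dist_to_line(Pt A, Pt B, Pt C) {
    double cross = (C.x - A.x) * (B.y - A.y) - (C.y - A.y) * (B.x - A.x);
    double len = std::hypot(C.x - A.x, C.y - A.y);
    return std::fabs(cross) / len;
}
```

For example, with A = (0, 0), C = (2, 0) and B = (1, 3), the height from B is 3, so the candidate radius for that triple would be 1.5.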

You stop when to and from are relatively close to each other, but that doesn't guarantee that the number in between is close to the answer.

You can simply run some fixed number of iterations (something like 500) and then print the result. It's considered good practice here. :)

Got AC with Java 8 & binary search: 26437377.

what's wrong !!?

This is a bit unexpected for me too, and we'll look into it.

Test 71 is generated by the following code:

n = 100000
print(n, 10 ** 9)

for i in range(99999):
    print(10000, 100000)

print(10001, 100000)

There seems to be some really weird stuff going on today. I solved A with sorting + traversal in Java. However, I solved B with the standard approach of calculating all the heights of every 3-tuple in Java but got WA; then I just ported my solution to Python and got accepted. Is the server's JVM broken? :^)

t*a[i] can overflow when t = 1e24.

Oh, you're right, thanks for the explanation.

Yes, I agree, and somehow it passes with the EXACT same solution in C++; see my comment above for the codes. Bad time to use Java :(

I did, check line 53. If it is possible even at max_limit, I output -1

cout gave me a wrong answer on test case 3 (26434138); after the contest I just printed the output using printf with 8 decimal places and got AC (26435680) :/

Got it, Thanks ^^

I believe that 'that green guy' got his solution on problem D accepted.

I had the same problem with Java; I rewrote it in Python 3 after an hour and it passed. Too bad I got a really bad time :(

You have to output -1 if it is possible to keep all devices on indefinitely; you didn't do that.

Fast Subset Convolution is a technique to solve the following problem. We are given a set U and two functions f1, f2 : P(U) -> Z (P denotes the power set). We need to compute the function g such that g(S) = sum over all T ⊆ S of f1(T) * f2(S \ T). The brute-force way is O(3^n); using FSC we get O(2^n·n^2).
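A compact sketch of the ranked-transform implementation behind that bound (plain long long values for illustration; contest versions usually work modulo a prime, and __builtin_popcount assumes GCC/Clang):

```cpp
#include <vector>

// Subset convolution: g(S) = sum over T subset of S of f1(T) * f2(S \ T),
// computed in O(2^n * n^2) via ranked zeta/Moebius transforms.
std::vector<long long> subset_convolution(const std::vector<long long>& f1,
                                          const std::vector<long long>& f2,
                                          int n) {
    int N = 1 << n;
    // Split each function by popcount "rank": a[r][S] = f1(S) iff |S| == r.
    std::vector<std::vector<long long>> a(n + 1, std::vector<long long>(N, 0));
    std::vector<std::vector<long long>> b = a, c = a;
    for (int S = 0; S < N; S++) {
        a[__builtin_popcount(S)][S] = f1[S];
        b[__builtin_popcount(S)][S] = f2[S];
    }
    // Zeta transform (sum over subsets) on every rank layer.
    for (int r = 0; r <= n; r++)
        for (int i = 0; i < n; i++)
            for (int S = 0; S < N; S++)
                if (S >> i & 1) {
                    a[r][S] += a[r][S ^ (1 << i)];
                    b[r][S] += b[r][S ^ (1 << i)];
                }
    // Pointwise multiplication, treating the rank as a polynomial degree:
    // only rank-s pieces of f1 and rank-(r-s) pieces of f2 combine to rank r.
    for (int r = 0; r <= n; r++)
        for (int s = 0; s <= r; s++)
            for (int S = 0; S < N; S++)
                c[r][S] += a[s][S] * b[r - s][S];
    // Moebius transform (inverse zeta) on every rank layer.
    for (int r = 0; r <= n; r++)
        for (int i = 0; i < n; i++)
            for (int S = 0; S < N; S++)
                if (S >> i & 1) c[r][S] -= c[r][S ^ (1 << i)];
    // Read off the diagonal: the result for S sits at rank |S|.
    std::vector<long long> g(N);
    for (int S = 0; S < N; S++) g[S] = c[__builtin_popcount(S)][S];
    return g;
}
```

The rank bookkeeping is what filters out the "fake" pairs (T, S \ T) with overlapping elements that a plain pointwise product of zeta transforms would count.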

What is wrong with this solution for Div2 C / Div1 A?

Hi, can anyone explain me this?


Why did the green guy who got 1232 points for doing A and B get +203 rating points, while I and other green people who got 1230 points for A and B got only 30-50?

Mhammad1 Well, we didn't get the visa for ICPC this year, but at least now you are famous :D

Problem A, Java 8: I don't see a single Accepted solution with binary search... (there are a few with sorting + explicit computation though).

All of them failed on test 71, with "9995877311.0944" vs "10000000000.002304". Unfortunately, I don't see a way to download the test (it is too long and gets truncated). Is it possible to share it somehow? I'm really interested in what's going on there. (I can try to get it part by part with debug submits, but I guess sharing is easier.)

FWIW, the troubles are clearly with precision, but I don't get why they appear only in Java (I see C++ binary search solutions accepted), and so consistently. I tried known precision-related tricks (like sorting doubles before adding them together), but they don't seem to help here...

I have this problem too. Problem D got accepted, and it's counted as a "Failed System Test" in the final standings.

I found that only 1 person got Accepted with Java in Div1 A. Most contestants failed on test 71. Is this test valid?