eatmore's blog

By eatmore, history, 3 months ago, In English

TopCoder recently announced that the TCO21 finals will be held online. They were previously planned to take place in Orlando, Florida, but the plans changed due to the worsening COVID-19 situation in the US and the government's intention to keep travel restrictions in place.

You can watch the video of the announcement here.


By eatmore, history, 11 months ago, In English

Google Code Jam 2021 has been announced. Here is the schedule:

The location of the finals will be announced later.

The dates of other Google competitions, Hash Code and Kick Start, have also been announced.


By eatmore, history, 12 months ago, translation, In English

Due to the COVID-19 pandemic, the ICPC administration in Northern Eurasia has announced special rules for the 2020/2021 season. The contests will be held online, each participant will be allowed to use a separate computer, and any written sources (but not others' code) will be permitted. A selection round under regular rules is planned for the beginning of April 2021. Below is the full text of the new rules.


By eatmore, history, 13 months ago, In English

Looking at the scripts that Codeforces loads, I found that it uses Evercookie. Evercookie is a script written by Samy Kamkar (the same guy who created the Samy worm) that creates "supercookies": pieces of information a website can store on the user's machine that are intentionally made difficult to get rid of. Supercookies are generally used for tracking, and the use of such technology would be understandable for an anti-privacy company such as Facebook, but why is Codeforces doing this?


By eatmore, history, 13 months ago, In English

Java 15 was released a few days ago. Here are some changes that may be useful for competitive programming.

  • Pattern matching for instanceof, first introduced in Java 14, is still in preview.
  • Text blocks, first introduced in Java 13, are now final:
String input = """
	3 3
	1 2
	1 3
	2 3""";
  • Records, first introduced in Java 14, get a second preview (they become final in Java 16):
record Fraction(int num, int den) {
	Fraction {
		if (den == 0) {
			throw new IllegalArgumentException("zero denominator");
		}
		if (den < 0) {
			num = -num;
			den = -den;
		}
	}

	Fraction add(Fraction o) {
		return new Fraction(num * o.den + den * o.num, den * o.den);
	}

	// ...
}
  • Useful new methods: CharSequence.isEmpty() (also inherited by String), String.stripIndent() (useful for text blocks), String.formatted(), Math.absExact().
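To illustrate, here is a small usage sketch of these additions (a made-up example, assuming a JDK 15 or newer):

```java
public class Java15Demo {
    public static void main(String[] args) {
        // CharSequence.isEmpty() is a new default method, so it now works
        // on StringBuilder as well as on String.
        System.out.println(new StringBuilder().isEmpty()); // true

        // String.formatted() is an instance-method shorthand for String.format().
        System.out.println("n = %d".formatted(7)); // n = 7

        // Math.absExact() throws ArithmeticException on Integer.MIN_VALUE
        // instead of silently returning a negative value like Math.abs() does.
        System.out.println(Math.absExact(-42)); // 42
    }
}
```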

Previous posts: Java 8, 9, 10, 11, 12, 13 and 14.


By eatmore, history, 16 months ago, In English

Here I want to share my personal opinion of the Microsoft Q# coding contest. First, I'd like to thank the Microsoft Quantum team for making a great contest in an unusual format. All the problems were interesting (except the D problems; more about them below), and I'm looking forward to similar contests in the future. However, there are a number of issues that I'd like to see discussed and hopefully fixed.

The tools

The first problem with the Q# compiler that everyone trying to use it will see is that it is SLOW. On my machine, compiling a trivial Hello World program takes about 45 seconds. This is a well-known problem, reported both on GitHub and even here on Codeforces, and frankly it's a pain to wait that long after every fix to find out whether your code still compiles (or still doesn't).

But this is only the tip of the iceberg. Before the contest, I decided to read the source code of the compiler and the libraries to learn more about them (unfortunately, the documentation is not comprehensive). And the code I saw was not good: poorly written, overly complicated and inefficient. Everywhere I looked, there were bugs. Overall, I reported 8 issues before and during the contest, but given that I only looked through a fraction of the code, I'm sure there are many more.

Before looking at the code, I expected them to parse the entire file using either a parser automatically generated from a grammar or a manually written one. The reality is very different. Their compiler is designed for incremental parsing, that is, for re-parsing a file more efficiently when only a small part of it changes. To that end, they first split the file into lines, and then split the lines into "code fragments" (simple statements and parts of compound statements) by scanning the text for occurrences of the characters {, } and ;. Because these characters can also appear in string literals and comments, they try to identify those as well, but this code has a bug which makes it fail on nested string literals. Then they parse each fragment using a parser built with the FParsec library, but it is written in such a way that it re-parses many subexpressions multiple times. For example, when parsing bracketed constructs (such as tuples), they first scan the code to find the closing parenthesis, and then apply a parser to the substring between the current position and that parenthesis. Also, when parsing an expression, they at some point try to parse it as a function call (which first parses anything that can be used in place of a function name, such as a tuple or an array indexing expression), and then, if there is no function call, they parse the same subexpression again (not looking for a function call this time).

Overall, I think that the complexity of their parser outweighs any benefit from its ability to parse incrementally (which it doesn't use anyway when compiling from the command line). Another problem is that they "compile" Q# programs by converting them to C#. As a result, it is not possible to use C# keywords in Q# programs (that restriction is actually unnecessary, because in C# keywords can be used as identifiers by prefixing them with @, but perhaps the authors of the Q# compiler don't know that). Also, if you name your operations or variables in a certain way, the Q# compiler may produce C# code that doesn't compile or has unexpected semantics.

Quantum classification

Quantum classification, at least in the way it is implemented in Q#, is a joke. The systems that are actually used for classification, such as neural networks and decision trees, have a very useful property: with a sufficiently big model and enough training, they can learn to approximate an arbitrary function to an arbitrary precision. With quantum models, this is not possible. The set of functions that a quantum model can learn is very limited: specifically, quantum models can only recognize classes of the form $$$q(\mathbf{f})>0$$$, where $$$q$$$ is a quadratic form (a homogeneous polynomial of degree two) and $$$\mathbf{f}$$$ is a vector of features. Essentially, a quantum classifier is exactly as powerful as a linear classifier with threshold zero applied to all pairwise products of features (the proof that it is possible to make a quantum model that corresponds to every quadratic form is not obvious and is left as an exercise to the reader).

Classical preprocessing helps, but only a little. Adding constant elements (method 1) allows adding linear and constant terms (so now it can recognize the classes of the form $$$q(\mathbf{f})+\mathbf{a}\cdot\mathbf{f}+b>0$$$), which makes this method the most useful. Conversely, method 2 is completely useless, because every quadratic form of a tensor product of a feature vector and a constant vector is equivalent to a quadratic form of the feature vector alone. Method 3 essentially allows using degree-4 polynomials instead of degree-2 ones, but this is not very useful without constant terms, and method 4 gives the same as method 1 plus some pairwise products of features (but there is no way to choose which).
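The claim that such a classifier is a thresholded quadratic form, and that a constant feature (method 1) adds linear and constant terms, is easy to check classically. Below is a plain-Java sketch (not Q# code; the matrix values are made-up toy examples):

```java
public class QuadraticFormSketch {
    // Evaluate a homogeneous quadratic form q(f) = sum over i,j of a[i][j] * f[i] * f[j].
    static double quadraticForm(double[][] a, double[] f) {
        double q = 0;
        for (int i = 0; i < f.length; i++)
            for (int j = 0; j < f.length; j++)
                q += a[i][j] * f[i] * f[j];
        return q;
    }

    public static void main(String[] args) {
        // Toy 2-feature example: q(f) = f0^2 - f1^2 classifies by |f0| > |f1|.
        double[][] a = {{1, 0}, {0, -1}};
        System.out.println(quadraticForm(a, new double[]{2, 1}) > 0); // true
        System.out.println(quadraticForm(a, new double[]{1, 2}) > 0); // false

        // Method 1: append a constant feature 1. Cross terms with it act as
        // linear terms, and its square acts as a constant term, giving
        // q(f) + a.f + b in the original features.
        double[][] b = {{0, 0, 1}, {0, 0, 0}, {1, 0, -3}};
        // q(f0, f1, 1) = 2*f0 - 3, i.e. the linear threshold f0 > 1.5.
        System.out.println(quadraticForm(b, new double[]{2, 0, 1}) > 0); // true
        System.out.println(quadraticForm(b, new double[]{1, 0, 1}) > 0); // false
    }
}
```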

From all this, one can see that instead of training a quantum model, it may be more efficient to train the above-mentioned linear classifier and then compute an equivalent model. But I didn't do this. During the contest, I tried normal training (using TrainSequentialClassifier), but it took too long, so I instead made all the models by hand. The process is simple: first, use matplotlib to show the points on the screen, and manually find a description of the class that should be recognized. Then, convert it to a quantum model. This is not difficult: I only used Y rotations, and they are equivalent to rotations of a 2D vector.

One aspect not mentioned in the documentation is that it is the last qubit that is measured (in the Z basis) to get the result. The measurement computes the sum of squares of amplitudes of states where the last qubit is 0 and compares it with the sum of squares of amplitudes of states where the last qubit is 1.

When making a model, I used three tricks. The first trick is to use rotations by $$$\pi$$$ to swap states. By applying multiple controlled rotations, it is possible to permute the states in any way, but I used classical preprocessing to arrange the states so as not to require many swaps. The second: to make a sum of two numbers, I used a rotation by $$$\frac{\pi}{2}$$$ (essentially, a Hadamard gate). From two numbers, it produces their sum and their difference (up to a constant). Then, to get rid of the difference, I used the third trick: when performing a rotation by $$$\frac{\pi}{2}$$$ on a pair of states with amplitudes $$$(x,0)$$$, the result is two states with equal amplitudes, which cancel each other. With all this (constants from classical preprocessing, sums, the ability to discard numbers, and comparison of sums of squares), it is possible to solve all five D problems.
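The three tricks can be checked with ordinary 2D rotations. The sketch below is plain Java, not Q#; note the convention assumption that a gate rotation by $$$\theta$$$ rotates the amplitude pair by $$$\theta/2$$$ (conventions for Ry vary), so the "rotation by $$$\pi$$$" and "rotation by $$$\frac{\pi}{2}$$$" in the text correspond to amplitude rotations by $$$\frac{\pi}{2}$$$ and $$$\frac{\pi}{4}$$$ here:

```java
public class AmplitudeRotationSketch {
    // Rotate the 2D amplitude pair (x, y) by angle theta.
    static double[] rotate(double x, double y, double theta) {
        double c = Math.cos(theta), s = Math.sin(theta);
        return new double[]{c * x - s * y, s * x + c * y};
    }

    public static void main(String[] args) {
        double eps = 1e-9;

        // Trick 1: an amplitude rotation by pi/2 swaps the pair (up to sign).
        double[] sw = rotate(0.6, 0.8, Math.PI / 2);
        System.out.println(Math.abs(sw[0] + 0.8) < eps && Math.abs(sw[1] - 0.6) < eps); // true

        // Trick 2: a rotation by pi/4 produces the (scaled) difference and sum.
        double[] hs = rotate(3, 1, Math.PI / 4);
        double inv = 1 / Math.sqrt(2);
        System.out.println(Math.abs(hs[0] - (3 - 1) * inv) < eps); // true
        System.out.println(Math.abs(hs[1] - (3 + 1) * inv) < eps); // true

        // Trick 3: the same rotation applied to (x, 0) yields two equal
        // amplitudes, whose contributions to the measurement comparison cancel.
        double[] eq = rotate(5, 0, Math.PI / 4);
        System.out.println(Math.abs(eq[0] - eq[1]) < eps); // true
    }
}
```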

As I said, I am looking forward to future Q# contests. I hope that the next contest will have harder problems (29 people solving every problem is too many), better tools and no quantum classification problems.


By eatmore, history, 18 months ago, In English

By eatmore, history, 19 months ago, In English

Java 14 was released a few days ago. Here are some changes that may be useful for competitive programming.

  • Switch expressions, first introduced in Java 12, are now standard:
var whatToDo = switch (outcome) {
	"CE" -> "Check if you chose the right language";
	"RE", "WA" -> "Check your logic";
	"TL", "ML" -> "Check your complexity";
	"AC" -> "Celebrate";
	default -> "???";
};
  • Pattern matching for instanceof:
public boolean equals(Object o) {
	return (o instanceof Point p) && x == p.x && y == p.y;
}
  • Helpful NullPointerExceptions: now they include a message indicating which particular value was null.
  • Records:
record Fraction(int num, int den) {
	Fraction {
		if (den == 0) {
			throw new IllegalArgumentException("zero denominator");
		}
		if (den < 0) {
			num = -num;
			den = -den;
		}
	}

	Fraction add(Fraction o) {
		return new Fraction(num * o.den + den * o.num, den * o.den);
	}

	// ...
}
  • PrintStream.write(byte[]) and PrintStream.writeBytes(byte[]) may be used for fast output, but they are just shorthands for write(byte[], int, int).
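As a quick usage sketch (assuming a JDK where records are final, i.e. Java 16+), the Fraction record above behaves like this:

```java
public class FractionDemo {
    record Fraction(int num, int den) {
        Fraction {
            if (den == 0) {
                throw new IllegalArgumentException("zero denominator");
            }
            if (den < 0) {
                num = -num;
                den = -den;
            }
        }

        Fraction add(Fraction o) {
            return new Fraction(num * o.den + den * o.num, den * o.den);
        }
    }

    public static void main(String[] args) {
        // equals(), hashCode() and toString() are generated automatically.
        Fraction sum = new Fraction(1, 2).add(new Fraction(1, 3));
        System.out.println(sum);                            // Fraction[num=5, den=6]
        System.out.println(sum.equals(new Fraction(5, 6))); // true
        // The compact constructor normalizes the sign:
        System.out.println(new Fraction(1, -2));            // Fraction[num=-1, den=2]
    }
}
```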

If you know about other useful features, post them here.

Previous posts: Java 8, 9, 10, 11, 12 and 13.


By eatmore, history, 23 months ago, In English

Google Code Jam 2020 has been announced. Here is the schedule:

The finals will be held in Munich, Germany, the first Code Jam finals in mainland Europe.

The dates of other Google competitions, Hash Code and Kick Start, have been announced as well.

Unfortunately, there is no announcement regarding Distributed Code Jam, which probably means that the competition is dead. UPD: Google confirmed that there will be no DCJ in 2020.

UPD: Code Jam finals will be held online this year due to the COVID-19 pandemic.


By eatmore, history, 2 years ago, In English

Java 13 was released a few days ago. Here are some changes that may be useful for competitive programming.

  • Switch expressions, first introduced in Java 12. In Java 13, a value is returned from a switch expression using the yield statement:
String numType = switch (num) {
	0 -> "zero";
	1 -> "one";
	default -> {
		for (int i = 2; i < num; i++) {
			if (num % i == 0) {
				yield "composite";
			}
		}
		yield "prime";
	}
};
  • Text blocks: multi-line string literals, similar to Python:
String input = """
	3 3
	1 2
	1 3
	2 3""";
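One detail worth knowing: the common ("incidental") indentation, determined by the least-indented line, is stripped from a text block. A small sketch (on Java 13 this needs --enable-preview; text blocks are standard from Java 15):

```java
public class TextBlockDemo {
    public static void main(String[] args) {
        String input = """
                3 3
                1 2
                1 3
                2 3""";
        // The incidental indentation is removed, so the literal is exactly
        // the four input lines with no leading spaces:
        System.out.println(input.equals("3 3\n1 2\n1 3\n2 3")); // true
    }
}
```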

If you know about other useful features, post them here.

Previous posts: Java 8, 9, 10, 11 and 12.


By eatmore, history, 2 years ago, In English

Google Code Jam 2019 Round 2 will begin at 14:00 UTC. Top 1000 participants will advance to Round 3 and win a t-shirt.

Note that there will be no Distributed Code Jam this year.

Let's discuss the problems after the round is over.


By eatmore, history, 3 years ago, In English

This post in Code Jam group

Hi,

I want to discuss interactive problems in Code Jam. Code Jam didn't have interactive problems until recently, and there are still major differences between interactive problems in Code Jam and those in other competitions where such problems are more established.

One difference is in how sample interactions are presented. I think that in Code Jam, sample interactions are very inconvenient to read. For regular input and output, there is a place where you can read just the input and just the output, but for interactive problems, you have to read a mix of pseudocode, comments and the actual input/output.

Other competitions that have interactive problems usually use more concise formats, which are easier to read quickly. One possibility is to just present the input and the output separately, like here (problems H and I). This doesn't convey the relative order of the input and output lines, but it can be reconstructed from the description of the interaction protocol. Also, it is possible to use empty lines to show the order, like here (problem C). There are multiple approaches that can be used to show a sample interaction in a single column, like showing input and output lines in different colors, or prefixing them with different symbols (for example, left and right arrows). In all cases, it is possible to add an additional column for comments.

Another feature of interactive problems specific to Code Jam is that when a solution produces an incorrect result, the judge program communicates this to the solution and waits for it to terminate. This doesn't make sense. Once the solution produces an incorrect result, there is no point in continuing to run it, and in other competitions, it would be immediately terminated in such a situation. But apparently the Code Jam team expects contestants to write code that checks the judge's output and handles the case where it indicates that the solution's output is incorrect. This is useless, because it wouldn't make any solution pass that wouldn't pass otherwise, and that's all that matters for a solution. I for one don't write such code (my solutions always skip this part of the judge's output), but I still need to remember to read (and ignore) that line, something I don't have to do in other competitions.
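To make the "read and ignore" pattern concrete, here is a tiny Java sketch. The protocol and the "-1" error token are hypothetical, made up for illustration; they are not taken from any actual Code Jam problem:

```java
import java.util.Scanner;

public class JudgeReplySketch {
    // Hypothetical protocol: the judge answers each query with a number,
    // or with "-1" if the previous output was judged incorrect. The reply
    // must be read either way, or the solution may deadlock on a full pipe.
    static boolean okToContinue(Scanner in) {
        String reply = in.next();
        return !reply.equals("-1");
    }

    public static void main(String[] args) {
        // Simulated judge replies instead of real stdin, for illustration.
        Scanner in = new Scanner("5 -1");
        System.out.println(okToContinue(in)); // true: a normal reply
        System.out.println(okToContinue(in)); // false: the judge reported an error
    }
}
```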

Finally, I'd prefer sample interactions to be correct. It's simple: for regular problems, sample inputs and outputs are (almost) always correct, so why should it be any different for interactive problems? Yet, both of this year's interactive problems available so far have sample interactions where a solution provides an incorrect answer, apparently to demonstrate the judge's response to it (the right response, as I already said, is to terminate the solution with WA verdict).

What do you think of all this?


By eatmore, history, 3 years ago, In English

It was recently announced (in Russian) that Yandex.Algorithm will not be held this year. It is sad to see another great competition disappear.


By eatmore, history, 3 years ago, In English

Google Code Jam 2019 Qualification Round will start on April 5 at 23:00 UTC and last for 27 hours. Here is the official announcement:

Hi everyone,

Code Jam is back for its 16th year, kicking off with the Qualification Round in one week! We're excited to provide you with another season of intriguing problems (including some interactive ones) and a refreshed user experience. Register today for a chance to earn the coveted title of Code Jam Champion at the World Finals in San Francisco, California, and take home the grand prize of $15,000.

Here’s what you need to know about this year’s competition:

  • The 27-hour Online Qualification Round begins on Friday, April 5 @ 23:00 UTC; registration will be open from now until the round ends.
  • Registration is now a two-part process: first, you'll create a coding competitions profile, then you'll be prompted to complete registration for Code Jam specifically. Make sure you complete both steps and look out for a registration confirmation in the "Alerts" section of our site.
  • We hope you enjoy new and familiar features alike, such as "ask a question," testing your solution on our servers, and receiving a participation certificate. You'll receive a certificate after the last round you participate in (as long as you make at least one competitive attempt during the Qualification Round).
  • The top 1,000 contestants in Round 2 win a limited edition Code Jam t-shirt.
  • You can start warming up with previous Code Jam problems.

Visit g.co/codejam to register and review the updated Code Jam FAQs, Rules and Terms. See you on the scoreboard!

P.S. Spread the word about #CodeJam with these resources: Code Jam video, 2019 flyer and a Google Keyword blog post.

And another one:

Reminder: the 2019 Code Jam Qualification Round begins later today. We have a few exciting updates and helpful reminders to share:

  • We've made several updates to our FAQ, and are proud to offer 20 additional languages this season! Please review all of the FAQ sections before the round — especially "Competing," "Coding," and "Interactive Problems," which contain important details about supported languages, updated tools and testing on our servers.

  • The round will be open for 27 hours, starting April 5 at 23:00 UTC and ending April 7 at 2:00 UTC. You can spend as much or as little of the 27 hours competing as you would like, but you must attain a final score of at least 30 points to move on to Code Jam’s Round 1. If you're a new competitor who is comfortable with programming, the odds are good that you will be able to earn enough points to advance within a few hours.

  • You can prepare by: working through past problems, reviewing the Code Jam FAQ & Rules and Terms. If you have questions during the round, use the "Ask a Question" feature in the competition interface. If you can't find an answer in the FAQ or Rules, reach out to codejam@google.com.

Good luck!

The Code Jam Team

This year, Code Jam has a new competition interface. To register for Code Jam, you will have to first create a profile in the new system (even if you participated in previous years), and then register for Code Jam itself.

Also, the list of supported languages was significantly expanded. The supported languages are: Bash, C, C++, C#, Clojure, Dart, F#, Go, Groovy, Haskell, Java, JavaScript, Julia, Kotlin, Lisp, Lua, Objective-C, OCaml, Octave, Pascal, Perl, PHP, Python 2, Python 3, PyPy 2, R, Ruby, Rust, Scala, Swift, TypeScript, and Visual Basic.

You may want to read some useful posts about Code Jam:

Good luck & have fun!


By eatmore, history, 3 years ago, In English

At least, that's their policy:

Just so you and everyone else is aware, we have the policy of not monitoring CJ feedback on any external (to Google) forum. If you or someone you know has feedback that can/should be actioned, please ask them to either use this list or contact us privately at codejam@google.com. They can also use our social channels, but it's a less efficient form of communication for system-type feedback.

So, if you have a technical problem during the upcoming Qualification Round, rather than posting about it here on Codeforces, email it to codejam@google.com. Or do both.


By eatmore, history, 3 years ago, In English

This post in Code Jam group

Hi all! This year, Google has again decided to change the Code Jam user interface. The new interface has already been used for Code Jam to I/O for Women and Kick Start rounds, and from what I saw, it is even worse than last year's. Let's start.

This is what I saw when I opened the Code Jam main page. This year it uses an even longer domain name, codingcompetitions.withgoogle.com, and some text seems to be missing. After some investigation, I found that this occurs when a browser is configured to block third-party cookies. I configure my browser this way because I don't want some companies to track me around, and for nearly all sites, this causes no issues. On some sites, blocking third-party cookies causes single sign-on to stop working, but this is understandable. Code Jam's website is the first one I've seen where blocking third-party cookies breaks so much that you can't even read the text (note: this seems to have been fixed, and the website now displays a warning instead).

After getting the text to appear, I struggled with some login problems and finally got to the competition interface (this screenshot was taken after the round, but it looked almost like that during the competition as well):

Usually, when you enter a round, you expect to see the problems. This year, you start a round with a scoreboard. To get to the problems, you need to click one of the "Open problem" buttons. Doesn't matter which one: you would be able to switch between problems from the dashboard. Here it is:

As you see, in this new version there is a code editor that always takes up half of your screen space. No way to resize or hide it. Doesn't matter if you want to use it or not. I, for instance, prefer to use a real IDE, because it has syntax highlighting and completion and refactoring, and I can run the code locally, and IT DOESN'T FORCE ME TO USE WHITE TEXT ON BLUE BACKGROUND. Also, in the old interface, there was a sidebar for switching between problems, and in the older version it also showed submission statistics and the number of new announcements, but now this is all gone.

Side note: suppose that you just want to read the problems, and all you got is a tablet.

Sorry, the editor, despite being completely useless, still wastes half of your screen space. Also, looks like the problem name doesn't fit into the provided space.

Back to the previous screen.

Do you see a submit button? Me neither. It took me quite some time to realize that to be able to submit your solutions, you have to register TWICE: once to enter your information into the new website, and once more to actually register for the competition. Otherwise, you can type the code, but the panel with the submit button (and a couple of other buttons) is hidden. After I registered properly, the panel appeared (but my code disappeared; fortunately, I had saved it to a file). Then I could finally submit it.

Now, back to the scoreboard.

This is the scoreboard of this year's Code Jam to I/O for Women round. It has four problems, but you can only see three at a time. Apparently, the person who designed this is as bad at using your screen space as Windows 10 is at using your RAM. Perhaps they should learn from the guys who designed this:

The old version easily fits six problems, and has space for one or two more. And even the font size is almost the same! Now, on top of the scoreboard, there are some big useless graphs. To see the actual scoreboard, you have to scroll down. For comparison, the old interface displays submission statistics below the scoreboard, as an additional row. Codeforces is doing the same.

Another change is the number of rows you can see on a single scoreboard page. On Codeforces, you can see up to 200 rows at once, which is reasonable for a round with several thousand participants, and works for smaller rounds as well. In the old Code Jam interface, it was possible to see 30 rows at once. Now, there is a selector that switches between 10 and 20, and it didn't even work until you switched between pages (this seems to have been fixed recently).

Here is another example of a well designed scoreboard:

Also, as I scroll the scoreboard, I have about 40% of my screen covered by the heading which doesn't scroll, which is way too much.

Now about round overview and problem analysis. Instead of being in a single convenient place, this information is now sprinkled around the interface. To see the overview, you need to click a slider on top of the scoreboard, and the overview will appear in place of those big useless graphs. To see the analysis, go to a problem and click an analysis "tab" above the text. Again, I like the old version so much more.

All in all, the new interface is worse than the old one in nearly every respect. So, what can be done to make it better? Here are some suggestions:

  1. Looks like the Code Jam team has already noticed that some people block third-party cookies, and made the website show a warning in this case. Still, it would be awesome if it were possible to actually log in without third-party cookies or, at least, with first-party isolation (currently, no warning is shown in this case, but login doesn't work).
  2. When entering a round, users should be directed to the problems, not to the scoreboard. This is how it worked in all previous versions, and this is how it should work. Also, the old version had a nice feature: it was possible to open the dashboard in advance of the round, and have the problems load automatically when it starts. I'd like to have this feature in the new version as well.
  3. Something has to be done with the code editor. Perhaps it should be put below the problem statement, or into a separate tab. I would also suggest to allow those users who don't need an editor to hide it completely, and to switch to an interface that supports submitting only from a file (this is how it was done before 2018).
  4. The editor should not be displayed if the user is not logged in, or not registered for the round. Instead, they should see a message stating that. Currently, such a message is displayed in the editor, and only if the user is not logged in.
  5. There should be a sidebar on the left, with tabs for switching between the problems. It may show additional information as well. The old version had submission statistics and a mini-scoreboard in a sidebar, which I find quite useful.
  6. In the new interface, there is a top bar, which doesn't scroll. I think that it is not useful enough to always be on the screen, so it should be made to scroll with the page.
  7. There should be an option to receive desktop notifications when a submission is judged. Also, there should be notifications when there is an announcement, and when a problem statement is changed. In previous years, there were multiple occasions when problem statements were changed during the round without any notification whatsoever, and it was necessary to reload the page to see the changes.
  8. The scoreboard should be made to look better. It should be made to show more than three problems at once, and preferably at least 100 rows per page. Submission statistics should appear below the scoreboard, not above. The part of the header that remains on the top of the screen while the page is scrolled should be at most as tall as a single row.
  9. Round overview should be on its own page, not tucked above the scoreboard.
  10. Switching between the problems and the scoreboard should be more straightforward. Currently, to switch to the scoreboard, you have to press a left arrow in the top left corner of the screen, which is pretty unintuitive. The old "Contest scoreboard"/"Contest dashboard" links are much better.


By eatmore, history, 3 years ago, In English

The times of (most of) the TCO19 Algorithm rounds have been published. Here they are:

Also, here are the times of TCO19 Marathon rounds:


By eatmore, history, 3 years ago, In English

Google Code Jam 2019 has been announced. You may read the schedule. To register, you'll have to wait until March 5.

Unfortunately, there will be no Distributed Code Jam in 2019. I think this has something to do with the DCJ platform not receiving any substantial updates in the last three years. Maybe the people who made it have already left, and no one can keep it working. It will be sad if Distributed Code Jam turns out to be gone for good.


By eatmore, history, 3 years ago, In English

Previous posts: Java 8, Java 9, Java 10.

Java 11 was released a few days ago. This post lists some improvements in this version that are useful for competitive programming.

  • New String methods: String.strip(), String.stripLeading(), String.stripTrailing(), String.isBlank() and String.lines() can help with parsing, while String.repeat() can be used to generate output.
  • StringBuilder and StringBuffer instances are now Comparable. Also, CharSequence.compare() can be used to compare String, StringBuilder and StringBuffer values without conversion.
  • Null streams: InputStream.nullInputStream(), OutputStream.nullOutputStream(), Reader.nullReader() and Writer.nullWriter() might be useful for testing.
  • It is now possible to run source files directly: java Solution.java.
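A small usage sketch of the string-related additions (a made-up example, assuming a JDK 11 or newer):

```java
public class Java11Demo {
    public static void main(String[] args) {
        // strip() is a Unicode-aware trim:
        System.out.println("  x\t".strip());                 // x
        System.out.println(" \t ".isBlank());                // true
        // lines() splits a string lazily into a Stream of lines:
        System.out.println("1 2\n3 4\n5 6".lines().count()); // 3
        // repeat() is handy for generating output:
        System.out.println("ab".repeat(3));                  // ababab
        // StringBuilder is now Comparable:
        System.out.println(new StringBuilder("a").compareTo(new StringBuilder("b")) < 0); // true
    }
}
```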

As before, if you find any other relevant enhancements, please post them here.


By eatmore, history, 3 years ago, In English

Distributed Code Jam 2018 final round is already running. You can watch the scoreboard here. It's strange that no one has written about it earlier.

Tags dcj

By eatmore, history, 3 years ago, In English

Distributed Code Jam 2018 round 1 starts on Sunday, June 10th at 14:00 UTC and lasts 3 hours. This round is open for everyone who advanced to round 3 of the regular Code Jam. Top 20 contestants will earn a trip to the final round in Toronto.

This year, Google will use the old contest system for the Distributed Code Jam rounds. Only C++ and Java are supported, and the compiler versions are ancient: GCC 4.7.2 for C++ and Java 7. Beware that GCC 4.7.2 was released in 2012 and doesn't support many newer C++ features. If you are using Java, your main class must be named Main, not Solution.

As some people found out, you must register for Distributed Code Jam separately; this wasn't required if you already had your data in the old system. In any case, you may visit this page to check that your data is there.


By eatmore, history, 3 years ago, In English

Google Code Jam 2018 round 3 starts on Saturday, June 9th at 14:00 UTC and lasts 2.5 hours. Top 25 contestants will earn a trip to the final round in Toronto.

Also don't forget about Distributed Code Jam round 1, which starts a day later.
