omg hi!
I will add implementations soon.
UPD: Implementations are added
1504E - Travelling Salesman Problem
omg hi!
I am pleased to invite you to participate in Codeforces Round #712 (Div. 1) and Codeforces Round #712 (Div. 2)! You will be given 6 problems and 2 hours 15 minutes to solve them. I'm happy to announce the theme of this round is déjà vu!
I would like to thank the following people:
Make sure you read all problems, and I'm happy to announce the theme of this round is déjà vu! Good luck, have fun!
The score distribution will be announced immediately because you deserve it ❤️:
Div. 1: 750 — 1000 — 1250 — 1750 — 2500 — 4000
Div. 2: 500 — 1000 — 1750 — 2000 — 2250 — 3000
UPD: Editorial
Given a set $$$S$$$ of $$$n$$$ points in the 2D plane, the Voronoi diagram is a partition of the plane into regions. The region associated with a point $$$p\in S$$$ contains precisely the points $$$q$$$ such that $$$q$$$ is closer to $$$p$$$ than any other point in $$$S$$$. In other words, a point $$$q$$$ belongs to the region of its nearest neighbor.
The Voronoi diagram can be represented by the planar graph of boundaries between regions. A vertex in this graph is a point where three or more segments meet (or a point at infinity), and an edge is a segment connecting two of these vertices. From the way the regions are defined, a segment consists of the points equidistant to two points in $$$S$$$, so it is part of their perpendicular bisector. Similarly, a Voronoi vertex is equidistant to three or more points in $$$S$$$, so it is their circumcenter. Recall that the circumcenter is the intersection of the perpendicular bisectors.
The Delaunay triangulation is the dual graph of the Voronoi diagram. That is, a Voronoi vertex is a Delaunay face and a Delaunay vertex is a Voronoi region. A Delaunay edge connects two points if and only if their corresponding Voronoi regions share a border.
In the following image, the set $$$S$$$ corresponds to the black points. The red vertices and edges are the Voronoi diagram. The black points and edges are the Delaunay triangulation.
The Delaunay triangulation only involves edges between existing points, while the Voronoi diagram creates new vertices and needs a way to represent edges going infinitely in one direction. For this reason, it is often more convenient to compute the Delaunay triangulation and only convert to the Voronoi diagram if needed.
The definition of the Voronoi diagram immediately suggests applications. Given a set $$$S$$$ of $$$n$$$ points and $$$m$$$ query points $$$p_1,\ldots, p_m$$$, we can report, for each query point, its nearest neighbor in $$$S$$$. This can be done in $$$O((n + m)\log(n+m))$$$ offline by sweeping over the Voronoi diagram and the query points, or online with persistent data structures.
Other than this, the Delaunay triangulation has many amazing properties of its own that can make it extremely useful for solving problems.
Because this is geometry, there are some degenerate cases I have hidden from you up to this point. The first case is when all the points in $$$S$$$ lie on a single line. In this case, every Voronoi edge connects two points at infinity, and the diagram is disconnected (in all other cases it is connected). Also, the dual graph is no longer a triangulation; we may still choose to represent it as the path of edges connecting consecutive points.
Another nasty case is co-circular points. Here, there are Voronoi vertices where more than three segments intersect. In this case, the dual graph has a face with more than three sides. However, we want it to be a triangulation, so we can also say that any triangulation of such faces is valid, and we will allow non-unique Delaunay triangulations.
Many algorithms exist to compute the Delaunay triangulation and/or the Voronoi diagram of a set of points. Each has its advantages and disadvantages.
In the flip approach, you start with an arbitrary triangulation of the points. Then you repeatedly flip the shared edge of adjacent triangle pairs until the triangulation has the Delaunay property. The main advantage of this method is its simplicity: an arbitrary triangulation can be constructed with an incremental convex hull algorithm. The disadvantage is that it could require $$$\Omega(n^2)$$$ flips in the worst case.
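For concreteness, here is a sketch (my own, not part of the original survey) of the in-circle predicate that drives the flips: with $$$a,b,c$$$ in counter-clockwise order, the shared edge should be flipped exactly when the fourth point $$$d$$$ lies strictly inside the circumcircle of $$$abc$$$.
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
typedef __int128 lll;
// In-circle predicate (a standard lifted-determinant formulation, sketched
// here for illustration). With a, b, c in CCW order, returns true iff d is
// strictly inside the circumcircle of abc, i.e. the edge opposite d violates
// the Delaunay condition and should be flipped.
// __int128 keeps the determinant exact for coordinates up to around 1e9.
bool in_circle(ll ax, ll ay, ll bx, ll by, ll cx, ll cy, ll dx, ll dy) {
    lll a1 = ax - dx, a2 = ay - dy;
    lll b1 = bx - dx, b2 = by - dy;
    lll c1 = cx - dx, c2 = cy - dy;
    lll a3 = a1 * a1 + a2 * a2, b3 = b1 * b1 + b2 * b2, c3 = c1 * c1 + c2 * c2;
    lll det = a1 * (b2 * c3 - b3 * c2) - a2 * (b1 * c3 - b3 * c1) + a3 * (b1 * c2 - b2 * c1);
    return det > 0;
}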
In the incremental approach, you always maintain the Delaunay triangulation of a subset of the points and insert the points one at a time. When a point is inserted, you delete all triangles whose circumcircle contains the newly added point. Deleting this connected set of triangles exposes a face; for each edge of this exposed face, you add a triangle formed by that edge and the new point. This gives the straightforward $$$O(n^2)$$$ Bowyer-Watson algorithm.
If we insert the points in a random order, we can significantly reduce the number of triangle insertions and deletions. But to get expected $$$O(n\log n)$$$ runtime, we must also avoid touching the unaffected triangles, which makes the implementation far more challenging: we need to maintain a search structure over the triangles in order to locate which one contains each new point.
The advantage of this method is that it achieves expected $$$O(n\log n)$$$ runtime with only integer computations, and the construction is online. The disadvantage is that it has a bad constant, isn't deterministic, and the triangle search structure can make for a tricky implementation.
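Here is a minimal sketch of the quadratic Bowyer-Watson algorithm described above (my own illustration; the super-triangle bounds and the absence of degeneracy handling are simplifying assumptions):
#include <bits/stdc++.h>
using namespace std;
typedef long double ld;
struct P { ld x, y; };
// Orientation-independent circumcircle test: is d strictly inside
// the circumcircle of triangle abc?
bool in_cc(P a, P b, P c, P d) {
    ld o = (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
    ld a1 = a.x - d.x, a2 = a.y - d.y;
    ld b1 = b.x - d.x, b2 = b.y - d.y;
    ld c1 = c.x - d.x, c2 = c.y - d.y;
    ld det = (a1 * a1 + a2 * a2) * (b1 * c2 - b2 * c1)
           - (b1 * b1 + b2 * b2) * (a1 * c2 - a2 * c1)
           + (c1 * c1 + c2 * c2) * (a1 * b2 - a2 * b1);
    return (o > 0 ? det : -det) > 0;
}
// O(n^2) Bowyer-Watson. Indices n, n+1, n+2 form a super-triangle assumed
// to enclose all input points (coordinates well below 1e9 in magnitude).
vector<array<int, 3>> bowyer_watson(vector<P> pts) {
    int n = pts.size();
    pts.push_back({-3e9L, -3e9L});
    pts.push_back({3e9L, -3e9L});
    pts.push_back({0.0L, 3e9L});
    vector<array<int, 3>> tris = {{n, n + 1, n + 2}};
    for (int i = 0; i < n; i++) {
        // delete all triangles whose circumcircle contains the new point;
        // a cavity edge is one that appears in exactly one deleted triangle
        map<pair<int, int>, int> cnt;
        vector<array<int, 3>> keep;
        for (auto &t : tris) {
            if (in_cc(pts[t[0]], pts[t[1]], pts[t[2]], pts[i])) {
                for (int j = 0; j < 3; j++) {
                    int u = t[j], v = t[(j + 1) % 3];
                    cnt[{min(u, v), max(u, v)}]++;
                }
            } else keep.push_back(t);
        }
        tris = keep;
        // repair the exposed face: a fan of triangles from the new point
        for (auto &[e, c] : cnt)
            if (c == 1) tris.push_back({e.first, e.second, i});
    }
    // discard triangles that touch the super-triangle
    tris.erase(remove_if(tris.begin(), tris.end(), [&](auto &t) {
        return max({t[0], t[1], t[2]}) >= n;
    }), tris.end());
    return tris;
}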
The divide and conquer algorithm first sorts the points by $$$x$$$-coordinate. Then it splits the points into two sets, recursively computes their Delaunay triangulations, and merges them together. As long as you can perform the merge in $$$O(n)$$$ time, the overall algorithm is $$$O(n\log n)$$$. When merging the triangulations, it is actually quite hard to keep track of the important adjacency information. The most popular strategy is perhaps the Quad-Edge data structure.
The advantage is that this gives a deterministic $$$O(n\log n)$$$ algorithm with only integer computations. The disadvantage is the complexity of the quad-edge data structure and the costly memory usage that comes with it.
An interesting fact is that the problem of Delaunay triangulation can be reduced to 3D convex hull. If for each point $$$(x,y)$$$ we also give it the $$$z$$$-coordinate $$$x^2+y^2$$$ and compute the 3D convex hull, then the downward-facing convex hull faces correspond exactly to the Delaunay triangles. The advantage is that if you happen to have a good 3D convex hull implementation, you get Delaunay triangulation for free. The disadvantages are that $$$O(n\log n)$$$ 3D convex hull is a whole beast of its own, and the large $$$z$$$-coordinates will exacerbate precision and overflow errors in the 3D convex hull implementation. The main strategies for 3D convex hull are incremental and divide-and-conquer algorithms, which are strictly more complicated than their specializations to the case of Delaunay triangulation.
My previous blog on 3D convex hull gave an $$$O(n\log n)$$$ randomized incremental implementation, but it turns out to have quite a bad constant for practical use.
In this blog, I will focus on Fortune's Algorithm. It takes a sweep-line approach to the problem, and achieves a deterministic $$$O(n\log n)$$$ time complexity. The main disadvantage is that computations are done with floating points instead of integers, opening up the door to precision errors. However, keep in mind that the algorithms using only integers have a risk of integer overflow even on reasonably small constraints.
The main challenge in designing a sweep-line algorithm for Voronoi diagrams is that a point after the sweep-line can affect the diagram before the sweep-line. That is, if the sweep-line denotes the discovery time of a point, we cannot know the entire Voronoi diagram before the sweep-line. The way Fortune's algorithm overcomes this challenge is by maintaining the Voronoi diagram only for the area we can be certain about, regardless of what points lie after the sweep-line.
Now, which points can be affected by what lies past the sweep-line? Well, such a point must be closer to the sweep-line than to any point in $$$S$$$ to the left of the sweep-line. Recall that the set of points equidistant to a point and a line is given by a parabola, where the point is called the focus and the line is called the directrix. If the focus is a point $$$p\in S$$$ to the left of the sweep-line and the directrix is the sweep-line, then for everything to the left of this parabola, we can be certain of its region. Now, if we draw such a parabola for every point $$$p\in S$$$ to the left of the sweep-line, everything to the right of all the parabolas is precisely the uncertain region. The boundary between the certain and uncertain regions is called the beach line, and it consists of parabolic arcs.
We are ready to understand Fortune's algorithm abstractly. We scan a sweep-line from left to right, maintaining the combinatorial structure of the beach line. The parabolas are only for visual aid, and the ordered list of foci is what is actually stored. As points get added and parabolic arcs shrink to length $$$0$$$, these events tell us precisely the structure of the Voronoi diagram and Delaunay triangulation.
There are two types of events which change the beach line structure: point events and vertex events.
In a point event, the sweep-line discovers a point $$$p\in S$$$ and inserts its parabolic arc into the beach line. In order to do this efficiently, we need to search for where it belongs in the beach line. Then it will split the arc it sees into two. An example of a point event is shown below (here the sweep-line goes from top to bottom instead).
In a vertex event, a parabolic arc disappears from the beach line, corresponding to a Voronoi vertex. To handle a vertex event, we simply remove the arc whose length shrinks to $$$0$$$. Such an event is generated dynamically by a triple of arcs adjacent on the beach line, so every time we modify the beach line, we update the vertex events of the affected arcs. To compute the time at which a vertex event fires, we take the circumcircle of the three points generating the event; the $$$x$$$-coordinate of the rightmost point of this circle is the time at which we should process the event.
In summary, the algorithm maintains a priority queue of events. Initially, we push all point events into the priority queue. When processing a point event, we find the correct arc in the beach line, split it up, add the new point, and add new vertex events if needed. When processing a vertex event, we add the generating triple as a Delaunay triangle, remove the arc that should disappear, and add new vertex events if needed.
My code doesn't work when two points have the same $$$x$$$ coordinate. This is handled simply by rotating all input points by $$$1$$$ radian and praying to the geometry gods.
#include <bits/stdc++.h>
#define ll long long
#define sz(x) ((int) (x).size())
#define all(x) (x).begin(), (x).end()
#define vi vector<int>
#define pii pair<int, int>
#define rep(i, a, b) for(int i = (a); i < (b); i++)
using namespace std;
template<typename T>
using minpq = priority_queue<T, vector<T>, greater<T>>;
using ftype = long double;
const ftype EPS = 1e-12, INF = 1e100;
struct pt {
ftype x, y;
pt(ftype x = 0, ftype y = 0) : x(x), y(y) {}
// vector addition, subtraction, scalar multiplication
pt operator+(const pt &o) const {
return pt(x + o.x, y + o.y);
}
pt operator-(const pt &o) const {
return pt(x - o.x, y - o.y);
}
pt operator*(const ftype &f) const {
return pt(x * f, y * f);
}
// rotate 90 degrees counter-clockwise
pt rot() const {
return pt(-y, x);
}
// dot and cross products
ftype dot(const pt &o) const {
return x * o.x + y * o.y;
}
ftype cross(const pt &o) const {
return x * o.y - y * o.x;
}
// length
ftype len() const {
return hypotl(x, y);
}
// compare points lexicographically
bool operator<(const pt &o) const {
return make_pair(x, y) < make_pair(o.x, o.y);
}
};
// check if two vectors are collinear. It might make sense to use a
// different EPS here, especially if points have integer coordinates
bool collinear(pt a, pt b) {
return abs(a.cross(b)) < EPS;
}
// intersection point of lines ab and cd. Precondition is that they aren't collinear
pt lineline(pt a, pt b, pt c, pt d) {
return a + (b - a) * ((c - a).cross(d - c) / (b - a).cross(d - c));
}
// circumcircle of points a, b, c. Precondition is that abc is a non-degenerate triangle.
pt circumcenter(pt a, pt b, pt c) {
b = (a + b) * 0.5;
c = (a + c) * 0.5;
return lineline(b, b + (b - a).rot(), c, c + (c - a).rot());
}
// x coordinate of sweep-line
ftype sweepx;
// an arc on the beach line is given implicitly by the focus p,
// the focus q of the following arc, and the position of the sweep-line.
struct arc {
mutable pt p, q;
mutable int id = 0, i;
arc(pt p, pt q, int i) : p(p), q(q), i(i) {}
// get y coordinate of intersection with following arc.
// don't question my magic formulas
ftype gety(ftype x) const {
if(q.y == INF) return INF;
x += EPS;
pt med = (p + q) * 0.5;
pt dir = (p - med).rot();
ftype D = (x - p.x) * (x - q.x);
return med.y + ((med.x - x) * dir.x + sqrtl(D) * dir.len()) / dir.y;
}
bool operator<(const ftype &y) const {
return gety(sweepx) < y;
}
bool operator<(const arc &o) const {
return gety(sweepx) < o.gety(sweepx);
}
};
// the beach line will be stored as a multiset of arc objects
using beach = multiset<arc, less<>>;
// an event is given by
// x: the time of the event
// id: If >= 0, it's a point event for index id.
// If < 0, it's an ID for a vertex event
// it: if a vertex event, the iterator for the arc to be deleted
struct event {
ftype x;
int id;
beach::iterator it;
event(ftype x, int id, beach::iterator it) : x(x), id(id), it(it) {}
bool operator<(const event &e) const {
return x > e.x;
}
};
struct fortune {
beach line; // self explanatory
vector<pair<pt, int>> v; // (point, original index)
priority_queue<event> Q; // priority queue of point and vertex events
vector<pii> edges; // delaunay edges
vector<bool> valid; // valid[-id] == true if the vertex event with corresponding id is valid
int n, ti; // number of points, next available vertex ID
fortune(vector<pt> p) {
n = sz(p);
v.resize(n);
rep(i, 0, n) v[i] = {p[i], i};
sort(all(v)); // sort points by coordinate, remember original indices for the delaunay edges
}
// update the remove event for the arc at position it
void upd(beach::iterator it) {
if(it->i == -1) return; // doesn't correspond to a real point
valid[-it->id] = false; // mark existing remove event as invalid
auto a = prev(it);
if(collinear(it->q - it->p, a->p - it->p)) return; // doesn't generate a vertex event
it->id = --ti; // new vertex event ID
valid.push_back(true); // label this ID true
pt c = circumcenter(it->p, it->q, a->p);
ftype x = c.x + (c - it->p).len();
// event is generated at time x.
// make sure it passes the sweep-line, and that the arc truly shrinks to 0
if(x > sweepx - EPS && a->gety(x) + EPS > it->gety(x)) {
Q.push(event(x, it->id, it));
}
}
// add Delaunay edge
void add_edge(int i, int j) {
if(i == -1 || j == -1) return;
edges.push_back({v[i].second, v[j].second});
}
// handle a point event
void add(int i) {
pt p = v[i].first;
// find arc to split
auto c = line.lower_bound(p.y);
// insert new arcs. passing the following iterator gives a slight speed-up
auto b = line.insert(c, arc(p, c->p, i));
auto a = line.insert(b, arc(c->p, p, c->i));
add_edge(i, c->i);
upd(a); upd(b); upd(c);
}
// handle a vertex event
void remove(beach::iterator it) {
auto a = prev(it);
auto b = next(it);
line.erase(it);
a->q = b->p;
add_edge(a->i, b->i);
upd(a); upd(b);
}
// X is a value exceeding all coordinates
void solve(ftype X = 1e9) {
// insert two points that will always be in the beach line,
// to avoid handling edge cases of an arc being first or last
X *= 3;
line.insert(arc(pt(-X, -X), pt(-X, X), -1));
line.insert(arc(pt(-X, X), pt(INF, INF), -1));
// create all point events
rep(i, 0, n) {
Q.push(event(v[i].first.x, i, line.end()));
}
ti = 0;
valid.assign(1, false);
while(!Q.empty()) {
event e = Q.top(); Q.pop();
sweepx = e.x;
if(e.id >= 0) {
add(e.id);
}else if(valid[-e.id]) {
remove(e.it);
}
}
}
};
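For completeness, here is a hypothetical usage sketch (mine, not part of the original post): rotate the input by 1 radian as described above, run the sweep, and print the Delaunay edges, which fortune reports in terms of the original indices.
// assumes this follows the fortune struct above
int main() {
    int n;
    cin >> n;
    vector<pt> p(n);
    const ftype ang = 1.0; // rotate by 1 radian to dodge equal x coordinates
    for (pt &q : p) {
        ftype x, y;
        cin >> x >> y;
        q = pt(x * cosl(ang) - y * sinl(ang), x * sinl(ang) + y * cosl(ang));
    }
    fortune F(p);
    F.solve(); // pass a larger X here if coordinates exceed 1e9
    for (auto &e : F.edges) cout << e.first << " " << e.second << "\n";
}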
Let's compute what point in the polygon is farthest from its closest polygon vertex. Suppose that point is $$$p$$$. If it does not lie on a Voronoi edge, we can move it strictly farther from its nearest neighbor. If it lies on a Voronoi edge, we can move it either to a Voronoi vertex or the intersection of a Voronoi edge and a polygon edge. Check only these points.
The Euclidean MST has the property that the path from $$$s$$$ to $$$t$$$ minimizes the maximum edge weight over all $$$s$$$-$$$t$$$ paths. So we compute the Delaunay triangulation, build the Euclidean MST from its edges, and the problem reduces to querying the maximum edge weight on a tree path. Use binary lifting, for example.
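A sketch of the middle step (helper names are mine, and I assume the Delaunay edges come from something like the fortune code above): run Kruskal's algorithm on the $$$O(n)$$$ Delaunay edges to obtain the Euclidean MST.
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
struct DSU {
    vector<int> p;
    DSU(int n) : p(n, -1) {}
    int find(int x) { return p[x] < 0 ? x : p[x] = find(p[x]); }
    bool unite(int a, int b) {
        a = find(a), b = find(b);
        if (a == b) return false;
        if (p[a] > p[b]) swap(a, b);
        p[a] += p[b], p[b] = a;
        return true;
    }
};
// pts: the input points; edges: Delaunay edges as index pairs.
// Returns the Euclidean MST edges as (squared length, u, v).
vector<tuple<ll, int, int>> euclidean_mst(const vector<pair<ll, ll>> &pts,
                                          const vector<pair<int, int>> &edges) {
    vector<tuple<ll, int, int>> es, mst;
    for (auto &[u, v] : edges) {
        ll dx = pts[u].first - pts[v].first;
        ll dy = pts[u].second - pts[v].second;
        es.emplace_back(dx * dx + dy * dy, u, v); // sort by squared length
    }
    sort(es.begin(), es.end());
    DSU d(pts.size());
    for (auto &[w, u, v] : es)
        if (d.unite(u, v)) mst.emplace_back(w, u, v);
    return mst;
}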
We must pick a point on the circle's circumference lying in the best Voronoi region. The Voronoi diagram divides the circumference into $$$O(K)$$$ arcs, and it suffices to check one point from each arc.
I hope you enjoyed the problems! Implementations will be added soon.
UPD: Implementations are added!
1450C1 - Errichto's Tic-Tac-Toe (Easy Version)
1450C2 - Errichto's Tic-Tac-Toe (Hard Version)
1450H1 - Connecting with Threads (Easy Version)
1450H2 - Connecting with Threads (Hard Version)
¡Buenos días! (That's Spanish for "what's up homies")
On Dec/06/2020 17:35 (Moscow time) we will host Codeforces Global Round 12.
It is the sixth round of a 2020 series of Codeforces Global Rounds. The rounds are open and rated for everybody.
The prizes for this round:
The prizes for the 6-round series in 2020:
Thanks to XTX, which supported the global rounds initiative in 2020!
The problems were written and prepared by smart Cuban Devil and stupid Americans fivefourthreeone and 1-gon.
We would like to distribute our thanks equally to the following people who made this round possible.
You will have 3 hours to solve 8 problems (and 2 subtasks). If you want to lose rating, then we encourage you not to read all the problems.
May rating be distributed from each according to his ability, to each according to his needs!
UPD: Here's the score distribution. Good luck, have fun!
UPD: Hope you enjoyed the problems! Editorial is posted.
UPD: System testing finished, congrats to the winners!
The upcoming round has an unusual starting time. In order to keep my performance slightly less bad than usual, I will apply the "destroy my sleep schedule" strategy. For some unknown reason I have decided to document this by beginning my streaming career.
Randomization is known to be a powerful algorithmic technique, and I want to start a discussion about it specifically in the context of competitive programming. What extra considerations should we make about how online judges work when analyzing our randomized algorithms, and approving such problems for an official contest? In particular, I want to discuss algorithms whose running times are uncertain.
Let's say that the running time of an algorithm is a random variable $$$X$$$. Typically, we might say the algorithm is acceptable if the expected running time is a fraction of the time limit: $$$E(X)=T/k$$$. With Markov's inequality, we can say $$$P(X>T)\le 1/k$$$. That is, the probability of failing a test case due to timeout is at most $$$1/k$$$.
Online judges usually run your program a few times before giving the Time Limit Exceeded verdict. This fact is known to help randomized algorithms, as mentioned here: https://codeforces.com/blog/entry/78817
Let's apply this in the context of random running times. If your program is run $$$n$$$ times, the actual probability of failing a test case is at most $$$1/k^n$$$. If there are $$$m$$$ test cases in total, the probability of failing at least one of them is $$$1-(1-1/k^n)^m$$$.
Let's suppose we call a program acceptable if its probability of failure is at most $$$p$$$. Then we are interested in what the constant $$$k$$$ should be in order to be confident of success.
Let's plug in some values. Suppose there are $$$m=200$$$ test cases, your program gets $$$n=3$$$ chances per test case, and we can tolerate a failure probability of $$$p=10^{-3}$$$. Then we should require $$$k\approx 58.47$$$, which is quite a considerable constant factor that we should be careful about when making randomized solutions.
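The figure $$$k\approx 58.47$$$ comes from solving $$$1-(1-1/k^n)^m = p$$$ for $$$k$$$; a quick sketch to reproduce it:
#include <bits/stdc++.h>
using namespace std;
int main() {
    double m = 200, n = 3, p = 1e-3;
    // 1 - (1 - 1/k^n)^m = p  =>  k = (1 - (1 - p)^(1/m))^(-1/n)
    double k = pow(1.0 - pow(1.0 - p, 1.0 / m), -1.0 / n);
    printf("k = %.2f\n", k); // prints k = 58.47
}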
If we make extra assumptions on the distribution of $$$X$$$ that apply widely to randomized algorithms, can we be confident with a much smaller value of $$$k$$$? What will the analysis look like on some specific cases like treaps? I'm interested to hear your thoughts.
I hope everyone enjoyed the contest!
UPD: Added implementations for all problems.
Author: antontrygubO_o
Author: hugopm
1405C - Balanced Bitstring
Author: Kuroni
Author: Maripium
1405E - Fixed Point Removal
Author: hugopm
Author: Ari
Author: 1-gon
Warning: The following contains graphic depictions of geometry, precision errors, and degenerate cases. Viewer discretion is advised.
I assume the reader is familiar with:
Recall that in the 2D convex hull problem, you are given a set of 2D points, and you must compute the smallest convex polygon containing all the given points. By convex, we mean that for any two points $$$A$$$ and $$$B$$$ inside the polygon, the entire line segment $$$AB$$$ is also inside the polygon.
The problem in 3D is completely analogous. You are given $$$n$$$ 3D points, and you must compute the smallest convex polyhedron containing all the given points. Similarly, a polyhedron is convex if for any two points $$$A$$$ and $$$B$$$ inside the polyhedron, the line segment $$$AB$$$ is also inside the polyhedron.
In the 2D case, it is more obvious what the output is. We can simply output a circular list of vertices on the boundary of the polygon. But how do we represent a polyhedron? Well, we can represent it as a list of triangular faces. A non-triangular face can always be divided into triangles, so this requirement isn't limiting.
In the 2D case, there are some degenerate cases that are easy to miss if you're not careful. For example, if there are 3 collinear points, the polygon might have two adjacent sides of the same slope, or you might combine them into one segment. Or if two points have the same $$$x$$$ coordinate, you need to handle this carefully when sorting the points by $$$x$$$ coordinate. Or if all the points are on the same line, then the convex hull isn't even a non-degenerate polygon!
Now, degenerate cases are bad enough in two dimensions. When you enter the third dimension, it only gets worse. What if four points lie on the same plane? Since we require triangular faces, we could triangulate this plane in multiple ways. Or, if the fourth point lies inside the triangle formed by the other three, we might ignore it entirely. What if all the points are on the same line, or even on the same plane? Then the convex hull isn't a non-degenerate polyhedron. I will ignore such degenerate cases for now, and revisit them when applicable.
Suppose that $$$n$$$ is very small, and we are guaranteed that no four points are coplanar. Then how can we compute the 3D convex hull in the dumbest way possible?
We could simply consider every triplet of three points $$$(\vec{a}, \vec{b}, \vec{c})$$$, and check if they create a triangular face on the convex hull. In order for it to be a face, the remaining points must "see" the same side of the triangle. In other words, if we consider the plane containing this triangle, the remaining points should lie on the same side of the plane. To compute which side of a plane a point $$$\vec{p}$$$ is on, we can first take a vector $$$\vec{q}=(\vec{b}-\vec{a})\times (\vec{c}-\vec{a})$$$ orthogonal to the plane.
Then the sign of $$$(\vec{p}-\vec{a})\cdot \vec{q}$$$ tells us the side of the plane. In particular, a result of $$$0$$$ tells us that $$$\vec{p}$$$ lies on the plane.
For each triplet, we perform this check against all points, so the total time complexity is $$$O(n^4)$$$. Because of its simplicity, this should be the approach of choice when the constraints allow it. And by examining the brute force, we learned how to perform the most fundamental operation in any 3D convex hull algorithm: checking which side of a plane a point is on.
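A sketch of this brute force (my own illustration, assuming integer coordinates and no four coplanar points):
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
struct P3 { ll x, y, z; };
P3 operator-(P3 a, P3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
P3 cross(P3 a, P3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
ll dot(P3 a, P3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
// O(n^4) brute force: (a, b, c) is a hull face iff every other point lies
// on the non-positive side of its plane. Requiring a to be the smallest
// index of the triple keeps exactly one orientation of each face.
vector<array<int, 3>> brute_hull(const vector<P3> &p) {
    int n = p.size();
    vector<array<int, 3>> faces;
    for (int a = 0; a < n; a++)
        for (int b = a + 1; b < n; b++)
            for (int c = a + 1; c < n; c++) {
                if (b == c) continue;
                P3 q = cross(p[b] - p[a], p[c] - p[a]); // plane normal
                bool face = true;
                for (int d = 0; d < n && face; d++)
                    if (dot(p[d] - p[a], q) > 0) face = false;
                if (face) faces.push_back({a, b, c}); // q points outward
            }
    return faces;
}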
Now we want a more practical algorithm. The strategy of the incremental algorithm is to build the convex hull for the first $$$i$$$ points, as $$$i$$$ goes from $$$1$$$ to $$$n$$$. All we have to do is figure out how to update the convex hull in order to accommodate one new point.
Let's start by making an analogy with the 2D convex hull. Suppose we currently have the convex hull of the first $$$i-1$$$ points, and we wish to add the $$$i$$$-th point. Well, if the point is already inside the hull, we should do nothing. Otherwise, let's pretend we are looking at the polygon from the perspective of the $$$i$$$-th point. We should delete all line segments it can see, and add two new line segments: one from the new point to each of the extreme points.
A similar thing happens in the 3D case. If a point is already inside the polyhedron, we simply do nothing. Otherwise, we delete all faces the new point can see. But what's less obvious is what we should add. Well, we've left the polyhedron with a connected set of faces removed. This exposes a cycle of edges. For each of these exposed edges, we should create a new face with the new point and that edge. Effectively, we create a cone of faces to repair the opening.
Now, let's analyze the time complexity. For each iteration, we process all faces of the hull in that iteration. And it can be proven that the number of faces is at most $$$O(n)$$$. For the proof, we use Euler's Formula. It states that for any polyhedron with $$$V$$$ vertices, $$$E$$$ edges, and $$$F$$$ faces, $$$V-E+F=2$$$. All faces are triangles, so we can substitute $$$E=3F/2$$$ since each face has $$$3$$$ edges, and we count each edge twice for the $$$2$$$ faces it touches. Then we have $$$V-F/2=2$$$ and $$$F=2V-4=O(n)$$$. Since we process $$$O(n)$$$ faces in $$$O(n)$$$ iterations, the overall time complexity is $$$O(n^2)$$$.
#include <bits/stdc++.h>
#define ll long long
#define sz(x) ((int) (x).size())
#define all(x) (x).begin(), (x).end()
#define vi vector<int>
#define pii pair<int, int>
#define rep(i, a, b) for(int i = (a); i < (b); i++)
using namespace std;
template<typename T>
using minpq = priority_queue<T, vector<T>, greater<T>>;
typedef long double ftype;
struct pt3 {
ftype x, y, z;
pt3(ftype x = 0, ftype y = 0, ftype z = 0) : x(x), y(y), z(z) {}
pt3 operator-(const pt3 &o) const {
return pt3(x - o.x, y - o.y, z - o.z);
}
pt3 cross(const pt3 &o) const {
return pt3(y * o.z - z * o.y, z * o.x - x * o.z, x * o.y - y * o.x);
}
ftype dot(const pt3 &o) const {
return x * o.x + y * o.y + z * o.z;
}
};
// A face is represented by the indices of its three points a, b, c.
// It also stores an outward-facing normal vector q
struct face {
int a, b, c;
pt3 q;
};
// modify this depending on the coordinate sizes in your use case
const ftype EPS = 1e-9;
vector<face> hull3(const vector<pt3> &p) {
int n = sz(p);
assert(n >= 3);
vector<face> f;
// Consider an edge (a->b) dead if it is not a CCW edge of some current face
// If an edge is alive but not its reverse, this is an exposed edge.
// We should add new faces on the exposed edges.
vector<vector<bool>> dead(n, vector<bool>(n, true));
auto add_face = [&](int a, int b, int c) {
f.push_back({a, b, c, (p[b] - p[a]).cross(p[c] - p[a])});
dead[a][b] = dead[b][c] = dead[c][a] = false;
};
// Initialize the convex hull of the first 3 points as a
// triangular disk with two faces of opposite orientation
add_face(0, 1, 2);
add_face(0, 2, 1);
rep(i, 3, n) {
// f2 will be the list of faces invisible to the added point p[i]
vector<face> f2;
for(face &F : f) {
if((p[i] - p[F.a]).dot(F.q) > EPS) {
// this face is visible to the new point, so mark its edges as dead
dead[F.a][F.b] = dead[F.b][F.c] = dead[F.c][F.a] = true;
}else {
f2.push_back(F);
}
}
// Add a new face for each exposed edge.
// Only check edges of alive faces for being exposed.
f.clear();
for(face &F : f2) {
int arr[3] = {F.a, F.b, F.c};
rep(j, 0, 3) {
int a = arr[j], b = arr[(j + 1) % 3];
if(dead[b][a]) {
add_face(b, a, i);
}
}
}
f.insert(f.end(), all(f2));
}
return f;
}
There exist deterministic algorithms to compute the 3D convex hull in $$$O(n\log n)$$$ time. Most famously, there is a divide-and-conquer algorithm that solves it, but it is notoriously difficult to implement correctly. But don't lose hope! There is also a randomized algorithm based on the same incremental strategy, which takes $$$O(n\log n)$$$ time in expectation if the points are added in random order.
Unfortunately, we can't just shuffle the points, run the usual incremental algorithm, and call it a day. In order to achieve $$$O(n\log n)$$$, we must eliminate the extra work of processing faces invisible to the new point. How can we do this? Previously, for each new point, we checked which faces it could see. Instead, we will maintain for each point a list of faces it can see. Then, when a face is created, we add it to the lists of all the points that will be able to see it.
"Wait a minute," I hear you ask, "isn't this still $$$O(n^2)$$$ because you still process the same thing for each face-point pair, just in a different order?" The answer is yes, we're still in $$$O(n^2)$$$ territory. But, using some cleverness we can avoid checking with every future point every time. The key idea is that a new face $$$F$$$ is attached to a preexisting edge $$$e$$$, and a point $$$p$$$ can see $$$F$$$ only if it was able to see one of the two faces of $$$e$$$. Instead of checking with all future points, we will only check the points in the lists associated with these two faces. Unfortunately, this requires more work to implement. This time, we need to start caring about how the faces touch each other in order to find these two lists.
The only potential source of precision error is our check of which side of a plane a point is on. If all coordinates are integers, then our computations will stay as integers. If it overflows on 64-bit integer types, we can convert to floating points for computations, and set $$$\epsilon=0.5$$$. Note that if the coordinates are at most $$$X,Y,Z$$$ in absolute value (in the $$$x$$$, $$$y$$$, and $$$z$$$ directions, respectively), our computations are on the order of $$$XYZ$$$. Use this to determine whether integer computations will overflow, or to decide what to set $$$\epsilon$$$ to. Or just try every possible $$$\epsilon$$$ until it works.
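For a worked instance of that bound (my example, not from the original post): with $$$|x|,|y|,|z|\le 10^6$$$, coordinate differences are at most $$$2\cdot 10^6$$$, each component of the cross product $$$(\vec{b}-\vec{a})\times(\vec{c}-\vec{a})$$$ is at most $$$2\cdot(2\cdot 10^6)^2 = 8\cdot 10^{12}$$$, and the final dot product is at most $$$3\cdot(2\cdot 10^6)\cdot(8\cdot 10^{12}) \approx 5\cdot 10^{19}$$$, which already exceeds the $$$\approx 9.2\cdot 10^{18}$$$ range of a signed 64-bit integer. So even modest-looking constraints force 128-bit integers or the floating-point fallback described above.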
To handle degenerate cases, we should make sure that the first $$$4$$$ points are not coplanar. If all the $$$n$$$ points do not lie in the same plane, we can find $$$4$$$ such points and move them to the beginning. Once this is taken care of, the only real degenerate case remaining is when we need to decide if a face is visible to a point in the same plane. I made the decision to consider the face invisible in such a case. This ensures that the algorithm won't unnecessarily subdivide faces. However, it is still possible that the algorithm can have non-unique output depending on the order the points are processed. This is inevitable since there is not a unique way to triangulate faces in degenerate cases. If we want to, though, we can combine adjacent coplanar faces afterward.
To my knowledge, the Clarkson-Shor algorithm doesn't make any guarantees about $$$O(n\log n)$$$ performance when there are degenerate cases. I think by considering coplanar faces invisible, we are safe. But if anyone has a nasty test case against it, I'm interested to hear it.
Here is my implementation of the $$$O(n\log n)$$$ algorithm with full handling of degenerate cases (I hope). If anyone finds bugs or ways to improve the efficiency and/or readability of this implementation, please let me know!
#include <bits/stdc++.h>
#define ll long long
#define sz(x) ((int) (x).size())
#define all(x) (x).begin(), (x).end()
#define vi vector<int>
#define pii pair<int, int>
#define rep(i, a, b) for(int i = (a); i < (b); i++)
using namespace std;
template<typename T>
using minpq = priority_queue<T, vector<T>, greater<T>>;
typedef long double ftype;
struct pt3 {
ftype x, y, z;
pt3(ftype x = 0, ftype y = 0, ftype z = 0) : x(x), y(y), z(z) {}
pt3 operator-(const pt3 &o) const {
return pt3(x - o.x, y - o.y, z - o.z);
}
pt3 cross(const pt3 &o) const {
return pt3(y * o.z - z * o.y, z * o.x - x * o.z, x * o.y - y * o.x);
}
ftype dot(const pt3 &o) const {
return x * o.x + y * o.y + z * o.z;
}
ftype abs() const {
return sqrt(dot(*this));
}
};
struct edge;
// The implementation becomes more challenging because we need information of adjacent faces.
// A face will have 3 edges for its adjacent faces
// e1 corresponds to the edge (a,b), e2 to (b,c), and e3 to (c,a)
// A face will store a list of future points that can see it.
// A face will also store "dead" - the index of the point responsible for deleting it (or 1e9 if alive)
struct face {
int a, b, c;
pt3 q;
edge *e1, *e2, *e3;
vi points;
int dead = 1e9;
face(int a, int b, int c, pt3 q) : a(a), b(b), c(c), q(q) {
e1 = e2 = e3 = NULL;
}
};
// an edge will store the face it leads to and a pointer to the reverse edge
struct edge {
edge *rev;
face *f;
};
// This function will glue two faces together
// e1 is a reference to the F1 edge pointer, and e2 is a reference to the F2 edge pointer
void glue(face *F1, face *F2, edge* &e1, edge* &e2) {
e1 = new edge;
e2 = new edge;
e1->rev = e2;
e2->rev = e1;
e1->f = F2;
e2->f = F1;
};
// modify this to your liking
const ftype EPS = 1e-9;
mt19937 rng(chrono::steady_clock::now().time_since_epoch().count());
// shuffles the point-set p, making sure the first 4 points are not coplanar.
// if all points are coplanar, an assertion fails
void prepare(vector<pt3> &p) {
int n = sz(p);
shuffle(all(p), rng);
vi ve;
ve.push_back(0);
rep(i, 1, n) {
if(sz(ve) == 1) {
if((p[ve[0]] - p[i]).abs() > EPS) ve.push_back(i);
}else if(sz(ve) == 2) {
if((p[ve[1]] - p[ve[0]]).cross(p[i] - p[ve[0]]).abs() > EPS) ve.push_back(i);
}else if(abs((p[i] - p[ve[0]]).dot((p[ve[1]] - p[ve[0]]).cross(p[ve[2]] - p[ve[0]]))) > EPS) {
ve.push_back(i);
break;
}
}
assert(sz(ve) == 4);
vector<pt3> ve2;
for(int i : ve) ve2.push_back(p[i]);
reverse(all(ve));
for(int i : ve) p.erase(p.begin() + i);
p.insert(p.begin(), all(ve2));
}
vector<face*> hull3(vector<pt3> &p) {
int n = sz(p);
prepare(p);
vector<face*> f, new_face(n, NULL);
// for a point i, conflict[i] is the list of faces it can see.
// It might contain faces that were deleted, and we should ignore them
vector<vector<face*>> conflict(n);
auto add_face = [&](int a, int b, int c) {
face *F = new face(a, b, c, (p[b] - p[a]).cross(p[c] - p[a]));
f.push_back(F);
return F;
};
// initialize a triangular disk of the first 3 points.
// The initial tetrahedron is handled automatically when we insert the 4th point
face *F1 = add_face(0, 1, 2);
face *F2 = add_face(0, 2, 1);
glue(F1, F2, F1->e1, F2->e3);
glue(F1, F2, F1->e2, F2->e2);
glue(F1, F2, F1->e3, F2->e1);
rep(i, 3, n) {
for(face *F : {F1, F2}) {
ftype Q = (p[i] - p[F->a]).dot(F->q);
if(Q > EPS) conflict[i].push_back(F);
// making this second check is an ugly consequence of starting with a degenerate triangular disk.
// We want to make sure each future point is considered visible to some initial face.
if(Q >= -EPS) F->points.push_back(i);
}
}
rep(i, 3, n) {
// mark all visible faces as dead
for(face *F : conflict[i]) F->dead = min(F->dead, i);
// If a dead face and alive face are adjacent, we have an exposed edge
// Vertex v will be a vertex on some exposed edge
int v = -1;
for(face *F : conflict[i]) {
if(F->dead != i) continue;
int parr[3] = {F->a, F->b, F->c};
edge* earr[3] = {F->e1, F->e2, F->e3};
rep(j, 0, 3) {
int j2 = (j + 1);
if(j2 >= 3) j2 -= 3;
if(earr[j]->f->dead > i) {
// F is dead and earr[j]->f is alive.
// We should add a new face Fn, attach it to earr[j]->f,
// combine the point lists of the two faces into Fn,
// and store Fn in new_face[parr[j]] so we can glue all the new faces together in a cone.
face *Fn = new_face[parr[j]] = add_face(parr[j], parr[j2], i);
set_union(all(F->points), all(earr[j]->f->points), back_inserter(Fn->points));
Fn->points.erase(stable_partition(all(Fn->points), [&](int k) {
return k > i && (p[k] - p[Fn->a]).dot(Fn->q) > EPS;
}), Fn->points.end());
for(int k : Fn->points) {
conflict[k].push_back(Fn);
}
earr[j]->rev->f = Fn;
Fn->e1 = earr[j];
v = parr[j];
}
}
}
// There are no exposed edges
if(v == -1) continue;
// Glue all the new cone faces together
while(new_face[v]->e2 == NULL) {
int u = new_face[v]->b;
glue(new_face[v], new_face[u], new_face[v]->e2, new_face[u]->e3);
v = u;
}
}
// Remove dead faces
f.erase(remove_if(all(f), [&](face *F) {
return F->dead < n;
}), f.end());
return f;
}
Suppose we have $$$n$$$ 2D points $$$p_1,\ldots,p_n$$$. We may want to answer queries of the form "Which point is closest to $$$q_i$$$?" for $$$m$$$ points $$$q_1,\ldots,q_m$$$.
The Voronoi diagram divides the plane into $$$n$$$ regions in a way that can answer these kinds of queries. It is defined so that the region corresponding to the point $$$i$$$ consists of all points closer to $$$p_i$$$ than any other of the $$$n$$$ points. That is, it's the region of points whose nearest neighbor is $$$p_i$$$. The Voronoi diagram can be represented by $$$O(n)$$$ line segments that border the regions (some line segments will extend to a point at infinity). We can answer our queries by sorting all line segments and queries, and performing a sweep line.
The dual of the Voronoi diagram is called the Delaunay triangulation. It is the graph that connects two points if their Voronoi regions share a boundary edge. Conversely, the vertices of the Voronoi diagram are the circumcenters of the Delaunay triangles, and an edge in the Voronoi diagram connects the circumcenters of two adjacent Delaunay triangles.
The Delaunay triangulation has several nice properties that come in handy. Most notably, the Euclidean minimum spanning tree consists of a subset of Delaunay edges. A nice consequence of implementing 3D convex hull is that we get Delaunay triangulation for free. We can simply map each point $$$(x,y)$$$ into a 3D point $$$(x,y,x^2+y^2)$$$. Then the downward-facing triangles of the 3D convex hull are precisely the Delaunay triangles. The proof is left as an exercise to the reader.
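As a sketch of that reduction (written against the quadratic hull3 from this post; the randomized version shuffles its input, so indices would need remapping), lift each point onto the paraboloid and keep the faces whose outward normal points downward:
// assumes the O(n^2) hull3 and the pt3/face types from above,
// and a non-degenerate input (not all points collinear/coplanar)
vector<array<int, 3>> delaunay(const vector<pair<ftype, ftype>> &pts) {
    vector<pt3> p;
    for (auto &[x, y] : pts)
        p.push_back(pt3(x, y, x * x + y * y)); // lift onto z = x^2 + y^2
    vector<array<int, 3>> tris;
    for (const face &F : hull3(p))
        if (F.q.z < -EPS) // downward-facing: outward normal points down
            tris.push_back({F.a, F.b, F.c});
    return tris;
}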
I hope you enjoyed the contest! I will add implementations later.
UPD I've added implementations.
1382C1 - Prefix Flip (Easy Version)
Implementation 1 Implementation 2
1382C2 - Prefix Flip (Hard Version)
Implementation 1 Implementation 2
1381D - The Majestic Brown Tree Snake
Hello, Codeforces!
I'm very glad to invite you to Codeforces Round #658 (Div. 1) and Codeforces Round #658 (Div. 2). This contest will take place on Jul/21/2020 17:35 (Moscow time). In both divisions, you will have 2 hours to solve 5 problems (and one subtask). The score distribution will be announced closer to the start of the round.
Huge thanks to:
I've worked hard to ensure the pretests are short and the statements are strong. Remember to only read the problems that you can solve, and may you have the best of luck!
Because I know you all have so many questions, I have compiled an FAQueue
UPD Here is the score distribution:
Div. 2: 500 — 1250 — (1000 + 1000) — 2250 — 3000
Div. 1: (500 + 500) — 1500 — 2000 — 2500 — 3000
UPD Editorial
UPD I filled in the answer to the FAQ. Also, congrats to the winners!
Div. 2:
Div. 1:
Hello, Codeforces!
I'm very glad to invite you to Codeforces Round #639 (Div. 1) and Codeforces Round #639 (Div. 2). This contest will take place on May/06/2020 17:35 (Moscow time). In both divisions, you will have 2 hours 15 minutes to solve 6 problems. The score distribution will be announced closer to the start of the contest.
Of course, this round would not be possible without the help of many people, who I'd like to thank:
I've done my best to write clear problem statements and strong pretests. Don't forget to read all problems, and I hope you enjoy the round. Good luck!
Because I know you all have so many questions, I have compiled an FAQ:
UPD
Here is the score distribution.
Div. 1: 500 — 1000 — 1500 — 1750 — 2500 — 2500
Div. 2: 500 — 1000 — 1500 — 2000 — 2500 — 2750
UPD Unfortunately, this contest is delayed due to problems in the Codeforces system. It is temporarily scheduled, but I will update this announcement when a final decision on scheduling has been made. It is sad that my contest is delayed, but let's be grateful that the issue was detected early instead of arising in the middle of the round. On the bright side, this may be the earliest announced score distribution in Codeforces history.
UPD The final decision on scheduling is to keep it as is. The contest will take place on May/06/2020 17:35 (Moscow time).
UPD Unfortunately due to the long queue and other issues, the contest is unrated. There was also a bug that gave us trouble in answering your questions. I'm sorry if you asked a question that went unanswered for a long time. I promise we did our best to answer as many questions as we could! Despite all the problems, I hope you enjoyed the problemset!
UPD Editorial