Sometimes data structures save the day; in China we have a saying that roughly means "data structures make up for dumbness". However, data structures differ vastly. Many of them are not only useful but also, in some ways, beautiful.

I now ask the community: which data structure do you think is the most beautiful? It can be very short but elegant, very long but easy to understand, or simply carry a sense of harmony.

Implementation

BIT

Personally, I think

DSU (Disjoint Set Union) is very beautiful. It's very short, yet very efficient. It can maintain many things while being easy to understand. Its design is also very beautiful and harmonious.

The main logic of DSU can be expressed in two lines of code. Yet it maintains the status of disjoint sets in near-linear total time (amortized inverse-Ackermann per operation), which is simply amazing.
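A minimal sketch of those two lines in context (the struct layout and names are my own choice, not from any particular submission):

```cpp
#include <numeric>
#include <vector>

// Minimal DSU sketch: recursive find with path compression, plus union.
// (No union by rank/size here; path compression alone keeps the code tiny
// while still giving near-logarithmic amortized behaviour.)
struct DSU {
    std::vector<int> p;
    DSU(int n) : p(n) { std::iota(p.begin(), p.end(), 0); }
    int find(int x) { return p[x] == x ? x : p[x] = find(p[x]); }  // line 1
    void unite(int a, int b) { p[find(a)] = find(b); }             // line 2
    bool same(int a, int b) { return find(a) == find(b); }
};
```

With union by size/rank added on top, the total cost of $$$m$$$ operations becomes $$$O(m\,\alpha(n))$$$, which is what the "near-linear" claim refers to.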

My coach had once even composed a poem about DSU.

Can you share the poem?

Sorry... I cannot find it now...

But I do remember one line from it.

I'll try to find it!

UPD: Thanks to JT.AK.IOI! Maybe you mean this:

Solo at first, lonely trees cherish the ride.
His hand merges our disjoint skies.
Out of many is our root.
Hands stretching, we seek our starry life.
The paths are long, though recursing from
The one.

Array, the best data structure of all time. It works very fast, is easy to understand, solves a lot of problems, and has advanced modifications like bitset and suffix array. Moreover, it can be used to implement segment trees, treaps and many more.

Agreed. It is the basis of nearly all advanced stuff in algorithms.

But arrays have $$$O(n)$$$ complexity for random insertion...

I like the integer data structure, nothing works without it.

I like the electron data structure, nothing works without it.

I like the quark data structure, nothing works without it.

UPD: electrons don't contain quarks :(

you played yourself

... I'll make a particle theory where electrons are made of something.

Eagerly awaiting riela's theory of everything

The theory of everything is too hard for physicists; we sps should do it.

Can I be your co-author?

are you well versed in QM and GR?

I'm well versed in SP

Soon I'll start the future gadgets lab. You can try to join, there will be an entrance exam (sp contest)

Electron is a string

and what is a char?

Python: Char?

I thought we were talking about algorithmic data structures that we can write in an editor.

Cartesian Tree

Heavy-light decomposition. It allows you to solve basically any problem where you make updates/queries on a tree in $$$O(n \log n)$$$ time. Although it's difficult to code, the idea is really simple, and anybody who's ever heard about segment trees will be able to understand it.

It's not difficult to code if you make it right:)

By difficulty I mean that the code is rather long, so it's easy to make a mistake while coding, especially when you use an iterative segment tree.

The code for HLD itself is pretty short.

Agree, but every HLD uses some kind of segment trees and those two combined together can get a little bit messy. Still, I'd love to see your implementation for HLD itself.

57767880: everything is nicely separated, so you have a segment tree (optimised to be pretty fast), query/update functions, a DFS_construct() call that builds the HLD and then separately filling the segment trees over paths in the constructor.

DFS_construct computes everything you could possibly need, including preorder renumbering of vertices and the corresponding intervals, the reverse of this renumbering, explicit paths as arrays of vertices, the (path, position in it) pairs for each vertex, depths of all vertices (in a segment tree), subtree sizes, parents and sons. Less than 30 lines.

HLD doesn't work very well for problems with subtree queries I think. It is more suited for handling path queries.

It's not hard to build HLD not only for path queries but also for subtree queries

Sorry for the misunderstanding. I meant to say that one doesn't use HLD to compute subtree queries, because HLD merely optimizes path queries in a tree. Of course, we could build a data structure in a way that it supports both subtree and path queries. But the chains that are built from the HLD preprocessing are only useful for querying over paths.

If you visit the heavy edge first, then after running the Euler tour on the tree each heavy path (and each subtree) becomes a contiguous segment. You can handle all the operations in a single segment tree!

Well. That's true but HLD doesn't make your subtree queries any more optimal I think. Whereas it optimizes path queries greatly. For example, given a problem which only requires subtree queries, say find the sum of values in a subtree where values are tagged to every vertex. I don't think HLD will help in the time complexity. But in the case where we have both subtree and path queries, while HLD won't really help us in optimizing the complexity of the subtree queries, storing data in the way you described will simplify our implementation of course.

Here you go: Easiest HLD with subtree queries
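The heavy-child-first Euler tour described in this sub-thread can be sketched like this (a hypothetical minimal version; `HeavyFirstOrder` and all names are my own). After `build`, the subtree of `v` occupies the contiguous label range `[tin[v], tin[v] + sz[v])`, and each heavy path is contiguous as well:

```cpp
#include <utility>
#include <vector>

// Sketch of a preorder numbering that visits the heavy child first, so each
// subtree AND each heavy path maps to a contiguous range of labels.
// (Plain recursion; fine for small n.)
struct HeavyFirstOrder {
    std::vector<std::vector<int>> g;
    std::vector<int> sz, tin;
    int timer = 0;
    HeavyFirstOrder(int n) : g(n), sz(n, 1), tin(n, 0) {}
    void add_edge(int u, int v) { g[u].push_back(v); g[v].push_back(u); }
    void dfs_size(int v, int p) {
        for (int u : g[v]) if (u != p) { dfs_size(u, v); sz[v] += sz[u]; }
        // Move the heaviest child to the front so it is visited first.
        for (int i = 1; i < (int)g[v].size(); ++i)
            if (g[v][i] != p && (g[v][0] == p || sz[g[v][i]] > sz[g[v][0]]))
                std::swap(g[v][0], g[v][i]);
    }
    void dfs_order(int v, int p) {
        tin[v] = timer++;
        for (int u : g[v]) if (u != p) dfs_order(u, v);
    }
    void build(int root) { dfs_size(root, -1); dfs_order(root, -1); }
};
```

Path and subtree queries then both reduce to range queries over `tin` labels in one segment tree.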

Guys, here he comes again with yet another non-trivial idea to get downvotes.

treap.

Segment tree with treap on each node

HLD with persistent compressed treaps on paths :)

Merge Sort Tree

Sparse Table

std::vector<std::vector<std::vector<int>>>

array

splay

Adjacency list

splay trees with treaps on each node, which are used to get lines from the convex hull trick, with heavy-light decomposition and a suffix tree on each line.

would you PLEASE publish the implementation of that data structure?

Chtholly Tree? It's elegantly brute, though not acknowledged by academic organizations.

You can't even spell it correctly, it is "Chtholly Tree"

That's my mistake. But it's the data structure that counts.

Voronoi Diagram.

When you make a segtree with treaps in each node, compress straight chains in the segtree, and make the whole structure persistent: whoa. Utterly useless.

int

__int128_t

Wavelet matrices. For each bit position in an array, count the ones and stably partition on that bit. Then you can count the elements within any value range in any subarray, and also answer k-th smallest element queries.

sqrt decomposition and BIT

TreeMap and TreeSet for Java ;)

Link/Cut Tree

I think all the data structures are rubbish. Why are they useful? What can we do with them? Nothing.

Obviously the palindromic tree.

I think RB-Tree is the most beautiful data structure because it's colorful and useful :D

ODT (Old Driver Tree)

If you don't know what ODT is, you can see it in this problem

Unfortunately some newbies (in Chinese I actually want to say "小鬼", i.e. "brats") are trying to solve every data structure problem with ODT and keep getting TLE. When I try to tell them that their solution is slower than $$$O(n^2)$$$ in the worst case, they just ignore me and do some stupid tweaking of their programs. Of course that tweaking doesn't work.

That's really a waste of time.

Although Chtholly Tree can solve nearly everything, its efficiency depends heavily on the randomness of test sets. The problem setters can easily hack it by making no modification operations.

However, in multi-test-case contests like CNOI, not every test case can hack Chtholly Tree, so it can easily gain points. It is way easier to code than other data structures, after all.

These newbies do not actually understand it and are just using it to show off, I believe. A true coder who wishes to get high marks may learn it but not try to use it to solve everything.
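For anyone who hasn't seen the technique, here is a hedged sketch of the Chtholly/ODT idea being discussed: maximal constant-value intervals live in a `std::set`, queries split intervals at their borders, and range-assign collapses everything in between. As noted above, this is fast only when assignments keep the interval count small; the names below are mine.

```cpp
#include <set>

// Chtholly/ODT sketch: intervals [l, r] with a common value v, kept in a set
// ordered by l. A sentinel interval at n+1 simplifies split(r + 1).
struct ODT {
    struct Seg {
        int l, r;
        mutable long long v;  // mutable: set elements are const, value is not a key
        bool operator<(const Seg &o) const { return l < o.l; }
    };
    std::set<Seg> s;
    ODT(int n, long long v0) { s.insert({1, n, v0}); s.insert({n + 1, n + 1, 0}); }
    std::set<Seg>::iterator split(int pos) {   // ensure an interval starts at pos
        auto it = s.lower_bound({pos, 0, 0});
        if (it != s.end() && it->l == pos) return it;
        --it;
        int l = it->l, r = it->r;
        long long v = it->v;
        s.erase(it);
        s.insert({l, pos - 1, v});
        return s.insert({pos, r, v}).first;
    }
    void assign(int l, int r, long long v) {   // set [l, r] to v, merging intervals
        auto itr = split(r + 1), itl = split(l);
        s.erase(itl, itr);
        s.insert({l, r, v});
    }
    long long sum(int l, int r) {              // example query: sum over [l, r]
        auto itr = split(r + 1), itl = split(l);
        long long res = 0;
        for (auto it = itl; it != itr; ++it)
            res += (long long)(it->r - it->l + 1) * it->v;
        return res;
    }
};
```

Note the `split(r + 1)` before `split(l)` ordering: the second split cannot invalidate the iterator returned by the first.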

UPDATE: Here is what I found on USACO Training.

Pair<int,size_t>

And I think block decomposition (分块) is beautiful too.

Although it costs $$$O(\sqrt{n})$$$ per operation (more than $$$O(\log n)$$$), it's very easy to understand and it can solve a lot of data structure problems.
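A minimal sqrt-decomposition sketch along these lines (point update plus range-sum query; the struct and names are my own choice):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Sqrt decomposition sketch: keep the raw array plus one precomputed sum per
// block of size ~sqrt(n). Updates touch one cell and one block sum; a range
// query walks partial blocks element-wise and full blocks in one step.
struct Blocks {
    int blk;
    std::vector<long long> a, bsum;
    Blocks(int n)
        : blk(std::max(1, (int)std::sqrt((double)n))),
          a(n, 0), bsum(n / blk + 1, 0) {}
    void update(int i, long long delta) { a[i] += delta; bsum[i / blk] += delta; }
    long long query(int l, int r) {  // sum over [l, r], 0-indexed
        long long s = 0;
        while (l <= r && l % blk != 0) s += a[l++];        // left partial block
        while (l + blk - 1 <= r) { s += bsum[l / blk]; l += blk; }  // full blocks
        while (l <= r) s += a[l++];                        // right partial block
        return s;
    }
};
```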

Hey guy, beaatiful (滑稽)

It's a mistake

2333333

bitset :)

Although not as widely used as other popular data structures, the trie seems to me a beautiful data structure, along with the persistency added to it if required.

Segment Tree

f

ODT, segment tree, balanced binary tree :)

Heap

Implicit treap

Persistent 2-3 tree

Stack: it helps in solving some hard problems with ease, without using other complex, fancy data structures. Well, this is what I think.

Sparse Table

Suffix automaton. Very easy to code (around 20 lines) but solves many difficult string problems.

Randomized data structures such as skip lists.

treap

Bloom Filter

Human brain. It is used to store info about other data structures and can apply them to a problem.(My brain does that pretty fast. Just $$$O(n^n)$$$!)

Quoting Brian Kernighan : "If you would have to choose to teach one data structure and one data structure only, you should choose the hash table. Every other DS is a derivative."

So yeah, unordered sets rock.

I'm struggling to understand how any data structure is a derivative of the hash table. Okay, other hash-based things like Bloom filters are, but none of the tree-based things or stuff like suffix automata are.

Also hash tables may work very well, but IMO they are far from "beautiful".

Honestly I'm struggling too :P, and I'm not really sure I have correctly interpreted what he has stated here https://m.youtube.com/watch?v=Sg4U4r_AgJU&t=24m50s .

Regardless, I think he is referring to the fact that you can almost always efficiently solve a given problem with an associative array/hash table instead of another non-linear data structure like a red-black tree or a Fenwick tree, exploiting the O(1) search, insert, and delete time.

"exploiting the O(1) search, insert, and delete time"

It is compulsory to mention that the O(1) time complexity is only amortised (i.e. a single operation on the hash table could be very expensive, but in the long run the cost is amortised constant). Why is it important to make this distinction? Well, let's say you are building a real-time system; a data structure that gives you amortised complexity may not be so practical, because the system may not respond quickly at critical moments.

Well, that's very nice to know!! :] Damn, I didn't know about amortized analysis until now, thanks.

"I didn't know about amortized analysis until now"

Well, hashing is taught in every introductory data structures course, and its amortized complexity is very likely to be mentioned. If you really want to learn how these data structures work, study formal academic material (e.g. OpenCourseWare). Don't learn from some CP site/blog, as it often omits important details. On the other hand, CP sites are good for learning about unorthodox optimization tricks which you won't learn in academia. So pick the correct site to learn the correct thing.

Well thanks again :) .

To be honest, I was learning DS using "Competitive Programming Book 3", though the latter skips various fundamental topics in computer science, because the author assumes the reader already knows them.

So yeah, I guess I'll give opencourseware a shot or two.

I hope your comments will get seen by others who are struggling with CP as well.

SAM, of course

Fenwick tree (sadly, it seems that it was not mentioned here). This data structure is easy to code, but powerful. It still looks like magic for me :)

Also, I like Sqrt-tree (the idea of putting sqrt-decompositions inside each other is beautiful, and yes, I like this data structure, because I invented it) :)
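A minimal Fenwick tree sketch for readers who haven't seen one (1-indexed by the usual convention; names are my own):

```cpp
#include <vector>

// Fenwick (binary indexed) tree: point add, prefix sum, both O(log n).
// t[i] stores the sum of the (i & -i) elements ending at position i.
struct Fenwick {
    std::vector<long long> t;
    Fenwick(int n) : t(n + 1, 0) {}
    void add(int i, long long delta) {           // 1-indexed position
        for (; i < (int)t.size(); i += i & -i) t[i] += delta;
    }
    long long prefix(int i) const {              // sum of a[1..i]
        long long s = 0;
        for (; i > 0; i -= i & -i) s += t[i];
        return s;
    }
    long long range(int l, int r) const { return prefix(r) - prefix(l - 1); }
};
```

The `i & -i` trick isolates the lowest set bit, which is exactly why the whole structure fits in a handful of lines.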

Fenwick tree was mentioned in the second comment.

Treap and SQRT-Decomposition

pbds

That's easy to use, but I can't agree it is "beautiful". The code quality is questionable because of the lack of maintenance. For example, a stupid bug had been there since 2011 until I fixed it in 2017!

And now Jonathan wants to remove it from libstdc++, because there are no maintainers and only a few users (all from competitive programming).

Array, according to me, is the best data structure.

Suffix array: easy to understand, and a beautiful idea built on a sorting technique.

fhq treap, which uses only two kinds of operations, split and merge, can be an alternative to splay in most cases (exception: LCT) and can be made persistent. It's beautiful because of its short code and easy operations.

Persistent Treap (FHQ treap)

It can solve the sequence problems that Splay can solve (except Link-Cut Tree). The implementation is more straightforward than Splay. All you need is to write two functions, `merge(x,y)` and `split(rt,k)`, then combine these two to write `insert` and `delete`.

Leftist Tree is also an elegant data structure. Why has nobody mentioned it in this post?
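A hedged sketch of the split/merge treap described above, keyed by position (node 0 as a null sentinel; this is one common layout, not the only one, and the names are mine):

```cpp
#include <cstdlib>
#include <vector>

// FHQ (split/merge) treap over an implicit sequence:
// split(rt, k) cuts off the first k elements; merge(x, y) concatenates.
struct FhqTreap {
    struct Node { int l = 0, r = 0, sz = 0, pri = 0, val = 0; };
    std::vector<Node> t;
    FhqTreap() : t(1) {}                          // t[0] is the empty node
    int make(int v) {
        Node n; n.sz = 1; n.pri = std::rand(); n.val = v;
        t.push_back(n);
        return (int)t.size() - 1;
    }
    void pull(int x) { t[x].sz = t[t[x].l].sz + t[t[x].r].sz + 1; }
    void split(int x, int k, int &a, int &b) {    // first k elements go to a
        if (!x) { a = b = 0; return; }
        if (t[t[x].l].sz < k) { a = x; split(t[x].r, k - t[t[x].l].sz - 1, t[x].r, b); }
        else                  { b = x; split(t[x].l, k, a, t[x].l); }
        pull(x);
    }
    int merge(int x, int y) {                     // all of x comes before y
        if (!x || !y) return x + y;               // one side is the null node 0
        if (t[x].pri > t[y].pri) { t[x].r = merge(t[x].r, y); pull(x); return x; }
        t[y].l = merge(x, t[y].l); pull(y); return y;
    }
    void inorder(int x, std::vector<int> &out) {  // recover the stored sequence
        if (!x) return;
        inorder(t[x].l, out); out.push_back(t[x].val); inorder(t[x].r, out);
    }
};
```

Insert at position `k` is then just `split(root, k, a, b); root = merge(merge(a, make(v)), b);`, and delete is a pair of splits plus one merge.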

My favourite data structures are Link-Cut Cactuses and Heavy-Light Decomposition on the Cactus.

How about the compressed $$$0/1$$$ trie? It runs much faster than a balanced tree, and it can do almost everything a balanced tree can do! It can also easily be used in xor problems. I learned this beautiful DS from Xiuhan Wang, who won Silver Medal in IOI 2019.
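As an illustration of the xor use case, here is a plain (uncompressed) $$$0/1$$$ trie sketch for the classic maximum-xor query; the compressed variant additionally collapses single-child chains, which is omitted here for brevity, and the names are mine:

```cpp
#include <array>
#include <vector>

// 0/1 trie over the binary representation of non-negative ints.
// max_xor(x) assumes at least one value was inserted beforehand.
struct XorTrie {
    static const int BITS = 30;
    std::vector<std::array<int, 2>> ch;           // ch[v][bit] = child index, 0 = none
    XorTrie() : ch(1) {}                          // node 0 is the root
    void insert(int x) {
        int v = 0;
        for (int b = BITS; b >= 0; --b) {
            int d = (x >> b) & 1;
            if (!ch[v][d]) { ch.push_back({0, 0}); ch[v][d] = (int)ch.size() - 1; }
            v = ch[v][d];
        }
    }
    int max_xor(int x) const {                    // max over inserted y of x ^ y
        int v = 0, res = 0;
        for (int b = BITS; b >= 0; --b) {
            int d = ((x >> b) & 1) ^ 1;           // greedily take the opposite bit
            if (ch[v][d]) { res |= 1 << b; v = ch[v][d]; }
            else v = ch[v][d ^ 1];
        }
        return res;
    }
};
```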

Li Chao Tree.

This is a data structure that can add lines in the 2D plane and find the minimum y-value at $$$x=a$$$. You can use set + ConvexHullTrick to solve this problem, but with a Li Chao Tree you can also add segments. Despite that, the implementation is not so heavy, similar to RMQ. That's why I think this data structure is beautiful.
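A minimal Li Chao tree sketch over integer $$$x \in [lo, hi)$$$, supporting add-line and point-minimum (segment insertion needs only an extra range-descent wrapper, which is omitted; all names are my own):

```cpp
#include <algorithm>
#include <climits>
#include <vector>

// Li Chao tree: each segment-tree node keeps the line that is best at the
// node's midpoint; the loser line is pushed into the half where it may still win.
struct LiChao {
    struct Line { long long k, m; long long at(long long x) const { return k * x + m; } };
    int lo, hi;
    std::vector<Line> seg;
    std::vector<bool> has;
    LiChao(int lo_, int hi_)
        : lo(lo_), hi(hi_), seg(4 * (hi_ - lo_)), has(4 * (hi_ - lo_), false) {}
    void add(Line ln, int v, int l, int r) {
        if (!has[v]) { seg[v] = ln; has[v] = true; return; }
        int mid = (l + r) / 2;
        bool left_better = ln.at(l) < seg[v].at(l);
        bool mid_better = ln.at(mid) < seg[v].at(mid);
        if (mid_better) std::swap(seg[v], ln);    // keep the midpoint winner here
        if (r - l == 1) return;
        if (left_better != mid_better) add(ln, 2 * v, l, mid);
        else add(ln, 2 * v + 1, mid, r);
    }
    void add(long long k, long long m) { add({k, m}, 1, lo, hi); }
    long long query(int x, int v, int l, int r) {
        long long res = has[v] ? seg[v].at(x) : LLONG_MAX;
        if (r - l == 1) return res;
        int mid = (l + r) / 2;
        long long sub = x < mid ? query(x, 2 * v, l, mid) : query(x, 2 * v + 1, mid, r);
        return std::min(res, sub);
    }
    long long query(int x) { return query(x, 1, lo, hi); }
};
```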

Black hole — Insertion is easy. Retrieval is difficult though.

Unfortunately, spacetime is so distorted near a black hole that, from your point of view, the "insertion" will take infinite time (assuming you're not in the black hole). So you have an $$$O(\infty)$$$ insertion! :)

Trie! It counts as an advanced data structure but is actually very easy to understand and implement. And the concept behind it is exceptionally beautiful. It reduces the time complexity by a great extent too.

Tango tree

I like the map data structure; it's quite useful in various areas.

sqrt decomposition

Segment tree is a very powerful data structure. It can be used in many problems for both point and range updates/queries, and also in many tree problems for updates and queries.

I think it is a binary indexed tree (Fenwick tree). It is very easy and fast to code. Also, the idea of a Fenwick tree is very beautiful.

The most beautiful (and useful) data structure is the human brain. It allows us to answer a superset of problems solvable by any DS in Competitive Programming (by simply knowing which DS to use). Too bad, many greens, grays and cyans don't know how to use this data structure.

How does this beautiful data structure allow you to leave such offensive comments?

Because this beautiful data structure allows one to post shitty blogs and post shitty comments which often can be answered by a Google search/being more hardworking.