### nor's blog

By nor, 11 months ago,

Disclaimer: This is not an introduction to greedy algorithms. Rather, it is only a way to formalize and internalize certain patterns that crop up while solving problems that can be solved using a greedy algorithm.

Note for beginners: If you're uncomfortable with proving the correctness of greedy algorithms, I would refer you to this tutorial that describes two common ways of proving correctness — "greedy stays ahead" and "exchange arguments". For examples of such arguments, I would recommend trying to prove the correctness of standard greedy algorithms (such as choosing events to maximize the number of non-overlapping events, Kruskal's algorithm, and binary representation of an integer) using these methods, and using your favourite search engine to look up more examples.

Have you ever wondered why greedy algorithms sometimes magically seem to work? Or found them unintuitive in a mathematical sense? This post is meant to be an introduction to a mathematical structure called a greedoid (developed in the 1980s, but with certain errors that were corrected as recently as 2021), which captures some of the intuition behind greedy algorithms and provides a fresh perspective on them.

The idea behind this blog is to abstract out commonly occurring patterns into a single structure, which can be focused upon and studied extensively. This helps by making it easy to recognize such arguments in many situations, especially when you're not explicitly looking for them. Note that this blog is in no way a comprehensive collection of all results about greedoids; for a more comprehensive treatment, I would refer you to the references as well as your favourite search engine. If there is something unclear in this blog, please let me know in the comments.

To get an idea of how wide a variety of concepts greedoids are connected to: they come up in BFS, Dijkstra's algorithm, Prim's algorithm, Kruskal's algorithm, the blossom algorithm, ear decompositions of graphs, posets, machine scheduling, convex hulls, linear algebra and so on, some of which we shall discuss in this blog.

Note that we will mainly focus on intuition and results, and not on proofs of these properties, because they tend to become quite involved and there is not enough space to discuss them all here. However, we will sketch proofs of the properties that are most important for developing an intuition for greedoids.

1. Motivation and definition
• Unordered version of conditions on optimality
• Weaker conditions on optimality of the greedy algorithm
2. Some examples
• Interval greedoids
• Matroids
• Antimatroids
• Other greedoids
3. Constructing more greedoids
• From matroids
• From antimatroids
• From greedoids
4. Some random facts about greedoids

### Motivation and definition

We will take a different approach from most treatments of greedoids (which throw a definition at you), and try to build up to the structure that a greedoid provides us.

What does a usual greedy algorithm look like? Initially, we have made no decisions. Then we perform a sequence of steps. After each step, we have a string of decisions that we have taken (for instance, picking some element and adding it to a set). We also want a set of final choices (beyond which we can't extend).

To make this concept precise, we define the following:

1. A ground set is any set. We usually interpret this as a set of incremental decisions that can be made.
2. A language is any set of finite sequences (or words) of elements coming from the ground set. We interpret a language as a set of possible sequences of decisions we can take in the greedy algorithm.
3. A simple language is any language where the sequences don't have repeated elements. This definition is motivated by the fact that we don't take a decision more than once.
4. A hereditary language is any language $L$ on a ground set $S$ satisfying the following two conditions:
• $\emptyset \in L$
• For any $s \in L$, every prefix of $s$ is also in $L$. This definition is motivated by the fact that we want to capture a sense of time/causality in our sequence of decisions. That is, if we have a sequence of decisions, we must have arrived at it from its immediately smaller prefix of decisions.
5. A basic word in $L$ is any $s \in L$ such that there is no element $e \in S$ for which $s$ extended by that element (denoted by $s \cdot e$ or $se$) is in $L$. This is motivated by the fact that we want to have some ending states at the end of the greedy algorithm.
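The definitions above are easy to play with concretely. Here is a toy Python sketch (the function names and the tiny example language are illustrative, not from the blog) that checks simplicity and heredity, and computes the basic words:

```python
# A toy model of the definitions above, assuming a language is given as an
# explicit finite set of tuples over the ground set.

def is_simple(language):
    """A language is simple if no word repeats an element."""
    return all(len(set(word)) == len(word) for word in language)

def is_hereditary(language):
    """Hereditary: contains the empty word and every prefix of every word."""
    if () not in language:
        return False
    return all(word[:i] in language for word in language for i in range(len(word)))

def basic_words(language, ground_set):
    """Basic words: words that no single decision can extend within L."""
    return {w for w in language
            if not any(w + (e,) in language for e in ground_set)}

# Example: decisions {a, b}, taken one at a time in either order.
S = {'a', 'b'}
L = {(), ('a',), ('b',), ('a', 'b'), ('b', 'a')}
assert is_simple(L) and is_hereditary(L)
assert basic_words(L, S) == {('a', 'b'), ('b', 'a')}
```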

We can now rephrase the optimization problems that a large set of greedy algorithms try to solve in the following terms:

• Consider a simple hereditary language $L$ on the ground set $S$, and a function $w : L \to \mathbb{R}$. Now maximize $w(s)$ where $s$ is a basic word of $L$.

A (certain quite general type of) greedy algorithm looks something like this:

• Initially there are no decisions taken.
• At step $i$, when the decisions $x_1, \dots, x_{i - 1}$ have already been taken, pick an $x_i \in S$ such that $x_1 x_2 \cdots x_i \in L$, and this choice of $x_i$ maximizes $w(x_1 x_2 \cdots x_i)$ over all valid choices of $x_i$. If there is no such $x_i$, terminate the algorithm.
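The two steps above can be transcribed directly into Python. This is a toy model, assuming $L$ is given explicitly as a finite set of tuples and $w$ is an arbitrary function on words; the names and the tiny example are illustrative:

```python
# A direct sketch of the generic greedy algorithm described above.

def greedy(L, S, w):
    """Repeatedly extend the current word by the decision maximizing w."""
    word = ()
    while True:
        candidates = [word + (x,) for x in S if word + (x,) in L]
        if not candidates:
            return word  # no extension possible: we are at a basic word
        word = max(candidates, key=w)

# Example: pick elements one at a time; w rewards picking 'a' first.
S = {'a', 'b'}
L = {(), ('a',), ('b',), ('a', 'b'), ('b', 'a')}
w = lambda word: 10.0 if word and word[0] == 'a' else 1.0
assert greedy(L, S, w) == ('a', 'b')
```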

Of course, taking arbitrary $w$ doesn't make any sense, so we limit $w$ to the following kinds of functions (we shall relax these conditions later, so that we can reason about a larger variety of problems):

1. If at a certain point, a decision $x$ is optimal, then it will also be optimal at a later stage. That is, if $x$ is chosen when the prefix was $s$, then for a prefix $st$, we have $w(stxu) \ge w(styu)$ for all valid $y, u$ ($u$ can be empty, $y$ is a decision).
2. It is optimal to pick $x$ sooner than later. That is, if $x$ is chosen when the prefix was $s$, then we must have $w(sxtyu) \ge w(sytxu)$ for all valid $y, t, u$ ($t, u$ can be empty, $y$ is a decision).

To this end, we define greedoids as follows:

A greedoid is a simple hereditary language $L$ on a ground set $S$ that satisfies an "exchange" property of the following form:

• If $s, t \in L$ satisfy $|s| > |t|$, then there is a decision in $s$ that we can append to $t$, so that the resulting word is also in $L$.

This extra condition is crucial for us to be able to prove the optimality of greedy solutions using an exchange argument (the usual exchange argument that people apply when proving the correctness of greedy algorithms).

One notable consequence of this extra condition is that all basic words in $L$ have the same length. This might seem like a major downgrade from the kind of generality we were going for, but it still handles a lot of cases and gives us nice properties to work with.

For people familiar with matroids

It turns out that greedoids have the following equivalent definitions which will be useful later on (we omit the proofs, since they are quite easy):

1. Let a set variant of the greedoid be defined as follows: a pair $(S, F)$ of a ground set $S$ and a family $F$ of its subsets, such that $\emptyset \in F$ and for any $A, B \in F$ with $|A| > |B|$, there exists an $s \in A \setminus B$ such that $B \cup \{s\} \in F$. Then the original definition of a greedoid and this set variant are equivalent, in the sense that for any such $F$, there is exactly one simple hereditary language $L$ such that the family of sets of decisions of words in $L$ is precisely $F$. (That is, when we replace each sequence in $L$ with its unordered version, the resulting family of sets is $F$.)
2. We can replace the second condition in the definition of the set variant by the condition that all maximal sets have the same cardinality, where a set in $F$ is maximal if adding any element to it yields a set not in $F$.

There is also a set analogue for simple hereditary languages: $\emptyset \in F$, and for each nonempty $X \in F$, there is an element $x \in X$ such that $X \setminus \{x\} \in F$. The intuition remains the same. Note that again, we don't need this to hold for all $x \in X$, but only for at least one $x \in X$.
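For tiny ground sets, both set-variant conditions can be checked by brute force. A hypothetical sketch (the function names and the example family are mine, not from the blog):

```python
# Brute-force checks of the set-variant greedoid definition above.
# Members of F are frozensets; feasible only for tiny examples.

def is_set_greedoid(F):
    """Check: empty set is in F, and the exchange property holds."""
    if frozenset() not in F:
        return False
    for A in F:
        for B in F:
            if len(A) > len(B) and not any(B | {s} in F for s in A - B):
                return False
    return True

def is_accessible(F):
    """Set analogue of hereditary: every nonempty X in F loses some element."""
    return all(not X or any(X - {x} in F for x in X) for X in F)

# Example: decisions {a, b}, where b may only be taken after a.
F = {frozenset(), frozenset('a'), frozenset('ab')}
assert is_set_greedoid(F) and is_accessible(F)
# Dropping the middle set breaks the exchange property.
assert not is_set_greedoid({frozenset(), frozenset('ab')})
```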

But why do we care about greedoids at all? The answer lies in the following informally-stated two-part theorem:

Theorem 1:

1. If $L$ is a greedoid on $S$, and $w$ satisfies the conditions mentioned earlier, then the greedy algorithm gives the correct answer.
2. If the greedy algorithm gives the correct answer for all $w$ satisfying the conditions mentioned earlier, and if all basic words of $L$ have the same length, then $L$ is a greedoid on $S$.
Brief proof sketch

Note that the definition of greedoid doesn't depend on $w$ in any way, so it can be studied as a combinatorial structure in its own right — this leads to quite non-trivial results at times. However, when we think of them in the greedy sense, we almost always have an associated $w$.

In a lot of cases, $w$ has a much simpler (and restrictive) structure, for instance, having a positive and additive weight function (which is defined on each element). In that case, the following algorithm works for matroids (special cases of greedoids): sort elements in descending order of their weight, and pick an element iff adding it to the set of current choices is still in the matroid.
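As a concrete instance of this matroid greedy: in the graphic matroid, a set of edges is independent iff it forms no cycle, which a union-find structure tests cheaply, so sorting edges by weight and greedily keeping independent ones is exactly Kruskal's algorithm. A minimal sketch, phrased as maximum spanning forest since the weights here are rewards:

```python
# Kruskal's algorithm as the matroid greedy on the graphic matroid.

def max_spanning_forest(n, edges):
    """edges: list of (weight, u, v). Returns (chosen edges, total weight)."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    chosen, total = [], 0
    for w, u, v in sorted(edges, reverse=True):  # descending weight
        ru, rv = find(u), find(v)
        if ru != rv:  # adding the edge keeps the set independent (no cycle)
            parent[ru] = rv
            chosen.append((u, v))
            total += w
    return chosen, total

edges = [(4, 0, 1), (3, 1, 2), (5, 0, 2), (1, 2, 3)]
chosen, total = max_spanning_forest(4, edges)
assert total == 10  # picks weights 5, 4, 1; the weight-3 edge closes a cycle
```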

##### Unordered version of conditions on optimality

Usually, $w$ is not defined on a sequence of steps, but on the set of choices you have made. However, in the general case, the "natural" relaxation from the ordered version to the unordered version of greedoids fails (both in being equivalent and in giving the optimal answer). In fact, this error was present in the original publications (since 1981) and was corrected fairly recently in 2021. For a more detailed discussion (and the proofs of the following few theorems), please refer to $[1]$, which we shall closely follow.

We shall start out with some special cases:

Theorem 2: Consider a greedoid $F$ (unordered) on a ground set $S$. Suppose $c$ is a utility function from $S$ to $\mathbb{R}$, so that the weight of a set of decisions is additive over its elements. Then the greedy algorithm gives an optimal solution iff the following is true:

• If $A, A \cup \{x\} \in F$ and $B$ is an unordered basic word (i.e., maximal set) in $F$ such that $A \subseteq B$ and $x \in S \setminus B$, then there is a $y \in B \setminus A$ such that $A \cup \{y\} \in F$ and $B \cup \{x\} \setminus \{y\}$ is also a maximal set.

The following theorem generalizes the sufficiency:

Theorem 3: Consider a greedoid $F$ (unordered) on a ground set $S$ and $w : 2^S \to \mathbb{R}$. The greedy algorithm gives an optimal solution if (and not iff) the following is true:

• If for some $A, A \cup \{x\} \in F$ and $B$ being an unordered basic word (i.e., maximal set) in $F$ such that $A \subseteq B$ and $x \in S \setminus B$, it holds that $w(A \cup \{x\}) \ge w(A \cup \{u\})$ for all valid $u$, then there is a $y \in B \setminus A$ such that $A \cup \{y\} \in F$ and $B \cup \{x\} \setminus \{y\}$ is also a maximal set and $w(B \cup \{x\} \setminus \{y\}) \ge w(B)$.

This theorem can be used to show optimality of Prim's algorithm on its corresponding greedoid.

This theorem can also be derived from theorem 6 that we shall mention later on.

Remember that we mentioned that the original claim in the 1981 paper about greedoids was false? It turns out that the claim is true about a certain type of greedoids, called local forest greedoids. Since it is not so relevant to our discussion, we're going to spoiler it to avoid impairing the readability of what follows.

Local forest greedoids and some optimality theorems
##### Weaker conditions on optimality of the greedy algorithm

As we mentioned earlier, the constraints on $w$ used while motivating the definition of an (ordered) greedoid are quite stringent, and they might not hold in a lot of cases. Here, we work on relaxing those constraints (which will also have implications for Theorem 3 above).

Theorem 6: Let $L$ be a greedoid on a ground set $S$, and $w$ be an objective function that satisfies the following condition:

• If $ax \in L$ such that $w(ax) \ge w(ay)$ for every valid $y$ and $c = azb$ is a basic word, then there exists a basic word $d = axe$ such that $w(d) \ge w(c)$.

Then the greedy algorithm gives an optimal solution.

The proof is quite easy, and goes by way of contradiction. There is a special case of the above theorem (that needs some care to prove), which is applicable in a lot of cases. It turns out that it corresponds to the last part of the proof sketch of Theorem 1, so maybe it's not such a bad idea to read it again.

### Some examples

The main purpose of the examples in this section is to showcase the kinds of greedoids that exist in the wild and some of their properties, so that it becomes easy to recognize these structures. I have added the examples in spoiler tags, just to make navigation easier. Some of these examples will make more sense after reading the section on how to construct greedoids.

While going through these examples, I encourage you to find some examples of cost functions that correspond to greedy optimality, so that you can recognize them in the future. Discussing them in the comments can be a great idea too. For a slightly larger set of examples, please refer to Wikipedia and the references.

When appropriate, we will also point out greedy algorithms that are associated with these structures.

#### Interval greedoids

These are greedoids that satisfy the local union property (introduced in the section on local forest greedoids). Equivalently, they are greedoids where for every $A \subseteq B \subseteq C$ with $A, B, C \in F$, if $A \cup \{x\}, C \cup \{x\} \in F$ for some $x \not \in C$, then $B \cup \{x\} \in F$.
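The interval property lends itself to a brute-force check on small examples. The family below is a hypothetical toy chosen by hand, meant only to exercise the definition:

```python
# Brute-force check of the interval property stated above, assuming F is a
# family of frozensets over a small ground set.

def has_interval_property(F, ground):
    for A in F:
        for B in F:
            for C in F:
                if A <= B <= C:
                    for x in ground - C:
                        if A | {x} in F and C | {x} in F and B | {x} not in F:
                            return False
    return True

# Toy family on decisions {a, b, c} where c may only be taken after a.
fs = frozenset
F = {fs(), fs('a'), fs('b'), fs('ab'), fs('ac'), fs('abc')}
assert has_interval_property(F, fs('abc'))
```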

Some examples are:

1. All local forest greedoids.
2. Matroids (to be introduced later).
3. Anti-matroids (to be introduced later).
Directed branching greedoid
Undirected branching greedoid
Clique elimination greedoid

#### Matroids

Matroids are greedoids that satisfy a stronger property than the interval property for interval greedoids, by removing the lower bounds. More precisely, a matroid is a greedoid that satisfies the following property:

• For every $B \subseteq C$ with $B, C \in F$, if $C \cup \{x\} \in F$ for some $x \not \in C$, then $B \cup \{x\} \in F$.

Equivalently, they are greedoids where, in the unordered version, for each $X \in F$ and for all (not just at least one) $x \in X$ we have $X \setminus \{x\} \in F$. In other words, they are downward closed. In this context, elements of $F$ are also called independent sets.

Intuitively, they try to capture some concept of independence. In fact, matroids came out of linear algebra and graph theory, and greedoids arose as a generalization of them when people realized that matroids are too restrictive, and that downward-closedness is not really important in the context of greedy algorithms. The examples will make this notion clearer.

Note that for matroids, it is sufficient to specify the bases (the maximal sets in $F$), and downward closedness handles the rest for us.
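As a quick illustration of this remark, we can reconstruct a matroid's family $F$ from its bases by closing downward. A brute-force sketch for tiny ground sets (the uniform matroid example is standard; the code itself is illustrative):

```python
# Reconstruct F from the bases of a matroid by downward closure.

from itertools import combinations

def matroid_from_bases(bases):
    """Close a collection of bases (frozensets) downward to the full family F."""
    F = set()
    for B in bases:
        for r in range(len(B) + 1):
            F.update(frozenset(c) for c in combinations(B, r))
    return F

# The uniform matroid U(2, 3) on {a, b, c}, specified by its three bases.
bases = {frozenset('ab'), frozenset('bc'), frozenset('ac')}
F = matroid_from_bases(bases)
assert frozenset() in F and frozenset('a') in F
assert len(F) == 7  # the empty set, three singletons, three bases
```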

Matroids have a much vaster theory compared to greedoids, due to being studied for quite a long time and being more popular in the research community. Since this is a blog only on greedy algorithms, we will refrain from diving into matroid theory in too much detail. Interested readers can go through the references for more.

Some examples of matroids are as follows.

Free matroid
Uniform matroid
Graphic matroid
Transversal matroid
Gammoid
Algebraic matroid
Vector matroid
Column matroid

#### Antimatroids

Antimatroids are greedoids that satisfy a stronger property than the interval property for interval greedoids, by removing the upper bounds. More precisely, an antimatroid is a greedoid that satisfies the following property:

• For every $A \subseteq B$ with $A, B \in F$, if $A \cup \{x\} \in F$ for some $x \not \in B$, then $B \cup \{x\} \in F$.

Unlike matroids, this does not necessarily imply upward closure, but it does imply closure under unions (which is another way to define antimatroids).

Another definition of antimatroids that is in terms of languages (and gives some intuition about their structure) calls a simple hereditary language over a ground set an antimatroid iff the following two conditions hold:

1. Every element of the ground set must appear in a word in the language.
2. The exchange property holds for any two words $a, b$ such that $a$ has an element not in $b$, rather than only restricting it to $|a| > |b|$.

Using this, we note that the basic words are all permutations of the ground set.

Another piece of intuition (derived from the above definition) that helps with antimatroids is the fact that when we try constructing sets in an antimatroid, we keep adding elements to a set, and once we can add an element, we can add it any time after that.

It also makes sense to talk about antimatroids in the context of convex geometries. To understand the intuition behind this, consider the example of a special case of what is called a shelling antimatroid. Take a finite set of $n$ points in the 2D plane. At each step, remove a point that lies on the convex hull of the remaining points. The set of points removed so far at any stage of this process clearly forms an antimatroid by the above intuition. In fact, if instead of 2D we embed the ground set in a space of high enough dimension, we can realize any given antimatroid as such a shelling antimatroid!
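The 2D shelling process described above is easy to simulate. A minimal sketch using a standard monotone-chain convex hull; the point set and the choice of which hull vertex to peel next are arbitrary:

```python
# Simulating the 2D shelling antimatroid: repeatedly remove a point that is
# a vertex of the convex hull of the remaining points.

def convex_hull(points):
    """Andrew's monotone chain; returns the hull vertices of a 2D point set."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    def half_hull(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = half_hull(pts), half_hull(pts[::-1])
    return lower[:-1] + upper[:-1]

def shelling_order(points):
    """One feasible word of the shelling antimatroid: peel hull vertices."""
    remaining, order = list(points), []
    while remaining:
        p = convex_hull(remaining)[0]  # any hull vertex is a valid next decision
        remaining.remove(p)
        order.append(p)
    return order

pts = [(0, 0), (4, 0), (0, 4), (1, 1)]
order = shelling_order(pts)
assert set(order) == set(pts)
# The interior point (1, 1) only becomes removable once it reaches the hull,
# so it can never be the very first decision.
assert order[0] != (1, 1)
```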

What is so special about convexity here? It turns out we can use a sort of "anti-exchange" rule to define convex geometries in the following manner. It will roughly correspond to this fact: if a point $z$ is in the convex hull of a set $X$ and a point $y$, then $y$ is outside the convex hull of $X$ and $z$.

Let's consider a set system that is closed under intersection and contains both the ground set and the empty set. Any subset of the ground set (whether or not it is in the system) then has a smallest superset in the system, namely the intersection of all sets in the system that contain it. Let's call the mapping from a subset to this smallest superset $\tau$. Then we have the following properties:

1. $\tau(\emptyset) = \emptyset$
2. $A \subseteq \tau(A)$
3. $A \subseteq B \implies \tau(A) \subseteq \tau(B)$
4. $\tau(\tau(A)) = \tau(A)$

Such a set system is called a convex geometry if the following "anti-exchange" property holds:

• If some distinct $y, z \not \in \tau(X)$, but $z \in \tau(X \cup \{y\})$, then $y \not \in \tau(X \cup \{z\})$.

Intuitively, consider $\tau$ as a convex hull operator. Then the anti-exchange property simply means that if $z$ is in the convex hull of $X$ and $y$, then $y$ is outside the convex hull of $X$ and $z$.
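The closure operator $\tau$ and the anti-exchange check can be written down directly for a small intersection-closed family. A sketch (the three-collinear-points example is mine):

```python
# tau(A) is the smallest member of an intersection-closed family containing A;
# we then brute-force the anti-exchange property on a tiny convex geometry.

def tau(A, system):
    """Intersection of all members of `system` containing A."""
    out = None
    for X in system:
        if A <= X:
            out = X if out is None else out & X
    return out

def is_convex_geometry(ground, system):
    """Check anti-exchange: z in tau(X + y) and y in tau(X + z) never co-occur."""
    for X in system:
        closed = tau(X, system)
        outside = ground - closed
        for y in outside:
            for z in outside:
                if y != z and z in tau(closed | {y}, system) \
                        and y in tau(closed | {z}, system):
                    return False
    return True

# Three collinear points a < b < c: the convex sets are the "intervals".
ground = frozenset('abc')
system = {frozenset(s) for s in ['', 'a', 'b', 'c', 'ab', 'bc', 'abc']}
assert tau(frozenset('ac'), system) == frozenset('abc')  # b lies between a and c
assert is_convex_geometry(ground, system)
```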

Now it turns out that the complements of all sets in an antimatroid form a convex geometry! This is not too hard to prove.

We shall now give some examples of antimatroids.

Chain antimatroid
Poset antimatroid
Shelling antimatroid
Perfect elimination antimatroid
Point-line search antimatroid
Cover antimatroid
Point search antimatroid
Line search antimatroid
Undirected point/line search antimatroid
Capacitated point search antimatroid
Transitivity antimatroid and shelling of an acyclic matroid
Chip-firing antimatroid
Universal antimatroid

#### Other greedoids

Now that we have a few special types of greedoids out of the way, we shall look at some more.

Perfect elimination greedoids on bipartite graphs
Retract, monoid retract and dismantling greedoids
Bipartite matching greedoid
Gaussian elimination greedoid
Twisted and dual-twisted matroids (which are not matroids)
Ear decomposition greedoid (undirected and directed)
Blossom greedoid

### Constructing more greedoids

Note that some constructions from polymatroids/matroids etc. were already mentioned in the section on examples, so we will focus on the ones that were not.

##### From matroids
Truncation
Duality
Restriction
Contraction
Sums and unions
##### From antimatroids
Join of antimatroids on the same ground set
Intersection of matroid and antimatroid
Truncation
Restriction
Contraction

### Some random facts about greedoids

This is a collection of interesting facts about greedoids that just don't seem to fit in the sections above, but are important enough not to be ignored. Of course, every important construction or idea gives rise to a combinatorial explosion of derivable facts, so what we discussed above is far from a comprehensive introduction. This section will hopefully evolve over time too, so I'm open to suggestions regarding it.

• You can construct Tutte polynomials for greedoids, just like you can for matroids.
• We can characterize greedoids as matroids, antimatroids, or neither of these in terms of having certain kinds of greedoid minors.
• Antimatroids are very closely related to lattices, and in general we can reason about greedoids by their associated poset flats too.

1. Dávid Szeszlér (2021), Sufficient conditions for the optimality of the greedy algorithm in greedoids.
2. Bernhard Korte, László Lovász, Rainer Schrader (1991), Greedoids.
3. Matroid intersection in simple words.
4. Goecke, Korte, Lovász (1986), Examples and algorithmic properties of greedoids.
5. Bernhard Korte, László Lovász (1988), Intersection of matroids and antimatroids.
6. Brenda L. Dietrich (1987), Matroids and antimatroids — a survey.

Lastly, since this is a huge blog, some mistakes might have crept in. If you find any, please let me know!


 » 11 months ago, # |   +4 Good article. However, learning matroid didn't give me any power solving greedy tasks :(
•  » » 11 months ago, # ^ |   -24 hahaha, i kinda wonder if these theories are useful in any way lol. given that a red coder also thinks like me :)
•  » » » 11 months ago, # ^ |   0 There once was a time matroid-intersection tasks popped up.
•  » » 11 months ago, # ^ |   +30 Me neither :(. But I think it is a nice way to generalize scattered ideas into one coherent theory, and maybe internalizing some examples can help in unexpected situations where greedy doesn't make sense.
 » 11 months ago, # |   +1 Thanks for helping the community!
 » 11 months ago, # |   +8 Really helpful, thanks for the blog!!
 » 11 months ago, # |   +8 another nor sir blog to star :prayge:
 » 11 months ago, # |   +8 Soon top 10 contributors :prayge:
•  » » 11 months ago, # ^ |   0 donee
 » 3 months ago, # | ← Rev. 3 →   +11 nor I don't understand the second claim of the proof of theorem 1 part 1, mind clarifying? Does the subsequence need to be contiguous? As I understand the claim, if $t_1,t_2,a\in S$ and $t_1t_2,a\in L$, then $at_1\in L$. But this seems false for $L=\{\emptyset,a,at_2,t_1,t_1t_2\}$?
•  » » 3 months ago, # ^ |   +3 Also, should Theorem 3 have $w:2^S\to\mathbb R$? That is, $w$ maps subsets of $S$ to reals.
•  » » » 3 months ago, # ^ |   +3 Indeed, thanks for catching the typo.
•  » » 3 months ago, # ^ |   +10 Seems like I messed up the claim while writing up the sketch. Instead of us being able to pick an arbitrary subsequence, there simply exists a subsequence such that $st' \in L$. No, it doesn't need to be contiguous. For example, if $s = pqr$, and $t = uvwxyz$, then a subsequence that works can be the subsequence $uxz$, so that the word $st'$ in the claim is $pqravwuyx$.
•  » » » 3 months ago, # ^ |   0 The second claim makes sense now, but I don't get third claim. Is it really a continuation of the first claim? Why would $sa\in L$ imply $st'\in L$? Isn't the latter string longer?
•  » » » » 3 months ago, # ^ |   0 The part that it is a continuation is an error from when I mentioned the first claim elsewhere in the first draft; the second claim was the first claim back then. It is also very poorly phrased in retrospect (in quite a lot of ways); it is actually connected to the subsequence $t'$ in the second claim. Here's what I meant when I wrote that (paraphrasing from my notes): let's say we have $st, sa \in L$. Consider the $t'$ from the second claim, and the positions of the subsequence that was shifted by 1. If we move $a$ within that subsequence while preserving the order of the other elements, then the resulting $t''$ will satisfy $st'' \in L$. (Looking at it another way, we are shifting one part of the remaining subsequence to before $a$, while keeping the order of the remaining elements the same, which is essentially what the claim says.) If we consider the previous example, where $s = pqr$, $t = uvwxyz$ and $t' = avwuyx$, we can move $a$ around within the positions of the characters $a, u, x$, while preserving the relative order of $u$ and $x$, to get a $t''$ such that $st''$ is in $L$. Thus $t''$ can be $uvwayx$, $uvwxya$ and of course $t'$.
 » 3 months ago, # | ← Rev. 2 →   0 nor Regarding local forest greedoids: One property of this structure is that for every element ... Wouldn't $\{x\}$ be such a set? Why is this set unique (and what path does it induce)?Never mind, I confused these with matroids.
•  » » 3 months ago, # ^ |   0 Actually, I'm still confused about this property. There doesn't seem to be a unique set for the directed branching greedoid, is there?
•  » » » 3 months ago, # ^ |   +5 Now I'm not sure how this issue can be resolved either. Looking through my references, I was referring to this and this for directed branching greedoids in the context of local forest greedoids, and the first link gives the claim for branching greedoids.Also, I just noticed there are other definitions of directed branching greedoids, so I probably inadvertently mixed them up — for instance, this paper gives the definition that I used in the examples above. For the purposes of proving the correctness of certain greedy algorithms on rooted graphs, however, the two definitions "should" both lead to correct proofs.
 » 3 months ago, # |   +3 Then the original definition of a greedoid and this definition of a set variant are equivalent, ... Is it true that for any $L$ there is a corresponding $F$? If not, it doesn't seem correct to refer to these definitions as equivalent.
•  » » 3 months ago, # ^ |   0 Yes, this is true.
•  » » » 3 months ago, # ^ |   0 How do you show that this is true?
•  » » » » » 3 months ago, # ^ | ← Rev. 6 →   0 Simply map a $w \in L$ to the set of its characters; call this mapping $f$. We claim that $f$ is the required mapping to go from $L$ to $F$. $|s| > |t|$ in $L$ implies, since $L$ is a simple language, that $f(s) \setminus f(t)$ is the set of characters that are in $s$ but not in $t$, and there can be at most one occurrence of any character of $S$ among the characters in $s$ but not in $t$ (with multiplicity). So for every $tx$ with $x \in s$ and $x \not \in t$, we have a unique element $f(t) \cup \{x\}$ in $F$. Now for any $A, B \in F$ with $|A| > |B|$, we can use any preimage of $A$ and any preimage of $B$ in $L$ to claim that there is an $x \in A \setminus B$ such that $B \cup \{x\} \in F$. (Abuse of notation: we will also denote the map from $L$ to the family of sets as $f$.) For the other direction, consider an unordered greedoid $(S, F)$, and the maximal simple hereditary language $L$ such that $f(L) \subseteq F$. It is easy to check that $L$ is an ordered greedoid. Let's call this mapping $g$. Note that $f$ and $g$ applied in this order will map $L$ to itself; this can be shown by induction on word size. Since this gives an explicit natural bijection, we can use either definition for a greedoid. Most sources (such as Wikipedia) use an unordered version, but I find the ordered version to be much more intuitive for greedy algorithms.
•  » » » » » 3 months ago, # ^ |   0 Mind explaining the induction?
•  » » » » » » 3 months ago, # ^ | ← Rev. 6 →   +15 Let's denote $g(f(L))$ as $L'$. In whatever follows, we'll implicitly use the fact that $L$ and $L'$ are simple languages.All characters of $L$ are present as singleton sets in $f(L)$ (and they are the only singletons), so all words of size $1$ in $L'$ are in $L$ too.It'll be more concise to use well-ordering instead of induction here to show that $L'$ is a subset of $L$ (same thing since they're equivalent). Consider a smallest word $w$ in $L'$ that is not in $L$ (if it exists). $w$ has size at least $2$. Since $L'$ is hereditary, $w[:-1]$ is in $L'$. But since $w$ is the smallest word in $L'$ that is not in $L$, $w[:-1]$ is in $L$ as well. Note that $f(L')$ is a subset of $f(L)$, so there is a word $w'$ in $L$ such that $f(w')$ = $f(w)$. Since $L$ is a greedoid, apply the exchange lemma on $w'$ and $w[:-1]$. This shows $w$ is in $L$, contradiction.So $L'$ is a subset of $L$, which means $L$ and $L'$ are identical, owing to the maximality of $L'$.