


Infinitary Combinatorics and Forcing.


Today, we are going to show that (CH) and (GCH) are independent of ZFC. We already know they are both consistent with ZFC, thanks to Gödel: we even have a model of ZFC in which (CH) and (GCH) hold. What we are going to do today is show that we can also break both (CH) and (GCH).

Before we begin, we need to ensure that everyone is on the same page. That is to say we need to make sure you all know a couple of very basic things.


Preliminaries



=> The Language and Axioms of Set Theory.

The first few things we need to be on the same page about are: The Axioms of ZFC, and the language of set theory.

Firstly, the language of set theory is simple: it's just the extension of first-order logic generated by adding the symbol ∈.

Now, because I'm lazy, and going over each axiom one by one would comprise a seminar all by itself, I will do what amounts to telling the lot of you "get bent, and RTFM":

http://en.wikipedia.org/wiki/Zermelo%E2%80%93Fraenkel_set_theory

What we will need for the practical purposes of this seminar is that for every set A, the set of all subsets of A, P(A), is defined. The same goes for unions of collections of sets, pairing of two sets, etc... Moreover, by the Axiom of Choice, the cartesian product of a collection of non-empty sets is non-empty, and is a set... There are no sets which satisfy X ∈ X. There is an empty set, {}. There is a set of natural numbers N, and this set is infinite. My cat is not a set, whereas my fish might be... and so on.

Moreover, for any set Z, and any well-formed statement Q(z, v0,..,vn) from our language (i.e. for each (z, v0,..,vn), Q is either true or false, and makes sense), Y = {z ∈ Z: for all vi( Q(z, v0,..,vn) )} is a set. (This is the Axiom Schema of Separation at work.)
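
If it helps, Separation behaves exactly like a set comprehension with a guard. Here is a minimal Python analogy (my own illustration; the predicate Q stands in for any formula you can write down):

    # Separation: carve a subset out of an existing set Z using a predicate Q.
    Z = set(range(20))
    Q = lambda z: z % 3 == 0     # any definable property will do
    Y = {z for z in Z if Q(z)}   # Y = {z in Z : Q(z)} is again a set
    print(Y)                     # {0, 3, 6, 9, 12, 15, 18}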


=> Ordinals and Cardinals (under AC)

The second couple of things you'll need to know are how to construct ordinals and cardinals (under AC):

In our case we will begin with the von Neumann definition of ordinal.

We define the collection ON of all ordinals as follows:

0 = {} ∈ ON. For every a ∈ ON, define a+1 = S(a) = a ∪ {a}; then a+1 ∈ ON. For every set S ⊂ ON, define sup(S) = ⋃ { x: x ∈ S } and inf(S) = ⋂ { x: x ∈ S }; then sup(S) ∈ ON, and S ≠ {} implies inf(S) ∈ ON.
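
To make the successor operation concrete, here is a minimal Python sketch (my own illustration, not part of the official construction) building the first few von Neumann ordinals as nested frozensets:

    # Finite von Neumann ordinals as hereditarily finite sets.
    zero = frozenset()             # 0 = {}

    def succ(a):
        return a | frozenset([a])  # a + 1 = a ∪ {a}

    one, two, three = succ(zero), succ(succ(zero)), succ(succ(succ(zero)))
    assert zero in three and one in three and two in three
    assert len(three) == 3         # an ordinal has exactly itself-many members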

Now, some things need to be said about ON. Firstly, every member of ON is well-ordered by ∈. Moreover, every set A ⊂ ON is well-ordered by ∈ (the key point here is that A is a set), and has a supremum in ON (namely sup(A)).

It follows that ON is not a set. The proof goes like this: Suppose ON is a set; then sup(ON) ∈ ON, and so sup(ON)+1 ∈ ON. But every a ∈ ON satisfies a ∈ sup(ON) (because a ∈ a+1 ∈ ON). In particular sup(ON)+1 ∈ sup(ON), while also sup(ON) ∈ sup(ON)+1, giving an ∈-cycle, a contradiction.

Moving forward, every well-ordered set (X, <) is "order-isomorphic" to an ordinal (hence the name). To see this, we will enumerate the set X using the ordinals: First select some q ∉ X, and extend < so that for all x ∈ X, x < q. Now set u(0) = min(X), and for every a ∈ ON\{0}, define u(a) = min( (X \ {u(b) : b ∈ a}) ∪ {q} ). Because X is a set, S = { a ∈ ON: u(a) ≠ q } is a set with S ⊂ ON; being a downward-closed set of ordinals, S is itself an ordinal, so letting type(X,<) = S, we must have (X,<) "order-isomorphic" to (type(X,<), ∈), because by construction u is a bijection onto X, and for every a,b ∈ type(X,<), a ∈ b implies u(a) < u(b).
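
For a finite well-order, the enumeration u above is just "repeatedly remove the <-least element". A minimal Python sketch of that finite special case (the predicate lt is a stand-in for <):

    # Enumerate a finite well-ordered set X; for finite X, type(X,<) = len(X).
    def enumerate_well_order(X, lt):
        remaining, u = set(X), []
        while remaining:              # u(a) = min(X \ {u(b): b in a})
            m = next(iter(remaining))
            for x in remaining:
                if lt(x, m):          # locate the <-least remaining element
                    m = x
            u.append(m)
            remaining.remove(m)
        return u                      # a |-> u[a] is the order-isomorphism

    assert enumerate_well_order({3, 1, 2}, lambda x, y: x < y) == [1, 2, 3]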

It follows that the natural numbers, being well-ordered, are order-isomorphic to some ordinal; we call this ordinal ω.

Next we are going to define what cardinality and cardinals are. Firstly, for every set A, we can construct an equivalence class of sets [A], using our normal intuition about cardinality; that is to say, [A] = {B: B is a set, and there is a bijection between A and B}. Traditionally, we say that the equivalence class [A] is the cardinality of A. However, under the Axiom of Choice every set can be well-ordered, and so it follows that we may define the cardinal number of A as c(A) = inf({a ∈ ON: there is a bijection between A and a}), which is well-defined, etc, etc. The important point here is that we can pick representatives of each equivalence class [A] which are ordinals. Moreover, we can order these representatives using ON. Doing this, we can define the sequence of infinite cardinals ω(a), with a ∈ ON, by setting ω(0) = ω, and for a > 0, ω(a) = inf({ k ∈ ON: k = c(k), and ω(b) < k for every b < a }). Moreover, this sequence completely enumerates every infinite cardinal number!

This brings us to: Theorem(AC): For every infinite set A, there exists an a ∈ ON: ω(a) = c(A). Proof: an exercise.

Now, you can define arithmetic with ordinals and cardinals. In addition you can define cardinal and ordinal exponentiation, and so forth. Defining these operations in the natural manner, one can show that c(P(A)) = 2^(c(A)), and that all the regular laws of arithmetic hold. Moreover, using AC, we can show that for any infinite cardinals k and g, and any cardinal h ≥ 2, we have

k + g = max{ k, g } = k * g, and h^(k + g) = (h^k) * (h^g) = (h^(k*g)) = (h^k)^g.
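
As a worked instance of these laws (a quick sanity check, written in LaTeX notation, taking k = g = ω and h = 2):

    \omega^{\omega} \le (2^{\omega})^{\omega} = 2^{\omega \cdot \omega} = 2^{\omega},
    \qquad
    2^{\omega} \cdot 2^{\omega} = 2^{\omega + \omega} = 2^{\omega}.

Since also 2^ω ≤ ω^ω, the first line pins the number of functions ω -> ω at exactly 2^ω.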


=> CH, GCH and Some Annoying Questions.

We will make use of the following two statements throughout the rest of this seminar.

(CH) is the statement that ω(1) = 2^ω = c(P(ω)).

(GCH) is the statement that for any a ∈ ON: ω(a+1) = 2^(ω(a)).

The previous definitions lead us to very natural questions like:

(1) If c(A) = ω = ω(0), then for which a ∈ ON is c(P(A)) = ω(a) = 2^(ω(0))?

Clearly, under the assumption of (CH), a is just 1. But what if (CH) fails? Moreover, does (CH) even follow from the other axioms?

(2) What about ω(a) for a ∈ ON? Can we decide which b ∈ ON satisfies ω(b) = c(P(ω(a))) = 2^(ω(a))?

Again, assuming (GCH), we get b=a+1. But does (GCH) follow from the other axioms, or is it even true?

The answer to questions (1) and (2) turns out to be: whatever you want, as long as it doesn't break ZFC, have fun.

To see how this is even possible we need:


Some Motivation (and Casual Sex.)


Traditional mathematical thinking cannot answer problems (1) and (2).

The problem is that we don't really have good intuition when it comes to infinite objects which aren't countable, so we have to rely on logic; but logic is finitary. So we get stuck in this terrible loop of building new finitary constructions, which we then use to try to settle questions about infinite objects, because finitary means are all we can reason with. All the while we never leave the countable universe, because quite frankly, using the standard tricks, we can't.

The first part of the challenge is that we need an object which allows us to "redefine logic and truth" in a consistent manner, while at the same time producing a method to construct new infinite objects we didn't know could exist.

We solve this problem by developing the idea of "Infinitary Combinatorics."

The second part of the challenge is getting out of our very comfortable countable universe. Or at least, figuring out how to model any given part of our set-theoretic universe using a countable model. Then, using the things we devised while developing "Infinitary Combinatorics", we extend this model so that it contains the new infinite object we constructed, and relate that back to our own universe using the consistency part of the construction.

We solve this second problem with a lot of hand waving, and by developing the method of "Forcing."


Infinitary Combinatorics



=> Partially ordered sets.

A partial order is a pair (P,<), where P is a set of conditions ordered by <. Moreover, the order relation < must be transitive, anti-symmetric, and reflexive. For a PO-set (P,<), we say that two elements p,q are compatible iff there exists an r ∈ P with r < p and r < q. When p and q are not compatible, they are incompatible, in which case we write p ⊥ q. In addition, when p < q, we say that p extends q.

Some simple examples of partial orders are: (P(X), ⊆) with X any set; (R, ≤); the open subsets of a topological space ordered by ⊆.

One particular example of a partial order is very important, so much so that it has its own theory and definition.

The Cohen PO-set (also known as the finite partial functions PO-set) is defined for any two sets I and J, and denoted by Fn(I,J). It is the set Fn(I,J) = { p ⊆ I⨯J: p is a function with dom(p) ⊆ I, ran(p) ⊆ J, and c(p) < ω }, ordered by reverse inclusion: for every p,q ∈ Fn(I,J), p < q iff p ⊇ q.
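
A condition in Fn(I,J) is nothing more than a finite lookup table. Here is a minimal Python sketch (dicts standing in for finite partial functions; the names are my own) of the order and of compatibility:

    # Conditions in Fn(I, J) as Python dicts (finite partial functions).
    def extends(p, q):
        """p < q iff p ⊇ q: p carries at least all of q's information."""
        return all(x in p and p[x] == y for x, y in q.items())

    def compatible(p, q):
        """p and q are compatible iff p ∪ q is still a function."""
        return all(p[x] == q[x] for x in p.keys() & q.keys())

    p, q, r = {0: 5, 1: 7}, {0: 5}, {0: 6}
    assert extends(p, q)         # p extends q
    assert compatible(p, q)      # common extension: p itself
    assert not compatible(q, r)  # q ⊥ r: they disagree at 0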

Getting back to PO-sets in general: an anti-chain A ⊆ P is a collection of conditions which are pairwise incompatible, that is to say: for any p,q ∈ A, p ≠ q implies p ⊥ q. When every anti-chain contained in a given PO-set (P,<) is at most countable, we say that (P,<) has the Countable Chain Condition (ccc). (The name is historical, coming from topology; despite the word "chain", the condition is about anti-chains.)

As an example, (P(ω), ⊆) does not have the ccc (can you see why?), while (Fn(I,ω), <) does.

To see that for any non-empty set I, (Fn(I,ω),<) has the ccc, suppose that we had an anti-chain A of size ω(1). Then by AC, we may write A as A = {a(x): x < ω(1)}. Since c(dom(a(x))) < ω for all x < ω(1), there is a finite subset R of I, and an uncountable subset K of ω(1), such that for every u,v ∈ K, dom(a(u)) ⋂ dom(a(v)) = R. (This follows from the ∆-system lemma, the proof of which I decided to omit.) Now, R is finite, so the collection of all functions f: R -> ω is countable, and it follows that for each u ∈ K, there are only countably many possibilities for a(u) restricted to R. Thus, there is an uncountable subset of K on which the restrictions of the a(u) to R are all the same. But any two such conditions agree on the only part of their domains they share, so they are compatible, a contradiction. (As a side note, this exact same proof works when you replace ω with any countable set J.)

Now, a subset D ⊆ P is called dense, when every p ∈ P, has an extension d ∈ D. That is to say: for every p ∈ P, there exists some d ∈ D: d < p.

In keeping with our use of Fn(I,ω) for examples: let x ∈ I, and define D(x) = { p ∈ Fn(I,ω): x ∈ dom(p) }. Then D(x) is dense in Fn(I,ω).

To see this, suppose p ∈ Fn(I,ω). Then either x ∈ dom(p) or x ∉ dom(p). If x ∈ dom(p), simply take d = p. However, if x ∉ dom(p), then pick some y ∈ ω and let d = p ⋃ { (x,y) }. In either case d < p, and d ∈ D(x).
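
The same two-case argument, in code (continuing the dict representation from above; the choice y = 0 is arbitrary):

    # D(x) = { p : x ∈ dom(p) } is dense: every p has an extension d ∈ D(x).
    def extend_into_Dx(p, x):
        if x in p:           # case 1: already in D(x), take d = p
            return dict(p)
        d = dict(p)          # case 2: add a value at x, say y = 0
        d[x] = 0
        return d             # d < p and d ∈ D(x)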


After all of the above, some of you might be curious as to "Why are we so interested in partial orders?"

Well it turns out they are great for building "things". The conditions which make up a partial order act like segments or pieces of something we want to construct. For example Fn(I,J) consists of finite parts of functions mapping I->J. So if we wanted to pull a new function from I->J out of thin air, we would start by using the building blocks in Fn(I,J).

But how do we assemble the new function? Moreover, how do we make sure that the parts fit together, and will actually give us a function?

The first question we will answer in a moment. The second is easy: we make sure they are ordered according to the ordering relation we defined.

In the Fn(I,ω) example, if p extends q then p ⊇ q. So at every element of I where q is defined, p is also defined, and takes the same value q did.

This is no accident: Fn(I,J) was constructed with this in mind. The same goes for any other forcing PO-set you may encounter (that's right, Fn(I,J) is used for forcing). (In fact Fn(I,J) was the very first PO-set used for forcing; Cohen used it to break (CH) and (GCH), as we will do today.)

But that still doesn't answer how we assemble the function we want to make, hell, or even make sure the parts do what we want! What we need is a way to select objects from P in a consistent manner.

Enter:



=> Filters (On a PO-Set)

Given a PO-set (P,<). We say that a subset F ⊆ P, is a filter iff

1) For every p,q ∈ F there is some r ∈ F ( r < p and r < q )
2) For every p ∈ F, and q ∈ P ( p < q implies q ∈ F )

What does this mean? (1) says that every pair of elements in F is compatible, and compatibly so within F; while (2) says that F is closed upwards: if some p ∈ F extends q, then q must be in F as well.

While this looks kinda strange, think about the order relation as material implication. Under this interpretation, a filter is just a consistent way of selecting the consequences of some condition (hence the name condition) in P.
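
On a finite PO-set you can simply check the two conditions by brute force. A minimal sketch (assuming conditions are hashable, and lt(r, p) means "r extends p", i.e. r < p):

    # Is F ⊆ P a filter? Check directedness (1) and upward closure (2).
    def is_filter(F, P, lt):
        directed = all(any(lt(r, p) and lt(r, q) for r in F)
                       for p in F for q in F)               # condition (1)
        upward = all(q in F for p in F for q in P
                     if lt(p, q))                           # condition (2)
        return directed and upward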

But the question still remains, how do we control this selection process?

Answer: We use dense subsets of P.

Given a collection D, of dense subsets of P, we say that a filter F ⊆ P is D-generic iff for every V ∈ D ( V ⋂ F ≠ {} ).

It's not just any filter we care about: we care about D-generic filters, for some collection of dense subsets D.

For an example of how the dense subsets we pick, control the object we want to construct, we will return to Fn(I,J).

Let D = { D(x) ⊆ Fn(I,J): x ∈ I }, and let F be D-generic in Fn(I,J). Now, define f = ⋃ F (that is, f is the union of all the parts that F selected). Then:

1) f is a function
2) dom(f) = I

To see that f defines a function, suppose that it didn't. Then there would be some pair y1 ≠ y2 in J, and x ∈ I, such that {(x,y1), (x,y2)} ⊆ f. This implies that for some p,q ∈ F, p < {(x,y1)} and q < {(x,y2)}. The only problem with this is that clearly {(x,y1)} ⊥ {(x,y2)}, and so they have no common extension. This implies that p and q have no common extension, and so F isn't a filter, a contradiction.

Next, to see why dom(f) = I: all we need to do is remember that F was D-generic, and so for every x ∈ I ( D(x) ⋂ F ≠ {} ). This implies that for every x ∈ I, there is some condition p ∈ F with x ∈ dom(p). The fact that p ∈ D(x) ⋂ F _forces_ x ∈ dom(p), which in turn _forces_ x ∈ dom(f).
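
In the dict representation, f = ⋃ F is literally merging the lookup tables, and condition (1) on filters is exactly what guarantees the merge never assigns two values to one point. A sketch (my own names again):

    # Assemble f = ⋃F from a filter F of Fn(I,J)-conditions (dicts).
    def union_of_filter(F):
        f = {}
        for p in F:
            for x, y in p.items():
                assert f.get(x, y) == y  # compatibility: never two values at x
                f[x] = y
        return f  # if F met every D(x), then dom(f) = I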

The lesson here is that by carefully selecting/constructing a PO-set with the right dense subsets, you can in theory construct a _HUGE_ variety of different objects.

The only problem is: How in the hell do we know there is a D-generic filter?



=> Martin's Axiom (almost what makes forcing work)

Suppose α is an infinite cardinal. Then MA(α) is the statement:

Given any ccc partial order P, any p ∈ P, and any collection D of α many dense subsets of P, there exists a D-generic filter G with p ∈ G.

Having defined MA(α), let's see if we can prove MA(α) for some carefully selected α.

Theorem (ZFC):
1) MA(ω) is true.
2) MA(2^ω) is not.

Proof:
1) Suppose that (P,<) is any partial order (ccc or not), with p ∈ P, and D = {d(k): k ∈ ω} a countable collection of dense subsets of P. By density, there is some p(0) ∈ d(0) such that p(0) < p. Continuing this process, given p(n), there is some p(n+1) ∈ d(n+1) such that p(n+1) < p(n). Now let G be the subset of P, G = { q ∈ P: ∃n ∈ ω ( p(n) < q ) or p < q }. Then G is a D-generic filter with p ∈ G (clearly {p} ⋃ {p(n): n ∈ ω} ⊂ G, since < is reflexive).
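
The proof of (1) is an honest algorithm, so it is worth seeing it run. A minimal sketch, where each dense set d(n) is handed to us as a function that extends any given condition into d(n) (all names here are my own):

    # MA(ω) as code: thread a descending chain through countably many dense sets.
    def generic_chain(p, dense_extenders):
        """dense_extenders[n](q) returns some element of d(n) extending q."""
        chain = [p]
        for ext in dense_extenders:   # in ZFC this recursion runs over all n ∈ ω
            chain.append(ext(chain[-1]))
        return chain                  # G = everything extended by some chain element

    # Example: in Fn(ω, 2), meet D(0), D(1), D(2) starting from the empty condition.
    dense = [lambda q, n=n: ({**q, n: 0} if n not in q else dict(q)) for n in range(3)]
    print(generic_chain({}, dense))   # [{}, {0: 0}, {0: 0, 1: 0}, {0: 0, 1: 0, 2: 0}]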

2) (This is the one you need to Pay Attention To!!!!)
(It hints at how we can break things, and punch a hole in our countable universe)

By way of contradiction: Assume MA(2^ω)
Let S = { f ⊆ ω ⨯ 2: f is a function from ω -> 2 }. Then, clearly c(S) = 2^ω, so we may enumerate S as S = { h(α): α < 2^ω }.
Now, take P = Fn(ω,2). Then, for each α < 2^ω, and n < ω define the subsets:

D(n) = { p ∈ P: n ∈ dom(p) }
E(α) = { p ∈ P: ∃m ∈ dom(p) ( h(α)(m) ≠ p(m) ) }.

It follows that both D(n) and E(α) are dense, for every α < 2^ω and n < ω. (For E(α): given any p, pick some m ∉ dom(p), and extend p at m so that it disagrees with h(α)(m).) Now, the conditions in E(α) _force_ any function we might construct using something from E(α) to be different from the function h(α). Now, define the collection of dense subsets: D = { D(n): n ∈ ω } ⋃ { E(α): α < 2^ω }. Then, D is a collection of 2^ω many dense subsets of P. So let p0 ∈ P.

It follows that, by MA(2^ω), there must be some D-generic filter G ⊆ P, with p0 ∈ G.

Moreover, g = ⋃ G will be a function g: ω -> 2 (genericity for the D(n)'s gives dom(g) = ω, exactly as before). And so it must be h(α) for some α < 2^ω.

But, E(α)⋂G ≠ {} implies that g and h(α) differ at some point.

(A Contradiction).
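
Shrunk down to a countable (here even finite) list, the E(α)'s are doing nothing but Cantor's diagonal argument. A sketch, with the list of functions h(α) given as Python callables:

    # Meet E(0), ..., E(k-1): build a condition disagreeing with every listed h(α).
    def diagonalize(h_list):
        p = {}
        for h in h_list:                # enter E(α): disagree with h(α) somewhere
            m = max(p, default=-1) + 1  # a fresh point m ∉ dom(p)
            p[m] = 1 - h(m)             # force p(m) ≠ h(α)(m)
        return p                        # any total g ⊇ p is off the list

    g0 = diagonalize([lambda n: 0, lambda n: n % 2, lambda n: 1])
    print(g0)                           # {0: 1, 1: 0, 2: 0}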

(The more observant of you might have figured out what just happened, and why it's so important for what we want to do.)

I'm going to pause and let you ponder this point for a moment.

The Big Picture of What Just Happened:

GIVEN A LIST OF SUBSETS OF ω, WE JUST MADE A NEW SUBSET OF ω, NOT ON THAT LIST!

Moreover, we even have MA(ω) as a theorem of ZFC!!!!!!

The only problem is we got a contradiction. :( So if we are going to make this work in our favor, we need to figure out a way to do the same thing using only MA(ω) or some variant.

Before we go any further, we need to briefly talk about truth and consistency.



=> Boolean Algebras and Relative Truth

This is just going to be a quick hand-waving overview of the ideas, because if we focused too much on Boolean algebras and their relation to truth and logic, we would eat up all the remaining time. So here are the key points:

1) Every partial order can be densely embedded into some Boolean algebra.

2) Any ultra-filter over a Boolean algebra is a consistent truth assignment.

3) Given any generic filter which hits every dense subset of a PO-set, we may extend this filter to an ultra-filter in the embedding Boolean algebra.

Such an assignment gives us a witness to the consistency of the existence of any new object we decide to construct using that particular PO-set. The reason is that we can map the language of set theory into any Boolean algebra, so that, in a sense, the ultra-filter we induce from a generic filter witnesses that what we did didn't result in a violation of the language, because we can still consistently assign true and false to statements of set theory. In short, the fact that the ultra-filter exists shows that (provided ZFC is consistent) everything we have done is consistent.

In addition, the above argument doesn't break ZFC, because it must take place outside of ZFC. The reason for this is that inside of ZFC you cannot quantify over formulas in the language of set theory. However, it does make any proof of consistency we produce a relative consistency proof, predicated on the assumption that ZFC was consistent to begin with.

Now that we know anything we make with MA(ω) will not break ZFC, we can finally get to:



Forcing
------------------------------------------------------------

In all honesty, the hard part is done. We just have to figure out how to do everything using only MA(ω).

Every forcing argument comes from using some PO-set and applying the forcing lemma (essentially MA(ω)). The main difference between forcing and what we were just doing is that forcing takes place simultaneously inside and outside some countable model.

(Considering that we are probably running out of time, I'm going to skip some of the proofs, and just give results until we get to the big ones)



=> Some Model Theory

What we need now is a model of ZFC (or enough of it) to perform the construction from the disproof of MA(2^ω). Thankfully, we have the Reflection Principle and the Löwenheim–Skolem theorem.


Reflection Principle: Given any suitable finite list of formulas S from the language of set theory, and any infinite cardinal k, there is a set H with c(H) > k such that H models S.

Löwenheim–Skolem theorem (Roughly): Given any countable first-order language L, and any infinite cardinal k: if a theory in L has an infinite model, then it has a model of cardinality k.

Using these two theorems we can do some gnarly things.

Firstly, we can go back over everything we have discussed throughout the entirety of this seminar with a fine-toothed comb, pick out every single instance of any of the axioms of ZFC we needed to get to this point (barring the meta-theorems and non-ZFC stuff), and put all of these into a set S(0).

Then, the set S(0) will be finite.

It follows, by the Reflection Principle, that there is an infinite set K which models all of S(0) (and the consequences thereof).

Moreover, by the Löwenheim–Skolem theorem, there is a model of S(0) (call it M) which is countable!



=> It's Lonely in Flatland.

Now things start to get kinda strange. What we have to do is this:

Working outside of our countable model M, we have to show that a person living totally inside our model M could formalize everything we need in order to get to the point where our attempted proof of MA(2^ω) broke down (that is to say, the point where it was not ok for _us_ to make new subsets of ω).

Once we get to that point, we are home free: because M is countable, any list of subsets of ω which is complete inside of M cannot, and will never be, a complete list of the subsets of ω. Therefore it is perfectly ok for us to add the new subset the people in M tried to create.


Conceptually, our end goal is this: Given a countable transitive model M of ZFC, and some PO-set P contained in M, we can produce a generic filter G which hits every dense subset of P contained in M. (We can do this outside of M, because to us M is countable, and so we can take our PO-set P and apply MA(ω).) Once we (being outside of M) have our generic filter G, we can construct the minimal model M[G] which contains M and G, and still satisfies everything M did.

But how do we communicate with people inside M? Moreover, how do we translate information about the PO-set into M?

Well, the simplest way is to define some relation, and show that it holds outside of M iff it holds inside of M.

To this end we define:



=> The Forcing Relations: ⊩ and ⊩*

Intuitively the conditions in our PO-set tell us something about an object we are trying to build from a generic filter. Moreover, any property we actually care about can be expressed in the language of set-theory as the universal closure of some formula Q.

Here comes the hand-waving part. Using induction on the complexity of a given formula Q, we can define ⊩* inside of M. (If you want to see the definition, pick up any modern textbook on forcing or advanced set theory.) Roughly speaking, the definition works as follows:

For any p ∈ P, and Q in the language of set theory, we define p ⊩* Q to be true iff "if we could have some P-generic filter G inside of M, with p ∈ G, then M[G] satisfies Q."

Moreover, we can define the relation ⊩ outside of M, where we can actually consider all the P-generic filters: (p ⊩ Q) iff for every P-generic G with p ∈ G, M[G] satisfies Q.

(I am ignoring a shit load of technical details here, but all you need to know is that we can define both ⊩ and ⊩*, and show that ⊩* relativized to M is equivalent to ⊩.)

Theorem: For every p ∈ P, and every Q which is the universal closure of some formula in the language of set theory: [ M satisfies ( p ⊩* Q ) ] iff ( p ⊩ Q ).

Proof: Omitted.



The Reason Cohen Got a Fields Medal
------------------------------------------------------------

With all of that out of the way, we can now actually prove that (CH) is independent of ZFC.

So here it goes: assuming that ZFC is consistent, we will show that given any list of subsets of ω which is of size ω(1), we can build a subset of ω not on that list. This establishes that ω(1) < 2^ω is consistent with ZFC; adding to that the fact that ω(1) = 2^ω is also consistent, we get that (CH) is independent.

Proof:

Assume that ZFC is consistent; then every finite collection of its axioms has a model.

Note: we will use quotes to distinguish what is happening inside of M.

So let R be any finite collection of axioms of ZFC, and let T be the union of R and S(0) (remember S(0)? we defined it a while back). Then, as before, T has a countable model M. Adding another finite list of axioms to T, we can ensure that inside of M: "there is a list of subsets of ω, call this list X = {X(α): α ∈ ω(1)}, which has size ω(1)."

Moreover, Fn(ω,2) ∈ M, and "for every α ∈ ω(1), E(α) ⊆ Fn(ω,2), and is dense." It follows that E(α) ∈ M.

Now, M is countable, and so any list which it contains is countable. Moreover, inside of M, "Fn(ω,2) has ccc."

We may list all of the dense subsets of Fn(ω,2); call this collection D. Moving forward, D ⋂ M = D' is countable (since M is countable), so using MA(ω) we can construct a non-empty D'-generic filter G.

Therefore, letting g = ⋃ G, inside of M[G] there is a subset of ω not listed in X (G meets every E(α), so g differs from each X(α)). Since we can do this for any potential list of subsets of ω, it follows that it is consistent that ω(1) < 2^(ω).


Exercise: Generalize the above construction and show that (GCH) is independent.
