We’ve seen how limits are formed, and that they exist iff products and equalisers do. Now we move on to continuous functors and colimits, pages 105 through 114 of Awodey.
The definition of a continuous functor is obvious in hindsight given the real-valued version: it “preserves all limits”, where “preserves a particular limit” means the obvious thing: limiting cones over diagrams of the given shape remain limiting cones when the functor is applied.
The example is the representable functor $\mathrm{Hom}(A, -)$, taking any arrow $f : X \to Y$ in a category $\mathbf{C}$ to its corresponding “apply me on the left!” arrow $\mathrm{Hom}(A, f) : \mathrm{Hom}(A, X) \to \mathrm{Hom}(A, Y)$ in Sets, given by $g \mapsto f \circ g$. That is basically the relevant commutative triangle in $\mathbf{C}$. I hope the following proof will help me understand the representable functors more clearly.
Representable functors preserve all limits: we need to preserve all products and all equalisers. Awodey shows the empty product first, which is clear: the terminal object goes to the terminal object. Then an arbitrary product $\prod_i X_i$ gets sent to $\mathrm{Hom}(A, \prod_i X_i)$, which is itself a product because $\mathrm{Hom}(A, \prod_i X_i)$ corresponds exactly with $\prod_i \mathrm{Hom}(A, X_i)$. (Indeed, composing with the projections $\pi_i$ gives maps $\mathrm{Hom}(A, \prod_i X_i) \to \mathrm{Hom}(A, X_i)$; conversely, the UMP of the product gives a unique $f : A \to \prod_i X_i$ for any collection $f_i : A \to X_i$.)
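On finite sets the product case can be checked by brute force. This is my own sketch, not Awodey’s; `hom` (a name of my choosing) enumerates finite hom-sets as dicts, and post-composing with the two projections turns a map into the product into a pair of maps:

```python
from itertools import product

def hom(A, B):
    """All functions from the finite set A to the finite set B, as dicts."""
    A = sorted(A)
    return [dict(zip(A, vals)) for vals in product(sorted(B), repeat=len(A))]

A = [0, 1]
X = ["x0", "x1"]
Y = ["y0", "y1"]
XY = list(product(X, Y))  # the product X x Y in Sets, elements are pairs

def to_pair(f):
    """Post-compose f : A -> X x Y with the two projections fst, snd."""
    return ({a: f[a][0] for a in f}, {a: f[a][1] for a in f})

# Hom(A, X x Y) should correspond exactly with Hom(A, X) x Hom(A, Y):
lhs = [to_pair(f) for f in hom(A, XY)]
rhs = [(g, h) for g in hom(A, X) for h in hom(A, Y)]
```

Both lists have $4^2 = 16$ entries and `to_pair` hits each pair exactly once, which is the bijection the UMP promises.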
This has given me the intuition that “the representable functor preserves all the structure” in the sense that the diagrams will look the same before and after having done the functor.
Equalisers are the other thing to show, and that falls out of the definition in a completely impenetrable way. I can’t distill that into “the representable functor preserves all the structure” so easily.
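Writing out the equaliser case on finite sets helped me a little. The claim unwinds to: maps $A \to E$ correspond exactly to maps $A \to X$ on which $f$ and $g$ agree after composition. A sketch of my own (the helper names are mine, not the book’s):

```python
from itertools import product

def hom(A, B):
    """All functions from the finite set A to the finite set B, as dicts."""
    A = sorted(A)
    return [dict(zip(A, vals)) for vals in product(sorted(B), repeat=len(A))]

def compose(g, f):
    """g after f, both given as dicts."""
    return {a: g[f[a]] for a in f}

X, Y = [0, 1, 2], ["p", "q"]
f = {0: "p", 1: "p", 2: "q"}
g = {0: "p", 1: "q", 2: "q"}

# The equaliser of f and g in Sets: the subset on which they agree.
E = [x for x in X if f[x] == g[x]]  # here [0, 2]
e = {x: x for x in E}               # the inclusion E -> X

A = [0, 1]
# Hom(A, e) maps Hom(A, E) onto exactly the equaliser of Hom(A, f), Hom(A, g):
image = [compose(e, h) for h in hom(A, E)]
agree = [h for h in hom(A, X) if compose(f, h) == compose(g, h)]
```

So the equaliser in Sets of the two post-composition maps is literally the image of $\mathrm{Hom}(A, E)$, which is the “structure looks the same afterwards” intuition again.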
Then the definition of a contravariant functor. I’ve heard the terms “covariant” and “contravariant” before, several times, when people talk about tensors and general relativity and electromagnetism, but I could never understand what was meant by them. This definition is clearer: a functor which reverses the direction of arrows, sending $f : A \to B$ to an arrow $F(B) \to F(A)$. Operations like “take the preimage” would be contravariant, for instance.
The representable functor $\mathrm{Hom}(-, B)$ is certainly contravariant, taking $A$ to $\mathrm{Hom}(A, B)$ and an arrow $f : A' \to A$ to $\mathrm{Hom}(f, B) : \mathrm{Hom}(A, B) \to \mathrm{Hom}(A', B)$ by $g \mapsto g \circ f$. The contravariant functor reverses the order of arrows in its argument; it takes arrows to co-arrows, so it should take colimits to co-colimits, or limits. I need to keep in mind this example, to avoid the intuition that “functors take things to things and cothings to cothings”: if the functor is contravariant, it flips the co-ness of its input.
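The arrow-reversal shows up concretely as composition flipping order. A tiny check using preimages (my own example, with dicts standing in for functions between finite sets):

```python
def preimage(f, S):
    """The contravariant powerset functor 2^(-) on an arrow: S |-> f^{-1}(S)."""
    return frozenset(a for a in f if f[a] in S)

# f : A -> B and g : B -> C, as dicts.
f = {1: "x", 2: "x", 3: "y"}
g = {"x": True, "y": False}
gf = {a: g[f[a]] for a in f}  # the composite g o f : A -> C

S = frozenset({True})
# Contravariance flips composition: 2^(g o f) = 2^f o 2^g.
lhs = preimage(gf, S)
rhs = preimage(f, preimage(g, S))
```

Both sides come out to $\{1, 2\}$: applying the functor to a composite means applying the pieces in the opposite order.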
Example: a coproduct is a colimit, so a contravariant functor should take the coproduct to a product. That might be why we had $2^{A+B} \cong 2^A \times 2^B$ as Boolean algebras: the powerset functor might be contravariant. What does it do to the injection $A \to A+B$? Recall that an arrow in the category of Boolean algebras (interpreted as posets) is an order-preserving map. Huh, not contravariant after all: the functor seems covariant to me. There must be some other reason; it turns out that I’m mixing up two different functors, one of which is covariant and takes sets to sets, and one of which is contravariant and takes sets to Boolean algebras.
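The coproduct-to-product behaviour itself is easy to see on finite sets: a subset of a disjoint union $A + B$ is exactly a pair of subsets, one of $A$ and one of $B$. A sketch of my own, tagging elements to form the coproduct:

```python
from itertools import chain, combinations

def powerset(xs):
    """All subsets of a finite set, as frozensets."""
    xs = list(xs)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(xs, r) for r in range(len(xs) + 1))]

A, B = ["a1", "a2"], ["b1"]
AplusB = [(0, a) for a in A] + [(1, b) for b in B]  # tagged disjoint union

def split(S):
    """Send a subset of A + B to the pair of its two restrictions."""
    return (frozenset(a for (i, a) in S if i == 0),
            frozenset(b for (i, b) in S if i == 1))

pairs = [split(S) for S in powerset(AplusB)]
```

`split` is a bijection from the $2^3 = 8$ subsets of $A + B$ onto the $4 \times 2$ pairs, and the Boolean operations match up componentwise, which is the algebra iso.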
“The ultrafilters in a coproduct of Boolean algebras correspond to pairs of ultrafilters”: recall that the functor $\mathrm{Ult}$ takes a Boolean algebra to its set of ultrafilters, an ultrafilter corresponding to the indicator homomorphism into $\mathbf{2}$ picking out whether a given element is in the filter, and takes an arrow $h : B \to B'$ to the map $U \mapsto h^{-1}(U)$; that is, $\mathrm{Ult} \cong \mathrm{Hom}_{\mathbf{BA}}(-, \mathbf{2})$, and so it is representable. (I barely remember this. I think I deferred properly thinking about representable functors until Awodey covered them properly.) At least once we’ve proved that, we do get “ultrafilters in the coproduct correspond to pairs of ultrafilters”, by the iso in the previous paragraph.
The exponent law is much easier - it follows immediately from the same iso.
(Oh, by the way, we have that limits are unique up to unique isomorphism, because they may be formed from products and equalisers which are themselves unique up to unique isomorphism.)
Next section: colimits. The construction of the co-pullback (that is, pushout) is dual to that of the pullback: take the coproduct and then coequalise across the two sides of the square. So the coproduct of two rooted posets would be a pushout of the two “pick out the root” functions: let $1$ be the one-element poset, and $P$, $Q$ be rooted posets with roots $p$, $q$. Then the pushout of $1 \to P$ (picking out $p$) and $1 \to Q$ (picking out $q$) is just the coproduct of the two rooted posets.
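The “coproduct then coequalise” recipe can be carried out directly on underlying sets. A sketch of my own (the union-find bookkeeping is mine, not the book’s), gluing the roots of two rooted sets:

```python
def pushout(X, Y, Z, f, g):
    """Pushout in Sets of f : Z -> X and g : Z -> Y: form the tagged
    coproduct X + Y, then coequalise by identifying (0, f(z)) with (1, g(z))
    for every z.  Returns the resulting equivalence classes."""
    elems = [(0, x) for x in X] + [(1, y) for y in Y]
    parent = {e: e for e in elems}  # union-find forest for the identification

    def find(e):
        while parent[e] != e:
            e = parent[e]
        return e

    for z in Z:
        parent[find((0, f[z]))] = find((1, g[z]))

    classes = {}
    for e in elems:
        classes.setdefault(find(e), []).append(e)
    return list(classes.values())

# Glue two "rooted" sets along their roots: Z = {*} picks out p in X, q in Y.
X, Y, Z = ["p", "x1", "x2"], ["q", "y1"], ["*"]
blocks = pushout(X, Y, Z, {"*": "p"}, {"*": "q"})
```

Out of the five coproduct elements we get four classes, with the two roots fused into one: exactly the coproduct of the rooted sets.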
Ugh, a geometrical example next. Actually, this is fairly neat: the coproduct of two discs, but where we view two boundary points as being the same if they are both images of the same point under the inclusion of the boundary circle. That’s just two discs glued together along the boundary circle, which is topologically the same as a sphere. In the next lower dimension, we want to take two intervals, glued together at their endpoints, making a circle.
Then the definition of a colimit, which is the obvious dual to that of a limit. I skip through to the “direct limit” idea, where the colimit is taken over a linearly ordered indexing category. I can immediately see that this might be associated with the idea of a limit in analysis, but I’ll save that until after the worked example, which is the direct limit of groups.
The colimit setup is all pretty obvious in retrospect, but I didn’t try and come up with it myself. (The exercises will show whether it really is obvious!) The colimiting object does exist because coproducts and coequalisers do, and we can construct it as the coproduct followed by a certain coequaliser - namely, the one where “following a path through the sequence, then going out to the colimit, is the same as just going straight to the colimit”. That is, $u_j \circ f_{ij} = u_i$, where the $u_i$ are the maps into the colimit and the $f_{ij} : G_i \to G_j$ are the maps along the sequence. The equivalence relation whose quotient we take, is therefore: if $x \in G_i$ and $y \in G_j$, then $x \sim y$ iff there is some $k \geq i, j$ such that if we follow along the homomorphisms starting from $x$ and $y$, we eventually hit a common element: $f_{ik}(x) = f_{jk}(y)$. (Indeed, any pair with this property must be identified, or the condition $u_j \circ f_{ij} = u_i$ would fail.) I think I’ve got that.
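The “eventually hit a common element” relation can be made concrete on a toy chain. My own sketch, not the book’s: the sequence $\mathbb{Z}/2 \to \mathbb{Z}/4 \to \mathbb{Z}/8 \to \dots$ with injective homomorphisms $x \mapsto 2x$, whose colimit is the Prüfer 2-group. An element of the colimit is a pair (level $n$, value mod $2^n$):

```python
def push(elem, k):
    """Follow the chain maps x |-> 2x from level n up to level k >= n."""
    n, x = elem
    return (k, (x * 2 ** (k - n)) % 2 ** k)

def equivalent(e1, e2):
    """The colimit's equivalence: the representatives eventually coincide.
    Since the maps are injective here, it is enough to compare at the
    first common level."""
    k = max(e1[0], e2[0])
    return push(e1, k) == push(e2, k)

def add(e1, e2):
    """The group operation: push both to a common level, add there.
    Well-defined because the chain maps are homomorphisms."""
    k = max(e1[0], e2[0])
    return (k, (push(e1, k)[1] + push(e2, k)[1]) % 2 ** k)
```

So $1 \in \mathbb{Z}/2$ and $2 \in \mathbb{Z}/4$ name the same colimit element, and $1 + 3 = 0$ in $\mathbb{Z}/4$ lands on the identity, just as the quotient description says.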
The operations are the obvious ones, and we’ve made a kind of “infinite union” of these groups, where the maps are the “inclusions”. Universality is inherited from Sets, so as long as the limiting structure obeys the group axioms, we have indeed ended up with a colimit.
What does it mean, then, for a functor $F : \mathbf{C} \to \mathbf{D}$ to “create limits of type $J$”? For each diagram $D$ in $\mathbf{C}$ of type $J$, and each limit of the diagram $FD$ in $\mathbf{D}$, there is a unique cone in $\mathbf{C}$ which is sent to that limit by $F$, and moreover that cone is itself a limit.
In the example above, $F$ is the forgetful functor from Groups to Sets, and $J$ is the ordinal category $\omega$. For each diagram in Sets of type $\omega$, the colimit of the diagram is given by taking the coproduct of all the $A_i$, and identifying $x$ with $f_{ij}(x)$ (where $f_{ij}$ is the arrow in Sets corresponding to the arrow in $\omega$ from $i$ to $j$). Then we can pull this back through the forgetful functor to obtain a corresponding cocone in Groups, and we can check that it’s still a colimit. That is, the forgetful functor creates $\omega$-colimits.
Why does it create all limits? Take a diagram in Groups and a limit of its image in Sets. Then we need a unique Groups-cone which is a limit for the original diagram. The Set-limit can be assigned a group structure, apparently - presumably componentwise, since it sits inside the product of the underlying sets. It’s obvious how to do that in the case that the indexing category was an ordinal - it’s the same as we saw above - but in general…
I’ll leave that for the moment, because I want to get on to adjoints sooner rather than later (they’re apparently treated very early in the Part III course).
The idea behind the cumulative hierarchy construction is clear in the light of the example above, and this makes it immediately obvious that each $V_\alpha$ is transitive. The construction of the colimit is the obvious one (although I keep having to convince myself that it is indeed a colimit, rather than a limit).
What does it mean to have all colimits of type $\omega$? A diagram of $\omega$-shape is an $\omega$-chain $x_0 \leq x_1 \leq \dots$. A colimit $c$ of that chain would compare bigger than all the elements of the chain (that’s “there is an arrow $x_i \to c$” - that is, “it is a cocone”), and would have the property that if $x_i \leq d$ for all $i$ then $c \leq d$ (that’s “the colimit has a map into all other cocones”). The colimit is a “least upper bound” for the specified chain. A monotone map is called continuous if it preserves this kind of least upper bound.
Then we have a restated version of the theorem that “an order-preserving map on a complete poset has a fixed point”, which I remember from Part II Logic and Sets. The proof here is very different, though. I follow it through, doing pretty natural things, until “The last step follows because the first term of the sequence is trivial”. Does it actually make a difference? If we remove the first element of the chain, I think it couldn’t possibly alter anything in this case, even if the first element were not trivial, because dropping the first term of an $\omega$-chain never changes its least upper bound.
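The fixed-point machinery is easy to watch in miniature. This is not Awodey’s proof, just my own toy: on a finite powerset lattice every monotone map is continuous, and the chain $\bot \leq f(\bot) \leq f^2(\bot) \leq \dots$ stabilises after finitely many steps, so its least upper bound - the least fixed point - is reached by plain iteration:

```python
def least_fixed_point(f, bottom=frozenset()):
    """Iterate f from bottom; on a finite lattice with f monotone, the
    increasing chain bottom <= f(bottom) <= ... stabilises at the lfp."""
    x = bottom
    while f(x) != x:
        x = f(x)
    return x

# Example: the set of states reachable from 0 in a little graph is the
# least fixed point of "start at 0, then follow one more edge".
edges = {0: [1], 1: [2], 2: [2], 3: [4], 4: [3]}

def step(S):
    return frozenset({0}) | frozenset(t for s in S for t in edges[s])

lfp = least_fixed_point(step)
```

Here the chain is $\varnothing, \{0\}, \{0,1\}, \{0,1,2\}$ and then stops; the nodes 3 and 4 never appear because they are not reachable, which is the “least” in least fixed point.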
I was a little confused by the statement of the theorem. “Of course the map has a least fixed point, because the poset is well-ordered” was my thought, but obviously that’s nonsense because the poset is not well-ordered. So there is some work to do here, although it’s easy work.
The final example seems almost trivial when it’s spelled out, but I would never have come up with it myself. Basically it’s saying “you need to check that your proposed colimit object actually exists, and if it doesn’t, you might have to add things to your colimit until it starts existing”. I don’t know how common a problem this turns out to be in practice, but the dual says that we can’t assume naive limits exist either.
This was another rather difficult section. Fortunately the exercises come next, and that should help a lot. I’ve dropped behind a bit on my Anki deck, and need to Ankify the colimits section.