

This seminar by kommodore took place on 3rd August 2008 16:00 UTC, in #mathematics channel of irc.freenode.net.

The timestamps displayed in the IRC text are UTC+1.
Any explanation or errors regarding this seminar should be recorded in the discussion page.


Topic

Differential Geometry foundations for Riemannian geometry. Part of the series outlined at Introductory Riemannian Geometry

Seminar

16:54:14 Jafet: kommodore, what kind of prerequisites for today's audience
                do you have in mind?
16:56:48 kommodore: Differentiation in R^n, I'll try to start by defining
                    manifolds and move quickly to vector bundles

17:00:00 ChanServ changed the topic of #mathematics to: SEMINAR IN PROGRESS.
                  If you want to ask a question, say ! and wait to be called

17:00:08 kommodore: Shall I start?
17:00:38 kommodore: OK.  Today I am supposed to talk about some differential
                    geometry foundation to Riemannian geometry.  This is going
                    to be very boring for many, because it involves many
                    definitions and I probably don't have enough time to give
                    interesting examples.
17:01:03 kommodore: For those of you wanting interesting examples, maybe come
                    back next week....
17:01:10 kommodore: I assume everyone here knows what differentiation in R^n
                    means.
17:01:33 kommodore: So let's get started with some historical remarks and
                    motivations, as is the usual procedure for anyone giving a
                    seminar.
17:01:41 kommodore: Riemannian geometry started with Riemann (who else?) in his
                    1854 thesis, in which he generalised some of Gauss' ideas to
                    higher dimensions.  It received very little attention back
                    then, until Einstein came along and showed it is a "useful"
                    subject deserving attention.
17:02:32 kommodore: With Weyl formalising the notion of manifolds in circa 1912, people
                    started to generalise many theorems on surfaces to higher dimensional
                    manifolds.  Many classical theorems in Riemannian geometry typically
                    have two or more names attached, one for the discoverer in 2-D, and one
                    for n-D.
17:03:25 kommodore: To name two examples: Bonnet-Myers (if Ricci>k everywhere, then
                    diam<pi/sqrt(k)), Chern-Gauss-Bonnet (Euler characteristic=integral of
                    some expression of curvature).  Both of these I hope to cover later
                    when I get time....
17:03:59 kommodore: So let's start with the basic object of study --- manifolds.
17:04:25 kommodore: A (topological) manifold of dimension m is, roughly speaking, something
                    which looks locally like R^m.  The technical definition is: a Hausdorff,
                    paracompact space locally homeomorphic to R^m.
17:05:07 kommodore: For this seminar, I will additionally assume connectedness, unless
                    stated otherwise
17:05:47 kommodore: The Hausdorff and paracompact conditions are just technicalities that
                    rule out some exotic examples and give us a nice tool called the
                    partition of unity; the locally-homeomorphic-to-R^m condition is the
                    really important one here.
17:06:31 kommodore: We usually denote a manifold by an uppercase letter such as M, and
                    if we want to stress the dimension, we write it as M^m.
17:07:16 kommodore: To start doing calculus, we want a bit more smoothness, called the
                    differential structure.  So we define a smooth manifold to be a
                    manifold with the additional property that, for any two overlapping
                    local neighbourhoods phi: U->R^m, psi: V->R^m, the "transition function"
                    is the composition
17:07:55 kommodore: psi o phi^{-1}: phi(U\cap V) (in R^m) --> U\cap V --> psi(U\cap V) (in
                    R^m)
17:08:17 kommodore: As a map between open subsets of R^m, we want it to be infinitely
                    differentiable.
17:08:54 kommodore: Here is a good place to put the usual abuse-of-notation note.  We do
                    not write explicit restrictions of functions' domains here
                    (the \vert_{U\cap V}'s) because they just obscure what is really going
                    on.
17:09:40 kommodore: A collection (phi_i,U_i) with Union U_i=M is called an atlas for M.
                    There is the usual convention that a manifold is equipped with its
                    maximal atlas, unless otherwise specified.
17:10:16 kommodore: The maximality is not a big restriction at all, because once you have
                    an atlas, there is a unique maximal atlas containing it, viz. throwing
                    all charts that are smoothly compatible with your atlas into the
                    maximal atlas.
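As a concrete example of smoothly compatible charts (my own sketch, not from the talk): the two stereographic projections of the circle S^1 form an atlas whose transition function works out to x |-> 1/x, smooth on the overlap. A quick sympy check:

```python
import sympy as sp

# S^1 = {(u, v) : u^2 + v^2 = 1}.  Stereographic projection from the
# north pole (0, 1) and from the south pole (0, -1) give two charts.
u, v, x = sp.symbols('u v x', real=True)

phi = u / (1 - v)   # chart from the north pole, defined where v != 1
psi = u / (1 + v)   # chart from the south pole, defined where v != -1

# Inverse of phi: the point with chart coordinate x is
inv_u = 2*x / (x**2 + 1)
inv_v = (x**2 - 1) / (x**2 + 1)

# Sanity check: (inv_u, inv_v) really lies on the circle.
assert sp.simplify(inv_u**2 + inv_v**2 - 1) == 0

# The transition function psi o phi^{-1}, smooth away from x = 0
# (and x = 0 is excluded from the overlap anyway):
transition = sp.simplify(psi.subs({u: inv_u, v: inv_v}))
assert sp.simplify(transition - 1/x) == 0
```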
17:10:40 ness: !
17:10:48 kommodore: go ahead
17:11:05 ness: can there be distinct maximal atlases for the same manifold?
17:11:52 kommodore: yes for topological manifolds.  For example, the topological 7-sphere
                    has 28 distinct differential structures
17:12:29 ness: thanks
17:12:45 kommodore: those 27 that are different from the usual S^7 in R^8 are called exotic
17:13:24 kommodore: so... examples of manifolds
17:13:44 kommodore: R^n is obviously a manifold, so is any open subset of R^n
17:14:37 kommodore: As an example of that, the group of invertible matrices GL(n,R) is a
                    manifold
17:15:29 kommodore: The submersion and immersion theorems are the basic high-brow tool for
                    constructing a lot of smooth manifolds.
17:16:17 kommodore: we define a map f:M^m->N^n between smooth manifolds to be smooth if,
                    for all p in M^m and every local neighbourhood phi: p in U in
                    M^m -> R^m, psi: f(p) in V in N^n -> R^n, the map psi o f o phi^{-1}
                    is smooth
17:17:09 kommodore: Now we will introduce two important objects constructed from this
                    manifold M, namely the tangent bundle and the cotangent bundle.
17:17:27 kommodore: Before that, we need to make precise our intuition about tangent vectors
                    and differentials.
17:17:56 kommodore: There are at least 4 different ways to define what a tangent vector is.
                    I'll just pick the most visual one:  A tangent vector at a point p is
                    an equivalence class of smooth curves through p.  The equivalence
                    relation is the following:
17:18:35 kommodore: Since the point p is on M^m, there is a chart phi: p in U-> R^m.  Then
                    two curves gamma, delta: (-epsilon,epsilon)->M^m with
                    gamma(0)=delta(0)=p are equivalent if
                    (phi o gamma)'(0)=(phi o delta)'(0).
17:19:02 kommodore: This definition is independent of the chart phi in the maximal atlas
                    being used, because the chain rule tells us that switching from phi to
                    some other psi just multiplies both sides of the definition by the
                    derivative of (psi o phi^{-1}) at phi(p), which is an invertible
                    linear map of R^m.
17:19:57 kommodore: The tangent vector corresponding to phi(p)+t*e_i, where e_i is the
                    i-th basis vector for R^m, is usually denoted by @_i=@/@x^i
                    (@=\partial).  Note that this is honestly a derivation (it acts on
                    functions by f|-> @f/@x^i), and we can also now make sense of the
                    familiar chain rule @/@y^i=(@x^j/@y^i)(@/@x^j).
17:21:03 kommodore: Note that the Einstein summation convention is used here.  In general,
                    if an index appears once upstairs and once downstairs, we sum over
                    that index.
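The chain rule @/@y^i=(@x^j/@y^i)(@/@x^j) can be checked symbolically; here is a small sketch (my own example, using sympy) with polar coordinates:

```python
import sympy as sp

# Check @/@y^i = (@x^j/@y^i) @/@x^j (sum over j) for the polar change of
# coordinates x1 = r cos(theta), x2 = r sin(theta), taking y^i = r.
r, th = sp.symbols('r theta', positive=True)
x1, x2 = sp.symbols('x1 x2')
change = {x1: r*sp.cos(th), x2: r*sp.sin(th)}

f = x1**2 + x1*x2            # an arbitrary smooth test function

# Left side: apply @/@r to f expressed in polar coordinates.
lhs = sp.diff(f.subs(change), r)

# Right side: Einstein summation (@x^j/@r)(@f/@x^j), expressed in polars.
rhs = sum(sp.diff(change[w], r) * sp.diff(f, w).subs(change)
          for w in (x1, x2))

assert sp.simplify(lhs - rhs) == 0
```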
17:21:59 kommodore: the usual vector space structure of R^m induces a vector space
                    structure on the tangent space at p, T_p(M)={tangent vectors at p to M}.
17:22:30 kommodore: Now we can define the tangent bundle TM.  The tangent bundle is the
                    collection of all tangent spaces.  It is a smooth manifold when we give
                    the obvious charts induced by the charts of M and R^m.  Namely,
17:22:56 kommodore: if phi is a chart at p with neighbourhood U, then from the
                    construction, we can pick representative for any tangent vector v to be
                    the curve that is phi(p)+tv under phi.  Since phi is smooth, this gives
                    a chart from collection of all tangent vectors at some point of U, to
                    phi(U)xR^m.
17:23:32 kommodore: The transition function is just (transition function for phi,psi on M,
                    its derivative).  So we do indeed get a smooth structure.
17:24:23 kommodore: The cotangent bundle T^*M is similarly constructed, using the
                    cotangent space (dual to the tangent space).  Elements of the cotangent
                    bundle are called differentials.  The dual to @_i is dx^i.
17:24:55 kommodore: Also, we can replace R^m by any vector space V: if we have a covering
                    of M by charts phi_i, then for any collection of smoothly varying (in
                    p) functions phi_{ij}(p) in GL(V), subject to three conditions:
17:25:13 kommodore: 1. phi_{ii}=id;  2. phi_{ij}=phi_{ji}^{-1};
                    3. phi_{ij}phi_{jk}=phi_{ik}
17:26:11 kommodore: then we can repeat this construction to give a _vector bundle_ E->M,
                    with fibre V
17:27:12 kommodore: i.e. the transition from phi_i to phi_j is via (phi_j o phi_i^{-1},
                    phi_{ij})
17:27:31 kommodore: So we can construct tensor powers of TM and T^*M.  In particular, the
                    smooth sections of the k-th exterior power of T^*M are called
                    differential k-forms on M, usually denoted by Omega^k(M).
17:28:12 kommodore: Example: Omega^0(M) is just the space of smooth functions on M.
17:29:34 kommodore: Vector bundles isomorphic to a product MxV are called trivial bundles
17:30:08 kommodore: not every vector bundle is trivial, for example, TS^2 is not trivial
                    (this is the so-called hairy-ball theorem)
17:30:34 kommodore: A smooth f:M->N induces a map between tangent bundles, i.e.
                    [gamma] -> [f o gamma].  This map is well-defined, linear on each
                    tangent space, and is usually denoted by df or f_*, called the
                    differential or derivative of f.
17:32:31 kommodore: The dual map (f_*)^* is usually just denoted by f^*, a map from
                    T^*N->T^*M
17:33:14 kommodore: The exterior derivative operator d:Omega^0(M)->Omega^1(M) can be
                    generalised to taking Omega^k to Omega^(k+1).  It is just
17:33:37 kommodore: f dx^{i_1}\wedge...\wedge dx^{i_k}
                                             -> (df)\wedge dx^{i_1}\wedge...\wedge dx^{i_k}
                    extended linearly
17:34:09 kommodore: In other words, we can write
                    (d alpha)_{ij...k}=(@/@x^{[i})alpha_{j..k]}, where [...] is the
                    antisymmetrisation of indices.
17:34:45 kommodore: If the manifold is oriented (i.e. you can choose all transition
                    functions to have positive determinant), then we have a theory for
                    integrating differential m-forms \int_M: Omega^m -> R
17:35:07 kommodore: First we use partition of unity to restrict to integrating differential
                    m-forms supported inside a coordinate neighbourhood.
17:35:36 kommodore: Then we define the integral of the differential m-form
                    f(x) dx^1\wedge dx^2\wedge ... \wedge dx^m on R^m to be the usual
                    integral \int f(x) dx^1...dx^m
17:35:58 kommodore: and transport back to the manifold.  Later (i.e. next seminar if I get
                    to it) we will see that this partition of unity is not necessary
                    --- there is a closed set of dimension <m which, when deleted from the
                    manifold, gives us an open set diffeomorphic to R^m.
17:36:41 kommodore: The exterior derivative d satisfies d o d = 0, because of symmetry of
                    partial derivatives.  So (\Omega,d) is a complex, called the de Rham
                    complex.  The cohomology of this complex is the de Rham cohomology.
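d o d = 0 can be verified directly on 0-forms; here is a small sympy sketch (my own illustration, for an arbitrary smooth f on R^3):

```python
import sympy as sp

# d o d = 0 on functions, via symmetry of second partial derivatives.
x, y, z = sp.symbols('x y z')
f = sp.exp(x) * sp.sin(y) + x*y*z**2      # arbitrary smooth 0-form

# df = (@f/@x^i) dx^i, a 1-form with components:
coords = (x, y, z)
df = [sp.diff(f, w) for w in coords]

# d(df) has components (d df)_{ij} = @_i (df)_j - @_j (df)_i,
# which all vanish because mixed partials commute.
ddf = [[sp.diff(df[j], coords[i]) - sp.diff(df[i], coords[j])
        for j in range(3)] for i in range(3)]
assert all(sp.simplify(c) == 0 for row in ddf for c in row)
```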
17:37:03 kommodore: A manifold-with-boundary is an obvious generalisation, making charts
                    take values in the closed upper half space H^n instead of R^n.  Then
                    Stokes' theorem tells us that with respect to the pairing (m-manifold,
                    m-forms)->R, d and @, the boundary operator, are adjoint.  The proof of
                    Stokes is by applying a partition of unity, assuming the form has
                    compact support, then integrating just the first variable as in
                    multivariate calculus.
17:37:49 kommodore: Finally, I get to talk about connexions....
17:38:11 kommodore: Abstractly speaking, given a vector bundle E->M, there is no way of
                    comparing adjacent vector spaces E_p and E_q for p,q in some
                    trivialising neighbourhood (which is what we need for a "directional
                    derivative").  This is because we can always twist any trivialisation
                    by any smoothly varying GL(V)-valued function.
17:38:58 kommodore: So in order to compare, we need some way of saying what the
                    "constants" are.
17:39:11 kommodore: for any p in M, we have an obvious exact sequence
17:39:51 kommodore: 0 -> V -> T_pE -> T_{pi(p)}M -> 0; pi:E->M
17:40:29 kommodore: oops, make that p in E
17:40:54 kommodore: We want to split this exact sequence
17:41:27 kommodore: The image of T_{pi(p)}M->T_pE is called the horizontal subspace
17:41:45 kommodore: and image of V->T_pE is the vertical subspace
17:42:00 kommodore: Note that V=ker(d pi), so the vertical subspace is independent of
                    trivialisation.
17:42:25 kommodore: So if we have a splitting, then there must be (dim V) linearly
                    independent linear functionals theta^1,...,theta^{dim V} such that
                    T_pM appears in TE as the common kernel.  Now if we write a^i for the
                    V-coordinates and x^k for the M-coordinates in some trivialisation,
                    then we can choose
17:42:49 kommodore: theta^i=da^i+e^i_k(a,x) dx^k
17:43:11 kommodore: Moreover, if we further demand that e^i_k is _linear_ in a, i.e.
17:43:26 kommodore: e^i_k(a,x)=Gamma^i_{jk} a^j
17:43:51 kommodore: Then the horizontal subspace ker(theta^1,...,theta^{dim V}) is actually
                    a linear subspace.  The Gamma^i_{jk} are called the coefficients of the
                    connexion.
17:44:38 kommodore: Note the connexion is not in general a differential form, but the
                    difference between two connexions is a matrix of differential forms
                    (we will see that later).  So the space of connexions is an affine
                    space.
17:45:02 kommodore: A connexion gives rise to a covariant derivative, as follows:
17:45:18 kommodore: A covariant derivative is a mapping
                    D:(vector field)x(section of E)->(section of E) satisfying
                    Omega^0(M)-linearity on the first coordinate, and behaves like a
                    derivation on the second, i.e. R-linear with
17:45:37 kommodore: D(v,fs)= v(f)s + f D(v,s).
17:46:00 kommodore: If we move the vector field to the other side of the arrow, then D
                    becomes a mapping (section of E)->(section of T^*M\otimes E).  This is
                    usually how one defines a covariant derivative.
17:46:38 kommodore: Then a connexion corresponds to a covariant derivative by:
                    Gamma^i_{jk}=(e_i) component of D(@_k,e_j), where (e_i) is a basis for
                    V, and extend by linearity.  We also write D_v(s) for D(v,s).
17:47:02 kommodore: Given connexions on E->M, F->M, there are naturally induced connexions
                    on E\otimes F, E^*, tensor powers, etc., by requiring the appropriate
                    Leibniz rules to hold.
17:47:35 kommodore: We sometimes say "a connexion on M" for "a connexion of TM->M".
                     A connexion on M is torsion-free if Gamma^i_{jk} is symmetric
                    in j and k.
17:48:36 kommodore: So now we can do first covariant derivative.  What about higher ones?
17:48:59 kommodore: If we think of the covariant derivative as a map from sections of E to
                    sections of T^*M\otimes E, or equivalently (in a trivialisation) an
                    End(E)-valued 1-form on M (written Omega^1(M;End(E))), then we can
                    write
17:49:27 kommodore: D=d+A\wedge, where A=(Gamma^i_{jk}dx^k) is a matrix of 1-forms on M.
17:49:53 kommodore: (here is where you see the difference of two connexions is a matrix of
                    1-forms)
17:50:14 kommodore: Then we can repeat this procedure and have second, third, ...
                    covariant derivatives.  The second covariant derivative is
17:50:27 kommodore: DDs=d(ds+As)+A\wedge(ds+As)
                       =(dA)s-A\wedge ds+A\wedge ds+(A\wedge A)s
17:50:50 kommodore: the minus sign comes from the fact that the entries of A are 1-forms,
                    so the antisymmetrisation is going to give a minus sign
17:51:22 kommodore: so DD: s|->(dA+A\wedge A)s is actually a linear algebraic operator.
                     We call F(A)=dA+A\wedge A the _curvature form_ of the connexion.
                     If F(A)=0, we say the connexion is _flat_.
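As a minimal illustration (my own sketch, not from the talk): on a rank-1 bundle over R^2, A is a single 1-form, the A\wedge A term vanishes, and F(A)=dA is just a "curl" coefficient:

```python
import sympy as sp

# Curvature F(A) = dA + A^A for a connexion on a rank-1 (line) bundle
# over R^2: A = f dx + g dy is a 1x1 matrix of 1-forms, so A^A = 0 and
# F(A) = dA = (@g/@x - @f/@y) dx^dy.
x, y = sp.symbols('x y')
f = x**2 * y          # dx-component of A (arbitrary choice)
g = sp.sin(x) + y     # dy-component of A

F = sp.diff(g, x) - sp.diff(f, y)   # coefficient of dx^dy in dA
assert sp.simplify(F - (sp.cos(x) - x**2)) == 0

# A flat example: A = dh for a function h gives F = 0, since d o d = 0.
h = x * y**2
Fflat = sp.diff(sp.diff(h, y), x) - sp.diff(sp.diff(h, x), y)
assert sp.simplify(Fflat) == 0
```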
17:51:50 kommodore: The Bianchi identity for F(A) states that the curvature form is
                    covariantly constant: D(F(A))=0, for any A defining a covariant
                    derivative D.
17:52:16 kommodore: One-line Proof: This comes from the obvious (DD)D=D(DD) and Leibniz
                    rule.  QED.
17:52:52 kommodore: Example: the trivial connexion on the trivial bundle M\times V is
                    D(u,f^iv_i)=u(f^i)v_i; i.e. just the usual derivative on each
                    component.  It is obviously flat
17:54:33 kommodore: Let me end today by introducing the Riemannian metric and cometrics,
                    and define the Levi-Civita connexion
17:54:53 kommodore: A (smooth) Riemannian metric g on M is a smoothly varying positive
                    definite symmetric bilinear form
17:55:07 kommodore: g_p: T_pM\times T_p M -> R
17:55:25 kommodore: and we say (M,g) is a Riemannian manifold
17:55:56 kommodore: In local coordinates, we can write g=g_{ij}dx^idx^j
17:56:15 kommodore: Every manifold admits a Riemannian metric, because the
                    positive-definiteness condition is convex.  So just saying a manifold
                    is Riemannian doesn't really tell you much about it.  (Contrast this
                    with Lorentzian metrics in GR --- AFAIK we don't have a full
                    characterisation of manifolds admitting Lorentzian metrics)
17:56:55 kommodore: From linear algebra, we know g also induces a cometric
                    g^*: T^*M\times T^*M -> R.  In local coordinates, g^*=g^{ij}@_i@_j,
                    where (g^{ij}) is the inverse matrix to (g_{ij}).
17:57:34 kommodore: The Levi-Civita connexion is a connexion on M which is uniquely
                    determined by two conditions:
17:57:43 kommodore: (1) D is torsion-free; and
17:58:11 kommodore: (2) g is D-covariantly constant, i.e.
                         g(D_X(Y),Z)+g(Y,D_X(Z))=X(g(Y,Z)).
17:58:51 kommodore: The existence and uniqueness is what is known as the fundamental
                    theorem of Riemannian geometry.  We also have the Koszul formula
                    for D:
17:59:04 kommodore: 2g(D_X(Y),Z)=X(g(Y,Z))+Y(g(X,Z))-Z(g(X,Y))
                                 + g([X,Y],Z)-g([X,Z],Y)-g([Y,Z],X)
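In coordinates (where brackets of coordinate vector fields vanish), this formula reduces to the classical expression Gamma^i_{jk} = (1/2) g^{il}(@_j g_{lk} + @_k g_{jl} - @_l g_{jk}). A sympy sketch checking this on the round 2-sphere (my own example, not from the talk):

```python
import sympy as sp

# Levi-Civita coefficients from the metric, for the round 2-sphere
# g = d(theta)^2 + sin^2(theta) d(phi)^2 in coordinates (theta, phi).
theta, phi = sp.symbols('theta phi')
X = [theta, phi]
g = sp.Matrix([[1, 0], [0, sp.sin(theta)**2]])
ginv = g.inv()

def Gamma(i, j, k):
    """Gamma^i_{jk} = (1/2) g^{il}(@_j g_{lk} + @_k g_{jl} - @_l g_{jk})."""
    return sp.simplify(sp.Rational(1, 2) * sum(
        ginv[i, l] * (sp.diff(g[l, k], X[j]) + sp.diff(g[j, l], X[k])
                      - sp.diff(g[j, k], X[l]))
        for l in range(2)))

# The two classical nonzero symbols of the sphere:
assert sp.simplify(Gamma(0, 1, 1) + sp.sin(theta)*sp.cos(theta)) == 0
assert sp.simplify(Gamma(1, 0, 1) - sp.cos(theta)/sp.sin(theta)) == 0
# Torsion-freeness: symmetric in the two lower indices.
assert sp.simplify(Gamma(1, 0, 1) - Gamma(1, 1, 0)) == 0
```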
17:59:38 kommodore: Example: if G is a compact Lie group with its bi-invariant metric,
                    then the Levi-Civita connexion is D_X(Y)=[X,Y]/2 for left-invariant
                    vector fields X,Y
18:00:09 kommodore: Proof: Let X,Y,Z be left-invariant vector fields, then the Koszul
                    formula gives
18:00:22 kommodore: 2g(D_X(Y),Z)=X(g(Y,Z))+Y(g(X,Z))-Z(g(X,Y))
                                 + g([X,Y],Z)-g([X,Z],Y)-g([Y,Z],X)
18:00:43 kommodore: By bi-invariance, the first three terms cancel (because
                    g(X,Y)=constant, etc.)
18:00:53 kommodore: and so we are left to prove that the last two terms do not contribute.
18:01:23 kommodore: This is done by noticing the adjoint representation Ad is an isometry
                    of the Lie algebra = T_{id}G
18:01:34 kommodore: (because it is a composition of left and right translation, so
                    isometry by bi-invariance of metric)
18:01:42 kommodore: so
18:01:52 kommodore: (d/dt|_{t=0})g(Ad_{exp(tZ)}X, Ad_{exp(tZ)}Y)=0
18:02:10 kommodore: so upon remembering the derivative of Ad is ad, which is the Lie
                    bracket, we get the last two terms cancel. QED.
18:02:37 kommodore: Questions?  Maybe I've bored everyone out of existence?
18:04:27 _llll_: i was lost after about 20mins, i didn't really understand what TM was, or
                 how it was a manifold
18:05:04 ness: me too
18:05:55 kommodore: when M=open set of R^m, TM=MxR^m
18:06:07 kommodore: so that is a manifold
18:06:40 _llll_: probably me being slow, but what *is* TM?
18:06:42 kommodore: you can visualise this TM as the space of arrows having base at some
                    point of M
18:07:21 kommodore: TM=Union_{p in M} T_pM as a set
18:07:41 ness: how can v \in TM "act" on elements of M (or functions on M?)?
18:08:11 kommodore: essentially this is done by taking directional derivative
18:08:15 ness: I didn't at all get the part where v is a kind of derivative
18:08:35 kommodore: my definition of v is an equivalence class of curves, right?
18:08:40 ness: yes
18:09:20 kommodore: so you can make sense of (f o gamma)(t) for t in (-epsilon,epsilon),
                    where gamma represents v
18:09:49 ness: ok
18:09:53 kommodore: then we define v(f) to be (f o gamma)'(0)
18:10:01 ness: oh
18:10:14 _llll_: ah... makes a bit more sense
18:10:40 kommodore: yes
18:11:09 kommodore: there are other equivalent definitions of what a tangent vector is
18:11:22 kommodore: one of them is a derivation at p....
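A small symbolic check of v(f)=(f o gamma)'(0) against the coordinate formula v(f)=v^i @f/@x^i (my own sketch, using sympy):

```python
import sympy as sp

# v(f) = (f o gamma)'(0): a tangent vector acts on functions as a
# directional derivative.  Take p = (1, 2) and v = (3, -1) in R^2.
t, x, y = sp.symbols('t x y')
gamma = (1 + 3*t, 2 - t)        # a curve through p with velocity v at t = 0
f = x**2 * y + sp.sin(x)

# Definition via the curve: differentiate f along gamma at t = 0.
vf = sp.diff(f.subs({x: gamma[0], y: gamma[1]}), t).subs(t, 0)

# The coordinate formula v(f) = v^i @f/@x^i at p gives the same number.
grad_at_p = [sp.diff(f, w).subs({x: 1, y: 2}) for w in (x, y)]
assert sp.simplify(vf - (3*grad_at_p[0] - grad_at_p[1])) == 0
```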
18:12:20 ness: kommodore: would you mind me asking more basic questions like this?
18:12:24 kommodore: another one is "something whose components transform according to
                    @/@y^i=(@x^j/@y^i)(@/@x^j)"
18:12:39 kommodore: ness: not at all
18:13:47 ness: what *is* the cotangent bundle T^*M? or to start with, what is T_p^*M?
                    Is that just the dual space of T_pM?
18:13:54 kommodore: ness: yes
18:14:58 eigenval: some years ago i tried to understand general relativity. i had not.
                   but i've remembered something, here. nice compendium of the underlying
                   maths :-). my question: what is an interpretation of the
                   torsion-freeness of a connexion?
18:16:19 ness: So T_p^*M is the space of linear functionals from directional derivatives
               at p to R. In what sense are elements of T_p^*M (they are called
               differentials, right?) related to "traditional" differentials
               (I do understand that traditional differentials aren't so well defined at
               all)?
18:18:21 kommodore: eigenval: connexions with torsions are generally a pain to
                    work with... you don't have the Bianchi identities
18:19:26 kommodore: ness: traditional differentials supposedly transform in the
                    same way as elements of T_p^*M
18:19:57 kommodore: i.e. you want df=(df/dg)dg in 1-dimension
18:21:19 kommodore: and the obvious higher-dimensional analogue df=(@f/@x^i)dx^i
18:22:47 ness: here df and dg are elements of T_p^*M, and (df/dg) is normal derivation?
18:23:40 kommodore: df and dg are "traditional differentials"/"elements of T_p^*M",
                    the same formula holds
18:24:50 kommodore: and using these, we have apparently "proved" the change of variable
                    formula in multiple integrals....
18:26:07 ness: indeed
18:28:10 kommodore: because
                    dx^1\wedge...\wedge dx^n
                       =[(@x^1/@y^{i_1})dy^{i_1}]\wedge ...\wedge [(@x^n/@y^{i_n})dy^{i_n}]
                       = ...
                       =\sum_{sigma in S_n} (@x^1/@y^{sigma(1)})...(@x^n/@y^{sigma(n)}) sign(sigma) dy^1\wedge...\wedge dy^n
18:30:17 kommodore:    = J(x;y) dy^1\wedge...\wedge dy^n
18:30:36 ness: which is an expression for the jacobian determinant. and where the sign
               changes follow from the antisymmetry properties, right?
18:30:44 kommodore: yes
18:30:53 kommodore: so now stick the integral sign and voila!
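The computation above can be checked symbolically for polar coordinates, where dx\wedge dy = r dr\wedge d(theta); a small sympy sketch (my own example):

```python
import sympy as sp

# dx^1 ^ ... ^ dx^n = J(x; y) dy^1 ^ ... ^ dy^n: for polar coordinates
# x = r cos(theta), y = r sin(theta), the Jacobian determinant is r,
# recovering the familiar dx^dy = r dr^d(theta).
r, th = sp.symbols('r theta', positive=True)
x = r * sp.cos(th)
y = r * sp.sin(th)

# The coefficient of dr^d(theta) from the antisymmetric expansion is
# exactly the determinant of the Jacobian matrix.
J = sp.Matrix([[sp.diff(x, r), sp.diff(x, th)],
               [sp.diff(y, r), sp.diff(y, th)]]).det()
assert sp.simplify(J - r) == 0
```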
18:31:26 ness: after defining the cotangent bundle you go on to "replace R^n by any vector
               space V" and you sorta completely lost me here. Can you describe what this
               is about? are you "generalizing" manifolds to look locally like V instead
               of R^n?
18:32:04 kommodore: R^m was referring to the tangent/cotangent space
18:33:22 kommodore: so now instead of forcing how we patch the U_ixR^m together
                    (by (phi_j o phi_i^{-1}, Derivative of that)), we now want to stick
                    U_ixV's together, where V is some vector space
18:35:44 kommodore: the thing that we replace is R^m by V, and "Derivative of
                    (phi_j o phi_i^{-1})" by some smoothly varying GL(V)-valued function
                    from U_i\cap U_j
18:36:34 kommodore: that GL(V)-valued function from U_i\cap U_j I denote by phi_{ij}
18:36:42 _llll_: i think you're just defining "vector bundle" here?
18:36:50 kommodore: yes
18:38:20 kommodore: so we are doing E=(Union_i U_ixV)/{identifying (u,v) in U_ixV with
                    (u,phi_{ij}(v)) in U_jxV for u in U_i\cap U_j}
18:40:08 kommodore: I didn't mention it in the talk, but GL(V) can be replaced by any
                    subgroup of GL(V)
18:40:24 _llll_: makes much more sense if you know about sheaves and etale spaces
18:41:09 kommodore: well, sheaves are a generalisation of what we do with vector bundles...
18:42:19 _llll_: so the result here is that TM is a vector bundle over M i suppose, is it
                 also a manifold?
18:43:05 kommodore: yes, did I not say that in the talk?.... searching
18:44:27 _llll_: it may be in there, but if so, i didnt follow it :)
18:45:00 kommodore: I did... the obvious charts to cover TM are the U_ixR^n chart with
                    transition function (U_i\cap U_j)xR^n being (phi_j o phi_i^{-1},
                    derivative of that)
18:45:55 _llll_: can you explain that a bit further? a chart for M is U-->R^dim(M) ?
18:45:59 kommodore: err... not saying that coherently
18:46:50 kommodore: a chart for M is U->R^dim(M), giving rise to a chart of TM
                    TU->R^dim(M)xR^dim(M)
18:47:25 kommodore: and TU may be identified with phi(U)xR^dim(M)
18:47:56 kommodore: because in R^dim(M) you have translation by (u_2-u_1)
18:48:22 kommodore: which maps (curves through u_1) to (curves through u_2)
18:48:32 _llll_: is this meant to be obvious how to construct TU->R^dim(M)xR^dim(M)
                 from U-->R^dim(M)?
18:50:00 kommodore: yes... you identify (u,v) in R^dim(M)xR^dim(M) with the curve
                    t|->u+tv, which is an element of T_u(R^dim(M))
18:51:20 kommodore: if U-->R^dim(M), we might as well think of it as an open subset of
                    R^dim(M), so this gives TU
18:53:00 _llll_: maybe it's just notation, or more likely it's me, but i dont find any of
                 this clear so far
18:53:06 kommodore: I didn't dare to mention naturality and stuff like that, but the
                    construction of all tensor bundles T^{(r,s)}M are natural
18:54:32 kommodore: Let's call it phi: U->R^dim(M)
18:54:54 kommodore: then we have constructed T(phi(U)), right?
18:55:30 _llll_: ok
18:56:35 kommodore: Now we say that *is* TU, by identifying u with phi(u) for all u in U,
                    in the first R^dim(M) coordinate of T(phi(U))
18:59:06 kommodore: so we get TU=UxR^dim(M), via this phi
19:02:13 kommodore: the problem is then: what happens if we have another chart V which
                    overlaps with U?  In the middle we get T(U\cap V) which we identify
                    with an open subset of R^{2 dim(M)} in two different ways
19:03:21 eigenval: let me repeat what i believe to have understood: we have the
                    "fundamental theorem of Riemannian geometry": there is a unique
                    connexion, the L-C-connexion, such that the corresponding D
                    is (2) compatible with the given riemann metric and that is
                    (1) torsion-free. so why does one take the torsion-free connexion?
                    just because one can work with it more easily?
19:06:10 kommodore: eigenval: torsion-free connexions have many nice properties.
                    If we dropped the torsion-free condition, there would be no uniqueness
                    - just add any torsion
19:05:39 _llll_: are you just applying T to phi:U-->R^m to get
                 T(phi):TU->T(R^m)~R^m x R^m ?
19:06:44 kommodore: _llll_: yes.
19:06:53 _llll_: ah, ok i sort of follow a bit now
19:07:08 _llll_: so that makes TM a manifold, and also a vector bundle with fibre R^m
19:07:12 kommodore: The hard work then is to prove T is natural
19:08:11 kommodore: T(phi)=(d phi)
19:09:10 kommodore: proving T is natural is what the construction will show
19:13:20 _llll_: so presumably, if f:V-->W then TV --> VxR^m --fxid--> WxR^m -->TW is the
                 map T(f)?
19:13:24 kommodore: To be honest, we really only need the tensor bundles in what I
                    planned, not the full-blown generality of vector bundles
19:13:44 kommodore: _llll_: no
19:14:00 _llll_: oh
19:14:19 kommodore: You will need to transform the second coordinate by the derivative of
                    f at the relevant point
19:16:26 kommodore: so Tf(v,x)=(f(v), (df)(v)(x)), when V,W are open subsets of R^n, R^m
19:17:40 kommodore: because a curve x+tv is mapped under f to
                    f(x+tv)=f(x)+tf'(x)(v)+higher order
19:18:11 _llll_: ah, interesting
19:22:11 _llll_: so could you have a higher order version of TM where the equivalence
                 relation defining the tangent vectors involves the double derivative
                 not just the single derivative at zero?
19:22:15 kommodore: The tangent bundle is a special case of the jet-bundle, which keeps
                    track of all Taylor coefficients.  The tangent bundle keeps
                    only the first-order information
19:23:12 kommodore: The transformation rules for jet bundles are horrible to write out
19:24:02 kommodore: If you keep up to the k-th order information of a map M->N, it is
                    called the k-jet bundle J^k(M,N)
19:24:40 kommodore: (the J is probably going to be in calligraphic letter)
19:39:56 ~mary]]: well, transformation rules for jet bundles may be horrible to write out,
                  but they aren't really that bad...  if you look at the diagonal map
                  d : M -> M x M, the ideal sheaf defining the closed submanifold M is I.
                  The cotangent bundle is basically I/I^2.  Looking at I^n/I^{n+1} will get
                  you higher-order Taylor coefficients

19:36:38 _llll_: what is the topic for next week?
19:40:17 kommodore: "Riemannian geometry II: Curvature" and I'll talk about the various
                    curvature tensors, maybe onto how they affect the global topology

20:22:26 ChanServ changed the topic of #mathematics to:
          NEXT SEMINAR: Introductory Riemannian Geometry 2: Curvature by kommodore
          on Sunday 10 August 16:00UTC | Transcript of last seminar:
          http://www.freenode-math.com/Introductory_Riemannian_Geometry_1:_Differential_Geometry_Primer
          | Other seminars (past and future): http://www.freenode-math.com/index.php/Seminars
