Wednesday, 20 July 2016
Instructions on how to study for the resit exam in September 2016 are the same as for the May 2016 exam.
Sunday, 15 May 2016
Picard's Theorem
One of the students has asked me: "In the past papers, the statement of the Picard Theorem is in a different form from the version given in the notes. Which one shall we use?"
There are various versions of Picard's Theorem which, although they may be formulated with different technical details, all essentially state the same thing, namely that unique solutions of ODEs can be found by means of Picard iteration. If you are asked to state "Picard's Theorem", then of course I will be happy with any correct and sensible version of this theorem. However, note that if I ask you to prove a certain result, your proof should of course relate to the version of the theorem that you state, or to the specific version (from the lectures) that I ask you to prove.
Past exam papers are useful for getting an idea of whether or not you are broadly well prepared for the exam. There is little point in learning answers to past exam papers by heart. Also, in most cases the provided "model answers" are not the unique correct formulation of an answer...
In general, let me assure you that I will give generous credit to answers that demonstrate your understanding of the material you are asked about, rather than splitting hairs over whether or not you provide exactly the answer I would have given as a model...
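To make the content of the theorem concrete, here is a small numerical sketch of Picard iteration (my own illustration, not from the lectures): the iteration y_{k+1}(t) = y_0 + ∫_0^t f(s, y_k(s)) ds is applied on a grid, with the integral approximated by the trapezoidal rule, for the model problem y' = y, y(0) = 1, whose unique solution is e^t.

```python
import numpy as np

def picard_iterates(f, y0, t, n_iter):
    # Picard iteration: y_{k+1}(t) = y0 + int_0^t f(s, y_k(s)) ds,
    # with the integral approximated by the cumulative trapezoidal rule
    y = np.full_like(t, y0, dtype=float)   # initial guess: constant function y0
    for _ in range(n_iter):
        integrand = f(t, y)
        steps = 0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t)
        y = y0 + np.concatenate(([0.0], np.cumsum(steps)))
    return y

t = np.linspace(0.0, 1.0, 2001)
y = picard_iterates(lambda s, u: u, 1.0, t, n_iter=30)
# y approximates exp(t) on [0, 1]; in particular y[-1] is close to e
```

For y' = y the k-th iterate is exactly the k-th partial sum of the exponential series, so the fast convergence seen here is expected; for a general Lipschitz f, the theorem guarantees convergence on a sufficiently small interval.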
Sunday, 8 May 2016
Persistence of transverse intersections
In Example 1.4.9 we discuss the persistence of transverse intersections as an application of the Implicit Function Theorem. Someone asked me about this in the second revision class. I will try to elucidate the final conclusion of this example from the notes.
"Persistence" of the isolated intersection of two curves in R2 in this example, means that if we "perturb" the curves slightly, then there remains to be a unique isolated intersection of these two curves near the original isolated intersection.
We represent the two curves by differentiable functions that parametrize them: f, g : R → R². We assume the intersection to be at f(0) = g(0). We now consider a parametrized family of functions f_λ, g_λ : R → R², representing "perturbations of the original curves", where λ serves as the "small parameter", so that f_0 = f and g_0 = g. We furthermore assume that the perturbations are such that the derivatives Df_λ(0) and Dg_λ(0) are continuous in λ near λ = 0.
In the example it is proposed to consider the function h_λ : R² → R² defined as h_λ(s, t) := f_λ(s) − g_λ(t). By construction h_0(0,0) = (0,0), and indeed the intersection points of the curves represented by f_λ and g_λ are given by h_λ^{-1}(0,0).
It follows that Dh_λ(s, t) = (Df_λ(s), −Dg_λ(t)), as in the notes. This two-by-two matrix is non-singular (i.e. has no zero eigenvalue or, equivalently, is invertible) if and only if the two-dimensional vectors Df_λ(s) and Dg_λ(t) are not parallel (i.e. not real multiples of each other).
We now use this to analyze the intersection at λ = 0: when Df_0(0) and Dg_0(0) (which are the tangent vectors to the respective curves at the intersection point) are not parallel, then Dh_0(0,0) is invertible and the intersection of the two curves at f_0(0) = g_0(0) is isolated (there is a neighbourhood of this point in which there is no other intersection).
Considering a small variation of λ, we note that by application of the Implicit Function Theorem to the map (λ, s, t) ↦ h_λ(s, t) at λ = 0, for sufficiently small λ there exist continuous functions s(λ) and t(λ) such that (s(λ), t(λ)) is the unique element of h_λ^{-1}(0,0) near (0,0) = (s(0), t(0)). This unique "continuation" of the original solution (0,0) is of course also isolated: if Dh_0(0,0) is invertible, then so is Dh_λ(s(λ), t(λ)) by continuity of all the dependences, so the vectors Df_λ(s(λ)) and Dg_λ(t(λ)) are not parallel for sufficiently small λ.
"Persistence" of the isolated intersection of two curves in R2 in this example, means that if we "perturb" the curves slightly, then there remains to be a unique isolated intersection of these two curves near the original isolated intersection.
We represent the two curves by differentiable functions that parametrize these curves: f,g:R→R2. We assume the intersection to be at f(0)=g(0). We now consider a parametrized family of functions fλ,gλ:R→R2, representing "perturbations of the original curves", where λ serves as the "small parameter" so that f0=f and g0=g. We furthermore assume that the perturbations are such that the derivatives Dfλ(0) and Dgλ(0) are continuous in λ near λ=0.
In the example it is proposed to consider the function hλ(s,t):R2→R2 defined as hλ(s,t):=fλ(s)−gλ(t). By construction h0(0,0)=(0,0) and indeed the intersection points of the curves represented by fλ andgλ are given by h−1λ(0,0).
It follows that Dhλ(s,t)=(Dfλ(s),−Dgλ(s)), as in the notes. This two-by-two matrix is non-singular (ie has no zero eigenvalue, or - equivalently - is invertible) if and only if the two-dimensional vectors Dfλ(s) and Dgλ(t) are not parallel (ie not real multiples of each other).
We now use this to analyze the intersection at λ=0: when Df0(0) and Dg0(0) (which are the tangent vectors to the respective curves at the intersection point) are not parallel, then Dh0(0,0) is invertible and the intersection of the two curves at f0(0)=g0(0) is isolated (there is a neighbourhood of this point, where there is no other intersection).
Considering a small variation of λ, we note that by application of the Implicit Function Theorem to h0, for sufficiently small λ there exist continuous functions s(λ) and t(λ) so that (s(λ),t(λ)) is the element of h−1λ(0,0) near (0,0)=(s(0),t(0)). This unique "continuation" of the original solution 0,0 is of course also isolated since if Dh0(0,0) is invertible then so is Dhλ(s(λ),t(λ)) by continuity of all the dependences; so the vectors Dfλ(s(λ)) and Dgλ(t(λ)) will not be parallel for sufficiently small λ.
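As a concrete numerical sketch of this continuation argument (the curves and the perturbation below are my own hypothetical choices, not those from the notes), one can locate the perturbed intersection by applying Newton's method to h_λ, which works precisely because Dh_λ is invertible near the transverse intersection:

```python
import numpy as np

def newton2(h, J, x0, steps=20):
    # Newton's method for a map R^2 -> R^2 with Jacobian J
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = x - np.linalg.solve(J(x), h(x))
    return x

lam = 0.1                                   # small perturbation parameter
f = lambda s: np.array([s, s + lam])        # perturbed first curve f_lam
g = lambda t: np.array([t, -t])             # second curve (unperturbed here)
h = lambda x: f(x[0]) - g(x[1])             # h_lam(s, t) = f_lam(s) - g_lam(t)
J = lambda x: np.array([[1.0, -1.0],        # columns: Df_lam(s) and -Dg_lam(t);
                        [1.0,  1.0]])       # non-parallel tangents => invertible
s, t = newton2(h, J, (0.0, 0.0))
# the perturbed intersection sits at s = t = -lam/2, near the original (0, 0)
```

For this linear example one Newton step already lands on the exact solution; the point of the sketch is that invertibility of Dh_λ, i.e. transversality, is exactly the hypothesis that makes the solve step well defined.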
Friday, 6 May 2016
2009 exam question 2
A student has asked me to go into more detail on the model answers for parts (c)(iii) and (d)(ii) of question 2 of the 2009 exam.
First let me recall some generalities about the Jordan-Chevalley decomposition. In the (complex) Jordan form, the diagonal part of the matrix is semi-simple (as it obviously has a diagonal complex Jordan form) and the remaining off-diagonal part is nilpotent (it is upper or lower triangular with zero diagonal, and one easily verifies that taking powers of such matrices eventually always results in the 0 matrix). One also easily verifies in Jordan form that the diagonal part and the off-diagonal part of the matrix commute with each other. A very convenient fact is that the properties "semi-simple", "nilpotent" and "commutation" are intrinsic and do not depend on the choice of coordinates:
If A^k = 0, then (T A T^{-1})^k = 0 for any invertible matrix T.
If A is complex diagonalizable, then so is T A T^{-1} for any invertible matrix T.
If A and B commute, i.e. AB = BA, then also (T A T^{-1})(T B T^{-1}) = (T B T^{-1})(T A T^{-1}) for any invertible matrix T.
So we observe that we can prove the Jordan-Chevalley decomposition (and its uniqueness) directly from the Jordan normal form.
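The first and third of these invariance facts are easy to check numerically; the matrices below are my own toy examples. (The second fact follows in the same spirit: if A = S Λ S^{-1} with Λ diagonal, then T A T^{-1} = (TS) Λ (TS)^{-1}.)

```python
import numpy as np

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # nilpotent: A @ A = 0
B = np.eye(2) + A                         # commutes with A, since B = I + A
T = np.array([[2.0, 1.0], [1.0, 1.0]])    # an arbitrary invertible change of basis
Ti = np.linalg.inv(T)
Ac, Bc = T @ A @ Ti, T @ B @ Ti           # the conjugated matrices
# nilpotency and commutation survive conjugation:
assert np.allclose(Ac @ Ac, 0)
assert np.allclose(Ac @ Bc, Bc @ Ac)
```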
We proved in an exercise that exp(A + B) = exp(A) exp(B) if AB = BA. So in particular, if A = A_s + A_n is the Jordan-Chevalley decomposition, then exp(At) = exp(A_s t) exp(A_n t). This is very useful, since the first factor exp(A_s t) depends only on the eigenvalues of A, and thus contains only terms depending on e^{λ_i t}, where the λ_i denote the eigenvalues of A (if eigenvalues are complex, this leads to terms with dependencies e^{ℜ(λ_i) t} cos(ℑ(λ_i) t) and e^{ℜ(λ_i) t} sin(ℑ(λ_i) t), where ℜ and ℑ denote the real and imaginary parts, respectively).
We know that sometimes polynomial terms also appear in the expression exp(At). These polynomials come from the second factor exp(A_n t), since exp(A_n t) = ∑_{m=0}^{k-1} A_n^m t^m / m! (this follows from the fact that A_n^k = 0).
The question (c)(iii) is about the Jordan-Chevalley decomposition of exp(At). The only thing to check is that we can write it as the sum of a semi-simple and a nilpotent matrix which commute with each other. (The Jordan-Chevalley decomposition theorem then asserts that this decomposition is in fact unique.)
The question contains the hint that exp(A_s t) is semi-simple. We can see this by verifying that if T A_s T^{-1} is (complex) diagonal, then so is T exp(A_s t) T^{-1} = exp(T A_s T^{-1} t).
Let us check that the semi-simple part of exp(At) is indeed equal to exp(A_s t) (in the sense of the Jordan-Chevalley decomposition). We write exp(At) = exp(A_s t) + N, where N := exp(At) − exp(A_s t). Now we recall that exp(At) = exp(A_s t) exp(A_n t), so N = exp(A_s t)[exp(A_n t) − I], and as these two factors commute we have N^k = exp(A_s t)^k [exp(A_n t) − I]^k. If A_n^k = 0, we also have [exp(A_n t) − I]^k = 0, since exp(A_n t) − I = p(A_n) is a polynomial in A_n with p(0) = 0. Thus N is nilpotent, and it is readily checked that N also commutes with exp(A_s t). So exp(At) = exp(A_s t) + N is the Jordan-Chevalley decomposition of exp(At), where exp(A_s t) is the semi-simple part and N = exp(A_s t)[exp(A_n t) − I] is the nilpotent part.
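A quick numerical check of this decomposition, for the hypothetical 2-by-2 example A = [[2, 1], [0, 2]] with A_s = 2I and A_n = [[0, 1], [0, 0]] (so A_n² = 0 and hence exp(A_n t) = I + A_n t):

```python
import numpy as np
import math

t = 0.3
An = np.array([[0.0, 1.0], [0.0, 0.0]])   # nilpotent part, An @ An = 0
expAs = math.exp(2 * t) * np.eye(2)       # exp(As t), with As = 2I
expAn = np.eye(2) + An * t                # finite series, since An^2 = 0
expAt = expAs @ expAn                     # exp(At) = exp(As t) exp(An t)
S = expAs                                 # semi-simple part of exp(At)
N = expAs @ (expAn - np.eye(2))           # nilpotent part exp(As t)[exp(An t) - I]
assert np.allclose(S + N, expAt)          # the two parts sum to exp(At)
assert np.allclose(N @ N, 0)              # N is nilpotent
assert np.allclose(S @ N, N @ S)          # S and N commute
```

Note also that the entries of N are t e^{2t} times constants, which is exactly where the polynomial-times-exponential terms in exp(At) come from.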
In part (d)(ii), B is the projection onto the eigenspace for eigenvalue −1 whose kernel is the generalised eigenspace for eigenvalue +1, and D = A_n. As there is a Jordan block for eigenvalue +1 (and not for eigenvalue −1), the range of A_n is the eigenspace of A for eigenvalue +1 (check this by writing down a 2-by-2 matrix with a Jordan block); the kernel of A_n is spanned by the eigenspaces of A. Since the range of D lies inside the kernel of B, and the range of B lies inside the kernel of D, it follows that DB = BD = 0.
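For illustration (this particular 3-by-3 matrix is my own example, not the matrix from the exam), take an A with a 2-by-2 Jordan block for eigenvalue +1 and a simple eigenvalue −1; then B and D = A_n can be written down explicitly and DB = BD = 0 checked directly:

```python
import numpy as np

# A has a Jordan block for eigenvalue +1 and a simple eigenvalue -1
A = np.array([[1.0, 1.0,  0.0],
              [0.0, 1.0,  0.0],
              [0.0, 0.0, -1.0]])
An = np.array([[0.0, 1.0, 0.0],          # nilpotent part of A; its range is
               [0.0, 0.0, 0.0],          # the +1 eigenspace, span{e1}
               [0.0, 0.0, 0.0]])
B = np.diag([0.0, 0.0, 1.0])             # projection onto the -1 eigenspace,
                                         # kernel = generalised eigenspace of +1
D = An
assert np.allclose(D @ B, 0)             # range of B lies in the kernel of D
assert np.allclose(B @ D, 0)             # range of D lies in the kernel of B
```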
2014 exam question 4 (v)
A student asked me about the model answer, which is very short and perhaps not so obviously correct.
If an equilibrium y has a stable manifold, then the ω-limit set of every point on this manifold equals {y} (as such a point x converges to y under the flow). However, if we have a saddle point, there also exists an unstable manifold. If an initial point x does not lie on the stable manifold of an equilibrium, then by definition it does not converge to the equilibrium. It is a more subtle question whether it could still accumulate on the equilibrium. I did not set this exam question, but the model answer is perhaps a bit too brief. Namely, it could be that for a point x that does not lie on the stable manifold of an equilibrium y, we still have y ∈ ω(x). For instance, there could be a heteroclinic or homoclinic cycle (consisting of equilibria and connecting orbits) to which x accumulates and which contains the saddle equilibrium y. In this exam question there is only one equilibrium, so we could only have a homoclinic cycle (a connecting orbit from one saddle to itself). But a homoclinic cycle to a saddle would imply the existence of an equilibrium inside the area enclosed by the homoclinic loop (by Poincaré-Bendixson (PB) arguments, similar to the conclusion about the existence of an equilibrium inside the area enclosed by a periodic orbit), and as there is only one equilibrium in the system under consideration, this cannot be the case here. So there is no homoclinic cycle, there is only one equilibrium in A, and orbits cannot leave A but also do not converge to the equilibrium. Then by PB they must accumulate on a periodic solution (in A).
Recall that the ω-limit set of a point x consists of the points to which ϕ_t(x) accumulates, i.e. the points y such that there exists an increasing sequence t_n with lim_{n→∞} ϕ_{t_n}(x) = y. The model answer reads: "there exists x in A such that the ω-limit set of x is not contained in the stable manifold of the singularity. Hence A contains a periodic orbit."
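This mechanism can be seen in a standard example (my own illustration, not the system from the exam): the planar system x' = x − y − x(x² + y²), y' = x + y − y(x² + y²) has a unique (unstable) equilibrium at the origin, any annulus 0 < r_1 ≤ r ≤ r_2 with r_1 < 1 < r_2 is a trapping region, and in polar coordinates r' = r(1 − r²), so every nonzero orbit accumulates on the periodic orbit r = 1, as PB predicts.

```python
import numpy as np

def rhs(p):
    # x' = x - y - x(x^2+y^2), y' = x + y - y(x^2+y^2):
    # unique equilibrium at the origin, attracting periodic orbit r = 1
    x, y = p
    r2 = x * x + y * y
    return np.array([x - y - x * r2, x + y - y * r2])

def rk4(p, h, n):
    # classical 4th-order Runge-Kutta integration of p' = rhs(p)
    for _ in range(n):
        k1 = rhs(p); k2 = rhs(p + 0.5 * h * k1)
        k3 = rhs(p + 0.5 * h * k2); k4 = rhs(p + h * k3)
        p = p + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return p

p = rk4(np.array([0.1, 0.0]), h=0.01, n=5000)   # start off the stable manifold
r = float(np.hypot(p[0], p[1]))                  # radius approaches 1
```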
Tuesday, 3 May 2016
Questionnaire
In the final revision class I handed out a questionnaire to get a more detailed feedback about the course beyond SOLE. If you have not filled out and handed in the form to me yet, you can find an electronic copy of the questionnaire here. Please fill it out and send it to me by e-mail or print it out and leave it in my pigeonhole. Your feedback is very much appreciated.
Second revision class
I discussed the application of Poincare-Bendixson theory to sketching phase portraits. The short note/summary I used can be found here.