Source: Lucas Vieira (LucasVB), Wikimedia Commons
ABOUT: This is where I will post math projects, current interests/studies, and other cool finds I come across in the world of mathematics.
RELATED: Mathematics Book List
11/20/2024
Over on my blog page I've written a sort of discussion about the limits of our physical knowledge of numbers, titled "Rational Numbers Are All We Will Ever Know." I am not very good with titles; that is always the last thing I do. The premise is that we cannot measure things to arbitrary precision, so ultimately we can only ever describe the world using rational numbers. In fact, the larger portion of the reals, the irrationals, will never truly be a part of the real world. This likely mirrors a lot of other discussions of the question "are numbers real?" with the eventual answer of "no, but they give us a means to describe the world," much like with words. Words may not be real, only mental constructs, but the things they represent and convey certainly are. Having said that, I make no claim that my conclusion is unique or original; this is just my perspective on the specific case of whether or not the real numbers are "real." I also use this as an opportunity to introduce the idea of the density of the rationals in the reals: between any two distinct real numbers there is always a rational, which is what lets rational measurements approximate irrational quantities arbitrarily well. I figured while I was on the topic of math I may as well sprinkle in something from set theory/analysis.
11/08/2024
A while ago I was watching the video "Kepler's Impossible Equation" by Welch Labs on YouTube, and before I got through it I wanted to try coding up the method described on my own. I have previous experience with numerical analysis in Python and Matlab, so I thought it could make for a fun project, and it turned out to be a good refresher in Python syntax too!
Kepler's equation is usually given as

$$M = E - e\sin E,$$

where $M$ is the mean anomaly, $e$ is the orbital eccentricity, and $E$ is the eccentric anomaly we want to solve for.
I created the code both in Spyder (a Python IDE for scientific computing) and with a text editor and the command line in Debian (more challenging, but educational). Both work fine, but I will show it in Spyder here for clarity. The first steps of the code are shown below. We want to start by importing two libraries, and then we ask the user to input the parameters M (the mean anomaly), ecc (the eccentricity), and n (the number of iterations).
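Here is a minimal sketch of those first steps, assuming the two libraries are numpy and pandas (pandas is used later for displaying results, and numpy supplies sine and cosine):

```python
import numpy as np
import pandas as pd

# Ask the user for the problem parameters.
M = float(input("Mean anomaly M (radians): "))
ecc = float(input("Eccentricity ecc: "))
n = int(input("Number of iterations n: "))
```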
From here we want to define a function that will run our fixed point iteration scheme. Before I discuss the code itself, I do want to look at fixed point iteration methods a bit. If you don't really care for the math, then skip to here.
A fixed point of a function $f$ is a point $x^*$ where $f(x^*) = x^*$; applying the function to that point returns the point itself.
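The classic demonstration is the cosine function: type any number into a calculator and press the cosine button repeatedly, and the display settles near 0.739. A few lines of Python (a quick sketch of that demonstration, not code from the project itself) show the same thing:

```python
import math

x = 1.0  # any real starting value works
for _ in range(50):
    x = math.cos(x)  # repeatedly apply cosine

print(x)  # about 0.7390851, the unique solution of cos(x) = x
```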
Many examples of fixed points exist, and this behavior is critical to creating convergent numerical analysis schemes. There are also plenty of theorems that provide sufficient conditions for a fixed point to exist. One such theorem is the Banach Fixed Point Theorem. If you are familiar with analysis then there is a good chance you have seen this before. If not, it is not necessary for the practical task of coding our solution and can be skipped if you'd like. First, it is important to define the term "contraction mapping." Given a metric space $(X, d)$, a map $T: X \to X$ is called a contraction mapping if there exists a constant $q \in [0, 1)$ such that $d(T(x), T(y)) \le q\,d(x, y)$ for all $x, y \in X$.
Banach Fixed Point Theorem
Let $(X, d)$ be a non-empty complete metric space and let $T: X \to X$ be a contraction mapping. Then $T$ admits a unique fixed point $x^*$ in $X$. Moreover, for any starting point $x_0 \in X$, the sequence defined by $x_{k+1} = T(x_k)$ converges to $x^*$.
Now, cosine is in fact a contraction mapping and the real numbers form a non-empty complete metric space (I will skip the proofs, but they are readily found). Therefore, the conditions of the Banach Fixed Point Theorem are satisfied and we can expect cosine to admit a fixed point, which we saw in our previous demonstration. So, what about Kepler's equation? Well, let's instead set it up in fixed point form as

$$E = M + e\sin E,$$

so that a fixed point of the map $g(E) = M + e\sin E$ is exactly a solution of Kepler's equation.
Mean Value Theorem
Let $f$ be continuous on the closed interval $[a, b]$ and differentiable on the open interval $(a, b)$. Then there exists a point $c \in (a, b)$ such that

$$f'(c) = \frac{f(b) - f(a)}{b - a}.$$
So, waving my hand a bit, we see the mean value theorem implies that the mapping $g(E) = M + e\sin E$ is a contraction whenever $e < 1$: for any $E_1, E_2$ we have $|g(E_1) - g(E_2)| = |e\cos c|\,|E_1 - E_2| \le e\,|E_1 - E_2|$ for some $c$ between them, and $e < 1$ for an elliptical orbit. The Banach Fixed Point Theorem then guarantees that iterating $E_{k+1} = M + e\sin E_k$ converges to the unique solution.
We define a function that takes the previous inputs of M, ecc, and n. The function begins by defining two arrays in which we will store our data. The first is where we store our iterates $E_k$, and the second is where we store the local error $|E_k - E_{k-1}|$ at each step.
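A minimal sketch of such a function (the names kepler_fixed_point, E_arr, and err_arr are mine, not necessarily those in the original code):

```python
def kepler_fixed_point(M, ecc, n):
    E_arr = np.zeros(n + 1)    # iterates E_0, E_1, ..., E_n
    err_arr = np.zeros(n + 1)  # local errors |E_k - E_{k-1}|
    E_arr[0] = M               # a common initial guess is E_0 = M
    for k in range(1, n + 1):
        E_arr[k] = M + ecc * np.sin(E_arr[k - 1])  # E_{k+1} = M + e*sin(E_k)
        err_arr[k] = abs(E_arr[k] - E_arr[k - 1])
    return E_arr, err_arr
```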
Then, as seen starting at line 30 of the original code, the function is run and assigned to the variable E_vals, which will store both the iterates $E_k$ and the local errors.
Next, we want to use that pandas library we imported to give us a user-friendly table of data in the console. The two data arrays are combined into one bigger array that can be given to the pandas DataFrame tool. The rows are labeled with an array of two strings, "E values" and "Local Error," and the columns are numbered by iteration.
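A sketch of that display step, again with my own variable names:

```python
E_vals = kepler_fixed_point(M, ecc, n)  # tuple: (iterates, local errors)

# Stack the two arrays into one 2 x (n+1) array and label the rows;
# the DataFrame's default integer column labels number the iterations.
table = pd.DataFrame(np.vstack(E_vals), index=["E values", "Local Error"])
print(table)
```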
Great! That is all it takes to solve this problem numerically. If we use the mean anomaly and eccentricity Welch Labs uses in his video, we get the result below.
Notice how the local error falls off to zero on the last two iterations. Now, of course, there is still error in the small, but with the truncation we performed there is no error in the large! In fact, according to John D. Cook's blog, Kepler only used two iterations and called it good. So this is a highly effective iteration scheme for large objects like planets. I'd also like you to notice how quickly the error decays: by the third iteration it had already fallen by several orders of magnitude.
If you've had a course in single variable calculus, you have probably seen the Newton-Raphson method. This is a numerical analysis scheme for finding the roots of an equation iteratively. In this case we want the roots of $f(E) = E - e\sin E - M$, since $f(E) = 0$ recovers Kepler's equation. Newton-Raphson updates a guess $E_k$ by following the tangent line down to where it crosses zero:

$$E_{k+1} = E_k - \frac{f(E_k)}{f'(E_k)} = E_k - \frac{E_k - e\sin E_k - M}{1 - e\cos E_k}.$$
We code the inputs basically identically to before. The function for the iterative method, on the other hand, is a little different. Here we set up the iteration using the Newton-Raphson scheme above (line 25 in the original code). I actually made an error here which gave me poor convergence results; I'll discuss that later as a cautionary tale.
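A sketch of the Newton-Raphson version, with variable names again mine:

```python
def kepler_newton(M, ecc, n):
    E_arr = np.zeros(n + 1)
    err_arr = np.zeros(n + 1)
    E_arr[0] = M  # same initial guess as before
    for k in range(1, n + 1):
        f = E_arr[k - 1] - ecc * np.sin(E_arr[k - 1]) - M
        f_prime = 1 - ecc * np.cos(E_arr[k - 1])
        E_arr[k] = E_arr[k - 1] - f / f_prime  # Newton-Raphson update
        err_arr[k] = abs(E_arr[k] - E_arr[k - 1])
    return E_arr, err_arr
```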
We also use the same DataFrame tool from pandas to display the results. If you want to see the whole code for this one, it is also available at the GitHub link from the beginning of this entry. Now, I will run 5 iterations with the same mean anomaly and eccentricity values as above. This gives us the result below.
What is different here from Kepler's method? We should immediately see that the local error decays a bit more quickly than before. By the third iteration the local error is already orders of magnitude below what the fixed point scheme managed at the same step. The real test, though, comes when we raise the eccentricity, which is where Kepler's fixed point iteration is known to struggle.
We will still use 5 iterations to make the comparison fair. Below are the results for Kepler's Method:
And here are the results for the Newton-Raphson Method:
See how Kepler's method bounces around and continues to have local error around the first decimal place? Now look at the results from Newton-Raphson. The method still drives the local error down by several orders of magnitude within the same five iterations.
Finally, I just want to talk about an error I ran into while writing this. I was using the Newton-Raphson method with a function like the one sketched below.
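This is a reconstruction of the mistake rather than the original code verbatim, but it captures the bug:

```python
def kepler_newton_buggy(M, ecc, n):
    E_arr = np.zeros(n + 1)
    E_arr[0] = M
    for k in range(1, n + 1):
        f = E_arr[k - 1] - ecc * np.sin(E_arr[k - 1]) - M
        f_prime = 1 - ecc * np.cos(E_arr[k])  # BUG: wrong index in the cosine term
        E_arr[k] = E_arr[k - 1] - f / f_prime
    return E_arr
```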
Can you see the problem? The index in the cosine term is k when it should be k - 1, so the derivative gets evaluated at an array entry that hasn't been computed yet. The code still runs, since that slot just holds a zero, but the bogus derivative quietly ruins the fast convergence without ever raising an error.
08/19/2024
I've recently begun trying to learn differential geometry using a couple of books and some online notes. I've had the book Differential Geometry in Physics by Gabriel Lugo (can be found here) on my bookshelf for a long time and always thought I'd just use that. I started to read it and take notes, but quickly ran into some things I disliked. For one, it doesn't have any exercises at the end of the sections/chapters. Also, there are numerous grammatical errors within the first 15 pages, and the author makes references to material that has yet to be introduced. I feel that the definitions are not very well motivated and are further obfuscated by these remarks about more advanced material the reader has yet to see. I haven't totally written the book off, but I opted to study primarily out of another text.
I found a great deal on a hardcover copy (with dust jacket!) of A Course in Differential Geometry and Topology by A. Mishchenko and A. Fomenko from Mir Publishers Moscow. It was a serious treat to be able to get a book from Mir Publishers in hardcover, since so many of these tend to command quite the premium. For anyone interested, many of the titles are available online at the Internet Archive here. The text I am using can be found here. The authors write in a very clear way, using (from what I can tell) mostly standard notation, and they provide many examples. I appreciate the lengths they go to in emphasizing the lessons the reader should draw from the examples. The text also has some exercises, but I have not gotten to the point of attempting any of them. There is a separate problem book to accompany this text as well, but I haven't looked into it. It's also available on the Archive.
I have also been able to find an assortment of online notes to supplement my book, and one set I was able to get printed off, three-hole punched, and put in a binder. To me, a physical copy in my hands will always be superior to a PDF. One thing I have noticed, though, is that I haven't been able to find a lecture series on YouTube that covers diff geo from the perspective of an advanced undergrad/early grad student. I want more than just computational differential geometry of curves and surfaces (some discussion of tensor products, manifolds, etc. would be nice), but those topics tend to be found in lectures at the "600 level," i.e., for those a couple of years into a Ph.D.
08/26/2024
I've continued to study from the Mir book, but retired Lugo's book. I will need to look through it again, but I am not sure I even want to keep it. Perhaps it would be best donated to the Richard Sprague Undergrad Lounge at ISU for someone else to get some use out of it. The first chapter of the Mir book focuses on curvilinear coordinate transformations in Euclidean domains. Typically, the authors use examples from polar, cylindrical, and spherical coordinate transformations, that is, transforming from Cartesian coordinates to those systems. The studying was mostly smooth sailing until getting to the construction of the matrix representation of the metric tensor $g_{ij}$ under such a change of coordinates.
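As a concrete example of the kind of computation involved (a standard worked case, not the book's exact presentation), the polar transformation $x = r\cos\theta$, $y = r\sin\theta$ gives

$$J = \frac{\partial(x, y)}{\partial(r, \theta)} = \begin{pmatrix} \cos\theta & -r\sin\theta \\ \sin\theta & r\cos\theta \end{pmatrix}, \qquad (g_{ij}) = J^{\mathsf{T}} J = \begin{pmatrix} 1 & 0 \\ 0 & r^2 \end{pmatrix},$$

which recovers the familiar polar line element $ds^2 = dr^2 + r^2\,d\theta^2$.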
In order to supplement the Mir text, I also got a copy of Erwin Kreyszig's Differential Geometry. This one is focused on curves and surfaces in 3-D, and perhaps has a less theoretical approach, but I thought this would be a good way to ground the content I was reading in the other one. A major plus with the Kreyszig book is that he has solutions for, I believe, all of the problems in the book! That is a huge help when self-studying. I have yet to do the problems from Chapter 1 of the Mishchenko and Fomenko book, but they don't look too crazy.
10/08/2024
I've placed my differential geometry studies on hold in favor of something a bit more palatable. I think the text I was using started off with complete generality, which made it harder to follow and led me to get bored with it. Also, as I've been getting off escitalopram, the withdrawal symptoms have made it much harder to focus and tackle challenging material. Instead, I've opted to study John R. Taylor's Classical Mechanics, which is much more reader-friendly, and the content is familiar enough that I can stick with it. Eventually I will come back to differential geometry, but for now it is on an indefinite pause.
This image, originally posted by the mathematician Anthony Bonato on Twitter, struck me as particularly illustrative of the relationship between topological spaces, metric spaces, and vector spaces: every normed vector space is a metric space (via $d(x, y) = \lVert x - y \rVert$), and every metric space is a topological space (via its open balls), so the structures nest inside one another. I especially enjoy studying the properties of these spaces.
I was able to find this post as a source for the image.
Below are slides for a presentation I prepared for the Junior Analysis Seminar at Iowa State University. Here is the presentation itself (redirects to YouTube).