an apology for series, without apologies

Newton invented calculus before Leibniz, but Leibniz’s version had already spread by the time Newton made his own public. According to Westfall, Newton was not on good terms with his editor, and was understandably angry: the editor had a copy of Newton’s series method when Leibniz visited his office, so it is not impossible that the latter had the opportunity to inspect that text. In any case, Leibniz’s version is quite different from Newton’s, not only in notation and nomenclature, but also in other, more important respects.

Leibniz’s calculus is the one that kept the fame, and it is what is taught today at schools and universities. We talk about derivatives and integrals instead of Newton’s equivalent terms, ‘fluxions’ and ‘fluents’. We write dx/dt for the time derivative of x instead of Newton’s ẋ (and ẍ for the second derivative), although physicists still use the dot notation, at least for time derivatives. Today we are used to y′ for dy/dx and y″ for d²y/dx², a notation introduced by Lagrange. Newton used various notations for his fluents, but none survived; instead, most of the time we use Leibniz’s ∫, an elongated S that literally stands for summation.

But the question of who invented it first, or with which notation, is, to me, quite irrelevant compared with the substantial difference in depth between the two versions. It is very sad that we are only introduced to Leibniz’s calculus, because Newton’s is infinitely better and deeper, albeit more difficult.

The word ‘calculus’ literally means ‘little stones’, and it implies the ability to count, i.e. to provide a number as a result. The fundamental difference between Newton’s and Leibniz’s versions is that with the former you can calculate everything to arbitrary precision, whereas the latter is far more limited.

In school, our mathematics teachers give us two tables, like God did with Moses: the derivative and the primitive (integral) tables. I find it quite funny that the tables that remained were a second set. Perhaps the first one was too Newtonian and Moses was enraged by its complication, so he went up Mount Sinai again for a second, easier-to-understand set. Furthermore, these tables are to be taken on faith, since any attempt to derive their results needs either Newtonian calculus or dizzying epsilon-sorcery.

For example, the rule d(sin(x))/dx = cos(x) is beautifully simple, although it completely masks its meaning. Or the integral of 1/x being the logarithm of x: where does that come from? And, what is worse, if I need to calculate the definite integral of sin(x) between x = 0.1 and x = 0.3, what is the result? In school we have two choices: either to write cos(0.1) − cos(0.3) or to use a calculator to get the result. The question, then, is what the calculator is actually doing! Fun fact: calculators are purely Newtonian!
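
What the calculator does is, in spirit, something like the following minimal sketch (real calculators use heavily optimized approximations, but the idea is Newtonian): integrate the series of sin(x) term by term and evaluate the resulting primitive at both limits.

```python
# A minimal sketch: integrate sin(x) = x - x^3/3! + x^5/5! - ... term by term,
# which gives the primitive x^2/2! - x^4/4! + x^6/6! - ..., then take the
# difference between the two limits.
from math import cos, factorial

def primitive_of_sin(x, terms=6):
    """Partial sum of the term-by-term primitive of the sine series."""
    return sum((-1)**n * x**(2*n + 2) / factorial(2*n + 2) for n in range(terms))

a, b = 0.1, 0.3
print(primitive_of_sin(b) - primitive_of_sin(a))  # 0.0396676...
print(cos(a) - cos(b))                            # 0.0396676..., the 'table' answer
```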

In essence, Leibnizian calculus helps us manipulate some well-known functions in quite a tidy way. When its rules apply, we can get an ‘exact’ result, the quotes meaning that most of the time this exactitude is only apparent. And, most of the time, functions are not integrable by Leibniz’s method. Derivatives, though, are always available for any gentle function, although the results are again expressed in terms of well-known functions, and by ‘well-known’ I mean ‘easy to type into a calculator’.

Newton’s ‘Method of Fluxions’ is an ode to literal calculus, in the sense of ‘providing results as numbers’. Before diving into fluxions and fluents, he introduces us to the manipulation of power series: how to divide, multiply, raise to powers and even invert them. If you are not familiar with these methods, they may seem daunting, but… you ARE familiar with them, because they are what we do when we manipulate numbers. Why is that so? Well, because numbers, at least when represented by decimal digits (in base 10 or any other), are power series. Take the number 1234.5, for example. It can be written as 1·X³ + 2·X² + 3·X¹ + 4·X⁰ + 5·X⁻¹, where X is, unsurprisingly, 10. A number with a finite number of digits is simply a polynomial, and when the digits don’t end, an infinite series. The methods for manipulating series are very similar, sometimes identical, to the methods for manipulating numbers.
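
To make the analogy concrete, here is a minimal sketch: multiplying two coefficient lists is exactly long multiplication without the carrying, and evaluating the result at X = 10 recovers ordinary arithmetic.

```python
# Multiplying polynomials (coefficients listed from the highest power down)
# is long multiplication without carries: each output coefficient is a
# convolution of the input 'digits'.
def poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

# 12 = 1·X + 2 and 13 = 1·X + 3, with X = 10:
coeffs = poly_mul([1, 2], [1, 3])
print(coeffs)  # [1, 5, 6], i.e. 1·X² + 5·X + 6
print(sum(c * 10**k for k, c in enumerate(reversed(coeffs))))  # 156 = 12·13
```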

When we say cos(π/4) we can use Pythagoras’ theorem to get an exact result. If we are asked for cos(1), though, what use is that theorem, or any geometrical interpretation of this function? From a practical point of view, we need the actual result, 0.5403…, which is good enough for any basic engineering. How do we calculate this number (the same question as ‘how do we program a calculator’)? It is not a quick calculation by hand, so once it is made, we had better store the result in a table, something that has been done and used for centuries. Could we build a right-angled triangle and measure the ratio of the adjacent side to the hypotenuse? Absurd.

But let’s notice something here. In terms of ‘elegance’, writing cos(1) looks pretty. Also, the geometrical interpretation of the cosine as a ratio in a right-angled triangle is quite appealing. However, saying that cos(1) = 1/0! − 1/2! + 1/4! − 1/6! + &c is, at first sight, quite ugly. An etcetera (&c)? Does this mean we need to calculate forever? And those ugly factorials! But there is something here that we didn’t have before: the power to actually calculate the result. If we take two terms we get 0.5, which is already a fair result. Take three terms and get 0.541666…, already a very good one. With four terms, you get 0.5402777…, for all purposes touching the true result. Do you find this inelegant or dirty? My opinion is that it is dirty, and because of that, it is powerful. As with many other things, you need to get your hands dirty to actually achieve something. This is no different. The tidiness of ‘closed-form expressions’ (i.e. sin(x), log(x), etc.) is misleading.
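
Here is that computation as a minimal sketch, watching the partial sums close in on the true value:

```python
# Partial sums of cos(1) = 1/0! - 1/2! + 1/4! - 1/6! + &c
from math import cos, factorial

s = 0.0
for n in range(5):
    s += (-1)**n / factorial(2 * n)
    print(n + 1, "terms:", s)
# 1 terms: 1.0
# 2 terms: 0.5
# 3 terms: 0.5416666666666666
# 4 terms: 0.5402777777777777
# 5 terms: 0.5403025793650793
print("true value:", cos(1))  # 0.5403023058681398
```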

Consider that we need to calculate the area under a Gaussian y = exp(−x²) between x = 0 and x = 0.2. There is no way you can actually calculate it à la Leibniz with the usual tables (not even the primitive, let alone the numerical result), while with Newton’s method you can get to it really fast, in a couple of minutes, with fair precision. Interestingly, we could use an expanded table and say, yes, the primitive of exp(−x²) is (√π / 2) · erf(x), where erf(x) is the error function. It is a closed-form expression, as valid as sin(x). But it is not in the high-school calculator, no matter how ‘scientific’! It seems so elegant to write (√π / 2) · erf(0.2), while à la Newton we would expand exp(−x²) as 1 − x² + x⁴/2 − &c, integrate term by term to get x − x³/3 + x⁵/10 − &c, and obtain, successively, 0.2, 0.197333…, 0.1973653…. How can this be more elegant than the other expression, which after all includes π, the pinnacle of elegance? Well, let me bring two pieces of news here. Firstly, the actual result is something like 0.19736503…, so Newton got extremely close to it. And secondly, π is just another closed-form shorthand for what is actually a series. Let me recall that π is not a number until expressed in numbers. Of course, this one can be found on the calculator, and you can also find tables for it, but what if you were stranded on a desert island and needed to calculate this integral? How do you get π? Not so easy!
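
And here is that computation as a minimal sketch, checked against the ‘elegant’ closed form (Python happens to ship erf in its math module):

```python
# Integrate exp(-x^2) = sum of (-1)^n x^(2n) / n! term by term:
# the primitive is the sum of (-1)^n x^(2n+1) / (n! (2n+1)).
from math import erf, factorial, pi, sqrt

def gauss_area(x, terms=5):
    """Partial sum of the term-by-term primitive of exp(-t^2), from 0 to x."""
    return sum((-1)**n * x**(2*n + 1) / (factorial(n) * (2*n + 1))
               for n in range(terms))

print(gauss_area(0.2))          # 0.19736503...
print(sqrt(pi) / 2 * erf(0.2))  # 0.19736503..., the closed form agrees
```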

So, Newton’s method consists ‘basically’ (it is an extremely sophisticated art) in getting the function into series form and then integrating it trivially. We could ask here: are we really expanding the ‘true’ functions like sin(x) into dirty series just to be able to produce some numbers for engineers? This sounds like descending into impurity, doesn’t it? Well, no. This text defends the position that series are the true forms, and that closed-form expressions are handy shorthands, ultimately useless without the series. Power series are the true ID of a function, and we should really learn functions through their series forms. But no! Instead, we are given the Taylor formula, which expands the closed-form functions into their series form. So, one could think, Newton is using Taylor all the time, right?

Well, no! Taylor is the film explained backwards. You are told that the sine of x is sin(x), and, when a dirty engineer asks for an actual value, you use Taylor to uglify the function, then rapidly go back to the pure form. What I am defending here is that we should learn the most common series in series form, of course identifying their shorthand names, but without forgetting their true signatures. Only then can we learn to manipulate them in their true form, so that we don’t need Taylor, just the binomial theorem (and in fact, not even that).

To conclude this apology, we should not forget that, when going from the real to the complex numbers, power series not only remain useful and beautiful: their significance explodes. Firstly, while along the real axis a power series is the familiar ‘Taylor’ series (quite unfair terminology), on a circle of radius r it becomes a Fourier series in the angle, which infinitely enhances the power of mathematics. Secondly, it is only on the complex plane that we understand the radius of convergence of every series, as collisions with singularities. And thirdly, difficult complex integrals are solved by way of series as well, without which no pole could be calculated.
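
As a minimal sketch of such a collision: 1/(1 + x²) = 1 − x² + x⁴ − &c is perfectly smooth on the whole real line, yet its series refuses to converge beyond |x| = 1, because of the poles hiding at x = ±i.

```python
# Partial sums of 1/(1+x^2) = 1 - x^2 + x^4 - ... (a geometric series in -x^2).
# They converge inside |x| < 1 and blow up at x = 1.1, even though the function
# itself is harmless there: the complex poles at ±i fix the radius.
def partial_sum(x, terms=50):
    return sum((-1)**n * x**(2 * n) for n in range(terms))

for x in (0.5, 0.9, 1.1):
    print(x, partial_sum(x), 1 / (1 + x * x))
```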

As you can see, we are usually given a lite version of calculus, both in high school and at university. We fill our blackboards with Leibnizian calculations while we let calculators and computers find the numbers with Newtonian methods. Nobody can deny the usefulness and beauty of Leibniz’s calculus, but upon inspection, its power is severely limited once away from the classic academic examples. Newton’s calculus is so powerful that it even allows you to compute results for almost every differential equation.
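
A minimal sketch of that last claim: substituting a power series with unknown coefficients into y″ = −y and matching powers gives the recurrence aₙ₊₂ = −aₙ / ((n+1)(n+2)); with y(0) = 0 and y′(0) = 1, the series of the sine falls out on its own.

```python
# Solve y'' = -y with y(0) = 0, y'(0) = 1 by power series:
# substituting y = sum of a_n x^n yields a_{n+2} = -a_n / ((n+1)(n+2)).
from math import sin

a = [0.0, 1.0]  # a_0 = y(0), a_1 = y'(0)
for n in range(30):
    a.append(-a[n] / ((n + 1) * (n + 2)))

def y(x):
    return sum(c * x**k for k, c in enumerate(a))

print(y(1.0))    # 0.8414709848...
print(sin(1.0))  # 0.8414709848..., the recurrence rediscovered the sine series
```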

Surprisingly, Newton’s Principia uses neither series-based calculus nor Leibniz’s version. He invented a third way of doing calculus, with the help of geometry. This approach is so beautiful that every secondary and high school should begin with it. What is the intuition behind the minus sign in d(cos(x))/dx = −sin(x)? This method shows its meaning clearly: as a point advances around the unit circle, its horizontal shadow, the cosine, shrinks while the vertical one grows. However, the need for numerical computation remains, and for that, series are the true pillars of analytic mathematics.