We have anticipated this content, but here is where Euler explains it.
The squares of positive and negative numbers are positive: (+2)² = (-2)² = +4.
We also assumed that we could only take square roots of positive numbers, which had two results: √4 = ±2.
When we need to extract the square root of a negative number, "a great difficulty arises". If we want to extract the square root of -4, which number, multiplied by itself, would give -4?
It is surely not +2 and certainly not -2 either.
The answer cannot be either a positive or a negative number, but it must belong to an "entirely distinct species of numbers".
We defined positive numbers as those greater than 0, and negative numbers as those smaller than 0. But the square roots of negative numbers are neither greater nor smaller than nothing. Nor can they be 0, since 0 multiplied by itself gives 0, not a negative number.
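A one-line numerical check of this trichotomy argument (plain Python, nothing from Euler's text):

```python
# Whatever the sign of a real number, its square is never negative,
# so no real number can be a square root of -4.
for x in [-2.0, -0.5, 0.0, 0.5, 2.0]:
    assert x * x >= 0
print("no real candidate squares to a negative number")
```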
What is said above leads us to the conclusion that these are "impossible quantities". They are called imaginary numbers, because "they exist merely in the imagination".
It seems Euler is quite reticent to accept these numbers!
Expressions like √(-1) or √(-2) "are consequently impossible", or imaginary numbers.
They are neither nothing, nor greater than nothing, nor less than nothing.
But, after all, "these numbers present themselves to the mind". "They exist in our imagination." "Nothing prevents us from making use of them, and employing them in calculation."
The square of √(-3) must be equal to -3. In general, (√(-a))² = -a.
If we want to extract the square root of -a, we can write it as √(-a) = √(-1)·√a, where √a is a possible, or real, number.
This means that the impossibility of numbers is always reduced to the presence of √(-1).
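We can check this factorisation numerically with Python's `cmath` module; the assumption here (beyond Euler's text) is that `cmath.sqrt` returns a single principal root rather than a ± pair:

```python
import cmath
import math

# sqrt(-a) comes out as sqrt(a)*i for positive a, in agreement with
# the factorisation sqrt(-a) = sqrt(-1)*sqrt(a) in the text.
for a in [1, 2, 4, 9]:
    assert cmath.isclose(cmath.sqrt(-a), math.sqrt(a) * 1j)
print(cmath.sqrt(-4))   # 2j
```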
Euler does not seem to call this quantity the number "i" anywhere in this text. (As far as I can tell, it was Euler himself who introduced the symbol i for √(-1), in a memoir of 1777, and Gauss's later adoption of it made it standard.)
We now want to multiply two numbers like √(-2) and √(-3).
There is an important danger here.
Do you remember when we discussed that √(+4) can be interpreted as ±2 or as just 2? We found it quite elegant to consider the symbol √4 as already carrying the two signs, without having to explicitly write ±√4.
But now look what Euler does:
√(-2)·√(-3) = √((-2)·(-3)) = √(+6) = √6.
So, is the correct result plus/minus the square root of 6?
With the symbol i, we would write
√2i · √3i = √6 · i² = -√6
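The same computation in Python (where the imaginary unit is written `1j`) confirms the purely real, negative result; a sketch, not something from Euler:

```python
import math

i = 1j  # Python's spelling of the imaginary unit

product = (math.sqrt(2) * i) * (math.sqrt(3) * i)
# i*i = -1, so sqrt(2)*sqrt(3)*i**2 = -sqrt(6): a negative real number
assert abs(product - (-math.sqrt(6))) < 1e-12
print(product)   # about (-2.449+0j)
```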
I see all this as a source for a lot of confusion.
Without imaginary numbers, it is nice to consider √4 as both +2 and -2, since both, when squared, give 4. Why bother with explicitly writing +√4 for +2 and -√4 for -2?
However, when we have √(-4) = 2√(-1), the burden of the signs falls upon the symbol √, as always; and if we define i as √(-1), the burden of the signs falls upon the letter i.
Then, if we want to do i·i, as each instance of the symbol implicitly carries both + and -, we face the dilemma of multiplying ± by ±. Or maybe ± by ∓? If the former, we obtain i·i = +1, while for the latter we get i·i = -1, which we know to be the correct result.
So, if you want to conserve the two signs implicitly hidden in the symbol √, when multiplying two of them you need to take into account that real numbers would multiply as (±)·(±) while imaginary numbers would multiply as (±)·(∓). Isn't this weird?
This way, √2·√2 = (±)·(±)2 = +2, while √(-2)·√(-2) = (±)·(∓)2 = -2.
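Modern software agrees with these two results, provided each √ is taken as a single principal root (an assumption beyond the text, which keeps both signs hidden):

```python
import cmath

pos = cmath.sqrt(2) * cmath.sqrt(2)     # principal roots: about (2+0j)
neg = cmath.sqrt(-2) * cmath.sqrt(-2)   # (sqrt(2)*i)**2: about (-2+0j)
assert cmath.isclose(pos, 2)
assert cmath.isclose(neg, -2)
```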
We are departing from Euler's text, but I consider this important.
When we want to take the magnitude of an imaginary number, we usually multiply the number by its complex conjugate *, which gives the squared magnitude. For example, if y = 3i, its conjugate is y* = -3i. Conjugation consists in flipping the sign in front of every i.
We can write, then, i·(i*) = i·(-i) = -i² = -(-1) = +1. Notice that this is the result Euler gives, which creates a lot of confusion for the modern reader.
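Python's complex type implements exactly this conjugation, so the two competing products can be compared side by side:

```python
y = 3j
assert y.conjugate() == -3j     # conjugation flips the sign of i

i = 1j
assert i * i.conjugate() == 1   # i*(i*) = i*(-i) = +1, Euler's sign
assert i * i == -1              # i*i = -1, the modern convention
```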
But don't forget he is considering the symbol √ as carrying the two signs, so he must be thinking that in i·i we are just multiplying ± by ±, actually giving + as the final sign of the result.
In modern notation, we almost never consider the two signs as hidden in √; instead we express the sign explicitly, understanding that √ means +√ and that the negative root requires writing -√.
Under this convention, we can also become confused when writing √(-1)·√(-1), because nothing seems to prevent us from rewriting it as √((-1)·(-1)) = √(+1) = 1.
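This is precisely the trap: merging the two √ symbols before taking the root changes the answer. A quick check with Python's `cmath` (principal roots):

```python
import cmath

merged = cmath.sqrt((-1) * (-1))            # sqrt(+1) = 1: the wrong route
separate = cmath.sqrt(-1) * cmath.sqrt(-1)  # 1j * 1j = -1: the right one
assert merged == 1
assert separate == -1
```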
So it is actually a good thing to consider the signs hidden in √, but then we cannot happily multiply two square roots without considering the "polarity" of their signs.
To avoid much confusion, you can either sidestep the conflict entirely and accept that i² = -1, or accept the following:
Each √ carries either an implicit ± or an ∓. When there is a single √ we don't worry which one it carries. But when there are two √ that are to be multiplied, we need to see the signs of the content inside the √.
If the contents are positive and positive, as in √(+1)·√(+1), then the hidden signs can be thought of as the opposite of electric charges, attracting each other when they are equal and repelling each other when they are opposite, so they organise as ±±, giving the result +. Then, √1·√1 = +1.
If the contents are negative and negative, as in √(-1)·√(-1), the hidden signs can be thought of as electric charges, organising as ±∓ and giving -1.
What if we have √(+1)·√(-1)? Our result should be ±i, which is equivalent to saying that we are delaying the actual multiplication of the hidden signs, since we are explicitly writing one (±) and keeping the other hidden within i.
Recall here that while 1=+1, √(+1) = ±1.
So in √(+)√(+) we have anti-charges, arranging as ±± or ∓∓. In √(-)√(-) we have regular charges, organising as ±∓ or ∓±.
But in √(+)√(-) or √(-)√(+) we don't actually say how the charges behave. You can say that they are charges refusing to organise in any way. Or that they simply don't interact.
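A principal-root calculator does not leave the mixed case undecided: it simply commits to one of the two values ±i. For example, with Python's `cmath` (a modern convention, not something in Euler's text):

```python
import cmath

mixed = cmath.sqrt(1) * cmath.sqrt(-1)
assert mixed == 1j   # the principal branch settles on +i
```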
A similar thing happens with division.
In √(+1)/√(+1) we get the arrangement ±/± or ∓/∓, always giving +.
In √(-1)/√(-1) we get the arrangement ±/∓ or ∓/±, always giving -.
In √(+1)/√(-1) or √(-1)/√(+1), the hidden signs refuse to reach an agreement, so all we can say is that √(+1)/√(-1) = ±1/i and √(-1)/√(+1) = i/(±1).
For 1/√(-1) = 1/i we can multiply up and down by i, getting
1/√(-1) · √(-1)/√(-1) = √(-1)/(-1) = i/(-1) = -i.
Then, √(+1)/√(-1) = ∓i and √(-1)/√(+1) = ±i.
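The same divisions, carried out with Python's principal roots, pick one sign from each ∓/± pair:

```python
import cmath

i = cmath.sqrt(-1)       # 1j
assert 1 / i == -1j      # 1/i = i/i**2 = i/(-1) = -i

assert cmath.sqrt(1) / cmath.sqrt(-1) == -1j   # one value of the pair -/+i
assert cmath.sqrt(-1) / cmath.sqrt(1) == 1j    # one value of the pair +/-i
```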
What do we have in √2·√3? We have two √ with positive content, so they should agree that their hidden signs organise as ±± or ∓∓, and the result should strictly be +|√6|.
In √(-2)·√(-3) we should write -|√6|.
But all this is quite a fantasy, since modern notation assumes that in √2 - √3 you actually mean something like 1.4142 - 1.7321, and not ±1.4142 ∓ 1.7321.
You can be safe using the assumption that the symbol √ means +√ and that you need to explicitly write -√ otherwise. However, be aware that you can get confused if, having written i = √(-1), you try to do i·i = √(-1)·√(-1) = √((-1)·(-1)) = √1 = 1, which is wrong! Whenever you multiply or divide two √ symbols, beware of the signs of their contents. If they are both positive, carry on with the modern assumptions. If they are both negative, carry out the modern operations with an extra - in front of the result. And if the contents have opposing signs, refuse to merge the two √'s.
What if you have √(z+a)·√(b-w)? You actually have no idea about the signs of the contents of the √. The answer is that you are safe with the modern notation as long as you are not tempted to convert the symbol i into √(-1).
What about Newton's notation where we write the square root of 4 as 4¹⸍²? If we want to be explicit with signs, we would say this is equal to +2. What if we want the result -2? Should we write -4¹⸍²?
This is curious, isn't it? Because, being strict, we should say -4¹⸍²=(-1)·(4¹⸍²)=-2, while for 2i we should write (-4)¹⸍². So many tacit assumptions!
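Amusingly, Python's operator precedence matches this strict reading: `**` binds tighter than the unary minus, so the parentheses really matter (a sketch, unrelated to Euler's text):

```python
# -4**0.5 parses as -(4**0.5), i.e. (-1)*(4 to the 1/2) = -2
assert -4**0.5 == -2.0

# (-4)**0.5 is the square root of -4 itself: numerically close to 2i
root = (-4) ** 0.5
assert abs(root - 2j) < 1e-12
```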
What if we would like to keep both signs hidden, like in √? Then they would have to hide within the exponent. And for 2¹⸍²·2¹⸍² we would get 2⁽¹⸍²⁾⁺⁽¹⸍²⁾ = 2¹. Can we claim that 2¹ is equal to ±2? It would be quite complicated! Since roots and powers are unified as fractional powers in Newton's notation, it is most convenient to follow the modern convention of making the signs explicit when you mean negative, and otherwise assuming positive.
What if we want to convert i·i into (-1)¹⸍²·(-1)¹⸍²? While in √(-1)√(-1) we are tempted to do √((-1)·(-1))=√(+1)=1, here we must do (-1)¹=-1, which is correct!
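Python reproduces this: with fractional powers, the exponents add, and the product lands on -1 (up to floating-point noise):

```python
import cmath

x = (-1) ** 0.5                      # numerically close to i
assert cmath.isclose(x * x, -1)      # (-1)**(1/2) * (-1)**(1/2) is about -1
assert (-1) ** (0.5 + 0.5) == -1     # adding exponents first: (-1)**1
```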
My final conclusion is that the most powerful and non-confusing notation is to always assume positive unless specified and to always use Newtonian powers, so that you can safely apply the definition of i and still get correct results.
The number 4 has two square roots, which are ±4¹⸍²=±2, and there is no confusion here. Moreover, the number -4 has two square roots as well, which are ±(-4)¹⸍²=±2i.
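Both pairs of roots can be verified directly:

```python
# The two square roots of 4...
assert 2 ** 2 == 4 and (-2) ** 2 == 4
# ...and the two square roots of -4, written with i as 1j:
assert (2j) ** 2 == -4 and (-2j) ** 2 == -4
```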
The symbol √ is, then, a bit dangerous when multiplying and dividing by another √. Don't fall into its trap!
Euler remarks that √4 carries two hidden signs, +2 and -2, and that √(-4) has two values as well, which he confusingly writes as +2√(-1) and -2√(-1). Why does √ not carry two signs here, while in √4 it does? So confusing. Then again, the perspective from the future is so easy! Too easy. Euler was infinitely smarter than all of us combined, so don't be fooled by this confusion. And I would not rule out an error introduced by the person transcribing the text while Euler, already blind, dictated it.
Euler warns us not to think that because of "impossible" these numbers are useless. On the contrary! "The calculation of imaginary quantities is of the greatest importance."