WELCOME BACK TO SECTION 1, SUCKERS. Yes, we are returning to the very first exercise to unstar a
star. To deface a diva.
SEEYA IN SECTION 3 (it'll take a few days for me to read through it, so you can relax for a bit
from my pink assault).
And therein lies another lie: the elongation of few into nine. Ten, actually, but it didn't rhyme. Have you relaxed?
Are you relaxed? Some of you may have been edging the whole time, waiting for Hartshorned to give you a release.
And, if you
are, listen: stay with me, buddy. Don't let it out yet. Read through my post and at the very end I'll give you
cumming permissions. Look at me. *Grabs your shoulders* LOOK AT ME. This is your only chance to do so.
Once you've read this post once, the spontaneity of the interaction will have been exhausted and
there is nothing to have sex with but spent souls. DON'T YOU UNDERSTAND THE SEVERITY
OF THIS SITUATION. "u kno wish I cood experience _____ for the first time again (T_T)."
<—–
DONT LET THIS BE YOU, GODDAMNIT. I CARE ABOUT YOU BABY. I DON'T WANT TO SEE
YOU FALL.
To open your mind a bit, here's a treat: What are the different types of
conic sections?
The three types of conic section are the hyperbola, the parabola, and the ellipse; the circle is a
special case of the ellipse, though historically it was sometimes called a fourth type.
Parabola, hyperbola, ellipse, eh? You think that's an apt classification here in these parts? Hey, fuckface: You know
where you are? Yeah, we're in alg geo over
k alg closed. Ellipses and hyperbolas are, like, the same thing. Don't look
at me, look at the writing on the screenshot of the pirated pdf. At the end of 3.1a I will tell you that an ellipse is
isomorphic to a hyperbola, and you will kneel.
Now, a
general conic has the form
f = Ax^{2} + Bxy + Cy^{2} + Dx + Ey + F,  (A,B,C not all zero)   
(as the exercise states, we're assuming f is irreducible)
Now, this form is super inconvenient, as I learned by struggling for an entire day. Given f, I'd like to convert it into
one of the following standard forms:

Circle: x^{2} + y^{2} = a^{2}

Ellipse: x^{2}∕a^{2} + y^{2}∕b^{2} = 1

Parabola: y = ax^{2}

Hyperbola: x^{2}∕a^{2} − y^{2}∕b^{2} = 1
And this can apparently be done via a mere "rotation and translation of axes". Now here's the question: Do I get
isomorphic coordinate rings by rotating and translating axes? If W = Z(f) (f irreducible) and
W′ = Z(f′) is the conic defined by rotating and translating W, is it the case that A(W) ≃ A(W′)?
Intuitively, the answer is yes, duh, but you know what they say: TO USE IT, YOU HAVE TO PROVE IT – a rule
that I've broken 3000 times on this blog, and yet for some reason I decided to go all the way this time.
Let's work this out. Here's a little lemma to help me:
LEMMA 1:
If ϕ : A → B is a ring isomorphism, and I is an ideal of A and J = ϕ(I), then ϕ induces an isomorphism
A∕I ≃ A∕J.
PROOF:
See here: https://math.stackexchange.com/a/2151542.
LEMMA 1 FIN.
Now, let's show the translation thingy in a fairly general setting:
LEMMA 2 (translation):
Let X be an algebraic subset in A^{n}.
Given T = (T_{1},…,T_{n}) ∈ A^{n},
Let Y = X + T = {P + T : P ∈ X} be the set X translated by T.
Then A(X) ≃ A(Y )
PROOF:
Given f ∈ k[x_{1},…,x_{n}], let's find the corresponding polynomial for the translated points: Let

f_{T} = f(x_{1} − T_{1},…,x_{n} − T_{n})

So clearly

f(P) = 0 ⇐⇒ f_{T}(P + T) = 0

And thus, I(Y ) = (f_{T} : f ∈ I(X)) (the ideal generated by the f_{T}s)
Now I want to show that
A(X)  ≃ A(Y )  

i.e. k[x_{1},…,x_{n}]∕I(X)  ≃ k[x_{1},…,x_{n}]∕I(Y )   
To do that, I'm going to make a k-algebra morphism:

ϕ : k[x_{1},…,x_{n}] → k[x_{1},…,x_{n}]
x_{i} ↦ x_{i} − T_{i}

Which, clearly, is an isomorphism with ϕ(I(X)) = I(Y ). Hence applying LEMMA 1 finishes us off.
LEMMA 2 FIN.
Now for rotations. Actually, you know what? I'm going to include reflection as well. No, no. I can get
even more general: I can make this work for ANY INVERTIBLE LINEAR TRANSFORMATION.
Check this out:
LEMMA 3 (linear transformation):
Let X be an algebraic subset of A^{n}.
Given Q an invertible matrix,
Let Y = QX = {QP : P ∈ X} be the image of X under the linear transformation Q
Then, A(X) ≃ A(Y )
PROOF:
Again, given f ∈ k[x_{1},…,x_{n}], I'd like the corresponding polynomial for the linearly transformed points.
To keep things general, let's just consider an arbitrary invertible matrix A.
What's the corresponding polynomial for the points transformed under A? Like in the translation case, to get the
polynomial for the transformed set, I need to apply the inverse of the transformation on the coordinates. This time,
let me use k[x_{1},…,x_{n}] for the original coordinates and k[x_{1}′,…,x_{n}′] for the new coordinates. And let me write
x = [x_{1},…,x_{n}] for the vector of indeterminates in the original coordinate system, and x′ = [x_{1}′,…,x_{n}′] for the
vector of indeterminates in the new coordinate system. Then the transformation can be expressed as
x′ = Ax. But I want the old coordinates in terms of the new coordinates (since I want to translate f to
the new coordinate system). So I apply A^{−1} to both sides to write x = A^{−1}x′. Hence, I'll define

f_{A} = f(A^{−1}x′)
So, thinking of x as a point,

f(x) = 0
⇐⇒ f(Q^{−1}Qx) = 0
⇐⇒ f_{Q}(Qx) = 0

We can conclude that f(P) = 0 ⇐⇒ f_{Q}(QP) = 0, so the ideal of Y = QX is I(Y ) = {f_{Q} : f ∈ I(X)}
Now to construct the isomorphism for LEMMA 1, I'd like to say that (f_{Q})_{Q^{−1}} = f_{Q^{−1}Q} = f_{I} = f
(the last equality is obvious: f_{I}(x) = f(Ix) = f(x)). So I'm going to one-up a bit and prove that in general,

f_{BA} = (f_{A})_{B} for any invertible matrices A and B

This took me 3 days.
We need to actually look inside the matrix. Let's write A^{−1} as a stack of row vectors w_{1},…,w_{n}, so that
w_{l} = [w_{l,1},…,w_{l,n}] is the l-th row of A^{−1}.
Note that the l-th coordinate of A^{−1}x′ is the dot product w_{l} ⋅ x′.
So the change of coordinates map is
ϕ_{A} : k[x_{1},…,x_{n}] → k[x_{1}′,…,x_{n}′]
x_{l} ↦ w_{l} ⋅ x′

(If you were confused earlier, this is what I meant by writing the old coordinates in terms of the new coordinates).
Note that this defines a k-alg morphism such that ϕ_{A}(f) = f_{A}
Specifically,

ϕ_{A}(f)(x′) = f(…,ϕ_{A}(x_{l}),…)
 = f(…,w_{l} ⋅ x′,…)
 = f(A^{−1}x′)
 = f_{A}(x′)
Now, let me write B^{−1} in two ways: as a stack of row vectors b_{1},…,b_{n} (so b_{i} = [b_{i,1},…,b_{i,n}] is the
i-th row), and as a list of column vectors c_{1},…,c_{n}.
Note that b_{i,j} = c_{j,i} (You'll see why I included the column vectors soon enough). And the map for B (let's use
coordinate rings k[x_{1}′,…,x_{n}′] → k[x_{1}′′,…,x_{n}′′] for B) is:

ϕ_{B} : k[x_{1}′,…,x_{n}′] → k[x_{1}′′,…,x_{n}′′]
x_{l}′ ↦ b_{l} ⋅ x′′
Showing that f_{BA} = (f_{A})_{B} is equivalent to showing that

ϕ_{BA} = ϕ_{B} ∘ ϕ_{A}

and it is sufficient to show that on the generators x_{l}.
Now, recall that

f_{BA}(x′′) = f((BA)^{−1}x′′)
 = f(A^{−1}B^{−1}x′′)

In particular, B^{−1}x′′ = c_{1}x_{1}′′ + ⋯ + c_{n}x_{n}′′, so the l-th coordinate of A^{−1}B^{−1}x′′ is
w_{l} ⋅ c_{1}x_{1}′′ + ⋯ + w_{l} ⋅ c_{n}x_{n}′′. So the map representing BA looks like

ϕ_{BA} : k[x_{1},…,x_{n}] → k[x_{1}′′,…,x_{n}′′]
x_{l} ↦ w_{l} ⋅ c_{1}x_{1}′′ + ⋯ + w_{l} ⋅ c_{n}x_{n}′′ = ∑_{j=1}^{n}(w_{l} ⋅ c_{j})x_{j}′′
My goal is to compute ϕ_{B} ∘ ϕ_{A}(x_{l}) and check that I get the same thing:

ϕ_{B}(ϕ_{A}(x_{l})) = ϕ_{B}(w_{l} ⋅ x′)
 = ϕ_{B}([w_{l,1},…,w_{l,n}] ⋅ [x_{1}′,…,x_{n}′])
 = ϕ_{B}(w_{l,1}x_{1}′ + ⋯ + w_{l,n}x_{n}′)
 = ϕ_{B}(∑_{i=1}^{n}w_{l,i}x_{i}′)
 = ∑_{i=1}^{n}w_{l,i} ⋅ ϕ_{B}(x_{i}′)    (ϕ_{B} a k-alg morphism)
 = ∑_{i=1}^{n}w_{l,i} ⋅ (b_{i} ⋅ x′′)
 = ∑_{i=1}^{n}w_{l,i} ⋅ ∑_{j=1}^{n}b_{i,j}x_{j}′′
 = ∑_{i=1}^{n}∑_{j=1}^{n}w_{l,i}b_{i,j}x_{j}′′
 = ∑_{i=1}^{n}∑_{j=1}^{n}w_{l,i}c_{j,i}x_{j}′′    (b_{i,j} = c_{j,i})
 = ∑_{j=1}^{n}∑_{i=1}^{n}w_{l,i}c_{j,i}x_{j}′′
 = ∑_{j=1}^{n}(w_{l} ⋅ c_{j})x_{j}′′
 = ϕ_{BA}(x_{l})
And thus, we are fucking done: using the isomorphism (now ignoring the primes on the coordinate rings)

ϕ_{Q} : k[x_{1},…,x_{n}] → k[x_{1},…,x_{n}]

(It's an isomorphism because, as I showed, ϕ_{Q^{−1}} is a two-sided inverse for it), clearly ϕ_{Q}(I(X)) = I(Y ) (since for
any f, f_{Q} = ϕ_{Q}(f)). Hence, LEMMA 1 finishes off the proof.
AND THAT'S LEMMA 3 FOLKS
Reader, if you're still edging, Godspeed. Now we finally get to do the actual proof. Indeed: We haven't even started
the actual proof. Remember? Those lemmas were meant to just let me rotate and translate the conics (and yes, they
took me three days to prove). Now I get to translate and rotate them and start actually doing the
exercise. You know what they say: If you're edging, keep edging. Woops, I meant: If you use it, prove it.
Now, given an irreducible quadratic f, I can translate and rotate it so that it fits one of the standard
forms:

Circle: x^{2} + y^{2} = a^{2}

Ellipse: x^{2}∕a^{2} + y^{2}∕b^{2} = 1

Parabola: y = ax^{2}

Hyperbola: x^{2}∕a^{2} − y^{2}∕b^{2} = 1
However, using LEMMA 3, I can simplify things even further. Using the change of basis matrix diag(1∕a, 1∕a), I can
assume a = 1 in the circle equation. Using diag(1∕a, 1∕b), I can assume a,b = 1 in the ellipse equation. (So really, I can
merge the circle and ellipse cases into x^{2} + y^{2} = 1.) Similarly, I can rotate and stretch the parabola to be y = x^{2}.
And finally, I'll rotate and stretch the hyperbola to be xy = 1. Hence, here are my new equations:
Circle: x^{2} + y^{2} = 1  

Parabola: y = x^{2}  

Hyperbola: xy = 1  

  
Any conic can be reduced to one of those (up to isomorphic coordinate ring). Now I'm going to merge the Circle and
Hyperbola cases into one to finish the exercise, and so that you can finish along.
Now, I'd like a k-algebra morphism

ϕ : k[x,y] → k[x,y]

to be an isomorphism mapping xy − 1 to x^{2} + y^{2} − 1. In which case I can apply LEMMA 1 and be done with
this exercise.
To make a k-alg morphism, I need to figure out what x and y should map to. I decided that perhaps the easiest
way to do this is to make it my goal to map xy to x^{2} + y^{2}. I need x to map to a "component" of
x^{2} + y^{2} and y to map to another "component" of x^{2} + y^{2}. In other words, I need to FACTOR
x^{2} + y^{2} into something like S ⋅ T, so I can decide to map x ↦ S and y ↦ T. But is that even possible?
Even more, I'd like ϕ to be surjective, so ideally S and T would both be linear polynomials. So, I'd like to
factor x^{2} + y^{2} = (Ax + By)(Cx + Dy)... But again, is this possible?
Let's give it a shot:
x^{2} + y^{2} = (Ax + By)(Cx + Dy)
x^{2} + y^{2} = ACx^{2} + (AD + BC)xy + BDy^{2}
Hence, we'd need
AC  = 1  

AD + BC  = 0  

BD  = 1   
Note that if we've decided on A and B, that immediately determines C and D (equations 1 and 3: C = 1∕A and
D = 1∕B). Now look at the 2nd equation: it becomes

A∕B + B∕A = 0, i.e. A^{2} + B^{2} = 0

Let me set A = 1, then that yields

B^{2} + 1 = 0
Now, reader, if you're edging right now, then hold on for a second. Actually, you know what? Keep going. Keep
going, my strong sailor. but also PAY ATTENTION: This is a VERY IMPORTANT MOMENT: Direct your sexual
energy to focus on this. Let me ask you a question: Does this equation, B^{2} + 1 = 0, have a solution? Typically, in
your familiar field R of real numbers, this would not have a solution, and our proof would be doomed. But since k
is algebraically closed, this equation is guaranteed to have a solution. E.g. for k = C the complex
numbers, B = i (the unit imaginary number) is a solution. What I'm essentially saying here is that
circles and hyperbolas are different beasts over the real numbers. But once you algebraically close your
field, circles and hyperbolas are THE SAME. At least by algebraic geometry standards. (Well, in this
exercise, I'm showing that their coordinate rings are the same, but I'll give you the goods in 3.1a)
So, basically, I can factor x^{2} + y^{2} = (Ax + By)(Cx + Dy). And since I set A = 1, I know that C = 1, so I
can just write x^{2} + y^{2} = (x + By)(x + Dy). Also note that B ≠ 0, since 0 can't be a solution to
B^{2} + 1 = 0 (we're assuming 0 ≠ 1 in k). So also D = 1∕B ≠ 0. One thing I'll also note is that B ≠ D. Otherwise
AD + BC = 0
AB + BC = 0   (substituting D = B)
1 ⋅ B + B ⋅ 1 = 0
2B = 0
B = 0, a contradiction.
(That last step quietly assumes char k ≠ 2; in characteristic 2, x^{2} + y^{2} = (x + y)^{2} isn't even irreducible, so the circle case never shows up there anyway.)
THUS, I would like to define ϕ as follows:

ϕ : k[x,y] → k[x,y]
x ↦ x + By
y ↦ x + Dy
It's a k-alg morphism that's "clearly" injective (EXERCISE LEFT TO READER... AFTER I LET YOU CUM), so
I just have to verify surjectivity. In other words, I have to verify that x and y are in the image of ϕ.
Note that

ϕ(x) − ϕ(y) = (x + By) − (x + Dy) = (B − D)y, so y = (1∕(B − D))(ϕ(x) − ϕ(y))

(that logic works since I showed earlier that B ≠ D)
Also note that

(D∕B)ϕ(x) − ϕ(y) = (D∕B)(x + By) − (x + Dy) = (D∕B − 1)x

And note that if
D∕B − 1 = 0
D∕B = 1
D = B, a contradiction. So D∕B − 1 ≠ 0, and we can thus write

x = (1∕(D∕B − 1))((D∕B)ϕ(x) − ϕ(y))
HENCE: ϕ is surjective and we're done!
YAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAY. So hyperbolas
and circles (and ellipses) have the same coordinate rings (over k alg closed). In exercise 3.1a, I'll tell you that this
does in fact mean that a hyperbola and circle are isomorphic (over k alg closed): They're "the same", by alg geo
standards. SEEYA IN 3.1a. Also, you can cum now.