derivation of Sylvester’s matrix for the resultant
Since square matrices enjoy properties which matrices of arbitrary size do not, we will add more equations so as to come up with a new matrix equation involving a square matrix. There is no harm in adding an equation of the form $x^k p(x) = 0$ or $x^k q(x) = 0$ to the system $p(x) = 0$, $q(x) = 0$, because the enlarged system will have exactly the same solutions as the original system of two equations. Consider the system

$$\begin{aligned}
p(x) &= 0 \\
x\, p(x) &= 0 \\
x^2 p(x) &= 0 \\
q(x) &= 0 \\
x\, q(x) &= 0.
\end{aligned}$$
This system may be written as a matrix equation
$$\begin{pmatrix}
0 & 0 & a_0 & a_1 & a_2 \\
0 & a_0 & a_1 & a_2 & 0 \\
a_0 & a_1 & a_2 & 0 & 0 \\
0 & b_0 & b_1 & b_2 & b_3 \\
b_0 & b_1 & b_2 & b_3 & 0
\end{pmatrix}
\begin{pmatrix} x^4 \\ x^3 \\ x^2 \\ x \\ 1 \end{pmatrix}
=
\begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 0 \end{pmatrix}$$
Now we have a square matrix. One important property of matrix equations involving square matrices is that they have a non-trivial solution only when the determinant of the matrix vanishes. The system $p(x) = 0$, $q(x) = 0$ only has a solution when $p$ and $q$ have a common root, and any common root $x_0$ yields the non-zero solution vector $(x_0^4, x_0^3, x_0^2, x_0, 1)$ of the enlarged system. Hence the determinant will vanish whenever $p$ and $q$ have a common root.
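This direction of the argument can be spot-checked numerically. The sketch below (not part of the original entry; the helper `sylvester` and the sample polynomials are assumptions for illustration) builds the matrix with the same row layout as the display above and evaluates its determinant for a pair of polynomials sharing the root $x = 1$:

```python
import numpy as np

def sylvester(a, b):
    """Build the (m+n) x (m+n) Sylvester-style matrix of the derivation,
    where a, b are the coefficient lists of p (degree m) and q (degree n),
    highest power first, using the row layout displayed in the entry."""
    m, n = len(a) - 1, len(b) - 1
    S = np.zeros((m + n, m + n))
    for k in range(n):  # rows for x^k * p(x) = 0, k = 0 .. n-1
        S[k, n - 1 - k : n - 1 - k + m + 1] = a
    for k in range(m):  # rows for x^k * q(x) = 0, k = 0 .. m-1
        S[n + k, m - 1 - k : m - 1 - k + n + 1] = b
    return S

# p = (x - 1)(x - 2) and q = (x - 1)(x - 3)(x - 4) share the root x = 1,
# so the determinant should vanish (up to floating-point error).
p = [1, -3, 2]
q = [1, -8, 19, -12]
print(np.linalg.det(sylvester(p, q)))  # approximately 0
```

Replacing `q` by a polynomial with no root in common with `p` gives a non-zero determinant, as the argument predicts.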
Note that, at this stage, we cannot jump to the converse conclusion that $p$ and $q$ always have a common root when the determinant vanishes. All we can say is that, if the determinant vanishes, there will be some non-zero vector in the kernel of the matrix, but we cannot say that this vector will be of the special form $(x^4, x^3, x^2, x, 1)$ that appears in the system. To assert the converse conclusion, we need to first prove that the determinant indeed equals the resultant. For this proof, please see the entry proof that Sylvester's determinant equals the resultant.
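The cited entry proves the identity in general; as a small symbolic spot-check (an illustration added here, not part of the original entry; the concrete polynomials are assumptions), one can compare the determinant of the matrix displayed above with SymPy's built-in resultant for one pair of polynomials with no common root:

```python
from sympy import Matrix, resultant, symbols, expand

x = symbols('x')

# p has roots 1 and 2; q has roots 5, 6 and 7, so there is no common root
# and the resultant is non-zero.
p = x**2 - 3*x + 2
q = expand((x - 5)*(x - 6)*(x - 7))  # x**3 - 18*x**2 + 107*x - 210

a0, a1, a2 = 1, -3, 2
b0, b1, b2, b3 = 1, -18, 107, -210

# The matrix from the derivation, with rows listed exactly as displayed.
S = Matrix([
    [0,  0,  a0, a1, a2],
    [0,  a0, a1, a2, 0 ],
    [a0, a1, a2, 0,  0 ],
    [0,  b0, b1, b2, b3],
    [b0, b1, b2, b3, 0 ],
])

print(S.det())             # determinant of the matrix from the derivation
print(resultant(p, q, x))  # SymPy's resultant, for comparison
```

The two printed values agree, consistent with the determinant-equals-resultant identity (the row ordering used here differs from the textbook Sylvester matrix by an even permutation, so no sign change occurs).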