Notation and definitions used below:

- F̄ denotes the algebraic closure of the field F; (f, g) denotes the greatest common divisor of f and g; N is the set of natural numbers; Q is the set of rational numbers; Z_n is the set of residue classes mod n.
- We say that α is algebraic over F if there exists a nonzero polynomial f ∈ F[X] such that f(α) = 0. If α is not algebraic over F, then we say that α is transcendental over F.
- If α ∈ E is algebraic over F, then α is the zero of a monic irreducible polynomial p ∈ F[X]. The polynomial p is called the irreducible polynomial of α over F, denoted Irr_F(α, X).
- If f has leading coefficient 1, then we say that f is monic.
- A field is a nontrivial commutative ring with identity in which every nonzero element has an inverse with respect to multiplication.
- A field F is said to be algebraically closed if every nonconstant polynomial in F[X] splits into a product of linear factors in F[X].

A Restrictive Assumption

Let F be a field, F̄ its algebraic closure, and f ∈ F[X] a monic, nonconstant polynomial. In F̄[X], we may factor f into linear factors:

f = ∏_{i=1}^{n} (X − α_i),   α_i ∈ F̄.

Suppose that m of the α_i are distinct. Let us renumber α_1, …, α_n so that α_1, …, α_m are all different. Then

f = ∏_{i=1}^{m} (X − α_i)^{v_i},   v_i ≥ 1.

The positive integer v_i is called the multiplicity of the zero α_i. If v_i = 1, then α_i is called a simple zero of f; if v_i > 1, then α_i is called a multiple zero of f.
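As a quick illustration of multiplicities in characteristic 0, we can factor a polynomial and read off the v_i symbolically. This is a sketch using sympy (my choice of tool, not part of the text): for f = (X − 1)²(X + 2), the zero 1 has multiplicity 2 and the zero −2 is simple.

```python
import sympy as sp

X = sp.symbols('X')

# f = (X - 1)^2 (X + 2): one multiple zero (1) and one simple zero (-2)
f = sp.expand((X - 1)**2 * (X + 2))

# sp.roots returns each zero together with its multiplicity v_i
multiplicities = sp.roots(f, X)
assert multiplicities[1] == 2 and multiplicities[-2] == 1
```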

An irreducible polynomial f ∈ F[X] can have multiple zeros. For example, let F = Z_2(Y), where Y is transcendental over Z_2, and let f = X² − Y ∈ F[X]. Then f is irreducible in F[X]. For otherwise, f would factor into linear factors in F[X], and there would exist Z ∈ F such that Z² = Y. But Z is a quotient of two polynomials in Y, so we would get an algebraic equation for Y over Z_2, contradicting the fact that Y is transcendental over Z_2. Thus, f is irreducible in F[X]. In F̄[X],

f = (X − √Y)(X + √Y),

where √Y denotes a fixed square root of Y in F̄. But 1 + 1 = 0 in F, so that

(1 + 1)√Y = 0  ⟹  √Y = −√Y.

Thus,

f = (X − √Y)²

and √Y is a multiple zero of f.

In order to avoid this and other pathologies, let us restrict our fields somewhat. Henceforth, unless explicit mention to the contrary is made, we will assume that all fields are extensions of Q. For example, this rules out Z_2(Y), since if F is an extension of Q, then 1 + 1 ≠ 0 in F. It is possible to develop field theory somewhat more generally than we will attempt to do here; we will develop a somewhat specialized Galois theory, for extensions of Q. In the remainder of this section, let us show what this restriction does for us. First we will prove that the phenomenon exhibited above cannot occur.

Theorem 1: Let f ∈ F[X] be irreducible and of degree > 1. Then all zeros of f in F̄ are simple.

Before proving Theorem 1, let us introduce a few tools. Let n ∈ N and a ∈ F. In the chapter on rings we defined the scalar product n·a as follows:

n·a = 0 if n = 0;   n·a = a + a + ⋯ + a (n summands) if n > 0.

If h = a_0 + a_1X + ⋯ + a_rX^r ∈ F[X], let us define the formal derivative Dh of h by

Dh = 1·a_1 + 2·a_2X + ⋯ + r·a_rX^{r−1}.

Then Dh ∈ F[X]. It is trivial to verify the following properties of the formal derivative:

(1) D(ag + bh) = aDg + bDh   (a, b ∈ F; g, h ∈ F[X]),
(2) D(gh) = hDg + gDh   (g, h ∈ F[X]),
(3) D(X − a)^v = v(X − a)^{v−1}   (a ∈ F, v ≥ 1),
(4) if deg(f) = n, then deg(Df) = n − 1   (n ≥ 1, f ∈ F[X]).
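In characteristic 0 the formal derivative D agrees with the usual polynomial derivative, so properties (1)–(4) can be spot-checked symbolically. A sketch using sympy (an assumed tool, not the text's), with sp.diff standing in for D:

```python
import sympy as sp

X, a, b = sp.symbols('X a b')

g = 3*X**2 + 2*X + 1
h = X**3 - 5*X

# (1) linearity: D(ag + bh) = aDg + bDh
assert sp.expand(sp.diff(a*g + b*h, X) - (a*sp.diff(g, X) + b*sp.diff(h, X))) == 0

# (2) product rule: D(gh) = hDg + gDh
assert sp.expand(sp.diff(g*h, X) - (h*sp.diff(g, X) + g*sp.diff(h, X))) == 0

# (3) power rule: D(X - a)^v = v(X - a)^(v-1), checked here for v = 4
v = 4
assert sp.expand(sp.diff((X - a)**v, X) - v*(X - a)**(v - 1)) == 0

# (4) deg(Df) = deg(f) - 1 in characteristic 0
f = X**5 - 3*X + 7
assert sp.degree(sp.diff(f, X), X) == sp.degree(f, X) - 1
```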

Lemma 2: Suppose that f has a multiple zero. Then f and Df have a nonconstant common factor in F[X].

Proof: If α is a multiple zero of f, then f = (X − α)^v g, g ∈ F̄[X], v > 1. By (1)–(3), we have

Df = v(X − α)^{v−1}g + (X − α)^v Dg.

But since v > 1, α is a zero of Df. If f and Df had no nonconstant common factor in F[X], then there would exist γ, β ∈ F[X] such that 1 = γf + βDf. Replacing X by α in this last equation would give 1 = 0, a contradiction. Thus, (f, Df) ≠ 1.
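Lemma 2 gives a practical test: a polynomial has a multiple zero exactly when (f, Df) is nonconstant. A sketch of the computation with sympy (the tool choice is mine, not the text's):

```python
import sympy as sp

X = sp.symbols('X')

# f = (X - 1)^2 (X + 2) has the multiple zero 1
f = sp.expand((X - 1)**2 * (X + 2))

# By Lemma 2, f and Df share a nonconstant factor, here X - 1
d = sp.gcd(f, sp.diff(f, X))
assert sp.degree(d, X) >= 1
```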

Proof of Theorem 1: Without loss of generality, assume that f is monic, and assume that f has a multiple zero. Then deg(f) ≥ 2, and if deg(f) = n, then deg(Df) = n − 1 ≥ 1. Moreover, by Lemma 2, (f, Df) ≠ 1, so that f and Df have a nonconstant factor in common. Since f is irreducible, this implies that Df is divisible by f. But this is impossible, since 1 ≤ deg(Df) < deg(f).
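To see Theorem 1 in a concrete case (again a sympy sketch, an assumed tool): X³ − 2 is irreducible over Q, and indeed (f, Df) = 1, so all of its zeros in F̄ are simple.

```python
import sympy as sp

X = sp.symbols('X')

# X^3 - 2 is irreducible over Q
f = X**3 - 2
assert sp.Poly(f, X, domain='QQ').is_irreducible

# Theorem 1: (f, Df) = 1, so every zero of f is simple
assert sp.gcd(f, sp.diff(f, X)) == 1
```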

The next result is another extremely useful consequence of our restrictive assumption.

Theorem 3 (Primitive Element Theorem): Let E = F(α, β) be an algebraic extension of F. Then there exists γ ∈ E such that E = F(γ). Thus, E is a simple extension of F.

Proof: Let F̄ be the algebraic closure of F and let f = Irr_F(α, X), g = Irr_F(β, X). Then in F̄[X], we have

f = (X − α_1)⋯(X − α_n),   α_i ∈ F̄, α_1 = α,
g = (X − β_1)⋯(X − β_m),   β_j ∈ F̄, β_1 = β.

Consider the following set of elements of F̄:

(5)   (α_i − α_1)/(β_1 − β_j)   (1 ≤ i ≤ n, 2 ≤ j ≤ m).

This set is finite and, since F ⊇ Q is infinite, we can choose t ∈ F distinct from all the elements of (5). We will prove that E = F(γ) with γ = α + tβ. Since γ ∈ E, it is clear that E ⊇ F(γ). Let us prove the reverse inclusion. Consider the polynomial

h(X) = f(γ − tX) ∈ F(γ)[X].

Then h(β) = f(γ − tβ) = f(α) = 0. Therefore, in F̄[X], h(X) is divisible by X − β_1. If h(X) were divisible by X − β_j for some j > 1, then h(β_j) = 0, which implies that f(γ − tβ_j) = 0. Therefore, γ − tβ_j would be a zero of f, and thus γ − tβ_j = α_i for some i. But since γ = α_1 + tβ_1, this implies that

t = (α_i − α_1)/(β_1 − β_j),

which contradicts the choice of t. Thus h(X) is divisible by X − β_1, but not by X − β_j for j > 1. Therefore, in F̄[X], the g.c.d. of h(X) and g(X) is X − β_1, and consequently, in F(γ)[X], the g.c.d. of h(X) and g(X) is either 1 or X − β_1. If it were 1, then there would exist a(X), b(X) ∈ F(γ)[X] such that a(X)h(X) + b(X)g(X) = 1. Setting X = β_1, we would get 0 = 1, a contradiction. Therefore, the g.c.d. of h(X) and g(X) in F(γ)[X] is X − β_1. In particular, β_1 ∈ F(γ). But then γ − tβ_1 = α ∈ F(γ), so that F(α, β) ⊆ F(γ). This completes the proof of the theorem.
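A concrete instance of Theorem 3, sketched with sympy (an assumed tool): for E = Q(√2, √3), the choice t = 1 avoids the finitely many forbidden quotients in (5), and γ = √2 + √3 generates E. Its irreducible polynomial over Q has degree 4 = [E : Q], and √2 can be written as a polynomial in γ, so both generators lie in Q(γ).

```python
import sympy as sp

x = sp.symbols('x')

# gamma = alpha + t*beta with alpha = sqrt(2), beta = sqrt(3), t = 1
gamma = sp.sqrt(2) + sp.sqrt(3)

# Irr_Q(gamma, X) has degree 4, matching [Q(sqrt(2), sqrt(3)) : Q] = 4
p = sp.minimal_polynomial(gamma, x)
assert p == x**4 - 10*x**2 + 1

# sqrt(2) lies in Q(gamma): sqrt(2) = (gamma^3 - 9*gamma)/2
assert sp.simplify((gamma**3 - 9*gamma)/2 - sp.sqrt(2)) == 0
```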

Corollary 4: Let F(α_1, α_2, …, α_n) be an algebraic extension of F. Then there exists γ ∈ F(α_1, …, α_n) such that F(α_1, …, α_n) = F(γ).

Proof: Induction on n, applying Theorem 3 at each step: if F(α_1, …, α_{n−1}) = F(δ), then F(α_1, …, α_n) = F(δ, α_n) = F(γ) for some γ, by Theorem 3.