Throughout, Z denotes the set of integers; the set of real numbers, with its usual addition and multiplication, is our standard example of a field. Recall that a commutative ring R is an integral domain if R contains no zero divisors. In other words, R is an integral domain if the product of any two nonzero elements of R is nonzero.

Polynomial Rings

As every student of high school algebra knows,

X + X^2,   5 + X^3,   17 + XY + Z^2W,   3 + 2X

are all examples of polynomials. In these examples, the coefficients of the polynomials all belong to the field of real numbers. In this section we will generalize the notion of a polynomial to allow coefficients in an arbitrary ring. In doing so, we will show that the set of all polynomials in X having coefficients in the ring R is itself a ring, with respect to suitably defined addition and multiplication of polynomials. These rings of polynomials provide us with yet another interesting class of rings. Moreover, rings of polynomials are important in their own right, and they will also provide us with a technical tool by means of which we can study the properties of arbitrary rings.

Let R be a ring and let X be a symbol. (X is an indeterminate.) Then a polynomial in X with coefficients in R is a formal sum

a_0 + a_1X + a_2X^2 + ...

where a_i ∈ R for all i, and a_i = 0 for i sufficiently large. The set of all polynomials in X with coefficients in R will be denoted by R[X]. Note that every element a of R is contained in R[X], identified with the constant polynomial a + 0X + 0X^2 + .... The elements of R are called constant polynomials.

A word of caution. The element X is not an unknown or variable element of R. It is not an element of R in any sense and the reader should avoid thinking of it as an element of R. It is a definite fixed element of R[X].

Let us adopt a notational convention for writing polynomials. In specifying a polynomial, we will omit all terms which appear with a zero coefficient. Thus, instead of

1 + 2X + 0X^2 + 0X^3 + ... ∈ Z[X],

we will write

1 + 2X.

By this convention, the typical polynomial for which a_i = 0 for i > n will be denoted

a_0 + a_1X + ... + a_nX^n.

The one exception to this convention will be the polynomial

0 + 0X + 0X^2 + ...

which will be denoted simply by 0. The polynomial 0 will be called the zero polynomial.
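
These conventions are easy to model on a computer, and some readers may find it helpful to experiment with the definitions that follow. The sketch below is in Python, with integer coefficients standing in for an arbitrary ring R; the representation (a list of coefficients) and the helper name normalize are our own choices, not anything fixed by the text.

def normalize(coeffs):
    """Strip trailing zero coefficients; the empty list [] plays the role of
    the zero polynomial 0."""
    coeffs = list(coeffs)
    while coeffs and coeffs[-1] == 0:
        coeffs.pop()
    return coeffs

# 1 + 2X + 0X^2 + 0X^3 + ... is stored simply as [1, 2]
print(normalize([1, 2, 0, 0]))   # [1, 2]
print(normalize([0, 0, 0]))      # []  (the zero polynomial)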

Let

(1)   f = a_0 + a_1X + a_2X^2 + ...,
(2)   g = b_0 + b_1X + b_2X^2 + ...

be two polynomials. We say that f is equal to g, denoted f = g, if

a_0 = b_0, a_1 = b_1, a_2 = b_2, ...

Thus, two polynomials are equal precisely when their corresponding coefficients are equal.

Addition of Polynomials

Let f and g be given by (1) and (2), respectively. Then the sum f + g is defined by

(3) f + g = (a_0 + b_0) + (a_1 + b_1)X + (a_2 + b_2)X^2 + ...

Thus, to add two polynomials, we merely add corresponding coefficients. Note that (3) actually defines a polynomial - that is, a_i + b_i = 0 for all sufficiently large i. Suppose that a_i = 0 for i > N and that b_i = 0 for i > M. Then a_i + b_i = 0 for all i > max(M, N), where max(M, N) denotes the larger of M and N. It is trivial to see that with respect to the operation +, R[X] becomes an abelian group. The identity element is 0 and the inverse of f under addition is

-f = (-a_0) + (-a_1)X + (-a_2)X^2 + ...
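
Continuing the Python sketch from above (coefficient lists over the integers; the function names are ours), formula (3) and the additive inverse translate directly into code:

def normalize(coeffs):
    """Strip trailing zeros, as in the earlier sketch."""
    coeffs = list(coeffs)
    while coeffs and coeffs[-1] == 0:
        coeffs.pop()
    return coeffs

def poly_add(f, g):
    """Coefficient-wise sum, exactly formula (3)."""
    n = max(len(f), len(g))
    return normalize([(f[i] if i < len(f) else 0) + (g[i] if i < len(g) else 0)
                      for i in range(n)])

def poly_neg(f):
    """The additive inverse -f: negate every coefficient."""
    return [-a for a in f]

# (1 + 2X) + (3 - 2X + X^2) = 4 + X^2
print(poly_add([1, 2], [3, -2, 1]))        # [4, 0, 1]
print(poly_add([1, 2], poly_neg([1, 2])))  # []  (the zero polynomial)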

Multiplication of Polynomials

Let f and g be defined by (1) and (2), respectively. We define the product f · g by

f · g = c_0 + c_1X + c_2X^2 + ...,

where

c_0 = a_0 · b_0

c_1 = a_0 · b_1 + a_1 · b_0

c_2 = a_0 · b_2 + a_1 · b_1 + a_2 · b_0

...

c_n = a_0 · b_n + a_1 · b_(n-1) + ... + a_(n-1) · b_1 + a_n · b_0

Let us see what this definition really says. Let us make the agreement that the polynomial a_0X^0 is just the polynomial a_0 + 0X + 0X^2 + .... Then the definition of multiplication essentially amounts to the following: To form f · g, take the product of each term of f with each term of g using the rule a_iX^i · b_jX^j = (a_i · b_j)X^(i+j) (i, j ≥ 0), and collect all terms containing the same power of X. Thus, our definition of multiplication coincides with the familiar way of multiplying polynomials with real coefficients.
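
In the coefficient-list sketch, the rule c_n = a_0 · b_n + ... + a_n · b_0 becomes a double loop (again Python over the integers, with helper names of our own choosing):

def normalize(coeffs):
    """Strip trailing zeros, as before."""
    coeffs = list(coeffs)
    while coeffs and coeffs[-1] == 0:
        coeffs.pop()
    return coeffs

def poly_mul(f, g):
    """Product of coefficient lists: the X^(i+j) term collects every a_i * b_j."""
    if not f or not g:
        return []                      # a factor is the zero polynomial
    c = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            c[i + j] += a * b
    return normalize(c)

# (1 + X) * (1 - X) = 1 - X^2
print(poly_mul([1, 1], [1, -1]))   # [1, 0, -1]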

Note that f · g is a polynomial in X with coefficients in R - that is, c_i = 0 for i sufficiently large. Indeed, let M and N be so large that a_i = 0 for all i > M and b_i = 0 for i > N. Assume that i > M + N. Then

c_i = a_0 · b_i + a_1 · b_(i-1) + ... + a_i · b_0.

The typical term in this sum is a_j · b_(i-j) (0 ≤ j ≤ i). Since i > M + N, we must have either j > M or i - j > N. [For if both j ≤ M and i - j ≤ N held, then i = j + (i - j) ≤ M + N.] Therefore, for every i > M + N and every j with 0 ≤ j ≤ i, either a_j = 0 or b_(i-j) = 0, so that every term in the expression for c_i is zero. Therefore, c_i = 0 for i > M + N, and f · g is a polynomial.

It is not obvious that multiplication of polynomials is associative or that the distributive laws are satisfied. By way of illustration, let us verify the right distributive law. Let f and g be given by (1) and (2), respectively, and let

(4)   h = c_0 + c_1X + ... ∈ R[X],   c_i ∈ R.

Then

(f + g) · h = d_0 + d_1X + d_2X^2 + ...,

where

(5)   d_i = (a_0 + b_0)c_i + (a_1 + b_1)c_(i-1) + ... + (a_i + b_i)c_0   (i = 0, 1, ...)

Moreover,

f · h = e_0 + e_1X + e_2X^2 + ...
g · h = f_0 + f_1X + f_2X^2 + ...

where

(6)   e_i = a_0 · c_i + a_1 · c_(i-1) + ... + a_i · c_0
(7)   f_i = b_0 · c_i + b_1 · c_(i-1) + ... + b_i · c_0

Finally,

f · h + g · h = (e_0 + f_0) + (e_1 + f_1)X + ....

However, by (5), (6), (7), and the distributive law in R, we see that e_i + f_i = d_i, so that

(f + g) · h = f · h + g · h.

This proves the right distributive law in R[X]. The left distributive law and the associativity of multiplication can be verified similarly; we leave these straightforward (if tedious) computations to the interested reader. Thus we have

Theorem 1: R[X] is a ring.
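
The verifications omitted above can at least be spot-checked numerically. The following self-contained Python sketch (integer coefficients; all names are ours) tests the right distributive law on randomly chosen polynomials over Z. A passing test is evidence, not a proof, of the identity.

from random import randint

def normalize(c):
    c = list(c)
    while c and c[-1] == 0:
        c.pop()
    return c

def poly_add(f, g):
    n = max(len(f), len(g))
    return normalize([(f[i] if i < len(f) else 0) + (g[i] if i < len(g) else 0)
                      for i in range(n)])

def poly_mul(f, g):
    c = [0] * (len(f) + len(g) - 1) if f and g else []
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            c[i + j] += a * b
    return normalize(c)

# check (f + g) * h == f * h + g * h on many random triples
for _ in range(1000):
    f, g, h = ([randint(-5, 5) for _ in range(randint(0, 6))] for _ in range(3))
    assert poly_mul(poly_add(f, g), h) == poly_add(poly_mul(f, h), poly_mul(g, h))
print("right distributive law held in every random trial")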

The ring R[X] is called the ring of polynomials in X over R. The next result is typical in the theory of polynomial rings. It asserts that certain properties of the ring R carry over to the ring R[X].

Theorem 2: Let R be a ring, and X an indeterminate over R. Then

(1) If R is a ring with identity 1, then R[X] is a ring with identity

1 = 1 + 0X + 0X^2 + ....

(2) If R is commutative, then R[X] is commutative.

(3) If R is an integral domain, then R[X] is an integral domain.

Proof: (1) Let f ∈ R[X] be given by

f = a_0 + a_1X + a_2X^2 + ...,

then

f · 1 = c_0 + c_1X + c_2X^2 + ...,

where

c_0 = a_0 · 1 = a_0

c_1 = a_1 · 1 + a_0 · 0 = a_1

c_2 = a_2 · 1 + a_1 · 0 + a_0 · 0 = a_2

...

c_n = a_n · 1 + a_(n-1) · 0 + ... + a_0 · 0 = a_n.

Therefore, f · 1 = f. Similarly, 1 · f = f. Therefore, R[X] is a ring with the identity 1.

(2) Let f and g be given by

f = a_0 + a_1X + a_2X^2 + ...,
g = b_0 + b_1X + b_2X^2 + ...

and let

f · g = c_0 + c_1X + c_2X^2 + ...,
g · f = d_0 + d_1X + d_2X^2 + ....

But if R is commutative, a_i · b_j = b_j · a_i for all i, j, and therefore c_n = d_n for all n. Thus, f · g = g · f. Since f and g are arbitrary elements of R[X], we see that R[X] is commutative.

(3) Since an integral domain is commutative, part (2) shows that R[X] is commutative, so by the definition of an integral domain it suffices to show that if R is an integral domain, then the product of two nonzero polynomials with coefficients in R is nonzero. Let

f = a_0 + a_1X + ... + a_mX^m,   a_m ≠ 0,
g = b_0 + b_1X + ... + b_nX^n,   b_n ≠ 0.

We may assume that f and g have this form since both are assumed to be nonzero. Then the coefficient of X^(m+n) in f · g is a_m · b_n. However, since a_m ≠ 0, b_n ≠ 0, and R is an integral domain, we have a_m · b_n ≠ 0. Thus, f · g has a nonzero coefficient and hence is nonzero.
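
As an aside, the hypothesis that R be an integral domain cannot be dropped. Assuming familiarity with the ring of integers mod 4 (a ring with the zero divisor 2), the following Python sketch (our own ad hoc function) exhibits a nonzero polynomial whose product with itself is the zero polynomial:

def poly_mul_mod4(f, g):
    """Multiply coefficient lists with all arithmetic done mod 4."""
    c = [0] * (len(f) + len(g) - 1) if f and g else []
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            c[i + j] = (c[i + j] + a * b) % 4
    while c and c[-1] == 0:
        c.pop()
    return c

# In (Z/4Z)[X]:  (2X) * (2X) = 4X^2 = 0, although 2X is not the zero polynomial.
print(poly_mul_mod4([0, 2], [0, 2]))   # []  (the zero polynomial)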

Corollary 3: Let F be a field. Then F[X] is a commutative integral domain.

Proof: F is a commutative integral domain with identity.

Note, however, that F[X] is not a field since, for example, X has no multiplicative inverse.

Let R be a ring, X an indeterminate over R, and f a polynomial in R[X]. If f is not the zero polynomial, then

f = a_0 + a_1X + ... + a_nX^n,   a_n ≠ 0.

In this case, we say that f has degree n, and we write deg(f) = n. Defining the degree of the zero polynomial, 0, presents something of a problem. Strictly as a matter of convenience, let us introduce a symbol -∞, called minus infinity. We will perform arithmetic with -∞ according to the following rules:

-∞ + n = -∞,    -∞ + (-∞) = -∞,   -∞ < n

for every integer n. Then let us set the degree of the zero polynomial equal to -∞. Thus the zero polynomial has smaller degree than any nonzero polynomial. The reason for this strange choice of degree for 0 is explained by the following result:

Proposition 4: Let R be an integral domain, X an indeterminate over R, and f, g ∈ R[X]. Then deg(f · g) = deg(f) + deg(g).

Proof: If either f or g is zero, then f · g = 0 and both sides of the above equation are -∞. Therefore, we may assume that f and g are both nonzero. Assume that deg(f) = m, deg(g) = n. Then f and g have the form

f = a_0 + a_1X + ... + a_mX^m,   a_m ≠ 0,
g = b_0 + b_1X + ... + b_nX^n,   b_n ≠ 0.

Then f · g is of the form

f · g = c_0 + c_1X + ... + c_(m+n)X^(m+n),

where, in particular, c_(m+n) = a_m · b_n. But since R is an integral domain and a_m and b_n are nonzero, we see that c_(m+n) ≠ 0. Therefore, deg(f · g) = m + n = deg(f) + deg(g).

Since we made the agreement that -∞ < n for all integers n, we may state the following analogue of Proposition 4 for sums.

Proposition 5: Let R be a ring, X an indeterminate over R, and f, g ∈ R[X]. Then deg(f + g) ≤ max(deg(f), deg(g)), where max(deg(f), deg(g)) denotes the larger of deg(f) and deg(g). If deg(f) ≠ deg(g), then the inequality may be replaced by equality.
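
Python's float("-inf") happens to satisfy exactly the arithmetic rules we adopted for -∞ (adding any number to it, or adding it to itself, gives -∞ again, and it compares below every integer), so the degree convention and Proposition 4 can be illustrated directly. A sketch with integer coefficients follows; the function names are ours.

def normalize(c):
    c = list(c)
    while c and c[-1] == 0:
        c.pop()
    return c

def degree(f):
    """Degree of a normalized coefficient list; the zero polynomial [] gets -infinity."""
    return len(f) - 1 if f else float("-inf")

def poly_mul(f, g):
    c = [0] * (len(f) + len(g) - 1) if f and g else []
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            c[i + j] += a * b
    return normalize(c)

f = [0, 1, 3]   # X + 3X^2, degree 2
g = [1, 1]      # 1 + X,    degree 1
z = []          # the zero polynomial, degree -infinity
print(degree(poly_mul(f, g)) == degree(f) + degree(g))  # True:  3 == 2 + 1
print(degree(poly_mul(f, z)) == degree(f) + degree(z))  # True:  -inf == 2 + (-inf)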

We have thus far considered the addition, subtraction, and multiplication of polynomials. Let us now begin to consider the problem of dividing one polynomial by another. Let us recall what happens in the case of polynomials with real coefficients. Let f and g be polynomials with real coefficients, g ≠ 0. In high school algebra we considered the problem of "dividing" f by g. We learned a procedure called "long division," whereby f/g could be written as a quotient q plus a remainder of the form r/g, where q and r are polynomials. For example, if f = X^5 + 2X^3 + X + 1 and g = 2X^3 + 2, we have q = (1/2)X^2 + 1, r = -X^2 + X - 1. The reader is urged to carry out this computation in order to recall the procedure. By inspection of the long division process, we can see that deg(r) < deg(g). Thus we can express the process of long division as follows: There exist polynomials q and r such that f = qg + r and deg(r) < deg(g). It makes sense to ask whether such a process of long division can be carried out in a polynomial ring. Note that in the above example the coefficients of f and g lie in Z, but nevertheless the coefficients of q and r involve rational numbers. This suggests that it would be wise to confine our investigations to polynomial rings over a field. With this restriction, the process of long division can be carried over to polynomial rings. The resulting process in F[X] is called the division algorithm in F[X].

Theorem 6: Let F be a field, X an indeterminate over F, and let f, g ∈ F[X], g ≠ 0. Then there exist polynomials q and r belonging to F[X] such that

(a) f = qg + r,

and

(b) deg(r) < deg(g).

Proof: If f = 0, then we may set q = r = 0. Since g ≠ 0, we have deg(g) ≥ 0, so that deg(r) = -∞ < deg(g). Thus we may assume that f ≠ 0, so that deg(f) ≥ 0. Let us proceed by induction on the degree of f. We leave it to the interested reader to verify the theorem if deg(f) = 0, in which case f is a nonzero constant. Let us assume the theorem for polynomials of degree ≤ n, and let us assume that f has degree n + 1. Suppose that

f = a_(n+1)X^(n+1) + a_nX^n + ... + a_0,   a_i ∈ F,   a_(n+1) ≠ 0,
g = b_mX^m + ... + b_0,   b_i ∈ F,   b_m ≠ 0.

If m > n + 1, then we may set q = 0, r = f, since then deg(r) = n + 1 < m = deg(g). Therefore, we may assume that n + 1 ≥ m.

Set

f' = f - (a_(n+1) · b_m^(-1))X^(n+1-m) · g.

Then a quick computation shows that deg(f') is at most n: the coefficient of X^(n+1) in (a_(n+1) · b_m^(-1))X^(n+1-m) · g is exactly a_(n+1), so the leading terms cancel. Therefore, by induction there exist polynomials q' and r such that f' = q'g + r, deg(r) < deg(g). But then, if we set

q = (a_(n+1) · b_m^(-1))X^(n+1-m) + q'

we see that f = qg + r, deg(r) < deg(g). Thus the induction is complete.

Remark: The polynomials q and r satisfying both (a) and (b) above are unique.
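
The proof is constructive, and it is essentially the long-division procedure recalled earlier. A minimal sketch of the division algorithm over the field of rational numbers, using Python's Fraction type (the function names are ours), reproduces the worked example f = X^5 + 2X^3 + X + 1, g = 2X^3 + 2 from above.

from fractions import Fraction

def normalize(c):
    c = list(c)
    while c and c[-1] == 0:
        c.pop()
    return c

def poly_divmod(f, g):
    """Return (q, r) with f = q*g + r and deg(r) < deg(g).
    Coefficients must lie in a field; here we work over the rationals."""
    if not normalize(g):
        raise ZeroDivisionError("division by the zero polynomial")
    f = [Fraction(a) for a in f]
    g = normalize(Fraction(b) for b in g)
    q = [Fraction(0)] * max(len(f) - len(g) + 1, 0)
    r = normalize(f)
    while len(r) >= len(g):            # while deg(r) >= deg(g)
        k = len(r) - len(g)            # the next term of q is c * X^k
        c = r[-1] / g[-1]
        q[k] = c
        # the step f' = f - (a_(n+1) b_m^(-1)) X^(n+1-m) g from the proof
        r = normalize([r[i] - c * g[i - k] if i >= k else r[i] for i in range(len(r))])
    return normalize(q), r

f = [1, 1, 0, 2, 0, 1]   # X^5 + 2X^3 + X + 1
g = [2, 0, 0, 2]         # 2X^3 + 2
q, r = poly_divmod(f, g)
print(q)   # [Fraction(1, 1), Fraction(0, 1), Fraction(1, 2)]    i.e.  1 + (1/2)X^2
print(r)   # [Fraction(-1, 1), Fraction(1, 1), Fraction(-1, 1)]  i.e.  -1 + X - X^2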

Thus far, we have considered polynomial rings in only one indeterminate X. However, if R[X] is a polynomial ring and Y is an indeterminate over R[X], then we may form the polynomial ring R[X][Y]. A typical element of R[X][Y] is of the form

a_0(X) + a_1(X)Y + ... + a_n(X)Y^n,   a_i(X) ∈ R[X],

or

b_00 + b_10X + b_01Y + b_20X^2 + b_11XY + b_02Y^2 + ...,   b_ij ∈ R.

Let us make the convention that XY = YX in R[X][Y]. Then, in particular, we see that R[X][Y] = R[Y][X]. We will denote R[X][Y] simply by R[X,Y]. Then by what we have said, R[X,Y] = R[Y,X]. This ring is said to be a polynomial ring in two variables (indeterminates) over R. Similarly, we can construct rings R[X_1, ..., X_n] of polynomials in n variables over R. Such rings are of critical importance to geometrical investigations, since geometrical objects (curves, surfaces, etc.) are described by equations in several variables.
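
One way to see R[X][Y] concretely is to note that the addition and multiplication formulas for coefficient lists never used anything about the integers beyond 0, +, and ·. In the Python sketch below (all names, including the Ring tuple, are our own invention), the same two functions multiply polynomials over Z and polynomials over Z[X], which is exactly the construction R[X][Y]:

from typing import Any, Callable, NamedTuple

class Ring(NamedTuple):
    zero: Any
    add: Callable
    mul: Callable

def trim(R, c):
    c = list(c)
    while c and c[-1] == R.zero:
        c.pop()
    return c

def poly_add(R, f, g):
    n = max(len(f), len(g))
    return trim(R, [R.add(f[i] if i < len(f) else R.zero,
                          g[i] if i < len(g) else R.zero) for i in range(n)])

def poly_mul(R, f, g):
    c = [R.zero] * (len(f) + len(g) - 1) if f and g else []
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            c[i + j] = R.add(c[i + j], R.mul(a, b))
    return trim(R, c)

Z = Ring(0, lambda a, b: a + b, lambda a, b: a * b)                             # the ring Z
ZX = Ring([], lambda f, g: poly_add(Z, f, g), lambda f, g: poly_mul(Z, f, g))   # the ring Z[X]

# (X + Y)(X - Y) = X^2 - Y^2 in Z[X][Y]: each coefficient of a power of Y lies in Z[X]
f = [[0, 1], [1]]          # X + 1*Y
g = [[0, 1], [-1]]         # X - 1*Y
print(poly_mul(ZX, f, g))  # [[0, 0, 1], [], [-1]]   i.e.  X^2 + 0*Y - Y^2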