ELEMENTS OF VECTOR ALGEBRA

BY THE SAME AUTHOR

Elements of the Electromagnetic Theory of Light. Crown 8vo, 3s. 6d. net.

Simplified Method of Tracing Rays through any Optical System of Lenses, Prisms, and Mirrors. With Diagrams and blank pages for the reader's notes. 8vo, 5s. net.

LONGMANS, GREEN AND CO. London, New York, Bombay, Calcutta and Madras

ELEMENTS OF VECTOR ALGEBRA

BY L. SILBERSTEIN, Ph.D.
LECTURER IN NATURAL PHILOSOPHY AT THE UNIVERSITY OF ROME

WITH DIAGRAMS

LONGMANS, GREEN AND CO
39 PATERNOSTER ROW, LONDON
FOURTH AVENUE & 30TH STREET, NEW YORK
BOMBAY, CALCUTTA AND MADRAS
1919

PREFACE

This little book was written at the instance of Messrs. Adam Hilger, and, in accordance with their desire, it contains just what is required for the purpose of reading and handling my Simplified Method of Tracing Rays, etc. (Longmans, Green & Co., London, 1918). With this practical aim in view, all critical subtleties have been purposely avoided. In fact, it is scarcely more than a synoptical presentation of the elements of Vector Algebra covering the needs of those engaged in geometrical optics. At the same time, however, it is hoped that this booklet will serve a more general purpose, viz.
to provide everybody unacquainted with the subject with an easy introduction to the use of Vector Algebra.

It is scarcely necessary to explain that the deductions given in this book are based on Euclid's axioms, notably with the inclusion of his postulate of parallels, upon which the equality of vectors is most essentially based. Those readers who are desirous of seeing how the formal rules here given can be generalized so as to be valid independently of the axioms of congruence and of parallels may consult the author's Projective Vector Algebra (Bell & Sons, 1919), and a sequel to it published in Phil. Mag. for July, 1919, pp. 115-143. It is, however, advisable for the student to become first thoroughly familiar with the euclidean vector algebra as here presented.

I take the opportunity of expressing my sincere thanks to Messrs. Hilger for enabling me to make this further contribution towards the promotion of the more general use of this powerful and convenient language of vectors, and to the Publishers for the care they have bestowed upon this little book.

L. S.

London, August, 1919.

CONTENTS

1. Vectors Defined
2. Equality of Vectors Defined
3. Addition of Vectors
4. Subtraction of Vectors
5. Scalar Product of Two Vectors
6. The Vector Product of Vectors
7. Expansion of Vector Formulae
8. Iteration of Vectorial Multiplication
9. The Linear Vector Operator
10. Hints on Differentiation of Vectors
Index

ELEMENTS OF VECTOR ALGEBRA

1. Vectors defined. Whereas common algebraic magnitudes, such as the number of inhabitants of a village, or the mass of a body, or the energy stored in an accumulator, having nothing to do with direction, are called scalars, any magnitude such as a displacement, a velocity or an acceleration, which has size as well as direction in space, is called a vector.
The visual, or tangible, representative of any vector whatever is a segment of a straight line of some length, representing the vector's size, and of some definite direction in space, together with its sense (say, from a point M towards a point N), giving the direction of the vector. Vectors will be printed in Clarendon, thus A, B, etc., or n, r, s, etc., and their sizes, regardless of direction, or their tensors (as they are called) will be denoted by the same letters in Italics. Thus, A will be the tensor of A; B, n will be the tensors of B, n, and so on.

Returning once more to the above definition, we may as well say that any vector A = OE is given by the ordered couple or pair of points, O the origin and E the end-point of the vector; the tensor, called also the absolute value, of the vector being the mutual distance of O and E. In short symbols,

A = O→E,  A = OE,

the latter denoting the distance of O and E. The tensor of a vector is thus an ordinary, absolute or essentially positive number. A vector whose tensor is (in a conventionally fixed scale) equal to unity is termed a unit vector. Thus, if r = 1, the corresponding r will be a unit vector. It will be understood that the denomination of A is that of A. That is to say, if A is, for instance, the displacement of a particle, A will mean so many centimetres; and if A represents a velocity, A will be a number of cm. per second, and so on.

As far as will be possible we shall reserve small (in distinction from capital) Clarendon letters for unit vectors. Thus, if the contrary is not expressly stated, a, b, etc., will stand for unit vectors, so that a = 1, b = 1, etc.

In MS. work the reader will, at least in the beginning of his vector career, find it useful to underline all his vectors once or twice. Or he may write them thicker, somehow imitating the printer's type. Everyone will soon find out his most agreeable manner of writing.

2.
Equality of vectors defined. We have just seen that the two essential features of a vector are its size or tensor, and its direction in space. In some branches of physico-mathematics it is important to consider the position of the vectors in question (besides their sizes and directions), i.e. to localize their origins, either by fixing the origin of each vector altogether or by allowing it only to move freely in its own line. Such vectors are usually called "localized" vectors. In a vast class of investigations, however, the position of these directed magnitudes is of no avail, and it is then obviously convenient not to include position among the determining characteristics of a vector. Such vectors, in distinction from localized ones, are called free vectors. These and these only will here occupy our attention. The adjective will be dropped, however, and the beings in question will be called shortly vectors. With this understanding, the definition of their equality may be put thus:

By saying that two vectors, A and B, are equal to one another, and by writing A = B or B = A, we mean that their tensors are equal, A = B, and that they have the same direction or, in other words, that the straight segments representing these two vectors have the same length and are concurrently parallel to one another. In short symbols, A = B means as much as A = B and A ∥ B.

Thus, if a pair of points O, E represents a vector A = O→E, the point pairs O′, E′ or straight segments O′E′ of equal length with and concurrently parallel to OE are all equal to A, no matter where their origins are situated. Notice that through every point O′ of euclidean space there is one and only one parallel to OE, so that from every space point O′ as origin one and only one vector can be drawn which is equal to the given A. Of course, the laying off, from O′, of the length O′E′ = OE implies the use of some "rigid transferer," such as a pair of compasses.
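The point-pair picture of a free vector lends itself to a direct computational illustration. The following sketch (in Python, a modern notation not belonging to this book; the representation and the name `free` are our own) treats a vector as the displacement from origin to end-point, so that all parallel shifts of a segment determine one and the same free vector:

```python
import math

def free(origin, end):
    # the free vector determined by an ordered pair of points:
    # end-point minus origin, componentwise
    return tuple(e - o for o, e in zip(origin, end))

O, E = (0.0, 0.0, 0.0), (1.0, 2.0, 2.0)
O2, E2 = (5.0, -1.0, 3.0), (6.0, 1.0, 5.0)   # the same segment, rigidly shifted
assert free(O, E) == free(O2, E2)            # equal as free vectors

tensor = math.sqrt(sum(c * c for c in free(O, E)))
assert abs(tensor - 3.0) < 1e-12             # the tensor, the distance OE, is 3
```

The equality test compares only length and direction, never position, exactly as the definition of this section prescribes.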
Equivalently, we may say that the rigid translation (parallel shifting) of a given vector is irrelevant, or does not change the vector. Provided it is not being rotated, stretched or contracted, we can, by the accepted definition, "transfer" it to any place we like best.

Two vectors A, B drawn from the same origin are termed coinitial. By what has just been said, any two vectors can be made coinitial by shifting one of them or both parallel to themselves. If A = B, then making them coinitial fuses them into one straight segment. If only A = B (equal tensors only), then making the vectors A, B coinitial will still leave a certain non-vanishing angle, or direction difference, between them, sufficient by itself to declare the two vectors as being different from one another.

We will say that two or more vectors form a chain if the end-point of one serves as the origin for the other, and so on. As before, any two vectors A, B can be linked up into a chain, to wit in two manners: end-point of A coinciding with origin of B (or A preceding B), or vice versa. This licence will be seen to be of capital importance for the vector sum to be defined presently, inasmuch as it will confer upon that sum the extremely convenient property of commutativity. It will, therefore, be important to keep these latter, apparently trivial remarks well in mind.

3. Addition of vectors. Let A and B be any two vectors, drawn anywhere. Shift B so as to bring its origin to coincidence with the end-point of A, as shown in Fig. 1. The vectors being thus linked up into a chain, we call the sum of A and B, and denote by

S = A + B,

a third vector S which runs from the beginning to the end of the chain, i.e. from the origin of A to the end-point of B.

This is the definition of the vector sum.
The operation, vector addition, thus defined has the so-called group-property, that is to say, being performed on vectors it gives again a vector, in much the same way as five apples added to three apples give again a certain number of apples.

Fig. 1.

The above vectorial expression will be read: B added to A. But we might as well have linked the two given vectors so that the end-point of B = B′ falls into the origin of A, as shown in the lower part of Fig. 1. Then their sum, say S′, would, according to the definition, be

S′ = B + A,

which reads: A added to B. The natural question arises: What is this new vector S′? Is it equal to S?

The answer is in the affirmative. For, by construction, B′ is parallel to B and B′ = B, so that Oαβ and αOγ are congruent triangles, and S′ = S. At the same time the angles at β and γ are equal to one another, so that, αβ being parallel to γO, so are also Oβ and γα, or S ∥ S′. Therefore, by Section 2, S′ = S, which was to be proved. Thus we have

A + B = B + A, (1)

the commutative property of vector addition. The order of the addends, in the vector chain, is irrelevant for their sum.

Again, we might have shifted B to the position B″ = O→δ (Fig. 1), retaining also the previous B = α→β and constructing A′ = δ→β = A. Then, Oαβδ being a parallelogram and S = O→β one of its diagonals, we should have the following construction of the sum of two coinitial vectors A, B (Fig. 2): Through α, the end-point of A, draw a parallel to B, and through β, the end-point of B, draw a parallel to A. Then γ, the cross of these parallels, will be the end-point of the required vector sum A + B or B + A, and the common origin O of the two addends will be the origin of their sum,

S = O→γ = A + B = B + A. (2)

This is known as the parallelogram construction of a vector sum. We might have started from it as a sum definition.
It has the advantage of being immediately symmetrical with respect to the two addends. At any rate we see that the chain and the parallelogram constructions are (in virtue of Euclid) wholly equivalent to one another.

Thus far the case of two vector addends. Now, the sum of these being again a vector, S = A + B, we can add to S any third vector C, thus obtaining

S + C = (A + B) + C = C + (A + B),

the latter by the commutative property. Similarly for the sum of four and more vectors. Again, linking up the vector addends A, B, C into a chain, we see without difficulty (Fig. 3) that

(A + B) + C = A + (B + C), (3)

the result being in both cases the same vector, viz. that drawn from the beginning to the end of the chain. The same property holds for the sum of any number of vectors. The brackets become superfluous, and either of the above expressions can simply be written

A + B + C,

or B + A + C, and so on. The addition of vectors is thus seen to be associative as well as commutative, exactly as the ordinary algebraic addition of scalars.

If by any appropriate parallel shifting of any number of given vectors, say A, B, C, D, they can be linked up, as in Fig. 4, into a closed chain (or a polygon), plane or not, then the sum of these vectors is a nil vector or simply nil,

S = A + B + C + D = 0,

and therefore also A + C + B + D = 0, etc. It is scarcely necessary to say that a vector is nil or zero, S = 0, if S = 0, that is, if its tensor vanishes, and conversely; or, in other words, if its end-point and origin coincide, such precisely being the case of our closed chain.

The vector sum, which shares with the ordinary algebraic sum the two capital properties of commutativity and associativity, contains the algebraic sum as a particular sub-case, to wit, when the vector addends are all parallel to one another. For, such being the case, they can always be brought into one line or made collinear.
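The rules of this section are easily checked numerically. A minimal sketch (in Python; the helper `vadd` is our own name, not the book's notation) verifies commutativity, associativity, and the vanishing sum of a closed chain:

```python
def vadd(*vs):
    # componentwise sum of any number of 3-component vectors
    return tuple(sum(c) for c in zip(*vs))

A, B, C = (1.0, 2.0, 0.5), (-3.0, 0.0, 2.0), (0.0, 4.0, -1.0)
assert vadd(A, B) == vadd(B, A)                    # A + B = B + A, formula (1)
assert vadd(vadd(A, B), C) == vadd(A, vadd(B, C))  # (A + B) + C = A + (B + C), formula (3)

# A closed chain: a fourth side D closing the polygon makes the sum nil.
D = tuple(-x for x in vadd(A, B, C))
assert vadd(A, B, C, D) == (0.0, 0.0, 0.0)
```

The closed-chain assertion is precisely the polygon of Fig. 4: the chain returns to its starting point, so end-point and origin of the sum coincide.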
Parallel vectors, no matter what their tensors, are therefore called also collinear vectors. Now, if A, B are collinear vectors, the tensor of their sum is A ± B, according as A, B are of equal or of opposite senses. The tensor of a sum of vectors, such as S = A + B, can conveniently be denoted by S = |A + B|, as is usual for the absolute value of ordinary algebraic magnitudes. Thus we shall have, for collinear vectors,

|A + B| = |A ± B|.

But it will be well kept in mind that, in general, for non-collinear addends,

|A + B| < A + B,

since |A + B| is the length of the third side of a triangle whose two other sides are A and B.

By what has just been said, the sum of two equal vectors, which is written A + A or 2A, is a vector coinciding with A in direction and having 2A for its tensor. Similarly for 3A, 4A, and so on. Again, if B be such a vector that 2B = A, we shall write B = ½A, and similar meanings will be attached to ⅓A, ¼A, etc. In this manner, and applying in the case of irrational factors the well-known limit-reasoning, we easily obtain the meaning of the expression nA, where n is any positive scalar number, integral, fractional or irrational. We can say shortly that nA is the vector A stretched in the ratio n : 1. If n is negative, then (as justified in Section 4, infra) nA will be the vector A stretched in the ratio |n| : 1 and then inverted in its sense, or first inverted and then stretched. In particular, if a is a unit vector, the "unit of A," as we have said before, we shall obviously have

A = Aa. (4)

Here A, the tensor of A, is an ordinary positive number.

Let a, b be any two non-collinear unit vectors. (Imagine them shifted so as to be coinitial.) Then any vector R contained in or parallel to the plane a, b can obviously be expressed by

R = xa + yb, (5)

where x, y are some scalar numbers. For the plain meaning of this assertion is that, starting from O, the origin of a, b, any other point of the plane a, b can be reached by making a number (x)
of steps a and then a number (y) of steps b (or first yb and then xa). If both x and y are positive, then, with O as origin, R will lie in the region I of the plane a, b (Fig. 5); if x < 0, y > 0, it will fall into II; if x < 0, y < 0, into III; and finally, if x > 0, y < 0, into the region IV.

The scalars x, y in (5) are called the components of R along a, b as axes.

Similarly, if a, b, c be any three non-coplanar vectors,* which we may again take as unit vectors, then any vector whatever can be expressed in the form

R = xa + yb + zc. (6)

* I.e. such as cannot be made coplanar by parallel translations.

The scalars x, y, z are called the components of R taken along a, b, c as axes. These axes may be chosen at our will (if we wish at all to split our vectors R into components), either perpendicularly or obliquely to one another, the only condition for covering all possible vectors (R) being that a, b, c should not be coplanar. The three vectors a, b, c or, as we will say, the reference system a, b, c, being fixed conventionally, we see from (6) that any vector is fully determined by three scalar data x, y, z, and not less than three. The same thing is obvious from formula (4), according to which any vector R can be represented by

R = Rr. (7)

In fact R is one scalar number, and r (a direction) implies two more scalar data, for instance two angles, which makes in all three independent scalar data as above. In so-called polar coordinates, for instance, we have (Fig. 6)

R = R{[i cos φ + j sin φ] sin θ + k cos θ}, (7a)

where i, j, k are three mutually perpendicular (normal) unit vectors, and θ, φ are the polar angles, the colatitude and the longitude, of the end-point of R, with O as origin. Thus A = B means as much as A₁ = B₁ and A₂ = B₂ and A₃ = B₃, if the suffixes 1, 2, 3 are used for the components of the vectors along i, j, k, or as much as R_A = R_B, θ_A = θ_B, φ_A = φ_B, if the suffixes A, B are used to distinguish the polar coordinates of the end-point of A from those of the end-point of B.
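The polar-coordinate representation used in the text can be sketched computationally as follows (Python; `from_polar` is our own name, and the i, j, k components are taken as the three slots of a tuple):

```python
import math

def from_polar(R, theta, phi):
    # R{[i cos phi + j sin phi] sin theta + k cos theta},
    # with theta the colatitude and phi the longitude
    return (R * math.cos(phi) * math.sin(theta),
            R * math.sin(phi) * math.sin(theta),
            R * math.cos(theta))

V = from_polar(2.0, math.pi / 3, math.pi / 4)
tensor = math.sqrt(sum(c * c for c in V))
assert abs(tensor - 2.0) < 1e-12   # the three components restore the tensor R
```

The three scalar data R, θ, φ determine the vector completely, in agreement with the count of three made above.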
But it must henceforth be urged that any such splitting of a vector should be avoided as much as is possible in the course of a vector investigation of any kind. For the utility of the vector method lies precisely therein, that it enables us to treat vectors as wholes instead of the triad of "components" of each of them.

4. Subtraction of vectors. This will require but a few remarks. In fact, as in common algebra, the difference of two vectors A and B, to be denoted by A - B, may be defined as such a vector C, which added to B gives A. In symbols, we say that

C = A - B if B + C = A. (8)

From this definition we see at once that if A, B are made coinitial (Fig. 7), the vector A - B runs from the end-point of B to the end-point of A. From the same figure, and by what was explained previously, we see that A + B and A - B are represented by the two diagonals of the parallelogram constructed upon A, B.

Apply the above definition (8) to the particular case A = 0; then C = 0 - B = -B, and B + C = 0; therefore,

B + (-B) = 0. (9)

This settles the meaning of the vector denoted by -B; it is the vector which runs from the end-point towards the origin of B, or the reverse of B. This also justifies the interpretation given before to a negative scalar factor of a vector. Henceforth, for any A, B, A - B will stand for the same vector as A + (-B).

The above remarks complete the meaning of nA, where A is a vector and n any real scalar, positive, nil or negative. The concept of such a product of a vector by any scalar n does not contain, in fact, anything besides the previous concept of vector sum or difference. It is derived from their special case, viz. that relating to collinear vectors. To say it once more, nA is simply the vector A stretched in the ratio |n| : 1 and, if n < 0, turned through 180° (in any plane passing through A). Finally, as the reader himself will easily prove, for any A, B, and any scalar factor n,

n(A + B) = nA + nB. (10)

Similarly for three or more vector addends.
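Definition (8) admits a direct numerical check; in the sketch below (Python, with our own helper names) the difference is formed componentwise and shown to satisfy both B + C = A and A - B = A + (-B):

```python
def vadd(A, B):
    return tuple(a + b for a, b in zip(A, B))

def vsub(A, B):
    # the difference A - B, formed componentwise
    return tuple(a - b for a, b in zip(A, B))

A, B = (4.0, 1.0, -2.0), (1.5, 3.0, 0.0)
C = vsub(A, B)
assert vadd(B, C) == A                 # definition (8): B + C = A

neg_B = tuple(-b for b in B)           # the reverse of B
assert vadd(B, neg_B) == (0.0, 0.0, 0.0)   # formula (9): B + (-B) = 0
assert vsub(A, B) == vadd(A, neg_B)    # A - B = A + (-B)
```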
This settles all questions concerning the multiplication (or division) of a vector expression by any scalar number.

5. Scalar product of two vectors. We now come to a new concept, transcending that of vector addition which hitherto has occupied us. The "scalar product" of two vectors A, B, which will be denoted by AB, is, first of all, not a vector but a scalar. (Thus the scalar multiplication of vectors does not respect the group requirement; it yields a result not contained in the class of operands: it takes two vectors and constructs out of them something which is utterly deprived of direction. None the less it is a very useful operation.) The value of this scalar is, by definition, proportional to the tensors of both the factors and to the cosine of the angle (A, B) included between them. In short, the definition of the scalar product is

AB = AB cos(A, B). (11)

This can also be read: AB is the projection of A upon B multiplied by B, or the projection of B upon A multiplied by A. Since AB = BA, for A, B are common numbers, and cos(A, B) = cos(B, A), we see at once from the very definition (11) that

AB = BA, (12)

the commutative property. According as the angle (A, B) is less than or greater than π/2 (but not exceeding π), the product AB is positive or negative; for A, B are themselves essentially positive. And if (A, B) = π/2, or A ⊥ B, then

AB = 0,

no matter what the (finite) tensors of A, B. In this case the operation (scalar multiplication) deprives the material operated upon not only of direction but of size. It annihilates it. Conversely, if of two vectors A and B we know only that AB = 0, then the only conclusion we can draw from it is that A ⊥ B, but by no means that one of the factors vanishes, unless we happen to know beforehand that the two vectors cannot be perpendicular. It is of prime importance to keep this well in mind:

AB = 0 means in general only as much as A ⊥ B.
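A minimal numerical illustration of definition (11) (Python; `dot` and `tensor` are our own names): the cosine of the included angle is recovered from the product, and perpendicular vectors annihilate one another:

```python
import math

def dot(A, B):
    # the scalar product of two 3-component vectors
    return sum(a * b for a, b in zip(A, B))

def tensor(A):
    return math.sqrt(dot(A, A))

A, B = (3.0, 0.0, 0.0), (1.0, 1.0, 0.0)
# AB = A B cos(A, B): the angle between A and B here is 45 degrees.
cos_ab = dot(A, B) / (tensor(A) * tensor(B))
assert abs(cos_ab - math.cos(math.pi / 4)) < 1e-12

# AB = 0 says only that A is perpendicular to B, not that either vanishes.
assert dot((1.0, 0.0, 0.0), (0.0, 5.0, 0.0)) == 0.0
```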
The scalar product AB contains the ordinary algebraic product as a special case, to wit, when A, B are collinear vectors. For if such be the case, we have cos(A, B) = ±1, and therefore

AB = ±AB, (13)

according as A and B have the same or opposite directions. Since the tensor of the vector mA is mA, we see at once from (11) that

(mA)(nB) = mnAB.

Thus, for example, if a, b be the units of A, B, we have

AB = AB ab,

where, again by the definition (11),

ab = cos(a, b), (14)

valid for any pair of unit vectors a, b. Thus, for instance, if a, b make with one another the angle of 45°, we have ab = 1/√2, and if (a, b) = 90°, ab = 0. For the three normal unit vectors i, j, k used above we have

ij = jk = ki = 0.

As a sub-case of (13) we have the scalar square of a vector, or better, its autoproduct,

AA or A² = A²,

and if a be a unit vector, a² = a² = 1. Thus, i² = j² = k² = 1. Again, if R is any vector whatever and n a unit vector, Rn is the (scalar) component of R along n, or the orthogonal projection of R upon n as axis,

Rn = R cos(R, n).

By what has been said we see that if A, B be rigidly linked together and thus moved about in space in any arbitrary manner whatever (spun round, etc.), the value of the product AB is not changed. It is thus an invariant of the pair of vectors with respect to their common rigid motion. In fact, AB depends only on the tensors of A, B and on their relative direction, i.e. the angle (A, B). By the fusion of A, B into AB all directional properties of the factors are gone. The result has nothing more to do with direction in space; it is an ordinary scalar, like the tensor of each of the two vectorial factors. Thus, if C be a third vector, (AB)C or C(AB) will simply mean the vector C magnified (stretched) AB times, assuming, that is, that AB is a dimensionless or pure number; if AB is an area and C, say, a displacement, then (AB)C, the tensor of (AB)C, is a volume, of course, and so on.
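The invariance of AB under a common rigid motion can be tested with an explicit rotation; the sketch below (our own, assuming for simplicity a rotation about the k axis) spins both factors through the same angle and finds the product unchanged:

```python
import math

def dot(A, B):
    return sum(a * b for a, b in zip(A, B))

def rot_z(V, t):
    # rigid rotation of V about the k axis through the angle t
    x, y, z = V
    return (x * math.cos(t) - y * math.sin(t),
            x * math.sin(t) + y * math.cos(t),
            z)

A, B = (1.0, 2.0, 3.0), (-2.0, 0.5, 1.0)
t = 0.7
# The pair A, B spun round together: AB is an invariant of the rigid motion.
assert abs(dot(rot_z(A, t), rot_z(B, t)) - dot(A, B)) < 1e-12
```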
If D is a fourth vector,

(AB)(CD)

will again be a scalar, and so on. The brackets are here used as separators. They are, of course, indispensable in such and similar cases. For, to take only three factors, ABC would, in general, be ambiguous, since (AB)C is a vector along C, while A(BC) is a vector along A, and thus entirely different from the former. Instead of brackets, dots may conveniently be used as separators, thus

(AB)C = AB.C,  (AB)(CD) = AB.CD,

and so forth. The reader will soon find that this need of precaution gives rise to no serious inconvenience.

The scalar product AB is commutative owing to the symmetry of its very definition with respect to A, B. In this it resembles the ordinary product. But, what is most important, it has also the distributive property, viz. for any A, B, C,

A(B + C) = AB + AC. (15)

For, by the definition, A(B + C) or (B + C)A is the projection of the vector B + C upon A multiplied by A. But the projection of the sum of two (or more) vectors upon any axis is equal to the algebraic sum of the projections (Fig. 8), whence the proof of the distributive law (15). Similarly,

A(B + C + D + E + ...) = AB + AC + AD + AE + ...,

and also

(A + B)(C + D) = (A + B)C + (A + B)D = C(A + B) + D(A + B) = AC + BC + AD + BD.

And since B - C is the same thing as B + (-C), we have also

A(B - C) = AB - AC.

In fine, the scalar multiplication of vectors is commutative as well as distributive, and any two vector polynomials are multiplied out precisely as in ordinary algebra. This makes the scalar multiplication of vectors a powerful operation. As examples we may quote

(A + B)(A - B) = A² - B²,

meaning that the product of the lengths of the diagonals of a parallelogram multiplied by the cosine of their included angle is equal to the difference of the squares constructed upon the sides of the parallelogram; again,

(A + B)² = A² + B² + 2AB,

or (Fig.
9), remembering that AB = AB cos(π - θ) = -AB cos θ,

C² = A² + B² - 2AB cos θ,

the well-known trigonometrical relation. In particular, if A ⊥ B,

(A + B)² = A² + B²,

the theorem of Pythagoras. As a third example, let us quote the scalar product of two coinitial unit vectors, written as in (7a),

r₁ = [i cos φ₁ + j sin φ₁] sin θ₁ + k cos θ₁,
r₂ = [i cos φ₂ + j sin φ₂] sin θ₂ + k cos θ₂,

and representing (by their end-points) two places on the Earth* whose geographic colatitudes and longitudes are θ₁, φ₁ and θ₂, φ₂. If s be their geodesic or shortest distance, i.e. the angle (r₁, r₂), we have

cos s = r₁r₂.

* Assumed to be ideally spherical, of radius taken for unit length.

Now i² = 1, etc., and ij = jk = ki = 0. Thus, multiplying out the two trinomials we have, for the required distance s,

cos s = cos θ₁ cos θ₂ + sin θ₁ sin θ₂ [cos φ₁ cos φ₂ + sin φ₁ sin φ₂].

Again, calling for the moment a, b two equatorial unit vectors having the longitudes of the two places (cf. Fig. 6), viz.

a = i cos φ₁ + j sin φ₁,  b = i cos φ₂ + j sin φ₂,

we have

ab = cos(φ₁ - φ₂) = cos φ₁ cos φ₂ + sin φ₁ sin φ₂,

the well-known formula of plane trigonometry, so that the geodesic distance of the two places, (θ₁, φ₁) and (θ₂, φ₂), becomes

cos s = cos θ₁ cos θ₂ + sin θ₁ sin θ₂ cos(φ₁ - φ₂), (7b)

an important formula for navigators, which is at the same time the fundamental "cosine formula" of spherical trigonometry. In fact, N being the pole (θ = 0), formula (7b) concerns the spherical triangle 1N2 (Fig. 10), whose sides are s, θ₁, θ₂, and whose angle at N, included between the latter two, is φ₂ - φ₁. Notice that this is valid for any spherical triangle; for one of its corners can always be considered as our pole, θ = 0.

The reader will not be astonished to see the comparatively complicated theorems of euclidean geometry thus to follow without the least trouble from squaring the sum of vectors or from multiplying scalarly two unit vectors.
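Formula (7b) can be checked against the direct scalar product of the two unit vectors; in the sketch below (Python; `unit` and `dot` are our own names) the two computations of cos s agree:

```python
import math

def dot(A, B):
    return sum(a * b for a, b in zip(A, B))

def unit(theta, phi):
    # unit vector of colatitude theta and longitude phi, as in (7a)
    return (math.cos(phi) * math.sin(theta),
            math.sin(phi) * math.sin(theta),
            math.cos(theta))

t1, p1 = 0.6, 0.2   # colatitude and longitude of the first place
t2, p2 = 1.1, 1.5   # and of the second

cos_s = dot(unit(t1, p1), unit(t2, p2))      # cos s = r1 r2
# the cosine formula (7b):
formula = (math.cos(t1) * math.cos(t2)
           + math.sin(t1) * math.sin(t2) * math.cos(p1 - p2))
assert abs(cos_s - formula) < 1e-12
```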
For essentially all euclidean relations have been condensed into the above vectorial definitions and rules of operations (addition and scalar multiplication). Still, as such a condensed system, the vector algebra is exceedingly useful. The reader will find for himself that the vector equality and the vector addition alone, as explained in Sections 1 to 4, even without the help of the scalar product, are sufficient to demonstrate formally a large number of euclidean theorems, such, for instance, as the mutual bisection of the diagonals of a parallelogram, the common cross of the three medians of a triangle, and so on. The scope and purpose of this booklet do not permit us to enter into all these attractive details. The willing reader will, however, find no difficulty in treating them as exercises, which he will soon find to be easy as well as interesting and useful, when skill in handling the vector method is aimed at.

6. The vector product of vectors. Two non-collinear vectors, A and B, can always be said to define a plane A, B, by making them coinitial, for instance, as in Fig. 11. We already know that one of the previous operations, AB, deprives them of all their properly vectorial characteristics, and the other, A + B, or more generally xA + yB, gives us only vectors which are again in the plane A, B. The operation to be now introduced is in this respect particularly interesting, since it yields a vector outside the plane of the operands A, B.

Definition. We call vector product of A into B, and denote by

VAB,

a third vector C normal to A, B and drawn so that for an observer glancing along C the rotation turning A into B, through an angle smaller than 180°, is clockwise. This fixes the direction and the sense of the vector product C = VAB, and its tensor is defined as equal to the area of the parallelogram constructed upon A, B as sides, i.e.

C = |VAB| = AB |sin(A, B)|.
(16)

From this definition we see, first of all, that the vector product is not commutative, inasmuch as we have

VBA = -VAB. (17)

Again, if A, B are parallel to one another, i.e. collinear, we have

VAB = 0.

And if A ⊥ B, then sin(A, B) = ±1, and

C = |VAB| = AB,

while A, B and C form a right-handed normal system of three vectors. If A points upward and B towards the right, then C = VAB points forward.

If we know of two vectors A, B that their vector product vanishes, then we can conclude only that they are parallel (collinear), i.e. that

B = mA,

where m is some undetermined scalar number, but by no means that one of the vectors vanishes (unless we know beforehand that they cannot be parallel). This is, mutatis mutandis, analogous to what has been said in Section 5 with regard to the scalar product.

From (16) we see at once that the vector product of m times A into n times B is equal to mnVAB. Thus, for instance,

VAB = AB Vab, (18)

where a, b are the units of A, B. Similarly AB = AB ab. For a right-handed system of normal unit vectors, as the previous i, j, k, we have

Vij = k,  Vjk = i,  Vki = j, (a)

three relations derivable from one another by cyclic permutations of i, j, k. At the same time we have, of course, as for every vector,

Vii = Vjj = Vkk = 0.

Contrast these relations with the previous ones, i² = j² = k² = 1 and ij = jk = ki = 0. The latter follow also from (a); for, by the second of (a), for instance, i = Vjk is normal to j, and therefore ji = jVjk = 0. It is scarcely necessary to explain that jVjk means the scalar product of the vectors j and Vjk. More generally we have, for any two vectors A, B, by the very definition of VAB,

AVAB = BVAB = 0.

Let now A, B, C be any three vectors whatever, generally non-coplanar with one another. Then the scalarly-vectorial product,

AVBC,
In fact, let A, B, C (in the order as they are written) form a right- handed system, Le. such that a person glancing along C sees the rotation from A to B (through less than jt) clockwise. Construct upon A, B, C as edges a parallelepipedon (Fig. 12). Then VBC will be perpendicular to the base B,C, and its tensor will be equal to the area of this base ; in symbols, VBC = (area of base) n, where n is a unit vector perpendicular to the base. Therefore, AVBC = (area of base) An, and An being the height of the parallelepipedon, we see that AVBC = volume of parallelepipedon A, B, C, provided that A, B, C is a right-handed arrangement of the edges. (If it were a left-handed arrangement, then AVBC would be equal to minus the volume.) Now, the same volume can be expressed by taking C, A or A, B as base. Thus we obtain the important property AVBC=BVCA = CVAB, (19) or in words : the cyclic permutation of the three factors of AVBC does not influence the value of the product.* Inverting the cyclic order is equivalent to changing its sign. For VCB is minus VBC. The particular property {A;A-{-yBjVAB=0 can now be interpreted geometrically by saying that the volume of a parallelepipedon * The validity of formula (19) is by no means based upon this volume- proof (or rather illustration), which is given here only because it best appeals to simple intuitions. In fact, (19) can be proved algebraically, without any appeal to the concept of ' volume.' 20 ELEMENTS OF VECTOR ALGEBRA vanishes when its three edges become coplanar, that is to say, when all its faces collapse into one plane. If of any three vectors A, B, C we know that AVBC = o, then the only thing we can conclude is that A, B, C are coplanar, but by no means that one of these vectors vanishes. Conversely, if A, B, C are coplanar, we have AVBC = 0. The theorem expressed by (19) is of great utility in many applications, and it deserves, therefore, to be well kept in mind. 
As in the case of the scalar product, one of the most important properties of the vector product is its distributivity, i.e. for any three vectors A, B, C,

VA(B + C) = VAB + VAC. (20)

This capital property can be proved in a variety of ways. First of all, by an immediate geometrical construction of both the right- and the left-hand member of (20), which will be left as an exercise for the reader. (It will be enough if the reader constructs it for the simplest case of coplanar A, B, C.*)

Another, comparatively simple proof, based upon (19), is this: Let us write

VA(B + C) - VAB - VAC = X.

Then our problem is reduced to proving that X vanishes. Now, all the three addends being perpendicular to A, so is their sum X, i.e. XA = 0. Again,

XB = BVA(B + C) - BVAC = (B + C)VBA - CVBA, by (19),
   = BVBA + CVBA - CVBA = 0,

and similarly XC = 0. Thus, the vector X either vanishes or is normal to each of the three vectors A, B, C. Now, if these are not coplanar, the latter case is excluded, so that X = 0. Thus, for non-coplanar A, B, C the distributive property (20) is already proved. And if A, B, C happen to be coplanar, add to C, for instance, a fourth vector D inclined to the plane of A, B, C. Then the new vectors A, B, C + D will not be coplanar, and

VA(C + D) + VB(C + D) = V(A + B)(C + D),

and since D can always be so chosen as to make the three relevant vectors in each of these products non-coplanar, they may be expanded, giving

VAC + VAD + VBC + VBD = V(A + B)C + V(A + B)D;

but, by the above,

VAD + VBD = V(A + B)D,

whence

VAC + VBC = V(A + B)C,

or, changing the sign of both sides,

VCA + VCB = VC(A + B).

Thus the distributive property of vector multiplication is proved for any A, B, C, coplanar or not.

The product of two binomials (or polynomials) does not call for lengthy explanations.

* For the case of non-coplanar A, B, C is more easily dealt with by the following analytical method.
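The distributive law (20) just proved also lends itself to a direct check on sample components (a plain-Python sketch, not part of the original text; the sample vectors are my own):

```python
def cross(a, b):
    """The vector product in cartesian components."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

A, B, C = (1, -2, 4), (3, 0, 5), (-1, 2, 2)

BplusC = tuple(p + q for p, q in zip(B, C))
lhs = cross(A, BplusC)                                     # VA(B + C)
rhs = tuple(p + q for p, q in zip(cross(A, B), cross(A, C)))  # VAB + VAC
assert lhs == rhs   # eq. (20), exact in integer arithmetic
```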
Thus,

V(A + B)(C + D) = V(A + B)C + V(A + B)D
               = -VC(A + B) - VD(A + B)
               = VAC + VBC + VAD + VBD.

The vector multiplication of any two vector polynomials is thus seen to obey the same rules as ordinary algebraic multiplication, the only difference being that vector products are not commutative. A reversal of the order of the two factors changes only the sign of their product, which is easily remembered.

7. Expansion of vector formulae. Basing ourselves upon the distributive property just proved, we can at once expand the vector product of any two vectors into its cartesian or any other form. Thus, if A = A1i + A2j + A3k, and B = B1i + B2j + B3k, we have, remembering that Vii = 0, Vjk = -Vkj = i, etc.,

VAB = i(A2B3 - A3B2) + j(A3B1 - A1B3) + k(A1B2 - A2B1), (21)

exhibiting A2B3 - A3B2, etc. (by cyclic permutation), as the three rectangular components of the vector product. Since |VAB| is the area of the parallelogram constructed upon A, B as sides, we see at the same time that A2B3 - A3B2, etc., are the areas of the projections of this parallelogram upon the planes j, k; k, i; i, j, a well-known result which, however, is more easily seen by the vector method. The last formula, (21), is easily memorized in its determinantal form, which is

        | i    j    k  |
VAB =   | A1   A2   A3 |     (21a)
        | B1   B2   B3 |

In exactly the same way the reader will show himself that the cartesian expansion of AVBC, the triple product representing the volume of the parallelepipedon A, B, C, is

        | A1   A2   A3 |
AVBC =  | B1   B2   B3 |     (22)
        | C1   C2   C3 |

This, in fact, is the most familiar expression for the volume of the parallelepipedon constructed upon A, B, C as edges. Formula (22) gives also an immediate verification of the property AVBC = BVCA, etc., as in (19). For an interchange of two rows changes the sign of a determinant, so that a cyclic permutation of the three rows, amounting to two such interchanges, leaves its value unaltered; and so on.

For the scalar product we have immediately, remembering that i² = 1, ij = 0, etc.,

AB = A1B1 + A2B2 + A3B3. (23)
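The determinantal forms (21a) and (22) can be confirmed numerically; in this sketch (plain Python, helper names my own) the 3-by-3 determinant reproduces both the components (21) and the triple product, and a cyclic permutation of its rows leaves it unaltered, as just stated for (19):

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

def det3(m):
    """Determinant of a 3x3 array given as three rows."""
    return (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
          - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
          + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))

A, B, C = (1, 2, 3), (4, -1, 2), (2, 5, -3)

# (21a): the components of VAB are the cofactors of the top row i, j, k.
assert cross(A, B) == (det3(((1, 0, 0), A, B)),
                       det3(((0, 1, 0), A, B)),
                       det3(((0, 0, 1), A, B)))

# (22): AVBC equals the determinant of the three component rows.
assert dot(A, cross(B, C)) == det3((A, B, C))

# (19) again: a cyclic permutation of the rows leaves the volume unaltered.
assert det3((A, B, C)) == det3((B, C, A)) == det3((C, A, B))

# (23): the scalar product in components.
assert dot(A, B) == A[0]*B[0] + A[1]*B[1] + A[2]*B[2]
```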
As particular cases of (21) and (23) note the results for two unit vectors a, b which include the angle (a, b),

sin² (a, b) = (a2b3 - a3b2)² + (a3b1 - a1b3)² + (a1b2 - a2b1)²,
cos (a, b) = a1b1 + a2b2 + a3b3,

a1, b1, etc., being now the direction-cosines of a, b relatively to i, j, k as axes. For such is the meaning of the components of unit vectors.

In order to give at least one illustration of the utility of AVBC, let us consider three coinitial unit vectors whose end-points may be conceived as the vertices 1, 2, 3 of a spherical triangle drawn on a unit sphere. Let us use the colatitude and the longitude as in (7a). Without any loss of generality we may put the pole (θ = 0) into the vertex 1 and take the first meridian along the side 12; thus, a1 being the angle at 1, and s2, s3 the sides of the spherical triangle opposite 2 and 3,

r2 = i cos s3 + j sin s3,
r3 = i cos s2 + sin s2 . [j cos a1 + k sin a1].

This gives for the scalarly-vectorial product, by (22), since the first vector has no second and no third component, and the second vector no third component,

r1Vr2r3 = sin s2 sin s3 sin a1,

which, by the cyclical property (19), is also equal to r2Vr3r1 and to r3Vr1r2, and these products are obviously equal to

sin s3 sin s1 sin a2 and to sin s1 sin s2 sin a3,

where a2, a3 are the remaining two angles of the spherical triangle. Thus,

sin a1 / sin s1 = sin a2 / sin s2 = sin a3 / sin s3, (19a)

the fundamental "sine formula" of spherical trigonometry, following on the vector method as easily as the "cosine formula" given before. It is interesting to note that the "sine formula" is, in this circle of ideas, but the statement of the triple expressibility of the volume of the parallelepipedon r1, r2, r3, viz. as r1Vr2r3 or r2Vr3r1 or r3Vr1r2. Other examples are left to the care of the reader.

8. Iteration of vectorial multiplication. There is but one more important formula to be noted in connection with the vector product of vectors, viz.
a formula giving a convenient vector expansion of the result of repeated or iterated vector multiplication,

VA(VBC) or simply VAVBC,

which reads: having obtained the vector product of B, C, multiply it, again vectorially, by A. This ternary product, which occurs very often, is, of course, again a vector, to wit, perpendicular to A and to VBC; but the latter being itself perpendicular to B, C, our new vector VAVBC is coplanar with B, C, so that we know beforehand that the result will be of the form*

VAVBC = βB + γC,

where β, γ are some scalars. Since the ternary product is perpendicular to A, we have β(AB) + γ(AC) = 0, so that

VAVBC = λ{B(CA) - C(AB)},

where λ is a scalar. It remains to determine its numerical value. This can be done, for instance, in the following manner. First of all, A can always be assumed to be coplanar with B, C, since its part normal to B, C contributes nothing. Next, dividing both sides by ABC, the equation becomes

VaVbc = λ{b(ca) - c(ab)},

where a, b, c are the units of A, B, C. Now, multiply both sides scalarly by b, and notice that, by (19),

bVaVbc = (Vbc)(Vba) = sin (b, c) . sin (b, a).

Thus,

sin (b, c) . sin (b, a) = λ[cos (c, a) - cos (a, b) . cos (b, c)];

but, the three vectors being coplanar, we have

cos (c, a) = cos (b, a) . cos (b, c) + sin (b, a) . sin (b, c),

so that λ = 1. The required formula is, therefore,

VAVBC = B(CA) - C(AB). (24)

As an exercise, the reader may verify it by an iterated application of the cartesian expansion (21) or (21a). Having once obtained this important formula, there will be no difficulty in dealing with quaternary vector products, as VDVAVBC, which becomes (CA)VDB - (AB)VDC, etc. But such products will hardly occur in practice.

A notable property of the above ternary product and of its two cyclical permutations is that

VAVBC + VBVCA + VCVAB = 0, (24a)

identically.
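Both the expansion (24) and the identity (24a) can be verified on integer components, exactly, since they are algebraic identities (a plain-Python sketch; helper names and sample vectors my own):

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

def scale(s, a):
    return tuple(s*x for x in a)

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

A, B, C = (1, 2, 3), (4, -1, 2), (2, 5, -3)

# (24): VAVBC = B(CA) - C(AB).
lhs = cross(A, cross(B, C))
rhs = sub(scale(dot(C, A), B), scale(dot(A, B), C))
assert lhs == rhs

# (24a): the three cyclical ternary products destroy one another.
total = tuple(sum(t) for t in zip(cross(A, cross(B, C)),
                                  cross(B, cross(C, A)),
                                  cross(C, cross(A, B))))
assert total == (0, 0, 0)
```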
For the six right-hand terms of (24) and of the two similar equations destroy themselves in pairs.

* The trivial case of B, C collinear can be discarded; for then VAVBC = 0.

A particular case of (24) which often occurs is that in which C is equal to A and is a unit vector u, say. Then we have

VuVBu = B - (Bu)u, (24b)

whence we see also that VuVBu is the part of the vector B normal to u, in both size and direction. For (Bu)u is the part of B along u.

To close this section, and at the same time the essential part of the whole Vector Algebra, but a few more remarks which will be useful in connection with problems often occurring in practice. Let X be an unknown, and A, u two given vectors, the latter a unit vector. If we know of X only that

VXu = A, (a)

we cannot fully determine X. For to a solution of this equation we can add any vector mu (since Vuu = 0), and X + mu will again be a solution of this equation. In order to determine X uniquely we must have one more (scalar) datum. Let this be

Xu = σ, (b)

where σ is a given scalar. Then X is completely determined. In order to find its value explicitly in terms of the given A, u, σ, multiply the equation (a) vectorially by u; then, in virtue of (24),

X - (Xu)u = VuA,

and by (b),

X = σu + VuA, (c)

which is the required solution. This simple rule, (c), for solving the equations (a) and (b), will often be found helpful.

9. The Linear Vector Operator. Let R be a variable vector, that is to say, one that can assume in turn all possible sizes (tensors) and directions. Of each of these determined vectors we can speak as of the special value of the variable R. To have a good picture of such an abstract concept, imagine R as a straight, extensible and contractile string fixed at one of its ends at a permanent point O; then, its free end-point P occupying in succession all possible points of space, OP will represent the various values of R.
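As a numerical aside on the solving rule (c) of the last section, the sketch below (plain Python; the sample u, A, σ are my own, with A taken normal to u, since VXu is always perpendicular to u and equation (a) is otherwise impossible) builds X = σu + VuA and checks it against (a) and (b):

```python
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

u = (1/3, 2/3, 2/3)             # a unit vector
A = cross(u, (1, 0, 0))         # some vector normal to u
sigma = 5.0

# Rule (c): X = sigma*u + VuA.
X = tuple(sigma*ui + vi for ui, vi in zip(u, cross(u, A)))

# (a): VXu = A, and (b): Xu = sigma.
assert all(math.isclose(p, q, abs_tol=1e-12)
           for p, q in zip(cross(X, u), A))
assert math.isclose(dot(X, u), sigma)
```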
The vector R can, in such a connection, be advantageously called the position vector of the point or, if one prefers, of the particle P. Now imagine that there is another particle P', and let its position vector, with the same origin O, be called R'. Let there be some mechanism, or else our own imagination, which to every chosen position of P makes correspond a certain position of P'. This we may express by saying that to every value of R corresponds a certain value of R', by writing

R' = ϖR, (25)

and by calling R' a vector function of the variable vector R. If, as we assume, to every R corresponds but one R', determined in size and direction, we will say that R' is a monovalent function of R, and we will call ϖ a monovalent vector operator, the symbol of some operations to be performed on R in order to obtain R'. We can think of such operations in the algebraical, as well as in the physical sense of the word, as turning round the representative string, stretching or contracting it according to some more or less complicated prescription. It is needless to explain that an equation such as (25) is equivalent to three scalar equations: each of the components of R' equal to some function of, in general, all the three components of R.

Suppose now that R is represented as the sum A + B of some two vectors. In general the operations embodied in ϖ may be such that ϖ(A + B) is not the same thing as ϖA + ϖB. A good example of such an operator is that which converts an incident luminous ray into the refracted ray (cf. Simplified Method, quoted in Preface). But the operations represented by ϖ may also, in particular, be such that

ϖ(A + B) = ϖA + ϖB,

whatever the vectors A and B. If such be the case we call ϖ a distributive operator. An example of this kind is afforded by the "reflector," i.e. that operator which converts the incident ray into the reflected one.
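The "reflector" just cited can be made concrete. Its usual explicit form, R' = R - 2(Rn)n with n the unit normal of the mirror, is not written out in the text, so the formula here is supplied by me from ordinary optics; the sketch below checks that the operator so defined is indeed distributive:

```python
import math

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

def reflect(R, n):
    """The reflector: R' = R - 2(Rn)n, with n a unit normal
    (standard mirror formula, assumed; the book only names the operator)."""
    s = 2*dot(R, n)
    return tuple(r - s*c for r, c in zip(R, n))

n = (0.0, 0.0, 1.0)            # mirror lying in the i, j plane
A, B = (1.0, 2.0, 3.0), (-2.0, 0.5, 1.0)
AplusB = tuple(p + q for p, q in zip(A, B))

# Distributivity: reflecting a sum gives the sum of the reflections.
lhs = reflect(AplusB, n)
rhs = tuple(p + q for p, q in zip(reflect(A, n), reflect(B, n)))
assert all(math.isclose(p, q) for p, q in zip(lhs, rhs))
```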
The simplest example of a distributive vector operator is, however, an ordinary scalar number. A distributive, i.e. linear, vector operator ϖ is called symmetrical, or self-conjugate, when for any two vectors A, B,

AϖB = BϖA, (27)

and the conjugate ϖ' of a given operator ϖ is defined, for any A, B, by

AϖB = Bϖ'A. (28)

We shall use the letter ω for symmetrical operators only. (In fact, without the circumflex this last letter of the Greek alphabet has some symmetry.) Manifestly the symmetrical operator ω will be a great deal simpler than the asymmetrical ϖ. It is, therefore, very agreeable to see that any ϖ can be split into an ω and some other asymmetrical, but very simple, operator which is called an antisymmetrical (or skew) operator and which we will denote by a. The latter is defined most conveniently by saying that, for any A, B,

AaB = -BaA, ∴ AaA = 0, (29)

and therefore also a_ab = -a_ba, etc., and a_aa = 0, etc., so that the table for such an operator becomes

    0      a_ab    a_ac
  -a_ab     0      a_bc     (29a)
  -a_ac   -a_bc     0

which justifies the name. The announced property can shortly be written

ϖ = ω + a,

which is a symbolic short for ϖR = ωR + aR, where R is any vector operand. The said property is easily proved. In fact, let ϖ' be the conjugate of the given operator ϖ. Then we have, identically,

ϖ = ½(ϖ + ϖ') + ½(ϖ - ϖ'). (30)

But the first term represents a symmetrical operator, because, by (28),

A(ϖ + ϖ')B = AϖB + Aϖ'B = Bϖ'A + BϖA = B(ϖ + ϖ')A,

which precisely is the definition (27) of a symmetric operator. And the second term is antisymmetric, for

A(ϖ - ϖ')B = Bϖ'A - BϖA = -B(ϖ - ϖ')A,

as in (29), the definition of antisymmetric operators. This proves the statement, without the slightest need of splitting ϖ into its nine constituents ϖ_ab, etc.

We thus see that every linear vector operator can be written

ϖ = ω + a, (31)

where its symmetrical part is ω = ½(ϖ + ϖ') and its antisymmetrical part a = ½(ϖ - ϖ'). If the reader so desires he can introduce the nine coefficients of these operators. Then

ω_ab = ½(ϖ_ab + ϖ_ba) = ω_ba,

proving again that ω is self-conjugate, and

a_aa = 0, etc., a_ab = ½(ϖ_ab - ϖ_ba) = -a_ba,

proving that a is antisymmetric.
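The splitting (30)-(31) can be carried out concretely on the nine constituents ϖ_ab arranged as a 3-by-3 table (a plain-Python sketch; the sample numbers are my own):

```python
# Split an arbitrary table of constituents W[i][j] into its symmetrical
# and antisymmetrical parts, as in (30)-(31).
W = [[2, 7, 1],
     [3, 5, 0],
     [9, 4, 6]]

omega = [[(W[i][j] + W[j][i]) / 2 for j in range(3)] for i in range(3)]
a     = [[(W[i][j] - W[j][i]) / 2 for j in range(3)] for i in range(3)]

for i in range(3):
    for j in range(3):
        assert omega[i][j] == omega[j][i]           # symmetrical, (27)
        assert a[i][j] == -a[j][i]                  # antisymmetrical, (29)
        assert W[i][j] == omega[i][j] + a[i][j]     # recombination, (31)

# The diagonal of the antisymmetric table vanishes, as in (29a).
assert a[0][0] == a[1][1] == a[2][2] == 0
```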
Turning now to the antisymmetric operator a we can see from its definition (29) that it has a very simple meaning. In fact, let R be the vector operated upon. Then, by the second of (29), whatever the value of R, aR is a vector normal to R. Now, this condition can be satisfied by putting

aR = VwR,

where w is some fixed vector. But such being the case, we have also, for any A, B,

AaB = AVwB = -BVwA = -BaA,

so that the general definition (29) is completely satisfied. Thus, the antisymmetric operator is, dropping the arbitrary operand,

a = Vw;

in words, to operate with a is to multiply vectorially by a certain vector w. Ultimately, therefore, we can write, instead of (31), for any linear vector operator,

ϖ = ω + Vw. (32)

Writing w = w_a a + w_b b + w_c c, and so on, while aR = VwR, we find without difficulty that, if a, b, c be a right-handed system,

2w = a(ϖ_cb - ϖ_bc) + b(ϖ_ac - ϖ_ca) + c(ϖ_ba - ϖ_ab), (33)

which is the required expansion of w. Having thus shown that the antisymmetric part of any operator ϖ is simply a vectorial multiplier Vw, it will henceforth be enough to study the remaining part of ϖ, that is to say, the symmetrical operator ω.

A direction x such that ωx is collinear with x, say ωx = nx with some scalar n, is called a principal axis of ω, and n the corresponding principal value, both +x and -x counting for one axis. This is merely a definition. Let us now see whether at all such axes, and how many of them, do exist, and what are their mutual relations. Let us start with the last question. Suppose then that there are two different principal axes x and y. Then, by the very definition of such axes,

ωx = n1x, ωy = n2y,

and, ω being symmetrical, xωy = yωx, i.e. n2(xy) = n1(xy); whence, if n1 and n2 are different, xy = 0, that is, the two axes are mutually perpendicular. If, on the other hand, the principal values are all equal, n1 = n2 = n3, then every direction whatever will be a principal axis with the same principal value, in which case the operator ω degenerates into an ordinary scalar factor.

Thus, in the most general case the symmetrical operator ω can have three different,* mutually perpendicular principal axes, x, y, z; and only three.
Because the fourth, if it existed and carried a new n4, would have to be normal to those three, which, in our space, is nonsense; and if n4 were equal to n1, say, then the whole plane passing through the fourth and the first axis would consist of principal axes, and since this plane would cut the y, z plane, n2 and n3 could not be different from one another, against the assumption.

* I.e. such to which correspond different principal values.

Having thus settled the question about the number of the possible different principal axes of ω and their mutual orientation, it remains to see whether they exist, or better, to find them. The technical side of the latter problem will depend upon the manner in which ω is given. Suppose that its six coefficients

ω_aa, ω_bb, ω_cc, ω_ab, ω_bc, ω_ca

are given with respect to some arbitrarily fixed framework of normal unit vectors a, b, c, or, which is the same thing, that the three vectors ωa, ωb, ωc are given, say, equal to A, B, C, respectively, so that (ω being symmetrical) Ab = Ba, etc.

Let x be a principal axis and n the corresponding principal value (both to be found). Then if x1, x2, x3 are the direction cosines of x with respect to a, b, c, so that x = x1a + x2b + x3c, we have

ωx = x1ωa + x2ωb + x3ωc = x1A + x2B + x3C,

and since ωx = nx,

x1A + x2B + x3C = n(x1a + x2b + x3c),

or

x1(A - na) + x2(B - nb) + x3(C - nc) = 0. (35)

From this equation we see that the three vectors A - na, etc., are coplanar, so that the volume of the parallelepipedon constructed upon them is nil, i.e.

(A - na)V(B - nb)(C - nc) = 0. (36)

Since A, B, C are given, this is a cubic equation for the unknown n. Multiply it out and remember that a = Vbc, and therefore aVbc = 1. Then the result will be

n³ - n²(Aa + Bb + Cc) + n(aVBC + bVCA + cVAB) - AVBC = 0.
(36a)

Each of the coefficients of this cubic equation for the principal values of the operator ω has a simple geometric meaning: the first is the sum of the projections of the vectors A = ωa, etc., upon the conventional a, b, c; the second the sum of the volumes of the parallelepipeda a, B, C, etc.; and the last is the volume of the parallelepipedon A, B, C. At the same time we see that these three expressions are invariants of ω, i.e. independent of the choice of the reference system a, b, c. In fact, if n1, n2, n3 be the principal values of ω, which manifestly are intrinsic properties of the operator, independent of the reference framework, we have, by (36a),

Aa + Bb + Cc = n1 + n2 + n3,
aVBC + bVCA + cVAB = n2n3 + n3n1 + n1n2,     (37)
AVBC = n1n2n3,

where A = ωa, etc. These are very important formulae, exhibiting the three invariants of the symmetrical operator ω.

Now, if only A, B, C are real, as we assume, all these invariants, i.e. the coefficients of the cubic (36a), are real. That equation has, therefore, at least one real root. Let this be n1, and let us take the corresponding principal axis* as our reference axis a. Then

A = ωa = n1a; ∴ bVCA = n1Cc, cVAB = n1Bb,

and the left-hand member of (36a) becomes at once

n³ - n²n1 + (nn1 - n²)(Bb + Cc) + (n - n1)aVBC,

which is, as it should be, divisible by n - n1, leaving for the remaining two principal values n2, n3 the quadratic

n² - n(Bb + Cc) + aVBC = 0,

which gives

n2, n3 = ½(Bb + Cc) ± √{¼(Bb + Cc)² - aVBC}, (38a)

or, in terms of the coefficients ω_bb = bB, etc., since aVBC = ω_bb ω_cc - ω_bc²,

n2, n3 = ½(ω_bb + ω_cc) ± √{¼(ω_bb - ω_cc)² + ω_bc²}, (38)

so that, if only all the coefficients ω_ab are real, these two principal values and, therefore, also the corresponding principal axes are real. That they form with the first axis a normal system we already know.
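The invariance asserted in (37) can be checked by building ω from prescribed principal axes and values in a rotated frame and then computing the three coefficients of (36a) from its table of constituents (a plain-Python sketch; the particular orthonormal triad is my own choice):

```python
import math

# A right-handed orthonormal triad (the principal axes) and principal values.
e1, e2, e3 = (2/3, 2/3, 1/3), (-2/3, 1/3, 2/3), (1/3, -2/3, 2/3)
n1, n2, n3 = 2.0, -1.0, 4.0

# omega = n1 e1.e1 + n2 e2.e2 + n3 e3.e3, written out as a table.
w = [[n1*e1[i]*e1[j] + n2*e2[i]*e2[j] + n3*e3[i]*e3[j]
      for j in range(3)] for i in range(3)]

# First invariant, Aa + Bb + Cc: the trace of the table.
inv1 = w[0][0] + w[1][1] + w[2][2]
# Second invariant, aVBC + bVCA + cVAB: the sum of the principal minors.
inv2 = (w[1][1]*w[2][2] - w[1][2]*w[2][1]
      + w[2][2]*w[0][0] - w[2][0]*w[0][2]
      + w[0][0]*w[1][1] - w[0][1]*w[1][0])
# Third invariant, AVBC: the determinant.
inv3 = (w[0][0]*(w[1][1]*w[2][2] - w[1][2]*w[2][1])
      - w[0][1]*(w[1][0]*w[2][2] - w[1][2]*w[2][0])
      + w[0][2]*(w[1][0]*w[2][1] - w[1][1]*w[2][0]))

# (37): the invariants are the symmetric functions of the principal values.
assert math.isclose(inv1, n1 + n2 + n3)
assert math.isclose(inv2, n2*n3 + n3*n1 + n1*n2, abs_tol=1e-12)
assert math.isclose(inv3, n1*n2*n3)
```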
I We have written down the two roots (38) in the assumption, that 0) was given by prescribing its coefficients w^^ or the vectors | A, B, C, with respect to an arbitrary framework a, b, C. But/ as a matter of fact, this expansion of the roots is superfluous.! For, having taken a as one of the principal axes of o), we know' beforehand that b, c will be its remaining two axes, i.e. that ! B = a)b = W2b, and C = n^Q. \ * Whose direction cosines with respect to any a, b, c might at once be determined Irom (35) by taking in it n =«i. THE LINEAR VECTOR OPERATOR 35 Now, with these values, we have aVBC = W2W3aVbc = W2W3, so that (38a) becomes ^ = iK + «3)±Vi(«2-n,)» which is, as it should be, an identity. Thus, the only necessary thing was to state that the cubic (36a) has at least one real root, and this was immediately clear. Having thus ascertained the general properties of the principal axes of CO, let us take them as our (natural) reference system a, b, c, which we will now call i, j, k. Then, Wj, Wg, W3 being the corresponding principal values, the most general symmetrical linear vector function will be wR = n{\ (iR) + n^ (jR) + Wak (kR) , that is, «i times the first component of R along i plus, etc., or using the dot, instead of brackets, as separator, , as before. The conjugate of the general © will be t3r' = (rii.l + o-2J .m + o-gk.n. (41) The special symmetric dyadic t=i.i + j. j+k.k leaves, of course, any vector operand B intact, and is, therefore, called an idemf actor. It is also, for all purposes, equivalent to i. And if or be any scalar, then crt as an operator is equivalent to a- itself, as an ordinary numerical factor. Thus, expressions such as cr + a . b will again be dyadics, and require no further explanations. To close this section it will be enough to make a few remarks on the " multiphcation," i.e. the successive application of dyadics. If <^ = a . b and ^ = c . 
d be two dyads, and R any vector operand, we have obviously

φ(ψR) = a . b[c(dR)] = (bc)(dR) a = χR,

where χ is the dyad a(bc).d = (bc) a.d. Similarly, if γ be a third dyad, we have

φ(ψγR) = (φψ)γR = (φψγ)R,

the associative property, so that each of these expressions can be simply written φψγR. And the same is easily seen to hold if φ, ψ, etc., stand for binomial or polynomial dyadics. Again, since the scalar product of vectors is distributive, we have for any dyadic φ and any vectors R, S,

φ(R + S) = φR + φS,

and also, if ψ, γ be two more dyadics, the operational equation

φ(ψ + γ) = φψ + φγ.

In short, the distributive property holds for the multiplication of any polynomials of dyads, and therefore of dyadics. Such products can, therefore, be expanded as in ordinary algebra, the only necessary precaution being to keep the order of the operators and of the constituents of the dyads intact, since (in general) the commutative property does not hold. Thus, for instance,

(a.b + c.d)(e.f + g.h) = (be)a.f + (bg)a.h + (de)c.f + (dg)c.h.

Vectors not separated by dots are fused into scalar products, as (be), (bg), etc., and here of course the order is irrelevant; but it must be carefully preserved in the resulting dyads, such as a.f, not f.a (unless a, f are collinear). Apart from this precaution, the multiplication of dyadics is as easy and convenient as the common multiplication of polynomials, and it will be found to render inestimable services in the treatment of many geometrical and physical, especially optical, problems. Some illustrations of the latter kind will be found in the "Simplified Method, etc.," mentioned before. The final result of such multiplications of two or more polynomials will be a polynomial of dyads, say

A.B + C.D + E.F + G.H + etc.;

but since each of these antecedents and consequents can be expressed in the form xa
+ yb + zc, where a, b, c are any non-coplanar unit vectors, any such result can, in the first place, be reduced to a sum of nine dyads, viz.

σ11 a.a + σ22 b.b + σ33 c.c + σ23 b.c + σ32 c.b + ... + σ21 b.a,

and it can be proved that this can always be reduced, by a proper choice of two orthogonal systems, i, j, k and l, m, n, to the normal form (40), which is that of any, generally asymmetric, linear operator ϖ. Ultimately, the latter can with advantage be split into a symmetric operator and a vectorial multiplier Vw, as in (32).

10. Hints on Differentiation of Vectors. The concepts of differentiation and integration as applied to vectors do not belong to the subject proper of this booklet, which is Vector Algebra. Yet a few elementary remarks on the differentiation of vectorial expressions may be added here, as they are likely to be useful to some readers, and as they by no means require much space.

Let R be a variable vector. To have a possibly desirable picture, think of R as the position-vector of a particle moving about in space, round a fixed origin. Let t be any independent scalar variable, say the time. Then, ΔR being the vector increment, i.e. the vector drawn from the position of the particle at the instant t to that at a later date t + Δt, the quotient ΔR/Δt will be a certain vector, having a definite tensor (size) and a definite direction.
We may call it provisionally the average vector-velocity of the particle. If this quotient (a vector) tends, with indefinitely decreasing Δt, to some definite limit, definite both in size and direction, we call this limit-vector the derivative or the fluxion of R with respect to t (or the vector velocity of the particle), and denote it by dR/dt or Ṙ. In short symbols,

dR/dt = Ṙ = Lim ΔR/Δt.

This vector will, in our illustration, be tangential to the orbit of the particle, and its tensor will represent the particle's speed.

From this definition it follows at once that

d(R + S)/dt = Ṙ + Ṡ,

where R, S are any vector functions of the variable t. And, if r be the unit of R, so that R = Rr, we have of course

Ṙ = Ṙr + Rṙ.

Again, since the scalar product of two vectors is distributive, so that Δ(RS) = RΔS + SΔR plus terms of higher order, we have

d(RS)/dt = ṘS + RṠ.

In particular, if r be a unit vector, so that r² = 1, we have, by differentiating the latter condition, rṙ = 0, so that ṙ ⊥ r, which is also an obvious property. Similarly, for the vector product, which again is distributive,

d(VRS)/dt = VṘS + VRṠ,

the only precaution being to preserve the order of the factors, or, if this be inverted, to change the sign of the product in question. In quite the same way we have

d(AVBC)/dt = ȦVBC + AVḂC + AVBĊ,

and so forth. Even the case of linear vector functions, such as ϖR, does not call for lengthy explanations. If not only the operand R, but also the nature of the operator ϖ varies with t, we have

d(ϖR)/dt = ϖ̇R + ϖṘ,

since ϖ is distributive. Here ϖ̇ is the derivative of the operator. If, for instance, ϖ is represented as a dyadic, say A.B + C.D + E.F, we have

ϖ̇ = Ȧ.B + A.Ḃ + ... + E.Ḟ.

And if the form ω + Vw is used, we have

ϖ̇ = ω̇ + Vẇ,

where ω̇ can again be expanded, as the derivative of a symmetrical dyadic. It is scarcely necessary to add any further explanations.

INDEX

Addition of vectors, 3-7
algebraic sum, 7
antecedent, of dyad, 35
antisymmetrical operators, 29
area, of parallelogram, 17
associativity, of addition, 6
  of dyadics,
37
asymmetrical operators, 29
autoproduct, scalar, 13
axes, of operator, 31-35
Chain of vectors, 3
closed chains, 6
coinitial vectors, 3
collinear vectors, 7
commutativity of addition, 6
  of scalar product, 12
components, 8
conjugate operators, 29
consequent, of dyad, 35
constituents, of vector operator, 28
continuous operators, 26
coplanar vectors, 20
cosine formula, 16
Derivative of vector, 39
  of operator, 40
determinantal form of vector product, 22
difference of vectors, 10
differentiation of vectors, 39
distributivity, of dyadics, 37
  of linear vector operators, 26
  of scalar product, 14
  of vector product, 20
dyads, and dyadics, 35-38
Equality of vectors, defined, 2-3
Free vectors, 2
function, vector-, 26
Gibbs, 35
Heaviside, 35
Idemfactor, 37
invariants, of operator, 33
iterated multiplication, 23-25
Linear vector operator, 27
localized vectors, 2
Multiple of a vector, 7
multiplication of dyadics, 37-38
Negative factor, 7
nil vector, 6
normal and longitudinal parts of a vector, 36
normal form of dyadic, 36
Operators, 26
origin, of vector, 1
Parallelogram, and vector sum, 5
polar coordinates, 9
position vector, 25
postfactor, 35
prefactor, 35
principal axes of operator, 31
  values, 32
product of vectors, scalar, 12
  vectorial, 17
projection, 14
Pythagoras' theorem, 15
Reference system, 9
reflector, 26
refraction, 26
right-handed system, 18
Scalars, 1
scalar product of vectors, 11-13
self-conjugate operators, 28
separators, 14
sine formula, 23
skew operators, 29
spherical trigonometry, 16, 23
square of a vector, 13
stretcher, 26
subtraction of vectors, 10-11
sum of vectors, 3
symmetrical dyads, 36
  operators, 28
Tensor, 1
translation, 3
Unit vectors, 1, 8
Values, principal, of operator, 32
vector, defined, 1
vector product, defined, 17
volume, of parallelepipedon, 19, 22

PRINTED IN GREAT BRITAIN BY ROBERT MACLEHOSE AND CO. LTD. AT THE UNIVERSITY PRESS, GLASGOW.