THE LIBRARY OF THE UNIVERSITY OF CALIFORNIA LOS ANGELES The RALPH 0. LIBRARY .MA LOS ANGELES, CALIF. TEXT BOOKS BOUGHT & SOLD COLLEGE BOOK COMPANY 725 W. 6th ST. LOS ANGELES, CALIFORNIA -Or GENERAL PRINCIPLES OF THK METHOD OF LEAST SQUARES, WITH APPLICATIONS, DANA P. BARTLETT, S.B., PROFESSOR or MATHEMATICS. MASSACHUSETTS INSTITUTE or TECHNOLOGY. THIRD EDITION. BOSTON THE AUTHOR 1915. COPYRIGHT, 1915. BY DANA P. BARTLETT. TECHNOLOGY BRANCH HARVARD COOPERATIVE SOCIETY 76 MASSACHUSETTS AVENUE, CAMBRIDGE, MASS. 1933 Geology Library PREFACE. The preparation of this volume was undertaken with the view of presenting in as simple and concise a manner as possible the fundamental principles of the Method of Least Squares. While it is believed that everything essential to the solution of all ordinary problems has been included, no attempt has been made to develop at length those special methods and forms that are so useful and almost necessary in case large numbers of observations of certain kinds, such, for instance, as those met with in geodetic and astronomical measurements, are to be adjusted. Frequent references throughout the text, and more particu- larly the list oi works given on page v of the Appendix, will, however, enable the student to extend his studies in what- ever special direction his profession may require ; it being expected that this book will in such cases be looked upon merely as an introductory treatise. All of the works men- tioned have been freely consulted in the preparation of these pages, and the author desires in particular to acknowledge his indebtedness for many of the examples. DANA. P. BARTLKTT. CONTENTS. CHAPTER I. GENERAL PRINCIPLES. PAGES 1. Object of the Method of Least Squares. 3. Errors. 4. Constant Errors ; Theoretical, Instrumental, Personal. 6. Mistakes. 6. Accidental Errors. 7. Direct Observa- tions ; The Arithmetical Mean. 8. Real Errors. 9. Resid- uals. 11. Weighted Observations. 13. The General Mean. 16. The Curve of Error. 17. Laws of Errors of Observa- tion. 20. Derivation of the Equation of the Curve of Error. 22. The Method of Least Squares 1-16 CHAPTER II. THE ADJUSTMENT OF OBSERVATIONS. 23. Indirect Observations. 25, 27. Rules for Forming the Normal Equations. 28. Reduction of Equations to Weight Unity. 29. Relation Between the Weight of an Observation and its Measure of Precision. 30. Computation of Correc- tions. 32. Significant Figures. 33. Conditioned Observa- tions. 35. Special Cases. 36-43. Empirical Formulas and Constants. 42. Periodic Phenomena. 43. The Logarith- mic Solution. 44. Reduction of Equations to the Linear Form 17-36 CHAPTER III. THE PRECISION OF OBSERVATIONS. 48. The Constant k. 49. The Value of k in Terras of h. 61. The Mean of the Errors, or Average Deviation. 52. The Mean Error. 53. The Probable Error. 55, 56. The Rela- tions between p, r, a.d., h, p, and p. 58. Representation of (*, a.d., and r on the Curve of Error 37-46 CONTEXTS. CHAPTER IV. COMPUTATION OF THE PRECISION MEASURES. PAGES 59-61. Direct Observations all of the Same Weight. 62. Direct Observations, the Weights Not Being All Alike. 64-71. Func- tions of Independent Observed Quantities. 72. The Preci- sion of Measurements. 74. Functions of the Same Vari- ables. 76-85. Indirect Observations. 76. First Method of Computing the Weights. 77. Rule I. 78. Second Method of Computing the Weights. 79. Rule II. 80. Third Method of Computing the Weights. 81. Rule III. 82. The Mean Error of an Observation. 85. Observations of Unequal Weights. 86-89. 
Conditioned Observations 47-82 CHAPTER V. MISCELLANEOUS THEOREMS. 90. The Distribution of Errors. 92. The Rejection of Obser- vations. 93. Criterion for the Rejection of a Single Doubtful Observation. 95. The Huge Error. 96. Constant Errors. 98. Combination of Determinations having Different Con- stant Errors. 100. The Weighting of Observations. 101-103. Special Laws of Error. 104. Contradictory Obser- vations 83-96 CHAPTER VI. GAUSS'S METHOD OF SUBSTITUTION. 107. Checks on the Formation of the Normal Equations. 108. The Reduced Normal Equations and the Elimination Equations. 109. Checks ou the Solution of the Normal Equations. 110. Most Convenient Arrangement of the Com- putations. 111. Application of the Checks. 113. Solution of the Elimination Equations. 115. The Weights of the Unknown Quantities 97-111 GAUSS'S METHOD OF CORRKLATIVES 111-116 EXAMPLES 117-142 APPENDIX. THE THEORY OF PROBABILITY. 200. Definition ; Simple Events. 202. Compound Events. 204. Dependent Events i-v BIBLIOGRAPHY v-vi TABLES. . vii-xi THE METHOD OF LEAST SQUARES. CHAPTER I. GENERAL PRINCIPLES. 1. In scientific investigations of all kinds it is frequently necessary to determine the values of certain quantities by means of actual measurements either with or without the aid of instruments. The observations may be made directly upon the values of the unknown quantities or upon certain functions of the unknowns. In the latter case the values of the required quantities must be obtained by computation from the observed values of the functions. In order to obtain more accurate values of the unknowns than would be given by a single measurement, or set of measurements, the observations are usually repeated either in the same way and under the same conditions or in a variety of different ways and under vary- ing conditions. Under these circumstances it will invariably be found that the different measurements give discordant results, the amount of the discrepancies varying with the character of the observa- tions ; and the question that now presents itself is how to determine from these discordant observations the true values of the required quantities. From the nature of the case, however, we can not expect to obtain our values with absolute accuracy; all that we can hope for is to obtain those values which are rendered most probable after all the observations are taken into account, and, further, to determine the degree of confidence that can be placed in those values. 2 METHOD OF LEAST SQUARES. 2. The attainment of the above results constitutes the primary object of the Method of Least Squares. The method is also employed in comparing the relative worth of different measurements of the same quantity, and in determining the equation of a curve which shall suitably represent the relation between two variables in cases where the exact law connecting them is not known. Also, before making any observations, we may employ the method to determine how precise the component measurements of a series must be in order to yield a required degree of pre- cision in the final result; or, conversely, to determine what the precision of the final result will be, knowing the precision attainable in the component measurements. This latter appli- cation of the method will be treated at length in the course on " The Precision of Measurements." 3. Errors. The cause of the discrepancies between the results of our different observations is that every observation that is a measure is subject to error. 
These errors are of two kinds, Constant or Systematic Errors and Accidental Errors. 4. Constant Errors are errors which in all measures of the same quantity, made with the same care and under the same conditions, have the same magnitude, or whose presence and magnitude are due to some fixed cause. These constant errors may be of several classes, which are designated as follows: First. Theoretical Errors, such as those due to the refrac- tion or aberration of light, the effect of a definite change in temperature or moisture on our standards of measurement, etc. As soon as their causes are known the magnitude of these errors may be calculated and their effect eliminated from the observations. Second. Instrumental Errors, such as errors of division of graduated scales, defects in micrometer screws, eccentricity of circles, etc. These errors will be discovered by an examina- tion of the instruments and their effects eliminated from the GENERAL PRINCIPLES. 3 observations, either by a particular method of using the instruments or by subsequent computation. Third. Personal Errors. These are due to personal pecu- liarities of an observer, who always answers a signal too soon or too late, always estimates a quantity smaller than it is, etc. The character and magnitude of these errors may be deter- mined by a study of the observer, his "Personal Equation" may be obtained, and his observations thus corrected for this source of error. 5. Mistakes. Although of a somewhat different character, these should be considered in connection with constant errors. A mistake is made when a figure 3 is read for a figure 8, or when in reading a graduated circle which is numbered in both directions the angle is read 43 instead of the complementary angle 47, etc. These mistakes are usually of such a charac- ter that they may be detected by an inspection of the observa- tions and a proper correction made. 6. Accidental Errors are errors due to irregular causes, whose effect upon the observations is not determined by any circumstances peculiar to that particular set of measurements, and which cannot therefore be computed and allowed for beforehand. Such errors are those due to sudden changes in refraction owing to sudden and unobserved changes in tem- perature; unequal expansion of different parts of an instru- ment with change in temperature; shaking of an instrument in the wind, etc. But most important of all are those errors which arise from imperfections in the sight, hearing, and other senses of the observer, which render it impossible for him to adjust and use his instruments with absolute accuracy. After a full investigation of the constant errors, the observer should diminish the accidental errors as much as possible, both in number and magnitude, by taking every precaution and care in the measurements themselves. The problem now remains to combine the observations so that the remaining accidental errors shall have the least probable effect upon the results, 4 METHOD OF LEAST SQUARES. and it is to bring about this combination of observations that we employ the Method of Least Squares. When no more observations are made than are sufficient to determine one value for each of the unknown quantities, we must accept these values as the most probable ones. But if additional observations are made leading to discordant results, we can not take any one of them as the correct value, and in fact, as already stated, we shall probably not be able to obtain the true values of the unknowns. 
All that we can do is to find values of the unknowns which shall remove the discrepancies between the different observations and which shall be those values that are rendered most probable by the existence of the observations themselves. On first thoughts it may seem that these accidental errors, being due to so many different and unknown causes, will be beyond the scope of mathematical investigation. Neverthe- less, the theory of probability requires that these errors shall follow in magnitude and frequency a law that is capable of exact mathematical expression, and experience confirms the correctness of this law. For more extended remarks on these subjects see Holman, " Discussion of the Precision of Measurements," pp. 1-14. Merriman, " Text-Book of Least Squares," pp. 1-6. Chauvenet, " Spherical and Practical Astronomy," pp. 469-473. Wright, " Treatise on the Adjustment of Observations," pp. 11-18. LAWS OF ERRORS OF OBSERVATION. 7. The derivation of the general laws of the occurrence of errors of observation, and of the processes for determining the most probable values of the unknown quantities, will be based upon the following Axiom. If a series of n direct observations, M, M^ . . . M n , is made upon the value of a quantity M, all the observations being made with the same care and under the same circum- GENERAL PRINCIPLES. 5 stances, the most probable value M$ of that quantity is the arithmetical mean of the observations. Or f . 8. The Real Error (a) of an observation is the difference between the observed value of the measured quantity and the real value. 9. The l-tesidual (w) of an observation is the difference between the observed value of the measured quantity and the value rendered most probable by the existence of the observations. 10. Example. Eight observations are made upon the resistance of a coil of wire, the true resistance being 512. Find from these observations the most probable resistance, and also the real errors and residuals. Observations. Real Errors. Residuals. M. x v 512.4 -f .4 + .30 512.2 -J-.2 -J-.10 511.9 - .1 - .20 ^ 512.3 -f .3 -f .20 511.8 - .2 - .30 512.3 -f .3 -f .20 511.9 - .1 - .20 512.0 .0 - .10 Mean = 512.10 2v = ^00 From the observations, then, we should say that the most probable resistance of the coil is 512.10. It will also be noticed that the sum of the residuals is zero. That this is a general result following from the assumption of the arith- metical mean as the most probable value may be proved as follows : If the observations are J/,, J/j, . . . M n , the 6 METHOD OF LEAST SQUARES. arithmetical mean M^ and the residuals v^ u 2 , . . . v n , then we have t?! = J/i MO, v 2 = My MQ, . . . y n = M n M . 2,v='%lU~ nM = 2 M- ?,M, since M = n .-. Sv = (2) 11. Weighted Observations. The weight of an observa- tion expresses its relative worth compared with other obser- vations. Thus, if six observations are made upon the value of a quantity, five of which give the same result, while the sixth differs, in combining these two different results to obtain the most probable value of the unknown, the first value ought to have five times the influence upon the final result that the second has, since it has taken five times as much labor and time to obtain it. Hence in general we may say 12. The Weight (p) of an observation may be considered as representing the number of times the observation has been repeated and the same result obtained. 
The weights assigned to observations may be due to a variety of causes, as difference in skill of observers, difference in the instruments used or the circumstances under which the observations are made, etc. But whatever the cause, the effect on the final values of weighting an observation will be the same as indicated in the preceding paragraph. 13. Example. Suppose n observations, J/ 1} M 2 , . . . M n , of weights PI, p z , . . . p n , are made upon the value of a quantity M. To find the most probable value M of the quantity. From the above interpretation of the meaning of weight, we may consider that the whole number of observations is P\~\~Pz-\- - ' Pm or 2/>, and that the result M l has been GENERAL PRINCIPLES. 7 obtained in /> x observations, M 2 in p 2 observations, etc., Therefore, by (1) p l JWJ) is called the General Mean. If the residuals are v^ v 2 , . . . v n , we have Q (4) Which shows that in the case of direct observations of differ- ent weights the sum of the weighted residuals is zero. 14. If the observations are not made directly upon the values of the required quantities, the method of adjusting the results so as to obtain the best possible values of the unknowns will depend upon the laws which govern the dis- tribution of the errors of these observations. It is found in practice that the accidental errors of observations follow cer- tain well defined laws, and what these are may best be seen by taking an actual example. 15. Example, One thousand shots are fired at a target which is divided into a number of horizontal sections by lines one foot apart, the centre line of the target being in the middle of one of these spaces. The shots were distributed as follows : In Space. Shott. In Space. Shots. In Space. Shott. ! -Hi to + i 19 ~ 2 i to -3 79 4 4- i " -- i 212 -3^ -4 16 10 - \ " -1 204 -4 " -5 2 89 -1 " -2 193 In this case the errors are evidently the distances of the shots from the centre of the target. Further, as far as can 8 METHOD OF LEAST SQUARES. be judged from these one thousand shots, if another shot is fired the probability that this shot will fall between the lines .204 193 .010 .089 .190 .212 and -: -ii " ' .079 .016 .002 -\- 5 and -f 4 is .001 _^_4 4- 3 i " - 004 + 3* " + 2* '< + H " + + * " ~ The sum of the above probabilities is unity, and, therefore, as far as the preceding shots show the 1001st shot will cer- tainly hit the target. 16. Now using as abscissas the distances of the horizontal lines from the centre of the target, and as ordinates the num- ber of shots falling in the corresponding spaces, we may construct the following figure : Figure 1. And if the entire area of this figure is taken as unity then the area of each rectangle will denote the probability of a GENEEAL PRINCIPLES. 9 shot, if fired, falling within the corresponding space of the target. The graphical representation of the accidental errors of observation will always give a figure similar to the above. Hence denoting errors by abscissas, and their frequency by ordinates, the law of error of any series of observations may be represented by a curve whose general form is determined by Figure 1. This curve is called the " Curve of Error," and is shown in Figure 2. /I // K' Figure 2. 
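As a modern illustration of paragraphs 15 and 16, the following Python sketch turns the relative frequencies quoted for the thousand shots into the rectangle areas of Figure 1. The frequencies are those printed in the text; which one-foot band each belongs to is not reproduced here, and the ordering used below is only an assumption made for the sketch.

```python
# A minimal sketch of the target-practice illustration of paragraphs 15-16.
# The eleven relative frequencies below are the ones quoted in the text (they
# sum to unity); pairing each frequency with a particular one-foot band of the
# target is left aside here and the ordering is assumed, not taken from the text.

shots_fired = 1000

relative_frequency = [0.212, 0.204, 0.193, 0.190, 0.089,
                      0.079, 0.016, 0.010, 0.004, 0.002, 0.001]

# Recover the number of shots in each band from its relative frequency.
counts = [round(f * shots_fired) for f in relative_frequency]

print("total probability :", round(sum(relative_frequency), 3))   # 1.0
print("total shots       :", sum(counts))                         # 1000
print("counts per band   :", counts)

# Taking the whole area of Figure 1 as unity, each frequency is the area of one
# rectangle, i.e. the probability that a further shot falls in that band; the
# steady falling off with distance from the centre is the behaviour that the
# curve of error of Figure 2 is meant to reproduce.
```

Since the whole area is taken as unity, each entry is directly the probability that another shot, if fired, falls in the corresponding band, as stated at the close of paragraph 15.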
PDil In order that this curve may represent exactly the distri- bution of the errors in any given series of observations it ought to meet the axis of X at some definite distance to the right and left of the origin and coincide with the axis from there on, for in all actual observations there is a limit beyond which no errors occur. But as the exact point of meeting could not be determined for any given case, and as it would not l)e possible to obtain the equation of such a curve, we make it asymptotic to the axis of X, taking care that the error thus introduced shall in any set of observations be so small as to be negligible. 10 METHOD OF LEAST SQUABES. 17. An inspection of Figures 1 and 2 will now exhibit some of the general lawc of errors of observation and the corresponding properties of the curve of error. Lavs of Error derived from an inspection of Figure 1. Representation of these laws by the Curve of Error. First. Small errors are more frequent than large ones. Second. Positive and negative errors of the same absolute mag- nitude are equally likely to occur. Third. The probability of the occurrence of very large errors is very small. Fourth. The frequency of any error depends upon the magnitude of that error. The maximum point of the curve is on the axis of Y. The curve is symmetrical with respect to the axis of Y. The curve is asymptotic to the axis of X. The equation of the curve will be of the form if ~ *^\ / \ / 18. If, now, the total area between the curve and the axis of X be denoted by unity the probability that the error of any given observation will fall between the magnitudes x and x-\-dx will be represented by the area included between the curve, the axis of X, and the ordinates of the curve at the errors x and x -\- dx> or by y dx = (x) dx (6) And this probability will be known as soon as we find the form of the function <(#). 19. The above expressions in (5) and (6) are the ones that we should use if we regard the curve of error as repre- senting the law of occurrence of errors of observation. If, however, we look upon the curve as expressing the law to which we must make the residuals conform, in order that the values of the unknown quantities obtained from them may be GENERAL PRINCIPLES. the most probable values, we should replace x by u and use the expressions y=*() (7) and ydv = $(v)dv (8) for (5) and (6), respectively. THE EQUATION OF THE CURVE OF ERROR. 20. Let n observations, all of the same weight, with results MI, MZ, . . . M n , be made upon any function or functions of a number of unknown quantities z^ z 2 , . . . z q ; and let the residuals of M^ M^ . . . M n be v iy v 2 , . . . y n , and the probability of the occurrence of these residuals be (Vi) dv, <(v 2 ) do, . . . (v n ) dv, respectively. Then the probability of the simultaneous occurrence of all these residuals will be . . . (v n ) (civ)* (9) Each different method that might be adopted for computing the values of the unknowns z 1? z 2 , . . z q would lead to a dif- ferent set of residuals v^ w 2 > y n > but obviously that set of values of 2,, z 2 , . . . z q should be considered the best which corresponds to the particular set of residuals v t , v 2 > u the probability of whose occurrence is greater than that of any other set. Therefore the most probable values of z t , z 2 , . . . z q are those that make P in (9), or log/* in (10), a maximum. The values of z u 2 2 , . . . z q corresponding with this latter condition are those that satisfy equations (11). 
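A numerical check of this argument may be helpful. The sketch below (Python, written purely as an illustration) uses the eight resistance measurements of paragraph 10 together with the error law y = k e^(-h^2 v^2) that is derived in the pages that follow (equation 16); the particular values chosen for h and k are arbitrary, since they do not affect where the maximum of P falls.

```python
# A small numerical check of the argument of paragraph 20, using the eight
# resistance measurements of paragraph 10.  The error law phi(v) = k*exp(-h^2 v^2)
# is anticipated from equation (16); h = 1 and k = h/sqrt(pi) are arbitrary
# choices here, since they do not move the position of the maximum of P.

import math

observations = [512.4, 512.2, 511.9, 512.3, 511.8, 512.3, 511.9, 512.0]
h = 1.0
k = h / math.sqrt(math.pi)

def log_P(z):
    """log of P = phi(v1)*phi(v2)*...*phi(vn) for the trial value z,
    with residuals v_i = M_i - z and phi(v) = k*exp(-h^2 v^2)."""
    return sum(math.log(k) - h**2 * (m - z)**2 for m in observations)

# Scan trial values of z and keep the one that makes log P (hence P) largest.
grid = [511.5 + i * 0.001 for i in range(1501)]      # 511.500 ... 513.000
best = max(grid, key=log_P)

mean = sum(observations) / len(observations)
print("value maximising P :", round(best, 3))        # 512.1
print("arithmetical mean  :", round(mean, 3))        # 512.1
```

Maximising P is the same thing as minimising the sum of the squares of the residuals, which is the principle from which the method takes its name in paragraph 22.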
It maybe noticed that these equations also express the preliminary conditions leading to a minimum value of log / J , but the 12 METHOD OF LEAST SQUARES. nature of the problem is such that a maximum value of P evidently exists while a minimum does not, and it is there- fore unnecessary to investigate further the mathematical conditions for a maximum. Hence we have , __ n _ 1 d <(Vi) f ti \ 3 2 ? an( i there are as many equations as unknowns, hence as soon as we find the form of the function u) we can solve these GENERAL PRINCIPLES. 13 equations for the most probable values of z l5 2 2 , . . . z q . Since we have considered the general case, and the above results are to hold true whatever the number of unknown quantities and the form of the functions observed, we may deduce the form of \f/(v) by solving a special example. Example. Let n observations of equal weight be made upon the value of a single unknown z l5 with results M^ J/,, . . . M n , and let the residuals be v x , y 2 > v n . Then the most probable value of z t is given by differentiating with respect to z x , dv! dv 2 dv n 1 .. _ _ _ ^^_ _ i. __ ._, / o | dZi ~ dZi ~ 3z t substituting (a) in (14), changing all the signs, we have *0>i) + *(")+ . .^() = (b) But in this case, as was shown in (2), 1 + U 2 + ' = ( C ) In order that (b) and (c) may both be true the functional symbol i/> must indicate multiplication by a constant. That is, in general !/r(t;) = cw (15) Substituting this in (13) and (12), dv dv therefore ^ = cy -^~ (v) dz dz Integrating, log <(y) = ^cy 2 -J I 14 METHOD OF LEAST SQUARES. Since y=(v) is the equation of the curve of error, (7), we may therefore write it But on examination of the curve, y is seen to be a decreas- ing function of v, and hence the exponent of e is essentially negative. Accordingly we will write our equation in the form y=ke- h ^ (16) the values of k and h depending upon the character of the observations, but in all observations of the same kind and weight having the same values. This equation represents the law in accordance with which the residuals must be distributed in order that the best results may be obtained from our observations. But, as before mentioned, if we wish our curve to represent the most prob- able distribution of the real errors of observation we should write the equation in the form y=ke-*** (17) Hereafter we shall use without further remark either form of the equation according to the aspect in which we are considering our curve. An inspection of the above equation will show that it satisfies all the conditions noted in discussing the form of the curve of error in paragraph 17. 21. It is important to notice that in all discussions in the Method of Least Squares the number of observations is sup- posed to be large and always greater than the number of unknown quantities. As will be illustrated later on, para- graph 91, whenever this is the case there is a remarkable agreement between the results obtained in practice and those GENERAL PRINCIPLES. 15 indicated by the theory. And even when the observations are few in number the method still affords the best means at our command for their adjustment, the results obtained merely having a smaller weight than they would have had if derived from a greater number of observations. THE METHOD OF LEAST SQUARES. 22. We are now in a position to see whence comes the name " Least Squares." 
In paragraph 20 it was pointed out that whenever we make a series of observations, each observation of the set having the same weight, the most probable system of values of the unknown quantities will be that which corresponds with the set of residuals the probability of whose occurrence is a maximum. That is, the best set of values of the unknowns will be that which gives a maximum value to But from equation (16) this reduces to P = k n e~ h = 1 7 1 = z. j 7 .1 (d) substitute (d) in (b) K-.J = 5 .5 (e) " (d) in (c) 3 = 2 .5 (*) An inspection ,of the work in this example will show that for the adjustment of observations of equal weight on linear functions of the unknowns we may derive the following: THE ADJUSTMENT OF OBSERVATIONS. 19 25. Rule. For each observation write an "Observation Equation " / then for each unknown form a " Normal Equation" by multiplying the first member of each observa- tion equation by the coefficient of that unknown in that equation, adding the results and placing the sum equal to zero. Solve these equations simultaneously for the values of the unknowns. In solving for the most probable values of the unknowns the second members of the observation equations are very commonly written zero instead of v^ v 2 , . . . v n . For this is the form in which the equations naturally appear, and if the observations were exact the residuals would actually all be zero. The method of solution is the same in either case. 26. Observations of Unequal Weight. If the observations are not all of equal weight the same method will apply, except that in the formation of the normal equations each observation equation will be used the number of times denoted by its weight. Thus in the last example if the observations have the weights 4, 9, 1, 4, the normal equations will have the same form as in (B), page 18, but each part of each equation will be multiplied by the weight of the observation equation from which it is derived. This will give the NORMAL EQUATIONS. 4(2 1 -2 2 -1.7) + (-2 1 + 2 2 + 2 3 -1.0)(-l):=0 4( 2l - 2 2 - 1.7)(- 1) + (- ai 4- 2a + 2 , _ 1.0) 9(2, - 2.4) + (- 2l + 2 2 + 2 3 -1.0) + 4(2 8 -2,-3.0)(-l) = or reduced 5 2l 52 2 - 2 3 5.S=0 (a) 52!-(-92 2 32 3 6.2 (b) . 2l 32 2 +142 8 10.6=0 (c) the solution of which gives z, = 7.07 2 2 = 5.42 z :} 2.42 20 METHOD OF LEAST SQUARES. Further it will at once be seen that if p^ p 2 , . . . p n , are the weights of the corresponding observations, equations (20) take the general form: WEIGHTED NORMAL EQUATIONS. 5^2 , 9v H A '-.nVn--- 8v n Hence for the formation of the normal equations in weighted observations on linear functions of the unknowns, we have the following: 27. Rule. For each observation write an " Observation Equation " / then for each unknown form a "Normal Equation" by multiplying thejirst member of each observa- tion equation by the coefficient of that unknown in that equation and by the weight of that equation, adding the results and placing the sum equal to zero. Solve these equations simultaneously. 28. The same result will be obtained if we begin by multiplying each observation equation by the square root of its weight and then proceed according to the first rule (paragraph 25). This result illustrates the important principle that multi- plying a set of equations by the square roots of their weights reduces them all to equivalent equations of weight unity. 29. Relation between the Weight of an Observation and the Value of h. If in paragraph 20 the n observations THE ADJUSTMENT OF OBSERVATIONS. 
21 have weights jt> x , jt> 2 , . . . p n , and the quantity A values AU A 2 , . . . A n , then equation (18) becomes P=k l e~ h ^ v ^ Jc 2 e- h ** v f . . . k n e~ h ^ v ^ (dv) n The most probable set of values of the unknowns is that which makes P a maximum, and P is a maximum when li\ Vi' 2 + ^2 2 ^2 2 + hrfvn 2 is a minirmim. (23) The conditions for a minimum value of this expression are the following, which are then for this case the NORMAL EQUATIONS. But equations (21) are also the normal equations for this case. Hence (21) and (24) must be identical, and pi : Pz : Pn = /*i 3 : hf : . . . h,? (25) Tliat is, the square of A is proportional to the weight of the observation. Accordingly, since A increases in value as the quality of the observations is improved, it is culled " The Measure of Precision." 22 METHOD OF LEAST SQUARES. Further, it follows from (23) and (25) that the most probable system of values of the unknowns will be that in which Pi Vi*-\-p a Vt*-{- -Pn^n is a minimum. (26) And this is the most general form of statement of the principle of " Least Squares." The same principle is repre- sented in equations (21). 30. Computation of Corrections. If large numbers occur in the observations it is better to compute the most probable corrections to apply to the observed values rather than the most probable values of the unknowns themselves. In this way we can often avoid a large amount of numerical work. 31. Example. P^ jP 2 , jP 8 , P 4 , P& are five points whose altitudes above the mean level of the sea are to be determined from the following observations of difference of level. P x = 573.08 P 4 -P 2 = 170.28 P y -P l = 2.60 P t P,= 425.00 P 2 = 575.27 P 6 = 319.91 P S P 2 = 167.33 P 6 = 319.75 P 4 -P 8 = 3.80 An inspection of these observations shows that we may put ^ = 573-1-2! P 4 =745 + 2 4 (A) P 8 =742 + 2 3 where z l5 2 2 , 2 S , 2 4 , 2 6 , are small corrections whose most probable values are to be determined. We now have for OBSERVATION EQUATIONS. 573 _|_ Zl _ 573.08 = or 2!-. 08 = _ 573_ 2l _ 2.60=0 or 22-2!-. 60 = 575 -f 2 2 - 575.27 = or 2 2 .27 =ty THE ADJUSTMENT OF OBSERVATIONS. 23 742 + z 8 - 575 -z 2 - 167.33 = or z 8 -z 2 -.33 = 745 _|_ 2 4 742 - z 3 3.80=0 or z 4 -z 3 .80=0 745 + z 4 575 z a - 170.28=0 or 2 4 -z 2 -.28=0 745 _|_ 2 4 _ 320 -z 6 - 425.00=0 or z 4 z 5 =0 320 + 2 6 - 319.91 = or z 6 + .09 = 320 -i-z 6 - 319.75=0 or z 6 -f.25 = From these we now form the NORMAL EQUATIONS. 22 X - z 2 + .52=0 _ 2l _|_42 2 _ z s z 4 .26=0 - z 2 -f2z 8 -. z 4 -j- .47 = _ 2 2 _ 2 8 -|-3z 4 z 5 1.08=0 - z 4 -f-3z 5 -f- .34 = and solving, 2l =-.19; z 2 = .14; z s =.05; z 4 =.43; z 5 =.03 Substituting these in equations (A) we have for the most probable altitudes, P l = 572.81 P s = 742.05 P 5 = 320.03 P 2 = 575.14 P 4 = 745.43 If the original observation equations had been retained, the independent terms in the normal equations would have been 570.48 240.26 163.53 599.08 214.66 32. Significant Figures. The adjustment by the Method of Least Squares of observations which occur in practice, although not difficult, is apt to be long and laborious. Hence to reduce this labor as much as possible it is of great import- ance that careful attention should be given in the solutions to the proper use of significant figures. When in doubt, 24 METHOD OF LEAST SQUARES. 
however, as to the proper number of figures to retain it is better to keep too many rather than too few, as the superfluous figures can be rejected at the end of the computation ; while if too few are retained the results obtained from the compu- tations will be worthless. For a general discussion of the subject of significant figures see Holman's " Precision of Measurements," pages 76 to 84, but for the present the following rules will suffice for most cases. Rule 1. In casting off places of figures increase by 1 the last figure retained, when the following figure is 5 or over. Rule 2. In the precision measure retain two significant figures. Rule 3. In any quantity retain enough significant figures to include the place in which the second significant figure of its precision measure occurs. Rule 4. When several quantities are to be added or subtracted, apply Mule 3 to the least precise and keep only the corresponding figures in the other quantities. Rule 5. When several quantities are to be multiplied or divided into each other, find the percentage precision of the least precise. If this is 1 per cent or more, use four significant figures. .1 " " " " five " " .01 " " " " six " " in all the work. If the final result obtained in this way conflicts with Rule 3, apply the latter. Rule 6. When logarithms are used, retain as many places in the mantissce as there are significant figures retained in the data under Rule 5. The application of these rules is not always possible in the course of the work, since the precision measures may not be known until the end of the computation. But as a general rule it is sufficient in direct observations to retain one more THE ADJUSTMENT OF OBSERVATIONS. 25 place of figures than is given by the individual observations, and in indirect observations to retain two additional places. CONDITIONED OBSERVATIONS. 33. Conditioned Observations are those in which the unknown quantities must be determined not only so as to satisfy as closely as possible the observation equations, but also so as to satisfy exactly certain other conditions. These conditions must be less in number than the unknown quantities, otherwise the unknowns could be determined from the con- ditions alone. The adjustment of observations of this class may be reduced to the method already used for unconditioned observations in the following way. The observations are represented by " Observation Equa- tions," and the conditions by certain other equations, called " Condition Equations." Between these two sets of equations we will eliminate as many unknowns as there are conditions. From the resulting equations, which will be the same in number as the observa- tions, we will form in the usual manner the " Normal Equations" for the remaining unknowns. Having solved these normal equations and substituted the results in the con- dition equations, we shall obtain the values of the unknowns first eliminated. All conditions of the problem are now fulfilled, for the condition equations are satisfied exactly and, moreover, according to the principle of Least Squares our results are those rendered most probable by the existence of the observations. As in the example last considered, it is often more advantageous to compute corrections to the observed values of the unknown quantities rather than the values of the quantities themselves. 26 METHOD OF LEAST SQUARED. 34. Example. Find the most probable values of the angles of a quadrilateral from the observations, ^ = 101 13' 22" weight 3 J5 = 93 49 17 " 2 .. 
O= 87 5 39 2 D= 77 52 40 1 0' 58" The condition to be satisfied is in this problem (B) Let 2u z 2 , 2 8 , z 4 be the most probable corrections to add to the observed values. This gives for OBSERVATION EQUATIONS z l =Q weight 3 *;=o " 2 < C) and for the CONDITION EQUATION Eliminating z^ between (D) and (C), the equations from which the normal equations are to be derived become = weight 3 (E) z 2 = " 2 a, = " 2 THE ADJUSTMENT OF OBSERVATIONS. 27 Applying the rule in paragraph 27, these give the NORMAL EQUATIONS (F) Solving, and substituting the results in equation (D), we find z l =- 8.29 e,= - 12.43 Q . 2 2 = - 12.43 s 4 = 24.85 Applying these corrections to the observations (A), the most probable values of the angles are -4 = 101 13' 13".71 .#=93 49 4.57 H , C= 87 5 26.57 D= 77 52 15.15 Note. In eliminating unknowns between the observation and condition equations care must be taken that the obser- vation equations are not combined with each other or multi- plied by any quantity. For if this is done the weights of the observation equations will be altered. (See 28.) 35. In the above example it is evident that the corrections to be applied to the different observations are inversely as their weights. And, in general, when there is but one equation of condition, the observations expressing direct determinations of the unknowns, the corrections will be pro- portional to the coefficients of the unknowns in the equation of condition divided by the weights of the corresponding observations. A proof of this is given in paragraph 117. 28 METHOD OF LEAST SQUARES. The most common case is that in which these coefficients are all unity, as in the example just solved, and we may then derive the Rule. Find the difference between the theoretical and observed results and divide this correction among the observations in the inverse ratio of their weights. In the last example the sum of the observed angles exceeds 360 by 58". Therefore the correction to be applied to A is - 58 X - -=-58xi=- 8.29 EMPIRICAL FORMULAS AND CONSTANTS. 36. In the work so far considered the observations are supposed to be made either directly upon the values of the unknown quantities or upon some function of the unknowns whose form, and the constants entering into it, are definitely known. But another sort of problem frequently occurs, in which observations are made upon the values of a certain variable and the corresponding values of some function of it, the exact form of the function not being known. The object in this case is the determination of the most probable form of the function and the values of the constants involved ; that is, the derivation of the algebraic expression best representing the law connecting the variable and function. This expression may be looked upon as the equation of a curve, abscissas denoting values of the variable and ordinates values of the function, and for all values of the variable within the range of the observations we may determine from it the most probable values of the function corresponding. But except in special cases, where the number of observations is large, where the law connecting variable and function is well defined, and where the equation obtained is an accurate THE ADJUSTMENT OF OBSERVATIONS. 29 representation of this law, it cannot be assumed to apply beyond the range of the observations. And in no case would it be safe to make use of the curve very far beyond the limits of the observations. 37. 
The Method of Least Squares will not assist in deter- M , mining the form of the function. This must be settled upon beforehand, either from theoretical considerations or by constructing a plot, using values of the variable as abscissas and of the function as ordinates, when the smooth curve drawn through the points thus obtained will indicate the form of equation to be used. It is to be observed that this is a method of trial and will not necessarily give the most probable form of the function ; and in fact we may not be able to obtain the form that would be absolutely best. Further, several forms of equation may be known which would represent well the plotted points. In such a case that should be considered the best in which the sum of the squares of the residuals is found to be the least. 38. As soon as the form of the function is decided upon it should be reduced to the linear form, and the determination of the values of the constants involved is then a simple application of the preceding methods. As the " Observation Equations " in any given problem will all be of the same kind, it is usually advisable to write out the general form of the "Normal Equations" and arrange the computations in tabular form, while the retention of the proper number of significant figures is of particular importance in this work. 39. A case that frequently occurs is that in which the quantity y is a constantly increasing function of the vari- able a;, or where the plotted curve is approximately parabolic in form. Here the equation ... (27) may be taken to represent the relation between the variable 30 METHOD OF LEAST SQUARES. and the function. The larger the number of terms taken in the second member, the more accurately may the equation obtained be made to represent the results of the observations ; but the labor involved increases rapidly with increase in the number of terms, and if the plot shows a very nearly straight line the first two terms alone may suffice. 40. Example. In measuring the velocity of the current of a river the following results were obtained : Depths. Velocities. x V 1 4.86 2 5.14 3 5.15 4 4.85 5 4.24 6 3.36 7 2.16 8 0.67 The velocity at the surface is 4.250. Find the equation of a curve which will express the relation between x and V. Plotting the observations we find the curve ADJUSTMENT OF OBSERVATIONS, This is approximately parabolic in form and passes through the fixed point (0, 4.25). Therefore the relation between x and V may be expressed by the equation (A) and substituting in this the corresponding values of x and V as given by the observations, we shall have eight observation equations from which the most probable values of B and (7 are to be computed. All of the observation equations being of the form (A) we have the NORMAL EQUATIONS 4 4- B Sx 8 + 4.25 2z 2 - 2 Fa; 2 = (a) (b) For computing the coefficients in these equations it is most convenient to arrange the following table. X V Vx X* Fa; 2 x 8 x* 1 4.86 4.86 1 4.86 1 1 2 5.14 10.28 4 20.56 8 16 3 5.15 15.45 9 46.35 27 81 4 4.85 19.40 16 77.60 64 256 5 4.24 21.20 25 106.00 125 625 6 3.36 20.16 36 120.96 216 1296 7 2.16 15.12 49 105.84 343 2401 8 0.67 5.36 64 42.88 512 4096 86 111.83 204 525.05 1296 8772 2* SFse 2-e 2 2Fz 2 2* 2* 4 32 METHOD OF LEAST SQUARES. Substituting these results in (a) and (b) we have 8772 6^-1-1296^ -f 341.95 = (c) 1296(7+ 204^-f- 41.17 = (d) and solving, C =.1493 J? = .7465 (e) Therefore the required equation is V = 4.25 -f- .7465z - .1493(x) dx. 
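The same computation may be checked by machine. The Python sketch below forms the two normal equations of the example exactly as the rule of paragraph 25 directs and solves them; NumPy is used only for the 2 by 2 solution, and the printed coefficients agree with the values .7465 and -.1493 found above. (The column sum of x for the depths 1 to 8 is 36, and 4.25 x 36 = 153 gives, with Sum(Vx) = 111.83, the constant 41.17 of the first normal equation.)

```python
# A sketch reproducing the river-current example of paragraph 40.
# The observation equations are 4.25 + B*x + C*x**2 - V = v, one per depth,
# and the normal equations are formed by the rule of paragraph 25: multiply
# each observation equation by the coefficient of the unknown, add, and set
# the sum to zero.  NumPy is used only to solve the resulting 2x2 system.

import numpy as np

x = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
V = np.array([4.86, 5.14, 5.15, 4.85, 4.24, 3.36, 2.16, 0.67])
surface = 4.25                      # the fixed point (0, 4.25)

# Normal equations for B and C:
#   B*sum(x^2) + C*sum(x^3) + 4.25*sum(x)   - sum(V*x)   = 0
#   B*sum(x^3) + C*sum(x^4) + 4.25*sum(x^2) - sum(V*x^2) = 0
A = np.array([[np.sum(x**2), np.sum(x**3)],
              [np.sum(x**3), np.sum(x**4)]])
b = np.array([np.sum(V * x)    - surface * np.sum(x),
              np.sum(V * x**2) - surface * np.sum(x**2)])

B, C = np.linalg.solve(A, b)
print("B =", round(B, 4))           # about  0.7465
print("C =", round(C, 4))           # about -0.1493
print("V = 4.25 + %.4f x %+.4f x^2" % (B, C))
```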
Hence the sum of all the errors of the observations is ,%<*> xQO n I y (x) dx or 2w I x (x) dx. */- Jt 40 METHOD OF LEAST SQUARES. Dividing this last expression through by n we have /* D 2A /* a.d. = 2 I x(x)dx = I (x) dx, and the sum of the squares of these errors is n x 2 (x) dx. Therefore the sum of the squares of all the errors is n I x 2 (x) dx _ H a = - - C e -h-x* X 2 j x (%) But as shown in paragraph 49, h dx = I or e-l dx = If (b) VTT^-" ^- h Differentiating (b) with respect to A, -2A _00 THE PRECISION OF OBSERVATIONS. 41 and replacing the integral in (a) by its value as determined from equation (c), we have THE PROBABLE ERROR. 53. The Probable Error (r) of an observation is an error such that one-half the errors of the series are greater than it and the other half less than it. Or it is an error of such a magnitude that the probability of making an error greater than it in any given observation is just equal to the probability of making one less than it, both probabilities being one-half. The probability that the error of an observation will fall between x and x-\-dx being (x) dx, the probability that the error will fall between the limits r and r is P = (36) If r is the probable error, P is one-half, or (37) and from this definite integral r is to be found. Let t = A, .-. dt = hdx. Also when x = r, we have t = hr, and when x = 0, t = 0. Substituting these results in (37), we have VTT (38) 42 METHOD OF LEAST SQUARES. Denote hr by p. Then by interpolation in a table of values of this integral, the value of hr in (38) is found to be p = hr = .47694 .__.. .17691 = ft : h 54. When Z is small the values of J Jo found by expanding e~^ into a series and integrating the successive terms. Thus, by Maclaurin's Theorem, fV* Jo *8 /6 *7 - + 3 5[2_ 7[3_ RELATIONS BETWEEN Jl, r, .d., A, AND p. P 1 . From (40) r = -, and from (34) a.c?. = h 55 r = p a.d. V/TT or r = .8453 a. a.d. > r (44) 56. Further, since in (25) it was shown that p oc A 2 , it follows at once from (43) that * x h ' (45> That is, the weights of different determinations of a quantity vary inversely as the squares of their Mean Errors, their Probable Errors, or their Average Deviations. It is to be observed, however, that the determination of the relative weights of quantities from a comparison of their precision measures according to (45) applies only when the quantities are of the same kind and subject to the same con- stant errors, if any of the latter exist. (See 98.) The applications of (45) are numerous and important. 57. Example A. Suppose n direct observations, all of the same weight, be made upon a quantity, and that the probable error of a single observation is r. Then since the 44 METHOD OF LEAST SQUARES. weight of the arithmetical mean is n, its probable error r will be given by r 2 1 r -~ = - or r = = (46) Or in general, suppose 8 is any precision measure of an observation of weight p, and suppose p is the weight of a second similar quantity or observation, then the correspond- ing precision measure 8 of the latter will be (47) The case of most common occurrence is that in which p =. 1, and then we have 80 = -^= (48) Example J5. A line is measured five times and the aver- age deviation of the mean (A. D.) found to be .016 feet. How many additional measurements are necessary in order that the A. D. of the mean may be reduced to .004 feet ? Let x be the total number of observations required. Then x : 5 = .000256 : .000016 x = 80 Consequently the number of additional measurements required is 75. Example C. 
In two determinations of the quantity L there were obtained L v = 427.320 0.040, Z 2 427.30 0.16 Find their relative weights, and the most probable value of L and its probable error. THE PEECISION OF OBSERVATIONS. 45 Note. The above is the method commonly employed to denote that the probable errors of the observations are 0.040 and 0.16. , 16 2 16 From (45) * = - - = - ^2 4 2 1 From (3), the most probable value of L will be given by ' ^ = 427 + 16 X .32 + .80 = 427.319 and the weight of L being 17, by (48), r = -^ = .039 vrf Therefore we should write the result Z = 427.319 .039 REPRESENTATION OF [1, a.d., AND r ON THE CURVE OF ERROR. 58. To find the points of inflection of the Curve of Error we have .- ax y = ke-- - 2 h*k x d*y For a point of inflection, , ' 2 = 0, or 2 A 2 k e~ h - x ' 2 (2 A* x 2 - 1 ) = x = = ^ by (35). Ay 2 46 METHOD OF LEAST SQUARES. That is, the Mean Error is represented by the abscissa of the point of inflection of the Curve of Error. See OM in figure 2, page 9. Next, for the abscissa of the centre of gravity of the area to the right of OY, we have I y x dx aso = OD = Jo i\i /-/ rp y ' ' Ju 2 A / , a = 7= f \ir J o for, ( 18), j^y dx = ^ Integrating, 1 = a.d. by (34). Finally, if an ordinate PP' be drawn so as to bisect the area to the right of the origin between the curve and the axis of X, the Probable Error will be represented by the dis- tance of this ordinate from the axis of Y, For the proba- bility of the occurrence of an error less than the amount OP is then equal to the probability of the occurrence of an error greater than OP. This being the case, OP is the probable error by definition. CHAPTER IV. COMPUTATION OF THE PRECISION MEASURES. / DIRECT OBSERVATIONS. 59. Observations of Equal Weight. Given n direct obser- vations all of the same weight on a single quantity M, to find the Mean and Probable Errors and Average Deviation of a single observation and of the Arithmetical Mean. Let the observations be J/i, M^, . . . M n . u " arithmetical mean be M Q . " " real errors be x^ x 2 , . . . x n . " " residuals be v l5 v 2 v n- Denote the mean and probable errors and average devia- tion of a single observation by /*, r, and a.d., respectively, and the corresponding quantities for the arithmetical mean by fio, r , and A.D. Then by definition, and /* = If MQ represented the true value of M, the residuals would be the same as the real errors, and we should have /* = and if n is large this formula is practically exact. Hut when n is small a more accurate expression is necessary. To 48 METHOD OF LEAST SQUARES. obtain this let M -\- x be the true value of M. Therefore Xi = M l (M -}- x ) = v v x (- *o) = v z x -j- x ) = v n x Squaring, adding, and dividing by n = f = (2y 2 - 2 x< V + nx U* . w by (48) by (42) and (n - l)^ 2 = 2u 2 (49) (50) (51) (52) fKit\ i/ 2? 2 IL \/ V^_i t/ 2v 2 U. \/ - V n(n-l) /M flTl'i i/ t/ 2t? 2 / .fi7, i \/ COMPUTATION OF THE PRECISION ME AS USES. 49 60. In order to avoid the use of the squares of the residuals we may proceed as follows : From (49) n On the average, the values of the residuals will then be / n - 1 * = : \ -^~ * I Vn = \ a \ n I n 1 Adding and dividing by n, neglecting the signs of the residuals, H \ ^ / n \ n n \ n-l a.d. a.d. = (54) V / n(w--l) by (48) ^l.D. 
= - (55) w ^ n 1 by (41) r = >8453 Sty (56) ^n(n-l) .8453 i> and r = inzi (57) w- Vw 1 The mean errors may also be computed from the above by using the table of equivalents in paragraph 55, but this is not customary, formulas (50) and (51) being used for this 50 METHOD OF LEAST SQUARES. purpose. Results derived from (50) are to be regarded as more accurate than those obtained from (54), the latter being a second approximation. 61. Example. From the following measurements on the length of a base line find the most probable length and the values of the various precision measures. M V y 2 455.35 .02 .0004 .35 .02 4 .20 - .13 169 .05 - .28 784 .75 .42 1764 .40 .07 49 .10 - .23 529 .30 - .03 9 .50 .17 289 .30 - .03 9 3.30 + .70 .3610 455.330 - .70 M* Sw = Sw 2 By (50) and (51) .3610 r = .6745 p By (54) and (55) 1.40 a.d. = V/90 r == .8453 a.d. .20 .13 .15 .13 = .063 r a = .6745 u = .042 A ' D - = IF = - 047 r n = = .8453 A.D. = .040 COMPUTATION OF THE PEECISION MEASURES. 51 and we should write for the most probable length of the base line J/o = 455.330 .042 62. Observations of Unequal Weight. Using the same notation as above, with slight modifications, we will Let M Q represent the General Mean. " Pit Pzi Pn be the weights. " a.d.^ a.d. r 2i - - r n be the probable errors. " a.c?., /x, r, and v refer to observations of weight unity. Then by (48) t j. etc. If the " Observation Equations " are formed for this case they will be MI Jf = v M 2 3/o = -y 2 , . . . M n J/o = v n . And, as was shown in paragraph 28, if these equations are each multiplied by the square root of the weight of the corre- sponding observation, they will all be reduced to equivalent equations of weight unity. On performing this operation it will be seen that the residuals of the new equations become And evidently to these reduced observations the formulas of paragraph 59 apply. V 52 METHOD OF LEAST SQUARES. Therefore * pv l r = .6745 H (58) n I K = V n = .6745 |i, (59) = N/ . (60) Also, by a method similar to that used in paragraph 60, it may be shown that a.d. = = r = .8453a.y' M = = 1 4' 31".6 = 17.7 a* = r = .6745 a = 11.9 150 a.d. = = 15.8 V90 V74 r = .6745 uo = 1.4 .8453 A.D. = 2.1 = 4.0 1.8 1.5 M : 1 4' 31". 6 54 / METHOD OF LEAST SQUARES. FUNCTIONS OF OBSERVED QUANTITIES. 64. Theorem. Given any number of quantities and their Mean and Probable Errors and Average Deviations, to find the Mean and Probable Errors and Average Deviation of auy function of the quantities. Let the quantities be J/i, M 2 , . . . M q . " " mean errors be /u. t , ^ . . . p. q . " " function be M=f(M 1 ,M 2 , . .. M q ). " " mean error of M be E. " " probable error of M be R. " " average deviation of M be D. The derivation of the general formula will be simplified if we consider first a few special forms of functions. 65. Case I. Suppose Jff = _Mi J8f a . The number of observations from which M v and M^ and hence ^ and ^ have been determined is not necessarily known, but we may assume that for each quantity it is any large number n, and that the real errors of the observations are for J/i, a;/, a^", i'", . . . u Jf x ' X " X.,'" Then the real errors of M, computed from the separate observations on J/i and M 2 will be y. ' + n. > ~ " -|_ ~ " rf HI \ III * \ ~^~ * " 5 \ ' * 9 \ ""* n or JE 2 = |t 1 2 +ti 2 2 (64) since in the most probable case the term ^x^x^ will dis- appear, as there most likely will be as many positive as COMPUTATION OF TUB PRECISION MEASURES. 55 negative products of the same absolute magnitude of the form X-L x z . 
By successive applications of the above the same principle may be extended to cover the algebraic sum of any number of quantities. So that if M = M, M z . . . M q then E* = m 2 + H^ 2 + . . - lV = 2> 2 () Since probable errors and average deviations differ from mean errors merely by a constant factor, we ishall likewise have (66) . 2 (67) 66. Example. Given the telegraphic longitude results, h. m. sec. sec. (A) Cambridge west of Greenwich, 4 44 80.99 0/23 (B) Omaha "Cambridge, 1 39 15.04 0.06 (C) Springfield east " Omaha, 25 8.69 0.11 Find Z,, the longitude of Springfield, and its probable error. Z = A 4- B C = 5* 58 m 37'.34 0'.26 for by (66) R = y/(.23 2 -f .06 2 -f- .II 2 ) = .26 67. Case IT. Suppose 3 = a^f^ Using the same notation as in paragraph 64, the real errors of Jf will be n : f" 1 or E = CT.Ji! (68) 56 METHOD OF LEAST SQUARES. 68. Case III. Suppose M = a\ M v a 2 M 2 . . . a q M q By combining (65) and (68) also JJ 2 = SwV 2 (69) and J> 2 = S a 2 . 58 METHOD OF LEAST SQUARES. Find the length of the hypotenuse c and its probable error. In this example M = dM y/ 2 -f 6 2 By (70) J2-= ^r + T 70.65 _70.65j E = .78 and c 70.65 0.78 .Example B. If the probable error of x is r, what is the probable error of the common logarithm of x ? In this case M = Iog 10 MI = Iog 10 a; 9 lgio * lgio e da; a; Example C. If the weight of x is p, what is the weight p of sin x ? Denoting the mean errors of x and sin x by /x. and JE', respectively, we have, by (45) and (70), COMPUTATION OF THE PRECISION MEASURES. 59 Po V? and = p E* P p = - = p sec 8 x COS* X 72. Equation (70), which expresses the law of propaga- tion of error in functions of observed quantities, is one of the most important in the whole theory of the Method of Least Squares. Upon it in particular is based the discussion of th* " Precision of Measurements." This subject treats, in the first place, of the methods of finding the precision of a quantity obtained by computation from a series of measured quantities; ?.nd, in the second place, it investigates the pre- cision with which the component measurements of a series must be made in order to obtain a required degree of pre- cision in the final result. The following simple example will illustrate the character of the solutions : 73. Example. In the determination of a current bv a tangent galvanometer we have I = 10 tan < G where I is the current in amperes, II the horizontal com- ponent of the earth's magnetic force, G the galvanometer constant, and the angle of deflection. Given the errors 8i 8j 8s> i n -"> @ an( l tan <, to find the error A in I. By (70) 100 100 // 2 100 ZT 2 - V (a) 60 METHOD OF LEAST SQUARES. 100 // 2 Dividing this equation by 1 2 = - tan 2 we nave TAT P-TJ P bJ = bJ + bJ That is, the square of the percentage error in I is equal to the sum of the squares of the percentage errors in H, G, and tan . Hence if If is determined within .4 per cent. Q u n .2 " " tan " " " .1 " " then = V .16 -f- .04 -j- .01 = .46 per cent. (c) Next, suppose the value of I is required to within .1 per cent. To find the necessary accuracy in the determinations of H, 6?, and tan < when the error in each of these quan- tities is to have the same influence upon the total error. From (b) we shall now have A = - = .000577 8j_ = .00058^ 8 2 = .00058 G (e) 8 3 = .00058 tan <^> It is comparatively easy to obtain the necessary accuracy in the measurements of G and tan <, but difficult in the case of H. For additional work of this kind see Holman's " Precision of Measurements." 
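As a further illustration of formula (70), the Python sketch below propagates the three percentage errors of the galvanometer example numerically. The particular values taken for H, G, and the deflection are not given in the text and are assumed here only so that the sketch can be run; the resulting percentage error in I, about 0.46 per cent, does not depend on them.

```python
# A sketch of the propagation formula (70), E^2 = sum((dM/dM_i)^2 * mu_i^2),
# applied to the tangent-galvanometer relation I = 10 * H * tan(phi) / G of
# paragraph 73.  The values H = 0.18, G = 12.0 and phi = 40 degrees are NOT
# from the text; they are assumed purely so that the example is runnable.
# The percentage error in I, however, does not depend on them.

import math

def propagate(f, values, errors, step=1e-6):
    """Error of f(values) from the errors of its arguments, using numerical
    partial derivatives in equation (70)."""
    total = 0.0
    for i, err in enumerate(errors):
        shifted = list(values)
        shifted[i] += step
        partial = (f(*shifted) - f(*values)) / step
        total += (partial * err) ** 2
    return math.sqrt(total)

def current(H, G, T):               # T stands for tan(phi)
    return 10.0 * H * T / G

H, G, phi = 0.18, 12.0, math.radians(40.0)   # assumed values, see above
T = math.tan(phi)

# Percentage errors quoted in paragraph 73: 0.4% in H, 0.2% in G, 0.1% in tan(phi).
errors = [0.004 * H, 0.002 * G, 0.001 * T]

I = current(H, G, T)
E = propagate(current, [H, G, T], errors)

print("percentage error in I: %.2f%%" % (100 * E / I))
# about 0.46%, i.e. sqrt(0.4^2 + 0.2^2 + 0.1^2), as found in the text.
```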
COMPUTATION OF THE PRECISION MEASURES. 61 74. Combination of Functions of the Same Variables. It is to be noticed that equation (70) applies only when M is a function of independent quantities. If J/i, M 2 , . . . M q are merely different functions of the same quantities we must proceed as follows : Let MI = (2 X , 2 2 , . . . z fc ) M, = $ (2 U 2 2 , . . . t ) M = f ( Jf te Jf f ) If any single observations of 2 1? z 2 , . . . 2 fc are subject to errors a^, a; 2 , . . . ic^, the corresponding errors in Jl/j and J/2 will be for J/j, Jfi = a 1 x l -\- 2 2 -(-... a^x k (a) " J^, JT 2 = a/Xi -|- a 2 'a; 2 -f- . . . a k 'x k (b) Where a^ a z , . . . a k are the differential coefficients of J/i, and /, 2 ' . . . a t ' the differential coefficients of Jf 2 with respect to 2 X , 2 2 , . . . 2^. The corresponding error in M will then be X = AX : + A'X 2 (c) Where A and A are the differential coefficients of M with respect to M v and J^. Substituting in (c) from (a) and (b), X = (Aa-i.-\- A'a\) x^ -\- (Aa^ -}- A'a' z } x 2 . . . = a Xt -\- fix-i -{-... \X k Then if the number of observations or values of X be denoted by , . . . XV. 2 (d) 62 METHOD OF LEAST SQUARES. since in the most probable case the product terms will cancel out. Expanding (d) we have E* = (Aa, 4- .4V) 2 Mi 2 + (Aa, + A'a z ')* rf + . . . = ^'(^v -f 2 V . ) + ^"KV + 2 'V . . .) + 2yl^'(iiVi 2 + 2 2 W+...) (71) 75. Example. As a very simple problem take Jfi = 22!, 3/ 2 = 82^ /*! = 0.1, and M = J/i + Jf s . Then .4 = 1, A' = 1, a t = 2, a t ' = 3. By (71) ^ a = 4 X -01 + 9 X -01 4- 2 X 2 X 3 X -01 = .25 or ^ = 0.5 In this particular example the result may be found directly from (68) by substituting at first in M the values of M^ and M.,. Thus M = 23! 4- 3z t = 52 t .E' = 5/i t = 0.5 If M and Jf 2 had been independent quantities, by (64) or (69) we should have had E = V(2 X 0.1) 2 + (3 X O.I) 2 = 0.36 INDIRECT OBSERVATIONS. 76. The determination of the precision measures of the unknown quantities in case the observations are indirect involves a knowledge of the weights of the unknowns, and consequently the method of computing these weights must COMPUTATION OF THE PRECISION MEASURES. 63 first be demonstrated. It will be assumed at present that all the observations are of weight unity. FIRST METHOD OF COMPUTING THE WEIGHTS. Let the observations be in which M^ M^ . . . M n denote the actual observations, and Zj, z 2 , ... z q the most probable values of the unknown quantities. Let &! J/i = m u ki M 2 = m^ . . . k n M n = m^ Then the above equations give rise to the OBSERVATION EQUATIONS (A) By the rule, paragraph 25, we now form the NORMAL EQUATIONS 2i 2 a 2 -}- 2 2 2 ab -j- . . . z v 2 ay + 2 am = z t 2 ab _j_ 2 2 2 * 2 -f ... 2^ 2 i 2 2 ba. + ... $ 8 2 ?a = & ty (H) and (H') (I) Let fi be the mean error of an observation of weight unity. Let p zj be the mean error of z x . Let p Zl be the weight of z^. The mean errors of m^ wz 2 , . . . m n are the same as the mean errors of J/i, M^ . . . M n and each is accordingly equal to p. Therefore from (G), by (69) Pv* = a! 2 /* 2 + 02 V 2 + oV a = M 2 2a 2 = Ci/*' by (I) (J) But by (48) ^ = - Pzi Comparing this with ( J) we see at once that Qi = (K) Pzi Therefore, for the First Method of computing the weights we have the following : COMPUTATION OF THE PRECISION MEASURES. 67 77. Rule I. In the normal equation for z^ write 1 for the absolute term 2 am, and in the other equations zero for each of the absolute terms 2 bm, 2 cm, ... 2 qm. 
The value of z l found from these equations, is the recipro- cal of the weight of the value of z x obtained by the solution of the normal equations. To Jind the weights of z 2 , z 3 , . . . z q , proceed in a similar way, forming a corresponding set of equations for each unknown. SECOND METHOD OF COMPUTING THE WEIGHTS. 78. Write equations (B) of paragraph 76 in the form z t 2 2 + z 2 2 ab -\- . . . z q 2 aq -|- 2 am = A Zi 2 ab -f- z 2 2 b* -\- . . . z q 2 bq -f- 2 bm = B bq -f- qm = Q Then in the solution by the preceding method, equation (E) becomes -f- (M) in which, as was proved in (K), Q l is the reciprocal of the weight of Zi. Whatever method of elimination is employed in the solution of the normal equations, the coefficient of A in the value of z x must necessarily be always the same. Hence we have 79. Rule II. Write A, B, ... Q instead of zero in the second members of the normal equations and carry out their solution in any convenient way. Then the most probable values of z^ z 2 , . . . z are yiven by those terms in the results which are independent of A, Tt. . . . Q, 68 METHOD OF LEAST SQUARES. The weight of z is the reciprocal of the coefficient of A in the value of z x . The weight of z 2 fa the reciprocal of the coefficient of JS in the value of z 2 , etc., etc. THIRD METHOD OF COMPUTING THE WEIGHTS. 80. From the second, third, . . . equations of (L) find the values of z 2 , z s z q i n terms of z v and substitute in the first of (L) without reduction. Then the first of (L) becomes Rzi = T + A + terms in , (7, ... Q Where T is the sum of all the numerical quantities result- ing from the substitutions. Dividing through by Jl, T A. z l = -f -f terms in B, <7, . . . Q (N) I\ Jf T in which is the most probable value of z t , and, as was -B shown in deriving the second method, R = A. (O) From this follows at once 81. Rule III. Substitute in the normal equation for z t the values of 2 2 , z st . . . z q in terms of z as found from the remaining equations. Then before freeing of fractions or introducing any reduction factor, the coefficient of 2j COMPUTATION OF THE PRECIS ION MEASURES. 69 in this equation is the weight of the value of s t obtained in the solution. To find the weights of 2 2 , s 3 , . . . z q , proceed in a similar way with the normal equations for each of these unknowns. For the solution of an example by the three different methods see paragraph 84. THE MEAN ERROR OF AN OBSERVATION. 82. The next step will be to derive /x, the mean error of an observation of weight unity. In the following demon- stration the equations referred to by letters are those in paragraphs 76 to 81. Let the real values of 2 1} 2 2 , ... z q be and substituting in (A) we have i( 2 i -h i) + &!(Z 2 + Je,) -f . . . qi(z q -f- X q } -f- mj = A! (i + ^i) + ^2(22 + ^2) + ^O? + ff ) + 2 = A 2 (P) n(2i + *!> + ft w (3 2 + a; 2 ) -f . . . q n (z q -f- a?,) 4- 7n n = A n where A u A 2 , . . . A n are the real errors of J/i, J/^, . . . J/J,, or of wij, ra 2 , . . . m n . Multiply the first of (P) by a u the second by 2 , ... and add the results. This gives Zi 2 2 -)- z 2 2 A -f- ... z 7 2 ? -f- 2 aw -j- i 2 a 2 + x 2 2 * + . . . x q 2 ay = 2 a A But by the first of (B) the first line in this equation is equal to zero, and therefore 70 METHOD OF LEAST SQUARES. x l 2 a 2 -j- x 2 ab -f- . . . x q 2 aq 2 aA = Also x l 2 ab + x 2 2 W -f . . . x q 2 fy - 2 A = ......... (Q) a?! 2 ? + x 3 2 6? -f- . . . x q 2 2 , . . . and add the results. Then (z 2 + x^bv + . . . 
(z q -f- and multiplying the first of (A) by a 1} the second by a 2 , . . . and adding the results Say = Zj 2 2 -f- z 2 2 ab -j- . . . z 7 2 a Zi = - (d") 3 5 For finding the weight of z s the equations are 2z, -- 2z 2 z^ = (a"') - 2 Si + 3 z 2 =0 (b'") - z x + 3 z 8 - 1 = (c'") Solve for z 8 4 from (b'") 2 z 2 = - z t ' O substitute in (a'") 2 Zj 3 z 8 =0 2 X (c'") - 2z, + 6z 3 -- 2 = 3 z s - 2 = 2 3 SOLUTION BY THE SECOND METHOD. The normal equations will now be modified so as to appear in the following form : 2z t -- 2 z 2 - z 8 -- 0.7 = A (a) - 2 z l -4- 3 z 2 - 2.3 = B (b) - z l -f 3 z 8 - 0.4 = C (c) COMPUTATION OF THE PRECISION MEASURES. 75 Solving, 3 x (a) 6 z l - 6 z 2 - 3 z 8 - 2.1 = 3 A (c) - z l _ -{- 3 g, -- 0.4 = (7 5 Zl _ 6 z 2 -- 2.5 = 3 ^4 -j- C - 4 g t -4- 6 z 2 - 4.6 = 2 jg _ 2l 7.1 = ZA + 2J3+C (8) Zi = 7.1 and p tl = (d) o Substituting (S) in (b) 3 z = 16.5 _-6^ 5^--2<7 3 2 2 = 5.5 and jo^ = (e) 5 Substituting (S) in (c) z s = 2.5 and p Zi = (f) SOLUTION BY THE THIRD METHOD. The normal equations are now taken in their original form. 2 z, -- 2 z 2 - z 3 -- 0.7 = (a) - 22! + 3 z 2 - 2.3 = (b) - z, + 3 z a - 0.4 = (c) 76 METHOD OF LEAST SQUARES. To obtain z t and its weight we proceed as follows : z l .4 from (c) 2, = + 2 z l 2.3 from (b) 2 2 == 1- Substitute in (a) 4 4.6 2 t .4 2 z v - Zi _--_ --- 0.7 = 33 3 3 z l 7.1 Collecting terms, = o o 2l = 7.1 and p zi = - (d) For z 2 3 X (a) -j- (c) 5 sjj 6 z 2 2.5 = 6 z x = - s a + 0.5 5 12 Substitute in (b) z* 1.0 -f 3 3 2 2.3 = 5 3 Collecting terms, - 2 a 3.3 = 5 3 Zt = 5.5 and / = - (e) For 2, 3 X (a) + 2 X (b) 2 2l 3 z 3 - 6.7 = 3 6.7 2, = 2. + 2 ! 2 O ft 7 Substitute in (c) 2, 1- 3 2 8 0.4 = 2 COMPUTATION OF THE PRECISION MEASURES. 77 3 75 Collecting terms, z a - =0 L ' g z s = 2.5 and p sa = (f) A It is evident that the three methods give identically the same results and that the work is about the same in each case. COMPUTATION OF THE PRECISION MEASURES. Substituting the values found for z^ z 2 and z 3 in the observations equations (A), we have 7.1 _ 5.5 - 1.7 = Vt = - .1 .01 = v^ 2.5 2.4 = v a = -j- .1 .01 = v 2 2 - 7.1 -f 5.5 + 2.5 - 1.0 = v s = - .1 .01 = v, a 5.5 - 2.5 3.0 = v 4 = .0 .00 = v 4 2 .03 = In this example n = 4, q = 3. By (72) p = y = .17 By (74) r = .6745^ = .12 By (73) ^ = = .30 r zt = .20 3 9 By (75) u.d. = = .15 z t are a ^ s * n this case the residuals of the observations, and therefore to compute the precision measures we have V V* pv 8.3 68.9 206.7 12.4 153.8 307.6 12.4 153.8 307.6 24.9 620.0 620.0 1441.9 = = 1 By (81) M = = 38 = 20 r 2 = 13 V/3.5 =24 r^ = r Z3 = 16 = 29 r = 20 yl.75 We shall accordingly write for the most probable values of the angles of the quadrilateral A = 101 13' 14" 13" B = 93 49 5 16 ri 87 5 27 16 D = 77 52 15 20 In the original solution in paragraph 34 it is evident that the results were carried out to a greater number of places of significant figures than the character of the observations warranted. CHAPTER V. MISCELLANEOUS THEOREMS. THE DISTRIBUTION OF ERRORS. 90. Having developed the processes for the adjustment of observations according to the Method of Least Squares, it will now be interesting to show how closely the distribution of errors found in actual practice corresponds to the theo- retical distribution upon which our methods of solution are based. By formula (36), the probability that the error of a single observation will be numerically less than a is p = T= e ~^ dx < 82 > VTT Jo Let t = hx, .. dt = h dx. Also when x = 0, t = ; and when x = a, t =. ha = p^L. 
Substituting in (82),

P = \frac{2}{\sqrt{\pi}} \int_0^{\rho\,a/r} e^{-t^2}\, dt        (83)

Values of P for values of the argument a/r are given in Table I. Also, for any series of observations this quantity P will represent the fraction of the entire number which should have errors less than the amount a. Hence if P is multiplied by the whole number of observations the result will be the number of errors which should be less than the limit a.

91. Example. Forty measurements on the diameter of Saturn's ring were made by Bessel, with the following results:

   M        v        M        v        M        v        M        v
38".91    -.40    39".35    +.04    39".41    +.10    39".02    -.29
39 .32    +.01    39 .25    -.06    39 .40    +.09    39 .01    -.30
38 .93    -.38    39 .14    -.17    39 .36    +.05    38 .86    -.45
39 .31     .00    39 .47    +.16    39 .20    -.11    39 .51    +.20
39 .17    -.14    39 .29    -.02    39 .42    +.11    39 .21    -.10
39 .04    -.27    39 .32    +.01    39 .30    -.01    39 .17    -.14
39 .57    +.26    39 .40    +.09    39 .41    +.10    39 .60    +.29
39 .46    +.15    39 .33    +.02    39 .43    +.12    39 .54    +.23
39 .30    -.01    39 .28    -.03    39 .43    +.12    39 .45    +.14
39 .03    -.28    39 .62    +.31    39 .36    +.05    39 .72    +.41

From these the most probable value of the diameter is found to be

D = 39".308 ± 0".022

the probable error of a single observation being r = 0".136. Compare the theoretical and actual distribution of errors between

0".00 and 0".05
 .05  "   .10
 .10  "   .20
 .20  "   .30
 .30  "   .40
over  .40

In the following table the first column gives the successive values of the limiting error a, the second column the values of a/r, and the third column the corresponding values of P. The fourth column contains the differences between the successive values of P, and by multiplying each of these differences by 40, the number of observations, we have the quantities in column five, which are the numbers of errors that according to the theory should fall within the corresponding limits. Column six shows the actual number of residuals occurring between these limits.

   a       a/r       P        d       n       n'
  0.00    0.000    0.000
                            0.196     8        9
  0.05    0.368    0.196
                            0.184     7        6
  0.10    0.735    0.380
                            0.299    12       12
  0.20    1.471    0.679
                            0.184     7        8
  0.30    2.206    0.863
                            0.090     4        3
  0.40    2.942    0.953
                            0.047     2        2
   ∞        ∞      1.000

This is a close agreement considering that the number of observations is not very large. Also the number of errors greater than the probable error should be equal to the number less than it. On counting the residuals we find twenty-one less than 0".136 and nineteen greater.

THE REJECTION OF OBSERVATIONS.

92. After a series of measurements have been made, it is frequently found that one or two of the observations differ widely from the others, and hence it becomes a matter of great importance to establish, if possible, some criterion by which we may determine whether such discordant observations should be rejected or not. We are not concerned here with the question of the detection of a mistake or constant error, which a consideration of the circumstances of the observations or of the instruments might reveal, but it is assumed that there is nothing whatever to guide us except the mere fact of the unusual size of the residuals of the observations under discussion. To reject an observation merely because it differs considerably from the others is entirely unjustifiable, while to retain it without any investigation is a neglect of the evidence furnished by the observations themselves.
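The comparison made in Art. 91 is easily repeated by machine. The sketch below is in Python with the standard-library error function (an assumption of this edition); the residuals are those tabulated in Art. 91, and the rule adopted for counting at the bin boundaries is an assumption, so the observed column may differ by one from the counts printed above.

```python
from math import erf

RHO = 0.4769          # the constant rho = h*r appearing in equation (83)
r = 0.136             # probable error of one observation, Art. 91

def P(a):
    """Probability of an error numerically less than a, equation (83)."""
    return erf(RHO * a / r)

limits = [0.00, 0.05, 0.10, 0.20, 0.30, 0.40, float("inf")]

# Residuals v of Bessel's forty measures, from the table of Art. 91.
v = [-.40, .04, .10, -.29, .01, -.06, .09, -.30, -.38, -.17,
      .05, -.45, .00, .16, -.11, .20, -.14, -.02, .11, -.10,
     -.27, .01, -.01, -.14, .26, .09, .10, .29, .15, .02,
      .12, .23, -.01, -.03, .12, .14, -.28, .31, .05, .41]

for lo, hi in zip(limits, limits[1:]):
    expected = 40 * (P(hi) - P(lo))                # theoretical number
    observed = sum(lo <= abs(x) < hi for x in v)   # counted from the residuals
    print(f"{lo:4.2f} to {hi:4.2f}:  theory {expected:4.1f},  found {observed}")
```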
The adoption of any rigid criterion based upon the magnitude of the residuals is perhaps more satisfactory from a mathematical standpoint than from that of a practical observer, and some of the latter are of the opinion that no observation should be rejected entirely, even the most widely discordant ones being given a certain weight. In this latter case, however, the Theory of Probability will furnish a guide as to the proper weights to assign to the different observations.

Of the various criteria that have been proposed, that developed by Peirce (see Chauvenet, page 558) is perhaps the most complete. The derivation and application of this criterion is, however, somewhat long and complicated, and for all ordinary cases the following simple methods will give practically as good results.

93. Criterion for the Rejection of a Single Doubtful Observation. It was shown in (83) that in a series of n observations the number of errors numerically less than a should be nP, and therefore the number of errors greater than a should be

n - nP = n(1 - P)        (84)

If the value of the expression in (84) is less than one-half, the occurrence of an error of magnitude a will have a greater probability against it than for it, and hence the observation corresponding may be rejected. Accordingly the limit of rejection, a, of a single doubtful observation is obtained from the equation

n(1 - P) = 1/2,    or    P = (2n - 1) / 2n        (85)

94. Example. Fifteen observations on the value of an angle are made. Ought any of the observations to be rejected?

      M          v        v²       v'       v'²      v''      v''²
  2° 23'.90    -.30     .090     -.41     .168     -.33     .109
     23 .76    -.44     .194     -.55     .303     -.47     .221
     25 .21   +1.01    1.020     +.90     .810
     24 .68    +.48     .230     +.37     .137     +.45     .203
     23 .96    -.24     .058     -.35     .123     -.27     .073
     24 .26    +.06     .004     -.05     .003     +.03     .001
     24 .82    +.63     .397     +.52     .270     +.60     .360
     24 .07    -.13     .017     -.24     .058     -.16     .026
     23 .98    -.22     .048     -.33     .109     -.25     .063
     24 .14    -.06     .004     -.17     .029     -.09     .008
     24 .40    +.20     .040     +.09     .008     +.17     .029
     24 .38    +.18     .032     +.07     .005     +.15     .023
     24 .59    +.39     .152     +.28     .078     +.36     .130
     24 .10    -.10     .010     -.21     .044     -.13     .017
     22 .80   -1.40    1.960

  Mean 2° 24'.20       4.256             2.145             1.263

Using all the observations we find

M₀ = 2° 24'.20,    r = .6745 √(4.256 / 14) = .37

By (85),  P = 29/30 = .967.    By Table I,  a/r = 3.17,    ∴ a = 1.17

As the residual 1.40 is larger than a, we reject the last observation. From the remaining observations we now compute a new mean value and a new set of residuals, and we find

M' = 2° 24'.31,    r' = .6745 √(2.145 / 13) = .27

By (85),  P = 27/28 = .964.    By Table I,  a/r = 3.11,    ∴ a = .84

The third observation may accordingly be rejected. From the thirteen observations that remain we find

M'' = 2° 24'.23,    r'' = .6745 √(1.263 / 12) = .22

P = 25/26 = .962,    a/r = 3.08,    ∴ a = .68

Therefore no more observations are to be rejected.

95. The Huge Error. In cases where the number of observations is not unusually large, a simple and safe criterion for the rejection of a doubtful observation is found in the use of the "Huge Error." This is an error of such a magnitude that 999 out of every 1000 errors are less than it and only 1 is as large as or greater than it. Therefore the probability that the error of any given observation will be less than the "Huge Error" is .999, and from Table I, when P = .999, a/r = 4.9.

a = Huge Error = 4.9 r = 3.3 μ = 4.1 a.d.        (86)

Then in any limited series of observations, if an error greater than the huge error is found, we should reject the observation corresponding. See also, Holman, page 30; Wright, page 131.
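The criterion of Arts. 93-94 and the huge error of Art. 95 are both easy to mechanize. The sketch below is in Python (a choice of this edition; the helper names are illustrative); it inverts equation (83) by bisection instead of by Table I, and applied to the fifteen observations of Art. 94 it rejects the same two observations, its means and limits agreeing with the text to within the rounding used there.

```python
from math import erf, sqrt

RHO = 0.4769   # constant rho = h*r, so that P = erf(rho * a / r)

def rejection_limit(n, r):
    """Limit a of equation (85): erf(rho*a/r) = (2n - 1)/(2n),
    found here by bisection instead of by Table I."""
    target = (2 * n - 1) / (2.0 * n)
    lo, hi = 0.0, 10.0 * r
    for _ in range(60):
        mid = (lo + hi) / 2
        if erf(RHO * mid / r) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

obs = [23.90, 23.76, 25.21, 24.68, 23.96, 24.26, 24.82, 24.07,
       23.98, 24.14, 24.40, 24.38, 24.59, 24.10, 22.80]   # minutes, Art. 94

while True:
    n = len(obs)
    mean = sum(obs) / n
    v = [x - mean for x in obs]
    r = 0.6745 * sqrt(sum(e * e for e in v) / (n - 1))
    a = rejection_limit(n, r)
    worst = max(obs, key=lambda x: abs(x - mean))
    print(f"n={n}  mean={mean:.2f}  r={r:.2f}  limit={a:.2f}  huge error={4.9*r:.2f}")
    if abs(worst - mean) > a:
        obs.remove(worst)      # reject the doubtful observation and repeat
    else:
        break
```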
CONSTANT ERRORS.

96. Throughout our discussion of the methods of adjusting observations so as to obtain from them the most probable values of the unknown quantities, all constant errors are supposed to have been eliminated before the Method of Least Squares is applied in deducing the results. If this is not done, and each observation is subject to the same constant error, the final result will be affected by an equal amount, and, in short, the Method of Least Squares is not capable of removing or reducing the effect of errors of this kind. All that is accomplished by the use of the method is to reduce to a minimum the effect of the Accidental Errors.

Hence it will be seen that although by increasing the number of observations of a given kind we may increase the precision, that is, reduce the probable error, of our final result as much as we choose, yet we do not in this way necessarily increase the accuracy of the determination. But if the unknowns can be determined in several ways, or under a variety of different circumstances, with various instruments, or by different observers, then it is most probable that the constant errors of the different sets of measurements will be grouped about the true values of the unknowns according to the exponential law of error. Accordingly a combination of such observations will enable us to increase not only the precision, but also the accuracy of the final result, the constant errors of the different sets tending to cancel each other in the same way that the accidental errors of a single set do. It is for this reason that determinations of a quantity from observations made in a variety of ways are more valuable than those obtained merely from different sets of measurements of the same kind.

97. The probability of the existence of a constant error may often be expressed in the following manner.

Example. A standard 100 ohm coil is compared with a Wheatstone's bridge and the mean result found to be 100.90 ± 0.20. To find the probability that there is an error in the bridge between +0.30 and +1.50 ohms.

Suppose the result 100.90 ± 0.20 is treated as a single observation, and we find by an application of (83) the probability that the error of this observation is numerically less than 0.60 ohms. Here

a/r = .60 / .20 = 3.00,    ∴ P = .957

Hence, as far as is shown by the observations, the probability that 100.90 ohms is within 0.60 ohms of the true value is .957. But since it is known that the true resistance is 100 ohms, it follows that there is the same probability that there is a constant error in the bridge between +0.30 and +1.50 ohms.
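Table I is only a tabulation of the probability integral, so the computation of Art. 97 can be repeated with the error function of any mathematical library. A minimal sketch in Python (assumed for this edition; math.erf is the standard-library error function):

```python
from math import erf

RHO = 0.4769                     # rho = h*r, as in equation (83)

def prob_within(a, r):
    """Probability that an accidental error is numerically less than a,
    for a determination whose probable error is r (equation 83)."""
    return erf(RHO * a / r)

# Art. 97: result 100.90 +/- 0.20 ohms against a true value of 100 ohms.
# Probability that the accidental error is below 0.60 ohm, i.e. that the
# constant error of the bridge lies between +0.30 and +1.50 ohms:
print(round(prob_within(0.60, 0.20), 3))    # about 0.957
```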
98. Combination of Determinations having Different Constant Errors. In case two or more determinations of a quantity, together with their probable errors, are obtained, the method of combining them so as to secure the best final result was considered in paragraph 56, and in Example C, paragraph 57. But it was there assumed that all the results were subject to the same constant errors, while if this is not true the probable errors of the separate determinations bear no relation to their weights, and accordingly in such cases another process must be adopted.

To determine whether the different measurements may fairly be considered to have the same constant errors we may proceed as follows. Let the determinations of the quantity M be

M₁ ± r₁        (a)
M₂ ± r₂        (b)

and let the difference between these results be

d = M₁ - M₂        (c)

Then the probable error of d is, by (66),

R = √(r₁² + r₂²)        (d)

If d is of such a magnitude that an accidental error as great as it may reasonably be expected, we may assume that the constant errors of (a) and (b) are the same, and proceed as in Example C, paragraph 57. But if the probability of making two determinations which differ by the amount d is very small, we had best consider M₁ and M₂ to have the same weight, provided there are no special reasons for regarding one as better than the other. The final value of M will then be the arithmetical mean of M₁ and M₂, and its probable error will be found by (53).

99. Example A. An angle is measured by a theodolite and by a transit with results

By Theodolite,  24° 13' 36".0 ± 3".1
By Transit,     24° 13' 24"   ± 14"

What is the most probable value of the angle and its probable error?

Referring to the preceding paragraph, r₁ = 3.1, r₂ = 14, d = 12, and the probable error of d is

R = √(3.1² + 14²) = 14

Then from Table I the probability that the accidental error of a determination will be at least as large as 12 is found from

d/R = 12/14 = .86,    ∴ 1 - P = .57

That is, there is more than an even chance that two such determinations of the angle will differ by as much as 12". Hence it is fair to assume that the two determinations are not affected by constant errors of different magnitudes, and they would be combined as in Example C, paragraph 57.

Example B. Suppose the zenith distance, M, of a star, observed at two different culminations, is found to be

M₁ = 14° 53' 12".10 ± 0".30
M₂ = 14° 53' 14".30 ± 0".50

What is the best final value?

Here d = 2.2, R = √(.09 + .25) = .58, and for

d/R = 2.2 / .58 = 3.8,    1 - P = .01

Therefore the chance that the difference in the two determinations, due to accidental errors, will be as large as 2".2 is only one in a hundred. It is to be concluded then that the constant errors of observations at the two culminations differ by about 2".2, and as there is nothing to show that one measurement is more accurate than the other we will give them both the same weight and take the mean. Then the best value for the zenith distance is

M = 14° 53' 13".20 ± 0".74

For

M = 14° 53' + (12".10 + 14".30) / 2 = 14° 53' 13".20

and by (53),

r = .6745 √(2.42 / (2 · 1)) = .74

For a more extended treatment of this subject see Johnson, "The Theory of Errors and Method of Least Squares," chap. vii.

THE WEIGHTING OF OBSERVATIONS.

100. In case the relative worth of observations is not settled by methods already discussed, the proper weight to assign to each quantity in the final adjustment can only be determined from a full knowledge of all the circumstances of the measurements. Even then considerable experience in the particular work in hand is required before the best values for these weights can be assigned. The weight given to a quantity should never be considered final, but always subject to revision whenever new information with regard to the quantity is obtained. Thus an observation which at first is supposed to deserve a high degree of confidence is often found on later investigation to possess very little value, and vice versa. See also Wright, page 118.

OTHER LAWS OF ERROR.

101.
Although in the great majority of cases the distribu- tion of errors follows the exponential law thus far considered, there are a few special cases in which some of the suppositions made in deriving that law do not hold, arid hence for the adjustment of such observations the corresponding special laws of error must be determined. For instance, in applying the exponential law we assume a large number of observations, that each observation is sub- ject to the same law of error, that small errors are more likely to occur than large ones, and that positive and negative errors are equally probable. Now it is easy to conceive of cases where only positive errors can occur, or where the probability of the occurrence of a small error may not be greater than that of a larger one, etc. If we can determine the different sources of error in any case and the relative effect of each upon the quantity sought, we shall arrive at the law of error for that particular set of observations. The case of most common occurrence is the following. 102. Suppose all errors between the limits a and a are equally probable, and that there are no errors beyond these limits. Then if y = (x) is the equation of the Curve of Error, and its area is represented as in para- graph 18, we have (x) dx = 1 (a) or 2 (x) I dx = 1 (b) since by the supposition made (x) must be a constant. Integrating and solving for (), we have y = * ct MISCELLANEOUS THEOREMS. 95 To find the Mean Error we have by definition as in para- graph 52 p* = C a x 2 <(a;) dx / -a _ i C a a Jo x* dx a 7? The Probable Error is derived from the equation C r 1 <(a:) dx = J -r 2 1 C T j 1 _ I dx = a c/o 9 Finally, for the Average Deviation we have /"*flt a.d. = x < (x) dx J -a = _ I x dx a Jo 7 And the Curve of Error has the form (88) (89) (90) 96 METHOD OF LEAST SQUARES. That the average deviation and probable error are in this case equal to one half of a may also be seen from the defini- tion of these quantities. Example. In taking a logarithm from a four place table, what is the probable error of the mantissa ? In this case the maximum error is .00005, and all errors between .00005 and .00005 are equally probable. Therefore r = .000025 103. The only other special case of common occurrence is that in which the error of a quantity is due to two sources, each of which can with the same probability assume all values between a and a. Here it may be shown that the curve of error consists of two straight lines whose equa- tions are 2a x 2a -\- oc V = -;- and = _ (91) Also (i 1 = - a?, r = (2 - ] means the same as 26. For the sake of simplicity in demonstration it will be assumed that the observations are all reduced to weight unity. 98 METHOD OF LEAST SQUARES. 107= Checks on the Formation of the Normal Equations. If, as in paragraph 76, we take for OBSERVATION EQUATIONS -f- *22 + Q&q + m 2 = 2 (A) we shall have for NORMAL EQUATIONS ? + [aw] = [aft] % + [ftft] ,+ ... [fty] z 9 + [ftm] = ......... (B) i + [ft?] t + . [??] z g + [?m] = Let i + *i + . q\ + m i = 5 i 8 + *2 + ' 2* + ^2 = *2 ......... (C) a n -f- ft -f- . . . q n -j- m n = s n ... [a] + [ft] + . . . [ ? ] 4. [m] = [s] Multiplying the first of (C) by 7W 1? the second by w 2 , . . . and adding, there results [am] + [6w] -j- . . . [gra] + [wiwi] = [m] (93) Next multiplying each of equations (C^ by its a and adding, and then each by its b and adding, etc., we have GAUSS'S METHOD OF SUBSTITUTION. 99 [cm] + [6] + . . . [ag] + [&] = [*] [aft] + [&&] + . . . [6g] + [&w] = [&] ......... (94) [ag] + [6g] + . . . 
[gg] -}- [gm] = [g] Equation (93) will be satisfied if the absolute terms in the normal equations are correct, and equations (94) when the coefficients of the unknown quantities are correct. These check the formation of the normal equations. 108. The Reduced Normal Equations and the Elimination Equations. The value of z l in terms of the remaining unknowns, derived from the first of equations (B), is [oft] [oc] [am] ' [aa] 2 " [aa] Z * ~ ' [aa] Substituting this in the remaining n 1 equations, they become - [ao] + . . [cm] - [o] = L J ' -f L J [aa] And letting (F) 100 METHOD OF LEAST SQUARES. the above equations take the following form, which, being the same as that of the original normal equations, they are called the FIRST REDUCED NORMAL EQUATIONS, [ftfl, 1] Z 2 + [ftc, 1] Z 3 + . . . [fy, 1] z q 4- [ftm, 1] = [ftc, 1] z 2 + [cc, 1] 2, + . . . [eg-, 1] z g 4- [cm, 1] = (G) \bq, 1] 2 2 4~ [ C 3S 1] z s 4~ [?!?> 1] s g 4~ [? m 1] = An inspection of equations (F) will render it easy to form a rule for writing out any one of them. Now by means of the first of equations (G), eliminating z 2 from each of the others in the same way that z^ was eliminated from the normal equations, there results the SECOND REDUCED NORMAL EQUATIONS [cc, 2] z 3 4~ [c<7, 2] z q 4- [cm, 2] = (H) [eg, 2] z 3 4~ [? 2] z q 4~ [$^> 2] = In which [ftc, 1] [c, 1] [cc,2] = , 1] (I) Continuing this process we shall finally arrive at the single equation - 1 ] = o (J) from which the value of z q is determined. GAUSS'S METHOD OF SUBSTITUTION. 101 The value of z q _ will then be obtained by substituting the numerical value of z q in the first of the preceding set of equations, and so on, until finally 2 t is obtained from the first of the original normal equations. The equations from which the unknowns are actually determined are then the following, called the ELIMINATION EQUATIONS. -f- . . . [a#] z q -f- [aw] = ]* ff + [6m, 1] = (95) 4- [> q-i] = o It may be seen from the rule in paragraph 81 that <1 1] i 8 ^ e weight of 2 g , and the weight of any unknown might be found at the same time as its value by making it the last in the order of elimination, but except in special cases the weights had best be obtained by the general process of paragraph 115. 109. Check on the Solution of the Normal Equations. Multiplying the first of the Observation Equations (A) by m t , the second by m 2 , . . . and adding the results, we have [my] = [am] z t -j- [6m] 2 2 -f~ \. < l m ~\ z q ~\~ [ W4 >"] But in equation (T), paragraph 82, it was shown that [my] = [vy]. Therefore [yy] = [am] z l -j- [6m] 2 2 -(-... \_qni] z q -f- [mm] Substituting in this the value of z l from the first of (95), we get the result [ww] = [6m, 1] z a -|- [c/, 1] z 3 -f . . . in which 102 METHOD OF LEAST SQUARES. n i-i n n C a ^l [W*1 [bm, 1] = [im] L _ J L J [aa] r in r -> [mm, 1] = [mml (K) [am] [am] [(/(/] being similar in form to equations (F). Next eliminating z a i n a like manner, we get [uw] = [cm, 2] a, + . . . [?m, 2] z ? -J- [mm, 2] and continuing this process it finally appears that [yv] = [mm, q] [96] 110. Arrangement of the Computations. In computing the coefficients that appear in the "Auxiliary" or "Reduced Normal Equations" it is most convenient to arrange the work in tabular form. The arrangement of the solution will be illustrated for an example containing four unknowns but it will be evident that the process can be extended to cover any case. 
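The auxiliary quantities [bb, 1], [bc, 1], ... of equations (F) and (I) are formed mechanically, one compartment at a time, which makes the scheme easy to program. The following sketch is in Python (a choice of this edition; the function name is illustrative). It carries an array of normal-equation coefficients through successive reductions, and is applied to the three-unknown system whose normal equations were printed in Art. 84, as recovered from that example.

```python
def reduce_once(N):
    """One step of Gauss's method of substitution.

    Row i of N holds the coefficients of one normal equation bordered by
    its absolute term (and, if desired, the check column s).  The result
    contains the auxiliaries of equations (F), e.g.
    [bb,1] = [bb] - [ab][ab]/[aa],  [bm,1] = [bm] - [ab][am]/[aa],
    that is, the coefficients of the first reduced normal equations (G).
    """
    pivot = N[0][0]
    return [[N[i][j] - N[0][i] * N[0][j] / pivot
             for j in range(1, len(N[0]))]
            for i in range(1, len(N))]

# Normal equations of Art. 84; columns are a, b, c, m.
N = [[ 2.0, -2.0, -1.0, -0.7],
     [-2.0,  3.0,  0.0, -2.3],
     [-1.0,  0.0,  3.0, -0.4]]
first  = reduce_once(N)        # rows b, c; columns b, c, m
second = reduce_once(first)    # row c; columns c, m
print(first)    # [[1.0, -1.0, -3.0], [-1.0, 2.5, -0.75]]
print(second)   # [[1.5, -3.75]]  ->  z3 = -(-3.75)/1.5 = 2.5
```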
Let the Observation and Normal Equations be represented by equations (A) and (B), there being only four unknowns z \i 2 2> z s> Z 4 an( i arrange a table as on the next page. In this scheme of solution the upper lines of the rows in the first compartment contain all the quantities that appear in the Normal Equations, together with [mm] and the quantities in the column headed * which are used in checking the results in accordance with equations (93) and (94). The other compartments contain the corresponding quantities for the Reduced Normal Equations, and the first line in each compartment gives the coefficients in the Elimination Equations. GAUSS'S METHOD OF SUBSTITUTION. 103 SCHEME A, SOLUTION OF THE NORMAL EQUATIONS. a b e d m I [] 0*] log [on] log [aA] [ae] log [ae] [arf] log [arf] [am] log [am] log [at] \ gA b log^ A [aA] log ^6 [ ac l [4rf] [4m] At, [am] log A b [am] [4,]' log A b [a] log At [ec] log A c [ac] [erf] [em] A e [am] log A c [am] W AM log A e [as] log J rf [arf] [rfm] At [am] log AJ [am] [rf5] 4M log ^ rf [aj] [mm] A m [am] log A m [am] log J., [at] [44, 1] log[, 1] [4*. 1] log[6e.l] [Arf,l] Iog[4rf, 1] [fa, I] log [4m, 1] log [4*. 1] log B. log 5. [ee, 1] log's, [be, 1] [erf, 1] lof/t^l] [c, 1] log \ [4s, 1] B* [W. 1] [rfm, 1] ^,[4m, 1] log Z?,f [4m, 1 ] [rfs, 1] ^ [4*. 1] [mm, 1] bf^Jn logC. 0- 2] log [ec, 2] [erf, 2] log [erf. 2] [cm, 2] log [cm, 2] [ 2] log [ci, 2] [rfrf,2] C'_[crf,2] log C, [erf, 2] [rfm, 2] C d [em, 2] logC,,[em,2] [rf, 2] Cd [e, 2] log C, [e,, 2] [mm, 2] C n [cm, 2] log C m [cm, 2] [nu.2] CL, ['.*] logC m [c,,2] log ., = log- 4 [rfrf.3] log [rfrf, 3] [rfm, 3] log [rfm, 3] [rf,. 3] log [rf, 3] [mm, 3] D m [rfm, 3] og /) [rfm, 3] [m*, 3] Dm [rf<. 3] log />. [rfj, 3] M = [mm. 4] [m,. 4] 104 METHOD OF LEAST SQUARES. The logarithms of the quantities in the first row of each compartment are also written in, and from these by proper subtractions are obtained the logarithms in the margin, where [aft] [aa]' A c = c = [ac] ... A m ... B m [am] [aa]' [ftc, 1] [ftw, 1] U>t>, ij - [ftft, 1] Now in each compartment adding the logarithms at the margin to each of the logarithms in the first row of that com- partment we obtain the corresponding logarithms written in the other rows. The numbers represented by these logarithms are next written above them, and if each of these quantities is then subtracted from the one above it the result will be the corresponding quantity in the compartment below. Some of the squares in each compartment are left vacant as the quantities belonging to them have already appeared above. 111. Application of Checks. Also, by (93) and (94), in the first compartment the quantities in the first lines of the last column should be equal to the sum of all the quantities in the first lines of the corresponding rows plus the quantities similarly situated above the first terms of the rows. Similar checks will apply in each compartment; for if from the second of equations (94) we subtract the product of the first equation multiplied by AI, we have [W, 1] + [ftc, 1] + . . . [fon, 1] = [ft., 1] (97) In the same manner we may show that a corresponding check holds throughout, so that finally we shall have [mm, 4] = [ma, 4] (98) GAUSS'S METHOD OF SUBSTITUTION. 105 The last compartment of the table is added to give this final check and the value of [yy] in accordance with equa- tion (96). 
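The check equations (93) and (94) and the whole course of the solution can also be followed in miniature on the small system of Art. 84, whose observation equations are simple enough to verify by hand. The sketch below uses Python with numpy (an assumption of this edition; the data are those recovered from Art. 84). It forms the normal equations with the s-column checks, solves them, and takes the weights from the reciprocals of the diagonal of the inverted coefficient array, which is Rule I restated in matrix form.

```python
import numpy as np

# Observation equations of Art. 84, written a*z1 + b*z2 + c*z3 + m = v,
# all of weight unity:  z1 - z2 = 1.7,  z3 = 2.4,
# -z1 + z2 + z3 = 1.0,  z2 - z3 = 3.0
A = np.array([[ 1., -1.,  0.],
              [ 0.,  0.,  1.],
              [-1.,  1.,  1.],
              [ 0.,  1., -1.]])
m = np.array([-1.7, -2.4, -1.0, -3.0])

N  = A.T @ A                  # [aa], [ab], ... of the normal equations (B)
am = A.T @ m                  # [am], [bm], [cm]
s  = A.sum(axis=1) + m        # the s column of equations (C)
# Checks (94):  [as] = [aa] + [ab] + [ac] + [am], and so on for b, c.
assert np.allclose(A.T @ s, N.sum(axis=1) + am)
# Check (93):  [am] + [bm] + [cm] + [mm] = [ms].
assert np.isclose(m @ s, am.sum() + m @ m)

z = np.linalg.solve(N, -am)   # most probable values of z1, z2, z3
v = A @ z + m                 # residuals
n, q = A.shape
mu = np.sqrt(v @ v / (n - q))                 # mean error of unit weight
weights = 1.0 / np.diag(np.linalg.inv(N))     # Rule I / equation (K)

print(np.round(z, 2))         # [7.1  5.5  2.5]
print(np.round(weights, 2))   # [0.33 0.6  1.5 ]
print(round(mu, 2), np.round(mu / np.sqrt(weights), 2))   # 0.17 and [0.3 0.22 0.14]
```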
If the multiplications and divisions are simple or if a table of squares or products or a computing machine is used the logarithms will of course be omitted from the scheme of solution. 112. Example. In order to illustrate the systematic for- mation of the coefficients that appear in the Normal Equations as well as the solution of the latter by the above method we will take the OBSERVATION EQUATIONS Z 2z 2 = 0.1 = 0.4 First form a table containing the coefficients in these equa- tions and also the sums s. As a first check the sum of the quantities in column s should be equal to the sum of all the quantities in all the other columns. I. COEFFICIENTS IN THE OBSERVATION EQUATIONS. No. a b c d m s 1 _ i 1 1 1 .1 1.9 2 1 1 - 1 - 1 - .6 .6 3 1 2 - 1 1 .1 - 1.1 4 1 - 1 - 2 - .3 - 2.3 5 1 _ 1 1 .1 1.1 6 1 - 1 - .4 ^4 Sum. 3 - 3 - 1.4 1.4 106 METHOD OF LEAST SQUARES. From these we now compute the coefficients in the Normal Equations (B), and also the necessary quantities for the check equations (93) and (94). II. COEFFICIENTS IN THE NORMAL EQUATIONS. No. aa ab ac ad am as bb be bd bm i 2 1 1 - 1 - 1 -.6 - .6 1 -1 i -1 -.6 3 1 -2 -1 1 -.1 -1.1 4 2 -2 .2 4 1 -1 -2 -.3 -2.3 1 2 .3 5 .0 .0 1 -1 1 .1 6 1 -1 -.4 - .4 .0 Sum 5 -3 -4 -3 -1.3 -6.3 8 1 1 -.1 No. bs cc cd cm cs dd dm ds mm ms 1 1.9 1 1 -.1 1.9 1 -.1 1.9 .01 -.19 2 - .6 1 1 .6 .6 1 .6 .6 .36 .36 3 2.2 1 -1 .1 1.1 1 -.1 -1.1 .01 .11 4 2.3 .0 .0 4 .6 4.6 .09 .69 5 1.1 1 -1 -.1 -1.1 1 .1 1.1 .01 .11 6 .0 1 .4 .4 .0 .0 .16 .16 Sum 6.9 5 .9 2.9 8 1.1 7.1 .64 1.24 If the coefficients in the Observation Equations are large, so that it becomes convenient to use logarithms, other tables corresponding to I and II would be formed to contain these logarithms. GAUSS'S METHOD OF SUBSTITUTION. 107 Substituting these quantities now in the general tabular scheme of paragraph 110 we have the results on the following page. The work of the first compartment is performed without the use of logarithms, as the numbers are simple. The quantities in the last column should all be zero accord- ing to the check equations, what small differences there are being due to the rejection of figures beyond the third place in the decimals. The decimal points in logarithms to which correspond negative numbers have been replaced by the letter n. The demonstrations that have been made now enable us to see at once from an inspection of the results in this table that z^ = 0.238 p Z4 = 1.6 [wv] = .007 Therefore substituting in equations (73) and (74) we have p. Z4 = .047 r^ .032 If the two values of [uu] obtained in the solution had differed at all we should have taken the mean of the two. 113. If the Elimination Equations (95) are divided by [aa], [>, 1], [cc, 2], [cfo?, 3], respectively, they become -f A b z 2 -\- A c z z -f- A d z 4 -f A m = *a + -#c3 + -#rf*4 + J*m = C m = J>m = And the solution for the unknowns can be effected most conveniently by arranging the computations in the manner illustrated on page 109. 108 METHOD OF LEAST SQUARES. SCHEME A. SOLUTION OF THE NORMAL EQUATIONS. 
a b c d m s 8 5 -3 -4 -3 -1.3 -6.3 -.6 -.8 -.6 -.26 8 1.8 1 2.4 1 1.8 - .1 .78 6.9 3.78 ~0~ "o" 5 3.2 2.4 .9 1.04 2.9 5.04 8 1.8 1.1 .78 7.1 3.78 .64 .34 1.24 1.64 9 n 3537 9 B 1107 9 n 1521 6.2 0.7924 -1.4 O n 1461 - .8 9 n 9031 - .88 9 n 9445 3.12 0.4924 ~0~ 1.8 .316 9.4998 -2.4 .181 9.2568 - .14 .199 9.2982 -2.14 - .705 9 n 8479 6.2 .103 9.0138 .32 .114 9.0552 3.32 - .403 9 n 6049 ~b~ .30 .125 9.0966 - .40 - .443 9 n 6463 O n 2403 9 n 3587 1.484 0.1715 -2.581 O n 4118 - .339 9 n 5302 -1.435 O n 1569 i T T 6.097 4.489 0.6521 .206 .589 9.7705 3.373 2.496 0.3972 .175 .077 8.8889 .043 .328 9.5156 9 M 3769 = Iog-s 4 1.608 0.2063 - .383 9 n 5832 1.227 0.0888 2 T .098 .091 8.9601 - .285 - .292 9 n 4657 0.238 = z 4 [uv] = .007 .007 .007 d d m d d GAUSS'S METHOD OF SUBSTITUTION. 109 SCHEME B. SOLUTION OF THE ELIMINATION EQUATIONS. - Iogz 4 Iogz 2 log A log C d z t log A d z^ 114. Filling out this table for the example just solved, we have .238 .228 .142 .260 .414 .031 .143 .145 .514 .191 .238 .642 .318 1.108 9.3769 9.8075 9.5024 O n 2403 9 B 3537 97782 9 B 1107 9 W 9031 9 B 7782 9 n 2806 9 n 1612 9 n 6172 81 U7ft 9 n 7106 n 4O 1 D 9 B 1551 110 METHOD OF LEAST SQUARES. 115. The Weights of the Unknowns. In order to deter- mine the precision measures of z t , z 2 , and 2 3 , it would next be necessary to compute the weights of the latter quan- tities. The demonstration of the processes by which these weights may be found will not be taken up here, as the best method to adopt varies a good deal with the character of the example, but a statement of the results in the general form of solution will be given. By treating the Elimination Equations in a way similar to that used in deriving equation (E) of paragraph 76 from equations (B) of the same paragraph, we may show that Zl + A m -f B m a, -f- C m a, + D m a s = *, + 3 m + C m & + D m fr = (100) * + C m + A, 73 = where the a's, /?'s, y's, are determined from the equations a, = B d + C d h + h = Q A c +^ c0l + a, = J? c +& = (101) A b + ai = C d + 73 = Then by an application of the principles of Rule 1, para- graph (77), it may be shown that 1 Is I dl a 2 2 a s a p \ [] [ & &> 1 I ] " " [cc, 2] " [rfrf, 3] v \ [fed, 1 1 + [cc, 2] + [dd, 3] i " V (102) [cc, 2] [cirf, 3] 1 [dd, 3] GAUSS'S METHOD OF SUBSTITUTION. HI These equations can of course be extended to cover any number of unknown quantities, and tabular schemes for the computations of the a's, /?'s, y's, . . . and of the weights Pzj Pz z > Pz z , ... can readily be arranged. For a general demonstration of these results, and also for a discussion of special methods of solution, consult Johnson, "The Theory of Errors and Method of Least Squares," chap. ix. Wright, " Treatise on the Adjustment of Observations," chap. iv. Chauvenet, " Spherical and Practical Astronomy," pp. 530-649. THE METHOD OF CORRELATIVES. 116. The method of adjusting "Conditioned Observa- tions " explained in paragraph 33 is perfectly general, but where there are many conditions to be satisfied the solution is apt to be very laborious. For the case that occurs most fre- quently in practice, in which the observations are direct and equal in number to the number of unknown quantities, the process of solution devised by Gauss and called the " Method of Correlatives" is the most convenient. This method is derived as follows : Let q observations, M^ M^ . . . M q , of the respective weights />!, p.,,, . . . 
p q , be made directly upon the values of q unknown quantities, and let the most probable values of the unknowns be = J/i -f VH z a = J/a -f q . Where v 1? u 2 , . . . v q , are the most probable corrections to apply to the observed values as well as in this case the residuals of the observations. If the ri condition equations are not linear they may be reduced to that form by the method of paragraph 44, so that we may assume for our 112 METHOD OF LEAST SQUARES. CONDITION EQUATIONS l"l + a 2 2 + a qVq + -\ b l v l -f- 2 t? 2 -f- . . . ? u 9 + w* 2 = (A) In which the quantities m lt w 2 , . . . m n ', would all be zero if the observations were exact. It is to be observed that the coefficients a, d, . . . are not arranged in the same order in these equations as they are in the observation equations of paragraph 107. The values of v t , 2 , v q , must be determined so as to satisfy the above equations and also by the principle of Least Squares, so as to make -f- Pz^ + Pq v q = a minimum. Corresponding with a minimum value of this last we have PlVldOl + P 2 V 2 dv 2 -f . . . p q V q dv q = (B) for all possible simultaneous values of do^ c?y 2 , . . . dv q ; that is, for all values which satisfy the equations, -f- a 2 dv 2 -|~ . . . a q dv q = = (C) l q dv q = lidVi -J- l z dv z -f- Iqdvq = obtained by differentiating equations (A). Therefore, denoting the first member of (B) by R and the first members of (C) by S^ S 2 , ... ,>, it will be nec- essary that R _ k^ - k 2 S z - ... k n ,S n , = (D) where & t , k%, ... k n <, are undetermined coefficients. GAUSS'S METHOD OF SUBSTITUTION. H3 This last equation will be satisfied if the coefficient of each differential in it is made equal to zero, that is, if (103) Pq v g == k v a q -f- KyOq -(- k n 'l q All that remains therefore is to find values of v iy v 2 , ... v q and A^, A; 2> . . . AV, which will satisfy simultaneously equa- tions (A) and (103), and that this may be done is easily seen from the fact that we have the same number of equations as unknowns. Substituting the values of w x , v 2 , ... v q from equations (103) inequations (A) we have the following : CT ab al MZ- - # 2 i. *vZ jr. y ,, - && ,. y ^^ to * -p- ' - *fe * *T -' 2 p- ' (104) CT? 6? II The solution of these equations gives at once the values of A*,, & 2 , . . . Ay, which are called the "Correlatives" of the Condition Equations. The values of Vj, u 2 , . . . v q are then found by substituting the values of the k's in equation (103). 117. As equations (104) are of the same general form as a set of Normal Equations, Gauss's Method of Substitution can be advantageously employed in the solution. 114 METHOD OF LEAST SQUARES, When there is but a single equation of condition the second members of equations (103^) reduce to their first terms, and equations (104) reduce to the single equation CLd ^2 + m, = (105) and the values of v lt v 2y ... v q in (103) become a, v = aa ' P It is from these results that the rules in paragraph 35 are derived. 118. Example. Suppose we have given the observations MI = 2.02, weight 3 M t = 4.13, 2 M s = 2.52, " 5 (a) MI = 2.67, 7 M & = 2.84, 4 and let the most probable values of the unknowns be repre- sented by Z, = Mi + !, 2 2 = M 2 -\- V t , . . . 2 5 = JJf 5 + V S (b) Also suppose that the unknowns are subject to the conditions Z 2 2 S 2 14-0 -3. = 1.5 (c) GAUSS'S METHOD OF SUBSTITUTION. 115 Then expressing these conditions in terms of the corrections by means of (a) and (b), we have the CONDITION EQUATIONS ^1 + V . 
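Since equations (104) are themselves a small set of normal equations, the adjustment of the example just stated can be written down in a few lines of matrix algebra before the computation is carried through by hand on the following page. A minimal sketch in Python with numpy (a choice of this edition; the variable names are illustrative):

```python
import numpy as np

# Example 118: observed values and weights of z1 ... z5.
M = np.array([2.02, 4.13, 2.52, 2.67, 2.84])
p = np.array([3.0, 2.0, 5.0, 7.0, 4.0])

# Condition equations (A) in terms of the corrections v:
#   v1 + v2 + v3 + v4 + v5 + 0.18 = 0      (sum of the z's must be 14.0)
#   v2 - v4 - 0.04 = 0                     (z2 - z4 must be 1.5)
C = np.array([[1., 1., 1., 1., 1.],
              [0., 1., 0., -1., 0.]])
m = np.array([M.sum() - 14.0, (M[1] - M[3]) - 1.5])   # 0.18 and -0.04

# Correlative equations (104):  (C diag(1/p) C^T) k + m = 0
G = C @ np.diag(1.0 / p) @ C.T
k = np.linalg.solve(G, -m)        # the correlatives k1, k2
v = (C.T @ k) / p                 # corrections, equations (103)
z = M + v                         # adjusted values

print(np.round(k, 4))   # about [-0.1647  0.1537]
print(np.round(v, 4))   # about [-0.0549 -0.0055 -0.0329 -0.0455 -0.0412]
print(np.round(z, 4))   # z1 is about 1.9651, the value quoted for Example 70
```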
+ V 3 + U * + U 5 + -18 = w 2 - v t - .04 = Referring to paragraph 116, we see that in this example n' = 2, m 1 = 0.18, m z = 0.04 For the purpose of computing the coefficients in equations (104) we next arrange the following table. p a b aa ab Ib P P P 1 3 1 3 1 1 1 2 1 1 2 2 1 5 1 IT i i 1 7 1 - 1 - 7 7 7 1 4 1 4 599 5 9 420 14 14 116 METHOD OF LEAST SQUARES. Substituting these results in equations (104), we have ^ 4- & 2 4- .18 = 420 14 (e) 5 9 k, -\ >L .04 = 14 14 Solving, &! = .1647 ( ^ & 2 = .1537 Then from equations (103) we get at once t>! = .0549, v 4 = .0455, v 2 = .0055, v s = .0412. (g) v s = .0329, And by substituting these results in (b) we can obtain the values of z^ 2 2 , 2 8 , 2 4 , z 6 . The above is the solution of Example 70, page 127. EXAMPLES. 1. An urn contains five black balls, three red balls and two white balls. If three balls are drawn from the urn what different combinations may result, and what is the probability of each ? 2. In a single throw with a pair of dice what is the probability that neither ace nor doublets will appear ? y 3. Four cards are drawn from a pack. What is the probability of getting four aces? Of getting one of each suit? 4. From a lottery of thirty tickets, marked 1, 2, ... 30, four tickets are drawn. What is the probability that num- 2 bers 1 and 15 will be among them? 145 5. Find the odds against the appearance of 7 or 11 in a single throw with a pair of dice. 7 : 2 6. I toss up n coins. What is my chance of getting just one head ? 7. In a single throw what are the relative chances of throwing 9 with two dice and with three dice ? 24 : 25 8. From 2 n counters marked with consecutive numbers two are drawn. What are the odds against having an even sum ? n : n 1 9. In two trials with a single die what is the probability of throwing (a) an ace the first time only? (b) at least one ace ? 10. Find the probability of throwing doublets one or 91 more times in three trials with a pair of dice. 216 118 METHOD OF LEAST SQUARES. 11. Find the probability of throwing exactly three aces 125 in five trials with a single die. 3888 12. A certain stake is to be won by the first person who throws 5 with a die of twelve faces. What is the chance of the sixth person ? 13. A and B play chess. A wins on the average two games out of three. What is A's chance of winning just 80 four games out of the first six ? 243 14. A and B shoot alternately at a mark. A hits once in n times and B once in n 1 times. Find their chances of first hit, and the odds in favor of B if A misses on his first shot. Even, n : n 2 15. In how many trials will it be a wager of 4 to 3 that double five will be thrown with a pair of dice ? 30 16. Find the probability of throwing one and only one 5 ace in two trials with a single die. 18 17. If I have three tickets in a lottery of four prizes and 41 eight blanks, what is my chance of drawing a prize ? 55 18. Find the probability of throwing at least four aces in 203 six trials with a single die. 23328 19. On an average seven ships out of eight return to port. Find the chance that out of five ships expected at least three ... 16121 will return. 16384 20. In a lottery containing a large number of tickets, where the prizes are to the blanks as 1 : 9, find the chance of drawing at least two prizes in five trials. 100000 EXAMPLES. 119 21. In a purse are ten coins, all nickels except one which is a five-dollar gold piece ; in another are ten coins, all nickels. 
Nine coins are taken from the first purse and placed in the second, and then nine coins are taken from the latter and placed in the former. If you now had your choice which purse would you take ? 22. A and B engage in a game in which A's skill is to B's as 2:3. What is A's chance of winning at least two games out of five ? 23. If A's skill at a certain game is double that of B, what are the odds against A's winning four games before B wins two? 131 : 112 24. A party of twenty-five take seats at a round table. What are the odds against any two specified persons sitting next to each other ? 25. A has three shares in a lottery in which there are three prizes and six blanks. B has one share in another where there is but one prize and two blanks. What are their relative chances of getting a prize ? A : B = 16 : 7 26. Expand through the terms involving h* and & 8 , the expression - + (y + *)' 'X -\- h When a; is 1 and y is -, does - 4- y* increase or 2 x diminish when x and y begin to increase at the same rate? 27. Given f (x, y) = x 2 (a -j- y) 8 , expand the expres- sion (a; 4- A) 2 (a + y + &)'. 28. Find the value of Iog 10 a -j- cos ft, when a =. 1001 and b = 0.1. Give the result first to five places and then to seven places of significant figures, in each case without the aid of tables. 4.000433 29. Transform to the new origin, (2, 3, 1), the equa- tion, z 2 -f three points whose altitudes are to be determined, the following observations are made: P l above = 10 ft. P 2 above P 3 = 9^ ft. P 2 PL = 7 P l " P 8 = 2 P 2 " (9 = 18 Find the most probable altitudes. P s = 8.50 .29 50. The altitudes of A above 0, H above A, and JS above O are found by measurements to be respectively 12.3, 14.1, and 27.0 feet. What is the most probable value of each of these differences in level? A = 12.50 .17 51. Measurement of the ordinates of points on a straight line corresponding to abscissas 4, 6, 8, 9, are made with results 5, 8, 10, 12. What is the most probable equation of the line in the form y = mx -\- bf b = 0.29 124 METHOD OF LEAST SQUARES. 52. Find the altitudes in Example 49 if the observations have weights 5, 3, 6, 2, 4, respectively. -" 53. Solve the example in paragraph 31, giving the obser- vations the weights 25, 25, 4, 4, 4, 4, 4, 4, 1, respectively. Elevation of P 5 = 320.25 54. Find the most probable values of z t , 2 2 , and z 8 from the observations 2 X = 552.10 wt. 16 21 2 2 = -75 wt. 1 z 2 -f- z a = .15 " 9 Zi -J- z 2 z s = 552.05 " 1 z 3 = 551.23 "4 z l z 3 = .70 " 1 2 2 = 551.30 " 4 2 2 = 551.2345 55. In the triangulation of Lake Superior there were measured at station the angles F P = 62 59' 40".33 wt. 5 F E = 64 11 34 .92 7 F B = 100 20 29 .12 " 4 P B = 37 20 49 .55 7 E O B = 36 8 55 .86 "4 Required the adjusted values of the angles. F P = 40".28 0".34 56. In the U. S. Lake Survey the following angles were measured at station North Base : (1) Crebassa Middle 55 57' 58".68 wt. 3 (2) Middle Quaquaming 48 49 13 .64 " 19 (3) Crebassa Quaquaming 104 47 12 .66 17 (4) Quaquaming South Base 54 38 15 .53 " 13 (5) Middle South Base 103 27 28.99 6 Find the adjusted values of the angles. (1) = 58".965; r = 0".28 EXAMPLES. 125 - 57. Adjust the following observations of differences in level: Altitude of A 401.3 wt. 16 C above B 72.5 wt. 9 A above B 220.8 " 16 A B 222.0 1 A (7 150.2 " 4 Altitude of J? 180.7 " 1 58. ] )8. In "Conditioned Observations" can the number of observations required be less than the number of unknown quantities? 
Why must the number of conditions be less than the number of unknowns? 59. From the following measurements of the angles formed at the centre of a disk by four radial lines, find the most probable values of the angles. A = 104 25' 13" O = 86 33' 20" B = 98 13 47 D = 70 48 23 A = 104 25' 2".25 Also solve giving the observations the weights 5, 2, 1, 4, respectively. 60. Four observations on the angle A of a triangle gave a mean of 36 25' 47", two observations on B gave a mean of 90 36' 28", and three on G gave 52 57' 57". Adjust the triangle. A = 36 25' 44".2 ; r = 7".7 61. Five angles at a station are measured, and also their sum. The observed sum differs from the sum of the five observed parts by the amount d. What are the adjusted values of the angles ? 62. The three angles of a spherical triangle are measured with results A = 46 17' 38".32 B = 73 35' 16".15 C = 60 7' 5".16. Adjust the triangle, knowing that the spherical excess is 2".475. A = 39".3; ^ = 1".6 126 METHOD OF LEAST SQUARES. 63. At the station Pine Mountain the following angles were observed between surrounding stations : Jocelyne Deepwater 65 11' 52".500 wt. 3 Deepwater Deakyne 66 24 15 .553 " 3 Deakyne Burden 87 2 24 .703 " 3 Burden Jocelyne 141 21 21 .757 " 1 Find the most probable values of the angles. 64. Solve Examples 55 and 56 by the method of " Con- ditioned Observations." 65. A is a station whose altitude is known to be 5240.1 feet. JB and C are floats on a lake, and D is a signal point. From the following observations determine the most prob- able altitudes of J?, C and D. C below A 720.1 wt. 3 B below A 719.7 wt. 3 D A 200.3 "5 B D 520.9 " 2 C " D 520.4 " 2 66. Given the following observations, subject to the con- dition Zi -j- z a = z s , find the most probable values of z ly z^ and z 3 . 2zi z a + z s = 3.0 22 2 z a = 1.0 2z! 3z 2 = 4.5 Zi -f- 22 2 = 5.1 Zl + z 3 = 3.8 67. The chemical composition of a specimen was found by several observers to be as follows : Pb = .52 Other substances = .09 Au and Ag = .39 Ag = .27 Pb and Ag = .78 Impurities = .10 Au = .11 Pb and impurities = .62 Au = .12 From these observations find the most probable composition of the specimen. EXAMPLES. 127 68. From the following observations what are the best values of the unknowns, supposing that y and z must be equal ? x -\- y = 5.2 wt. 4 y -\- z = 4.2 wt. I x = 3.0 " 9 z = 2.0 " 4 85 -- = 1.1 " 1 69. In determining the difference in longitude between various cities the results obtained were (1) Cambridge Washington 23 4K041 wt. 30 (2) Cambridge Cleveland 42 14.875 " 7 (3) Cambridge Columbus 47 27.713 " 8 (4) Washington Columbus 23 46.816 " 7 (5) Cleveland Columbus 5 12.929 " 5 Adjust these observations. 70. The capacity of a condenser is known to be 14.0 m. f. It is divided into five sections, a, b, c, d, e, and it is known that the difference between b and d is 1.5 m. f. Find the most probable capacities of the sections from the observa- tions a = 2.02 wt. 3 d = 2.67 wt. 7 b = 4.13 "2 e = 2.84 " 4 c = 2 ' 52 " 5 a = 1.9651 71. If the unknowns in the following observations are subject to the condition x -|- 2y -|- 3z = 36, what are their adjusted values? x = 4.3 wt. 1, y = 5.7 wt. 4, z = 7.3 wt. 9 x = 3.77 72. A cannon is discharged horizontally from the top of a bluff. Observations on the time, and distance of fall of the ball gave the results t = 0.5 1.0 1.5 2.0 seconds 8 = 1.2 4.0 9.1 15.0 metres 128 METHOD OF LEAST SQUARES. What curve, passing through the point of departure of the ball, will represent the above observations ? 73. 
An Argand burner shows the following efficiencies with varying rates of gas consumption : g = 2.0 2.3 2.8 3.3 4.0 4.5 5.0 feet E = 2.1 2.4 2.5 3.0 3.2 3.8 4.1 Find the equation of the straight line which best rep- resents the relation between g and E. The measurements on g are without appreciable error. 74. Observations are made upon the expansion of Amyl alcohol with change in temperature as follows : V = 1.04 1.12 1.19 1.24 1.27 cu. cm. t = 13.9 43.0 67.8 89.0 99.2 C. degrees If V = 1 -)- -Z? t -j- C t 2 expresses the law connect- ing the volume and temperature, find the most probable values of B and C. 75. In a Hooke's joint where the angle between the axes is 45, x being the angular rotation of the driver, and y that of the follower, from the following measurements find the equation of a curve that will represent the relation between x and y x. X y x x y x x y x o.o 80 - 5.8 140 8.8 20 - 5.7 90 - 2.0 160 5.3 40 9.9 100 2.3 180 o.o 60 - 10.4 120 8.0 y x = 0.85 9.82 sin 2a -f 0.92 cos 2a 76. A series of observations extending over a period of thirty years was made by Quetelet to determine the daily variation in temperature at Brussels. The mean results of EXAMPLES. 129 the measurements are given below. From them derive an equation to express the temperature at any time of the year. Jan. 4.66 May 9,83 Sept. 8.16 Feb. 5.42 June 10.09 Oct. 6.55 Mar. 6.77 July 9.71 Nov. 5.10 Apr. 8.59 Aug. 9.14 Dec. 4.41 y = 7.369 + 0.9854 sinSOz - 2.7084 cos30x -f 0.0100 sin60a; 0.1950 cosGOa; - 0.0133 sin 90sc-f 0.1783 cos 90a In this answer the values of x begin at the 15th of Janu- ary, and represent the time in months. 77. The law connecting the time of vibration of a pendu- lum with its length is assumed to be of the form, T = m L n . From the following observations find the most probable values of m and n. T = 12.9 'll.6 10.4 9.7 5.3 4.6 'L = 164.4 132.9 107.6 93.5 28.4 20.6 L is in centimetres, T in tenths seconds. n 0.5000 m = 1.0044 78. Determine the equation of a curve which will repre- sent the following observations : X 0.0 y 0.00 X 1.5 y 1.09 X 3.0 y 8.65 0.5 0.04 2.0 2.56 3.5 13.72 1.0 0.31 2.5 4.99 4.0 20.47 79. Determine the equation of a curve which will repre- sent the relation between x and y in the observations, X y x y x y x y 0.0 4.51 0.3 4.09 0.6 3.03 1.2 0.92 0.1 4.44 0.4 3.76 0.8 2.24 1.5 0.38 0.2 4.31 0.5 3.42 1.0 1.49 2.0 0.05 130 METHOD OF LEAST SQUARES, 80. At a station P the angles between a straight line passing through P parallel to the axis of X and the direc- tions from P of four points P 1} P 2 , _P 8 , P 4 , are measured. Having given the coordinates, (a, >), of the four points, find the coordinates of P. Point. Pi If the coordinates of the point P are (x, y), and the angle is denoted by A, we have Coordinates. Angle. a 6 4.21 3.24 39 18' 1.21 2.10 147 54' 0.51 0.22 205 24' 2.50 - 1.10 277 15'' x a 81. If a sin bx = M, and values of M are observed for known values of a and >, determine the most probable value of x. If x' is an approximate value of x found by trial, and m = a sin bx' M, we shall have V 2 a b m cos bx' /j# . - /v' _ . _ ~2t(a b cos bx')' 2 ' 82. If in one series of observations the value of h is twice what it is in another, what is the relative probability of the occurrence of an error of given magnitude a in the two series ? Show what the curves of error will be in the two cases. What error has the same probability for its occurrence in each series ? 
What is the relative probability of the occur- rence of an error not greater than a in the first case and not greater than 2a in the second case ? 83. From 64 observations the latitude of a station was found to be 49 10' 9".110 0".051. What was the prob- able error of a single observation ? 0".41 EXAMPLES. 131 84. If twenty measurements of an angle give a result with an A.D. of 0".38, and it is required to find the angle so that the A.D. shall be only 0".25, how many more observa- tions must be made ? 27 85. From the following determinations of the area of a field find the most probable area and its probable -error. 5674 12, 5680 4, 5685 3, 5682 1, 5678 2 4 = 5681.41 0.84 86. From the following measurements by Fizeau and x others, find the most probable value for the velocity of light together with its probable error. Measurements are in kilo- meters. 298000 1000 299990 200 299930 100 298500 1000 300100 1000 V = 299917 88 87. Two different instruments give for the value of an angle, f 11 * 34 55' 33".0 4".l, 34 55' 36".0 6".3 What is the best value to take for the angle ? 34 55' 33".9 3".4 88. Determinations of the difference in longitude between Washington and Key West made on seven different days gave the results I9 m 1'.42 0'.044 19 m l s .60 0'.046 1 .37 .037 1 .55 .045 1 .38 .036 1 .57 .047 1 .45 .036 What is the best value and its probable error? r.4GO 0*.016 132 METHOD OF LEAST SQUARES. 89. In the triangulation of Lake Ontario two different instruments gave for an angle, 74 25' 5". 429 0".29 from sixteen readings, and 74 25' 4". 611 0''.22 from twenty- four readings. Find the most probable value of the angle and its probable error. 90. In each of Examples 39-45 find the mean and prob- able errors and average deviation of each observation and of the most probable value, using formulas from (50) to (63) according as they apply. 91. In Example 42 divide the observations in their order into six groups of four observations each and compute the mean of each group. Then determine the probable error of the first of these means : ( 1 ) considered as a single measure of four times the weight of those in Example 42 ; (2) directly as one of six observations of equal weight; (3) as a deter- mination from its four constituents. 0".67 ; 0".72 ; 1".00 92. The following twenty-nine measurements on the den- sity of the earth, made by Cavendish, give as a mean result 5.48. What is the probable error of an observation ? Solve by the usual method and also by taking the residual that occupies the middle position. 0.14 5.50 5.55 5.57 5.34 5.42 5.30 .61 .36 .53 .79 .47 .75 .88 .29 .62 .10 .63 .68 .07 .58 .29 .27 .34 .85 .26 .65 .44 .39 .46 93. What is the probable error of the mean of two obser- vations which differ by the amount a ? 94. A base-line is measxired five times with a steel tape reading to hundredths of a foot, and five times with a chain reading to tenths of a foot, with results By tape, 741.17 741.09 741.22 741.12 741.10 By chain, 741.2 741.4 741.0 741.3 741.1 EXAMPLES. 133 Find the probable errors and weights for a single observa- tion in each case, and also the adjusted length of the line and its probable error. 741.146 0.015 95. Twenty-one determinations of a chronometer correc- tion gave results - 8.78 - 8.78 - 8.68 - 8.80 - 8.96 - 8.83 - 8.79 .76 .51 .63 .75 .64 .70 .90 .85 .64 .58 .78 .65 .64 .93 Find the probable error of the mean by using both formulas (53) and (57), and also determine the probable error of a single observation by taking the middle residual. 0.017; 0.018; 0.09 96. 
In the following observations show that M = 49.64, fi = 1.95, r = 1.31, /AO == 0.40, r = 0.27, p. 3 = 0.87, r z = 0.59. M = 48.81 48.76 49.53 51.56 50.38 49.84 p = 5 4 5 3 2 5 97. Observations on the time of ending of a transit of Mercury are made by different observers with a variety of instruments and under more or less favorable circumstances. If the weights assigned by the computer are as indicated, find the best value for the time and its probable error. b h 38 m 23' wt. 1 38 m 26* wt. 3 38 m 19* wt. 3 37 55 " 38 21 2 38 21 " 2 38 10 " 1 38 18 2 38 15 2 t = 5* 38 m 19 S .9 98. An angle is measured five times with a theodolite, and seven times with- a transit, giving results Theodolite, 31".7, 39".S, 40".7, 28".6, 32".3 Transit, 32 .S, 30 .7, 38 .2, l>9 .3, 41 .6 35".3, 36".2 134 METHOD OF LEAST SQUARES. If the relative values of readings by the two instruments are as 3 to 2, what is the most probable value of the angle ? What is the mean error of the result ? ^99. Given J/ x = 65.58 .59, Jf 2 = 35.15 .93, M 8 = 49.64 .27, find the probable errors of 4 J/ t 3 J/ 8 + 2 J/ 3 and of ^ + ^ - ^ 3 . 3.69 ; 0.43 2* O T: 100. The three angles of a triangle are measured, and the probable error of each observation is r . What is the prob- able error of the triangle error ? r y/~3~ 101. The zenith distance of a star on the meridian is observed to be z = 21 17' 20' .3 2".3. The declina- tion of the star is given as d = 19 30' 14".8 0".8. What is the latitude of the place and its probable error ? L z -f d = 40 47' 35".l 2"4. 102. The zenith distance z of a star at upper culmina- tion is observed ,n times, and its zenith distance z> at lower culmination n' times. If the latitude is given by L = 90 -- | (z -f- z'), and the probable error of an observation is r, what is the probable error of the latitude ? 103. The horizontal force necessary to start a 100-pound weight sliding along a table is observed to be 15.5 0.2 pounds. Find the probable error of the coefficient of friction. 104. If a line is measured by the continued application of a unit of measure, and r is the probable error of the placing and reading of this measure, what is the probable error of the length I ? r f[~ 105. If the average deviations of z 1? z, 2 s? are a t ^ c > respectively, what is the average deviation of zf -(- z 2 2 -\- z s 2 ? 106. If the radius of a circle is measured with result 1000.0 2.0, how should the circumference and area be expressed ? 107. Two sides, a and >, and the included angle C of a triangle are measured with results a = 252.52 .06 EXAMPLES. 135 feet, b = 300.01 .06 feet, C = 42 13' 00" 30". What is the area and its probable error ? 25452 9 108. Measurements of adjacent sides of a rectangle gave a r 1? and b r 2 . What is the probable error of the area, and for what kind of a rectangle will this probable error be the least ? 109. If the measured sides of a rectangle have the same a.d., what is the a.d. of the diagonal determined from them ? Same 110. If the sides of a rectangle are measured in the manner indicated in Example 104 and found to be a and b, w r hat is the probable error of the area ? 111. The correction to be applied to a chronometer is found to be -(- 12 m 13*.2 8 .3. Ten days later the cor- rection is again determined and found to be 12 m 21*.4 0*.3. What is the mean daily rate and its probable error ? 0*.820 O s .042 112. Measurements of the compression of the earth's meridian have resulted in .000046 294 What is the probable error of the denominator 294 ? 3.98 113. 
The current flowing in a circuit is due to two sources whose electromotive forces are determined to be ! = 200 2, e 3 = 400 3. The resistance of the circuit is 30 1. Find the current and its probable error. 20 0.68 114. The side b and angles B and C of a triangle are measured with results b = 106 .06 metres, H = 29 39' 1', C = 120 7' 2'. What is the most probable value of the angle A and of the side c ? A = 30 14' 2'.2; c = 185.5J5 0.15 115. The distance between two divisions on a graduated scale is measured by a micrometer. Show that the average 136 METHOD OF LEAST SQUARES. deviation of the mean of two results is the same as the aver- age deviation of a single reading. 116. If the weights of the determinations of three angles A, B, C, are 3, 3, 1, respectively, what is the weight of the sum of the three angles ? 0.6 117. If the weight of x is />, what is the weight of loga * ? 118. If 05 = and the weight of y is p, what is the c weight of x ? c z p 119. In Example 107, how closely must the parts be measured in order to obtain the area within 0.5 per cent ? 120. From observations on I and t the value of g is to be computed by the pendulum formula t = 7T \/ 9 What changes in g will be produced by changes in I and t of Si and 8 2 units, respectively, and what are the allow- able errors in I and t it g is to be determined within 1 per cent ? 121. The moment of inertia of a cylindrical bar is to be obtained from measurements on its mass w, its length A, and its diameter d. The error in the determination of m is negligible, the precision of the determination of d is four times that of h. If the measurements give m = 48, h = 8.000, d = 1.200 0.10, and T I , I = m I 1 12 16 what is the probable error of I, and what should be the ratio of c? to A to determine I most accurately ? d : h 256 : 9 EXAMPLES. 137 122. If observations give for a certain quantity x the value 303, with a mean error of 2, what is the mean error of the expression 3 x -f- Iog 10 2 x ? 123. The probable error of the determination of the angle A is 20". What is the maximum probable error of sin A -j- oos A ? 124. If the probable error of an observation on an angle is 10", is there any difference between the probable error of the function sin A -\- cos A -\- sin C and of the func- tion sin A -j- cos J5 -f- sin (7, supposing A and B are of the same magnitude ? 125. Given the observations, Zj - 2z 2 + z 8 - 3 = 3*! -f- 2 2 -f 22 8 - 17 = 3z 2 - 4z 8 - 2 = - ! 4- 4 Z2 -j- 3z, - 10 = Find the most probable values of z u z,, z s , and also their weights and precision measures. Z! == 3.541 ; p^ = 29 ; r Zl = .024 126. Find the weights and precision measures of the unknowns in Examples 48 to 57. 127. Determine the probable errors of the constants in Examples 72 to 79, inclusive. 128. The length of a pendulum which beats seconds is given by I == /' -|- I q -- s\ I' sin'Z where I' is the length at the equator, q the ratio of 289 the centrifugal force at the equator to the weight, and s the compression of the meridian regarded as unknown. Putting I' = 991 + *, q - s I' = y, \ 2 I observations in different latitude* gave in millimetreR the 138 METHOD OF LEAST SQUARES. following equations, from which we are to determine I and a together with their probable errors : x -|- 0.969y = 5.13 x -f 0.152y = 0.77 x _|_ 0.749y = 3.97 x + 0.327y = 1.70 x -f 0.426y = 2.24 x 4- 0.685y = 3.62 x 4- 0.095y = 0.56 * + 0.793y = 4.23 x = 0.19 I' = 991.069 .026; s = 0.00046 294 129. 
Find the weights and precision measures of the unknowns in Examples 64 to 70, and also in Examples 59 to 3, and in 71. 130. In Example 95 the probable error of a single obser- Tation is 0.08 seconds. Find the number of errors which should fall between 0.00 seconds and 0.10 seconds, between 0.10 seconds and 0.20 seconds, and also the number that should be over 0.20 seconds. Compare the results with the number actually found. 131. In 470 determinations of the right ascensions of Sirius and Altair made by Bradley, the probable error of a single observation was 0".2637. The number of errors falling between specified limits was as shown below. Compare this result with the distribution of errors called for by the theory. Limits. Errors. Limits. Errors. 0".0 to 0".l 94 0".6 to 0".7 26 .1 to .2 88 .7 to .8 14 .2 to .3 78 .8 to .9 10 .3 to .4 58 .9. to 1 .0 7 .4 to .5 51 Over 1 .0 8 .5 to .6 36 132. What is the probability that the error of a single observation will be as large as twice the probable error? As large as five times the probable error ? EXAMPLES. 139 133. On the average how many observations must be made before an error as large as three times the mean error will occur ? 134. In Example 46, assuming that all errors between any two limits fall half way between those limits, compute the average deviation and mean error of an observation and com- pare their ratio with the theoretical value given in the table in paragraph 55. 135. A line is measured 500 times and the probable error of each observation is 0.6 cm. How many errors should occur between 0.4 c.m. and 0.8 c.m. ? 136. Show how the value of -n- could be determined experimentally from observations such as those in Example 131. 137. In a system of observations all equally good, r being the probable error of a single observation, if two observations are taken at random, what quantity is their difference as likely as not to exceed, and what is the probability that the difference will be less than r? r^lT; 0.367 138. In the following measurements of an angle, ought any of the observations to be rejected ? 12' 51".75 47".85 47".40 48".90 44".45 48 .45 51 .05 48 .85 50 .95 50 .60 47 .75 49 .20 50 .55 139. Determine whether any of the observations in Example 44 should be rejected. 140. A quantity M is measured with the results given below. Ought all the observations to be retained ? M = 236, 251, 249, 252, 248, 254, 246, 257, 243, 274 141. A certain angle has been laid out with such accu- racy that its true value may be taken as exactly 90. Twenty- five observations are made upon it with a transit that it is 140 METHOD OF LEAST SQUARES. desired to test, and the result obtained is 89 59' 57" 0".8. What are the odds in favor of a constant error in the instru- ment between 1" and 5"? Between 0" and 6"? 908 : 92 ; 86 : 1 142. Repeated measurements of a standard metre bar with a decimetre scale gave a result 10.032 0.010. What are the odds in favor of a constant error in the scale between 43:7 143. Two determinations of the length of a line gave 683.4 0.3 and 684.9 0.3, respectively. Show that the best value for the length is 684.15 0.51, and that the probable systematic error of each determination is 0.65. 144. Two men A and B observe an angle repeatedly with the same instrument with results A. B. 47 23' 40" 23' 35" 47 23' 30" 24' 00" 23 45 23 40 23 40 23 20 23 30 23 50 Is there any relative personal error, and what is the best final value? * 47 23' 38".2 1".6 145. 
Three independent determinations of the capacity of a condenser made with three different instruments gave results 42.22 ± 0.21, 43.40 ± 0.15, and 44.20 ± 0.18. What is the most probable value of the capacity?

For extended treatment of the subject illustrated in Examples 143 to 145 see Johnson, "The Theory of Errors and Method of Least Squares," chap. vii.

146. In an estimation of tenths what is the probable error of an observation? What is the average deviation?
0.025

147. In obtaining the angle of deflection of the needle of a tangent galvanometer by the usual method, what is the probable error of the result?

148. If all the errors of a series of observations must fall between 0 and a, and the frequency of any error is proportional to its magnitude, what is the Curve of Error? What are the values of r, a.d., and the mean error?
r = a / √2

149. In Example 148 what is the probability that the error of a single observation will be as large as 0.5 a?

150. If all values of x between 0 and a are possible, and their probabilities are proportional to their squares, find the mean value of x and the probability that x will be as large as 0.5 a. Also draw the Curve of Error.

151. What is the greatest probable error of a logarithm found by interpolation in a seven-place table?
0.000000015

152. Given the following set of Normal Equations, together with [mm] = 1.3409, find the most probable values of the unknowns and their weights and probable errors. There were sixteen observations.

3.1217 z1 + 0.5756 z2 − 0.1565 z3 − 0.0067 z4 − 1.5710 = 0
0.5756 z1 + 2.9375 z2 + 0.1103 z3 − 0.0015 z4 + 0.9275 = 0
− 0.1565 z1 + 0.1103 z2 + 4.1273 z3 + 0.2051 z4 + 0.0652 = 0
− 0.0067 z1 − 0.0015 z2 + 0.2051 z3 + 4.1328 z4 + 0.0178 = 0

z1 = 0.583 ± 0.018        z4 = − 0.004 ± 0.015

153. From ten observation equations, for which was found [mm] = 2.6322, there resulted the normal equations

5.2485 z1 − 1.7472 z2 − 2.1954 z3 + 0.5399 = 0
− 1.7472 z1 + 1.8859 z2 + 0.8041 z3 − 1.4493 = 0
− 2.1954 z1 + 0.8041 z2 + 4.0440 z3 − 1.8681 = 0

Find the most probable values of z1, z2, and z3, together with their probable errors.
z1 = 0.42 ± 0.11

154. Find the most probable values of the unknowns in the normal equations

459 z1 − 308 z2 − 389 z3 + 244 z4 − 507 = 0
− 308 z1 + 464 z2 + 408 z3 − 269 z4 + 695 = 0
− 389 z1 + 408 z2 + 676 z3 − 331 z4 + 653 = 0
244 z1 − 269 z2 − 331 z3 + 469 z4 − 283 = 0

[mm] = 1129
z4 = − 0.488 ;  p_z4 = 281

155. If thirteen observation equations give rise to the result [mm] = 100.34 and to the normal equations

17.50 z1 − 6.50 z2 − 6.50 z3 − 2.14 = 0
− 6.50 z1 + 17.50 z2 − 6.50 z3 − 13.96 = 0
− 6.50 z1 − 6.50 z2 + 20.50 z3 + 5.40 = 0

show that the most probable values of the unknowns are

z1 = 0.67 ± 0.60,    z2 = 1.17 ± 0.60,    z3 = 0.32 ± 0.55
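The normal equations in Examples 152 to 155 can be checked with modern matrix routines. The short sketch below is only an illustration, not the tabular Gauss substitution developed in Chapter VI: it takes the equations of Example 153 in the printed form N z + c = 0, solves for the unknowns, and recovers the printed value and probable error of z1. The reduction [vv] = [mm] + c·z used for the sum of the squared residuals is the standard identity when the normal equations are written in this form; the variable names are chosen only for the illustration.

    import numpy as np

    # Normal equations of Example 153, written as N z + c = 0.
    N = np.array([[ 5.2485, -1.7472, -2.1954],
                  [-1.7472,  1.8859,  0.8041],
                  [-2.1954,  0.8041,  4.0440]])
    c = np.array([0.5399, -1.4493, -1.8681])
    mm, n_obs = 2.6322, 10          # [mm] and the number of observation equations

    z = np.linalg.solve(N, -c)      # most probable values of the unknowns
    Ninv = np.linalg.inv(N)
    vv = mm + c @ z                 # [vv], the sum of the squared residuals
    r0 = 0.6745 * np.sqrt(vv / (n_obs - len(z)))   # probable error, weight unity
    weights = 1.0 / np.diag(Ninv)                  # weights of the unknowns
    prob_errors = r0 * np.sqrt(np.diag(Ninv))      # probable errors of the unknowns

    print(z)            # about [0.42, 0.95, 0.50]; the text gives z1 = 0.42
    print(prob_errors)  # about [0.11, ...]; the text gives 0.11 for z1

The same lines, with the coefficients of Examples 152, 154, or 155 substituted, should reproduce the remaining printed answers in the same way.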
APPENDIX.

ELEMENTS OF THE THEORY OF PROBABILITY.

200. Definition. If an event can happen in a ways, and fail in b ways, and all these ways are equally likely to occur, the probability of the happening of the event is a/(a + b), and the probability of its failure is b/(a + b).

Since the event must either happen or fail, the sum of the above probabilities must represent a certainty. But

a/(a + b) + b/(a + b) = 1.

That is, the probability of a certainty is expressed by unity. Also, if the probability, P, of the happening of an event is known, the probability of its failure is given at once by 1 − P.

201. Example A. A single throw is made with a pair of dice. What is the probability that the sum of the spots turned up will be 5?

Number of ways of throwing the dice is 6 × 6 = 36.
Number of ways of throwing five is 4.
Hence the probability of throwing five is 4/36 = 1/9.

Example B. A coin is tossed up six times. Find the chance that three heads and three tails will be the result.

Number of ways of throwing the coin is 2^6 = 64.
Number of ways of throwing three heads is (6 × 5 × 4)/(1 × 2 × 3) = 20.
Probability of throwing three heads is 20/64 = 5/16.

202. Compound Events. A certain event can happen in a ways and fail in b ways; a second independent event can happen in a' ways, and fail in b' ways, all of these ways being equally likely to occur. To find the probability of the simultaneous occurrence of the two events.

The total number of ways in which the events can take place together is (a + b)(a' + b').

(1) Both events can happen in a a' ways.
(2) Both events can fail in b b' ways.
(3) First event can happen and second fail in a b' ways.
(4) First event can fail and second happen in a' b ways.

The probability of (1) is a a' / [(a + b)(a' + b')].
The probability of (2) is b b' / [(a + b)(a' + b')].
The probability of (3) is a b' / [(a + b)(a' + b')].
The probability of (4) is a' b / [(a + b)(a' + b')].

But the probability of the happening of the first event is a/(a + b), and of the second event is a'/(a' + b'), etc. Hence it will at once be seen that the probability of the simultaneous occurrence of two independent events is equal to the product of the probabilities of the occurrence of the component events. Or, in general, if P1, P2, . . . Pn are the probabilities of the occurrence of any number, n, of independent events, the probability of the simultaneous occurrence of all the events is

P1 × P2 × . . . × Pn        (A)

By independent events is meant those such that the manner of occurrence of one has no influence upon the manner of occurrence of the others.

203. Example C. The chance that A can solve a certain problem is 2/3, and the chance that B can solve it is 5/12. Find,

(a) The probability that both will solve it.
(b) The probability that the problem will be solved.

For (a). This is a question as to the probability of the concurrence of two independent events. Therefore, by an application of (A), the probability that both will solve the problem is

(2/3) × (5/12) = 5/18.

For (b). The problem will be solved unless both fail. The probability that both will fail is

(1/3) × (7/12) = 7/36,

and the probability of getting a solution is

1 − 7/36 = 29/36.

Example D. A pack of cards is cut, and those taken off then replaced. In how many trials will it be an even wager that an ace will be cut?

Let n be the number of trials. Then n is to be found from

1 − (48/52)^n = 1/2,

where the first member of the equation represents the probability that we shall not fail n times in succession. Solving for n,

n = log 2 / (log 52 − log 48) = 8.7.

In nine trials, then, there is a little more than an even chance of cutting an ace.

204. Dependent Events. If we have a number of events whose modes of occurrence are dependent one upon another, the probability of their concurrence will be found by the same method as in paragraph 202; a' now denoting the number of ways in which, after the first event has happened, the second will follow, and b' the number of ways in which, after the first has happened, the second will not follow, etc. Accordingly, the general formula (A) of paragraph 202 applies to dependent events as well as to independent ones.

Also, if an event can take place in a variety of ways, the total probability of its occurrence will be the sum of the probabilities of its occurrence in each of the different ways.
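As a quick check on the arithmetic of Examples B, C, and D, the same probabilities can be computed directly. The sketch below is merely an illustration in modern notation and is not part of the original text.

    from fractions import Fraction
    from math import comb, log

    # Example B: three heads in six tosses of a coin.
    print(Fraction(comb(6, 3), 2**6))            # 5/16

    # Example C (b): the problem is solved unless both A and B fail.
    print(1 - Fraction(1, 3) * Fraction(7, 12))  # 29/36

    # Example D: number of cuts n for an even wager, from 1 - (48/52)**n = 1/2.
    print(log(2) / (log(52) - log(48)))          # about 8.7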
205. Example E. Suppose two purses contain respectively five dimes and a copper, and six dimes. A coin is taken at random from the first purse and placed in the second, and then a coin is transferred from the second to the first. What is the probability that the copper will remain in the first purse?

The probability that the copper will be taken from the first purse and placed in the second, and then returned to the first purse, is

(1/6) × (1/7) = 1/42,

and the probability that the copper will not be taken from the first purse at all is 5/6.

Therefore the probability that the copper will finally remain in the first purse is

1/42 + 5/6 = 36/42 = 6/7.

FUNCTIONS OF SEVERAL VARIABLES.

206. For the application of Taylor's Theorem to the expansion of a function of several independent variables, see Osborne's "Differential and Integral Calculus," page 145. And for the conditions that lead to maxima and minima values of such functions, see page 155 of the same work.

BIBLIOGRAPHY.

The following brief list of treatises dealing with the Method of Least Squares is appended for the benefit of those whose professional work requires such constant application of the process as to render desirable a more detailed knowledge of various special methods of solution. In connection with some of the titles attention is called to the subjects the treatment of which is particularly full.

Johnson, "The Theory of Errors and Method of Least Squares." Probability of Errors. Systematic Errors. The Method of Substitution.

Wright, "Treatise on the Adjustment of Observations." Special Methods of Solution. Applications to Geodetic and Engineering Problems.

Merriman, "Text-Book on the Method of Least Squares."

Chauvenet, "Treatise on the Method of Least Squares." Development of the Theory. Applications to Astronomical Observations.

Bobek, "Lehrbuch der Ausgleichsrechnung nach der Methode der Kleinsten Quadrate." General Synopsis of the Method, illustrated by Numerous Examples.

Koll, "Die Methode der Kleinsten Quadrate." Applications to Geodesy.

Hansen, "Von der Methode der Kleinsten Quadrate." Applications to Geodesy.

Helmert, "Die Ausgleichungsrechnung nach der Methode der Kleinsten Quadrate."

Liagre, "Calcul des Probabilités."

Holman, "Discussion of the Precision of Measurements." Problems in Physics and Electrical Engineering.

Weinstein, "Handbuch der Physikalischen Maassbestimmungen." Applications to Physical Problems.

Oppolzer, "Lehrbuch zur Bahnbestimmung der Kometen und Planeten."

Jordan, "Handbuch der Vermessungskunde."

For a complete list of works on the Method of Least Squares published up to 1876, see Merriman, "A List of Writings relating to the Method of Least Squares, with Historical and Critical Notes," published in the Transactions of the Connecticut Academy, vol. iv, 1877. Notice of works published since 1876 may be found in periodicals devoted to the progress of Mathematical Science, such as the "Jahrbuch über die Fortschritte der Mathematik" and the "Bulletin des Sciences Mathématiques."

TABLES.

TABLE I. Values of the Integral (2/√π) ∫ from 0 to t of e^(−t²) dt, for Argument t = ha, or 0.4769 a/r.
(Values are tabulated against a/r, by hundredths from 0.00 to 3.49 and by tenths from 3.0 to 5.9, with first differences; the tabulated figures are not reproduced here.)

TABLE II. Common Logarithms.

(Four-place mantissas for numbers from 10 to 99, with differences; the tabulated figures are not reproduced here.)

TABLE III. Squares of Numbers.

(Squares of numbers from 1.00 to 9.99, with differences; the tabulated figures are not reproduced here.)
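The three tables above were the computing aids of the period, and their entries are easily regenerated with standard functions. In the sketch below (an illustration only; scipy is assumed to be available) Table I is read, by inference from its legible entries, as erf(0.4769 a/r), the probability that an error is numerically smaller than a when the probable error is r.

    import numpy as np
    from scipy.special import erf

    ratio = np.array([0.5, 1.0, 2.0, 3.0])    # a/r
    print(erf(0.4769 * ratio))                # about 0.264, 0.500, 0.823, 0.957 (Table I)
    print(np.log10(17))                       # 1.2304; Table II lists the mantissa 2304
    print(np.linspace(1.0, 1.4, 5) ** 2)      # 1.00, 1.21, 1.44, 1.69, 1.96 (Table III)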