Automatic expert system based on images for accuracy crop row detection in maize fields

J.M. Guerrero, M. Guijarro, M. Montalvo, J. Romeo, L. Emmi, A. Ribeiro, G. Pajares

ABSTRACT

This paper proposes an automatic expert system for accurate crop row detection in maize fields based on images acquired from a vision system. Different applications in maize, particularly those based on site-specific treatments, require the identification of the crop rows. The vision system is designed with a defined geometry and installed onboard a mobile agricultural vehicle, i.e. subject to vibrations, gyros or uncontrolled movements. Crop rows can be estimated by applying geometrical parameters under image perspective projection. Because of the above undesired effects, the estimation most often turns out to be inaccurate when compared with the real crop rows. The proposed expert system exploits human knowledge, which is mapped into two modules based on image processing techniques. The first is intended for separating green plants (crops and weeds) from the rest (soil, stones and others). The second is based on the system geometry, where the expected crop lines are mapped onto the image and a correction is then applied through the well-tested and robust Theil-Sen estimator in order to adjust them to the real ones. Its performance compares favorably against the classical Pearson product-moment correlation coefficient.

1. Introduction

1.1. Problem statement

Machine vision systems onboard robots are being increasingly used for site-specific treatments in agriculture. With such arrangements, the robot navigates and acts over a site-specific area of a larger farm (Davies, Casady, & Massey, 1998), where the vision systems can supply abundant information.

An important issue related to the application of machine vision methods is that of crop row and weed detection, which has attracted numerous studies in this area (Burgos-Artizzu, Ribeiro, Tellaeche, Pajares, & Fernández-Quintanilla, 2009; Guerrero, Pajares, Montalvo, Romeo, & Guijarro, 2012; López-Granados, 2011; Montalvo et al., 2012; Onyango & Marchant, 2003; Sainz-Costa, Ribeiro, Burgos-Artizzu, Guijarro, & Pajares, 2011; Tellaeche, Burgos-Artizzu, Pajares, & Ribeiro, 2008; Tellaeche, Burgos-Artizzu, Pajares, Ribeiro, & Fernández-Quintanilla, 2008). The goal is to eliminate weeds to favor the growth of crops.

The vision system consists of a CCD-based calibrated camera with known intrinsic parameters, i.e. focal length, lens distortion, image center, CCD sensor sizes and pixel resolutions. The camera is located at the front of the robot, inclined with a tilt (pitch) angle and at a known height above the ground. Yaw and roll angles are also known. This allows determining the rotation and translation matrices defining the extrinsic parameters. Thus, areas in the field can be identified on the image plane. This means that given an element in the field, with its spatial location, we can determine its relative position on the image.

The vehicle navigates on real terrain presenting irregularities and roughness. This produces vibrations and also swinging, mainly in the pitch and roll angles. The yaw angle is assumed to be correct because otherwise the robot would navigate erroneously out of the crop rows. Moreover, the spacing of crop rows in the field is also known.
Because of the above, the expected crop rows mapped onto the image most often do not match the real ones, and this inaccurate estimation impedes the application of correct site-specific treatments. On the other hand, the discrimination of crops and weeds in the image is a very difficult task because their Red, Green and Blue spectral components display similar values. This means that no discrimination is possible between crops and weeds based on the spectral signatures alone. Thus, the best option is to locate the crop rows in the image with as much accuracy as possible. Indeed, if the crop rows are well located, we can accurately identify the pixels along and around the detected line as crops, and the remainder, which lie farther away, can be considered as weeds. To achieve this goal, we propose an automatic expert system, which exploits human knowledge, with two main modules based on image processing techniques, as described later.

1.2. Revision of methods

Several strategies have been proposed for crop row detection. Fontaine and Crowe (2006) tested the abilities of four line-detection algorithms to determine the position and the angle of the camera with respect to a set of artificial rows with and without simulated weeds. These were stripe analysis, Hough transform, blob analysis and linear regression. The following is a list of crop row detection methods grouped into different categories, including the above.

1.2.1. Methods based on the exploration of horizontal strips

Sogaard and Olsen (2003) apply an RGB color image transformation to gray scale. This is done by first dividing the color image into its red, green and blue channels and then applying the well-tested methods to extract living plant tissue described in Woebbecke, Meyer, von Bargen, and Mortensen (1995). After this, the gray scale image is divided into horizontal strips where maximum gray values indicate the presence of a candidate row; each maximum determines a row segment, and the center of gravity of the segment is marked at this strip position. Crop rows are identified by joining the marked points through a method similar to the one utilized in the Hough transform, or by applying linear regression. Sainz-Costa et al. (2011) have developed a strategy based on the analysis of video sequences for identifying crop rows. Crop rows persist along the directions defined by the perspective projection of the 3D scene in the field. Exploiting this fact, they apply a gray scale transformation based on the approach proposed by Ribeiro, Fernández-Quintanilla, Barroso, and García-Alegre (2005), and the image is then binarized by applying a thresholding technique. Each image is divided into four horizontal strips. Rectangular patches are drawn over the binary image to identify patches of crops and rows. The gravity centers of these patches are used as the points defining the crop rows, and a line is fitted to these points. The first frame in the sequence is used as a lookup table that guides the full process for determining the positions where the next patches in subsequent frames are to be identified. Hague, Tillett, and Wheeler (2006) transform the original RGB image to gray scale. The transformed image is then divided into eight horizontal bands. The intensity of the pixels across these bands exhibits a periodic variation due to the parallel crop rows. Since the camera characteristics, pose and the crop row spacing are known a priori, the row spacing in image pixels can be calculated for each of the horizontal bands using a pinhole model of the camera optics. A band-pass filter can then be constructed which will enhance this pattern, with a given frequency domain response. Sometimes horizontal patterns are difficult to extract because crops and weeds form a unique patch.
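To make the strip-exploration idea concrete, the following minimal sketch (in Python with NumPy, the language used for all examples here) scans horizontal strips of a greenness gray-scale image and returns candidate row points as strip-wise centers of gravity, in the spirit of Sogaard and Olsen (2003). The function name, the number of strips and the peak cut-off are our illustrative assumptions, not the original authors' exact choices.

```python
import numpy as np

def strip_row_centers(gray, n_strips=8):
    """Locate candidate crop-row points by scanning horizontal strips.

    For each strip, column-wise intensity sums are computed; runs of
    columns above a crude cut-off mark candidate rows, and the
    gray-level center of gravity of each run gives the row-point abscissa.
    """
    h, w = gray.shape
    centers = []  # (row of strip center, column of gravity center)
    bounds = np.linspace(0, h, n_strips + 1, dtype=int)
    for top, bottom in zip(bounds[:-1], bounds[1:]):
        profile = gray[top:bottom].sum(axis=0).astype(float)
        threshold = profile.mean() + profile.std()   # crude peak cut-off
        above = profile > threshold
        # split the thresholded profile into connected runs (candidate rows)
        edges = np.flatnonzero(np.diff(above.astype(int)))
        runs = np.split(np.arange(w), edges + 1)
        for run in runs:
            if run.size and above[run[0]]:
                weights = profile[run]
                cog = float((run * weights).sum() / weights.sum())
                centers.append(((top + bottom) / 2.0, cog))
    return centers
```

Joining the returned points across strips, e.g. by linear regression, then yields the candidate crop lines.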
1.2.2. Methods based on the Hough transformation

According to Slaughter, Giles, and Downey (2008), one of the most commonly used machine vision methods for identifying crop rows is based upon the Hough (1962) transform. It is intended to deal with discontinuous lines, where the crop stand is incomplete, with gaps in crop rows due to poor germination or other factors that result in missing crop plants in the row. It has been applied to real-time automatic guidance of agricultural vehicles (Astrand & Baerveldt, 2005; Hague, Marchant, & Tillett, 1997; Leemans & Destain, 2006; Marchant, 1996). It is applied to binary images, which are obtained by applying techniques similar to the ones explained above, i.e. RGB image transformation to gray scale and binarization (Tellaeche, Pajares, Burgos-Artizzu, & Ribeiro, 2011; Tellaeche et al., 2008; Tellaeche, Burgos-Artizzu, Pajares, Ribeiro, et al., 2008). Gee, Bossu, Jones, and Truchetet (2008) apply a double Hough transform under the assumption that crop rows are the only lines of the image converging to the vanishing point; the remaining lines are rejected, and additional constraints such as inter-row spacing and perspective geometry concepts help to identify the lines. It is necessary to determine the threshold used by the Hough transform to select maximum peak values (Jones, Gée, & Truchetet, 2009a, 2009b) or predominant peaks (Rovira-Más, Zhang, Reid, & Will, 2005). Depending on the crop densities, several lines could be feasible, and a posterior merging process is applied to lines with similar parameters (Tellaeche et al., 2008; Tellaeche, Burgos-Artizzu, Pajares, Ribeiro, et al., 2008; Tellaeche et al., 2011). Although intended for real time, as mentioned before, in our images, where both crop and weed plants contribute to the Hough parameter estimation, this method becomes computationally expensive (Ji & Qi, 2011). On the other hand, the randomized Hough transform requires selecting pairs of points to be considered as a line, i.e. pairs of points belonging to a crop row. If we apply this technique to images where edge points have been extracted, the selection of those pairs becomes highly complex because weeds are also involved.
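As an illustration of this category, the sketch below runs OpenCV's standard Hough transform over a binary segmentation and keeps near-vertical candidates. The synthetic input, the vote threshold and the angular window are hypothetical choices for illustration only; a real system would tune them per image resolution and still needs the peak-merging step discussed above.

```python
import cv2
import numpy as np

# Minimal sketch of Hough-based row-candidate detection on a binary
# segmentation. The synthetic image stands in for a real segmented field.
binary = np.zeros((480, 640), dtype=np.uint8)
cv2.line(binary, (300, 479), (320, 0), 255, 3)     # one near-vertical "row"

lines = cv2.HoughLines(binary, rho=1, theta=np.pi / 180, threshold=150)
if lines is not None:
    for rho, theta in lines[:, 0]:
        # near-vertical lines (normal angle close to 0 or pi) are row
        # candidates; similar (rho, theta) pairs would still need merging
        if theta < np.pi / 6 or theta > 5 * np.pi / 6:
            print(f"candidate row: rho={rho:.1f}, "
                  f"theta={np.degrees(theta):.1f} deg")
```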
1.2.3. Vanishing point-based methods

Pla, Sanchiz, Marchant, and Brivot (1997) propose an approach that identifies regions (crops/weeds and soil) by applying color image segmentation. They use the skeleton of each defined region as a feature to work out the lines that define the crop. The resulting skeletons and their properties, defined as chains of connected contour points, allow the identification of crop rows oriented toward the vanishing point. This process is highly dependent on the skeletons, which are not always easy to extract, especially taking into account that weed patches are present. Romeo et al. (2012) also apply knowledge concerning the position of the vanishing point and the crop row arrangement in the field to detect the expected crop rows. The process is based on the identification of the maximum accumulation of green pixels along lines oriented toward the vanishing point. A supervised fuzzy clustering method is the strategy proposed for greenness identification. This makes the method highly dependent on the training phase, unlike the one proposed in this approach, which is automatic, i.e. unsupervised.

1.2.4. Stereo-based approach

Kise, Zhang, and Rovira-Más (2005) and Kise and Zhang (2008) developed a stereovision-based agricultural machinery crop-row tracking navigation system. Stereo-image processing is used to determine the 3D locations of the scene points of the objects of interest from the obtained stereo image. Those 3D positions, determined by means of stereo image disparity computation, provide the base information to create an elevation map that uses a 2D array with varying intensity to indicate the height of the crop. This approach requires crops with significant heights with respect to the ground. Because in maize fields, during the treatment stage, the heights are not relevant, it becomes ineffective in our application. Rovira-Más, Zhang, and Reid (2008) have applied and extended stereovision techniques to other areas inside Precision Agriculture. These methods are only feasible if crops or weeds in the 3D scene display a relevant height.

1.2.5. Methods based on blob analysis

This method finds and characterizes regions of contiguous pixels of the same value in a binarized image (Fontaine & Crowe, 2006). The algorithm searches for white blobs (inter-row spaces) of more than 200 pixels, under the assumption that smaller blobs could represent noise in the crop rows. Once the blobs were identified, the algorithm determined the angle of their principal axes and the location of their centre of gravity. For a perfectly straight white stripe, the centre of gravity of the blob lies over the centre line of the white stripe, and the angle is representative of the angle of the inter-row spaces. The algorithm returned the angle and center of gravity of the blob closest to the center of the image. Identification of blobs in areas with weed patches does not distinguish between blobs caused by weeds and those caused by crops.

1.2.6. Methods based on the accumulation of green plants

Olsen (1995) proposed a method based on the observation that an important accumulation of green parts appears along the crop row in the image. The image is transformed to gray scale, where green parts appear clearer than the rest. A sum-curve of gray levels is obtained for a given rectangular region by exploring all columns in the rectangle. It is assumed that crop rows follow the vertical direction in the image. The images are free of perspective projection because they are acquired with the camera in an orthogonal (nadir) position. A sinusoidal curve is fitted by means of least squares to the sum-curve previously obtained. Local maxima of the sinusoid provide the row center locations.
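The accumulation idea reduces to a few array operations. The sketch below builds the column sum-curve and, instead of Olsen's least-squares sinusoid fit, substitutes a simple smoothed local-maximum search; the kernel width and the prominence cut-off are our assumptions for illustration.

```python
import numpy as np

def column_sum_curve_peaks(gray):
    """Accumulate gray levels down every image column (Olsen-style).

    With a nadir (perspective-free) view, crop rows are near-vertical,
    so the sum-curve peaks at row centers. Returns local maxima of the
    smoothed curve as candidate row-center columns.
    """
    curve = gray.sum(axis=0).astype(float)
    kernel = np.ones(15) / 15.0              # simple moving-average smoothing
    smooth = np.convolve(curve, kernel, mode="same")
    # indices strictly greater than both neighbours
    peaks = np.flatnonzero((smooth[1:-1] > smooth[:-2]) &
                           (smooth[1:-1] > smooth[2:])) + 1
    return peaks[smooth[peaks] > smooth.mean()]   # keep prominent peaks only
```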
1.2.7. Methods based on frequency analysis

Because crop rows are vertical in the 3D scene, they are mapped under perspective projection onto the image displaying a characteristic behavior in the frequency domain. Vioix et al. (2002) exploit this feature and apply a bi-dimensional Gabor filter, defined as a modulation of a Gaussian function by a cosine signal. The frequency parameter required by the Gabor filter is empirically deduced from the 2D Fast Fourier Transform (Bossu, Gee, Guillemin, & Truchetet, 2006). Bossu, Gee, Jones, and Truchetet (2009) apply wavelets to discriminate crop rows based on frequency analysis. They exploit the fact that crop rows are well localized in the frequency domain; thus, by selecting a mother wavelet function with this frequency, the crop rows can be extracted. Crops, in the images we have studied, do not display clear frequency contents in the Fourier space; therefore, the application of filters based on frequency becomes a difficult task.

1.2.8. Methods based on linear regression

Some of the techniques above apply this approach. Billingsley and Schoenfisch (1997) reported a crop detection system that is relatively insensitive to additional visual 'noise' from weeds. They used linear regression in each of three crop row segments considered, and a cost function analogous to the moment of the best-fit line to detect lines fitted to outliers (i.e., noise and weeds) as a means of identifying row guidance information. Montalvo et al. (2012) apply linear regression for crop row detection in images containing high weed densities. Some templates are used to guide the detection. Linear regression is also applied in Sogaard and Olsen (2003). Linear regression is highly sensitive to isolated weed patches placed in the inter-crop rows and also to weed patches overlapped with crops. In this paper we also apply linear regression, but based on the Theil-Sen estimator (Sen, 1968; Theil, 1950), which is free of the above sensitivity and has been proven in statistics with satisfactory results.

2. Design of the automatic expert system

2.1. System architecture

The system architecture is inspired by the human expert knowledge about the specific application, also considering the requirements that must be fulfilled. Astrand (2008) and Slaughter et al. (2008) propose a list of requirements for guidance systems that can also be considered for crop row detection, which is in essence a similar problem. Knowledge and requirements are mapped as follows to build the architecture of the proposed automatic expert system for accurate crop row detection based on images.

(a) Both crop and weeds display similar color spectral components, and during the treatment their growth stages are similar, i.e. the plants have similar heights.
(b) Crop rows are accumulations of green plants following specific alignments oriented to the vanishing point. Crops are sown, not manually planted, and the inter-line distances in the field are known.
(c) Weeds appear in patches, isolated or overlapped with respect to crops, with irregular distributions.
(d) Crop rows must be located as accurately as possible, regardless of the distribution of weed patches around the crop, and also considering that crop plants could be missing along crop lines, which is a common situation.
(e) The camera system geometry is known, i.e. the intrinsic and extrinsic parameters.
(f) The robot navigates on uneven terrain with perhaps abundant irregularities.
(g) The system must work in real time. This represents a trade-off between the speed of the robot and the computational cost.

Fig. 1. Automatic expert system architecture: image segmentation (combined vegetation indices and greenness reinforcement, followed by Otsu thresholding to produce a binary image of crop lines and weed patches) and crop row detection (tracing the expected crop lines according to the camera system geometry, then applying the Theil-Sen estimator).
Based on this knowledge and these requirements, and also considering the advantages and shortcomings of the different crop row detection methods, the automatic expert system is designed with two main modules: image segmentation and crop row estimation. Fig. 1 schematically displays these two modules with their corresponding processes. This results in a robust expert system, which constitutes the contribution of this paper.

2.2. Image segmentation

Image segmentation is focused on the separation of green plants (crops and weeds) from the rest (soil, stones and others). According to point (a) in the list of knowledge and requirements above, the best option to identify weeds and crop is the application of vegetation indices instead of methods based on height discrimination. Vegetation indices are well-tested methods; Guijarro et al. (2011) propose a combination of vegetation indices, which is the one chosen in this paper because of its performance in maize fields.

2.2.1. Combination of vegetation indices

Given an original input image in the RGB color space, we apply the following normalization scheme, which is usually applied in agronomic image segmentation (Gee et al., 2008):

r = \frac{R_n}{R_n + G_n + B_n}, \quad g = \frac{G_n}{R_n + G_n + B_n}, \quad b = \frac{B_n}{R_n + G_n + B_n} \tag{1}

where R_n, G_n and B_n are the normalized RGB coordinates, ranging from 0 to 1, obtained as follows:

R_n = \frac{R}{R_{\max}}, \quad G_n = \frac{G}{G_{\max}}, \quad B_n = \frac{B}{B_{\max}} \tag{2}

where R_{\max} = G_{\max} = B_{\max} = 255 for our 24-bit color images.

The vegetation indices to be combined are computed as follows (Guijarro et al., 2011).

Excess green (Woebbecke et al., 1995; Ribeiro et al., 2005):

ExG = 2g - r - b \tag{3}

Color index of vegetation extraction (Kataoka, Kaneko, Okamoto, & Hata, 2003):

CIVE = 0.441\,r - 0.811\,g + 0.385\,b + 18.78745 \tag{4}

Vegetative (Hague et al., 2006), with a = 0.667 as in its reference:

VEG = \frac{g}{r^{a}\, b^{1-a}} \tag{5}

Excess green minus excess red (Meyer & Camargo-Neto, 2008; Neto, 2004):

ExGR = ExG - ExR \tag{6}

where excess red is computed as ExR = 1.4r - g (Meyer, Hindman, & Lakshmi, 1998).

According to Guijarro et al. (2011), the above four indices are combined to obtain the resulting value COM as follows:

COM = w_{ExG}\,ExG + w_{ExGR}\,ExGR + w_{CIVE}\,CIVE + w_{VEG}\,VEG \tag{7}

where w_{ExG} = 0.25, w_{ExGR} = 0.30, w_{CIVE} = 0.33 and w_{VEG} = 0.12 are the weights for each index, representing their relative relevance in the combination. The resulting combined image COM is linearly mapped to range into the interval [0, 1].

2.2.2. Greenness reinforcement

Romeo et al. (2012) propose a fuzzy clustering strategy in which the cluster containing pixels belonging to green plants is analyzed. Clusters contain pixels with the three spectral components of the RGB model as features. Obviously, and as expected, the green spectral component is dominant. On average, this component in the cluster center for green plants represents values above 36% with respect to the other two components. Exploiting this knowledge and applying the trivial reasoning that pixels coming from plants should have their green component dominant, we accentuate the greenness in COM by multiplying its values by g in Eq. (1), i.e. a new greenness image is obtained as GA = COM \cdot g. The multiplication is carried out pixel by pixel, and GA is linearly mapped to range in [0, 1]. Because g represents the percentage of the green component, the result obtained represents the emphasis on the greenness.
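Eqs. (1)-(7) and the greenness reinforcement GA = COM \cdot g translate directly into array operations. The following sketch implements those equations; the small additive guards against division by zero and the min-max mapping to [0, 1] are our implementation assumptions.

```python
import numpy as np

def combined_greenness(rgb):
    """Combined vegetation index COM and green-accentuated image GA.

    rgb: H x W x 3 array of 24-bit RGB values. Weights follow
    Guijarro et al. (2011); [0, 1] mappings are min-max linear.
    """
    # Eqs. (1)-(2): normalize to [0, 1], then to chromatic coordinates
    Rn, Gn, Bn = (rgb[..., k].astype(float) / 255.0 for k in range(3))
    s = Rn + Gn + Bn + 1e-12                  # guard against black pixels
    r, g, b = Rn / s, Gn / s, Bn / s

    exg = 2 * g - r - b                                        # Eq. (3)
    cive = 0.441 * r - 0.811 * g + 0.385 * b + 18.78745        # Eq. (4)
    a = 0.667
    veg = g / (np.power(r, a) * np.power(b, 1 - a) + 1e-12)    # Eq. (5)
    exgr = exg - (1.4 * r - g)                                 # Eq. (6)

    com = 0.25 * exg + 0.30 * exgr + 0.33 * cive + 0.12 * veg  # Eq. (7)
    com = (com - com.min()) / (com.max() - com.min() + 1e-12)

    ga = com * g                               # greenness reinforcement
    return (ga - ga.min()) / (ga.max() - ga.min() + 1e-12)
```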
2.2.3. Thresholding

Given the transformed image GA, the next step is its binarization for posterior processing. A simple threshold based on the mean gray level of the image (histogram) was implemented in Gee et al. (2008), where the living plant material (crop or weed) appears as white spots and the rest (i.e. soil surface, stones, shadows) as black. In Guijarro et al. (2011), the well-known Otsu's (1979) method, traditionally applied for binarization, was also used. More complex approaches have also been applied, such as the one used in Bossu et al. (2009), based on the k-means clustering method. We have chosen Otsu's method for its well-known performance, as reported in Meyer and Camargo-Neto (2008), and also based on the study of Sezgin and Sankur (2004), where its performance has been tested in images where the numbers of pixels in the two parts of the histogram that Otsu's method produces are close to each other.

Fig. 2(a) displays an original image in the RGB color space of a maize crop field. The color space transformation obtained by applying GA is displayed in Fig. 2(b). Fig. 2(c) displays the transformation of the image in Fig. 2(b) obtained by applying Otsu's method. Note the landmarks in the image, which are explained later in Section 3.

Fig. 2. (a) Original image; (b) GA index extracted from the image in (a); (c) binary image after Otsu thresholding.
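Continuing the previous sketch, the binarization of GA with Otsu's method can be performed, for instance, with OpenCV, whose Otsu implementation expects an 8-bit single-channel input; the rescaling step and the random placeholder image are our assumptions to keep the fragment self-contained.

```python
import cv2
import numpy as np

# Binarize the GA image (values in [0, 1]) with Otsu's method. `ga` is
# assumed to come from combined_greenness() above; a random placeholder
# keeps the fragment runnable on its own.
ga = np.random.rand(480, 640)            # placeholder for a real GA image
ga8 = np.uint8(np.round(ga * 255))       # OpenCV's Otsu needs 8-bit input
_, binary = cv2.threshold(ga8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
# white (255) marks green plants (crop and weeds); black marks the rest,
# as in the Fig. 2(c)-style binary output
```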
2.3. Crop row estimation

This module is intended to apply the knowledge embedded in points (b)-(f) of Section 2.1; at the same time, it provides specific solutions for the requirements expressed in those points.

2.3.1. Tracing expected crop lines

The robot navigates on uneven terrain with perhaps abundant irregularities; the knowledge of the extrinsic parameters of the vision system does not suffice because the camera is continuously involved in a permanent swinging. We propose the customization of the Theil-Sen regression estimation approach, because of its well-tested performance in statistics. Because the crop row arrangement in the field is known, as well as the extrinsic and intrinsic camera system parameters, the expected crop row locations in the image can be estimated and mapped as known lines onto the image (Fu, González, & Lee, 1987; Hartley & Zisserman, 2006). Under the assumption of an ideal system geometry, the expected lines should match and overlap the imaged real crop rows. Nevertheless, due to uneven terrain and errors in the crop row alignment during sowing, this often does not occur.

Therefore, under the above consideration, two cases can appear with respect to the expected and the imaged real crop lines: (a) they match; (b) they do not match. In the first case, the detection method needs to verify this matching. In the second case, a line location correction must be applied until the real crop row is located. Under this approach, the system geometry, through the intrinsic and extrinsic parameters, guides the crop row detection process.

Now the question is: how can we verify whether or not the expected lines match the real crop ones? Because we have available white pixels representing green plants in the binary image, we can adjust a straight line to the specific pixel alignments that are expected to identify crop rows. This will represent the real crop line. So, because we have both the expected straight line equation and the adjusted one, we are able to verify the correct or incorrect match between both lines. Thus, we focus the effort on methods for estimating the parameters defining the real crop lines.
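For illustration only, the sketch below shows how an expected crop row, known in field coordinates, can be mapped onto the image as a known straight line using the standard pinhole formulation cited above (Fu et al., 1987; Hartley & Zisserman, 2006). The ground-plane convention (Y = 0) and all names are our assumptions, not the authors' exact formulation.

```python
import numpy as np

def project_expected_row(K, R, t, x_field, depths):
    """Project an expected (straight, known-spacing) crop row onto the image.

    K: 3x3 intrinsic matrix; R, t: extrinsic rotation (3x3) and
    translation (3,); x_field: lateral field coordinate of the row (m);
    depths: ground-plane distances ahead of the camera (m).
    """
    # 3D points along the row on the ground plane, world coords (X, 0, Z)
    pts = np.stack([np.full_like(depths, x_field),
                    np.zeros_like(depths), depths], axis=1)
    cam = pts @ R.T + t                 # world -> camera frame
    uvw = cam @ K.T                     # camera frame -> homogeneous pixels
    return uvw[:, :2] / uvw[:, 2:3]     # perspective divide -> (u, v)
```

Two such projected points suffice to define the expected line in the image, which is then compared against the line fitted to the white pixels as described next.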
2.3.2. Correction of the expected crop lines: Theil-Sen estimator

An important problem to be addressed in our approach is that the selected method must cope with specific pixel alignments but also be robust enough to avoid significant deviations caused by weeds that are not aligned and are placed more or less near the main crop row alignments. This is the main issue addressed in this work.

Stewart (1999) provides a tutorial oriented toward robust parameter estimation in computer vision. Two frequently used techniques are least median of squares (LMS) (Rousseeuw, 1984) and M-estimators (Hampel, Rousseeuw, Ronchetti, & Stahel, 1986; Huber, 1981), but the huge volume of data implies that parameter estimation techniques in computer vision are heavily overconstrained, even for problems where low-level feature extraction, such as edge detection, is applied. This in turn implies that parameter estimation problems in vision should be solved by least squares or, more generally, maximum likelihood estimation (MLE) techniques. Unfortunately, computer vision data are rarely drawn from a single statistical population, as required for effective use of MLE.

In our approach we must estimate the two parameters defining the straight line equations associated with the corresponding crop rows: the slope a and the intercept \beta. For the linear regression approach, after several studies, we observed that the least squares estimator of the regression coefficient a is vulnerable to gross errors, and the associated confidence interval is, in addition, sensitive to non-normality of the parent distribution. In terms of other measures, for example the breakdown point (Rousseeuw & Leroy, 1987), a small number of outlying data can cause an estimate to diverge arbitrarily far from the true estimate. From the point of view of our approach, this means that a few weed pixels can move the least squares fit far from the true fit, i.e. far from the real crop line. A second measure of robustness is the influence function (Hampel et al., 1986; Huber, 1981), which measures the change in an estimate caused by the insertion of outlying data as a function of distance; to achieve robustness it should tend to zero with increasing distance, otherwise false estimations are also caused.

Alternative estimators for the regression coefficient a, based on suitable rank tests, were proposed by Mood (1950), who estimates both parameters a and \beta simultaneously, using the statistical median, by trial and error. Adichie (1967) proposes a more restrictive method under the assumption that the set of points to be adjusted follows an absolutely continuous and symmetric distribution function with an absolutely continuous and square integrable density function. Theil (1950) proposes a very simple estimator for a, also using the statistical median. Dytham (2011) and Sen (1968) study a simple and robust estimator for a based on Kendall's (1955) tau rank correlation, a simple non-parametric test that can be used instead of normal regression. Hence, the estimator for a and \beta is based on the median of the set of slopes, where a simple slope is computed between every possible pair of pixels i and j with image coordinates (x_i, y_i) and (x_j, y_j) respectively; the median slope is then selected as the best estimate for a.

Based on the above considerations, we select the Theil-Sen estimator as proposed in Massart (1997), because of its statistical efficiency and its robustness, even for low image resolutions, resulting in a promising approach for agricultural images containing crop rows. Nevertheless, as we will see later, its effectiveness from the real-time point of view is relatively low. This means that further analysis or new software and hardware implementations would be required for real-time processing.

A straight line is represented by its slope a and its intercept \beta as follows:

Y = aX + \beta \tag{8}

Given a distribution of n pixels, the goal is to adjust a straight line to such a distribution. The Theil-Sen estimator evaluates pairs of pixels i and j and computes the slope over the set of all possible pairs of such pixels, i.e. over the n(n - 1)/2 possible combinations. This is carried out as follows:

a = \operatorname{med}\left\{ S_{ij} \,\middle|\, S_{ij} = \frac{y_j - y_i}{x_j - x_i + \varepsilon},\; x_i \neq x_j,\; i, j = 1, 2, \ldots, n \right\} \tag{9}

The \varepsilon parameter, set to 10^{-3}, is introduced to avoid exactly vertical lines with slope tending toward \infty. Polar coordinates could be used to avoid this problem; nevertheless, vertical lines do not appear in our real application. The estimation of the intercept \beta is computed as the statistical median of the intercepts obtained with the robust slope a in (9). This is carried out as follows:

\beta = \operatorname{med}\left( y_i - a\,x_i \right) \tag{10}
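Eqs. (9) and (10) translate directly into a short NumPy routine. The vectorized pair enumeration below is an implementation convenience (memory grows as O(n^2), matching the combinatorial cost noted above); the estimator itself follows the median-of-slopes, median-of-intercepts definition.

```python
import numpy as np

def theil_sen(xs, ys, eps=1e-3):
    """Theil-Sen line fit: median slope over all pixel pairs (Eq. (9)),
    then median intercept (Eq. (10)).

    eps plays the role of the paper's epsilon, guarding against the
    infinite slope of exactly vertical point pairs. Returns the slope a
    and the intercept (beta in Eq. (8)).
    """
    xs = np.asarray(xs, dtype=float)
    ys = np.asarray(ys, dtype=float)
    i, j = np.triu_indices(len(xs), k=1)          # all pairs with i < j
    slopes = (ys[j] - ys[i]) / (xs[j] - xs[i] + eps)
    a = np.median(slopes)
    b = np.median(ys - a * xs)                    # robust intercept
    return a, b
```

Applied to the white pixels of one expected row band, e.g. `a, b = theil_sen(xs, ys)` with `ys, xs = np.nonzero(band)`, the fitted line can then be compared against the expected one to decide whether a correction is needed.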