Image Boundary, Corner, and Edge Detection: Past, Present, and Future

This paper presents a thorough review of image processing tasks such as boundary, corner, and edge detection. A clear introduction to these topics is provided for new researchers, followed by a description of the technical details of the methods implemented for these tasks. In addition, the datasets used in the experiments are listed. Further, all the techniques are classified and presented in a tabular format. Finally, the open challenges in this field of research are discussed, and the paper concludes with the future scope and directions. The authors expect that this review will help new researchers find an approach for conducting further studies on this topic and identify some of the unsolved problems, which they may then attempt to solve.


Introduction
Image boundary detection techniques are actively used in computer vision applications including image segmentation, classification, and object recognition. In comparison to an image edge, which is typically defined as an abrupt variation in low-level image characteristics such as brightness or color, the boundary of an image is a contour, which signifies a change in pixel ownership from the surface of one object to another. Boundary detection, as shown in Fig. 1, is actually based on edge detection with the application of some high-level techniques, and it provides a significant amount of useful information. However, it is slightly more computationally expensive than edge detection techniques. For example, as a representative gradient-based algorithm for high-quality boundary detection, a Pb (probability of boundary) detector requires more than 1 min to process a 321 × 481 image when executed on a 2.10-GHz Pentium(R) Dual-Core CPU T4300. Because of these disadvantages, in most cases a relatively simpler technique, i.e., edge detection, is still applied; it is less accurate than boundary detection, which yields high quality but involves complex computations.
For systems that include vision applications, such as embedded systems, actual performance is highly important because these systems frequently need to interact with the real world. Therefore, the computations in boundary detection techniques need to be accelerated, which is extremely important for deploying high-quality methods in real-world applications. It is now clear that boundary detection is a long-standing problem and a major task in computer vision and pattern recognition [1]. It is generally employed as a pre-processing step for tasks that require extremely high-level vision [2]-[4], and the images after applying boundary detection algorithms are in compressed states.

International Journal of Computer Electrical Engineering

According to [5], boundary detection plays a key role in linking low-level features to high-level semantics for improving the understanding of image details. It is also closely related to texture segmentation. The question of whether the intersection of two lines in an image of a three-dimensional (3D) scene corresponds to a real object point still merits further investigation. In pattern recognition and reconstruction of 3D images, accurately determining a corner point of an object is a challenging task. During the past decades, numerous techniques have been developed to accomplish this task of corner detection for applications related to computer vision such as motion tracking [6], stereo matching [7], image registration [8], object recognition [9], shape detection, and camera calibration [10], [11]. In applications such as feature-based image analysis, the detection of corners plays a chief role, because corners are considered the most noticeable points in the intensity landscape when compared to other image features. A corner, as shown in Fig. 2, is a distinct feature point because it is the location where strong intensity variation occurs in more than one direction, and it is also the point where the frequency of the gradient change becomes maximum [12]. The operator for detecting a corner point should be well localized at its correct position, repeatable, and efficient, and it should avoid spurious detection [13]. The corners of a polygonal object plane, e.g., a roof or wall, can be determined in an image by detecting and intersecting the straight edge lines bounding the plane. In applications such as object recognition, classification, and segmentation, it is an extremely important procedure to detect the object edges.
Therefore, the algorithms related to the detection of edges play a key role in the performance of such applications. An edge can generally be defined as an evident change of intensity in an image. Hence, edges assist in recognizing the position of an object as well as the boundary of a specific entity in an image. In pattern recognition, the detection of edges assists the extraction of features. Therefore, edge detection, as shown in Fig. 3, plays an important role in computer vision. Until now, numerous researchers have used software to execute simple edge detection algorithms for detecting variations in images [14]. However, this is not an accurate and effective strategy for real-time applications. Instead, developing a new edge detection algorithm for hardware platforms is comparatively more efficient for real-time applications. With the progress of very-large-scale integration (VLSI) technology, the implementation of edge detection in hardware offers tremendous potential for executing parallel subroutines in a program. Therefore, with VLSI technology, a hardware implementation offers a substantially faster alternative to software. The authors of [15] presented the hardware implementation of edge detection for specific integrated circuit (IC) applications via a Sobel operator with the assistance of VLSI technology.

Boundary Detection
Basically, boundary detection algorithms can be classified into three categories: gradient-based, machine-learning-based, and saliency-based methods. In the gradient-based category, the most classical and well-recognized edge detector is the Canny edge operator [16]. Regardless of its initial prominence and success, the results obtained from this technique depend heavily on the scale parameter, which is not easily customizable, and its parameters are not feasibly tuned. Bergholm [17] investigated the problem of selecting the scale using an edge-focusing approach, leading to the integration of multiple-scale information. Both the Canny edge detection and edge-focusing algorithms were designed to identify local edges; however, this led to a problem because the methods could detect not only true image boundaries but also false-positive texture edges. Therefore, to reduce texture edge detection, Grigorescu et al. [18] as well as Papari and Petkov [19] designed the surround suppression model, which was based on gradient image features. Specifically, the design of the suppression model in [18] was inspired by the "non-classical receptive field" concept from biology. Essentially, this approach can be seen as a filter operating in the gradient space. As pointed out by the authors, the method had two problems: unwanted self-inhibition and undetermined inhibition levels [20]. Even when using a more sophisticated steerable filter [21], the self-inhibition problem still could not be completely resolved. Because of the current acceptance of machine learning techniques in computer vision applications and the availability of human-labelled image databases [22], the boundary detection problem is progressively being framed as a machine learning problem. Martin et al. [23] offered a learning-based boundary detector called Pb based on local brightness, color, and texture cues.
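As a rough illustration of the gradient-based category, the core of a Canny-style detector (gradient estimation followed by non-maximum suppression, omitting hysteresis thresholding) can be sketched in NumPy. The function names and parameters here are our own illustrative choices, not the original implementation:

```python
import numpy as np

def sobel_gradients(img):
    """Return gradient magnitude and angle computed with 3x3 Sobel kernels."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    pad = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 3, j:j + 3]
            gx[i, j] = (win * kx).sum()
            gy[i, j] = (win * ky).sum()
    return np.hypot(gx, gy), np.arctan2(gy, gx)

def nonmax_suppress(mag, ang):
    """Keep a pixel only if it is a local maximum along the gradient direction."""
    out = np.zeros_like(mag)
    h, w = mag.shape
    # quantize the angle to one of four directions: 0, 45, 90, 135 degrees
    d = (np.round(ang / (np.pi / 4)) % 4).astype(int)
    offs = {0: (0, 1), 1: (1, 1), 2: (1, 0), 3: (1, -1)}
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            di, dj = offs[int(d[i, j])]
            if mag[i, j] >= mag[i + di, j + dj] and mag[i, j] >= mag[i - di, j - dj]:
                out[i, j] = mag[i, j]
    return out

# a vertical step edge: left half dark, right half bright
img = np.zeros((9, 9))
img[:, 5:] = 1.0
mag, ang = sobel_gradients(img)
thin = nonmax_suppress(mag, ang)
```

A full Canny pipeline would add Gaussian smoothing before the gradient step and hysteresis thresholding after suppression; the sketch keeps only the two steps that make the edge map thin.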
The main technique was to calculate the χ² (chi-squared) distance between two half-disc intensity histograms, in addition to the candidate edge angles. Dollár et al. offered boosted edge learning (BEL) [24], which consumed numerous image features to form a probabilistic boosting-tree classifier. The benefit of BEL [24] is that it can acquire object-specific edges; however, the edge classes first need to be well defined in the training set. Recently, Arbeláez et al. [25] proposed a high-performance boundary detector called gPb, which is an upgraded version of Pb [23]. Specifically, it obtains initial boundary detection results, named mPb, by combining multiscale color features and texture features. This is followed by the addition of global information for the construction of the gPb detector. The performance of edge detection is optimized by tuning the parameters of gPb. Finally, the gPb detector contributes to further improving the performance of boundary detection, but with a longer computational time and larger memory intake compared to those of the Pb detector. To further increase the speed of the gPb detector, Catanzaro et al. [26] proposed a GPU-based implementation, which resulted in fast computation. However, it is still not clear whether the usage of a GPU for the gPb detector will really run smoothly for large images. Concurrently, increasingly many machine-learning-based boundary detection algorithms are being proposed. For instance, Kokkinos [27] used Pb [23] to form candidate edges using Canny edge operators, which can be further refined by a machine-learning methodology for boundary detection and grouping. In comparison with supervised-learning-based boundary detection algorithms, a new model called the OCM was proposed with the incorporation of a texture suppression term, which allowed for improved performance compared to those of the Pb [23] and BEL [24] algorithms for scenes with texture dominance.
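The χ² half-disc comparison at the heart of the Pb cue can be sketched as follows. This is a simplified, single-orientation version with hypothetical helper names, not the authors' code:

```python
import numpy as np

def chi2_distance(h1, h2, eps=1e-10):
    """Chi-squared distance between two histograms after normalization:
    0.5 * sum((h1 - h2)^2 / (h1 + h2))."""
    h1 = h1 / max(h1.sum(), eps)
    h2 = h2 / max(h2.sum(), eps)
    denom = h1 + h2
    mask = denom > eps
    return 0.5 * np.sum((h1[mask] - h2[mask]) ** 2 / denom[mask])

def halfdisc_gradient(patch, nbins=8):
    """Oriented gradient cue at the patch center for a vertical boundary:
    compare brightness histograms of the left and right half-discs
    (here approximated by the left and right halves of a square patch)."""
    h, w = patch.shape
    left = patch[:, : w // 2].ravel()
    right = patch[:, w // 2:].ravel()
    hl, _ = np.histogram(left, bins=nbins, range=(0.0, 1.0))
    hr, _ = np.histogram(right, bins=nbins, range=(0.0, 1.0))
    return chi2_distance(hl.astype(float), hr.astype(float))

flat = np.full((8, 8), 0.5)       # uniform patch: halves are identical
step = np.zeros((8, 8))
step[:, 4:] = 1.0                 # step edge: halves are maximally different
g_flat = halfdisc_gradient(flat)
g_step = halfdisc_gradient(step)
```

The real Pb detector repeats this comparison over multiple orientations and multiple cue channels (brightness, color, texture) and learns how to combine the responses.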
The third category of boundary detection methods is saliency-based. To build a saliency map, Itti et al. [28] and Sun and Fisher [29] used both color and orientation contrast maps as the early features. Feng et al. [30] then used the saliency map to identify salient edges and areas for content-based image retrieval. Shimodaira [31] defined the edge saliency quantity and utilized the boundary probability for edge detection. The method in [25] requires several parameters to be adjusted, which limits its potential applications. Kennedy [32] offered a contour-cut algorithm for salient contour detection. Even though saliency and boundary maps are related, the two concepts still have a major dissimilarity. An additional domain related to boundary detection is perceptual organization, which focuses particularly on contour completion. Kovacs and Julesz [27] claimed that closed contours were more beneficial than partially completed contours, particularly for figure-ground segmentation. Linking this to the detection task, in [28] a technique to calculate closed contours was proposed. Another similar study was performed by Ren [29], who proposed a probabilistic model for contour completion. Similarly, Ming et al. [30] implemented a higher-order conditional random field model to calculate closed contours. These works are relevant to low-level image boundary detection tasks, but with differing emphasis.

Corner Detection
Existing corner detection operators can be classified into two broad groups: template-based and geometry-based corner detectors [33]. The difference between the two approaches is that template-based corner detectors compare a set of corner templates against the input image, whereas geometry-based detectors attempt to measure the differential geometry features close to the corner point. In template-based mechanisms, numerous orientations of corner templates are applied to cover all types of corners, leading to the consumption of tremendous computational resources and time. In comparison, geometry-based detectors have been further expanded into numerous designs such as edge-related [34], topological feature-based [35], and auto-correlation-based [36] techniques. The intersection of two or more lines is considered a corner point in edge-related models [37], whereas a geometric feature point on an image plane is considered a corner point in topology-based models. Moravec [38] was the first to propose a corner detection operator, initiating a new development in computer vision. The method is based on the sum of squared differences (SSDs) [39], which calculates the change in intensity when shifting a small 3 × 3 window in orthogonal directions and identifies all the important points of interest in an image.
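The Moravec SSD test described above can be sketched as follows; the window size and the shift directions are our own minimal choices for illustration:

```python
import numpy as np

def moravec_response(img, win=1):
    """Moravec cornerness: for each pixel, the minimum SSD between the
    window around it and the same window shifted in 4 directions.
    Corners have a large minimum; edges have a zero SSD along the edge."""
    img = img.astype(float)
    h, w = img.shape
    shifts = [(1, 0), (0, 1), (1, 1), (1, -1)]
    resp = np.zeros((h, w))
    r = win
    for i in range(r + 1, h - r - 1):
        for j in range(r + 1, w - r - 1):
            base = img[i - r:i + r + 1, j - r:j + r + 1]
            ssds = []
            for di, dj in shifts:
                shifted = img[i + di - r:i + di + r + 1,
                              j + dj - r:j + dj + r + 1]
                ssds.append(((base - shifted) ** 2).sum())
            resp[i, j] = min(ssds)
    return resp

# a bright square whose top-left corner sits at (5, 5)
img = np.zeros((10, 10))
img[5:, 5:] = 1.0
resp_map = moravec_response(img)
```

The Harris detector mentioned next replaces the discrete set of shifts with a quadratic approximation (the second-moment matrix), which is what makes its response rotation-invariant.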
Recently, a few more corner detection operators have been developed, and possibly the most extensively used is the Harris corner detector [36], which is an extended version of the Moravec operator. Other researchers, namely Smith and Brady [40], developed a new corner detector named SUSAN (smallest univalue segment assimilating nucleus) by adopting morphological operations in the corner detection method instead of taking derivatives and image gradients. The proposed model is capable of managing all types of corners, as shown in Fig. 4, as well as regions and junctions, and it can also be used for detecting edges. However, the SUSAN operator has the drawbacks of poor localization capability and low stability; further, the detection of false features is particularly pronounced when the boundaries are blurred [41]. Rosten et al. [42] presented the FAST (features from accelerated segment test) operator, which differentiates between lighter and darker neighboring pixels by partitioning the center pixel window into two groups. The authors of [43], [44] introduced the SURF (speeded-up robust features) detector based on two-dimensional (2D) Haar wavelets [45]; however, its performance did not remain good under changes in scale. The SUSAN principle is based on using a nearly circular mask with an area of n_max pixels and with the nucleus at a corner feature. The objective of the mask is to provide an isotropic response by moving over each pixel in an image, which serves as the center, or nucleus, of the mask.
Two intensities are considered similar when their difference is less than a definite threshold t. This threshold can be used to control the number of detected features without significantly affecting the quality of the results. The comparison of intensities uses the response function

c(r, r0) = 1 if |I(r) − I(r0)| ≤ t, and 0 otherwise, (1)

where r0 is the location of the nucleus in the image, r is the location of any other point within the mask, I(r) is the intensity of the pixel at r, and t is the difference threshold. With the response function defined in eq. (1), the area of the SUSAN with nucleus r0 is

n(r0) = Σ_r c(r, r0). (2)

For a corner to exist at r0, n(r0) must be less than half of its maximum possible value, n_max. The number of pixels n(r0) is compared with a geometric threshold g, generally set as n_max/2, which yields the decision function

R(r0) = g − n(r0) if n(r0) < g, and 0 otherwise. (3)

The above function is non-zero at all candidate corner points. Reducing the geometric threshold g restricts the candidate points to those having sharp angles. The center of gravity of the SUSAN is calculated at each candidate point according to

r̄(r0) = Σ_r r · c(r, r0) / Σ_r c(r, r0). (4)

For a candidate point to be a genuine corner, r̄ must be distant from r0; thus, candidate points are rejected if |r̄ − r0| < 2. As a final test, all the points lying on the line from the nucleus r0, through the center of gravity r̄, to the edge of the mask are tested. If all these pixels are in the SUSAN, the nucleus is accepted as a corner point; otherwise, it is rejected.
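A minimal sketch of the SUSAN corner test of eqs. (1)–(3) is shown below. It omits the center-of-gravity checks of eq. (4), and the mask radius and threshold are illustrative assumptions:

```python
import numpy as np

def susan_corners(img, t=0.1, radius=3):
    """Minimal SUSAN-style corner test: for each pixel (the nucleus),
    count mask pixels with similar intensity (the USAN area, eq. (2))
    and flag a corner when the area falls below g = n_max / 2 (eq. (3))."""
    img = img.astype(float)
    h, w = img.shape
    # circular mask of offsets around the nucleus
    offs = [(di, dj) for di in range(-radius, radius + 1)
                     for dj in range(-radius, radius + 1)
                     if di * di + dj * dj <= radius * radius]
    n_max = len(offs)
    g = n_max / 2.0
    corners = np.zeros((h, w), dtype=bool)
    for i in range(radius, h - radius):
        for j in range(radius, w - radius):
            # eq. (1): count pixels whose intensity is within t of the nucleus
            n = sum(1 for di, dj in offs
                    if abs(img[i + di, j + dj] - img[i, j]) <= t)
            corners[i, j] = n < g  # eq. (3): corner when the USAN is small
    return corners

# a bright square: the USAN is about a quarter disc at its corner,
# half a disc on a straight edge, and the full disc in flat regions
img = np.zeros((12, 12))
img[6:, 6:] = 1.0
c = susan_corners(img)
```

The full operator also applies eq. (4) and the contiguity test along the nucleus-to-centroid line to suppress false positives; adding those checks would mostly affect noisy images.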
The author of [46] utilized the SUSAN corner detector algorithm to distinguish real and virtual edge intersections in pairs of uncalibrated images, as shown in Fig. 5.

Edge Detection
In image processing applications, the basic and fundamental task is edge detection, particularly for motion detection [47], [48], image segmentation, pattern recognition [49], [50], vision measurement [51], remote sensing, and medical applications [52]. Numerous pixel-level edge detection algorithms, such as the Sobel, Roberts, Prewitt, and Canny operators [16], have been developed and are widely studied in the field of image processing.
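As an example of such a classical pixel-level operator, the Roberts cross gradient can be implemented in a few lines; this is a textbook sketch, not tied to any cited implementation:

```python
import numpy as np

def roberts_cross(img):
    """Roberts cross gradient magnitude using the 2x2 kernels
    [[1, 0], [0, -1]] and [[0, 1], [-1, 0]], expressed as array slices."""
    img = img.astype(float)
    gx = img[:-1, :-1] - img[1:, 1:]   # response of the first kernel
    gy = img[:-1, 1:] - img[1:, :-1]   # response of the second kernel
    return np.hypot(gx, gy)

# a vertical step edge between columns 1 and 2
img = np.zeros((5, 5))
img[:, 2:] = 1.0
m = roberts_cross(img)
```

Because the kernels are diagonal 2 × 2 differences, the operator is cheap but more noise-sensitive than the 3 × 3 Sobel and Prewitt masks.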
In this paper, the authors have focused on edge detection methods, with the interesting task of identifying the correct edges of an object. Currently, numerous edge detection algorithms are being developed and applied in various fields. Lin et al. [53] explained edge detection in medical images with quasi-high-pass filters. In addition, in [53], a WL operator was proposed for detecting edges in different medical images. Research on blurred satellite images has also been performed using a technique called the iterative fuzzy edge detection method [54]. Rahimzadegan et al. compared their results with those obtained from other edge detection methods; from the outcome it could be inferred that higher values of the peak signal-to-noise ratio (PSNR) had the possibility of yielding better results with blurred satellite images. Further research relating to satellite images can be found in [55], where an efficient algorithm is proposed by enhancing the traditional Canny edge detection algorithm. Kalra et al. [56] proposed a hybrid edge detection algorithm. They combined the traditional Sobel operator with a Canny edge operator to identify the edges and achieved better results compared to those provided by the traditional methods. In [57], a technique was proposed to increase the robustness of the algorithm by including features such as the length and directional changes of the edges in the traditional Canny edge operator. In [58], the authors proposed a robust and novel model using a Canny edge operator based on an anisotropic method and a genetic algorithm. This algorithm searches for high- and low-threshold values, similar to the traditional Canny operator. In another study [59], edge detection was achieved by using a polynomial-evaluation-based Gabor filter. In this study, intensity information was used to locate the edges in an image, for which polynomial evaluation was performed to decompose the high and low index values. An improved edge detection algorithm was proposed in [60] based on k-means clustering, where the image data were related to brain tumors to detect the associated disease. Another edge detection approach is the anisotropic diffusion method. Maiseli et al. [61] proposed an alternative edge detection method based on anisotropic diffusion. This method is iterative and has robust features; however, it is computationally complex. The core of this method is the modification of the Moore-neighbor algorithm to detect the boundary of an object in an image. The researchers in [62] adopted the above work and enhanced it further by incorporating range filters into the modified Moore-neighbor algorithm to detect edges with enhanced accuracy. These range filters apply the morphological operations of dilation and erosion to determine the maximum and minimum values within a defined neighborhood. This work resulted in the detection of more accurate edges and higher robustness to noise.
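The range-filter idea described above (neighborhood maximum minus minimum, i.e., gray-scale dilation minus erosion) can be sketched as follows; the window size and threshold are illustrative assumptions, not the values used in [62]:

```python
import numpy as np

def range_filter_edges(img, size=3, thresh=0.5):
    """Range-filter edge map: for each pixel, take the difference between
    the neighborhood maximum (gray-scale dilation) and minimum (erosion);
    a large local range indicates an edge."""
    img = img.astype(float)
    r = size // 2
    pad = np.pad(img, r, mode="edge")
    h, w = img.shape
    rng = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            win = pad[i:i + size, j:j + size]
            rng[i, j] = win.max() - win.min()  # dilation minus erosion
    return rng > thresh

# a vertical step edge: only windows straddling the step have a large range
img = np.zeros((6, 6))
img[:, 3:] = 1.0
e = range_filter_edges(img)
```

In flat regions the range is zero, so the operator is naturally quiet there; its noise robustness in practice comes from pairing it with the boundary-tracing step described in the text.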

Applications
Image processing is performed in numerous applications involving boundary, corner, or edge detection. In this section, a brief description of each application and the corresponding algorithms used is provided.

Medical Image Boundary Detection
For a long time, researchers in medical imaging and its analysis have been attempting to extract the contours of different body organs and tissue types from medical images. It is believed that a segmentation algorithm for medical images that works on large datasets has not yet been developed, and establishing a new algorithm that can deal with large medical image datasets for segmentation is an extremely important step. A few researchers have performed studies on numerous clinical datasets of medical images for segmentation and also evaluated them [63]. Numerous researchers have conducted their investigations by comparing their algorithms on phantoms [64], because this strategy acts as an idealistic representation of real data. Some scholars have tested real data and then evaluated their algorithms with different criteria [65], which makes it extremely difficult to relate the performance of their algorithms to others. Other researchers have attempted to address the above issue of medical image segmentation; however, until now, none of them has solved all the problems. Some researchers have derived evaluation metrics for boundary detection, mostly by utilizing parameters extracted from the boundaries, such as the area and perimeter, to compare boundaries [66]. For the purpose of evaluation, a few scholars have also used metrics based on the distance between boundaries.

Diagnosis System Based on Mammography Analysis [67]
Computer-aided diagnosis (CAD) systems based on mammography are used to assist human experts by extracting the features of breasts; these extractions are performed automatically for disease detection. The main purpose of using mammography in CAD is image segmentation, which helps in estimating the skin-air and pectoral-breast boundaries, which collectively describe the breast contours. Current studies of image segmentation focusing on breast boundary and pectoral muscle segmentation may be classified into five categories according to their segmentation approaches: thresholding, region-growing, morphology-based, active contour (AC), and texture-based. Global thresholding methods [68] are generally utilized to obtain the skin-air boundary because there are noteworthy differences between the tissue intensities in the background and foreground of an image. However, the thresholding method has limitations in extracting the pectoral-breast boundary because of the low contrast between the pectoral muscle and the breast. A segmentation method based on region growing was proposed in [69] with the initialization of 40 points near the boundary; these points were selected based on thresholding. However, the obtained result showed lower accuracy for pectoral-breast boundary detection compared to that for skin-air boundary detection. Some morphology-based methods [70] utilize the features of natural shapes to develop complex models that can be fitted to breast objects. Some edge-based AC methods [71], [72] deal with mammography to identify only the skin-air line, with the pectoral muscle remaining in the breast region. A few techniques are applied based on the texture of an image, where the texture is extracted using texture filters such as wavelets [73] or the Gabor filter [74], and the boundaries in the image are determined based on significant changes in the texture. Rampun et al. [75] worked on this topic and designed a hybrid method by combining model-based and region-based approaches. This strategy yielded extremely good outcomes, but the accuracy still degrades for pectoral muscle boundaries with complex contours.
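As an illustration of the thresholding category, a global Otsu-style threshold for separating background from tissue can be sketched as follows. This is a generic textbook sketch, not the specific method of [68]:

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Global threshold maximizing the between-class variance (Otsu),
    a common choice for skin-air separation where the background and
    breast-tissue intensities differ markedly."""
    hist, edges = np.histogram(np.asarray(img).ravel(), bins=nbins)
    hist = hist.astype(float)
    centers = (edges[:-1] + edges[1:]) / 2.0
    total = hist.sum()
    sum_all = (hist * centers).sum()
    best_t, best_var = edges[0], -1.0
    w0 = 0.0   # cumulative weight of the background class
    sum0 = 0.0  # cumulative intensity sum of the background class
    for k in range(nbins - 1):
        w0 += hist[k]
        sum0 += hist[k] * centers[k]
        if w0 == 0 or w0 == total:
            continue
        m0 = sum0 / w0                        # background class mean
        m1 = (sum_all - sum0) / (total - w0)  # foreground class mean
        var = w0 * (total - w0) * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, edges[k + 1]
    return best_t

# a bimodal image: dark background pixels and bright tissue pixels
img = np.concatenate([np.full(100, 0.1), np.full(100, 0.9)])
t = otsu_threshold(img)
```

As the text notes, such a global threshold works for the skin-air boundary but fails for the pectoral-breast boundary, where the two classes overlap in intensity.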

Road Boundary Detection [76]
To enhance the safety of roads and comfort during driving, numerous intelligent vehicle technologies are advancing rapidly. These focus on assisting the driver to avoid collisions and on sending notifications prior to any type of collision. Technologies related to intelligent vehicles are developed to avoid obstacles around the road and vehicle for the safety of the driver and to generate a safe path for the driver. A large amount of work has already been done in this direction to detect roads using various sensing technologies such as vision, radar, and LIDAR. Vision technologies have been introduced to track lane marks, particularly on structured roads, and they are extensively evaluated through the following developed systems: rapidly adapting lateral position handler (RALPH) (Pomerleau, 1995), generic obstacle and lane detection (GOLD) (Bertozzi and Broggi, 1998), random sample consensus (RANSAC), Kalman-filter-based lane tracking, and forward vehicle detection (Choi et al., 2012). Additional research has also been conducted on unstructured roads using vision-based techniques. The following methods have been developed for unstructured roads: supervised classification applied to road following (SCARF) (Crisman and Thorpe, 1993), unsupervised clustering applied to road following (UNSCARF) (Crisman and Thorpe, 1991), and vision-based road transportation for unmanned operation on urban-like scenarios (VIRTUOUS) (Sotelo et al., 2004).
The main advantages of vision-based techniques are their rich information content and low operating power, which lead to a low cost and the requirement of only a passive, non-invasive sensor. However, these techniques are affected by illumination; under complex illumination, their effectiveness decreases. Other influencing factors are shadows and bad weather. Related studies address redundant data minimization through optimum camera activation [77] and road networks [78].

Shadow Detection
In image processing, the detection of shadows in an image is extremely important, particularly in satellite images, which often contain shadows. However, sometimes these shadows are also beneficial, and their existence is useful. The shadows cast by buildings in satellite images help in computing building structures based on their height and shape; this is beneficial for urban planning and constructing 3D scenes. Contrary to this advantage, shadows obscure vital information about the object, leading to false color tones and also alteration of the edges of the underlying objects. These issues serve as barriers to image processing and analysis; in view of all the above points, it is evident that the removal of shadows from an image is an important research topic. The following methods have been developed to remove shadows from an image. The edge correlation method is applied as a classification technique to classify texture-based images into shadow and non-shadow regions [79]. Similarly, a few more techniques have been developed to detect shadows in an image, e.g., Gabor filtering [80] and normalized cross-correlation [81]. Salvador classified shadows as self-shadows and cast shadows based on the intensity and an edge map [82]. Region growing is also used for the segmentation of shadow regions [83].

Cardiac Quiescence Detection
Boundary detection techniques and non-linear filtering are applied to detect cardiac quiescence in heartbeats using echocardiography images. The purpose and motivation of studying quiescent phases is to offer enhanced cardiac gating to obtain motion-artifact-free images of the heart in cardiac computed tomography (CT). Thresholding is applied to the echocardiography data, yielding a binary image; then, the boundary detection technique is applied to this binary image using the Moore-neighbor algorithm, with Jacob's stopping criterion as the termination condition [84].
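A simplified Moore-neighbor boundary tracing routine on a binary mask can be sketched as follows. It uses a basic revisit-the-start stopping rule rather than the full Jacob's criterion, and all names are our own:

```python
import numpy as np

def moore_trace(mask):
    """Trace the outer boundary of the first foreground blob in a binary
    mask using Moore-neighbor tracing. Simplified stopping rule: stop
    when the start pixel is revisited (Jacob's criterion additionally
    checks the entry direction)."""
    h, w = mask.shape
    # 8-neighborhood in clockwise order, starting from the west neighbor
    nbrs = [(0, -1), (-1, -1), (-1, 0), (-1, 1),
            (0, 1), (1, 1), (1, 0), (1, -1)]
    # start pixel: first foreground pixel in raster-scan order
    start = None
    for i in range(h):
        for j in range(w):
            if mask[i, j]:
                start = (i, j)
                break
        if start is not None:
            break
    if start is None:
        return []
    boundary = [start]
    cur = start
    prev_dir = 0  # pretend we entered the start pixel from its west neighbor
    while True:
        found = False
        # search neighbors clockwise, beginning just after the backtrack direction
        for k in range(8):
            d = (prev_dir + 1 + k) % 8
            di, dj = nbrs[d]
            ni, nj = cur[0] + di, cur[1] + dj
            if 0 <= ni < h and 0 <= nj < w and mask[ni, nj]:
                cur = (ni, nj)
                prev_dir = (d + 4) % 8  # direction pointing back where we came from
                found = True
                break
        if not found or cur == start:
            break
        boundary.append(cur)
    return boundary

# a 2x2 foreground square: all four pixels are boundary pixels
mask = np.zeros((6, 6), dtype=bool)
mask[2:4, 2:4] = True
b = moore_trace(mask)
```

On a thresholded echocardiography frame, the traced pixel chain would give the chamber contour whose frame-to-frame motion is then analyzed for quiescence.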

Tumor Boundary Detection
The main purpose of this study is to detect the boundary of a brain tumor, for which the boundary detection technique is applied. The methodology is to apply the boundary detection technique to each image slice and then separate the tumor from the normal brain tissues. During this process, the tumor found in each slice is further processed to measure its volume for 3D rendering. The outcome of this method was assessed on numerous MRI datasets. The experimental details for this application are as follows: during the initial experiments, an axial MRI dataset consisting of 74 slices was used; the pixel size was 0.898 mm × 0.898 mm, and the slice thickness was 0.8 mm [85].

Datasets
Generally, two types of mammography are used in clinics: analog screen-film mammography (SFM) and full-field digital mammography (FFDM) [86]. Many datasets are available in the literature for boundary detection; some of them are listed in Table 1.

Algorithms Applied
Researchers have applied various techniques for boundary, edge, and corner detection, which are listed below. Table 2 lists the techniques related to edge detection, whereas Table 3 lists all the algorithms related to corner detection.

Open Challenges

Edge Detection
In this section, some of the open challenges related to edge detection are discussed; this will help new researchers focus on these challenges and advance research in the corresponding directions.
In ghost imaging (GI), noise created during image retrieval obstructs target recognition and feature extraction [92]. Although numerous techniques have been implemented to suppress this noise, such as differential GI [104], compressive GI [105], normalized GI [106], iterative GI [107], and sinusoidal GI [108], further research could still be done in this direction to enhance the results.
Edge detection [93] is considered an important and initial stage of human visual processing. However, extremely few studies have been undertaken on human edge detection filters. In the visual cortex [109], neurons that are sensitive to edges are highly common; however, little effort has been devoted to edge detection in humans.
In [95], the development of a recent technique is reported for the robust detection of carotid artery layer contours. This technique is developed for B-mode ultrasound images based on the estimation of local orientation and anisotropic filters. The main challenge in this approach is preventing the production of false edges close to isolated and high-curvature contours.
It is well known that edge detection is of tremendous importance in computer vision [96]. Numerous researchers have used software for the development of basic edge detection algorithms [110]. However, it has been found that the software methodology is not an efficient approach for real-time applications. The implementation of edge detection algorithms on hardware platforms has been revealed to be relatively more efficient for real-time applications, because of the feasibility of hardware implementation with VLSI technology.
Another challenge is obtaining good-quality edges in ferrograph images [98] for numerous applications. Because no single algorithm can yet ensure this type of edge, another task for researchers is to develop a reliable method to measure the performance of various edge detection algorithms, particularly for ferrographic applications.

Boundary Detection
In the study of breast boundary segmentation [67], quantitative analysis and calcification localization are extremely important phases in a CAD system; such an analysis is called mammographic analysis. However, the challenge here is the non-uniform distribution of the spatial signals of the glandular tissues. Based on these issues, it is still a challenging task to automatically analyze mammogram images with high precision.
An extensive study was conducted on optimum camera activation and redundant data minimization [77], but activating a minimal number of cameras while maintaining suitable coverage of events remains a challenging task. None of the previous studies considered that unnecessary cameras become activated under the event boundary detection approach.
In the image boundary detection application [88], grain boundary extraction is an active research area. The challenge faced by researchers is that the microstructure of the grain is highly complex, and various image defects and missing boundaries arise throughout sample preparation. These difficulties make it hard to extract grain boundaries correctly.
Terrestrial lidar data [78] are inherently bulky, and processing an entire raw point cloud is a difficult job. This bulkiness is reduced by converting the lidar data into intensity images. The current challenge in this research is addressing the limitations introduced by the dependency on open-source software. The approach should also determine the ideal parameter values automatically to improve the classification outcomes for the lidar data.

Boundary Detection
The boundaries locally adaptive and isotropic detection (BLAID) operator [86] suffers from a few shortcomings: its corner detection is not reliable at different rotation angles, and it is sensitive to the region. Additionally, its outcomes are good only for images with an SNR above 30. Addressing these issues needs to be implemented and examined as a future task.
In the application of breast tumor detection using image boundary detection, the method proposed in [67] was capable of dealing with images with unusual pectoral muscle shapes. Another method [75], a well-designed model of the pectoral muscle region, would be of tremendous assistance in detecting boundaries. These could serve as a strong basis for future research on pectoral muscle segmentation.
This technique [102] estimates boundaries from the collected sensor data, even when there are errors in the sensor readings. As a future task, the researchers plan to assess the performance of the proposed method in the presence of such errors.
The method proposed in [111] used an LSTM, which was shown to be less effective than CNNs. The authors suggest that training the method for more epochs could lead to comparable performance. As a future task, the training parameters could be optimized to increase the number of classes and thereby detect various types of boundaries. Additionally, a hybrid system combining a bi-LSTM and a CNN could be proposed to obtain the advantages of each technique.

Edge Detection
However, the method proposed in [95] also produced false edges near isolated and high-curvature contours. Future work will comprise optimal threshold selection methods for individual images and the integration of the proposed methods into carotid intima-media segmentation algorithms.
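The cited work does not specify which per-image threshold selection method will be adopted; one classical candidate for selecting an optimal threshold from a single image is Otsu's method, which picks the grey level that maximizes the between-class variance of the intensity histogram. The sketch below is an illustrative NumPy implementation, not the authors' algorithm.

```python
import numpy as np

def otsu_threshold(img, bins=256):
    """Return the grey level maximizing between-class variance (Otsu's method)."""
    hist, edges = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()                     # normalized histogram
    centers = (edges[:-1] + edges[1:]) / 2.0  # bin centers
    w0 = np.cumsum(p)                         # class-0 (background) mass
    w1 = 1.0 - w0                             # class-1 (foreground) mass
    mu = np.cumsum(p * centers)               # cumulative mean intensity
    mu_total = mu[-1]
    valid = (w0 > 0) & (w1 > 0)
    sigma_b = np.zeros_like(w0)
    # Between-class variance for every candidate threshold.
    sigma_b[valid] = (mu_total * w0[valid] - mu[valid]) ** 2 / (w0[valid] * w1[valid])
    return centers[np.argmax(sigma_b)]

# Bimodal toy data: background near 0.2, foreground near 0.8.
img = np.concatenate([np.full(500, 0.2), np.full(500, 0.8)])
t = otsu_threshold(img)
```

For a clearly bimodal intensity distribution such as the toy data above, the selected threshold falls between the two modes, so `img > t` separates the two populations exactly.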
The suggested assessment method [98] should also be validated in numerous real-world environments to make it more appropriate for precise engineering applications.
Hence, future enhancements of the present research should also examine the applicability [112] of further innovative artificial intelligence approaches for pavement crack detection. Moreover, they should explore combinations of the present CNN-based model with newly refined image processing algorithms to improve the classification accuracy.

Conclusion
In this review paper, the authors have endeavored to collect maximal information on image processing topics and have classified the paper into three subtopics: boundary detection, corner detection, and edge detection. A clear introduction is provided, followed by an elaboration of the literature review highlighting the important experiments and techniques used to solve image processing tasks. The authors also list some of the databases used in the experimentation to help new researchers access those datasets and experimental details. All the techniques are classified for the readers' ease and listed in tabular format with the references in a separate column. Finally, the most important sections of the paper are those on open challenges and future directions. In the open challenges, the authors have attempted to give the reader the full scope to conduct this research and to solve the highlighted challenges. Finally, the future scope section is included with the expectation that it will help readers and researchers follow an identified path and increase their enthusiasm for this research topic.