title: Deep Learning Based Approach for Identifying Conventional Machining Processes from CAD Data
authors: Peddireddy, Dheeraj; Fu, Xingyu; Wang, Haobo; Joung, Byung Gun; Aggarwal, Vaneet; Sutherland, John W.; Byung-Guk Jun, Martin
journal: Procedia Manufacturing
date: 2020-12-31
DOI: 10.1016/j.promfg.2020.05.130

Abstract: Manufacturing has evolved to become more automated in pursuit of higher quality, better productivity, and lower cost. However, industrial logistics between the end customer and the manufacturing supply chain still demands human labor and intelligence. This logistic work requires experienced engineers to identify machining processes from the CAD model provided by customers and then to source qualified manufacturing suppliers according to the identified processes. Developing an efficient automatic Machining Process Identification (MPI) system is therefore a pivotal problem for logistic automation. In this paper, a novel MPI system is presented based on 3D Convolutional Neural Networks (CNNs) and transfer learning. The proposed system accepts triangularly tessellated surface (STL) models as inputs and outputs machining process labels (e.g., milling, turning) as the classification results of the neural network. Computer-synthesized workpiece models are used to train the neural network. In addition to the MPI system, a portable framework was developed for future applications in related fields. The MPI system shows more than 98% accuracy for both synthesized models and real workpiece models, which verifies its robustness and real-time reliability.

The next industrial revolution, Industry 4.0, calls for high-level automation in the design of products, process planning, operation of manufacturing processes, and supply chain interactions. Industry 4.0 envisions the transformation of the manufacturing sector to make it more productive, less costly, and more resource efficient. However, autonomous decisions are still difficult, e.g., identification of needed manufacturing processes from a part design or CAD file. When a part is designed and the corresponding CAD model is generated, a manufacturing engineer has to ensure the manufacturability of the part, identify the processes (often, machining processes), and identify the needed resources within a facility to realize a given component (as shown in Fig. 1).
Traditionally, machining process identification (MPI) based on a product design representation requires substantial knowledge and experience in manufacturing. Since engineering design is often completed by an Original Equipment Manufacturer (OEM) and component production is completed by suppliers, MPI often involves significant, time-consuming interactions between an OEM and its suppliers. The purpose of these interactions is to verify the suitability of the planned processes and the availability of machine tools compatible with those processes. The problem becomes more challenging as the next generation of manufacturing calls for more personalized products, envisioning the co-creation of products by customers who lack the professional knowledge generally needed to undertake engineering design. The lack of an efficient and automated MPI system thus becomes a bottleneck for manufacturers seeking to rapidly design, realize, and deliver products to market.
Past researchers have worked on a relevant concept, Computer-Aided Process Planning (CAPP), which generates process plans and provides machining advice to Computer-Aided Manufacturing (CAM) systems according to digital models generated via CAD systems [15]. Many researchers have developed CAPP systems to identify suitable processes, facility/equipment setups, and operating conditions for a given digital model. Xu et al. [21] developed a process parameter selection model based on an atomic inference engine system and illustrated its reliability for drilling features in a 2D mechanical drawing.
Wang et al. [19] employed a decision-making approach to build a two-step adaptive setup planning method and tested this algorithm on a workpiece model with 26 machining features. Deb et al. [4] applied a back-propagation neural network to select machining operations and used an expert system to provide machining setup plans according to CAD model features. Most CAPP technologies rely on geometrical features to generate operation sequences. This requires Feature Recognition (FR) to serve as the bridge between CAD and CAPP systems and to provide feature information from digital models. Ismail et al. [7] developed an Edge Boundary Classification (EBC) method to detect polyhedral features from a boundary representation (B-rep) model by analyzing the state of the points (e.g., in/on/off) of an object around the edge loop. Sunil et al. [17] pre-processed a B-rep model into face adjacency graphs and detected interacting machining features based on defined standard feature rules. Zhang et al. [22] developed an ontology-based semantic approach to represent machining features and realized automatic feature recognition based on a semantic model parsed from a B-rep model. Zhibo et al. [23] proposed an approach named FeatureNet that used 3D neural networks for recognizing machining features. From this literature, it can be seen that, although some techniques from CAPP may be applied to an MPI system, a CAPP system cannot by itself serve as an MPI system. The purpose of MPI is to identify, from a workpiece model, the various machining processes that must be performed. This functionality helps deliver the model to a qualified industrial supplier efficiently and brings benefits and convenience to OEMs and customers. A CAPP system, in contrast, operates more on the supplier side, helping to automate process planning and provide detailed machining solutions.
Further, the output information of MPI and CAPP differs. The outputs of an MPI system are machining categories (e.g., milling, turning, and drilling), while the outputs of a CAPP system are exact process plans and machining sequences. Moreover, CAPP is a complex system that requires FR as a pre-processing task and provides detailed process plans. Machining process identification, in contrast, needs far less computational resources and time, since it directly classifies workpiece CAD models into machining categories. Another concern is that the CAD models required by a CAPP system are generally B-rep or STEP models that store exact geometrical descriptions. This does not meet the requirements of machining process identification, which should be tolerant of approximate models provided by customers, for instance, polygon mesh models. The difference between MPI and CAPP is illustrated in Fig. 2.

3-dimensional (3D) machine learning is a new, interdisciplinary field that is accelerating the understanding of many issues in manufacturing [6, 2]. Previous literature has reported successful advancement of machine learning approaches, in particular extending 2D-based frameworks to 3D geometric problems. Xiang et al. [20] constructed a large-scale dataset, ObjectNet3D, which consists of 100 categories, 90,127 images, and 44,147 3D shapes in CAD models; they proposed aligning 2D objects with 3D shapes for pose estimation and model retrieval in 3D. In addition, Su et al. [16] synthesized training images by overlaying 3D models on top of real images for viewpoint estimation, and showed that their deep CNN-based viewpoint estimator trained on rendered data significantly outperformed one trained on the available real image datasets. As CAD models cannot directly provide volumetric information to a CNN-based machine learning algorithm, format conversion is required. Most current research utilizes 3D point clouds and voxels as data inputs. Meshes are also an important data type for 3D shapes, but due to their complexity and irregularity, fewer efforts have focused on learning from mesh data. Feng et al. [5] proposed MeshNet, a mesh neural network that learns 3D shape representations from mesh data with face units and feature splitting; their architecture achieves effective 3D object classification and retrieval performance compared to existing methods. Wu et al. [24] presented 3D ShapeNets, which employs a probability distribution over a 3D voxel grid; they designed a convolutional deep belief network for object recognition and shape completion from 2.5D depth maps. Qi et al. [11] directly consumed raw point cloud data in their deep neural network, PointNet, for a number of 3D recognition tasks. Lastly, Maturana et al. [9] designed VoxNet, a feed-forward CNN for classifying 32 × 32 × 32 voxel volumes from point cloud data and CAD models for real-time object recognition, which outperformed several existing methods. Voxel-based representation has gained popularity in 3D machine learning applications because it is a simple extension of pixels, making operations like 3D convolutions easy to interpret.

This paper proposes a novel approach based on deep transfer learning using 3D CNNs to identify the machining processes from a CAD model. As discussed earlier, many researchers have developed analytical and heuristic methods for FR over the years. However, the mapping of features to machining processes is prone to several complexities.
Some of these complexities include intersecting features or new features that were never seen before. FR methods are limited in their ability to learn and handle variations in machining features. The objective of this paper is to avoid reducing process planning to an FR problem, i.e., mapping features to processes. The intent behind using a CNN-based approach is for the model to learn to identify the spatial metadata that makes up a feature instead of learning to identify the feature itself, alleviating the aforementioned issues in FR. The proposed approach involves a source task of recognizing machining features and a target task of identifying machining processes. These tasks are accomplished by building two neural networks, one for each task. We separate the tasks in order to build a portable neural network for recognizing features, so that the knowledge from this model can be reused for a wide variety of problems such as cutting tool, machine tool, and fixture selection. We summarize our contributions as follows:
1. A large-scale dataset of machining features and a dataset of synthetic mechanical parts are constructed to train deep learning models.
2. A portable neural network based framework with transferrable weights for automatic feature recognition from CAD models is presented.
3. A transfer learning based framework is employed that exploits the pre-trained FR framework for MPI, bypassing feature detection.
4. The accuracy of the framework at identifying the machining processes for several synthetic and real mechanical parts is demonstrated.
The rest of the paper is organized as follows: Section 2 describes the methodology for data generation and database creation. Section 3 explains the proposed architecture of the CNN models. In Section 4, we evaluate the training process and the effectiveness of the proposed framework. Finally, conclusions are presented and the potential for future work is discussed in Section 5.

The performance of any data-driven machine learning algorithm is heavily dependent on the volume and the veracity of the data. Ideally, we want a balanced and comprehensive training dataset for the best accuracy and robustness of the model. While there is a plethora of datasets for 2D image classification problems, suitable training data for 3D machine learning problems is often lacking. This shortfall led us to generate two synthetic datasets for this paper. For the preliminary analysis, we chose milling and turning among the conventional machining processes to be classified from the CAD model. Before delving into the details of the database creation, Sections 2.1 and 2.2 describe the algorithms for synthetic feature and model generation, respectively. The algorithm chooses machining features from milling and turning workpieces and randomizes the size of the features; synthetic workpiece models are then generated by randomly combining machining features.

Basic machining features are generated by 3D Boolean operations on a base geometry. We chose 9 milling features and 7 turning features as basic features for later synthetic workpiece model generation, as shown in Fig. 3. For each feature, the algorithm generates uniform random values to set the size, position, and direction of the feature. One example, pocket milling feature generation, is given to describe this process. For milling features, the base geometry dimension is 50 mm × 50 mm × 50 mm.
The algorithm computes a Boolean difference operation between the pocket entity and the base geometry to realize a pocket milling feature. The parameters to be set for this process are shown in Fig. 4. With the base dimension d = 50 mm, the parameters R, L, W, H, a, and b are all generated by rand(0, d/2), i.e., uniform random floating-point numbers drawn from the interval (0, d/2). After the Boolean difference operation, the algorithm randomly rotates the feature model by 0/90/180/270 degrees to provide more training models. The algorithm implements a tolerance check after model generation to avoid extremely small entities, non-machinable thin walls, or overcut. The tolerance check for the pocket milling feature is shown in Fig. 4, where t is the minimum allowed wall thickness of the generated models; we chose t = 5 mm in this paper. The algorithm generates turning features using the same approach but employs a cylinder with dimensions φ50 mm × 50 mm as the base geometry. Details of basic feature generation are given in Appendix A.

Synthetic workpiece model generation employs uniformly random combinations of basic features; details are explained in Appendix B. Some of the generated workpiece models are presented in Fig. 5. Tolerance checks are again applied to avoid non-machinable conditions: the algorithm iterates over all surfaces after the Boolean operations and detects extremely small edges and surface areas in order to exclude non-machinable conditions. Further, it should be noted that the Boolean difference operation can sometimes eliminate features from the base geometry, which makes the labelling incorrect. For instance, a large pocket milling entity can contain a hole entity in its center and thereby eliminate the hole feature from the workpiece. This paper applies an additional entity check to avoid this problem. For example, the entity check computes the intersection of the pocket milling feature and the hole feature and judges whether the result equals the hole feature. If the intersection is identical to, or differs only negligibly from, the original hole feature, the output model fails the entity check and the algorithm discards it. Detailed mathematical expressions for the entity check are given in Appendix B.

For the feature database, we automatically generate 200 models for each of the 16 identified features. The models are generated with the feature on the top surface of the base stock, which limits the learning efficiency. In order to make our approach invariant with respect to orientation, each model is rotated such that 6 new models are generated with the axis of the feature orthogonal to each of the 6 faces of the bounding cuboid. Therefore, the dataset contains 1200 models per feature, for a total of 19200 models, each labelled with the corresponding feature. As discussed in Section 2.2, we generate synthetic workpieces for the mechanical part database as uniformly random combinations of features. For example, to generate a part that needs to be milled, a subset of the milling features is randomly combined. We generate 500 workpieces that need to be milled, 500 that need to be turned, and 500 that need to be both milled and turned. Similar to the feature database, we create 6 rotated variants of each of these workpieces for a rotation-agnostic model.
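The random parameter sampling and tolerance checking used during feature generation can be sketched in a few lines of Python. The exact tolerance conditions are not spelled out in the text beyond the minimum wall thickness t = 5 mm, so the wall and floor checks below are illustrative assumptions; the function name and the returned fields are hypothetical.

```python
import random

D = 50.0   # base cube edge length (mm), as in the paper
T = 5.0    # minimum allowed wall thickness (mm), as in the paper

def sample_pocket_parameters(d=D, t=T, max_tries=1000):
    """Draw pocket-milling parameters uniformly from (0, d/2) and re-sample
    until a simple wall-thickness tolerance check passes."""
    for _ in range(max_tries):
        # Pocket size and position, each uniform on (0, d/2)
        L = random.uniform(0.0, d / 2)   # pocket length
        W = random.uniform(0.0, d / 2)   # pocket width
        H = random.uniform(0.0, d / 2)   # pocket depth
        a = random.uniform(0.0, d / 2)   # offset from one side wall
        b = random.uniform(0.0, d / 2)   # offset from the adjacent side wall
        # Assumed tolerance check: remaining walls and floor at least t thick,
        # and the pocket itself no thinner than t in any direction.
        walls_ok = (a >= t and b >= t and
                    (d - a - L) >= t and (d - b - W) >= t and (d - H) >= t)
        pocket_ok = (L >= t and W >= t and H >= t)
        if walls_ok and pocket_ok:
            rotation = random.choice([0, 90, 180, 270])  # random orientation
            return {"L": L, "W": W, "H": H, "a": a, "b": b, "rot_deg": rotation}
    raise RuntimeError("no feasible pocket found within max_tries")

if __name__ == "__main__":
    print(sample_pocket_parameters())
```

Rejection sampling of this kind keeps the parameter distribution uniform over the feasible region while discarding non-machinable geometry, mirroring the check-and-discard behaviour described above.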
The mechanical part database has 18000 models with two binary labels, i.e., milling and turning. It should be noted that the proposed models are not limited to this data and can be trained on any dataset with the discussed labels. This way, more features or parts could be added to improve accuracy and robustness.

The shapes of the generated models are represented using triangularly tessellated surfaces (STL format). Although this is the most commonly used representation format in the 3D printing and machining industries, it is not convenient to use with convolutional layers. For this reason, we use occupancy grids as the training data format, with each voxel in the grid lattice representing a binary variable. The binary variable takes the value 0 if the voxel lies outside the part and 1 if it lies inside. We use a 3D mesh voxelizer called binvox [10] to convert the mesh-based models to voxelized models. The grid resolution used for the proposed method is 64 × 64 × 64. A finer resolution could improve accuracy and help the model learn more spatial features, but at the expense of computational complexity and tractability. An illustration of a voxelized model is shown in Fig. 6.

CNNs have repeatedly proved to be very efficient solutions to image classification and object recognition problems due to their ability to exploit local correlations over spatial topographies [12, 13, 14]. They are designed to emulate the animal visual cortex and adaptively learn the local spatial filters that aid in classification and regression tasks. The filters at the input layers encode information about simple spatial features at various orientations, such as lines and edges in the case of 2D pixels and planes and corners in the case of 3D voxels. By stacking several such convolutional layers atop each other, the deeper filters hierarchically encode more complex features that represent larger regions of space, leading eventually to the output layer that encodes information about the global label. The spatial structure of the training data, i.e., the occupancy grid, makes CNNs an attractive option for this problem. Transfer learning is a machine learning technique that improves the learning of a new task through the transfer of knowledge from a related task that has already been learned [18]. It is analogous to humans utilizing knowledge from previous learning experiences to perform a new task. Transfer learning is achieved here through weight sharing, where the learned weights of the pretrained model are frozen (made non-learnable) and used in the target model. As we demonstrate in Section 4, CNNs combined with transfer learning prove to be an effective choice for the machining process identification problem. The architectures of the source and target models are discussed in the following subsections.

The source model is a volumetric convolutional network with 7 layers: an input layer, three 3D convolutional layers, a pooling layer, a fully connected layer, and an output layer. The network architecture is illustrated in Fig. 7. The input layer accepts a tensor grid of a fixed size I × J × K, where the size corresponds to the resolution of the voxelized 3D models. The network can, however, be trained on occupancy grids of any size provided the resolution of the input layer is adjusted accordingly. Each element of an input unit represents a voxel and takes the value 0 or 1.
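Before the remaining layers are described, the following minimal sketch shows how an STL file can be converted into such a binary occupancy grid. It assumes the binvox command-line tool [10] is on the PATH and the commonly used binvox_rw reader module is available; the helper function and file names are hypothetical.

```python
import subprocess
import numpy as np
import binvox_rw  # third-party reader for the .binvox format

def stl_to_occupancy_grid(stl_path, resolution=64):
    """Voxelize an STL file and return a float32 array of shape
    (resolution, resolution, resolution, 1) with 1 = inside, 0 = outside."""
    # binvox writes <name>.binvox next to the input file; -d sets the grid size.
    subprocess.run(["binvox", "-d", str(resolution), stl_path], check=True)
    binvox_path = stl_path.rsplit(".", 1)[0] + ".binvox"
    with open(binvox_path, "rb") as f:
        voxels = binvox_rw.read_as_3d_array(f)   # boolean occupancy volume
    grid = voxels.data.astype(np.float32)
    return grid[..., np.newaxis]                 # add channel axis for the CNN

# Example (hypothetical file name):
# x = stl_to_occupancy_grid("synthetic_part_0001.stl")   # shape (64, 64, 64, 1)
```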
The data does not undergo any further pre-processing at the input layer. The network contains three 3D convolutional layers following the input layer. These layers accept a 4-dimensional input, where 3 dimensions represent the spatial structure of the occupancy grid and the fourth dimension represents the number of feature maps in the input. 3D convolutional layers have 3 defining parameters: the number of output feature maps f, the filter shape (d × d × d), and the spatial stride s. The output of the layer contains f feature maps created by convolving the inputs with adaptive filters of size d × d × d × f applied at spatial stride s. The output from the layer is passed through an activation function to add non-linearity to the network, since the functional mapping between input and response is rarely linear. The parameters (f, d, s) used for the three 3D convolutional layers in this architecture are (32, 7, 2), (32, 5, 1), and (64, 3, 1). We use the rectified linear unit (ReLU) as the activation function in all three 3D convolutional layers:

ReLU(x) = max(0, x)

The three 3D convolutional layers are followed by a 3D pooling layer, which takes the convolutional results and downsamples them along the spatial dimensions by a factor of m. The pooling layer in the proposed network uses max pooling, where each non-overlapping m × m × m block in the input volume is replaced by its maximum. The downsampling factor was set to 2 for this network. The pooling layer is succeeded by a fully connected layer with a single parameter n, the number of output neurons. The output of each neuron is obtained from a linear combination of all the inputs plus a bias, passed through a non-linear activation function. We set n = 128 and use the ReLU activation in this experiment. The final output layer is a variant of the fully connected layer with K output neurons, one for each classification label (16 in this case). The softmax function is used as the activation, giving the predicted probability of the input belonging to each class, with the probabilities summing to 1. The predicted probability for the i-th class is

g_i(x) = exp(x_i) / Σ_j exp(x_j)

where g_i(x) is the predicted probability of the input belonging to the i-th class and x_i is the output of the i-th neuron in the output layer. The model has about 33.7 million learnable parameters. Categorical cross-entropy is used as the objective function, with the weights at each layer as the parameters of the minimization problem. The loss is given by

L = -(1/n) Σ_{i=1}^{n} Σ_{j=1}^{K} t_ij log(g_ij)

where n is the number of training samples, t_ij = 1 if class j is the true class of sample i (i.e., y = ȳ) and 0 otherwise, and g_ij is the estimated probability of the i-th sample belonging to the j-th class.

Similar to the source model, the target model is also a volumetric convolutional network, with 8 layers: an input layer followed by four 3D convolutional layers, a max pooling layer, a fully connected layer, and an output layer. The architecture of the target model is illustrated in Fig. 8. The target model follows the same architecture as the source model for the first four layers but includes an additional convolutional layer with parameters (64, 3, 1), followed by a max pooling layer and a fully connected layer with 128 neurons.
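A Keras/TensorFlow sketch of a network matching the stated layer parameters is given below. This is an illustrative reconstruction rather than the authors' released code; the "same" padding is an assumption, chosen because it brings the parameter count close to the reported ~33.7 million.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_source_model(resolution=64, num_features=16):
    """Source (feature recognition) network: Conv3D (32,7,2) -> (32,5,1) ->
    (64,3,1), 2x2x2 max pooling, Dense(128), and a 16-way softmax output."""
    model = keras.Sequential([
        keras.Input(shape=(resolution, resolution, resolution, 1)),
        layers.Conv3D(32, kernel_size=7, strides=2, padding="same", activation="relu"),
        layers.Conv3D(32, kernel_size=5, strides=1, padding="same", activation="relu"),
        layers.Conv3D(64, kernel_size=3, strides=1, padding="same", activation="relu"),
        layers.MaxPooling3D(pool_size=2),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_features, activation="softmax"),
    ])
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-4),
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

source_model = build_source_model()
# source_model.summary()  # exact parameter count depends on the padding choice
```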
The output layer of the target model has two neurons, with the sigmoid used as the activation function. Each sigmoid output gives the estimated probability that the corresponding label applies, and its complement gives the probability that it does not; the sigmoid activation is suited to such binary decisions. The mechanical part dataset contains two binary labels, i.e., milling and turning, each taking a value of either 1 or 0. The sigmoid function is given by

σ(x) = 1 / (1 + e^(-x))

The learned weights from the three convolutional layers of the source model are used as frozen weights in the first three convolutional layers of the target model. The model has about 0.2 million non-learnable parameters (transferred weights) and 33.6 million learnable parameters. Binary cross-entropy is used as the minimization objective:

L = -(1/n) Σ_{i=1}^{n} [ y_i log(p_i) + (1 - y_i) log(1 - p_i) ]

where p_i is the predicted probability of sample i belonging to the first class and y_i is the ground truth.

Both the feature database of 19200 samples and the mechanical part database of 18000 samples were split into three subsets, namely training, validation, and testing, in a 70:15:15 ratio. The models use the training subsets to learn the weights that minimize the loss. We manually tune the hyperparameters based on the performance of the model on the validation set, and the testing subset is used to report the performance of the learned model. The Keras [3] library with a TensorFlow [1] backend was used to train the models. We employ Adam (adaptive moment estimation) as the iterative optimization method, since it has been shown empirically to perform better than AdaGrad or RMSProp with high-dimensional data [8]. All simulations in this paper were performed on a PC with a 2.67 GHz Intel Core i7 CPU, 12 GB of memory, and an NVIDIA GeForce GTX 1070 GPU.

The source model was trained on the training subset of the feature database (as described in Section 2.3). The mini-batch size was set to 32 and the learning rate to 0.0001 for this experiment; the training curves are shown in Fig. 9. The accuracy of the learned model on the testing subset was computed to be 93%. The performance of the source model on the test data is illustrated in the form of a confusion matrix in Fig. 11. It should be noted that the majority of the test error can be attributed to confusion between features #1 and #9 and between #6 and #7. When feature #1 or #6 is small, the finer information needed to distinguish it from feature #9 or #7, respectively (and vice versa), is lost in the voxelization process; this could be ameliorated with higher resolutions. Similarly, the target model was trained on the training subset of the mechanical part database (as described in Section 2.3) with some of the weights frozen, as discussed in Section 3.2. The mini-batch size and learning rate used for training were 32 and 0.0001, respectively, with each epoch using five mini-batches. After 400 steps of training, the training and validation accuracies were close to 100% and 98.44%, respectively. It should be noted that the model converges faster since it exploits the knowledge from the source model and does not need to learn from scratch. The accuracy and loss on the training and validation subsets over time are illustrated in Fig. 10. The learned model performs equally well on the testing data, with an accuracy of 98.51%.
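Continuing the sketch above, the target model and the weight transfer could look as follows. Copying and freezing the first three convolutional layers is one straightforward way to realize the described transfer; the authors' exact mechanism may differ, and the 0.5 prediction threshold in the usage comment, along with the placeholder dataset names, are assumptions.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_target_model(source_model, resolution=64):
    """Target (process identification) network: the source architecture plus an
    extra (64, 3, 1) Conv3D layer and a 2-neuron sigmoid output for the
    (milling, turning) labels."""
    model = keras.Sequential([
        keras.Input(shape=(resolution, resolution, resolution, 1)),
        layers.Conv3D(32, kernel_size=7, strides=2, padding="same", activation="relu"),
        layers.Conv3D(32, kernel_size=5, strides=1, padding="same", activation="relu"),
        layers.Conv3D(64, kernel_size=3, strides=1, padding="same", activation="relu"),
        layers.Conv3D(64, kernel_size=3, strides=1, padding="same", activation="relu"),
        layers.MaxPooling3D(pool_size=2),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(2, activation="sigmoid"),   # (milling, turning)
    ])
    # Transfer: copy the learned filters of the first three conv layers and freeze them.
    src_convs = [l for l in source_model.layers if isinstance(l, layers.Conv3D)][:3]
    tgt_convs = [l for l in model.layers if isinstance(l, layers.Conv3D)][:3]
    for tgt, src in zip(tgt_convs, src_convs):
        tgt.set_weights(src.get_weights())
        tgt.trainable = False
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-4),
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# Hypothetical usage with pre-voxelized arrays x_train, y_train, x_val, x_test:
# target_model = build_target_model(source_model)
# target_model.fit(x_train, y_train, batch_size=32, validation_data=(x_val, y_val))
# preds = (target_model.predict(x_test) > 0.5).astype(int)  # e.g. [1, 0] = milling only
```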
To evaluate the robustness of the model and to understand its effectiveness in an actual manufacturing environment, we test the model on five real industrial components of varying complexity. The CAD models for these components were partly sourced from industry and partly collected from the internet. The CAD models were manually labelled by experts in the field of machining and converted to occupancy grids using the binvox library as discussed before. The label is a 1D array of size 2, with each element taking a value of either 1 or 0; the first element corresponds to milling and the second to turning. For example, the label (1, 0) indicates that the component needs to be milled, (0, 1) indicates that it needs to be turned, and (1, 1) indicates that it needs to be both milled and turned. The selected models and the classification results are depicted in Fig. 12. The model correctly classifies all five parts, indicating an accuracy of 100% on the real test data.

This work combines insights from recent developments in 3D CNNs and transfer learning to propose an intelligent approach for identifying the machining processes from a CAD model. In order to train the neural networks, we automatically synthesize two large-scale datasets of features and synthetic mechanical parts. We demonstrate that our approach achieves promising accuracy in the classification tasks while reducing the training time. The paper also provides a transferrable framework that could be used as a pre-learned task for a wide variety of problems such as cutting tool selection, defect detection, etc. A potential direction for future work is extending the framework to determine the machinability of a component from its CAD model. The robustness of the framework can be improved by augmenting the datasets to include more features and parts from various industries. We also aim to improve the accuracy of the model by using mesh-based and multi-view representations of the data along with the volumetric representation, since voxel-based representations tend to lose information about the finer parts of the model.

This work is funded in part by the program "Engineering Faculty Conversation (EFC) on Future Manufacturing" from the College of Engineering at Purdue University.

Appendix A. Basic machining features are generated by Boolean difference or Boolean union operations between the base geometry and feature entities, as shown in Fig. A.13 and Fig. A.14. The base geometry for milling features is a cube with dimensions d × d × d, while for turning it is a cylinder with dimensions φd × d. Feature entities are the features to be machined on the base geometry; they are the part the algorithm randomizes. Three types of parameters need to be generated by the algorithm: feature size, position, and direction. The sizes of the feature entities are assigned uniform random numbers from the interval (0, d/2), denoted rand(0, d/2) (shown as W, L, H, R, A, B, C, D in Fig. A.13 and Fig. A.14). The parameters setting the relative position between a feature entity and the base geometry are a and b (see Fig. A.13 and Fig. A.14), which are also drawn as rand(0, d/2). The algorithm changes the direction of the generated feature model by 0/90/180/270 degrees following a uniform random distribution, in order to generate more varied model data and improve prediction precision. The algorithm implements a tolerance check after model generation to avoid extremely small entities, non-machinable thin walls, or overcut.
The minimum allowed wall thickness t is set to avoid the generation of thin walls, according to the resolution of the MPI system. The tolerance checks for each feature are shown in Fig. A.13 and Fig. A.14. If the generated random values satisfy the tolerance check, the generated feature is qualified as training data for the subsequent process. In this paper, we set d = 50 mm and t = 5 mm.

Appendix B. Synthetic model generation includes three types of workpiece models: models that have only milling features, models that have only turning features, and models that have both. Synthetic model generation gathers models produced by the basic feature generation algorithm and employs 3D Boolean operations to combine them. In this paper, synthetic models for milling features, denoted E_m, are expressed as the Boolean intersection of basic milling features, as shown in equation (B.1), where E_i represents the basic milling feature models. The factor rand[0,1] is a Bernoulli random value with probability 0.5 that controls whether a given feature E_i is included. This paper only employs milling features 1, 2, 3, 4, 5, 8, and 9 in Fig. A.13 to synthesize workpiece models. Synthetic models for turning features, denoted E_T, are expressed as the Boolean union of basic turning features with a certain translation along the z-axis, as shown in equation (B.2), where E_i(x, y, z) represents the basic turning feature models and h_i is the height along the z-axis for each feature. In the combined expression, E_b is the base geometry for milling and E_j(x, y, z) represents the generated milling feature models; the factor rand[0,1] again controls the existence of each feature E_j. A feature E_j with j < i can be eliminated when its volume has been entirely removed by other feature entities; feature j then no longer exists in the final output model but still carries a label according to its input. Therefore, an entity check, denoted B_E, is conducted. If B_E evaluates to true, the synthesized model includes all features required by the input labels and is qualified as training data for the subsequent process. If B_E evaluates to false, the output model does not pass the entity check and is discarded.
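The feature combination and entity check described above can be sketched as follows, using the trimesh library for the Boolean solid operations (a Boolean backend such as manifold3d must be installed). The authors' actual CAD kernel and data structures are not specified, so the dictionary layout with a feature "model", its cut "entity", and a "label" is an assumption, as is the pairwise form of the containment test.

```python
import random
import trimesh

def synthesize_milling_workpiece(basic_features, p=0.5, tol=1e-6):
    """basic_features: list of dicts {'model': Trimesh, 'entity': Trimesh, 'label': str}.
    Returns (workpiece, labels), or None if nothing is chosen or the entity check fails."""
    # Bernoulli rand[0,1] with probability 0.5 decides whether each feature is used.
    selected = [f for f in basic_features if random.random() < p]
    if not selected:
        return None

    # Entity check: if one cut entity is (almost) entirely contained in another,
    # the smaller feature disappears from the workpiece and its label would be wrong.
    for a in selected:
        for b in selected:
            if a is b:
                continue
            inter = trimesh.boolean.intersection([a["entity"], b["entity"]])
            if abs(inter.volume - b["entity"].volume) < tol:
                return None  # feature b is swallowed by feature a; discard this model

    # Combine the selected feature models by Boolean intersection, as in equation (B.1).
    workpiece = trimesh.boolean.intersection([f["model"] for f in selected])
    return workpiece, [f["label"] for f in selected]
```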
References
TensorFlow: Large-scale machine learning on heterogeneous systems.
A deep 3D convolutional neural network based design for manufacturability framework.
An integrated and intelligent computer-aided process planning methodology for machined rotationally symmetrical parts.
MeshNet: Mesh neural network for 3D shape representation.
Learning localized features in 3D CAD models for manufacturability analysis of drilled holes.
Feature recognition patterns for form features using boundary representation models.
Adam: A method for stochastic optimization.
VoxNet: A 3D convolutional neural network for real-time object recognition.
PointNet: Deep learning on point sets for 3D classification and segmentation.
Volumetric and multi-view CNNs for object classification on 3D data.
You Only Look Once: Unified, real-time object detection.
Very deep convolutional networks for large-scale image recognition.
A survey of automated process planning approaches in machining.
Render for CNN: Viewpoint estimation in images using CNNs trained with rendered 3D model views.
An approach to recognize interacting features from B-rep CAD models of prismatic machined parts using a hybrid (graph and rule based) technique.
Transfer learning. Handbook of Research on Machine Learning Applications.
ASP: An adaptive setup planning approach for dynamic machine assignments. Automation Science and Engineering.
ObjectNet3D: A large scale database for 3D object recognition.
Modeling of process parameter selection with mathematical logic for process planning.
Semantic approach to the automatic recognition of machining features. The International Journal of Advanced Manufacturing Technology.
FeatureNet: Machining feature recognition based on 3D convolution neural network.
3D ShapeNets: A deep representation for volumetric shapes.