Prof. Manu Pratap Singh
Dr. B. R. Ambedkar University, India
Title: Hybrid Evolutionary Techniques in Restricted Feed-Forward Neural Networks with Distributed Error for Classification
Abstract: 
Pattern recognition is the study of how machines can observe the environment, learn to distinguish patterns of interest from their background, and make sound and reasonable decisions about the categories of those patterns. In spite of more than 50 years of research, the design of a general-purpose machine pattern recognizer remains an elusive goal. Pattern recognition and its applications have been studied for a very long time, and various methods have been proposed to accomplish the task. The domain of pattern recognition has mostly been considered for handwritten cursive script or character recognition. The new stage in the evolution of handwriting processing results from a combination of the following elements:
(i) Improvements in recognition techniques.
(ii) The use of complex systems integrating several kinds of information.
(iii) The choice of relevant application domains.
(iv) New technologies such as high-quality, high-speed scanners and inexpensive, powerful processors.
Methods and recognition rates depend on the level of constraints on the handwriting. The constraints are mainly characterized by the type of handwriting, the number of scripters, and the spatial layout. Recognition strategies depend heavily on the nature of the character set to be recognized, and recognition becomes more difficult as the constraints decrease. Intense activity was devoted to the character recognition problem during the seventies and eighties, and fairly good results were achieved. Generally, character recognition techniques can be classified based on two criteria:
(i) The way preprocessing is performed on the data.
(ii) The type of the decision algorithm.
Studies indicate that neural network learning techniques are the most prominent choice of decision algorithm for the recognition of handwritten characters, but it is very difficult to find any general investigation that sheds light on a systematic approach to a complete neural network system for the automatic recognition of cursive handwritten characters. Some work has been done on Arabic cursive characters; that work presented a simple and computationally efficient mapping for character recognition, applied to handwritten Arabic characters, and explained that it can be used to produce an effective recognition system that identifies each character uniquely. A number of neural network models have also been proposed for handwritten character recognition. Among them, the most popular is the multilayer feed-forward neural network, which has shown its effectiveness in recognizing handwritten characters with various writing styles and sizes. However, these approaches can only provide a partial solution to real-world data because they have shown insufficient learning capabilities for similar characters. In general, for handwritten character recognition with a multilayer feed-forward neural network, the hidden units learn to extract the most useful information from the input pattern and the output units learn to discriminate on the information given by the hidden units. Therefore, it seems reasonable to provide more information to the output units in order to improve the discrimination and generalization power in recognizing handwritten characters. In this respect, the recurrent neural network offers a framework suitable for reusing the output values of the network during training. Many researchers have applied recurrent neural networks to handwritten character recognition, and some have shown promising results.
However, these approaches are mostly based on the Jordan and Elman recurrent neural networks, which were proposed for dynamic patterns; they may therefore be inefficient for recognizing handwritten characters, which are static patterns. In fact, the multilayer neural network trained by the back-propagation (BP) algorithm is currently the most widely used neural network, since it can solve many complex pattern recognition problems. The back-propagation algorithm trains the neural network by incremental adjustment of the set of weights for a given training set of patterns. A common approach is to use a two-term algorithm consisting of a learning-rate term and a momentum term. However, the back-propagation algorithm has several limitations, some of which are:

(1) It requires the neural transfer functions to be differentiable in order to calculate the derivatives of the error with respect to the synaptic weights.
(2) The performance index to be minimized is commonly the mean squared error, because a non-quadratic performance index (error) may result in a very complex performance surface.
(3) The synaptic weights obtained by the algorithm are continuous as a consequence of its weight-update equation.
(4) In most cases the two-term BP algorithm gets trapped in a local minimum of the unknown instantaneous error, and it also converges slowly, which limits its scope for real-time applications. A local minimum is defined as a point such that all points in its neighborhood have an error value greater than or equal to the error value at that point.
(5) Back-propagation learning is based on gradient descent along the error surface: the weight adjustment is proportional to the negative gradient of the error between the desired and actual outputs of the network. This instantaneous error is due to a given training pattern, which can be assumed to be a sample function of a random process, so the back-propagated error can be treated as a random variable. The gradient descent is therefore a stochastic gradient learning method, and due to this stochastic nature the path to the minimum of the error surface is a zigzag.
(6) Due to the non-quadratic nature of the random error, convergence of the BP algorithm is not guaranteed.
(7) The BP algorithm is a local learning rule. When training a network with a local rule, one inputs the samples into the network one by one, and each time the synaptic weights are updated independently of the other samples. A weight update induced by the input of one sample is an optimal step for that sample, but not for the other samples.
These limitations also persist in variations of the BP algorithm. In principle, it is more favorable if each update step is an optimal solution for the entire sample set. This requires considering the whole set of samples globally. An influential example of a global rule is the pseudo-inverse rule used for training single-layer networks.
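The two-term rule mentioned above can be sketched as follows. This is a minimal illustration of the learning-rate-plus-momentum update on a toy one-dimensional quadratic error, not the author's implementation; the function name and the constants are hypothetical.

```python
import numpy as np

def two_term_update(w, grad, prev_delta, lr=0.1, momentum=0.9):
    """One two-term BP step: a learning-rate term plus a momentum term."""
    delta = -lr * grad + momentum * prev_delta
    return w + delta, delta

# Toy quadratic error E(w) = (w - 3)^2, with gradient 2 * (w - 3).
w, prev = np.array([0.0]), np.array([0.0])
for _ in range(200):
    grad = 2.0 * (w - 3.0)
    w, prev = two_term_update(w, grad, prev)
# w approaches the minimum at w = 3
```

The momentum term reuses the previous weight change, which damps the zigzag path of pure stochastic gradient descent described in limitation (5).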

Various solutions and modifications have been proposed to minimize the limitations of the BP learning algorithm. One such attempt is a global learning rule called the Monte Carlo adaptation algorithm. The basic idea is to make an adaptation to randomly chosen synaptic weights and accept the adaptation if it improves the network performance globally. This approach works in a refined manner, but it still has no guarantee of convergence, and the nature of the error is still random and back-propagated.
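The accept-if-globally-better idea can be illustrated with a short sketch. The error function, step count, and perturbation size below are hypothetical placeholders; a real network would evaluate the error over the whole training set.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_adapt(weights, error_fn, steps=500, sigma=0.1):
    """Monte Carlo adaptation: perturb one randomly chosen weight and
    keep the change only if the global error over all samples improves."""
    w = weights.copy()
    best = error_fn(w)
    for _ in range(steps):
        trial = w.copy()
        i = rng.integers(trial.size)
        trial.flat[i] += rng.normal(0.0, sigma)   # random adaptation
        e = error_fn(trial)
        if e < best:                              # accept only global improvements
            w, best = trial, e
    return w, best

# Toy "global" error: mean squared distance of the weights to a fixed target.
target = np.array([1.0, -2.0, 0.5])
err = lambda w: float(np.mean((w - target) ** 2))
w, e = mc_adapt(np.zeros(3), err)
```

Because only improving moves are accepted, the error never increases, but nothing prevents the search from stalling, which matches the lack of a convergence guarantee noted above.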

The evolutionary search algorithm, i.e. the genetic algorithm (GA), is also considered a better alternative for searching for the global minimum and for convergence when the search space is large. As the complexity of the search space increases, the GA presents an increasingly attractive alternative to gradient-based techniques such as the error back-propagation algorithm, because it does not rely on gradient information: it can sample the search space irrespective of where the existing solutions are found, while remaining biased toward good solutions. Various good results have been reported in the literature with hybrid evolutionary learning algorithms in multilayer feed-forward neural network architectures for classification problems and the handwritten English character recognition problem. In this hybrid approach the two-term BP learning algorithm is improved by incorporating a genetic algorithm, and improved performance in terms of accuracy and rate of convergence has been observed. In this approach, however, the fitness of the weights is still evaluated with the back-propagated error of the currently presented pattern vector. Hence the performance of the network still depends on the back-propagated instantaneous error, and the adjustment of the weights still uses the back-propagated unknown instantaneous error.
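A generic GA loop over weight vectors can be sketched as below. This is an illustrative sketch, not the hybrid algorithm described above: the selection scheme, crossover, mutation rate, and the toy fitness function are all assumptions, and in the hybrid approach the fitness would instead be derived from the network error.

```python
import numpy as np

rng = np.random.default_rng(1)

def ga_search(pop, fitness_fn, generations=100, mut_sigma=0.05):
    """Illustrative GA: tournament selection, blend crossover,
    Gaussian mutation. Higher fitness means a better weight vector."""
    for _ in range(generations):
        fit = np.array([fitness_fn(ind) for ind in pop])
        new = []
        for _ in range(len(pop)):
            a, b = rng.integers(len(pop), size=2)
            p1 = pop[a] if fit[a] > fit[b] else pop[b]       # tournament pick 1
            c, d = rng.integers(len(pop), size=2)
            p2 = pop[c] if fit[c] > fit[d] else pop[d]       # tournament pick 2
            alpha = rng.random()
            child = alpha * p1 + (1 - alpha) * p2            # blend crossover
            child += rng.normal(0.0, mut_sigma, child.shape) # mutation
            new.append(child)
        pop = new
    return max(pop, key=fitness_fn)

# Toy fitness: negative squared error of a weight vector against a target.
target = np.array([0.5, -1.0])
fitness = lambda w: -float(np.sum((w - target) ** 2))
pop = [rng.normal(0.0, 1.0, 2) for _ in range(20)]
best = ga_search(pop, fitness)
```

Note that no gradient of the fitness is ever computed: the population samples the weight space directly while selection biases it toward good solutions, which is exactly the property the paragraph above attributes to the GA.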

In the proposed work we deal with the performance index, i.e. the instantaneous unknown error, in a different way: instead of back-propagating it, we consider it as a distributed error for a multilayer feed-forward neural network in which the numbers of units in the hidden layer and the output layer are equal, i.e. a restricted multilayer feed-forward neural network architecture. Thus, for a presented input pattern, the same desired output pattern is distributed to every unit of the hidden layer and the output layer. Each unit of the hidden layer and the output layer has its own actual output, so the performance measure, or error, differs for each layer; the instantaneous error is now distributed instead of back-propagated. A hybrid evolutionary algorithm is used as the learning method in this multilayer neural network architecture for the recognition of handwritten Hindi characters in their basic form, i.e. individual character recognition, and it substantially improves the efficiency of the neural network on the Hindi character recognition task. In the implementation of the hybrid evolutionary algorithm, the performance index, which serves as the fitness evaluation function, is defined with the distributed instantaneous error instead of only the back-propagated instantaneous error for the proposed restricted multilayer feed-forward neural network architecture.
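The distributed-error idea can be sketched for one forward pass. Because the hidden and output layers have the same number of units in the restricted architecture, the same desired pattern can be compared against each layer's activation, yielding one error per layer instead of a single back-propagated output error. The layer sizes, sigmoid activation, and sum-of-squares form below are illustrative assumptions.

```python
import numpy as np

def distributed_errors(x, desired, Wh, Wo):
    """Restricted feed-forward net: hidden and output layers have the
    same number of units, so the same desired pattern gives a separate
    (distributed) error for each layer."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    h = sigmoid(Wh @ x)                       # hidden-layer activations
    o = sigmoid(Wo @ h)                       # output-layer activations
    e_hidden = 0.5 * np.sum((desired - h) ** 2)   # error at the hidden layer
    e_output = 0.5 * np.sum((desired - o) ** 2)   # error at the output layer
    return e_hidden, e_output

rng = np.random.default_rng(2)
x = rng.random(4)                             # 4 input units
desired = np.array([1.0, 0.0, 1.0])           # 3 units in each later layer
Wh, Wo = rng.normal(size=(3, 4)), rng.normal(size=(3, 3))
eh, eo = distributed_errors(x, desired, Wh, Wo)
```

In the hybrid scheme described above, both per-layer errors would feed the fitness evaluation function, so no error needs to be propagated backwards through the weights.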

Biography: 
Dr. Manu Pratap Singh received his Ph.D. in Computer Science from Kumaun University, Nainital, Uttarakhand, India, in 2001. He completed his Master of Science in Computer Science from Allahabad University, Allahabad, in 1995, and further obtained an M.Tech. in Information Technology from Mysore. He has been an Associate Professor in the Department of Computer Science, Institute of Engineering and Technology, Dr. B. R. Ambedkar University, Agra, UP, India, since 2008. He has been engaged in teaching and research for the last 16 years and has more than 80 research papers in journals of international and national repute. His work has been recognized widely around the world in the form of citations of his research papers. He received the Young Scientist Award in Computer Science from the International Academy of Physical Sciences, Allahabad, in 2005. He has guided 18 students to their doctorates in computer science. He is also a referee for various international and national journals, including the International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems (World Scientific Publishing), the International Journal of Engineering, Iran, IEEE Transactions on Fuzzy Systems, and the European Journal of Operational Research (Elsevier). He has developed a feed-forward neural network simulator for handwritten character recognition of English alphabets, as well as a hybrid evolutionary algorithm for handwritten character recognition of English and for Hindi language classification. In the hybrid approach the genetic algorithm is incorporated with the back-propagation learning rule to train the feed-forward neural network; the genetic algorithm starts from a suboptimal solution and converges to optimal solutions. More than one optimal solution has been obtained, which leads to a multi-objective optimization phenomenon.
Another hybrid evolutionary approach has been developed for feedback neural networks of the Hopfield type for efficient recall of memorized patterns. Here too, the randomness of the genetic algorithm is minimized by starting it from a suboptimal solution, in the form of a parent weight matrix, and letting it converge to globally optimal solutions, i.e. correct weight matrices for the network, for efficient pattern recall. His research interests are focused on neural networks, pattern recognition and machine intelligence, soft computing, etc. He has been a member of the technical committee of IASTED, Canada, since 2004, and a regular member of the Machine Intelligence Research Labs (MIR Labs), Scientific Network for Innovation and Research Excellence (SNIRE), Auburn, Washington, USA, http://www.mirlabs.org, since 2012. His Google Scholar h-index is 9, his i10-index is 8, and he has 257 citations. He has been invited as a keynote speaker and invited guest speaker at various institutions in India and abroad.