A Study on the Effects of Combining Different Features for the Recognition of Handwritten Bangla Characters
Volume 5, Issue 2, Page No 197–203, 2020
Adv. Sci. Technol. Eng. Syst. J. 5(2), 197–203 (2020);
DOI: 10.25046/aj050225
Keywords: Handwritten Bangla character, recognition, Artificial neural network, Shadow feature, Longest run feature, Chain code histogram feature, Gabor filter, Feature combinations
This paper studies and compares the effectiveness of four different features and their combinations on the recognition accuracy of handwritten Bangla characters. The longest run, chain code histogram, shadow, and Gabor filter-based features and their eleven (11) combinations were tested on a standard Bangla database of 15,000 basic handwritten characters to compare their recognition performances. From the experiments performed, it was observed that the combination of the longest run, chain code histogram, and shadow features (having feature vector sizes of 20, 20, and 16 respectively) produces the highest recognition accuracy of 84.01%. Furthermore, including a feature whose vector is much larger than those of the other features in a combination generally lets that feature dominate the recognition accuracy. In our case, including the Gabor filter-based features, with a vector size of 1024, in the combination produced a recognition accuracy of 69.71%, which is worse than the accuracy obtained using the other three features. The analysis of the results indicates that combinations of different feature vectors produce better accuracy as long as the sizes of the individual feature vectors in the combination are comparable with each other.
1. Introduction
Handwritten character recognition has wide commercial applications, e.g., automatic letter sorting based on postal code, extracting information from bank checks and filled-in forms, and digitization of old handwritten books and documents. However, the similarity of shapes of different characters in a particular language, as well as the high variability of handwriting across writers, makes it quite challenging to correctly identify the characters in any language.
Bangla is ranked as the 6th most spoken language in the world, with nearly 230 million speakers. In Bangla, there are 50 basic character classes: 11 vowels and 39 consonants. There are also vowel modifiers, consonant modifiers, and other compound characters. Several characters in Bangla have nearly identical shapes, distinguished only by the presence of a dot, a short straight line, or a curved line, or by the number of loops or strokes. These characteristics give Bangla its unique but complex nature. For recognition of handwritten characters, the obvious similarity between characters of different classes can hurt the overall accuracy of a system.
Research on Bangla handwritten character recognition is comparatively new. Broadly speaking, two research trends are observed in handwritten character recognition. One is the extraction of features from the scanned character images, followed by a classifier that identifies the characters based on the extracted features. A more recent trend is to use convolutional neural networks to perform both feature extraction and classification. The success of the first type of method largely depends on the careful selection of features, while the second type relies heavily on the design of the network structure and on the number, as well as the variability, of the samples used to train the network.
Bhowmik et al. [1] used stroke-based features on a database of 25,000 characters with a recognition accuracy of 84.33%, while Rahman et al. [2] used a multistage approach on a database comprising 20 different samples of 49 categories each, with a recognition accuracy of 88.38%. Bag et al. [3] used skeletal concavity/convexity of characters along two directions with a recognition accuracy of 60.6%. Basu et al. [4] used an artificial neural network with a multilayer perceptron (MLP) on a database of 10,000 characters. Rahman et al. [5] used a convolutional neural network (CNN) to perform both feature extraction and classification on their own dataset of 20,000 samples with a recognition accuracy of 85.96%. Alom et al. [6] applied deep CNNs to Bangla handwritten character recognition. Although a few of these works used large datasets, those datasets were not publicly available. Therefore, it was not possible to compare the effectiveness of the different features proposed by these researchers.
Previously, we worked on the combination of three different features, namely the longest run features (LR), chain code histogram features (CH), and features extracted using Gabor filters [7]. As a classifier, an artificial neural network trained with the back-propagation algorithm was used. We used a publicly available benchmark database [8] of basic Bangla characters and obtained a recognition accuracy of 76.47% with the combination of the longest run and chain code histogram features [7].
In this paper, we used the same standard database to study and compare the effectiveness of different combinations of four types of features (the shadow feature, chain code histogram feature, longest run feature, and features extracted using Gabor filters) in recognizing Bangla handwritten (isolated) basic characters. The purpose of this research is not only to identify the best combinations of features, but also to identify the general conditions under which feature combinations yield better recognition accuracy.
2. Brief Description of the Database and Preprocessing of the Character Image

Figure 1: Randomly selected samples from the CMATER training database

Figure 2: Randomly selected samples from the CMATER test database
In this paper, a benchmark database named ‘CMATERdb 3.1.2’ has been used [8]. The data was collected by the ‘Center for
Microprocessor Application for Training Education and Research’ (CMATER), a research laboratory of Jadavpur University, India.
The handwritten isolated characters were collected from native Bangla writers of different ages, sexes, and educational backgrounds, using formatted data sheets designed by CMATER. The collected data sheets were optically scanned in grayscale at a resolution of 300 dpi. In this way, a total of 15,000 characters of 50 character classes were collected (i.e. 300 character samples for each class). The character images vary in size from 50 × 17 pixels to 589 × 667 pixels. The images were already divided into a training set and a test set. For each character class, the numbers of images in the training set and the test set are 240 and 60 respectively. Fig. 1 shows a few randomly selected samples of Bangla characters from the training database, and Fig. 2 shows samples from the test database.
For recognizing handwritten characters, it is sufficient to work with binary images because the color of the characters does not contribute any additional information for classification. Binarizing the images also reduces the computational overhead. Furthermore, the size of the images must be uniform across the dataset for proper classification. Therefore, each image was size-normalized to 128 × 128 pixels.
Fig. 3 and Fig. 4 show the samples of the preprocessed data. The foreground of the images (i.e. the character strokes) has a binary value of 1, while the background has a value of 0.
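The preprocessing above (binarization followed by size normalization to 128 × 128) can be sketched as follows. The fixed ink threshold and the nearest-neighbour resampling are assumptions, since the paper does not name the exact binarization or interpolation method used:

```python
import numpy as np

def preprocess(gray, size=128, threshold=128):
    """Binarize a grayscale scan (dark ink on light paper) and
    size-normalize it to size x size pixels."""
    binary = (gray < threshold).astype(np.uint8)  # ink = 1, paper = 0
    h, w = binary.shape
    # nearest-neighbour resampling by index selection
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return binary[np.ix_(rows, cols)]
```

Any standard thresholding method (e.g. Otsu's) and any resampling filter could be substituted without changing the rest of the pipeline.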

Figure 3: The preprocessed samples from the CMATER training database

Figure 4: The preprocessed samples from the CMATER test database
3. Description of Features
In order to obtain good recognition accuracy from a classifier, appropriate features need to be extracted from the character images. Researchers generally look to define features that are consistent among characters of the same class but differ significantly between characters of different classes.
In this paper, four features, namely shadow feature, chain code histogram feature, longest run feature, and feature extracted using Gabor filters and their different combinations have been used.
3.1 Shadow Feature
A shadow of a character is the projection of that character in a particular direction (typically on the image borders). The shadow feature computes the length of the shadow in each direction of projection. Projections along the vertical and horizontal directions were considered in this paper [9].
To extract the shadow feature, each character image was first divided into eight octants as shown in Fig. 5. For each octant, the lengths of the shadows (of the character segment falling into that octant) were computed on two perpendicular borders. The length of the shadow along a perpendicular border of an octant was divided by the total length of that border to obtain a normalized value, which was taken as a feature.
Therefore, a total of 16 shadow features from each character image were obtained. Fig. 6 demonstrates the shadow feature extraction scheme for a sample character (of 16 by 16 pixels), where arrows show the direction of the projection of shadows.

Figure 5: Octants for obtaining shadow features of image

Figure 6: Horizontal and vertical shadow features of image for each octant
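A minimal sketch of this scheme follows, under two simplifying assumptions: the eight octants are taken as 45° angular sectors around the image centre (bounded by the centre lines and the two diagonals), and each shadow length is normalized by the full side length rather than by per-octant border segments, which may differ from the paper's exact construction:

```python
import numpy as np

def shadow_features(img):
    """16 shadow features: for each of 8 octants around the centre,
    the horizontal and vertical shadow lengths of the foreground
    pixels falling in that octant, normalized by the side length."""
    n = img.shape[0]
    c = (n - 1) / 2.0
    i, j = np.indices(img.shape)
    # angle of each pixel about the centre, in [0, 360)
    ang = np.degrees(np.arctan2(c - i, j - c)) % 360
    octant = (ang // 45).astype(int)          # sector index 0..7
    feats = []
    for k in range(8):
        mask = (octant == k) & (img == 1)
        feats.append(mask.any(axis=0).sum() / n)  # occupied columns
        feats.append(mask.any(axis=1).sum() / n)  # occupied rows
    return np.array(feats)
```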
3.2 Chain Code Histogram Feature
The chain code carries the information of the shape and the size of a character image. In this method, the directions of movement along the character’s boundary are encoded using a numbering scheme and this allows for a compact representation and reduction of data [10, 11].

Figure 7: Illustration of the change of direction of boundary pixels for the whole character image
In this paper, to extract this feature, the boundary pixels of the character image (which must be 8-connected or 4-connected) were first detected using Moore’s contour tracing algorithm [12] and then the changes in the direction of the boundary pixels were coded using a scheme proposed by Freeman [13]. Fig. 7 illustrates the change in the direction of the boundary pixels for a sample Bangla character. In the figure, the direction changes along the boundary are denoted with arrows. The change in the direction between two neighboring pixels is coded using Freeman’s chain code. The numbering scheme for the code is illustrated in Fig. 8.

Figure 8: Illustration of the numbering scheme proposed by Freeman’s Chain code considering 8 connectivity of pixels
For example, if the movement from one pixel to the next is along 0°, then the movement is coded as the number 0; if the direction of movement is 45°, then the movement is coded as the number 1, and so on.
Moreover, in the case of character images, the general practice is to code oppositely directed movements with the same value: 0° and 180° are both coded as 0, 45° and 225° as 1, 90° and 270° as 2, and 135° and 315° as 3 [9]. This modification reduces the chain code to four values.
Using the modified chain code, the frequency of occurrence of each directional code (histogram) for a character image was calculated, and considered as a feature. Therefore, four features were obtained from the entire character corresponding to the histogram of four directions.
In addition to using the whole image for the chain code histogram feature, the character image was further divided into four equally sized sub-images, and then, the chain-code histogram features were also obtained from each of the four sub-images. Fig. 9 shows one of the sub-images (i.e. the top-left section of the character shown in Fig. 7) after dividing into four sub-images.

Figure 9: Demonstration of change of direction of boundary pixels of a sub-image
Therefore, using chain code, a total of 20 features (i.e. 4 from the whole image and 16 from the four sub-images) were obtained for each image. The feature vector was then normalized to keep the magnitude of each feature within 0 and 1 [9].
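The folded chain-code histogram for one boundary can be sketched as below. The ordered boundary sequence is assumed to come from a contour tracer such as Moore's (not reimplemented here), and normalization by the total number of moves is an assumption, since the paper only states that the features were scaled into [0, 1]:

```python
import numpy as np

# Freeman 8-direction codes folded mod 4, as described above:
# (d_row, d_col) between consecutive boundary pixels -> code 0..3
FOLDED_CODE = {(0, 1): 0, (-1, 1): 1, (-1, 0): 2, (-1, -1): 3,
               (0, -1): 0, (1, -1): 1, (1, 0): 2, (1, 1): 3}

def chain_code_histogram(boundary):
    """4-bin folded chain-code histogram of a closed boundary,
    given as an ordered list of (row, col) pixel coordinates."""
    hist = np.zeros(4)
    for (r0, c0), (r1, c1) in zip(boundary, boundary[1:] + boundary[:1]):
        hist[FOLDED_CODE[(r1 - r0, c1 - c0)]] += 1
    return hist / hist.sum()
```

Applying the same function to the whole boundary and to the boundary portions falling in each of the four sub-images gives the 4 + 16 = 20 features.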
3.3 Longest Run Feature
For a character image, the longest run feature gives a measure of the total longest length of consecutive black (i.e. foreground) pixels along a particular direction. Generally, this feature is extracted for four different directions: row-wise, column-wise, and along the two major diagonals. Fig. 10 illustrates the row-wise longest run feature calculation for a sample image. From the figure, it is observed that the longest lengths of consecutive black pixels for each of the 16 rows of the character are 0, 8, 2, 3, 3, 3, 2, 5, 2, 2, 3, 5, 2, 2, 6, and 4 respectively, which add up to 52.

Figure 10: Longest run values for each row of the character image
From the entire image, 4 longest run features were obtained. The image was then divided into four sub-images around the center of gravity of the image [8]. The coordinates of the center of gravity (CG_x, CG_y) of any image were calculated according to the following formula,

$$CG_x = \frac{1}{k}\sum_{x=1}^{m}\sum_{y=1}^{n} x\,I(x,y), \qquad CG_y = \frac{1}{k}\sum_{x=1}^{m}\sum_{y=1}^{n} y\,I(x,y)$$

where x and y are the coordinates of each pixel in the image I(x, y) of size m × n, and k is the count of pixels having I(x, y) = 1.
Therefore, a total of 20 features, i.e. 4 from the entire image, and 16 from the four sub-images were obtained altogether. The value of the features so computed is divided by the area of the entire image (or corresponding sub-image) for normalization [8].
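The four directional longest-run features for one (sub-)image, and the centre-of-gravity computation used to split the image, can be sketched as follows; normalization by the image area follows [8]:

```python
import numpy as np

def _longest_run(line):
    # longest run of consecutive 1s in a 1-D binary array
    best = run = 0
    for v in line:
        run = run + 1 if v else 0
        best = max(best, run)
    return best

def longest_run_features(img):
    """Sum of per-line longest runs along rows, columns, and the two
    diagonal directions, normalized by the (sub-)image area."""
    h, w = img.shape
    rows = sum(_longest_run(img[r]) for r in range(h))
    cols = sum(_longest_run(img[:, c]) for c in range(w))
    diag1 = sum(_longest_run(np.diagonal(img, k)) for k in range(-h + 1, w))
    diag2 = sum(_longest_run(np.diagonal(img[::-1], k)) for k in range(-h + 1, w))
    return np.array([rows, cols, diag1, diag2]) / (h * w)

def centre_of_gravity(img):
    """Mean coordinates of the foreground pixels, used to divide the
    image into the four sub-images."""
    x, y = np.nonzero(img)  # rows, cols of pixels with I(x, y) = 1
    return x.mean(), y.mean()
```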
3.4 Feature Extracted using Gabor filters
Gabor filters are useful for extracting directional features of a character. A two-dimensional (2D) Gabor function is a sinusoidal plane wave modulated by a Gaussian kernel function [14], which is expressed as,

$$g(x,y;\lambda,\theta,\psi,\sigma,\gamma) = \exp\!\left(-\frac{x'^2 + \gamma^2 y'^2}{2\sigma^2}\right)\cos\!\left(2\pi\frac{x'}{\lambda} + \psi\right)$$

with x' = x cos θ + y sin θ and y' = −x sin θ + y cos θ, where x, y are the spatial coordinates of the image pixels, λ represents the wavelength and ψ the phase offset of the sinusoid, θ is the spatial orientation of the Gabor function, σ is the standard deviation of the Gaussian envelope, and γ is the spatial aspect ratio. Fig. 11 shows the Gabor filter kernels for θ = 0°, 45°, 90°, and 135°. Here, the values of γ, λ, ψ, σ are 0.3, 9, 0, and 3 respectively. These represent the optimal values of the parameters and were obtained heuristically.

Figure 11: Gabor filter kernels
Fig. 12 shows an original (unfiltered) sample of a character image and Fig. 13 shows the filtered images using the four kernels of the Gabor filter. The enhanced directional features are evident from the filtered images.

Figure 12: Original sample image before applying Gabor filter

Figure 13: Filtered sample image with enhanced directional feature
Since each of the four filtered images has a size of 128 × 128, they were down-sampled to sub-sampled images of size 32 × 32 each. The four filtered, sub-sampled images were then averaged and used as a feature. This was done to reduce the high dimensionality of the feature vector. The feature vector size for each character image is thus 32 × 32 = 1024.
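The kernel construction and pooling described above can be sketched as follows. The 19 × 19 kernel size, the FFT-based circular convolution, and the block-average down-sampling are assumptions, as the paper does not specify them:

```python
import numpy as np

def gabor_kernel(theta, lam=9, psi=0, sigma=3, gamma=0.3, size=19):
    """2D Gabor kernel with the parameter values reported in the
    paper; the kernel size is an assumption."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xp = x * np.cos(theta) + y * np.sin(theta)
    yp = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xp**2 + gamma**2 * yp**2) / (2 * sigma**2)) \
        * np.cos(2 * np.pi * xp / lam + psi)

def gabor_features(img):
    """Filter a 128x128 image with the four oriented kernels,
    block-average each response down to 32x32, and average the four
    responses into one 1024-dimensional feature vector."""
    feats = np.zeros((32, 32))
    for deg in (0, 45, 90, 135):
        k = gabor_kernel(np.radians(deg))
        # circular convolution via zero-padded FFT of the kernel
        resp = np.real(np.fft.ifft2(np.fft.fft2(img) *
                                    np.fft.fft2(k, img.shape)))
        feats += resp.reshape(32, 4, 32, 4).mean(axis=(1, 3))
    return (feats / 4).ravel()
```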
4. Artificial Neural Network
An artificial neural network (ANN) was used as the classifier to identify the characters. The ANN consists of interconnected nodes in different layers [15], where the nodes are distributed over mainly three types of layers: (i) the input layer, (ii) the output layer, and (iii) one or more hidden layers. Mathematically, the output of a neuron is the weighted sum of its n input signals x_j, j = 1, 2, …, n, passed through an activation function that controls the firing of the neuron. The output is expressed as,

$$y = f\left(\sum_{j=1}^{n} w_j x_j + b\right)$$

where f(·) is the sigmoid activation function, w_j is the weight associated with the j-th input, and b is the bias.
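In code, the weighted-sum-and-sigmoid rule for a single neuron is simply:

```python
import numpy as np

def neuron_output(x, w, b):
    """Weighted sum of the inputs, passed through the sigmoid."""
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))
```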
We used a feed-forward neural network architecture (with a vanilla structure), where each layer is fully connected to the next, as shown in Fig. 14. In the figure, w_ij^L is the weight of the connection from the i-th node in layer (L − 1) to the j-th node in layer L.
To train the network, p input-output pairs (called training pairs), defined as {(x(1), d(1)), (x(2), d(2)), …, (x(p), d(p))}, are used. Here, each input x(r) = [x_1(r), x_2(r), …, x_n(r)]^T is a feature vector in n-dimensional space, and each corresponding output d(r) = [d_1(r), d_2(r), …, d_m(r)]^T (where d_q(r) ∈ [0, 1]) is a vector in m-dimensional space.

Figure 14: Artificial neural network
Note that m is the number of classes. In each vector d(r), only the element of the correct class has a value of 1, and all other elements have a value of 0. The cost function is defined between the predicted class (y) and the actual class (d) over the entire training range as,

$$E = \frac{1}{2}\sum_{r=1}^{p}\sum_{q=1}^{m}\left(d_q(r) - y_q(r)\right)^2$$

During the training phase, the back-propagation learning algorithm is used to optimize the weight values through the minimization of this squared-error cost function [16].
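A minimal one-hidden-layer network trained by back-propagation on the squared-error cost can be sketched as follows; the layer width, learning rate, and epoch count are illustrative, not the paper's settings:

```python
import numpy as np

def train_mlp(X, D, hidden=32, lr=0.5, epochs=2000, seed=0):
    """Batch-gradient training of a one-hidden-layer sigmoid MLP.
    X: (p, n) feature vectors; D: (p, m) one-hot target vectors.
    Returns a predict function mapping features to class indices."""
    rng = np.random.default_rng(seed)
    n, m = X.shape[1], D.shape[1]
    W1 = rng.normal(0, 0.5, (n, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, m)); b2 = np.zeros(m)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        h = sig(X @ W1 + b1)                 # hidden activations
        y = sig(h @ W2 + b2)                 # network outputs
        # gradients of E = 0.5 * sum (d - y)^2, back-propagated
        dy = (y - D) * y * (1 - y)
        dh = (dy @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ dy; b2 -= lr * dy.sum(axis=0)
        W1 -= lr * X.T @ dh; b1 -= lr * dh.sum(axis=0)
    return lambda Z: sig(sig(Z @ W1 + b1) @ W2 + b2).argmax(axis=1)
```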
5. Classification Results and Analysis
The features extracted from the training data set were used as the inputs of the ANN for varying numbers of hidden layers (up to 500), and the recognition accuracy was checked on the test data set. Thus, the features obtained from the training data train the network, and the recognition accuracy indicates how well the trained network recognizes the test data set.

Figure 15: Recognition accuracy of test data for single feature sets
Figure 15 shows the recognition accuracy on the test data against different numbers of hidden layers, where only single features were used. The recognition performance based on the shadow feature is better than that of the other three individual features. On the other hand, the features extracted using the Gabor filter produce the poorest results among the four.
Figure 16 shows the recognition accuracy on the test data against different numbers of hidden layers, where different combinations of features (taken two at a time) were used.

Figure 16: Recognition accuracy of test data for feature sets taken in pairs
Again, the recognition accuracy is poor whenever a combination involving the Gabor filter-based features is used. The combinations excluding the Gabor filter-based features also produce better results than those obtained using individual features.

Figure 17: Recognition accuracy of test data for feature sets taken three or more at a time
Figure 17 shows the recognition accuracy on the test data against different numbers of hidden layers, where different combinations of features (taken more than two at a time) were used. As in the previous case, the recognition accuracy is better without the Gabor filter-based features. The feature combinations excluding the Gabor filter-based features produce the best results among all the combinations.
Table 1 summarizes the maximum recognition accuracy of the neural network (on the test data) under the different feature sets and their combinations. The optimal number of hidden layers of the network is also shown in the table.
For the individual feature sets, the highest recognition accuracy is 66.51%, obtained using the shadow features, while the recognition accuracy is the lowest (i.e. 59%) for the Gabor filter-based features. It is notable that the shadow feature produces the best result although its feature vector size is only 16, the lowest among all the individual feature sets used in this paper, while the Gabor filter-based features perform the worst, although their feature vector size is the largest (i.e. 1024).
Table 1: Classification performance of different feature sets and their combinations
| No. | Feature name | Number of features | Classification accuracy | Number of hidden layers |
|-----|--------------|--------------------|--------------------------|-------------------------|
| 1 | Longest run (LR) | 20 | 61.52% | 476 |
| 2 | Chain code histogram (CC) | 20 | 63.47% | 476 |
| 3 | Gabor filter (GF) | 1024 | 59% | 428 |
| 4 | Shadow feature (SF) | 16 | 66.51% | 390 |
| 5 | LR and CC combination | 40 | 76.47% | 358 |
| 6 | LR and GF combination | 1044 | 60.96% | 356 |
| 7 | LR and SF combination | 36 | 78.07% | 418 |
| 8 | CC and GF combination | 1044 | 65.49% | 440 |
| 9 | CC and SF combination | 36 | 80.13% | 326 |
| 10 | GF and SF combination | 1040 | 63.54% | 464 |
| 11 | LR, CC, and SF combination | 56 | 84.01% | 406 |
| 12 | LR, CC, and GF combination | 1064 | 66.68% | 370 |
| 13 | LR, SF, and GF combination | 1060 | 65.11% | 348 |
| 14 | CC, SF, and GF combination | 1060 | 68.77% | 366 |
| 15 | LR, CC, SF, and GF combination | 1080 | 69.71% | 202 |
However, higher recognition accuracy was obtained when combinations of features were used. There are six possible combinations of feature sets taken two at a time, four taken three at a time, and only one combination of all four feature sets. Among the combinations, the combination of the longest run, chain code histogram, and shadow features produces the best result, i.e. 84.01%, with only 56 features.
It is seen that any combination containing the Gabor filter-based features produces comparatively lower accuracy. The reason is that the feature vector of the Gabor filter-based features is more than 50 times larger than that of any of the other three individual features: while its size is 1024, those of the longest run, chain code histogram, and shadow features are 20, 20, and 16 respectively. As a result, the Gabor filter-based features overshadow the other features in these combinations. Since the Gabor filter-based features themselves do not produce very good recognition accuracy (i.e. only 59%), the recognition accuracy of these combinations is weighed down by this dominating feature.
Combinations of the other three features (i.e. longest run, chain code histogram, and shadow features) produce relatively better recognition accuracy because the sizes of the three feature vectors are comparable. Moreover, for a particular class, the sets of character images recognized by the individual feature sets are partly non-overlapping; that is, some characters that are not recognized using one feature set are recognized using another.
Therefore, to get better recognition accuracy, more than one feature set is necessary, and the feature vectors of these sets should be comparable in size; otherwise, the results may be dominated by a particular feature set and the purpose of combining different feature sets may not be fulfilled.
6. Conclusion
In this paper, we have compared the effectiveness of four feature sets (shadow, longest run, chain code histogram and Gabor filter based feature) on Bangla handwritten isolated basic character recognition. Moreover, the effectiveness of 11 different combinations of these features were also studied. A benchmark database with 15,000 samples of Bangla basic characters was used for this comparative study.
We observed that, among the individual features, the shadow feature was the most effective for Bangla handwritten character recognition, although its feature vector size was only 16. Among the combinations, the combination of the shadow, longest run, and chain code histogram features yielded the best recognition accuracy, i.e. 84.01%, with a feature vector size of only 56. In this combination, the sizes of the feature vectors from the shadow, longest run, and chain code histogram features were 16, 20, and 20 respectively, which are comparable. This implies that, in this combination, the three different features contribute roughly equally to recognizing the character classes. If a combination includes a feature with a much larger vector size than the other feature vectors in the combination, then that feature can strongly influence the recognition accuracy. In our case, when the Gabor filter-based features (with a vector size of 1024) were added to the combination, the result was poorer than those obtained with combinations of the other features. Therefore, we can surmise that a combination of features performs better in recognizing character classes as long as the sizes of the feature vectors are comparable.
Although we used hand-coded features and a feed-forward neural network trained with back-propagation in this paper, the recent trend is to use convolutional neural networks (CNNs). CNNs have many layers, and their structure allows the hidden layers to become sensitized to different features during training. We mainly avoided CNNs because of the small size of the available database, but in future work we would like to use CNNs to evaluate the recognition accuracy of handwritten characters and compare the results with those obtained in this paper.
- T. K. Bhowmik, U. Bhattacharya, S. K. Parui, “Recognition of Bangla Handwritten Characters Using an MLP Classifier Based on Stroke Features” in 11th International Conference, ICONIP 2004, Calcutta, India, 2004. https://doi.org/10.1007/978-3-540-30499-9_125
- A. F. R. Rahman, R. Rahman, M. C. Fairhurst, “Recognition of handwritten Bengali characters: a novel multistage approach” Pattern Recognition, 35(5), 997-1006, 2002. https://doi.org/10.1016/s0031-3203(01)00089-9
- S. Bag, P. Bhowmick, G. Harit, “Recognition of Bengali Handwritten Characters Using Skeletal Convexity and Dynamic Programming” in 2011 Second International Conference on Emerging Applications of Information Technology, Kolkata, India, 2011. https://doi.org/10.1109/eait.2011.44
- S. Basu, N. Das, R. Sarkar, M. Kundu, M. Nasipuri, D.K. Basu, “A hierarchical approach to recognition of handwritten Bangla characters” Pattern Recognition, 42(7), 1467-1484, 2009. https://doi.org/10.1016/j.patcog.2009.01.008
- M. A. H. Akhand, M. M. Rahman, P. C. Shill, S. Islam, M. M. H. Rahman, “Bangla Handwritten Numeral Recognition using Convolutional Neural Network” in 2015 International Conference on Electrical Engineering and Information Communication Technology (ICEEICT), Dhaka, Bangladesh, 2015. https://doi.org/10.1109/iceeict.2015.7307467
- M. Z. Alom, P. Sidike, M. Hasan, T. M. Taha, V. K. Asari, “Handwritten Bangla character recognition using the state-of-the-art deep convolutional neural networks” Computational Intelligence and Neuroscience, 2018, 1-13, 2018. https://doi.org/10.1155/2018/6747098
- H. Begum, A. Rafid, M. M. Islam, “Recognition of Bangla Handwritten Characters using Feature Combinations” in 2018 5th IEEE Uttar Pradesh Section International Conference on Electrical, Electronics and Computer Engineering (UPCON), Gorakhpur, India, 2018. https://doi.org/10.1109/upcon.2018.8597076
- N. Das, K. Acharya, R. Sarkar, S. Basu, M. Kundu, M. Nasipuri, “A benchmark image database of isolated Bangla handwritten compound characters” International Journal on Document Analysis and Recognition (IJDAR), 17(4), 413-431, 2014. https://doi.org/10.1007/s10032-014-0222-y
- D. R. Birajdar, M. M. Patil, “Recognition of off-line handwritten Devanagari characters using combinational feature extraction” International Journal of Computer Applications, 120(3), 1-4, 2015. https://doi.org/10.5120/21204-3883
- Y. Qian, W. Xichang, Z. Huaying, S. Zhen, L. Jiang, “Recognition Method for Handwritten Digits Based on Improved Chain Code Histogram Feature” in Proceedings of 3rd International Conference on Multimedia Technology (ICMT-13), Guangzhou, China, 2013. http://doi.org/10.2991/icmt-13.2013.53
- R. C. Gonzalez, R. Woods, Digital Image Processing, Prentice Hall, 2008.
- U. Pape, “Implementation and efficiency of Moore-algorithms for the shortest route problem” Mathematical Programming 7(1), 212-222, 1974. https://doi.org/10.1007/BF01585517
- H. Freeman, “On the encoding of arbitrary geometric configurations” IRE Transactions on Electronic Computers, EC-10(2), 260–268, 1961. https://doi.org/10.1109/tec.1961.5219197
- D. Gabor, “Theory of communication. Part 1: The Analysis of Information” Journal of the Institution of Electrical Engineers – Part III: Radio and Communication Engineering, 93(26), 429–441, 1946. https://doi.org/10.1049/ji-3-2.1946.0074
- S. Haykin, Neural Networks: A Comprehensive Foundation, Prentice Hall PTR, 1994.
- C. M. Bishop, Pattern Recognition and Machine Learning, Springer, 2006.
- Shahnaj Parvin, Liton Jude Rozario, Md. Ezharul Islam, "Vehicle Number Plate Detection and Recognition Techniques: A Review", Advances in Science, Technology and Engineering Systems Journal, vol. 6, no. 2, pp. 423–438, 2021. doi: 10.25046/aj060249
- Basavaraj Madagouda, R. Sumathi, "Artificial Neural Network Approach using Mobile Agent for Localization in Wireless Sensor Networks", Advances in Science, Technology and Engineering Systems Journal, vol. 6, no. 1, pp. 1137–1144, 2021. doi: 10.25046/aj0601127
- Anass Barodi, Abderrahim Bajit, Taoufiq El Harrouti, Ahmed Tamtaoui, Mohammed Benbrahim, "An Enhanced Artificial Intelligence-Based Approach Applied to Vehicular Traffic Signs Detection and Road Safety Enhancement", Advances in Science, Technology and Engineering Systems Journal, vol. 6, no. 1, pp. 672–683, 2021. doi: 10.25046/aj060173
- Khalil Mustafa Ahmad Yousef, Bassam Jamil Mohd, Yusra Abd-Al-Haleem Al-Khalaileh, Ahlam Hani Al-Hmeadat, Bushra Ibrahim El-Ziq, "Automatic License Plate Detection and Recognition for Jordanian Vehicles", Advances in Science, Technology and Engineering Systems Journal, vol. 5, no. 6, pp. 699–709, 2020. doi: 10.25046/aj050684
- Katerina Prihodova, Miloslav Hub, "Hand-Based Biometric Recognition Technique – Survey", Advances in Science, Technology and Engineering Systems Journal, vol. 5, no. 6, pp. 689–698, 2020. doi: 10.25046/aj050683
- Khalid Chennoufi, Mohammed Ferfra, "Fast and Efficient Maximum Power Point Tracking Controller for Photovoltaic Modules", Advances in Science, Technology and Engineering Systems Journal, vol. 5, no. 6, pp. 606–612, 2020. doi: 10.25046/aj050674
- Gede Putra Kusuma, Jonathan, Andreas Pangestu Lim, "Emotion Recognition on FER-2013 Face Images Using Fine-Tuned VGG-16", Advances in Science, Technology and Engineering Systems Journal, vol. 5, no. 6, pp. 315–322, 2020. doi: 10.25046/aj050638
- Fei Gao, Jiangjiang Liu, "Effective Segmented Face Recognition (SFR) for IoT", Advances in Science, Technology and Engineering Systems Journal, vol. 5, no. 6, pp. 36–44, 2020. doi: 10.25046/aj050605
- Kodai Kitagawa, Koji Matsumoto, Kensuke Iwanaga, Siti Anom Ahmad, Takayuki Nagasaki, Sota Nakano, Mitsumasa Hida, Shogo Okamatsu, Chikamune Wada, "Posture Recognition Method for Caregivers during Postural Change of a Patient on a Bed using Wearable Sensors", Advances in Science, Technology and Engineering Systems Journal, vol. 5, no. 5, pp. 1093–1098, 2020. doi: 10.25046/aj0505133
- Suni S S, K Gopakumar, "Dense SIFT–Flow based Architecture for Recognizing Hand Gestures", Advances in Science, Technology and Engineering Systems Journal, vol. 5, no. 5, pp. 944–954, 2020. doi: 10.25046/aj0505115
- Daniyar Nurseitov, Kairat Bostanbekov, Maksat Kanatov, Anel Alimova, Abdelrahman Abdallah, Galymzhan Abdimanap, "Classification of Handwritten Names of Cities and Handwritten Text Recognition using Various Deep Learning Models", Advances in Science, Technology and Engineering Systems Journal, vol. 5, no. 5, pp. 934–943, 2020. doi: 10.25046/aj0505114
- Rohith Raj S, Pratiba D, Ramakanth Kumar P, "Facial Expression Recognition using Facial Landmarks: A Novel Approach", Advances in Science, Technology and Engineering Systems Journal, vol. 5, no. 5, pp. 24–28, 2020. doi: 10.25046/aj050504
- Adonis Santos, Patricia Angela Abu, Carlos Oppus, Rosula Reyes, "Real-Time Traffic Sign Detection and Recognition System for Assistive Driving", Advances in Science, Technology and Engineering Systems Journal, vol. 5, no. 4, pp. 600–611, 2020. doi: 10.25046/aj050471
- Moulay Smail Bouzakraoui, Abdelalim Sadiq, Abdessamad Youssfi Alaoui, "Customer Satisfaction Recognition Based on Facial Expression and Machine Learning Techniques", Advances in Science, Technology and Engineering Systems Journal, vol. 5, no. 4, pp. 594–599, 2020. doi: 10.25046/aj050470
- Amar Choudhary, Deependra Pandey, Saurabh Bhardwaj, "Overview of Solar Radiation Estimation Techniques with Development of Solar Radiation Model Using Artificial Neural Network", Advances in Science, Technology and Engineering Systems Journal, vol. 5, no. 4, pp. 589–593, 2020. doi: 10.25046/aj050469
- Deborah Ooi Yee Hui, Syaheerah Lebai Lutfi, Syibrah Naim, Zahid Akhtar, Ahmad Sufril Azlan Mohamed, Kamran Siddique, "The Sound of Trust: Towards Modelling Computational Trust using Voice-only Cues at Zero-Acquaintance", Advances in Science, Technology and Engineering Systems Journal, vol. 5, no. 4, pp. 469–476, 2020. doi: 10.25046/aj050456
- Ali Al-Rashid, "Composition of Methods to Ensure Iris Liveness and Authenticity", Advances in Science, Technology and Engineering Systems Journal, vol. 5, no. 4, pp. 130–143, 2020. doi: 10.25046/aj050417
- Chafaa Hamrouni, "Multi-Agent Data Recognition System Based on Received Signal in Antenna on Board Telecom System", Advances in Science, Technology and Engineering Systems Journal, vol. 5, no. 4, pp. 94–98, 2020. doi: 10.25046/aj050412
- Krina B. Gabani, Mayuri A. Mehta, Stephanie Noronha, "Racial Categorization Methods: A Survey", Advances in Science, Technology and Engineering Systems Journal, vol. 5, no. 3, pp. 388–401, 2020. doi: 10.25046/aj050350
- Jesuretnam Josemila Baby, James Rose Jeba, "A Hybrid Approach for Intrusion Detection using Integrated K-Means based ANN with PSO Optimization", Advances in Science, Technology and Engineering Systems Journal, vol. 5, no. 3, pp. 317–323, 2020. doi: 10.25046/aj050341
- Suharjito, Atria Dika Puspita, "Face Recognition on Low Resolution Face Image With TBE-CNN Architecture", Advances in Science, Technology and Engineering Systems Journal, vol. 5, no. 2, pp. 730–738, 2020. doi: 10.25046/aj050291
- Yeji Shin, Youngone Cho, Hyun Wook Kang, Jin-Gu Kang, Jin-Woo Jung, "Neural Network-based Efficient Measurement Method on Upside Down Orientation of a Digital Document", Advances in Science, Technology and Engineering Systems Journal, vol. 5, no. 2, pp. 697–702, 2020. doi: 10.25046/aj050286
- Evan Kristia Wigati, Gede Putra Kusuma, Yesun Utomo, "Combination of Salient Object Detection and Image Matching for Object Instance Recognition", Advances in Science, Technology and Engineering Systems Journal, vol. 5, no. 2, pp. 584–591, 2020. doi: 10.25046/aj050273
- Jude B. Rola, Cherry Lyn C. Sta. Romana, Larmie S. Feliscuzo, Ivy Fe M. Lopez, Cherry N. Rola, "A Comparative Analysis of ARIMA and Feed-Forward Neural Network Prognostic Model for Bull Services", Advances in Science, Technology and Engineering Systems Journal, vol. 5, no. 2, pp. 411–418, 2020. doi: 10.25046/aj050253
- Walaa Gouda, Randa Jabeur Ben Chikha, "NAO Humanoid Robot Obstacle Avoidance Using Monocular Camera", Advances in Science, Technology and Engineering Systems Journal, vol. 5, no. 1, pp. 274–284, 2020. doi: 10.25046/aj050135
- Noor A.Hussein, Mohamed Ibrahim Shujaa, "Smart Ambulance: Speed Clearance in the Internet of Things paradigm using Voice Chat", Advances in Science, Technology and Engineering Systems Journal, vol. 4, no. 6, pp. 280–284, 2019. doi: 10.25046/aj040635
- Ivan P. Yamshchikov, Alexey Tikhonov, "Learning Literary Style End-to-end with Artificial Neural Networks", Advances in Science, Technology and Engineering Systems Journal, vol. 4, no. 6, pp. 115–125, 2019. doi: 10.25046/aj040614
- Mohamad Faiz Ahmad, Syed Sahal Nazli Alhady, Ooi Zhu Oon, Wan Amir Fuad Wajdi Othman, Aeizaal Azman Abdul Wahab, Ahmad Afiq Muhammad Zahir, "Embedded Artificial Neural Network FPGA Controlled Cart", Advances in Science, Technology and Engineering Systems Journal, vol. 4, no. 4, pp. 509–516, 2019. doi: 10.25046/aj040461
- Shanmuganathan Vasanthapriyan, Malith De Silva, "Optical Braille Recognition Software Prototype for the Sinhala Language", Advances in Science, Technology and Engineering Systems Journal, vol. 4, no. 4, pp. 221–229, 2019. doi: 10.25046/aj040427
- Hakan Tora, Gursel Karacor, Baran Uslu, "Vowel Classification Based on Waveform Shapes", Advances in Science, Technology and Engineering Systems Journal, vol. 4, no. 3, pp. 16–24, 2019. doi: 10.25046/aj040303
- Weiguo Wan, Hyo Jong Lee, "Deep Feature Representation for Face Sketch Recognition", Advances in Science, Technology and Engineering Systems Journal, vol. 4, no. 2, pp. 107–111, 2019. doi: 10.25046/aj040214
- Ivaylo Penev, Milena Karova, Mariana Todorova, Danislav Zhelyazkov, "Robot Self-Detection System", Advances in Science, Technology and Engineering Systems Journal, vol. 3, no. 6, pp. 391–402, 2018. doi: 10.25046/aj030647
- Denise Ashe, Alan Eardley, Bobbie Fletcher, "An Empirical Study of Icon Recognition in a Virtual Gallery Interface", Advances in Science, Technology and Engineering Systems Journal, vol. 3, no. 6, pp. 289–313, 2018. doi: 10.25046/aj030637
- Tamarafinide Victory Dittimi, Ching Yee Suen, "Modified HOG Descriptor-Based Banknote Recognition System", Advances in Science, Technology and Engineering Systems Journal, vol. 3, no. 5, pp. 354–364, 2018. doi: 10.25046/aj030541
- Sudipta Saha, Aninda Saha, Zubayr Khalid, Pritam Paul, Shuvam Biswas, "A Machine Learning Framework Using Distinctive Feature Extraction for Hand Gesture Recognition", Advances in Science, Technology and Engineering Systems Journal, vol. 3, no. 5, pp. 72–81, 2018. doi: 10.25046/aj030510
- Margaret Lech, Melissa Stolar, Robert Bolia, Michael Skinner, "Amplitude-Frequency Analysis of Emotional Speech Using Transfer Learning and Classification of Spectrogram Images", Advances in Science, Technology and Engineering Systems Journal, vol. 3, no. 4, pp. 363–371, 2018. doi: 10.25046/aj030437
- Sherif Said, Samer Al-Kork, Vishnu Nair, Itta Gowthami, Taha Beyrouthy, Xavier Savatier, M Fayek Abdrabbo, "Experimental Investigation of Human Gait Recognition Database using Wearable Sensors", Advances in Science, Technology and Engineering Systems Journal, vol. 3, no. 4, pp. 201–210, 2018. doi: 10.25046/aj030418
- Alaa Hamza Omran, Yaser Muhammad Abid, "Design of smart chess board that can predict the next position based on FPGA", Advances in Science, Technology and Engineering Systems Journal, vol. 3, no. 4, pp. 193–200, 2018. doi: 10.25046/aj030417
- Ola Surakhi, Mohammad Khanafseh, Yasser Jaffal, "An enhanced Biometric-based Face Recognition System using Genetic and CRO Algorithms", Advances in Science, Technology and Engineering Systems Journal, vol. 3, no. 3, pp. 116–124, 2018. doi: 10.25046/aj030316
- Rasel Ahmmed, Md. Asadur Rahman, Md. Foisal Hossain, "An Advanced Algorithm Combining SVM and ANN Classifiers to Categorize Tumor with Position from Brain MRI Images", Advances in Science, Technology and Engineering Systems Journal, vol. 3, no. 2, pp. 40–48, 2018. doi: 10.25046/aj030205
- Sougata Sheet, Anupam Ghosh, Sudhindu Bikash Mandal, "Cancer Mediating Genes Recognition using Multilayer Perceptron Model- An Application on Human Leukemia", Advances in Science, Technology and Engineering Systems Journal, vol. 3, no. 2, pp. 8–20, 2018. doi: 10.25046/aj030202
- Eleni Bougioukou, Nikolaos Toulgaridis, Maria Varsamou, Theodore Antonakopoulos, "Hardware Acceleration on Cloud Services: The use of Restricted Boltzmann Machines on Handwritten Digits Recognition", Advances in Science, Technology and Engineering Systems Journal, vol. 3, no. 1, pp. 483–495, 2018. doi: 10.25046/aj030159
- Yuksel Arslan, Abdussamet Tanıs, Huseyin Canbolat, "A Relational Database Model and Tools for Environmental Sound Recognition", Advances in Science, Technology and Engineering Systems Journal, vol. 2, no. 6, pp. 145–150, 2017. doi: 10.25046/aj020618
- Krystian Kapala, Dawid Krawczyk, Stefan Brachmanski, "Emotional state recognition in speech signal", Advances in Science, Technology and Engineering Systems Journal, vol. 2, no. 3, pp. 1654–1659, 2017. doi: 10.25046/aj0203205
- Swapnil Vitthal Tathe, Abhilasha Sandipan Narote, Sandipan Pralhad Narote, "Face Recognition and Tracking in Videos", Advances in Science, Technology and Engineering Systems Journal, vol. 2, no. 3, pp. 1238–1244, 2017. doi: 10.25046/aj0203156
- Mohammed Myasar Ali, Hui Liu, Norbert Stoll, Kerstin Thurow, "Recognition and Position Estimation for Multiple Labware Transportation Using Kinect V2 and Mobile Robots", Advances in Science, Technology and Engineering Systems Journal, vol. 2, no. 3, pp. 1218–1226, 2017. doi: 10.25046/aj0203154
- Jawad Ahmad, Ammar Mohsin Butt, Muhammad Tanveer Riaz, Shoaib Bhutta, Muhammad Zeeshan Khan, Inam-Ul-Haq, "Multiclass Myoelectric Identification of Five Fingers Motion using Artificial Neural Network and Support Vector Machine", Advances in Science, Technology and Engineering Systems Journal, vol. 2, no. 3, pp. 1026–1033, 2017. doi: 10.25046/aj0203130
- Mohammed Faeik Ruzaij Al-Okby, Sebastian Neubert, Norbert Stoll, Kerstin Thurow, "Low-cost Hybrid Wheelchair Controller for Quadriplegias and Paralysis Patients", Advances in Science, Technology and Engineering Systems Journal, vol. 2, no. 3, pp. 687–694, 2017. doi: 10.25046/aj020388
- Estefanía D. Avalos-Rivera, Alberto de J. Pastrana-Palma, "Classifying region of interests from mammograms with breast cancer into BIRADS using Artificial Neural Networks", Advances in Science, Technology and Engineering Systems Journal, vol. 2, no. 3, pp. 233–240, 2017. doi: 10.25046/aj020332
- Adolfo J. Cangas, Noelia Navarro, Juan J. Ojeda, Diego Cangas, Jose A. Piedra, José Gallego, "Assessment of the usefulness and appeal of stigma-stop by psychology students: a serious game designed to reduce the stigma of mental illness", Advances in Science, Technology and Engineering Systems Journal, vol. 2, no. 3, pp. 186–190, 2017. doi: 10.25046/aj020324
- Nabil Nassif, Jordan Gooden, "Development and Validation of a Heat Pump System Model Using Artificial Neural Network", Advances in Science, Technology and Engineering Systems Journal, vol. 2, no. 3, pp. 182–185, 2017. doi: 10.25046/aj020323
- R. Manju Parkavi, M. Shanthi, M.C. Bhuvaneshwari, "Recent Trends in ELM and MLELM: A review", Advances in Science, Technology and Engineering Systems Journal, vol. 2, no. 1, pp. 69–75, 2017. doi: 10.25046/aj020108
- Owais Qayum, Tahreem Sohail, "FenceBook a Geofencing based Advertisements Application Using Android", Advances in Science, Technology and Engineering Systems Journal, vol. 1, no. 5, pp. 27–33, 2016. doi: 10.25046/aj010506
- Ahmet Kayabasi, Ali Akdagli, "An Application of ANN Model with Bayesian Regularization Learning Algorithm for Computing the Operating Frequency of C-Shaped Patch Antennas", Advances in Science, Technology and Engineering Systems Journal, vol. 1, no. 5, pp. 1–5, 2016. doi: 10.25046/aj010501