Traditional and Deep Learning Approaches for Sentiment Analysis: A Survey

Volume 6, Issue 5, Page No 01-07, 2021

Author's Name: Fatima-Ezzahra Lagrari a), Youssfi Elkettani

Department of Mathematics, MISC Laboratory, Ibn Tofail University, Kenitra, 14000, Morocco

a) Author to whom correspondence should be addressed. E-mail: Fatima.ezzahra.lagrari@gmail.com

Adv. Sci. Technol. Eng. Syst. J. 6(5), 01-07 (2021); DOI: 10.25046/aj060501

Keywords: Sentiment Analysis, Opinion Mining, Deep learning, Neural network, Natural language processing

Presently, individuals generate tremendous volumes of information on the internet. As a result, sentiment analysis has become a critical tool for automating a deep understanding of user-generated content. Of late, deep learning algorithms have shown great promise for a variety of sentiment analysis tasks. The purpose of sentiment analysis is to categorize texts as positive, negative, or neutral based on contextual data. Numerous studies have concentrated on sentiment analysis and on the ability to examine thoughts, views, and reactions. In this paper, we review classical and deep learning approaches that have been applied to various sentiment analysis tasks and their evolution over recent years, and we provide a performance analysis of different sentiment analysis models on particular datasets. Finally, we highlight current challenges and suggest solutions that can be considered in future work to achieve better performance.

Received: 03 May 2021, Accepted: 13 July 2021, Published Online: 10 September 2021

1. Introduction

Nowadays, the number of Web 2.0 resources where each of us is comfortable sharing views and suggestions, such as e-commerce websites and social media, is growing exponentially. As a result of this growth, massive volumes of data are generated. Hence, sentiment analysis was developed to extract relevant information from these large volumes of data automatically. Sentiment analysis is a subfield of natural language processing (NLP) that has received considerable attention from researchers, industry, and government organizations worldwide. Sentiment analysis, also referred to as opinion mining, is a procedure that assists in determining the emotional tone of a collection of words; this analysis enables us to determine the impression, viewpoint, or conveyed emotion. According to Liu's definition, an opinion consists of a target (an object or a feature of an entity) and a sentiment (positive, negative, or neutral) [1].

Additionally, it may include the originator of the opinion and the day and time at which the opinion was given [2]. Sentiments are categorized into numerous groups based on the classification purpose. The authors in [3] and [4] divided tweets into favorable, unfavorable, and neutral tweets. Sites like Twitter are media platforms where a vast number of tweets are posted, many of which include helpful information. As stated in [5], active Arabic users sent over 20 million tweets each day in March 2014. Each minute, millions of tweets are posted, many of them in their original languages. Health-related issues are frequently trending on Twitter.

Numerous approaches to sentiment analysis have been introduced. Traditional techniques are categorized into two types: lexicon-based and machine learning-based. In lexicon-based methods, the orientation of a document is determined based on the sentiment polarity of the words and phrases it contains [6]; in machine learning methods, models are constructed from labeled examples of documents or sentences. For instance, the authors in [7] categorize text documents containing film reviews using machine learning techniques to evaluate the sentiment of the statements.

While traditional methods have achieved a high degree of precision, the feature engineering they require is time-consuming and complex. Today's massive amount of user-generated data necessitates thorough knowledge and reliable ways of managing it. Deep learning methods facilitate the development of computational models thanks to their ability to learn from datasets without requiring a manual attribute selection process. Recent research indicates that deep learning techniques outperform standard machine learning techniques in sentiment analysis [8].

2. Levels of sentiment analysis

Sentiment analysis can be conducted at three granularity levels: the document level, sentence level, and aspect level [9]. Its application is a highly demanding area of investigation that entails numerous complex tasks. Subjectivity identification, sentiment analysis, lexicon construction, component extraction, aspect sentiment analysis, and opinion spam detection are the most investigated tasks [10]. Sentiment analysis techniques are confronted with numerous issues relating to traditional sentiment analysis and autonomous natural language processing, as well as new and challenging hurdles in this novel communication situation. The difficulty is mainly due to the properties of these signals, which are typically brief but dense with semantic information [11], unstructured, written in various languages, and cluttered with noise. Among the features discussed previously, natural language text analysis is the critical component on which the sentiment analysis community bases its work.

Individuals and organizations/governments alike are incredibly interested in sentiment analysis. Before purchasing a product, we examine the ratings and reviews on the internet, while businesses utilize these reviews to better understand their consumers' preferences and improve their products. Numerous attempts have been made to incorporate sentiment analysis into customer feedback. Governments use the same technologies to gauge public sentiment, for example, in response to new policies or during polls.

3. Sentiment analysis approaches

As mentioned above, traditional approaches were the first to emerge to deal with sentiment analysis. Globally, traditional approaches fall into two categories: lexicon-based and machine learning approaches.

3.1. Lexicon based approaches

Lexicon-based techniques classify a given text into its appropriate class by utilizing precompiled sentiment lexicons containing various words and their orientation (positive, negative, etc.) [12]. The terms contained in a particular document or sentence can be retrieved using the bag-of-words technique and classified using a lexicon. A combining method, such as averaging the scores of each class, can then consolidate the data and form a judgment about the overall sentiment of the phrase, as illustrated below.
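To make the idea concrete, here is a minimal Python sketch of lexicon-based scoring under a bag-of-words assumption; the tiny lexicon and the averaging rule are illustrative placeholders, not those of any specific resource surveyed here.

```python
# Minimal sketch of lexicon-based scoring with a bag-of-words assumption.
# The toy lexicon below is illustrative only; real systems use resources
# such as SentiWordNet or MPQA.
LEXICON = {"good": 1.0, "great": 1.0, "excellent": 1.0,
           "bad": -1.0, "awful": -1.0, "boring": -1.0}

def lexicon_sentiment(text: str) -> str:
    tokens = text.lower().split()                          # bag-of-words tokenization
    scores = [LEXICON[t] for t in tokens if t in LEXICON]
    avg = sum(scores) / len(scores) if scores else 0.0     # combine by averaging
    if avg > 0:
        return "positive"
    if avg < 0:
        return "negative"
    return "neutral"

print(lexicon_sentiment("the movie was great but the ending was boring"))  # neutral
```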

The majority of approaches for producing sentiment lexicons are either dictionary-based or corpus-based. The first category relies on a lexicon in which words are labeled with their prior polarity (seed words). WordNet is an example of a lexicon in this area, as is WordNet-Affect [13], which associates affective synsets (collections of synonyms) with emotional concepts. SentiWordNet [14] annotates each WordNet synset with three numerical sentiment scores (positivity, negativity, and neutrality), while MPQA [14] offers a list of words, together with their POS tags, labeled with polarity (positive, negative, and neutral) and intensity (strong, weak). The sentiment of a new document or phrase can then be examined by comparing the synonyms of its nouns, verbs, adjectives, and adverbs to the seed words previously labeled in the vocabulary. The second category mines a domain corpus for sentiment terms and their polarity, for instance by considering the co-occurrence likelihood of positive and negative phrases from search engines [15]. Corpus-based approaches address the issue of domain dependence and provide a more accurate sentiment analysis [16]. The authors of [17] began by establishing a similarity between each pair of words in the corpus, generating a word graph with nodes representing words and weighted edges representing similarities. Lexicons are then formed automatically by attributing a polarity to the unlabeled words based on the graph. Certain lexicons feature non-words as well, such as emoticons [18]. SentiStrength is a hybrid sentiment analysis tool proposed by [19] for extracting sentiment strength from informal English text; a score between 1 and 5 is assigned for negative or positive sentiment. It is built on a manual sentiment word list that was improved using a training algorithm, followed by a spelling correction algorithm, a booster and negation word list, and popular emoticons.

It is worth noting that developing or even maintaining a sentiment lexicon is a challenging effort in the modern era due to the rise of data that is mainly unstructured and user-generated, such as social media content. Rather than relying solely on the sentiment values included in the lexicon, the context and other characteristics of the words must be considered; this is accomplished through a process known as feature engineering. Bag of words is one of the NLP preprocessing techniques: it separates the individual words of a text [20]. This technique is unable to distinguish between "I like you," where "like" is a positive verb, and "I am like you," where "like" is a neutral preposition. To address this, part-of-speech (POS) tagging enables the assignment of each word in a corpus to the appropriate part-of-speech tag (verb, noun, adjective, or adverb) depending on its context and definition. Adjectives and adverbs are mainly employed to describe entities and often carry the emotional characteristics.
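As a hedged illustration of how POS tagging disambiguates the two uses of "like", the sketch below uses NLTK's off-the-shelf tokenizer and tagger (assumed installed, with their standard data downloaded); the exact tags depend on the tagger model used.

```python
import nltk
# One-time resource downloads may be needed, e.g.:
# nltk.download("punkt"); nltk.download("averaged_perceptron_tagger")

for sentence in ["I like you", "I am like you"]:
    tokens = nltk.word_tokenize(sentence)
    print(nltk.pos_tag(tokens))
# A typical tagger marks "like" as a verb (VBP) in the first sentence and as a
# preposition (IN) in the second, so feature extraction can treat them differently.
```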

Apart from using individual words (unigrams) as features, classic text classification algorithms weight the words within a sentence using word positions and frequency. For instance, a movie review may begin with an overall sentiment statement, move to a discussion, and close with a summary of the author's opinions [1], indicating that position can be used to adjust sentiment weights. Term frequency can be used to highlight terms that may be significant because they appear in many documents. For example, the TF-IDF (Term Frequency - Inverse Document Frequency) approach [21] calculates this by comparing the frequency of occurrence of a given term in a document to the number of documents containing that word. Another strategy is to look for the most uncommon words rather than the most common ones. In [7], the authors optimize their system's performance by using term presence rather than term frequency. Rather than weighing all words equally, some approaches favor terms that reflect or imply thoughts and opinions, such as excellent, great, wrong, incorrect, natural, like, agree, hate, trust, conceive, and so forth [22].
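A small sketch of TF-IDF weighting using scikit-learn (assumed available); the three example documents are placeholders.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["the film was excellent",
        "the film was boring and far too long",
        "an excellent and moving film"]

vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(docs)      # rows: documents, columns: terms
print(vectorizer.get_feature_names_out())
# Terms frequent in one document but rare across the corpus receive higher weights.
print(tfidf.toarray().round(2))
```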

Another factor to consider when interpreting the context of a word is negation: the scope of a negation must be ascertained so that the polarity of the terms it affects can be flipped [23]. Negation phrases include the following: not, no, never, cannot, should not, and would not (see the sketch after this paragraph). Dependency-based features can be derived from dependency trees, which provide a simple tree-based method to express the structural relationships between a sentence's words. These relationships are denoted by triplets that contain the name of the relationship type, the relationship's governor, and the relationship's dependent [24]. While these systems do not require training datasets, they struggle to adapt to new data trends caused by the changing nature of language, the growth of high-dimensional data, the structural and cultural intricacies of brief texts such as tweets, and the usage of emoticons and abbreviations [22]. To address the challenge of rapidly assessing these new data types, machine learning algorithms have demonstrated their superiority over lexicon-based methods.
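The following toy sketch shows one simple way to apply a negation scope, flipping the polarity of lexicon terms within a fixed window after a negation cue; the window size and word lists are illustrative assumptions rather than the method of any cited paper.

```python
# Illustrative negation handling: flip the polarity of lexicon terms that fall
# within a fixed window after a negation cue.
NEGATIONS = {"not", "no", "never", "cannot"}
LEXICON = {"good": 1.0, "bad": -1.0, "like": 1.0}

def negated_score(text: str, window: int = 3) -> float:
    tokens = text.lower().split()
    score, flip_until = 0.0, -1
    for i, tok in enumerate(tokens):
        if tok in NEGATIONS:
            flip_until = i + window              # open a negation scope
        elif tok in LEXICON:
            polarity = LEXICON[tok]
            score += -polarity if i <= flip_until else polarity
    return score

print(negated_score("i do not like this movie"))   # -1.0: "like" flipped by "not"
```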

3.2. Machine learning approaches

Machine learning methods rely on learning algorithms that classify annotated data into its intended category. They can be divided into three broad categories: supervised, semi-supervised, and unsupervised learning methods. When annotated data are unavailable, researchers opt for lexicon-based approaches. The machine learning approach is fully automated, convenient, and capable of handling extensive collections of data. These methods require a training dataset, used to develop a classification model that classifies feature vectors [25], and a test dataset to predict the class labels of unseen feature vectors.

Machine learning is used on numerous sentiment analysis tasks; moreover, the two classical approaches to sentiment analysis can be merged to maximize the benefits of each. In [26], the authors demonstrated promising results when they combined machine learning and lexicon-based practices. Several machine learning approaches are used to categorize data, including Naive Bayes (NB), Maximum Entropy (ME), and Support Vector Machines (SVM). Sentiment classification studies on polarity determination have utilized the SVM algorithm together with a variety of feature selection methods, including word frequency, binary occurrence, and term occurrence [27].
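As a minimal supervised example in the spirit of the SVM-based studies above, the sketch below trains a linear SVM on binary-occurrence features with scikit-learn (assumed available); the four training reviews are toy placeholders.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

train_texts = ["great acting and a wonderful plot",
               "terrible pacing, I hated it",
               "a wonderful, moving film",
               "boring and terrible"]
train_labels = ["positive", "negative", "positive", "negative"]

# binary=True gives binary-occurrence features rather than raw counts
model = make_pipeline(CountVectorizer(binary=True), LinearSVC())
model.fit(train_texts, train_labels)
print(model.predict(["a wonderful plot"]))
```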

In [28], the authors created a method for text categorization that combined a genetic algorithm and a random forest classifier and achieved extremely high accuracy. Crawford et al. investigated opinion spam detection using SVM and Naive Bayes [29]. They created a collection of deceptive opinions that included both genuine and misleading reviews and showed that linear SVM produced accurate results more than 80% of the time. In [30], the authors published a new technique for sentiment analysis based on Bayesian network classifiers. They employed the Bayes factor approach to automatically reduce the number of edges during the training procedure. The proposed technique was evaluated using two Spanish datasets, and the evaluation's outcome demonstrated the superiority of the submitted work over previously published efforts.

A hybrid technique combining machine learning (an SVM classifier) and semantic orientation was proposed in [31]. The authors applied content-neutral, content-specific, and sentiment-based features, and SentiWordNet was used to derive the weights for the sentiment features. These methodologies strive to keep up with the new data trends of a dynamic language, the growth of high-dimensional data, and the structural and cultural intricacies of short texts such as tweets. To address the challenge of efficiently processing this type of data, deep learning algorithms have demonstrated their superiority over more traditional methods.

3.3. Deep learning approaches

Deep learning is a rapidly growing subfield of machine learning based on artificial neural networks (ANNs), which are inspired by the biological brain and capable of self-learning. The network comprises layers of dozens to hundreds of neurons; each layer receives and interprets data from the preceding layer, allowing for progressively deeper processing. For instance, a program will learn to detect letters before targeting words in a text, or identify whether a photo contains a face before determining who it belongs to. Deep learning algorithms have been widely used in sentiment analysis due to their ability to automatically learn input representations from datasets [32].

Recent studies present numerous approaches to sentiment analysis based on deep learning, including supervised and unsupervised algorithms. We will start by reviewing word embedding approaches as the basic data processing layer of deep learning and then examine many different deep learning architectures; we recommend reading [31] for mathematical details.

3.3.1. Word embedding

Word embedding is a collection of machine learning approaches aiming to represent the words of a text using vectors of real numbers, such that similar vectors represent words with similar meanings. These new representations of textual data have increased the efficiency of systems for automatic language processing. The first approaches to word embedding date back to the 1960s and are based on dimensionality reduction techniques. Recent advances in performance have been facilitated by innovative methods based on probabilistic models and neural networks, such as Word2Vec [33]. The following sections discuss various word embeddings that are frequently utilized in sentiment analysis. Words having comparable semantic and contextual properties form a class. The authors in [34] describe a language model that simultaneously learns both distributed word representations and the probability function for word sequences. Numerous studies have been conducted to strengthen this work. For instance, in [35] the authors suggested a word-embedding-based classifier built on convolutional neural networks that can learn the characteristics associated with a particular task without prior knowledge.

The authors in [33] proposed Word2vec, a common pre-trained word embedding technique that uses two-layer neural networks to learn the vector representations of the words in a text, so that words with related meanings are expressed by similar numerical vectors. Word2Vec utilizes two neural architectures: CBOW and Skip-Gram. CBOW takes the context of a word as input and attempts to predict the word in question. Skip-Gram, on the other hand, accepts a word as input and tries to predict its context. In both cases, the network is trained by parsing the text and adjusting the neural weights to reduce the model's prediction error.
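A brief sketch of training Skip-Gram vectors with the gensim library (assumed installed); the three-sentence corpus is a placeholder, and real applications train on large corpora.

```python
from gensim.models import Word2Vec

corpus = [["the", "movie", "was", "great"],
          ["the", "film", "was", "excellent"],
          ["the", "movie", "was", "boring"]]

# sg=1 selects the Skip-Gram architecture; sg=0 would select CBOW
model = Word2Vec(sentences=corpus, vector_size=50, window=2, min_count=1, sg=1)
print(model.wv["movie"][:5])                    # first components of the learned vector
print(model.wv.most_similar("movie", topn=2))   # nearest neighbours in embedding space
```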

GloVe is another popular word embedding; it combines global corpus statistics with local context windows. In [36], the authors presented a fast text word embedding method in which words are encoded via n-grams. Notably, the NLP field reached a milestone with the Bidirectional Encoder Representations from Transformers (BERT) language model [8]. The authors detail a masked language model (MLM) that enables bidirectional training and the acquisition of a strong sense of language context. In [3], the authors obtained impressive findings by improving the BERT algorithm using the lion algorithm (LA), in which the weights of the BERT encoder network and the sequence length of the BERT encoder were optimized.
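For reference, a pre-trained BERT-family sentiment classifier can be applied in a few lines with the Hugging Face transformers library (assumed installed); this generic pipeline is only an illustration and is not the customized BERT/lion approach of [3].

```python
from transformers import pipeline

# Downloads a default fine-tuned sentiment model on first use.
classifier = pipeline("sentiment-analysis")
print(classifier("The plot was predictable, but the acting saved the film."))
# e.g. [{'label': 'POSITIVE', 'score': 0.98}]  (exact score varies by model)
```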

3.3.2. Convolutional neural networks

A convolutional neural network (CNN) is a feedforward neural network introduced in 1989 and is the most widely used architecture in computer vision. It is inspired by the visual cortex of animals: each neuron in the visual cortex has a small receptive field, and these fields overlap to perceive the complete object; the convolutional layers operate as local filters in an analogous way. In essence, a CNN consists of three types of layers: input, feature extraction, and classification [37]. The input layer embeds the raw data, the convolution layers learn and produce feature maps, and the network extracts practical attributes. Finally, these features are fed into a fully connected classification layer [2].
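A minimal Keras sketch of the input / feature-extraction / classification structure just described, applied to text (TensorFlow assumed available); vocabulary size, sequence length, and filter settings are placeholder values.

```python
import tensorflow as tf

VOCAB_SIZE, MAX_LEN = 10_000, 100

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB_SIZE, 128),                     # input embedding layer
    tf.keras.layers.Conv1D(64, kernel_size=5, activation="relu"),   # local n-gram filters
    tf.keras.layers.GlobalMaxPooling1D(),                           # keep strongest feature per filter
    tf.keras.layers.Dense(1, activation="sigmoid"),                 # positive / negative
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.build(input_shape=(None, MAX_LEN))
model.summary()
```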

CNNs have achieved excellent results in different domains, including NLP and therefore sentiment analysis, and offer additional benefits: a CNN uses few parameters and does not need much time to train. They have gained good popularity in sentiment analysis since they can acquire contextual characteristics, but they are constrained when modeling long-term connections. CNNs reach satisfactory accuracy when trained on many samples, a condition that is not always met; recurrent neural networks (RNNs) were developed to manage the challenge of long-term dependencies.

3.3.3. Recurrent neural networks

RNNs were introduced by Elman in 1990 [38]; they are neural networks in which information can propagate in both directions, including from deep layers back to early layers. They are closer to the true functioning of the nervous system, which is not one-directional. These networks have recurrent connections in the sense that they keep information in memory: they can take into account a certain number of past states at a given moment t. For this reason, RNNs are particularly well suited to applications involving context. However, "classical" RNNs (simple or vanilla RNNs) are only able to memorize the so-called near past and begin to "forget" after about fifty iterations. This two-way transfer of information also makes their training much more complicated, and it is only recently that effective architectures such as the LSTM (Long Short-Term Memory) have been developed. The authors in [39] proposed the long short-term memory network, a special type of RNN able to learn long-term dependencies via a forget gate that allows the memory cell to keep information for a long time. The GRU (Gated Recurrent Unit) is an alternative to the LSTM that has only two gates instead of the LSTM's three; it is a simplified version of the LSTM that is more computationally efficient.
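A minimal Keras sketch of an LSTM sentiment classifier (TensorFlow assumed available); replacing the LSTM layer with tf.keras.layers.GRU gives the lighter gated variant discussed above, and the hyperparameters are placeholders.

```python
import tensorflow as tf

VOCAB_SIZE, MAX_LEN = 10_000, 100

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB_SIZE, 128),
    tf.keras.layers.LSTM(64),                        # gated memory keeps long-range context
    tf.keras.layers.Dense(1, activation="sigmoid"),  # positive / negative
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.build(input_shape=(None, MAX_LEN))
model.summary()
```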

A technique called the attention mechanism was introduced in 2015; it simulates the human reading capability of gaze fixation by prioritizing the relevant parts of the data through a weighted representation. Since then, more and more research has applied this technique [40]. RNN approaches have been widely applied in the sentiment analysis field. A neural network was proposed in [41] that is capable of learning a document representation that takes the sentence relationships into consideration: it first learns sentence representations with a CNN, and then a GRU is used to encode the semantics.

3.3.4. Recursive neural networks

RecNNs are a generalization of RNNs: a form of neural network used to train a graph-based classification model from data, capable of handling structured inputs of arbitrary shape [42]. A RecNN produces parent representations bottom-up from the structural parse of a sentence by merging tokens to obtain representations for phrases and, eventually, the entire sentence. The sentence-level representation can then be utilized to perform the final classification of the input text.

3.3.5. Deep Reinforcement Learning

Deep reinforcement learning (DRL) is a relatively new subfield of deep learning. It is a form of artificial intelligence that is not yet widely used but opens up entirely new possibilities for automation. DRL enables the development of software capable of matching or exceeding human intellect in various domains; the most well-known system based on DRL is DeepMind (Google's AI platform) [43]. DRL employs reinforcement learning algorithms, the most widely used being temporal difference learning (TD learning) and Q-learning (QL). These learning models mimic the human (and animal) process of acquiring information through trial and error, learning from multiple exposures. To ensure that the machine acts in the intended way, these algorithms use a reward system to evaluate the machine's selections; the method is comparable to animal training. DRL, which falls under the domain of automatic (or unsupervised) machine learning, integrates learning algorithms with neural networks to estimate the value of a "complex" strategy that incorporates a substantial number of decision criteria. The primary problem is to design a reward system that promotes the desired behaviors while avoiding adverse effects. The objective is to maximize rewards inside a simulation environment in which the computer acts and then receives feedback. The software agent is not informed in advance about the most appropriate action and must select its strategy on its own through a process of exploration.
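To illustrate the trial-and-error reward mechanism, the following is a toy tabular Q-learning update in Python; the states, actions, and rewards are placeholders and not tied to any sentiment-analysis task.

```python
import random

ACTIONS = [0, 1]
Q = {}                                  # Q[(state, action)] -> estimated return
alpha, gamma, epsilon = 0.1, 0.9, 0.2   # learning rate, discount, exploration rate

def choose_action(state):
    if random.random() < epsilon:       # explore occasionally
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))  # otherwise exploit

def update(state, action, reward, next_state):
    best_next = max(Q.get((next_state, a), 0.0) for a in ACTIONS)
    old = Q.get((state, action), 0.0)
    # Temporal-difference update toward reward + discounted future value.
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

# One simulated interaction: the agent acts, receives a reward, and learns.
update(state=0, action=choose_action(0), reward=1.0, next_state=1)
print(Q)
```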

DRL models are built on the Markov decision process (MDP), enabling an intelligent agent to be trained through contact with its environment [2]. They have demonstrated efficacy in learning structure, which is beneficial in NLP, particularly sentiment analysis. In [44], the authors presented a reinforcement learning (RL) method for automatically learning sentence representations by discovering optimal structures. The reward in this research is the probability of correctly classifying the input sentence based on its structured representation. The results demonstrate the algorithm's efficacy; however, developing a model that learns a reward function remains challenging.

3.3.6. Generative adversarial network

GANs are an unsupervised pre-trained network architecture first proposed by Goodfellow and colleagues in 2014 [45]. GANs enable pre-training the network layers with large datasets. Two models are trained against each other: a generative model G that learns the distribution of the data, and a discriminative model D that indicates the likelihood that a given piece of data came from the real data rather than from the generative model. A GAN aims to analyze real datasets and produce its own synthetic samples, which must look so natural that they cannot be identified as machine-generated without human intervention. The generative and discriminative models are perpetually in conflict with one another. When the discriminative network detects a fake, it returns this feedback to the other network; in this situation, the generative network has not yet reached its optimal state and must keep learning. Simultaneously, the discriminative network grows in sophistication. The generative network aims at generating data that appear so real that they are accepted as genuine by the discriminator, while the discriminative model aims to thoroughly evaluate and comprehend real-world examples so as not to accept fakes as authentic. In sentiment analysis, the use of GAN models is still at an early stage.
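This adversarial game is captured by the minimax objective from the original GAN paper [45], shown here for reference:

```latex
\min_G \max_D \; V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\!\left[\log D(x)\right]
  + \mathbb{E}_{z \sim p_z(z)}\!\left[\log\!\left(1 - D(G(z))\right)\right]
```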

3.3.7. Hybrid deep neural networks

Deep learning models have been extensively employed in natural language processing and have shown impressive results, particularly in sentiment analysis. Numerous researchers have advocated merging models to maximize the performance of each kind, and multiple hybrid models have been proposed for sentiment analysis. For example, [46] implemented a hybrid deep learning architecture for sentiment prediction based on a convolutional neural network for word embedding and a support vector machine for sentiment classification; the evaluation reveals that the proposed mechanism is exceptionally efficient. The authors in [47] created a model that extracts features from several deep learning word embedding methods, combines them, and classifies texts according to their sentiments.
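A minimal Keras sketch of a hybrid CNN-LSTM text classifier in the spirit of the models above (TensorFlow assumed available; hyperparameters are placeholders): the convolution extracts local n-gram features, and the LSTM models their longer-range ordering.

```python
import tensorflow as tf

VOCAB_SIZE, MAX_LEN = 10_000, 100

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB_SIZE, 128),
    tf.keras.layers.Conv1D(64, kernel_size=5, activation="relu"),  # local n-gram features
    tf.keras.layers.MaxPooling1D(pool_size=2),                     # downsample the feature map
    tf.keras.layers.LSTM(64),                                      # model long-range dependencies
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.build(input_shape=(None, MAX_LEN))
model.summary()
```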

4. Works on different sentiment analysis datasets

The table below summarizes and compares the performance of various Deep Learning techniques for sentiment analysis tasks.

Table 1: Performance and applications of DL approaches on various SA tasks.

References | Approach | Features | Dataset | Accuracy
[3] | Customized BERT-based lion algorithm | – | Sentiment 120 | 92.2%
[8] | Bidirectional Encoder Representations from Transformers | – | CoNLL-2003 | 86.7%
[19] | SentiStrength + spelling correction, booster and negation word list + common emoticons | – | MySpace comments | 66.7%
[25] | Dictionary-based lexicon (SentiWordNet) + SVM | Lexical and syntactic features + unigrams and bigrams + POS | Online product reviews | 84.15% (kitchen appliances), 78.85% (books)
[26] | SentiWordNet lexicon + semantic rules + fuzzy set | Word features | Twitter dataset, movie reviews | 88.02% (Twitter), 75.85% (movie)
[27] | SVM | Trigram and binary occurrence | Movie reviews | 84.90%
[28] | Genetic algorithm + random forest | – | Business documents | 98.7%
[30] | SVM | BOW | Spanish Twitter dataset | 85.8%
[33] | Recurrent neural network + skip-gram model | 6-word window around the target word | Microsoft sentence completion challenge | 58.9%
[34] | Neural network | Word features | AP News corpus | –
[35] | Convolutional neural network | – | PropBank dataset | 85.7%
[36] | Global log-bilinear regression model (conditional random field) | Discrete features from Stanford NER model + five-word context | Word analogy task | 75%
[41] | Recurrent neural network (CNN/LSTM) | – | IMDB | 45.3%
[44] | Reinforcement learning (HS-LSTM) | – | Subjectivity dataset | 93.7%
[46] | Category sentence GAN with reinforcement learning + generative adversarial networks + recurrent neural networks | – | Subset of Amazon review dataset and Emotion dataset | 41.52% (Emotion), 86.4% (Amazon)

5. Current issues and possible directions

Numerous approaches to sentiment analysis have been presented, but the majority of the proposed techniques are static. As a result, one of the significant obstacles in applying deep learning algorithms for sentiment analysis is dynamic sentiment analysis and tracking. For instance, on the Twitter social network, the dataset, vocabulary, and user count might change at any time, making dynamic analysis a problematic process. On the other hand, dealing with language structure, which includes slang, is a significant difficulty. Additionally, heterogeneous information (i.e., short or long phrases) necessitates in-depth examination and specialized processing [48]. We recommend that researchers employ cutting-edge deep learning techniques to address these issues, such as the BERT algorithm for context representation learning [8], deep reinforcement learning (DRL), and generative adversarial networks (GANs), which are incredibly efficient at resolving complex problems.

6. Conclusion

Sentiment analysis is critical for decision-making in a variety of domains, including economics, business development, and social phenomenon research. Owing to its wide range of potential applications, there has been a meteoric rise in academic research and industrial applications. Recently, a significant number of academics have become interested in sentiment analysis utilizing deep learning algorithms; consequently, a large variety of efficient approaches for various tasks has been developed. We began this survey by discussing the history of sentiment analysis and its different levels. Both traditional and machine learning methodologies have been discussed, with an emphasis on deep learning approaches and their applications. We offered an application and performance analysis on a variety of real-world sentiment analysis datasets. Lastly, we outlined existing issues that require attention and made some recommendations for future work.

Conflict of Interest

The authors declare no conflict of interest.

  1. B. Liu, “Sentiment analysis and opinion mining,” Synthesis Lectures on Human Language Technologies, 5(1), 1–167, 2012.
  2. O. Habimana, Y. Li, R. Li, X. Gu, G. Yu, “Sentiment analysis using deep learning approaches: an overview,” Science China Information Sciences, 63(1), 1–36, 2020.
  3. F. Lagrari, Y. Elkettani, “Customized BERT with Convolution Model: A New Heuristic Enabled Encoder for Twitter Sentiment Analysis,” 2020.
  4. S.C. Rachiraju, M. Revanth, “Feature Extraction and Classification of Movie Reviews using Advanced Machine Learning Models,” in 2020 4th International Conference on Intelligent Computing and Control Systems (ICICCS), 814–817, 2020.
  5. A.M. Alayba, V. Palade, M. England, R. Iqbal, “Arabic language sentiment analysis on health services,” in 2017 1st international workshop on arabic script analysis and recognition (asar), 114–118, 2017.
  6. S. Trinh, L. Nguyen, M. Vo, P. Do, Lexicon-based sentiment analysis of Facebook comments in Vietnamese language, Springer: 263–276, 2016.
  7. R. Alhajj, J. Rokne, Encyclopedia of social network analysis and mining, Springer, 2014.
  8. J. Devlin, M.-W. Chang, K. Lee, K. Toutanova, “Bert: Pre-training of deep bidirectional transformers for language understanding,” ArXiv Preprint ArXiv:1810.04805, 2018
  9. Y. Ma, H. Peng, E. Cambria, “Targeted aspect-based sentiment analysis via embedding commonsense knowledge into an attentive LSTM,” in Thirty-second AAAI conference on artificial intelligence, 2018.
  10. M.H. Arif, J. Li, M. Iqbal, K. Liu, “Sentiment analysis and spam detection in short informal text using learning classifier systems,” Soft Computing, 22(21), 7281–7291, 2018
  11. M. Al-Smadi, M. Al-Ayyoub, Y. Jararweh, O. Qawasmeh, “Enhancing aspect-based sentiment analysis of Arabic hotels’ reviews using morphological, syntactic and semantic features,” Information Processing & Management, 56(2), 308–319, 2019.
  12. I.P. Windasari, D. Eridani, “Sentiment analysis on travel destination in Indonesia,” in 2017 4th International Conference on Information Technology, Computer, and Electrical Engineering (ICITACEE), 276–279, 2017.
  13. C. Fellbaum, “WordNet: An electronic lexical resource,” The Oxford Handbook of Cognitive Science, 301–314, 2017.
  14. E.M. Alshari, A. Azman, S. Doraisamy, N. Mustapha, M. Alkeshr, “Effective method for sentiment lexical dictionary enrichment based on Word2Vec for sentiment analysis,” in 2018 Fourth International Conference on Information Retrieval and Knowledge Management (CAMP), 1–5, 2018.
  15. K. Ravi, V. Ravi, “A survey on opinion mining and sentiment analysis: tasks, approaches and applications,” Knowledge-Based Systems, 89, 14–46, 2015.
  16. M. del P. Salas-Zárate, J. Medina-Moreira, K. Lagos-Ortiz, H. Luna-Aveiga, M.A. Rodriguez-Garcia, R. Valencia-Garcia, “Sentiment analysis on tweets about diabetes: an aspect-level approach,” Computational and Mathematical Methods in Medicine, 2017.
  17. A. Bittar, S. Velupillai, A. Roberts, R. Dutta, “Using General-purpose Sentiment Lexicons for Suicide Risk Assessment in Electronic Health Records: Corpus-Based Analysis,” JMIR Medical Informatics, 9(4), e22397, 2021.
  18. J. Zhang, C. Zhao, F. Xu, P. Zhang, “SVM-Based Sentiment Analysis Algorithm of Chinese Microblog Under Complex Sentence Pattern,” in International Conference in Communications, Signal Processing, and Systems, 801–809, 2016.
  19. M.R. Islam, M.F. Zibran, “SentiStrength-SE: Exploiting domain specificity for improved sentiment analysis in software engineering text,” Journal of Systems and Software, 145, 125–146, 2018.
  20. M.K. Sohrabi, F. Hemmatian, “An efficient preprocessing method for supervised sentiment analysis by converting sentences to numerical vectors: a twitter case study,” Multimedia Tools and Applications, 78(17), 24863–24882, 2019.
  21. S. Jabri, A. Dahbi, T. Gadi, A. Bassir, “Ranking of text documents using TF-IDF weighting and association rules mining,” in 2018 4th international conference on optimization and applications (ICOA), 1–6, 2018.
  22. A. Krouska, C. Troussas, M. Virvou, Deep learning for twitter sentiment analysis: the effect of pre-trained word embedding, Springer: 111–124, 2020.
  23. M. Ghio, K. Haegert, M.M. Vaghi, M. Tettamanti, “Sentential negation of abstract and concrete conceptual categories: a brain decoding multivariate pattern analysis study,” Philosophical Transactions of the Royal Society B: Biological Sciences, 373(1752), 20170124, 2018.
  24. M. Xiao, Y. Guo, “Annotation projection-based representation learning for cross-lingual dependency parsing,” in Proceedings of the Nineteenth Conference on Computational Natural Language Learning, 73–82, 2015.
  25. B.M. Hopkinson, A.C. King, D.P. Owen, M. Johnson-Roberson, M.H. Long, S.M. Bhandarkar, “Automated classification of three-dimensional reconstructions of coral reefs using convolutional neural networks,” PloS One, 15(3), e0230671, 2020.
  26. O. Appel, F. Chiclana, J. Carter, H. Fujita, “Cross-ratio uninorms as an effective aggregation mechanism in sentiment analysis,” Knowledge-Based Systems, 124, 16–22, 2017.
  27. A. Alsaeedi, M.Z. Khan, “A study on sentiment analysis techniques of Twitter data,” International Journal of Advanced Computer Science and Applications, 10(2), 361–374, 2019.
  28. F.-E. Lagrari, H. Ziyati, Y. El Kettani, “An efficient model of text categorization based on feature selection and random forests: case for business documents,” in International Conference on Advanced Intelligent Systems for Sustainable Development, 465–476, 2018.
  29. M. Crawford, T.M. Khoshgoftaar, J.D. Prusa, A.N. Richter, H. Al Najada, Survey of review spam detection using machine learning techniques. J Big Data 2 (1): 23, 2015.
  30. G.A. Ruz, P.A. Henríquez, A. Mascareño, “Sentiment analysis of Twitter data during critical events through Bayesian networks classifiers,” Future Generation Computer Systems, 106, 92–104, 2020.
  31. A. Shoukry, A. Rafea, “A hybrid approach for sentiment classification of Egyptian dialect tweets,” in 2015 First International Conference on Arabic Computational Linguistics (ACLing), 2015. https://doi.org/10.1109/acling.2015.18
  32. L. Zhang, S. Wang, B. Liu, “Deep learning for sentiment analysis: A survey,” Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 8(4), e1253, 2018.
  33. N. Alami, M. Meknassi, N. En-nahnahi, “Enhancing unsupervised neural networks based text summarization with word embedding and ensemble learning,” Expert Systems with Applications, 123, 195–211, 2019.
  34. F. Hill, K. Cho, A. Korhonen, “Learning distributed representations of sentences from unlabelled data,” ArXiv Preprint ArXiv:1602.03483, 2016.
  35. E. Mansouri-Benssassi, J. Ye, “Synch-graph: Multisensory emotion recognition through neural synchrony via graph convolutional networks,” in Proceedings of the AAAI Conference on Artificial Intelligence, 1351–1358, 2020.
  36. Z. Rahimi, M.M. Homayounpour, “TensSent: a tensor based sentimental word embedding method,” Applied Intelligence, 1–16, 2021.
  37. M.F. Burg, S.A. Cadena, G.H. Denfield, E.Y. Walker, A.S. Tolias, M. Bethge, A.S. Ecker, “Learning divisive normalization in primary visual cortex,” PLOS Computational Biology, 17(6), e1009028, 2021.
  38. S. Seker, E. Ayaz, E. Türkcan, “Elman’s recurrent neural network applications to condition monitoring in nuclear power plant and rotating machinery,” Engineering Applications of Artificial Intelligence, 16(7–8), 647–656, 2003.
  39. L. Mou, Z. Jin, General Framework of Tree-Based Convolutional Neural Networks (TBCNNs), Springer: 37–40, 2018.
  40. D. Cazzato, M. Leo, C. Distante, H. Voos, “When i look into your eyes: A survey on computer vision contributions for human gaze estimation and tracking,” Sensors, 20(13), 3739, 2020.
  41. N. Capuano, L. Greco, P. Ritrovato, M. Vento, “Sentiment analysis for customer relationship management: An incremental learning approach,” Applied Intelligence, 51(6), 3339–3352, 2021.
  42. P. Tino, L. Benuskova, A. Sperduti, Artificial neural network models, Springer: 455–471, 2015.
  43. L. Lei, Y. Tan, K. Zheng, S. Liu, K. Zhang, X. Shen, “Deep reinforcement learning for autonomous internet of things: Model, applications and challenges,” IEEE Communications Surveys & Tutorials, 22(3), 1722–1760, 2020.
  44. Z. Zhang, P. Cui, W. Zhu, “Deep learning on graphs: A survey,” IEEE Transactions on Knowledge and Data Engineering, 2020.
  45. I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, Y. Bengio, “Generative adversarial nets,” Advances in Neural Information Processing Systems, 27, 2014.
  46. K. Pasupa, T.S.N. Ayutthaya, “Hybrid deep learning models for thai sentiment analysis,” Cognitive Computation, 1–27, 2021.
  47. S.U. Hegde, A.S. Zaiba, Y. Nagaraju, others, “Hybrid CNN-LSTM Model with GloVe Word Vector for Sentiment Analysis on Football Specific Tweets,” in 2021 International Conference on Advances in Electrical, Computing, Communication and Sustainable Technologies (ICAECT), 1–8, 2021.
  48. J. Tang, M. Qu, Q. Mei, “Pte: Predictive text embedding through large-scale heterogeneous text networks,” in Proceedings of the 21th ACM SIGKDD international conference on knowledge discovery and data mining, 1165–1174, 2015.
