Semantic Textual Similarity

Semantic textual similarity (STS) is the degree to which two pieces of text are similar in meaning, i.e., how closely related the information they convey is, regardless of variations in wording or structure. STS has been explored from both linguistic and computational perspectives. It plays a significant role in various NLP applications, and in recent years Transformer-based neural language models have emerged as the state-of-the-art solution for many of them.

Keywords: natural language processing; text similarity; Transformers; neural language models

1. Introduction

Semantic textual similarity (STS) is a challenging task in Natural Language Processing (NLP) and text mining, and consists of assessing the degree of similarity between two pieces of text based on their meaning. STS is closely related to the field of distributional semantics, which focuses on developing theories and methods for representing and acquiring the semantic properties of linguistic items based on their distributional properties in text corpora [1].
From the linguistic perspective, distributional semantics rests on a simple assumption called the distributional hypothesis: the more semantically similar two words are, the more they tend to appear in the same, or similar, linguistic contexts, because “difference of meaning correlates with difference of distribution” [1]. From a computational perspective, we can leverage this hypothesis by representing words as vectors that encode the properties of their contexts in a vector space. Their (semantic) similarity is then given by the distance between their respective vector representations. Such vector representations of words are usually referred to as word embeddings [1].
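As a minimal illustration of this computational view, word meanings can be compared by measuring the cosine similarity of their embedding vectors. The sketch below uses made-up three-dimensional vectors purely for demonstration; real embeddings typically have hundreds of dimensions.

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine of the angle between two embedding vectors."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Toy "embeddings" (illustrative values only)
cat = np.array([0.8, 0.1, 0.3])
dog = np.array([0.7, 0.2, 0.4])
car = np.array([0.1, 0.9, 0.1])

print(cosine_similarity(cat, dog))  # high: similar contexts, similar meaning
print(cosine_similarity(cat, car))  # lower: different contexts, different meaning
```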
The NLP field has made extensive use of the distributional hypothesis and the distributional properties of words to encode their meaning. While the earliest attempts exploited co-occurrence matrices to represent words based on their contexts, more modern approaches leverage machine learning and deep learning in the form of neural language models (NLMs). These models determine a probability distribution over a sequence of tokens, where tokens are discrete units that usually approximate the concept of a word. Among such models, the earliest ones typically employed unsupervised learning to obtain fixed-length representations of words [2] and sentences [3]. For words, a notable example is Word2Vec, which comes in two variants: CBOW (Continuous Bag of Words) and SGNS (Skip-Gram with Negative Sampling) [2]. For sentences, Doc2Vec is an extension of Word2Vec that incorporates document-level representations. Doc2Vec represents documents as dense vectors, enabling various downstream tasks such as document classification, document similarity, and information retrieval [3].
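The following is a brief sketch of how such fixed-length representations are commonly obtained in practice, assuming the gensim library (version 4.x); the toy corpus, hyperparameters, and similarity queries are illustrative only and not part of the original works.

```python
from gensim.models import Word2Vec
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# Toy tokenized corpus (a real corpus would contain millions of sentences)
corpus = [
    ["the", "cat", "sat", "on", "the", "mat"],
    ["the", "dog", "sat", "on", "the", "rug"],
    ["stocks", "fell", "on", "the", "market"],
]

# Word2Vec with Skip-Gram and negative sampling (sg=1, negative=5); sg=0 selects CBOW
w2v = Word2Vec(sentences=corpus, vector_size=50, window=2, min_count=1,
               sg=1, negative=5, epochs=50)
print(w2v.wv.similarity("cat", "dog"))  # cosine similarity between the two word vectors

# Doc2Vec: each document gets its own dense vector alongside the word vectors
tagged = [TaggedDocument(words=doc, tags=[i]) for i, doc in enumerate(corpus)]
d2v = Doc2Vec(tagged, vector_size=50, min_count=1, epochs=50)
print(d2v.infer_vector(["the", "cat", "sat"]))  # embedding for an unseen document
```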

2. Semantic Textual Similarity

Over the years, several architectures have been proposed, such as ELMo (Embeddings from Language Models) and LSTM (Long Short-Term Memory)-based language models. In recent years, Transformer-based NLMs have established themselves as the de facto standard for many NLP tasks. Transformers, such as BERT (Bidirectional Encoder Representations from Transformers), are a type of neural network for sequence transduction that relies on a self-attention mechanism. They are able to deal with complex tasks involving human language, achieving state-of-the-art results [4]. A very attractive aspect of BERT-like architectures is that their internal representations of words and sequences are context-aware: the attention mechanism allows the model to account for relationships between words within a sentence, or across larger portions of a text, establishing deep connections. Furthermore, researchers have proposed other attention-based architectures, such as ALBERT [5] and DistilBERT [6], which have gained significant attention and continue to be exploited by the NLP community. Nevertheless, BERT and similar models face some limitations, especially in tasks related to semantic textual similarity at the level of sentence embeddings.
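A minimal sketch of extracting context-aware token representations from a pre-trained BERT model is shown below, assuming the Hugging Face transformers library and PyTorch; the checkpoint name is an illustrative choice, not prescribed by the original works.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

sentences = ["The bank approved the loan.", "They sat on the river bank."]
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# outputs.last_hidden_state has shape (batch, tokens, hidden): contextual token
# embeddings. The vector for "bank" differs between the two sentences because
# self-attention conditions each token representation on its surrounding context.
token_embeddings = outputs.last_hidden_state
print(token_embeddings.shape)
```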
These limitations become evident in tasks regarding semantic textual similarity, particularly when dealing with sequence-level embeddings [7]. It is well known that BERT’s sequence-level embeddings are not directly trained to encode the semantics of the sequences and are thus not suited to be compared with standard metrics such as cosine similarity [4]. To overcome these limitations, Sentence-BERT [7] was proposed. Sentence-BERT is a modification of the pre-trained BERT network that uses Siamese and triplet network structures. It produces sentence embeddings that are semantically meaningful and can be compared using a similarity measure (for example, cosine similarity or Manhattan/Euclidean distance). The process of finding the most similar pair in a large collection of sentences is reduced from 65 hours with BERT/RoBERTa to about 5 seconds with Sentence-BERT, while maintaining the accuracy achieved by BERT [7]. Research has also been conducted to design and evaluate approaches that employ Siamese networks, similarity concepts, one-shot learning, and context/memory awareness on textual data [8]. Furthermore, recent efforts have focused on unsupervised contrastive learning methods that transform pre-trained language models into universal text encoders, as seen with Mirror-BERT [9] and subsequent models [10].
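A short sketch of sentence-level similarity with a Sentence-BERT-style model follows, assuming the sentence-transformers library; the model name "all-MiniLM-L6-v2" is just one publicly available checkpoint chosen for illustration.

```python
from sentence_transformers import SentenceTransformer, util

# Illustrative pre-trained Sentence-BERT-style checkpoint
model = SentenceTransformer("all-MiniLM-L6-v2")

sentences = [
    "A man is playing a guitar on stage.",
    "Someone performs music with a guitar.",
    "The committee approved the new budget.",
]
embeddings = model.encode(sentences, convert_to_tensor=True)

# Cosine similarity between every pair of sentence embeddings
scores = util.cos_sim(embeddings, embeddings)
print(scores[0][1])  # high: the first two sentences are paraphrases
print(scores[0][2])  # low: unrelated meaning
```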
In the last few years, we have also seen the rise of large language models (LLMs), whose parameter counts reach the tens or even hundreds of billions. These models differ from their predecessors in terms of scale and, in some instances, incorporate reinforcement learning techniques during training. Prominent examples include OpenAI’s GPT [11], Meta’s LLaMA [12], and BigScience’s BLOOM [13]. LLMs typically outperform smaller counterparts and are recognized for their zero-shot learning capabilities and emergent abilities [14]. However, they are affected by two significant limitations. First, most of these models are controlled by private companies and are only accessible via APIs. Second, their computational costs often make it challenging to run them on standard commercial hardware without resorting to parameter selection and/or distillation techniques.
The problem of semantic textual similarity between pairs of sentences has been discussed in several papers, and some studies have addressed the extraction of semantic differences from texts. In particular, research has been carried out on ideological discourses in newspaper texts. Following this idea, other works investigated the utility of text-mining techniques for supporting the discourse analysis of news reports, using contrast patterns to highlight ideological differences between local and international press coverage [15]. More recently, critical discourse analysis has been applied to investigate ideological differences in reporting the same news across various topics, such as COVID-19 coverage in Iranian and American newspapers [16] and the representation of Syrian refugees in three Turkish newspapers [17].
Sentiment analysis has also been a key area of focus [18]. Finally, several studies have been conducted in completely different domains, such as scholarly documents. A hybrid model that considers both section headers and body text to automatically recognize generic sections in scholarly documents was proposed in [19].

References

  1. Harris, Z.S. Distributional structure. Word 1954, 10, 146–162.
  2. Mikolov, T.; Sutskever, I.; Chen, K.; Corrado, G.S.; Dean, J. Distributed representations of words and phrases and their compositionality. Adv. Neural Inf. Process. Syst. 2013, 26, 3111–3119.
  3. Le, Q.V.; Mikolov, T. Distributed Representations of Sentences and Documents. arXiv 2014, arXiv:1405.4053.
  4. Devlin, J.; Chang, M.W.; Lee, K.; Toutanova, K. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv 2018, arXiv:1810.04805.
  5. Lan, Z.; Chen, M.; Goodman, S.; Gimpel, K.; Sharma, P.; Soricut, R. Albert: A lite bert for self-supervised learning of language representations. arXiv 2019, arXiv:1909.11942.
  6. Sanh, V.; Debut, L.; Chaumond, J.; Wolf, T. DistilBERT, a distilled version of BERT: Smaller, faster, cheaper and lighter. arXiv 2019, arXiv:1910.01108.
  7. Reimers, N.; Gurevych, I. Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing, Hong Kong, China, 3–7 November 2019; Association for Computational Linguistics: Kerrville, TX, USA, 2019.
  8. Holeček, M. Learning from similarity and information extraction from structured documents. Int. J. Doc. Anal. Recognit. (IJDAR) 2021, 24, 149–165.
  9. Liu, F.; Vulić, I.; Korhonen, A.; Collier, N. Fast, effective, and self-supervised: Transforming masked language models into universal lexical and sentence encoders. arXiv 2021, arXiv:2104.08027.
  10. Liu, F.; Jiao, Y.; Massiah, J.; Yilmaz, E.; Havrylov, S. Trans-Encoder: Unsupervised sentence-pair modelling through self-and mutual-distillations. arXiv 2021, arXiv:2109.13059.
  11. OpenAI. GPT-4 Technical Report. arXiv 2023, arXiv:2303.08774.
  12. Touvron, H.; Lavril, T.; Izacard, G.; Martinet, X.; Lachaux, M.A.; Lacroix, T.; Rozière, B.; Goyal, N.; Hambro, E.; Azhar, F.; et al. Llama: Open and efficient foundation language models. arXiv 2023, arXiv:2302.13971.
  13. Scao, T.L.; Fan, A.; Akiki, C.; Pavlick, E.; Ilić, S.; Hesslow, D.; Castagné, R.; Luccioni, A.S.; Yvon, F.; Gallé, M.; et al. Bloom: A 176b-parameter open-access multilingual language model. arXiv 2022, arXiv:2211.05100.
  14. Wei, J.; Tay, Y.; Bommasani, R.; Raffel, C.; Zoph, B.; Borgeaud, S.; Yogatama, D.; Bosma, M.; Zhou, D.; Metzler, D.; et al. Emergent abilities of large language models. arXiv 2022, arXiv:2206.07682.
  15. Pollak, S.; Coesemans, R.; Daelemans, W.; Lavrač, N. Detecting contrast patterns in newspaper articles by combining discourse analysis and text mining. Pragmatics 2011, 21, 647–683.
  16. Dezhkameh, A.; Layegh, N.; Hadidi, Y. A Critical Discourse Analysis of COVID-19 in Iranian and American Newspapers. GEMA Online J. Lang. Stud. 2021, 21, 231–244.
  17. Onay-Coker, D. The representation of Syrian refugees in Turkey: A critical discourse analysis of three newspapers. Continuum 2019, 33, 369–385.
  18. Luo, M.; Mu, X. Entity sentiment analysis in the news: A case study based on negative sentiment smoothing model (nssm). Int. J. Inf. Manag. Data Insights. 2022, 2, 100060.
  19. Li, S.; Wang, Q. A hybrid approach to recognize generic sections in scholarly documents. Int. J. Doc. Anal. Recognit. (IJDAR) 2021, 24, 339–348.