Summarizing the Differences in Chinese-Vietnamese Bilingual News

Jinjuan Wu*, Zhengtao Yu*, Shulong Liu*, Yafei Zhang* and Shengxiang Gao*

Abstract: Summarizing the differences in Chinese-Vietnamese bilingual news plays an important supporting role in the comparative analysis of news views between China and Vietnam. To address the cross-language problems in analyzing the differences between Chinese and Vietnamese bilingual news, we propose a new method of summarizing the differences based on an undirected graph model. The method extracts elements to represent the sentences, and builds a bridge between the two languages based on Wikipedia's multilingual concept description pages. First, we calculate the similarity between Chinese and Vietnamese news sentences, and filter the bilingual sentences accordingly. Then we use the filtered sentences as nodes and the similarity grade as the weight of the edges to construct an undirected graph model. Finally, combining the random walk algorithm, the weight of each node is calculated according to the weights of its edges, and the sentences with the highest weights are extracted as the difference summary. The experimental results show that our proposed approach achieved the highest score of 0.1837 on the annotated test set, outperforming state-of-the-art summarization models.

Keywords: Bilingual News, Chinese-Vietnamese, Sentence Similarity, Summarizing the Difference, Undirected Graph

1. Introduction

In the Internet age, information spreads rapidly regardless of borders. The media in different countries report on the same event and express different opinions because of their different positions. For example, on the theme of "One Belt, One Road", news reports in both Chinese and Vietnamese describe the content of the cooperation project agreement. However, Chinese news tends to emphasize the promotion of trade cooperation and cultural exchange, while Vietnamese articles tend to describe improvements in infrastructure construction and industrial development. This paper aims to summarize the differences in reporting between the two languages, and to generate a difference summary that helps people understand events more comprehensively and accurately.

Within the field of summarizing the differences in bilingual news, cross-language analysis is a difficult issue. This problem is generally addressed by the bilingual dictionary approach [1], the parallel corpus approach [2,3], or the machine translation approach [4,5]. The dictionary approach first builds a bilingual alignment dictionary, and aligns the keywords (e.g., emotional words or entities) as required. This method assumes that bridges can be built between different languages through clearly aligned keywords. For example, Mihalcea et al. [1] use both an English-Romanian and a general dictionary to construct a bilingual-aligned subjective dictionary. The parallel corpus approach mainly uses the alignment relationships of a parallel corpus composed of the source language and its translation into other languages. A parallel corpus may be aligned at the word, sentence, or document level, but the difficulty lies in the fact that such a corpus is not easy to obtain. For example, Banea et al. [3] propose to use subjective and objective classifiers of the source language together with a parallel corpus to classify the target-language text. In recent years, the quality of machine translation has improved, and it is gradually becoming an effective means of cross-language analysis. For example, Banea et al.
[4] propose two different cross-language methods using machine translation: translating the source language into the target language, and translating the target language into the source language.

Research into news summary extraction can be divided into topic representation approaches and indicator representation approaches [6]. Topic representation approaches first convert the text into a series of topics, then calculate the importance of the sentences according to the topics, and finally select the important sentences as the summary. Gillick et al. [7] propose using higher-frequency words as topic representations, and these higher-frequency words tend to be domain specific. Celikyilmaz and Hakkani-Tur [8] suggest using the hLDA model to calculate important topics in multi-document news, and then generate a summary. The authors of [9] propose using cosine distance to compute sentence similarity, and clustering sentences to extract the topics. Indicator representation approaches directly express each sentence as a feature vector and then calculate the importance of the sentence. For example, graph models [10,11] are used to calculate the importance of sentences: graph vertices represent sentences, edges represent the cosine similarity between sentences, the random walk algorithm is used to calculate the weights of the vertices, and the highest-weighted sentences are selected as the summary. Wan and Zhang [12] propose a novel system that incorporates the new factor of information certainty into the summarization task, which produces better content quality. The rise of deep learning has also contributed to the extractive summarization task. Some methods use neural networks in a single-document summarization framework [13-15]. They formulate sentence ranking as a hierarchical regression process, given sentences with labeled importance scores [13] or with a 0/1 label [14,15] indicating whether the sentence should be extracted into the summary. Unfortunately, applying neural network methods to bilingual multi-document summarization is difficult: not only do encoding and decoding of long sequences of multiple sentences still lack satisfactory solutions [16], but a large-scale corpus for training is also lacking.

The existing approaches to summary extraction mainly involve single-language documents, and aim to extract the important content of the news while eliminating redundant information. In this paper, we analyze multilingual news and extract the differing information. Singh et al. [17] propose using a restricted Boltzmann machine to generate a summary retaining the important information. In recent years, graph-based ranking algorithms have been widely used for this task, such as the research conducted by Wan et al. [18], who propose a graph-based ranking method to score the importance and differences of sentences in Chinese-English documents and then select the sentences with high scores to generate a summary.

This article focuses on Chinese and Vietnamese news documents, and the research method is divided into two steps. First, similar information is filtered out according to the cosine similarity between Chinese and Vietnamese news sentences. Second, a graph model is constructed, and the random walk algorithm is used to extract representative sentences to generate the summary.

2. A Summary Method of News Difference Based on a Graph Model

To reflect the differences between Chinese and Vietnamese news, our method proceeds in four steps.
First, we extract the elements contained in the news documents to characterize the sentences. Second, we calculate the similarity between cross-language news sentences to filter out the highly similar sentences. Third, the sentences that have not been filtered out are used as vertices to construct the graph model. Finally, we use the random walk algorithm to obtain the weight of each vertex, that is, the importance of the sentence, and the n most important sentences are selected as the summary. The implementation of the method is shown in Fig. 1.

2.1 The Extraction of Bilingual News Elements

The elements [19] contain important information such as the time, place, participants, and institutions in the news events. This paper extracts the elements contained in Chinese and Vietnamese sentences, and uses them to characterize the sentences. The extraction of Chinese elements uses the LTP cloud platform [20]. We take named entities as news elements and obtain the collection of Chinese elements $E_{cn}=\{e_{c1}, e_{c2}, \ldots, e_{cm}\}$. Due to the lack of Vietnamese named entity recognition tools, a word segmentation tool [21] is used to segment the sentences and perform part-of-speech tagging. We then manually extract the elements according to the processing results to obtain the collection of Vietnamese elements $E_{ve}=\{e_{v1}, e_{v2}, \ldots, e_{vn}\}$. Chinese and Vietnamese sentences are characterized by their elements, for example $S_{k}=\{e_{1}, e_{2}, \ldots, e_{k}\}$.

2.2 Filter Similar News Sentences

Chinese and Vietnamese sentences with high similarity do not reflect differences. Based on this consideration, initial filtering is carried out according to similarity before the sentences are analyzed. As Chinese-Vietnamese machine translation technology is not mature, we cannot simply translate the bilingual news into one language. We therefore seek help from the multilingual concept description pages on Wikipedia [22]. Concepts in different languages correspond to one another through these pages, so they are used in the calculation of Chinese-Vietnamese semantic similarity to realize the analysis of sentence relations. Wikipedia offers many language editions, among which the Chinese and Vietnamese concepts are the basis for the similarity calculation between Chinese and Vietnamese words. Following this method [22], we first extract the set of Chinese/Vietnamese concepts with correspondences in Wikipedia, constructing a bilingual concept feature space. Then, words are represented as vectors by mapping them into the feature space. Finally, the similarity between two vectors is calculated by the cosine. In our proposed approach, the inputs are the Chinese word $e^{cn}$ and the Vietnamese word $e^{ve}$; let the two vectors be represented by $\vec{e}^{cn}=\{e_{1}^{cn}, e_{2}^{cn}, \ldots, e_{n}^{cn}\}$ and $\vec{e}^{ve}=\{e_{1}^{ve}, e_{2}^{ve}, \ldots, e_{n}^{ve}\}$, respectively. The formula for the semantic similarity of Chinese and Vietnamese words is as follows:
(1)
$$\operatorname{sim}\left(e^{cn}, e^{ve}\right)=\frac{\sum_{i=1}^{n} e_{i}^{cn} \cdot e_{i}^{ve}}{\sqrt{\sum_{i=1}^{n}\left(e_{i}^{cn}\right)^{2}} \sqrt{\sum_{i=1}^{n}\left(e_{i}^{ve}\right)^{2}}}$$

Each news sentence is characterized by one or more elements, so the similarity of two sentences can be computed from the similarity of the elements they contain. Assume two sentences $s_{i}$ and $s_{j}$ contain the elements $e_{1}, e_{2}, \ldots, e_{m}$ and $e_{1}, e_{2}, \ldots, e_{n}$ after word segmentation and part-of-speech tagging; that is, $s_{i}$ is composed of $m$ words and $s_{j}$ is composed of $n$ words. The sentence similarity calculation is based on the sets of extracted elements. Words are selected one by one from the element set of one sentence to calculate their similarity with the words in the element set of the sentence from the other-language documents. Each time, the word pair with the maximum similarity is selected, until the sentence's element collection is exhausted. Then the similarities of these word pairs are added and divided by the number of words contained in the sentence's element set to determine the similarity of the two sentences. The formula is as follows:

(2)
$$w_{ij}=\frac{1}{m} \sum_{u=1}^{m} \max _{v} \operatorname{sim}\left(e_{u}, e_{v}\right)$$
where $w_{ij}$ represents the similarity between sentence $s_{i}$ and sentence $s_{j}$ from the different-language document sets, and $\operatorname{sim}(e_{i}, e_{j})$ denotes the similarity between the elements $e_{i}$ and $e_{j}$.
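A minimal Python sketch of Eqs. (1) and (2) follows; it is an illustration, not the authors' implementation. It assumes each word has already been mapped to a vector in the bilingual Wikipedia concept space (vectors are plain Python lists here). Note that the prose above describes selecting word pairs without replacement; for simplicity this sketch takes, for each element of one sentence, the maximum similarity over the other sentence's elements, mirroring the summation form of Eq. (2).

```python
import math

def word_sim(vec_cn, vec_ve):
    """Eq. (1): cosine similarity of two words represented in the
    shared Chinese-Vietnamese Wikipedia concept feature space."""
    dot = sum(a * b for a, b in zip(vec_cn, vec_ve))
    norm_cn = math.sqrt(sum(a * a for a in vec_cn))
    norm_ve = math.sqrt(sum(b * b for b in vec_ve))
    if norm_cn == 0 or norm_ve == 0:
        return 0.0
    return dot / (norm_cn * norm_ve)

def sentence_sim(elems_i, elems_j):
    """Eq. (2): for each element of sentence i, take its best-matching
    element of sentence j, then average over sentence i's m elements.
    elems_i, elems_j: lists of concept-space vectors."""
    if not elems_i or not elems_j:
        return 0.0
    total = sum(max(word_sim(e_u, e_v) for e_v in elems_j) for e_u in elems_i)
    return total / len(elems_i)
```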
Assuming that $S_{cn}=\{s_{1}^{cn}, s_{2}^{cn}, \ldots, s_{m}^{cn}\}$ contains the $m$ Chinese sentences, $S_{ve}=\{s_{1}^{ve}, s_{2}^{ve}, \ldots, s_{n}^{ve}\}$ contains the $n$ Vietnamese sentences, and $W_{ij}$, $i \in[1, m]$, $j \in[1, n]$, represents the similarity matrix between the Chinese and Vietnamese sentences, the matrix can be written as:

(3)
$$W_{ij}=\left[\begin{array}{ccccc} w_{11} & w_{12} & \cdots & w_{1,n-1} & w_{1n} \\ w_{21} & w_{22} & \cdots & w_{2,n-1} & w_{2n} \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ w_{m-1,1} & w_{m-1,2} & \cdots & w_{m-1,n-1} & w_{m-1,n} \\ w_{m1} & w_{m2} & \cdots & w_{m,n-1} & w_{mn} \end{array}\right]$$

After obtaining the similarities between Chinese and Vietnamese sentences, it would obviously be unreasonable to filter sentences directly according to pairwise similarity. For example, assume that the threshold is $\alpha$ and that $w_{24} \geq \alpha$, which satisfies the condition that the similarity is greater than the threshold. If the sentences $s_{2}^{cn}$ and $s_{4}^{ve}$ were filtered out directly merely because of the high similarity between them, the accuracy of the summary would be affected: $w_{24} \geq \alpha$ only indicates that there is little difference between sentences $s_{2}^{cn}$ and $s_{4}^{ve}$, but $s_{2}^{cn}$ may still differ from the Vietnamese sentences other than $s_{4}^{ve}$, and $s_{4}^{ve}$ may still differ from the Chinese sentences other than $s_{2}^{cn}$. Based on these considerations, the following method is adopted. First, the global similarity of each sentence is calculated. Second, the sentences are filtered according to whether their global similarity satisfies the threshold condition.
(4)
$$\operatorname{sim}\left(s_{i}^{cn}\right)=\frac{1}{n} \sum_{j=1}^{n} w_{ij}, \quad i=1,2, \ldots, m$$
(5)
$$\operatorname{sim}\left(s_{j}^{ve}\right)=\frac{1}{m} \sum_{i=1}^{m} w_{ij}, \quad j=1,2, \ldots, n$$

where $\operatorname{sim}(s_{i}^{cn})$ and $\operatorname{sim}(s_{j}^{ve})$ represent the global similarity of the Chinese sentence $s_{i}^{cn}$ and the Vietnamese sentence $s_{j}^{ve}$, respectively. To be specific, $\operatorname{sim}(s_{i}^{cn})$ measures the similarity between a Chinese sentence and the full Vietnamese text. If the global similarity is higher than the threshold, the difference between the Chinese sentence and the Vietnamese text is small, and the sentence should be filtered out. The Vietnamese news sentences are handled in the same way. We set the global similarity threshold to 0.2 in the experiments.
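A minimal sketch of this filtering step, assuming the cross-language similarity matrix W has already been filled in (e.g., with the sentence_sim function above): the global similarities of Eqs. (4) and (5) are simply the row and column means of W.

```python
def filter_by_global_similarity(W, threshold=0.2):
    """W: m x n list-of-lists of Chinese-Vietnamese sentence similarities.
    Returns the indices of Chinese and Vietnamese sentences to KEEP,
    i.e., those whose global similarity (row/column mean of W, Eqs. (4)-(5))
    stays below the threshold and may therefore carry differing content."""
    m, n = len(W), len(W[0])
    keep_cn = [i for i in range(m) if sum(W[i]) / n < threshold]
    keep_ve = [j for j in range(n) if sum(W[i][j] for i in range(m)) / m < threshold]
    return keep_cn, keep_ve
```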
2.3 Graph Model Construction

After the initial filtering, the Chinese sentences $S_{cn}=\{s_{1}^{cn}, s_{2}^{cn}, \ldots, s_{m}^{cn}\}$ and the Vietnamese sentences $S_{ve}=\{s_{1}^{ve}, s_{2}^{ve}, \ldots, s_{n}^{ve}\}$ are obtained, where $m$ and $n$ now indicate the numbers of remaining Chinese and Vietnamese sentences, respectively. The remaining sentences can, to some extent, reflect the differences between the news in the two languages. The purpose of this paper is to summarize the differences between Chinese and Vietnamese news. To achieve this goal, two conditions must be met. First, the extracted news sentences should reflect that sentences in different languages contain different information. Second, the extracted sentences should have the nature of a summary, that is, they should be representative or important. The filtering based on the global similarity of the sentences satisfies the first condition, so the Chinese and Vietnamese sentences remaining after filtering need to be processed into a summary. To this end, we calculate the scores of the sentences in each language separately, and extract the n highest-scoring sentences as the summary of the news differences. To evaluate the importance of a sentence, we consider the following features: the similarity to sentences in the same-language documents, and the difference from sentences in the other-language documents. The higher the similarity within the same-language documents, the better the sentence reflects the content of the news documents; the greater the difference across the different-language document sets, the better the sentence reflects the difference between the Chinese and Vietnamese news. Based on this analysis, we construct the undirected graph model shown in Fig. 2. The vertices in Fig. 2 indicate Chinese or Vietnamese sentences; $E^{cn}$ represents the similarity between Chinese sentences, $E^{ve}$ represents the similarity between Vietnamese sentences, and $E^{cnve}$ represents the difference between Chinese and Vietnamese sentences. The similarity between sentences in the same language is calculated by cosine similarity: we select unigram + bigram features, represent each sentence as a vector using the vector space model (VSM), and calculate the similarity as the cosine distance of the vectors. The bilingual sentence similarity calculation is based on Wikipedia, and the similarity of each sentence pair is obtained by calculating the normalized Euclidean distance between the element vectors. Let the Chinese word vector be $\vec{e}^{cn}=\{e_{1}^{cn}, e_{2}^{cn}, \ldots, e_{n}^{cn}\}$ and the Vietnamese word vector be $\vec{e}^{ve}=\{e_{1}^{ve}, e_{2}^{ve}, \ldots, e_{n}^{ve}\}$. The formula for the similarity of Chinese and Vietnamese words is then:
(6)
$$\operatorname{Dis}\left(e_{i}^{cn}, e_{j}^{ve}\right)=\frac{\left\|\vec{e}_{i}^{cn}-\vec{e}_{j}^{ve}\right\|}{\left\|\vec{e}_{i}^{cn}\right\|+\left\|\vec{e}_{j}^{ve}\right\|}$$

The similarity between Chinese and Vietnamese news sentences is then:
(7)
$$w_{ij}=\frac{1}{m} \sum_{u=1}^{m} \max _{v} \operatorname{Dis}\left(e_{u}^{cn}, e_{v}^{ve}\right)$$

where $w_{ij}$ represents the similarity between sentences $s_{i}$ and $s_{j}$ from the different-language document sets, and $\operatorname{Dis}(e_{i}, e_{j})$ represents the similarity between elements $e_{i}$ and $e_{j}$. We construct the similarity matrix between Chinese and Vietnamese $W_{ij}^{cnve}$, $i \in[1, m]$, $j \in[1, n]$, and let $\left(W_{ij}^{cnve}\right)^{T}=W_{ij}^{vecn}$.
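As with Eqs. (1) and (2), the following is a small illustrative sketch of Eqs. (6) and (7), under the same assumption that elements are vectors in the shared concept space; it is not the authors' code.

```python
import math

def dis(e_cn, e_ve):
    """Eq. (6): Euclidean distance between two element vectors,
    normalized by the sum of their norms so the value lies in [0, 1]."""
    diff = math.sqrt(sum((a - b) ** 2 for a, b in zip(e_cn, e_ve)))
    denom = math.sqrt(sum(a * a for a in e_cn)) + math.sqrt(sum(b * b for b in e_ve))
    return diff / denom if denom else 0.0

def cross_sentence_weight(elems_cn, elems_ve):
    """Eq. (7): average, over the Chinese sentence's elements, of the
    best Dis value against the Vietnamese sentence's elements."""
    if not elems_cn or not elems_ve:
        return 0.0
    return sum(max(dis(e_u, e_v) for e_v in elems_ve) for e_u in elems_cn) / len(elems_cn)
```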
2.4 Graph Model Solving

The matrices $w_{ij}^{cn}$, $w_{ij}^{ve}$, and $w_{ij}^{cnve}$ represent the similarity between Chinese sentences, the similarity between Vietnamese sentences, and the similarity between Chinese and Vietnamese sentences, respectively. An element of a matrix is equivalent to the weight of an edge in the graph model. The weight of a vertex can be calculated from the weights of its edges [18]; let $u\left(s_{i}^{cn}\right)_{m \times 1}$ and $v\left(s_{j}^{ve}\right)_{n \times 1}$ represent the scores of the Chinese and Vietnamese sentences. To this end, each matrix is first normalized to obtain $\tilde{w}_{ij}^{cn}$, $\tilde{w}_{ij}^{ve}$, and $\tilde{w}_{ij}^{cnve}$, ensuring that the elements in each row of the matrix sum to 1.

(8)
$$u\left(s_{i}^{cn}\right)=\alpha \sum_{j} \tilde{w}_{ij}^{cn} u\left(s_{j}^{cn}\right)+\beta \sum_{j} \tilde{w}_{ij}^{vecn} v\left(s_{j}^{ve}\right)$$
(9)
$$v\left(s_{j}^{ve}\right)=\alpha \sum_{i} \tilde{w}_{ij}^{ve} v\left(s_{i}^{ve}\right)+\beta \sum_{i} \tilde{w}_{ij}^{cnve} u\left(s_{i}^{cn}\right)$$

where $\alpha$ and $\beta$ indicate the contributions of same-language and cross-language similarity, with $\alpha>0$, $\beta>0$, and $\alpha+\beta=1$. The above formulae are solved iteratively. To make the solution converge, $u\left(s_{i}^{cn}\right)_{m \times 1}$ and $v\left(s_{j}^{ve}\right)_{n \times 1}$ are normalized after each iteration. When the difference between the results of two consecutive iterations is less than a threshold, the iteration is considered to have converged. The scores of the Chinese sentences $u\left(s_{i}^{cn}\right)_{m \times 1}$ and the Vietnamese sentences $v\left(s_{j}^{ve}\right)_{n \times 1}$ are obtained in this way.

To further filter redundant information, we use a greedy algorithm [23] to adjust the current scores and obtain the final sentence scores. The algorithm for the Chinese sentences is as follows:

(1) Initialize two collections: $A=\emptyset$ and $B=\left\{s_{i}, i=1,2, \ldots, m\right\}$, where set B represents the set of Chinese sentences.

(2) Sort the elements in set B in descending order of their original scores $u\left(s_{i}^{cn}\right)_{m \times 1}$.

(3) Assuming that $s_{i}$ is ranked first, move it from set B to set A, and then recalculate the scores of the sentences in set B according to their similarity with $s_{i}$. Let $s_{j}$ denote a sentence similar to $s_{i}$; its score is recalculated as $\operatorname{score}\left(s_{j}\right)=u\left(s_{j}^{cn}\right)-\varphi \cdot w_{ij}^{cn} \cdot u\left(s_{j}^{cn}\right)$, where $u\left(s_{j}^{cn}\right)$ is the original score of sentence $s_{j}$, $\varphi$ is the penalty factor, and $w_{ij}^{cn}$ is the similarity between $s_{i}$ and $s_{j}$. When the penalty factor $\varphi$ is 0 there is no penalty, and the score of sentence $s_{j}$ is unchanged. We experimentally selected a penalty factor of 0.5.

(4) Re-sort the elements in set B by the scores calculated in the previous step and return to step (3), until the number of elements in set B is zero.

The algorithm for the Vietnamese news is identical, except that the input is replaced by the Vietnamese sentence set, the original score vector by $v\left(s_{j}^{ve}\right)_{n \times 1}$, and the similarity matrix by $w_{ij}^{ve}$. Using this method, we calculate the final scores of the Chinese and Vietnamese sentences, sort the sentences of each language by their final scores, and extract the top n sentences as the summary of the news differences. A minimal sketch of this procedure is given below.
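The following Python sketch illustrates Section 2.4 end to end under simplifying assumptions: NumPy arrays, uniform initial scores, and the standard co-ranking orientation of the cross-language matrices. It is an illustration of Eqs. (8) and (9) plus the greedy re-scoring, not the authors' implementation, and the variable names are ours.

```python
import numpy as np

def row_normalize(M):
    """Normalize each row of M to sum to 1 (all-zero rows are left as zero)."""
    M = np.asarray(M, dtype=float)
    s = M.sum(axis=1, keepdims=True)
    s[s == 0] = 1.0  # avoid division by zero for empty rows
    return M / s

def co_rank(W_cn, W_ve, W_cnve, alpha=0.5, beta=0.5, tol=1e-6, max_iter=100):
    """Iterative solution of Eqs. (8)-(9) in the co-ranking style of [18].
    W_cn: m x m, W_ve: n x n (same-language similarities); W_cnve: m x n."""
    Wc = row_normalize(W_cn)
    Wv = row_normalize(W_ve)
    Wcv = row_normalize(W_cnve)       # Chinese -> Vietnamese
    Wvc = row_normalize(np.asarray(W_cnve).T)  # (W^cnve)^T = W^vecn
    m, n = np.asarray(W_cnve).shape
    u = np.full(m, 1.0 / m)           # Chinese sentence scores
    v = np.full(n, 1.0 / n)           # Vietnamese sentence scores
    for _ in range(max_iter):
        u_new = alpha * Wc @ u + beta * Wcv @ v   # Eq. (8)
        v_new = alpha * Wv @ v + beta * Wvc @ u   # Eq. (9)
        u_new /= np.linalg.norm(u_new)            # normalize after each iteration
        v_new /= np.linalg.norm(v_new)
        done = max(np.abs(u_new - u).max(), np.abs(v_new - v).max()) < tol
        u, v = u_new, v_new
        if done:
            break
    return u, v

def greedy_rescore(scores, W_same, phi=0.5):
    """Greedy redundancy removal (steps (1)-(4)): repeatedly move the
    top-scoring sentence to the summary set and penalize the remaining
    sentences in proportion to their similarity to it."""
    scores = np.asarray(scores, dtype=float).copy()
    remaining = set(range(len(scores)))
    order = []
    while remaining:
        best = max(remaining, key=lambda i: scores[i])
        order.append(best)
        remaining.discard(best)
        for j in remaining:
            scores[j] -= phi * W_same[best, j] * scores[j]
    return order  # ranked indices; take the top n as the difference summary
```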
3. Experiments and Results

3.1 Data Set

The experimental data set contains Chinese and Vietnamese news on three topics. We searched http://google.com.hk/ to obtain news documents related to the topics, and some documents were collected manually as the data sets for the experiments. The specific information is shown in Table 1.

Table 1.

To evaluate the results, we read the Chinese and Vietnamese sentences on each of the three topics. Based on a full understanding of these news items, we chose 5 sentences from each language to form a reference summary of the differences in the news.

3.2 Evaluation Metrics

The top 5 sentences in each language were extracted as the difference sentences in the experiment. To evaluate the algorithm, we used the n-gram co-occurrence measure proposed by Lin and Hovy [24]. This method evaluates the model by calculating the degree of n-gram co-occurrence between the model summary and the manual summary: the higher the co-occurrence, the better the model. The calculation is as follows:
(10)
$$C_{n}=\frac{\sum_{C \in\{\text{Model}\}} \sum_{n\text{-}gram \in C} \operatorname{Count}_{match}(n\text{-}gram)}{\sum_{C \in\{\text{Model}\}} \sum_{n\text{-}gram \in C} \operatorname{Count}(n\text{-}gram)}$$

where $\operatorname{Count}_{match}(n\text{-}gram)$ represents the number of n-gram co-occurrences between the model summary and the manual summary, and $\operatorname{Count}(n\text{-}gram)$ represents the number of n-grams in the model summary.
(11)
$$\operatorname{Ngram}(i, j)=\exp \left(\sum_{n=i}^{j} w_{n} \log C_{n}\right), \quad i \leq j;\; i, j \in[1,4]$$

where $w_{n}$ is the normalization factor and $w_{n}=\frac{1}{j-i+1}$. When $i=j=1$, $\operatorname{Ngram}(1,1)$ represents the degree of unigram co-occurrence, and $\operatorname{Ngram}(1,2)$ represents the degree of unigram+bigram co-occurrence.
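As an illustrative sketch (not the official ROUGE tooling), Eqs. (10) and (11) can be computed as follows; summaries are assumed to be pre-tokenized token lists, matched counts are clipped to the reference, and a small epsilon guards the logarithm.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All n-grams of a token list, as a multiset."""
    return Counter(tuple(tokens[k:k + n]) for k in range(len(tokens) - n + 1))

def c_n(model_summaries, manual_summaries, n):
    """Eq. (10): clipped n-gram co-occurrence between each model summary
    and its manual reference, pooled over all summaries."""
    match, total = 0, 0
    for model, manual in zip(model_summaries, manual_summaries):
        mod, ref = ngrams(model, n), ngrams(manual, n)
        match += sum(min(cnt, ref[g]) for g, cnt in mod.items())
        total += sum(mod.values())
    return match / total if total else 0.0

def ngram_score(model_summaries, manual_summaries, i, j):
    """Eq. (11): geometric mean of C_i .. C_j, with w_n = 1 / (j - i + 1)."""
    w = 1.0 / (j - i + 1)
    return math.exp(sum(w * math.log(max(c_n(model_summaries, manual_summaries, n), 1e-12))
                        for n in range(i, j + 1)))
```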
3.3 Evaluation Results

We selected the following three baseline methods to show the effectiveness of our proposed approach.

Centroid [25]: A centroid-based method is used to calculate the saliency scores of the sentences in each language. First, three scores are calculated: the centroid value, the position value, and the overlap with the first sentence. Second, the three values are linearly summed to obtain the sentence score. Finally, redundant information is removed to obtain the summary sentences. It is worth noting that this method does not use cross-language information.

Centroid++: This is an improved method based on the centroid method that integrates cross-language information. The final score of a sentence is obtained by subtracting the cross-language similarity from the score calculated by the centroid method, which further reflects the differences between the languages.

PBES [26]: Phrase-based extractive summarization [26] uses phrase-based scoring to represent the saliency scores of sentences. Phrase-based scores can be assigned to sentences from the translated documents for summarization purposes. The model can operate on lexical entries with more than one word in the source and target languages, which works well for cross-language document summarization.

In the initial filtering of bilingual sentences according to global similarity, the global similarity threshold was set to 0.2; with this value, about 30% of the sentences are filtered out. In addition, since the purpose of this paper is to extract a difference summary, we are concerned not only with the difference between the languages but also with the importance of the sentences within the same language. We set $\alpha=0.5$ and $\beta=0.5$ in the random walk algorithm, which means that the similarities between different languages and within the same language contribute equally to the final score of a sentence. With these settings, we implemented our method and the three baseline methods. Based on the n-gram co-occurrence measure, $\operatorname{Ngram}(1,1)$ and $\operatorname{Ngram}(1,2)$ were calculated for each method. Table 2 shows the experimental results for the Chinese difference summary, and Table 3 shows the experimental results for the Vietnamese difference summary.

Table 2.

Table 3.
We compared the output of our model with the other summary systems. The first two baselines pay more attention to the positional characteristics of sentences during extraction, while PBES analyzes the relation between bilingual sentences through machine translation. This paper studies cross-language document summarization for Chinese and Vietnamese; Vietnamese is a low-resource language, and the quality of machine translation for it is not optimal. In response, our method builds a bridge between the languages based on Wikipedia's multilingual concept description pages, extracting elements to represent the sentences. As can be seen from Tables 2 and 3, our method is superior to Centroid, Centroid++, and PBES under the same evaluation method, for both the Chinese and the Vietnamese news data. The effect of α on the experimental results can be observed in Figs. 3 and 4, which show the influence on the Chinese and Vietnamese results, respectively. The experimental results gradually improve as α increases, peak at about 0.5, and then gradually decline as α increases further.

Table 4.
Finally, we selected the topic "Mekong River" and used our method to summarize the differences in the Chinese-Vietnamese bilingual news, as shown in Table 4. The proposed method extracts the differing viewpoints of the Chinese and Vietnamese news on the Mekong River topic. The Chinese summary pays attention to Vietnam's severe drought and provides an objective analysis of the shortage of water resources, while the Vietnamese summary emphasizes the limited flow of the Mekong into particular areas and the need for the Chinese hydropower station to discharge water. To a certain extent, the differences between the Chinese and Vietnamese news are reflected here.

4. Conclusions

In this paper, we have proposed a method based on a graph model to summarize the differences between Chinese and Vietnamese bilingual news. In the proposed method, the multilingual concept description pages on Wikipedia are used to analyze sentence similarity, which contributes to solving the graph model and completing the summarization task. The experiments show the effectiveness of our proposed approach.

Acknowledgement

This work was supported by the National Key Research and Development Plan Project (No. 2018YFC0830105, 2018YFC0830100), the National Natural Science Foundation (No. 61732005, 61672271, 61761026, 61662041, 61762056), the High-tech Industry Development Project of Yunnan Province (No. 201606), and the Natural Science Foundation of Yunnan Province (No. 2018FB104).

Biography

Zhengtao Yu
https://orcid.org/0000-0002-4012-461X
He is currently a professor and Ph.D. supervisor at the School of Information Engineering and Automation, and the chairman of the Key Laboratory of Intelligent Information Processing, Kunming University of Science and Technology, Kunming, China. He received the Ph.D. degree in Computer Application Technology from Beijing Institute of Technology, Beijing, China, in 2005. His main research interests include natural language processing, machine translation, and information retrieval.

Biography

Yafei Zhang
https://orcid.org/0000-0003-2347-5642
She is currently a lecturer and master's supervisor at the College of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, China. She received the Ph.D. degree in Signal and Information Processing from the Institute of Electronics, Chinese Academy of Sciences, Beijing, China, in 2008. Her main research interests include image processing and natural language processing.

Biography

Shengxiang Gao
https://orcid.org/0000-0002-2980-8420
She is a lecturer at Kunming University of Science and Technology, Kunming, China, and has been a CCF member since 2013. She received the bachelor's degree in Industrial Automation, the M.S. degree in Pattern Recognition and Intelligent Systems, and the Ph.D. degree from Kunming University of Science and Technology in 2000, 2005, and 2016, respectively. Her research interests include natural language processing, machine translation, and information retrieval.

References
|