Yatao Zhang, Zhenguo Ma and Wentao Dong

Nonlinear Quality Indices Based on a Novel Lempel-Ziv Complexity for Assessing Quality of Multi-Lead ECGs Collected in Real Time

Abstract: We compared a novel encoding Lempel-Ziv complexity (ELZC) with three common complexity algorithms, i.e., approximate entropy (ApEn), sample entropy (SampEn), and classic Lempel-Ziv complexity (CLZC), in order to determine a satisfactory complexity algorithm and corresponding quality indices for assessing the quality of multi-lead electrocardiograms (ECGs). First, we calculated the four algorithms on six artificial time series to compare their performance in discerning randomness from the inherent irregularity within time series. Then, to analyze the sensitivity of the algorithms to the content level of different noises within the ECG, we investigated their trends in five artificial synthetic noisy ECGs containing different noises at several signal-to-noise ratios (SNRs). Finally, three quality indices based on the ELZC of the multi-lead ECG were proposed to assess the quality of 862 real 12-lead ECGs from the MIT databases. The results showed that the ELZC could discern randomness and the inherent irregularity within the six artificial time series, and could also reflect the content level of different noises within the five artificial synthetic ECGs. The AUCs of the three ELZC quality indices were statistically significant (>0.500). The ELZC and its three quality indices are therefore more suitable for multi-lead ECG quality assessment than the other three algorithms.

Keywords: Complexity, ECG Quality Assessment, Encoding LZ Complexity, Entropy

1. Introduction

In practice, a considerable number of electrocardiogram (ECG) recordings collected via wearable devices and smartphones cannot be used in the clinic because their quality is so poor that the main waveforms cannot be identified; thus, the quality of ECGs needs to be classified before diagnosis [1-3].
At present, most quality assessment methods use quality indices derived from the waveform and frequency characteristics of the ECG. Waveform characteristics were the first to be used for generating quality indices because they are easy to identify and compute. A simple and effective method used only quality indices of waveform characteristics, i.e., flat lines, saturation, baseline drift, high and low amplitude, and steep slope, to classify ECG quality [4]; its assessment accuracy, namely the accuracy of quality classification, reached 85.7% on a test dataset from the specially collected database of the PhysioNet/Computing in Cardiology Challenge 2011 (CinC2011). Similarly, Kuzilek et al. [5] combined six quality indices of waveform characteristics to assess ECG quality and achieved an accuracy of 83.6% on the same test dataset. Frequency characteristics, e.g., the power spectral density (PSD), reflect the energy distribution within a time series and can thus expose detailed information within physiological signals, so quality indices based on the PSD have also been used for ECG quality assessment [6-8]. In [6], 35 quality indices generated from the PSD achieved an assessment accuracy of 90.4% on the aforementioned test dataset, higher than that of methods employing only waveform-based quality indices. In practice, some ECG recordings can be classified correctly and quickly as unacceptable using a few obvious waveform indices (e.g., flat line and high amplitude) without any frequency-based indices. Therefore, assessment methods that combine waveform quality indices with frequency indices not only improve assessment accuracy but also benefit computational efficiency. Clifford et al. [7] and Zhang et al. [8] each proposed an assessment method based on a combination of time- and frequency-domain quality indices.
Compared with the earlier methods, multiple quality indices covering both waveform and frequency characteristics provide relatively more comprehensive information about the ECG, so higher assessment accuracy can be achieved. However, in quality assessment the recordings to be assessed are raw signals without any preprocessing, so both acceptable and unacceptable recordings contain considerable noise. The waveforms of acceptable signals therefore also appear disorderly and in many cases resemble those of unacceptable signals. Thus, waveform features do little to improve the accuracy and generalization ability of assessment methods, and may even degrade them. Furthermore, the frequency bands of random noise within the ECG often overlap with those of the normal ECG, making it difficult to separate noise from ECG in the frequency domain. It is therefore necessary to seek new characteristics of the ECG, and corresponding quality indices, that more accurately reflect the inherent irregularity within the ECG rather than its random components, so as to improve the classification accuracy of quality assessment. In fact, normal and abnormal ECGs should exhibit inherent nonlinear irregularity rather than randomness when the content of noise and unexpected components is relatively low; otherwise, the ECG exhibits randomness. We can therefore evaluate ECG quality by measuring the complexity of the signal. At present, several common complexity algorithms, i.e., the classic Lempel-Ziv complexity (CLZC), approximate entropy (ApEn), and sample entropy (SampEn), have been used extensively for biomedical signal analysis [9-12]. Zhang et al. [13] evaluated the performance of the CLZC on ECG quality assessment and concluded that it was only sensitive to the content of high-frequency noise.
In practice, the ECG is usually contaminated by mixed noise composed of high-frequency, low-frequency, and power-line noise rather than high-frequency noise alone, so the CLZC shows unsatisfactory classification performance when used for quality assessment. In addition, the ApEn and SampEn have not yet been reported for assessing ECG quality. Zhang et al. [14] proposed an encoding Lempel-Ziv complexity (ELZC) to analyze the irregularity of physiological signals; the algorithm discerns randomness and the inherent irregularity within time series better than other Lempel-Ziv complexity algorithms. In this study, we applied the ApEn, SampEn, CLZC, and ELZC algorithms in three experimental schemes, aiming to find a satisfactory complexity algorithm and corresponding quality indices for ECG quality assessment. First, we compared the performance of the four complexity algorithms in discerning the inherent irregularity within time series from the randomness caused by noise and unexpected components. Second, we evaluated how well the complexity algorithms reflect different noises and their content levels within the ECG. Finally, we proposed three quality indices based on the complexity values of the multi-lead ECG, and used them to assess the quality of 862 real 12-lead ECG recordings selected from the CinC2011 dataset of the MIT database. Receiver operating characteristic (ROC) curves and the corresponding areas under the curve (AUC) of the quality indices were calculated. The rest of this study is organized as follows: Section 2 describes the four complexity methods and the datasets, including artificial time series and real ECG recordings. The experimental results and discussion are presented in Sections 3 and 4, respectively. Section 5 concludes the current study.

2.
Materials and Methods

2.1 Artificial Synthetic and Real ECGs

This study employed three datasets: six typical artificial time series, artificial synthetic noisy ECGs, and real 12-lead ECGs from the CinC2011 database. First, six typical artificial time series were generated to evaluate the performance of the four complexity algorithms in discerning the inherent irregularity within time series from the randomness caused by random components. Gaussian noise represents pure noise. The first mixed noise series, MIX(0.4), is a periodic series of length N in which 40% of the points, chosen at random, are replaced with independent identically distributed random noise. The other mixed noise series, MIX(0.2), is generated in the same way. Logistic (Logi) mapping series represent nonlinear irregular series and are defined as

(1)[TeX:] $$x_{k+1}=\mu x_{k}\left(1-x_{k}\right)$$
where [TeX:] $$1<\mu \leq 4.$$ Logi(4.0) denotes the logistic series with [TeX:] $$\mu$$ set to 4.0; Logi(3.8) and Logi(3.5) are generated similarly. In fact, Logi(3.5) is a periodic series. Second, the artificial synthetic noisy ECG was constructed from two signals: a clean artificial ECG generated by the open-source software ECGSYN [15], and common real ECG noise from the MIT Noise Stress Test Database (NSTDB) [16,17]. The NSTDB provides three common noises typically found in real clinical applications, i.e., baseline wander (BW), electrode motion (EM), and muscle artefacts (MA); each noise type includes 48 real noise series. In addition, another typical noise, power-line (PL) interference, was also used for synthesizing the noisy ECG. Because a real ECG is usually contaminated by mixed noise, we also built a hybrid noise containing each of the aforementioned noises in a 25% proportion and added it to the clean artificial ECG. We generated 48 clean artificial ECGs of 10-second duration with a 360-Hz sampling rate and heart rates from 50 to 100 beats per minute. The clean artificial ECG was set to a 360-Hz sampling rate because the sampling rate of the three NSTDB noises is 360 Hz, which made it convenient to construct the artificial synthetic noisy ECG. In this study, we generated five kinds of artificial synthetic noisy ECGs, i.e., the clean ECG plus BW, EM, MA, PL, and hybrid noise. In practice, an ECG usually cannot be used for clinical purposes when the signal-to-noise ratio (SNR) is lower than -10 dB, because the main waveforms cannot be recognized among the noise. The SNRs of the five synthetic ECGs therefore range from -10 dB to 15 dB in steps of 5 dB. For each SNR level, 48 repeats were generated.
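The construction of the artificial test series and the SNR-controlled mixing described above can be sketched in Python. This is a minimal illustration only: the sine base and uniform-noise range of MIX(p), the logistic seed x0, and all function names are assumptions for demonstration, not taken from the original experiments.

```python
import numpy as np

def logistic_series(mu, n, x0=0.4):
    """Logistic map x_{k+1} = mu * x_k * (1 - x_k); x0 is an assumed seed."""
    x = np.empty(n)
    x[0] = x0
    for k in range(n - 1):
        x[k + 1] = mu * x[k] * (1 - x[k])
    return x

def mix_series(p, n, rng=None):
    """MIX(p): a periodic (sine) series with a fraction p of its points
    replaced by i.i.d. uniform noise (period and noise range assumed)."""
    rng = rng if rng is not None else np.random.default_rng()
    x = np.sin(2 * np.pi * np.arange(n) / 12)
    idx = rng.random(n) < p                  # choose ~p*n points at random
    x[idx] = rng.uniform(-np.sqrt(3), np.sqrt(3), idx.sum())
    return x

def add_noise_at_snr(clean, noise, snr_db):
    """Scale `noise` so that clean + noise has the target SNR in dB,
    where SNR = 10 * log10(P_signal / P_noise)."""
    p_sig = np.mean(clean ** 2)
    p_noise = np.mean(noise ** 2)
    target = p_sig / 10 ** (snr_db / 10)     # required noise power
    return clean + noise * np.sqrt(target / p_noise)
```

In this sketch, `add_noise_at_snr` rescales a noise record (e.g., a BW, EM, MA, PL, or hybrid segment) so that the resulting mixture hits each SNR level from -10 dB to 15 dB exactly.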
Finally, each of the five synthetic noisy ECGs, sampled at 360 Hz, was resampled to 500 Hz using the resample function of MATLAB. Fig. 1 shows the five artificial synthetic noisy ECGs with SNRs from -10 dB to 15 dB in steps of 5 dB. The SNR reflects the content level of noise within a time series, and it is defined as

(2)[TeX:] $$S N R=10 \log _{10}\left(P_{\text {signal }} / P_{\text {noise }}\right)$$
where [TeX:] $$P_{\text {signal}} \text {and } P_{\text {noise}}$$ denote the power of the clean ECG and the power of the noise, respectively. Third, the real data were selected from the training dataset of the CinC2011 [2]. The training dataset includes 1000 12-lead ECGs with a 500-Hz sampling rate; it is available to all researchers, and each recording is labeled "acceptable" or "unacceptable". Each recording lasts 10 seconds. It is unnecessary to calculate the complexity of ECG recordings with lead fall-off because their waveforms look like straight lines and are easy to identify; furthermore, such signals would affect the accuracy of the final results. This study therefore removed the recordings with lead fall-off from the training set, leaving 767 acceptable and 95 unacceptable recordings.

2.2 The CLZC and ELZC

The CLZC consists of two processes. The first transforms the original time series X into a binary symbolic sequence by comparing each point of X with a threshold: points larger than the threshold are mapped to 1, and the others to 0. In this study, the mean of the series is selected as the threshold for the CLZC. The LZ value of the new symbolic sequence is then calculated by the traditional method [9]. The LZ complexity counter c(n) counts the new subsequences of consecutive characters within the symbolic sequence. To calculate c(n), the symbolic sequence B is scanned from left to right, and c(n) is increased by one unit whenever a new subsequence is encountered. Let S and Q represent two subsequences of B, and let SQ be their concatenation. The sequence SQπ is obtained from SQ by deleting its last character (π denotes the operation of deleting the last character of a sequence), and v(SQπ) is the vocabulary of all subsequences of SQπ. Initially, c(n)=1, S=s(1), and Q=s(2), so SQπ=s(1).
Generally, S=s(1), s(2), …, s(r) and Q=s(r+1), so SQπ=S. If Q belongs to v(SQπ), then Q is a subsequence of SQπ rather than a new sequence; Q is then extended to s(r+1), s(r+2) and tested against v(SQπ) again. This process is repeated until Q=s(r+1), s(r+2), …, s(r+i) is a new sequence rather than a subsequence of SQπ, whereupon c(n)=c(n)+1. Thereafter, S=s(1), s(2), …, s(r+i) and Q=s(r+i+1). The above procedure is repeated until Q reaches the last character of B. The LZ complexity counter c(n) may be normalized as C(n):

(3)[TeX:] $$C(n)=\frac{c(n)}{n / \log _{\alpha} n}$$
where n is the length of the signal X and [TeX:] $$\alpha$$ is the number of possible symbols in the new sequence. In practice, the normalized complexity C(n) is generally adopted instead of c(n). For the ELZC, we leverage a novel symbolic process to generate an 8-state (3-bit binary) symbolic sequence, and the LZ value is then calculated by the process described above. The novel symbolization consists of the following steps [14]. Each [TeX:] $$x_{i}$$ within the original signal [TeX:] $$X=x_{1}, x_{2}, \ldots, x_{n}$$ is transformed into a 3-bit binary symbol [TeX:] $$b_{1}(i) b_{2}(i) b_{3}(i).$$ Step 1: [TeX:] $$b_{1}(i)$$ is determined by comparing [TeX:] $$x_{i}$$ with the mean of the signal X; [TeX:] $$b_{1}(i)$$ is set to 0 when [TeX:] $$x_{i}$$ is less than the mean, and to 1 otherwise. Step 2: [TeX:] $$b_{2}(i)$$ is 0 when the difference between [TeX:] $$x_{i}$$ and [TeX:] $$x_{i-1}$$ is less than 0, and 1 otherwise; initially, [TeX:] $$b_{2}(1)$$ is set to 0. Step 3: the calculation of the third digit [TeX:] $$b_{3}(i)$$ is relatively complex; a variable Flag is first defined as follows:
(4)[TeX:] $$F l a g(i)=\left\{\begin{array}{l} 0 \text { if }\left|x_{i}-x_{i-1}\right|<d m \\ 1 \text { if }\left|x_{i}-x_{i-1}\right| \geq d m \end{array}, i=2,3, \ldots, n\right.$$where dm is the mean distance between adjacent points within signal X. Subsequently, [TeX:] $$b_{3}(i)$$ is calculated as follows:
(5)[TeX:] $$b_{3}(i)=\operatorname{NOT}\left(b_{2}(i) \operatorname{XOR} \operatorname{Flag}(i)\right), i=2,3, \dots, n$$where [TeX:] $$b_{3}(1)$$ is set to 0. The detailed symbolic process is described in [14].

2.3 The ApEn and SampEn

Pincus [18] proposed the ApEn as a metric to quantify the regularity of a time series; it measures the probability of new patterns appearing within the series when the embedding dimension increases from m to m+1. The ApEn is calculated as follows. Let S be a time series of length N, [TeX:] $$S=s_{1}, \ldots, s_{N},$$ and reconstruct vectors [TeX:] $$x_{i}=s_{i}, s_{i+1}, s_{i+2}, \ldots, s_{i+m-1} \text { for } 1 \leq i \leq N-m+1,$$ where m is the embedding dimension. The distance [TeX:] $$d_{i j}$$ between two vectors [TeX:] $$x_{i} \text { and } x_{j}$$ is calculated for [TeX:] $$1 \leq i, j \leq N-m+1.$$
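The CLZC parsing and the ELZC symbolization of Section 2.2 can be sketched in Python. This is a minimal illustration using the Kaspar-Schuster formulation of the LZ76 parsing, not the authors' implementation; the function names are assumptions.

```python
import numpy as np

def lz_count(s):
    """c(n): number of new subsequences encountered while scanning the
    symbol sequence left to right (Kaspar-Schuster formulation of LZ76)."""
    n = len(s)
    c, l, i, k, k_max = 1, 1, 0, 1, 1
    while True:
        if s[i + k - 1] == s[l + k - 1]:
            k += 1
            if l + k > n:                    # end of sequence reached
                c += 1
                break
        else:
            k_max = max(k, k_max)
            i += 1
            if i == l:                       # no earlier match: new subsequence
                c += 1
                l += k_max
                if l + 1 > n:
                    break
                i, k, k_max = 0, 1, 1
            else:
                k = 1
    return c

def clzc(x):
    """Classic LZC: binarize around the mean, count, normalize by n/log2(n)."""
    x = np.asarray(x, dtype=float)
    s = ''.join('1' if v > x.mean() else '0' for v in x)
    return lz_count(s) * np.log2(len(s)) / len(s)

def elzc_symbols(x):
    """ELZC encoding: map each sample to a 3-bit symbol b1 b2 b3 (8 states)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    d = np.diff(x)
    dm = np.mean(np.abs(d))                  # mean adjacent-point distance
    b1 = (x >= x.mean()).astype(int)         # step 1: compare with the mean
    b2 = np.zeros(n, dtype=int)
    b2[1:] = (d >= 0).astype(int)            # step 2: 0 when x_i - x_{i-1} < 0
    flag = np.zeros(n, dtype=int)
    flag[1:] = (np.abs(d) >= dm).astype(int)  # Eq. (4)
    b3 = np.zeros(n, dtype=int)
    b3[1:] = 1 - (b2[1:] ^ flag[1:])         # Eq. (5): NOT(b2 XOR Flag); b3(1)=0
    return [f'{p}{q}{r}' for p, q, r in zip(b1, b2, b3)]

def elzc(x):
    """ELZC: LZ count of the 8-state sequence, normalized with log base 8."""
    syms = elzc_symbols(x)
    n = len(syms)
    return lz_count(syms) * (np.log2(n) / 3) / n   # log_8(n) = log2(n)/3
```

As a sanity check, `lz_count` reproduces the classic example: the string "0001101001000101" parses into six components.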
For each vector [TeX:] $$x_{i},$$ the number of distances [TeX:] $$d_{i j}$$ within r×Std is found, where Std is the standard deviation of the time series S, and the ratio of this number to the total number of vectors N-m+1 is calculated as [TeX:] $$C_{i}^{m}(r):$$
(7)[TeX:] $$C_{i}^{m}(r)=\frac{1}{N-m+1} \operatorname{num}\left\{d_{i j}<r\right\}, i=1,2, \ldots, N-m+1$$

Then the average degree of similarity over all i is defined as

(8)[TeX:] $$\varphi^{m}(r)=\frac{1}{N-m+1} \sum_{i=1}^{N-m+1} \ln C_{i}^{m}(r)$$
Similarly, when the embedding dimension is m+1, the corresponding [TeX:] $$C_{i}^{m+1}(r) \text { and } \varphi^{m+1}(r)$$ can be obtained:
(9)[TeX:] $$C_{i}^{m+1}(r)=\frac{1}{N-m} \operatorname{num}\left\{d_{i j}<r\right\}, i=1,2, \ldots, N-m$$

(10)[TeX:] $$\varphi^{m+1}(r)=\frac{1}{N-m} \sum_{i=1}^{N-m} \ln C_{i}^{m+1}(r)$$
Then, the ApEn is defined as follows:

(11)[TeX:] $$\operatorname{ApEn}(m, r)=\lim _{N \rightarrow \infty}\left[\varphi^{m}(r)-\varphi^{m+1}(r)\right]$$
In practice the length N is finite, so the ApEn is calculated by Eq. (12) when N is a finite number:

(12)[TeX:] $$\operatorname{ApEn}(m, r, N)=\varphi^{m}(r)-\varphi^{m+1}(r)$$
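The finite-N ApEn of Eq. (12) can be sketched directly. This is a minimal illustration; the Chebyshev (maximum-norm) distance is the usual choice for [TeX:] $$d_{i j}$$ but is an assumption here, as are the function and parameter names.

```python
import numpy as np

def apen(s, m=2, r_factor=0.2):
    """Finite-N approximate entropy: ApEn(m, r, N) = phi^m(r) - phi^{m+1}(r),
    with r = r_factor * std(s); self-matches are counted, as in Eq. (7)."""
    s = np.asarray(s, dtype=float)
    n = len(s)
    r = r_factor * np.std(s)

    def phi(dim):
        # embedded vectors x_i = (s_i, ..., s_{i+dim-1})
        vecs = np.array([s[i:i + dim] for i in range(n - dim + 1)])
        # C_i: fraction of vectors within Chebyshev distance r of vector i
        c = [np.mean(np.max(np.abs(vecs - v), axis=1) < r) for v in vecs]
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)
```

Because the self-match (distance zero) is included, every [TeX:] $$C_{i}^{m}(r)$$ is strictly positive and the logarithm is always defined.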
In fact, the SampEn is a modified complexity measure based on the ApEn. Compared with the ApEn, the SampEn does not count self-similar patterns, and it depends less on the data size. Finally, the SampEn is given as

(13)[TeX:] $$\operatorname{SampEn}(m, r, N)=-\ln \frac{B^{m+1}(r)}{B^{m}(r)}$$
where [TeX:] $$B^{m}(r)$$ is the probability that two m-dimensional vectors match, [TeX:] $$B^{m+1}(r)$$ is the probability that two (m+1)-dimensional vectors match, and m is the embedding dimension [19]. In this study, the ApEn and SampEn were calculated with the typical settings m=2 and r=0.20×SD, where SD is the standard deviation of the data series [20,21].

2.4 Testing Performance on Identifying the Irregularity within Time Series

The inherent irregularity within an ECG, especially within abnormal signals, differs from the randomness caused by random noise and unexpected components, although both look disorderly. A quality index performs poorly in quality assessment when it confuses the inherent irregularity of signals with randomness. We therefore first analyzed the performance of the ApEn, SampEn, CLZC, and ELZC algorithms in discerning inherent irregularity from randomness before using them as quality indices. This test calculated the four complexities on the six typical artificial time series described in Section 2.1. For each type of series, 20 samples were employed, with lengths of 100, 500, 2000, and 5000 points.

2.5 Analyzing Sensitivity to Different Noises and Different SNRs

In ECG quality assessment, all recordings, acceptable and unacceptable alike, contain considerable noise because they are raw signals without any preprocessing. A satisfactory quality index should therefore reflect the content level of noise within the ECG, and ideally also distinguish different types of noise. To evaluate the sensitivity of the four complexity algorithms to the content level of several common noises in the ECG, this test compared the ApEn, SampEn, CLZC, and ELZC values of the five synthetic noisy ECGs described in Section 2.1 (i.e., the clean artificial ECG plus BW, EM, MA, PL, and hybrid noise) at different SNRs.
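For reference, the SampEn of Section 2.3 with the stated settings (m=2, r=0.2×SD) can be sketched as follows. The Chebyshev distance and the exclusion of self-matches follow the standard Richman-Moorman definition; this is an illustrative sketch, not the authors' implementation.

```python
import numpy as np

def sampen(s, m=2, r_factor=0.2):
    """Sample entropy: SampEn = -ln(B^{m+1}(r) / B^m(r)), counting template
    matches with the Chebyshev distance and r = r_factor * std(s)."""
    s = np.asarray(s, dtype=float)
    n = len(s)
    r = r_factor * np.std(s)

    def match_count(dim):
        # use the same number of templates (n - m) for both dimensions
        vecs = np.array([s[i:i + dim] for i in range(n - m)])
        total = 0
        for i in range(len(vecs)):
            d = np.max(np.abs(vecs - vecs[i]), axis=1)
            total += np.sum(d < r) - 1   # subtract 1 to drop the self-match
        return total

    return -np.log(match_count(m + 1) / match_count(m))
```

Using the same number of templates for both dimensions keeps the two match counts comparable, which is what removes the self-match bias that affects the ApEn.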
2.6 Verifying on the Real Data

Generally, the quality of a multi-lead ECG recording cannot be judged from the complexity value of a single lead. Therefore, statistics (the mean, maximum, and standard deviation) of the complexity values across the leads of a recording are proposed as quality indices in this study. We defined three quality indices of the ELZC as follows: let ELZCi denote the ELZC value of the ith lead of a 12-lead ECG recording, i = 1, 2, …, 12; the three quality indices are then the mean, maximum, and standard deviation of the ELZCi. For comparison, we also calculated the corresponding quality indices of the ApEn, SampEn, and CLZC. To evaluate the performance of the three ELZC quality indices, we calculated their ROCs and the corresponding AUCs on the real 12-lead ECG recordings collected from the training set of the CinC2011 and described in Section 2.1; the ROCs of the ApEn, SampEn, and CLZC were calculated similarly. Then, to calculate the sensitivity (Se) and specificity (Sp) of each quality index, a 12-lead ECG recording was marked as acceptable when its quality index was less than a preset threshold, and as unacceptable otherwise. The threshold ranged from 0 to 1 in steps of 0.005. Se and Sp are defined as follows:
(14)[TeX:] $$\begin{aligned} S e=\text {true positives / total number of unacceptable ECG recordings}\\ S p=\text { true negatives / total number of acceptable ECG recordings} \end{aligned}$$

where true positives is the number of correctly identified unacceptable recordings, and true negatives is the number of correctly identified acceptable recordings. Normalization of the data is necessary because the value ranges of the ApEn, SampEn, CLZC, and ELZC differ. Therefore, min-max normalization was employed to normalize the values of the four complexity approaches before calculating Se and Sp:

(15)[TeX:] $$x^{*}=\frac{x-\min (x)}{\max (x)-\min (x)}$$
where x and [TeX:] $$x^{*}$$ represent the original values and the normalized values, respectively.

3. Results

3.1 Results of Discerning Nonlinear Properties within Time Series

Fig. 2 shows the ApEn, SampEn, CLZC, and ELZC values of the six typical artificial time series (Gaussian noise, MIX(0.4), MIX(0.2), Logi(4.0), Logi(3.8), and the periodic series Logi(3.5)) at the four lengths 100, 500, 2000, and 5000. The ELZC and SampEn values decrease monotonically in the order Gaussian noise, MIX(0.4), MIX(0.2), Logi(4.0), Logi(3.8), Logi(3.5) at all lengths. The ApEn values also decrease monotonically in this order at lengths 500, 2000, and 5000, but fluctuate between MIX(0.4) and Logi(4.0) at length 100. The CLZC values fluctuate between MIX(0.4) and Logi(4.0) at all lengths.

3.2 Results of Sensitivity Analysis on SNR

Table 1 shows the ApEn, SampEn, CLZC, and ELZC values of the five artificial synthetic noisy ECGs (the clean artificial ECG plus BW, EM, MA, PL, and hybrid noise) at SNRs from -10 dB to 15 dB in steps of 5 dB. The ELZC values decrease monotonically with increasing SNR for all five synthetic ECGs except the clean ECG plus PL noise, for which the ELZC values increase with increasing SNR. The ApEn and CLZC values decrease monotonically with increasing SNR only for the clean ECG plus MA noise. The SampEn values fluctuate for all five synthetic ECGs.

3.3 ROC of the ELZC on Real Data

Fig. 3 shows the ROC curves and corresponding AUCs of the three quality indices defined in Section 2.6 for the ApEn, SampEn, CLZC, and ELZC when classifying the quality of the real 12-lead ECG recordings.
The ROC curves of the ELZC are higher than those of the ApEn, SampEn, and CLZC for all three quality indices. The AUCs of the three ELZC indices are 0.505, 0.536, and 0.596, respectively, while the AUCs of the corresponding indices of the ApEn, SampEn, and CLZC are lower than 0.500.

Table 1.
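The evaluation pipeline of Section 2.6 (per-lead complexity values reduced to three indices, min-max normalization across recordings, then Se and Sp at a threshold) can be sketched as follows; the function names are illustrative assumptions.

```python
import numpy as np

def quality_indices(lead_complexities):
    """Three quality indices from the 12 per-lead complexity values."""
    v = np.asarray(lead_complexities, dtype=float)
    return {'mean': v.mean(), 'max': v.max(), 'std': v.std()}

def minmax(x):
    """Min-max normalization of index values to [0, 1], per Eq. (15)."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

def se_sp(index_values, is_unacceptable, threshold):
    """A recording is called unacceptable when its normalized index is at or
    above the threshold. Se = TP / #unacceptable, Sp = TN / #acceptable."""
    pred = np.asarray(index_values) >= threshold
    truth = np.asarray(is_unacceptable, dtype=bool)
    se = np.sum(pred & truth) / np.sum(truth)
    sp = np.sum(~pred & ~truth) / np.sum(~truth)
    return se, sp
```

Sweeping `threshold` from 0 to 1 in steps of 0.005 and recording (1 - Sp, Se) pairs traces out the ROC curve used in Fig. 3.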
4. Discussion

Fig. 2 shows that the ELZC and SampEn perform satisfactorily in discerning the inherent irregularity within time series from the randomness caused by random noise and unexpected components at all lengths (100, 500, 2000, and 5000). The ApEn achieves satisfactory performance on short signals (length 100) but performs poorly on medium and long signals (lengths 500, 2000, and 5000). The CLZC confuses inherent irregularity with randomness at all lengths. The ELZC performs well because it computes complexity on a new sequence generated from the original time series; this sequence properly reflects the inherent properties of the series rather than its randomness, because the transformation preserves the inherent information of the original series while discarding random components, i.e., noise and unexpected components [14]. Fig. 2 thus indicates that the ELZC and SampEn meet a fundamental requirement for assessing ECG quality, namely separating inherent irregularity from randomness in seemingly disorderly signals, whereas the ApEn and CLZC do not. However, the SampEn relies on two parameters, m and r, so its performance may degrade when m and r are set to other values. In contrast, the ELZC discerns the inherent irregularity and randomness within signals more reliably than the other complexity approaches, i.e., the ApEn, SampEn, and CLZC. In the second test, randomness caused by noise and unexpected components is negligible in the clean artificial ECG, because few random components are contained in signals generated by the open-source software ECGSYN.
The SNRs of the synthetic noisy ECGs (the clean ECG plus BW, EM, MA, PL, or the hybrid noise) can therefore be considered to reflect the content level of the added noises, and the randomness within the synthetic noisy ECG increases as the SNR decreases. Pure PL noise is a periodic signal whose irregularity is nearly zero, so the irregularity of the clean artificial ECG plus PL increases with increasing SNR. Table 1 shows that the ELZC exhibits the proper trend on all five synthetic noisy ECGs, whereas the ApEn and CLZC exhibit incorrect trends on all synthetic ECGs except the clean ECG plus MA and the clean ECG plus PL, and the SampEn exhibits improper trends on all five synthetic ECGs. These results mean that the ELZC can measure the content level of random noise in the ECG, but the ApEn, SampEn, and CLZC cannot. Fig. 3 shows that the three quality indices, i.e., the mean, maximum, and standard deviation of the ELZC of the real 12-lead ECG recordings, achieve relatively satisfactory classification performance, and their AUCs (0.505, 0.536, and 0.596) are statistically significant (>0.500). The ELZC indices can therefore be employed for classifying the quality of multi-lead ECGs. Conversely, the AUCs of the three indices of the ApEn, SampEn, and CLZC are lower than 0.500, so they cannot be used to assess multi-lead ECG quality. In practice, the three ELZC quality indices would be employed in conjunction with other time-domain and frequency-domain quality indices.

5. Conclusions

In this study, we compared a novel ELZC with the ApEn, SampEn, and CLZC for multi-lead ECG quality assessment.
The results indicate that the ELZC achieves satisfactory performance in distinguishing randomness from inherent irregularity within time series, and that it can also efficiently reflect the content level of noise in the ECG, exhibiting the proper trend with increasing SNR on all five synthetic noisy ECGs. Finally, we proposed three quality indices derived from the complexity of the multi-lead ECG and validated them on the real 12-lead ECG signals from the CinC2011 database. The results indicate that the three ELZC indices (the mean, maximum, and standard deviation) are statistically significant for multi-lead ECG quality assessment. In practice, the three ELZC quality indices need to be combined with other time- and frequency-domain indices for ECG quality assessment.

Biography

Yatao Zhang https://orcid.org/0000-0002-6152-0806

He received his Ph.D. degree in Biomedical Engineering from Shandong University, China, in 2015. He is currently a senior lecturer at the School of Mechanical, Electrical & Information Engineering, Shandong University, Weihai, China. He has published nine original journal/conference papers, and his research interests mainly focus on ECG signal processing, machine learning, data mining, and big data processing for physiological signals.

Biography

Zhenguo Ma https://orcid.org/0000-0001-7660-735X

He received his B.S. degree from the School of Mechanical, Electrical & Information Engineering, Shandong University, Weihai, China, in June 2018. He is now a graduate student in the School of Computer Science and Technology, University of Science and Technology of China. His current research interests include ECG signal classification and machine learning.

Biography

Wentao Dong https://orcid.org/0000-0001-8632-3473

He received his M.S. degree from the Department of Clinical Medicine, Weifang Medical University, Weifang, China, in 2007.
He is now a physician in the Department of Cardiovascular Surgery, Weihai Municipal Hospital, Weihai, China. His current research interests include the classification of arrhythmia and the annotation of ECG.

References