SungChul Byun, Seong-Soo Han, and Chang-Sung Jeong

A Manually Captured and Modified Phone Screen Image Dataset for Widget Classification on CNNs

Abstract: The applications and user interfaces (UIs) of smart mobile devices are constantly diversifying. In this context, deep learning can be an innovative solution for classifying widgets in screen images and thereby increasing convenience. To this end, the present research leverages captured images and the ReDraw dataset to construct deep learning datasets for image classification. First, as validation of the datasets, experiments with ResNet-50 and EfficientNet show that the dataset composed in this study is helpful for classification according to a widget's functionality. An implementation of widget detection and classification with RetinaNet and EfficientNet is then executed. Finally, the research presents the Widg-C and Widg-D datasets—deep learning datasets for identifying the widgets of smart devices—and implements them with representative convolutional neural network (CNN) models.

Keywords: Captured Image, CNN, Deep Learning Dataset, Image Classification, Object Detection, Widget

1. Introduction

Smart mobile devices offer various applications to users, and due to this convenience, their number of users is increasing worldwide. Accompanying this increase in the number of smartphone users is a growth in the development of applications and user interfaces (UIs) for the Android operating system (OS) and the iPhone operating system (iOS). However, these more diverse applications necessitate the creation of more platform types and widget forms. This diversity of applications may confuse users, and the test environment for smartphone production also becomes more complicated. In recent years, artificial neural networks (ANNs) have been utilized in various fields. For example, convolutional neural networks (CNNs) are widely used and show high performance in computer vision fields, such as image classification [1], object detection [2], and real-time processing [3]. In addition, CNNs' fields of application are constantly growing and now include natural language processing [4] and voice detection [5]. Given CNNs' outstanding performance and utilization, functional smartphone widget image classification using deep learning has also been proposed. For example, Moran et al. [6] constructed the ReDraw dataset and classified various graphical user interfaces (GUIs) in the Android OS by using deep learning. Notably, the image classification model trained with the ReDraw dataset performed with 91% accuracy on its own test set and 86% accuracy on validation tests using newly captured screen images, leaving room for improvement in smart devices' GUI classification using deep learning. The primary purpose of this study is to improve convenience by identifying widgets with deep learning, which is achieved by constructing a dataset that supports detecting and classifying widgets according to their roles in screen-captured images. The research proposes a method to detect widget locations and predict their functionality by applying two CNN models to a full-screen image. As a result, the study achieved improvements in GUI image classification performance. Two datasets were constructed, one for widget classification and one for widget detection—the Widg-C and Widg-D datasets, respectively—that are more generalized and balanced than the datasets used in previous studies.
The Widg-C and Widg-D datasets are generalized by reducing redundancy and errors, a process that is reinforced by employing the template method and by collecting various captured images. These refining processes made the datasets more suitable for training CNN models. The overall composition of this paper is as follows. Section 2 discusses the existing methods and models for object detection and image classification using a CNN and then describes the existing dataset for widget classification, i.e., the ReDraw dataset. Section 3 proposes the Widg-C and Widg-D datasets, while Section 4 compares and validates the proposed scheme against the extant dataset. An experimental implementation of the proposed widget detection and classification scheme with full screen-captured images is conducted for practical uses.

2. Related Works

This section introduces the background of the dataset utilized in the classification of widgets and explains the concepts related to CNNs as well as representative CNN models for object detection and image classification.

2.1 Convolutional Neural Network

Currently, various CNNs are applied in several computer vision and deep learning fields, and CNNs' image classification performance has even surpassed that of humans. This study will utilize representative CNN models to implement the detection and classification of widgets using deep learning and will adopt EfficientNet [7] as the CNN model for image classification. Furthermore, the most representative model, ResNet-50 [8] (ResNet with 50 layers), which is used in various deep learning fields [9] due to its effective residual learning method, will be used to evaluate the classification performance. For widget boundary detection, RetinaNet [10] using ResNet-50 as a backbone network will be implemented. These are the three representative CNN models used throughout this study.

2.2 ReDraw Dataset

The ReDraw dataset, constructed by Moran et al. [6], is a deep learning dataset for classifying GUIs in smart mobile devices. The ReDraw dataset consists of synthetic images created by mocking up actual widgets and organic images collected automatically from the top 250 Android apps as determined by their popularity on Google Play. Moran et al. [6] cropped these collected screen images and classified them according to GUI functionality. The ReDraw dataset is augmented to address data imbalances and is cropped to improve data diversification. The dataset is divided into 16 item classes: Button, CheckBox, CheckedTextView, EditText, ImageButton, ImageView, NumberPicker, ProgressBarHorizontal, ProgressBarVertical, RadioButton, RatingBar, SeekBar, Spinner, Switch, TextView, and ToggleButton. The organic images comprise 143,170 training images, 29,040 validation images, and 19,090 test images. As a result, the deep learning GUI classifier using the ReDraw dataset achieved an accuracy of 91%, a performance that leaves room for improvement. The present research uses the same 16 classes as ReDraw, having determined that these 16 classes are appropriate for classifying widgets' functions based only on their appearance in an image. However, the ReDraw dataset has problems with misclassifications and image redundancy. Moreover, there are images that have been cropped in a manner that severely damages their features. These problems can interfere with the convergence of deep learning models and cause errors.
Furthermore, due to a GUI image's characteristics, cropping can confuse the model and make the GUI features unrecognizable. On a newly collected test dataset, the classifier trained with ReDraw achieved 86% accuracy, revealing that ReDraw has room to improve its performance by correcting these problems in the deep learning dataset.

3. Dataset Description

This section introduces the Widg-C and Widg-D datasets—deep learning image datasets for widget classification and detection, respectively. It also describes how the datasets were created and their composition.

3.1 Full Screen-Captured Image Collection and Cropped Widgets

The datasets were collected by sorting and capturing highly accessible screens that are frequently exposed to users. All the captured images were manually saved, and the bounding box information was defined by using the BoundingBoxerImg tool [11]. The bounding box regions were separated into seven classes—text, image, edit, navi, status, button, and region—which were at first specified arbitrarily. The images were then cropped from the full screen-captured image using the bounding box coordinates. Afterwards, the cropped images were classified into the same 16 widget classes as in ReDraw. As an example, Fig. 1 shows a full screen-captured image with its bounding boxes visualized and the images cropped from the screen images after being reclassified into the 16 widget classes.

3.2 Widg-C Dataset for Widget Classification

The original dataset was modified by removing images to reduce redundancy and data imbalances. As a result, the size of the ReDraw dataset was reduced to half of the original. The training dataset grew to 74,771 images by adding 14,373 images captured and cropped from full screen-captured images to the modified ReDraw dataset of 60,398 images. The validation dataset consists of 22,297 images, comprising 18,697 images from ReDraw's training and validation datasets plus 3,600 images that were captured and cropped manually. Table 1 presents the configuration of the Widg-C dataset.

Table 1. Configuration of the Widg-C dataset
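The cropping step described in Section 3.1 can be summarized with a short sketch using Pillow. This is a minimal illustration rather than the actual tooling: the annotation layout assumed here (one "label x_min y_min x_max y_max" line per box) is hypothetical, since the exact txt format produced by BoundingBoxerImg is not specified in this paper.

from pathlib import Path
from PIL import Image

def crop_widgets(screenshot_path, annotation_path, out_dir):
    """Crop widget images out of a full screen capture using its bounding boxes.

    Assumed annotation format (one box per line): <label> <x_min> <y_min> <x_max> <y_max>
    """
    screen = Image.open(screenshot_path)
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    for i, line in enumerate(Path(annotation_path).read_text().splitlines()):
        if not line.strip():
            continue
        label, x_min, y_min, x_max, y_max = line.split()
        box = tuple(int(v) for v in (x_min, y_min, x_max, y_max))  # (left, upper, right, lower)
        widget = screen.crop(box)
        # The cropped widgets are later reclassified by hand into the 16 ReDraw classes.
        widget.save(out_dir / f"{label}_{i:04d}.png")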
3.3 Widg-D Dataset for Widget Detection

The training dataset for the object detection model, based on the bounding box information defined from the full screen-captured image data collected earlier, is proposed as the Widg-D dataset. This dataset uses the seven classes mentioned above. Since the BoundingBoxerImg tool provides annotations in txt format, including coordinates and labels, the training dataset was created by converting the annotations to a csv file and an xml file in the PASCAL Visual Object Classes (VOC) format [12]. The PASCAL VOC dataset is a common and popular dataset in the field of object detection using deep learning. Research on object detection using the PASCAL VOC [13] is being actively conducted to improve the performance of deep learning models [14] or to design models more efficiently [15]. The Widg-D dataset was created by referring to the dataset format of the PASCAL VOC. The heat map in Fig. 2 shows the distribution of components according to the detection and classification classes of the screen images that were collected and annotated with bounding boxes.

3.4 Template Image Dataset Supplementation for Better Region Proposal

To detect widgets in a screen-captured image, the model needs to be trained on intact widgets; therefore, flipping and rotating the images can harm the features learned during training. Consequently, a screen image template dataset was composed. This dataset aims to resolve data imbalances by reinforcing underrepresented widget classes. To replenish the images in the region, edit, and button classes, 100 images for each of the three template types and 100 mixed templates containing all three targeted widgets were generated. As a result, 400 images in total were added to the Widg-D dataset. Each template was composed by pasting previously collected and cropped widget components onto a background with a resolution of 1980×1080. The components were selected by random sampling without replacement. A status bar is placed on top of the background, and a navigation bar is placed at the bottom of every template. Components that exceed the template resolution are resized to the largest size that fits and then pasted, while components that fit are pasted at their original size. Each component is pasted at a random interval on the y-axis and only in an empty area. The insertion of components is repeated until there is no available y-axis space. Since a template contains one or two columns of components from side to side, small components can be pasted into either half of a two-column layout or into an unpartitioned space, and large components are pasted without partitioning, even if the template has two columns. Components that contain other widgets, such as regions, are copied together with their contents so that their inclusion information is preserved. Thanks to the sampling without replacement, the templates exploit the manually constructed widget dataset as diversely as possible. The described template composition method can prevent detection models from memorizing component locations or spacings and from overfitting due to the widgets' characteristics. Fig. 3 is an example of the region and mixed templates. There is also an additional class, named "small," to assist in the detection of smaller widgets; widgets are transferred to the small category if their area is less than 600 pixels. The configuration of the Widg-D dataset, including the modifications and the template image dataset, is shown in Table 2.
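To make the template composition of Section 3.4 concrete, the following is a minimal sketch written with Pillow. It is only an illustration under stated assumptions, not the generator used for the dataset: the component directory layout, the fixed status-bar and navigation-bar images, the single-column stacking, and the gap range are all assumptions, and the two-column partitioning and region-transcription rules described above are omitted for brevity.

import random
from pathlib import Path
from PIL import Image

TEMPLATE_W, TEMPLATE_H = 1980, 1080   # background resolution reported above

def make_template(component_dir, status_bar_path, navi_bar_path, out_path, max_gap=80):
    """Paste cropped widget components onto a blank background.

    Components are drawn by random sampling without replacement and stacked
    top to bottom at random y-axis intervals until no vertical space remains.
    """
    canvas = Image.new("RGB", (TEMPLATE_W, TEMPLATE_H), "white")
    status = Image.open(status_bar_path)
    navi = Image.open(navi_bar_path)
    canvas.paste(status, (0, 0))                        # status bar on top
    canvas.paste(navi, (0, TEMPLATE_H - navi.height))   # navigation bar at the bottom

    pool = list(Path(component_dir).glob("*.png"))
    random.shuffle(pool)                                # sampling without replacement
    y = status.height
    while pool:
        comp = Image.open(pool.pop())
        if comp.width > TEMPLATE_W:                     # resize oversized components to fit
            scale = TEMPLATE_W / comp.width
            comp = comp.resize((TEMPLATE_W, int(comp.height * scale)))
        y += random.randint(10, max_gap)                # random vertical interval
        if y + comp.height > TEMPLATE_H - navi.height:  # no vertical space left
            break
        canvas.paste(comp, (random.randint(0, TEMPLATE_W - comp.width), y))
        y += comp.height
    canvas.save(out_path)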
Categorizing and generalizing GUI components with myriad forms based on functionality is a complex subject. The modification of the ReDraw dataset was intended to increase the dataset's universality while eliminating data imbalances. In addition, multiple apps were captured, and randomly created templates were added to the dataset to ensure generality. The Widg-C and Widg-D datasets secure the universality of the GUI image dataset in the Android OS through this data diversification process.

4. Experiment

To compare the widget classification performance of models trained with the Widg-C and ReDraw datasets, ResNet-50 was used, and EfficientNet was utilized to validate the Widg-C dataset's classification accuracy. RetinaNet, which uses ResNet-50 as a backbone network, was employed as the detection module by training it with the Widg-D dataset to validate the practicality of widget classification from full-screen images. Finally, using RetinaNet's bounding box predictions, a complete implementation of widget classification was executed by classifying the detailed features of each widget with the EfficientNet model trained on the Widg-C dataset. All experiments were implemented by leveraging Keras embedded in the TensorFlow 2.3.0 library in Python 3.6.13. The two models were trained using a learning rate of $10^{-5}$, and the softmax function was used as the activation function. The EfficientNet for image classification was trained with a batch size of 16 using the RMSprop optimizer introduced in Hinton's lecture [16]. The RetinaNet for widget detection was trained with a batch size of 4 using an Adam optimizer with the gradient clipping constant set to 0.001 [17].

4.1 Comparison between the ReDraw and Widg-C Datasets

To compare the classification performance, two ResNet-50 models were trained on the two datasets for 30 epochs, and the changes in loss and accuracy were compared. Because a dataset's size can affect changes in loss and accuracy, we randomly sampled the ReDraw dataset to construct training and validation sets of the same size as the Widg-C dataset. The training and validation accuracy and loss for the two datasets are shown in Fig. 4.

4.2 Accuracy of the CNNs Trained with the Widg-C Dataset

To verify the classification performance on practical screen-captured images, a test dataset of 2,754 images was collected in the same manner as the Widg-C dataset. This test dataset was generated by selecting images that are likely to be easily exposed to users. Its configuration is shown in Table 3. The EfficientNet B0 and B3 models were trained for 30 epochs using the Widg-C dataset with the same training method as ResNet-50. The test results validating the performance of ResNet-50 and EfficientNet are shown in Table 4. As shown in Table 4, all three models performed with high accuracy rates of more than 95%. The CNN models trained with the Widg-C dataset were not only able to classify widget images but were also able to achieve high accuracy on functional widget classification.

Table 3. Configuration of the test dataset
Table 4. Test results of the ResNet-50 and EfficientNet models trained with the Widg-C dataset
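As a concrete illustration of the training setup described in Section 4, the following minimal Keras sketch shows how the reported hyperparameters could be wired together under TensorFlow 2.3. Only the numbers are taken from this paper (a learning rate of $10^{-5}$, RMSprop with a batch size of 16 for the EfficientNet classifier, and Adam with a gradient clipping constant of 0.001 and a batch size of 4 for RetinaNet); the 16-way softmax head, the 224×224 input size, the categorical cross-entropy loss, and reading the clipping constant as Keras' clipnorm argument are assumptions, and the RetinaNet model itself would come from a separate implementation such as the keras-retinanet package.

import tensorflow as tf

LEARNING_RATE = 1e-5      # learning rate used for both models (Section 4)
NUM_CLASSES = 16          # widget classes shared with ReDraw

def build_classifier() -> tf.keras.Model:
    """EfficientNet B0 with a softmax head over the 16 widget classes."""
    base = tf.keras.applications.EfficientNetB0(
        include_top=False, weights="imagenet", input_shape=(224, 224, 3))
    features = tf.keras.layers.GlobalAveragePooling2D()(base.output)
    outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(features)
    model = tf.keras.Model(base.input, outputs)
    model.compile(
        optimizer=tf.keras.optimizers.RMSprop(learning_rate=LEARNING_RATE),
        loss="categorical_crossentropy",   # assumed loss for the softmax output
        metrics=["accuracy"])
    return model

def detection_optimizer() -> tf.keras.optimizers.Optimizer:
    """Adam optimizer for RetinaNet; the gradient clipping constant of 0.001
    is interpreted here as Keras' clipnorm argument."""
    return tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE, clipnorm=0.001)

# Example usage (data loading omitted; the classifier is trained for 30 epochs
# with a batch size of 16, and RetinaNet for 60 epochs with a batch size of 4):
# classifier = build_classifier()
# classifier.fit(x_train, y_train, epochs=30, batch_size=16,
#                validation_data=(x_val, y_val))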
4.3 Implementation for Full Screen Image Detection and Classification

This experiment implemented widget region and feature estimation using CNNs on real-world full screen-captured images accessible to users. The experiment employed the object detection model trained with the Widg-D dataset and the image classification model trained with the Widg-C dataset. The bounding box information obtained from the object detection model's predictions was used to infer the function of each detected widget with the image classification model. RetinaNet, with ResNet-50 as its backbone network, was trained for 60 epochs on the Widg-D dataset and combined with the EfficientNet B3 model trained earlier for classification. Fig. 5 shows a sample widget prediction result for full screen-captured images using RetinaNet and EfficientNet B3. As a result of the implementation, the detection and classification models detected regions and guessed the widgets' functions accurately. However, there were also undetected regions and inaccurate boundary predictions.

5. Conclusion

The market for smart mobile devices and their applications is constantly changing and diversifying and thus often presents complexities for both users and engineers. The present research proposes to solve these problems with deep learning using CNNs. Modifying the existing dataset and adding manually captured and cropped images from multiple apps created a universal and diverse dataset. Furthermore, artificially generated template images reinforced the dataset's balance. As a result, this more generalized dataset prevents deep learning models from being wrongly trained. After training several representative CNNs with the Widg-C dataset, the Widg-C dataset was deemed suitable for classifying the widgets of screen-captured images. Deep learning models using the Widg-C dataset achieved accuracy rates of more than 95% on functional widget image classification. In addition, the object detection model demonstrated the possibility of predicting not only widget functions but also their locations for users using only images. Thus, the deep learning technique using the Widg-C and Widg-D datasets could provide guidelines for various screen functions without interacting with the device in question. However, the Widg-C and Widg-D datasets' size needs to be enlarged. The ReDraw training dataset has 143,170 images, while the Widg-C dataset, with 74,771 images, is about half its size and therefore requires additional data to be suitable for deep learning. Furthermore, widget images are diverse in their shapes and appearances; hence, the dataset needs to be made more versatile by being supplemented with more varied kinds of images. Cropped screen-captured images also vary greatly in size, which complicates training CNN models. Moreover, this variance raises concerns about failures in feature detection during resizing and preprocessing. Due to these problems, we propose adjusting the input of the learning model. Furthermore, for the TextView widget, more accurate region extraction is expected by leveraging models aimed at optical character recognition (OCR). The suggested method currently has problems with the accurate extraction of regions and with component losses. Deep learning models also need to be trained with more classes and deeper representations so that the handling of excessively small or large features can be improved.
Currently, we are collecting more captured images to reinforce the dataset and are designing a widget classifier for full-screen images that utilizes both region proposal and image classification models. Furthermore, we aim to build an identifier for captured images from any smart mobile device that will make it easier for users to access parts of applications and for producers or developers to more easily simulate and test various functions.

Biography

Seong-Soo Han
https://orcid.org/0000-0002-4915-6247
He is a professor in the Division of Liberal Studies at Kangwon National University. Before joining Kangwon National University in 2019, he was a professor at Soonchunhyang University. He received a B.S. in Information and Communication Engineering from Gyeongsang National University, an M.S. in Information and Communication Engineering from Soonchunhyang University, Korea, in 2005, and a Ph.D. in Visual Information Processing from Korea University in 2019. He was a Director of Orion Technology in 2015–2016. His research interests include computer education, AI, blockchain, deep learning, and distributed parallel processing.

Biography

Chang-Sung Jeong
https://orcid.org/0000-0001-9654-8406
He is a professor in the Department of Electrical Engineering at Korea University. Before joining Korea University in 1992, he was a professor at POSTECH from 1982 to 1992. He was on the editorial board for the Journal of Parallel Algorithms and Application from 1992 to 2002. In addition, he was a chair of the IEEE Seoul Section and has been working as a chair of the Computer Chapter in the Seoul Section of IEEE Region 10. He was chair of the EE Department at Korea University and a leader of the BK21 project. His research interests include distributed parallel computing, cloud computing, networked virtual environments, and distributed parallel deep learning for real-time image processing and visualization.

References