conditioning augmentation

Along with this context, this article focuses on (i) the development of a recommendation system based on a modified autoencoder, a typical deep learning technique in this research field that shows outstanding performance in feature extraction, (ii) the application of data augmentation, which is often used to address data scarcity or limitation, one of the main challenges of recommender systems [3], and (iii) showing that the proposed model can be applied to both tasks, collaborative filtering and content-based filtering, with consistent performance. The modified recommendation system, based on a convolutional autoencoder, learns features of the patterns represented by user ratings, i.e., the user-item rating matrix. The proposed model uses a vanilla autoencoder as the basic structure, combined with a convolutional feature-extraction layer that takes as input an encoded vector built from preprocessed ratings or scores.

The contribution of this paper can be summarized in three points: i) conditioning augmentation, a data augmentation technique that uses the encoded vector to deal with the data sparsity problem that is central to the recommender system field; ii) the model can take both types of input, a content-based encoded vector or a rating-based encoded vector, where a content-based vector can represent attributes of an item such as reviews, quality, and various categorical attributes of the corresponding product; and iii) the proposed model is applicable to both collaborative filtering and content-based filtering with consistent performance. A rough architectural sketch is given below.
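As a rough illustration of this architecture, the following PyTorch sketch places a one-dimensional convolutional feature-extraction layer in front of a vanilla encoder/decoder. The layer sizes and latent dimension are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class ConvAutoencoderRecommender(nn.Module):
    """Minimal sketch: a vanilla autoencoder with a 1-D convolutional
    feature-extraction layer in front of the encoder. Sizes are assumed."""

    def __init__(self, n_items: int, latent_dim: int = 64):
        super().__init__()
        # Convolutional feature extraction over the encoded input vector
        self.conv = nn.Sequential(
            nn.Conv1d(in_channels=1, out_channels=8, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.encoder = nn.Sequential(
            nn.Linear(8 * n_items, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, n_items),  # reconstructed rating vector
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_items) rating-based or content-based encoded vector
        h = self.conv(x.unsqueeze(1))            # (batch, 8, n_items)
        z = self.encoder(h.flatten(start_dim=1)) # latent representation
        return self.decoder(z)
```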

Background

Ubiquity of deep neural network and recommender system

Since various types of deep neural networks are applied to the provision of online services such as over-the-top (OTT) services, recommender systems constitute a large part of online business [8]. Subsequently, research on recommender systems based on deep learning has increased both quantitatively and qualitatively; for example, the number of papers submitted to RecSys, the ACM conference on recommender systems, has increased by 66 percent [4].

As mentioned above, the boom in deep learning techniques has accelerated research on recommender systems, especially in feature extraction [10]. For example, deep learning techniques such as autoencoders, generative adversarial networks, and multilayer perceptrons have proven particularly effective at learning representations from user-item information, a task traditionally handled by matrix factorization.

Unstructured data

  • Image data
  • Text data
  • Tabular data
  • Video
  • Preprocessing

Textual data, such as documents, ratings, or even movie scripts, is a typical example of unstructured data. In most cases, textual data consists of a corpus containing different types of metadata, linguistic features, and semantic features that can be used in many applications [15]. In addition, video data provides other types of information, such as the number of views, the viewing history of each user, and the number of likes and dislikes of the corresponding video.

Text data contains various types of noise such as punctuation, emoticons, and several kinds of abbreviations that have become prevalent as social networking services have evolved. Preprocessing of image data includes not only the typical preprocessing steps of deep learning pipelines, such as normalization and vectorization, but also specific processes such as geometric transformation, augmentation, and so on [19].
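A minimal example of the kind of text cleaning described above (lowercasing, stripping URLs, punctuation, and extra whitespace) might look as follows; the paper does not specify its exact preprocessing steps, so this is only an assumed baseline.

```python
import re
import string

def clean_text(doc: str) -> str:
    """Toy text-cleaning step: lowercase, drop URLs, strip punctuation and
    extra whitespace. Real pipelines would also handle emoticons,
    abbreviations, and tokenization with a dedicated NLP library."""
    doc = doc.lower()
    doc = re.sub(r"http\S+", " ", doc)                              # drop URLs
    doc = doc.translate(str.maketrans("", "", string.punctuation))  # drop punctuation
    return re.sub(r"\s+", " ", doc).strip()

print(clean_text("Loved it!! 10/10 :) see http://example.com"))
```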

Figure 3: User-item matrix

Recommendation system

Collaborative filtering

Content-based filtering

Main challenges in recommender system

This section introduces various types of deep learning techniques used in recommendation systems with a brief explanation, including multi-layer perceptron, autoencoder, joint learning, and generative adversarial net. In addition, a category table will be provided containing recent publications related to corresponding techniques of current trends. Most of the algorithms in Netflix's recommendation services are based on deep learning recommendation system techniques, including personalized video ranker (PVR) and top-N video ranker [30].

On the other hand, the top-N ranking algorithm suggests several items that are most frequently selected at that time, which means it reflects the general preference of users at that moment more than the PVR algorithm does. These two algorithms are designed as multi-layer perceptrons that take multiple behaviour vectors, such as viewing and search histories, as input to predict ratings of items. In general, users on the Internet decide what to watch based on their preferences and relative needs [31].
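A hedged sketch of such an MLP-based predictor is shown below; the input features (viewing and search history vectors) and their dimensions are assumptions made for illustration, not the production design.

```python
import torch
import torch.nn as nn

class MLPRanker(nn.Module):
    """Illustrative multi-layer perceptron that maps concatenated user
    behaviour vectors (e.g. viewing and search histories) to a predicted
    rating or ranking score. Dimensions are assumed."""

    def __init__(self, view_dim: int = 128, search_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(view_dim + search_dim, 128), nn.ReLU(),
            nn.Linear(128, 32), nn.ReLU(),
            nn.Linear(32, 1),  # predicted rating / ranking score
        )

    def forward(self, view_vec: torch.Tensor, search_vec: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([view_vec, search_vec], dim=-1))
```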

After the autoencoder learns the representation from the inputs, it creates a latent space z representing each item. In the next step, recommendation sets are extracted based on cosine similarity, given that the input features are composed of textual data.
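The cosine-similarity step could be implemented along these lines, assuming the encoder's latent vectors for all items are stacked into a matrix z of shape (num_items, latent_dim).

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def top_n_similar(z: torch.Tensor, query_idx: int, n: int = 10) -> torch.Tensor:
    """Rank items by cosine similarity to a query item in the latent space z."""
    sims = F.cosine_similarity(z[query_idx].unsqueeze(0), z, dim=-1)
    sims[query_idx] = -1.0                 # exclude the query item itself
    return torch.topk(sims, n).indices     # indices of the n most similar items
```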

Figure 6: Recommendation example page of Netflix (Carlos A et al. 2015)

Joint learning of deep and wide components, Google

To address this problem, the wide component memorizes feature interactions, such as cross-product feature transformations. As a result, the jointly trained model with deep and wide components can take advantage of both memorization and generalization [9]. In addition, the system was deployed and evaluated on Google Play, a commercial mobile app store with approximately one billion active users, making the system more robust.
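The following is a minimal, illustrative sketch of the wide & deep idea, not Google's production system: a linear "wide" part over sparse cross-product features (memorization) joined with a "deep" MLP over dense embeddings (generalization), trained jointly toward one output. All dimensions are assumptions.

```python
import torch
import torch.nn as nn

class WideAndDeep(nn.Module):
    """Sketch of joint wide & deep learning: both components contribute to a
    single logit and are optimized together."""

    def __init__(self, wide_dim: int = 1000, deep_dim: int = 64):
        super().__init__()
        self.wide = nn.Linear(wide_dim, 1)          # linear model over cross-product features
        self.deep = nn.Sequential(                  # MLP over dense embedding features
            nn.Linear(deep_dim, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, wide_x: torch.Tensor, deep_x: torch.Tensor) -> torch.Tensor:
        # Joint prediction: sum of wide and deep logits, squashed to a probability
        return torch.sigmoid(self.wide(wide_x) + self.deep(deep_x))
```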

Recommendation system with generative adversarial net

  • Application of autoencoder to collaborative filtering
  • Denoising autoencoder
  • Convolutional autoencoder
  • Convolutional block
  • Conditioning augmentation
  • Application on content-based filtering recommendation
  • Application on collaborative filtering recommendation

This section presents not only the proposed model but also the overall modeling process for addressing the data limitation problem. In addition, to make the model more robust, artificial noise is added to the raw input X, which is called corruption. The process by which the decoder removes this corruption from the input X makes the model more robust [36].

The learning objective of the model is to minimize the reconstruction error between the raw data X and the predicted values. The type of activation function plays an important role in model performance [1], so it should be chosen carefully. Conditioning augmentation (CA), the main concept used to solve the data limitation problem, was first introduced in StackGAN (2017).
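A small sketch of the corruption step and the reconstruction objective, assuming mean squared error as the reconstruction loss and random zero-masking as the corruption scheme (the paper's exact choices may differ):

```python
import torch
import torch.nn.functional as F

def corrupt(x: torch.Tensor, drop_prob: float = 0.2) -> torch.Tensor:
    """Denoising-style corruption: randomly zero out entries of the raw input X
    so the autoencoder must reconstruct the clean signal."""
    mask = (torch.rand_like(x) > drop_prob).float()
    return x * mask

def reconstruction_loss(model, x: torch.Tensor) -> torch.Tensor:
    """Learning objective: reconstruct the clean X from its corrupted version."""
    x_hat = model(corrupt(x))
    return F.mse_loss(x_hat, x)
```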

The process of conditioning augmentation is similar to the reparameterization trick used in variational autoencoders to make backpropagation possible through the sampling step [37]. As mentioned above, denoising autoencoder techniques add artificial noise to the raw input X, while conditioning augmentation resamples the latent vector z using a mean and variance predicted by fully connected layers [36]. Conditioning augmentation appears to provide better performance in dealing with data sparsity and data limitation problems [37].
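Conditioning augmentation can be sketched with the reparameterization trick as follows; the layer dimensions are assumptions, and the module is meant to sit between the encoder output and the decoder input.

```python
import torch
import torch.nn as nn

class ConditioningAugmentation(nn.Module):
    """Sketch of conditioning augmentation: fully connected layers predict a
    mean and log-variance from the encoded vector, and a latent vector is
    resampled with the reparameterization trick so gradients still flow."""

    def __init__(self, in_dim: int = 64, latent_dim: int = 64):
        super().__init__()
        self.fc_mu = nn.Linear(in_dim, latent_dim)
        self.fc_logvar = nn.Linear(in_dim, latent_dim)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        std = torch.exp(0.5 * logvar)
        eps = torch.randn_like(std)   # fresh noise each pass -> augmented latent samples
        return mu + eps * std
```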

The prediction of the recommendation list is based on the latent space of the embedding vectors z, which is the output of the encoder. After the model learns the representation from the matrix, the decoder can predict the users' item ratings, which are then used to produce recommendations. The evaluation of the proposed model is based on this approach because this task provides an objective evaluation metric for comparing the proposed model with other baselines.
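Given the decoder's predicted rating vector for a user, a top-N list can be formed by masking already-rated items and ranking the rest; a minimal sketch:

```python
import torch

def recommend_from_predictions(pred: torch.Tensor, rated_mask: torch.Tensor, n: int = 10) -> torch.Tensor:
    """Build a top-N list from the decoder's predicted rating vector,
    excluding items the user has already rated (rated_mask == 1)."""
    scores = pred.masked_fill(rated_mask.bool(), float("-inf"))
    return torch.topk(scores, n).indices
```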

Figure 10: Model structure of denoising autoencoder.

Experimental result and analysis

Datasets

In the case of MovieLens and Netflix, experiments were conducted with each dataset reduced in size from 100 percent down to 0.025 percent to verify the influence of conditioning augmentation on the robustness of the proposed model compared with the other baseline models.

Model parameters

Evaluation metric

Finally, the ideal discounted cumulative gain (IDCG) is the DCG of an ideal recommendation list. In this task, RMSE is used to compare the predicted ratings with the true ratings of items.
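For reference, DCG, NDCG (DCG divided by IDCG), and RMSE can be computed as in this small NumPy sketch:

```python
import numpy as np

def dcg(relevances) -> float:
    """DCG of a recommendation list given per-position relevance scores."""
    rel = np.asarray(relevances, dtype=float)
    ranks = np.arange(1, len(rel) + 1)
    return float(np.sum(rel / np.log2(ranks + 1)))

def ndcg(relevances) -> float:
    """NDCG = DCG of the recommended order divided by the IDCG, i.e. the DCG
    of the ideally (descending) ordered list."""
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

def rmse(pred, true) -> float:
    """Root mean squared error between predicted and true ratings."""
    pred, true = np.asarray(pred, float), np.asarray(true, float)
    return float(np.sqrt(np.mean((pred - true) ** 2)))
```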

Experimental result and analysis

Movielens and Netflix

As shown in Tables 1 to 4, the performance of the proposed model is slightly lower than that of the state of the art when the dataset is sufficiently large, down to about 500,000 samples. However, as the dataset shrinks, the performance gap between the best model and the proposed model narrows, which means that the proposed model with conditioning augmentation is more robust to dataset size. Moreover, when the dataset contains 100,000 samples or fewer, the proposed model outperforms the state of the art.

Specifically, the proposed model achieves approximately 3 to 4 percent improvement over the state of the art. Furthermore, as Tables 1 and 2 show, the performance gap between the designs with and without CA is significant, suggesting that CA helps the model learn better representations of the input.

Flixster (Monti)

Overall comparison

References

Batchakui, “Deep learning methods on recommender system: A survey of the state of the art,” International Journal of Computer Applications, vol.
Ispir et al., “Wide & deep learning for recommender systems,” in Proceedings of the 1st Workshop on Deep Learning for Recommender Systems, 2016, pp.
Yang, “Generative adversarial network based heterogeneous bibliographic network representation for personalized citation recommendation,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol.
Xie, “AutoRec: Autoencoders meet collaborative filtering,” in Proceedings of the 24th International Conference on World Wide Web, 2015, pp.
Manzagol, “Extracting and composing robust features with denoising autoencoders,” in Proceedings of the 25th International Conference on Machine Learning, 2008, pp.
Wang, “Personalized top-N sequential recommendation via convolutional sequence embedding,” in Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining, 2018, p.
Unger, “Latent context-aware recommender systems,” in Proceedings of the 9th ACM Conference on Recommender Systems, 2015, pp.
Mary, “Hybrid recommender system based on autoencoders,” in Proceedings of the First Workshop on Deep Learning for Recommender Systems, 2016, pp.
Fang, “Neural citation network for context-aware citation recommendations,” in Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, 2017, pp.
Fu, “Sample rules guided a deep neural network for makeup recommendation,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol.
Lim, “RecGAN: Recurrent generative adversarial networks for recommender systems,” in Proceedings of the 12th ACM Conference on Recommender Systems, 2018, pp.
McAuley, “VBPR: Visual Bayesian personalized ranking from implicit feedback,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol.
Yin, “Reinforcement learning to optimize long-term user engagement in recommendation systems,” in Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2019, pp.
Li, “Comparative deep learning of hybrid representations for image recommendations,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp.

Figure 18: Text to image generating example by analogy of human painting

Figures

Figure 1: Modern recommender system
Figure 3: User-item matrix
Figure 4: Collaborative filtering recommendation
Figure 5: Content-based filtering recommendation

