perception, since they proved that their approach could outperform both strategies exploiting single ratings and MCRS algorithms.

      When multiple-criteria user-to-user CF is used as the recommender algorithm, the best overall results are obtained [5].
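      To make concrete what a multi-criteria user-to-user CF recommender does, the sketch below computes a user-user similarity per criterion, averages the per-criterion similarities, and predicts the overall rating from the most similar neighbours. The array shapes, the cosine similarity, and the simple weighted average are illustrative assumptions, not the exact configuration evaluated in [5].

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity over the items both users rated (0 = missing rating)."""
    mask = (a > 0) & (b > 0)
    if not mask.any():
        return 0.0
    a, b = a[mask], b[mask]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def multi_criteria_similarity(R, u, v):
    """Average the per-criterion similarities between users u and v.
    R has shape (n_criteria, n_users, n_items); 0 marks a missing rating."""
    return float(np.mean([cosine(R[c, u], R[c, v]) for c in range(R.shape[0])]))

def predict_overall(R_overall, R_criteria, u, i, k=3):
    """Predict user u's overall rating of item i from the k most similar users."""
    neighbours = [v for v in range(R_overall.shape[0])
                  if v != u and R_overall[v, i] > 0]
    neighbours.sort(key=lambda v: multi_criteria_similarity(R_criteria, u, v),
                    reverse=True)
    neighbours = neighbours[:k]
    if not neighbours:
        return 0.0
    sims = np.array([multi_criteria_similarity(R_criteria, u, v) for v in neighbours])
    ratings = np.array([R_overall[v, i] for v in neighbours])
    return float(sims @ ratings / (sims.sum() + 1e-9))
```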

      3.4.2 User Preference Learning in Multi-Criteria Recommendation Using Stacked Autoencoders by Tallapally et al.

      Here, Tallapally et al. propose a stacked autoencoder, a deep neural network (DNN) approach, to exploit multi-criteria ratings. They implemented a model configured to learn the relationship between each user's criteria ratings and the overall rating. Experimental results on real-world datasets, namely the Yahoo! Movies dataset and the TripAdvisor dataset, illustrate that this approach can outperform both single-criteria systems and existing multi-criteria approaches on different performance metrics [4].
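      As a rough sketch of the idea, the snippet below wires up a stacked autoencoder over a user's flattened criteria-plus-overall rating vector and trains it with a masked reconstruction loss, so the dense output can be read as predictions for the missing overall ratings. The framework (TensorFlow/Keras), layer sizes, activations, and loss are illustrative assumptions and not the exact architecture reported by Tallapally et al. [4].

```python
import tensorflow as tf

# Hypothetical sizes: n_items items, each with n_criteria criteria ratings
# plus one overall rating, flattened into a single sparse input vector
# (0 marks a missing rating).
n_items, n_criteria = 3500, 5
input_dim = n_items * (n_criteria + 1)

# Stacked autoencoder: an encoder, a bottleneck, and a mirrored decoder that
# reconstructs the full rating vector, so its dense output can be read as
# predictions for the unknown overall ratings.
inputs = tf.keras.Input(shape=(input_dim,))
encoded = tf.keras.layers.Dense(512, activation="sigmoid")(inputs)
code = tf.keras.layers.Dense(128, activation="sigmoid")(encoded)
decoded = tf.keras.layers.Dense(512, activation="sigmoid")(code)
outputs = tf.keras.layers.Dense(input_dim, activation="linear")(decoded)
autoencoder = tf.keras.Model(inputs, outputs)

def masked_mse(y_true, y_pred):
    """Only observed (non-zero) ratings contribute to the reconstruction loss."""
    mask = tf.cast(tf.not_equal(y_true, 0.0), tf.float32)
    return tf.reduce_sum(mask * tf.square(y_true - y_pred)) / (tf.reduce_sum(mask) + 1e-9)

autoencoder.compile(optimizer="adam", loss=masked_mse)

# X: (n_users, input_dim) matrix of known ratings, 0 = unknown.
# autoencoder.fit(X, X, epochs=50, batch_size=64)
# predictions = autoencoder.predict(X)   # dense reconstruction, incl. overall ratings
```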

      Looking at their performance evaluation and result analysis makes clear how much efficiency this model can achieve.

       3.4.2.1 Datasets and Evaluation Metrics

      In this paper, two real-world datasets from the tourism and movie domains are used to evaluate performance. To obtain a workable subset of the TripAdvisor (TA) data, they keep only users who reviewed at least five hotels and hotels that were reviewed by at least five users.

      The resulting subset carries more than 19,000 rating instances by more than 3,100 users on around 3,500 hotels, with a high sparsity of 99.8272%. In addition, the Yahoo! Movies (YM) data are generated as shown in Tables 3.3 to 3.5. To analyze the performance of this method, they used the Mean Absolute Error (MAE), which is known for its simplicity, accuracy, and popularity [4].
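      The MAE used here is the standard definition: the average absolute difference between predicted and actual overall ratings over the test set, with lower values indicating better accuracy. In the formula below, T denotes the set of test ratings, and r̂ and r are the predicted and observed overall ratings of user u for item i (notation ours, not taken from the chapter).

```latex
\mathrm{MAE} = \frac{1}{|T|} \sum_{(u,i) \in T} \bigl| \hat{r}_{u,i} - r_{u,i} \bigr|
```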

       Result = YM 10-10

Technique             MAE     GIMAE   GPIMAE  F1
MF [10]               0.8478  0.7461  0.6765  0.5998
2016_Hybrid AE [23]   0.7811  0.6595  0.8269  0.7042
2011_Liwei Liu [13]   0.6574  0.5204  0.6574  0.6640
2017_Learning [22]    0.6576  0.5054  0.6576  0.6629
2017_CCC [27]         0.6374  0.6240  0.7857  0.5361
2017_CCA [27]         0.6618  0.6015  0.7990  0.5343
2017_CIC [27]         0.6719  0.6542  0.7743  0.5327
Extended_SAE_3        0.5783  0.4870  0.6501  0.7113
Extended_SAE_5        0.5640  0.4842  0.6503  0.7939

       Result = YM 20-20

Technique             MAE     GIMAE   GPIMAE  F1
MF [10]               0.7397  0.6077  0.5700  0.6698
2016_Hybrid AE [23]   0.7205  0.6008  0.7830  0.7578
2011_Liwei Liu [13]   0.6576  0.5054  0.6576  0.6828
2017_Learning [22]    0.8254  0.5958  0.8131  0.7544
2017_CCC [27]         0.6798  0.6095  0.7159  0.5585
2017_CCA [27]         0.6691  0.6042  0.6971  0.5641
2017_CIC [27]         0.7029  0.6218  0.7064  0.5677
Extended_SAE_3        0.5906  0.4959  0.6523  0.7973
Extended_SAE_5        0.5798  0.4834  0.6306  0.8070

       Result = YM 5-5

Technique MAE
