Recommender system

Recommender systems or recommendation systems (sometimes replacing "system" with a synonym such as platform or engine) are a subclass of information filtering system that seek to predict the "rating" or "preference" that a user would give to an item.[1][2]

Recommender systems have become extremely common in recent years, and are utilized in a variety of areas: some popular applications include movies, music, news, books, research articles, search queries, social tags, and products in general. There are also recommender systems for experts,[3] collaborators,[4] jokes, restaurants, garments, financial services,[5] life insurance, romantic partners (online dating), and Twitter pages.[6]

Overview

Recommender systems typically produce a list of recommendations in one of two ways – through collaborative filtering or content-based filtering, or the personality-based approach.[7] Collaborative filtering approaches build a model from a user's past behavior (items previously purchased or selected and/or numerical ratings given to those items) as well as similar decisions made by other users. This model is then used to predict items (or ratings for items) that the user may have an interest in.[8] Content-based filtering approaches utilize a series of discrete characteristics of an item in order to recommend additional items with similar properties.[9] These approaches are often combined (see Hybrid Recommender Systems).

The differences between collaborative and content-based filtering can be demonstrated by comparing two popular music recommender systems – Last.fm and Pandora Radio. Last.fm recommends songs by observing which bands and tracks a user listens to and comparing that behavior against the listening behavior of other users, a collaborative filtering approach. Pandora, by contrast, uses the properties of a song or artist to seed a station that plays music with similar properties, a content-based approach.

Each type of system has its own strengths and weaknesses. In the above example, Last.fm requires a large amount of information on a user in order to make accurate recommendations. This is an example of the cold start problem, and is common in collaborative filtering systems.[10][11][12] While Pandora needs very little information to get started, it is far more limited in scope (for example, it can only make recommendations that are similar to the original seed).

Recommender systems are a useful alternative to search algorithms since they help users discover items they might not have found by themselves. In practice, recommender systems are often implemented using search engines that index non-traditional data.

Montaner provided the first overview of recommender systems from an intelligent agent perspective.[13] Adomavicius provided a new, alternate overview of recommender systems.[14] Herlocker provides an additional overview of evaluation techniques for recommender systems,[15] and Beel et al. discussed the problems of offline evaluations.[16] Beel et al. have also provided literature surveys on available research paper recommender systems and existing challenges.[17][18][19]

Approaches

Collaborative filtering

One approach to the design of recommender systems that has wide use is collaborative filtering.[20] Collaborative filtering methods are based on collecting and analyzing a large amount of information on users' behaviors, activities or preferences and predicting what users will like based on their similarity to other users. A key advantage of the collaborative filtering approach is that it does not rely on machine-analyzable content, and therefore it is capable of accurately recommending complex items such as movies without requiring an "understanding" of the item itself. Many algorithms have been used to measure user or item similarity in recommender systems; examples include the k-nearest neighbor (k-NN) approach[21] and the Pearson correlation coefficient, first implemented for this purpose by Allen.[22]
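As a toy illustration of this memory-based idea (not any particular published system), the following Python sketch computes Pearson correlations between users on a small invented rating matrix and predicts a missing rating from the k most similar neighbors:

    import numpy as np

    # Toy user-item rating matrix (rows: users, columns: items); 0 means "not rated".
    R = np.array([
        [5, 3, 0, 1],
        [4, 0, 0, 1],
        [1, 1, 0, 5],
        [1, 0, 0, 4],
        [0, 1, 5, 4],
    ], dtype=float)

    def pearson(u, v):
        """Pearson correlation over the items both users have rated."""
        mask = (u > 0) & (v > 0)
        if mask.sum() < 2:
            return 0.0
        a, b = u[mask] - u[mask].mean(), v[mask] - v[mask].mean()
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return float(a @ b / denom) if denom else 0.0

    def predict(user, item, k=2):
        """Predict a rating as the user's mean rating plus the similarity-weighted
        deviation of the k most similar neighbors who rated the item."""
        user_mean = R[user][R[user] > 0].mean()
        sims = [(pearson(R[user], R[v]), v) for v in range(len(R))
                if v != user and R[v, item] > 0]
        top = sorted(sims, reverse=True)[:k]
        num = sum(s * (R[v, item] - R[v][R[v] > 0].mean()) for s, v in top)
        den = sum(abs(s) for s, v in top)
        return user_mean + num / den if den else user_mean

    print(predict(user=0, item=2))  # predicted rating of user 0 for item 2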

Collaborative filtering is based on the assumption that people who agreed in the past will agree in the future, and that they will like similar kinds of items as they liked in the past.

When building a model from a user's behavior, a distinction is often made between explicit and implicit forms of data collection.

Examples of explicit data collection include asking a user to rate items on a sliding scale, to rank a collection of items from favorite to least favorite, to choose the better of two items presented together, or to create a list of items that they like.

Examples of implicit data collection include observing the items a user views in an online store, analyzing item viewing times,[23] keeping a record of the items a user purchases, and analyzing the user's social network to discover similar likes and dislikes.

The recommender system compares the collected data to similar and dissimilar data collected from others and calculates a list of recommended items for the user. Several commercial and non-commercial examples are listed in the article on collaborative filtering systems.

One of the most famous examples of collaborative filtering is item-to-item collaborative filtering (people who buy x also buy y), an algorithm popularized by Amazon.com's recommender system.[24] Collaborative filtering is also used by social networking sites to recommend new friends, groups, and accounts to follow, as in Twitter's who-to-follow system.[6]
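A minimal sketch of the item-to-item idea on an invented purchase matrix (Amazon's actual algorithm, described in [24], is considerably more elaborate): each item is represented by the vector of users who bought it, item-item cosine similarities are precomputed, and a user is recommended the items most similar to those they already own.

    import numpy as np

    # Toy purchase matrix: rows are users, columns are items, 1 = purchased.
    P = np.array([
        [1, 1, 0, 0],
        [1, 1, 1, 0],
        [0, 1, 1, 1],
        [1, 0, 0, 1],
    ], dtype=float)

    # Cosine similarity between item columns: "people who buy x also buy y".
    norms = np.linalg.norm(P, axis=0)
    item_sim = (P.T @ P) / np.outer(norms, norms)
    np.fill_diagonal(item_sim, 0.0)            # ignore self-similarity

    def recommend(user, top_n=2):
        """Score each unpurchased item by its summed similarity to the user's items."""
        owned = P[user] > 0
        scores = item_sim[:, owned].sum(axis=1)
        scores[owned] = -np.inf                # do not re-recommend owned items
        return np.argsort(scores)[::-1][:top_n]

    print(recommend(user=0))                   # items most similar to items 0 and 1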

Collaborative filtering approaches often suffer from three problems: cold start, scalability, and sparsity.[25]

A particular type of collaborative filtering algorithm uses matrix factorization, a low-rank matrix approximation technique.[26][27][28]
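The following is a minimal sketch of one common matrix factorization formulation: regularized stochastic gradient descent over the observed entries of a toy rating matrix, learning low-rank user and item factor matrices whose product approximates the ratings (the cited papers describe more refined variants).

    import numpy as np

    rng = np.random.default_rng(0)
    R = np.array([[5, 3, 0, 1],
                  [4, 0, 0, 1],
                  [1, 1, 0, 5],
                  [0, 1, 5, 4]], dtype=float)      # 0 = unobserved

    n_users, n_items, k = R.shape[0], R.shape[1], 2
    P = 0.1 * rng.standard_normal((n_users, k))    # user factor matrix
    Q = 0.1 * rng.standard_normal((n_items, k))    # item factor matrix
    lr, reg = 0.01, 0.05                           # learning rate, regularization

    for epoch in range(200):
        for u, i in zip(*np.nonzero(R)):           # iterate over observed ratings
            err = R[u, i] - P[u] @ Q[i]
            pu, qi = P[u].copy(), Q[i].copy()
            P[u] += lr * (err * qi - reg * pu)
            Q[i] += lr * (err * pu - reg * qi)

    print(P @ Q.T)   # low-rank reconstruction; unobserved cells are the predictions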

Collaborative filtering methods are classified as memory-based and model-based. A well-known example of a memory-based approach is the user-based neighborhood algorithm,[29] while a well-known model-based approach is the Kernel-Mapping Recommender.[30]

Content-based filtering

Another common approach when designing recommender systems is content-based filtering. Content-based filtering methods are based on a description of the item and a profile of the user’s preference.[31] In a content-based recommender system, keywords are used to describe the items and a user profile is built to indicate the type of item this user likes. In other words, these algorithms try to recommend items that are similar to those that a user liked in the past (or is examining in the present). In particular, various candidate items are compared with items previously rated by the user and the best-matching items are recommended. This approach has its roots in information retrieval and information filtering research.

To abstract the features of the items in the system, an item representation algorithm is applied. A widely used representation is tf–idf (term frequency–inverse document frequency), also called the vector space representation.
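A hand-rolled sketch of the tf–idf representation for a few invented item descriptions (a real system would use richer features and a library implementation):

    import math
    from collections import Counter

    items = {
        "item1": "indie rock guitar band",
        "item2": "acoustic guitar folk songs",
        "item3": "electronic dance synth",
    }

    docs = {name: text.split() for name, text in items.items()}
    vocab = sorted({w for words in docs.values() for w in words})
    n_docs = len(docs)
    # Inverse document frequency of each term in the shared vocabulary.
    idf = {w: math.log(n_docs / sum(w in words for words in docs.values()))
           for w in vocab}

    def tfidf(words):
        """Map a token list to a tf-idf vector over the shared vocabulary."""
        tf = Counter(words)
        return [tf[w] / len(words) * idf[w] for w in vocab]

    vectors = {name: tfidf(words) for name, words in docs.items()}
    print(vectors["item1"])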

To create a user profile, the system mostly focuses on two types of information: a model of the user's preferences, and a history of the user's interactions with the recommender system.

Basically, these methods use an item profile (i.e., a set of discrete attributes and features) characterizing the item within the system. The system creates a content-based profile of users based on a weighted vector of item features. The weights denote the importance of each feature to the user and can be computed from individually rated content vectors using a variety of techniques. Simple approaches use the average values of the rated item vector while other sophisticated methods use machine learning techniques such as Bayesian Classifiers, cluster analysis, decision trees, and artificial neural networks in order to estimate the probability that the user is going to like the item.[32]

Direct feedback from a user, usually in the form of a like or dislike button, can be used to assign higher or lower weights on the importance of certain attributes (using Rocchio classification or other similar techniques).
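A small sketch of these two steps, assuming the items already have tf–idf-style feature vectors: the profile starts as the average of the liked items' vectors, is nudged by a Rocchio-style update toward liked items and away from disliked ones (the coefficients are illustrative), and candidate items are scored by cosine similarity to the profile.

    import numpy as np

    # Hypothetical tf-idf vectors for items (rows) over some shared vocabulary.
    item_vectors = np.array([
        [0.9, 0.1, 0.0],
        [0.7, 0.3, 0.0],
        [0.0, 0.2, 0.8],
    ])

    # Simple profile: average of the vectors of items the user rated positively.
    liked, disliked = [0, 1], [2]
    profile = item_vectors[liked].mean(axis=0)

    # Rocchio-style update from like/dislike feedback:
    # move the profile toward liked items and away from disliked ones.
    alpha, beta, gamma = 1.0, 0.75, 0.15
    profile = (alpha * profile
               + beta * item_vectors[liked].mean(axis=0)
               - gamma * item_vectors[disliked].mean(axis=0))

    def score(candidate):
        """Cosine similarity between the profile and a candidate item vector."""
        return candidate @ profile / (np.linalg.norm(candidate) * np.linalg.norm(profile))

    print([round(score(v), 3) for v in item_vectors])   # disliked item scores lowest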

A key issue with content-based filtering is whether the system is able to learn user preferences from users' actions regarding one content source and use them across other content types. When the system is limited to recommending content of the same type as the user is already using, the value from the recommendation system is significantly less than when other content types from other services can be recommended. For example, recommending news articles based on browsing of news is useful, but would be much more useful when music, videos, products, discussions etc. from different services can be recommended based on news browsing.

As previously detailed, Pandora Radio is a popular example of a content-based recommender system that plays music with similar characteristics to that of a song provided by the user as an initial seed. There are also a large number of content-based recommender systems aimed at providing movie recommendations, a few such examples include Rotten Tomatoes, Internet Movie Database, Jinni, Rovi Corporation, Jaman and See This Next. Document related recommender systems aim at providing document recommendations to knowledge workers, for example Noggle and Google Springboard. Public health professionals have been studying recommender systems to personalize health education and preventative strategies.[33][34]

Hybrid recommender systems

Recent research has demonstrated that a hybrid approach, combining collaborative filtering and content-based filtering, could be more effective in some cases. Hybrid approaches can be implemented in several ways: by making content-based and collaborative-based predictions separately and then combining them; by adding content-based capabilities to a collaborative-based approach (and vice versa); or by unifying the approaches into one model (see [14] for a complete review of recommender systems). Several studies have empirically compared the performance of hybrid methods with pure collaborative and content-based methods and demonstrated that hybrid methods can provide more accurate recommendations than pure approaches. Hybrid methods can also be used to overcome some of the common problems in recommender systems, such as cold start and the sparsity problem.
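The first of these strategies, combining separately computed predictions, can be as simple as a weighted blend. In the sketch below, the two component predictors are placeholders standing in for any collaborative and content-based models that return ratings on the same scale, and the weight is arbitrary.

    def collaborative_predict(user, item):
        # Placeholder for any collaborative-filtering predictor
        # (e.g. the k-NN or matrix-factorization sketches above).
        return 3.5

    def content_based_predict(user, item):
        # Placeholder for a content-based predictor built from item features.
        return 4.0

    def hybrid_predict(user, item, w=0.6):
        """Weighted hybrid: blend the two component predictions on a shared scale."""
        return (w * collaborative_predict(user, item)
                + (1 - w) * content_based_predict(user, item))

    print(hybrid_predict("alice", "item42"))   # 3.7 with the placeholder values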

Netflix is a good example of the use of hybrid recommender systems. The website makes recommendations by comparing the watching and searching habits of similar users (i.e., collaborative filtering) as well as by offering movies that share characteristics with films that a user has rated highly (content-based filtering).

A variety of techniques have been proposed as the basis for recommender systems: collaborative, content-based, knowledge-based, and demographic techniques. Each of these techniques has known shortcomings, such as the well known cold-start problem for collaborative and content-based systems (what to do with new users with few ratings) and the knowledge engineering bottleneck[35] in knowledge-based approaches. A hybrid recommender system is one that combines multiple techniques together to achieve some synergy between them.

The term hybrid recommender system is used here to describe any recommender system that combines multiple recommendation techniques together to produce its output. There is no reason why several different techniques of the same type could not be hybridized; for example, two different content-based recommenders could work together, and a number of projects have investigated this type of hybrid. NewsDude, which uses both naive Bayes and kNN classifiers in its news recommendations, is just one example.[36]

Burke[36] identifies seven hybridization techniques: weighted (scores from the different recommendation components are combined numerically), switching (the system chooses among recommendation components and applies the selected one), mixed (recommendations from different recommenders are presented together), feature combination (features derived from different knowledge sources are combined and given to a single recommendation algorithm), feature augmentation (one technique computes a feature or set of features that is then part of the input to the next technique), cascade (recommenders are given strict priority, with the lower-priority ones breaking ties in the scoring of the higher ones), and meta-level (one technique produces a model which is then used as input by the next technique).

Beyond accuracy

Typically, research on recommender systems is concerned with finding the most accurate recommendation algorithms. However, a number of other factors are also important.

Mobile recommender systems

One growing area of research in recommender systems is mobile recommender systems. With the increasing ubiquity of internet-connected smartphones, it is now possible to offer personalized, context-sensitive recommendations. This is a particularly difficult area of research, as mobile data is more complex than the data recommender systems usually deal with: it is heterogeneous and noisy, requires handling spatial and temporal autocorrelation, and raises problems of validation and generality.[52] Additionally, mobile recommender systems suffer from a transplantation problem – recommendations may not apply in all regions (for instance, it would be unwise to recommend a recipe in an area where some of the ingredients may not be available).

One example of a mobile recommender system is one that offers potentially profitable driving routes for taxi drivers in a city.[52] This system takes input data in the form of GPS traces of the routes that taxi drivers took while working, which include location (latitude and longitude), time stamps, and operational status (with or without passengers). It uses this data to recommend a list of pickup points along a route, with the goal of optimizing occupancy times and profits. This type of system is obviously location-dependent, and since it must operate on a handheld or embedded device, the computation and energy requirements must remain low.

Another example of a mobile recommender system is the one developed by Bouneffouf et al. (2012) for professional users. Using the user's GPS traces and agenda, it suggests suitable information depending on the user's situation and interests. The system uses machine learning techniques and reasoning processes in order to dynamically adapt the recommender system to the evolution of the user's interests. The author called this algorithm hybrid-ε-greedy.[53]

Mobile recommendation systems have also been successfully built using the "Web of Data" as a source for structured information. A good example of such a system is SMARTMUSEUM.[54] The system uses semantic modelling, information retrieval, and machine learning techniques in order to recommend content matching user interests, even when presented with sparse or minimal user data.

Risk-aware recommender systems

The majority of existing approaches to recommender systems focus on recommending the most relevant content to users using contextual information, and do not take into account the risk of disturbing the user in specific situations. However, in many applications, such as recommending personalized content, it is also important to consider the risk of upsetting the user by pushing recommendations in certain circumstances, for instance during a professional meeting, early in the morning, or late at night. Therefore, the performance of the recommender system depends in part on the degree to which it has incorporated this risk into the recommendation process.

Risk definition

"The risk in recommender systems is the possibility to disturb or to upset the user which leads to a bad answer of the user".[55]

In response to these challenges, the authors of [55] developed a dynamic risk-sensitive recommendation system called DRARS (Dynamic Risk-Aware Recommender System), which models context-aware recommendation as a bandit problem. The system combines a content-based technique with a contextual bandit algorithm. The authors showed that DRARS improves on the Upper Confidence Bound (UCB) policy, the best algorithm available at the time, by computing an exploration value that maintains the trade-off between exploration and exploitation based on the risk level of the current user's situation. They conducted experiments in an industrial context with real data and real users, and showed that taking into account the risk level of users' situations significantly increased the performance of the recommender system.
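DRARS itself is specified in [55]; as background, a generic (non-risk-aware) UCB1 policy of the kind it builds on picks, at each step, the candidate whose estimated reward plus exploration bonus is highest. A minimal sketch with invented candidates and click probabilities:

    import math
    import random

    arms = ["news", "music", "meeting notes"]   # hypothetical candidate recommendations
    counts = {a: 0 for a in arms}               # times each arm was recommended
    rewards = {a: 0.0 for a in arms}            # cumulative observed reward (e.g. clicks)

    def select_arm(t):
        """UCB1: mean reward plus an exploration bonus that shrinks with experience."""
        for a in arms:                          # play each arm once before using the bound
            if counts[a] == 0:
                return a
        return max(arms, key=lambda a: rewards[a] / counts[a]
                   + math.sqrt(2 * math.log(t) / counts[a]))

    for t in range(1, 101):
        arm = select_arm(t)
        # Simulated user feedback with made-up click probabilities per arm.
        reward = random.random() < {"news": 0.3, "music": 0.6, "meeting notes": 0.1}[arm]
        counts[arm] += 1
        rewards[arm] += reward

    print(counts)   # the most rewarding arm should dominate after enough rounds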

The Netflix Prize

Main article: Netflix Prize

One of the key events that energized research in recommender systems was the Netflix Prize. From 2006 to 2009, Netflix sponsored a competition, offering a grand prize of $1,000,000 to the team that could take an offered dataset of over 100 million movie ratings and return recommendations that were 10% more accurate than those offered by the company's existing recommender system. This competition energized the search for new and more accurate algorithms. On 21 September 2009, the grand prize of US$1,000,000 was given to the BellKor's Pragmatic Chaos team using tiebreaking rules.[56]

The most accurate algorithm in 2007 used an ensemble method of 107 different algorithmic approaches, blended into a single prediction:[57]

Predictive accuracy is substantially improved when blending multiple predictors. Our experience is that most efforts should be concentrated in deriving substantially different approaches, rather than refining a single technique. Consequently, our solution is an ensemble of many methods.

The Netflix Prize produced benefits beyond the competition itself. Some teams took their technology and applied it to other markets. Some members of the team that finished second place founded Gravity R&D, a recommendation-engine company that is active in the RecSys community.[56][58] 4-Tell, Inc. created a solution derived from its Netflix Prize work for e-commerce websites.

A second contest was planned, but was ultimately canceled in response to an ongoing lawsuit and concerns from the Federal Trade Commission.[44]

Performance measures

Evaluation is important in assessing the effectiveness of recommendation algorithms. The most commonly used metrics are the mean squared error and root mean squared error (RMSE), the latter having been used in the Netflix Prize. Information retrieval metrics such as precision, recall, and DCG are useful for assessing the quality of a recommendation method. More recently, diversity, novelty, and coverage have also come to be considered important aspects of evaluation.[59] However, many of the classic evaluation measures have been heavily criticized.[60] Often, the results of so-called offline evaluations do not correlate with actually assessed user satisfaction.[61] The authors conclude that "we would suggest treating results of offline evaluations [i.e. classic performance measures] with skepticism".
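For illustration, the two most common kinds of measure, rating-prediction error (RMSE, as used in the Netflix Prize) and top-N ranking quality (precision at k), can be computed on toy data as follows:

    import math

    # Rating-prediction accuracy on held-out (true, predicted) pairs.
    pairs = [(4.0, 3.5), (2.0, 2.5), (5.0, 4.0)]
    rmse = math.sqrt(sum((t - p) ** 2 for t, p in pairs) / len(pairs))

    # Ranking quality: precision@k over a recommended list and the set of relevant items.
    recommended = ["a", "b", "c", "d", "e"]
    relevant = {"a", "c", "f"}
    k = 5
    precision_at_k = sum(item in relevant for item in recommended[:k]) / k

    print(round(rmse, 3), precision_at_k)   # 0.707, 0.4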

Multi-criteria recommender systems

Multi-criteria recommender systems (MCRS) can be defined as recommender systems that incorporate preference information on multiple criteria. Instead of developing recommendation techniques based on a single criterion value (the overall preference of user u for item i), these systems try to predict a rating for unexplored items of u by exploiting preference information on the multiple criteria that affect this overall preference value. Several researchers approach MCRS as a multi-criteria decision making (MCDM) problem, and apply MCDM methods and techniques to implement MCRS systems.[62] See this chapter[63] for an extended introduction.
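As a toy illustration of the aggregation-based view of MCRS, the overall rating can be modelled as a weighted combination of per-criterion ratings, with the weights fitted by least squares on invented data:

    import numpy as np

    # Invented per-criterion ratings for a hotel recommender:
    # columns = (cleanliness, location, service), plus the user's overall rating.
    criteria = np.array([[5, 4, 5],
                         [3, 5, 4],
                         [2, 2, 3],
                         [4, 3, 4]], dtype=float)
    overall = np.array([5, 4, 2, 4], dtype=float)

    # Fit overall ≈ criteria @ w by ordinary least squares.
    w, *_ = np.linalg.lstsq(criteria, overall, rcond=None)

    # Predict the overall preference for an unexplored item from its criterion ratings.
    new_item = np.array([4, 5, 5], dtype=float)
    print(new_item @ w)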

See also

References

  1. Francesco Ricci and Lior Rokach and Bracha Shapira, Introduction to Recommender Systems Handbook, Recommender Systems Handbook, Springer, 2011, pp. 1-35
  2. "Facebook, Pandora Lead Rise of Recommendation Engines - TIME". TIME.com. 27 May 2010. Retrieved 1 June 2015.
  3. H. Chen, A. G. Ororbia II, C. L. Giles ExpertSeer: a Keyphrase Based Expert Recommender for Digital Libraries, in arXiv preprint 2015
  4. H. Chen, L. Gou, X. Zhang, C. Giles Collabseer: a search engine for collaboration discovery, in ACM/IEEE Joint Conference on Digital Libraries (JCDL) 2011
  5. Alexander Felfernig, Klaus Isak, Kalman Szabo, Peter Zachar, The VITA Financial Services Sales Support Environment, in AAAI/IAAI 2007, pp. 1692-1699, Vancouver, Canada, 2007.
  6. Pankaj Gupta, Ashish Goel, Jimmy Lin, Aneesh Sharma, Dong Wang, and Reza Bosagh Zadeh. WTF: The who-to-follow system at Twitter, Proceedings of the 22nd international conference on World Wide Web
  7. Hosein Jafarkarimi; A.T.H. Sim and R. Saadatdoost A Naïve Recommendation Model for Large Databases, International Journal of Information and Education Technology, June 2012
  8. Prem Melville and Vikas Sindhwani, Recommender Systems, Encyclopedia of Machine Learning, 2010.
  9. R. J. Mooney & L. Roy (1999). Content-based book recommendation using learning for text categorization. In Workshop Recom. Sys.: Algo. and Evaluation.
  10. Rubens, Neil; Elahi, Mehdi; Sugiyama, Masashi; Kaplan, Dain (2016). "Active Learning in Recommender Systems". In Ricci, Francesco; Rokach, Lior; Shapira, Bracha. Recommender Systems Handbook (2 ed.). Springer US. ISBN 978-1-4899-7637-6.
  11. Elahi, Mehdi; Ricci, Francesco; Rubens, Neil. A survey of active learning in collaborative filtering recommender systems. Computer Science Review, 2016, Elsevier.
  12. Andrew I. Schein, Alexandrin Popescul, Lyle H. Ungar, David M. Pennock (2002). Methods and Metrics for Cold-Start Recommendations. Proceedings of the 25th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2002). New York City, New York: ACM. pp. 253–260. ISBN 1-58113-561-0. Retrieved 2008-02-02.
  13. Montaner, M.; Lopez, B.; de la Rosa, J. L. (June 2003). "A Taxonomy of Recommender Agents on the Internet". Artificial Intelligence Review. 19 (4): 285–330. doi:10.1023/A:1022850703159.
  14. Adomavicius, G.; Tuzhilin, A. (June 2005). "Toward the Next Generation of Recommender Systems: A Survey of the State-of-the-Art and Possible Extensions". IEEE Transactions on Knowledge and Data Engineering. 17 (6): 734–749. doi:10.1109/TKDE.2005.99.
  15. Herlocker, J. L.; Konstan, J. A.; Terveen, L. G.; Riedl, J. T. (January 2004). "Evaluating collaborative filtering recommender systems". ACM Trans. Inf. Syst. 22 (1): 5–53. doi:10.1145/963770.963772.
  16. Beel, J.; Langer, S.; Genzmehr, M.; Gipp, B. (October 2013). "A Comparative Analysis of Offline and Online Evaluations and Discussion of Research Paper Recommender System Evaluation" (PDF). Proceedings of the Workshop on Reproducibility and Replication in Recommender Systems Evaluation (RepSys) at the ACM Recommender System Conference (RecSys).
  17. Beel, J.; Langer, S.; Genzmehr, M.; Gipp, B.; Breitinger, C. (October 2013). "Research Paper Recommender System Evaluation: A Quantitative Literature Survey" (PDF). Proceedings of the Workshop on Reproducibility and Replication in Recommender Systems Evaluation (RepSys) at the ACM Recommender System Conference (RecSys).
  18. Beel, J.; Gipp, B.; Langer, S.; Breitinger, C. (26 July 2015). "Research Paper Recommender Systems: A Literature Survey". International Journal on Digital Libraries: 1–34. doi:10.1007/s00799-015-0156-0.
  19. Waila, P.; Singh, V.; Singh, M. (26 April 2016). "A Scientometric Analysis of Research in Recommender Systems" (PDF). Journal of Scientometric Research: 71–84. doi:10.5530/jscires.5.1.10.
  20. John S. Breese; David Heckerman & Carl Kadie (1998). Empirical analysis of predictive algorithms for collaborative filtering. In Proceedings of the Fourteenth conference on Uncertainty in artificial intelligence (UAI'98).
  21. Sarwar, B.; Karypis, G.; Konstan, J.; Riedl, J. (2000). "Application of Dimensionality Reduction in Recommender System – A Case Study".
  22. Allen, R.B. (1990). "User Models: Theory, Method, Practice". International J. Man-Machine Studies.
  23. Parsons, J.; Ralph, P.; Gallagher, K. (July 2004). "Using viewing time to infer user preference in recommender systems". AAAI Workshop in Semantic Web Personalization, San Jose, California.
  24. Collaborative Recommendations Using Item-to-Item Similarity Mappings
  25. Sanghack Lee and Jihoon Yang and Sung-Yong Park, Discovery of Hidden Similarity on Collaborative Filtering to Overcome Sparsity Problem, Discovery Science, 2007.
  26. I. Markovsky, Low-Rank Approximation: Algorithms, Implementation, Applications, Springer, 2012, ISBN 978-1-4471-2226-5
  27. Takács, G.; Pilászy, I.; Németh, B.; Tikk, D. (March 2009). "Scalable Collaborative Filtering Approaches for Large Recommender Systems" (PDF). Journal of Machine Learning Research. 10: 623–656
  28. Rennie, J.; Srebro, N. (2005). Luc De Raedt, Stefan Wrobel, ed. Fast Maximum Margin Matrix Factorization for Collaborative Prediction (PDF). Proceedings of the 22nd Annual International Conference on Machine Learning. ACM Press.
  29. Breese, John S.; Heckerman, David; Kadie, Carl (1998). Empirical Analysis of Predictive Algorithms for Collaborative Filtering (PDF) (Report). Microsoft Research.
  30. "Kernel-Mapping Recommender system algorithms". Information Sciences. 208: 81–104. doi:10.1016/j.ins.2012.04.012. Retrieved 1 June 2015.
  31. Peter Brusilovsky (2007). The Adaptive Web. p. 325. ISBN 978-3-540-72078-2.
  32. Blanda, Stephanie (May 25, 2015). "Online Recommender Systems – How Does a Website Know What I Want?". American Mathematical Society. Retrieved October 31, 2016.
  33. Macedo AA, Pollettini JT, Baranauskas JA, Chaves JC (2016). "A Health Surveillance Software Framework to deliver information on preventive healthcare strategies.". J Biomed Inform. 62: 159–70. doi:10.1016/j.jbi.2016.06.002. PMID 27318270.
  34. Fernandez-Luque L, Karlsen R, Vognild LK (2009). "Challenges and opportunities of using recommender systems for personalized health education.". Stud Health Technol Inform. 150: 903–7. PMID 19745443.
  35. Rinke Hoekstra, The Knowledge Reengineering Bottleneck, Semantic Web – Interoperability, Usability, Applicability 1 (2010) 1 ,IOS Press
  36. Robin Burke, Hybrid Web Recommender Systems, pp. 377-408, The Adaptive Web, Peter Brusilovsky, Alfred Kobsa, Wolfgang Nejdl (Ed.), Lecture Notes in Computer Science, Vol. 4321, Springer-Verlag, Berlin, Germany, May 2007, ISBN 978-3-540-72078-2.
  37. Alexander Felfernig and Robin Burke. Constraint-based Recommender Systems: Technologies and Research Issues, Proceedings of the ACM International Conference on Electronic Commerce (ICEC'08), Innsbruck, Austria, Aug. 19-22, pp. 17-26, 2008.
  38. Ziegler, C.N., McNee, S.M., Konstan, J.A. and Lausen, G. (2005). "Improving recommendation lists through topic diversification". Proceedings of the 14th international conference on World Wide Web. pp. 22–32.
  39. Joeran Beel; Stefan Langer; Marcel Genzmehr; Andreas Nürnberger (September 2013). "Persistence in Recommender Systems: Giving the Same Recommendations to the Same Users Multiple Times". In Trond Aalberg; Milena Dobreva; Christos Papatheodorou; Giannis Tsakonas; Charles Farrugia. Proceedings of the 17th International Conference on Theory and Practice of Digital Libraries (TPDL 2013) (PDF). Lecture Notes of Computer Science (LNCS). 8092. Springer. pp. 390–394. Retrieved 1 November 2013.
  40. Cosley, D., Lam, S.K., Albert, I., Konstan, J.A., Riedl, J. (2003). "Is seeing believing?: how recommender system interfaces affect users' opinions". Proceedings of the SIGCHI conference on Human factors in computing systems. pp. 585–592.
  41. Pu, P., Chen, L., Hu, R. (2012). "Evaluating recommender systems from the user's perspective: survey of the state of the art". User Modeling and User-Adapted Interaction. Springer: 1–39.
  42. Rise of the Netflix Hackers Archived January 24, 2012, at the Wayback Machine.
  43. "Netflix Spilled Your Brokeback Mountain Secret, Lawsuit Claims". WIRED. 17 December 2009. Retrieved 1 June 2015.
  44. 1 2 "Netflix Prize Update". Netflix Prize Forum. 2010-03-12.
  45. Naren Ramakrishnan; Benjamin J. Keller; Batul J. Mirza; Ananth Y. Grama; George Karypis (2001). "Privacy Risks in Recommender Systems". IEEE Internet Computing. Piscataway, NJ: IEEE Educational Activities Department. 5 (6): 54–62. doi:10.1109/4236.968832. ISBN 1-58113-561-0.
  46. Joeran Beel; Stefan Langer; Andreas Nürnberger; Marcel Genzmehr (September 2013). "The Impact of Demographics (Age and Gender) and Other User Characteristics on Evaluating Recommender Systems". In Trond Aalberg; Milena Dobreva; Christos Papatheodorou; Giannis Tsakonas; Charles Farrugia. Proceedings of the 17th International Conference on Theory and Practice of Digital Libraries (TPDL 2013) (PDF). Springer. pp. 400–404. Retrieved 1 November 2013.
  47. Konstan, J.A., Riedl, J. (2012). "Recommender systems: from algorithms to user experience". User Modeling and User-Adapted Interaction. Springer: 1–23.
  48. Ricci, F., Rokach, L., Shapira, B., Kantor, P.B. (2011). "Recommender systems handbook". Recommender Systems Handbook. Springer: 1–35.
  49. Gonçalves, Diogo; Costa, Miguel; Couto, Francisco M. (2016-09-15). "A Flexible Recommendation System for Cable TV". 3rd Workshop on Recommendation Systems for Television and online Video (RecSysTV), At Boston, MA, USA.
  50. Montaner, Miquel, López, Beatriz, de la Rosa, Josep Lluís (2002). "Developing trust in recommender agents". Proceedings of the first international joint conference on Autonomous agents and multiagent systems: part 1. pp. 304–305.
  51. Beel, Joeran, Langer, Stefan, Genzmehr, Marcel (September 2013). "Sponsored vs. Organic (Research Paper) Recommendations and the Impact of Labeling". In Trond Aalberg; Milena Dobreva; Christos Papatheodorou; Giannis Tsakonas; Charles Farrugia. Proceedings of the 17th International Conference on Theory and Practice of Digital Libraries (TPDL 2013) (PDF). pp. 395–399. Retrieved 2 December 2013.
  52. Yong Ge; Hui Xiong; Alexander Tuzhilin; Keli Xiao; Marco Gruteser; Michael J. Pazzani (2010). An Energy-Efficient Mobile Recommender System (PDF). Proceedings of the 16th ACM SIGKDD Int'l Conf. on Knowledge Discovery and Data Mining. New York City, New York: ACM. pp. 899–908. Retrieved 2011-11-17.
  53. Bouneffouf, Djallel (2012), "Following the User's Interests in Mobile Context-Aware Recommender Systems: The Hybrid-e-greedy Algorithm", Proceedings of the 2012 26th International Conference on Advanced Information Networking and Applications Workshops (PDF), Lecture Notes in Computer Science, IEEE Computer Society, pp. 657–662, ISBN 978-0-7695-4652-0
  54. Tuukka Ruotsalo; Krister Haav; Antony Stoyanov; Sylvain Roche; Elena Fani; Romina Deliai; Eetu Mäkelä; Tomi Kauppinen; Eero Hyvönen (2013). "SMARTMUSEUM: A Mobile Recommender System for the Web of Data". Web Semantics: Science, Services and Agents on the World Wide Web. Elsevier. 20: 657–662. doi:10.1016/j.websem.2013.03.001.
  55. Bouneffouf, Djallel (2013), DRARS, A Dynamic Risk-Aware Recommender System (Ph.D.), Institut National des Télécommunications
  56. Lohr, Steve. "A $1 Million Research Bargain for Netflix, and Maybe a Model for Others". The New York Times.
  57. R. Bell; Y. Koren; C. Volinsky (2007). "The BellKor solution to the Netflix Prize" (PDF).
  58. Bodoky, Thomas. "Mátrixfaktorizáció one million dollars". Index.
  59. Lathia, N., Hailes, S., Capra, L., Amatriain, X.: Temporal diversity in recommender systems. In: Proceeding of the 33rd International ACMSIGIR Conference on Research and Development in Information Retrieval, SIGIR 2010, pp. 210–217. ACM, New York
  60. Turpin, Andrew H, Hersh, William (2001). "Why batch and user evaluations do not give the same results". Proceedings of the 24th annual international ACM SIGIR conference on Research and development in information retrieval. pp. 225–231.
  61. Beel, Joeran; Genzmehr, Marcel; Langer, Stefan; Nürnberger, Andreas; Gipp, Bela (2013-01-01). "A Comparative Analysis of Offline and Online Evaluations and Discussion of Research Paper Recommender System Evaluation". Proceedings of the International Workshop on Reproducibility and Replication in Recommender Systems Evaluation. RepSys '13. New York, NY, USA: ACM: 7–14. doi:10.1145/2532508.2532511. ISBN 9781450324656.
  62. Lakiotaki, K.; Matsatsinis; Tsoukias, A. "Multicriteria User Modeling in Recommender Systems". IEEE Intelligent Systems. 26 (2): 64–76. doi:10.1109/mis.2011.33.
  63. Gediminas Adomavicius, Nikos Manouselis, YoungOk Kwon. "Multi-Criteria Recommender Systems" (PDF).

Further reading

Books

Kim Falk (2015). Practical Recommender Systems. ISBN 9781617292705.

Scientific articles