Decision Making for Modern Information Retrieval Systems (WSDM'22)

Information retrieval (IR) systems have experienced extraordinary progress fueled by deep learning in the past decade. The success of neural networks has brought tremendous opportunities to model highly complex patterns in collected data for prediction; however, the critical transition from model prediction to the final decision making in IR is far from trivial. What distinguishes IR from other domains such as computer vision and natural language processing is that it interacts directly with users and inherently involves making many complex decisions to satisfy users' information needs -- the mere prediction of relevance or classification of content is not enough. There are many desirable properties besides accuracy that IR systems should possess, such as robustness (stability), limited negative impact, long-term utility, as well as the satisfaction of the various parties involved. Historically, there have always been gaps between predicting patterns and making decisions, and many algorithmic approaches make oversimplified assumptions about human behavior. As more and more IR tasks can be accurately solved by deep learning, the major research and production efforts have shifted to the pattern recognition side of machine learning, further enlarging the aforementioned gaps. Ultimately, this hinders overall progress, since it is unknown to what degree improvements in relevance prediction and pattern recognition actually improve real-world decision making and complex tasks.

In recent years, we have seen an increasing number of research projects approaching these gaps via techniques such as causal inference, bandits, and reinforcement learning. Nevertheless, decision making for modern IR stands out as a novel scientific discipline where causal modelling and exploration-exploitation solutions, when practical, answer only part of the questions. On the other hand, decision making has also long been studied in other domains such as marketing, economics, and some branches of statistics. Also, for IR systems, innovations in decision making must be accompanied by sufficient infrastructural support.

Workshop Date and Location:

The workshop will be held virtually online from 1:00 pm to 5:00 pm (MST) on Feb 25, 2022. The submission deadline is Jan 9, 2022. Please see the Call for Papers section below.

Attendance regarding COVID:

The conference organizer has confirmed that the workshop will be held online. Please check the conference webpage for further updates.

Registration Information:

Please note that all workshop attendees must be registered, either for the whole conference or with a workshop-only registration. Please refer to the WSDM 2022 main conference website for more information regarding registration.

Important Dates

Jan 9, 2022 (AOE): Workshop paper submission deadline (extended from Dec 18, 2021)

Feb 1, 2022: Workshop paper notifications

Feb 15, 2022 (AOE): Camera-ready deadline for workshop papers

1:00 pm - 5:00 pm (MST) Feb 25, 2022: Workshop Date

Speakers

Thorsten Joachims

Professor

Department of Computer Science, Cornell

Fair and Effective Ranking Policies for Two-Sided Markets

Search engines and recommender systems have become the dominant matchmakers for a wide range of human endeavors -- from online retail to finding romantic partners. Consequently, they carry substantial power in shaping markets and allocating opportunity to the participants. In this talk, I will discuss exogenous and endogenous reasons why naively applying machine learning in these systems can result in ranking policies that fail to be fair in the short term, or that lead to undesirable market dynamics in the long term. Exogenous reasons often manifest themselves as biases in the training data, which then get reflected in the learned ranking policy and lead to rich-get-richer dynamics. But even when trained with unbiased data, reasons endogenous to the algorithms can lead to unfair or undesirable allocation of opportunity. To overcome these challenges, I will present new ranking algorithms that directly address both endogenous and exogenous unfairness.
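To make the notion of allocated exposure concrete, the sketch below computes how much position-weighted exposure each provider group receives from a single ranking, using the common 1/log2(rank+1) examination model. It is an illustration under assumed names, not the speaker's own algorithms; fairness constraints in this line of work typically compare such exposure against each group's merit.

```python
import numpy as np

def group_exposure(ranking, groups, gamma=None):
    """Position-weighted exposure received by each group in a single ranking.

    ranking: list of item ids, best first.
    groups:  dict mapping item id -> group label.
    gamma:   optional per-rank examination probabilities; defaults to the
             common 1 / log2(rank + 1) position-bias model.
    """
    n = len(ranking)
    if gamma is None:
        gamma = 1.0 / np.log2(np.arange(2, n + 2))  # rank 1 -> weight 1.0
    exposure = {}
    for pos, item in enumerate(ranking):
        g = groups[item]
        exposure[g] = exposure.get(g, 0.0) + gamma[pos]
    return exposure

# Toy example: two provider groups sharing a four-item ranking.
ranking = ["a", "b", "c", "d"]
groups = {"a": "G1", "b": "G1", "c": "G2", "d": "G2"}
print(group_exposure(ranking, groups))  # G1 receives the larger share of exposure
```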

Minmin Chen

Senior Staff Research Scientist

Google

Exploration in Recommender Systems

Most recommender systems are subject to the strong feedback loop created by learning from historical user-item interaction data. This creates a rich-get-richer phenomenon where head content receives more and more exposure while tail and fresh content is not discovered. At the same time, it pigeonholes users into content they are already familiar with and creates myopic recommendations. The talk will discuss how exploration can help break away from this feedback loop and move toward optimizing long-term user experience on recommendation platforms. We examine the roles of exploration in recommender systems from three angles: 1) system exploration to surface fresh/tail recommendations based on users' known interests; 2) user exploration to identify unknown user interests or introduce users to new interests; and 3) online exploration to utilize real-time user feedback to reduce extrapolation errors in performing system and user exploration. We discuss the challenges of measurement and optimization in the different types of exploration, and propose initial solutions. We showcase how each aspect of exploration contributes to long-term user experience through offline and live experiments on industrial recommendation platforms. We hope this talk can inspire more follow-up work on understanding and improving exploration in recommender systems.
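As a minimal, generic illustration of the system-exploration idea (not the production approach described in the talk; the item names and the UCB-style bonus are assumptions for this sketch), the snippet below boosts items that have been shown rarely so that fresh and tail content gets a chance to be discovered:

```python
import numpy as np

def ucb_rerank(candidates, mean_reward, impressions, total_impressions, alpha=1.0):
    """Rank candidates by empirical reward plus an exploration bonus.

    Items with few impressions receive a larger bonus, temporarily boosting
    fresh and tail content above heavily exploited head content.
    """
    scores = {}
    for item in candidates:
        n = max(impressions.get(item, 0), 1)
        bonus = alpha * np.sqrt(np.log(max(total_impressions, 2)) / n)
        scores[item] = mean_reward.get(item, 0.0) + bonus
    return sorted(candidates, key=scores.get, reverse=True)

# Toy example: the fresh item "new" outranks the head item despite having no history.
print(ucb_rerank(["head", "new"],
                 mean_reward={"head": 0.12},
                 impressions={"head": 50_000},
                 total_impressions=50_000))
```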

Chu Wang

Applied Science Manager

Amazon

Challenges in Sequential Decision Making and Reinforcement Learning for Online Advertising

In the past 30 years, we have seen online advertising invented, evolve, and mature into a trillion-dollar market. The decision-making problem for online advertising has never been an easy one: there are challenges including data sparsity, system complexity, robustness and stability, long-term utility, fairness, as well as the three-body problem among shoppers, advertisers, and the platform. Instead of describing what we have achieved today, this talk will focus on illustrating those fruitful but challenging areas in which the academic and industry communities are inventing new technologies every day.

Yuan Gao

Staff Software Engineer

LinkedIn Ads AI

Bidding Agent Design in the LinkedIn Ad Marketplace

We establish a general optimization framework for the design of automated bidding agents in dynamic online marketplaces. It optimizes solely for the buyer's interest and is agnostic to the auction mechanism imposed by the seller. As a result, the framework allows, for instance, the joint optimization of a group of ads across multiple platforms, each running its own auction format. The bidding strategy derived from this framework automatically guarantees the optimality of budget allocation across ad units and platforms. Common constraints, such as budget delivery schedules, return on investment, and guaranteed results, directly translate to additional parameters in the bidding formula. We share practical learnings from implementing a bidding agent in the LinkedIn ad marketplace based on this framework.
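The abstract does not spell out the bidding formula, but one common pattern consistent with the constrained-optimization view it describes is dual-based bid shading with a pacing feedback loop. The sketch below illustrates that generic idea only; the function names, update rule, and step size are assumptions, not the LinkedIn implementation.

```python
def shaded_bid(predicted_value, lam):
    """Value-based bid shaded by a dual variable lam that encodes the budget
    constraint: the larger lam is, the more conservative the bid."""
    return predicted_value / (1.0 + lam)

def update_lambda(lam, spend_so_far, target_spend, step=1e-3):
    """Simple pacing feedback: raise lam when spend runs ahead of the delivery
    schedule, lower it (but never below zero) when spend falls behind."""
    return max(lam + step * (spend_so_far - target_spend), 0.0)

# Toy usage: bid for one ad request, then adjust the dual variable from pacing feedback.
lam = 0.5
print(shaded_bid(predicted_value=2.0, lam=lam))   # bid is shaded below the predicted value
lam = update_lambda(lam, spend_so_far=120.0, target_spend=100.0)
```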

Workshop Agenda

Time Speaker Title
1:00 pm - 1:10 pm (MST) Host Chair Welcome and Opening Remarks
1:10 pm - 1:40 pm (MST) Thorsten Joachims Fair and Effective Ranking Policies for Two-Sided Markets (Link to Video)
1:40 pm - 1:50 pm (MST) Oral presentation On the Advances and Challenges of Adaptive Online Testing
1:50 pm - 2:20 pm (MST) Minmin Chen Exploration in Recommender Systems (Link to Video)
2:20 pm - 2:30 pm (MST) Oral presentation Query Expansion and Entity Weighting for Query Rewrite Retrieval in Voice Assistant Systems
2:30 pm - 2:50 pm (MST) Coffee Break Social
2:50 pm - 3:20 pm (MST) Chu Wang Challenges in Sequential Decision Making and Reinforcement Learning for Online Advertising (Link to Video)
3:20 pm - 3:30 pm (MST) Oral presentation Unbiased Recommender Learning from Biased Graded Implicit Feedback
3:30 pm - 4:00 pm (MST) Yuan Gao Bidding Agent Design in the LinkedIn Ad Marketplace (Link to Video)
4:00 pm - 4:30 pm (MST) Panel Discussion Speakers & Organizers

Accepted papers

Instruction for Accepted Papers

Since we would like the accepted papers to have the highest exposure in our event, we have held a very high standard in reviewing and selecting the submissions. We make sure that all the accepted papers have the chance to deliver a 10-minute presentation in the oral session. Authors of the accepted papers should use the ACM Conference Proceedings template (two-column format) with the following command in the LaTeX file: \documentclass[sigconf]{acmart}. We encourage the authors to address the reviewers' questions and concerns in the final version. We will send the link for updating the recording; please see the workshop agenda for the time slot assigned to your presentation. Congratulations again on the acceptance of your paper!

On the Advances and Challenges of Adaptive Online Testing (Link to Paper) (Link to Presentation)

Authors: Bo Yang (LinkedIn), Da Xu (Walmart Labs)
Abstract: In recent years, interest in developing adaptive solutions for online testing has grown significantly in the industry. While advances related to this relatively new technology have been made in multiple domains, the literature lacks a systematic and complete treatment of the procedure, which involves exploration, inference, and analysis. This short paper aims to develop a comprehensive understanding of adaptive online testing, including its various building blocks and analytical results. We also address the latest developments, research directions, and challenges that have been less discussed in the literature.
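For readers unfamiliar with the exploration component of adaptive online testing, a minimal Thompson-sampling traffic-allocation loop for two Bernoulli variants might look like the sketch below. The conversion rates and names are made up for illustration; the paper's actual procedures and analyses are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def thompson_assign(successes, failures):
    """Assign the next user to the variant whose Beta posterior sample is largest."""
    samples = [rng.beta(s + 1, f + 1) for s, f in zip(successes, failures)]
    return int(np.argmax(samples))

# Toy adaptive test: two variants with (unknown) conversion rates of 5% and 6%.
true_rates = [0.05, 0.06]
succ, fail = [0, 0], [0, 0]
for _ in range(10_000):
    arm = thompson_assign(succ, fail)
    converted = rng.random() < true_rates[arm]
    succ[arm] += int(converted)
    fail[arm] += int(not converted)
print(succ, fail)  # traffic gradually concentrates on the better variant
```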

Query Expansion and Entity Weighting for Query Rewrite Retrieval in Voice Assistant Systems (Link to Paper) (Link to Presentation)

Authors: Zhongkai Sun (Amazon Alexa AI), Sixing Lu (Amazon Alexa AI), Chengyuan Ma (Amazon Alexa AI), Xiaohu Liu (Amazon Alexa AI), Chenlei Guo (Amazon Alexa AI)
Abstract: Voice assistants such as Alexa, Siri, and Google Assistant have become increasingly popular worldwide. However, users' queries can be misunderstood by the system and thus degrade user experience due to the speaker's accent, semantic ambiguity, etc. In order to provide a better customer experience, retrieval-based query rewriting (QR) systems are widely used to rewrite those unrecognized user queries. Current QR systems focus more on neural retrieval model training or direct entity retrieval for correction. However, these methods rarely focus on query expansion and entity weighting simultaneously, which limits the scope and accuracy of query rewrite retrieval. In this work, we propose a novel Query Expansion and Entity Weighting method (QEEW), which leverages the relationships between entities in the entity catalog (consisting of users' queries, assistant's responses, and corresponding entities) to enhance query rewrite performance. QEEW demonstrates improvements on all top precision metrics, particularly a 6% improvement in top-10 precision compared with no query expansion and weighting, and additionally more than a 5% improvement in top-10 precision compared with other baselines using query expansion and weighting.

Unbiased Recommender Learning from Biased Graded Implicit Feedback (Link to Paper) (Link to Presentation)

Authors: Yuta Saito (Cornell University), Suguru Yaginuma (MC Digital, Inc.), Taketo Naito (SMN Corporation), Kazuhide Nakata (Tokyo Institute of Technology)
Abstract: Binary user-behavior logs such as clicks or views, called implicit feedback, are often used to build recommender systems because of their general availability in practice. Most existing studies formulate implicit feedback as binary relevance feedback. However, in numerous applications, implicit feedback is observed not only as a binary indicator but also in a graded form, such as the number of clicks or the dwell time observed after a click, which we call graded implicit feedback. The grade information should be appropriately utilized, as it is a more direct relevance signal than mere implicit feedback. However, a challenge is that the grade information is observed only for the user–item pairs with implicit feedback, whereas it is unobservable for the pairs without implicit feedback. Moreover, graded implicit feedback for some user–item pairs is more likely to be observed than for others, resulting in the missing-not-at-random (MNAR) problem. To the best of our knowledge, graded implicit feedback under the MNAR mechanism has not yet been investigated despite its prevalence in real-life recommender systems. In response, we formulate recommendation with graded implicit feedback as a statistical estimation problem and define an ideal loss function of interest, which should ideally be optimized to maximize the user experience. Subsequently, we propose an unbiased estimator for the ideal loss, building on the inverse propensity score estimator. Finally, we conduct an empirical evaluation of the proposed method on a public real-world dataset.
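The paper's exact estimator is not reproduced here, but the core inverse-propensity-scoring idea it builds on can be sketched as follows: losses on observed grades are up-weighted by the inverse of their (assumed known) observation probabilities, so that the estimate targets the loss over the full user-item matrix despite MNAR observation. All array names and the squared-error form are assumptions for illustration.

```python
import numpy as np

def ips_squared_loss(pred, grade, observed, propensity, eps=1e-3):
    """Inverse-propensity-scored squared loss over all user-item pairs.

    pred:       predicted relevance for each pair.
    grade:      graded implicit feedback, meaningful only where observed == 1.
    observed:   1 if implicit feedback (and hence the grade) was logged, else 0.
    propensity: probability that the pair's feedback is observed (MNAR model).
    """
    weights = observed / np.clip(propensity, eps, 1.0)  # clip for numerical stability
    return float(np.mean(weights * (pred - grade) ** 2))

# Toy usage with four user-item pairs, two of which were observed.
pred = np.array([0.8, 0.2, 0.5, 0.9])
grade = np.array([1.0, 0.0, 0.0, 0.0])     # grades of unobserved pairs are placeholders
observed = np.array([1, 0, 1, 0])
propensity = np.array([0.9, 0.1, 0.4, 0.2])
print(ips_squared_loss(pred, grade, observed, propensity))
```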

Call For Papers

Most IR systems consist of pattern recognition and decision making. In the past decade, the flourishing of deep learning has provided IR with unprecedented opportunities for mining complex signals from collected data, shifting focus heavily toward the pattern recognition end. However, making predictions based on the discovered patterns is merely the first step of information retrieval: the agent must strategically decide how to leverage that knowledge to maximize the various utilities of different parties. This workshop aims to identify and solve the critical challenges of transitioning from pattern recognition to decision making in IR, where existing attempts focus primarily on A/B testing and sequential decision making with reinforcement learning (RL) and multi-armed bandits (MAB).

We wish to unite researchers and practitioners from various backgrounds to identify the emerging challenges, discover connections to other domains, and study promising solutions for decision making in modern information retrieval systems. We note that the decision-making strategy may impact both the immediate performance and the long-term development of real-world production systems. We also pay special attention to the role of human involvement in practical decision making, and to how novel applications and business incentives can be motivated by emerging technologies such as A/B testing, RL, and MAB.

All accepted submissions will be presented at the workshop. Specific topics for research papers include but are not limited to:

  1. Emerging challenges in decision making for IR applications including search, recommendation and advertising;
  2. User-centric evaluation of decision making with IR systems;
  3. Designing and optimizing online or user experiments for IR systems;
  4. Online (sequential) decision making with bandits and reinforcement learning;
  5. Robust & uncertainty-aware decision making for IR;
  6. Practical causal inference & counterfactual reasoning for decision making in IR;
  7. Connecting the tools and theories for decision making from other domains, e.g., marketing, econometrics, and statistics, to IR problems;
  8. Modern theory of decision making;
  9. Multi-objective or multi-modality problems of decision making;
  10. Conformal inference, online & multiple hypothesis testing, optimal stopping, and false discovery control.
We also encourage topics on:
  1. Frameworks or end-to-end solutions for industrial large-scale IR decision making;
  2. Existing or novel applications relevant to decision making in IR;
  3. Fairness and interpretability of decision making;
  4. Infrastructural support for intensive online decision making;
  5. Human-in-the-loop decision making.

Submission Directions:

We invite quality research contributions and application studies in different formats:

  1. Original research papers, both long (limited to 8 content pages) and short (limited to 4 content pages);
  2. Extended abstracts for vision, perspective, and research proposal papers (2 content pages);
  3. Posters or demos on decision making systems (2 content pages).
All submissions may include unlimited references and supplementary material at the end of the submitted paper (which needs to be in the same PDF file), focused on reproducibility or the proofs of the main theoretical results. The submitted papers must be in PDF format and formatted according to the new Standard ACM Conference Proceedings Template. For LaTeX users: unzip acmart.zip, make, and use sample-sigconf.tex as a template. Additional information about formatting and style files is available online at: https://www.acm.org/publications/proceedings-template. Paper submission and reviewing will follow the directions of the WSDM main conference. Reviews are not double-blind, and author names and affiliations should be listed. References to online material are acceptable.

Please submit your paper through this EasyChair link. Please reach out to dm4ir@easychair.org for any questions.

Organizers

Da Xu, Machine Learning Manager, Walmart Labs

Jianpeng Xu, Staff Data Scientist, Walmart Labs

Jiliang Tang, Associate Professor, Michigan State University

Min Liu, Engineering Manager, LinkedIn

Tobias Schnabel, Senior Researcher, Microsoft Research

Program Committee

We want to thank our program committee members for their hard work in selecting valuable contributions for our workshop.

  • Bo Yang, Machine Learning Engineer @ LinkedIn
  • Chuanwei Ruan, Senior Machine Learning Engineer @ Instacart
  • Cheng Jie, Staff Data Scientist @ Walmart Labs
  • Danqing Zhang, Applied Scientist @ Amazon
  • Fengshi Niu, Postdoctoral Research Fellow @ Stanford
  • Luyi Ma, Senior Data Scientist @ Walmart Labs
  • Sushant Kumar, Director of Recommendation @ Walmart Labs
  • Shuyuan Xu, Ph.D. candidate @ Rutgers University
  • Yuting Ye, Assistant Professor @ Southern University of Science and Technology
  • Zenan Wang, Research Scientist @ JD.com
  • Zhiwei Liu, Ph.D. candidate @ University of Illinois, Chicago
  • To be continued...