
Content provided by Zeta Alpha. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Zeta Alpha or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://zh.player.fm/legal

Shallow Pooling for Sparse Labels: the shortcomings of MS MARCO

Duration: 1:07:17
 

In this first episode of Neural Information Retrieval Talks, Andrew Yates and Sergi Castellà discuss the paper "Shallow Pooling for Sparse Labels" by Negar Arabzadeh, Alexandra Vtyurina, Xinyi Yan, and Charles L. A. Clarke from the University of Waterloo, Canada.

This paper puts the spotlight on the popular IR benchmark MS MARCO and investigates whether modern neural retrieval models retrieve documents that human assessors judge even more relevant than the original top relevance annotations. The results raise the question of to what degree this benchmark is still an informative north star for the field to follow.
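
For context on why sparse labels bite: MS MARCO ships roughly one judged-relevant passage per query, and the standard MRR@10 metric treats every unjudged passage as non-relevant. The sketch below illustrates that convention; the function and data layout are illustrative assumptions, not code from the paper or the official evaluation script.

```python
# MRR@10 under sparse labels: a passage with no judgment is scored
# exactly like a judged non-relevant one, so a model that surfaces a
# better-but-unjudged passage gets no credit for it.

def mrr_at_10(rankings, qrels):
    """rankings: {query_id: [passage_id, ...]}, best-first model output.
    qrels: {query_id: set of judged-relevant passage_ids} (often size 1)."""
    total = 0.0
    for qid, ranked in rankings.items():
        relevant = qrels.get(qid, set())
        for rank, pid in enumerate(ranked[:10], start=1):
            if pid in relevant:
                total += 1.0 / rank
                break  # only the first relevant hit counts
    return total / len(rankings)

# Toy example: the model's top passage "p9" is unjudged and the labeled
# positive "p3" sits at rank 2, so the query scores 0.5 even if "p9" is
# in fact the more relevant passage.
print(mrr_at_10({"q1": ["p9", "p3", "p7"]}, {"q1": {"p3"}}))  # 0.5
```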

Contact: castella@zeta-alpha.com

Timestamps:

00:00 — Introduction.

01:52 — Overview and motivation of the paper.

04:00 — Origins of MS MARCO.

07:30 — Modern approaches to IR: keyword-based, dense retrieval, rerankers and learned sparse representations.

13:40 — What is "better than perfect" performance on MS MARCO?

17:15 — Results and discussion: how often are neural rankers preferred over original annotations on MS MARCO? How should we interpret these results?

26:55 — The authors' proposal to "fix" MS MARCO: shallow pooling (sketched in code after these timestamps).

32:40 — How does TREC Deep Learning compare?

38:30 — How do models compare after re-annotating MS MARCO passages?

45:00 — Figure 5 audio description.

47:00 — Discussion on models' performance after re-annotations.

51:50 — Exciting directions in the space of IR benchmarking.

1:06:20 — Outro.
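
For listeners unfamiliar with pooling: in TREC-style evaluation, assessors judge only a pool of candidates drawn from the participating systems' rankings, and a "shallow" pool draws from only the top few ranks of each system. A rough sketch of depth-k pooling under that reading (the names and data layout are my own, not the authors'):

```python
def shallow_pool(runs, depth=1):
    """Collect the union of each system's top-`depth` passages per query.

    runs: {system_name: {query_id: [passage_id, ...]}}, rankings ordered
    best-first. Only pooled passages are sent to assessors for judgment,
    so a small `depth` keeps annotation cheap but leaves labels sparse.
    """
    pool = {}
    for rankings in runs.values():
        for qid, ranked in rankings.items():
            pool.setdefault(qid, set()).update(ranked[:depth])
    return pool
```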

Related material:

- Leo Boytsov's paper critique blog post: http://searchivarius.org/blog/ir-leaderboards-never-tell-full-story-they-are-still-useful-and-what-can-be-done-make-them-even

- "MS MARCO Chameleons: Challenging the MS MARCO Leaderboard with Extremely Obstinate Queries" https://dl.acm.org/doi/abs/10.1145/3459637.3482011
