Generating Training Data with Large Language Models w/ Special Guest Marzieh Fadaee
Manage episode 355037183 series 3446693
Marzieh Fadaee — NLP Research Lead at Zeta Alpha — joins Andrew Yates and Sergi Castella to chat about her work on using large language models like GPT-3 to generate domain-specific training data for retrieval models with little to no human input. The two papers discussed are "InPars: Data Augmentation for Information Retrieval using Large Language Models" and "Promptagator: Few-shot Dense Retrieval From 8 Examples".
InPars: https://arxiv.org/abs/2202.05144
Promptagator: https://arxiv.org/abs/2209.11755
Timestamps:
00:00 Introduction
02:00 Background and journey of Marzieh Fadaee
03:10 Challenges of leveraging Large LMs in Information Retrieval
05:20 InPars, motivation and method
14:30 Vanilla vs GBQ prompting
24:40 Evaluation and Benchmark
26:30 Baselines
27:40 Main results and takeaways (Table 1, InPars)
35:40 Ablations: prompting, in-domain vs. MSMARCO input documents
40:40 Promptagator overview and main differences with InPars
48:40 Retriever training and filtering in Promptagator
54:37 Main Results (Table 2, Promptagator)
1:02:30 Ablations on consistency filtering (Figure 2, Promptagator)
1:07:39 Is this the magic black-box pipeline for neural retrieval on any documents?
1:11:14 Limitations of using LMs for synthetic data
1:13:00 Future directions for this line of research
21 episodes