Cryptanalyzing LLMs with Nicholas Carlini
'Let us model our large language model as a hash function—'
Sold.
Our special guest Nicholas Carlini joins us to discuss differential cryptanalysis on LLMs and other attacks, such as the ones that made OpenAI turn off some API features, hehehehe.
Watch episode on YouTube: https://youtu.be/vZ64xPI2Rc0
Transcript: https://securitycryptographywhatever.com/2025/01/28/cryptanalyzing-llms-with-nicholas-carlini/
Links:
- https://nicholas.carlini.com
- “Stealing Part of a Production Language Model”: https://arxiv.org/pdf/2403.06634
- “Why I Attack”: https://nicholas.carlini.com/writing/2024/why-i-attack.html
- “Cryptanalytic Extraction of Neural Network Models”, CRYPTO 2020: https://arxiv.org/abs/2003.04884
- “Stochastic Parrots”: https://dl.acm.org/doi/10.1145/3442188.3445922
- “Using logit bias to alter token probability with the OpenAI API” (see the sketch after these links): https://help.openai.com/en/articles/5247780-using-logit-bias-to-alter-token-probability-with-the-openai-api
- https://community.openai.com/t/temperature-top-p-and-top-k-for-chatbot-responses/295542
- https://opensource.org/license/mit
- https://github.com/madler/zlib
- https://ai.meta.com/blog/yann-lecun-ai-model-i-jepa/
- https://nicholas.carlini.com/writing/2024/how-i-use-ai.html
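For flavor, here is a minimal sketch of the logit-bias probing trick at the heart of “Stealing Part of a Production Language Model”, assuming an OpenAI-style chat completions API via the official Python client; the model name, prompt, bias value, and helper name are illustrative stand-ins, not the paper's exact methodology.

```python
# Minimal sketch of logit-bias probing (cf. arXiv:2403.06634).
# Assumes the openai>=1.0 Python client; model, prompt, and bias
# are illustrative, not the paper's exact setup.
from openai import OpenAI

client = OpenAI()
BIAS = 50.0  # large enough to pull a rare token into the top-k


def relative_logits(target_token_id: int, prompt: str = "Hello"):
    """Recover a few tokens' logits relative to one another.

    Within a single response the softmax normalizer is shared, so
    differences between reported logprobs equal differences between
    the underlying logits. Biasing the target token by +BIAS drags
    it into the reported top-k; subtracting BIAS afterwards puts it
    back on the same relative scale as the unbiased tokens.
    """
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
        max_tokens=1,
        logprobs=True,
        top_logprobs=5,
        logit_bias={target_token_id: BIAS},
    )
    top = resp.choices[0].logprobs.content[0].top_logprobs
    # With a large bias the target should sit at rank 0 (an
    # assumption; a robust version would match on the token itself).
    return {
        entry.token: entry.logprob - (BIAS if i == 0 else 0.0)
        for i, entry in enumerate(top)
    }
```

Sweeping the target id across the vocabulary yields full logit vectors up to a shared constant; the paper stacks many such vectors and exploits their low rank to recover the model's hidden dimension and final projection layer. OpenAI has since restricted how logit_bias interacts with logprobs, so treat this purely as an illustration.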
"Security Cryptography Whatever" is hosted by Deirdre Connolly (@durumcrustulum), Thomas Ptacek (@tqbf), and David Adrian (@davidcadrian)
Chapters
1. Mathematical Attacks on AI Security (00:00:00)
2. AI Model Extraction and Security (00:12:07)
3. Model Extraction Security Mechanism Analysis (00:16:11)
4. Model Extraction Attack Methodology Discussion (00:29:18)
5. Training Data Extraction Attack Methodology (00:39:00)
6. Data Poisoning Attacks and Defenses (00:50:59)
7. AI Security Defense Challenges and Strategies (00:59:24)
8. Exploring AI Model Capabilities (01:06:20)
9. Challenges in AI Model Security (01:15:21)