Paper Dump
Academic · Classics
- Transformer: Attention Is All You Need (code); a minimal attention sketch follows this list
- GPT: Improving Language Understanding by Generative Pre-Training
- GPT-3: Language Models are Few-Shot Learners
- ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision
- CLIP: Data Determines Distributional Robustness in Contrastive Language-Image Pre-training (CLIP)
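
Since the list opens with the Transformer paper, here is a minimal NumPy sketch of scaled dot-product attention, the core operation the title refers to: softmax(QK^T / sqrt(d_k))V. The function name, shapes, and toy data are illustrative assumptions, not taken from the linked code.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q: (n_q, d_k), K: (n_k, d_k), V: (n_k, d_v) -> (n_q, d_v)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # query-key similarities
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                            # weighted sum of values

# Tiny usage example with random data
rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))   # 4 queries, d_k = 8
K = rng.standard_normal((6, 8))   # 6 keys
V = rng.standard_normal((6, 16))  # 6 values, d_v = 16
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 16)
```
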
Papers from Prof. Yi Ma’s talk
- CTRL: Closed-Loop Transcription to an LDR via Minimaxing Rate Reduction
- Effectively alleviates catastrophic forgetting: Incremental Learning of Structured Memory via Closed-Loop Transcription
- Explainable AI: ReduNet: A White-box Deep Network from the Principle of Maximizing Rate Reduction (see the coding-rate sketch after this list)
- Unsupervised learning: Unsupervised Learning of Structured Representations via Closed-Loop Transcription
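
For the rate-reduction line of work above (ReduNet and the closed-loop transcription papers), a small NumPy sketch of the coding-rate and rate-reduction quantities those papers maximize. The formulas follow the published definitions, R(Z) = 1/2 logdet(I + d/(n eps^2) ZZ^T) and the class-conditional analogue; variable names and the toy data are my own assumptions, not the authors' reference implementation.

```python
import numpy as np

def coding_rate(Z, eps=0.5):
    """R(Z) for Z: (d, n), n features of dimension d."""
    d, n = Z.shape
    _, logdet = np.linalg.slogdet(np.eye(d) + (d / (n * eps**2)) * Z @ Z.T)
    return 0.5 * logdet

def rate_reduction(Z, labels, eps=0.5):
    """Delta R = R(Z) - sum over classes of per-class coding rates."""
    d, n = Z.shape
    rc = 0.0
    for c in np.unique(labels):
        Zc = Z[:, labels == c]
        nc = Zc.shape[1]
        _, logdet = np.linalg.slogdet(
            np.eye(d) + (d / (nc * eps**2)) * Zc @ Zc.T)
        rc += (nc / (2 * n)) * logdet  # class weighted by its share of samples
    return coding_rate(Z, eps) - rc

# Toy check: two classes in (nearly) orthogonal subspaces give positive Delta R
rng = np.random.default_rng(0)
Z0 = np.vstack([rng.standard_normal((4, 50)), 0.01 * rng.standard_normal((4, 50))])
Z1 = np.vstack([0.01 * rng.standard_normal((4, 50)), rng.standard_normal((4, 50))])
Z = np.hstack([Z0, Z1])
Z /= np.linalg.norm(Z, axis=0, keepdims=True)  # features on the unit sphere
labels = np.array([0] * 50 + [1] * 50)
print(rate_reduction(Z, labels))  # > 0: whole set spans more volume than each class
```
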