Posts by Tag




Apple

BERT

CNN

Compression

Efficient Architecture

Efficient Transformer

Sparse is Enough in Scaling Transformers

Link : https://arxiv.org/pdf/2111.12763.pdf — Author/venue notes: Google Research in Europe; the author list includes a Reformer author (Łukasz Kaiser) and a Performer author (Afroz Mohiu...)

Image Classification

ImageNet

Knowledge Distillation

LRA

Language Model

Sparse is Enough in Scaling Transformers

Link : https://arxiv.org/pdf/2111.12763.pdf — Author/venue notes: Google Research in Europe; the author list includes a Reformer author (Łukasz Kaiser) and a Performer author (Afroz Mohiu...)

Light-weighted CNN

Low-rank Approximation

Magnitude-based

Mobile ViT

Model

Model Compression

Mohammad Rastegari

Movement Pruning

Object Detection

On-device AI

Once for all

Pruning

Quantization

Recognition

ResNet-18

SVD

Speech

Structured Pruning

Transfer Learning

Uniform Quantization

Unstructured Pruning

Vision Transformer

Wav2vec_2