Self-Distilled Quantization: Achieving High Compression Rates in Transformer-Based Language Models. (arXiv:2307.05972v1 [cs.CL])
http://arxiv.org/abs/2307.05972 #arXiv #NLProc