Meta just released a publicly available pretrained LLM (research access only, apparently) that claims GPT-3-level performance with less than 10% of the parameters (13B vs. 175B)
https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/
https://github.com/facebookresearch/llama
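For reference, the parameter ratio behind the "< 10%" claim works out to about 7.4%; a one-liner to check the arithmetic:

```python
# Sanity check: LLaMA-13B (13e9 params) vs. GPT-3 (175e9 params)
ratio = 13e9 / 175e9
print(f"{ratio:.1%}")  # -> 7.4%, comfortably under 10%
```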