Indeed, nature has a hack: 3 billion years of evolution, about 9 months of growing the hardware, and 3 (5? 18?) years of training and tuning.
Large Language Models were never explicitly trained to achieve AGI. Yet they show emergent abilities. And when you think about it from the perspective of evolution: how could they not?
The next step is to train them to learn faster, with less data, and to understand the material more deeply. And they'll learn how to do that too. Because: how could they not?
🙂