@manlycoffee Well, yeah, there is something special about it: it gives LLMs a standard way to interact with the world. That's proven to be pretty powerful.
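To be concrete about the "standard" part: MCP is JSON-RPC 2.0 under the hood, so every tool invocation has the same shape regardless of what the tool does. A rough sketch (the tool name and arguments are made up for illustration):

```python
# A minimal MCP "tools/call" request, shown as a Python dict; the real
# thing is JSON-RPC 2.0 sent over stdio or HTTP. "get_weather" and its
# arguments are hypothetical, not from any particular server.
import json

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",  # the standard MCP method for invoking a tool
    "params": {
        "name": "get_weather",
        "arguments": {"city": "Tokyo"},
    },
}
print(json.dumps(request, indent=2))
```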
I mean, calling it "special" gives people the illusion that MCP is something only those seasoned in fine-tuning LLMs can deploy.
But other than that, sure, it is powerful, with lots of emergent properties.
Hmm, interesting take. Maybe I'm disconnected from the common perspective, but I wonder why people associate MCP with fine-tuning; they aren't even related.
I'm confused about what you mean here. I've been working in AI for 20 years; not saying you're wrong, just trying to understand what you're saying...
MCP/skills are just any mechanism the LLM has to interact with the world. NER is the ability to classify one "thing" as another: "Jeff Freeman is a scientist," for example.
While NER is certainly very important in NLP, and yes, we can fine-tune encoder-only LLMs to be better at NER, I'm still not grasping why you're lumping fine-tuning and NER in with MCP. MCP is largely unrelated to NER and fine-tuning. Am I missing some context?
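If it helps to make the NER side concrete, here's a minimal sketch using spaCy (assuming spaCy and its `en_core_web_sm` model are installed; the sentence is just illustrative):

```python
# Minimal NER sketch with spaCy.
# Assumes: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Jeff Freeman is a scientist.")

# Each recognized entity span gets a type label, e.g. PERSON.
for ent in doc.ents:
    print(ent.text, ent.label_)
# Likely output: "Jeff Freeman PERSON"
```

Note this is pure entity recognition; whether fine-tuning helps it is a separate question from anything MCP does.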
@manlycoffee Ahh, OK, that makes more sense. Yes, EAE (event argument extraction) can be helpful in figuring out which agent should do what, among other things. Fine-tuning can improve EAE in specialized cases (it isn't strictly needed in general), and that can lead to situations where MCP/skills are better utilized (by ensuring the proper agent uses the proper skill).
Though to be clear, fine-tuning is neither needed nor routinely used for this purpose. Not to say you can't, or that it won't help; it's just that a powerful generalized model is usually good enough for most things in that regard.
Personally, I use MCP quite extensively in anything I do with LLMs, and I can't think of a case where I had to fine-tune an LLM to improve its skill use.
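For whatever it's worth, here's roughly how I'd picture "argument extraction" feeding an MCP tool call. Everything here (the regex extractor, the `get_weather` tool) is a made-up toy; in practice the model itself fills the tool's argument schema:

```python
# Hypothetical sketch: argument extraction as the bridge between a user
# request and an MCP tool call. In a real agent the LLM fills the
# schema; the regex below is only a stand-in for that step.
import json
import re

TOOL_NAME = "get_weather"  # hypothetical tool

def extract_arguments(user_request: str) -> dict:
    """Toy extractor pulling a city and a unit out of free text."""
    match = re.search(r"in ([A-Z][a-zA-Z ]+)", user_request)
    city = match.group(1).strip() if match else None
    unit = "fahrenheit" if "fahrenheit" in user_request.lower() else "celsius"
    return {"city": city, "unit": unit}

args = extract_arguments("What's the weather in Tokyo, in fahrenheit?")

# The extracted arguments become the params of a tools/call request.
print(json.dumps({"tool": TOOL_NAME, "arguments": args}, indent=2))
```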
@freemo
I think I'm using the wrong term.
I'm referring to "argument extraction".