1. LLMs are Zero-shot Learners

The zero-shot ability of LLMs surprised many people when ChatGPT was released in late 2022. ChatGPT is the most popular and successful example of an instruction-finetuned LLM. Thanks to its broad understanding of language, it responds to users' questions remarkably well.

LLMs are particularly strong at in-context tasks, where all the information needed to solve the task is given in the user prompt. Many NLP tasks fall into this category – text summarization, named entity recognition, sentiment analysis, and question answering.
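As a minimal sketch of what an in-context, zero-shot task looks like, the snippet below builds a sentiment-analysis prompt in which the instruction and the input text carry all the information the model needs. The function name and prompt template are illustrative assumptions, not a fixed API.

```python
# A minimal sketch of zero-shot prompting: no labeled examples, no
# finetuning -- the instruction and input text alone define the task.
# The template and function name here are illustrative assumptions.

def zero_shot_prompt(task_instruction: str, text: str) -> str:
    """Wrap a task instruction and the input text into a single prompt."""
    return f"{task_instruction}\n\nText: {text}\nAnswer:"

prompt = zero_shot_prompt(
    "Classify the sentiment of the text as positive or negative.",
    "I loved this record from start to finish.",
)
print(prompt)
```

The resulting string would be sent as the user prompt to any instruction-finetuned LLM; the model's completion after "Answer:" is the prediction.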

This strong zero-shot and few-shot ability is what makes LLMs so useful. With zero or minimal supervision – through prompting or small-scale finetuning – LLM-based approaches have achieved state-of-the-art performance in nearly every NLP task.
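The "minimal supervision" case can be sketched as a few-shot prompt: a handful of labeled examples precede the query, so the supervision lives entirely in the prompt rather than in model weights. The helper below is a hypothetical illustration, not a standard library function.

```python
# A minimal sketch of few-shot prompting: a few labeled demonstrations
# supply in-context supervision before the actual query.
# Function name and template are illustrative assumptions.

def few_shot_prompt(instruction: str, examples: list[tuple[str, str]],
                    query: str) -> str:
    """Prepend labeled (text, label) demonstrations to the query."""
    demos = "\n".join(f"Text: {t}\nLabel: {l}" for t, l in examples)
    return f"{instruction}\n\n{demos}\nText: {query}\nLabel:"

prompt = few_shot_prompt(
    "Classify the sentiment as positive or negative.",
    [("I can't stop replaying this song.", "positive"),
     ("The mix is muddy and lifeless.", "negative")],
    "The vocals are stunning.",
)
print(prompt)
```

Compared with small-scale finetuning, this approach requires no gradient updates at all; the trade-off is that every query must carry the demonstrations.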

What about music information retrieval? How can we use LLMs in MIR? Are there language tasks in MIR?