LIFT: Language-Interfaced Fine-Tuning for Non-Language Machine Learning Tasks

University of Wisconsin-Madison
NeurIPS 2022

*Denotes equal contribution. Author order is alphabetical.

A high-level illustration of the LIFT framework. LIFT has four steps: (i) converting the dataset into sentences, (ii) fine-tuning the pretrained LLMs (e.g., GPT) on the obtained sentences, (iii) generating predictions, and (iv) converting the predictions back to the original data format.

Abstract

Fine-tuning pretrained language models (LMs) without making any architectural changes has become a norm for learning various language downstream tasks. However, for non-language downstream tasks, a common practice is to employ task-specific designs for input, output layers, and loss functions. For instance, it is possible to fine-tune an LM into an MNIST classifier by replacing the word embedding layer with an image patch embedding layer, the word token output layer with a 10-way output layer, and the word prediction loss with a 10-way classification loss. A natural question arises: Can LM fine-tuning solve non-language downstream tasks without changing the model architecture or loss function? To answer this, we propose Language-Interfaced Fine-Tuning (LIFT) and study its efficacy and limitations by conducting an extensive empirical study on a suite of non-language classification and regression tasks. LIFT does not make any changes to the model architecture or loss function, and it solely relies on the natural language interface, enabling "no-code machine learning with LMs." We find that LIFT performs comparably well across a wide range of low-dimensional classification and regression tasks, matching the performance of the best baselines in many cases, especially for the classification tasks. We also report experimental results on the fundamental properties of LIFT, including inductive bias, robustness, and sample complexity. We further analyze the effect of pretraining on LIFT and a few properties/techniques specific to LIFT, e.g., context-aware learning via appropriate prompting, calibrated predictions, data generation, and two-stage fine-tuning.
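As a concrete illustration of step (i) of the pipeline, the sketch below converts tabular rows into prompt/completion sentences in a JSONL format commonly used for GPT fine-tuning. The feature names, the sentence template, and the output file name are illustrative assumptions, not the exact templates used in the paper.

```python
import json

# A minimal sketch of LIFT step (i): turning tabular data into sentences.
# The feature names, sentence template, and output path are illustrative
# assumptions, not the exact prompts used in the paper.

rows = [
    {"sepal_length": 5.1, "sepal_width": 3.5, "petal_length": 1.4, "petal_width": 0.2, "label": "setosa"},
    {"sepal_length": 6.7, "sepal_width": 3.0, "petal_length": 5.2, "petal_width": 2.3, "label": "virginica"},
]

def row_to_example(row):
    """Serialize one row as a prompt/completion pair for language-model fine-tuning."""
    features = ", ".join(f"{name} is {value}" for name, value in row.items() if name != "label")
    prompt = f"Given that {features}, what is the species?"
    completion = f" {row['label']}"  # leading space follows the usual prompt/completion convention
    return {"prompt": prompt, "completion": completion}

# Step (ii) would fine-tune a pretrained LM on this JSONL file with its standard
# language-modeling loss; steps (iii)-(iv) query the model with the same template
# and map the generated text back to a class label (or a number, for regression).
with open("iris_lift.jsonl", "w") as f:
    for row in rows:
        f.write(json.dumps(row_to_example(row)) + "\n")
```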

One Model, Many Tasks

Understanding LIFT: A Deep Dive

Beyond Basics: Improvements

Cite Us


@inproceedings{tuan_zeng_2022_lift,
  author    = {Dinh, Tuan and Zeng, Yuchen and Zhang, Ruisu and Lin, Ziqian and Gira, Michael and Rajput, Shashank and Sohn, Jy-yong and Papailiopoulos, Dimitris and Lee, Kangwook},
  booktitle = {Advances in Neural Information Processing Systems},
  pages     = {11763--11784},
  title     = {LIFT: Language-Interfaced Fine-Tuning for Non-language Machine Learning Tasks},
  volume    = {35},
  year      = {2022}
}