
Poster

How Well Can Transformers Emulate In-Context Newton's Method?

Liu Yang · Prabhav Jain · Dimitris Papailiopoulos · Danqi Liao


Abstract: Transformer-based models have demonstrated remarkable in-context learning capabilities, prompting extensive research into their underlying mechanisms. Recent studies have suggested that Transformers can implement first-order optimization algorithms for in-context learning, and even second-order ones in the case of linear regression. In this work, we study whether Transformers can perform higher-order optimization methods beyond the case of linear regression. We establish that linear attention Transformers with ReLU layers can approximate second-order optimization algorithms for the task of logistic regression, achieving error ϵ with a number of additional layers that grows only logarithmically in 1/ϵ. Our results point to the ability of the Transformer architecture to implement complex algorithms beyond gradient descent.
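To make concrete the kind of second-order procedure being emulated, here is a minimal NumPy sketch of Newton's method for logistic regression. It is not taken from the paper; the function name, ridge damping term, and step count are illustrative assumptions.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def newton_logistic_regression(X, y, num_steps=10, ridge=1e-6):
        """Newton's method for logistic regression.

        X: (n, d) design matrix; y: (n,) labels in {0, 1}.
        `ridge` is a small damping term for numerical stability
        (an assumption for this sketch, not from the paper).
        """
        n, d = X.shape
        w = np.zeros(d)
        for _ in range(num_steps):
            p = sigmoid(X @ w)                   # predicted probabilities
            grad = X.T @ (p - y)                 # gradient of the logistic loss
            S = p * (1.0 - p)                    # Hessian weights, diag entries
            H = X.T @ (X * S[:, None]) + ridge * np.eye(d)  # Hessian
            w = w - np.linalg.solve(H, grad)     # second-order (Newton) update
        return w

Each pass through this loop is one Newton update; intuitively, the constructed Transformer layers approximate such updates, which is why the layer count in the result scales with the number of iterations needed to reach error ϵ.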
