Optimal SLOPE Power and False Positives Trade-off
SLOPE is a relatively new convex optimization procedure for high-dimensional linear regression that uses the sorted ℓ1 penalty: the larger the rank of a fitted coefficient in magnitude, the larger the penalty it receives. This non-separable penalty renders many existing techniques invalid or inconclusive for analyzing the SLOPE solution. In this talk, I present an asymptotically exact characterization of the SLOPE solution under Gaussian random designs, obtained by solving the SLOPE problem with approximate message passing (AMP). This characterization allows us to derive the sharp asymptotic trade-off between the false discovery proportion (FDP) and the true positive proportion (TPP) or, equivalently, between measures of type I and type II errors along the SLOPE path. While in some problems the LASSO's TPP is bounded strictly below 1, a phenomenon known as the Donoho-Tanner (DT) phase transition, our characterization of the SLOPE trade-off curve shows that SLOPE never suffers from this limitation. This is joint work with Zhiqi Bu, Jason Klusowski, and Weijie Su.
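The sorted ℓ1 penalty mentioned above pairs the k-th largest coefficient magnitude with the k-th largest penalty weight. A minimal sketch of how this penalty is computed, with an entirely hypothetical weight sequence (the actual weights in SLOPE applications are typically chosen from Benjamini-Hochberg-type critical values):

```python
import numpy as np

def slope_penalty(beta, lam):
    """Sorted l1 penalty: sum of lam[k] * |beta|_(k), where |beta|_(k)
    is the k-th largest coefficient magnitude and lam is nonincreasing."""
    # Sort magnitudes in decreasing order so the largest coefficient
    # is matched with the largest penalty weight.
    mags = np.sort(np.abs(beta))[::-1]
    return float(np.dot(lam, mags))

beta = np.array([0.5, -2.0, 1.0])
lam = np.array([3.0, 2.0, 1.0])  # hypothetical nonincreasing weights
# sorted magnitudes: [2.0, 1.0, 0.5] -> 3*2.0 + 2*1.0 + 1*0.5 = 8.5
print(slope_penalty(beta, lam))
```

Note that when all weights in `lam` are equal, the penalty reduces to the ordinary (separable) ℓ1 penalty of the LASSO; it is the decreasing weight sequence that makes the penalty non-separable.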