Consistent Weighted Sampling, Min-Max Kernel, and Connections to Computing with Big Data

Speaker
Ping Li
Abstract

In this talk, I will introduce the ideas of min-max similarity (which can be viewed as a type of nonlinear kernel) and consistent weighted sampling (CWS). These topics may be relatively new to the statistics community. In a 2015 paper, I demonstrated the surprisingly strong performance of the min-max similarity for kernel classification, compared to the standard linear and Gaussian kernels. In 2017, I generalized the min-max similarity from non-negative data to general data types and showed that consistent weighted sampling (CWS), which approximately linearizes the min-max kernel, typically performs much better than the well-known random Fourier feature (RFF) approach. In a joint paper with Cun-Hui Zhang, we showed that, under an elliptic distribution model assumption, the (generalized) min-max similarity converges to a function of the correlation parameter, and we also proved the rate of convergence. In more recent work (https://arxiv.org/pdf/1805.02830.pdf), I showed that the empirical performance of a further generalized min-max similarity can be comparable even to boosted tree methods and deep nets. Finally, in a joint paper to appear in NeurIPS'19, we propose an efficient scheme called "Bin-Wise CWS" that dramatically improves the efficiency of the original CWS algorithm. There is also ongoing joint work with Gennady Samorodnitsky and Cun-Hui Zhang (among others) on the theoretical properties of CWS, which is quite an unusual randomized algorithm.
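To make the two central objects of the talk concrete: for non-negative vectors x and y, the min-max similarity is sum_i min(x_i, y_i) / sum_i max(x_i, y_i), and CWS produces hashes whose collision probability equals this similarity. The Python sketch below illustrates both, following the standard CWS construction due to Ioffe (2010); the function names and the Monte Carlo check are my own illustrative choices, not the speaker's implementation, and this covers only the non-negative (non-generalized) case.

import numpy as np

def minmax_kernel(x, y):
    # Min-max (weighted Jaccard) similarity for non-negative vectors:
    # sum_i min(x_i, y_i) / sum_i max(x_i, y_i).
    return np.minimum(x, y).sum() / np.maximum(x, y).sum()

def cws_hash(w, r, c, beta):
    # One CWS hash of a non-negative vector w (Ioffe-style construction).
    # r, c ~ Gamma(2, 1) and beta ~ Uniform(0, 1) are drawn once per
    # coordinate and shared across all vectors; two vectors' hashes then
    # collide with probability equal to their min-max similarity.
    idx = np.flatnonzero(w > 0)
    t = np.floor(np.log(w[idx]) / r[idx] + beta[idx])
    ln_y = r[idx] * (t - beta[idx])        # log of the sampled weight level
    ln_a = np.log(c[idx]) - ln_y - r[idx]  # log of the quantity to minimize
    k = np.argmin(ln_a)
    return idx[k], int(t[k])               # (sampled coordinate, quantized level)

# Monte Carlo check: collision frequency should approach the kernel value.
rng = np.random.default_rng(0)
d, n_hashes = 50, 5000
x, y = rng.random(d), rng.random(d)
matches = 0
for _ in range(n_hashes):
    r = rng.gamma(2.0, 1.0, d)
    c = rng.gamma(2.0, 1.0, d)
    beta = rng.uniform(0.0, 1.0, d)
    matches += cws_hash(x, r, c, beta) == cws_hash(y, r, c, beta)
print("kernel:", minmax_kernel(x, y), "  CWS estimate:", matches / n_hashes)

Sharing the per-coordinate randomness across vectors is what makes the sampling "consistent"; concatenating many such hashes yields the linearized feature map that the abstract describes as approximately converting the min-max kernel into a linear kernel.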

Start Time

Location

JHN 175