Hi all,

This Wednesday, same place AA3195, same time 13:00, we have the following tea talk scheduled:

Talk by: Christopher Pal

Title: A New Smooth Approximation for Learning with the Zero One Loss with a Probabilistic Interpretation

Abstract: We examine a new form of smooth approximation to the zero-one loss, which can be viewed as learning with a reformulation of the classical functional form of logistic regression. Our approach is based on the posterior mean of a novel generalized Beta-Bernoulli formulation. This leads to a generalized logistic function that approximates the zero-one loss while retaining a probabilistic formulation. We show that the underlying functional form has a number of interesting properties, and the approach generalizes easily to kernel logistic regression and beyond. We present experiments in which we learn such models using a combination of gradient descent and coordinate descent with localized grid search to escape local minima. Our experiments indicate that this formulation can improve upon the performance of canonical models such as classical logistic regression and support vector machines, and that it is more robust to outliers. This is joint work with Md. Kamrul Hasan, who may present the second half of the talk.

I hope to see many of you there,

Best,
Razvan
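
P.S. For anyone who wants to play with the general idea before Wednesday, here is a rough sketch of the kind of approach described in the abstract. It is only my own illustrative stand-in: a plain sigmoid surrogate with a steepness parameter (in place of the generalized Beta-Bernoulli posterior mean of the talk), fit by gradient descent followed by a coordinate-wise localized grid search. All function names, the surrogate, and the optimization schedule are assumptions for illustration, not the speakers' actual formulation.

```python
import numpy as np

def smooth_01_loss(margins, gamma=5.0):
    """Sigmoid-shaped surrogate that approaches the zero-one loss as gamma grows.
    (Illustrative stand-in for the generalized Beta-Bernoulli posterior mean.)"""
    return 1.0 / (1.0 + np.exp(gamma * margins))

def objective(w, X, y, gamma=5.0):
    """Average surrogate loss for labels y in {-1, +1} and linear scores X @ w."""
    return smooth_01_loss(y * (X @ w), gamma).mean()

def gradient(w, X, y, gamma=5.0):
    """Gradient of the surrogate objective with respect to w."""
    s = smooth_01_loss(y * (X @ w), gamma)
    # d/dm sigmoid(-gamma * m) = -gamma * s * (1 - s), with margin m = y * (x . w)
    coef = -gamma * s * (1.0 - s) * y
    return (X * coef[:, None]).mean(axis=0)

def fit(X, y, gamma=5.0, lr=0.5, n_grad=500, grid=np.linspace(-2, 2, 41)):
    """Gradient descent followed by a coordinate-wise grid search around the current
    solution -- a crude way to step out of shallow local minima (assumed schedule)."""
    w = np.zeros(X.shape[1])
    for _ in range(n_grad):
        w -= lr * gradient(w, X, y, gamma)
    for j in range(len(w)):                    # localized grid search, one coordinate at a time
        candidates = w[j] + grid
        losses = []
        for c in candidates:
            w_try = w.copy()
            w_try[j] = c
            losses.append(objective(w_try, X, y, gamma))
        w[j] = candidates[int(np.argmin(losses))]
    return w

if __name__ == "__main__":
    # Toy linearly-separable-ish data just to exercise the sketch.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = np.sign(X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=200))
    w = fit(X, y)
    print("train zero-one error:", np.mean(np.sign(X @ w) != y))
```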