Speaker: Zhaosong Lu

Institution: Simon Fraser University

Time: Monday, May 6, 2013 - 4:00pm to 5:00pm

Location: RH 306

In the first part, we discuss penalty decomposition (PD) methods for solving
a general class of $l_0$ minimization problems, in which a sequence of penalty
subproblems is solved by a block coordinate descent (BCD) method. Under
some suitable assumptions, we establish that any accumulation point of the
sequence generated by the PD methods satisfies the first-order optimality conditions
of the problems. Furthermore, for the problems in which the $l_0$ part is the only
nonconvex part, we show that such an accumulation point is a local minimizer of the
problems. Finally, we test the performance of the PD methods by applying them to sparse
logistic regression, sparse inverse covariance selection, and compressed sensing
problems. The computational results demonstrate that our methods generally
outperform the existing methods in terms of solution quality and/or speed.  
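To make the PD scheme concrete, here is a minimal sketch for one instance of the problem class, a sparsity-constrained least squares problem $\min_x \frac12\|Ax-b\|^2$ s.t. $\|x\|_0 \le K$. The splitting variable `y`, the penalty update factor `sigma`, and the iteration counts are illustrative choices, not taken from the talk; the BCD inner loop alternates a closed-form ridge solve in `x` with a hard projection in `y`.

```python
import numpy as np

def pd_sparse_least_squares(A, b, K, rho=1.0, sigma=2.0,
                            outer_iters=30, inner_iters=50):
    """Penalty decomposition sketch for min 0.5||Ax-b||^2 s.t. ||x||_0 <= K.

    Introduces a copy y of x, penalizes (rho/2)||x - y||^2, and applies
    block coordinate descent to each penalty subproblem before increasing rho.
    """
    m, n = A.shape
    AtA, Atb = A.T @ A, A.T @ b
    x = np.zeros(n)
    y = np.zeros(n)
    for _ in range(outer_iters):
        for _ in range(inner_iters):
            # x-block: min 0.5||Ax-b||^2 + (rho/2)||x-y||^2 has a closed form
            x = np.linalg.solve(AtA + rho * np.eye(n), Atb + rho * y)
            # y-block: project x onto {||y||_0 <= K} by keeping the K largest
            y = np.zeros(n)
            idx = np.argsort(np.abs(x))[-K:]
            y[idx] = x[idx]
        rho *= sigma  # tighten the penalty so x and y are driven together
    return y
```

The returned `y` is feasible by construction ($\|y\|_0 \le K$), which is one reason the projection block is taken last in each BCD sweep.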

In the second part, we consider $l_0$ regularized convex cone programming problems.
In particular, we first propose an iterative hard thresholding (IHT) method and
its variant for solving $l_0$ regularized box-constrained convex programming. We
show that the sequence generated by these methods converges to a local minimizer.
Also, we establish the iteration complexity of the IHT method for finding an
$\epsilon$-local-optimal solution. We then propose a method for solving $l_0$
regularized convex cone programming by applying the IHT method to its quadratic
penalty relaxation and establish its iteration complexity for finding an
$\epsilon$-approximate local minimizer. Finally, we propose a variant of this
method in which the associated penalty parameter is dynamically updated, and
show that every accumulation point is a local minimizer of the problem.
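As an illustration of the hard-thresholding step, the sketch below applies an IHT-style iteration to an $l_0$ regularized, box-constrained least squares instance $\min_x \frac12\|Ax-b\|^2 + \lambda\|x\|_0$ s.t. $lo \le x_i \le hi$, assuming $0$ lies in the box. The step size $1/L$ and the componentwise comparison between the clipped gradient step and zero are standard choices for this separable subproblem, not specifics from the talk.

```python
import numpy as np

def iht_l0_box(A, b, lam, lo=-1.0, hi=1.0, iters=500):
    """IHT sketch for min 0.5||Ax-b||^2 + lam*||x||_0 over the box [lo, hi].

    Each iteration takes a gradient step, projects onto the box, and keeps
    a component only if it beats setting it to zero in the proximal objective
    (assumes 0 is inside the box, so x_i = 0 is always feasible).
    """
    m, n = A.shape
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(n)
    for _ in range(iters):
        u = x - (A.T @ (A @ x - b)) / L  # gradient step on the smooth part
        z = np.clip(u, lo, hi)           # projection onto the box
        # componentwise: (L/2)(z-u)^2 + lam versus (L/2)u^2 for the zero option
        keep = 0.5 * L * (z - u) ** 2 + lam < 0.5 * L * u ** 2
        x = np.where(keep, z, 0.0)
    return x
```

The comparison in `keep` is exactly the exact minimization of the separable one-dimensional subproblems, so small components are hard-thresholded to zero while large ones survive (clipped to the box).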