Distinguished Rothschild Lectures

 


Tuesday, Jan 14 16:00-18:30 

Room 570 - Education Building - 5th floor


 

Locality Sensitive Hashing and Beyond

Piotr Indyk
MIT Computer Science and Artificial Intelligence Lab
Cambridge, Massachusetts

Nearest neighbor search is a classic problem in computational geometry: given a set of points in a d-dimensional space, the goal is to design a data structure that, given any query point, quickly finds its nearest neighbor in the set. The problem has many applications in computer science, signal processing, and other areas. Typically, those applications involve point sets that are very high-dimensional.
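As a point of reference for the data structures discussed in the talk, the problem itself can be stated as a naive linear scan (the function name below is purely illustrative):

```python
import math

def nearest_neighbor(points, query):
    """Return the point in `points` closest to `query` in Euclidean distance.

    This brute-force scan takes O(n * d) time per query; the talk is about
    data structures that answer such queries much faster.
    """
    return min(points, key=lambda p: math.dist(p, query))

points = [(0.0, 0.0), (3.0, 4.0), (1.0, 1.0)]
print(nearest_neighbor(points, (0.9, 1.2)))  # (1.0, 1.0)
```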

In this talk I will present several approximation algorithms for this problem, based on Locality-Sensitive Hashing (LSH). LSH is a class of random mappings under which two points that are close to each other are more likely to be hashed to the same value than two points that are far apart. I will give an overview of the algorithms achievable using this method, their limitations, and recent results that overcome those limitations.
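The defining property of LSH can be seen in a small sketch of the hyperplane (sign-of-dot-product) family. For a deterministic illustration the hyperplanes below are fixed; in an actual LSH scheme they would be drawn at random (all names are illustrative, not from the talk):

```python
def hash_point(point, planes):
    # Sign pattern of dot products with hyperplanes: a hyperplane-LSH sketch.
    return tuple(1 if sum(a * b for a, b in zip(h, point)) >= 0 else 0
                 for h in planes)

def agreements(a, b, planes):
    # Number of hash bits on which the two points collide.
    return sum(x == y for x, y in zip(hash_point(a, planes),
                                      hash_point(b, planes)))

# Fixed hyperplanes for reproducibility; real LSH samples them randomly.
planes = [(1, 0), (0, 1), (1, 1), (1, -1)]
p, close, far = (2, 1), (2.1, 1.1), (-3, -2)
print(agreements(p, close, planes), agreements(p, far, planes))  # 4 0
```

Nearby points agree on all four hash bits, while the distant point agrees on none, which is exactly the collision-probability gap that LSH-based search exploits.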

~~~~~~~~~~~~~

Nonblocking Concurrent Data Structures

Faith Ellen
University of Toronto

A library of concurrent data structures can make the task of developing concurrent software much easier. Although sequential data structures can be transformed into concurrent ones in straightforward ways using locks or transactional memory, the resulting implementations are generally inefficient. Some of the difficulties and techniques involved in constructing efficient concurrent data structures will be discussed, including recent work on balanced binary search trees.
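The "straightforward but inefficient" transformation mentioned above can be sketched as a sequential structure wrapped in a single lock (the class below is an illustration, not from the talk): every operation is correct but serializes on one lock, so threads make no progress in parallel.

```python
import threading

class LockedStack:
    # Straightforward lock-based transformation of a sequential stack:
    # correct, but every operation contends for the same lock.
    def __init__(self):
        self._items = []
        self._lock = threading.Lock()

    def push(self, x):
        with self._lock:
            self._items.append(x)

    def pop(self):
        with self._lock:
            return self._items.pop() if self._items else None

s = LockedStack()
threads = [threading.Thread(target=s.push, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
result = sorted(s.pop() for _ in range(4))
print(result)  # [0, 1, 2, 3]
```

Nonblocking designs replace the lock with atomic primitives such as compare-and-swap, so that a stalled thread cannot prevent others from completing their operations.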

 
 

Wednesday, Jan 15 16:00-17:30        

Explicit Constructions in High-Dimensional Geometry

Piotr Indyk
MIT Computer Science and Artificial Intelligence Lab
Cambridge, Massachusetts

A typical question in geometric functional analysis is: given two geometric spaces X and Y, is there an embedding of X into Y that distorts the distance between any pair of points by at most a constant factor? One of the main tools for constructing such mappings is the probabilistic method: the embedding is chosen at random, and one shows that it "works" with high probability. Unfortunately, this approach does not yield concrete (or explicit) constructions of embeddings.
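A minimal sketch of the probabilistic method at work, assuming the classic Johnson-Lindenstrauss setting: a random Gaussian matrix maps R^d to a lower-dimensional R^k while preserving pairwise distances up to a small factor with high probability, but the matrix itself is random rather than explicit.

```python
import math, random

rng = random.Random(42)
d, k = 50, 20
# One random Gaussian matrix defines the embedding for every point.
A = [[rng.gauss(0, 1) for _ in range(d)] for _ in range(k)]

def embed(x):
    # Map R^d -> R^k; the 1/sqrt(k) scaling preserves norms in expectation.
    return [sum(a * xi for a, xi in zip(row, x)) / math.sqrt(k) for row in A]

x = [1.0] * d
y = [0.5] * d
orig = math.dist(x, y)
emb = math.dist(embed(x), embed(y))
print(orig, emb)  # the two distances agree up to a small constant factor w.h.p.
```

The talk concerns replacing such random draws with deterministic, explicitly described constructions that achieve comparable guarantees.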

Over the last few years, several explicit constructions of such embeddings have been discovered. Examples include near-Euclidean sections of L1, matrices satisfying the L1 version of the Restricted Isometry Property (RIP), and others. In this talk I will give an overview of those constructions as well as the theoretical computer science tools that enabled those developments.


 
Thursday, Jan 16 10:00-12:00  
 
Sparse Fourier Transform

Piotr Indyk
MIT Computer Science and Artificial Intelligence Lab
Cambridge, Massachusetts

The Fast Fourier Transform (FFT) is a widely used numerical algorithm. It computes the Discrete Fourier Transform (DFT) of an n-dimensional signal in O(n log n) time, and it is not known whether this running time can be improved. However, in many applications most of the Fourier coefficients of a signal are "small" or equal to zero, i.e., the output of the transform is (approximately) sparse. In this case, it is known that the set of nonzero coefficients can be computed much faster, even in sub-linear time.

In this talk I will give an overview of recent, highly efficient algorithms for computing the Sparse Fourier Transform. One of the algorithms runs in O(k log n) time, where k is the number of nonzero Fourier coefficients of the signal; this improves on the runtime of the FFT for any k = o(n). I will also give a few examples of applications impacted by these developments.
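The sparsity assumption behind these algorithms can be checked on a toy signal. The sketch below uses a naive O(n^2) DFT just to exhibit a spectrum with k = 2 nonzero coefficients; the sparse FFT algorithms from the talk locate such a support without computing all n coefficients.

```python
import cmath, math

def dft(x):
    # Naive O(n^2) DFT -- enough to inspect the spectrum of a short signal.
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * f * t / n) for t in range(n))
            for f in range(n)]

n = 16
# A pure cosine at frequency 3: its DFT has exactly two nonzero
# coefficients, at frequencies 3 and n - 3 (so k = 2).
x = [math.cos(2 * math.pi * 3 * t / n) for t in range(n)]
X = dft(x)
support = [f for f, c in enumerate(X) if abs(c) > 1e-6]
print(support)  # [3, 13]
```

A sub-linear-time algorithm would recover this two-element support and the corresponding coefficient values while reading only a fraction of the signal.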