The more data we have, the more data we need to process. Whether the source is Internet traffic or biological data, the hardware is never fast enough. The aim of this workshop is to focus on analysing data under new models: when the data cannot be stored (e.g., identifying viruses in Internet traffic), when multiple cores are used to analyse the data, and when short sketches of the data are generated to be sent to and analysed by someone else. We believe a new set of algorithmic techniques, relying mostly on statistics and data structures, can be used in these models, and we wish to find such techniques and employ them. An important focus of the workshop is to find algorithms that are elegant and can therefore also be used in practice. To develop these algorithms, it is critical to connect the key subareas of algorithmic research on big data: streaming, sketching, and sampling. The goal of the workshop is to bring together researchers from these different subareas and to establish strong collaborations among the attendees.