University of Illinois at Urbana-Champaign
Computer Science Department
Large Scale Machine Learning
Deep Neural Networks
We developed a stochastic variational video prediction (SV2P) method that predicts a different possible future for each sample of its latent variables. To the best of our knowledge, our model is the first to provide effective stochastic multi-frame prediction for real-world videos. We demonstrate the capability of the proposed method in predicting detailed future frames of videos on multiple real-world datasets, both action-free and action-conditioned. We find that our proposed method produces substantially improved video predictions when compared to the same model without stochasticity, and to other stochastic video prediction methods.
We introduce a hybrid CPU/GPU version of the Asynchronous Advantage Actor-Critic (A3C) algorithm, currently the state-of-the-art method in reinforcement learning for various gaming tasks. We analyze its computational traits and concentrate on the critical aspects to leverage the GPU's computational power. We introduce a system of queues and a dynamic scheduling strategy, potentially helpful for other asynchronous algorithms as well. Our hybrid CPU/GPU version of A3C, based on TensorFlow, achieves a significant speed up compared to a CPU implementation and is made publicly available to other researchers.
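The batching-queue idea can be sketched as follows. This is a hypothetical simplification (the function and parameter names are ours, not the paper's): CPU-side actors push prediction requests onto a shared queue, and a predictor thread drains all pending requests and serves them with a single batched model call, so one GPU forward pass is amortized over many actors.

```python
import queue

def predictor(prediction_queue, model_fn, batch_size, results):
    """Drain up to batch_size pending requests and serve them in one batched call."""
    # Block for the first request, then greedily drain whatever else is queued.
    agent_id, state = prediction_queue.get()
    batch_ids, batch_states = [agent_id], [state]
    while len(batch_states) < batch_size:
        try:
            agent_id, state = prediction_queue.get_nowait()
        except queue.Empty:
            break
        batch_ids.append(agent_id)
        batch_states.append(state)
    # One (GPU) forward pass amortized over many CPU actors.
    for agent_id, out in zip(batch_ids, model_fn(batch_states)):
        results[agent_id] = out
```

A dynamic scheduler would tune `batch_size` and the number of predictor threads to keep the GPU saturated without stalling actors.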
Convolutional autoregressive models have recently demonstrated state-of-the-art performance on a number of generation tasks. While fast, parallel training methods have been crucial for their success, generation is typically implemented in a naive fashion where redundant computations are unnecessarily repeated. This results in slow generation, making such models infeasible for production environments. In this work, we describe a method to speed up generation in convolutional autoregressive models. The key idea is to cache hidden states to avoid redundant computation. We apply our fast generation method to the Wavenet and PixelCNN++ models and achieve up to 21x and 183x speedups respectively.
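The caching idea can be illustrated with a toy scalar version of one causal convolution layer (this is our own minimal sketch, not the paper's code): for a layer with kernel size 2 and dilation `d`, generating one new timestep only needs the current input and the input from `d` steps back, so a rolling buffer replaces recomputation over the whole history.

```python
from collections import deque

class CachedCausalConv:
    """Toy scalar causal conv (kernel size 2) with a rolling cache."""

    def __init__(self, w_prev, w_curr, dilation):
        self.w_prev, self.w_curr = w_prev, w_curr
        # Buffer holds the last `dilation` inputs; starts zero-padded.
        self.buffer = deque([0.0] * dilation, maxlen=dilation)

    def step(self, x):
        # The oldest buffered value is the input from `dilation` steps ago,
        # so no convolution over the full sequence is ever re-run.
        x_past = self.buffer[0]
        self.buffer.append(x)
        return self.w_prev * x_past + self.w_curr * x
```

In a real Wavenet each layer keeps such a cache over vector-valued hidden states, which is where the reported speedups come from.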
Neural networks are usually over-parameterized, with significant redundancy in the number of neurons, which results in unnecessary computation and memory usage at inference time. One common approach to this issue is to prune these large networks by removing extra neurons and parameters while maintaining accuracy. In this paper, we propose NoiseOut, a fully automated pruning algorithm based on the correlation between the activations of neurons in the hidden layers. We prove that adding output neurons with entirely random targets results in higher correlation between neurons, which makes pruning by NoiseOut even more efficient. Finally, we test our method on various networks and datasets; these experiments exhibit high pruning rates while maintaining the accuracy of the original network.
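The correlation step behind such a pruner can be sketched as follows (an illustrative fragment, not the paper's implementation): find the pair of hidden neurons whose activations are most correlated across a batch; one of the two can then be removed and its outgoing weights folded into the other.

```python
import numpy as np

def most_correlated_pair(activations):
    """activations: (n_samples, n_neurons) array of hidden-layer outputs."""
    corr = np.corrcoef(activations, rowvar=False)  # neuron-by-neuron correlation
    np.fill_diagonal(corr, 0.0)                    # ignore self-correlation
    i, j = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
    return (min(i, j), max(i, j)), corr[i, j]
```

Repeating this step until correlations fall below a threshold gives a simple greedy pruning loop.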
Video object detection is challenging because objects that are easily detected in one frame may be difficult to detect in another frame within the same clip. Recently, there have been major advances for doing object detection in a single image. These methods typically contain three phases: (i) object proposal generation (ii) object classification and (iii) post-processing. We propose a modification of the post-processing phase that uses high-scoring object detections from nearby frames to boost scores of weaker detections within the same clip. We show that our method obtains superior results to state-of-the-art single image object detection techniques.
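A toy version of the described post-processing step might look like this (the thresholds and the linear boosting rule are our assumptions, used only to make the idea concrete): a weak detection's score is raised when a nearby frame in the same clip contains a confident detection of the same class.

```python
def boost_scores(detections, window=2, strong=0.8, alpha=0.5):
    """detections: list of (frame_idx, class_label, score) tuples."""
    boosted = []
    for f, cls, score in detections:
        # Best same-class score among neighboring frames.
        neighbor_best = max(
            (s for g, c, s in detections
             if c == cls and g != f and abs(g - f) <= window),
            default=0.0,
        )
        if neighbor_best >= strong:
            # Pull the weak score toward the confident neighbor's score.
            score = score + alpha * (neighbor_best - score)
        boosted.append((f, cls, min(score, 1.0)))
    return boosted
```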
There are nearly 700,000 people in the United States living with a primary brain or central nervous system tumor. This year, nearly 78,000 new cases of primary brain tumors are expected to be diagnosed, and nearly 17,000 people will lose their battle with a brain tumor. As with any disease, earlier detection and treatment of brain tumors is likely to be helpful.
In this project, the goal is to detect the location and extent of tumor regions using an ensemble of Convolutional Neural Networks (CNNs) given multi-modal MR images. The main reason for choosing CNNs as the primary algorithm is their promise of replacing handcrafted features with unsupervised feature learning and hierarchical feature extraction. This means that solving a complex medical problem like brain tumor segmentation might be possible with almost no input from experts, using just the raw data as input.
In this project we used oil production data recorded for more than 40 years from more than 1,500 oil wells to predict when an oil well is going to fail. The data includes cumulative oil and water production rates, recorded at the end of each month. This kind of forecasting is critically important for oil production companies, allowing them to adopt an appropriate strategy for each oil well before it reaches failure. For this purpose, we used two separate models to make the predictions and evaluate the results: a Hidden Markov Model (HMM) and Recurrent Neural Networks (RNNs). The results show accurate and superior predictions by the RNN compared to the HMM.
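One way the monthly records could be framed as a supervised sequence problem is sketched below (the window and horizon sizes, and the labeling rule, are our assumptions, since the abstract does not specify them): each training example is a fixed-length window of monthly readings, labeled 1 if the well fails within `horizon` months after the window ends.

```python
def make_windows(series, failure_month, window=12, horizon=6):
    """Slice a well's monthly series into labeled fixed-length windows."""
    examples = []
    for start in range(len(series) - window + 1):
        end = start + window  # window covers months [start, end)
        # Positive label: failure falls within `horizon` months after the window.
        label = int(end <= failure_month <= end + horizon)
        examples.append((series[start:end], label))
    return examples
```

Either an HMM or an RNN can then be trained on the resulting (window, label) pairs.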
There is a trend in safety-critical system design to monitor the system at runtime and check whether the safety condition is satisfied. However, for systems with a large number of dimensions, checking the condition at runtime may take longer than the available time.
In this work we propose to use neural networks for runtime safety invariant checking because of their measurable runtime. We evaluated the accuracy of the proposed method on a helicopter system and reached an accuracy of 98.5916 percent for points in the safe region and 99.9889 percent for points in the unsafe region.
This project aims to predict the location of a Twitter user based only on the raw content of their tweets. We select the words with the highest locality, a proposed metric that identifies common words with high distinction among different states. Then we use various machine learning algorithms to predict the user's location based on the presence of these words in their tweet history. The evaluation of this method, performed on one month of Twitter data gathered from 1 July 2014 to 31 July 2014, shows promising results.
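One plausible instantiation of the locality metric is sketched below (an assumption on our part, since the abstract does not define the metric precisely): the fraction of a word's occurrences that come from its single most frequent state, so a score of 1.0 means the word is used in only one state.

```python
from collections import Counter, defaultdict

def locality_scores(tweets):
    """tweets: iterable of (state, text) pairs; returns {word: locality score}."""
    per_state = defaultdict(Counter)
    totals = Counter()
    for state, text in tweets:
        for word in text.lower().split():
            per_state[state][word] += 1
            totals[word] += 1
    # Locality: share of a word's occurrences captured by its dominant state.
    return {w: max(c[w] for c in per_state.values()) / totals[w] for w in totals}
```

High-locality words (e.g. local place names) then serve as features for the downstream location classifiers.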