\documentclass{article}
\usepackage{nips15submit_09, times}
\usepackage[colorlinks]{hyperref}
\usepackage{url}
\hypersetup{urlcolor={blue}}
\usepackage{amsmath}
\usepackage{algorithm}
\usepackage[noend]{algpseudocode}
\usepackage{graphicx}
\usepackage{amssymb}
\usepackage{eqnarray}
\usepackage{multicol}
\setlength\columnsep{5pt}
\usepackage[margin={.75in,.75in}]{geometry}
\makeatletter
\setlength\@fptop{0\p@}
\makeatother
%Bibliography setup
\usepackage[backend=biber]{biblatex}
\addbibresource{references.bib}
\title{THE DIRE CONSEQUENCES OF NEURAL NETWORKS IN SCIENCE AND ENGINEERING}
\author{
\large{Matthew Burns}\\
\texttt{\textcolor{blue}{mdburns@eng.ucsd.edu}}\\
}
\nipsfinalcopy % Camera-ready version
\begin{document}
\maketitle
Instead of overwhelming the reader with the many sources and opinions that exist, I have chosen to demonstrate, through the annotated resources below, how obscenely dangerous these technologies could be. If you have the opportunity, THINK ABOUT HOW THESE COULD BE USED FOR EVIL, AS WELL AS GOOD. PLEASE.
\begin{multicols}{2}
\begin{enumerate}
\item \textbf{Neural net applications}:
\begin{enumerate}
\item \href{http://www.eetimes.com/document.asp?doc_id=1266579}{\textit{Cat faces}}: Google trains a billion-parameter neural network on GPUs (\$\$\$) and discovers cat faces in YouTube videos
\item \href{https://www.youtube.com/watch?v=xN1d3qHMIEQ}{\textit{DeepMind}}: acquired by Google for \$500M+ after building a deep network that learns to play Atari games at superhuman levels by combining deep learning with reinforcement learning (see the objective sketched after this list)
\item \href{https://www.youtube.com/watch?v=EtMyH_--vnU}{\textit{Deep Learning for Decision Making and Control}}: a Berkeley PhD thesis combining deep learning with optimal control
\item \href{http://googleresearch.blogspot.com/2015/06/inceptionism-going-deeper-into-neural.html}{\textit{Inceptionism: Going Deeper into Neural Networks}}: the images in this article are beautiful (see Figure~\ref{fig:ibis_lowd}); I make no comment on the science
\end{enumerate}
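To make the DeepMind item concrete: the Atari work trains a deep network $Q(s,a;\theta)$ to score actions $a$ from raw screen states $s$ by minimizing a Q-learning objective along the lines of
\begin{equation*}
L(\theta) \;=\; \mathbb{E}_{(s,a,r,s')}\!\left[\Big(r + \gamma \max_{a'} Q(s',a';\theta^-) - Q(s,a;\theta)\Big)^{2}\right],
\end{equation*}
where $\gamma$ is the discount factor and $\theta^-$ is a periodically frozen copy of the weights. The notation here is the standard one from their papers, not taken from the linked video; the point is simply that the deep network is the function approximator and reinforcement learning supplies its training targets.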
\item \textbf{Video lecture}:
\begin{enumerate}
\item \href{https://www.youtube.com/watch?v=qgx57X0fBdA}{\textit{Deep Learning for Computer Vision}}
\item \href{http://cs.nyu.edu/~fergus/presentations/nips2013_final.pdf}{\textit{Slides}} (NIPS 2013 tutorial)
\end{enumerate}
\item \textbf{Reading:}
\begin{enumerate}
\item Bishop \cite{bishop1995neural}: the main theoretical reference for neural networks. It even has a chapter on Bayesian interpretations at the end, tying neural networks to probabilistic graphical models.
\item Efficient BackProp \cite{lecun2012efficient}: also referred to as ``Tricks of the Trade'' because it was republished in a book bearing that title.
\item Convolutional Neural Networks \cite{krizhevsky2012imagenet}: trains a deep, very high-dimensional ($\mathbb{R}^{60{,}000{,}000}$) convolutional network on the ImageNet dataset. The architecture learns FIR filters whose responses form features that separate the image classes well (a minimal numpy sketch of this view follows the list). Remarkably, the learned features achieve better classification performance than human-designed features.
\end{enumerate}
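Since the FIR-filter view above is the key intuition, here is a minimal sketch of a single convolutional layer in plain numpy. The shapes, filter count, and ReLU choice are illustrative rather than taken from \cite{krizhevsky2012imagenet}; note also that, like most deep learning toolboxes, this computes cross-correlation, which is still FIR filtering:
{\small
\begin{verbatim}
import numpy as np

def conv_layer(image, kernels, bias):
    # Each kernel is a 2-D FIR filter slid over
    # the image ('valid' region), then a ReLU.
    K, kh, kw = kernels.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    maps = np.zeros((K, oh, ow))
    for k in range(K):
        for i in range(oh):
            for j in range(ow):
                # FIR filtering: inner product of
                # filter weights and image patch
                patch = image[i:i+kh, j:j+kw]
                maps[k, i, j] = \
                    (patch * kernels[k]).sum() + bias[k]
    return np.maximum(maps, 0.0)  # ReLU

# Toy usage: 3 random 5x5 filters, 28x28 image
rng = np.random.default_rng(0)
out = conv_layer(
    rng.standard_normal((28, 28)),
    rng.standard_normal((3, 5, 5)),
    np.zeros(3))
print(out.shape)  # (3, 24, 24)
\end{verbatim}
}
In a trained network these filter weights are learned by backpropagation rather than drawn at random; stacking such layers yields the feature hierarchy described in the paper.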
\item \textbf{Tutorial projects:} \href{http://deeplearning.stanford.edu/tutorial/}{\textit{UFLDL}}
\begin{enumerate}
\item These are programming-intensive projects, intended to be completed in teams of 2--3.
\item If you finish them, you will know deterministic neural networks well.
\item The sequence starts you off at the level of ECE 174; if you took that class, you are well prepared. You can think of these projects as a template for an independent study, taken together with the other reading mentioned here, especially Bishop.
\end{enumerate}
\item \textbf{Open source projects:}
\begin{enumerate}
\item \href{https://github.com/BVLC/caffe}{\textit{Caffe}}: a fast deep learning framework with cutting-edge pre-trained convolutional networks for image classification
\item \href{https://github.com/vlfeat/matconvnet}{\textit{MatConvNet}}: a MATLAB toolbox for convolutional neural networks
\end{enumerate}
\item \textbf{Datasets:}
\begin{enumerate}
\item \href{http://www.image-net.org/}{\textit{ImageNet}}
\item \href{http://yann.lecun.com/exdb/mnist/}{\textit{MNIST}}
\item \href{https://www.kaggle.com/}{\textit{Kaggle}}
\item \href{http://clickdamage.com/sourcecode/cv_datasets.php}{\textit{Extensive list of computer vision datasets}}
\end{enumerate}
\item \textbf{Current Events:}
\begin{enumerate}
\item \href{http://people.idsia.ch/~juergen/deep-learning-conspiracy.html}{\textit{Blowback}}: Schmidhuber's critique of the ``deep learning conspiracy''
\end{enumerate}
\end{enumerate}
\end{multicols}
\printbibliography
\begin{figure}[t!]
\includegraphics[width=\textwidth]{ibis.png}
\caption{\href{http://googleresearch.blogspot.com/2015/06/inceptionism-going-deeper-into-neural.html}{From the \textit{Google Research blog}}: the left image is the original; on the right is the network's representation of it. The specific ``brush strokes'' in the reconstruction are drawn from the non-linear features (loosely, ``eigenvectors'') that the network learned from its training data.}
\label{fig:ibis_lowd}
\end{figure}
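A note on how Figure~\ref{fig:ibis_lowd} is likely produced: as the blog post describes it, the image itself is optimized by gradient ascent to amplify the activations of a chosen layer. A minimal sketch, with $\phi_\ell(x)$ denoting the activations of layer $\ell$ on image $x$ (the notation is mine, not the post's):
\begin{equation*}
x^{(t+1)} \;=\; x^{(t)} + \eta\, \nabla_x \big\| \phi_\ell\big(x^{(t)}\big) \big\|_2^2 ,
\end{equation*}
so each step nudges the image to strengthen whatever features that layer already detects, which is why the learned ``brush strokes'' emerge.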
\end{document}