Latent Dirichlet allocation
Abstract We describe latent Dirichlet allocation (LDA), a generative probabilistic model for
collections of discrete data such as text corpora. LDA is a three-level hierarchical Bayesian
model, in which each item of a collection is modeled as a finite mixture over an underlying
…
Cited by 16895 Related articles All 124 versions
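As a reading aid for this entry: the three-level generative story the snippet describes (each document is a finite mixture over topics, each topic a distribution over words) can be sketched roughly as below. This is a generic illustration of the generative process only, not the paper's inference procedure; the function name and hyperparameter defaults are invented for the example.

```python
import random

def lda_generate(n_docs, doc_len, n_topics, vocab_size, alpha=0.5, beta=0.1, seed=0):
    """Sample word-index documents from the LDA generative story:
    each topic is a distribution over the vocabulary, and each
    document draws its own finite mixture over topics."""
    rng = random.Random(seed)

    def dirichlet(dim, conc):
        # Symmetric Dirichlet draw via normalized Gamma samples.
        xs = [rng.gammavariate(conc, 1.0) for _ in range(dim)]
        total = sum(xs)
        return [x / total for x in xs]

    def categorical(probs):
        r, acc = rng.random(), 0.0
        for i, p in enumerate(probs):
            acc += p
            if r < acc:
                return i
        return len(probs) - 1

    topics = [dirichlet(vocab_size, beta) for _ in range(n_topics)]  # corpus-level
    docs = []
    for _ in range(n_docs):
        theta = dirichlet(n_topics, alpha)  # per-document topic mixture
        docs.append([categorical(topics[categorical(theta)])
                     for _ in range(doc_len)])
    return docs

docs = lda_generate(n_docs=3, doc_len=20, n_topics=2, vocab_size=50)
```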
On spectral clustering: Analysis and an algorithm
Abstract Despite many empirical successes of spectral clustering methods—algorithms that
cluster points using eigenvectors of matrices derived from the data—there are several
unresolved issues. First, there are a wide variety of algorithms that use the eigenvectors in
…
Cited by 5346 Related articles All 65 versions
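The snippet's core idea, clustering points using eigenvectors of a matrix derived from the data, can be sketched as below. This is a simplified reading of the normalized-affinity recipe, assuming NumPy; the `sigma` parameter, the farthest-point seeding, and the small k-means loop are choices made for the example, not details from the paper.

```python
import numpy as np

def spectral_cluster(points, k, sigma=1.0):
    """Embed points via the top-k eigenvectors of a normalized Gaussian
    affinity matrix, then run a small k-means on the embedding."""
    X = np.asarray(points, dtype=float)
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    A = np.exp(-sq / (2.0 * sigma ** 2))
    np.fill_diagonal(A, 0.0)
    d = A.sum(axis=1)
    Dinv = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    L = Dinv[:, None] * A * Dinv[None, :]                # D^{-1/2} A D^{-1/2}
    _, V = np.linalg.eigh(L)                             # eigenvalues ascending
    U = V[:, -k:]                                        # top-k eigenvectors
    U = U / np.maximum(np.linalg.norm(U, axis=1, keepdims=True), 1e-12)
    # Deterministic farthest-point seeding, then a few Lloyd iterations.
    centers = [U[0]]
    for _ in range(1, k):
        d2 = np.min([((U - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(U[int(d2.argmax())])
    centers = np.array(centers)
    for _ in range(20):
        labels = ((U[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(axis=1)
        for j in range(k):
            mask = labels == j
            if mask.any():
                centers[j] = U[mask].mean(axis=0)
    return labels
```

On two well-separated blobs the embedding collapses each blob to (nearly) a single point, so even this toy k-means recovers the grouping.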
[PDF] ROS: an open-source Robot Operating System
…, T Foote, J Leibs, R Wheeler, AY Ng - ICRA workshop on …, 2009 - willowgarage.com
Abstract—This paper gives an overview of ROS, an open-source robot operating system.
ROS is not an operating system in the traditional sense of process management and
scheduling; rather, it provides a structured communications layer above the host operating
…
Cited by 2938 Related articles All 17 versions
[PDF] Distance metric learning with application to clustering with side-information
Abstract Many algorithms rely critically on being given a good metric over their inputs. For
instance, data can often be clustered in many “plausible” ways, and if a clustering algorithm
such as K-means initially fails to find one that is meaningful to a user, the only recourse may
…
Cited by 2164 Related articles All 39 versions
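As a toy illustration of the side-information idea in this entry (not the paper's optimization), one can fit a diagonal Mahalanobis metric that down-weights dimensions along which user-supplied "similar" pairs vary a lot; both function names below are invented for the sketch.

```python
def diagonal_metric(similar_pairs):
    """Learn diagonal Mahalanobis weights from pairs the user marked similar:
    dimensions with large within-pair variation get small weight.
    A toy illustration of using side-information, not the paper's algorithm."""
    dim = len(similar_pairs[0][0])
    var = [0.0] * dim
    for a, b in similar_pairs:
        for i in range(dim):
            var[i] += (a[i] - b[i]) ** 2
    # Weight inversely to within-pair variation (small floor for stability).
    return [1.0 / (v / len(similar_pairs) + 1e-6) for v in var]

def mahalanobis(a, b, w):
    """Weighted Euclidean distance under the learned diagonal metric."""
    return sum(wi * (ai - bi) ** 2 for wi, ai, bi in zip(w, a, b)) ** 0.5
```

A downstream clusterer such as K-means can then use `mahalanobis` in place of plain Euclidean distance, so clusterings that split similar pairs become expensive.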
[PDF] Efficient sparse coding algorithms
Abstract Sparse coding provides a class of algorithms for finding succinct representations of
stimuli; given only unlabeled input data, it discovers basis functions that capture higher-level
features in the data. However, finding sparse codes remains a very difficult computational
…
Cited by 1915 Related articles All 26 versions
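Finding sparse codes for a fixed basis, as this snippet describes, is an L1-regularized least-squares problem. The sketch below solves it with generic proximal-gradient (ISTA) iterations, assuming NumPy; it illustrates the problem being solved, not the paper's efficient algorithms, and the function name is invented for the example.

```python
import numpy as np

def ista(x, B, lam=0.1, steps=300):
    """Minimize ||x - B s||^2 + lam * ||s||_1 over the code s by ISTA:
    a gradient step on the smooth term, then soft-thresholding."""
    Lf = 2.0 * np.linalg.norm(B, 2) ** 2  # Lipschitz constant of the gradient
    s = np.zeros(B.shape[1])
    for _ in range(steps):
        g = 2.0 * B.T @ (B @ s - x)       # gradient of the reconstruction term
        z = s - g / Lf
        s = np.sign(z) * np.maximum(np.abs(z) - lam / Lf, 0.0)  # soft-threshold
    return s
```

With an identity basis the solution is just elementwise soft-thresholding, which makes the sparsifying effect easy to check: small coefficients are driven exactly to zero.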
Cheap and fast---but is it good?: evaluating non-expert annotations for natural language tasks
Abstract Human linguistic annotation is crucial for many natural language processing tasks
but can be expensive and time-consuming. We explore the use of Amazon's Mechanical
Turk system, a significantly cheaper and faster method for collecting annotations from a
…
Cited by 1347 Related articles All 37 versions
Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations
Abstract There has been much interest in unsupervised learning of hierarchical generative
models such as deep belief networks. Scaling such models to full-sized, high-dimensional
images remains a difficult problem. To address this problem, we present the convolutional
…
Cited by 1248 Related articles All 29 versions
Map-reduce for machine learning on multicore
Abstract We are at the beginning of the multicore era. Computers will have increasingly
many cores (processors), but there is still no good programming framework for these
architectures, and thus no simple and unified way for machine learning to take advantage of
…
Cited by 1139 Related articles All 37 versions
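This entry's framework rests on the observation that many learning algorithms can be written in a "summation form" over the data, which parallelizes naturally: each core sums over its shard, and the partial sums are combined. The sketch below shows that shape for a linear-regression gradient, with Python's serial `map` standing in for per-core workers; the function names are invented for the example.

```python
from functools import reduce

def partial_gradient(shard, w):
    """Map step: squared-loss gradient summed over one data shard.
    Each example is a (features, target) pair."""
    g = [0.0] * len(w)
    for x, y in shard:
        err = sum(wi * xi for wi, xi in zip(w, x)) - y
        for i, xi in enumerate(x):
            g[i] += 2.0 * err * xi
    return g

def mapreduce_gradient(shards, w):
    """Reduce step: elementwise sum of the per-shard partial gradients.
    The serial map here stands in for parallel per-core workers."""
    partials = map(lambda shard: partial_gradient(shard, w), shards)
    return reduce(lambda a, b: [ai + bi for ai, bi in zip(a, b)], partials)
```

Because the gradient is a plain sum over examples, splitting the data across shards leaves the result unchanged, which is exactly what makes the map-reduce decomposition valid.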
Apprenticeship learning via inverse reinforcement learning
P Abbeel, AY Ng - Proceedings of the twenty-first international …, 2004 - dl.acm.org
Abstract We consider learning in a Markov decision process where we are not explicitly
given a reward function, but where instead we can observe an expert demonstrating the task
that we want to learn to perform. This setting is useful in applications (such as the task of
…
Self-taught learning: transfer learning from unlabeled data
Abstract We present a new machine learning framework called "self-taught learning" for
using unlabeled data in supervised classification tasks. We do not assume that the
unlabeled data follows the same class labels or generative distribution as the labeled data.
…
Related searches
- andrew ng deep learning
- andrew ng machine learning
- andrew ng stanford
- andrew ng reinforcement learning
- andrew ng helicopter
- andrew ng learning gpu
- andrew ng sparse autoencoder
- andrew ng logistic regression
- andrew ng spectral clustering
- andrew ng latent dirichlet allocation
- andrew ng convolutional neural network
- andrew ng lecture notes
- andrew ng unsupervised learning
- andrew ng sparse coding
- andrew ng coates
- andrew ng mooc