User:Sebastian: Difference between revisions

Vipul: "ok, so for ML wiki, I think you should pick some page or pages to write fully (i.e., long pages) and discuss with @Issa and me as you're doing it, so we can think through the right structure of the pages. In parallel, you can continue the process of creating small, stub pages as you learn things. That's the T-shaped idea: do a few things deeply and then a lot of things (wide), and later you can deepen those other things."
All existing pages here[https://machinelearning.subwiki.org/wiki/Special:AllPages?fbclid=IwAR0TX3jIjbxD2WB-Vn8hqLkaQ3w2hDUcFyzW48ql9SaovFyzbR6M7cyq4yk]


Red links:


* [[Agglomerative clustering]] ([https://www.youtube.com/watch?v=IUn8k5zSI6g])
* [[Anomaly detection]]
* [[Autoencoder]]
* [[Artificial intelligence]]
* [[Bayesian linear regression]]
* [[Bit]]
* [[Byte]]
* [[Centroid linkage clustering]]
* [[Classification]]
* [[Complete linkage clustering]]
* [[Data mining]]
* [[Dendrogram]]
* [[Dimensionality reduction]] ([https://www.youtube.com/watch?v=-OEgiMH5aok])
* [[Evaluation metrics]]
* [[Expert system]]
* [[Unsupervised learning]] ([[clustering]], [[dimensionality reduction]], [[recommender system]]s, [[deep learning]], [[Density estimation]], [[Market basket analysis]])
* [[Fast forest quantile regression]]
* [[Feature extraction]] ([https://www.youtube.com/watch?v=-OEgiMH5aok])
* [[Semi-supervised learning]]
* [[Feature selection]] ([https://www.youtube.com/watch?v=-OEgiMH5aok])
* [[Reinforcement learning]]
* [[Hierarchical clustering]]
* [[Lasso regression]]
* [[Linear regression]] (expand)
* [[Machine learning algorithms]]
* [[Machine learning applications]]
* [[Manifold hypothesis]]
* [[Mean linkage clustering]]
* [[Multiple regression]]
* [[Ordinal regression]]
* [[Partitional clustering]]
* [[Poisson regression]]
* [[Polynomial regression]]
* [[Simple regression]] ([[Simple linear regression]], [[Simple non-linear regression]])
* [[Single linkage clustering]]
* [[Stepwise regression]]
* [[Ridge regression]]
* [[Neural network regression]]
* [[Decision forest regression]]
* [[pandas]]
* [[Dataset]]
* [[Standard deviation]]
* [[TensorFlow]]
* [[Training accuracy]]
* [[Out-of-sample accuracy]]
* [[Procgen Benchmark]]
* [[Optimization]]
* [[Silhouette score]] ([https://www.youtube.com/watch?v=IUn8k5zSI6g])
* [[Soft computing]]
* [[K-nearest neighbor]] (KNN)
* [[NumPy]] [https://www.coursera.org/lecture/machine-learning-with-python/python-for-machine-learning-WQgHa]
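One of the red links above, [[K-nearest neighbor]], is simple enough to illustrate with a minimal NumPy sketch. This is my own illustration, not taken from any of the linked pages; the function and variable names are made up for the example:

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Classify query point x by majority vote among its k nearest training points."""
    # Euclidean distance from the query point to every training point
    dists = np.linalg.norm(X_train - x, axis=1)
    # Indices of the k nearest neighbors
    nearest = np.argsort(dists)[:k]
    # Majority vote among the labels of those neighbors
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]

# Example: two clusters, one near the origin and one near (5, 5)
X = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [6.0, 5.0]])
y = np.array([0, 0, 1, 1])
print(knn_predict(X, y, np.array([0.2, 0.2])))  # a point near the origin gets label 0
```

KNN stores the entire training set and defers all work to prediction time, which is why it is often described as a "lazy" learner; the choice of k trades off noise sensitivity (small k) against over-smoothing (large k).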

Latest revision as of 04:50, 11 May 2022