Learn&Record: Is South Korea Disappearing?
Recently, South Korean President Yoon Suk Yeol officially declared a national "demographic emergency," announcing measures to address the country's ultra-low birthrate. Yoon warned that if the trend of low fertility and population aging is not reversed, South Korea's population could ultimately die out.
From: The New York Times, Dec. 2, 2023
The Chinese text is the official New York Times translation, for reference only.
Reposted from LearnAndRecord: Is South Korea Disappearing?
For some time now, South Korea has been a striking case study in the depopulation problem that hangs over the developed world. Almost all rich countries have seen their birthrates settle below replacement level, but usually that means somewhere in the neighborhood of 1.5 children per woman. For instance, in 2021 the United States s ...
ML Note Course 1 Week 3-2: Over-fitting and Under-fitting
Generalization
means the model can make good predictions even on brand new examples that it has never seen before.
Under-fitting (high bias)
means the model does not fit the training set well, which is also called high bias.
Addressing Under-fitting
Add more features as input
Redesign a more complex model
Over-fitting (high variance)
means the model fits the training set so well that it fails to generalize to new examples it has never seen before.
Addressing Over-fitting
Collect more training dat ...
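The contrast between under- and over-fitting can be seen by fitting polynomials of different degrees to the same noisy data; this is a minimal sketch (the dataset, degrees, and noise level are made up for illustration):

```python
import numpy as np

# Fit polynomials of increasing degree to noisy samples of a sine wave,
# then compare training error against error on held-out (new) examples.
rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, size=10)
x_test = np.linspace(0, 1, 50)
y_test = np.sin(2 * np.pi * x_test)

results = {}
for degree in (1, 3, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    results[degree] = (train_err, test_err)
    print(f"degree={degree}: train MSE={train_err:.4f}, test MSE={test_err:.4f}")
```

Degree 1 under-fits (high error on both sets); degree 9 over-fits (near-zero training error, but large error on the new examples).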
ML Note Course 1 Week 3-1: Classification
Logistic Regression
solves binary classification problems
sigmoid function / logistic function:
g(z) = \frac{1}{1+e^{-z}}, \quad 0 < g(z) < 1
z = \vec{w} \cdot \vec{x} + b
logistic regression model:
f_{\vec{w},b}(\vec{x}) = g(\vec{w} \cdot \vec{x} + b) = \frac{1}{1+e^{-(\vec{w} \cdot \vec{x} + b)}}
means: the probability that y = 1, namely $P(y=1 \mid \vec{x}; \vec{w}, b)$
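The model above can be sketched in a few lines; the parameter values and feature vector here are made up for illustration:

```python
import numpy as np

# g(z) = 1 / (1 + e^{-z}); f_{w,b}(x) = g(w·x + b) = P(y=1 | x; w, b)
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict_proba(x, w, b):
    """Probability that y = 1 for feature vector x."""
    return sigmoid(np.dot(w, x) + b)

w = np.array([1.5, -0.5])   # example parameters (made up)
b = -1.0
x = np.array([2.0, 1.0])
p = predict_proba(x, w, b)  # z = 1.5*2 - 0.5*1 - 1 = 1.5
print(round(p, 3))          # sigmoid(1.5) ≈ 0.818
```

Because g(z) is squashed into (0, 1), the output can be read directly as a probability; thresholding it (e.g. at 0.5) yields the binary class label.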
Cost F ...
ML Note Course 1 Week 1-2: Linear Regression
f(x) = wx + b
w, b: parameters (also called coefficients or weights)
Cost Function
The cost function most commonly used for linear regression is the squared error cost function:
J(w,b) = \frac{1}{2m} \sum_{i=1}^{m} (\hat{y}^{(i)} - y^{(i)})^2
where:
\hat{y}^{(i)} - y^{(i)} is called the error;
m is the size of the training set;
the extra division by 2 makes the later (derivative) calculations cleaner.
Replacing \hat{y}^{(i)} with f_{w,b}(x^{(i)}) gives the equivalent form:
J(w,b) = \frac{1}{2m} \sum_{i=1}^{m} (f_{w,b}(x^{(i)}) - y^{(i)})^2
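The squared error cost J(w, b) above translates directly into code; this is a minimal sketch on a made-up dataset:

```python
import numpy as np

# Squared error cost for the linear model f(x) = w*x + b:
# J(w,b) = (1/2m) * sum over i of (f(x^(i)) - y^(i))^2
def compute_cost(x, y, w, b):
    m = len(x)
    predictions = w * x + b      # f_{w,b}(x^{(i)})
    errors = predictions - y     # \hat{y}^{(i)} - y^{(i)}
    return np.sum(errors ** 2) / (2 * m)

x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 6.0])            # generated by y = 2x exactly
cost_good = compute_cost(x, y, w=2.0, b=0.0)
cost_bad = compute_cost(x, y, w=1.0, b=0.0)
print(cost_good)  # perfect fit → 0.0
print(cost_bad)   # errors -1, -2, -3 → 14 / (2*3) ≈ 2.333
```

Minimizing this cost over w and b (e.g. with gradient descent) recovers the line that best fits the training data.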
ML Note Course 1 Week 1-1: Machine Learning
Machine learning = finding a function
Define the search space: choose a set of candidate functions (Deep Learning (CNN, Transformer, ...), Decision Tree, etc.)
Define the criterion: choose a standard for judging how good a function is (Supervised Learning, Semi-supervised Learning, RL, etc.)
Goal: find the best function (using Gradient Descent (Adam, AdamW, ...), Genetic Algorithm, Backpropagation, ...)
A quick introduction to the basic principles of machine learning, by Hung-yi Lee
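The third step, finding the best function with gradient descent, can be sketched for the simple model f(x) = w*x + b; the data and learning rate here are made up for illustration:

```python
import numpy as np

# Gradient descent on the squared error cost for f(x) = w*x + b.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = 3.0 * x + 1.0                 # true function: w = 3, b = 1

w, b, lr = 0.0, 0.0, 0.01
for _ in range(10000):
    pred = w * x + b
    dw = np.mean((pred - y) * x)  # dJ/dw
    db = np.mean(pred - y)        # dJ/db
    w -= lr * dw
    b -= lr * db

print(round(w, 2), round(b, 2))   # → 3.0 1.0
```

Each iteration nudges the parameters against the gradient of the cost, so w and b converge toward the values that generated the data.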
Supervised learning
The most widely used type of machine learning, and the one where progress has been fastest.
learning algorithms learn from "right answers"
a mapping: x => y (input => output)
Regression
predicts a number out of infinitely many possible numbers
f(x) = wx + b
w, b: parameters (coefficients / weig ...