JACKIE CHAN
@cksdn50077392
Reviews Written: -
Average Rating: -
Posts
Q&A
Bayesian Opt related question
Understood now~ Thank you!
Q&A
Bayesian Opt related question
Yes, it's exactly the content from that GitHub repo..! I just called f1_score there to measure performance, and the score comes out poor ㅠ
Q&A
Bayesian Opt related question

### Data preprocessing

```python
!pip install bayesian-optimization

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib

cust_df = pd.read_csv(r"C:\Users\user\Desktop\datasets\santander-customer-satisfaction/train.csv",
                      encoding='latin-1')
print('dataset shape:', cust_df.shape)
cust_df.head(3)
cust_df.info()

print(cust_df['TARGET'].value_counts())
unsatisfied_cnt = cust_df[cust_df['TARGET'] == 1]['TARGET'].count()
total_cnt = cust_df['TARGET'].count()
print('unsatisfied ratio: {0:.2f}'.format(unsatisfied_cnt / total_cnt))

cust_df.describe()
print(cust_df['var3'].value_counts()[:10])

# Replace the anomalous var3 value and drop the ID feature.
cust_df['var3'].replace(-999999, 2, inplace=True)
cust_df.drop('ID', axis=1, inplace=True)

# Split features and label. The label is the last column of the DataFrame,
# so slice at column position -1.
X_features = cust_df.iloc[:, :-1]
y_labels = cust_df.iloc[:, -1]
print('feature data shape:{0}'.format(X_features.shape))

from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X_features, y_labels,
                                                    test_size=0.2, random_state=0)
train_cnt = y_train.count()
test_cnt = y_test.count()
print('train set shape:{0}, test set shape:{1}'.format(X_train.shape, X_test.shape))
print('train set label distribution:')
print(y_train.value_counts() / train_cnt)
print('\ntest set label distribution:')
print(y_test.value_counts() / test_cnt)
```

### XGBoost training and hyperparameter tuning

```python
from xgboost import XGBClassifier
from sklearn.metrics import roc_auc_score

# n_estimators=500; random_state is fixed so repeated runs give the same predictions.
xgb_clf = XGBClassifier(n_estimators=500, random_state=156)

# Train with AUC as the evaluation metric and early stopping at 100 rounds.
xgb_clf.fit(X_train, y_train, early_stopping_rounds=100, eval_metric="auc",
            eval_set=[(X_train, y_train), (X_test, y_test)])

xgb_roc_score = roc_auc_score(y_test, xgb_clf.predict_proba(X_test)[:, 1], average='macro')
print('ROC AUC: {0:.4f}'.format(xgb_roc_score))

from sklearn.model_selection import GridSearchCV

# Reduce n_estimators to 100 to speed up the hyperparameter search.
xgb_clf = XGBClassifier(n_estimators=100)
params = {'max_depth': [5, 7],
          'min_child_weight': [1, 3],
          'colsample_bytree': [0.5, 0.75]}

# cv is not specified, to keep the search fast.
gridcv = GridSearchCV(xgb_clf, param_grid=params)
gridcv.fit(X_train, y_train, early_stopping_rounds=30, eval_metric="auc",
           eval_set=[(X_train, y_train), (X_test, y_test)])
print('GridSearchCV best parameters:', gridcv.best_params_)

xgb_roc_score = roc_auc_score(y_test, gridcv.predict_proba(X_test)[:, 1], average='macro')
print('ROC AUC: {0:.4f}'.format(xgb_roc_score))

# Increase n_estimators to 1000, lower learning_rate to 0.02, add reg_alpha=0.03.
xgb_clf = XGBClassifier(n_estimators=1000, random_state=156, learning_rate=0.02,
                        max_depth=5, min_child_weight=1, colsample_bytree=0.75,
                        reg_alpha=0.03)
# AUC as the evaluation metric, early stopping at 200 rounds.
xgb_clf.fit(X_train, y_train, early_stopping_rounds=200, eval_metric="auc",
            eval_set=[(X_train, y_train), (X_test, y_test)])
xgb_roc_score = roc_auc_score(y_test, xgb_clf.predict_proba(X_test)[:, 1], average='macro')
print('ROC AUC: {0:.4f}'.format(xgb_roc_score))

# Same configuration, but with max_depth=7.
xgb_clf = XGBClassifier(n_estimators=1000, random_state=156, learning_rate=0.02,
                        max_depth=7, min_child_weight=1, colsample_bytree=0.75,
                        reg_alpha=0.03)
xgb_clf.fit(X_train, y_train, early_stopping_rounds=200, eval_metric="auc",
            eval_set=[(X_train, y_train), (X_test, y_test)])
xgb_roc_score = roc_auc_score(y_test, xgb_clf.predict_proba(X_test)[:, 1], average='macro')
print('ROC AUC: {0:.4f}'.format(xgb_roc_score))

from xgboost import plot_importance
import matplotlib.pyplot as plt
%matplotlib inline

fig, ax = plt.subplots(1, 1, figsize=(10, 8))
plot_importance(xgb_clf, ax=ax, max_num_features=20, height=0.4)
```

### LightGBM model training and hyperparameter tuning

```python
from lightgbm import LGBMClassifier

lgbm_clf = LGBMClassifier(n_estimators=500)
evals = [(X_test, y_test)]
lgbm_clf.fit(X_train, y_train, early_stopping_rounds=100, eval_metric="auc",
             eval_set=evals, verbose=True)

lgbm_roc_score = roc_auc_score(y_test, lgbm_clf.predict_proba(X_test)[:, 1], average='macro')
print('ROC AUC: {0:.4f}'.format(lgbm_roc_score))

from sklearn.metrics import f1_score
f1_score(y_test, lgbm_clf.predict(X_test))

from sklearn.model_selection import GridSearchCV

# Keep n_estimators at 200 to speed up the hyperparameter search.
lgbm_clf = LGBMClassifier(n_estimators=200)
params = {'num_leaves': [32, 64],
          'max_depth': [128, 160],
          'min_child_samples': [60, 100],
          'subsample': [0.8, 1]}

# cv is not specified, to keep the search fast.
gridcv = GridSearchCV(lgbm_clf, param_grid=params)
gridcv.fit(X_train, y_train, early_stopping_rounds=30, eval_metric="auc",
           eval_set=[(X_train, y_train), (X_test, y_test)])
print('GridSearchCV best parameters:', gridcv.best_params_)

lgbm_roc_score = roc_auc_score(y_test, gridcv.predict_proba(X_test)[:, 1], average='macro')
print('ROC AUC: {0:.4f}'.format(lgbm_roc_score))

lgbm_clf = LGBMClassifier(n_estimators=1000, num_leaves=32, subsample=0.8,
                          min_child_samples=100, max_depth=128)
evals = [(X_test, y_test)]
lgbm_clf.fit(X_train, y_train, early_stopping_rounds=100, eval_metric="auc",
             eval_set=evals, verbose=True)
lgbm_roc_score = roc_auc_score(y_test, lgbm_clf.predict_proba(X_test)[:, 1], average='macro')
print('ROC AUC: {0:.4f}'.format(lgbm_roc_score))
```

### Hyperparameter tuning with BayesianOptimization

The hyperparameters to tune are defined as a dictionary, with each hyperparameter's range given as a tuple. For example, to have num_leaves searched between 24 and 45, set 'num_leaves': (24, 45). One caveat: even though num_leaves only accepts integer values, once the BayesianOptimization class is given a range for a parameter it always samples it as a continuous value, not an integer. That is, it will try to set num_leaves to floats such as 24.5, 25.4, 30.2, or 27.2, which raises a runtime error. To prevent this, the evaluation function called by BayesianOptimization has to convert these XGBoost/LightGBM hyperparameters back to integers. I'll come back to this point below.

```python
bayes_params = {
    'num_leaves': (24, 45),
    'colsample_bytree': (0.5, 1),
    'subsample': (0.5, 1),
    'max_depth': (4, 12),
    'reg_alpha': (0, 0.5),
    'reg_lambda': (0, 0.5),
    'min_split_gain': (0.001, 0.1),
    'min_child_weight': (5, 50)
}
```

With the ranges of the hyperparameters set, the next step is the function that BayesianOptimization calls to optimize the model. So the optimizer can judge whether the tuning is actually improving things, this function trains and evaluates the model and returns the evaluation metric. Since the BayesianOptimization object calls it repeatedly while varying the parameters, its arguments are the parameters defined in the dictionary above.

```python
from lightgbm import LGBMClassifier
from sklearn.metrics import roc_auc_score

def lgb_roc_eval(num_leaves, colsample_bytree, subsample, max_depth,
                 reg_alpha, reg_lambda, min_split_gain, min_child_weight):
    params = {
        'n_estimators': 200,
        'learning_rate': 0.02,
        'num_leaves': int(round(num_leaves)),   # integer-only parameter
        'colsample_bytree': colsample_bytree,
        'subsample': subsample,
        'max_depth': int(round(max_depth)),     # integer-only parameter
        'reg_alpha': reg_alpha,
        'reg_lambda': reg_lambda,
        'min_split_gain': min_split_gain,
        'min_child_weight': min_child_weight,
        'verbosity': -1
    }
    print("params:", params)
    lgb_model = LGBMClassifier(**params)
    lgb_model.fit(X_train, y_train, eval_set=[(X_test, y_test)],
                  early_stopping_rounds=30, eval_metric="auc", verbose=False)
    best_iter = lgb_model.best_iteration_
    print('best_iter:', best_iter)
    valid_proba = lgb_model.predict_proba(X_test, num_iteration=best_iter)[:, 1]
    roc_preds = roc_auc_score(y_test, valid_proba)
    print('roc_auc:', roc_preds)
    return roc_preds
```

Next we create the BayesianOptimization object, passing as constructor arguments the evaluation function lgb_roc_eval created above and the dictionary bayes_params holding the ranges of the hyperparameters to tune.

```python
from bayes_opt import BayesianOptimization

BO_lgb = BayesianOptimization(lgb_roc_eval, bayes_params, random_state=0)
```

Everything is now ready to search for the best hyperparameters by repeatedly feeding values into the evaluation function; calling maximize() on the BayesianOptimization object performs the search.

```python
BO_lgb.maximize(init_points=5, n_iter=10)

# The res attribute holds the metric value and the hyperparameter values
# at every step of the tuning process.
BO_lgb.res

# The max attribute holds the hyperparameter values that achieved the best metric.
BO_lgb.max

max_params = BO_lgb.max['params']
max_params['num_leaves'] = int(round(max_params['num_leaves']))
max_params['max_depth'] = int(round(max_params['max_depth']))

lgbm_clf = LGBMClassifier(n_estimators=1000, learning_rate=0.02, **max_params)
evals = [(X_test, y_test)]
lgbm_clf.fit(X_train, y_train, early_stopping_rounds=100, eval_metric="auc",
             eval_set=evals, verbose=True)
lgbm_roc_score = roc_auc_score(y_test, lgbm_clf.predict_proba(X_test)[:, 1], average='macro')
print('ROC AUC: {0:.4f}'.format(lgbm_roc_score))

p = lgbm_clf.predict(X_test)
from sklearn.metrics import f1_score
f1_score(y_test, p)

test_df = pd.read_csv(r'C:\Users\user\Desktop\datasets\santander-customer-satisfaction\test.csv',
                      encoding='latin-1')
test_df['var3'].replace(-999999, 2, inplace=True)
test_df.drop('ID', axis=1, inplace=True)
testp = lgbm_clf.predict(test_df)
testp_df = pd.Series(testp)
testp_df.value_counts()
```
Q&A
Bayesian Opt related question
But even when I run the bayesian opt code the instructor uploaded to GitHub exactly as-is, the f1_score comes out very low ㅠ I was under the impression that a high roc_auc_score generally goes with a high f1_score, but when trying this and that on other datasets I've occasionally run into cases like this ㅠㅠ I can't figure out what the problem is, so I'm asking.
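Adding a self-contained illustration I put together while thinking about this, using synthetic data rather than the Santander set: with a heavily imbalanced positive class, predict() thresholds probabilities at 0.5 and can label almost everything as 0, so f1_score collapses even though roc_auc_score, which is threshold-free, stays high. Picking a threshold from the precision-recall curve usually recovers F1. The classifier and data below are my own stand-ins, not the course code.

```python
# Minimal sketch: high ROC-AUC but low F1 at the default 0.5 threshold
# on imbalanced data, and how threshold tuning recovers F1.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, f1_score, precision_recall_curve
from sklearn.model_selection import train_test_split

# ~4% positives, roughly like the Santander TARGET distribution.
X, y = make_classification(n_samples=20000, weights=[0.96, 0.04], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)[:, 1]

print('ROC AUC :', roc_auc_score(y_te, proba))
print('F1 @0.5 :', f1_score(y_te, proba >= 0.5))

# Choose the threshold that maximizes F1 along the precision-recall curve.
prec, rec, thr = precision_recall_curve(y_te, proba)
f1 = 2 * prec * rec / (prec + rec + 1e-12)
best = np.argmax(f1[:-1])   # the last PR point has no associated threshold
print('F1 @best threshold {:.3f}: {:.4f}'.format(thr[best], f1[best]))
```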
Q&A
Anchor box
I think I was confusing myself about something.... The usefulness of the anchor box seems to lie in defining the RPN's training target! Rather than something that operates inside the RPN network, the anchor box concept is introduced for training: it is used for labelling and for providing location information so that the RPN can be trained. Is that right?! (image) And in this RPN loss function, is it correct to understand t_i as the location information of the positive anchor boxes?
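For reference, the RPN loss from the Faster R-CNN paper, transcribed here since the attached image did not survive:

$$
L(\{p_i\},\{t_i\}) = \frac{1}{N_{cls}} \sum_i L_{cls}(p_i, p_i^*) + \lambda \frac{1}{N_{reg}} \sum_i p_i^{*} \, L_{reg}(t_i, t_i^{*})
$$

Here $p_i$ is the predicted objectness probability of anchor $i$, $p_i^* \in \{0, 1\}$ is the anchor's label (1 for positive anchors), $t_i$ is the *predicted* 4-d box offset parameterized relative to anchor $i$, and $t_i^*$ is the ground-truth offset. Because the regression term is multiplied by $p_i^*$, it is active only for positive anchors; so $t_i$ itself is the network's prediction, and the positive anchors' location information enters through $t_i^*$.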
Q&A
Anchor box
Also, where does the number 16 come from? Haha. In the end, isn't the total number of anchor boxes just the width x height of the feature map produced by VGG (or whatever backbone) times the number of anchor boxes per position?
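A small arithmetic sketch of my understanding (the 600x1000 input size is the example from the Faster R-CNN paper, not from the lecture): the 16 is the feature-map stride of VGG16, and yes, the total anchor count is feature-map width x height x anchors per position.

```python
# VGG16 applies four 2x2 max-pools before conv5_3, so each feature-map cell
# corresponds to a 2**4 = 16 pixel stride in the input image.
stride = 2 ** 4                                      # = 16
img_h, img_w = 600, 1000                             # paper's typical rescaled input
feat_h, feat_w = img_h // stride, img_w // stride    # 37 x 62
k = 9                                                # anchors per feature-map position

total_anchors = feat_h * feat_w * k
print(feat_h, feat_w, total_anchors)   # 37 62 20646, the paper's "roughly 20000"
```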
Q&A
Anchor box
Aha, so in a way, can I think of the anchor boxes as setting up initial values(?) for the boxes? Also, the classification branch of the RPN consists of 18 channels; do these hold, for each of the 9 anchor boxes, the probability that an object is present / not present?
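A minimal PyTorch sketch of the RPN head as I understand it (layer sizes assumed from the paper's VGG setup, not taken from the lecture code): the classification branch outputs 2k = 18 channels, i.e. an object / not-object score per anchor, and the regression branch outputs 4k = 36 channels, at every feature-map position.

```python
import torch
import torch.nn as nn

k = 9                                                  # anchors per position
conv = nn.Conv2d(512, 512, kernel_size=3, padding=1)   # shared 3x3 conv
cls_head = nn.Conv2d(512, 2 * k, kernel_size=1)        # 18 channels: obj / not-obj
reg_head = nn.Conv2d(512, 4 * k, kernel_size=1)        # 36 channels: 4 offsets per anchor

feat = torch.randn(1, 512, 37, 62)                     # dummy VGG conv5 feature map
x = torch.relu(conv(feat))
print(cls_head(x).shape)   # torch.Size([1, 18, 37, 62])
print(reg_head(x).shape)   # torch.Size([1, 36, 37, 62])
```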
Q&A
5:35 Question about the RoI pooling output
So the dimension of the vector finally passed to the FC layer is 2000*7*7*512, correct??
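A quick sanity check I put together with torchvision's roi_pool (not the lecture code; the feature-map size and the random proposal boxes are made up): 2000 proposals pooled from a 512-channel map each become 7 x 7 x 512, i.e. a (2000, 512, 7, 7) tensor that is flattened per RoI before the FC layers.

```python
import torch
from torchvision.ops import roi_pool

feat = torch.randn(1, 512, 37, 62)                 # dummy conv feature map
# 2000 proposals as (batch_idx, x1, y1, x2, y2) in input-image coordinates;
# sorting guarantees x1 <= x2 and y1 <= y2 for the random boxes.
coords = torch.rand(2000, 4).sort(dim=1).values * 590
boxes = torch.cat([torch.zeros(2000, 1), coords], dim=1)

out = roi_pool(feat, boxes, output_size=(7, 7), spatial_scale=1 / 16)
print(out.shape)   # torch.Size([2000, 512, 7, 7])
```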
Q&A
feature map
I agree that 2000 RoIs are generated per original feature map, but I don't understand how a new feature comes into being ㅠ
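In case it helps, here is my own tiny numpy sketch of what "new features" means here: each RoI crops a window out of the shared feature map and max-pools it onto a fixed grid (7x7 in the lecture; 2x2 below to keep it small), producing a new fixed-size tensor per RoI while the original map is never modified.

```python
import numpy as np

feat = np.random.rand(8, 8)          # one channel of the shared feature map
x1, y1, x2, y2 = 1, 2, 7, 8          # one RoI projected onto the map
window = feat[y1:y2, x1:x2]          # 6 x 6 crop, still a view of feat

bins = 2                             # pretend output grid (the lecture uses 7x7)
h, w = window.shape
pooled = np.array([[window[i*h//bins:(i+1)*h//bins,
                           j*w//bins:(j+1)*w//bins].max()
                    for j in range(bins)] for i in range(bins)])
print(pooled.shape)                  # (2, 2): a brand-new feature; feat is unchanged
```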
Q&A
SPM quadrants
I think the intent of my question came across slightly wrong! My point was that, conceptually, combining the level 2 data lets you recover the level 1 and level 0 data, so I was curious why all of them need to be concatenated into 63 vector values.
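To make the counting concrete, a small numpy sketch I wrote (the cell ordering and the 3-word codebook are my assumptions): the 3 pyramid levels give 1 + 4 + 16 = 21 cells, and a 3-bin histogram per cell yields 21 * 3 = 63 values. The coarse histograms are indeed sums of the level-2 cells, so they are redundant in that sense; SPM still concatenates all levels so the matching kernel can weight coarse and fine resolutions differently.

```python
import numpy as np

# Codeword ids (3-word vocabulary) for the descriptors in each level-2 cell;
# assumes the 16 cells are ordered so each group of four forms a level-1 cell.
words = np.random.randint(0, 3, size=(16, 50))

l2 = np.array([np.bincount(c, minlength=3) for c in words])   # (16, 3)
l1 = l2.reshape(4, 4, 3).sum(axis=1)                          # (4, 3): sums of children
l0 = l2.sum(axis=0, keepdims=True)                            # (1, 3): sum of everything

spm = np.concatenate([l0, l1, l2]).ravel()
print(spm.shape)   # (63,)
```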