Iterative Fuzzy C-means Clustering Algorithm & K-Nearest Neighbor and Dictionary Data Based Ensemble TSK Fuzzy Classifiers
doi: 10.11999/JEIT190214
1. School of Digital Media, Jiangnan University, Wuxi 214122, China
2. School of Information Engineering, Huzhou University, Huzhou 313000, China
3. Engineer Training Center, Guizhou Minzu University, Guiyang 550025, China
Abstract: A new ensemble TSK fuzzy classifier, IK-D-TSK, is proposed. First, all zero-order TSK fuzzy sub-classifiers are organized and trained in parallel; then the output of each sub-classifier is augmented to the original (validation) input space; finally, the proposed Iterative Fuzzy C-Means (IFCM) clustering algorithm is run on the augmented validation dataset to generate dictionary data, on which KNN quickly predicts the labels of test data. IK-D-TSK has the following advantages. First, because the output of each zero-order TSK sub-classifier is augmented to the original input space, the manifold structure hidden in the original (validation) input space is opened in parallel, and, by the principle of stacked generalization, the classification accuracy is guaranteed to improve. Second, compared with traditional TSK fuzzy classifiers that train their sub-classifiers sequentially, IK-D-TSK trains all sub-classifiers in parallel, so its running speed is effectively guaranteed. Third, since IK-D-TSK classifies on the dictionary data obtained by IFCM & KNN, it is strongly robust. Theoretical and experimental results show that IK-D-TSK achieves high classification performance, strong robustness, and high interpretability.
Key words: TSK fuzzy classifier; Iterative fuzzy c-means clustering algorithm; Dictionary data; Interpretability
Table 1 The IFCM algorithm

Input: dataset ${X} = \{{x}_1, {x}_2, \cdots, {x}_N\} \in R^{N \times D}$ and its labels ${Y} = \{y_1, y_2, \cdots, y_N\}$; the true number of classes $Q$; the number of cluster centers per class $c$; the number of samples in each class $N_1, N_2, \cdots, N_Q$; the maximum error threshold $\tau$.
Output: the center matrix ${V}$ and its labels.
(1) Initialize the centers of each class by FCM and form the center matrix ${V}$; initialize $q = 1$, where $1 \le q \le Q$;
(2) Repeat:
 (a) repeat:
  ① for $i \in \{1, 2, \cdots, N_q\}$, compute the membership $\mu^q({x}_i^q, {v}_j^q)$ by Eqs. (12) and (13); for $i \in \{N_q+1, N_q+2, \cdots, N_q+(Q-1) \cdot c\}$, compute the membership $\mu^q({v}_i^{\bar q}, {v}_j^q)$ by Eqs. (14) and (15);
  ② compute the center ${v}_j^q$ by Eq. (17);
 (b) until the center matrix remains almost unchanged or the maximum number of inner iterations is reached;
 (c) update the center matrix ${V}$ with ${v}_j^q$ and set $q = (q + 1) \bmod Q$;
(3) Until $\max_{j \in \{1,2,\cdots,Q \cdot c\}} \| {v}_j^q - {v}_j^{q-1} \| < \tau$ or the maximum number of outer iterations is reached;
(4) Output all centers and their labels according to the center matrix ${V}$.
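For concreteness, the control flow of Table 1 can be sketched in Python as below. This is a minimal sketch, assuming standard FCM membership and center updates with fuzzifier m wherever Table 1 refers to Eqs. (12)-(15) and (17), since those equations are not reproduced in this excerpt; the function names `ifcm`, `fcm_init`, and `_memberships` are illustrative and not from the paper.

```python
import numpy as np

def _memberships(X, V, m):
    """Standard FCM membership matrix (N x c); stand-in for Eqs. (12)-(15)."""
    d = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2) + 1e-12
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum(axis=1, keepdims=True)

def fcm_init(X, c, m=2.0, iters=50, seed=0):
    """Plain FCM, used only to initialize the c centers of one class (step (1))."""
    rng = np.random.default_rng(seed)
    V = X[rng.choice(len(X), c, replace=False)]
    for _ in range(iters):
        U = _memberships(X, V, m)
        V = (U.T ** m @ X) / (U.T ** m).sum(axis=1, keepdims=True)
    return V

def ifcm(X, y, c, m=2.0, tau=1e-4, outer=30, inner=20):
    """Sketch of Table 1: cycle over the classes; when class q is updated, its
    samples are clustered together with the current centers of the other classes."""
    classes = np.unique(y)
    V = {q: fcm_init(X[y == q], c, m) for q in classes}            # step (1)
    labels = np.concatenate([np.full(c, q) for q in classes])
    for _ in range(outer):                                         # steps (2)-(3)
        V_old = np.vstack([V[q] for q in classes])
        for q in classes:                                          # step (2c): q = (q+1) mod Q
            foreign = np.vstack([V[p] for p in classes if p != q])
            Xq = np.vstack([X[y == q], foreign])                   # class-q samples + foreign centers
            Vq = V[q]
            for _ in range(inner):                                 # steps (2a)-(2b)
                U = _memberships(Xq, Vq, m)
                Vq = (U.T ** m @ Xq) / (U.T ** m).sum(axis=1, keepdims=True)  # Eq. (17) stand-in
            V[q] = Vq
        V_new = np.vstack([V[q] for q in classes])
        if np.max(np.linalg.norm(V_new - V_old, axis=1)) < tau:    # step (3) stopping rule
            break
    return np.vstack([V[q] for q in classes]), labels              # step (4)
```

The structural point carried over from Table 1 is that each class's centers are re-estimated with the other classes' current centers appended to its own samples, which is what distinguishes IFCM from running plain FCM independently per class.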
Table 2 The IK-D-TSK learning algorithm

Input: training dataset ${D}_{\rm tr} = [{X}_{\rm tr}\ {Y}_{\rm tr}]$ and validation dataset ${D}_v = [{X}_v\ {Y}_v]$, where ${X}_{\rm tr}$ and ${X}_v$ denote the training and validation data with label sets ${Y}_{\rm tr}$ and ${Y}_v$; the number of sub-classifiers $L$; the numbers of fuzzy rules $K_1, K_2, \cdots, K_L$ of the sub-classifiers.
Output: the structure of IK-D-TSK and the dictionary data.
Training:
(1) Initialization: randomly sample training subsets ${D}_1, {D}_2, \cdots, {D}_L$ from ${D}_{\rm tr}$, one per sub-classifier, with ${D}_1 \cup {D}_2 \cup \cdots \cup {D}_L = {D}_{\rm tr}$;
(2) Train the $L$ zero-order TSK fuzzy sub-classifiers in parallel:
 (a) assign the number of fuzzy rules to each sub-classifier;
 (b) construct five Gaussian membership functions: on each dimension, randomly pick a value from the center set {0, 0.25, 0.50, 0.75, 1.00} to build the rule-combination matrix ${\varTheta}_l = [\upsilon_{ik}^l]_{K_l \times d}$, and build the kernel-width matrix ${\varPhi}_l = [\delta_{ik}^l]_{K_l \times d}$ by assigning each element a random positive number; compute the fuzzy memberships by Eq. (2), normalize them, and construct the matrix
$${X}_g = \left[ \begin{array}{cccc} \tilde\omega_1^1 & \tilde\omega_1^2 & \cdots & \tilde\omega_1^{K_l}\\ \tilde\omega_2^1 & \tilde\omega_2^2 & \cdots & \tilde\omega_2^{K_l}\\ \vdots & \vdots & \ddots & \vdots \\ \tilde\omega_{N_l}^1 & \tilde\omega_{N_l}^2 & \cdots & \tilde\omega_{N_l}^{K_l} \end{array} \right]_{N_l \times K_l} \qquad (18)$$
then compute the consequent parameters ${a}_g$ with the Least Learning Machine (LLM), i.e.,
$${a}_g = \left( (1/C){I} + {X}_g^{\rm T}{X}_g \right)^{-1}{X}_g^{\rm T}{y} \qquad (19)$$
where ${I}$ is a $K \times K$ identity matrix and $C$ is a given regularization parameter;
 (c) generate the output functions $F_1({x}), F_2({x}), \cdots, F_L({x})$ of the $L$ sub-classifiers by Eq. (3);
(3) Generate the augmented validation dataset: for each validation sample, evaluate $F_1({x}), F_2({x}), \cdots, F_L({x})$ and take the resulting values as augmented features; concatenate the original and augmented features to form the augmented validation dataset ${D}_v^{\rm new} = [{X}_v\ \bar{X}_v\ {Y}_v]$, where $\bar{X}_v$ denotes the augmented feature set of the validation data;
(4) Generate the dictionary data: run IFCM on ${D}_v^{\rm new}$ to obtain representative centers and their labels, then discard the augmented features; the result is the dictionary data.
Prediction:
(1) For any test sample, find its $k$ nearest points in the dictionary data with KNN and determine its class label by majority voting;
(2) Output the label of the test sample.
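Because Eq. (19) is a ridge-style closed-form solve, step (2) of Table 2 admits a compact implementation. The following is a minimal Python sketch under the stated assumptions: antecedent centers drawn from {0, 0.25, 0.50, 0.75, 1.00} per dimension, random positive kernel widths, and, since Eqs. (2) and (3) are not reproduced in this excerpt, a standard product-of-Gaussians firing strength with normalization and a linear zero-order output; `ZeroOrderTSK` and `knn_predict` are illustrative names, not code from the paper.

```python
import numpy as np

class ZeroOrderTSK:
    """One zero-order TSK fuzzy sub-classifier, as in step (2) of Table 2 (sketch)."""

    def __init__(self, n_rules, C=100.0, seed=0):
        self.K, self.C = n_rules, C
        self.rng = np.random.default_rng(seed)

    def _firing(self, X):
        # Gaussian membership per dimension, product over dimensions per rule,
        # then normalization across rules (stand-in for Eq. (2) + normalization).
        g = np.exp(-(X[:, None, :] - self.centers[None]) ** 2
                   / (2.0 * self.widths[None] ** 2))
        w = g.prod(axis=2)                                   # (N, K) firing strengths
        return w / (w.sum(axis=1, keepdims=True) + 1e-12)    # rows of X_g in Eq. (18)

    def fit(self, X, y):
        d = X.shape[1]
        grid = np.array([0.0, 0.25, 0.50, 0.75, 1.00])
        self.centers = self.rng.choice(grid, size=(self.K, d))      # Theta_l
        self.widths = self.rng.uniform(0.1, 1.0, size=(self.K, d))  # Phi_l (positive)
        Xg = self._firing(X)                                        # Eq. (18)
        # Eq. (19): a_g = ((1/C) I + Xg^T Xg)^(-1) Xg^T y
        self.a = np.linalg.solve(np.eye(self.K) / self.C + Xg.T @ Xg, Xg.T @ y)
        return self

    def decision(self, X):
        # Zero-order TSK output F_l(x); later used as an augmented feature.
        return self._firing(X) @ self.a


def knn_predict(dict_X, dict_y, x, k=3):
    """Prediction phase of Table 2: majority vote among the k nearest dictionary points."""
    idx = np.argsort(np.linalg.norm(dict_X - x, axis=1))[:k]
    labels, counts = np.unique(dict_y[idx], return_counts=True)
    return labels[np.argmax(counts)]
```

Training $L$ such sub-classifiers in parallel on the subsets ${D}_1, {D}_2, \cdots, {D}_L$, appending their decision values to the validation features, running the IFCM sketch on the augmented set, and then dropping the augmented columns would produce the dictionary data on which `knn_predict` operates in the prediction phase.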
Table 3 Datasets

Dataset          Classes  Features  Samples
SATimage (SAT)       6       36       6435
MUShroom (MUS)       2       22       8124
WAVeform3 (WAV)      3       21       5000
PENBased (PENB)     10       16      10992
WDBc (WDB)           2       14        569
ADUlt (ADU)          2       14      48841
Table 4 Parameter settings of IK-D-TSK (number of sub-classifiers, number of rules, dictionary data)

WDB: 3 sub-classifiers; rules 2~15; dictionary data 3~4
WAV: rules 10~120, 15~140, 18~160 for sub-classifiers 1, 2, 3; dictionary data 17~20
PENB: dictionary data 10~13
SAT: 5 sub-classifiers; rules 5~90, 8~120, 10~150, 13~170, 15~190 for sub-classifiers 1 to 5; dictionary data 10~13
ADU: dictionary data 40~45
MUS: dictionary data 20~23
Table 5 Comparison of the running times of the classifiers (s). For each dataset and classifier, the training time (standard deviation) and the testing time are reported at 5% / 10% noise.

WDB
  Zero-order-TSK-FC[1]:   train 0.0216 (0.0039) / 0.0224 (0.0057); test 0.0004 / 0.0005
  First-order-TSK-FC[14]: train 0.0237 (0.0034) / 0.0243 (0.0023); test 0.0004 / 0.0004
  IFCM-KNN-C:             train 0.0162 (0.0019) / 0.0141 (0.0018); test 0.0016 / 0.0016
  DBN[18]:                train 4.1844 (0.1861) / 4.1555 (0.1592); test 0.0086 / 0.0079
  BLS[19]:                train 0.0122 (0.0013) / 0.0122 (0.0011); test 0.0102 / 0.0104
  IK-D-TSK:               train 0.0209 (0.0032) / 0.0205 (0.0023); test 0.0021 / 0.0020

WAV
  Zero-order-TSK-FC[1]:   train 0.7982 (0.0256) / 0.7984 (0.0346); test 0.0050 / 0.0071
  First-order-TSK-FC[14]: train 3.8207 (0.0719) / 4.1065 (0.2303); test 0.0059 / 0.0112
  IFCM-KNN-C:             train 0.2863 (0.0222) / 0.2808 (0.0181); test 0.0128 / 0.0129
  DBN[18]:                train 35.4047 (0.2407) / 35.2445 (0.1511); test 0.0430 / 0.0391
  BLS[19]:                train 0.0256 (0.0028) / 0.0261 (0.0016); test 0.0155 / 0.0170
  IK-D-TSK:               train 0.3333 (0.0366) / 0.3130 (0.0409); test 0.0143 / 0.0142

PENB
  Zero-order-TSK-FC[1]:   train 0.9656 (0.0181) / 0.9794 (0.0320); test 0.0098 / 0.0097
  First-order-TSK-FC[14]: train 3.7465 (0.1615) / 3.9261 (0.1764); test 0.0196 / 0.0224
  IFCM-KNN-C:             train 0.5067 (0.0225) / 0.4809 (0.0151); test 0.0353 / 0.0311
  DBN[18]:                train 15.1945 (0.1656) / 15.2313 (0.1790); test 0.0086 / 0.0086
  BLS[19]:                train 0.0189 (0.0013) / 0.0191 (0.0012); test 0.0124 / 0.0125
  IK-D-TSK:               train 0.6105 (0.0372) / 0.5659 (0.0323); test 0.0352 / 0.0340

MUS
  Zero-order-TSK-FC[1]:   train 0.9496 (0.0230) / 0.9965 (0.0377); test 0.0123 / 0.0125
  First-order-TSK-FC[14]: train 7.6208 (0.2844) / 8.1693 (0.2367); test 0.0283 / 0.0361
  IFCM-KNN-C:             train 0.8053 (0.0629) / 0.8124 (0.0223); test 0.0253 / 0.0231
  DBN[18]:                train 47.1515 (0.2267) / 47.3102 (0.3248); test 0.0469 / 0.0602
  BLS[19]:                train 0.0323 (0.0038) / 0.0321 (0.0032); test 0.0189 / 0.0187
  IK-D-TSK:               train 0.9432 (0.0415) / 0.9513 (0.0323); test 0.0241 / 0.0244

SAT
  Zero-order-TSK-FC[1]:   train 1.2282 (0.0720) / 1.2301 (0.0738); test 0.0073 / 0.0062
  First-order-TSK-FC[14]: train 13.3579 (0.4825) / 14.2199 (0.6745); test 0.0167 / 0.0254
  IFCM-KNN-C:             train 0.3393 (0.0262) / 0.3221 (0.0134); test 0.0183 / 0.0184
  DBN[18]:                train 338.383 (1.2035) / 346.9789 (4.4332); test 0.2492 / 0.2039
  BLS[19]:                train 0.1491 (0.0052) / 0.1578 (0.0099); test 0.0644 / 0.0658
  IK-D-TSK:               train 0.4881 (0.0441) / 0.4528 (0.0383); test 0.0209 / 0.0209

ADU
  Zero-order-TSK-FC[1]:   train 5.9016 (0.1901) / 6.0366 (0.1239); test 0.0322 / 0.0370
  First-order-TSK-FC[14]: train 15.9947 (0.8757) / 17.3695 (0.8218); test 0.0768 / 0.1047
  IFCM-KNN-C:             train 3.1255 (0.0415) / 3.0311 (0.0215); test 0.1126 / 0.1127
  DBN[18]:                train 56.4922 (0.3625) / 64.3266 (0.6555); test 0.0305 / 0.0656
  BLS[19]:                train 0.0337 (0.0028) / 0.0389 (0.0051); test 0.0200 / 0.0230
  IK-D-TSK:               train 5.9502 (0.7296) / 5.5299 (0.5056); test 0.1549 / 0.1536
Table 6 Dictionary data generated by IK-D-TSK on the WDB dataset

$\upsilon_{1,1} = [0.3221, 0.6299, 0.3633, 0.3023, 0.5487, 0.5950, 0.5260, 0.3796, 0.4162, 0.4037, 0.5162, 0.2613, 0.7203, 0.4236, -1]$
$\upsilon_{1,2} = [0.3589, 0.5702, 0.3630, 0.2741, 0.5715, 0.5258, 0.5245, 0.4388, 0.4216, 0.3926, 0.4954, 0.2346, 0.5913, 0.3333, -1]$
$\upsilon_{1,3} = [0.2962, 0.5501, 0.4035, 0.2355, 0.5358, 0.5635, 0.5233, 0.4925, 0.3430, 0.3778, 0.5045, 0.4081, 0.7043, 0.5754, -1]$
$\upsilon_{2,1} = [0.3555, 0.5604, 0.3788, 0.2586, 0.5516, 0.5644, 0.5155, 0.4579, 0.4592, 0.3885, 0.5256, 0.3284, 0.5952, 0.1384, 1]$
$\upsilon_{2,2} = [0.3646, 0.3985, 0.2364, 0.2755, 0.4574, 0.5489, 0.4467, 0.4598, 0.3965, 0.4276, 0.4772, 0.4100, 0.4240, 0.2729, 1]$
$\upsilon_{2,3} = [0.3582, 0.6097, 0.2785, 0.3392, 0.3736, 0.6051, 0.5651, 0.4549, 0.4203, 0.3447, 0.4312, 0.4583, 0.5412, 0.1683, 1]$
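As a concrete illustration of the prediction phase, the six centers in Table 6 can serve directly as the KNN dictionary: the first 14 components are the normalized WDB features and the last component is the class label. The snippet below is an illustrative sketch of the majority-vote rule from Table 2 applied to these centers; `predict` is not a function from the paper.

```python
import numpy as np

# Centers from Table 6: 14 normalized WDB features, last entry is the class label.
dictionary = np.array([
    [0.3221, 0.6299, 0.3633, 0.3023, 0.5487, 0.5950, 0.5260, 0.3796, 0.4162, 0.4037, 0.5162, 0.2613, 0.7203, 0.4236, -1],
    [0.3589, 0.5702, 0.3630, 0.2741, 0.5715, 0.5258, 0.5245, 0.4388, 0.4216, 0.3926, 0.4954, 0.2346, 0.5913, 0.3333, -1],
    [0.2962, 0.5501, 0.4035, 0.2355, 0.5358, 0.5635, 0.5233, 0.4925, 0.3430, 0.3778, 0.5045, 0.4081, 0.7043, 0.5754, -1],
    [0.3555, 0.5604, 0.3788, 0.2586, 0.5516, 0.5644, 0.5155, 0.4579, 0.4592, 0.3885, 0.5256, 0.3284, 0.5952, 0.1384,  1],
    [0.3646, 0.3985, 0.2364, 0.2755, 0.4574, 0.5489, 0.4467, 0.4598, 0.3965, 0.4276, 0.4772, 0.4100, 0.4240, 0.2729,  1],
    [0.3582, 0.6097, 0.2785, 0.3392, 0.3736, 0.6051, 0.5651, 0.4549, 0.4203, 0.3447, 0.4312, 0.4583, 0.5412, 0.1683,  1],
])
X_dict, y_dict = dictionary[:, :-1], dictionary[:, -1]

def predict(x, k=3):
    # k nearest dictionary centers by Euclidean distance, then majority vote on labels.
    idx = np.argsort(np.linalg.norm(X_dict - x, axis=1))[:k]
    labels, counts = np.unique(y_dict[idx], return_counts=True)
    return labels[np.argmax(counts)]

print(predict(X_dict[0]))   # -1.0: the query's three nearest centers all carry label -1
```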
[1] TEH C Y, KERK Y W, TAY K M, et al. On modeling of data-driven monotone zero-order TSK fuzzy inference systems using a system identification framework[J]. IEEE Transactions on Fuzzy Systems, 2018, 26(6): 3860–3874. doi: 10.1109/TFUZZ.2018.2851258
[2] PEDRYCZ W and GOMIDE F. Fuzzy Systems Engineering: Toward Human-Centric Computing[M]. Hoboken, NJ: Wiley, 2007: 85–101.
[3] TAKAGI T and SUGENO M. Fuzzy identification of systems and its applications to modeling and control[J]. IEEE Transactions on Systems, Man, and Cybernetics, 1985, SMC-15(1): 116–132. doi: 10.1109/TSMC.1985.6313399
[4] TAO Dapeng, CHENG Jun, YU Zhengtao, et al. Domain-weighted majority voting for crowdsourcing[J]. IEEE Transactions on Neural Networks and Learning Systems, 2019, 30(1): 163–174. doi: 10.1109/TNNLS.2018.2836969
[5] HU Mengqiu, YANG Yang, SHEN Fumin, et al. Robust Web image annotation via exploring multi-facet and structural knowledge[J]. IEEE Transactions on Image Processing, 2017, 26(10): 4871–4884. doi: 10.1109/TIP.2017.2717185
[6] ZHANG Yuanpeng, ISHIBUCHI H, and WANG Shitong. Deep Takagi-Sugeno-Kang fuzzy classifier with shared linguistic fuzzy rules[J]. IEEE Transactions on Fuzzy Systems, 2018, 26(3): 1535–1549. doi: 10.1109/TFUZZ.2017.2729507
[7] CORDON O, HERRERA F, and ZWIR I. Linguistic modeling by hierarchical systems of linguistic rules[J]. IEEE Transactions on Fuzzy Systems, 2002, 10(1): 2–20. doi: 10.1109/91.983275
[8] NASCIMENTO D S C, BANDEIRA D R C, CANUTO A M P, et al. Investigating the impact of diversity in ensembles of multi-label classifiers[C]. 2018 International Joint Conference on Neural Networks, Rio de Janeiro, Brazil, 2018: 1–8. doi: 10.1109/IJCNN.2018.8489660
[9] BISHOP C M. Pattern Recognition and Machine Learning[M]. New York: Springer, 2006: 51–75.
[10] WANG Shitong and CHUNG K F L. On least learning machine[J]. Journal of Jiangnan University: Natural Science Edition, 2010, 9(5): 505–510. doi: 10.3969/j.issn.1671-7147.2010.05.001
[11] TUR G, DENG Li, HAKKANI-TÜR D, et al. Towards deeper understanding: Deep convex networks for semantic utterance classification[C]. 2012 IEEE International Conference on Acoustics, Speech and Signal Processing, Kyoto, Japan, 2012: 5045–5048. doi: 10.1109/ICASSP.2012.6289054
[12] WOLPERT D H. Stacked generalization[J]. Neural Networks, 1992, 5(2): 241–259. doi: 10.1016/s0893-6080(05)80023-1
[13] ZADEH L A. Fuzzy sets[J]. Information and Control, 1965, 8(3): 338–353. doi: 10.1016/S0019-9958(65)90241-X
[14] DENG Zhaohong, JIANG Yizhang, CHUNG F L, et al. Knowledge-leverage-based fuzzy system and its modeling[J]. IEEE Transactions on Fuzzy Systems, 2013, 21(4): 597–609. doi: 10.1109/TFUZZ.2012.2212444
[15] GU Xin, CHUNG F L, ISHIBUCHI H, et al. Multitask coupled logistic regression and its fast implementation for large multitask datasets[J]. IEEE Transactions on Cybernetics, 2015, 45(9): 1953–1966. doi: 10.1109/TCYB.2014.2362771
[16] BACHE K and LICHMAN M. UCI machine learning repository[EB/OL]. http://archive.ics.uci.edu/ml, 2015.
[17] ALCALÁ-FDEZ J, FERNÁNDEZ A, LUENGO J, et al. KEEL data-mining software tool: Data set repository, integration of algorithms and experimental analysis framework[J]. Journal of Multiple-Valued Logic & Soft Computing, 2011, 17(2/3): 255–287.
[18] HINTON G E, OSINDERO S, and TEH Y W. A fast learning algorithm for deep belief nets[J]. Neural Computation, 2006, 18(7): 1527–1554. doi: 10.1162/neco.2006.18.7.1527
[19] CHEN C L P and LIU Zhulin. Broad learning system: An effective and efficient incremental learning system without the need for deep architecture[J]. IEEE Transactions on Neural Networks and Learning Systems, 2018, 29(1): 10–24. doi: 10.1109/TNNLS.2017.2716952