非线性中智集的集结算法及其在多属性群决策中的应用研究
Research on the Aggregation Algorithm of Nonlinear Neutrosophic Sets and Its Application in Multi-Attribute Group Decision Making
摘要: 本文针对偏好信息由中智集(NS)表示的多属性群决策问题(MAGDM)进行研究,将静态决策环境下的中智集扩展为动态决策环境下的非线性中智集,并开发了相应的投影模型和集结算法。首先,本文给出了非线性中智集的定义及运算法则。然后,将非线性中智数投影为三维空间中的曲线,用曲线之间所围成曲面的面积大小来描述决策者偏好之间的差异,从而完成非线性中智集空间投影模型的建立。最后,开发基于模拟植物生长算法(PGSA)的空间曲线集结算法,通过寻找与所有偏好曲线围成曲面面积之和最小的最优集结曲线来完成非线性中智集的集结,并结合TOPSIS算法完成多属性群决策问题中的方案排序工作。文章的实验部分通过一个具体案例来说明本文所提出方法的有效性。
Abstract: This paper investigates multi-attribute group decision making (MAGDM) problems in which preference information is represented by neutrosophic sets (NS). It extends the neutrosophic set from static decision environments to the nonlinear neutrosophic set in dynamic decision environments, and develops a corresponding projection model and aggregation algorithm. First, we give the definition and operational rules of nonlinear neutrosophic sets. Then, nonlinear neutrosophic numbers are projected as curves in three-dimensional space, and the area of the surface enclosed between curves is used to describe the difference between decision makers' preferences, thereby establishing the spatial projection model of nonlinear neutrosophic sets. Finally, a space-curve aggregation algorithm based on the plant growth simulation algorithm (PGSA) is developed: the nonlinear neutrosophic sets are aggregated by searching for the optimal aggregation curve that minimizes the total area of the surfaces enclosed with all preference curves, and the TOPSIS method is then used to rank the alternatives in the MAGDM problem. The experimental section demonstrates the effectiveness of the proposed method through a specific case.
文章引用:邱骏达, 汤嘉立, 由从哲, 李鹏, 吴炳洋. 非线性中智集的集结算法及其在多属性群决策中的应用研究[J]. 人工智能与机器人研究, 2024, 13(3): 622-635. https://doi.org/10.12677/airr.2024.133064

1. 引言

1965年,Zadeh [1]提出模糊集(FS)理论,首次利用隶属函数来描述事物的模糊性,突破了精确数学理论“非此即彼”的固有思想,从数学上消除了计算机无法处理模糊信息的禁锢,开创了模糊数学的研究领域。近年来,国内外学者对模糊集理论进行扩展,相继提出了多种模糊数据集理论并将其应用到MAGDM问题中。其中,中智集[2]通过真隶属度T、不确定性隶属度I、假隶属度F对直觉模糊逻辑进行扩展,消除了隶属度和非隶属度的取值限制,极大地提高了对不确定模糊信息的描述能力,作为NS的子类,简化中智集(SNS) [3] [4]、单值中智集(SVNS) [5] [6]、区间值中智集(INS) [7] [8]和多值中智集(MNS) [9]也逐渐被引入。

然而,上述研究成果均用来解决静态决策环境下的MAGDM问题,随着决策环境的日益复杂,决策者偏好随着事态的发展会呈现出非线性的变化,传统的静态模糊数据集很难准确地描述真实的决策者偏好信息,最终影响偏好集结矩阵的构建质量。本文针对这个问题,定义了动态决策环境下的非线性中智集来描述决策者偏好信息,并开发相应的集结算法构造偏好集结矩阵,结合TOPSIS算法完成方案排序,从而解决复杂决策环境下的MAGDM问题。

2. 非线性中智集

定义1 假设X是一个对象空间,该空间中的任意一个元素为x,则X上的一个非线性中智集定义如下:

$$A=\left\{\left\langle x,\,T_A(x),\,I_A(x),\,F_A(x)\right\rangle \mid x\in X\right\}\quad(1)$$

其中,$T_A(x)=\left\{T_A(x_t)\right\}_{t=1}^{\ell}$、$I_A(x)=\left\{I_A(x_t)\right\}_{t=1}^{\ell}$ 和 $F_A(x)=\left\{F_A(x_t)\right\}_{t=1}^{\ell}$ 分别表示时间序列 $\ell$ 下x的非线性真值隶属度函数、非线性不确定性隶属度函数和非线性假值隶属度函数。$T_A(x_t)$、$I_A(x_t)$、$F_A(x_t)$ 分别取[0, 1]中的实数,它们的和取[0, 3]中的实数。
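为便于说明后文的空间投影与集结过程,这里给出非线性中智元素的一种简化数据表示示意(Python,类名与方法名均为示意性约定):

```python
import numpy as np

class NonlinearNeutrosophicElement:
    """一个非线性中智元素:x 在 t1~tL 上的 (T, I, F) 序列,各分量取值约定在 [0, 1] 内。"""
    def __init__(self, T, I, F):
        self.T = np.asarray(T, dtype=float)   # 非线性真值隶属度序列 {T(x_t)}
        self.I = np.asarray(I, dtype=float)   # 非线性不确定性隶属度序列 {I(x_t)}
        self.F = np.asarray(F, dtype=float)   # 非线性假值隶属度序列 {F(x_t)}

    def as_curve(self):
        """返回 shape (L, 3) 的数组,即该元素在三维空间 E3 中投影曲线的采样点。"""
        return np.stack([self.T, self.I, self.F], axis=1)
```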

定义2 假设存在一个非线性中智数 $\alpha=\left\{\left(a_t,b_t,c_t\right)\right\}_{t=1}^{\ell}$,则它的评分函数和准确度函数定义如下:

$$s(\alpha)=\sum_{t=1}^{\ell}\frac{1}{3}\left(a_t+1-b_t+1-c_t\right)\quad(2)$$

$$h(\alpha)=\sum_{t=1}^{\ell}\left(a_t-c_t\right)\quad(3)$$
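公式(2)、(3)的计算可用如下Python代码示意(函数名为示意性约定):

```python
import numpy as np

def score(a, b, c):
    """评分函数,公式(2):s(α) = Σ_t (a_t + 1 - b_t + 1 - c_t) / 3。a、b、c 为等长序列。"""
    a, b, c = map(np.asarray, (a, b, c))
    return float(np.sum((a + 1 - b + 1 - c) / 3.0))

def accuracy(a, c):
    """准确度函数,公式(3):h(α) = Σ_t (a_t - c_t)。"""
    a, c = map(np.asarray, (a, c))
    return float(np.sum(a - c))
```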

距离函数在模糊理论中用于度量一个元素与另一个元素之间的差异程度。通过选取不同的时间求和区间 [k, h],定义3中的距离函数可以分别给出非线性中智集的局部距离、全局距离和关键时间节点距离。

定义3 假设A和B是两个非线性中智集,则它们之间的距离函数定义如下:

$$d_q(A,B)=\sum_{i=1}^{n}\left[\frac{1}{3}\sum_{t=k}^{h}\left(\left|T_A^{t}(x_i)-T_B^{t}(x_i)\right|^{q}+\left|I_A^{t}(x_i)-I_B^{t}(x_i)\right|^{q}+\left|F_A^{t}(x_i)-F_B^{t}(x_i)\right|^{q}\right)\right]^{1/q}\quad(4)$$

当 q = 1、q = 2 和 $q\to\infty$ 时,$d_1(A,B)$、$d_2(A,B)$ 和 $d_{\infty}(A,B)$ 分别表示两者之间的Hamming距离、Euclidean距离和Chebyshev距离。

定义4 假设A和B是两个非线性中智集,则它们之间的Jaccard、Dice和余弦相似度定义如下:

$$S_{J}^{DNSNS}(A,B)=\frac{C(A,B)}{n\left[E(A)+E(B)-C(A,B)\right]}\quad(5)$$

$$S_{D}^{DNSNS}(A,B)=\frac{2C(A,B)}{n\left[E(A)+E(B)\right]}\quad(6)$$

$$S_{C}^{DNSNS}(A,B)=\frac{C(A,B)}{n\sqrt{E(A)E(B)}}\quad(7)$$

其中,两者之间的相关性 $C(A,B)$ 以及各自的信息熵 $E(A)$、$E(B)$ 的计算方法如下:

$$C(A,B)=\sum_{i=1}^{n}\sum_{t=1}^{\ell}\left[T_A^{t}(x_i)T_B^{t}(x_i)+I_A^{t}(x_i)I_B^{t}(x_i)+F_A^{t}(x_i)F_B^{t}(x_i)\right]\quad(8)$$

$$E(A)=\sum_{i=1}^{n}\sum_{t=1}^{\ell}\left[\left(T_A^{t}(x_i)\right)^{2}+\left(I_A^{t}(x_i)\right)^{2}+\left(F_A^{t}(x_i)\right)^{2}\right]\quad(9)$$

$$E(B)=\sum_{i=1}^{n}\sum_{t=1}^{\ell}\left[\left(T_B^{t}(x_i)\right)^{2}+\left(I_B^{t}(x_i)\right)^{2}+\left(F_B^{t}(x_i)\right)^{2}\right]\quad(10)$$
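公式(5)~(10)的相似度计算可示意如下(Python,函数名为示意性约定):

```python
import numpy as np

def similarity(A, B, kind="cosine"):
    """A, B: shape (n, L, 3)。按公式(5)~(10)计算 Jaccard / Dice / 余弦相似度。"""
    n = A.shape[0]
    C = float(np.sum(A * B))            # 公式(8):相关性 C(A, B)
    EA = float(np.sum(A ** 2))          # 公式(9):E(A)
    EB = float(np.sum(B ** 2))          # 公式(10):E(B)
    if kind == "jaccard":               # 公式(5)
        return C / (n * (EA + EB - C))
    if kind == "dice":                  # 公式(6)
        return 2 * C / (n * (EA + EB))
    return C / (n * np.sqrt(EA * EB))   # 公式(7):余弦相似度
```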

3. 非线性中智集空间集结模型

3.1. 空间最优集结曲线

定义5 假设 $r(\ell)$ 是三维问题空间 $E^3$ 中长度为 $\ell$ 的空间曲线,$r(\ell)$ 的Frenet标架记为 $\{T,N,B\}$,其中 $T=r'(\ell)$ 为切向量,$N=\left|T'\right|^{-1}T'$ 为法向量,$B=T\times N$ 为副法向量。$r(\ell)$ 的Frenet公式如下:

$$\begin{cases}\dfrac{\mathrm{d}T}{\mathrm{d}\ell}=\kappa N,\\[2pt]\dfrac{\mathrm{d}N}{\mathrm{d}\ell}=-\kappa T+\tau B,\\[2pt]\dfrac{\mathrm{d}B}{\mathrm{d}\ell}=-\tau N.\end{cases}\quad(11)$$

其中,$\kappa$ 和 $\tau$ 分别是曲线 $r(\ell)$ 的曲率函数和挠率函数。
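对于离散采样的空间曲线,Frenet标架可以用有限差分近似估计。下面给出一个NumPy示意(函数名与差分方式均为示意性约定,仅用于说明定义5中 T、N、B 的数值含义):

```python
import numpy as np

def frenet_frame(points):
    """估计离散空间曲线各采样点处的 Frenet 标架 (T, N, B)。
    points: shape (L, 3) 的曲线采样点,按弧长近似等距采样。"""
    d1 = np.gradient(points, axis=0)                          # 一阶差分,近似 r'
    T = d1 / np.linalg.norm(d1, axis=1, keepdims=True)        # 单位切向量
    dT = np.gradient(T, axis=0)                               # T 的变化率
    norm_dT = np.linalg.norm(dT, axis=1, keepdims=True)
    N = dT / np.where(norm_dT > 1e-12, norm_dT, 1.0)          # 单位法向量
    B = np.cross(T, N)                                        # 副法向量 B = T × N
    return T, N, B
```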

定义6 假设 $R=\left\{r(\ell)_{1}^{\xi_1},\cdots,r(\ell)_{m}^{\xi_m}\right\}$ 是三维问题空间 $E^3$ 中有界闭区域内一个包含m条空间曲线的空间曲线集,$\xi_i\in[0,1]$ 是空间曲线 $r(\ell)_{i}^{\xi_i}$ 的权重信息,且所有空间曲线的权重之和为1(即 $\sum_{i=1}^{m}\xi_i=1$)。如果存在空间曲线 $r(\ell)^{*}$,其与所有给定空间曲线 $r(\ell)_{i}^{\xi_i}$ 之间围成的曲面面积 $s\left(r(\ell)^{*},r(\ell)_{i}^{\xi_i}\right)$ 满足如下公式:

$$s_{\xi}=\min\sum_{i=1}^{m}\xi_i\,s\!\left(r(\ell)^{*},\,r(\ell)_{i}^{\xi_i}\right)\quad(12)$$

则称 $r(\ell)^{*}$ 为R中的空间最优集结曲线(如图1所示)。

Figure 1. Spatially optimal aggregation curve in three-dimensional space

图1. 三维空间中的空间最优集结曲线

3.2. 非线性中智集空间集结模型

Figure 2. Projection diagram of nonlinear neutrosophic set in three-dimensional space

图2. 非线性中智集三维空间投影示意图

首先,将中智集投影为三维空间 $E^3$ 中的曲线。假设 $A^{1},\cdots,A^{M}$ 是M个包含相同时间序列长度的动态中智集,共同构成了一个如下所示的动态中智集族:

$$\begin{aligned}A^{1}&=\left\{A_{1}^{1}=\left\langle T_{A^{1}}(x_1),I_{A^{1}}(x_1),F_{A^{1}}(x_1)\right\rangle,\cdots,A_{n}^{1}=\left\langle T_{A^{1}}(x_n),I_{A^{1}}(x_n),F_{A^{1}}(x_n)\right\rangle\right\},\\&\;\;\vdots\\A^{M}&=\left\{A_{1}^{M}=\left\langle T_{A^{M}}(x_1),I_{A^{M}}(x_1),F_{A^{M}}(x_1)\right\rangle,\cdots,A_{n}^{M}=\left\langle T_{A^{M}}(x_n),I_{A^{M}}(x_n),F_{A^{M}}(x_n)\right\rangle\right\}\end{aligned}\quad(13)$$

以 $T_A(x)$、$I_A(x)$ 和 $F_A(x)$ 分别作为x轴、y轴和z轴来构建三维空间 $E^3$。显然,动态中智集族会被投影为 $E^3$ 中的n个曲线集合:不同动态中智集中的空间曲线 $A_h^{k}$($1\le k\le M$ 且 $1\le h\le n$)被划分到同一个曲线集合 $C_h$ 中。图2为10个包含3个备选方案、时序长度为10的非线性中智集族的三维空间投影示意图(n = 3,M = 10,$\ell=10$):

图3为非线性中智集偏好信息差异示意图,空间曲线之间所围成的曲面面积大小描述了 $A_h^{k}$ 和 $A_h^{j}$($1\le k\le M$,$1\le j\le M$)之间偏好信息的差异程度:面积越大,偏好信息差异越大;反之亦然。

Figure 3. Schematic diagram of preference information difference in nonlinear neutrosophic sets

图3. 非线性中智集偏好信息差异示意图

Figure 4. Single nonlinear neutrosophic preference curve

图4. 单条非线性中智综合偏好曲线

定义7 假设 $A_h^{*}$ 是一条时序长度为 $\ell$ 的动态中智曲线,若 $A_h^{*}$ 与 $A_h^{k}$($1\le k\le M$,$1\le h\le n$)之间围成的曲面面积之和满足如下公式:

$$s\left(A_h^{*}\right)=\min\sum_{k=1}^{M}s\!\left(A_h^{*},A_h^{k}\right)\quad(14)$$

则将 $A_h^{*}=\left\langle T_{A^{*}}(x_h),I_{A^{*}}(x_h),F_{A^{*}}(x_h)\right\rangle$ 定义为动态中智集曲线集 $A_h$ 的非线性中智综合偏好曲线(如图4所示,其中M = 15,$\ell=20$)。

进而,非线性中智集空间集结模型可以生成如图5所示的非线性中智集族 $A^{k}$ 的最优集结集合 $A^{*}=\left\{A_1^{*},\cdots,A_n^{*}\right\}$(n = 3,M = 10,$\ell=10$)。

Figure 5. Schematic diagram of optimal aggregation set in nonlinear neutrosophic set

图5. 非线性中智集最优集结集合示意图

4. 非线性中智集集结算法

4.1. 空间区域面积求解算法

Figure 6. Differential idea of space curve

图6. 空间曲线微分思想

非线性空间曲线所围成的曲面面积难以直接精确计算,相关算法的时间复杂度会随时间序列长度呈指数增长。本文提出的集结算法采用微分的思想来解决这个问题。

假设A和B是两个时间序列长度为 $\ell$ 的非线性中智集,令 $t_{h+1}-t_h=\Delta t$($1\le h\le \ell-1$,$\Delta t$ 是一个无限小的时间间隔),则 $A^{t_h}$ 与 $A^{t_{h+1}}$ 之间可以近似为线性关系(集合B亦然)。图6所示为 $A^{t_h}$、$A^{t_{h+1}}$、$B^{t_h}$、$B^{t_{h+1}}$ 四点所围成的一个凸四边形:

显然,$s_h$ 可以由公式(15)~(21)进行计算:

$$s_h=\frac{1}{2}\left(ab\sqrt{1-\cos^{2}\alpha}+cd\sqrt{1-\cos^{2}\beta}\right)\quad(15)$$

$$a=\left|A^{t_h}B^{t_h}\right|=\left[\left(T_A^{t_h}(x)-T_B^{t_h}(x)\right)^{2}+\left(I_A^{t_h}(x)-I_B^{t_h}(x)\right)^{2}+\left(F_A^{t_h}(x)-F_B^{t_h}(x)\right)^{2}\right]^{1/2}\quad(16)$$

$$b=\left|A^{t_h}A^{t_{h+1}}\right|=\left[\left(T_A^{t_h}(x)-T_A^{t_{h+1}}(x)\right)^{2}+\left(I_A^{t_h}(x)-I_A^{t_{h+1}}(x)\right)^{2}+\left(F_A^{t_h}(x)-F_A^{t_{h+1}}(x)\right)^{2}\right]^{1/2}\quad(17)$$

$$c=\left|A^{t_{h+1}}B^{t_{h+1}}\right|=\left[\left(T_A^{t_{h+1}}(x)-T_B^{t_{h+1}}(x)\right)^{2}+\left(I_A^{t_{h+1}}(x)-I_B^{t_{h+1}}(x)\right)^{2}+\left(F_A^{t_{h+1}}(x)-F_B^{t_{h+1}}(x)\right)^{2}\right]^{1/2}\quad(18)$$

$$d=\left|B^{t_h}B^{t_{h+1}}\right|=\left[\left(T_B^{t_h}(x)-T_B^{t_{h+1}}(x)\right)^{2}+\left(I_B^{t_h}(x)-I_B^{t_{h+1}}(x)\right)^{2}+\left(F_B^{t_h}(x)-F_B^{t_{h+1}}(x)\right)^{2}\right]^{1/2}\quad(19)$$

$$\cos\alpha=\cos\left\langle\overrightarrow{A^{t_h}B^{t_h}},\overrightarrow{A^{t_h}A^{t_{h+1}}}\right\rangle=\frac{\overrightarrow{A^{t_h}B^{t_h}}\cdot\overrightarrow{A^{t_h}A^{t_{h+1}}}}{\left|\overrightarrow{A^{t_h}B^{t_h}}\right|\left|\overrightarrow{A^{t_h}A^{t_{h+1}}}\right|}\quad(20)$$

$$\cos\beta=\cos\left\langle\overrightarrow{B^{t_{h+1}}A^{t_{h+1}}},\overrightarrow{B^{t_{h+1}}B^{t_h}}\right\rangle=\frac{\overrightarrow{B^{t_{h+1}}A^{t_{h+1}}}\cdot\overrightarrow{B^{t_{h+1}}B^{t_h}}}{\left|\overrightarrow{B^{t_{h+1}}A^{t_{h+1}}}\right|\left|\overrightarrow{B^{t_{h+1}}B^{t_h}}\right|}\quad(21)$$

其中,$\overrightarrow{A^{t_h}B^{t_h}}=\left(T_B^{t_h}(x)-T_A^{t_h}(x),\,I_B^{t_h}(x)-I_A^{t_h}(x),\,F_B^{t_h}(x)-F_A^{t_h}(x)\right)$,其余向量的分量形式类似。

然后,公式(22)可以计算出A和B之间围成的面积:

$$s(A,B)=\sum_{h=1}^{\ell-1}s_h\quad(22)$$
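公式(15)~(22)的面积计算可用如下Python代码示意(函数名为示意性约定)。代码把每个凸四边形拆成两个三角形,用叉积范数 ½‖u×v‖ 等价地代替公式(15)中的 ½ab√(1−cos²α):

```python
import numpy as np

def enclosed_area(curve_a, curve_b):
    """按微分思想估算两条偏好曲线之间围成的曲面面积,对应公式(15)~(22)。
    curve_a, curve_b: shape (L, 3) 的数组,每行为某时间点的 (T, I, F)。"""
    total = 0.0
    for h in range(len(curve_a) - 1):
        A0, A1 = curve_a[h], curve_a[h + 1]
        B0, B1 = curve_b[h], curve_b[h + 1]
        # 凸四边形 A0-A1-B1-B0 拆为两个三角形,三角形面积 = 1/2 |叉积|
        tri1 = 0.5 * np.linalg.norm(np.cross(B0 - A0, A1 - A0))
        tri2 = 0.5 * np.linalg.norm(np.cross(A1 - B1, B0 - B1))
        total += tri1 + tri2
    return total
```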

4.2. 非线性中智偏好信息最优集结算法

本文使用改进后的模拟植物生长算法(PGSA)进行偏好信息的集结。假设有M个决策者使用非线性中智集 $d_{ij}^{k}$($1\le k\le M$,$1\le i\le m$,$1\le j\le n$)来对具有n个决策属性的m个备选方案进行评价。决策者权重向量为 $\left(\xi_1,\cdots,\xi_M\right)^{\mathrm{T}}$,备选方案决策属性权重向量为 $\left(\omega_1,\cdots,\omega_n\right)^{\mathrm{T}}$,其中 $\xi_k\in[0,1]$、$\sum_{k=1}^{M}\xi_k=1$,$\omega_j\in[0,1]$、$\sum_{j=1}^{n}\omega_j=1$。决策者偏好矩阵如下所示:

$$\left[d_{ij}^{k}\right]_{m\times n}=\begin{bmatrix} A_{1,1}^{k} & \cdots & A_{1,n}^{k}\\ \vdots & \ddots & \vdots\\ A_{m,1}^{k} & \cdots & A_{m,n}^{k} \end{bmatrix}=\begin{bmatrix} \left\{A_{1,1}^{t_1},\cdots,A_{1,1}^{t_\ell}\right\}^{k} & \cdots & \left\{A_{1,n}^{t_1},\cdots,A_{1,n}^{t_\ell}\right\}^{k}\\ \vdots & \ddots & \vdots\\ \left\{A_{m,1}^{t_1},\cdots,A_{m,1}^{t_\ell}\right\}^{k} & \cdots & \left\{A_{m,n}^{t_1},\cdots,A_{m,n}^{t_\ell}\right\}^{k} \end{bmatrix}\quad(23)$$

其中,矩阵中的每个时点元素均为一个中智数,即 $A_{i,j}^{t_h}=\left\langle T_{d_{ij}}^{t_h},I_{d_{ij}}^{t_h},F_{d_{ij}}^{t_h}\right\rangle^{k}$。

假设在点集 $A_{i,j}^{t_h}$ 中存在M个已知的非线性中智偏好信息点 $A_{i,j}^{t_h,k}\in\Omega$,其中 $\Omega$ 是 $E^3$ 中边长为L的有界闭箱。改进后的PGSA算法求解最优集结点 $A_{i,j}^{t_h*}$ 的主要步骤如下:

Step 1:随机选取一个初始生长点 $\gamma^{0}\in\Omega$,算法步长 $\lambda=L/200$。令 $\Gamma_{\min}=\gamma^{0}$、$F_{\min}=f\left(\gamma^{0}\right)$,其中 $f\left(\gamma^{0}\right)=\sum_{k=1}^{M}\xi_k\left|\gamma^{0}-A_{i,j}^{t_h,k}\right|$ 为 $\gamma^{0}$ 的生长素浓度(即 $\gamma^{0}$ 到 $A_{i,j}^{t_h}$ 中所有偏好信息点的加权Euclidean距离之和)。

Step 2:选取g个备用生长点 $\gamma_z^{0}\in\Omega$,其生长素浓度计算公式如下:

$$C_{\gamma_z^{0}}=\frac{f\left(\gamma_z^{0}\right)}{\sum_{z=1}^{g}f\left(\gamma_z^{0}\right)}\quad(24)$$

为了避免算法陷入局部最优,构建一个轮盘赌:备用生长点在轮盘赌上所占面积比例由其生长素浓度决定,随机选取新的生长点 $\gamma^{0*}$(生长素浓度高的点更容易被选中,生长素浓度低的点也有可能被选中)。令 $\gamma^{0}=\gamma^{0*}$、$\Gamma_{\min}=\gamma^{0}$、$F_{\min}=f\left(\gamma^{0}\right)$。
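Step 2 中基于生长素浓度的轮盘赌选择可示意如下(Python,函数名为示意性约定):

```python
import numpy as np

def roulette_select(concentrations, rng=None):
    """按生长素浓度构建轮盘赌,随机返回被选中生长点的索引。
    concentrations: 各备用生长点的浓度(非负,可未归一化)。"""
    rng = np.random.default_rng() if rng is None else rng
    c = np.asarray(concentrations, dtype=float)
    if c.sum() <= 0:                              # 所有浓度为 0 时退化为均匀选取
        probs = np.full(len(c), 1.0 / len(c))
    else:
        probs = c / c.sum()                       # 浓度占比即为被选中的概率
    return int(rng.choice(len(c), p=probs))
```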

Step 3:以 $\gamma^{0}$ 作为旋转中心建立L-系统($\theta=22.5^{\circ}$),生长出一根长度为 $\lambda$ 的树枝,这就是植物的第一层。在L-系统中继续选取g个备用生长点 $\gamma_z^{1}$。

Step 4:更新所有备用生长点的生长素浓度。如果 $f\left(\gamma^{0}\right)\le f\left(\gamma_z^{1}\right)$,那么令 $C_{\gamma_z^{1}}=0$;否则,$\gamma_z^{1}$ 的新生长素浓度由如下公式计算:

$$C_{\gamma_z^{1}}=\frac{f\left(\gamma^{0}\right)-f\left(\gamma_z^{1}\right)}{\sum_{z=1}^{g}\left(f\left(\gamma^{0}\right)-f\left(\gamma_z^{1}\right)\right)}\quad(25)$$

Step 5:建立新的轮盘赌选取新的生长点 $\gamma^{1*}$,令 $\gamma^{1}=\gamma^{1*}$、$\Gamma_{\min}=\gamma^{1}$、$F_{\min}=f\left(\gamma^{1}\right)$。

Step 6:以 $\gamma^{1}$ 作为旋转中心建立L-系统($\theta=22.5^{\circ}$),生长出一根长度为 $\lambda$ 的树枝,这就是植物的第二层。在L-系统中继续选取g个备用生长点 $\gamma_z^{2}$。

Step 7:为了避免算法陷入局部最优,将第一、二两层中所有备选生长点的生长素浓度更新如下:

1) 如果 $f\left(\gamma^{1}\right)\le f\left(\gamma_z^{1}\right)$,则 $C_{\gamma_z^{1}}=0$;否则 $C_{\gamma_z^{1}}$ 由如下公式计算:

$$C_{\gamma_z^{1}}=\frac{f\left(\gamma^{1}\right)-f\left(\gamma_z^{1}\right)}{\sum_{z=1}^{g}\left(f\left(\gamma^{1}\right)-f\left(\gamma_z^{1}\right)\right)+\sum_{z=1}^{g}\left(f\left(\gamma^{1}\right)-f\left(\gamma_z^{2}\right)\right)}\quad(26)$$

2) 如果 $f\left(\gamma^{1}\right)\le f\left(\gamma_z^{2}\right)$,则 $C_{\gamma_z^{2}}=0$;否则 $C_{\gamma_z^{2}}$ 由如下公式计算:

$$C_{\gamma_z^{2}}=\frac{f\left(\gamma^{1}\right)-f\left(\gamma_z^{2}\right)}{\sum_{z=1}^{g}\left(f\left(\gamma^{1}\right)-f\left(\gamma_z^{1}\right)\right)+\sum_{z=1}^{g}\left(f\left(\gamma^{1}\right)-f\left(\gamma_z^{2}\right)\right)}\quad(27)$$

Step 8:建立新的轮盘赌选取新的生长点 $\gamma^{2*}$,令 $\gamma^{2}=\gamma^{2*}$、$\Gamma_{\min}=\gamma^{2}$、$F_{\min}=f\left(\gamma^{2}\right)$。

Step 9:重复Step 6~8,当迭代次数超过 $\phi$(本文中 $\phi=2000$)或者最近100次迭代中 $F_{\min}$ 未产生更新时,算法结束。$A_{i,j}^{t_h*}=\Gamma_{\min}$ 即为 $A_{i,j}^{t_h}$ 的最优集结点。
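下面给出上述PGSA求解最优集结点过程的一个简化Python示意。代码省略了L-系统的具体分枝几何与逐层浓度记录,仅保留"随机生成候选生长点、按改进量计算浓度、轮盘赌选择、更新最优点"的主循环;函数名、候选点生成方式以及 Ω 取单位立方体等均为示意性假设:

```python
import numpy as np

def aggregate_point(pref_points, weights, L=1.0, g=12, phi=2000, patience=100, seed=0):
    """搜索某 (i, j, t_h) 处的最优集结点(简化版 PGSA)。
    pref_points: shape (M, 3),M 个决策者给出的偏好点 (T, I, F);
    weights:     shape (M,),决策者权重 ξ_k;L: 闭箱 Ω 的边长(此处假设 Ω = [0, 1]^3)。"""
    rng = np.random.default_rng(seed)
    pref_points = np.asarray(pref_points, dtype=float)
    weights = np.asarray(weights, dtype=float)
    lam = L / 200.0                                    # 步长 λ = L / 200

    def f(p):                                          # 加权 Euclidean 距离之和(生长素浓度的基函数)
        return float(np.sum(weights * np.linalg.norm(pref_points - p, axis=1)))

    gamma = rng.uniform(0.0, 1.0, size=3)              # 初始生长点 γ0 ∈ Ω
    best, f_best, stall = gamma.copy(), f(gamma), 0
    for _ in range(phi):
        # 在当前生长点附近按步长 λ 随机生成 g 个备用生长点(代替 L-系统分枝)
        cands = np.clip(gamma + lam * rng.normal(size=(g, 3)), 0.0, 1.0)
        fc = np.array([f(c) for c in cands])
        conc = np.maximum(f(gamma) - fc, 0.0)          # 对应公式(25)~(27):无改进则浓度为 0
        probs = np.full(g, 1.0 / g) if conc.sum() <= 0 else conc / conc.sum()
        gamma = cands[rng.choice(g, p=probs)]          # 轮盘赌选取新的生长点
        if f(gamma) < f_best:
            best, f_best, stall = gamma.copy(), f(gamma), 0
        else:
            stall += 1
            if stall >= patience:                      # 近 patience 次 F_min 未更新,提前结束
                break
    return best, f_best
```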

所有决策者针对第i个备选方案的第j个决策属性的综合非线性中智偏好信息可以描述如下:

$$A_{i,j}^{*}=\left\{A_{i,j}^{t_1*},\cdots,A_{i,j}^{t_\ell*}\right\}=\left\{\left\langle T_{d_{ij}}^{t_1*},I_{d_{ij}}^{t_1*},F_{d_{ij}}^{t_1*}\right\rangle,\cdots,\left\langle T_{d_{ij}}^{t_\ell*},I_{d_{ij}}^{t_\ell*},F_{d_{ij}}^{t_\ell*}\right\rangle\right\}\quad(28)$$

整个多属性群决策问题中的决策者综合偏好矩阵可以描述如下:

$$\left[d_{ij}^{*}\right]_{m\times n}=\begin{bmatrix} A_{1}^{*}\\ \vdots\\ A_{m}^{*} \end{bmatrix}=\begin{bmatrix} A_{1,1}^{*} & \cdots & A_{1,n}^{*}\\ \vdots & \ddots & \vdots\\ A_{m,1}^{*} & \cdots & A_{m,n}^{*} \end{bmatrix}=\begin{bmatrix} \left\{A_{1,1}^{t_1*},\cdots,A_{1,1}^{t_\ell*}\right\} & \cdots & \left\{A_{1,n}^{t_1*},\cdots,A_{1,n}^{t_\ell*}\right\}\\ \vdots & \ddots & \vdots\\ \left\{A_{m,1}^{t_1*},\cdots,A_{m,1}^{t_\ell*}\right\} & \cdots & \left\{A_{m,n}^{t_1*},\cdots,A_{m,n}^{t_\ell*}\right\} \end{bmatrix}\quad(29)$$

5. 备选方案排序方法

本文将TOPSIS算法进行拓展,使之可以生成非线性中智综合偏好矩阵的正/负理想解,并结合投影理论计算备选方案的综合得分,从而完成备选方案排序。

定义8 假设 $\left[d_{ij}^{*}\right]_{m\times n}$ 是MAGDM问题中的决策者综合偏好矩阵,则针对第j个决策属性的正理想向量 $A_j^{*+}=\left\{A_j^{t_1*+},\cdots,A_j^{t_\ell*+}\right\}$ 和负理想向量 $A_j^{*-}=\left\{A_j^{t_1*-},\cdots,A_j^{t_\ell*-}\right\}$ 可以用如下公式计算:

$$A_j^{t_h*+}=\left\langle\left(T_{d_{ij}}^{t_h*}\right)_{\max}^{k},\,\left(I_{d_{ij}}^{t_h*}\right)_{\min}^{k},\,\left(F_{d_{ij}}^{t_h*}\right)_{\min}^{k}\right\rangle\quad(30)$$

$$A_j^{t_h*-}=\left\langle\left(T_{d_{ij}}^{t_h*}\right)_{\min}^{k},\,\left(I_{d_{ij}}^{t_h*}\right)_{\max}^{k},\,\left(F_{d_{ij}}^{t_h*}\right)_{\max}^{k}\right\rangle\quad(31)$$

显然,决策者综合偏好矩阵的正理想解 $A^{*+}$ 和负理想解 $A^{*-}$ 可以用如下公式计算:

$$A^{*+}=\left\{A_1^{*+},\cdots,A_n^{*+}\right\}=\left\{\left\langle A_1^{t_1*+},\cdots,A_1^{t_\ell*+}\right\rangle,\cdots,\left\langle A_n^{t_1*+},\cdots,A_n^{t_\ell*+}\right\rangle\right\}\quad(32)$$

$$A^{*-}=\left\{A_1^{*-},\cdots,A_n^{*-}\right\}=\left\{\left\langle A_1^{t_1*-},\cdots,A_1^{t_\ell*-}\right\rangle,\cdots,\left\langle A_n^{t_1*-},\cdots,A_n^{t_\ell*-}\right\rangle\right\}\quad(33)$$
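公式(30)~(33)中正/负理想解的生成可示意如下(Python,这里按"在备选方案维上取最大/最小"的常规理解实现,属示意性假设):

```python
import numpy as np

def ideal_solutions(agg):
    """由综合偏好张量生成正/负理想解。
    agg: shape (m, n, L, 3),即 m 个方案 × n 个属性 × L 个时间点的 (T, I, F)。
    返回 A_plus, A_minus: shape (n, L, 3)。"""
    T, I, F = agg[..., 0], agg[..., 1], agg[..., 2]
    # 正理想解:T 取最大、I 和 F 取最小;负理想解反之
    A_plus = np.stack([T.max(axis=0), I.min(axis=0), F.min(axis=0)], axis=-1)
    A_minus = np.stack([T.min(axis=0), I.max(axis=0), F.max(axis=0)], axis=-1)
    return A_plus, A_minus
```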

定义9 假设 $\left[d_{ij}^{*}\right]_{m\times n}$ 是MAGDM问题中的决策者综合偏好矩阵,$A^{*+}$ 和 $A^{*-}$ 是决策者综合偏好矩阵的正/负理想解,$A_i^{*}$ 是决策者综合偏好矩阵中第i个备选方案的综合偏好向量,则第i个备选方案的综合得分可以由如下公式计算:

$$S(A_i)=\mathrm{Prj}^{+}(A_i)-\mathrm{Prj}^{-}(A_i)\quad(34)$$

$$\mathrm{Prj}^{+}(A_i)=\mathrm{Prj}_{T_{A^{*+}}}\!\left(T_{A_i^{*}}\right)+\mathrm{Prj}_{I_{A^{*+}}}\!\left(I_{A_i^{*}}\right)+\mathrm{Prj}_{F_{A^{*+}}}\!\left(F_{A_i^{*}}\right)\quad(35)$$

$$\mathrm{Prj}^{-}(A_i)=\mathrm{Prj}_{T_{A^{*-}}}\!\left(T_{A_i^{*}}\right)+\mathrm{Prj}_{I_{A^{*-}}}\!\left(I_{A_i^{*}}\right)+\mathrm{Prj}_{F_{A^{*-}}}\!\left(F_{A_i^{*}}\right)\quad(36)$$

$$\mathrm{Prj}_{T_{A^{*+}}}\!\left(T_{A_i^{*}}\right)=\sum_{j=1}^{n}\frac{\sum_{h=1}^{\ell}\left(T_{d_{ij}}^{t_h*}\left(T_{d_{ij}}^{t_h*}\right)_{\max}^{k}\omega_j\right)}{\sqrt{\left(\sum_{h=1}^{\ell}\left(T_{d_{ij}}^{t_h*}\right)_{\max}^{k}\right)^{2}\omega_j}}\quad(37)$$

$$\mathrm{Prj}_{I_{A^{*+}}}\!\left(I_{A_i^{*}}\right)=\sum_{j=1}^{n}\frac{\sum_{h=1}^{\ell}\left(I_{d_{ij}}^{t_h*}\left(I_{d_{ij}}^{t_h*}\right)_{\min}^{k}\omega_j\right)}{\sqrt{\left(\sum_{h=1}^{\ell}\left(I_{d_{ij}}^{t_h*}\right)_{\min}^{k}\right)^{2}\omega_j}}\quad(38)$$

$$\mathrm{Prj}_{F_{A^{*+}}}\!\left(F_{A_i^{*}}\right)=\sum_{j=1}^{n}\frac{\sum_{h=1}^{\ell}\left(F_{d_{ij}}^{t_h*}\left(F_{d_{ij}}^{t_h*}\right)_{\min}^{k}\omega_j\right)}{\sqrt{\left(\sum_{h=1}^{\ell}\left(F_{d_{ij}}^{t_h*}\right)_{\min}^{k}\right)^{2}\omega_j}}\quad(39)$$

$$\mathrm{Prj}_{T_{A^{*-}}}\!\left(T_{A_i^{*}}\right)=\sum_{j=1}^{n}\frac{\sum_{h=1}^{\ell}\left(T_{d_{ij}}^{t_h*}\left(T_{d_{ij}}^{t_h*}\right)_{\min}^{k}\omega_j\right)}{\sqrt{\left(\sum_{h=1}^{\ell}\left(T_{d_{ij}}^{t_h*}\right)_{\min}^{k}\right)^{2}\omega_j}}\quad(40)$$

$$\mathrm{Prj}_{I_{A^{*-}}}\!\left(I_{A_i^{*}}\right)=\sum_{j=1}^{n}\frac{\sum_{h=1}^{\ell}\left(I_{d_{ij}}^{t_h*}\left(I_{d_{ij}}^{t_h*}\right)_{\max}^{k}\omega_j\right)}{\sqrt{\left(\sum_{h=1}^{\ell}\left(I_{d_{ij}}^{t_h*}\right)_{\max}^{k}\right)^{2}\omega_j}}\quad(41)$$

$$\mathrm{Prj}_{F_{A^{*-}}}\!\left(F_{A_i^{*}}\right)=\sum_{j=1}^{n}\frac{\sum_{h=1}^{\ell}\left(F_{d_{ij}}^{t_h*}\left(F_{d_{ij}}^{t_h*}\right)_{\max}^{k}\omega_j\right)}{\sqrt{\left(\sum_{h=1}^{\ell}\left(F_{d_{ij}}^{t_h*}\right)_{\max}^{k}\right)^{2}\omega_j}}\quad(42)$$
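基于投影的综合得分计算可用如下Python代码示意。为便于说明,代码采用标准向量投影 Prj_b(a) = (a·b)/‖b‖ 并按属性权重加权求和来近似公式(35)~(42),与原文公式的具体展开形式可能存在差异,属示意性简化:

```python
import numpy as np

def projection_scores(agg, A_plus, A_minus, omega):
    """计算各备选方案的综合得分(简化的投影式 TOPSIS)。
    agg: (m, n, L, 3);A_plus, A_minus: (n, L, 3);omega: (n,) 属性权重。"""
    def prj(a_vec, b_vec):                            # 向量投影 Prj_b(a) = (a·b)/|b|
        nb = np.linalg.norm(b_vec)
        return float(np.dot(a_vec, b_vec) / nb) if nb > 0 else 0.0

    m, n = agg.shape[0], agg.shape[1]
    scores = np.zeros(m)
    for i in range(m):
        p_plus = p_minus = 0.0
        for j in range(n):
            for c in range(3):                        # 分别对 T、I、F 分量序列做投影
                p_plus += omega[j] * prj(agg[i, j, :, c], A_plus[j, :, c])
                p_minus += omega[j] * prj(agg[i, j, :, c], A_minus[j, :, c])
        scores[i] = p_plus - p_minus                  # 正理想投影减去负理想投影,按公式(34)的简化理解
    return scores
```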

6. 仿真实验

一个由4名决策者组成的管理团队将经过5周的考察,从3个投资项目中选出最佳项目。决策者 $e_k$($1\le k\le 4$)使用非线性中智集 $\left(A_{i,j}^{t_h}\right)^{k}$ 来对备选方案 $A_i$(A1:玉米期货,A2:大豆期货,A3:小麦期货)的决策属性 $C_j$(C1:净现值,C2:回报率,C3:效益成本分析,C4:成本回收期)进行评价。决策者权重向量 $\xi=\left(0.25,0.3,0.3,0.15\right)^{\mathrm{T}}$,决策属性权重向量 $\omega=\left(0.1,0.4,0.25,0.25\right)^{\mathrm{T}}$。决策者偏好矩阵如表1~表4所示:

Table 1. Preference matrix of e1

表1. e1的偏好矩阵

A1:
C1: t1 (0.28, 0.46, 0.28); t2 (0.27, 0.44, 0.27); t3 (0.30, 0.47, 0.31); t4 (0.34, 0.47, 0.33); t5 (0.33, 0.42, 0.28)
C2: t1 (0.10, 0.27, 0.61); t2 (0.08, 0.28, 0.59); t3 (0.14, 0.24, 0.60); t4 (0.09, 0.23, 0.64); t5 (0.10, 0.25, 0.59)
C3: t1 (0.65, 0.18, 0.22); t2 (0.62, 0.24, 0.27); t3 (0.68, 0.23, 0.22); t4 (0.65, 0.23, 0.27); t5 (0.67, 0.23, 0.23)
C4: t1 (0.53, 0.34, 0.15); t2 (0.56, 0.33, 0.18); t3 (0.57, 0.30, 0.16); t4 (0.56, 0.33, 0.14); t5 (0.56, 0.29, 0.16)

A2:
C1: t1 (0.24, 0.45, 0.56); t2 (0.19, 0.47, 0.52); t3 (0.19, 0.45, 0.57); t4 (0.24, 0.44, 0.57); t5 (0.23, 0.46, 0.57)
C2: t1 (0.16, 0.45, 0.26); t2 (0.19, 0.46, 0.29); t3 (0.21, 0.42, 0.28); t4 (0.19, 0.46, 0.29); t5 (0.19, 0.45, 0.29)
C3: t1 (0.48, 0.09, 0.16); t2 (0.48, 0.07, 0.20); t3 (0.44, 0.14, 0.20); t4 (0.48, 0.07, 0.21); t5 (0.46, 0.14, 0.17)
C4: t1 (0.74, 0.09, 0.11); t2 (0.72, 0.05, 0.10); t3 (0.74, 0.10, 0.09); t4 (0.75, 0.06, 0.09); t5 (0.77, 0.06, 0.11)

A3:
C1: t1 (0.16, 0.33, 0.68); t2 (0.14, 0.38, 0.68); t3 (0.20, 0.36, 0.73); t4 (0.16, 0.32, 0.70); t5 (0.18, 0.35, 0.70)
C2: t1 (0.26, 0.24, 0.35); t2 (0.29, 0.26, 0.34); t3 (0.27, 0.21, 0.33); t4 (0.26, 0.21, 0.33); t5 (0.26, 0.24, 0.37)
C3: t1 (0.17, 0.05, 0.86); t2 (0.19, 0.07, 0.87); t3 (0.23, 0.12, 0.87); t4 (0.22, 0.06, 0.86); t5 (0.20, 0.09, 0.84)
C4: t1 (0.45, 0.14, 0.23); t2 (0.46, 0.11, 0.22); t3 (0.43, 0.08, 0.24); t4 (0.42, 0.09, 0.24); t5 (0.41, 0.13, 0.24)

Table 2. Preference matrix of e2

表2. e2的偏好矩阵

A1:
C1: t1 (0.17, 0.28, 0.06); t2 (0.15, 0.22, 0.56); t3 (0.18, 0.27, 0.61); t4 (0.17, 0.24, 0.54); t5 (0.18, 0.26, 0.55)
C2: t1 (0.29, 0.48, 0.12)
C3: t2 (0.19, 0.47, 0.17)
C4: t3 (0.31, 0.50, 0.18)

A2:
C1: t1 (0.33, 0.29, 0.44)
C2: t2 (0.37, 0.31, 0.43)
C3: t3 (0.35, 0.34, 0.45)
C4: t4 (0.36, 0.32, 0.44)

A3:
C1: t1 (0.33, 0.29, 0.44)
C2: t2 (0.37, 0.31, 0.43)
C3: t3 (0.35, 0.34, 0.45)
C4: t4 (0.36, 0.32, 0.44)

Table 3. Preference matrix of e3

表3. e3的偏好矩阵

A1:
C1: t1 (0.22, 0.21, 0.63)
C2: t2 (0.23, 0.19, 0.66)
C3: t3 (0.28, 0.17, 0.66)
C4: t4 (0.24, 0.15, 0.67)

A2:
C1: t5 (0.25, 0.19, 0.65)
C2: t1 (0.20, 0.77, 0.38)
C3: t2 (0.19, 0.78, 0.34)
C4: t3 (0.21, 0.79, 0.36)

A3:
C1: t4 (0.21, 0.77, 0.38)
C2: t5 (0.24, 0.82, 0.38)
C3: t1 (0.37, 0.44, 0.25)
C4: t2 (0.40, 0.49, 0.19)

Table 4. Preference matrix of e4

表4. e4的偏好矩阵

A1:
C1: t1 (0.38, 0.19, 0.61)
C2: t2 (0.36, 0.20, 0.61)
C3: t3 (0.36, 0.22, 0.62)
C4: t4 (0.34, 0.18, 0.61)

A2:
C1: t5 (0.36, 0.24, 0.55)
C2: t1 (0.41, 0.61, 0.20)
C3: t2 (0.37, 0.58, 0.17)
C4: t3 (0.39, 0.63, 0.21)

A3:
C1: t4 (0.41, 0.65, 0.23)
C2: t5 (0.39, 0.62, 0.18)
C3: t1 (0.42, 0.34, 0.16)
C4: t2 (0.46, 0.36, 0.13)

用本文开发算法构建如下所示的决策者综合偏好矩阵:

$A_1^{*}$:
C1: t1 (0.2074, 0.2517, 0.6054); t2 (0.1989, 0.2148, 0.5847); t3 (0.2470, 0.2373, 0.6185); t4 (0.2164, 0.2186, 0.5712); t5 (0.2076, 0.2524, 0.5575)
C2: t1 (0.3063, 0.5351, 0.1762); t2 (0.2926, 0.4901, 0.1799); t3 (0.3147, 0.5383, 0.2041); t4 (0.3315, 0.5449, 0.2204); t5 (0.2935, 0.5392, 0.1768)
C3: t1 (0.4348, 0.2976, 0.1718); t2 (0.4600, 0.3600, 0.1300); t3 (0.4600, 0.3500, 0.1600); t4 (0.4664, 0.3384, 0.1656); t5 (0.4800, 0.3500, 0.1500)
C4: t1 (0.6192, 0.2541, 0.1720); t2 (0.6273, 0.2526, 0.2103); t3 (0.5881, 0.2576, 0.2080); t4 (0.6047, 0.2467, 0.1684); t5 (0.5963, 0.2269, 0.2070)

$A_2^{*}$:
C1: t1 (0.3405, 0.2957, 0.4878); t2 (0.3723, 0.3072, 0.4667); t3 (0.3474, 0.3346, 0.4910); t4 (0.3603, 0.3143, 0.4679); t5 (0.3500, 0.2700, 0.4900)
C2: t1 (0.2300, 0.3900, 0.3800); t2 (0.2000, 0.3900, 0.3200); t3 (0.2260, 0.3484, 0.3327); t4 (0.2500, 0.3700, 0.3300); t5 (0.2500, 0.3800, 0.3600)
C3: t1 (0.5500, 0.1700, 0.2300); t2 (0.5700, 0.2300, 0.2600); t3 (0.5800, 0.2300, 0.2400); t4 (0.5200, 0.2300, 0.2500); t5 (0.5300, 0.2100, 0.2200)
C4: t1 (0.6400, 0.1500, 0.1900); t2 (0.6000, 0.1700, 0.2200); t3 (0.6600, 0.1400, 0.2100); t4 (0.5900, 0.1600, 0.2100); t5 (0.6200, 0.1700, 0.2300)

$A_3^{*}$:
C1: t1 (0.3032, 0.2487, 0.5816); t2 (0.2700, 0.2600, 0.5800); t3 (0.2506, 0.2403, 0.5803); t4 (0.2785, 0.2487, 0.6041); t5 (0.2840, 0.2403, 0.5668)
C2: t1 (0.3100, 0.2300, 0.4200); t2 (0.2872, 0.2416, 0.4173); t3 (0.3121, 0.2391, 0.4439); t4 (0.3200, 0.1900, 0.4200); t5 (0.3394, 0.2121, 0.4179)
C3: t1 (0.3581, 0.2029, 0.6415); t2 (0.3416, 0.1970, 0.6451); t3 (0.3626, 0.1783, 0.6652); t4 (0.3560, 0.2013, 0.6702); t5 (0.3133, 0.1766, 0.6633)
C4: t1 (0.3107, 0.2550, 0.3648); t2 (0.3137, 0.2314, 0.3908); t3 (0.3020, 0.2393, 0.3212); t4 (0.3067, 0.2742, 0.3703); t5 (0.3120, 0.2687, 0.3245)

决策者综合偏好矩阵的正/负理想解如下所示:

$A^{*+}$:
C1: t1 (0.3500, 0.2300, 0.1600); t2 (0.3723, 0.2000, 0.1900); t3 (0.3500, 0.2373, 0.1700); t4 (0.3603, 0.1800, 0.2000); t5 (0.3500, 0.2300, 0.1600)
C2: t1 (0.5255, 0.2300, 0.1762); t2 (0.5370, 0.1900, 0.1799); t3 (0.5391, 0.2000, 0.2041); t4 (0.5852, 0.1900, 0.2204); t5 (0.5318, 0.2000, 0.1768)
C3: t1 (0.5500, 0.1100, 0.1700); t2 (0.5700, 0.1100, 0.1300); t3 (0.5800, 0.1000, 0.1200); t4 (0.5200, 0.1100, 0.1656); t5 (0.5300, 0.0900, 0.1400)
C4: t1 (0.6400, 0.1500, 0.1720); t2 (0.6273, 0.1700, 0.2103); t3 (0.6600, 0.1400, 0.2080); t4 (0.6047, 0.1600, 0.1684); t5 (0.6200, 0.1697, 0.2070)

$A^{*-}$:
C1: t1 (0.1304, 0.5216, 0.6054); t2 (0.1709, 0.4922, 0.5847); t3 (0.1402, 0.4889, 0.6185); t4 (0.1640, 0.4723, 0.6041); t5 (0.1654, 0.4745, 0.5668)
C2: t1 (0.2300, 0.5351, 0.6600); t2 (0.2000, 0.4901, 0.6900); t3 (0.2260, 0.5383, 0.6900); t4 (0.2500, 0.5449, 0.6600); t5 (0.2500, 0.5392, 0.6800)
C3: t1 (0.3300, 0.4700, 0.6415); t2 (0.3000, 0.4800, 0.6451); t3 (0.3000, 0.4900, 0.6652); t4 (0.3100, 0.4600, 0.6702); t5 (0.3000, 0.4200, 0.6633)
C4: t1 (0.2300, 0.2550, 0.5800); t2 (0.2800, 0.2526, 0.5291); t3 (0.2800, 0.2576, 0.4700); t4 (0.2485, 0.2742, 0.6870); t5 (0.2300, 0.2687, 0.5735)

最终方案得分如下所示:

$S(A_1)=0.00186$,$S(A_2)=0.000549$,$S(A_3)=0.00027$

显然,$A_2\succ A_3\succ A_1$。

7. 实验对比

将本文开发的算法与SVNS算子[10]、SvNCNTWA算子[11]以及SvNCNTWG算子[11]进行比较,对上一章所设计的仿真案例进行分析,所得结果如下:

1) 本文算法:$A_2\succ A_3\succ A_1$;

2) SVNS:$A_2\succ A_3\succ A_1$;

3) SvNCNTWA:$A_2\succ A_3\succ A_1$;

4) SvNCNTWG:$A_3\succ A_2\succ A_1$。

相关准确性测度对比如表5所示。

Table 5. Accuracy comparison

表5. 准确度比较

方法 | $\hat{d}_H\left(d_{ij}^{*},d_{ij}^{e_k}\right)$ | $\hat{C}\left(d_{ij}^{*},d_{ij}^{e_k}\right)$ | $E\left(d_{ij}^{*}\right)$ | $\hat{K}\left(d_{ij}^{*},d_{ij}^{e_k}\right)$
PGSA | 82.7795 | 166.6943 | 84.0071 | 3.7176
EL-SVNS | 84.5682 | 158.5288 | 85.2588 | 3.5521
SvNCNTWA | 2.967163 | 162.2254 | 86.2458 | 3.6628
SvNCNTWG | 3.48762 | 161.5345 | 87.0126 | 3.5872

显然本文算法所得结果与决策者偏好矩阵之间相似度更高,熵测度更低,结果更为准确。

8. 总结与展望

8.1. 总结

本文介绍了动态非线性中智集(DNSNS)及其集结模型,以解决MAGDM问题。本研究的主要贡献如下:

1) DNSNS能够表示方案属性随时间和外部环境的变化,便于描述专家给出的实时模糊偏好信息。

2) 所提出的集结模型将专家的非线性模糊偏好信息映射为三维空间中的曲线,可直观地刻画偏好信息的动态变化,同时避免了数据预处理对方案属性间关系造成的影响。此外,该模型用空间曲线之间所围成曲面的面积来准确描述偏好信息之间的差异。

3) 本研究采用PGSA算法寻找到所有偏好曲线的加权Euclidean距离之和最小的集结曲线,所得结果符合帕累托最优原则,与直接采用正/负理想解进行偏好集结的TOPSIS相比具有更高的准确性。此外,在算法执行过程中,PGSA会动态调整所有节点的生长方向,并通过从节点中选择新的生长点来更新其浓度,与最近邻搜索(NNS)相比能够避免陷入局部最优。

8.2. 展望

在未来的研究中,我们将探索使用非线性TOPSIS (NR-TOPSIS)来替代传统TOPSIS,以提高对非线性偏好信息的集结精度。此外,我们将研究欧几里得n维空间聚合模型(n > 3)及其相应的聚合算法,以解决涉及动态非线性犹豫模糊语言信息的多属性群决策问题。

基金项目

1) 常州市科技支撑计划(社会发展),项目号:CE20235043。

2) 江苏理工学院2022年校教学改革与研究项目“OBE视域下课程思政素材库与案例库的建设研究——以数据结构课程为例”(项目号:11610312306)。

NOTES

*通讯作者。

参考文献

[1] Zadeh, L.A. (1965) Fuzzy Sets. Information and Control, 8, 338-353.
https://doi.org/10.1016/s0019-9958(65)90241-x
[2] Sarkar, D. and Srivastava, P.K. (2024) Recent Development and Applications of Neutrosophic Fuzzy Optimization Approach. International Journal of System Assurance Engineering and Management, 19, e02243.
[3] Donbosco, J.S.M. and Ganesan, D. (2023) The Energy of Interval Valued Neutrosophic Matrix in Decision-Making to Select the Manager for the Company Project. Operations Research and Decisions, 33, 35-51.
https://doi.org/10.37190/ord230403
[4] Shi, X., Kosari, S., Rashmanlou, H., Broumi, S. and Satham Hussain, S. (2023) Properties of Interval-Valued Quadripartitioned Neutrosophic Graphs with Real-Life Application. Journal of Intelligent & Fuzzy Systems, 44, 7683-7697.
https://doi.org/10.3233/jifs-222572
[5] Fahmi, A., Aslam, M. and Ahmed, R. (2023) Decision-Making Problem Based on Generalized Interval-Valued Bipolar Neutrosophic Einstein Fuzzy Aggregation Operator. Soft Computing, 27, 14533-14551.
https://doi.org/10.1007/s00500-023-08944-w
[6] Qiu, J. and Li, L. (2017) A New Approach for Multiple Attribute Group Decision Making with Interval-Valued Intuitionistic Fuzzy Information. Applied Soft Computing, 61, 111-121.
https://doi.org/10.1016/j.asoc.2017.07.008
[7] Elrawy, A., Smarandache, F. and Temraz, A.A. (2024) Investigation of a Neutrosophic Group. Journal of Intelligent & Fuzzy Systems, 46, 2273-2280.
https://doi.org/10.3233/jifs-232941
[8] Köseoğlu, A., Şahin, R. and Merdan, M. (2019) A Simplified Neutrosophic Multiplicative Set‐Based TODIM Using Water‐Filling Algorithm for the Determination of Weights. Expert Systems, 37, e12515.
https://doi.org/10.1111/exsy.12515
[9] Şahin, R. and Küçük, G.D. (2018) Group Decision Making with Simplified Neutrosophic Ordered Weighted Distance Operator. Mathematical Methods in the Applied Sciences, 41, 4795-4809.
https://doi.org/10.1002/mma.4931
[10] Garg, H. (2024) A New Exponential-Logarithm-Based Single-Valued Neutrosophic Set and Their Applications. Expert Systems with Applications, 238, Article ID: 121854.
https://doi.org/10.1016/j.eswa.2023.121854
[11] Ye, J., Du, S. and Yong, R. (2023) Multi-Criteria Decision-Making Model Using Trigonometric Aggregation Operators of Single-Valued Neutrosophic Credibility Numbers. Information Sciences, 644, Article ID: 118968.
https://doi.org/10.1016/j.ins.2023.118968