Research on a Multi-Strategy Optimization Method Library and LLM Dynamic Scheduling for Medical Image Registration
Abstract: In multimodal medical image registration, different optimization methods behave inconsistently across samples and across optimization stages, so a single fixed algorithm rarely balances accuracy and stability. For the 9-parameter constrained affine transformation problem in CT-MRI registration, this paper builds a candidate library of eight methods and compares their overall accuracy, stage-wise convergence behavior, and cross-sample stability under three equal-budget configurations. The results reveal clear complementarity among the methods: CRO-ZZ-QLS achieves the best overall performance; Powell converges quickly but is prone to stalling in local optima; and CRO-ZZ-QLS, together with CRO-SL, is the most stable across samples. Building on these observations, an LLM-based dynamic scheduling method is introduced that, under the same evaluation budget, makes discrete choices among the candidate algorithms and population configurations. Experiments show that the LLM scheduler reaches a mean NMI of 0.2388, a 0.72% improvement over the best static method, and obtains the best result on 9 of the 14 samples. These findings suggest that combining global exploration with later-stage local refinement further improves registration performance.
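The objective reported above is normalized mutual information (NMI) between the CT and MRI intensity images. As a point of reference, the sketch below computes one standard NMI definition, Studholme's (H(A) + H(B)) / H(A, B), from a joint intensity histogram. The function name and bin count are illustrative choices, and the exact normalization used in the paper may differ (its reported values near 0.24 indicate a differently scaled variant), so this is a sketch of the general metric, not the paper's implementation.

```python
import numpy as np

def normalized_mutual_information(fixed, moving, bins=32):
    """Studholme NMI = (H(A) + H(B)) / H(A, B) from a joint histogram.

    `fixed` and `moving` are intensity arrays of equal shape (e.g. a CT
    slice and the transformed, resampled MRI slice); `bins` sets the
    histogram resolution. Values lie in [1, 2]; higher means better aligned.
    """
    joint, _, _ = np.histogram2d(fixed.ravel(), moving.ravel(), bins=bins)
    pxy = joint / joint.sum()        # joint probability P(a, b)
    px = pxy.sum(axis=1)             # marginal P(a)
    py = pxy.sum(axis=0)             # marginal P(b)

    # Shannon entropies; zero bins are masked to avoid log(0).
    hx = -np.sum(px[px > 0] * np.log(px[px > 0]))
    hy = -np.sum(py[py > 0] * np.log(py[py > 0]))
    hxy = -np.sum(pxy[pxy > 0] * np.log(pxy[pxy > 0]))
    return (hx + hy) / hxy
```

Under this definition, an optimizer in the candidate library would seek the 9 affine parameters that maximize this quantity for each CT-MRI pair.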
Citation: Zhong, Z.W. (2026) Research on a Multi-Strategy Optimization Method Library and LLM Dynamic Scheduling for Medical Image Registration. Computer Science and Application, 16, 13-23. https://doi.org/10.12677/csa.2026.165159

References

[1] Nie, Q., Zhang, X., Hu, Y., Gong, M. and Liu, J. (2024) Medical Image Registration and Its Application in Retinal Images: A Review. Visual Computing for Industry, Biomedicine, and Art, 7, Article No. 21.
[2] Chen, J., Liu, Y., Wei, S., Bian, Z., Subramanian, S., Carass, A., et al. (2025) A Survey on Deep Learning in Medical Image Registration: New Technologies, Uncertainty, Evaluation Metrics, and Beyond. Medical Image Analysis, 100, Article ID: 103385.
[3] Sengupta, D., Gupta, P. and Biswas, A. (2022) A Survey on Mutual Information Based Medical Image Registration Algorithms. Neurocomputing, 486, 174-188.
[4] Zheng, Z.H., Xie, Y.H., Jiang, X.Q., et al. (2023) Unsupervised Monomodal Medical Image Registration Algorithm Based on Deep Learning. Computer Science and Application, 13, 57-64. (In Chinese)
[5] Klein, S., Staring, M., Murphy, K., Viergever, M.A. and Pluim, J. (2010) Elastix: A Toolbox for Intensity-Based Medical Image Registration. IEEE Transactions on Medical Imaging, 29, 196-205.
[6] Shi, X.Y. and Yang, F.L. (2024) Non-Rigid Point Set Registration Based on Gaussian Mixture Models. Advances in Applied Mathematics, 13, 3826-3836. (In Chinese)
[7] Maes, F., Collignon, A., Vandermeulen, D., Marchal, G. and Suetens, P. (1997) Multimodality Image Registration by Maximization of Mutual Information. IEEE Transactions on Medical Imaging, 16, 187-198.
[8] Salcedo-Sanz, S., Del Ser, J., Landa-Torres, I., Gil-López, S. and Portilla-Figueras, J.A. (2014) The Coral Reefs Optimization Algorithm: A Novel Metaheuristic for Efficiently Solving Optimization Problems. The Scientific World Journal, 2014, Article ID: 739768.
[9] Bermejo, E., Chica, M., Damas, S., Salcedo-Sanz, S. and Cordón, O. (2018) Coral Reef Optimization with Substrate Layers for Medical Image Registration. Swarm and Evolutionary Computation, 42, 138-159.
[10] Hansen, N. and Ostermeier, A. (2001) Completely Derandomized Self-Adaptation in Evolution Strategies. Evolutionary Computation, 9, 159-195.
[11] Powell, M.J.D. (1964) An Efficient Method for Finding the Minimum of a Function of Several Variables without Calculating Derivatives. The Computer Journal, 7, 155-162.
[12] Leskovar, M., Heyland, M., Trepczynski, A. and Zachow, S. (2025) Comparison of Global and Local Optimization Methods for Intensity-Based 2D–3D Registration. Computers in Biology and Medicine, 186, Article ID: 109574.
[13] Wang, Y.H., Wang, G.Z. and He, C.Z. (2025) Application of Swarm Intelligence Algorithms in UAV Path Planning. Computer Science and Application, 15, 21-27. (In Chinese)
[14] Liu, Y.Q., Tu, Y.Q. and Lu, H.Q. (2025) Research on Improved Theory and Application of Genetic Algorithm Collaborative Optimization. Software Engineering and Applications, 14, 765-771. (In Chinese)
[15] Durgut, R., Aydin, M.E. and Atli, I. (2021) Adaptive Operator Selection with Reinforcement Learning. Information Sciences, 581, 773-790.
[16] Yin, S. and Xiang, Z. (2024) Adaptive Operator Selection with Dueling Deep Q-Network for Evolutionary Multi-Objective Optimization. Neurocomputing, 581, Article ID: 127491.
[17] Wolpert, D.H. and Macready, W.G. (1997) No Free Lunch Theorems for Optimization. IEEE Transactions on Evolutionary Computation, 1, 67-82.
[18] Dokeroglu, T., Kucukyilmaz, T. and Talbi, E. (2024) Hyper-Heuristics: A Survey and Taxonomy. Computers & Industrial Engineering, 187, Article ID: 109815.
[19] Huang, C., Li, Y. and Yao, X. (2020) A Survey of Automatic Parameter Tuning Methods for Metaheuristics. IEEE Transactions on Evolutionary Computation, 24, 201-216.
[20] Wu, X., Wu, S., Wu, J., Feng, L. and Tan, K.C. (2025) Evolutionary Computation in the Era of Large Language Model: Survey and Roadmap. IEEE Transactions on Evolutionary Computation, 29, 534-554.
[21] Yang, C., Wang, X., Lu, Y., et al. (2024) Large Language Models as Optimizers. arXiv: 2309.03409.
[22] Lu, Y. and Chen, L. (2026) LLM-NAS: An Evolutionary Framework for Neural Architecture Search Based on Large Language Models. Computer Science and Application, 16, 405-413. (In Chinese)
[23] Zhong, R., Hussien, A.G., Yu, J. and Munetomo, M. (2025) LLMOA: A Novel Large Language Model Assisted Hyper-Heuristic Optimization Algorithm. Advanced Engineering Informatics, 64, Article ID: 103042.
[24] Guo, P.F., Chen, Y.H., Tsai, Y.D., et al. (2024) Towards Optimizing with Large Language Models. arXiv: 2310.05204.
[25] Zhang, T., Yuan, J. and Avestimehr, S. (2024) Revisiting OPRO: The Limitations of Small-Scale LLMs as Optimizers. Findings of the Association for Computational Linguistics: ACL 2024, Bangkok, 11-16 August 2024, 1727-1735.