Semantic Descriptions and Prediction for Procedural Texture
DOI: 10.12677/CSA.2017.76064 | Supported by the National Natural Science Foundation of China
Authors: Lina Wang*, Xin Sun, Junyu Dong: Department of Computer Science and Technology, Ocean University of China, Qingdao, Shandong; Jun Liu: College of Science and Information Science, Qingdao Agricultural University, Qingdao, Shandong; Zhanbin Yang: Qingdao Gaoxiao Softcontrol Co., Ltd., Qingdao, Shandong
Keywords: Procedural Texture, Semantic Description, Prediction
Abstract: Procedural textures with different patterns are typically generated from mathematical models whose parameters have been carefully selected by experienced researchers. For most users, however, the intuitive way to obtain a desired texture is to give semantic descriptions such as "regular", "lacelike" and "repetitive", in the hope that a procedural model with proper parameters will then be suggested to generate matching textures; it is far less practical to learn the mathematical models and tune their parameters by repeatedly examining large numbers of generated textures. This leaves a wide gap between human descriptions and the generation models and their parameters. Attaching semantic descriptions to texture images therefore builds a bridge between human visual perception and the images: by analyzing the semantics that people define, a suitable generation model and parameters can be found to produce textures that match the description. Taking the semantic description of textures as its starting point, this study collects people's semantic descriptions of texture images and builds a prediction model with a multi-label learning algorithm, laying a foundation for communication between people and procedural textures.
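To make the prediction step concrete, here is a minimal, hypothetical sketch of multi-label prediction of semantic texture attributes. The abstract does not specify the image features or the learning algorithm used in the paper; scikit-learn's OneVsRestClassifier over logistic regression, the label set, and the toy feature vectors below are all placeholder assumptions, not the authors' method.

```python
# Hypothetical sketch: given per-image feature vectors extracted from
# procedural textures, predict a set of semantic labels such as
# "regular", "lacelike" and "repetitive" via one-vs-rest multi-label learning.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer

LABELS = ["regular", "lacelike", "repetitive"]  # example semantic attributes

# Toy training data: rows are feature vectors (standing in for real texture
# descriptors); each image carries one or more semantic labels.
X = np.array([[0.9, 0.1, 0.8],
              [0.2, 0.7, 0.1],
              [0.8, 0.2, 0.9],
              [0.1, 0.9, 0.2]])
y_sets = [{"regular", "repetitive"}, {"lacelike"},
          {"regular", "repetitive"}, {"lacelike"}]

# Turn the label sets into a binary indicator matrix, one column per label.
mlb = MultiLabelBinarizer(classes=LABELS)
Y = mlb.fit_transform(y_sets)

# One binary classifier per semantic attribute.
clf = OneVsRestClassifier(LogisticRegression()).fit(X, Y)

# Predict semantic descriptions for a new texture's feature vector.
probs = clf.predict_proba(np.array([[0.85, 0.15, 0.9]]))[0]
predicted = [lab for lab, p in zip(LABELS, probs) if p >= 0.5]
print(dict(zip(LABELS, probs.round(2))), predicted)
```

Because each attribute gets its own binary classifier in the one-vs-rest setup, a single texture can receive several descriptions at once, which matches the multi-label formulation outlined in the abstract.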
Citation: Wang, L., Liu, J., Sun, X., Dong, J. and Yang, Z. (2017) Semantic Descriptions and Prediction for Procedural Texture. Computer Science and Application, 7(6), 537-545. https://doi.org/10.12677/CSA.2017.76064
