An Image Semantic Segmentation Method Based on a Deep CRF Model
Authors: HU Tao, LI Weihua, QIN Xianxiang, QIU Langbo, LI Xiaochun

CLC number: TP391

Fund project: National Natural Science Foundation of China (41601436; 61403414; 61703423)

    Abstract:

    Stacking multiple feature vectors extracted from an image into a single high-dimensional feature vector for semantic segmentation can weaken or lose the discriminative power of some of those features. To address this problem, an image semantic segmentation method combining the deep convolutional neural network AlexNet with a conditional random field (CRF) is proposed. A pretrained AlexNet model is used to extract image features, and the CRF then makes effective use of the multiple features and contextual information to produce the semantic segmentation. Experiments comparing the method with approaches based on traditional hand-crafted features show that Conv5 is the most effective feature-extraction layer when AlexNet features are used for semantic segmentation: the recognition accuracy on the Stanford background and Weizmann horse datasets reaches 81.0% and 91.7% respectively, higher than both comparison methods, indicating that AlexNet extracts more effective features and yields higher segmentation accuracy.
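    The pipeline described in the abstract (pretrained AlexNet Conv5 features followed by CRF inference over those features and contextual cues) can be sketched roughly as below. This is a minimal illustration, not the authors' implementation: it assumes PyTorch/torchvision for the pretrained AlexNet, substitutes a fully connected CRF from the pydensecrf package for the paper's CRF model, and the per-pixel classifier `pixel_classifier`, the class count `N_CLASSES`, and all CRF parameters are hypothetical placeholders.

```python
# Illustrative sketch only: AlexNet Conv5 features + a fully connected CRF
# for per-pixel labelling. The classifier head and CRF parameters are
# placeholders, not the paper's actual model.
import numpy as np
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms as T
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax
from PIL import Image

N_CLASSES = 8  # e.g. the 8 classes of the Stanford background dataset

# Pretrained AlexNet; Conv5 is the 5th convolution in .features (index 10),
# so slicing up to index 12 keeps Conv5 and its ReLU.
alexnet = models.alexnet(pretrained=True).eval()
conv5_extractor = alexnet.features[:12]

preprocess = T.Compose([
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def segment(image_path: str, pixel_classifier: torch.nn.Module) -> np.ndarray:
    """Return an (H, W) label map for the image at image_path."""
    img = Image.open(image_path).convert("RGB")
    w, h = img.size
    x = preprocess(img).unsqueeze(0)                 # (1, 3, H, W)

    with torch.no_grad():
        # Conv5 feature maps, upsampled to the input resolution so that
        # every pixel gets a 256-dimensional feature vector.
        feat = conv5_extractor(x)                    # (1, 256, H', W')
        feat = F.interpolate(feat, size=(h, w), mode="bilinear",
                             align_corners=False)
        # Hypothetical per-pixel classifier (e.g. a 1x1 conv trained
        # separately) mapping features to class probabilities.
        logits = pixel_classifier(feat)              # (1, N_CLASSES, H, W)
        probs = torch.softmax(logits, dim=1)[0].numpy()

    # Fully connected CRF refinement over position and colour
    # (stand-in for the paper's CRF over multiple features and context).
    d = dcrf.DenseCRF2D(w, h, N_CLASSES)
    d.setUnaryEnergy(unary_from_softmax(probs))      # negative log-probs
    d.addPairwiseGaussian(sxy=3, compat=3)           # smoothness kernel
    d.addPairwiseBilateral(sxy=60, srgb=10, compat=10,
                           rgbim=np.ascontiguousarray(np.asarray(img)))
    q = d.inference(5)                               # 5 mean-field iterations
    return np.argmax(q, axis=0).reshape(h, w)
```

    A usage example would pass a trained head such as `torch.nn.Conv2d(256, N_CLASSES, kernel_size=1)` as `pixel_classifier`; the returned label map can then be scored against the dataset's ground-truth annotations.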

Cite this article:

HU Tao, LI Weihua, QIN Xianxiang, QIU Langbo, LI Xiaochun. An Image Semantic Segmentation Method Based on a Deep CRF Model [J]. Journal of Air Force Engineering University, 2018, 19(5): 52-57.

History
  • Online publication date: 2018-12-17