Text-to-Image Generation with an Attention Mechanism Combined with Semantic Segmentation Maps
DOI:
Authors: 梁成名, 李云红, 李丽敏, 苏雪平, 朱绵云, 朱耀麟
CLC number: TP391.41
Funding: National Natural Science Foundation of China (62203344); Key Project of the Natural Science Basic Research Program of Shaanxi Province (2022JZ-35); Youth Innovation Team Project of Shaanxi Universities




Abstract:

Aiming at the problems that images generated by generative adversarial networks are structurally incomplete, unrealistic in content, and of poor quality, an attention-mechanism text-to-image generation model combined with semantic segmentation maps (SSA-GAN) is proposed. First, a simple and effective deep fusion module takes the global sentence vector as its input condition, fully fusing the text information while the image is generated. Second, semantic segmentation images are incorporated and their edge-contour features extracted, providing the model with additional generation and constraint conditions. Third, an attention mechanism supplies the model with fine-grained word-level information to enrich the details of the generated images. Finally, a multimodal similarity computation model computes a fine-grained image-text matching loss to better train the generator. The model is tested and validated on the CUB-200 and Oxford-102 Flowers datasets. The results show that, compared with the images generated by StackGAN, AttnGAN, DF-GAN, and RAT-GAN, the proposed SSA-GAN improves IS by up to 13.7% and 43.2% on the two datasets respectively, reduces FID by up to 34.7% and 74.9% respectively, and yields better visual results, demonstrating the effectiveness of the proposed method.
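The four steps above are only named on this page, so brief sketches may help. The deep fusion module of the first step conditions image features on the global sentence vector; in DF-GAN, one of the compared baselines, this is done with sentence-predicted channel-wise affine transformations. A minimal PyTorch sketch under that assumption (the class and parameter names are illustrative, not the authors' code):

```python
import torch
import torch.nn as nn

class DeepFusionBlock(nn.Module):
    """Sentence-conditioned affine modulation (a DF-GAN-style sketch)."""

    def __init__(self, channels: int, sent_dim: int):
        super().__init__()
        self.gamma = nn.Linear(sent_dim, channels)  # per-channel scale
        self.beta = nn.Linear(sent_dim, channels)   # per-channel shift

    def forward(self, x: torch.Tensor, sent: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) image features; sent: (B, sent_dim) sentence vector
        g = self.gamma(sent).unsqueeze(-1).unsqueeze(-1)
        b = self.beta(sent).unsqueeze(-1).unsqueeze(-1)
        # modulate every spatial location with the text condition
        return x * (1 + g) + b
```

Stacking such blocks at every generator stage is what allows the text condition to be fused "fully" rather than only at the input layer.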
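The page does not say how the second step extracts edge contours from the segmentation maps. One plausible minimal reading, assuming each map is a per-pixel integer label image, is to mark every pixel whose class label differs from a neighbour's (the function below is a hypothetical illustration, not the paper's method):

```python
import numpy as np

def segmentation_edges(seg_map: np.ndarray) -> np.ndarray:
    """Binary contour map from an (H, W) per-pixel class-label image.

    A pixel is an edge wherever its label differs from its right or
    bottom neighbour, tracing the boundaries between semantic regions.
    """
    edges = np.zeros_like(seg_map, dtype=np.uint8)
    edges[:, :-1] |= (seg_map[:, :-1] != seg_map[:, 1:]).astype(np.uint8)
    edges[:-1, :] |= (seg_map[:-1, :] != seg_map[1:, :]).astype(np.uint8)
    return edges
```

The resulting contour image can be fed to the generator as an extra conditioning channel, matching the abstract's "additional generation and constraint conditions".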
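The third step's word-level attention matches the fine-grained conditioning familiar from AttnGAN, another compared baseline: every image sub-region attends over the caption's word embeddings. A standard sketch of that mechanism (the paper's exact module may differ):

```python
import torch
import torch.nn.functional as F

def word_level_attention(region_feats: torch.Tensor,
                         word_feats: torch.Tensor) -> torch.Tensor:
    """AttnGAN-style word attention.

    region_feats: (B, D, N) image sub-region features
    word_feats:   (B, D, T) word embeddings from the text encoder
    returns:      (B, D, N) word-context vector for every sub-region
    """
    # word-to-region similarity scores: (B, T, N)
    attn = torch.bmm(word_feats.transpose(1, 2), region_feats)
    # normalise over words so each region focuses on its relevant words
    attn = F.softmax(attn, dim=1)
    # per-region weighted mix of word features: (B, D, N)
    return torch.bmm(word_feats, attn)
```

The returned context gives each sub-region a weighted mix of the words most relevant to it, which is what lets the model refine local details such as colour and texture.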
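For the final step, the multimodal similarity computation model scores how well each generated image matches its caption; in this line of work (AttnGAN's DAMSM and its successors) this is trained contrastively against the mismatched captions in a batch. A simplified sketch at the sentence level (gamma is an illustrative smoothing factor, not a documented hyperparameter):

```python
import torch
import torch.nn.functional as F

def matching_loss(img_feats: torch.Tensor,
                  sent_feats: torch.Tensor,
                  gamma: float = 10.0) -> torch.Tensor:
    """Batch-contrastive image-text matching loss (DAMSM-style sketch).

    img_feats:  (B, D) global image features
    sent_feats: (B, D) global sentence features
    Matched pairs lie on the diagonal of the similarity matrix.
    """
    img = F.normalize(img_feats, dim=-1)
    sent = F.normalize(sent_feats, dim=-1)
    sim = gamma * img @ sent.t()                     # (B, B) cosine logits
    labels = torch.arange(sim.size(0), device=sim.device)
    # match each image to its own caption, and vice versa
    return F.cross_entropy(sim, labels) + F.cross_entropy(sim.t(), labels)
```

Minimising this loss pushes the generator toward images whose features align with the paired text; the fine-grained variant in the abstract applies the same idea per word and sub-region.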
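For reference, the IS and FID figures quoted are the standard Inception Score (higher is better) and Fréchet Inception Distance (lower is better), both computed from a pretrained Inception network:

```latex
% Inception Score over generated images x ~ p_g, with class posterior p(y|x)
\mathrm{IS} = \exp\!\Big( \mathbb{E}_{x \sim p_g}\, D_{\mathrm{KL}}\big( p(y \mid x) \,\Vert\, p(y) \big) \Big)

% Frechet Inception Distance between Gaussian fits to the Inception
% features of real (r) and generated (g) images
\mathrm{FID} = \lVert \mu_r - \mu_g \rVert_2^2
             + \operatorname{Tr}\big( \Sigma_r + \Sigma_g - 2\,(\Sigma_r \Sigma_g)^{1/2} \big)
```

So a higher IS and a lower FID both indicate that SSA-GAN's reported gains run in the expected direction on both datasets.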

Cite this article

梁成名,李云红,李丽敏,苏雪平,朱绵云,朱耀麟.结合语义分割图的注意力机制文本生成图像[J].空军工程大学学报,2024,25(4):118-127

History
  • Online publication date: 2024-07-27