Style Genes: Leveraging Generative AI for Artwork Authentication through Artistic Style Consistency Analysis

Jiaying Li

Abstract

The proliferation of sophisticated art forgeries poses mounting challenges for authentication practices in today's art market. This paper introduces a novel framework that leverages generative artificial intelligence to verify artwork authenticity by analyzing artistic style consistency. We conceptualize artist-specific stylistic signatures as "style genes" and employ fine-tuned diffusion models to extract and analyze these inherent characteristics. The approach combines prompt engineering techniques with inverse style-matching protocols to assess whether a questioned artwork aligns with the stylistic fingerprint of its attributed artist. Experimental validation on Chinese ink paintings by masters including Qi Baishi, Xu Beihong, and Zhang Daqian, and on Western oil paintings by artists including Picasso, Monet, and Van Gogh, demonstrates superior performance compared to traditional methods, achieving 94.3% accuracy in forgery detection while maintaining interpretability through multimodal contextual analysis that integrates visual, textual, and historical data.
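The abstract describes scoring a questioned artwork against an artist's stylistic fingerprint. The paper's actual pipeline (fine-tuned diffusion models with inverse style matching) is not reproduced here; as a minimal illustrative sketch, the snippet below assumes style can be summarized by Gram-matrix features of a convolutional feature map, a classic style proxy from the style-transfer literature, and scores consistency as mean cosine similarity against reference works. All function names, the feature choice, and the threshold are assumptions for illustration only.

```python
import numpy as np

def gram_style_features(feature_map: np.ndarray) -> np.ndarray:
    """Summarize a (C, H, W) feature map as a normalized Gram matrix,
    a common proxy for artistic style, flattened to a vector."""
    c, h, w = feature_map.shape
    f = feature_map.reshape(c, h * w)
    gram = (f @ f.T) / (c * h * w)   # channel-by-channel correlations
    return gram.flatten()

def style_consistency_score(query: np.ndarray,
                            references: list[np.ndarray]) -> float:
    """Mean cosine similarity between a questioned work's style vector
    and an artist's reference style vectors (the 'style genes')."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    return float(np.mean([cos(query, r) for r in references]))

def authenticate(query: np.ndarray,
                 references: list[np.ndarray],
                 threshold: float = 0.8) -> bool:
    """Flag the work as consistent with the attributed artist when the
    score clears a (hypothetical) calibration threshold."""
    return style_consistency_score(query, references) >= threshold
```

In practice the feature maps would come from a pretrained vision backbone and the threshold from calibration on known-authentic works; this sketch only shows the consistency-scoring shape of the problem.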

Article Details

How to Cite

Style Genes: Leveraging Generative AI for Artwork Authentication through Artistic Style Consistency Analysis. (2026). Journal of Sustainability, Policy, and Practice, 2(1), 87-100. https://schoalrx.com/index.php/jspp/article/view/79
