
Title:
基于计算机视觉的轮胎断面高效建模方法研究. (Chinese)
Alternate Title:
Research on Efficient Modeling Method of Tire Cross-section Based on Computer Vision. (English)
Source:
China Rubber Industry; Sep2025, Vol. 72 Issue 9, p657-665, 9p
Database:
Complementary Index

Further Information

An efficient modeling method for the tire cross-section, referred to as the CV-CAD-Python method, was proposed by integrating computer vision (CV) technology with the secondary development of AutoCAD software in the Python language. A photography platform for the tire cross-section was built, and a standardized process for the automated modeling of the tire cross-section was established. Firstly, tire cross-section images were captured with an image acquisition system, and the edge features were highlighted through image preprocessing and enhancement techniques. The Canny operator was then employed to extract the pixel coordinates of the edge features of the tire cross-section. Finally, spline curves were generated by connecting the edge pixels at intervals of three points through the secondary development of AutoCAD in Python, enabling efficient automated modeling of complex tire cross-sections. This method shortened the modeling time of the tire cross-section from several hours to 1 min, and it can be extended to multiple fields. [ABSTRACT FROM AUTHOR]
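
A minimal sketch of the described pipeline follows, assuming OpenCV for the image steps and the pyautocad COM wrapper (Windows, with AutoCAD running) for the drawing step; the file name, Canny thresholds, pixel-to-drawing scale, and contour handling are illustrative assumptions, not details taken from the paper.

import cv2
from pyautocad import Autocad, aDouble

# 1. Load the cross-section photograph and enhance it before edge detection.
img = cv2.imread("tire_section.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
if img is None:
    raise FileNotFoundError("tire_section.png")
img = cv2.GaussianBlur(img, (5, 5), 0)   # suppress sensor noise
img = cv2.equalizeHist(img)              # boost edge contrast

# 2. Canny operator: extract the edge pixels of the cross-section outline.
edges = cv2.Canny(img, 50, 150)

# 3. Walk each contour in pixel order and keep every third point, mirroring
#    the interval-of-three sampling described in the abstract.
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)

acad = Autocad(create_if_not_exists=True)  # attach to a running AutoCAD session
scale = 0.1                                # px -> drawing units (assumed)

for contour in contours:
    pts = contour.reshape(-1, 2)[::3]      # sample every 3rd edge pixel
    if len(pts) < 4:
        continue                           # too short to fit a useful spline
    # Flatten to (x, y, z) triples; flip y because image rows grow downward.
    fit_points = []
    for x, y in pts:
        fit_points.extend((x * scale, (img.shape[0] - y) * scale, 0.0))
    # AddSpline takes the fit points plus start and end tangent vectors.
    acad.model.AddSpline(aDouble(fit_points), aDouble(0, 0, 0), aDouble(0, 0, 0))

In the paper the point coordinates would come from the calibrated photography platform, so the scale factor and y-axis flip above stand in for that calibration step.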
