Status: Published
Concise and Effective Network for 3D Human Modeling From Orthogonal Silhouettes
Liu, B.1,2; Liu, X.P.2; Yang, Z. X.3; Wang, C. L.4
2022-03-31
Source Publication: The ASME Journal of Computing and Information Science in Engineering
ISSN: 1530-9827
Volume: 22  Issue: 5  Pages: 051004
Abstract

In this article, we revisit the problem of 3D human modeling from two orthogonal silhouettes of individuals (i.e., front and side views). Different from our previous work (Wang et al., 2003, "Virtual Human Modeling From Photographs for Garment Industry," Comput. Aided Des., 35, pp. 577–589), a supervised learning approach based on a convolutional neural network (CNN) is investigated to solve the problem by establishing a mapping function that can effectively extract features from the two silhouettes and fuse them into coefficients in the shape space of human bodies. A new CNN structure is proposed in our work to extract not only the discriminative features of the front and side views but also their mixed features for the mapping function. 3D human models with high accuracy are synthesized from the coefficients generated by the mapping function. Existing CNN approaches for 3D human modeling usually learn a large number of parameters (from 8.5 M to 355.4 M) from two binary images. In contrast, we investigate a new network architecture and take sample points on the silhouettes as the input. As a consequence, more accurate models can be generated by our network with only 2.4 M coefficients. The training of our network is conducted on samples obtained by augmenting a publicly accessible dataset. Transfer learning with datasets containing a smaller number of scanned models is applied to our network to enable the generation of results with gender-oriented (or geographical) patterns.
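To make the mapping described in the abstract concrete, the following is a minimal, illustrative PyTorch sketch of a two-branch network that takes sample points from the front and side silhouettes and regresses shape-space coefficients. The class name, layer sizes, point-sample count, number of coefficients, and fusion scheme are assumptions for illustration only and do not reproduce the architecture reported in the paper.

```python
# Illustrative sketch only: a two-branch CNN mapping sampled silhouette points
# (front and side views) to coefficients in a learned human shape space.
# All sizes below (num_points, num_coeffs, channel widths) are assumed values.
import torch
import torch.nn as nn

class SilhouetteToShapeNet(nn.Module):
    def __init__(self, num_points=648, num_coeffs=50):
        super().__init__()
        # One branch per view, operating on 1-D sequences of sampled contour points (x, y).
        def branch():
            return nn.Sequential(
                nn.Conv1d(2, 32, kernel_size=5, padding=2), nn.ReLU(),
                nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            )
        self.front_branch = branch()
        self.side_branch = branch()
        # A fusion head mixes the two view features before regressing shape-space coefficients.
        self.fusion = nn.Sequential(
            nn.Linear(64 * 2, 256), nn.ReLU(),
            nn.Linear(256, num_coeffs),
        )

    def forward(self, front_pts, side_pts):
        # front_pts, side_pts: (batch, 2, num_points) sampled silhouette coordinates
        f = self.front_branch(front_pts)
        s = self.side_branch(side_pts)
        return self.fusion(torch.cat([f, s], dim=1))

# Usage: the predicted coefficients would be combined with a statistical shape
# basis (e.g., PCA of body scans, not shown here) to synthesize a 3D body mesh.
model = SilhouetteToShapeNet()
coeffs = model(torch.randn(1, 2, 648), torch.randn(1, 2, 648))
print(coeffs.shape)  # torch.Size([1, 50])
```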

Keywords: Artificial Intelligence; Computer Aided Design; Virtual Prototyping
DOI: 10.1115/1.4054001
Indexed By: SCIE
Language: English
WOS Research Area: Computer Science; Engineering
WOS Subject: Computer Science, Interdisciplinary Applications; Engineering, Manufacturing
WOS ID: WOS:000851558400009
The Source to Article: PB_Publication
Scopus ID: 2-s2.0-85142339205
Document Type: Journal article
Collection: The State Key Laboratory of Internet of Things for Smart City (University of Macau); Faculty of Science and Technology
Corresponding Author: Wang, C. L.
Affiliation:
1. Nanchang Hangkong Univ, Sch Math & Informat Sci, Nanchang 330063, Jiangxi, Peoples R China
2. Dalian Univ Technol, Sch Math Sci, Dalian 116024, Peoples R China
3. Univ Macau, State Key Lab Internet Things Smart City, Dept Electromech Engn, Macau 999078, Peoples R China
4. Univ Manchester, Dept Mech Aerosp & Civil Engn, Manchester M1 3NJ, Lancs, England
Recommended Citation
GB/T 7714: Liu, B., Liu, X.P., Yang, Z. X., et al. Concise and Effective Network for 3D Human Modeling From Orthogonal Silhouettes[J]. The ASME Journal of Computing and Information Science in Engineering, 2022, 22(5): 051004.
APA: Liu, B., Liu, X.P., Yang, Z. X., & Wang, C. L. (2022). Concise and Effective Network for 3D Human Modeling From Orthogonal Silhouettes. The ASME Journal of Computing and Information Science in Engineering, 22(5), 051004.
MLA: Liu, B., et al. "Concise and Effective Network for 3D Human Modeling From Orthogonal Silhouettes." The ASME Journal of Computing and Information Science in Engineering 22.5 (2022): 051004.
Files in This Item:
There are no files associated with this item.
 

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.