Design and Implementation of Interactive Augmented Trial Room
International Journal of Computer Science and Engineering
© 2015 by SSRG - IJCSE Journal
Volume 2 Issue 3
Year of Publication : 2015
Authors : Dr. N.Pughazendi, G.Madankumar, R.Rajkumar, R.Ramsuraj
How to Cite?
Dr. N.Pughazendi, G.Madankumar, R.Rajkumar, R.Ramsuraj, "Design and Implementation of Interactive Augmented Trial Room," SSRG International Journal of Computer Science and Engineering, vol. 2, no. 3, pp. 35-39, 2015. Crossref, https://doi.org/10.14445/23488387/IJCSE-V2I3P122
Abstract:
This paper presents a user-friendly visual interface that automatically detects the human face and overlays the chosen accessories (jewelry or eyeglasses) on it, using a webcam as the input device and displaying the result on screen based on Augmented Reality (AR). With this, considerable time is saved when choosing accessories through a virtual display. To achieve this, we use the Haar cascade algorithm, which detects the face so that the accessory can be merged onto it. The accessories are merged using the joints and the positions of the coordinates, so they are automatically aligned to the detected human face using an affine transformation. In addition, our proposed system detects red pixels on the user's fingertip to change the accessories automatically based on the gesture. The result is a user-friendly virtual trial room application that can take the place of a physical trial room.
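The detection-and-overlay pipeline described above can be illustrated with a short sketch. The following is a minimal example assuming OpenCV's Python bindings; the accessory file name (glasses.png), the bundled frontal-face cascade, and the placement offsets are illustrative assumptions rather than details taken from the paper.

```python
import cv2
import numpy as np

# Minimal sketch: detect a face with a Haar cascade and overlay an
# accessory image (e.g. eyeglasses) scaled and positioned with an
# affine transformation. Asset names and offsets are assumptions.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
accessory = cv2.imread("glasses.png", cv2.IMREAD_UNCHANGED)  # RGBA overlay

cap = cv2.VideoCapture(0)  # webcam input
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        # Affine transform: scale the accessory to the face width and
        # translate it toward the eye region (roughly the upper third).
        ah, aw = accessory.shape[:2]
        scale = w / aw
        M = np.float32([[scale, 0, x],
                        [0, scale, y + h // 4]])
        warped = cv2.warpAffine(accessory, M,
                                (frame.shape[1], frame.shape[0]))
        alpha = warped[:, :, 3:] / 255.0  # blend via the alpha channel
        frame = (frame * (1 - alpha)
                 + warped[:, :, :3] * alpha).astype(np.uint8)
    cv2.imshow("Augmented Trial Room", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```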
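For the gesture control, one plausible reading of the red-pixel fingertip detection is a simple HSV colour threshold; the threshold values, minimum-area cut-off, and debounce logic below are assumptions chosen for illustration, not values specified in the paper.

```python
import cv2
import numpy as np

def red_fingertip_present(frame_bgr, min_area=500):
    """Return True if a red marker (e.g. on the user's fingertip) is
    visible. HSV thresholds and the area cut-off are illustrative
    assumptions; the paper does not give exact values."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Red wraps around the hue axis, so combine two hue ranges.
    mask = (cv2.inRange(hsv, (0, 120, 70), (10, 255, 255))
            | cv2.inRange(hsv, (170, 120, 70), (180, 255, 255)))
    return cv2.countNonZero(mask) > min_area

# Usage inside the capture loop: advance to the next accessory when
# the red fingertip gesture first appears (simple debounce flag).
#
#     present = red_fingertip_present(frame)
#     if present and not was_present:
#         current = (current + 1) % len(accessories)
#     was_present = present
```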
Keywords:
Augmented reality, merging accessories, fitting system, face pose and scale.