LI Sihan, LUO Kai, JIN Xiaofeng*. Research on mobile platform target localization algorithm based on Kinect information fusion[J]. Journal of Yanbian University, 2018, 44(01): 69-73.
Research on mobile platform target localization algorithm based on Kinect information fusion
- Title:
- Research on mobile platform target localization algorithm based on Kinect information fusion
- Keywords:
- Kinect; information fusion; depth information; target localization; sound source localization; particle filter
- CLC number:
- TP391.41
- Document code:
- A
- Abstract:
- To address the effects of illumination variation, occlusion, and reverberation on the accuracy and robustness of target localization, a target localization method based on Kinect audio-video fusion is proposed. After the color, depth, and sound source localization information of the scene is acquired, the depth information is first used to remove background information; likelihood functions are then computed for the color, depth, and sound source localization models; finally, the three likelihood functions are fused and target localization is carried out within a particle filter framework. Experimental results show that the average accuracy of the audio-video fusion method reaches 90.7%, an improvement of 9.1% and 16.9% over using video-only and audio-only localization, respectively, in the same scene.
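For illustration only, the following is a minimal Python sketch of the fusion step the abstract describes: one particle-filter cycle in which the color, depth, and sound-source likelihoods are multiplied to weight the particles. The three likelihood functions, the random-walk motion model, and all parameter values here are hypothetical placeholders standing in for the models defined in the paper, not the authors' implementation.

```python
# Minimal particle-filter sketch of audio-video likelihood fusion.
# The three likelihood functions below are hypothetical placeholders;
# the actual color, depth, and sound-source models are defined in the paper.
import numpy as np


def color_likelihood(particles, frame):       # placeholder, e.g. color-histogram similarity
    return np.ones(len(particles))


def depth_likelihood(particles, depth_map):   # placeholder, e.g. depth-consistency score
    return np.ones(len(particles))


def audio_likelihood(particles, doa):         # placeholder, e.g. Gaussian around the DOA estimate
    return np.ones(len(particles))


def particle_filter_step(particles, weights, frame, depth_map, doa,
                         motion_std=5.0, rng=np.random.default_rng()):
    """One predict-update-resample cycle with fused audio-visual likelihoods."""
    # Predict: random-walk motion model on (x, y) image coordinates.
    particles = particles + rng.normal(0.0, motion_std, particles.shape)

    # Update: fuse the three cues by multiplying their likelihoods
    # (i.e. assuming the cues are conditionally independent given the state).
    lik = (color_likelihood(particles, frame)
           * depth_likelihood(particles, depth_map)
           * audio_likelihood(particles, doa))
    weights = weights * lik
    weights = weights / (weights.sum() + 1e-12)

    # Resample when the effective sample size degenerates.
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(particles):
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))

    # Target estimate: weighted mean of the particle positions.
    estimate = np.average(particles, axis=0, weights=weights)
    return particles, weights, estimate
```

A caller would initialize `particles` uniformly over the image plane, set `weights` to 1/N, and invoke `particle_filter_step` once per Kinect frame with the current color image, depth map, and sound-source direction estimate.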
Memo
Received: 2017-04-21
Funding: Natural Science Foundation project of the Science and Technology Department of Jilin Province (20140101225JC)
*Corresponding author: JIN Xiaofeng (b. 1970), male, professor; research interest: intelligent information processing.