
Master Lecture Series: High Dynamic Range Video: recent research activities and perspectives
By: Center for Faculty Development | Source: Party Committee Teacher Work Department, Human Resources Department (Center for Faculty Development) | Date: 2016-11-03

  The Center for Faculty Development of the Human Resources Department runs the "Master Lecture Series," regularly inviting renowned experts and scholars from home and abroad, as well as national-level distinguished teachers, to give keynote talks. The series aims to foster academic exchange among faculty and students, support the growth of young faculty, and broaden students' horizons.

  The 46th session of the "Master Lecture Series" features Professor Frederic Dufaux, IEEE Fellow and Research Director at the French National Centre for Scientific Research (CNRS). All faculty and students are welcome. Details are as follows:

  Topic: Local features for RGBD image matching under viewpoint changes

  Speaker: Prof. Frederic Dufaux, French National Centre for Scientific Research (CNRS)

  Time: 14:30, Tuesday, November 8, 2016

  Venue: Room B106, Pinxue Building, Qingshuihe Campus

  Host: Prof. Ce Zhu, School of Electronic Engineering

  Organizer: Center for Faculty Development, Human Resources Department

  Co-organizers: School of Electronic Engineering; Center for Robotics

  Abstract:

  In the last five to ten years, 3D acquisition has emerged in many practical areas thanks to new technologies that enable the massive generation of texture-plus-depth (RGBD) visual content, including infrared sensors (Microsoft Kinect, Asus Xtion, Intel RealSense, Google Tango) and 3D laser scanners (LIDARs). The increasing availability of this enriched visual modality, which combines photometric and geometric information about the observed scene, opens up new horizons for classic problems in vision, robotics, and multimedia. This talk addresses the task of establishing local visual correspondences in images, a basic task on which numerous higher-level problems rely. Local correspondences are commonly found through local visual features. While such features have been exhaustively studied for traditional images, little work has been done so far for RGBD content.
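  (As a concrete aside, not part of the talk: what makes RGBD content "photometric plus geometric" is that, given camera intrinsics, every depth pixel back-projects to a 3D point via the pinhole camera model. A minimal sketch, using illustrative Kinect-like intrinsic values rather than any calibration from the talk:)

```python
import numpy as np

def backproject(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth z into a 3D camera-space point.

    Standard pinhole model: x = (u - cx) * z / fx, y = (v - cy) * z / fy.
    (fx, fy) are focal lengths in pixels; (cx, cy) is the principal point.
    """
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

# Illustrative Kinect-like intrinsics (hypothetical values, for demonstration only).
p = backproject(u=400, v=300, depth=2.0, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
```

  (Applying this to every pixel of a depth map yields the point cloud on which depth-aware local features can reason about out-of-plane geometry.)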
  The talk presents several new approaches to keypoint detection and descriptor extraction that preserve the conventional degree of keypoint covariance and descriptor invariance to in-plane visual deformations, while aiming at improved stability under out-of-plane (3D) transformations compared with existing texture-only and texture-plus-depth local features. To assess the performance of the proposed approaches, a classic feature repeatability and discriminability evaluation procedure is revisited, taking the extended modality of the input into account. In addition, experiments with application-level scenarios are conducted on RGBD datasets acquired with Kinect sensors. The results show the advantages of the newly proposed RGBD local features in terms of stability under viewpoint changes.
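  (For readers unfamiliar with the pipeline the talk builds on: once descriptors are extracted, correspondences are typically established by nearest-neighbour matching with Lowe's ratio test, which rejects ambiguous matches. A minimal NumPy sketch of this classic step, with toy descriptors; this is background, not the talk's proposed method:)

```python
import numpy as np

def match_ratio_test(desc_a, desc_b, ratio=0.75):
    """Match descriptors from image A to image B using Lowe's ratio test.

    desc_a: (N, D) array; desc_b: (M, D) array, M >= 2.
    Returns (i, j) pairs where descriptor i of A matches descriptor j of B
    and the nearest neighbour is clearly closer than the second nearest.
    """
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)  # distance to every B descriptor
        j, k = np.argsort(dists)[:2]                # nearest and second-nearest indices
        if dists[j] < ratio * dists[k]:             # reject ambiguous matches
            matches.append((i, int(j)))
    return matches

# Toy example: two near-duplicate descriptors in B, plus one distractor.
a = np.array([[1.0, 0.0], [0.0, 1.0]])
b = np.array([[0.0, 0.95], [1.0, 0.05], [5.0, 5.0]])
print(match_ratio_test(a, b))  # → [(0, 1), (1, 0)]
```

  (Evaluations of feature repeatability and discriminability, such as the one revisited in the talk, measure how reliably this matching step survives viewpoint changes.)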

  About the speaker:


  Dr. Frederic Dufaux is a CNRS Research Director at Telecom ParisTech. He is also Editor-in-Chief of Signal Processing: Image Communication.

  Frédéric received his M.Sc. in physics and his Ph.D. in electrical engineering from EPFL in 1990 and 1994, respectively. He has over 20 years of research experience, having previously held positions at EPFL, Emitall Surveillance, Genimedia, Compaq, Digital Equipment, and MIT. He has been involved in the standardization of digital video and imaging technologies, participating in both the MPEG and JPEG committees, and is the recipient of two ISO awards for his contributions.

  Frederic is a Fellow of IEEE. He was Vice General Chair of ICIP 2014. He is an elected member of the IEEE Image, Video, and Multidimensional Signal Processing (IVMSP) and Multimedia Signal Processing (MMSP) Technical Committees. He is the Chair of the EURASIP Special Area Team on Visual Information Processing.

  His research interests include image and video coding, distributed video coding, 3D video, high dynamic range imaging, visual quality assessment, video surveillance, privacy protection, image and video analysis, multimedia content search and retrieval, and video transmission over wireless networks. He is the author or co-author of three books, more than 120 research publications, and 17 patents issued or pending.


                   Center for Faculty Development, Human Resources Department

                     November 2, 2016


Editor: Li Siyang / Reviewer: Lin Kun / Publisher: Lin Kun

"