•   When: Friday, April 28, 2017 from 11:00 AM to 12:00 PM
  •   Speaker: Dr. Vishy Swaminathan
  •   Location: Nguyen Engineering Room 4201

 Abstract

This talk will introduce some of the ongoing research in the Big Data Experience Lab at Adobe Research, with special emphasis on improving video experiences with machine learning. We will see how to improve video using not only recent advances in video and streaming technologies but also data-driven insights into video consumption. We will briefly look at leveraging the newer version of the HTTP protocol (HTTP/2) to solve current latency and efficiency issues for rich video experiences, including 360-degree virtual reality video consumption. We use machine learning techniques to pre-emptively push (with HTTP/2) the parts of the 360-degree video that are likely to be in the user's field of view at a higher quality than the other parts, improving overall performance. Then, we will see how to incorporate users' context along with consumption data to substantially improve personalized video recommendation algorithms. Recommendations, even when personalized, can be frustrating when the wrong videos are recommended at the wrong time and place, e.g., when hour-long shows are recommended for a 10-minute train ride or children's shows are recommended at 9 pm on a Friday. The context information available in collected video session analytics can be used to make recommendations that are not only personalized but also relevant to the user's current context. We use a class of techniques called Factorization Machines, which strive to find the latent features of the context, the user, and the video by factoring the interactions among them. The algorithm incorporates the user's session context, such as the device, location, and time of day, as well as the available video metadata and user information, to provide context-aware recommendations for each user session. We will conclude by outlining potential opportunities for further research.
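
To make the field-of-view-driven push concrete: the abstract does not spell out the exact scheme, but a minimal sketch of the idea might look like the following, assuming an equirectangular 360-degree video split into a fixed grid of tiles and a predicted view center produced by the machine learning model. The tile grid, field-of-view angle, and quality labels below are illustrative assumptions, not details from the talk.

# Illustrative sketch only: pick which tiles of a 360-degree video segment
# to push at higher quality via HTTP/2, given a predicted viewing direction.
TILE_COLS, TILE_ROWS = 8, 4     # assumed tiling of the equirectangular frame
FOV_DEGREES = 100               # assumed field-of-view width

def tiles_to_push(predicted_yaw_deg, predicted_pitch_deg):
    """Return (tile_index, quality) pairs: 'high' quality for tiles inside
    the predicted field of view, 'low' quality elsewhere."""
    plan = []
    for row in range(TILE_ROWS):
        for col in range(TILE_COLS):
            # Center of this tile in degrees (yaw in [-180, 180], pitch in [-90, 90]).
            tile_yaw = (col + 0.5) * 360.0 / TILE_COLS - 180.0
            tile_pitch = 90.0 - (row + 0.5) * 180.0 / TILE_ROWS
            # Angular distance from the predicted view center (yaw wraps around).
            dyaw = min(abs(tile_yaw - predicted_yaw_deg),
                       360.0 - abs(tile_yaw - predicted_yaw_deg))
            dpitch = abs(tile_pitch - predicted_pitch_deg)
            in_fov = dyaw <= FOV_DEGREES / 2 and dpitch <= FOV_DEGREES / 2
            plan.append((row * TILE_COLS + col, "high" if in_fov else "low"))
    return plan

A server could then issue HTTP/2 PUSH_PROMISE frames for the "high" tiles of the next segment ahead of the client's request, while the remaining tiles are served at a lower bitrate.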
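
For the recommendation part, a standard second-order Factorization Machine scores a feature vector built by concatenating one-hot encodings of the user, the video, and the session context (device, location, time of day). The sketch below shows only the generic FM prediction step in NumPy; the variable names and encoding choices are illustrative assumptions, not Adobe's implementation.

import numpy as np

def fm_predict(x, w0, w, V):
    """Second-order Factorization Machine score.

    x  : feature vector (one-hot user, video, and context fields)
    w0 : global bias
    w  : per-feature linear weights, shape (n,)
    V  : latent factor vectors, one row per feature, shape (n, k)
    """
    linear = w0 + w @ x
    # Pairwise interactions sum_{i<j} <v_i, v_j> x_i x_j computed in O(n*k)
    # via 0.5 * sum_f [ (sum_i v_{i,f} x_i)^2 - sum_i v_{i,f}^2 x_i^2 ].
    xv = V.T @ x                     # shape (k,)
    xv_sq = (V.T ** 2) @ (x ** 2)    # shape (k,)
    pairwise = 0.5 * np.sum(xv ** 2 - xv_sq)
    return linear + pairwise

Candidate videos for a session would then be ranked by this score, so the same user can receive different recommendations on a phone during a commute than on a living-room TV on a Friday evening.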

Speaker Bio

Vishy (Viswanathan) Swaminathan is a Principal Scientist at Adobe Research working on next-generation video technologies. His areas of research include video streaming and analytics, recommendations, processing, coding, and digital rights management. His research work has substantially influenced various technologies in Adobe's video delivery, recommendations, and DRM products. Some of his recent work includes the guts of Adobe's video recommendations, cloud DVR compression, and HTTP Dynamic Streaming, which won the 'Best Streaming Innovation of 2011' Streaming Media Readers' Choice Award. Prior to joining Adobe, Vishy was a senior researcher at Sun Microsystems Laboratories. Vishy has contributed to multiple standards and specifications and received three certificates of appreciation from ISO for his contributions to MPEG standards. He was the lead editor of the MPEG-4 Systems Standard and currently edits the MPEG DASH Server Push Standard. Previously, he chaired multiple organizations, including the Technical Committee of the Internet Streaming Media Alliance, JSR 158 on the Java Stream Assembly API, and the MPEG-J ad hoc groups. Vishy received his MS and Ph.D. in electrical engineering from Utah State University. He received his B.E. degree from the College of Engineering, Guindy, Anna University, Chennai. Vishy has authored several papers, articles, RFCs, and book chapters, and has over 30 issued patents. He is on the program committee for a number of IEEE and ACM conferences, is an area chair for ACM Multimedia 2017, and most recently was the program chair for an internal Tech Summit at Adobe attended by 2,800 of its brightest technical minds.

 
