Mitesh Patel

I am a Developer Relationship Manager at NVIDIA, where I work with researchers in higher education to execute their ideas using NVIDIA SDKs and platforms. Before joining NVIDIA, I was a Senior Research Scientist at FX Palo Alto Laboratory, Inc. (FXPAL). While at FXPAL, I worked extensively on developing novel systems using machine learning and deep learning in the domains of indoor localization, user behavior modeling, activity recognition, and sensor fusion, drawing on a variety of sensors such as RF sensors, RGB and RGB-D cameras, and LiDAR, as well as large-scale user data harnessed through the web. Prior to joining FXPAL, I was a Research Scientist at Yahoo! Labs, where I worked in the Ad Science team on user behavior modeling problems based on user-app interaction data logged on different Yahoo properties.

I received my Ph.D. in Robotics from the Centre for Autonomous Systems (CAS) at the University of Technology Sydney in 2014, where I focused on modeling a wide spectrum of high-level user activities (activities of daily living) using different probabilistic techniques.

Detailed CV (pdf)

Projects

Indoor Localization using Sensors on Smart Devices

In this project we developed localization technologies that leverage RF sensors such as Bluetooth Low Energy (BLE) beacons and WiFi-RTT access points. The system can provide various levels of localization resolution, such as proximity, room level, or precise coordinates (as in Google Maps). It is modular in nature and can fuse information from multiple sources, such as BLE, WiFi-RTT, floor plans, and an Inertial Measurement Unit (IMU), so it can be adapted to a variety of applications (manufacturing, museum visits, hospitals, office visits) and deployed as a smartphone application.
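The proximity level of such a system is commonly built on the received signal strength of BLE beacons. As a minimal sketch (not the project's actual pipeline), the standard log-distance path-loss model converts an RSSI reading into an approximate beacon distance; the calibrated transmit power and path-loss exponent below are hypothetical values that would be measured per deployment:

```python
import math

def rssi_to_distance(rssi_dbm, tx_power_dbm=-59.0, path_loss_exponent=2.0):
    """Estimate the distance (in meters) to a BLE beacon from an RSSI reading
    using the log-distance path-loss model.

    tx_power_dbm: calibrated RSSI measured at 1 m (hypothetical default).
    path_loss_exponent: ~2.0 in free space, higher indoors (hypothetical default).
    """
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exponent))

# A reading equal to the 1 m calibration value maps to roughly 1 m;
# weaker readings map to larger distances.
print(rssi_to_distance(-59.0))  # ≈ 1.0 m
print(rssi_to_distance(-79.0))  # ≈ 10.0 m (with exponent 2.0)
```

In practice, per-beacon distance estimates like these would be smoothed and fused with other sources (IMU, floor plan) rather than used raw, since indoor RSSI is noisy.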

Localization of Endoscope

In this project we developed an image-based endoscope localization system that combines deep learning predictions with traditional computer vision methods to estimate the pose of the endoscope. The deep learning model classifies the area/zone the endoscope is in, and this prediction is then used to guide feature matching with traditional computer vision techniques.

Activity Recognition using RF Sensors

In this project we developed an activity recognition system using RF sensors that can be mounted under desks or behind walls. The system was tested on a variety of applications, such as predicting activities at checkout counters (e.g., scanning items, bagging), predicting activities performed by office desk users (e.g., typing on a keyboard, reading at a desk), and measuring space utilization of display counters in retail stores (e.g., Apple Stores). The system is non-intrusive in nature, as it only captures movement through RF signal reflection.
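A typical front end for this kind of RF sensing slides a window over the reflected-signal magnitude stream and computes simple motion statistics per window, which a downstream classifier then maps to activities. The sketch below is an illustrative, hypothetical front end, not the system's actual feature set; the classifier itself is not shown:

```python
import statistics

def window_features(signal, window=32, step=16):
    """Compute simple per-window motion features (mean, variance) over a
    1-D stream of RF reflection magnitudes.

    window: samples per analysis window (hypothetical size).
    step:   hop between consecutive windows (hypothetical size).
    Returns a list of (mean, variance) tuples, one per window.
    """
    feats = []
    for start in range(0, len(signal) - window + 1, step):
        w = signal[start:start + window]
        feats.append((statistics.fmean(w), statistics.pvariance(w)))
    return feats

# A static scene yields near-zero variance; motion raises it,
# which is the cue an activity classifier would pick up on.
still = window_features([0.1] * 64)
```

The appeal of this representation is privacy: only coarse motion statistics leave the sensor, never images of the people being observed.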

ContextualNet: An image-based localization platform

In this work, we developed an image-based localization system that estimates the location of a robot or smart-device user from RGB images. The system is built on a CNN-LSTM deep learning framework: the CNN layers exploit spatial relationships within each image, while the LSTM layers capture temporal relationships across consecutive images. The system was tested in real time both on a robot platform and as a native smartphone application.
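The CNN-then-LSTM composition described above can be sketched as a small PyTorch module. This is a minimal illustration of the pattern, not the published ContextualNet architecture; the layer sizes, `num_locations`, and `feat_dim` are hypothetical:

```python
import torch
import torch.nn as nn

class CnnLstmLocalizer(nn.Module):
    """Illustrative CNN-LSTM localizer: a CNN encodes each frame spatially,
    an LSTM models temporal dependencies across the frame sequence, and a
    linear head predicts a location class for the sequence."""

    def __init__(self, num_locations=10, feat_dim=64):
        super().__init__()
        self.cnn = nn.Sequential(                       # per-frame spatial encoder
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                    # -> (N, 32, 1, 1)
        )
        self.proj = nn.Linear(32, feat_dim)
        self.lstm = nn.LSTM(feat_dim, feat_dim, batch_first=True)
        self.head = nn.Linear(feat_dim, num_locations)

    def forward(self, frames):
        # frames: (batch, time, 3, H, W)
        b, t = frames.shape[:2]
        f = self.cnn(frames.flatten(0, 1)).flatten(1)   # (b*t, 32)
        f = self.proj(f).view(b, t, -1)                 # (b, t, feat_dim)
        out, _ = self.lstm(f)                           # temporal modeling
        return self.head(out[:, -1])                    # logits from last step
```

Feeding a batch of short image sequences, e.g. `model(torch.randn(2, 5, 3, 64, 64))`, yields one location-logit vector per sequence; the temporal smoothing from the LSTM is what distinguishes this from classifying each frame independently.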

Publications