Mohamed Fazil
Robotics, Mechatronics, Biomechanics, Rapid Prototyping, Deep Learning
"Let us make this world a better place." Seeking a progressive organization that will give me the opportunity to capitalize on my technical skills and abilities in the fields of Robotics, Mechatronics, Deep Learning, and Assistive Technology. Also a passionate landscape and travel photographer.

My Bio
I am Mohamed Fazil, a graduate student majoring in Robotics at the University at Buffalo, New York, graduating in Spring 2021. Building unique tech applications and exploring their potential has always been my passion. I am also a passionate landscape photographer and traveler.
My experience developing application-based robots, along with long-term experience in C++, Python, and Deep Learning, enables me to take on a wide variety of tasks in many organizations.
My current robotics research involves developing a Virtual Reality simulation environment inside the MuJoCo physics engine to train and test deep learning models for robotic prosthetic arm control. My previous work in robotics was the development and prototyping of a stereo-vision-based tabletop robot with VR support. Additionally, I am familiar with Linux, ROS, C++, Python, and computer vision for robotics.
Please go through my Projects page to learn more about my experience.

My Significant Works
A showcase of my work in robotics research, design, and rapid prototyping.
Autonomous Ground Vehicle with 3D Perception for Mapping / Multi-Object Pose Tracking
Tracked Differential Drive Robot + Jetson Nano + Realsense D435i + 3-DOF Manipulator
Built a ground robot for environment exploration: a 3-DOF manipulator with an Intel RealSense D435i RGB-D camera mounted on a tracked differential-drive mobile robot, fully controlled with ROS on a Jetson Nano board. The robot uses EKF localization, fusing wheel odometry with an IMU (MPU6050) for state estimation. Real-Time Appearance-Based Mapping (RTAB-Map) builds the 3D space / 2D grid maps and localizes the robot. I also scripted a ROS multi-object tracker node that projects detected objects into 3D space and broadcasts them to the main TF tree, and organized the work into a repository of ROS packages covering the robot's configuration, control, perception, and navigation.
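The tracker's core step, back-projecting a detected pixel and its depth reading into a 3D point before broadcasting it on TF, can be sketched with the pinhole camera model. A minimal sketch; the intrinsics below are illustrative placeholders, not the D435i's calibrated values:

```python
import numpy as np

# Hypothetical intrinsics in the style of a RealSense color stream
# (fx, fy, cx, cy are illustrative, not calibrated values).
FX, FY = 615.0, 615.0
CX, CY = 320.0, 240.0

def deproject(u, v, depth_m):
    """Back-project pixel (u, v) with depth in meters to a 3D point
    in the camera frame using the pinhole model."""
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return np.array([x, y, depth_m])

# A detection at the image center, 1.5 m away, lies on the optical axis.
p = deproject(320, 240, 1.5)
print(p)  # [0.  0.  1.5]
```

In the actual node, the resulting camera-frame point would then be transformed into the map frame via TF before being broadcast.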
3-DOF Desktop Robot for Autonomous Object Tracking in 3D space
3-DOF Manipulator + Intel RealSense D435i + ROS (GitHub)
This project adds 3D-perception applications to a desktop 3-DOF robot originally designed to mimic the dexterity of the human head. It is a ROS package pairing an Intel RealSense D435i with a 3-DOF manipulator for indoor mapping and localization of objects in the world frame, with the added advantage of the robot's dexterity. The manipulator is a self-built custom robot, and the URDF including the depth sensor is provided. The package covers rosserial communication with Arduino nodes to control the robot's joint states, plus the PCL pipelines required for autonomous mapping, localization, and real-time object tracking.
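Localizing an object in the world frame from a camera on a pan-tilt head reduces to chaining rotations through the joint angles. A minimal sketch of that transform; the axis conventions and mount offset here are hypothetical, not the package's actual URDF values:

```python
import numpy as np

def rot_yaw(a):
    """Rotation about the base z-axis (pan joint)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def rot_pitch(a):
    """Rotation about the y-axis (tilt joint)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def camera_point_to_base(p_cam, yaw, pitch, cam_offset):
    """Transform a point from the camera frame to the robot base frame,
    given the pan/tilt joint angles and a fixed base-to-camera offset."""
    return rot_yaw(yaw) @ rot_pitch(pitch) @ p_cam + cam_offset

# With both joints at zero, a point 1 m ahead of the camera sits 1 m
# ahead of the base, raised by the 0.2 m (hypothetical) mount offset.
p_base = camera_point_to_base(np.array([0.0, 0.0, 1.0]), 0.0, 0.0,
                              np.array([0.0, 0.0, 0.2]))
```

In ROS this chain is what the robot_state_publisher plus tf2 compute automatically from the URDF and joint states; the sketch just makes the math explicit.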
Smart Rollator with Rocker-Bogie Suspension and IMU/Linear-Actuator Weight-Distribution Control
University Award Project, Mechanical Engineering Capstone Project (2019)
An advanced rollator (wheeled walker) that helps elderly and disabled users walk over obstacle-filled ground, go up and down hills, and climb steps. It uses a rocker-bogie suspension with linear actuators that adjust the user's weight distribution over the rollator on different terrains. I developed a unique mechanical design and a mechatronic system consisting of linear actuators, an Arduino Mega board, BLDC motors with feedback control, and IoT connectivity that elderly users can always rely on. I am now looking forward to developing it into a product.
Robotic Prosthesis Researcher - Deep Learning, Motion Capture, EMG, Virtual Reality
Researcher at Assistive Wearable Robotics Lab, UB (AWEAR)
I completed my Master's research project, "Deep Learning Approach to Robotic Prosthetic Wrist Control Using EMG Signals," under Professor Jiyeon Kang in the Assistive Wearable Robotics Lab (AWEAR). I designed experiments, collected forearm EMG signals, performed 3D reconstruction from a motion-capture system, and applied convolutional neural networks to predict angular velocities and classify wrist movements for a real-time controller. I previously worked on a Myoware-based prosthetic hook, which involved EMG signals, IMU data, 3D reconstruction and kinematics, Euler-angle calculation of rigid-body segments, developing a Virtual Reality simulation in the MuJoCo physics engine, and the Vicon motion-capture system.
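One concrete piece of such a pipeline: raw EMG is commonly segmented into overlapping windows before being fed to a CNN. A minimal sketch with assumed sampling rate and window sizes, not the project's actual parameters:

```python
import numpy as np

def segment_emg(emg, win, step):
    """Slice a multi-channel EMG recording (samples x channels) into
    overlapping windows (windows x win x channels), the usual input
    shape for a 1-D CNN classifier."""
    n = (emg.shape[0] - win) // step + 1
    return np.stack([emg[i * step : i * step + win] for i in range(n)])

# 2 s of 8-channel EMG at an assumed 1 kHz sampling rate, sliced into
# 200 ms windows with a 50 ms hop (synthetic data for illustration).
emg = np.random.randn(2000, 8)
windows = segment_emg(emg, win=200, step=50)
print(windows.shape)  # (37, 200, 8)
```

Each window would then be normalized and passed through the network, which predicts a wrist-movement class or angular velocity per window.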
ROS Autonomous SLAM with Rapidly Exploring Random Tree (RRT)
Mobile Robotics and Lidar Perception (GitHub)
Developed a ROS package for autonomous environment exploration using SLAM in a Gazebo environment. A laser range sensor builds a map of the world dynamically while a Rapidly-exploring Random Tree (RRT) algorithm drives exploration. The robot uses the ROS navigation stack, and RViz visualizes the robot's perception of the environment. See my Medium story on this project.
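The tree-growing step at the heart of RRT is simple to sketch. A bare-bones 2-D version in pure Python, with an obstacle-free world and made-up parameters (the actual package plans over the occupancy grid produced by SLAM):

```python
import math
import random

def rrt(start, goal, is_free, n_iter=2000, step=0.5, bounds=(0.0, 10.0), seed=0):
    """Bare-bones RRT: repeatedly sample a random point, extend the
    nearest tree node toward it by `step`, and stop once the tree
    reaches the goal. `is_free` is the collision check."""
    rng = random.Random(seed)
    nodes = [start]
    parent = {start: None}
    for _ in range(n_iter):
        sample = (rng.uniform(*bounds), rng.uniform(*bounds))
        near = min(nodes, key=lambda p: math.dist(p, sample))
        d = math.dist(near, sample)
        if d == 0:
            continue
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        if not is_free(new):
            continue
        nodes.append(new)
        parent[new] = near
        if math.dist(new, goal) < step:  # close enough: trace the path back
            path = [goal, new]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]
    return None

path = rrt((1.0, 1.0), (9.0, 9.0), is_free=lambda p: True)
```

The exploration variant used in the package grows the tree toward unmapped frontier regions rather than a fixed goal, but the extend-toward-sample loop is the same.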
HapTap - A Haptics-Based Monitoring Device for the Elderly
Product design and Wearable device
The HapTap is a haptics-based assistive device that combines the sensors required for health monitoring with an IMU to track the user, and also helps users communicate through haptic feedback. It is an IoT device with its own standalone server and multi-device interface, usable in any medical environment that requires users to be tracked without volunteers physically present. The haptics concept was originally developed to help people suffering from cerebral palsy. I am currently looking forward to kickstarting this product and seeking investors.
Stereo Vision-based robot for Remote Monitoring with VR (Publication)
Published in Scopus indexed IJEAT Journal
Abstract: Machine vision systems play a significant role in visual monitoring. With the help of stereo vision and machine learning, they can mimic human-like visual systems and behavior toward the environment. In this paper, we present a stereo-vision-based 3-DOF robot used to monitor places remotely through a cloud server and internet devices. The 3-DOF robot reproduces human-like head movements (yaw, pitch, roll), produces 3D stereoscopic video, and streams it in real time. The video stream reaches the user through any generic internet device with VR-box support, such as a smartphone, giving the user a first-person, real-time 3D experience while transferring the user's head motion back to the robot, also in real time. The robot can also track moving objects and faces as targets using deep neural networks, enabling it to act as a standalone monitoring robot, and the user can choose specific subjects to monitor in a space. Stereo vision lets us recover the depth of the detected objects, so objects of interest are tracked along with their distances and sent to the cloud. A full working prototype demonstrates the capabilities of a monitoring system based on stereo vision, robotics, and machine learning.
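The depth recovery described in the abstract rests on stereo triangulation; for a rectified pair it reduces to a single formula. A minimal sketch, with an illustrative focal length and baseline rather than the robot's actual calibration:

```python
def stereo_depth(disparity_px, focal_px, baseline_m):
    """Depth from a rectified stereo pair: Z = f * B / d, where f is the
    focal length in pixels, B the camera baseline in meters, and d the
    disparity in pixels between the left and right views."""
    return focal_px * baseline_m / disparity_px

# With a 700 px focal length and a 6 cm baseline, a 30 px disparity
# corresponds to a depth of 1.4 m.
z = stereo_depth(30.0, 700.0, 0.06)
print(z)  # 1.4
```

Note the inverse relationship: halving the disparity doubles the estimated depth, which is why depth precision degrades for distant objects.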
Touch-less Clock-Time System for University at Buffalo
Artificial Intelligence Institute, University at Buffalo
Designed and deployed a touchless clock-in web application that uses face recognition with deep learning. The web application was built in Angular and deployed through Google Cloud Console, with a Python Flask backend that also runs on a GCP cloud server. It uses the Dlib deep-learning library in Python and manages an unstructured cloud database through MongoDB, forming a completely touchless user/employee attendance system for the University at Buffalo. The prototype includes a Python interactive bot that accepts voice input and guides the user with voice prompts. The system stores each user's information along with a 128-dimensional face descriptor computed for their face. A back-end monitoring web interface was also created using Node-RED.
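The recognition step reduces to nearest-neighbor matching over the stored 128-dimensional descriptors. A minimal sketch with synthetic vectors standing in for real dlib embeddings; the 0.6 cutoff is dlib's commonly cited decision threshold, not necessarily the deployed value:

```python
import numpy as np

def match_face(query, enrolled, threshold=0.6):
    """Compare a 128-D face descriptor against enrolled users by
    Euclidean distance and return the best match, or None if no
    enrolled descriptor is closer than the threshold."""
    best_name, best_dist = None, float("inf")
    for name, desc in enrolled.items():
        d = np.linalg.norm(query - desc)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name if best_dist < threshold else None

# Synthetic 128-D descriptors standing in for dlib's embeddings.
rng = np.random.default_rng(0)
alice = rng.normal(size=128)
enrolled = {"alice": alice, "bob": rng.normal(size=128)}

print(match_face(alice + 0.01, enrolled))  # alice
```

In the deployed system, the query descriptor would come from dlib's face-embedding network and the enrolled descriptors from the MongoDB user records.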
Awards and Certification

Twice Best Project University Award Winner
2018-2019
Vellore Institute of Technology, May 2019
Received the "Best Project Award" in two consecutive semesters for my projects "The Smart Rollator for Elderly" and "The HapTap".

IoT Challenge Winner - Pragyan 2017
National Institute of Technology, Trichy, March 2017
My teammates and I won the national-level IoT Challenge conducted by NIT Trichy in 2017, where we developed an IoT-based protective suit for laborers that monitors their health conditions while they work in hazardous environments.

International Accessibility Summit 2017 - Delegate
Indian Institute of Technology, Madras, January 2017
I was selected to attend the International Accessibility Summit 2017, organized by the Indian Institute of Technology, Madras, which was also attended by world-renowned scientists and policymakers in the field of Assistive Technology. I presented my Modul-i product concept to the panelists.