Computer Vision Engineer • Perception • 3D Vision • Robotics • Machine Learning
I am a Computer Vision Engineer working on learning-based visual perception and 3D understanding. My work spans 3D reconstruction, pose estimation, multi-view geometry, and deep learning, including transformer-based vision architectures and vision–language models for robust visual reasoning.
I build end-to-end vision systems across sensor integration (ROS/ROS 2), dataset generation, calibration, model training and inference, and geometric optimization, with a focus on scalability, generalization, and real-world performance.
Feel free to reach out:
Core tools for computer vision, perception research, and robotics sensing:
Selected work in computer vision and robotics perception.
Developed an event-based perception pipeline for satellite pose estimation and 3D reconstruction, designed to operate under extreme space-imaging conditions such as high relative motion and rapid illumination changes.
Leveraged neuromorphic (event) cameras with SfM-style geometric optimization to recover accurate camera pose and 3D structure from asynchronous event data.
Focus: Event-based vision, satellite pose estimation, 3D geometry, Structure-from-Motion, perception pipelines
Adapted a state-of-the-art transformer-based object detector for deployment in a real-world campus environment, addressing domain shift beyond standard benchmark datasets.
Designed and implemented the full detection pipeline, including dataset curation, data preprocessing, model fine-tuning, evaluation, and benchmarking.
Outcome: Achieved top detection and classification accuracy among all project teams and demonstrated strong generalization.
Focus: Transformer models, object detection, PyTorch, dataset curation, model evaluation
Built a ROS-based serial interface to synchronize an Arduino-controlled rotating polarizer with a Prophesee event camera for condition-driven data capture.
The node parses real-time serial data, publishes motor-shaft RPM, and automatically triggers rosbag recording when the RPM crosses configured thresholds.
Outcome: Reliable, repeatable event-camera recording for experiments.
Focus: ROS, Python, serial communication, synchronization, real-time systems, event-based vision
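The core of that node is simple: parse each serial line into an RPM value and flip recording on or off around a threshold. A minimal sketch of just that logic is below; the names (`parse_rpm`, `RecordingTrigger`) and the serial line format (`RPM:<value>`) are illustrative assumptions, and the actual node wraps this in a rospy publisher with pyserial input and `rosbag record` subprocess calls.

```python
from typing import Optional


def parse_rpm(line: str) -> Optional[float]:
    """Parse a serial line like 'RPM:450.0'; return None if malformed."""
    if not line.startswith("RPM:"):
        return None
    try:
        return float(line.split(":", 1)[1])
    except ValueError:
        return None


class RecordingTrigger:
    """Start recording when RPM rises to the threshold; stop when it falls below."""

    def __init__(self, threshold: float):
        self.threshold = threshold
        self.recording = False

    def update(self, rpm: float) -> Optional[str]:
        if rpm >= self.threshold and not self.recording:
            self.recording = True
            return "start"  # e.g. launch `rosbag record`
        if rpm < self.threshold and self.recording:
            self.recording = False
            return "stop"   # e.g. terminate the rosbag process
        return None         # no state change
```

Keeping the trigger as a small state machine makes the start/stop behavior easy to unit-test without hardware in the loop.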
Built an interactive 3D digital twin from UAV imagery using SfM/SLAM pipelines, producing dense point clouds and photorealistic scene representations.
Used Gaussian Splatting for efficient rendering and smooth viewpoint navigation.
Focus: Point clouds, Gaussian Splatting, 3D reconstruction, multi-view geometry, UAV perception
Built a synthetic dataset by simulating a UAV with multiple sensors in CARLA for 3D reconstruction and pose estimation.
Designed synchronized RGB + event sensor pipelines, trajectories, and ground-truth pose.
Dataset: 15 GB simulated multi-sensor dataset
Focus: CARLA, Unreal Engine, multi-sensor simulation, UAV perception, dataset engineering
Built a regression pipeline to predict flower petal counts from RGB images using pretrained CNNs.
Evaluated ResNet50 and VGGNet backbones with custom regression heads; best performance with ResNet50.
Focus: CNN regression, transfer learning, image counting, PyTorch
Developed a novel hand–eye calibration pipeline for event cameras using a rotating polarized lens mounted in front of an event sensor on a robotic arm.
The system enables robust calibration and event generation from challenging surfaces by exploiting polarization cues, while synchronizing robot motion and event streams through ROS-based control and data acquisition.
Focus: Event-based vision, hand–eye calibration, polarization, robotic perception, ROS-based sensor integration
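Hand–eye calibration of this kind solves for the fixed camera-to-gripper transform X from pairs of robot motions A and camera motions B satisfying the classic constraint A X = X B. A small NumPy sketch of that constraint on synthetic data (illustrative only; the real pipeline estimates X from event-camera observations rather than assuming it):

```python
import numpy as np


def rot_z(theta: float) -> np.ndarray:
    """Rotation matrix about the z-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])


def se3(R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Assemble a 4x4 rigid transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T


# Ground-truth camera-to-gripper transform X (the unknown in calibration).
X = se3(rot_z(0.3), np.array([0.05, -0.02, 0.10]))

# A robot gripper motion A induces the camera motion B = X^-1 A X.
A = se3(rot_z(0.7), np.array([0.1, 0.0, 0.2]))
B = np.linalg.inv(X) @ A @ X

# The hand-eye constraint A X = X B holds for every motion pair.
assert np.allclose(A @ X, X @ B)
```

In practice a solver (e.g. OpenCV's `cv2.calibrateHandEye`) recovers X from many such (A, B) pairs; the polarizer-driven event generation supplies the camera-side observations on surfaces where a plain event camera would produce too few events.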
Visual snapshots from perception experiments, reconstruction pipelines, calibration setups, and vision results.
Selected roles focused on perception, 3D vision, and simulation-driven validation.
Artificial Intelligence and Robotics Lab, Saint Louis University
Artificial Intelligence and Robotics Lab, Saint Louis University
Saint Louis University
Department of Computer Science, Saint Louis University
Academic background in computer science and engineering.
Saint Louis University, USA
Jamia Hamdard University, India
Status: Published in IEEE Access
Authors: S.S. Malik, A. Khan, S. Jain
Published in Springer Nature (ICT Systems and Sustainability).
Authors: Shahid Shabeer Malik, Aneeque Khan
Published on IEEE Xplore.
Starting Fall 2025
Tech Lead for open-source software projects at SLU, mentoring undergraduates on real-world engineering delivery.
September 16–17, 2024
Presented a poster on “3D Satellite Pose Estimation and Reconstruction with Neuromorphic Cameras” at the Midwest Computer Vision Workshop.
April 20, 2025
Received an internship offer to work on autonomous navigation experiments (reinforcement learning + perception pipelines).
Recognition for academic excellence, research contributions, and creative work.
Aug 2023 – May 2024 · Saint Louis University
Awarded full tuition and funding by the Department of Computer Science for academic excellence.
Jun 2024 – May 2025 · SLUAIR Lab, Saint Louis University
Received full funding through a Graduate Research Assistantship to conduct research in AI, computer vision, and robotics.
Jun 2025 – Dec 2025 · Saint Louis University
Re-awarded full tuition support by the Department of Computer Science for continued academic and research achievement.
Saint Louis University · Principles of Software Development
Recognized for the best creative presentation for clarity, innovation, and technical execution.