About Me
I am a robotics and technology educator with a strong passion for embedded systems, intelligent robots, and applied artificial intelligence. I currently teach and mentor university students in robotics, IoT, computer networks, and AI-driven perception systems, with an emphasis on hands-on learning and real-world implementation.
My technical background spans ESP32, STM32, Raspberry Pi, and Jetson platforms, where I integrate sensors, motors, communication protocols (I2C, UART, CAN, MQTT), and cloud services into complete robotic and IoT systems. I am particularly interested in ROS 2–based robotics, mobile robot navigation, sensor fusion, and vision-based control.
In recent years, my work has increasingly focused on computer vision and AI for robotics, including object detection with YOLO, hand and gesture recognition with MediaPipe, lightweight neural networks for edge devices, and real-time deployment on embedded hardware. I enjoy guiding students through the full pipeline from dataset collection and model training to deployment and robot integration.
Beyond teaching, I actively support student teams in robotics competitions and applied research projects, encouraging innovation, teamwork, and critical thinking. My long-term goal is to build and lead a cutting-edge robotics and AI laboratory that bridges academic learning with industry-ready skills, empowering students to create impactful, intelligent systems.
I believe that technology education should be practical, experimental, and inspiring, and I continuously explore new tools and methodologies to make learning engaging and relevant in a rapidly evolving technological landscape.
Research Interests
My research interests lie at the intersection of robotics, embedded systems, and artificial intelligence, with a strong emphasis on real-time deployment, perception, and control on resource-constrained platforms.
🤖 Robotics & Autonomous Systems
- Mobile robot navigation and motion control
- Differential-drive, omni-wheel, and service robots
- ROS 2–based robot software architectures
- Odometry, localization, and real-time robot control
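As a small illustration of the odometry work listed above, here is a minimal dead-reckoning pose update for a differential-drive robot. This is a sketch, not a production implementation; the wheel base and wheel travel values are illustrative assumptions:

```python
import math

def odometry_step(x, y, theta, d_left, d_right, wheel_base):
    """Dead-reckoning pose update from left/right wheel travel (meters).

    Uses the midpoint heading for the translation, a common approximation
    for small per-step wheel displacements.
    """
    d_center = (d_left + d_right) / 2.0            # distance of robot center
    d_theta = (d_right - d_left) / wheel_base      # change in heading (rad)
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    return x, y, (theta + d_theta) % (2 * math.pi)

# Example: robot drives straight, both wheels advance 0.1 m
x, y, th = odometry_step(0.0, 0.0, 0.0, 0.1, 0.1, wheel_base=0.3)
```

In a real robot the wheel travel would come from encoder ticks converted via the wheel radius, and the pose would feed a localization stack (e.g., as a ROS 2 odometry message).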
🧠 Embedded Systems & Edge AI
- Embedded AI on ESP32, STM32, Raspberry Pi, and Jetson platforms
- Hardware–software co-design for real-time embedded systems
- Communication protocols: I2C, UART, CAN bus, SPI
- Power-efficient and reliable embedded system design
👁️ Computer Vision for Robotics
- Vision-based perception for autonomous robots
- Object detection and tracking using YOLO and lightweight CNNs
- Hand and gesture recognition using MediaPipe and deep learning
- Camera-based navigation and human–robot interaction
🔗 Sensor Fusion & Intelligent Control
- Fusion of IMU, encoder, LiDAR, ultrasonic, and vision data
- Kalman filtering and state estimation
- PID and model-based control strategies
- Vision-assisted decision making and navigation
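Of the estimation techniques above, the scalar Kalman filter is the easiest to sketch. The filter below smooths a noisy sensor reading under a random-walk process model; the noise parameters `q` and `r` are illustrative assumptions, not tuned values:

```python
def kalman_1d(measurements, q=1e-3, r=0.1, x0=0.0, p0=1.0):
    """Scalar Kalman filter: estimate a slowly varying value from noisy samples.

    q  - process noise variance (how fast the true value may drift)
    r  - measurement noise variance (how noisy the sensor is)
    """
    x, p = x0, p0
    for z in measurements:
        p += q                 # predict: uncertainty grows with the process noise
        k = p / (p + r)        # Kalman gain: trust in the new measurement
        x += k * (z - x)       # update the estimate toward the measurement
        p *= (1.0 - k)         # update (shrink) the uncertainty
    return x

# Example: noisy samples scattered around a true value of 1.0
estimate = kalman_1d([1.02, 0.98, 1.01, 0.99, 1.0] * 20)
```

The same predict/update structure generalizes to the multivariate case used for fusing IMU, encoder, and LiDAR data, with matrices in place of the scalars.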
🌐 IoT & Robotics-Connected Systems
- Integration of robotics with IoT platforms and cloud services
- Real-time data logging and visualization
- MQTT, Node-RED, InfluxDB, Grafana, and dashboard systems
- Smart sensing and monitoring applications
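A recurring pattern in these connected systems is a node that serializes sensor readings to JSON and publishes them over MQTT for logging and dashboards. This sketch shows the payload side; the robot ID, field names, and topic are illustrative, and the commented publish call assumes the paho-mqtt client API:

```python
import json
import time

def make_telemetry(robot_id, readings):
    """Package sensor readings into a timestamped JSON payload."""
    return json.dumps({"robot": robot_id, "ts": time.time(), **readings})

payload = make_telemetry("amr-01", {"battery_v": 12.4, "speed_mps": 0.35})

# With a connected paho-mqtt client, the payload would be published like:
#   client.publish("lab/robots/amr-01/telemetry", payload, qos=1)
```

On the receiving side, a Node-RED flow or Telegraf agent typically subscribes to the topic and writes the fields into InfluxDB for Grafana visualization.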
🎓 Robotics Education & Applied Research
- Project-based robotics and AI education
- Low-cost, scalable robotics laboratory platforms
- Mentoring student research and robotics competition teams
- Translating research concepts into hands-on learning experiences
🛠 Technical Skills
- Programming: Python, C/C++, MicroPython
- Platforms: ESP32, STM32, Raspberry Pi, Jetson
- Robotics: ROS 2, motor control, encoders, IMU, LiDAR
- AI & Vision: YOLO, MediaPipe, CNNs, Edge AI
- IoT & Data: MQTT, Node-RED, InfluxDB, Grafana
- Tools: Git, GitHub, VS Code, Linux
