
“From Circuits to Surgery: My Mission to Revolutionize Cancer Care with Robotics”
By Vishwas Gowda — Mechatronics Engineer & Medical Robotics Researcher
Standing in an operating theatre—surrounded by beeping monitors, focused surgeons, and the quiet intensity of saving a life—I realized something simple and profound: engineering can be medicine.
I’m a mechatronics engineer turned medical-robotics researcher, currently developing a fluorescence-guided robotic system for cancer residual detection at the Mandya Institute of Medical Sciences (MIMS) in India. This project is more than a technical build—it’s a mission to reduce uncertainty, re-operations, and stress for patients and families in resource-limited settings.
The problem: days of uncertainty after cancer surgery
In many tumour resections, surgeons excise the tumour with a margin and then send tissue for histopathology. Results can take 5–10 days. If residual malignant cells are found, the patient returns for re-excision.
This lag creates mental, physical, and financial strain and raises the risk of disease progression. It also adds to the burden on surgical teams and health systems (re-surgery, chemotherapy, radiotherapy). In some neurosurgical workflows, intraoperative fluorescence agents are injected; while powerful, they can have post-operative side effects and aren’t universally available.
What if residual cancer could be flagged in the OT, within seconds—without waiting days for a lab report?
The solution: a robot that sees what surgeons can’t
That question led me to design a robotic residual-detection system that scans the resection bed, detects fluorescence, and alerts the OT team in real time. The robot doesn’t replace the surgeon—it becomes a second set of eyes, extending human perception when it matters most.
Core imaging & sensing stack
- Fluorescence detection camera (FLIR/Basler with emission filters): captures subtle post-resection fluorescence.
- Edmund Optics 12 mm C-Mount lens for the fluorescence channel.
- Excitation LED (Thorlabs M850L3): drives the appropriate excitation for the fluorophore.
- Optical emission filter: passes the emitted band to the camera, rejecting excitation light.
- Raspberry Pi HQ camera (6 mm lens) for contextual visual imaging.
- ToF & ultrasonic sensors: measure wound depth and local tissue profile.
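To make the detection step concrete, here is a minimal sketch of how a bright-region check on an emission-filtered frame might look with OpenCV. The intensity threshold, minimum area, and snapshot file name are placeholders, and the vendor camera SDK call is replaced by reading a saved frame from disk.

```python
# Minimal sketch: flag bright regions in an emission-filtered fluorescence frame.
# Assumes a single-channel 8-bit frame already grabbed from the filtered camera
# (the vendor SDK capture call is replaced here by cv2.imread on a saved snapshot).
import cv2
import numpy as np

def find_fluorescent_regions(frame_gray: np.ndarray,
                             intensity_threshold: int = 180,
                             min_area_px: int = 50):
    """Return bounding boxes of regions brighter than the threshold."""
    # Smooth sensor noise before thresholding.
    blurred = cv2.GaussianBlur(frame_gray, (5, 5), 0)
    _, mask = cv2.threshold(blurred, intensity_threshold, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours
             if cv2.contourArea(c) >= min_area_px]
    return boxes, mask

if __name__ == "__main__":
    frame = cv2.imread("fluorescence_snapshot.png", cv2.IMREAD_GRAYSCALE)  # placeholder path
    if frame is None:
        raise SystemExit("snapshot not found")
    boxes, _ = find_fluorescent_regions(frame)
    print(f"{len(boxes)} candidate fluorescent region(s) found")
```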
Perception & control
- AI-powered segmentation highlights suspect areas.
- 6–7 DOF robotic arm executes a calibrated linear scan over the resection field.
- Real-time feedback loop raises alerts on potential danger zones.
- Joystick/game controller (e.g., Logitech F310) enables surgeon-friendly manual control and image capture.
Outcome goal: real-time, in-situ detection to inform immediate surgical decisions and reduce repeat surgeries.
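As an illustration of the calibrated linear scan, the sketch below generates a serpentine set of waypoints over the resection field. The field dimensions, step spacing, and standoff height are assumptions for illustration; in the real system these poses would be handed to the arm's motion driver.

```python
# Illustrative sketch: generate waypoints for a serpentine (linear) scan over the
# resection field. Field size, step spacing, and standoff height are placeholders.
from dataclasses import dataclass
from typing import List

@dataclass
class ScanPose:
    x_mm: float
    y_mm: float
    z_mm: float  # standoff above the tissue surface

def raster_scan(width_mm: float, depth_mm: float,
                step_mm: float = 5.0, standoff_mm: float = 80.0) -> List[ScanPose]:
    """Serpentine raster over a width x depth field at a fixed standoff."""
    poses: List[ScanPose] = []
    y, row = 0.0, 0
    while y <= depth_mm:
        xs = [i * step_mm for i in range(int(width_mm // step_mm) + 1)]
        if row % 2:          # reverse every other row to avoid long retraces
            xs = xs[::-1]
        poses.extend(ScanPose(x, y, standoff_mm) for x in xs)
        y += step_mm
        row += 1
    return poses

if __name__ == "__main__":
    waypoints = raster_scan(width_mm=60, depth_mm=40)
    print(f"{len(waypoints)} scan poses generated")
```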
Building it from scratch: hardware, software, and choices that matter
Every component was chosen to balance cost, precision, maintainability, and availability in Indian public hospitals.
Compute & OS
- NVIDIA Jetson Xavier (primary compute)
- Ubuntu + ROS 2 Humble (motion/control, camera trigger, messaging)
- colcon (workspace builds)
Imaging & streaming
- OpenCV for real-time processing
- Janus WebRTC / MJPEG for low-latency, in-OT streaming and remote view
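Janus WebRTC integration is too involved for a short snippet, so here is a minimal MJPEG fallback sketch using only OpenCV and the Python standard library. The camera index and port are placeholders for the in-OT viewing setup.

```python
# Minimal MJPEG fallback sketch (the Janus WebRTC path is not shown here).
# Serves frames from a local camera as multipart JPEG over HTTP.
import cv2
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

CAMERA_INDEX = 0   # placeholder: context camera
PORT = 8080        # placeholder: in-OT viewing port

class MJPEGHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "multipart/x-mixed-replace; boundary=frame")
        self.end_headers()
        cap = cv2.VideoCapture(CAMERA_INDEX)
        try:
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                ok, jpg = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, 80])
                if not ok:
                    continue
                # Write one multipart segment per frame.
                self.wfile.write(b"--frame\r\n")
                self.send_header("Content-Type", "image/jpeg")
                self.send_header("Content-Length", str(len(jpg)))
                self.end_headers()
                self.wfile.write(jpg.tobytes())
                self.wfile.write(b"\r\n")
        finally:
            cap.release()

if __name__ == "__main__":
    ThreadingHTTPServer(("0.0.0.0", PORT), MJPEGHandler).serve_forever()
```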
AI & inference
- PyTorch for model training
- ONNX Runtime / OpenVINO for lightweight deployment in theatre
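A brief sketch of the training-to-deployment handoff: export a PyTorch model to ONNX, then score a frame with ONNX Runtime. The tiny network below is a stand-in for illustration, not the actual segmentation model.

```python
# Sketch: export a (placeholder) segmentation model from PyTorch to ONNX,
# then run it with ONNX Runtime for lightweight in-theatre inference.
import numpy as np
import torch
import torch.nn as nn
import onnxruntime as ort

class TinySegNet(nn.Module):
    """Illustrative stand-in, not the deployed model."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 1, 1), nn.Sigmoid(),   # per-pixel "suspect" probability
        )
    def forward(self, x):
        return self.net(x)

# Export once, offline.
model = TinySegNet().eval()
dummy = torch.randn(1, 1, 256, 256)
torch.onnx.export(model, dummy, "seg_model.onnx",
                  input_names=["frame"], output_names=["mask"])

# Deploy: load with ONNX Runtime and score a frame.
session = ort.InferenceSession("seg_model.onnx", providers=["CPUExecutionProvider"])
frame = np.random.rand(1, 1, 256, 256).astype(np.float32)  # placeholder frame
(mask,) = session.run(["mask"], {"frame": frame})
print("suspect-pixel fraction:", float((mask > 0.5).mean()))
```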
Tooling & data
- Tesseract OCR for digitising cancer reports
- Pandas / Power BI to build structured datasets and dashboards
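For the report-digitisation step, a minimal sketch with pytesseract and pandas might look like the following. The field patterns and file paths are illustrative; real reports need template-specific parsing and validation.

```python
# Sketch: digitise a scanned histopathology report with Tesseract OCR and
# collect a few fields into a pandas DataFrame.
import re
import pandas as pd
import pytesseract
from PIL import Image

def parse_report(image_path: str) -> dict:
    text = pytesseract.image_to_string(Image.open(image_path))
    # Hypothetical field patterns; adjust to the hospital's report template.
    margin = re.search(r"margin[s]?\s*[:\-]\s*(\w+)", text, re.IGNORECASE)
    grade = re.search(r"grade\s*[:\-]\s*(\w+)", text, re.IGNORECASE)
    return {
        "source_file": image_path,
        "margin_status": margin.group(1) if margin else None,
        "tumour_grade": grade.group(1) if grade else None,
    }

if __name__ == "__main__":
    records = [parse_report(p) for p in ["report_001.png"]]  # placeholder paths
    df = pd.DataFrame(records)
    df.to_csv("histopathology_fields.csv", index=False)
```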
Languages
- Python & C++ (nodes, drivers, perception, orchestration)
ROS 2 nodes (current graph)
- /joy_node – joystick input
- /controller_servo_node – arm motion / servo control
- /rpi_hq_camera_node – visual context stream
- /image_saver_node – snapshot and metadata store
- /tof_sensor_node – depth sensing
- /ultrasonic_sensor_node – supplemental distance sensing
- /basler_flir_camera_node – fluorescence channel acquisition
- /ai_model_processor_node – segmentation / inference
- /feedback_alert_node – in-OT alerts / HMI
- /rosbag_recorder_node – dataset logging (for QA & model improvement)
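To give a feel for how these nodes fit together, here is a minimal rclpy sketch in the spirit of /image_saver_node: it subscribes to /joy (from /joy_node) and saves the latest camera frame when a button is pressed. The button index and image topic name are assumptions for illustration.

```python
# Minimal rclpy sketch of a snapshot-on-button node. The /joy topic comes from
# /joy_node; the image topic and button mapping below are assumed, not actual.
import cv2
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image, Joy
from cv_bridge import CvBridge

class SnapshotNode(Node):
    def __init__(self):
        super().__init__("image_saver_node")
        self.bridge = CvBridge()
        self.latest = None
        self.count = 0
        self.create_subscription(Image, "/rpi_hq_camera/image_raw", self.on_image, 10)
        self.create_subscription(Joy, "/joy", self.on_joy, 10)

    def on_image(self, msg: Image):
        # Keep the most recent context frame in OpenCV format.
        self.latest = self.bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")

    def on_joy(self, msg: Joy):
        # Assumed mapping: button 0 on the F310 triggers a snapshot.
        if msg.buttons and msg.buttons[0] == 1 and self.latest is not None:
            path = f"snapshot_{self.count:04d}.png"
            cv2.imwrite(path, self.latest)
            self.get_logger().info(f"saved {path}")
            self.count += 1

def main():
    rclpy.init()
    rclpy.spin(SnapshotNode())
    rclpy.shutdown()

if __name__ == "__main__":
    main()
```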
Workflow note: Image capture can be triggered from the Logitech F310. Video is displayed on a monitor adjacent to the OT table so the surgeon can inspect and, where necessary, resect immediately. ToF readings are used to build a 3D depth map of the wound (a minimal sketch follows below); snapshots are auto-annotated to feed back into AI training.
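One way the ToF-to-depth-map step could be sketched: accumulate range readings taken at known scan positions into a coarse grid. Grid size and resolution here are placeholders; the real pipeline fuses ToF and ultrasonic data along the arm's scan path.

```python
# Sketch: assemble a coarse depth map of the wound bed from ToF readings taken
# at known scan positions. Field size and cell size are placeholders.
import numpy as np

def depth_map_from_samples(samples, width_mm=60, depth_mm=40, cell_mm=5.0):
    """samples: iterable of (x_mm, y_mm, range_mm) tuples."""
    nx = int(width_mm // cell_mm) + 1
    ny = int(depth_mm // cell_mm) + 1
    grid = np.full((ny, nx), np.nan)
    for x, y, r in samples:
        i, j = int(round(y / cell_mm)), int(round(x / cell_mm))
        if 0 <= i < ny and 0 <= j < nx:
            grid[i, j] = r   # keep the latest reading per cell
    return grid

if __name__ == "__main__":
    fake = [(x, y, 100.0 + 0.2 * x) for x in range(0, 61, 5) for y in range(0, 41, 5)]
    print(depth_map_from_samples(fake).shape)
```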
What breaks in the real world (and how I fixed it)
- Access to surgical-grade parts is limited → I iterated with robust, widely available components and designed with replaceability in mind.
- Trust in a solo researcher is earned slowly → I built transparent demos, published intermediate results, and prioritised surgeon feedback.
- OT latency & sterility constraints → I optimised capture paths, reduced jitter in ROS, and validated the system in non-sterile mock-OTs before any clinical exposure.
- Calibration & stability → I created repeatable routines for arm homing, illumination levels, and exposure, and used rosbag to reproduce edge cases.
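For the exposure part of that calibration, a repeatable routine could look like the sketch below: sweep exposure settings and keep the longest one that stays under a saturation budget. The grab_frame callable is a stand-in for the real camera interface.

```python
# Sketch of a repeatable exposure-calibration routine: pick the longest exposure
# whose saturated-pixel fraction stays within budget. grab_frame is a stand-in.
import numpy as np

def calibrate_exposure(grab_frame, exposures_us, max_saturated_fraction=0.01):
    """grab_frame(exposure_us) -> 8-bit grayscale ndarray."""
    best = exposures_us[0]
    for exp in sorted(exposures_us):
        frame = grab_frame(exp)
        saturated = np.mean(frame >= 250)
        if saturated <= max_saturated_fraction:
            best = exp          # longer exposure, still within budget
        else:
            break               # further increases would clip the signal
    return best

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake_grab = lambda exp: np.clip(rng.normal(exp / 40, 10, (64, 64)), 0, 255).astype(np.uint8)
    print("chosen exposure (us):", calibrate_exposure(fake_grab, [1000, 2000, 4000, 8000]))
```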
Social impact: engineering for access
Delays in detection are not just technical gaps—they’re a health-equity problem. In district hospitals, repeat surgeries mean lost wages, mounting costs, and eroding trust. By keeping this system affordable, modular, and open-source-friendly, I aim to make it deployable across tier-2/3 public hospitals in India.
Imagine district OTs where a two-minute scan prevents a second surgery. That’s the future I’m building toward.
My journey: from rejection to researcher
Three years ago, I walked into MIMS with a device idea to reduce premature deliveries. I was turned away—a tier-3 college engineer pitching medical innovation didn’t land. That rejection stung.
I left for Germany to study Mechatronics, trained with surgeons and researchers, learned ROS, AI, bionics, biomechanics, and came home—not to prove a point, but to honour the mission. Today, I’m a Visiting Researcher at MIMS (General Surgery) with ethics approval to work on surgical robotics.
Special thanks to Dr. N. Lingaraju, Neurosurgeon & Associate Professor at MIMS, for opening the doors and supporting this work.
What’s next
- Suture-tracking automation for surgical robots
- Decision-support for anaesthesia (signals + AI)
- OCR + analytics for histopathology to enable longitudinal insights
- Dermatology CV models for rare skin-condition support
- Multimodal speech-and-vision tools for psychiatric assessment
Conclusion
Surgical robots aren’t just engineering marvels—they’re instruments of hope.
They can see where we can’t, act when we hesitate, and support medicine when humans reach their limits. If our code can prevent a second surgery, reduce anxiety, and return a parent home a little sooner—that’s the only engineering that matters.
References
- Edmund Optics — Imaging optics and filters.
- Thorlabs — Illumination and optomechanics.
- Franka Emika — Medical-grade collaborative robotic arms.
- Shanta V., Swaminathan R., et al. “Challenges in early diagnosis and treatment of cancer in India.” The Lancet Oncology (2014).
- Nguyen Q. T., Tsien R. Y. “Fluorescence-guided surgery with live molecular navigation—a new cutting edge.” Nature Reviews Cancer (2013).
- Katić D., et al. “Intraoperative use of hyperspectral imaging and AI in oncologic surgery.” IEEE Trans. Medical Robotics and Bionics (2021).
- World Health Organization. Guide to Cancer Early Diagnosis (2017).
