DeepScan3D: Single X-ray to 3D CT Reconstruction using Neural Radiance Fields

Domain: Medical Imaging | Computer Vision | Generative AI

Objective

Developed a NeRF-based framework in PyTorch that reconstructs high-fidelity, CT-like 3D volumes from a single 2D X-ray image.

Key Features

  • Learned latent code representation for efficient 3D reconstruction
  • Differentiable volume rendering using the Lambert-Beer law (see the sketch after this list)
  • GAN training loop with SSIM and reconstruction losses
  • Synthetic digitally reconstructed radiographs (DRRs) from real CT datasets for supervision
  • Self-supervised novel view consistency
  • Test-time latent optimization for unseen inputs
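
The differentiable rendering listed above follows the Lambert-Beer (Beer-Lambert) attenuation model: each DRR pixel is the transmitted intensity exp(-∫ μ ds) along its ray. The project's own code is not reproduced here, so the snippet below is only a minimal PyTorch sketch of that idea; the name `render_drr` and the `attenuation_fn` callable (standing in for the latent-conditioned NeRF MLP) are illustrative assumptions, not the project's actual API.

```python
import torch

def render_drr(attenuation_fn, ray_origins, ray_dirs, near, far, n_samples=128):
    """Render one DRR intensity per ray under the Lambert-Beer law.

    attenuation_fn : callable mapping 3D points (..., 3) to non-negative
                     attenuation coefficients (..., 1); here it stands in for
                     the latent-conditioned NeRF MLP.
    ray_origins, ray_dirs : (N, 3) tensors, one ray per output pixel.
    """
    # Evenly spaced sample depths between the near and far planes
    t = torch.linspace(near, far, n_samples, device=ray_origins.device)      # (S,)
    pts = ray_origins[:, None, :] + ray_dirs[:, None, :] * t[None, :, None]  # (N, S, 3)

    mu = attenuation_fn(pts).squeeze(-1)   # (N, S) predicted attenuation per sample
    delta = (far - near) / n_samples       # constant step length along each ray

    # Lambert-Beer law: I = I0 * exp(-line integral of mu), with I0 = 1
    optical_depth = (mu * delta).sum(dim=-1)   # (N,) discretized line integral
    return torch.exp(-optical_depth)           # (N,) transmitted intensity in (0, 1]
```

Because every operation is a differentiable tensor op, gradients from image-space losses can flow back through `attenuation_fn` to the latent code during training.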

Technologies Used

  • PyTorch
  • Neural Radiance Fields (NeRF)
  • Generative Adversarial Networks (GANs)
  • Computer Vision
  • Medical Image Processing

Technical Implementation

  • Designed and implemented the full architecture, including a self-supervised novel-view consistency objective
  • Generated synthetic DRRs from real CT datasets for supervision, eliminating the need for multiple real X-ray views
  • Integrated a learned latent-code representation with differentiable Lambert-Beer volume rendering
  • Implemented a GAN training loop with SSIM and reconstruction losses (training-step sketch below)
  • Added test-time latent optimization to handle unseen inputs (sketch below)
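
Two hedged sketches make the training and inference bullets concrete. First, a possible generator update combining an adversarial term with SSIM and L1 reconstruction losses; `encoder`, `render_fn`, and `discriminator` are placeholder names, and the SSIM call assumes the third-party `pytorch_msssim` package rather than project-specific code.

```python
import torch
import torch.nn.functional as F
from pytorch_msssim import ssim  # assumed third-party SSIM implementation

def generator_step(encoder, render_fn, discriminator, g_opt, xray, target_drr,
                   lambda_rec=10.0, lambda_ssim=1.0):
    """One generator update: adversarial + L1 reconstruction + SSIM losses.

    xray, target_drr : (B, 1, H, W) input X-ray and ground-truth DRR batches.
    """
    g_opt.zero_grad()

    latent = encoder(xray)          # latent code summarizing the input X-ray
    fake_drr = render_fn(latent)    # differentiable rendering of the supervised view

    # Non-saturating adversarial loss: push the discriminator output towards "real"
    logits_fake = discriminator(fake_drr)
    adv_loss = F.binary_cross_entropy_with_logits(
        logits_fake, torch.ones_like(logits_fake))

    # Reconstruction losses against the synthetic ground-truth DRR
    rec_loss = F.l1_loss(fake_drr, target_drr)
    ssim_loss = 1.0 - ssim(fake_drr, target_drr, data_range=1.0)

    loss = adv_loss + lambda_rec * rec_loss + lambda_ssim * ssim_loss
    loss.backward()
    g_opt.step()
    return loss.item()
```

Second, a sketch of test-time latent optimization: the trained networks stay frozen, and only the latent code is optimized so that its rendered projection matches the observed X-ray. Function and argument names are again illustrative.

```python
import torch
import torch.nn.functional as F

def optimize_latent(render_fn, xray, latent_init, n_steps=200, lr=1e-2):
    """Adapt to an unseen input by optimizing only the latent code;
    network parameters are assumed frozen (or simply excluded from the optimizer)."""
    latent = latent_init.detach().clone().requires_grad_(True)
    opt = torch.optim.Adam([latent], lr=lr)
    for _ in range(n_steps):
        opt.zero_grad()
        loss = F.l1_loss(render_fn(latent), xray)  # re-projection error on the input view
        loss.backward()
        opt.step()
    return latent.detach()
```

Confining the optimization to the latent code lets the model adapt to out-of-distribution inputs without disturbing the anatomical prior captured in the trained weights.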

Impact

  • Potential to reduce patient radiation exposure by reconstructing a 3D volume from a single X-ray instead of a full CT acquisition
  • Cost-effective 3D reconstruction from a single 2D image
  • Advancement in medical AI and computer vision research
  • Innovative application of NeRF technology in healthcare