Can the Student Outperform the Master? A Plan Comparison Between Pinnacle Auto-Planning and Eclipse Knowledge-Based RapidPlan Following a Prostate-Bed Plan Competition

April Smith, Andrew Granatowicz, Cole Stoltenberg, Shuo Wang, Xiaoying Liang, Charles A. Enke, Andrew O. Wahl, Sumin Zhou, Dandan Zheng

Research output: Contribution to journal › Article


PURPOSE: Pinnacle Auto-Planning and Eclipse RapidPlan are 2 major commercial automated planning engines that are fundamentally different: Auto-Planning mimics human planners through iterative optimization, while RapidPlan generates static dose objectives from estimates predicted from a prior knowledge base. This study objectively compared their performance on intensity-modulated radiotherapy (IMRT) planning for the prostate fossa and lymphatics, adopting the plan quality metric used in the 2011 American Association of Medical Dosimetrists Plan Challenge.

METHODS: All plans used an identical IMRT beam setup and a simultaneous integrated boost prescription (68 Gy/56 Gy to the prostate fossa/lymphatics). Auto-Planning was used to retrospectively plan 20 patients, which were subsequently employed as the library to build a RapidPlan model. To compare the 2 engines' performance, a test set comprising 10 patients plus the Plan Challenge patient was planned by both Auto-Planning (the master) and RapidPlan (the student) without manual intervention, except for a common dose normalization, and evaluated using the plan quality metric, which included 14 quantitative submetrics covering target coverage, dose spillage, and organ-at-risk doses. Plan quality metric scores of the Auto-Planning and RapidPlan plans were compared using the Mann-Whitney U test.

RESULTS: There was no significant difference in overall performance between the 2 engines on the 11 test cases (P = .509). Auto-Planning and RapidPlan showed no significant difference on 12 of the 14 submetrics. On the Plan Challenge case, Auto-Planning scored 129.9 and RapidPlan scored 130.3 out of 150, compared with an average score of 116.9 ± 16.4 (range: 58.2-142.5) among the 125 Plan Challenge participants.

CONCLUSION: Using an innovative study design, an objective comparison was conducted between 2 major commercial automated inverse planning engines. The 2 engines performed comparably, and both yielded plans on par with the average human planner. Using a constant-performing planner (Auto-Planning) to train and to compare against, RapidPlan was found to yield plans as good as, but no better than, its library plans.
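The score comparison described in the methods can be sketched in plain Python. The snippet below implements the Mann-Whitney U statistic (with midranks for ties) and an exact two-sided permutation p-value; in practice one would use `scipy.stats.mannwhitneyu`. The two score arrays are hypothetical illustrative values, not data from the study.

```python
from itertools import combinations

def mann_whitney_u(x, y):
    """Mann-Whitney U statistic for sample x vs. sample y,
    using midranks to handle tied values."""
    combined = sorted(x + y)
    ranks = {}
    i = 0
    while i < len(combined):
        j = i
        while j < len(combined) and combined[j] == combined[i]:
            j += 1
        # values at positions i..j-1 share ranks i+1..j; assign the midrank
        ranks[combined[i]] = (i + 1 + j) / 2
        i = j
    r1 = sum(ranks[v] for v in x)          # rank sum of sample x
    n1 = len(x)
    return r1 - n1 * (n1 + 1) / 2          # U1 = R1 - n1(n1+1)/2

def exact_two_sided_p(x, y):
    """Exact two-sided p-value: the fraction of all relabelings of the
    pooled data whose U deviates from its null mean at least as much
    as the observed U."""
    pooled = x + y
    n1, n2 = len(x), len(y)
    mu = n1 * n2 / 2                       # mean of U under H0
    dev = abs(mann_whitney_u(x, y) - mu)
    count = total = 0
    for idx in combinations(range(len(pooled)), n1):
        idx_set = set(idx)
        xs = [pooled[i] for i in idx]
        ys = [pooled[i] for i in range(len(pooled)) if i not in idx_set]
        total += 1
        if abs(mann_whitney_u(xs, ys) - mu) >= dev:
            count += 1
    return count / total

# Hypothetical plan quality metric scores (out of 150), for illustration:
ap_scores = [128.1, 131.4, 126.7, 133.0, 129.9]   # Auto-Planning
rp_scores = [129.5, 130.3, 127.8, 132.2, 128.6]   # RapidPlan
p = exact_two_sided_p(ap_scores, rp_scores)
```

A large p-value, as reported in the study (P = .509), means the relabeled score sets deviate from the null mean about as often as the observed split, i.e., no evidence of a performance difference.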

Original language: English (US)
Pages (from-to): 1533033819851763
Journal: Technology in Cancer Research & Treatment
State: Published - Jan 1 2019



Keywords

  • Auto-Plan
  • KBP
  • RapidPlan
  • automation
  • treatment planning

ASJC Scopus subject areas

  • Oncology
  • Cancer Research
