Simulation Tool Benchmarking and Verification
Brent Dixon
Idaho National Laboratory
Abstract
All developers of fuel cycle simulation tools have had to grapple with the difficulty of verifying their performance. While individual equations can be checked for accuracy, verifying the behavior of the code as a whole is more difficult. Typical techniques range from running highly simplified models that can be manually verified to cross-comparison with other codes that have themselves been verified through similar means. Newer codes can also be verified against benchmarks and other results previously generated by established codes.

This presentation will discuss a number of the previous activities that have been undertaken to verify the performance of fuel cycle simulation tools, including unit tests [1], code-to-code comparisons [2], and several international benchmarking efforts [3,4,5]. The presentation will briefly summarize each of these efforts, including the approach taken, the tests or scenarios considered, the codes evaluated, and other relevant information, as a lead-in to a general discussion on code verification and the potential for building a library of unit tests and benchmark cases. The presenter has been involved with all of the previous activities referenced here and will provide first-person information on the processes used and general lessons learned. This presentation may be paired with the proposed presentation by Bo Feng, “Valuable Lessons from Fuel Cycle Code Comparisons,” and would also benefit from a brief presentation by Anthony Scopatz on developing standardized benchmarks [6].
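
As a purely illustrative sketch of the kind of unit test referenced above [1], the following Python example checks mass conservation across a single simplified facility model. The facility model and its interface are hypothetical and are not drawn from any particular fuel cycle code; real unit tests would exercise the actual modules of the tool under verification.

    # Hypothetical illustration only: a minimal mass-balance unit test of the
    # kind used to verify individual pieces of a fuel cycle simulation tool.
    # The toy facility model below is invented for this sketch.

    def run_simple_facility(feed_kg, separation_efficiency):
        """Split a feed stream into product and waste streams (toy model)."""
        product = feed_kg * separation_efficiency
        waste = feed_kg - product
        return {"product": product, "waste": waste}


    def test_mass_is_conserved():
        feed = 1000.0  # kg of feed material
        streams = run_simple_facility(feed, separation_efficiency=0.85)
        total_out = streams["product"] + streams["waste"]
        # Outgoing mass should equal incoming mass to within round-off.
        assert abs(total_out - feed) < 1e-9


    if __name__ == "__main__":
        test_mass_is_conserved()
        print("mass balance test passed")

A library of such tests, applied consistently across codes, is one possible starting point for the shared verification resources discussed in the presentation.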