Austin Benchmark Suite for Computational Bioelectromagnetics

Last Update: Oct. 12, 2016

Jackson W. Massey, Chang Liu, Anton Menshov, and Ali E. Yilmaz

Department of Electrical and Computer Engineering
The University of Texas at Austin, Austin, TX 78712, USA

Introduction

The benchmark suite at this website aims to provide information about the state of the art in bioelectromagnetic (BioEM) simulation methods for computing electromagnetic scattering from human models illuminated by impressed time-harmonic sources in the UHF band (300 MHz–3 GHz).
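
To give a sense of the problem scale, the free-space wavelength λ = c/f shrinks from about 1 m at 300 MHz to about 10 cm at 3 GHz, so a human model spans roughly two to seventeen free-space wavelengths over this band (and many more wavelengths inside high-permittivity tissue). The short Python sketch below performs this back-of-the-envelope calculation; the ~1.7 m body height and the tissue permittivity of 50 are illustrative assumptions, not values taken from the benchmark suite.

    # Back-of-the-envelope electrical size of a human model in the UHF band.
    # The body height and tissue permittivity below are illustrative
    # assumptions, not parameters of the benchmark suite itself.

    C0 = 299_792_458.0  # speed of light in vacuum (m/s)

    def wavelength(freq_hz: float, eps_r: float = 1.0) -> float:
        """Wavelength (m) at freq_hz in a medium with relative permittivity eps_r."""
        return C0 / (freq_hz * eps_r ** 0.5)

    BODY_HEIGHT_M = 1.7   # assumed nominal human height (m)
    EPS_R_TISSUE = 50.0   # rough relative permittivity of soft tissue at UHF

    for f in (300e6, 3e9):  # edges of the UHF band considered here
        lam0 = wavelength(f)
        lam_t = wavelength(f, EPS_R_TISSUE)
        print(f"{f / 1e9:4.1f} GHz: lambda0 = {lam0 * 100:5.1f} cm "
              f"({BODY_HEIGHT_M / lam0:5.1f} wavelengths per body height), "
              f"lambda_tissue ~ {lam_t * 100:4.1f} cm")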

Such simulations are important in the design of body-centric wireless communication systems, wireless implants, and medical imaging systems. They are also needed to develop body-area networking standards and to ensure that exposure to non-ionizing radiation remains below the limits specified in safety standards. As a result of decades-long research and development efforts in computational electromagnetics and advances in computer hardware/software infrastructure, a large (and expanding) set of computational methods is available today for performing such BioEM simulations [1]. Indeed, various commercial and academic simulation tools currently rely on implementations of these methods. It is becoming increasingly difficult to identify the “best” simulation method among the alternatives for a BioEM problem of interest, or to determine “how much better” one method is than others [2], because

  • a large and growing number of competing methods can be used
  • underlying computer hardware/software infrastructure continues to evolve rapidly
  • a high level of expertise is needed to apply specialized methods effectively
  • methods are often evaluated primarily by their own developers, who in effect serve as judge and jury of their own work; such self-assessments are prone to intentional or unintentional biases and over-optimistic performance estimates [3]

As a result, there is an increasing risk that BioEM simulation methods will be judged primarily on subjective factors (e.g., generality, simplicity, familiarity/popularity, or user friendliness) rather than objective scientific/engineering merits (e.g., accuracy, efficiency, scalability) [2]. It is our contention that publicly available verification, validation, and performance benchmarks like those presented at this website can

  • help systematically combat the ubiquity of error [4]
  • inform researchers in the field and the public about the state of the art
  • lower barriers to entry for new researchers/methods
  • reduce the importance of subjective factors in judging simulation methods
  • increase the credibility of the results obtained and claims made by computational scientists and engineers