High Performance Three-Dimensional MHD Simulations of Space Plasmas with Adaptive Mesh Refinement
Tamas I. Gombosi, Darren L. DeZeeuw, Clinton P.T. Groth,
Hal G. Marshall, Kenneth G. Powell, and Quentin F. Stout
College of Engineering, The University of Michigan
Ann Arbor, MI 48109
tamas@umich.edu

Global computational models based on first principles represent a very important component of efforts to understand the intricate processes coupling the Sun to the geospace environment. The hope for such models is that they will eventually fill the gaps left by measurements, extending the spatially and temporally limited observational database into a self-consistent global understanding of our space environment.

At present, and for the foreseeable future, magnetohydrodynamic (MHD) models are the only models that can span the enormous distances present in space plasmas. MHD codes have been used successfully to model many important processes, and in spite of its inherent limitations, MHD represents a powerful global modeling tool that has significantly advanced our understanding of the space environment.

This talk summarizes the fundamental approach taken in our new multiscale MHD model and its applications to various space plasmas.

Method

We have recently developed a highly scalable, massively parallel, block-adaptive mesh refinement (Block-AMR) algorithm for MHD applications. This algorithm is implemented in the Block-Adaptive Tree Solar-wind Roe Upwind Scheme (BATS-R-US) code, which has been used to solve a variety of space plasma physics problems. The solver makes use of modern algorithmic advances, including solution-adaptive methods and high-resolution upwind technology; its basic elements are outlined below.
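
For illustration only, the sketch below (in Python, with hypothetical names; BATS-R-US itself is written in Fortran 90) shows the high-resolution upwind ingredient for the simplest possible case, scalar advection on a periodic one-dimensional grid: limited (minmod) linear reconstruction inside each cell, an upwind interface flux, and a two-stage time integration. It is a simplified stand-in for, not a reproduction of, the actual MHD solver.

import numpy as np

def minmod(a, b):
    """Slope limiter: the smaller slope when a and b agree in sign, zero otherwise."""
    return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def residual(u, a, dx):
    """-d(flux)/dx for u_t + a*u_x = 0, a > 0, with limited (MUSCL) reconstruction on a periodic grid."""
    slope = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)
    flux = a * (u + 0.5 * slope)            # upwind flux at the right face of each cell
    return -(flux - np.roll(flux, 1)) / dx

def advect(u, a, dx, dt, nsteps):
    """Two-stage (Heun) time stepping; keep a*dt/dx <= ~0.5 for stability."""
    u = u.copy()
    for _ in range(nsteps):
        u_star = u + dt * residual(u, a, dx)                    # predictor
        u = 0.5 * (u + u_star + dt * residual(u_star, a, dx))   # corrector
    return u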

Previous tightly coupled upwind-type schemes for MHD have been based on the one-dimensional Riemann problem obtained by noting that, for one-dimensional problems, the div(B)=0 condition reduces to the constraint Bx = const. This allows the fifth row and column of the Jacobian matrix (which correspond to the normal component of the magnetic field at the interface) to be dropped, yielding a 7 × 7 system. Applying schemes based on this one-dimensional Riemann problem to MHD problems in more than one dimension leads to a serious difficulty, however. Because there is no evolution equation for the component of the magnetic field normal to a cell face, a separate procedure must be implemented to update this portion of the magnetic field, and it must be implemented in such a way as to satisfy the div(B)=0 constraint. Typically this is done with a projection scheme, which solves a global elliptic equation every few time steps to "clean" the magnetic field of the divergence created by the numerical algorithm.
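
For concreteness, the following is a minimal sketch of such a projection ("cleaning") step, assuming a periodic, cubic Cartesian grid with uniform spacing and an FFT-based Poisson solve (hypothetical code, not part of BATS-R-US, whose approach is designed to avoid this step): it solves laplacian(phi) = div(B) and subtracts grad(phi) from B.

import numpy as np

def project_divergence_free(Bx, By, Bz, dx):
    """Subtract grad(phi), where laplacian(phi) = div(B), so that div(B) -> 0 (periodic cubic grid)."""
    n = Bx.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)             # angular wavenumbers
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                                      # avoid 0/0 for the mean mode

    Bxh, Byh, Bzh = np.fft.fftn(Bx), np.fft.fftn(By), np.fft.fftn(Bz)
    div_h = 1j * (kx * Bxh + ky * Byh + kz * Bzh)          # Fourier transform of div(B)
    phi_h = -div_h / k2                                    # solves laplacian(phi) = div(B)
    phi_h[0, 0, 0] = 0.0

    # B <- B - grad(phi) removes the divergent part and leaves the mean field untouched.
    return (np.real(np.fft.ifftn(Bxh - 1j * kx * phi_h)),
            np.real(np.fft.ifftn(Byh - 1j * ky * phi_h)),
            np.real(np.fft.ifftn(Bzh - 1j * kz * phi_h)))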

The flux-based approach taken in BATS-R-US avoids this difficulty: the normal component of the magnetic field is evolved together with the other state variables, so no separate elliptic "cleaning" step is required.

Parallel Implementation

The basic data structure used in the BATS-R-US approach is that of adaptive blocks. Adaptive blocks partition space into regions, each of which is a regular nx × ny × nz Cartesian grid of cells called a block. If a block needs to be refined, it is replaced by 8 child subblocks, each of which is again an nx × ny × nz Cartesian grid of cells. If coarsening is needed, the 8 children are replaced by their parent. Thus the blocks can be thought of as corresponding to the leaves of a tree-based decomposition.
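
A minimal, hypothetical Python sketch of such a block tree (the names below are illustrative and are not those of the Fortran 90 implementation) is:

import numpy as np

class Block:
    def __init__(self, lo, hi, shape=(8, 8, 8), parent=None):
        self.lo, self.hi = np.asarray(lo, float), np.asarray(hi, float)  # bounding box
        self.shape = shape                      # cells per block, the same for every block
        self.cells = np.zeros(shape)            # state variables would live here
        self.parent = parent
        self.children = []                      # empty for leaf blocks

    def is_leaf(self):
        return not self.children

    def refine(self):
        """Replace this leaf by 8 child blocks covering its 8 octants."""
        assert self.is_leaf()
        mid = 0.5 * (self.lo + self.hi)
        for ix in (0, 1):
            for iy in (0, 1):
                for iz in (0, 1):
                    lo = np.where([ix, iy, iz], mid, self.lo)
                    hi = np.where([ix, iy, iz], self.hi, mid)
                    self.children.append(Block(lo, hi, self.shape, parent=self))

    def coarsen(self):
        """Replace the 8 children of this block by the block itself."""
        assert all(c.is_leaf() for c in self.children)
        self.children = []

    def leaves(self):
        """The leaf blocks are the blocks actually advanced by the solver."""
        if self.is_leaf():
            return [self]
        return [leaf for c in self.children for leaf in c.leaves()]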

Blocks maintain explicit pointers to their neighbors, and are implemented with a periphery of ghost cells which contain copies of neighboring cells' state variables.  If neighbors have different resolutions, then interpolation or restriction operations are used to determine the values of the ghost cells.  The number of layers of ghost cells is determined by the order of the spatial accuracy employed.
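
In one dimension, and to first order, the restriction and prolongation operations used to fill ghost cells across a resolution change reduce to simple averaging and injection. The hypothetical sketch below is meant only to illustrate the idea; a second-order implementation would use limited linear interpolation rather than injection.

import numpy as np

def restrict_to_coarse_ghost(fine_cells):
    """Average pairs of fine cells onto coarse ghost cells (restriction)."""
    f = np.asarray(fine_cells, float)
    return 0.5 * (f[0::2] + f[1::2])

def prolong_to_fine_ghost(coarse_cells):
    """Copy each coarse cell into two fine ghost cells (first-order prolongation)."""
    return np.repeat(np.asarray(coarse_cells, float), 2)

# Example: a coarse block next to a finer neighbor.
print(restrict_to_coarse_ghost([1.0, 3.0, 5.0, 7.0]))   # -> [2. 6.]
print(prolong_to_fine_ghost([2.0, 6.0]))                # -> [2. 2. 6. 6.]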

Fig. 1. Parallel scaling of the BATS-R-US code.
Adaptive blocks offer a number of advantages over alternatives such as adaptive cell-based trees and fully unstructured grids. The ultimate proof of the strengths of adaptive blocks (and of the entire BATS-R-US approach) is in the achieved performance. On a 512-processor Cray T3E-600, the code has achieved a sustained performance of 70 Gflops, which is close to 25% of peak performance. This parallel performance was achieved by combining high single-processor utilization with near-perfect scalability. Figure 1 shows the measured scale-up of sustained performance as the number of blocks per processor is held constant; note that at 512 processors the parallel efficiency is still over 99%.

The BATS-R-US code was developed in Fortran 90, with MPI for explicit message passing.  Use of these standards allows significant portability, along with high performance.  The BATS-R-US code has been run on SGI and Sun workstations, on SGI shared-memory machines, on the Cray T3D and T3E, and on the IBM SP2.  Because of the high calculation/communication ratio of a block, coupled with various other features of the code, relatively large message-passing latency can be tolerated, with the code still achieving very high scalability.  This greatly simplifies aspects such as load balancing, which is currently accomplished by merely balancing the number of blocks on each processor.
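
Because every block carries the same nx × ny × nz workload, balancing the number of blocks is essentially equivalent to balancing the work. A hypothetical sketch of such a block-count balancing step (not the actual Fortran 90/MPI implementation) is:

def balance_blocks(block_ids, num_procs):
    """Return a dict mapping each processor rank to its list of blocks (round-robin)."""
    assignment = {rank: [] for rank in range(num_procs)}
    for i, block in enumerate(block_ids):
        assignment[i % num_procs].append(block)
    return assignment

# Example: 10 blocks on 4 processors -> block counts (3, 3, 2, 2).
print({r: len(b) for r, b in balance_blocks(range(10), 4).items()})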

Applications

With the help of our new multiscale 3D MHD algorithm we have successfully simulated a wide variety of space plasma phenomena, including the interaction of the solar wind with comets, coronal mass ejections (CMEs), the terrestrial magnetosphere, the interaction of the heliosphere with the interstellar medium, and the interaction of the solar wind with Jovian and Saturnian satellites. This presentation briefly outlines the results of the following simulations.

Solar wind interaction with comets and the origin of cometary X-rays. Observations of Comet Hyakutake by ROSAT revealed the emission of soft X-rays from the coma. It was proposed that the observed X-ray emission is caused by charge-exchange excitation (CXE) of high-charge-state solar wind ions by neutral molecules and atoms in the coma. Using our multiscale MHD model and an appropriate ion-chemistry network, we successfully simulated the observed images and the spatial distribution of the X-ray intensity.

Solar wind interaction with Earth. We simulated the interaction of the magnetic field of the Earth with the supersonic, super-Alfvénic solar wind. Our model treats the ionosphere-magnetosphere interface in a self-consistent manner: the field-aligned magnetospheric currents are assumed to close in the ionosphere as a superposition of Hall and Pedersen currents, the ionospheric potential and convection are calculated self-consistently, and the resulting convection velocities are applied as boundary conditions in the magnetosphere simulation.
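
As a small, hypothetical illustration of the last step (assuming, for simplicity only, a locally uniform magnetic field), the sketch below computes the convection electric field E = -grad(Phi) from a given ionospheric potential and the corresponding E x B drift velocity of the kind that is handed back to the magnetosphere model as a boundary condition.

import numpy as np

def convection_velocity(phi, B, dx, dy):
    """phi: 2-D potential [V] indexed as phi[y, x]; B: 3-vector field [T]; returns v [m/s]."""
    dphi_dy, dphi_dx = np.gradient(phi, dy, dx)        # gradients along axis 0 (y) and axis 1 (x)
    E = np.stack([-dphi_dx, -dphi_dy, np.zeros_like(phi)], axis=-1)  # E = -grad(Phi)
    v = np.cross(E, B) / np.dot(B, B)                  # E x B drift velocity
    return v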

Expansion of the solar corona into the interplanetary medium and formation of coronal mass ejections. We used our BATS-R-US code to investigate the generation, structure, and evolution of pressure-driven coronal mass ejections (CMEs) from the inner solar corona at 1 Rs out into interplanetary space, to distances exceeding 1 AU. The pressure-driven CMEs are generated in the inner solar corona, which is assumed to be a large rotating reservoir of hot magnetized plasma with an embedded multipole magnetic field.

The interaction of the heliosphere with the local interstellar medium. Our simulation takes into account the multicomponent nature of the plasma, all heliospheric magnetic fields, and a self-consistent model of the interstellar neutrals. Particular emphasis is given to the role of the magnetic fields in the global heliosphere. We used the latest observational results to simulate the heliosphere during the past solar minimum, and we compare the computational results with observationally based predictions.

The interaction of Titan with the Saturnian magnetosphere. We simulated the interaction of Saturn's magnetosphere with Titan's upper atmosphere and exosphere. The effects of a conducting ionosphere, exospheric mass loading, and a possible intrinsic magnetic field were taken into consideration. Our simulations reproduce the main features of the Voyager 1 flyby observations quite well.

The interaction of Io with the magnetosphere of Jupiter. We simulated the interaction of Jupiter's magnetosphere with Io's upper atmosphere and exosphere. The effects of a conducting ionosphere, exospheric mass loading, and a possible intrinsic magnetic field were taken into consideration. The combination of adaptive refinement with the high-resolution scheme allows detailed modeling of the near-Io region while still resolving both the upstream region and the satellite's wake. The results agree very well with the observations of the Galileo spacecraft.

References

Combi, M.R., K. Kabin, T.I. Gombosi, D.L. DeZeeuw, and K.G. Powell, Io's plasma environment during the Galileo flyby: Global three-dimensional MHD modeling with adaptive mesh refinement, J. Geophys. Res., in press, 1998.

DeZeeuw, D., and K.G. Powell, An adaptively-refined Cartesian mesh solver for the Euler equations, J. Comput. Phys., 104, 55, 1992.

DeZeeuw, D.L., T.I. Gombosi, A.F. Nagy, K.G. Powell, and J.G. Luhmann, A new axisymmetric MHD model of the interaction of the solar wind with Venus, J. Geophys. Res., 101, 4,547-4,556, 1996.

Gombosi, T.I., K.G. Powell, and D.L. DeZeeuw, Axisymmetric modeling of cometary mass loading on an adaptively refined grid: MHD results, J. Geophys. Res., 99, 21,525, 1994.

Gombosi, T.I., D.L. DeZeeuw, R.M. Häberli, and K.G. Powell, Three-dimensional multiscale MHD model of cometary plasma environments, J. Geophys. Res., 101, 15,233, 1996.

Häberli, R.M., T.I. Gombosi, M.R. Combi, D.L. DeZeeuw, and K.G. Powell, Modeling of cometary x-rays caused by solar wind minor ions, Science, 276, 939, 1997.

Linde, T.J., T.I. Gombosi, P.L. Roe, K.G. Powell, and D.L. DeZeeuw, The heliosphere in the magnetized local interstellar medium: Results of a 3D MHD simulation, J. Geophys. Res., in press, 1998.

Powell, K.G., An approximate Riemann solver for magnetohydrodynamics (that works in more than one dimension), ICASE Report No. 94-24, 1994.

Powell, K.G., P.L. Roe, R.S. Myong, T.I. Gombosi, and D.L. DeZeeuw, An upwind scheme for magnetohydrodynamics, AIAA Paper 95-1704-CP, 1995.

Powell, K.G., D.L. DeZeeuw, T.I. Gombosi, and P.L. Roe, Multiscale Adaptive Upwind Scheme for Magnetohydrodynamics: MAUS-MHD, to be submitted to J. Comput. Phys., 1998.

Stout, Q.F., D.L. DeZeeuw, T.I. Gombosi, C.P.T. Groth, H. Marshall, and K.G. Powell, Adaptive blocks: A high-performance data structure, Proc. Supercomputing'97, 1997.
