Multi-scale computational modelling in biology and physiology

James Southern, Joe Pitt-Francis, Jonathan Whiteley, Daniel Stokeley, Hiromichi Kobashi, Ross Nobes, Yoshimasa Kadooka, David Gavaghan

Prog Biophys Mol Biol, 11 August 2007. DOI: 10.1016/j.pbiomolbio.2007.07.019

Recent advances in biotechnology and the availability of ever more powerful computers have led to the formulation of increasingly complex models at all levels of biology. One of the main aims of systems biology is to couple these together to produce integrated models across multiple spatial scales and physical processes. In this review, we formulate a definition of multi-scale in terms of levels of biological organisation and describe the types of model that are found at each level. Key issues that arise in trying to formulate and solve multi-scale and multi-physics models are considered and examples of how these issues have been addressed are given for two of the more mature fields in computational biology: the molecular dynamics of ion channels and cardiac modelling. As even more complex models are developed over the coming few years, it will be necessary to develop new methods to model them (in particular in coupling across the interface between stochastic and deterministic processes) and new techniques will be required to compute their solutions efficiently on massively parallel computers. We outline how we envisage these developments occurring.

The biomedical sciences have undergone a revolution over the past decade. Advances in biotechnology, underpinned by a massive leap in computational resources, have provided a wealth of biological data at all levels of biological organisation. At the molecular and cellular levels the various genome and proteome projects (States et al., 2006; International Human Genome Sequencing Consortium, 2005; Snow et al., 2005), coupled with advances and innovations in microscopy and biological imaging, have provided descriptions of unprecedented detail of the constituent parts and basic structures of living organisms. The bulk of these data to date, particularly at the molecular level, has been obtained from in vitro, and usually static, laboratory experiments. Although increasingly dynamic changes are also being considered, experimental complexity usually restricts observations to single, or very restricted, spatial and/or temporal scales. However, the primary purpose of this massive experimental and data collection programme is to determine how biological and physiological function arises within living organisms, so interest is increasingly turning towards 'integrative' or 'systems' approaches to the problem of function determination (Kitano, 2002). A full understanding of biological function emerges only if we are able to integrate all relevant information at multiple levels of organisation to recreate dynamic interactions. For example, Noble (2002) explains how the success of drug therapy depends not only on the structure and direct function of the targeted protein but also on how it interacts with its surroundings: without this information we may not know which transporters, enzymes or receptors are active in the diseased state and, further, we will be unable to predict (potentially fatal) side effects of the therapy that are due to these interactions.
In general, these dynamic interactions cannot be recreated purely by experimental observation, and the only feasible approach is to develop mathematical and computational models which couple together the underlying complex interacting non-linear processes. An iterative interplay between experiment and modelling then provides descriptions of biological processes with the dynamic complexity to give the required biological function (Werner, 2005), and it is clear that such models will have to take into account the multiple spatial and temporal scales across which these processes take place. Hunter and Borg (2003) demonstrate that the spatial scales that may need to be considered in a complete model of the human body range from 1 nm (protein) to 1 m (whole body), spanning 10 orders of magnitude, and the temporal scales from 1 μs (ion channel gating) to 10^9 s (human lifetime), spanning 16 orders of magnitude. Such a wide range of scales requires a coupled hierarchy of models, each describing behaviour at a different scale, to encapsulate the complete system. In general, these resulting mathematical models are sufficiently complex that they can only be solved numerically, resulting in the increasing importance of computational simulation methods.

In this paper, we review some of the existing approaches to the computational multi-scale modelling of physiological and biological systems (see also the reviews by Schnell et al., 2007 and Bassingthwaighte et al., 2005). We begin by giving what we hope is a helpful classification of multi-scale modelling in biology based on levels of biological organisation rather than absolute length or time scales. We then go on to discuss what we see as the key modelling, numerical, and computational issues that arise in such multi-scale approaches, before reviewing two of the areas in which these techniques have reached a high level of maturity: molecular dynamics (MD) and whole heart modelling. We conclude by discussing the progress made in these two fields towards addressing the difficulties inherent in the numerical solution of a multi-scale computational model and outline the type of problems we expect to see in the future as we try to model increasingly complex systems.

Before we define 'multi-scale' in the context of biological models and modelling, it is first necessary to clarify exactly what is meant by a scale in a biological or physiological sense. Spatially, a natural way to do this is to split processes up according to their position within the biological hierarchy, i.e. whether they represent interactions between organs, within a tissue, between cells, and so on. We refer to these hierarchical positions as levels of biological organisation. On the whole they are relatively well-defined (e.g. we can intuitively say whether a given process involves discrete cells or a tissue, or whether it occurs within an organ or across a larger organ system). However, it is not easy to give concrete values for the lengths at which we switch from one level to the next: for example, different types of cell can have very different sizes, and some organs may be larger than small organ systems (especially if the comparison is done between different organisms). Major levels of biological organisation are shown in Fig. 1, and range from the environment (largest length scale) to the quantum level.
In the following sections we define each of these levels in terms of the biological processes that they contain, discuss the types of models that are generally used to encapsulate the biology, and consider the computational tractability of these models. Where appropriate, we will also give references to major reviews (or textbooks) which contain more details of the modelling work that is being conducted at that level. For the organ level and below, more concrete examples of models at each level are discussed in Sections 4 and 5.

The most detailed analysis of a biological system would start at the quantum level, with each electron-electron interaction in the system being considered separately. The most accurate representation of the interactions of electrons is obtained by solving the Schrödinger equation (known as an ab initio calculation). However, the complex nature of the interactions means that it is currently not feasible to solve the time-dependent Schrödinger equation (for any system containing more than one atom), and the time-independent equation can only be solved for systems with fewer than 15 electrons (Miller and Clary, 2004; Thøgersen and Olsen, 2004). For this reason, quantum mechanical (QM) simulations usually exploit some simplified model of the interactions between electrons (for a discussion, see Jensen, 2006). Using commercially available QM packages, the structure of relatively large molecules can be investigated. For example, using Q-Chem the structure of a hexadecapeptide (80 heavy atoms) can be calculated in between 10 and 100 h (depending on the model chosen) on a single 2 GHz Opteron processor (Shao et al., 2006).

Next, at the molecular level, the system is defined in terms of the (classical) interactions between atoms (and ions). The system is governed by Newton's laws and the differences between the models are in the method used to calculate the forces, which are due to the deformation of chemical bonds, hydrogen bonding, and electrostatic, dipole-dipole and van der Waals interactions. Many different force fields have been suggested: most represent a trade-off between accuracy (exact expressions are known for many of the forces) and numerical tractability (see, for example, Schlick (2002) for an overview of commonly used force fields). In contrast to the quantum level, simulations of relatively large dynamic systems are possible at the molecular level: the modelling of these systems is known as MD, whereas structural determination using inter-atomic force fields is referred to as molecular mechanics (MM). Commercial MM packages can be used to estimate the structure of very large molecules. (Although the problems are computationally tractable, the accuracy of the predictions is dependent on the accuracy of the force field used.) In MD simulations, computational tractability is an issue even for relatively small systems: simulations of pure lipids can deal with length scales of the order of 10-30 nm and time scales of 'hundreds of nanoseconds' (Tieleman, 2006). However, for the simulation of a more complex system, a patch of lipid bilayer containing a single ion channel and surrounded by water and ions (with tens of thousands of particles), Tieleman (2006) records that the simulation of just 1 ns of activity requires between 1 and 2 days.
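The cost driving these numbers is easy to see in code. Below is a minimal sketch (our own illustration, not drawn from any published package) of a Lennard-Jones pair-force evaluation, one of the non-bonded terms found in force fields of this kind; the naive double loop scales quadratically with particle number, which is why production codes add cut-offs and neighbour lists, and why the coarser representations discussed next become attractive.

```python
import numpy as np

def lj_forces(pos, epsilon=1.0, sigma=1.0):
    """Lennard-Jones forces for N particles (pos: (N, 3) array, reduced units).

    Implements F = -dU/dr for U(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6).
    Illustrative only: real MD force fields add bonded, electrostatic and
    other terms, plus cut-offs and neighbour lists to avoid the O(N^2) loop.
    """
    n = len(pos)
    forces = np.zeros_like(pos)
    for i in range(n):
        for j in range(i + 1, n):            # every pair: cost grows as N^2
            r_vec = pos[i] - pos[j]
            r2 = np.dot(r_vec, r_vec)
            sr6 = (sigma ** 2 / r2) ** 3     # (sigma/r)^6
            coeff = 24.0 * epsilon * (2.0 * sr6 ** 2 - sr6) / r2
            forces[i] += coeff * r_vec       # repulsive at short range
            forces[j] -= coeff * r_vec       # Newton's third law
    return forces
```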
If the system in which we are interested contains a large number of atoms and is required to be simulated for a significant length of time, then the solution of the atomic level model will not be numerically tractable (see above). However, it is possible to reduce the number of degrees of freedom in the system by treating groups of particles as single entities, allowing simulations to be run for longer, but with a reduction in the accuracy of the results. As at the molecular level, Newton's laws govern the behaviour of the system and the modelling process amounts to the determination of an appropriate force field to describe the interaction between particles. Tieleman et al. (2001) describe how 'coarse-graining' can be used in the simulation of a large number of ions passing through a channel in order to calculate a macroscopic current. (The tens of nanoseconds that could be achieved with an MD simulation would not be sufficient to determine such a current.) We describe this level, where functional units consisting of several molecules are our base unit, as the macro-molecular level. This is the last level at which stochastic methods are routinely used: at higher levels of functionality the interactions between particles are averaged and deterministic models result.

When the number of particles present in a system becomes sufficiently large, it becomes possible to treat the system as a single continuum. This in turn allows the flux of various substances to be modelled by processes such as diffusion or convection, which can be described using partial differential equations (PDEs) and spatially varying density functions rather than having to keep track of individual particles (or groups of particles). The natural upper bound to this single continuum level is the whole cell, where the cell membrane and the structural alignment between cells break up the continuum. Hence, we will refer to this as the sub-cellular level, in line with our convention of naming different length scales after their biological functionality. Note that in the physical sciences it is often referred to as the mesoscopic scale (since it lies between the microscopic and macroscopic scales). A wide variety of sub-cellular processes have been modelled. These are too numerous to give an exhaustive list, but include processes such as actin-myosin dynamics (Negroni and Lascano, 1996; Campbell et al., 2001). However, the location of the sub-cellular level on the interface between stochastic and deterministic events makes it difficult to include in a multi-scale model, since it can be difficult to couple models across this interface. (Nevertheless, the sub-cellular level can be considered the 'top' level in multi-scale models of ion channels, since the current through the channel is a sub-cellular process.)

The basic structural and functional unit of an organism is the cell, sometimes referred to as 'the building block of life' (Alberts et al., 2002). The typical size of a cell (10⁻⁵ m, although there is a great deal of variety between different types of cell) also puts cells at the interface between the mostly biochemical processes discussed above and the resulting macroscopic physiological properties. This property makes the cellular level a natural starting point for many integrative physiological models: cells are small enough to encapsulate many microscopic processes, yet large enough that they influence macroscopic behaviour.
Cells are also self-contained and isolated from their surroundings by a selectively permeable membrane (and in the case of prokaryotic cells also by a rigid cell wall). So, many of the interactions of a cell with other cells and the extracellular space can be summarised by recording the flux of various substances (ions, nutrients) across the cell membrane, suggesting that the formulation of conservation laws in the form of ordinary differential equations (ODEs) provides a natural way of modelling their behaviour. Many cell models have been proposed over the years; over 350 of these, from a variety of fields at the time of writing, are accessible online via the CellML repository (http://www.cellml.org, and see Lloyd et al., 2004). Noble and Rudy (2001) and Rudy and Silva (2006) have described the development of cardiac cell models, which are by far the most advanced cell models. Using current desktop computer technology, even the most advanced cell models can be simulated faster than real time using purpose-built software packages such as COR (Garny et al., 2003).

At the tissue level a large group of connected cells of the same type perform a specific function. In addition to the cells, a region of tissue will typically also contain an extra-cellular matrix (ECM) that holds the cells together and gives them some structural stability. Since the cell models discussed in the previous section do not include the ECM, mechanical models are usually formulated at the tissue level. Properties that result from the interactions between cells (such as the propagation of a cardiac action potential or the development of a cancerous tumour) must also be modelled at this level. Since these processes depend on position within the tissue, the models usually consist of PDEs (or sometimes cellular automata) rather than the ODEs generally seen at the cellular level. Keener and Sneyd (1998) give an overview of a wide variety of tissue level models describing many different physiological processes. A survey of mechanical models of various types of tissue can be found in Fung (1993), while Humphrey (2003) provides a more recent review of the mechanics of soft tissue. An overview of cancer modelling is given in Preziosi (2003). The amount of time required to run simulations of tissue level models can vary greatly. Generally, they can be solved relatively quickly if there is no interaction with models at the cell level: for example, the solution of a model containing a small number of coupled (and possibly nonlinear) PDEs on a finite element (FE) mesh containing many thousands of nodes (or on a finite difference (FD) grid with a similar number of degrees of freedom) may take minutes to hours on a single processor. If, however, cellular models are to be embedded in the tissue model then the time taken to simulate the model can increase dramatically, particularly if the cellular models show behaviour on multiple time scales necessitating small time steps.

An organ is a discrete unit performing a function or group of functions (e.g. the heart, brain, lungs). Usually the organ consists of a single main tissue type (e.g. myocardium in the heart) and several sporadic types (nervous, connective, blood). At the organ level, modelling usually consists of integrating tissue level models with one another and with a representation of the geometry of the organ that is being modelled.
For example, an FE mesh might be constructed based on the shape of a real heart, encoding information about the fibre orientation at each mesh point (Vetter and McCulloch, 1998). Various organ level heart models have been proposed and are discussed in more detail in Section 5 (see also the review by Kerckhoffs et al. (2006)). However, it will be seen that these models are generally not of the whole heart but of single ventricles, atria or other subunits of the heart. Models of other organs are generally less mature than those of the heart, with most existing work concentrating on developing individual tissue level models rather than attempting to link them together. Recall that tissue level models can be computationally demanding. Since organ level models generally have tissue models embedded within them, they require even longer to simulate. In fact, as will be seen in Section 5, accurate ventricular models often require several days to simulate even a few hundred milliseconds on very powerful computers.

A group of organs that work together to provide a common function is referred to as an organ system. Examples include the nervous system, circulatory system, respiratory system and immune system. It follows that two or more of the organ models described in the previous section could be coupled to provide a model of an organ system. An initial candidate, building on current work on organ level models of the heart, could be the cardiovascular system. Modelling at the organ level is not yet mature enough for significant work to begin at the organ system level. However, Hunter et al. (2002) describe how ontological information such as publications, images, movies, models and anatomical notes has been collated from disparate databases to form a central resource. They also show how the bones, muscles and tendons around the knee can be built into an integrated model of the leg, with mechanical stresses from the highest spatial level feeding back to the cell level where the balance of osteoblasts and osteoclasts in the bone is maintained. In a review by Crampin et al. (2004) there is a detailed description of how the cardiac and lung models are built from the cell level upwards. The geometric modelling of branching airways and alveoli bears a substantial similarity to that of the pulmonary network of capillaries in the lungs. This is in turn similar to the blood supply in the heart. One can see that some of the problems tackled in modelling individual organs are duplicated in other organs. However, the task of combining organs into a single organ system is not as simple as running several models in tandem, since there are feedback mechanisms (such as blood pressure and oxygenation) which cross between the systems. The development of sophisticated models of organ systems will require new models to be developed to describe these interfaces between the various organs that make up the system.

An organism is an individual life form. Although very small organisms consisting of a small number of cells (possibly one) exist, in the context of this review the organism level of biological organisation implies that the life form is made up of a network of interacting organ systems and organs (very small biological organisms can be modelled at the cellular or tissue level). According to the UK computer science community, the computational modelling of a simple biological organism such as the nematode worm is seen as a major computational challenge for part of this century (Harel, 2004, 2005).
With that in mind, modelling an organism such as a human must be viewed as ambitious, and consequently there is little existing work to discuss here. However, it would be a natural progression from the development of whole organ and organ system models to the construction of a 'virtual physiological human' (VPH): an integrated computer model to predict the response of a real organism to different physiological and pathological conditions. Hunter and Borg (2003) show that much of the work in constructing such a VPH relies on working from the bottom level upwards (from the gene via proteins and cells to the total organism). There are notable places in the modelling framework where information flows back down through the biological levels, such as signal pathways which couple events at the tissue level back to gene expression at a sub-cellular level.

Finally, each organism must be considered as a part of its environment. This includes both other organisms (of the same and different species) and other external factors such as air quality or drug delivery. Each of these components of the environment may impact on the organism level model and, hence, may also need to be modelled. Further, the organism may play an active part in shaping its environment, leading to a two-way feedback mechanism. Most of the work in modelling biological systems within an environment has concentrated either on population growth between species which are competing for resources or on epidemiology, the spread of disease through a population of individuals. Interactions between individuals are modelled at a coarse scale using empirical data to provide the probability of certain events occurring. A typical compartmental epidemic model for the spread of a virus is the 'SIR' model, in which the interactions of three distinct populations of individuals (unexposed and therefore susceptible to the virus, infectious, or recovered from the virus) are modelled. A stochastic compartmental model was successfully used by Riley et al. (2003) to fit known clinical data from a recent outbreak of severe acute respiratory syndrome (SARS). The model was then used to predict the future spread of the disease under certain constraints such as travel restrictions. This type of analysis is common in epidemiology, and gives a good indication of the likely impact of interventions such as treatment and quarantine. However, its limitation is that the epidemic spread parameters are normally measured empirically: there is no coupling between the biological level of the virus and that of population mixing.

The discussion of levels of biological organisation given here is not intended to be exhaustive. While the 10 levels described above provide a natural range of length scales in many applications, there may be times when alternatives are appropriate. For example, genetic information does not fit comfortably into any of the levels: the complexity of what DNA encodes cannot be captured by atomic or macro-molecular level simulations of its structure or dynamics. So, in an integrative model of tumourigenesis (the formation of a cancer, thought to be due to a genetic mutation) a more appropriate initial choice of levels might be: genetic, proteomic (gene expression), sub-cellular and cellular, although to model the growth and spread of a tumour accurately a number of tissue and organ level effects would also need to be included.
Similarly, in some applications it may be more natural to merge two of the levels discussed here or to split one or more of them into distinct sub-levels. As with all areas of mathematical modelling, the way in which the model is formulated should take into account the nature of the system being modelled rather than simply following a rigid framework set out in advance.

A model is an abstract representation of a complex system in mathematical form, i.e. in terms of systems of equations and a description of the geometry on which they are valid. Usually the formulation of a model to describe a system requires some simplifying assumptions, so the solution of the equations is an approximation to the behaviour of the original system. The simplified model can be used to give insights into the factors that influence the behaviour of the more complex real system (e.g. the identification of key parameters, or prediction of future behaviour). The accuracy of the model is, in general, dependent on the validity of the assumptions, and the development of a good model is usually an iterative process, with successive generations of the model providing better approximations to the real system. Since chemical reactions can affect the behaviour of a whole organism (even to the point of causing it to die), any reasonably complex biological or physiological model will include processes that vary on a wide range of time and length scales: such models are commonly termed multi-scale models. In the context of biology and physiology, we have defined various length scales in terms of levels of biological organisation. Here we define a model as multi-scale if it includes components from two or more of these levels of organisation (multiple length scales) or if it includes some processes that occur much faster in time than others (multiple time scales). Examples of multi-scale physiological models are the bidomain and monodomain models of cardiac electrophysiology (see, for example, Keener and Sneyd, 1998; the monodomain model is written out below). These models describe tissue (or organ) level propagation of an electrical wave front. However, the wave front is caused by the existence of cell membranes containing conducting ion channels, a cellular level process. The chemical reactions that regulate the ion channels occur on multiple time scales. A further example of a multi-scale physiological model may be taken from respiratory physiology. The transport of gas into the lungs from the atmosphere is modelled at the level of the organ (Tawhai et al., 2000; Scherer et al., 1988; Whiteley et al., 2002), as is the transport of blood through the lungs (Burrowes et al., 2004). These two processes are coupled by the transport of gas across the alveolar membrane, another organ level process. However, this diffusion is driven by reactions that take place within individual red blood cells and occur on a wide range of time scales and length scales (Whiteley, 2006b).
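To make this kind of cross-level coupling concrete, the monodomain model mentioned above can be written (in the standard notation of, e.g., Keener and Sneyd, 1998; the precise symbols vary between authors) as a tissue-level reaction-diffusion equation driven at every point by a system of cellular ODEs:

\[
\chi \left( C_m \frac{\partial V}{\partial t} + I_{\mathrm{ion}}(V, \mathbf{u}) \right) = \nabla \cdot \left( \sigma \nabla V \right),
\qquad
\frac{\mathrm{d}\mathbf{u}}{\mathrm{d}t} = \mathbf{f}(V, \mathbf{u}),
\]

where \(V\) is the transmembrane potential, \(\chi\) the membrane surface-to-volume ratio, \(C_m\) the membrane capacitance, \(\sigma\) the effective conductivity tensor and \(\mathbf{u}\) the vector of cell-model state variables (gating variables and ion concentrations). The PDE encodes the tissue level; the functions \(I_{\mathrm{ion}}\) and \(\mathbf{f}\) encapsulate the cellular level, and the stiffness of \(\mathbf{f}\) drives the numerical difficulties discussed below.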
As well as interactions on multiple scales, an integrative model in biology or physiology may also require consideration of several different physical processes: such models are known as multi-physics models. Again, examples can be taken from modelling of the heart. The force that makes cardiac tissue contract during a heart beat is generated by the electrical activity of the heart. In turn, the mechanical contraction affects the electrical activity of the heart. Furthermore, both the mechanical deformation and the fluid flow in the chambers of the heart are clearly closely coupled. A general model of the heart should therefore include the three physical processes of electrical generation and propagation, soft tissue deformation, and fluid flow. A second example of a multi-physics problem is avascular tumour growth. There is an extensive literature devoted to modelling tumour growth (see Fasano et al. (2006) for a recent review). The growth of a tumour displaces surrounding tissue. This displacement is usually modelled by nonlinear elasticity. The deformation results in the surrounding tissue exerting a force on the tumour, which affects the growth of the tumour (Chen et al., 2001). Similar interactions between different physical processes can be seen in many other applications in the life sciences.

The models that are used to describe multi-scale and multi-physics phenomena in the life sciences are generally too complex to allow exact solutions to be written down. So, numerical methods must be used to generate approximate solutions to the model equations. The choice of an appropriate numerical method is an important part of the modelling process, since this can impact on both the accuracy of the solution and the amount of time required for the simulation. The final step in the modelling process is the development of software packages that apply appropriate numerical techniques to the system of model equations used to describe a biological system. These software packages are referred to as simulators.

At some of the levels of organisation discussed above the system must be modelled stochastically, while at others it is possible to use deterministic models. In formulating a multi-scale model across this divide it is necessary to consider how to link these two very different types of model. This kind of modelling work is still at a very early stage. Burrage et al. (2004) have reviewed some of the possible methods for incorporating stochastic processes in a deterministic framework (in particular the formulation of stochastic differential equations) in the context of chemical reactions that occur on multiple time scales. However, to our knowledge, this has not been applied to any large-scale problem in the life sciences. For this reason we will mostly consider the computational modelling of stochastic processes and that of deterministic processes separately. In the remainder of this section, we outline some other difficulties that arise in formulating and solving a multi-scale (or multi-physics) computational model, and for which more progress towards solutions has been made. For the moment we restrict ourselves to broad principles. More concrete examples (and extensive references) can be found in the case studies that follow in Sections 4 and 5.

In stochastic processes the computational issues are more likely to be due to the sheer number of particles that are interacting with one another than to the interaction between the scales. In general, the time scales on which the lower-level processes occur are much faster than those on which the higher-level processes occur. Usually this means that the lower-level processes can be assumed to occur instantaneously and can be included in a representation of the force field at the higher level. Storing the positions of all of the particles can strain memory, and calculating the force resulting from each interaction can require a very large number of operations to be performed. These factors mean that large stochastic models are usually computationally intensive.
The switch to a model at a higher level of organisation is usually determined by the need to ensure that the calculations can be performed in a reasonable period of time (perhaps days rather than weeks or months in this context). Work on coupling stochastic models from different levels together (e.g. QM/MM coupling) is beginning to be done, and examples of this are considered in Section 4.

When coupling together deterministic models of processes that occur on different scales or as part of different physical systems, it is tempting simply to couple existing codes for the separate models to one another. However, this does not take into account how inaccuracies in the values of the variables passed between the two models may affect the combined model: a variable computed accurately enough for one model in isolation may, once the models are coupled, introduce errors into the solution of the other model and hence into the solution of the combined model. In order to prevent these inaccuracies from occurring we should consider the whole as a single model rather than the combination of two simpler ones. When models are coupled together in this way, the number of variables becomes very large. In addition, the range of length scales and time scales will also increase. The system of governing equations is now a large system modelling multi-scale processes, and equations representing multi-scale processes are typically stiff. As a rule of thumb it is usually much more efficient to use implicit numerical methods to solve stiff problems. There are several drawbacks with implementing implicit methods for the large non-linear systems that typically arise from physiological systems: we need to be able to write down the Jacobian matrix, which is usually non-trivial; sufficient memory is required to store the matrix; and it can be very expensive to compute its entries. However, explicit methods are vulnerable to numerical instability. So neither approach is ideal; an alternative is to use semi-implicit methods, where the large problem is broken down into a system of smaller sub-problems and in each sub-problem some variables are solved for implicitly and some explicitly. Choosing the decomposition in a suitable manner allows the benefits of both implicit and explicit methods to be realised (a small worked example is sketched below). Often a natural choice for uncoupling a multi-physics model is given by consideration of the different physical processes included in the model. Alternatively, in some cases, rather than combining the two models explicitly as described above, it may be possible to use the results of one model as input to the second. For example, if a reaction occurs on a very fast time scale then we may be able to assume that the chemicals in that reaction are in equilibrium. This eliminates one of the differential equations from the system, making its solution more straightforward and less computationally intensive. Sometimes it may also be possible to formulate boundary or initial conditions for a model on a larger spatial or time scale in terms of the results of another model at a smaller scale. In Section 5 some examples of the application of these deterministic multi-scale modelling techniques in the field of heart modelling will be seen.
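To illustrate the semi-implicit idea, the sketch below (a toy example of our own devising; the two-variable system and all names are invented purely for illustration) advances a stiff linear term implicitly, so that only a trivial scalar solve is needed rather than the Jacobian of the full system, while the slow nonlinear term is advanced explicitly:

```python
def imex_euler(u0, v0, k_fast=1.0e3, dt=1.0e-2, n_steps=1000):
    """Semi-implicit (IMEX) Euler for a toy stiff ODE pair:

        du/dt = -k_fast * u + v    (fast linear term, treated implicitly)
        dv/dt = u - v**3           (slow nonlinear term, treated explicitly)
    """
    u, v = u0, v0
    for _ in range(n_steps):
        v_next = v + dt * (u - v ** 3)           # explicit: slow dynamics
        u = (u + dt * v) / (1.0 + dt * k_fast)   # implicit: fast dynamics
        v = v_next
    return u, v

print(imex_euler(1.0, 0.5))  # decays smoothly despite dt * k_fast = 10
```

A fully explicit Euler step would be unstable here (stability requires dt * k_fast < 2), while a fully implicit step would require solving a nonlinear system at every step; the splitting retains roughly the cost of the explicit method with the stability of the implicit one for the fast variable.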
In many areas of biology and physiology multi-scale and multi-physics models are still in their infancy. However, there are some areas in which such modelling is much more mature. In the two sections that follow we review the progress that has been made in simulating the behaviour of ion channels using MD and in whole heart modelling. In doing this we show how these fields have addressed the issues we described above. Between them, MD and heart modelling span most of the different scales discussed in Section 2, from quantum to organ level. They are also implicitly linked to one another, as it will be seen that the cellular level models used in heart modelling encapsulate the bulk behaviour of ion channels. The interface between the two fields lies (approximately) at the level where the transition between deterministic (heart) and stochastic (MD) modelling is found. That each is considered separately emphasises the difficulty in modelling across this divide. The two fields also provide a contrast in the way multi-scale modelling has progressed. Many of the (relatively recent) heart models have been developed with an integrated organ model in mind. In ion channel modelling, on the other hand, the models have generally been developed separately and only now are links between the different levels being introduced.

The key to the electrophysiological behaviour of the cell is the ion channel. Ion channels are the functional result of genes encoding transmembrane proteins which form nanoscale pores through the cell membrane. They are crucial to electrical signalling in nerves, muscles and synapses. The effects of drugs and hormones on cellular function are greatly influenced by ion channels, making them prime candidates for research. Ion channels exhibit a high level of permeability, but this is combined with extreme selectivity and a fine level of control over ionic flux (Roux, 2005). The latter two properties make ion channels intriguing, and raise them above the level of a simple bionanotube. Ion channels conduct at rates of up to 10 million ions per second, but are capable of differentiating between ions of similar sizes. Potassium channels conduct potassium ions at rates of up to 1000 times that of the slightly smaller sodium ion. This degree of selectivity is conferred by a sequence of five highly conserved residues precisely placed in the neck of the channel. The TXGYG motif is common to all potassium channels and, in the tetrameric arrangement of most potassium channels, forms the selectivity filter. The filter comprises a set of oxygen rings along the passage, of precisely the correct dimension for binding dehydrated K+ ions. K+ ions in solution have a hydration shell, and when they pass into the selectivity filter they dehydrate and bind to the carbonyl oxygen atoms in the filter. The precise positioning of the carbonyl groups means that the energetic gain from binding a K+ ion balances the dehydration cost, but is not enough to balance the cost of dehydrating a Na+ ion (Hille, 2001).

Channels are not necessarily passive tunnels from one side of the membrane to the other, but can often react rapidly to external stimuli to close and open the conduction pathway. Voltage-sensitive channels are equipped with a voltage sensor in the form of a collection of charged particles whose motion is controlled by the potential field across the membrane. This motion may cause conformational changes in the protein that affect conductance. Ligand-sensitive channels have receptor sites that bind chemical messengers or transmitters. The binding of the messenger to the receptor switches the channel between its open and closed states. Ligand-sensitive channels are sometimes referred to as ionotropic receptors.
They are usually specific to the ionic species to which they are permeable, and binding of the transmitter usually opens the channel. In a few cases, the binding decreases channel conductance by causing a closing of the channel. The binding causes a conformational change in the protein which brings about the change in conductance. When the bound ligand dissociates, the channel reverts to its resting state. Channels can also be gated by stretching (mechanosensitive channels) and by a ligand-sensitive mechanism where the receptor is not directly coupled to the channel (second messenger gating). The fine-grained control which ion channels are capable of exerting, and the precision required to perform it, leads directly to their Achilles' heel. Mutations in the genes coding for ion channels can cause very minor changes at the atomic level: a few atoms more or fewer, depending on which residue has been altered. These changes, however, can have devastating effects at the level of the whole body. Mutations in a calcium channel can cause epilepsy; cystic fibrosis arises from mutations in a chloride channel; and fatal cardiac arrhythmias are brought about through mutations in potassium channels. Clearly, these biomolecules exert an influence far beyond their immediate scale of operation; thus there is a very real requirement for multi-scale approaches.

Ion channel modelling has been performed at a variety of different scales using a variety of mathematical and computational techniques (Levitt, 1999; Kuyucak et al., 2001). Here we will concentrate on methods where the notion of individual particles is retained. In some cases, only the ions remain as discrete entities; in others, atoms are combined to form new types of particle (i.e. the macro-molecular level as described in Section 2.3). But we are still looking below the scale at which molecules are identical featureless entities. For ion channels, the processes we are most interested in are permeation and gating, as described above. Unfortunately, both of these events occur on microsecond time scales, which places them beyond the reach of completely atomistic simulations at present. This does not prevent these methods from giving us some insight into what is happening, and there are other methods which do allow us to study these events. Some of these methods are described below.

QM simulation methods are capable of giving a very accurate treatment of the system under study. The methods can be primarily ab initio, in which case they are extremely computationally intensive, reducing their applicability to a few atoms, or they may be semi-empirical, in which case they can be used for larger systems at the expense of accuracy. Either type of method is impractical to use for entire proteins, but desirable in cases where we wish to study bond breakage and formation, or the stabilisation of transition states, for example. To circumvent the problems present in the QM approach, in biomolecular simulation QM is typically combined with an empirical MM method, resulting in a hybrid system where the region of interest is represented quantum mechanically, but the surrounding protein/lipid/solvent is represented atomistically with an empirically derived potential function (QM/MM). Rega et al. (2004) applied the QM/MM technique to study proton hopping in the gramicidin A ion channel. The transport of ions along the gramicidin channel is regulated by a network of highly correlated hydrogen bonds.
Quantum mechanics is thus an ideal method to analyse the mechanism of proton translocation. However, to represent the complete gramicidin system at that level would be prohibitively expensive, so a molecular mechanical description is used for the remainder of the system.

Conventional wisdom in biochemistry is that protein structure leads directly to function. So if we could obtain accurate structures, surely we would immediately be able to understand their functions? Unfortunately life is not quite so simple. X-ray crystallography is capable of resolving protein structures to within 2 Å or better, but even this astonishing level of detail goes only a small part of the way to explaining protein function, as it is only a snapshot of the protein in one particular configuration of the millions that it may adopt. MD is a technique which addresses this concern, and provides insight into the dynamics of biomolecules. In MD simulations, the starting point is a set of atomic positions in three-dimensional space. The time evolution of these atoms is then simulated by repeatedly integrating their equations of motion according to Newton's second law. The laws of classical mechanics, as applied here, require an empirical potential energy function in order to be able to compute the forces acting on the atoms. Potential energy functions in MD typically include bond length potentials, bond angle potentials, torsional potentials and electrostatic potentials. In these simulations, the time step must be kept short enough so that the forces can be assumed to be constant and to ensure stability of the solutions to the differential equations. These restrictions usually result in time steps of around 2 fs, which confine total simulation times to the order of tens of nanoseconds. A typical MD ion channel simulation set-up consists of a protein structure embedded in a slab of lipid bilayer, surrounded on both sides by solvent, with ions and waters present in the channel in accordance with experiment (see Fig. 2). Confining the ion channel structure to the pore region leads to a total simulation size of approximately 50,000 atoms.

Fig. 2. Cutaway diagram of a typical MD potassium channel set-up. Two subunits of the tetramer have been removed for clarity. Shown are the lipid bilayer in a box of water, two subunits of the pore region of the protein, two ions in the filter separated by a water molecule, some water in the cavity and the waters positioned just outside the filter region.

MD simulations have examined both the permeation pathway of channels (e.g. Bernèche and Roux, 2000; Shrivastava and Sansom, 2000) and the gating mechanism (e.g. Beckstein et al., 2003). However, direct computation of ionic flux or gating motions is not yet feasible and there is a lack of fully resolved channel structures containing voltage sensors. We now look at an approach for gaining access to a longer time scale, concentrating on the permeation pathway. In the subsequent section, we will examine a method at the macro-molecular scale used to study the gating mechanism.
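As a small illustration of the integration loop described above, the sketch below implements velocity Verlet, a widely used MD integrator (this is our own minimal sketch, not code from any of the packages discussed later; forces_fn stands in for a full force-field evaluation, e.g. the Lennard-Jones routine sketched earlier):

```python
import numpy as np

def velocity_verlet(pos, vel, masses, forces_fn, dt=2.0e-15, n_steps=1000):
    """Integrate Newton's second law with the velocity-Verlet scheme.

    pos, vel: (N, 3) arrays; masses: (N,) array; forces_fn(pos) -> (N, 3).
    A dt of ~2 fs (as noted in the text) keeps the fastest bond vibrations
    resolved, which is what limits total simulated time to nanoseconds.
    """
    f = forces_fn(pos)
    for _ in range(n_steps):
        vel += 0.5 * dt * f / masses[:, None]    # first half-kick
        pos += dt * vel                          # drift
        f = forces_fn(pos)                       # forces at new positions
        vel += 0.5 * dt * f / masses[:, None]    # second half-kick
    return pos, vel
```

Velocity Verlet (or a close variant such as leapfrog) is favoured in practice because it is time-reversible and conserves energy well over long runs.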
To access longer time scales than is possible in MD, two major approximations are made in Brownian dynamics simulations. The first makes use of the fact that the ions pass through the channel on a time scale that is significantly shorter than major protein motions, and thus the protein may be described as a fixed collection of charges acting upon the ions. The other main approximation is to treat the solvent as implicit, thus avoiding the explicit calculation of the motion of several thousand water molecules. One could go a step further and use a continuum model for the ions themselves. However, the Poisson-Nernst-Planck (PNP) and Poisson-Boltzmann (PB) continuum theories have been shown to be unreliable as the size of the system under study approaches the Debye length (Moy et al., 2000).

The random movements of an ion in an electrolyte solution can be described as Brownian motion. The time evolution of a particle undergoing Brownian motion can be described by the Langevin equation, which contains both a dissipative force and a random force acting on the particle. Both these forces have the same physical origin, namely the random collisions between ions and water molecules in thermal agitation. However, when the ions are proximate to the ion channel they come under the influence of an electric force arising from four sources. The transmembrane potential creates an electric field which will influence nearby ions. Added to this field is the contribution from charged particles contained in the channel protein. All the ions in the electrolyte will also contribute to the field. The fourth contribution is a surface charge, of the same polarity as the ion, induced at the protein-water interface as the ion nears the protein wall. The contribution from all these sources must be accounted for when performing Brownian dynamics simulations (Chung and Kuyucak, 2002). Chung et al. (1999) applied Brownian dynamics to the potassium channel in their permeation study. They used an idealised representation of the channel, based on the structure obtained by Doyle et al. (1998). Their simulations produced current-concentration curves showing qualitative agreement with experiment, demonstrated the expected saturation property, and exhibited reversal potentials that agreed with those calculated by the Nernst equation.

Elastic network models fall loosely into the category of coarse-grained methods. They sit among a group of methods designed to access long time scale protein motions (Elber, 2005; Tozzini, 2005). Based on normal mode analysis, these methods are not suitable for direct prediction of permeation, but find their application in understanding gating and other long time scale channel motions. Normal mode analysis (NMA) is a standard technique for studying the long-time dynamics of biomolecules. It is frequently used for identifying the slowest motions of the molecular system. The normal modes of a molecule are given by absorptions in its vibrational spectrum. Normal mode analysis studies motions of small amplitude in a single-well potential to which a harmonic approximation has been made. The analysis is therefore confined to a conformational sub-state. Major conformational changes of the protein involve the crossing of energy barriers to move to different conformational sub-states, and are outside the limits of the approximation being made in this type of analysis. Until recently, complex potential terms comprising geometric distortions, steric repulsions, and van der Waals and electrostatic interactions were used to perform NMA. Tirion (1996) proposed a replacement of these potential terms with a simpler pair-wise Hookean potential controlled by a single parameter.
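The sketch below (our own illustration, closest in spirit to the anisotropic variant of these network models rather than to any particular published code) shows how little machinery such a model needs: a single spring constant, a distance cut-off and an eigendecomposition.

```python
import numpy as np

def anm_modes(coords, cutoff=10.0, gamma=1.0):
    """Normal modes of a Tirion-style elastic network.

    coords: (N, 3) array of C-alpha positions (angstroms). All pairs closer
    than `cutoff` are joined by Hookean springs of stiffness `gamma` (the
    single parameter). Returns eigenvalues and eigenvectors of the 3N x 3N
    Hessian: the six zero eigenvalues correspond to rigid-body motions; the
    next eigenvectors are the slowest collective motions of the protein.
    """
    n = len(coords)
    hess = np.zeros((3 * n, 3 * n))
    for i in range(n):
        for j in range(i + 1, n):
            d = coords[i] - coords[j]
            r2 = d @ d
            if r2 > cutoff ** 2:
                continue
            block = -gamma * np.outer(d, d) / r2    # off-diagonal super-element
            hess[3*i:3*i+3, 3*j:3*j+3] = block
            hess[3*j:3*j+3, 3*i:3*i+3] = block
            hess[3*i:3*i+3, 3*i:3*i+3] -= block     # diagonal blocks keep
            hess[3*j:3*j+3, 3*j:3*j+3] -= block     # the row sums zero
    return np.linalg.eigh(hess)
```

No energy minimisation or detailed force field enters at any point: the crystal structure itself is taken to be the energy minimum.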
Tirion's simplified potential is still sufficient to describe the low-frequency motions of large globular proteins, while removing the necessity of a computationally expensive energy minimisation step. The approach proposed by Tirion was developed by Bahar et al. (1997) into a model for proteins whereby the protein in its folded state is assumed to correspond to a three-dimensional elastic network. The junctions of the elastic network are described by the C-alpha atoms of the protein, and these undergo Gaussian-distributed fluctuations under the influence of nearby atoms. Using only the configuration provided by the crystal structure, the single parameter elastic network model is sufficient to model the C-alpha temperature factors of 12 proteins taken from the Protein Data Bank. Shrivastava and Bahar (2006) applied the network model technique to a set of potassium channel structures. Their analysis showed that all five channels displayed a similar corkscrew motion in opening the cytoplasmic end of the pore. This was despite sequential, structural and activational differences in the channels under study. Elastic network models use an extremely simple representation of the forces acting on the network's junctions, which are in turn simplified representations of the atomic structure of the protein. In the next section, we examine a method for coarse-graining atomic representations that derives its parameters from the underlying atomistic methods, and is one step closer to a multi-scale method.

Izvekov and Voth (2005) note that coarse-grained (CG) potentials can be expected to have less transferability than atomistic ones, because they are potentials of mean force, and may be sensitive to changes in temperature and other thermodynamic variables. It is therefore desirable to be able to re-derive CG potentials under the same thermodynamic conditions as those in which they will be used, and thus the efficiency of the derivation process becomes important. A multi-scale coarse-grain (MS-CG) force-matching method is proposed which can determine an effective pairwise force field from given trajectory and force data. Conventional force-matching methods use a least-squares approach, which is only tractable for small numbers of parameters; in biomolecular systems the number of parameters becomes prohibitively large. Izvekov and Voth (2005) apply a spline interpolation to the short-ranged part of the force, so that the force field to be fitted depends linearly on the parameters and the least-squares problem can be rewritten as an overdetermined system of linear equations which can then be solved efficiently (a worked example of this linear formulation is sketched below). They also found that force matching incorporated Coulombic effects despite omitting the Coulomb term while the force field was being fit. This was ascribed to the MS-CG force field capturing environmental screening effects. The MS-CG system allowed simulations to be run approximately 50 times faster than the underlying MD, and this improvement is expected to increase for larger systems due to the lack of explicit long-range interactions. However, dynamic properties are still unavailable from MS-CG. Extra terms such as Langevin friction and random force terms need to be included to be able to extract dynamical information from an MS-CG simulation. The MS-CG method was applied to the gramicidin A ion channel in Shi et al. (2006). This application is particularly interesting because it utilised an atomistic model for the protein while using a coarse-grain representation of the lipid bilayer and solvent molecules which surround the protein.
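The linear formulation referred to above can be illustrated in a few lines (a sketch under our own simplifying assumptions: a piecewise-linear 'hat' basis stands in for the cubic splines of the original method, and the pair distances and projected reference force components are assumed to have been extracted from the atomistic trajectory already):

```python
import numpy as np

def fit_pair_force(r_samples, f_samples, knots):
    """Least-squares force matching for one CG pair interaction.

    The candidate force f(r) = sum_k c_k * phi_k(r) is linear in the
    coefficients c_k, so matching the reference forces reduces to an
    overdetermined linear system that is solved directly by least squares.
    """
    a = np.zeros((len(r_samples), len(knots)))
    for row, r in enumerate(r_samples):
        i = int(np.clip(np.searchsorted(knots, r) - 1, 0, len(knots) - 2))
        t = (r - knots[i]) / (knots[i + 1] - knots[i])
        a[row, i], a[row, i + 1] = 1.0 - t, t     # hat-function basis
    coeffs, *_ = np.linalg.lstsq(a, f_samples, rcond=None)
    return coeffs   # tabulated force at each knot, for use in CG runs
```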
As with mixed QM/MM calculations, the mixed atomistic/coarse-grained approach of Shi et al. (2006) provides atomic detail where it is most needed, at significantly reduced computational cost, and it showed good agreement with the corresponding all-atom simulation.

Although models involving ion channels cover a wide range of spatial and temporal scales, there have been few published efforts aimed at combining multiple models of different scales into a coherent whole. One such effort was that of Mashl et al. (2001). They describe a hierarchical approach to predicting the permeation of the potassium channel KcsA. At the time of the study, no open structure of the KcsA channel was available, only the closed structure obtained by Doyle et al. (1998). Mashl and colleagues created molecular models of the open channel with different sizes of the pore entrance by rotating the M1 and M2 helices to obtain an open structure in agreement with available experimental data. The titratable residues of the models had their ionisation states calculated for the pH values of the open and closed channel. This enabled potential profiles to be computed for single potassium ions passing through the channel. MD simulations of potassium ions in the pore were performed to obtain velocity autocorrelation functions, from which the fluid dynamic diffusion coefficient was derived. This coefficient was in good agreement with the bulk diffusion coefficient of potassium ions in water. Having obtained the ionic mobility, Brownian dynamics simulations were then performed in one dimension to determine the ionic fluxes. One-dimensional Brownian dynamics effectively constrains the ion to move only in the axial direction of the channel. From the ionic flux measurements, current-voltage curves and average channel conductance were both shown to agree reasonably well with experiment. The mean number of ions in the channel at various transmembrane potentials was compared with the computed flux-ratio exponent, and the two were found to be in good agreement. As methods become more reliable and efficient, computers faster, and programs more widely available, approaches similar to the above are likely to become more common. Unfortunately, the disadvantage of multi-scaling in this way is that it requires the practitioner to be skilled in a wide range of techniques, increasing the demand in human terms. This problem can of course be offset through collaboration, both within academia and with industry.

A relatively small group of software packages has been developed over the course of more than 20 years to run MD simulations. Many of the simulations described above have been performed using one of these established packages. The software is typically very general, being able to deal with a wide range of different problems rather than being specific to ion channel simulations. Commonly used MD software packages include GROMACS (Lindahl et al., 2001; van der Spoel et al., 2005), CHARMM (MacKerell et al., 1998), AMBER (Case et al., 2006) and NAMD (Phillips et al., 2005). Each of these packages consists of a complex force field to describe interactions between the particles in a system and numerical algorithms to solve Newton's second law based on the forces predicted by these fields.
With the exception of NAMD (which began life in the mid-1990s as a parallel implementation of the AMBER force field), each of the packages was developed over 20 years ago in individual research groups (GROMACS at Groningen, CHARMM at Harvard, AMBER at UCSF). Since being released (each is freely available for academic use) they have been maintained and developed by large communities worldwide. New experimental results have led to updated force fields, and advances in computing to improved algorithms (in particular for parallel implementation) and performance. The use of established, general packages like these will be seen to contrast with the situation in heart modelling described in Section 5, where individual research groups write their own software to solve specific problems. For some of the more novel (and recent) techniques described above, such as coarse-graining, Brownian dynamics and the hierarchical approach, the situation is more like that in heart modelling: customised code is written for specific problems and is usually not freely available, even to other researchers.

Taking a look further up the scale from contemporary ion channel simulation, we can see opportunities to push back the boundaries of utility of molecular simulation. As mentioned at the beginning of this section, single residue mutations in potassium channels can affect electrophysiological behaviour at the scale of the whole heart. Cardiac cellular models are capable of reproducing the phenotype of these mutations (Gima and Rudy, 2002), but they rely on experimental data for their parameterisation. Obtaining these experimental data is both difficult and time consuming. If it were possible to incorporate simulated data into these models, then the effects of previously unknown mutations could be examined at the cost of mere computation time. At present, the cardiac cellular models descend in scale only as far as the level of modelling ion channels as multi-state Markovian entities, characterised by conductance and transition rates (Clancy and Rudy, 2002). These Markov models are derived from experimental data rather than simulation. Looking at the problem from the opposite perspective, the simulation of ion channels has extended as far up as predicting the permeation properties of a single channel. To complete the bridge from atomistic to cellular, one needs to provide a model for channel gating which can be derived purely from simulation. As the precise gating mechanism is unknown for most channels, and it cannot yet be simulated directly, this presents a serious but eminently worthwhile challenge.

A second multi-scale problem in the life sciences is the simulation of a beating heart. Within the heart, processes at the tissue, cellular, sub-cellular and even lower levels influence the organ level behaviour. Several coupled physical processes occur at each level and there are complex feedback mechanisms between the scales in both directions. Further, almost all of the processes occur on multiple time scales. These properties make the development of an accurate whole heart simulator very difficult, both in terms of the formulation of appropriate models and the development of appropriate numerical algorithms and powerful computers to make the solution of the resulting systems of equations tractable. Since the early 1990s (Henriquez et al., 1990; Panfilov and Holden, 1993; Panfilov and Keener, 1993) various groups have developed software to simulate cardiac behaviour.
A second multi-scale problem in the life sciences is the simulation of a beating heart. Within the heart, processes at the tissue, cellular, sub-cellular and even lower levels influence the organ level behaviour. Several coupled physical processes occur at each level, and there are complex feedback mechanisms between the scales in both directions. Further, almost all of the processes occur on multiple time scales. These properties make the development of an accurate whole heart simulator very difficult, both in terms of the formulation of appropriate models and in terms of the numerical algorithms and powerful computers needed to make the solution of the resulting systems of equations tractable. Since the early 1990s (Henriquez et al., 1990; Panfilov and Holden, 1993; Panfilov and Keener, 1993), various groups have developed software to simulate cardiac behaviour. Progress has been rapid, and by 2003 integrative models of the ventricles, the atria or the sino-atrial node were being used as paradigms for the development of a 'physiome' model of the whole human body (Hunter and Borg, 2003). In this review we focus mostly on ventricular models, and the term 'organ level model' will often be used to describe a representation of a whole ventricle (i.e. a level of organisation above the tissue level) rather than of the whole heart. Most cardiac models include processes acting at three levels: cellular, tissue and organ. There are two-way feedback mechanisms between each of these scales, so they cannot easily be uncoupled. Further processes at the sub-cellular level and below are implicitly embedded in the cell models that are used, and it may be necessary to include information from the organ system and organism levels in the future. (For instance, the heart receives feedback from the circulatory and cardiovascular systems.) Multiple time scales are also present in the heart: this can most easily be seen by noting that the time it takes for the heart to contract during a beat is much less than the time between beats. In addition to this multi-scale behaviour, the integrated heart also exhibits multi-physics, with interactions occurring between models of electrophysiology, tissue mechanics, metabolism and circulation (coronary and systemic). Here we outline some of the methods used to develop the models used in the simulators discussed later. In general, electrophysiology is considered at both the cellular and tissue levels and mechanics at the tissue level. These models are then embedded in an organ level representation of the geometry of the heart (or ventricle, atria or sino-atrial node). To date, models of metabolism and circulation have not been implemented in multi-physics cardiac simulators. Current simulators use a variety of different models of the electrophysiology of cardiac cells. The majority of these models use the Hodgkin-Huxley formulation (Hodgkin and Huxley, 1952) to represent ion channel dynamics. In this formulation, channels are described by 'gating variables' that are functions of the membrane potential; each gating variable represents the probability that a given ion channel (or part of a channel; many channels are controlled by more than one gating variable) is 'open'. When a channel is open, the transmembrane potential drives an ionic current through it, and the total transmembrane current is calculated by multiplying this nominal current by the proportion of channels that are open. Early models of cardiac cells included only two (FitzHugh, 1961; Nagumo et al., 1962) or three (Noble, 1960) variables. A second generation of cardiac cell models included ion concentrations (Beeler and Reuter, 1977), which were involved in regulating the values of the gating variables, and incorporated more ion channels as they were discovered experimentally (sometimes in response to predictions of the models, e.g. McAllister et al., 1975). State-of-the-art cardiac cell models can include a large number of nonlinear ODEs. (For example, the model of Noble et al. (1998) contains 23 ODEs.) They include terms to describe the currents due to ion pumps and exchangers (DiFrancesco and Noble, 1985; Luo and Rudy, 1991). Intra-cellular mechanisms, in particular the regulation of the Ca²⁺ concentration by the sarcoplasmic reticulum, can also be included (Luo and Rudy, 1994a, b; Faber and Rudy, 2000; Winslow et al., 2000).
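To make the gating-variable formulation concrete, the following sketch clamps the membrane potential and integrates the classical potassium gating variable n using the original squid-axon rate functions of Hodgkin and Huxley (1952); cardiac models follow the same pattern with different rates and powers. The clamped voltage and step count are arbitrary illustrative choices.

```python
import numpy as np

# Hodgkin-Huxley potassium gating variable n at a fixed (clamped) voltage,
# using the original squid-axon rate functions (ms^-1, potentials in mV).
def alpha_n(v):
    return 0.01 * (v + 55.0) / (1.0 - np.exp(-(v + 55.0) / 10.0))

def beta_n(v):
    return 0.125 * np.exp(-(v + 65.0) / 80.0)

v = -30.0                 # clamped membrane potential (mV)
g_K, E_K = 36.0, -77.0    # maximal conductance (mS/cm^2) and reversal potential (mV)
n, dt = 0.3177, 0.01      # resting value of n; time step (ms)
for _ in range(2000):
    dn = alpha_n(v) * (1.0 - n) - beta_n(v) * n   # dn/dt = alpha*(1-n) - beta*n
    n += dt * dn                                  # explicit Euler step
I_K = g_K * n**4 * (v - E_K)  # open fraction n^4 scales the maximal current
print(n, I_K)
```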
The Kyoto model of Matsuoka et al. (2003) and the model of Sachse et al. (2003) include sarcomere dynamics, allowing the cell to contract in response to changes in intra-cellular ionic concentrations. Stretch-activated ion channels have also been included in ionic models (Kohl et al., 1999; Healy and McCulloch, 2005), allowing for two-way feedback between the electrophysiological and mechanical models of the heart. Although there are a large number of (nonlinear) ODEs in these models, their individual solutions can be computed extremely quickly. However, problems may arise when a large number of cellular components are embedded in a tissue or organ level model: the cumulative expense of solving many thousands of complex cell models may be high. For this reason, many simulators still use the early models with few ionic currents. Further, Fenton and Karma (1998) have attempted to work back from the more complex models to determine a model of minimal ionic complexity that still accurately reproduces the cardiac action potential; the resulting model contains three channel currents (the same number as in some of the earliest cell models). The discussion of cell models in this section should not be considered exhaustive. There are many more models available than those mentioned here; in particular, only ventricular cell models have been discussed. A much wider selection of models (including descriptions of the physiology that they represent) can be found in the online CellML repository (www.cellml.org; see Lloyd et al., 2004). At the tissue level there is one main electrophysiological model: the bidomain model (Plonsey, 1988; Sepulveda et al., 1989; Henriquez et al., 1990). This model predicts the transmembrane potential by treating cardiac tissue as a homogenised two-phase medium. At each point in the solution space (which must be discretised for the model to make sense) there is an intracellular potential and an extracellular potential. In each space, current flow between the points is modelled by a cable equation. The difference between these two potentials drives a current across the cell membrane, which is modelled at each point as a capacitor connected in parallel with the ionic currents defined by the cellular level model. Thus, if the bidomain model is solved using FEs (or finite differences), a separate cellular model is embedded at each node of the mesh (or grid). Simplifications of the bidomain model are sometimes used to reduce the computational requirements of a simulation. The monodomain model is obtained by assuming either that the extracellular potential is constant or that the conductivity tensors of the two domains are proportional; in both cases the model reduces to a single equation for the transmembrane potential. The eikonal-diffusion model (Tomlinson et al., 2002) describes the propagation of a wave front within the cardiac tissue, but does not contain the more detailed description of the shape of the action potential given by the bidomain and monodomain models. Further, it cannot be coupled to cellular level models (as it only determines the location of the wavefront and has no transmembrane current term), and so cannot be used to model the whole heart consequences of cellular defects (such as long QT syndrome).
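For reference, one standard statement of the bidomain model described above, and of its monodomain reduction, is given below. Here φ_i and φ_e are the intra- and extracellular potentials, V_m = φ_i − φ_e, σ_i and σ_e are the conductivity tensors, β is the membrane surface-to-volume ratio, C_m the membrane capacitance per unit area, and I_ion the transmembrane current supplied by the cellular model.

```latex
% Bidomain model: currents leaving the intracellular space enter the
% extracellular space across the membrane
\nabla\cdot(\sigma_i \nabla \phi_i) = \beta\Big(C_m \frac{\partial V_m}{\partial t} + I_{\mathrm{ion}}\Big),
\qquad
\nabla\cdot(\sigma_e \nabla \phi_e) = -\beta\Big(C_m \frac{\partial V_m}{\partial t} + I_{\mathrm{ion}}\Big),
\qquad V_m = \phi_i - \phi_e.

% Monodomain reduction, assuming proportional conductivities
% \sigma_e = \lambda \sigma_i:
\nabla\cdot(\sigma \nabla V_m) = \beta\Big(C_m \frac{\partial V_m}{\partial t} + I_{\mathrm{ion}}\Big),
\qquad \sigma = \frac{\lambda}{1+\lambda}\,\sigma_i.
```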
All of the packages we discuss below for whole organ simulations use one of the above models of action potential propagation. However, other methods have been used for smaller-scale simulations. Early simulation methods (Moe et al., 1964; Restivo et al., 1990; Mitchell et al., 1992) used cellular automata models to allow their simulations to run more quickly; however, the requirement that each cell be in one of a small number of discrete states at any given time is physiologically unrealistic. Other studies, such as that of Winslow et al. (1993), have coupled cell models together into small networks by modelling the electrical properties of gap junctions. However, these models require the consideration of every cell-cell interface, demanding computational power well beyond that currently available. There is now a consensus that the bidomain model is the most accurate representation of the physiology that remains numerically tractable. Many different models have been suggested to describe the mechanics of cardiac tissue, with almost every simulator using a different one; nevertheless, there are many similarities between the various models. All use some type of nonlinear, finite elasticity model to describe the passive properties of the tissue. These can be broadly classified according to whether they use an exponential (Lin and Yin, 1998; Usyk et al., 2002) or a pole-zero law for the strain energy function. If the simulator uses a geometrical model of the heart (see below) that includes myofibre orientation, then the tissue is usually assumed to be either transversely isotropic (with the distinguished direction aligned with the fibres) or orthotropic. Where the mechanics is included in a multi-physics model, the coupling is generally achieved by including an 'active tension' term that causes the tissue to contract in response to the electrical activation (Negroni and Lascano, 1996; Hunter et al., 1998; Lin and Yin, 1998; Campbell et al., 2001; Usyk et al., 2002). Generally, the contraction first occurs when the Ca²⁺ concentration reaches some critical value, acts in the direction of the myofibres (if these are included in the geometry) and then tails off over time (a simple illustrative sketch is given below). In some cases (Negroni and Lascano, 1996; Campbell et al., 2001) the active tension is calculated by explicitly modelling the average behaviour of a large number of sarcomeres (which exist at the sub-cellular or cellular level). A natural extension would be to remove the averaging and include active tension at the cellular level; this process has been begun by Matsuoka et al. (2003). At the organ level, various representations of the geometry of real ventricles and atria, and of the orientation of the Purkinje fibres and myofibres within them, have been developed for use in simulations. Similar representations of the whole heart are not yet available, so here we concentrate on ventricular models. Anatomically accurate FE meshes have been constructed from experimental measurements of the hearts of dogs (Nielsen et al., 1991; LeGrice et al., 1995), pigs (Stevens and Hunter, 2003) and rabbits (Vetter and McCulloch, 1998). These models include realistic fibre architecture, which is key to the study of mechanical contraction (Kerckhoffs et al., 2006), of the dynamics of life-threatening arrhythmias (Efimov et al., 1995; Qu et al., 2000; Clayton et al., 2006) and of their termination (Rodriguez et al., 2005; Trayanova, 2006). Recent advances in imaging technology have allowed anatomically accurate representations of the human heart (including fibre orientations) to be constructed from MRI (Watanabe et al., 2004) and CT scans (Xia et al., 2005).
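The sketch below is a deliberately simple caricature of the active-tension coupling described above: tension switches on when the calcium concentration crosses a threshold, decays over time and acts along the fibre direction. All constants, the passive stress placeholder and the functional form are invented for illustration; they are not taken from any of the constitutive laws cited.

```python
import numpy as np

def active_tension(ca, t_since_onset, T_max=60.0, ca_thresh=0.6, tau=0.2):
    """Illustrative active tension (kPa): zero below a calcium threshold,
    otherwise a maximal value decaying with time since onset (s)."""
    if ca < ca_thresh:
        return 0.0
    return T_max * np.exp(-t_since_onset / tau)

# The total stress adds the active part along the fibre direction f:
# sigma_total = sigma_passive + T_a * outer(f, f)
f = np.array([1.0, 0.0, 0.0])           # fibre direction (unit vector)
sigma_passive = 2.0 * np.eye(3)          # placeholder passive stress (kPa)
T_a = active_tension(ca=1.0, t_since_onset=0.05)
sigma_total = sigma_passive + T_a * np.outer(f, f)
print(sigma_total)
```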
Also at the organ level, models of the coronary circulation have been developed. A technique for generating a realistic coronary arterial tree was developed by Kassab et al. (1993), and a model of the capillaries followed (Kassab and Fung, 1994). These models were proposed for the pig heart, but similar structures can be expected in the hearts of other animals. Smith et al. (2000, 2002) have proposed a two-scale model (large vessels and microvasculature) of the blood flow through the resulting circulatory system and embedded it within the canine heart mesh of LeGrice et al. (1995). However, these circulatory models have yet to be integrated into multi-physics models together with the mechanics or electrophysiology of the beating heart. The ultimate goal of heart modelling is to develop an integrated system of models spanning the cellular, tissue and organ scales, together with appropriate numerical algorithms and sufficient computing power to solve the resulting system of equations in a whole heart simulator. The most advanced cardiac simulators include models of both electrophysiology and mechanics (multi-physics) and of each of the cellular, tissue and organ levels (multi-scale). A simulation package has been developed at the University of Auckland to 'strongly couple' the electrophysiology and mechanics of the heart (Nickerson et al., 2001, 2005). At the cellular level, either the Noble model (Noble et al., 1998) or the Fenton-Karma model (Fenton and Karma, 1998) is used to determine the membrane potential, and a calcium transient generates an active tension. At the tissue level, the monodomain model is used to describe action potential propagation and the HMT model (Hunter et al., 1998) to describe the tissue mechanics. The simulations were run by modelling the left ventricle as a transversely isotropic (in both mechanical and electrophysiological terms) hollow prolate spheroid with a realistic fibre orientation. The coupling between the two physical processes is seen in the generation of an active tension dependent on the electrophysiology in the HMT model, and in the deformation of the domain on which the electrophysiological model is solved. The method used for solving the coupled system involves decoupling the electrophysiological and mechanical models at each time step and advancing in time using the explicit Euler method to update each in turn (a sketch of this scheme is given below). This is an improvement over weak coupling methods (where one type of physics is simulated over a long time frame and this solution is then used as input for the second type of physics), but it is not a truly tightly coupled system (in which both types of physics would be solved for simultaneously at each time step). A mixed FE (for the mechanics) and FD (for the electrophysiology) solution method is used, with the FD grid embedded in the FE mesh at a finer resolution. Simulation of 1 s of cardiac activity requires 3 weeks of computation on eight processors of an IBM Regatta P690 supercomputer, with parallelisation implemented using OpenMP. CMISS, the simulation package developed by the Auckland group, is freely available for academic use.
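The following zero-dimensional caricature shows the structure of the decoupled update scheme described above: the electrophysiology (here a single FitzHugh-Nagumo cell standing in for the monodomain solve) is advanced by explicit Euler, an active tension is derived from the electrical state, and then the 'mechanics' (here a single fibre-length variable standing in for the finite-elasticity solve) is advanced in turn. All parameters are illustrative, not those of any published model.

```python
# Minimal caricature of a decoupled electromechanical time loop.
dt, t_end = 0.01, 50.0
v, w = -1.0, 1.0          # FitzHugh-Nagumo state (electrophysiology)
L, L0 = 1.0, 1.0          # normalised fibre length (mechanics)
for step in range(int(t_end / dt)):
    I_stim = 0.5 if step * dt < 1.0 else 0.0   # brief stimulus
    # 1. electrophysiology step (explicit Euler)
    dv = v - v**3 / 3.0 - w + I_stim
    dw = 0.08 * (v + 0.7 - 0.8 * w)
    v, w = v + dt * dv, w + dt * dw
    # 2. active tension derived from the electrical state
    T_a = max(v, 0.0)
    # 3. mechanics step: length relaxes towards a tension-dependent target
    L += dt * (L0 * (1.0 - 0.2 * T_a) - L) / 0.5
print(v, L)
```

In a full simulator, steps 1 and 3 would each be a PDE solve on the current (deformed) geometry; the point of the sketch is only the alternating update, which is cheaper but less accurate than solving both physics simultaneously.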
In a similar study, Usyk and McCulloch (2003) have simulated a coupled electromechanical model of the heart on an oblate spheroid (Usyk et al., 2000) including fibre orientation and sheet architecture. At the cellular level they use the FitzHugh-Nagumo equations (FitzHugh, 1961; Nagumo et al., 1962) for the electrophysiology. As in the Auckland work, the mechanical properties are considered at the tissue level and are coupled to a monodomain model of electrical propagation, in this case with an additional diffusion term in the fibre direction (Usyk et al., 2002). The mechanical properties of the cardiac tissue are modelled using the orthotropic model of Usyk et al. (2000), which is responsible for the coupling between the two physical processes. Again, there is no direct feedback from the mechanical model to the electrophysiological model, but the electrophysiological model is solved on a mesh deformed according to the mechanical model. The (systemic) circulatory system is included via a ventricular pressure function coupled to a Windkessel model of arterial impedance (see below). The model is simulated using a FE mesh (finer for the electrophysiology than for the mechanics, as in the Auckland simulator) and an adaptive Runge-Kutta method for the time stepping (Usyk et al., 2002). The simulation of 1.2 s of cardiac activity requires 72 h of computational time on a single processor of a Silicon Graphics Origin 2000 (Usyk and McCulloch, 2003). This is very much faster than the comparable simulation performed by the Auckland group described above (recall that this took 3 weeks using eight processors). It is not clear exactly why there is such a difference in the simulation times, but one possible reason is that the mesh used in the UCSD work appears to be coarser (48 elements for the mechanical component) than that used in Auckland (128 elements for the mechanical component; both simulators use a finer mesh for the electrophysiology). Since the stability of the explicit Euler method depends on the grid spacing, this also suggests that a smaller time step is required for the Auckland simulation than for the UCSD one, further increasing the computational demands. The trade-off for the additional time required to simulate the system is that the Auckland results may be more accurate, owing to their finer mesh and smaller time step. The UCSD cardiac simulation package, Continuity 6, is distributed free for academic use by the National Biomedical Computation Resource (NBCR).
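A Windkessel model treats the arterial system as a resistor-capacitor analogue. The exact variant used by Usyk and McCulloch is not specified here, so the classical two-element form is shown as a generic illustration, with P the arterial pressure, Q(t) the ventricular outflow, R the peripheral resistance and C the arterial compliance.

```latex
% Two-element Windkessel: arterial pressure P driven by ventricular outflow Q
C\,\frac{\mathrm{d}P(t)}{\mathrm{d}t} + \frac{P(t)}{R} = Q(t).
```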
Another advanced whole ventricle simulator has been developed by Watanabe et al. (2004) at the University of Tokyo. The modelling is similar to that used by the Auckland and UCSD groups, with the FitzHugh-Nagumo model (FitzHugh, 1961; Nagumo et al., 1962) being used at the cellular level and coupled to the monodomain model at the tissue level. The mechanics of the heart tissue are modelled using the Lin-Yin constitutive law (Lin and Yin, 1998) and, since this does not include an active tension term, are coupled to the electrophysiology using the model of Negroni and Lascano (1996). Uniquely, the Tokyo model includes a third physical process: the flow of blood within the chambers of the heart. The blood is modelled as a Newtonian fluid and its motion is strongly coupled to the deformation of the cardiac tissue using an arbitrary Lagrangian-Eulerian (ALE) method (Zhang and Hisada, 2001). Simulations are performed using FE methods on a mesh generated from MRI images of human hearts. The mesh used has approximately 20,000 degrees of freedom (comparable to that used at UCSD), and a predictor-multicorrector algorithm based on the Newmark-β scheme is used for the time integration. No information is given about the computational resources required to run the simulations, although the stated need to use 'higher performance computers and parallel resources' in future suggests that the simulations are computationally intensive even on the relatively coarse mesh used here. A different approach to a multi-physics simulation of the beating heart is taken by Kerckhoffs et al. (2005), who have coupled a tissue level eikonal-diffusion model to a nonlinear elastic model of cardiac mechanics (Kerckhoffs et al., 2003). No cellular level model is required, as the membrane potential is fully determined at the tissue level. The mechanical model used (also at the tissue level) is due to Kerckhoffs et al. (2003) and is coupled to the electrical propagation model through an active tension that depends on the passive tension and on the time since depolarisation (i.e. the active tension is largest immediately after the wave front passes through a region). In simulations that use this method, only the propagating wave front is seen, not the entire action potential. Further, the two physical processes are only weakly coupled: the propagation of the electrical wave is determined first by solving the eikonal-diffusion equation, and this is then used as an input to the mechanical model. Hence there is no feedback mechanism between the two physical processes, reducing the accuracy of the model. However, the trade-off for these simplifications is that the computational time is significantly reduced: one complete cardiac cycle can be computed in approximately 5 h on a Silicon Graphics 64-bit Origin 2000 at 225 MHz (Kerckhoffs et al., 2005). Simulation of both physical processes uses tri-linear basis functions on a hexahedral FE mesh. Another interesting multi-physics whole heart simulator has been developed by McQueen and Peskin (2000). This simulator is unusual in that it has been developed by researchers with a background in computational fluid dynamics rather than in electrophysiology. Hence, the model does not have an electrophysiological component (usually the starting point for an integrative model of the heart), but instead couples the cardiac tissue mechanics to fluid flow within the chambers of the heart. The simulator uses more advanced numerical techniques than many of the alternatives in coupling these two physical processes (only the simulator of Watanabe et al. (2004) is comparable): they are updated simultaneously at each time step using an immersed boundary method (Lai and Peskin, 2000). Recently, Gurev et al. (2006) have coupled the bidomain model of electrophysiology to an exponential mechanical model of cardiac tissue in a two-dimensional sheet geometry in order to investigate the effect of ventricular dilation on defibrillation of the heart. At the cellular level, the Luo-Rudy dynamic model (Luo and Rudy, 1994a, b) is coupled to the electroporation model of DeBruin and Krassowska (1998) and to a modified version of the model of Zabel et al. (1996) for a stretch-activated channel. Interaction between the two physical processes is through stretch-activated ion channels that alter the transmembrane current; simulations were performed with the tissue either stretched or unstretched. (Electrical activation does not introduce an active tension or any other mechanism to cause contraction of the cardiac tissue.) An explicit FE method was used, with quadrilateral elements, bilinear basis functions and a spatial resolution of 100 µm.
To date, no organ level results for this model have been published, but the methods described could be generalised to anatomically accurate three-dimensional geometries. Although the final aim of heart modelling must be the development of multi-physics simulators that recreate the interactions between the different physical processes occurring in the heart, major advances have also been made using single-physics models of the heart, in particular models of its electrophysiology. Since the interactions between the physical processes are not a factor in these simulators, much more sophisticated models of the single physical process can often be used than in their multi-physics equivalents. Clayton and Holden (2002, 2004) have developed the Sheffield Cardiac Arrhythmia Model (SCAM), a package to simulate the generation of an electrocardiogram (ECG). As well as modelling the electrophysiology of cardiac tissue, SCAM also considers the propagation of the action potential through the whole torso and its detection by electrodes on the chest and back. The heart model is similar to that used by the Auckland group, with the Fenton-Karma equations coupled to the monodomain model and the explicit Euler method used to advance in time. Further, the Auckland canine heart model (Nielsen et al., 1991) is used to generate a FD grid at the organ level. Electrical propagation in the torso is also included (with the torso modelled as an isotropic Ohmic resistor) in order to simulate ECG generation. SCAM is noteworthy for its representation of behaviour at the organ system level (the inclusion of a whole torso model). Similar work has since been carried out by Pullan et al. (2003), who have embedded an electromechanical model of the heart within the torso in order to elucidate the contributions of contraction to the ECG. Typical results from a simulation using SCAM are shown in Fig. 3. More sophisticated FD simulations of the monodomain model on the rabbit ventricular model of Vetter and McCulloch (1998) have been carried out by Fenton et al. (2005). Several different cellular models (Fenton and Karma, 1998; Nygren et al., 1998; Fox et al., 2002) have been considered in these simulations. A phase-field approach is used to overcome the difficulty of applying Neumann boundary conditions in a FD scheme with complex geometry, by making the boundary spatially diffuse. The resulting simulator is very fast: 1 s of electrical activity takes approximately 1.5 h to simulate on a single 667 MHz Alpha processor using a grid with 2.7 million degrees of freedom, although this speed is at least in part due to the very simple cellular level model that is used. Note also that an alternative method for applying Neumann boundary conditions in a FD implementation, the immersed interface method, has been proposed by Dumett and Keener (2003). However, the simplest way to overcome these problems may be to use FE methods instead. For comparison, Nakazawa and Suzuki (2002) at the National Cardiovascular Center in Japan have used the much more complex Luo-Rudy dynamic model (Luo and Rudy, 1994a) in a FE simulation of the monodomain equations. On a mesh with roughly twice the number of degrees of freedom (5.64 million) as that of Fenton et al. (2005), 6 h of computational time are required to simulate each second of cardiac activity on an NEC SX4/16 supercomputer.
Some of this additional time will be due to the overhead of a FE implementation and the additional degrees of freedom, but a large proportion of it is due to the increased complexity of the cellular model (which, recall, is embedded at each node).

Fig. 3. Effect of regional differences in action potential shape and duration on repolarisation and the electrocardiogram in the canine heart. Top panel, left: action potentials for endocardial, mid-myocardial and epicardial cells during pacing at 1000 ms intervals. Top panel, right: APD restitution for the three cell types obtained in a thin strip of simulated tissue using an S1-S2 protocol, with a basic (S1) cycle length of 500 ms. The cell model used is the four-variable reduced model described in Cherry and Fenton (2004), with parameter sets fitted to experimental recordings from canine endocardial, mid-myocardial and epicardial cells as described in Cherry et al. (2003); the parameter sets were obtained by personal communication from E.M. Cherry and F.H. Fenton. Lower panel: colour-coded repolarisation times in long- and short-axis slices through the Auckland canine ventricle geometry (Nielsen et al., 1991), obtained using an explicit FD solution of the monodomain model with a space step of 0.25 mm, a time step of 0.05 ms, and anisotropic diffusion with coefficients of 0.002 cm² ms⁻¹ along fibres and 0.0005 cm² ms⁻¹ transverse to fibres. The four images show repolarisation patterns during steady pacing at 500 ms intervals from an epicardial distribution of sites of earliest activation based on experimental data (Scher and Spach, 1979). The top two images show repolarisation with uniform epicardial cells; the lower two show repolarisation patterns obtained during steady pacing with layers of endocardial, mid-myocardial and epicardial cells of equal thickness. The right-hand panel shows a simulated electrocardiogram assuming the heart is immersed in an infinite volume conductor. The traces shown were computed for the uniform epicardial model (blue) and for the heterogeneous model (red), showing that the different pattern of repolarisation associated with heterogeneity produces an upright instead of a biphasic T wave. Courtesy of Dr. Richard Clayton.

All of the simulators discussed so far in this section and in Section 5.4 use the monodomain model for the propagation of the electrical action potential, with the exceptions of Kerckhoffs et al. (2005), where only the activation time rather than the complete action potential is calculated, and Gurev et al. (2006), where the simulations are restricted to the tissue level and a two-dimensional geometry. This is because the numerical solution of the bidomain equations is 'computationally intensive' compared with that of the monodomain equations (Vigmond et al., 2002). The additional time required to solve the bidomain equations means that they have not yet been implemented in a multi-physics whole heart (or whole ventricle) simulator. However, single-physics electrophysiological simulations using the bidomain equations and realistic heart geometries have been performed. The group led by Trayanova have developed a bidomain simulator for the rabbit ventricle based on the UCSD geometry and fibre orientation (Trayanova, 2006).
The model has been used to simulate cardiac defibrillation (Rodriguez et al., 2004, 2005; Trayanova, 2006) as well as the application of mechanical impacts to initiate or terminate arrhythmias (Li et al., 2004, 2006). At the cellular level, the Trayanova group use Luo-Rudy-type models of varying complexity (including the Beeler-Reuter, LR1 and LRd models). Ashihara et al. (2004) have extended the Nakazawa and Suzuki (2002) simulator to one that couples the bidomain model to an improved Luo-Rudy cardiac cell model (Faber and Rudy, 2000). Vigmond et al. (2003) have written a package known as the Cardiac Arrhythmia Research Package (CARP) for the simulation of the bidomain equations at the whole organ level. CARP is constructed in a modular way, so that any cardiac cell model and FE mesh can be used at the cellular level. Plank et al. (2007) have used CARP with various cellular models to simulate defibrillation on the UCSD rabbit ventricular geometry of Vetter and McCulloch (1998) (see Fig. 4). The modelling techniques used by both the Trayanova group and the developers of CARP are broadly similar and have been described by Vigmond et al. (2002), who evaluate a number of different techniques for the numerical solution of the bidomain equations. They suggest that the equations can be solved efficiently by decoupling the extra- and intra-cellular potentials and advancing in time using Euler's method for each in turn. The method is implemented in parallel using MPI. FE methods are preferred to finite differences or finite volumes since they deal better with the irregular geometry, curved boundaries and inhomogeneous conductivity of the heart. Observe that these simulations of the bidomain equations use more complex cellular models than many of the monodomain simulators discussed above. The use of these more advanced cell models is possible because the reduced computational demands of a single-physics simulator, in comparison with a multi-physics one, outweigh the increase in complexity that results from switching from a monodomain to a bidomain model. Recent results (Plank et al., 2007) show that the diffusion solve in CARP can be accelerated further by applying algebraic multigrid preconditioning before running the conjugate gradient-based linear solver that produces the intra- and extra-cellular voltages (a sketch of this combination is given below). The published benchmark results show that CARP is faster than any of the other bidomain solvers; notably, the parallel efficiency of the benchmark tests remains above 75% for up to 32 processors in a distributed memory cluster. The authors have tested CARP on up to 128 processors and have seen that parallel efficiency remains high (above 50%) on four 32-way nodes of the UK national high-performance computing facility HPCx. However, it must be stressed that the simulation of real-world problems remains computationally bound: the benchmarks use a simplified ionic cell model (Beeler and Reuter, 1977), but still take about 10 h of computation time to solve 200 ms of activity (about half a beat of a rabbit's heart) on 16 processors. In order to model realistic disease, such as the onset of a heart attack, one would need to be able to calculate heart function over a period of at least 15 min (about 5000 times longer).
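The combination just described, conjugate gradients preconditioned by an algebraic multigrid V-cycle, can be sketched in a few lines. The 2D Poisson matrix below is only a stand-in for the real cardiac stiffness matrix (an assumption for illustration); the pattern of building an AMG hierarchy and passing it to CG is the general technique, not CARP's actual code.

```python
import numpy as np
import scipy.sparse.linalg as spla
import pyamg

# The elliptic solve in the bidomain equations yields a large sparse symmetric
# positive-definite system at every time step; a Poisson matrix stands in here.
A = pyamg.gallery.poisson((200, 200), format='csr')
b = np.random.default_rng(0).random(A.shape[0])

# Build a smoothed-aggregation AMG hierarchy and use one V-cycle per CG
# iteration as a preconditioner.
ml = pyamg.smoothed_aggregation_solver(A)
M = ml.aspreconditioner(cycle='V')

x, info = spla.cg(A, b, M=M)   # info == 0 indicates convergence
print(info, np.linalg.norm(A @ x - b))
```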
Since most computational heart models are based on cardiac cells and heart geometry from relatively small mammals (rabbit, dog or pig), there is a further level of complication when it comes to modelling a human heart attack: human cell models contain about four times as many parameters as the average rabbit cell model, the geometry is larger and the heart beat is slower. Clearly, major advances in the development of an effective heart simulator have been made in a short space of time (15 years), especially with regard to electrophysiology and, to a lesser extent, mechanics. Detailed models have been developed from the cellular to organ levels, coupled together and simulated on powerful computers to reproduce the dynamics of a beating (healthy) heart over short time periods or to study defibrillation. However, much work still remains to be done in order to develop a multi-physics simulator that is sufficiently accurate, and sufficiently fast, for use in real-world applications involving coupling the electrophysiology and mechanics to models of the circulatory system and metabolism (perhaps in silico testing of drugs or personalised medicine). Such a simulator would likely need to use the bidomain equations as part of a multi-physics model; no such simulator exists at the moment. In order to model an abnormal heart or drug action (rather than a normally beating heart), it will also be necessary to develop models describing unhealthy cardiac cells and tissue. These models are at a much less advanced stage (see the review by Bassingthwaighte et al., 2005).

Fig. 4. A rabbit ventricular geometry model (Vetter and McCulloch, 1998) with smooth epicardial and endocardial surfaces and anatomically realistic fibre orientation was discretised using an unstructured grid with an average discretisation of 250 µm. The bidomain equations were discretised on this non-uniform grid using linear tetrahedral elements with a time step of 8 µs. The active membrane behaviour was described by the rabbit ventricular Puglisi model (Puglisi and Bers, 2001), incorporating an electroporation current (DeBruin and Krassowska, 1998) and a hypothetical I_a current (Cheng et al., 1999). Details of the applied numerical methods are found in Plank et al. (2007). Two plate electrodes, a stimulation electrode and a grounding electrode, were used to stimulate the ventricles by delivering a train of ten pulses (see top panel); subsequently, electric activity was simulated for another 2 s. Simulations were carried out with C_m = 1 µF cm⁻² and β = 1400 cm⁻¹. Initially, conductivity along the fibres was set to σ_il = 1.74 mS cm⁻¹ and σ_el = 6.25 mS cm⁻¹, and transverse to the fibres to σ_it = 0.19 mS cm⁻¹ and σ_et = 2.36 mS cm⁻¹, in the intracellular and interstitial domains, respectively (Clerc, 1976). The conductivity of the surrounding fluid was set to σ_b = 1.0 mS cm⁻¹. To determine suitable parameters leading to re-entry under the given protocol, the basic cycle length (BCL) of the pulses and the wavelength of the tissue, λ, were varied. From the series of simulations, a standard was chosen with a BCL of 200 ms and λ reduced to 0.66 of the nominal value (as computed with the default conductivity settings), leading to a sustained figure-of-eight re-entry circulating around the apex (see bottom panels). Courtesy of Dr. Gernot Plank.

One area in which significant advances could be made is in improving numerical methods. In a recent publication (Whiteley, 2006a), it is shown that the speed of solution of the cell models can be vastly improved (the solution is computed roughly 15 times faster) by using implicit numerical methods for integrating the differential equations.
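One reason implicit methods pay off here is that the gating equations are stiff but linear in each gating variable at a given voltage, so a backward Euler step has a closed-form solution. The sketch below illustrates this with invented rate values; it is a generic demonstration of the idea, not the scheme of Whiteley (2006a).

```python
# Backward Euler for dn/dt = a*(1 - n) - b*n at fixed voltage (a, b constant):
# n_{k+1} = n_k + dt*(a*(1 - n_{k+1}) - b*n_{k+1})
# which rearranges to the closed form below, stable for any dt > 0.
def gate_step_implicit(n, a, b, dt):
    return (n + dt * a) / (1.0 + dt * (a + b))

# With these (illustrative) stiff rates, explicit Euler at dt = 0.5 diverges
# (its amplification factor is 1 - dt*(a + b) = -3), while the implicit step
# relaxes monotonically to the steady state a/(a + b).
a, b, dt, n = 5.0, 3.0, 0.5, 0.0
for _ in range(20):
    n = gate_step_implicit(n, a, b, dt)
print(n, a / (a + b))
```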
Furthermore, the use of partial evaluation and look-up tables (Cooper et al., 2006) may increase efficiency by a further factor of five; combining these speed-ups gives an overall speed-up of around 40. Further unpublished results by Whiteley suggest that a judicious combination of a coarse spatial mesh with fine spatial meshes in the regions where cell activation is under way may speed computation up by a factor of 2500 (compared with computation on a uniformly fine mesh) with minimal loss of accuracy. The combination of all of Whiteley's and Cooper's techniques should therefore provide a speed-up of several orders of magnitude. However, to our knowledge, only a few of these techniques have been implemented in the solution of real cardiac problems at the time of writing. We might speculate that a single processor could then solve 15 min of bidomain simulation time in less than a day of real time. Even if these speed-ups are realised, computationally challenging problems remain. During a coronary occlusion (heart attack), portions of the heart tissue are starved of oxygen and become infarcted. Over time, the functionality of the neighbouring tissue adapts to the new environment, a phenomenon known as re-modelling. Experimentalists have produced data from which models of healthy, infarcted and re-modelled tissue may be derived, but the longitudinal question of how tissue morphs from its healthy state has yet to be investigated. In order to model the variation of a single cell, we may need to augment the known models of healthy cells with many more parameters. Re-modelling after an occlusion in the atria typically takes a day, whereas re-modelling after an occlusion in the ventricles (more likely to be fatal) continues over the course of several weeks. In the period following a heart attack, the victim becomes susceptible to cardiac arrhythmia; to counteract this susceptibility, clinicians may prescribe a drug treatment and/or allow the heart to be shocked by a pacemaker. The upshot is that properly modelling the onset and treatment of cardiac infarction requires the solution of a bidomain simulation of the entire heart over weeks of simulated time, involving a more complicated (and as yet unknown) cell model together with the ion-channel effects of particular drug treatments. Such a simulation would be many orders of magnitude harder to solve than any recent study, and would still not provide the realism of electromechanical coupling or of the flow of blood in the heart cavities and coronary supply. Another ambitious project within the sights of computational cardiac modellers is to run long-QT drug trials in computational models. At present, many potential prescription drugs fail to come to market because they are discovered to cause the side-effect of prolonging the QT interval of the ECG in some patients. This side-effect (which is normally unrelated to the original clinical purpose of the drug) is dangerous because it may lead to cardiac arrhythmias and fibrillation in a significant proportion of the people taking the medication.
Clinical testing on humans occurs towards the end of the drug-development cycle, after vast resources have already been spent. If computer models of cardiac function could be used earlier in the cycle to screen for long-QT effects, then the drug discovery process could be streamlined. In order to do this, we would need to be able to run cardiac models for several weeks of simulated time, in both healthy and diseased heart models, and to model the interaction of drugs at the cell-membrane level. In the previous two sections we have considered two contrasting approaches to the multi-scale computational modelling of biological problems. MD simulations have, until recently, usually been restricted to a single scale. At each length scale, advanced and very general simulation packages have been developed over a period of more than 20 years, based on well established force-field models and solution algorithms. These packages can deal with a wide range of different types of problem (e.g. they are not specific to ion channels). The switch to a less detailed level of description (quantum to molecular to coarse-grained) occurs either when the simulations begin to require unrealistically large amounts of time to run or when the amount of memory required to calculate the solution exceeds that available. In heart modelling, on the other hand, much more work has been done on coupling together models from different levels of biological organisation and from different physical processes. The models themselves are at a much more dynamic stage of development, with new models being proposed continually. The software used to compute solutions of the model equations is also more ad hoc in nature than in MD, with almost every active research group writing its own packages to simulate its models. In describing these two fields we have seen that significant strides have already been taken towards the efficient and accurate solution of multi-scale problems in the life sciences. Heart modellers have begun to develop efficient algorithms to couple together several very different models with two-way feedback, and then to compute a numerical solution of the combined system efficiently. They are now beginning to encounter some of the computational issues that have already been addressed in MD (simulation times measured in weeks rather than hours, and the need for more memory than is ever available). Similarly, the increasing use of multi-scale models in fields such as ion channel modelling means that researchers in MD are being faced with problems similar to those already being considered in heart modelling. In both fields, parallel computing is used extensively to reduce the time needed to complete a simulation. In the longer term we are likely to want to model on very long time scales, or across the interface between stochastic and deterministic processes, in order to understand how very low-level (molecular and macro-molecular) events give rise to higher-level macroscopic behaviour. This type of work is beginning to be carried out in projects such as the Physiome Project of the International Union of Physiological Sciences (IUPS) (Kohl et al., 2000; Hunter et al., 2002; Hunter and Borg, 2003), which aims to construct a 'quantitative description of the physiological dynamics and functional behaviour of the intact organism' (Bassingthwaighte, 1995).
The requirements of projects such as these will give rise to new problems that have not been considered until now. Increased accuracy (both in the models and in the numerical techniques used to solve them) is needed in order to simulate very long time scales without the model solution gradually drifting away from the true behaviour of the system as the simulation progresses. The causes of this drift can be difficult to identify, as the problem will not manifest itself in short test runs of the simulator. Solutions rooted in the biology of the system, such as thermodynamically consistent cell models, might be developed to reduce this drift. In order to model across the divide between stochastic and deterministic processes, it is possible that entirely new modelling techniques will have to be developed. The development of these techniques will likely be necessary in order to explain fully how molecular level interactions (between drugs and proteins) lead to whole organ phenomena such as long-QT syndrome and, ultimately, the death of whole organisms. Finally, the computational requirements of simulating the very complex models that will result from these types of problem will require the use of very powerful, massively parallel supercomputers and the development of improved numerical algorithms that exploit this parallelism efficiently.

Acknowledgements

We are grateful to Professor Mark Sansom and Dr. Blanca Rodriguez for valuable discussions, to Dr. Gernot Plank for sending a preprint of his work and for useful discussions, and to Elizabeth Cherry and Flavio Fenton, who provided the parameter set for the action potential model shown in Fig. 3 (by personal communication to Richard Clayton). D.S. acknowledges support from an EPSRC Life Sciences Interface Doctoral Training Centre studentship, and we thank the EPSRC for support through the eScience Pilot Project in Integrative Biology (GR/S72023/01).

References

Spiral wave control by a localized stimulus: a bidomain model study
Direct evaluation of thermal fluctuations in proteins using a single-parameter harmonic potential
Toward modelling the human physiome
Multi-scale modeling of cardiac energetics
Ion channel gating: insights via molecular simulations
Reconstruction of the action potential of ventricular myocardial fibres
Molecular dynamics of the KcsA K⁺ channel in a bilayer membrane
A multi-scaled approach for simulating chemical reaction systems
Modelling RBC and neutrophil distribution through an anatomically based pulmonary capillary network
Nonlinear myofilament regulatory processes affect frequency-dependent muscle fiber stiffness
The influence of growth-induced stress from the surrounding medium on the development of multicell spheroids
Nonuniform responses of transmembrane potential during electric field stimulation of single cardiac cells
Suppression of alternans and conduction blocks despite steep APD restitution: electrotonic, memory and conduction velocity restitution effects
Effects of wall heterogeneity in an anatomically realistic model of canine ventricles: a simulation study (Abstract)
Recent advances in ion channel research
Permeation of ions across the potassium channel: Brownian dynamics studies
Na⁺ channel mutation that causes both Brugada and long-QT syndrome phenotypes: a simulation study of mechanism
Computational framework for simulating the mechanisms and ECG of re-entrant ventricular fibrillation
Filament behaviour in a computational model of ventricular fibrillation in the canine heart
Phase singularities and filaments: simplifying complexity in computational models of ventricular fibrillation
Directional differences of impulse spread in trabecular muscle from mammalian heart
On the application of partial evaluation to the optimisation of cardiac electrophysiological simulations
Tests of continuum theories as models of ion channels. II. Poisson-Nernst-Planck theory versus Brownian dynamics
Computational physiology and the physiome project
Electroporation and shock-induced transmembrane potential in a cardiac fiber during defibrillation-strength shocks
A model of cardiac electrical activity incorporating ionic pumps and concentration changes
The structure of the potassium channel: molecular basis of K⁺ conduction and selectivity
A numerical method for solving anisotropic elliptic boundary problems in 3D
Dynamics of rotating vortices in the Beeler-Reuter model of cardiac tissue
Long-timescale simulation methods
Action potential and contractility changes in [Na⁺]i overloaded cardiac myocytes: a simulation study
Mathematical modelling of tumour growth and treatment
Vortex dynamics in three-dimensional continuous myocardium with fiber rotation: filament instability and fibrillation
Modelling wave propagation in realistic heart geometries using the phase-field method
Impulses and physiological states in theoretical models of nerve membrane
Ionic mechanism of electrical alternans
Biomechanics: Mechanical Properties of Living Tissues
Cellular open resource (COR): a public CellML-based environment for modelling biological function
Ionic current basis of electrocardiographic waveforms: a model study
Molecular dynamics simulations of proteins in lipid bilayers
Cardiac defibrillation and the role of mechanoelectric feedback in postshock arrhythmogenesis
A grand challenge for computing: full reactive modeling of a multi-cellular animal
A Turing-like test for biological modeling
An ionic model of stretch-activated and stretch-modulated currents in rabbit ventricular myocytes
A planar slab bidomain model for cardiac tissue
Ion Channels of Excitable Membranes
A quantitative description of membrane current and its applications to conduction and excitation in nerve
Continuum biomechanics of soft biological tissues
Integration from proteins to organs: the Physiome Project
A 'pole-zero' constitutive law for myocardium
Modelling the mechanical properties of cardiac muscle
Finishing the euchromatic sequence of the human genome
A multiscale coarse-graining method for biomolecular systems
Introduction to Computational Chemistry
Topology and dimensions of pig coronary capillary network
Morphometry of pig coronary arterial trees
Homogeneity of cardiac contraction despite physiological asynchrony of depolarization: a model study
Electromechanics of paced left ventricle simulated by straightforward mathematical model: comparison with experiments
Computational methods for cardiac electrophysiology
Systems biology: a brief overview
Stretch-induced changes in heart rate and rhythm: clinical observations, experiments, and mathematical models
Computational modelling of biological systems: tools and visions
Models of permeation in ion channels
An immersed boundary method with formal second-order accuracy and reduced numerical viscosity
Laminar structure of the heart: ventricular myocyte arrangement and connective tissue architecture in the dog
Modeling of ion channels
Induction of ventricular arrhythmias following mechanical impact: a simulation study in 3D
Myocardial ischemia lowers precordial thump efficacy: an inquiry into mechanisms using three-dimensional simulations
A multiaxial constitutive law for mammalian left ventricular myocardium in steady-state barium contracture or tetanus
GROMACS 3.0: a package for molecular simulation and trajectory analysis
CellML: its future, present and past
A model of the ventricular cardiac action potential. Depolarization, repolarization and their interaction
A dynamic model of the cardiac ventricular action potential. I. Simulations of ionic currents and concentration changes
A dynamic model of the cardiac ventricular action potential. II. Afterdepolarizations, triggered activity and potentiation
CHARMM: the energy function and its parameterization with an overview of the program
Hierarchical approach to predicting permeation in ion channels
Role of individual ionic current systems in ventricular cells hypothesized by a model study
Reconstruction of the electrical activity of cardiac Purkinje fibres
A three-dimensional computer model of the human heart for studying cardiac fluid dynamics
Quantum free energies of the conformers of glycine on an ab initio potential energy surface
Cellular automaton model of ventricular fibrillation
A computer model of atrial fibrillation
Tests of continuum theories as models of ion channels. I. Poisson-Boltzmann theory versus Brownian dynamics
An active pulse transmission line simulating nerve axon
Virtual heart: a comprehensive simulation and visualization of electrophysiological phenomena in the heart
A cardiac muscle model relating sarcomere dynamics to calcium kinetics
A model of cardiac cellular electromechanics
New developments in a strongly coupled cardiac electromechanical model
Mathematical model of geometry and fibrous structure of the heart
Cardiac action and pacemaker potential based on the Hodgkin-Huxley equations
Models of cardiac ventricular action potentials: iterative interaction between experiment and simulation
Improved guinea-pig ventricular cell model incorporating a dyadic space, IKr and IKs, and length- and tension-dependent processes
Mathematical model of an adult human atrial cell: the role of K⁺ currents in repolarization
Computer simulation of re-entry sources in myocardium in two and three dimensions
Re-entry generation in anisotropic twisted myocardium
Scalable molecular dynamics with NAMD
Algebraic multigrid preconditioner for the cardiac bidomain model
Bioelectric sources arising in excitable fibres (ALZA lecture)
Cancer Modelling and Simulation
LabHEART: an interactive computer model of rabbit ventricular myocyte ion channels and Ca transport
Cardiac electrical activity: from heart to body surface and back again
Scroll wave dynamics in a three-dimensional cardiac tissue model: roles of restitution, thickness and fibre rotation
Hybrid ab initio/empirical molecular dynamics: combining the ONIOM scheme with the atom-centered density matrix propagation (ADMP) approach
A logical state model of re-entrant ventricular activation
Transmission dynamics of the etiological agent of SARS in Hong Kong: impact of public health interventions
Cardiac vulnerability to electric shocks during phase 1A of acute global ischemia
Differences between left and right ventricular chamber geometry affect cardiac vulnerability to electric shocks
Ion conduction and selectivity in K⁺ channels
Computational biology in the study of cardiac ion channels and cell electrophysiology
Quantitative reconstruction of cardiac electromechanics in human myocardium: assembly of electrophysiological and tension generation models
Cardiac depolarisation and repolarisation and the electrocardiogram
Numerical and experimental study of steady-state CO₂ and inert gas washout
Molecular Modeling and Simulation: An Interdisciplinary Guide
Multi-scale modeling in biology
Current injection into a two-dimensional anisotropic bidomain
Mixed atomistic and coarse-grained molecular dynamics: simulation of a membrane-bound ion channel
Common mechanism of pore opening shared by five different potassium channels
Simulations of ion permeation through a potassium channel: molecular dynamics of KcsA in a phospholipid bilayer
Generation of an anatomically based geometric coronary model
An anatomically based model of transient coronary blood flow in the heart
Multi-scale computational modelling of the heart
How well can simulation predict protein folding kinetics and thermodynamics?
Challenges in deriving high-confidence protein identifications from data gathered by a HUPO plasma proteome collaborative study
Sarcomere length changes in a 3-D mathematical model of the pig ventricles
Generation of an anatomically based three-dimensional model of the conducting airways
A coupled cluster and full configuration interaction study of CN and CN⁻
Computer simulations of transport through membranes: passive diffusion, pores, channels and transporters
Simulation approaches to ion channel structure-function relationships
Large amplitude elastic motions in proteins from a single-parameter, atomic analysis
A finite-element method for an eikonal equation model of myocardial excitation wavefront propagation
Coarse-grained models for proteins
Defibrillation of the heart: insights into mechanisms from modelling studies
Relationship between regional shortening and asynchronous electrical activation in a three-dimensional model of ventricular electromechanics
Effect of laminar orthotropic myofiber architecture on regional stress and strain in the canine left ventricle
Computational model of three-dimensional cardiac electromechanics
GROMACS: fast, flexible and free
Three-dimensional analysis of regional cardiac function: a model of rabbit ventricular anatomy
Computational techniques for solving the bidomain equations in three dimensions
Computational tools for modeling electrical activity in cardiac tissue
Multi-physics simulation of left ventricular filling dynamics using fluid-structure interaction finite element method
Meeting report: the future and limits of systems biology
An efficient numerical technique for the solution of the monodomain and bidomain equations
Some factors affecting pulmonary oxygen transport
The effects of ventilation pattern on carbon dioxide transfer in three computer models of the airways
Generation and propagation of ectopic beats induced by spatially localized Na-K pump inhibition in atrial network models
Electrophysiological modeling of cardiac ventricular function: from cell to organ
Analysis of cardiac ventricular wall motion based on a three-dimensional electromechanical biventricular model
Effect of sustained load on dispersion of ventricular repolarization and conduction time in the isolated intact rabbit heart
Analysis of fluid-structure interaction problems with structural buckling and large domain changes by ALE finite element method