Studying First Passage Problems using Neural Networks: A Case Study in the Slit-Well Microfluidic Device

Andrew M. Nagel, Martin Magill, and Hendrick W. de Haan

Date: 2022-04-26

This study presents deep neural network solutions to a time-integrated Smoluchowski equation modeling the mean first passage time of nanoparticles traversing the slit-well microfluidic device. This physical scenario is representative of a broader class of parameterized first passage problems in which key output metrics are dictated by a complicated interplay of problem parameters and system geometry. Specifically, whereas these types of problems are commonly studied using particle simulations of stochastic differential equation models, here the corresponding partial differential equation model is solved using a method based on deep neural networks. The results illustrate that the neural network method is synergistic with the time-integrated Smoluchowski model: together, these are used to construct continuous mappings from key physical inputs (applied voltage and particle diameter) to key output metrics (mean first passage time and effective mobility). In particular, this capability is a unique advantage of the time-integrated Smoluchowski model over the corresponding stochastic differential equation models. Furthermore, the neural network method is demonstrated to easily and reliably handle geometry-modifying parameters, which is generally difficult to accomplish using other methods.

Micro- and nanofluidic devices (MNFDs) are tools that can be used to manipulate or detect molecules with high precision [1-5]. For instance, the slit-well MNFD was proposed by Han and Craighead [6] as a tool for sorting otherwise free-draining polymers, such as DNA, according to chain length. The same device has also been shown to induce separation of free-draining nanoparticles by size [7, 8]. The slit-well is operated by electrophoretically forcing analytes across a periodic array of deeper regions (wells) and shallower regions (slits) between two fixed planes (see Fig. 1). Its sorting effect has been comprehensively studied through theoretical, numerical, and experimental investigations, which have identified a variety of distinct mechanisms that are relevant in different operational regimes. At a high level, the sorting effect depends nonlinearly on the size of the analytes and the magnitude of the applied electric field, as well as the shape and size of the device's wells and slits [2, 7-11]. In particular, depending on the choice of these parameters, the mobility of analytes can be made either increasing or decreasing with respect to molecule size.

The practical relevance of biotechnologies such as MNFDs has been stressed in the last year; for instance, Berkenbrock et al. [12] surveyed the potential of microfluidics as a means of rapidly testing large numbers of people for COVID-19 infections. Shepherd et al. [13] studied a parallelized MNFD that generated scalable lipid nanoparticle formulations needed for applications in RNA therapeutics and vaccines. Nonetheless, the design and optimization of MNFDs is often challenging, as it entails simultaneously considering the influence of many design parameters (e.g., operating voltage, solvent composition, device geometry, etc.) on multiple nonlinearly interdependent phenomena.
In many cases, important biological phenomena can fruitfully be modeled as first passage processes [14]. Moreover, in the study of MNFDs, key transport phenomena are often captured by only the first few moments of an appropriate first passage time distribution. For example, the translocation of a polymer through a nanopore is aptly described as a first passage process, and the mean translocation time is a widely studied metric [15-17]. Magill et al. [18] showed that, for the special class of MNFDs with periodic geometries featuring small bottlenecks, the long-term dynamics of molecules driven through the system depend exclusively on the first and second moments of their first passage times across one subunit of the device. The ability to focus on a handful of first passage time moments can greatly simplify the problem of characterizing and designing MNFDs.

This emphasis on the first few moments of the first passage time is of particular interest in light of a convenient mathematical property of the Smoluchowski equation that describes the motion of analytes through MNFDs. (The Smoluchowski equation is also variously known as the Kolmogorov forward equation, the Fokker-Planck equation, or the convection-diffusion equation, with certain names more common in certain areas of application.) For instance, the dynamics of nanoparticles electrophoretically driven through an MNFD can be modeled by the Smoluchowski equation as

\[ \frac{\partial \rho}{\partial t} = \nabla \cdot \left( D \nabla \rho - \mu \vec{E} \rho \right), \qquad (1) \]

where ρ is the position distribution of the particles over space and time, D and µ are the diffusion and free-solution mobility coefficients of the particles, and E is the applied electric field. In first-passage problems where the domain geometry and applied fields are time-invariant, Eqn. 1 can be integrated over time to obtain the time-integrated Smoluchowski equation [19]

\[ -\rho_0 = \nabla \cdot \left( D \nabla g_0 - \mu \vec{E} g_0 \right), \qquad (2) \]

where ρ0 is the initial condition for ρ (since every particle is eventually absorbed, ρ vanishes as t → ∞, so the time integral of ∂ρ/∂t reduces to −ρ0). The new field g0 is defined as

\[ g_0(x, y) := \int_0^\infty \rho(x, y, t) \, dt. \qquad (3) \]

The integral of g0 in any region is the average residence time of particles in that region between initialization and absorption. In particular, it therefore has the property that

\[ \int_\Omega g_0 \, d\Omega = \tau \qquad (4) \]

when Ω is the entire spatial domain, where the first passage time of a particle to the absorbing boundary conditions is a stochastic quantity and τ denotes its mean, the mean first passage time (MFPT). Moreover, this formulation can be extended recursively to all higher-order moments as well. For instance, the field g1 satisfying

\[ -2 g_0 = \nabla \cdot \left( D \nabla g_1 - \mu \vec{E} g_1 \right) \qquad (5) \]

has the property that it integrates over the spatial domain to yield the second moment of the first passage time. In this work, we will refer to the time-integrated Smoluchowski equation (Eqn. 2) as the g0 equation, Eqn. 5 as the g1 equation, and the hierarchy of equations collectively as the moment equations. A more comprehensive discussion of these moment equations can be found in standard references such as Redner [19].

Since the first few moments of first passage time distributions are so important to MNFD phenomena, it is natural to wonder whether solving the moment equations directly might be a useful line of investigation. In practice, however, it appears that this is rarely done. Redner [19] shows the power of the moment equations for theoretical analysis of first passage problems, especially in the purely diffusive regime where direct analogies with electrostatics can be made. In the context of MNFDs, Magill et al. [18] showed that measuring g0 approximately via particle simulations can aid in understanding the effect of design parameters on system dynamics.
Similarly, Wang et al. [8] analyzed plots of the time-integrated particle position densities in a periodic model of the slit-well device; however, these maps were constructed in a manner subtly different from g 0 , and in particular do not have the property of integrating to the MFPT. The authors are unaware of other studies in which the moment equations are solved numerically towards the goal of understanding the effect of MNFD design parameters on first passage time behaviors. Moreover, even though the Smoluchowski equation is also an important mathematical model to study first passage time problems outside biophysics [20] [21] [22] [23] , we have found no examples in which the g 0 equation (nor any of the higher moment equations) were studied numerically in applied contexts. A major barrier to the goal of solving the moments equations numerically in biophysics is the so-called curse of dimensionality. That is, for most common numerical methods for partial differential equations (PDEs), the computational cost grows exponentially in the dimensionality of the underlying domain. Thus, whereas highly effective techniques like the finite element method (FEM) can be used to solve PDEs in simple biophysical scenarios, like that of noninteracting nanoparticles, they fail when applied to the high-dimensional PDEs describing the dynamics of many-body systems such as polymers. Indeed, particle-based simulation methods do not exhibit the curse of dimensionality, and this can be seen as a major reason for the dominance of particle simulations over PDE-based calculations in biophysics. In this work, we attempt to resolve this barrier using a new numerical method for PDEs that does not suffer from the curse of the dimensionality. The technique, which we refer to as the neural network method (NNM), is inspired by the success of deep learning at solving highdimensional problems in machine learning, such as image processing and natural language processing [24] [25] [26] . A growing body of theoretical and numerical evidence suggests that it can robustly solve high-dimensional PDEs [27] [28] [29] [30] [31] [32] [33] [34] [35] [36] [37] [38] [39] [40] [41] . In particular, the NNM has already been used to study high-dimensional problems in biophysics [38] . The NNM has also been shown to solve parameterized problems directly across a continuous range of parameter values [28, 42] . As the number of parameters increases, the problem of solving a highly parameterized PDE can exhibit yet another curse of dimensionality. Because the neural network method shares information across parameter space, it is also able to overcome this computational challenge [43] [44] [45] . Note that parameterized solutions to PDEs typically cannot be obtained using FEM, particle simulations, or similar methods. Rather, this goal is usually accomplished using reduced order modeling (ROM) techniques [46, 47] . ROM methods typically interpolate between a relatively small number of high-fidelity solutions computed at a handful of reference points in parameter space in order to approximate solutions at new points in parameter space. Whereas most classical ROM methods interpolate to new parameter choices via a linear combination of the reference solutions, the NNM is intrinsically nonlinear. Other nonlinear ROMs based on deep neural networks have been proposed in the literature [48] [49] [50] [51] [52] . 
However, these methods require a database of FEM or other classical solutions to be computed prior to training, whereas the NNM simultaneously solves the target PDE and acts as an ROM method over parameter space. In addition, dealing with parameters that modify the domain geometry using classical reduced-order methods can be challenging, as these are typically constructed using mesh-based approaches. Although special ROMs can be developed for geometric parameters in some cases [53-55], the mesh-free nature of the NNM is intrinsically advantageous for this application [42, 56].

The primary goal of this paper is to study the effectiveness of the NNM as a tool for solving the g0 equation in MNFDs by focusing on a sufficiently complicated representative device, as shown in Fig. 2. The specific problem under consideration is as follows: for an ensemble of thermal particles initially located in the left slit of one periodic subunit of the slit-well device, compute the MFPT of these particles to the right slit. Here, the particles are driven by an electric field E = λE0, for a field strength constant λ and a baseline electric field E0. The baseline electric field is a solution to Laplace's equation for a voltage drop of 2 across the domain. It was obtained using the NNM in the manner described in Magill et al. [57], and plots of E0 are included here in App. B. The particles represent nanoparticles with diameters σ, diffusion coefficients D, and free-solution mobilities µ. The nanoparticles are assumed to be free-draining, with µ independent of σ, so that separation by size would not occur in free solution. Conversely, the diffusion coefficient is assumed to emerge from Stokes' law and the fluctuation-dissipation theorem, so that D ∝ σ^{-1}. For simplicity, these behaviors are implemented as µ = 1 and D = σ^{-1}. The g0 equation is thus reduced to

\[ -\rho_0(\mathbf{x}; \sigma) = \nabla \cdot \left( \sigma^{-1} \nabla g_0 - \lambda \vec{E}_0 \, g_0 \right), \qquad (6) \]

where the particle size σ and field strength λ are the two free parameters, and E0 is the reference electric field.

The problem geometry is shown in Fig. 2, along with depictions of the particle-based, Smoluchowski, and g0 representations of the problem. The domain is meant to represent a single periodic subunit of the slit-well device illustrated in Fig. 1. In the particle-based model of the problem [Fig. 2(a)], an ensemble of noninteracting nanoparticles is initially located in the left slit, and these particles then move under a combination of thermal diffusion and electrophoretic drift until reaching the far right purple wall in the right slit. In the Smoluchowski model [Fig. 2(b)], individual particles are eschewed, and the time evolution of the entire distribution of particle positions is modeled instead. Here, the initial position of particles is modeled by the initial condition ρ0(x; σ), located in the left slit. Finally, in the g0 equation [Fig. 2(c)], the time-dependence of the Smoluchowski model is accounted for implicitly by integration over all time. Here, the initial condition ρ0(x; σ) now appears as a source term in the (time-independent) PDE. In each schematic of Fig. 2, the grey regions represent physical walls. These were modeled as short-range repulsive boundaries in the particle model (see App. A) or as no-flux boundary conditions in the continuum models [equations defined in the legends of Fig. 2(b) and (c)]. As a result of excluded volume interactions, the particle centers cannot come closer than a distance of roughly σ/2 from the repulsive boundaries.
This exclusion zone is depicted by the dashed black line in Fig. 2. To model this in the continuum models, the no-flux boundary conditions are applied at the boundary of the exclusion zone (i.e., along the dotted black lines in Fig. 2), rather than at the nominal boundaries (i.e., along the gray walls in Fig. 2). The nominal dimensions of the domain Ω0 are the same as those described in Magill et al. [57]. In particular, the topmost and bottommost walls are a distance 2L_y = 6.25 apart, the leftmost and rightmost boundaries are 2L_x = 10 apart, and the curvature of the re-entrant corners is set to R = 1 (see below). The total horizontal lengths of the slits and the well were set equal, to L_x, and the slits were given a height of h_slit = L_x/4 = 1.25. As discussed in Magill et al. [57], the standard formulation of the NNM struggles to solve problems exhibiting singularities. For this reason, the re-entrant corners of the slit-well device geometry have been rounded (i.e., represented by circular arcs of finite curvature). Similarly, the NNM was found to perform poorly when the initial distribution of particles was too sharp. Instead, particles were initialized in a Gaussian band in the left slit, given by a uniform distribution in y multiplied by a Gaussian distribution in x:

\[ \rho_0(x, y; \sigma) = \frac{1}{h_\mathrm{slit}} \, \frac{1}{\sqrt{2 \pi r_s^2}} \, \exp\!\left( -\frac{(x - x_s)^2}{2 r_s^2} \right). \]

Here r_s = 0.25 is the width of the Gaussian band in the x direction, h_slit = L_y − y_slit − σ is the height of the band in the y direction, and x_s = −L_x + 1 is the center of the band. Technically, ρ0 requires a correction factor to be properly normalized over this bounded domain, as the Gaussian distribution in x is normalized over the entire real line, but the discrepancy is numerically insignificant (a brief numerical check is sketched below).

The first passage time of the particles is computed when their centers cross the rightmost boundary of the domain for the first time (purple in Fig. 2). In the continuum models, this is represented by an absorbing boundary condition (i.e., a homogeneous Dirichlet condition). Physically, this boundary corresponds to the interface between consecutive periodic subunits of the slit-well, and not to a physical wall. As such, in contrast to the no-flux boundary condition on the gray walls, the placement of the absorbing boundary does not depend on σ. As a simplifying assumption, particles were prevented from moving through the leftmost boundary of the domain. Mathematically, this was imposed by a no-flux boundary condition. Physically, this corresponds to the synthetic condition that particles cannot move against the direction of the imposed electric field into the previous periodic subunit of the slit-well; we will refer to this as the no-backflow condition. The location of this no-backflow boundary condition was fixed independently of σ. Of course, in the actual slit-well device there is always a nonzero probability of particle backflow. The simplification was made here because it allows the g0 equation to be posed in a much simpler domain (i.e., a single periodic subunit). However, as a result of this modeling choice, there will be discrepancies between the MFPT results reported in this paper and the results of previous studies of the slit-well device (such as Cheng et al. [7] and Wang et al. [8]), especially at low electric field strengths. Nevertheless, as the results in Sec. IV will indicate, the major features of the slit-well system are preserved despite the no-backflow condition.
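As a quick check on that last claim, the short script below computes how much of the real-line-normalized Gaussian factor actually falls outside the left slit for the geometry quoted above. It assumes, for illustration only, that the left slit spans −L_x ≤ x ≤ −L_x/2 (i.e., −5 ≤ x ≤ −2.5); the variable names are ours and are not taken from the authors' code.

```python
from scipy.stats import norm

# Geometry and initial-condition values quoted in the text
# (the slit extent used here is an assumption made for this illustration).
L_x = 5.0
r_s = 0.25               # width of the Gaussian band in x
x_s = -L_x + 1.0         # center of the band (x_s = -4)
slit_left, slit_right = -L_x, -L_x / 2.0

# Mass of the real-line-normalized Gaussian that lies inside the left slit.
inside = norm.cdf(slit_right, loc=x_s, scale=r_s) - norm.cdf(slit_left, loc=x_s, scale=r_s)
print(f"Gaussian mass inside the left slit: {inside:.8f}")
print(f"Normalization error if the correction factor is omitted: {1.0 - inside:.1e}")
```

For these values the omitted mass is on the order of 10^{-5}, consistent with the statement that the correction is numerically insignificant.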
Furthermore, the simplified model still contains several mathematical features that are expected to be common to many MNFDs and particularly difficult for the NNM to resolve: re-entrant corners, a nonuniform electric field, and nontrivial dependence on physical and geometric problem parameters. As stated above, the purpose of this paper is to study the performance of the NNM when solving a problem with the characteristic features of a typical MNFD. Certain features, such as the highly skewed geometry of the fully periodic slit-well and the singularities associated with the fully sharp re-entrant corners, are more technically challenging and relegated to future work.

The NNM implementation used for this work was similar to that previously described by Magill et al. [57]. In the fixed parameter experiments (Sec. IV B 1), the true solution g0(x) of the g0 equation (Eqn. 6) was approximated by a deep neural network g̃0(x) trained to minimize a composite loss functional

\[ L[\tilde{g}_0] = L_\mathrm{PDE}[\tilde{g}_0] + L_\mathrm{BC}[\tilde{g}_0] + L_\mathrm{norm}[\tilde{g}_0]. \qquad (9) \]

The first loss term consisted of

\[ L_\mathrm{PDE}[\tilde{g}_0] = \frac{1}{|\Omega|} \int_\Omega \left( F[\tilde{g}_0] + \rho_0 \right)^2 d\Omega, \]

where F is the operator on the right-hand side of the g0 equation (Eqn. 6), so that F[g̃0] + ρ0 is the local residual of Eqn. 6. Thus, L_PDE[g̃0] quantifies the extent to which g̃0 satisfied the g0 equation (Eqn. 6) throughout the domain Ω. Note that, as discussed in Sec. II, Ω depends on σ. The second loss term was defined as

\[ L_\mathrm{BC}[\tilde{g}_0] = \frac{1}{|\partial\Omega|} \int_{\partial\Omega} \left( B[\tilde{g}_0] \right)^2 dS, \]

where B[g̃0] denotes the residual of the boundary conditions on ∂Ω. The final term was given by

\[ L_\mathrm{norm}[\tilde{g}_0] = \left( \frac{1}{|\Omega|} \int_\Omega \left( F[\tilde{g}_0] + \rho_0 \right) d\Omega \right)^2. \]

Similarly to L_PDE, the last loss term L_norm quantifies the extent to which the approximate solution satisfies the PDE inside the domain Ω. However, whereas L_PDE is a local measure of the residual of Eqn. 6, L_norm is a global measure. Specifically, L_PDE is the mean of the squared residual, while L_norm is the square of the mean residual. In theory, L_norm is a redundant loss term, and simply setting L_PDE to zero is sufficient to ensure that g̃0 satisfies the g0 equation (Eqn. 6). In practice, however, training without L_norm was found to produce approximate solutions that captured the shape of the true solution fairly accurately, but struggled to converge on the correct magnitude (i.e., differed from the true solution by a small multiplicative factor). We note that our use of L_norm is similar to the normalization process used by Al-Aradi et al. [58] in solving the time-dependent Smoluchowski equation. The use of L_norm can also be contrasted with previous work by Avrutskiy [59]. There, Avrutskiy [59] showed benefits to adding redundant loss terms that encourage the solution to satisfy derivatives of the PDE F = 0 over the spatial domain. Here, the term L_norm is a redundant loss term that encourages g̃0 to agree with the integral of the PDE operator F over the spatial domain Ω. These additional loss terms can be thought of as soft constraints on the training process, or equivalently as regularization terms constructed out of prior knowledge of the target problem.

[FIG. 3. Fully connected feedforward neural network of width w and depth d mapping coordinates (x, y) and problem parameters (λ, σ) to an output g̃0(x, y; λ, σ). At each node, a weighted sum of the incoming arrows and a bias is computed and passed through an activation function. The network's parameters are optimized such that g̃0(x, y; λ, σ) approximately satisfies the target PDE and BCs.]

In Secs. IV B 2-4, the neural network was parameterized with respect to field strength λ, particle size σ, or both. Thus, the loss terms were redefined as

\[ L_\mathrm{PDE}[\tilde{g}_0] = \left\langle \frac{1}{|\Omega_\sigma|} \int_{\Omega_\sigma} \left( F_{\lambda,\sigma}[\tilde{g}_0] + \rho_0 \right)^2 d\Omega \right\rangle, \qquad (12) \]

\[ L_\mathrm{BC}[\tilde{g}_0] = \left\langle \frac{1}{|\partial\Omega_\sigma|} \int_{\partial\Omega_\sigma} \left( B_{\lambda,\sigma}[\tilde{g}_0] \right)^2 dS \right\rangle, \qquad (13) \]

\[ L_\mathrm{norm}[\tilde{g}_0] = \left\langle \left( \frac{1}{|\Omega_\sigma|} \int_{\Omega_\sigma} \left( F_{\lambda,\sigma}[\tilde{g}_0] + \rho_0 \right) d\Omega \right)^2 \right\rangle. \qquad (14) \]

The notations Ω_σ, F_{λ,σ}, and B_{λ,σ} indicate that the domain changes with σ, and the PDE and BC operators change with both λ and σ.
The angled brackets indicate averages over the parameter values. In other words, the loss used for the parameterized neural networks is identical to that used for the fixed parameter experiments, with the additional step of averaging the loss over parameter space. Note that the electric field E was obtained by computing the electric potential u using the NNM methodology of Magill et al. [57]. Of course, this is not strictly necessary, as u could just as easily be approximated by some other method (e.g., FEM). However, the intention was to illustrate the ease with which previously computed NNM solutions can be fed into the loss functional of new NNM solutions. A contour plot illustrating both u and E is included in App. B (Figure 8). Note that the electric potential is defined on the nominal domain Ω0 corresponding to σ = 0, which differs from the actual domain Ω on which g0 is defined.

All of the NNM experiments in this work were conducted with fully-connected feedforward neural networks of depth d = 3 and width w = 50. The hyperbolic tangent was used for activation functions in the hidden layers, while the output layer was linear. To solve the nonparameterized problems (Sec. IV B 1), the approximate solution g̃0 was constructed as

\[ \tilde{g}_0(x, y) = f_{d+1} \circ f_d \circ \cdots \circ f_1(\mathbf{x}), \qquad \mathbf{x} = (x, y), \]

with

\[ f_i(\mathbf{z}) = \tanh\left( W_i \mathbf{z} + \mathbf{b}_i \right) \ \text{for } i = 1, \ldots, d, \qquad f_{d+1}(\mathbf{z}) = W_{d+1} \mathbf{z} + b_{d+1}, \]

where

\[ W_1 \in \mathbb{R}^{w \times 2}, \quad W_i \in \mathbb{R}^{w \times w} \ \text{for } i = 2, \ldots, d, \quad W_{d+1} \in \mathbb{R}^{1 \times w}. \]

The experiments in Secs. IV B 2-4 considered parameterized neural networks, where one or both of the problem parameters λ and σ were included as additional inputs to the network. In these cases, the networks were defined as

\[ \tilde{g}_0(x, y; \mathbf{m}) = f_{d+1} \circ f_d \circ \cdots \circ f_1(\mathbf{x}; \mathbf{m}), \]

with f_i defined as before for i = 2, …, d + 1, but with f_1 adjusted to

\[ f_1(\mathbf{x}; \mathbf{m}) = \tanh\left( W_1 \begin{bmatrix} \mathbf{x} \\ \mathbf{m} \end{bmatrix} + \mathbf{b}_1 \right), \]

where

\[ W_1 \in \mathbb{R}^{w \times (2 + m)}, \]

and m is the number of entries in the parameter vector (one or two, depending on whether λ, σ, or both are included). In other words, the parameters (λ or σ or both) were concatenated to the end of the input vector of the network, and the weight matrices were adjusted accordingly. This is illustrated schematically in Fig. 3, and an illustrative sketch of the architecture in code is given below. The same approach was used by Sirignano and Spiliopoulos [28] and Hennigh et al. [42], but can be contrasted with the recently proposed DeepONet architecture of Lu et al. [60].

Training was conducted in TensorFlow [61] version 1.15 with all unspecified hyperparameters set to their default values. The weights were initialized using the Glorot method [62], and biases were initialized to zero. Weights were iteratively updated using the Adam optimizer [63] to minimize L, with the learning rate set to 10^{-3} in Sec. IV B 1, and to 10^{-4} in Secs. IV B 2-4. In each iteration, the integrals in L were approximated by Monte Carlo sampling using the same procedure described in Magill et al. [57]. Specifically, rejection sampling was applied to 10,000 nominal samples generated in the bounding box [−5, 5] × [−3.125, 3.125], and each smooth subunit of the boundary was randomly sampled with a linear density of about 13 points per unit length. For the parameterized network experiments (Secs. IV B 2-4), the relevant problem parameters were also sampled randomly in each training iteration. These samples were generated uniformly at random, with λ drawn from [5, 50] and σ drawn from [0.125, 0.625]. In particular, it was necessary to sample the parameter σ before sampling points in Ω, since the extent of Ω varies with σ. One random parameter vector was drawn per training iteration. The testing loss was evaluated every 1000 training iterations, using ten times more samples than during a training step. In the parameterized network experiments (Secs. IV B 2-4), the testing loss was averaged across 100 random parameter vectors.
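The following NumPy sketch makes the architecture described above concrete: a depth-3, width-50 tanh network with a linear output layer and Glorot-style initialization, whose first weight matrix simply gains extra columns when problem parameters are concatenated to the coordinates. It is an illustration only, not the authors' TensorFlow 1.15 implementation, and all function and variable names are ours.

```python
import numpy as np

def init_network(n_inputs, width=50, depth=3, n_outputs=1, seed=0):
    """Glorot-style initialization of a fully connected tanh network."""
    rng = np.random.default_rng(seed)
    sizes = [n_inputs] + [width] * depth + [n_outputs]
    params = []
    for n_in, n_out in zip(sizes[:-1], sizes[1:]):
        limit = np.sqrt(6.0 / (n_in + n_out))      # Glorot uniform limit
        W = rng.uniform(-limit, limit, size=(n_out, n_in))
        b = np.zeros(n_out)                        # biases initialized to zero
        params.append((W, b))
    return params

def forward(params, xy, m=None):
    """Evaluate the network at coordinates xy, optionally appending parameters m."""
    z = xy if m is None else np.concatenate([xy, m])
    for W, b in params[:-1]:
        z = np.tanh(W @ z + b)                     # tanh hidden layers
    W, b = params[-1]
    return (W @ z + b)[0]                          # linear output layer

# Fixed-parameter network: inputs are (x, y) only, so W1 has 2 columns.
net_fixed = init_network(n_inputs=2)
print(forward(net_fixed, np.array([0.0, 1.0])))

# Parameterized network: inputs are (x, y, lambda, sigma), so W1 has 2 + 2 columns.
net_param = init_network(n_inputs=4)
print(forward(net_param, np.array([0.0, 1.0]), m=np.array([25.0, 0.375])))
```

In the actual method, the weights of such a network are trained with Adam to minimize the Monte Carlo estimate of the loss functional; the sketch only shows the forward evaluation and the change in the first-layer input dimension when parameters are appended.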
Training was continued for a fixed number of iterations (600,000 epochs for the fixed parameter experiments, and 30,000,000 epochs for the parameterized network experiments). The final network was taken as that which achieved the lowest testing loss across all iterations.

The MFPT problem in the slit-well domain cannot be solved exactly in closed form due to the complex nature of the geometry. Instead, approximate ground truth solutions to the problem were obtained using the finite element method (FEM). Following Magill et al. [57], the problem for g0 was solved using a mixed FEM formulation implemented in FEniCS [64]. The electric field E included in the PDE (Eqn. 6) was obtained by also approximating the electric potential u by a mixed FEM formulation. As stated above, u is defined on the nominal domain Ω0, whereas g0 is defined on a smaller domain depending on σ. Thus, for the FEM solutions it was necessary to first solve u and E on a discretization of Ω0, project E onto a discretization of the appropriate Ω, and then define the variational problem for g0 on Ω. The mesh decomposition of the domain was conducted using the mshr package in FEniCS. The resolution parameter was set to 200, and the re-entrant corners were approximated linearly by 400 segments each. The same mesh parameters were used for all values of σ, and for the nominal domain, Ω0, on which u was solved.

This section details results obtained using the NNM to solve the g0 equation modeling the MFPT of nanoparticles driven through the slit-well device (described in Sec. II). The focus throughout is on the relationship between key problem parameters and observables of physical interest, where g0 acts as a proxy between the two. The first observable of interest is, naturally, the mean first passage time τ. As described in Equation 4, τ can be obtained by integrating g0 over the domain Ω. Throughout this paper, the integration of g0 to estimate τ is accomplished using the same Monte Carlo procedure described for L_PDE in Sec. III.

[FIG. 4. NNM solutions to the g0 equation subject to (a) a small particle size and a weak electric field, (b) a small particle size and a strong electric field, (c) a large particle size and a weak electric field, and (d) a large particle size and a strong electric field.]

In practice, an observable of greater interest than the mean first passage time itself is the net electrophoretic mobility of the nanoparticles through the slit-well device over long timescales [7, 8]. In particular, the electrophoretic mobility is typically defined as the long-time limit of ⟨x⟩_t/(t E_c), where ⟨x⟩_t is the ensemble average of the x position at time t, and E_c is a characteristic scale for the applied electric field strength. It is not clear whether µ_electro can be inferred directly from the g0 problem being solved here. Instead, the present paper will investigate a similar observable of interest, which will be called the effective mobility

\[ \mu_\mathrm{eff} := \frac{L_0}{\tau E_c} = \frac{L_0^2}{\lambda \tau}, \qquad (22) \]

where L0 is the mean horizontal distance from ρ0 to the absorbing wall. The characteristic field strength is chosen of the form E_c = V_c/L_c, where V_c is a characteristic voltage drop and L_c is a characteristic length scale. Since the overall voltage drop across the system is of order one and proportional to the field strength λ, we choose V_c = λ. For numerical simplicity, we also choose L_c = L0, thus obtaining the final equality in Eqn. 22.
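To illustrate how τ and µ_eff can be extracted from a trained network by the Monte Carlo procedure mentioned above, here is a minimal sketch. The helpers g0_net (the trained approximation) and inside_domain (a test for membership in the σ-dependent domain Ω) are hypothetical placeholders, and L0 is assumed to be supplied; the bounding box and sample count follow the values quoted in Sec. III.

```python
import numpy as np

def estimate_tau_and_mobility(g0_net, inside_domain, lam, sigma, L0,
                              n_samples=10_000, seed=0):
    """Monte Carlo estimate of tau = integral of g0 over Omega, and mu_eff = L0**2 / (lam * tau).

    g0_net(x, y, lam, sigma) -> approximate g0 values (hypothetical trained network, vectorized).
    inside_domain(x, y, sigma) -> boolean array, True where (x, y) lies in the sigma-dependent domain.
    """
    rng = np.random.default_rng(seed)
    # Uniform samples in the bounding box [-5, 5] x [-3.125, 3.125].
    x = rng.uniform(-5.0, 5.0, n_samples)
    y = rng.uniform(-3.125, 3.125, n_samples)
    box_area = 10.0 * 6.25

    mask = inside_domain(x, y, sigma)              # rejection step
    g_vals = np.where(mask, g0_net(x, y, lam, sigma), 0.0)

    # integral of g0 over Omega ~= (box area) * mean of (g0 * indicator of Omega)
    tau = box_area * g_vals.mean()
    mu_eff = L0**2 / (lam * tau)                   # effective mobility as defined above
    return tau, mu_eff
```

Samples falling outside Ω contribute zero to the average, so the estimate converges to the integral of g0 over Ω as the number of samples grows.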
The effective mobility is expected to exhibit similar features to the electrophoretic mobility, as both consist of characteristic particle velocities divided by characteristic electric field strengths. A comprehensive exploration of the relationship between the two mobility definitions is left to future work. A. Characteristics of g0 Figure 4 shows contour plots of g 0 solutions computed using the NNM, with the corresponding estimates of τ and µ eff shown in the legends. The four subplots correspond to the four essential parameter regimes alluded to in Fig. 1 . Note that the magnitude of the color scale varies across the four subplots. First, consider the solution of g 0 in Fig. 4 (a) corresponding to small particles (σ = 0.125) driven by a weak field (λ = 5.0). Here, g 0 has a maximum in the left slit near the peak of the initial particle distribution ρ 0 . Naturally, since the particle positions are initialized according to ρ 0 , the average residence time in that region is relatively high; this feature is common to all four subplots in Fig. 4 . Outside the left slit, g 0 decreases nearly monotonically from left to right, eventually reaching a value of zero on the absorbing boundary. The shape of this function is nearly visually indistinguishable from the solution with σ = 0.125 and λ = 0 (not shown), and is characteristic of predominantly diffusive dynamics in all regions of the domain. Figure 4 (b) again shows g 0 for small particles (σ = 0.125), but now driven by a much stronger field (λ = 50.0). In contrast with the monotonically decreasing solution in Fig. 4 (a), in this scenario g 0 is relatively constant throughout most of the domain until a boundary layer near the absorber. In fact, here g 0 even exhibits some minor nonmonotonic features: a shadow is evident in the bottom-left of the well, and a local maximum is attained at the entrance to the right slit. Despite the strong driving field, the highly diffusive small particles readily explore the entirety of the well before absorption. Drift and diffusion effects are relatively balanced in this case, with the uniformity in x reflecting strongly driven motion in the horizontal direction, and the uniformity in y reflecting rapid diffusion in the vertical direction. Figure 4 (c) shows g 0 for large particles (σ = 0.625) driven by a weak field (λ = 5.0). Notice that the walls of the domain are shifted inward by 0.5σ, reflecting the reduced area that can be occupied by the center of mass of larger particles (Sec. II). In this scenario, the smaller diffusion coefficient of the larger particles balances the weaker field, resulting in a solution that more closely resembles that in Fig. 4 (b) than that in Fig. 4(a) . Here, the nonmonotonic features noted in Fig. 4 (b) are even more pronounced. In fact, the solution in Fig. 4 (c) is visibly increasing from left to right across the well, in contrast to both the solutions in Figs. 4(a,b) . Although these nonmonotonic features are field-driven effects, the relative uniformity of g 0 in the vertical direction illustrates that diffusion is still an important transport mechanism in this regime. Thus, as was the case in Fig. 4(b) , drift and diffusion are of comparable importance; the differences between the two solutions are primarily due to the modifications to the domain geometry. Finally, Fig. 4 (d) shows g 0 computed for large particles (σ = 0.625) subject to a strong electric field (λ = 50.0). Here, the shape of the solution differs significantly from those in all of Figs. 4(a-c). In Fig. 
4(d) , g 0 takes on very small values throughout the entire well, and decreases substantially from the top of the well to its bottom. The combination of the low diffusion coefficient and the very strong driving force causes the large particles to remain primarily streamlined in the upper region of the well as they move rapidly from ρ 0 to the absorber. The MFPTs τ and effective mobilities µ eff in the four scenarios of Fig. 4 are consistent with the expected sorting mechanisms in each regime [7] . When the field is strong, smaller particles have a larger τ and lower µ eff than larger particles. The converse is true at weak fields. Future work should explore the relationship of g 0 , τ , and µ eff with standard explanations for these phenomena, such as the entrance effect [7, 65] . The purpose of the discussion in this section was to illustrate the variety of complicated behaviours that arise in g 0 solutions across the different physically meaningful parameter regimes in the slit-well. Of particular note was the significant impact of σ on g 0 through geometric effects (i.e., comparing (b) to (c)). In Sec. IV B, parameterized NNM solutions will be trained to interpolate nonlinearly between all four solutions in Fig. 4 . Ultimately, in Sec. IV B 4 this will yield continuously differentiable mappings between both problem parameters λ and σ and both key physical observables τ and µ eff , thereby capturing the entirety of this rich sorting mechanism in a single numerical solution. In this section, g 0 will be leveraged as a proxy for the calculation of the metrics τ and µ eff . In practice, it is common in MNFD research and development (and scientific research more broadly) to study how such key metrics change in response to variations in the system parameters. The simplest approach to characterizing this variation is to compute or measure the metrics independently for a large number of parameter choices. In Sec. IV B 1, the NNM is applied to precisely this task of calculating τ and µ eff for many combinations of particle size σ and field strength λ. The above approach, however, requires repeated calculation of the key metrics which can be expensive when considering many independent parameters. As discussed in Sec. I, the NNM can be leveraged to solve such parameterized problems directly across continuous ranges of parameter values. The high-dimensional function g 0 (x, y; λ, σ) implicitly encodes τ and µ eff as continuously differentiable functions of σ and λ. The NNM is used to approximate this function directly in Secs. IV B 2-4, for g 0 solutions parameterized directly by λ, σ, or both simultaneously. Throughout Sec. IV B, four quantities are used to characterize the performance of the NNM across parameter space. These quantities are all plotted in Fig. 5 , with each column corresponding to one of the four NNM formulations discussed above. Naturally, both the MFPT τ and the effective mobility µ eff are included in the analysis. These are plotted in the first two rows respectively, alongside the reference values computed using FEM. That is, Fig. 5(a-d) shows τ computed using the four NNM variations, and Fig. 5(e-h) shows the corresponding µ eff values. The NNM results are indicated by lines, and the corresponding FEM results are included as stars. Dotted lines in Fig. 5(a,e) connect values that are only computed at discrete parameter choices, whereas solid lines used everywhere else indicate values that are computed over continuous parameter ranges. The insets in Fig. 
5(a-h) provide more detail on certain subranges of the data where interesting behavior occurs. Next, in order to quantify the accuracy of the τ values obtained via the NNM, the relative error ε with respect to the ground truth FEM solution is computed. Specifically, ε is defined as the relative error of τ with respect to τ_FEM, i.e.,

\[ \varepsilon = \frac{\left| \tau - \tau_\mathrm{FEM} \right|}{\tau_\mathrm{FEM}}, \]

where τ and τ_FEM are the MFPTs computed by the NNM and FEM, respectively. Note that this error metric emphasizes the use of g0 as a proxy for τ, rather than the importance of the accuracy with which g0 itself is resolved. That said, since τ is the integral of g0 and integration is a linear operator, it is straightforward to see that ε is bounded above by the relative L^1 norm of the error in g0 (a one-line derivation is given below). The relative errors ε are plotted in Fig. 5(i-l). Here, circular markers indicate the discrete parameter choices at which ε was computed. Additionally, the plots in Fig. 5(i-l) contain a dotted black line at 10^{-2}, corresponding to a relative error of 1%. This is representative of a relative error threshold that is typically attainable and acceptable in MNFD research. Indeed, App. A describes standard particle simulations that were used to approximate τ independently of both the NNM and FEM. The formulation is similar to that used in actual studies of particles in the slit-well [7, 8]. The relative error of these particle simulation results was found to be comparable to or below 1% for all choices of parameters λ and σ.

The final quantity included in the analysis is similar to the loss functional L used during the NNM training process (Eqn. 9). However, the total loss provides only a single characterization of a network's performance over its entire domain. When the NNM was used to solve parameterized g0 problems, it was valuable to evaluate the relative performance of these solutions at different points in parameter space. To this end, we defined the marginal loss L(g0|λ, σ), a parameter-dependent generalization of the total loss. As with the true loss, L(g0|λ, σ) is the sum of the L^2 norms of the PDE, BC, and norm-based residuals. However, whereas the total loss averages these quantities over all choices of λ and/or σ (Eqns. 12, 13, and 14), the corresponding terms in L(g0|λ, σ) were instead treated as functions of λ and σ. In the case of nonparameterized NNM solutions, the marginal loss definition simply reduces to the original total loss. The marginal losses are plotted in Figs. 5(m-p). The values in Fig. 5(m) correspond to NNM solutions trained at fixed parameter choices, and are thus indicated by discrete circular markers. Conversely, since L(g0|λ, σ) can be evaluated continuously for parameterized solutions, the corresponding marginal loss values shown in Figs. 5(n-p) are indicated by solid lines sampled finely throughout the parameter space.
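For completeness, the bound on ε mentioned above follows in one line. Treating the FEM value as the exact solution and using that g0 ≥ 0, so that τ_FEM = ∫_Ω g0 dΩ = ‖g0‖_{L¹(Ω)}, one has

\[ \varepsilon \;=\; \frac{\left| \int_\Omega \left( \tilde{g}_0 - g_0 \right) d\Omega \right|}{\int_\Omega g_0 \, d\Omega} \;\le\; \frac{\int_\Omega \left| \tilde{g}_0 - g_0 \right| d\Omega}{\int_\Omega g_0 \, d\Omega} \;=\; \frac{\left\| \tilde{g}_0 - g_0 \right\|_{L^1(\Omega)}}{\left\| g_0 \right\|_{L^1(\Omega)}}, \]

where g̃0 denotes the NNM approximation and g0 the exact solution.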
For small values of λ in Fig. 5(a) , τ is monotonically increasing with σ, and for large values of λ (see the inset) the opposite is true. Moreover, the finer sampling of parameter space resolves new features that were not clear from examining only the four samples in Sec. IV A. For instance, Fig. 5(a) shows that τ decreases monotonically with λ for each choice of σ. In addition, the dependence of τ on σ is much stronger at low field strengths. The same sorting behaviors can be viewed from a different perspective via µ eff in Fig. 5(e) . In addition, the crossover in sorting order around λ ≈ 25 is better resolved by µ eff than τ . Indeed, in Fig. 5 (e) it is clear that there is no single value of λ for which τ and µ eff are entirely independent of σ. Of course, the results discussed above for τ and µ eff are not novel, as they are consistent with published results on the slit-well device (e.g., Cheng et al. [7] ). Rather, the purpose of this discussion is to illustrate two points. First, that valuable information can be extracted by studying the variation of key output metrics (here, τ and µ eff ) as functions of the key input parameters (here, λ and σ). Second, that the physical problem being studied in this paper (Sec. II) indeed captures essentially the same physical mechanisms expected for the actual slit-well system. Before considering the benefits of the more ambitious parameterized NNM formulations, it is important to assess how accurately the NNM resolves τ and µ eff when applied to the simpler task of solving g 0 at a single point in parameter space. Figs. 5(a,e) suggest a good agreement between the predictions of FEM and NNM for all choices of λ and σ. This agreement is quantified precisely in Fig. 5 (i), which shows ε, the relative error in τ . In this plot, it appears that ε is roughly independent of both σ and λ, suggesting that the current implementation of the NNM is fairly robust throughout the problem parameter space. This is corroborated by the testing losses L(g 0 |λ, σ) plotted in Fig. 5(m) , which are also roughly independent of the problem parameters. Most importantly, for all choices of parameters λ and σ in Fig. 5(i) , ε is well below the 1% error threshold indicated by the black line. In other words, the NNM is at least as effective at resolving τ as the Brownian dynamics particle simulations included in App. A. Whereas in Sec. IV B 1, 50 networks where used to obtain 50 different g 0 solutions, which were then integrated over their respective domains to produce 50 different τ measurements, in this section only 5 networks are utilized to accomplish the same goal. Each of these 5 networks solves g 0 (x, y; λ) for λ ∈ [5, 50] at a fixed choice of σ. As in Sec. IV B 1, the metrics τ , µ eff , ε, and L(g 0 |λ, σ) are computed from the solutions; these are plotted in Fig. 5 (b,f,j,n) respectively. Comparing Fig. 5(b) to (a) and (f) to (e), it is clear that the NNM formulation parameterized by λ recovers the same results previously obtained by solving g 0 independently for many different parameter choices in Sec. IV B 1. One advantage of the parameterized NNM formulation is evident in the inset of Fig. 5(f) . For each choice of σ, there is a λ value for which µ eff is minimal. When computing µ eff only at discrete choices of the parameters [as in Fig. 5(e) ], the exact location of these minima is not clear. Instead, the results in Fig. 
5(f) illustrate that the parameterized NNM formulation naturally resolves the existence of local minima, since the solution is trained continuously for all parameter values in the training domain. The benefit of continuous mappings from problem parameters to key output metrics becomes more valuable as the dimensionality of parameter space is increased (e.g., as explored in Sec. IV B 4). Despite this potential advantage, the parameterized NNM formulation must be utilized with caution. In this case, it is unclear from Fig. 5(f) alone whether the predicted minima in µ eff occur at the correct values of λ. Throughout most of parameter space in Fig. 5(f), the NNM results (solid lines) appear to agree well with the FEM results (stars). Yet for σ = 0.625, the FEM values computed at the discrete reference parameter choices do not show any evidence that µ eff attains a minimum at all in this region of parameter space. However, FEM measurements conducted on a finer sampling of λ (not shown) confirm that there is a subtle minimum in µ eff near λ = 5.5, for σ = 0.625. The NNM prediction shown in Fig. 5(f) places this minimum at roughly λ = 6. Regardless of the accuracy with which this feature is resolved here, this case illustrates that parameterized solutions can potentially provide clues to the existence of certain features in parameter space that would be invisible to a coarse sampling of measurements made at discrete parameter choices.

The relative error ε and marginal loss L(g0|λ, σ) in Figs. 5(j,n) quantify the accuracy of the g0(x, y; λ) solution. Here, both ε and L(g0|λ, σ) are highest at the boundaries of the λ training range and fairly uniform throughout the majority of the interior of the training range. In particular, both are highest at the left boundary, λ = 5. This relationship between error and loss is similar to those studied in Magill et al. [57], and provides further justification for using the (marginal) loss as an a posteriori method for gauging the reliability of NNM solutions. The deterioration in performance seen in Figs. 5(j,n) at the boundaries of the λ training range can likely be attributed to the uniform Monte Carlo sampling of λ during training. The exact end points have very low probabilities of being sampled directly; moreover, their neighborhoods are only sampled on one side, whereas the neighborhoods of points nearer to the middle of the λ training range are sampled thoroughly on both sides. This could effectively lead to an under-representation of the behavior near the end points in the training loss. Characterizing this tentative mechanism is beyond the scope of the present work. Overall, only 3 of the 50 relative errors in Fig. 5(j) slightly exceed the 1% error threshold. Thus, the implementation of the parameterized NNM studied in this section meets the standard of accuracy typically attained by Brownian dynamics (BD) simulations. In the regions of parameter space where the relative error was not measured directly, the marginal loss [Fig. 5(n)] provides an a posteriori estimate of the error, suggesting that the NNM's performance is excellent except for λ values very close to the boundaries of the training range. Altogether, these results demonstrate that the NNM is a feasible technique for solving the g0 problem over a continuous range of field strength values. Moreover, using g0 as a proxy for τ and µ eff enables the NNM to resolve the behavior of these key output metrics continuously over the target parameter range. To expand upon the unique strengths of the NNM, this section will consider the problem of solving g0 as a function of the particle size σ.
Here, since the diffusion coefficient is being modeled as D = σ −1 , the terms of the g 0 PDE depend directly on the parameter σ, just as they depend directly on λ. However, the location of the boundaries of the slit-well domain also depend explicitly on the parameter σ (Sec. II). Thus, whereas λ only modified the PDE terms, σ modifies both the PDE terms and the domain geometry. As described in Sec. I, it is challenging for classical reduced-order methods to deal with parameterized domain geometries. However, this section will demonstrate that the NNM can handle geometrymodifying parameters (σ) just as easily as parameters that do not modify the domain geometry (λ). Once again, the MFPT τ , effective mobility µ eff , relative error ε, and marginal loss L(g 0 |λ, σ) are computed from the NNM solutions and plotted in Fig. 5(c,g,k,o) respectively. Whereas the results in Fig. 5(a,e,i,m) and Fig. 5(b,f,j,n) for Secs. IV B 1-2 were solved and plotted as functions of λ, the results for this section are presented as functions of σ. Specifically, each curve in Fig. 5(c,g,k,o) represents a single neural network trained over the range σ ∈ [0.125, 0.625] at a fixed choice of λ (indicated by the legend in Fig. 5(o) ). The τ and µ eff measurements in Fig. 5(c,g) indicate that the NNM parameterized by σ recovers the same physical properties observed for the NNM with fixed parameters (Sec. IV B 1) and the NNM parameterized by λ (Sec. IV B 2) . For the τ measurements in Fig. 5(c) , the λ = 5.0 curve is monotonically increasing whereas the λ = 50.0 curve (enhanced in the inset) is monotonically decreasing. This implies that small particles have a lower MFPT at low field strengths, and large particles have a lower MFPT at high field strengths. Likewise, the λ = 5.0 curve for µ eff in Fig. 5(g) indicates that small particles are more mobile at low field strengths, whereas the λ = 50.0 curve indicates that large particles are more mobile at high field strengths. Finally, the λ = 25.0 curve for µ eff is clearly nonmonotonic [see the inset of Fig. 5(g) ]. In fact, the plots of µ eff make it clear that for all intermediate field strengths (λ = 15, 25, 35) the effective mobility is a nonmonotonic function of σ. Altogether, the results in Fig. 5(c,g) show that the NNM recovers the same physical information whether it is trained repeatedly at independent parameters (Sec. IV B 1), as a continuous function of λ (Sec. IV B 2), or as a continuous function of σ. Visual comparison of the NNM results (solid lines) to the ground-truth FEM results (stars) in Figs. 5(c,g) suggests good agreement between the two through most of the parameter space. However, the effective mobilities computed by the NNM in Fig. 5 (g) deviate noticeably from the FEM results at the left endpoint σ = 0.125. Accordingly, the relative errors and marginal losses plotted in Figs. 5(k,o) are also highest at σ = 0.125. In fact, ε and L(g 0 |λ, σ) of the NNM solutions parameterized by σ [ Figs. 5(k,o) ] exhibit the same structure previously identified (Sec. IV B 2) in ε and L(g 0 |λ, σ) of the NNM solutions parameterized by λ [ Fig. 5(j,n) ]. That is, ε and L(g 0 |λ, σ) are roughly uniform for intermediate values of σ but increase sharply near the boundaries of the training domain. In particular, ε and L(g 0 |λ, σ) are consistently higher at the low-σ endpoint than at the high-σ endpoint. Overall, the relative error in Fig. 5(k) is well below the 1% error threshold for most of the training range. As was the case in Sec. 
IV B 2, at the few points where relative error exceeds 1%, it only does so by a small amount. The marginal loss continues to behave as an a posteriori measure of solution accuracy, and suggests that the regions of high relative error are once again concentrated near the endpoints of the training range. Altogether, then, these results confirm that the NNM is a feasible method for solving g 0 directly as a function of σ. In particular, the parameter σ directly changes the domain geometry in addition to modifying the terms of the PDE. Despite this, the performance measured in this section is essentially the same as that reported in Sec. IV B 2, where the NNM was parameterized by the simpler parameter λ (which does not affect domain geometry). Thus, it appears that the NNM can handle geometry-modifying parameters just as easily as parameters that do not modify domain geometry. This is particularly interesting given the difficulty of treating parameterized geometries with other reduced-order modeling techniques. The results shown so far have established that the NNM can robustly solve the g 0 equation in the slit-well MNFD (Sec. IV B 1), and that the method can easily be extended to produce solutions parameterized by field strength λ (Sec. IV B 2) or particle size σ (Sec. IV B 3). Expanding upon this capability, in this section the NNM is used to approximate g 0 as a function of both λ and σ simultaneously (Fig. 3) . Specifically, a single neural network is trained to approximate the four-dimensional function g 0 (x, y; λ, σ) over the same parameter space previously spanned by 5 networks in Secs. IV B 2-3 or 50 networks in Sec. IV B 1. The MFPT τ , effective mobility µ eff , relative error ε, and marginal loss L(g 0 |λ, σ) are computed from the NNM solution g 0 (x, y; λ, σ) and plotted in Figs. 5(d,h,l,p) . However, the accuracy of the solution g 0 (x, y; λ, σ) is slightly worse than that observed in the previous sections [Figs. 5(a-c,e-g)] as visible in τ and µ eff [Figs. 5(d,h)] and quantitatively confirmed by ε and L(g 0 |λ, σ) [Figs. 5(l,p)] This is not entirely surprising, since the four-dimensional problem here is intrinsically more difficult than the three-dimensional (Secs. IV B 2-3) and two-dimensional formulations (Sec. IV B 1) of the problem. Moreover, the network depth and width were held constant over all experiments, and the training time was held constant for all the parameterized formulations (Sec. III A). Regardless, although g 0 (x, y; λ, σ) appears somewhat less accurate than the solutions from previous sections, it generally still meets the target 1% error threshold over most of its parameter training range. An exception to this statement is presented by the results at λ = 5 [the brown lines in Figs. 5(d,h,l,p)], for which the error of g 0 (x, y; λ, σ) is greater than 1% over nearly the entire σ training range. The marginal loss also reflects this poor performance; for λ = 5, L(g 0 |λ, σ) in Fig. 5(p) is more than an order of magnitude larger than L(g 0 |λ, σ) from all previous experiments [i.e., those in Figs. 5(m-o) ], and several times larger than the other L(g 0 |λ, σ) curves in Fig. 5(p) . Conspicuously, the L(g 0 |λ, σ) curves in Fig. 5 (p) vary significantly with λ, whereas in Figs. 5(m-o) very little variation was observed between the different L(g 0 |λ, σ) curves. Of course, the results in Fig. 5 (p) differ fundamentally from those in Figs. 5(m-o); whereas each curve in Figs. 5(m-o) corresponds to one or more independent networks, all the curves in Fig. 
5(p) are generated by a single network. In fact, the brown (λ = 5) and blue (λ = 50) curves, which exhibit the highest marginal losses in Fig. 5(p) , lie directly on the boundary of the network's (λ, σ) training domain. When analysing the NNM parameterized by λ or σ (Secs. IV B 2-3), a substantial deterioration in accuracy was found to be highly localized near the boundaries of the parameter training range. If a similar boundary effect exists here for the g 0 (x, y; λ, σ) solution, then the results in Figs. 5(d,h,l,p) are not representative of the solution's overall accuracy over the entire problem parameter space, as essentially half of the data shown in those plots lie on the boundary of the network's parameter training space. To investigate this possibility, the same metrics that are shown as discrete lines in Fig. 5(d,h,l,p) are replotted in Fig. 6 as continuous functions of (λ, σ) . The first row [Figs. 6(a-d)] shows three-dimensional plots of the metrics over parameter space, whereas the second row shows two-dimensional contour maps [Figs. 6(e,f)] and color maps [ Figs. 6(g,h) ] of the same metrics. As anticipated, the relative error ε [Figs. 6(c,g)] and marginal loss L(g 0 |λ, σ) [ Figs. 6(d,h) ] are only large near the boundaries of the parameter training space. Indeed, ε in Fig. 6 (g) is below the 1% threshold (indicated by the solid white line) throughout the majority of the parameter space, confirming the suspicion that the line plots in Fig. 5 provide a biased view of the g 0 (x, y; λ, σ) solution. The boundary effect is particularly clear in L(g 0 |λ, σ) in Fig. 6(d) , which features a prominent convex shape. Here, L(g 0 |λ, σ) is consistently higher along all the edges of the training parameter space and decreases monotonically and rapidly away from the boundary. In particular, L(g 0 |λ, σ) is exceptionally large at the corners of the training space. Note that the decay of marginal loss away from the boundaries of the parameter space is actually substantially sharper than it appears visually in Figs. 6(d,h) . For one, the colur scales for Figs. 6(d,h) are logarithmic. Moreover, the color map used throughout Fig. 6 is not perceptually uniform: it exhibits far more variation in color and contrast near the lower end of the scale. These plotting choices make the subtle structure of the marginal loss more apparent, but give it the biased appearance of a gradual variation throughout the domain. In actuality, when plotted with a linear color scale and a perceptually uniform color map, the marginal loss appears essentially flat through most of the domain. In quantitative terms, the marginal loss varies by only a factor of 2 over the interior of the domain [i.e., within the white line in Fig. 6(h) ], whereas it grows by roughly an order of magnitude near the edge of the domain (i.e., outside the white line). As expected, the relative error ε [Figs. 6(c,g)] is closely tied to the marginal loss L(g 0 |λ, σ). Relative error is uniformly low in the interior of the parameter space (roughly (λ, σ) ∈ [15, 45] × [0.2, 0.6]), corresponding to the flat interior of L(g 0 |λ, σ). Additionally, near the two corners at λ = 5 where L(g 0 |λ, σ) is largest, ε also attains its highest values, approaching 10%. There is also a small peak in ε at the (λ, σ) = (50, 0.125) corner, corresponding to an equally small peak in L(g 0 |λ, σ) at the same corner. Surprisingly, although L(g 0 |λ, σ) exhibits a clear peak at the (λ, σ) = (50, 0.625) corner, ε does not. 
Therefore, the marginal loss L(g 0 |λ, σ) once again appears to act as a conservative a posteriori estimator of relative error ε: high relative error occurs near regions of high marginal loss, although high marginal loss does not always imply high relative error. As noted above, the performance of the g 0 (x, y; λ, σ) solution deteriorates even more significantly at the corners of the parameter training space than on its edges. This is more complicated than the boundary effect discussed for the solutions parameterized by just λ or σ, and can be accounted for by extending the postulated mechanism from Secs. IV B 2-3. There, it was argued that the deterioration in performance arises because the stochastic sampling used during training underrepresents boundary points: whereas the neighborhoods of interior points are thoroughly sampled on all sides, this is not true for boundary points. In the one-dimensional parameter training spaces considered in Secs. IV B 2-3, the boundary of the parameter training range consists only of the two end points, which are equally affected by the sampling bias. In the two-dimensional parameter training space considered here, however, the corners and the edges of the boundary are under-represented to different extents by the stochastic sampling process. Whereas parameter values on the edges of the domain only have 50% as many neighboring points inside the training space as interior points, parameter values on the corners have only 25% as many. This tentatively explains why performance is so much worse at corners of the parameter training space than it is on the edges. If this mechanism extends to higher dimensional parameter spaces, it may eventually prove to be a dominant source of error: for instance, the corners of an n-dimensional hypercube have only 1/2^n as many neighbors inside the training space as interior points, and the number of boundary segments (corners, edges, faces, ...) grows rapidly with n. In fact, the fraction of a parameter space lying within a given distance of its boundary also increases with dimensionality. Altogether, these observations suggest the need for further investigation into this boundary effect, its possible connection to Monte Carlo sampling of the loss during training, and methods (such as low-discrepancy sampling methods [66, 67]) for resolving the problem. In contrast to the line plots in Fig. 5, the plots in Fig. 6 highlight the richness of information available through the g 0 (x, y; λ, σ) solution compared to the solutions parameterized only by λ (Sec. IV B 2), σ (Sec. IV B 3), or neither (Sec. IV B 1). For instance, although the NNM solutions parameterized by λ or σ (Secs. IV B 2-3) suggested a nonmonotonic dependence of τ and µ eff with respect to σ for certain values of λ, they did not provide sufficient information to estimate the exact range of λ over which this behavior persists. Just as the NNM solutions parameterized by λ or σ were shown in Secs. IV B 2-3 to be more helpful than the fixed parameter solutions (Sec. IV B 1) in localizing one-dimensional critical points, so is the NNM solution parameterized by both λ and σ more useful for delineating the nonmonotonic regions of parameter space. The range of nonmonotonic behavior can be estimated visually from τ in Fig. 6(e). In that plot, whenever a given contour line crosses the same value of λ twice, τ depends nonmonotonically on σ.
In contrast to the line plots in Fig. 5, the plots in Fig. 6 highlight the richness of the information available through the g_0(x, y; λ, σ) solution compared to the solutions parameterized only by λ (Sec. IV B 2), σ (Sec. IV B 3), or neither (Sec. IV B 1). For instance, although the NNM solutions parameterized by λ or σ (Secs. IV B 2-3) suggested a nonmonotonic dependence of τ and µ_eff on σ for certain values of λ, they did not provide sufficient information to estimate the exact range of λ over which this behavior persists. Just as the NNM solutions parameterized by λ or σ were shown in Secs. IV B 2-3 to be more helpful than the fixed-parameter solutions (Sec. IV B 1) in localizing one-dimensional critical points, so the NNM solution parameterized by both λ and σ is more useful for delineating the nonmonotonic regions of parameter space. The range of nonmonotonic behavior can be estimated visually from τ in Fig. 6(e): whenever a vertical line at fixed λ crosses a given contour of τ twice, τ depends nonmonotonically on σ at that λ. However, because the contours of τ are nearly parallel to lines of constant λ, it is difficult to visually identify the regions of nonmonotonicity using this vertical line test. In contrast, the µ_eff plots in Figs. 6(b,f) are much easier to interpret. Again using the vertical line test, it is easy to see that nonmonotonic dependence of µ_eff on σ is present at voltages as low as λ ≈ 10. In fact, µ_eff is nonmonotonic with respect to both λ and σ in the large-σ, low-λ range [black region in Figs. 6(b,f)]. Although the same trends were suspected from the solutions discussed in Secs. IV B 2-3, g_0(x, y; λ, σ) resolves these features more completely.

Despite the usefulness of g_0(x, y; λ, σ) for resolving critical points, the solution predicts a false saddle point in µ_eff at (λ, σ) ≈ (40, 0.2). The contour passing through this saddle point is highlighted by the solid white line in Fig. 6(f). Additional FEM results (not shown) confirm that there is no saddle point anywhere in the parameter space under consideration. This error can be attributed to the fact that the true µ_eff changes extremely little in the high-λ, low-σ region of the domain. For illustration, the two dotted white lines in Fig. 6(f) indicate contours for µ_eff values 1% greater and 1% smaller, respectively, than the value of µ_eff on the solid white line passing through the saddle point. As evident from the color bar above the plot in Fig. 6(f), this ±1% range corresponds to a very small fraction of the total variation of µ_eff over the domain. Despite this, the area between the dotted white lines accounts for roughly 25% of the total parameter training space, demonstrating that µ_eff is extremely flat throughout this entire region. This explains how the relative error [Fig. 6(g)] in the vicinity of the false saddle point can nonetheless remain at or below 1%. In other words, although the presence of a false saddle point is a noticeable qualitative error, it corresponds to a very small quantitative error in the key output metrics τ and µ_eff. In fact, the visual appearance of the saddle point in Fig. 6(f) is intentionally accentuated by the choice of color map, as discussed for the marginal loss above. It is quite feasible that an error of such small magnitude could be resolved simply by increasing the network capacity and/or training time.

Still, the question arises of whether and how the NNM can be used reliably in applications where these types of incorrect or ill-conditioned features may occur. The marginal loss L(g_0|λ, σ) provides one possible resolution to this concern. The region of increased L(g_0|λ, σ) near (λ, σ) = (50, 0.125) in Fig. 6(h) coincides fairly closely with the right half of the saddle point in Fig. 6(f). Thus, L(g_0|λ, σ) correctly reflects that the solution is less reliable in this region. If the baseline FEM solutions had not been available, the behavior of L(g_0|λ, σ) could have been used to call into question the validity of the predicted saddle point. In other words, the marginal loss L(g_0|λ, σ) once again acts as a conservative a posteriori error estimator, bolstering the robustness of the NNM. Future work should elaborate on what quantitative predictions of solution quality can be based on the marginal loss, along the lines of the investigations in Magill et al. [57]. In the interim, we propose using the mean marginal loss (which is in fact the total loss, given by Eqn. 9) as an approximate threshold between regions of relatively high and low expected accuracy.
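A minimal sketch of this thresholding rule is given below, assuming the marginal loss has already been estimated on a grid of (λ, σ) values by Monte Carlo integration of the squared PDE residual. The residual_fn interface is hypothetical, and the restriction to the interior residual is our own simplification; the loss used in this work (Eqn. 9) may also include terms not shown here.

```python
import numpy as np

def marginal_loss_grid(residual_fn, lam_grid, sig_grid, xy_samples):
    """Monte Carlo estimate of a marginal loss L(g0 | lambda, sigma) on a grid
    of parameter values. Here residual_fn(x, y, lam, sig) is a hypothetical
    callable returning the pointwise PDE residual of the trained network;
    only the interior residual is included in this simplified sketch."""
    loss = np.empty((lam_grid.size, sig_grid.size))
    for i, lam in enumerate(lam_grid):
        for j, sig in enumerate(sig_grid):
            r = residual_fn(xy_samples[:, 0], xy_samples[:, 1], lam, sig)
            loss[i, j] = np.mean(np.asarray(r) ** 2)
    return loss

def low_confidence_mask(marginal_loss):
    """Flag parameter points whose marginal loss exceeds the mean marginal
    loss (i.e., the total loss), marking regions of lower expected accuracy."""
    return marginal_loss > marginal_loss.mean()

# Hypothetical usage:
# loss = marginal_loss_grid(residual_fn, lam_grid, sig_grid, xy_samples)
# mask = low_confidence_mask(loss)  # True near the boundary of the (lam, sig) box
```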
This threshold is indicated in Fig. 6(h) by the solid white line. As expected, the region of low expected accuracy is concentrated near the boundary and, in particular, includes part of the false saddle point.

In fact, the marginal loss L(g_0|λ, σ) as defined here is likely a suboptimal tool for detecting false critical points in parameter space, because it does not directly measure gradient information with respect to (λ, σ). Rather, it is only indirectly sensitive to errors in the shape of τ and µ_eff, insofar as those errors emerge from errors in the shape of g_0(x, y; λ, σ). For applications in which the localization of ill-conditioned critical points is of interest, modified loss functions that incorporate the derivatives of the target PDE with respect to λ and σ (e.g., like those explored by Avrutskiy [59]) might be more relevant error estimators. This notion illustrates the potential benefits of customizing the NNM for specific PDEs and research questions, just as flux- or energy-conserving numerical methods are preferred for applications where those features are particularly important.

In summary, the results in this section demonstrate that the NNM can produce a robust approximation to the g_0(x, y; λ, σ) solution. Here, g_0(x, y; λ, σ) enables higher-dimensional visualization of τ and µ_eff over λ and σ, resolving features in parameter space more accurately and completely than the solutions parameterized by only λ or σ. Furthermore, g_0(x, y; λ, σ) accurately predicts the magnitudes of τ and µ_eff to within the 1% error threshold simultaneously over the majority of the parameter training space. Although g_0(x, y; λ, σ) exhibits some regions of high relative error, L(g_0|λ, σ) once again provides a robust a posteriori estimator of the solution's reliability throughout parameter space.

This work investigated the use of the neural network method to solve a parameterized time-integrated Smoluchowski equation describing nanoparticle passage through the slit-well microfluidic device. The g_0 equation was solved for a variety of fixed choices of field strength λ and particle size σ using both the NNM and a standard FEM implementation. Additionally, the NNM was used to solve the equation directly as a function of λ and/or σ. The mean first passage time τ and effective mobility µ_eff were studied as the primary output metrics of interest, with the relative error ε and the marginal loss L(g_0|λ, σ) used to characterize solution performance.

The qualitative examinations of g_0 in Sec. IV A revealed a wide variety of functional behaviors over the region of (λ, σ) parameter space studied here. The four primary regimes underlying nanoparticle sorting in the slit-well (i.e., low and high fields for small and large particles) correspond to four significantly different g_0 solution types, each reflecting a different interplay of drift, diffusion, and geometry. This highlights the challenging nature of the parameterized PDE problem studied in this work. Additionally, this analysis suggested that the g_0 solutions themselves may encode interesting and useful qualitative information about biophysical processes. Future work should examine how the information encoded in g_0 may complement qualitative information derived from stochastic particle trajectories. Of course, qualitative insights aside, the most salient feature of g_0 is that it integrates to yield the mean first passage time, τ.
As noted, although τ is a quantity of widespread interest in first passage problems generally and is relevant to many MNFD design problems, it appears that numerical solutions of g_0 have rarely been leveraged for such applications. The results of this paper support the view that g_0 may be an undervalued tool in computational biophysics. Although g_0 can be computed using many methods, such as the FEM or particle simulations, this work focused on resolving g_0 using the NNM. When applied to fixed choices of the problem parameters, the NNM consistently estimated τ with errors below 1%. In particular, the NNM values were at least as accurate as typical particle simulations, which are the most common tool for studying first passage problems in biophysics. However, runtimes were not rigorously compared in this work, and such comparisons should be a major focus of future investigations.

The main appeal of the NNM is the unique ease with which it can be applied to parameterized g_0 problems. Via integration of g_0, these solutions yield a direct mapping from key problem inputs (e.g., λ, σ) to key problem outputs (e.g., τ, µ_eff). This is particularly appealing for MNFD research, where essential phenomena often depend nontrivially on the coupling of many system parameters. The results of the current work demonstrate that the NNM can learn accurate approximations of g_0 parameterized by λ, σ, or both, all using a modest network size and even without careful hyperparameter optimization. Whereas classical ROM techniques typically require special considerations to handle geometry-modifying parameters like σ, the NNM was found to resolve g_0(x, y; σ) just as easily as g_0(x, y; λ). As discussed, parameterized solutions can be quite useful in characterizing entire regions of parameter space.

Although the NNM is expected to perform well on highly parameterized PDEs, the careful error analysis presented in the current study revealed several points of caution for future efforts in this direction. Firstly, all of the parameterized solutions studied here exhibited a deterioration in accuracy near the boundaries of their parameter training space. Nonetheless, the predicted values of τ were still mostly within the 1% margin of error. Moreover, the marginal loss functional L(g_0|λ, σ) proposed here was found to act as a conservative a posteriori estimator of the solution accuracy throughout parameter space. That is, in any region where the relative error ε of the predicted τ was large, L(g_0|λ, σ) was also large.

The second point of caution when applying the NNM to parameterized PDEs concerns the interpretation of key features, such as critical points, that are identified using these solutions. For instance, in Sec. IV B 4, the NNM solution exhibited an erroneous saddle point in a flat region of µ_eff, an artifact that arose due to the ill-conditioning of the gradients of µ_eff(λ, σ). In fact, plots of ε showed no indication of errors in this region, as the mistake only manifested in the curvature of the mapping. However, once again the marginal loss L(g_0|λ, σ) did indicate that the NNM solution lost fidelity in this region of parameter space. In summary, the parameterized NNM solutions were generally accurate far from the training boundaries, and L(g_0|λ, σ) provided a robust indication of the regions in which they can be trusted. Altogether, these results highlight the specific appeal of the NNM as a method for studying parameterized first passage problems via the time-integrated Smoluchowski model.
We hope this work prompts further investigation into the use of g_0, with or without the NNM, and into the relationship of τ and µ_eff to more standard MNFD metrics. Regarding the application of the NNM to such problems, future work should address technical challenges such as the singularities posed by sharp corners, training difficulties for highly skewed geometries, and achieving competitive runtimes.

In the BD simulations, particles were initialized according to the distribution ρ_0 (Eqn. 7). The position of the ith particle, x_i, was updated according to the discretized BD equation (Eqn. A1). In Eqn. A1, the particle properties are the diffusion coefficient D, the friction coefficient γ, and the particle charge q. As noted in Sec. II, both q and γ were set equal to the particle diameter σ, to capture free-draining behavior. The diffusion coefficient D was set to 1/σ, and the time step was set to ∆t = 10^-5. The term R(t) in Eqn. A1 is a random driving force representing the thermal motion of an implicit solvent; it was sampled from a uniform distribution with mean 0 and variance 1.

Rather than representing the interactions between particles and walls as perfectly rigid, the walls were implemented using a short-range, repulsive, shifted WCA force (Eqn. A3), where r_i is the minimum distance from particle i to the nearest reflective wall minus a distance r_shift = 0.5(σ − σ_0). Here r_shift corresponds to the radius of the hard core of the particle, whereas σ_0 = 0.125 is the length over which the surface of the particle is partially compressible. The potential is zero beyond a cutoff distance r_cut = 2^{1/6} σ_0, so that if the center of the particle is farther than a distance r_shift + r_cut from the wall, there is no interaction. The energy scale of the repulsive interaction was set to ε = 0.125 = σ_0. (A schematic implementation of this update rule and wall interaction is sketched below.) Although this type of model is commonly used for particle-wall interactions, owing to its improved numerical stability relative to perfectly rigid interactions, it introduces a small difference between the underlying physics of the BD simulations and the PDE models being solved in this work. For this reason, the MFPTs determined using particle simulations should not be expected to agree exactly with those obtained using the NNM and FEM, even in the limit of small ∆t and large N. Nonetheless, as our results corroborate, the effect of this difference between the models is small.

The term E_0 in Eqn. A1 corresponds to the baseline electric field in the slit-well domain (shown in red in Fig. 8). This field was solved for a voltage drop of 2 units from the leftmost to the rightmost boundary, as in Magill et al. [57]. The net electric field strength was set by the parameter λ. The E_0 used here was the same one described in Sec. III B. As shown in Magill et al. [57], particle simulations conducted using an electric field solved with the NNM are statistically nearly indistinguishable from those conducted using a field solved with the FEM, so long as the NNM electric field exhibits a sufficiently small loss. The purpose of the present study is not to replicate this result, but to explore the computational advantages of the NNM over other techniques in parameterized problems. Thus, the particle simulations are conducted using the FEM electric field, which is taken as the reference ground truth.

Parallel to the analysis conducted in Sec. IV B, the mean first passage time τ and effective mobility µ_eff are computed using the BD simulations for various choices of the field strength λ and particle size σ.
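Since the display equations A1 and A3 are not reproduced above, the following sketch shows one plausible implementation of a single BD update, assuming the conventional Euler-Maruyama discretization of overdamped Brownian dynamics and the standard WCA potential, consistent with the quantities listed in this appendix. The exact published forms of Eqns. A1 and A3 may differ, and the helpers E0 and wall_dist_and_normal (returning the local baseline field, and the distance and unit normal to the nearest wall) are hypothetical.

```python
import numpy as np

# Parameters as described in this appendix (sigma is the particle diameter).
sigma0 = 0.125                        # compressible surface length scale
eps_wca = 0.125                       # WCA energy scale (set equal to sigma0)
dt = 1e-5                             # time step
r_cut = 2.0 ** (1.0 / 6.0) * sigma0   # WCA cutoff distance

def wca_wall_force(dist_to_wall, sigma):
    """Magnitude of the shifted, purely repulsive WCA wall force, evaluated at
    r = dist_to_wall - r_shift (a sketch of the interaction described around
    Eqn. A3; the published expression may differ in detail)."""
    r_shift = 0.5 * (sigma - sigma0)
    r = dist_to_wall - r_shift
    if r >= r_cut:
        return 0.0
    sr6 = (sigma0 / r) ** 6
    return 24.0 * eps_wca * (2.0 * sr6 ** 2 - sr6) / r  # -dU/dr of the WCA potential

def bd_step(x, sigma, lam, E0, wall_dist_and_normal, rng):
    """One overdamped BD update in the Euler-Maruyama form (assumed, not copied
    from Eqn. A1): x += (dt/gamma)*(q*lam*E0(x) + F_wall) + sqrt(2*D*dt)*R."""
    q = gamma = sigma                 # free-draining: charge and friction equal diameter
    D = 1.0 / sigma
    dist, normal = wall_dist_and_normal(x)           # hypothetical geometry helper
    F_wall = wca_wall_force(dist, sigma) * normal    # repulsion along the wall normal
    drift = (dt / gamma) * (q * lam * E0(x) + F_wall)
    # Uniform noise with mean 0 and unit variance, as described above.
    R = rng.uniform(-np.sqrt(3.0), np.sqrt(3.0), size=np.shape(x))
    return x + drift + np.sqrt(2.0 * D * dt) * R
```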
These values are plotted with dashed lines in Figs. 7(a) and (b), with star markers denoting the τ and µ_eff values obtained by the FEM. Note that τ and µ_eff are only computed for the same discrete choices of λ and σ as the FEM solutions. In addition, the relative error ε is computed using Eqn. 23, where τ and τ_FEM are the MFPTs computed by BD and FEM, respectively. These values are plotted in Fig. 7(c), with circular markers denoting the parameter choices at which the relative error was computed. All of the relative errors in Fig. 7(c) fall below 2%, with the majority of the values within 1%. This establishes an approximately 1% error baseline, relative to the ground-truth MFPT values computed by the FEM, against which to benchmark the performance of the NNM.

The baseline electric field E_0 used to drive particle motion in the slit-well device (Eqn. 6) was computed using the NNM, as described in Magill et al. [57]. That is, the baseline electric potential u_0 was obtained by solving Laplace's equation with a voltage drop of two units from the left slit wall to the right slit wall. The electric field was then computed using the relation E_0 = ∇u_0. The red and black contour lines in Fig. 8 correspond to the electric field E_0 and the electric potential u_0, respectively, inside the slit-well MNFD.

The MFPT and effective mobility of nanoparticles traversing the slit-well MNFD were obtained using BD simulations, the FEM, and the NNM. Table I shows the runtime, in minutes, of each method used to solve for the MFPT at fixed choices of the parameter values. Four choices of the parameters are included, illustrating that the runtimes were fairly independent of the parameters for the NNM and FEM, but depended strongly on the parameters for BD. Table II shows the runtime, in days, of each method used to solve for the MFPT over large regions of parameter space. As implemented, the various parameterized NNM methods all have comparable runtimes to one another. Moreover, the total runtime of the 8099 FEM solutions exceeds the mean runtime of the parameterized NNM methods.

Optimizing runtime was not a goal of the current work. The implementations of each of the algorithms studied here (NNM, BD, FEM) could undoubtedly be improved to substantially decrease the runtimes reported in Tables I and II. Moreover, judicious use of parallelization across GPUs and/or CPUs, as applicable, could provide further improvements to each of the methods. Thus, the runtimes included here are provided for reference only, and a more careful comparison is left to future work.
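As a small addendum to the BD-versus-FEM comparison above, the sketch below computes the relative error of MFPT estimates against the FEM values, assuming the conventional definition ε = |τ − τ_FEM| / τ_FEM (the exact form of Eqn. 23 is not reproduced here), along with the fraction of parameter choices falling within the 1% threshold.

```python
import numpy as np

def relative_error(tau, tau_fem):
    """Relative error of MFPT estimates against the FEM ground truth, assuming
    the conventional definition |tau - tau_fem| / tau_fem."""
    tau = np.asarray(tau, dtype=float)
    tau_fem = np.asarray(tau_fem, dtype=float)
    return np.abs(tau - tau_fem) / np.abs(tau_fem)

# Hypothetical usage with MFPTs tabulated at matching (lambda, sigma) choices:
# eps = relative_error(tau_bd, tau_fem)
# print("max relative error:", eps.max())
# print("fraction within 1%:", (eps < 0.01).mean())
```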
[1] DNA manipulation, sorting, and mapping in nanofluidic systems
[2] DNA electrophoresis in microfabricated devices
[3] Nanofluidic devices and their applications
[4] Micro- and nanofluidic devices for environmental and biomedical applications
[5] Nanopore-based devices for bioanalytical applications
[6] Entropic trapping and sieving of long DNA molecules in a nanofluidic channel
[7] Electrophoretic size separation of particles in a periodically constricted microchannel
[8] Electrophoretic ratcheting of spherical particles in well/channel microfluidic devices: Making particles move against the net field
[9] Nanofilter array chip for fast gel-free biomolecule separation
[10] Molecular sieving in periodic free-energy landscapes created by patterned nanofilter arrays
[11] Electrophoretic time-of-flight measurements of single DNA molecules with two stacked nanopores
[12] Microfluidic devices for the detection of viruses: aspects of emergency fabrication during the COVID-19 pandemic and other outbreaks
[13] Scalable mRNA and siRNA lipid nanoparticle production using a parallelized microfluidic device
[14] First passage problems in biology
[15] Entropic trapping of DNA with a nanofiltered nanopore
[16] DNA translocations through nanopores under nanoscale preconfinement
[17] Mapping the variation of the translocation α scaling exponent with nanopore width
[18] A sequential nanopore-channel device for polymer separation
[19] A guide to first-passage processes
[20] Asymptotic analysis of first passage time problems inspired by ecology
[21] First passage processes in cellular biology
[22] Introduction of first passage time (FPT) analysis for software reliability and network security
[23] Synaptic transmission and Fokker-Planck equation
[24] ImageNet: A large-scale hierarchical image database
[25] Deep learning
[26] Natural language processing (almost) from scratch
[27] Deep learning-based numerical methods for high-dimensional parabolic partial differential equations and backward stochastic differential equations
[28] DGM: A deep learning algorithm for solving partial differential equations
[29] Recursive regression with neural networks: Approximating the HJI PDE solution
[30] Neural networks catching up with finite differences in solving partial differential equations in higher dimensions
[31] Solving high-dimensional partial differential equations using deep learning
[32] Deep backward schemes for high-dimensional nonlinear PDEs. Mathematics of Computation
[33] Simulator-free solution of high-dimensional stochastic elliptic partial differential equations using deep neural networks
[34] Neural network study of hidden-charm pentaquark resonances
[35] A deep learning solution approach for high-dimensional random differential equations
[36] Learning in modal space: Solving time-dependent stochastic PDEs using physics-informed neural networks
[37] Machine learning approximation algorithms for high-dimensional fully nonlinear partial differential equations and second-order backward stochastic differential equations
[38] Machine-learning solver for modified diffusion equations
[39] A proof that artificial neural networks overcome the curse of dimensionality in the numerical approximation of Black-Scholes partial differential equations
[40] A proof that deep artificial neural networks overcome the curse of dimensionality in the numerical approximation of Kolmogorov partial differential equations with constant diffusion and nonlinear drift coefficients
[41] A proof that rectified deep neural networks overcome the curse of dimensionality in the numerical approximation of semilinear heat equations
[42] NVIDIA SimNet™: An AI-accelerated multi-physics simulation framework
[43] Numerically solving parametric families of high-dimensional Kolmogorov partial differential equations via deep learning
[44] The deep parametric PDE method: application to option pricing
[45] Solving parametric PDE problems with artificial neural networks
[46] A review of parametric model order reduction techniques
[47] A review of model order reduction methods for large-scale structure systems
[48] A deep learning approach to reduced order modelling of parameter dependent partial differential equations
[49] Building high accuracy emulators for scientific simulations with deep neural architecture search
[50] Deep neural networks for nonlinear model order reduction of unsteady flows
[51] Model order reduction assisted by deep neural networks (ROM-net)
[52] A comprehensive deep learning-based approach to reduced order modeling of nonlinear time-dependent parametrized PDEs
[53] Geometry parametric model order reduction with randomly generated projection bases
[54] Model reduction of variable-geometry interconnects using variational spectrally-weighted balanced truncation
[55] A multiparameter moment-matching model-reduction approach for generating geometrically parameterized interconnect performance models
[56] PhyGeoNet: Physics-informed geometry-adaptive convolutional neural networks for solving parameterized steady-state PDEs on irregular domain
[57] Neural network solutions to differential equations in nonconvex domains: Solving the electric field in the slit-well microfluidic device
[58] Solving nonlinear and high-dimensional partial differential equations via deep learning
[59] Enhancing function approximation abilities of neural networks by training derivatives
[60] DeepONet: Learning nonlinear operators for identifying differential equations based on the universal approximation theorem of operators
[61] TensorFlow: Large-scale machine learning on heterogeneous systems
[62] Understanding the difficulty of training deep feedforward neural networks
[63] Adam: A method for stochastic optimization
[64] The FEniCS project version 1.5. Archive of Numerical Software
[65] Kinetics of escape through a small hole
[66] Quasi-Monte Carlo sampling for machine-learning partial differential equations
[67] Enhancing accuracy of deep learning algorithms by training with low-discrepancy sequences

The authors gratefully acknowledge funding from Mitacs under the Accelerate Entrepreneur program (Ref.