General Solution For Generalised Newell-Whitehead-Segel Equations
Luisiana X. Cundin
2020-08-05

Abstract. In this monograph, two sets of parabolic differential equations are studied, each with nonlinear medium response. The equations are generally referred to as "Newell-Whitehead-Segel equations," which model a wide variety of nonlinear physical, mechanical and biological systems. Nonlinear medium response can be viewed from many perspectives: as a memory response, whereby the medium "remembers" earlier influences; or as a reactive response, whereby the medium actively responds to input, as in chemical reactivity, turbulence and many other circumstances. These equations arise often in the biological sciences when modeling population dynamics, whether the population be genomic, such as alleles, or animal species in the environment; finally, these equations are often employed to model neurological responses from excitable cellular media. The solutions provided are of a very general nature: a canonical set of solutions is given for a class of nonlinear parabolic partial differential equations whose nonlinear medium response is expressed as either a p-times iterated convolution or a p-times multiplicative response. The advantage of canonical solution sets is that they involve classic representative forms, such as Green's function or Green's heat kernel, and aid researchers in further complication, analysis and understanding of the systemic behavior of these types of nonlinear systems.

Nonlinear dynamic systems both fold and stretch the experimental space within which they reside; repeated folding and stretching ultimately leads to chaotic behavior. Chaos is of particular interest to many modern researchers in their study of complex dynamic systems.
A chaotic system is deterministic at heart, yet behaves seemingly randomly over time, revealing a manifold world that enjoys an array of very rich and convoluted behavior. Even though the "heat equation" is widely perceived as a dynamic process not exhibiting chaotic behavior, this most fundamental process describes some of the most elementary physical processes in the universe, namely, the information content of a system, which physicists also deem entropy. Chaos becomes evident within heat evolution if the heat equation is run in reverse, the so-called "Reversed Heat Equation," whereby time is reversed and cooler regions pool and pile heat upon any hotter region in the vicinity. The implosion of fissile material exhibits this behavior, where unintended hot spots form within the core of the pit, causing unstable, inconsistent chain reactions. Ultimately, in reverse, the heat equation does exhibit chaotic behavior; thus, though often unappreciated, this most fundamental physical process is chaotic in both directions of time. Of course, focusing on the linear heat equation, the equation simply describes the flow of heat from hot regions to colder regions in some domain D. This natural phenomenon may appear, at first, quite unexciting; nevertheless, heat is ultimately subject to the rules of quantum mechanics, therefore the entire process is probabilistic in nature. In fact, the heat equation, proper, essentially describes the statistical average or aggregate behavior of a system seeking equilibrium; for example, Einstein's description of Brownian motion is the heat equation, basically describing the aggregate, random motion of Brownian particles [2]. The classic example of chaos is the Lorenz system, which describes atmospheric convection; but all air currents in the atmosphere are governed by temperature, therefore even this process is ultimately ruled by diffusion-convection processes.
For another example, in solid state physics, heat is treated as a quantum particle, referred to as a phonon, and is described by Schrödinger's equation, which is the heat equation rotated in the complex plane to reflect the conjugate solution on Riemann's sphere. In quantum mechanics, the basic solutions, the so-called wave functions, are always time averaged and spatially averaged; therefore, once again, the aggregate behavior constitutes what is observable. Even in pure mathematics, examples exist of the utility of the heat equation: the so-called Ricci flow is a mathematical method to explore topological surfaces [1]. Examples abound, but for one additional example, diffusion processes are widely used in the biological sciences to model everything from cellular migration, biological cellular growth, genomics, mutagenesis, population dynamics, and the list goes on. There are few analytic solutions for nonlinear equations; thus, analysts resort to numerical analysis to study and generate solutions for such models. There exist many prohibitive numerical obstacles in attempting a study of long-time chaotic behavior for a nonlinear system; mainly, the nature of computers and how they represent numbers in binary form disallows continuous number representation [3]. An algorithm run on a computer can become "stuck in a rut," so to speak, producing false sets of solutions. The number line is not equally spaced along the binary representation in the IEEE numbering system; hence, round-off error causes interlaced sampling problems (see Bracewell for details and implications); plus, error accumulates over the run of the algorithm, leading to biased results [4]. This fact is unfortunately true for any numerical method entertained: finite difference, finite element, finite volume, interpolation, &c; as a consequence, there is no substitute for analytic solutions for differential systems, in general.
Closed-form, analytic results provide a global solution for analysts, whereby in-depth analysis may be performed for a system under study; in addition, analytic results provide a kernel for further complications, which can either yield additional closed-form results or, if not, aid analysts in minimizing numerical error under further integration or some other mathematical operation. The differential systems studied in this monograph are quite general in nature and provide closed-form, analytic solutions for a class of nonlinear parabolic differential equations. These equations are called by many names, but are known as the "Newell-Whitehead-Segel equation," which has the following general form:

∂u/∂t = D ∂²u/∂x² − b u + ε f(u),

where f(u) is a functional of the unknown function u(x,t). The differential system is governed by a set of constants {D, b, ε}, where D is the diffusion rate or constant, b is the convection force and ε is the magnitude of the nonlinear medium response, which is represented by the functional f(u). For the linear case, i.e. for ε and b equal to zero, the equation is referred to as the heat equation. Solutions abound for the linear case, but the most fundamental solution is that referred to as Green's solution or Green's heat kernel, viz.:

G(x,t) = (4πDt)^(−1/2) e^(−x²/4Dt).

Green's solution represents the most fundamental solution because it is irreducible. All other solutions, in any other form or representation, are either Green's heat kernel or an amalgamation of Green's heat kernel with other functions. It is most advisable and satisfying to achieve or find a Green's solution for a given differential system, because it provides the most basic of solutions. If further complications are entertained, such as a multi-regional domain with differing diffusion constants, then a global solution can be constructed by suitable choice of boundary conditions for each region, thereby forming a quotient space to describe the entire domain D.
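As a brief numerical sketch (assuming numpy; the grid sizes and tolerances below are illustrative choices, not taken from the text), Green's heat kernel can be checked for its two defining properties: unit mass for all time and concentration toward a point source as t → 0:

```python
import numpy as np

def heat_kernel(x, t, D=1.0):
    """Green's heat kernel G(x,t) = exp(-x**2/(4*D*t)) / sqrt(4*pi*D*t)."""
    return np.exp(-x**2 / (4.0 * D * t)) / np.sqrt(4.0 * np.pi * D * t)

x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]

# Total heat is conserved: the kernel integrates to unity for every t > 0.
for t in (0.1, 1.0, 5.0):
    assert abs(np.sum(heat_kernel(x, t)) * dx - 1.0) < 1e-6

# As t -> 0 the kernel narrows and grows toward a Dirac delta at x = 0.
assert heat_kernel(0.0, 0.01) > heat_kernel(0.0, 1.0)
```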
Green's heat kernel reduces to a Dirac delta function of space in the limit of zero time, thus describing one point in the entire domain as the input of heat for all later time; hence, Green's heat kernel simply represents one point in space initially heated and then describes how this energy diffuses or spreads out over time. Having Green's function reduce to a Dirac delta function is crucial, because, if further complication be desired, it is a simple matter of convolving whatever spatial distribution under consideration with Green's heat kernel to arrive at that solution. This property is generally referred to as a transfer function, because the kernel is an impulse response. In general, further complications can be achieved by convolving a driving function f(x,t) with Green's heat kernel, thus:

u(x,t) = G(x,t) * f(x,t).

Numerically speaking, this property carries great weight, because the kernel carries the core, key and bulk of the information for the system; therefore, Pareto is achieved by simply studying Green's solution. In addition, since the majority of the information is held in hand as an exact solution, all further complications can mitigate numerical error, aid in forming complex quotient spaces and a host of other mathematical operations. If convection be considered, i.e. b not equal to zero, then the solution requires an exponential mapping from Green's solution to a plane tangent to Riemann's sphere; namely, Green's function would be multiplied by the exponential of the convection constant multiplied by time, i.e. G(x,t)e^(−bt). The exponential mapping is the result of integrating the convection constant over time and can be considered a winding number; thus, the larger the convection, the faster the decay of the system; in other words, think of heat in a convective system diffusing and being further carried away by fluid flow.
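The transfer-function property described above can be illustrated numerically: convolving the kernel with itself advances the solution in time, G(·,t₁) * G(·,t₂) = G(·,t₁+t₂). A minimal sketch, assuming numpy and an illustrative grid:

```python
import numpy as np

def heat_kernel(x, t, D=1.0):
    """Green's heat kernel for diffusion constant D."""
    return np.exp(-x**2 / (4.0 * D * t)) / np.sqrt(4.0 * np.pi * D * t)

x = np.linspace(-30.0, 30.0, 6001)
dx = x[1] - x[0]

# Convolving the kernel with itself advances the solution in time:
# G(., t1) * G(., t2) = G(., t1 + t2)  (the transfer-function property).
g1 = heat_kernel(x, 1.0)
g2 = heat_kernel(x, 2.0)
composed = np.convolve(g1, g2, mode="same") * dx
exact = heat_kernel(x, 3.0)
assert np.max(np.abs(composed - exact)) < 1e-4
```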
Of course, if the convection constant is negative, then the convection process is reversed: heat is being driven into the system, and therefore the solution would grow exponentially in time. Finally, considering the nonlinear medium response, which is governed by the constant ε, there is an array of possible responses one can consider; ultimately, the system is now a function of a set of constants and the function itself, i.e. {D, b, ε, f(u)}. A more formal, general definition of the system's dependence would be a functional with a set of variables that may or may not be time or spatially dependent; in addition, the system itself plays a role in deciding the behavior at any given moment in the experimental space, viz.:

F = F(u; D_i, b_i, ε_i, f(u)).

The functional F is a most general functional, whereby all the independent variables could be functions of the spatial variables, which could themselves be a set of independent variables indexed by i. Of course, all variables could be indexed to reflect a domain comprised of multiple regions (R_i), i.e. ∑ R_i ⊂ D. Finally, all independent variables could be time dependent. The functional considered is homogeneous, and this will be the only circumstance considered in this monograph; although due diligence is given to further complications in the case of inhomogeneous problems. There are two fundamental medium responses to be considered in this monograph, namely, a nonlinear medium response where the functional f(u) is either an iterated convolution or a multiplicative response. Now, non-linearity is a process of folding and stretching space, and this leads to chaotic behavior; in both functionals considered, the basic process of folding and stretching of space is involved. In the case of convolution, generally, two functions are multiplied together and integrated over a shifted variable; this process tends to smooth or stretch the space.
In like manner, multiplication of two functions is equivalent to the convolution of the Fourier transform of each function; hence, the same process is occurring, only in the frequency domain. The first case to be considered will be the iterated convolution of the medium, which may be thought of as the medium possessing "memory," whereby information is folded from the past onto the present. In probability theory, when a set of independent random variables is added, the distribution of the normalized sum tends to a normal distribution, even if the original distribution was not normal, i.e. the Central Limit Theorem. This is evidenced by Green's heat kernel, which is certainly Gaussian in spatial terms; thus, the spatial average of the heat particles, Brownian particles, &c, tends to a smooth, Gaussian distribution. Of course, the Central Limit Theorem speaks to tendencies toward a Gaussian distribution, i.e. a normal distribution N(µ, σ); nonetheless, pressing beyond this limit, the ultimate conclusion of an infinite convolution is a tendency to unity, in other words, a normal distribution whose variance is ever widening. Contrariwise, another approach would be to consider a multiplicative medium response, which is predominately a folding of the space; for example, consider the following:

f(u) = u · u = u².

The order of multiplication governs the number of solutions or roots of the system and represents the experimental space, essentially, being folded upon itself, where the solutions reside on two separate Riemann sheets. In the extreme, the logarithmic function of a complex variable yields an infinite set of solutions, log(z) = ln|z| + i(θ + 2kπ), where each solution resides on a separate Riemann sheet, indexed by the integer k. Consequently, this is the reason one sees analysts seeking various roots to nonlinear equations, usually some derived algebraic eigenvalue, which, once solved, enables the analyst to parse out specific solution sets on the complex domain Cⁿ.
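The iterated-convolution tendency invoked above (the Central Limit Theorem, with ever widening variance) can be sketched numerically; the uniform density and fold count below are illustrative choices:

```python
import numpy as np

x = np.linspace(-0.5, 0.5, 201)
dx = x[1] - x[0]
f = np.ones_like(x)              # uniform density on [-1/2, 1/2], variance 1/12

g = f.copy()
variances = []
for _ in range(4):
    g = np.convolve(g, f) * dx   # one further fold of "memory"
    n = g.size
    grid = (np.arange(n) - (n - 1) / 2) * dx   # support is symmetric about 0
    mass = np.sum(g) * dx
    variances.append(np.sum(grid**2 * g) * dx / mass)

# Each extra fold adds one unit of the single-step variance (1/12):
# the shape tends toward a Gaussian while the variance widens without bound.
for k, v in enumerate(variances):
    assert abs(v - (k + 2) / 12.0) < 0.01
```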
Brute force methods of identifying various algebraic roots come with frequent frustrations and failures; worse, the solution settled upon may only be valid within a confined range of the complex phase, leading to many unruly, interval dependent solutions that involve such functions as inverse trigonometric functions. A multiplicative functional could represent a joint set of independent probabilities, whereby probabilities are multiplied together; this is often the perception in the biological sciences. In the physical sciences, multiplicative medium responses represent a reactive medium, such as in chemistry, where an aliquot of chemical species reacts over time, whether endo- or exothermic in nature. Other examples abound; nonetheless, primarily, the process under study is multiplicative in nature, that is to say, the probabilities are independent. As will be shown, the general solutions for either the multiplicative case or the iterated convolution problem are quite general in nature, but the solution can involve some rather ornery special functions, such as the incomplete gamma functions or the exponential integral. Both representations indicate the nature of the global solution involved: being nonlinear in nature, time is not completely separable from the system's solution; thus, a pole exists somewhere in the complex plane, often residing at the origin. The very nature of nonlinear differential systems is self-dependent or self-feeding; therefore, like an ouroboros eating its own tail, the system is constantly subject to a continuous feedback loop. Solutions evade capture by analysts because no beginning or end presents itself; thus, it is necessary to catch the dragon by its tail and feed its tail to its mouth to initiate the nonlinear process.
The solutions are all solved and presented in the Fourier codomain, for two important reasons: firstly, transformation to the codomain enables finding the exponential time mapping, which would be generally impossible if the spatial variable were explicitly involved; secondly, given the nature of the poles, their movements in time, &c, inversion from the codomain is generally prevented for either system considered in this monograph. All differential equations can be perceived as intimately associated with the theory of distributions, and from this point of view, the solutions outlined in this monograph have far reaching implications. The range of possible functions to entertain is far too large for one individual; therefore, it is my hope to foster further curiosity and future research into this area; moreover, it is my express hope that the methodology outlined should greatly aid young researchers in their quest. The methodology employed in this monograph takes advantage of several properties convolutions possess, which, judiciously applied, enable separating out the exponential mapping and reducing the equation to the Bernoulli differential equation. Once the equation is reduced to the Bernoulli differential equation, an exact solution is admitted for these nonlinear ordinary differential equations. Ultimately, my method is based upon the following beliefs: Postulate 1.1. Green's function represents an irreducible solution to the linear differential system; therefore, it is postulated that any further complication for the nonlinear differential system is best constructed from a set of Green's functions, thus generating a solution comprised of canonical sets of functions. Postulate 1.2. The exponential mapping is primarily independent of the base functions involved and dependent upon the nonlinear functional alone.
Before closing, I would feel remiss if I did not mention that many would find the inability to transform the solution back into the original domain, that is to say, the failure to find inverse Fourier transforms for the solutions, deficient. In my opinion and experience, the spectral domain offers far greater freedom; moreover, after careful consideration, the codomain should be considered superior to the original domain. I would venture to say that most readers of this monograph would like to see inversion, if for no other reason than to enable ease in applying boundary conditions; but it is altogether possible to apply boundary conditions in the spectral domain, therefore the pressure or need for inverting the solutions simply does not exist, at least not in my mind. With the advent of computers, it is a simple matter to generate the requisite lists for a discrete Fourier inversion and thereby obtain the same. A generalized inversion of the solutions presented in this monograph is, I believe, without any analytic representation, especially when considering how the complex plane would be broken into multiple branches by the index of non-linearity p. This last point is quite prohibitive in attacking a generalized Fourier inversion of the solutions provided. It is altogether possible specific inversions be found, but it would require defining the constant of integration, the power of non-linearity, p, and extraordinary work in complex integration. Additional benefits exist in working solely in the frequency domain; for example, if the coefficients are spatially dependent, then multiplication in the spectral domain can realize this fact; similarly, as seen in electromagnetic applications involving wave propagation and a complex index of refraction, whose dependence may be both spatial and temporal. For stochastic systems, the variables would more easily be treated in the spectral domain.
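The discrete Fourier inversion mentioned above can be sketched in a few lines, assuming numpy; here the linear spectrum e^(−D(2πs)²t) e^(−bt) is inverted and compared against G(x,t)e^(−bt) (grid sizes and parameter values are illustrative):

```python
import numpy as np

# Discrete Fourier inversion of a codomain solution, as a numerical
# stand-in for an analytic inverse transform.  For the linear case the
# spectrum is u(s,t) = exp(-D*(2*pi*s)**2 * t) * exp(-b*t), whose inverse
# is Green's kernel damped by convection, G(x,t)*exp(-b*t).
D, b, t = 1.0, 0.5, 1.0
N, dx = 4096, 0.02
s = np.fft.fftfreq(N, d=dx)                  # frequency samples
u_hat = np.exp(-D * (2 * np.pi * s) ** 2 * t) * np.exp(-b * t)

u = np.fft.ifft(u_hat).real / dx             # discrete inversion
x = np.fft.fftfreq(N, d=1.0 / (N * dx))      # matching spatial grid

exact = np.exp(-x**2 / (4 * D * t)) / np.sqrt(4 * np.pi * D * t) * np.exp(-b * t)
assert np.max(np.abs(u - exact)) < 1e-8
```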
I could cite additional benefits for adhering to the spectral domain; nevertheless, I'll leave the discussion here. Consider the following homogeneous parabolic partial differential equation, where the medium response is nonlinear, expressed as a p-times, self-repeated convolution, viz.:

∂u/∂t = D ∂²u/∂x² − b u + ε u^(*p),   u^(*p) = u * u * ⋯ * u (p factors).   (1)

Assuming the general solution u(x,t) to the partial differential system is comprised of a convolution of two functions, namely, u(x,t) = G(x,t) * f(x,t), where G(x,t) is Green's function (a known solution for the linear partial differential equation), a series of particular advantages can be capitalised upon regarding the properties of convolutions. The first convolution property enables distributing a derivative not explicitly involved in the convolution, resulting in a standard operation for a derivative on a set of functions, see Theorem (3) in Appendix (5). The second property of convolutions to be capitalised upon is the arbitrary choice of which factor of a convolution a derivative is applied to, where the derivative is over the variable of the convolution, see Theorem (4) in Appendix (5). Employing the above two theorems enables reducing the original differential equation to the Bernoulli differential equation, viz.:

f(s,t)′ + b f(s,t) = ε G(s,t)^(p−1) f(s,t)^p,   (2)

where the convolution is in the spatial domain, functions of (s,t) denote Fourier transforms, employing Bracewell's symbol for Fourier transformation (⊂), and prime (′) indicates the derivative with respect to time [4]. Proof. Since Green's function is a known solution for the linear differential equation, its time derivative cancels the second order spatial derivative; thus, after substituting the convolution for the unknown function u(x,t), the original equation, eq. (1), can be simplified with the aid of theorem (3) and theorem (4), see Appendix (5), viz.:

G(x,t) * ∂f/∂t = −b G(x,t) * f(x,t) + ε (G(x,t) * f(x,t))^(*p).

Finally, using the property that the Fourier transform of a convolution equals the multiplication of each transformed function, theorem (5): Appendix (5), the original equation is reduced to the Bernoulli differential equation:

G(s,t) f(s,t)′ = −b G(s,t) f(s,t) + ε G(s,t)^p f(s,t)^p,

which is equivalent to equation (2). Done.
The Bernoulli differential equation is a nonlinear ordinary differential equation and admits exact solutions, typically solved by employing the integrating factor method; consequently, a solution is possible for the reduced differential equation, eq. (2), and therefore a solution exists for the original differential equation, eq. (1), viz.:

f(s,t) = h(s,t)^m,   m = −1/(p − 1),   h(s,t) = e^((p−1)bt) [ C(s) − (p − 1)ε ∫ G(s,t)^(p−1) e^(−(p−1)bt) dt ],   (3)

where C(s) is the constant of integration resulting from integration over time. The coefficient of integration C(s) is represented in the codomain and is completely arbitrary. The second term in function h(s,t) is the general nonlinear kernel solution, which represents the (p − 1)-fold multiplication of Green's function in Fourier space, integrated with the integrating factor over time. The time integral is indefinite, but could be made definite for specificity, since the time domain is defined to be {t | t ∈ R, t ≥ 0}. Let's show the general solution to the Bernoulli differential equation satisfies the general requirements, viz.: Proof. If equation (3) is a solution for equation (2), then taking the derivative with respect to time yields:

u(s,t)′ = G(s,t)′ f(s,t) + G(s,t) f(s,t)′,

where the first term cancels the second order spatial derivative of Green's function (as already discussed). It only remains to prove the second term satisfies equation (2); therefore, consider the following definitions:

F(s,t) = h(s,t)^m,   h(s,t)′ = (p − 1)b h(s,t) − (p − 1)ε G(s,t)^(p−1).

Recognize F(s,t) = h^m and parameter m = −1/(p − 1); hence, raising the function F(s,t) to the p-th power yields the identity h^(m−1) = F(s,t)^p, so that

F(s,t)′ = m h^(m−1) h′ = −b F(s,t) + ε G(s,t)^(p−1) F(s,t)^p.

Equivalence has been shown. Done.
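The Bernoulli step can be verified symbolically; the sketch below, assuming sympy, uses g(t) = e^(−at) as a stand-in for the transformed Green's function factor G(s,t)^(p−1) at a fixed frequency, and a concrete order p = 3 (both illustrative choices):

```python
import sympy as sp

# Symbolic check of the Bernoulli step: with y' + b*y = eps*g(t)*y**p and
# g(t) = exp(-a*t) standing in for G(s,t)**(p-1) at fixed frequency s,
# the solution is y = h**(-1/(p-1)) with
#   h = exp((p-1)*b*t) * (C - (p-1)*eps*Integral(g*exp(-(p-1)*b*t), t)).
t, b, eps, a, C = sp.symbols('t b epsilon a C', positive=True)
p = 3                                      # concrete order of non-linearity

g = sp.exp(-a * t)
integral = sp.integrate(g * sp.exp(-(p - 1) * b * t), t)
h = sp.exp((p - 1) * b * t) * (C - (p - 1) * eps * integral)
y = h ** sp.Rational(-1, p - 1)

# Residual of the Bernoulli equation must vanish identically.
residual = sp.simplify(y.diff(t) + b * y - eps * g * y**p)
assert residual.equals(0)
```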
A most general solution has thus been obtained, representing a closed-form, analytic solution for a homogeneous nonlinear parabolic differential equation whose nonlinear medium response is expressed as a p-iterated convolution, viz.:

u(x,t) = G(x,t) * f(x,t).   (4)

The solution is comprised of Green's function G(x,t) and a nonlinear kernel function f(x,t), whose inverse Fourier transform is defined to be the inverse transform of the function h(s,t) raised to the power of parameter m = −1/(p − 1), viz.:

f(x,t) = F⁻¹[h(s,t)^m],   m = −1/(p − 1),

where the inverse Fourier transform is signified by the symbol F⁻¹; lastly, function h(s,t) is defined as such:

h(s,t) = e^((p−1)bt) [ C(s) − (p − 1)ε ∫ G(s,t)^(p−1) e^(−(p−1)bt) dt ].

Done. To facilitate analysis, the nonlinear kernel for function h(s,t) will be evaluated with definite limits {0,t} and the constant of integration C(s) set to unity, viz.:

h(s,t) = 1 − (p − 1)ε ∫₀ᵗ G(s,τ)^(p−1) e^(−(p−1)bτ) dτ;   (5)

the reciprocal integrating factor, e^((p−1)bt), has been pulled out of the definition for convenience, so that the solution in the codomain reads u(s,t) = G(s,t) e^(−bt) h(s,t)^m. Even though the definition for the solution, function h(s,t), is completely arbitrary, this definition holds some nice properties. Firstly, in the limit of time to zero, the function h(s,t) equals unity, which yields a Dirac delta function, δ(x), after inverting from the Fourier codomain to the original spatial domain. This means the initial condition is set by some arbitrary magnitude A, i.e. u(x,0) = Aδ(x). Furthermore, this also means the impulse solution for Green's heat function is valid for this nonlinear kernel solution; therefore, the kernel solution constitutes a nonlinear Green's function or Green's solution. Further complications are thereby possible by employing the transfer function theorem with theorem (1). As a means of checking whether the solution reduces to the linear case, setting the nonlinear medium coefficient ε equal to zero, the solution in theorem (1) immediately reduces to the linear solution with convection, viz.:

u(x,t) = G(x,t) e^(−bt).

If the medium response is solely linear, then it would be expected that both the equation and the general solution reduce to the linear case. . . as demonstrated.
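The two limiting properties just listed can be exercised numerically; the sketch below assumes numpy, uses the closed-form of the time integral in eq. (5) under the sign conventions adopted above (with C(s) = 1), and illustrative parameter values:

```python
import numpy as np

# Nonlinear kernel h(s,t) with C(s) = 1 and the reciprocal integrating
# factor pulled out:
#   h(s,t) = 1 - (p-1)*eps * Int_0^t exp(-(p-1)*(D*(2*pi*s)**2 + b)*tau) dtau
def h(s, t, p=2, eps=0.5, D=1.0, b=1.0):
    lam = D * (2 * np.pi * s) ** 2 + b
    return 1.0 - eps * (1.0 - np.exp(-(p - 1) * lam * t)) / lam

def u_hat(s, t, p=2, eps=0.5, D=1.0, b=1.0):
    """Spectrum of the solution: G(s,t) * exp(-b*t) * h(s,t)**(-1/(p-1))."""
    G_hat = np.exp(-D * (2 * np.pi * s) ** 2 * t)
    return G_hat * np.exp(-b * t) * h(s, t, p, eps, D, b) ** (-1.0 / (p - 1))

s = np.linspace(-2.0, 2.0, 401)
# t -> 0: h -> 1, i.e. a flat spectrum, hence a Dirac delta in space.
assert np.allclose(h(s, 0.0), 1.0)
# eps = 0: the solution reduces to the linear kernel with convection.
assert np.allclose(u_hat(s, 1.0, eps=0.0),
                   np.exp(-(2 * np.pi * s) ** 2) * np.exp(-1.0))
```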
With that said, let's see the behavior of the nonlinear solution along the boundary of the domain, where special attention is given to whether or not the maximum principle is satisfied. In the limit of infinite time, the general solution, equation (4), approaches the steady state. By inspection, the specific solution, eq. (5), shows the function h(s,t) approaches a constant in time; in the codomain, this constant is multiplied by Green's function and the exponential dependent on the variable b; moreover, the frequency response of the function u(s,t) is dominated by Green's function in the codomain, and therefore the same holds in the original spatial domain; as a consequence, the entire solution approaches zero along the boundary of the domain, ∂D. The solution, even though it represents a nonlinear equation, still satisfies the maximum principle, because Green's function dominates the solution for large time. It is imperative for the heat equation, regardless of whether it is linear or nonlinear in nature, to satisfy the maximum principle, because for both elliptic and parabolic partial differential systems, harmonic theory states that for any precompact subset D of the domain of u(x,t), the maximum of u(x,t) on the closure of D is achieved on the boundary of D, i.e.

max over D̄ of u(x,t) = max over ∂D of u(x,t).

The spatial behavior can be analyzed in the codomain by focusing on Green's kernel in the nonlinear kernel, eq. (5), which generally sharpens as the power of non-linearity, p, increases; this indicates the energy is being swiftly spread along the spatial dimension. This behavior is to be expected, given the original differential system, eq. (1), is a repeated convolution; hence, the solution u(x,t) is continuously being folded in upon itself over time, that is to say, the memory of the system causes the spectrum to widen with increasing power of non-linearity. There exists a very interesting behavior with respect to the magnitude of the nonlinear medium response, ε.
The ramifications of this behavior have greater meaning for hyperbolic differential systems, for which parabolic partial differential systems form a basis. Nonetheless, with respect to the system discussed in this monograph, the magnitude of non-linearity results in a curious behavior for certain values of the medium coefficient ε. If we solve for the root of equation (5), we find the following general condition, with the frequency variable set to zero, viz.:

t₀ = ln( ε / (ε − b) ) / ((p − 1) b).   (6)

The condition above gives the time value, t₀, at which the function h(s,t) crosses the origin along the time axis, as a function of the variables {ε, p, b, C(s) = 1}; and, since the function h(s,t) is really the denominator in the solution, a root creates an asymptotic discontinuity in time for critical times t₀. By inspection of the definition, eq. (5), if ε is negative, the denominator will always be positive, thus there are no roots; but for positive values, especially larger than unity, specifically ε > C(s), the function h(s,t), starting from its maximum, C(s), at the time origin, decreases, crossing the time axis at time t₀, thereafter becoming a strictly negative function of time. If a root exists, the value of function h(s,t) will be positive as it approaches the root from the left and negative as it continues beyond the root. This will cause a momentary explosion in the magnitude of u(x,t₀); but this odd behavior will only occur for a root time, t₀, contained in the domain {t | 0 < t < ∞}. For typical values {0 ≤ |ε| ≤ 1}, the time t₀ is either negative or fails to be real, and, by inspection, this eliminates any root occurring within the domain; although the convection constant can modify these results.
The heat equation is generally a dissipative system; therefore, for all negative values of ε, the root, t₀, is negative and no root appears in the time domain; thus, for all regular definitions of the medium response, regardless of whether it is linear or nonlinear, the solution is regular, convex, monotonic and bounded on the domain D. For values of ε > C(s), there does exist a root, t₀, within the time domain; hence, there will be an asymptote somewhere along the timeline. This odd behavior is truly a nonlinear resonance and has specific application for certain physical problems. Scientists often employ the original equation, eq. (1), with coefficient ε ≪ 1 as a model in the biological sciences, where two such examples are 1) genetic drift (alleles or other biomarkers) shifting in a population over time, modeling the invasion of a new dominant allele throughout a given population of some species of animal, and 2) the shift in prevalence of some species of animal in a population over some environmental area, given food sources, disease, species invasion and other extraneous factors. The existence of a momentary explosion in population is certainly interesting and warranted, for many real-world problems show sudden blooming of populations, whether it be viruses, bacteria or other larger animal species. Equation (6) provides the root along the timeline for values {b, ε}. In the limit of ε to unity from the right-hand side, the root in time approaches infinity; thus, the effect of any asymptotic root would not be visible. The overall effect of the nonlinear medium response is greatly muted for roots downfield along the timeline, because Green's function diminishes rapidly and essentially overtakes any awkward nonlinear influence. For all values larger than unity, the root swiftly moves to the origin, and the root causes an asymptote located at time t₀ specified by eq. (6).
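The root condition, eq. (6), can be exercised numerically; a sketch assuming numpy and the sign conventions adopted above, with C(s) = 1 and illustrative parameter values:

```python
import numpy as np

# Root of h(0,t) along the time axis with C(s) = 1: solving
#   1 - eps*(1 - exp(-(p-1)*b*t))/b = 0
# gives  t0 = ln(eps/(eps - b)) / ((p-1)*b),  which requires eps > b.
def t_root(eps, p=2, b=1.0):
    return np.log(eps / (eps - b)) / ((p - 1) * b)

def h0(t, eps, p=2, b=1.0):
    return 1.0 - eps * (1.0 - np.exp(-(p - 1) * b * t)) / b

# The claimed root really is a zero of h(0,t).
for eps in (1.5, 2.0, 10.0):
    assert abs(h0(t_root(eps), eps)) < 1e-9

# eps -> 1 (+) with b = 1: the root recedes downfield along the timeline;
# large eps: the root moves swiftly toward the origin.
assert t_root(1.01) > t_root(2.0) > t_root(10.0)
```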
Because the asymptotic behavior occurs near the origin, where Green's function still yields reasonable values, the result can be a momentary, explosive increase along the asymptotic line in time. Typically, biologists use eq. (1) with values of |ε| < 1, which produce solutions that are smooth, regular and monotonic; yet even slightly larger values produce muted nonlinear responses; the greatest effects regarding non-linearity are seen for values {C(s) = b = 1, ε = 2} and p = 2. Generally speaking, the solution u(x,t) behaves similarly to the linear system, but the nonlinear kernel tends to cause the impulse to spread spatially much faster than the linear system and also to vanish in time much faster than the linear system would. The larger the order of non-linearity, p, the faster the system dampens to zero; hence, the energy of the system is swiftly dissipated in short order. In fact, in the limit of infinite order of non-linearity, the nonlinear factor h(s,t)^m approaches unity, viz.:

lim as p → ∞ of h(s,t)^(−1/(p−1)) = h(s,t)⁰ = 1;

any constant raised to the power of zero yields unity; therefore, the inverse Fourier transform is a Dirac delta function, δ(x). This result confirms the influence of nonlinear medium response in thermodynamic systems vanishes for large orders of non-linearity; therefore, it is wise to keep this in mind when contemplating such systems, for it is common to erroneously think that the larger the order of non-linearity, the larger the influence or the larger the distortion from the linear system; in fact, the very opposite has been clearly demonstrated. As for the theory of distributions, analysts typically look toward large numbers and find the natural tendency of the distribution toward a normally distributed set, the Central Limit Theorem; yet the above analysis does show, in the extreme, that is to say, for an infinite convolution of some distribution, the limit is unity.
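The vanishing of the nonlinear distortion for large p can be sketched numerically (assuming numpy, the reconstructed closed form for h, and illustrative parameter values):

```python
import numpy as np

# As the order of non-linearity p grows, m = -1/(p-1) -> 0, so the
# nonlinear factor h**m tends to unity and the kernel reverts toward the
# linear impulse: larger p means *less* nonlinear distortion, not more.
def nonlinear_factor(s, t, p, eps=0.5, D=1.0, b=1.0):
    lam = D * (2 * np.pi * s) ** 2 + b
    h = 1.0 - eps * (1.0 - np.exp(-(p - 1) * lam * t)) / lam
    return h ** (-1.0 / (p - 1))

s, t = 0.5, 1.0
deviations = [abs(nonlinear_factor(s, t, p) - 1.0) for p in (2, 5, 20, 100)]
assert deviations == sorted(deviations, reverse=True)   # monotone decay
assert deviations[-1] < 1e-2                            # essentially unity
```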
This result makes perfect sense: given any distribution, the infinite iterated convolution would lead to an ever widening variance. If further complication of the general solution be sought, there are two important perspectives to consider. The first is the use of the general solution as an impulse response (kernel) to represent nonlinear systems in a transfer function. Traditionally, if a forcing function is involved, that is to say, if the differential system be considered inhomogeneous, the influence of some arbitrary forcing function K(x,t) is handled by integrating the forcing function against the complementary solution to the heat equation, u(x,−t), then multiplying that integral result by the solution u(x,t); for example, in the linear case, viz.:

∂u/∂t = D ∂²u/∂x² − b u + K(x,t),   (7)

u(s,t) = e^(−D(2πs)²t) e^(−bt) ∫ k(s,t) e^(D(2πs)²t) e^(bt) dt + B(s) e^(−D(2πs)²t) e^(−bt).   (8)

The forced linear heat equation, eq. (7), has general solution, eq. (8), where the second term describes how the initial condition, that is to say, the initial distribution of heat throughout space at time zero, dissipates in time and space, and is realized by convolving B(x) with the heat kernel. The first term in eq. (8) is the influence of the forcing function, proper, where the forcing function k(s,t) is integrated over time with the complementary solution to the linear heat equation; finally, this integral is convolved with the linear heat kernel to represent how the forcing function dissipates over time and space. It is imperative to understand the meaning of the maximum principle, which states there is only one maximum within the domain for the heat equation.
Consider the forcing function being active for some window of time, say, 0 ≤ t ≤ t₀: the particular solution, the first term in equation (8), would be active up until time t₀ and thereafter would be zero; thus, in applying any time-dependent forcing function, it is advised to keep in mind that the last profile forced becomes the initial distribution, B(x), for the second term in equation (8), which then describes how the forced distribution of heat dissipates over time. In the case of the nonlinear system contemplated, eq. (1), a very similar definition ensues, as was seen in the linear case; but the forms involved are decidedly more complex. Despite the complexity, the benefit of achieving closed-form, analytic solutions cannot be overstated: such results provide analysts far greater insight into the overall behavior of the systems under study; additionally, analytic results provide far greater control over any subsequent integration, where efforts can be mounted to minimize numerical error; lastly, the ability to provide Pareto or better with respect to the total energy or information represented by the original partial differential system can be priceless. Achieving Pareto, eighty percent or more of the total energy or information represented by a system, enables tremendous advantages, where pure numerical integration, whether by finite difference methods, finite elements or some other purely numerical method, is known to be costly in time, resource hungry and fraught with frustrations; such methods are never able to guarantee that the numerical results obtained are accurate; worse, analysts are often without any closed-form, analytic solution against which a comparison may be made, and are therefore flying completely blind. A further imaginable complication would be an unknown function comprised of a set of functions, i.e. u(…), where function K(x,t) is arbitrary. 
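The remark about the last forced profile becoming the initial distribution can be illustrated with a crude explicit finite-difference run (all parameters are illustrative): forcing is applied for a fixed number of steps, and the subsequent unforced evolution is identical to a fresh unforced run started from the profile at the moment the forcing switches off.

```python
import numpy as np

# Explicit finite differences for u_t = D u_xx + k(x, t), with the
# forcing k active only during the first n_on steps.
D, dx, dt = 1.0, 0.1, 0.002          # dt < dx**2 / (2 D) for stability
x = np.arange(-5, 5, dx)
n_on, n_off = 200, 300               # steps with forcing on, then off

def step(u, forced):
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
    k = np.exp(-x**2) if forced else 0.0
    return u + dt * (D * lap + k)

u = np.zeros_like(x)
for _ in range(n_on):
    u = step(u, True)
profile_t0 = u.copy()                # the "last profile forced"
for _ in range(n_off):
    u = step(u, False)

v = profile_t0.copy()                # restart from t0 as initial condition
for _ in range(n_off):
    v = step(v, False)

print(np.max(np.abs(u - v)))         # -> 0.0, identical evolutions
```

After the forcing window, the particular solution contributes nothing new; only the stored profile at t₀ continues to dissipate.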
Three main scenarios arise in considering an additional set of functions in the substitution. The first would be a time-dependent function K(x,t). The condition of canceling the second order spatial derivative would still be satisfied, but the nature of function K(x,t) would need to be known in order to work out the reduced differential system in detail. Ultimately, the Bernoulli differential equation would result, and the specifics would need to be worked out by any reader who would contemplate such complexity. The second general scenario would be a function K(x) solely dependent on the spatial variable. There are two general cases one might consider, where function K(x) is a set of functions either convolved or multiplied, viz.: Both substitutions would still satisfy the requirement of canceling the second order spatial derivative. The solutions would be the following two definitions: where k̂ represents the Fourier transform of the kernel function considered. In the second case, a bit of sleight of hand will be employed to show the reader explicitly that these nonlinear solutions are not truly dependent on the substitution, per se; rather, the nonlinear kernel is a simple exponential mapping representing the nonlinear term, whose coefficient is ε, viz.: This last manipulation takes into account that the nonlinear exponential mapping is wholly dependent upon the definition of the nonlinear term alone; hence, one may think of this as normalizing the solution u(x,t) by dividing by the additional function K^(2)(x). The above manipulation shows the versatility of the general solution provided, which is limited only by the imagination of the analyst. If serious thought be given, the solution to the nonlinear equation really is comprised of a nonlinear kernel affected only by the expressed definition of the nonlinear term; therefore, considerable freedom exists in manipulating the solution to satisfy additional differential systems. 
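Both K(x) cases lean on the convolution theorem: convolution in one domain is multiplication in the other. A discrete sanity check with arbitrary vectors (circular convolution, so the discrete identity is exact):

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal(64)
b = rng.standard_normal(64)

# Circular convolution by direct summation ...
n = len(a)
conv = np.array([sum(a[k] * b[(j - k) % n] for k in range(n))
                 for j in range(n)])

# ... equals the inverse FFT of the product of the FFTs.
via_fft = np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real
print(np.max(np.abs(conv - via_fft)))  # agreement to round-off
```

It is this exchange of convolution and multiplication that lets either substitution pass through the Fourier transform unharmed.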
The set of functions K^(1,2)(x,t) could be Bernoulli trials, Poisson distributions, gamma distributions, &c., or any combination thereof; of course, whether such a combination would be amenable to integration and analytic representation is wholly dependent on the definitions, the resulting integrand and many other factors. As an illustration and as a general aid, the general solution will be applied to a Fisher-type equation. With nonlinear power p equal to two, the convolution equation generates a nonlinear medium response represented by the self-convolution of the solution; assuming parameters {D, b, ε} are all positive, the following definition results: Referring to the general solution, theorem (1), function h(s,t) takes on the following form after substituting the requisite parameter values: In general, the inverse Fourier transform of this formula is unknown; but most researchers study systems where the leading nonlinear coefficient, ε, is much less than unity; therefore, the Binomial theorem will be employed to expand equation (11). The Binomial theorem is stated thus: If we consider the second and third terms in the denominator of equation (11) together to represent parameter y in the Binomial expansion, then we may justify an expansion under the following arguments: the two terms cancel one another as time goes to zero; for larger times, the third term vanishes; the second term has its largest value moderated by the nonlinear coefficient, thus ε/b; lastly, since the nonlinear coefficient is much less than unity, ε ≪ 1, this term is guaranteed smaller than unity (assuming b = 1). Also, the limits of integration will be assumed to be {t′ | 0 ≤ t′ ≤ t}. The expansion yields two terms, plus higher order terms, which will be neglected, since the series converges (a Cauchy series), viz.: where the exponential, exp(−bt), will be considered momentarily. 
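The expansion step keeps only the leading terms of the binomial series for 1/(1 + y), valid for |y| < 1; here y stands in for the small combined term in the denominator of equation (11). A sympy sketch:

```python
import sympy as sp

y = sp.symbols('y')
# Binomial expansion of 1/(1 + y) for |y| < 1, as used to expand
# eq. (11) when epsilon/b is small; only the first terms are kept.
expansion = sp.series(1 / (1 + y), y, 0, 3).removeO()
print(expansion)  # y**2 - y + 1
```

Truncating after the linear term, 1 − y, is what produces the two-term expansion used in the text.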
Multiplying by the Gaussian function and the exponential yields: This expansion makes it easy to find the inverse Fourier transform of each term; see the "Formulae" table in Appendix (5). The inverse Fourier transform of the solution yields the following equation, viz.: By inspection, the solution reduces to the linear solution for ε equal to zero; continued validation is imperative. The limit of the complementary error function is zero, irrespective of whether the spatial or the time variable tends to infinity; thus, the solution approaches zero along the boundary of the domain ∂D. In the limit of zero time, the second and third terms cancel, plus Green's function, G(x,t), approaches a Dirac delta function of the spatial variable x, i.e. δ(x). It must be remembered that this solution represents the impulse drive for the nonlinear system; thus, further complications are possible through additional convolutions. As time expands from zero, the third term declines in value, increasing the reduction in magnitude by the second term. Along the spatial axis, the exponential raised to the power of the spatial variable spreads the energy along the x-axis, showing the tendency of convolutional systems to spread the energy swiftly over the spatial dimension. Since the error function is the integral of the Gaussian function, it has the property of rising from its floor value to its maximum value of unity rapidly; therefore, at some time, t₀, the nonlinear system swiftly shuts off and becomes quiescent. The classic form of the Newell-Whitehead-Segel equation has nonlinear medium response expressed in multiplicative form, viz.: where p is an integer greater than unity, i.e. {p | p ∈ Z and p > 1}. The general solution for this equation causes considerable consternation. The equation does not allow separation. 
If a product of two functions, both functions of the independent variables, be substituted for the unknown function u(x,t), the very necessary simplification that removes the second order spatial derivative is not typically possible. In fact, a product substitution makes matters worse, where four terms arise containing first and second order derivatives, with subscripts signifying the order of the derivative. The above expansion, in and of itself, is not necessarily the greatest concern; it turns out, multiplication of the unknown function u^p is what causes the greatest trouble in attempting a solution. Multiplication transforms to a set of repeated convolutions in the codomain, whose resulting p-th convolution is generally not known. Now, in the case of a Gaussian function, i.e. the Fourier transform of Green's function, repeated convolutions can be calculated, because the function is convex and bounded on the open interval. The best path to a general solution, as seen for the iterated convolution case, must transform the original differential system into the Bernoulli differential equation. There are many approaches by which one may transform the original equation; but, typically speaking, it is advisable to cast the Bernoulli differential equation in a form of products, one known and the other yet to be defined. As was seen for the p-th convolutional differential system, the Fourier transform turned the convolutions into a product of functions, which enabled ready solution of the problem. In the present system, the multiplicative form, the Fourier transformation will turn the product into a set of iterated convolutions. This bodes ill for solving the resulting Bernoulli differential equation. As a consequence, serious consideration must be given beforehand as to the form of the functions employed. 
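The obstruction described above is just the product rule: substituting u = F·G into the second spatial derivative spawns a cross term that cannot, in general, be cancelled. A sympy sketch:

```python
import sympy as sp

x, t = sp.symbols('x t')
F, G = sp.Function('F'), sp.Function('G')
u = F(x, t) * G(x, t)

# Second spatial derivative of a product: three terms, including the
# cross term 2*F_x*G_x that frustrates the cancellation used earlier.
u_xx = sp.expand(sp.diff(u, x, 2))
print(u_xx)
```

Together with the two terms from the time derivative, this is how the four derivative-bearing terms mentioned above arise.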
Ultimately, it is highly desired that the p-th convolution should result in a simple function of some sort, thereby facilitating ease in further complications. With this end in mind, consider the following n-th root form of Green's function, viz.: where a prime is attached to both the function itself and its Fourier transform, indicating this special form of Green's function. At first sight, this form of Green's function may appear completely unhelpful, but it holds a very special property, namely, convolving the Fourier transformed form n − 1 repeated times yields the regular transformed Green's function, viz.: It is with the foresight of knowing the multiplication in equation (12) will transform to yield a p-th iterated convolution that the n-th rooted form of Green's function is employed, thereby transforming the original equation into the Bernoulli differential equation. To start, the substitution for the unknown solution u(x,t) will be a product of a function of time and Green's rooted function. The root will participate in the derivatives, hence, for ease, equation (12) is redefined thus: Taking the equation to the Fourier codomain yields: whereupon substitution yields the following: where subscript t indicates a derivative with respect to time, i.e. f_t(t). The first four terms are generated by taking the time derivative of the substitution, and the last of the four cancels with the Fourier transformed term involving the second order spatial derivative. Placing the fraction 1/n aids in normalizing the fourth term in order to facilitate cancellation of the fifth term; this is the best that can be accomplished. 
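The special property of the rooted form can be checked discretely: a Gaussian raised to the power 1/n is again Gaussian, with n times the variance, and convolving n such copies restores the original variance, since variances add under convolution. A numpy sketch with n = 3 (grid and variance values are illustrative):

```python
import numpy as np

# The n-th rooted Green's function is still Gaussian, with 1/n of the
# exponent; convolving n copies restores the original, because the
# variances of convolved Gaussians add.  A discrete check for n = 3:
n = 3
dx = 0.01
x = np.arange(-8, 8, dx)
v = 0.5                                   # variance of the rooted form
g_root = np.exp(-x**2 / (2 * v)) / np.sqrt(2 * np.pi * v)

out = g_root
for _ in range(n - 1):                    # n - 1 repeated convolutions
    out = np.convolve(out, g_root, mode='same') * dx

var = np.sum(x**2 * out) / np.sum(out)    # empirical second moment
print(var)  # close to n * v = 1.5
```

In the codomain the same fact reads ĝ′(s)ⁿ = ĝ(s), which is exactly what makes the p-fold convolution collapse back to the standard transformed Green's function.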
This yields the following reduced equation to solve: Now, focusing on the repeated convolution, preceded by coefficient ε: if the value of p be equal to n − 1, then the repeated convolution results in the standard Fourier transform of Green's function, viz.: Dividing through by the rooted Green's function, ĝ′, produces a difference of exponents, plus some additional constants and time variables, thus: and multiplying through by n finally yields: Once again, we need to employ the transform that enables reducing the Bernoulli differential equation to a linear ordinary differential equation. The transformation is achieved by letting f(t) = h(t)^m, therefore f_t = m h^(m−1) h′, where prime indicates a derivative with respect to time (note: all references to the rooted Green's function ĝ′ have been eliminated in the final definition of the Bernoulli differential equation, thus the prime should not cause any misunderstanding). Substituting the transformation yields: Dividing through by h^(m−1) gives the following: and, after solving the condition m(p − 1) + 1 = 0, the parameter m = 1/(1 − p). Using the parameter m = 1/(1 − p) removes the last occurrence of the function h and leaves the following linear ordinary differential equation to solve, after dividing through by the parameter m, viz.: Solving for the integrating factor yields: where the natural logarithm is involved, i.e. ln(t). 
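The linearizing power m = 1/(1 − p) can be verified symbolically for a concrete order, say p = 3 (so m = −1/2), with constant coefficients a and c standing in for the time-dependent ones:

```python
import sympy as sp

t = sp.symbols('t')
p = 3                                 # concrete nonlinear order
m = sp.Rational(1, 1 - p)             # m = 1/(1 - p) = -1/2
a, c = sp.symbols('a c')              # stand-in constant coefficients
h = sp.Function('h')

f = h(t) ** m                         # Bernoulli substitution f = h**m

# Residual of the Bernoulli equation f' = a f + c f**p, divided by
# m*h**(m-1); with this m the h**p term loses its power of h entirely.
expr = sp.simplify((sp.diff(f, t) - a * f - c * f**p) / (m * h(t)**(m - 1)))
print(sp.expand(expr))  # h'(t) + 2*a*h(t) + 2*c  --  linear in h
```

The choice m(p − 1) + 1 = 0 is precisely what reduces the stray exponent to zero, leaving a first-order linear equation for h.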
With the integrating factor in hand, it is a matter of multiplying through and integrating over the time variable, viz.: As it stands, the formula has a mixture of both integers p and n, but it would be advantageous to reduce to one single integer, and, since the power of non-linearity, p, is more relevant for our purposes, then with n = p + 1, the integral becomes: The integrand involves an exponential in time, multiplied by the time variable raised to a power; this type of integral is referred to as the exponential integral, which can be cast into the form of an incomplete gamma function. Most critical to note: the exponents of the time variable will be fractional in nature, which avoids poles of the gamma function. Finally, the integral has been multiplied by the reciprocal integrating factor to yield the final answer, viz.: where Γ(n, x) represents the incomplete gamma function. A most general solution has thus been obtained, representing a closed-form, analytic solution for a general homogeneous nonlinear parabolic differential equation, whose nonlinear medium response is expressed as a p-times multiplicative response, viz.: The solution is comprised of a rooted Green's function and a nonlinear kernel function f(x,t), whose inverse Fourier transform is defined to be the inverse of the function h(s,t) raised to the power of parameter m = 1/(1 − p), viz.: where the inverse Fourier transform is signified by the symbol F⁻¹. Lastly, function h(s,t) is defined as such: where Γ(n, x) represents the incomplete gamma function and C(s) is the coefficient of integration. More generally, the function h(s,t) is equal to. . . The manner in which the solution was derived, namely, employing the rooted Green's function, resulted in integrands of one form. If Green's function proper is used in the initial substitution, then the i-th convolution can be calculated to be the following: As a consequence, the solution will lead to an integrand of another form. 
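The reduction of the time integral to an incomplete gamma function rests on the standard identity ∫₀ᵀ t^(α−1) e^(−t) dt = γ(α, T); a numerical spot check with a fractional exponent (the values α = 4/3 and T = 2 are illustrative):

```python
import sympy as sp

# Lower incomplete gamma identity used to close the time integral:
#   integral_0^T  t**(alpha - 1) * exp(-t) dt  =  lowergamma(alpha, T),
# with a fractional alpha, as arises from the derivation above.
alpha, T = sp.Rational(4, 3), 2
t = sp.symbols('t', positive=True)

numeric = sp.Integral(t**(alpha - 1) * sp.exp(-t), (t, 0, T)).evalf()
gamma_form = sp.lowergamma(alpha, T).evalf()
print(numeric, gamma_form)  # the two values agree
```

A linear change of variable absorbs the constant b in the exponent, so the integral met in the derivation is of exactly this type.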
I mention it here for the reader because there are pros and cons to each definition; nevertheless, the reader can certainly ascertain that no simple analytic representation results for the multiplicative form of the differential equation. A most general solution has thus been obtained, representing a closed-form, analytic solution for a general homogeneous nonlinear parabolic differential equation, whose nonlinear medium response is expressed as a multiplicative response of power p ∈ Z, p ≥ 2, viz.: The solution is comprised of Green's function and a nonlinear kernel function f(x,t), whose inverse Fourier transform is defined to be the inverse of the function h(s,t) raised to the power of parameter m = 1/(p − 1), viz.: where the inverse Fourier transform is signified by the symbol F⁻¹. Lastly, function h(s,t) is defined as such: Done. * The effect of continued convolutions is most easily seen in the Corollary definition of function h(s,t), where the spectrum widens with rising power p, leading to more concentration in the original spatial domain. Taking the first order series approximation for integer p and time to infinity, with respect to the logarithm of the integrand in equation (14), yields a function whose limit for integer p to infinity and infinite time is negative infinity, viz.: The asymptotic expansion tends to negative infinity; therefore, the exponential magnitude is decreasing for large integer p and time. The proportionality shows that, for large time and power of non-linearity, integer p, the kernel of the function h(s,t) approaches zero, viz.: There is certainly a range of values for integer p and time to be explored; but, in the extreme limit, the solution u(x,t) goes to zero. This is another verification that large orders of non-linearity lead to counterintuitive results. The classic Fisher's equation will be used for illustration purposes. The equation is primarily employed in the biological sciences to model genetic drift in a population [5]. 
For a set of alleles, whose respective probabilities for dominance are, say, p, q, r. . . , the corresponding partial differential model is defined by R.A. Fisher, viz.: Assuming the probabilities are functions of space and time, the solution is immediately the time integral, where the integrand includes the convolution of each respective probability for each allele, viz.: Since Green's function has been employed, then, for time zero, the limit is a Dirac delta function of space. The model was originally intended to model allele propagation through a population in an environment, say, birds along a shoreline; but, since Green's function has been employed, one could consider the model for allele substitution within a genetic code. This would require a spatially dependent probability for each allele, for example, p(x,t); therefore, the probability function would describe the likelihood of substitution or insertion at specific locations along the genetic sequence. If the spatial dependence be dropped and the allele probability is time dependent only, then the solution would require integrating over that time dependence. Finally, if we assume the allele probabilities are constants, then the solution simplifies considerably, viz.: The solution does not satisfy the maximum principle, because the leading coefficient ε is defined to be a positive real number, ε ∈ R⁺. In its simplest form, it is debatable whether this model has any meaningful application to physical processes. One complication that could bring greater meaning to Fisher's equation would be to raise the power of the unknown solution in the last term, thereby converting the differential equation to a nonlinear partial differential equation whose solution is known. Consider the following: The solution is immediately known, viz.: where the hat signifies the Fourier transform, i.e. p̂ = p̂(s,t); also, small g symbolizes Green's function in the codomain, i.e. g = F{G}. 
The solution represents the probability of an allele substitution or dominance, with spatial affinity specified over some time window. The intensity of selection, coefficient ε, may now be any positive or negative real number, i.e. ε ∈ R. The overall solution also satisfies the maximum principle, which is essential for any meaningful application to a real-world physical process. It is thoroughly possible to imagine a wide set of analytic solutions, given a set of representations for the probabilities p, q, r. . . , including the inverse Fourier transform. Computer-aided numerical analysis offers many advantages to the young researcher, but new technology can hamper as much as it can aid. The most dogmatic of computer scientists and advocates ardently believe that, with enough computer power, Laplace's daemon could be achieved. Verily, many, scientist and layman alike, being technologically spellbound, suffer a decisive inability for critical thought in modern times; and this mental deficiency grows worse with time, as the promise of such daemons, such as artificial intelligence (AI), grows ever more gargantuan. Laplace was once asked about the potential to model a room full of molecules, atoms and other particles. The classical mechanical descriptions were certainly at hand, Newtonian mechanics, but the sheer number of particles would require the mind of God, no less, to simultaneously hold all parameters in real time. We may regard the present state of the universe as the effect of its past and the cause of its future. 
An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes. -Pierre Simon Laplace, "A Philosophical Essay on Probabilities" [6] In some sense, this line of reasoning is curious coming from a man known both for his knowledge of probability theory and for his personal penchant for games of chance. That this great philosopher, Laplace, completely ruled out any possibility that randomness could play a part in the happenings of this universe is more a testimony to the prestige widely attributed to Newtonian mechanics; in those times, no one questioned the preeminence, the rightness, the truth of classical mechanics, not in public at least; perish the thought! Ironically, randomness has reared its ugly head against those who would claim mastery over the universe, at least in thought or understanding; but, like a nightmare, randomness proves a more viable, even more real conception of the universe than the more desired deterministic views. Quantum mechanics, of course, destroyed all hopes for a clockwork universe; the universe has forever changed to a big maybe. Frankly, the quantum revolution in scientific thought represents a more mature perception of the world, for determinism is rather juvenile in its nature and desire. With that said, these two forms of thought are still at war with one another, where determinism forever seeks dominance. It is a matter of control for those who subject themselves to this type of childish thinking, determinism, a type of wishful thinking, a wish for certainty, for a promise. . . 
Modernity has, somewhere along the line, decided to replace human thought with algorithmic thought, all in an attempt to gain full mastery. . . but of course. The benefits are seen to far outweigh any possible human contribution; yet algorithms are designed by humans and thus suffer bias, clichés and other laughable fits and starts, reminding one of some cantankerous whirligig desperately trying to wind. The very real threat of algorithmic bias demands our human vigilance, because time and bias drift could prove fatal to humanity, if not just embarrassing. I was once asked to model a particular mercury filter designed by a certain group in the U.S. Navy. The filter itself was rather crude in construction; putting aside the absurd casing of thick PVC plastic piping, the inner filter was an industrial grade fiber filter. Such filters are rated to catch particles of some minimum size, say, micron size. I was tasked with building a model in COMSOL of mercury-laden effluent entering the filter cavity, to calculate the "performance" of the filter! This was obviously an exercise in pure futility: the answer is decided by the chosen fiber filter and nothing else. It would seem computer-driven reality, that is, virtual reality, has taken precedence over the real world that lies around us. What a curious happenstance, indeed! As a trained numerical analyst, I too was once fascinated by the prospect of solving formerly intractable problems through the aid of computers and carefully designed numerical algorithms; contrariwise, my own experiential corpus has proven otherwise, inducing me to become a modern Luddite of sorts. Modern analysts praise the "all-important" data in their so-called data-driven policies, as if points of fact trump all reality; albeit reality, our human reality, is comprised of data, that is, signals, but a healthy dose of Humean skepticism is required of any worldly, self-respecting man. 
Data, as it is called or referred to, is no more real than any other signal; in fact, what is referred to as "data" is biased by the fact that we humans decide it is a "data point" in the first place, christening this seemingly vaporous entity as worthy of our attention. It was David Hume who unequivocally proved that man's epistemological limits are small, indeed, and there is no means by which we may reach beyond the threshold, despite all our might. Like the weary widow seeking to contact her lost husband beyond the grave. . . Where David Hume proved all human knowledge imperfect and dependent upon the imperfect senses of the body, it was Friedrich Nietzsche, the German philosopher, who dealt with the epistemological problem in the more modern sense; his analysis was simple: There is no reality. Nietzsche speaks of "facts," yet equally demands his readers to "go beyond Good and Evil," that is to say, to set aside their paltry human judgmentalism, for such effete forces are no match for the world that surrounds us. . . simply put, the forces at play around us are far larger than us. History has proved otherwise, for humans are rather tenacious, bullheaded and stubborn, especially when they seek an object of desire. Determinism, as a philosophy, has never recovered from the revolution of quantum mechanics. The old, comforting Newtonian world, where everything could be known and understood, simply died away in a fortnight. . . and that left many bereft of any belief. It is belief and all its dogmatic undertones at play in this topic, to be sure, and many zealots cannot accept the death of their messiah; it would be anathema to do so; worse, it would deny the very basis upon which their being finds rest; worse still, the death of their existential wellspring. An ever-encroaching alien intelligence demands prostration from all devotees, demanding each devotee to divorce from their old morals, morale. . . 
to be replaced with a new cognizance, a new moral for judgment. . . and all based on algorithmic decision making; essentially, the dawn of a new idol is upon us, and, to be sure, this new idol is as hollow as a clay figurine of old. The extreme, of course, is so-called Big Data, which implies enough data or facts have been acquired to enable achieving Laplace's daemon. There is a worm of a lie at the heart of this argument: firstly, Laplace's argument never questioned the underlying principles upon which he would model the trajectories of some Avogadro number of particles, but he did most certainly question the human ability ever to track such a melee; secondly, reality is not the sum total of a set of facts; rather, reality is decided upon, though fools think such narcissistic, solipsistic propositions valid enough to be considered a theory. Reality does not become more real with each additional "fact"; on the contrary, reality actually dissolves as knowledge is attained over a lifetime. Without stirring abroad One can know the whole world; Without looking out the window One can see the way of heaven. The further one goes The less one knows. Therefore the sage knows without having to stir, Identifies without having to see, Accomplishes without having to act. -Tao Te Ching [7]. The principles of statistics provide an excellent lesson for those willing to listen to those well-founded, solidly, mathematically defined precepts that speak of knowledge in terms of confidence rather than certainties. But all too often a researcher perceives statistics as a type of game, where the goal is to "beat" the test statistic, and, if the celebrated goal be achieved, the happy researcher makes a victory lap and claims to have discovered or proved something or other. Well, just because you "beat" the test statistic doesn't mean you win the challenge. For example, if your calculated statistic be 2.0001 and you compare this to a test statistic of 2.0. . . should you conclude victory? 
Obviously not. One should be conservative in their approach to statistics, which would require a statistic at least an order higher than the test statistic; that would build confidence, eh. "But, that would mean an obvious result," would be said in reply, and I cannot deny this fact, for the application of statistics is not to discover the unknown, but rather to confirm what is already known. If there be any bit of wisdom I can pass along to some youthful scientist, it would be thus: If you listen very closely, Sophia will tell you the truth; she will softly whisper in your ear, "I don't know." Not the answer sought by a scientific mind, which fundamentally seeks to know truth; but truth is elusive at best, and the world may be nothing more than a Rorschach inkblot, after all. If a derivative is applied over a convolution and the variable of differentiation is independent of the convolution variable, then the derivative distributes as the sum of successive derivatives of each function involved in the convolution, viz.:

∂/∂t [ f(x,t) ∗(x) g(x,t) ] = f_t(x,t) ∗(x) g(x,t) + f(x,t) ∗(x) g_t(x,t).

If the derivative happens to be with respect to the convolution variable, the derivative can be placed onto one singular function involved in the convolution, viz.:

∂/∂x [ f(x) ∗(x) g(x) ] = f′(x) ∗(x) g(x) = f(x) ∗(x) g′(x),

where the symbol (x) signifies the variable the convolution is over and the prime (′) represents, in this case, a spatial derivative. The Fourier transform (F) of a convolution equals the multiplication of the transforms of each function, viz.:

F{ f ∗ g } = F{ f } · F{ g }.

erfc(x) denotes the complementary error function, defined as 1 − erf(x).

Artwork credit: Date: 16th–17th century; Department: Achenbach Foundation; Accession Number: 1963.30.15032; Credit Line: Achenbach Foundation for Graphic Arts.

References
"Nonlinear models in 2 + ε dimensions."
"Einstein's Dissertation on the Determination of Molecular Dimensions" (PDF), The Collected Papers of Albert Einstein.
"A New Pathology in the Simulation of Chaotic Dynamical Systems on Digital Computers."
"The Fourier Transform and Its Applications."
"The Wave of Advance of Advantageous Genes."
"A Philosophical Essay on Probabilities."
Inkscape.