Integrating Neural Networks with Numerical Methods for Solving Nonlinear Differential Equations


Suresh Kumar Sahani, Binod Kumar Sah

Abstract

Nonlinear differential equations (NDEs) lie at the heart of models of significant phenomena in physics, engineering, biology, and finance. Classical numerical methods, such as Runge–Kutta schemes and finite differences, are robust, yet they can struggle with complicated initial-boundary prescriptions, stiffness, and high dimensionality. In recent years, neural network approximations have emerged as a powerful complement to traditional solvers. This paper presents a hybrid computational framework that integrates feedforward neural networks (FNNs) with classical numerical solvers to improve the approximation accuracy, convergence, and stability properties of nonlinear ordinary and partial differential equations. By embedding FNNs into collocation and Runge–Kutta frameworks, the method keeps predictions grounded in the governing physics while improving computational scalability. Training minimizes a loss function formed from the residual of the differential operator and the boundary conditions, so that the neural approximation generalizes throughout the solution space. Three benchmark nonlinear systems are investigated: the Van der Pol oscillator, the Bratu boundary value problem, and a reaction-diffusion partial differential equation (PDE), with a comparative error analysis against standard solutions. The results show that the proposed approach consistently improves accuracy (with error reductions of up to 36%) and stability in stiff regimes, while generalizing well from sparse data. The main benefits are reduced grid dependence, smoother convergence, and easier application to high-dimensional settings.
This study strengthens the synergy between neural approximations and formal mathematical frameworks, positioning AI-infused solvers as dependable and explainable alternatives for challenging differential models.
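The abstract describes training against a loss built from the residual of the differential operator, with boundary or initial conditions enforced alongside it. The paper's exact architecture and loss are not given here, so the following is only a minimal illustrative sketch: a tiny feedforward network defines a trial solution for the Van der Pol oscillator that hard-wires the initial conditions (a Lagaris-style construction), and the residual loss is evaluated at collocation points via finite differences. Network sizes, the initial conditions x(0)=2, x'(0)=0, and the damping parameter mu are assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny feedforward net 1 -> 8 -> 1 with tanh hidden layer (illustrative sizes).
W1, b1 = rng.normal(size=(8, 1)) * 0.5, np.zeros(8)
W2, b2 = rng.normal(size=(1, 8)) * 0.5, np.zeros(1)

def net(t):
    # t: 1-D array of collocation points; returns N(t) as a 1-D array.
    h = np.tanh(W1 @ np.atleast_2d(t) + b1[:, None])
    return (W2 @ h + b2[:, None]).ravel()

def x_trial(t):
    # Trial solution enforcing assumed ICs x(0) = 2, x'(0) = 0 exactly:
    # x(t) = 2 + t^2 * N(t), so boundary terms never enter the loss.
    return 2.0 + t**2 * net(t)

def residual_loss(ts, mu=1.0, h=1e-4):
    # Central finite differences approximate x' and x'' at the collocation points.
    x = x_trial(ts)
    xp = (x_trial(ts + h) - x_trial(ts - h)) / (2.0 * h)
    xpp = (x_trial(ts + h) - 2.0 * x + x_trial(ts - h)) / h**2
    # Van der Pol residual: x'' - mu*(1 - x^2)*x' + x = 0.
    r = xpp - mu * (1.0 - x**2) * xp + x
    return float(np.mean(r**2))

ts = np.linspace(0.0, 2.0, 50)  # collocation points on an assumed interval
loss = residual_loss(ts)
print(loss)
```

In a full solver this scalar loss would be minimized over the network weights (e.g. by gradient descent with automatic differentiation instead of finite differences), driving the residual toward zero across the domain; the sketch only evaluates the loss for one random initialization.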
