*This note is dedicated to my father, who is enduring a dull, persistent pain after surgery while I am far away from home.*

The terminal (final) value problem has been studied intensively over the last 20 years through many distinct strategies. It plays a critical role in several fields: for example, identifying the initial temperature at the onset of an eruption from later, hard-to-measure observations, or recovering an original image that has been degraded by blur or noise.

Solving the various kinds of PDEs considered here is difficult because of their natural instability: by the Duhamel principle, errors in the data grow exponentially in the solution. Consequently, once inaccurate final data enters the PDE, the computed solution can be entirely wrong, no matter how carefully we try to measure. In fact, in real applications one always works with measured, and hence inaccurate, data. This has driven sustained international interest in so-called regularization methods, in which stable approximate solutions have been developed year by year.
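To make this exponential amplification concrete, here is a minimal numerical sketch (my own illustration, assuming the 1-D heat equation on $(0,\pi)$ with Dirichlet conditions; none of these numbers come from the note itself):

```python
import numpy as np

# Minimal sketch of the instability: backward heat equation u_t = u_xx on
# (0, pi) with Dirichlet conditions. In Fourier modes, recovering the k-th
# coefficient of u(0) from u(T) multiplies it by exp(k**2 * T), so any
# measurement error in the final data is amplified exponentially in k.
T = 1.0
k = np.arange(1, 11)                       # first ten Fourier modes
phi = np.exp(-k**2 * T)                    # exact final data for u(0,x) = sum_k sin(k x)
delta = 1e-8                               # tiny uniform measurement error
u0_exact = np.exp(k**2 * T) * phi          # equals 1 for every mode
u0_noisy = np.exp(k**2 * T) * (phi + delta)
amplified = np.abs(u0_noisy - u0_exact)    # equals delta * exp(k**2 * T)
print(amplified[0], amplified[-1])         # ~2.7e-08 vs ~2.7e+35
```

Even a `1e-8` perturbation in the tenth mode of the final data produces an error of order `1e+35` in the recovered initial data, which is the instability the regularization methods below are designed to suppress.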

In this brief note, we present a new take on the modified quasi-boundary value (QBV) method, which mimics the well-known QBV method introduced by Showalter in 1983. The method gained popularity among researchers at the time because of its simplicity: one perturbs the initial data by the terminal information coupled with a small positive parameter. As the parameter tends to zero, the corresponding solution can be proved to approximate the desired solution very well. Nevertheless, the method was first proposed for the linear forward problem.

The pioneering treatment of the backward problem is due to Clark and Oppenheimer in 1994, who obtained the same kind of convergence results. Their crucial achievement was to approximate a genuinely ill-posed problem by a well-posed one. One modification of this method can be found in the work of Denche and Bessila in 2005, who treated the linear backward problem with a variant that yields an error estimate of logarithmic type at the initial time.
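As a rough spectral sketch of the quasi-boundary value idea (my own illustration, not code from the cited papers), consider the linear backward heat equation and the Clark-Oppenheimer-type condition $\varepsilon u(0) + u(T) = \varphi$, which replaces the unbounded reconstruction factor by a bounded filter:

```python
import numpy as np

# Sketch of the quasi-boundary value idea for the linear backward heat
# equation, in the Fourier basis of -d^2/dx^2 on (0, pi). The condition
# eps*u(0) + u(T) = phi replaces the unbounded factor exp(lam*T) by the
# bounded filter 1/(eps + exp(-lam*T)), turning an ill-posed recovery
# into a well-posed one.
T = 1.0
lam = np.arange(1, 21).astype(float)**2    # eigenvalues k^2
u0_true = 1.0 / lam                        # assumed true initial coefficients
phi = np.exp(-lam * T) * u0_true           # exact final data

def qbv_initial(phi, eps):
    return phi / (eps + np.exp(-lam * T))  # bounded by phi/eps in every mode

errors = [np.linalg.norm(qbv_initial(phi, eps) - u0_true)
          for eps in (1e-1, 1e-3, 1e-5)]
print(errors)                              # decreasing as eps -> 0
```

With exact data the error decreases as the parameter shrinks, though slowly; this slow rate is consistent with the logarithmic-type estimates mentioned above.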

During the maturity of this field, the following problem

$$u_t + Au = f(t, u(t)), \quad 0 < t < T,$$

associated with the final observation $u(T) = \varphi$, is yet to be addressed.

Here, the operator $A$ maps from its domain $D(A)$, a subset of an abstract Hilbert space $H$, into $H$. The operator $A$ is assumed to be positive-definite, self-adjoint, unbounded and linear, such that $-A$ generates a compact contraction semigroup on $H$.
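For intuition about these semigroup assumptions, one can take the model choice $A = -\mathrm{d}^2/\mathrm{d}x^2$ on $(0,\pi)$ with Dirichlet conditions (my choice for illustration, not fixed by the note):

```python
import numpy as np

# A = -d^2/dx^2 on (0, pi) with Dirichlet conditions has eigenvalues k^2,
# and the semigroup S(t) = exp(-tA) multiplies the k-th Fourier mode by
# exp(-k**2 * t): each factor lies in (0, 1], so S(t) is a contraction,
# and the factors decay to 0 as k grows, reflecting the compactness of
# S(t) for t > 0.
lam = np.arange(1, 51).astype(float)**2
u = np.random.default_rng(0).normal(size=lam.size)   # arbitrary element of H
for t in (0.1, 1.0):
    Su = np.exp(-lam * t) * u
    print(t, np.linalg.norm(Su) <= np.linalg.norm(u))  # True for both times
```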

One immediately realizes that this time-dependent problem encompasses a large number of real models, such as Fisher's model (with source $f(u) = u(1-u)$), describing the spreading of biological populations of diploid individuals, and the Ginzburg-Landau equation (with source $f(u) = u - u^3$) in superconductivity. As a result, some assumptions must be made to reflect reality.

Our novelty is to construct a new filtering function that replaces the intrinsically unstable term and leads to a well-posed problem. In addition, we apply the usual cut-off method to handle the locally Lipschitz source term $f$. Combining these two techniques, we succeed in defining a stable approximate problem. For the qualitative analysis, we refer to the preprint attached here.
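The cut-off idea can be sketched as follows, using the Ginzburg-Landau source as a stand-in (the note's actual construction is in the preprint; the threshold $R$ here is my own illustrative choice):

```python
import numpy as np

# The Ginzburg-Landau source f(u) = u - u**3 is only locally Lipschitz:
# its derivative 1 - 3u**2 is unbounded. Clamping the argument to [-R, R]
# yields a globally Lipschitz truncation, with constant at most 1 + 3R**2.
def f(u):
    return u - u**3

def f_cut(u, R=2.0):
    return f(np.clip(u, -R, R))

# Inside [-R, R] the truncation agrees with f; beyond the threshold it is
# frozen at the boundary value f(+-R).
print(f_cut(10.0), f(2.0))    # both equal -6.0
```

A truncation of this kind is what typically allows a fixed-point argument for the approximate mild solution to go through with a global Lipschitz constant.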

Even though the considered problem is more complicated than those in previous works, we bear in mind that much further investigation is still required. We thus wish to briefly single out the following challenges:

- For biological and ecological reasons, one needs to take into consideration the nonlocal problem with advection terms.
- It is not clear how well the method works on a specific model, such as a two-species reaction-diffusion system.
- A better understanding of how to treat the mild solution numerically is needed.