
With enough coaxing, we can get the optimizer to converge to known methods (high-order, conservative, entropy-stable, ...), and I'm sure this tactic will lead to more papers, though they'll be kind of empty unless we're really discovering good methods that were not previously known.

I presume you meant "verify" in the last sentence.



No, what I am doing is using high-order, conservative (universal DAEs), strong-stability-preserving, etc. discretizations for the numerics, but utilizing neural networks to represent unknown quantities, which transforms it into a functional inverse problem. In the discussion of the HJB equation, we mention that we solve the equation by writing down an SDE such that the solution of the functional inverse problem gives the PDE's solution, and then utilize adaptive, high-order, implicit, etc. SDE integrators on the inverse problem. Essentially, the idea is to use neural networks in conjunction with all of the classical tricks you can, so that the neural network has as small a job as possible. It does not need to learn good methods if you have already designed the training problem to utilize those kinds of discretizations: you just need a methodology to differentiate through your FEM, FVM, discontinuous Galerkin, implicit ODE solver, Gaussian quadrature, etc. algorithms to augment the full algorithm with neural networks, which is precisely what we are building.
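A toy sketch of that "neural network inside a classical solver" structure (names and the toy problem are made up for illustration; a real implementation would use automatic differentiation or adjoints through the solver, not the finite differences used here to stay dependency-free):

```python
import math, random

# Tiny "neural network": one hidden tanh layer with 3 units, 10 parameters.
def nn(u, p):
    h = [math.tanh(p[i] * u + p[3 + i]) for i in range(3)]
    return sum(p[6 + i] * h[i] for i in range(3)) + p[9]

def rk4(f, u0, t0, t1, n):
    """Classical 4th-order Runge-Kutta: the 'known good' discretization."""
    dt = (t1 - t0) / n
    u, t, out = u0, t0, [u0]
    for _ in range(n):
        k1 = f(t, u)
        k2 = f(t + dt / 2, u + dt / 2 * k1)
        k3 = f(t + dt / 2, u + dt / 2 * k2)
        k4 = f(t + dt, u + dt * k3)
        u += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
        out.append(u)
    return out

# Synthetic data from "true" dynamics du/dt = -0.5 u, unknown to the model.
data = rk4(lambda t, u: -0.5 * u, 1.0, 0.0, 2.0, 40)

def loss(p):
    # The solver sits INSIDE the loss: training differentiates through it.
    traj = rk4(lambda t, u: nn(u, p), 1.0, 0.0, 2.0, 40)
    return sum((a - b) ** 2 for a, b in zip(traj, data))

random.seed(0)
p = [random.uniform(-0.1, 0.1) for _ in range(10)]
loss_before = loss(p)

# Crude finite-difference gradients stand in for AD in this sketch.
eps, lr = 1e-6, 0.005
for _ in range(300):
    base = loss(p)
    grad = [(loss(p[:i] + [p[i] + eps] + p[i + 1:]) - base) / eps
            for i in range(len(p))]
    p = [pi - lr * gi for pi, gi in zip(p, grad)]

loss_after = loss(p)
print(loss_before, loss_after)  # the fit should improve substantially
```

The network never has to learn time integration; RK4 does that, and the optimizer only has to fill in the unknown right-hand side.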

So I completely agree with you that throwing away classical knowledge won't go very far, which is why that's not what we're doing. We utilize neural networks within and on top of classical methods to try to solve problems where they have not traditionally performed well, or to cover epistemic uncertainty from model misspecification.


This looks really interesting.

I think it would be a good topic for a blog post or teaching paper that shows how to do this for very simple problems "end-to-end" (e.g. the advection equation, the diffusion equation, advection-diffusion, Burgers' equation, the Poisson equation, etc.).

I see the appeal in showing that these can be used for very complex problems, but what I want to understand is the trade-offs for the most basic hyperbolic, parabolic, and elliptic one-dimensional problems. What's the accuracy? What's the order of convergence in practice? Are there tight upper bounds (and does that even matter)? What's the performance, and how does it scale with the number of degrees of freedom? What does a good training pipeline look like, and what's the cost of training and inference?
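For the order-of-convergence question, the classical baseline at least is easy to measure empirically: run a scheme on one of these simple 1-D problems at several resolutions and compute the observed order from the error ratios. A dependency-free sketch (my own illustration, not from the paper) for first-order upwind on linear advection with a periodic sine wave:

```python
import math

def upwind_advection(n, cfl=0.5, a=1.0, t_end=0.5):
    """First-order upwind for u_t + a u_x = 0 on [0,1), periodic."""
    dx = 1.0 / n
    dt = cfl * dx / a
    steps = round(t_end / dt)
    dt = t_end / steps                      # land exactly on t_end
    u = [math.sin(2 * math.pi * i * dx) for i in range(n)]
    for _ in range(steps):
        # u[i-1] with i = 0 wraps to u[n-1]: periodic boundary for free
        u = [u[i] - a * dt / dx * (u[i] - u[i - 1]) for i in range(n)]
    # exact solution: the initial profile shifted by a * t_end
    exact = [math.sin(2 * math.pi * (i * dx - a * t_end)) for i in range(n)]
    return max(abs(ui - ei) for ui, ei in zip(u, exact))  # L_inf error

errs = [upwind_advection(n) for n in (100, 200, 400)]
orders = [math.log2(errs[k] / errs[k + 1]) for k in range(2)]
print(errs, orders)   # observed orders should sit near 1 for this scheme
```

The same error-vs-resolution study applied to an NN-augmented solver is exactly what would answer "what's the order of convergence in practice?"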

There are well-understood methods that are optimal for all of the problems above. Knowing that these NNs can be applied to problems without optimal methods is good, but I'd be more convinced that this is not just "NN-all-the-things" hype if I understood how these methods fare against problems for which optimal methods are indeed available.


No, it will not work well without the optimal method. But the method is no longer optimal if, say, a nonlinear term is added to these equations, so you can use the "optimal" method as a starting point and then try to nudge towards something better. Don't throw away any information that you have.
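A minimal sketch of that "start from the known method and nudge" structure (hypothetical names, my own illustration): keep the known term exactly, and initialize the learned correction so it vanishes, so that at initialization the model is precisely the classical method and training only has to learn the missing piece.

```python
import math

def known_rhs(u):
    # the part of the model with a well-understood, "optimal" treatment
    return -0.5 * u

def correction(u, p):
    # tiny parametric correction; output weights p[2], p[3] start at zero,
    # so the correction contributes nothing until training moves them
    h1, h2 = math.tanh(p[0] * u), math.tanh(p[1] * u)
    return p[2] * h1 + p[3] * h2

def model_rhs(u, p):
    return known_rhs(u) + correction(u, p)

p0 = [0.3, -0.7, 0.0, 0.0]    # hidden weights arbitrary, output weights zero
print(model_rhs(2.0, p0))      # identical to known_rhs(2.0) at initialization
```

Training then only moves the model away from the classical method when the data demands it, which is the "don't throw away information" point in code form.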


This comment sounds good. I was objecting to approaches like Eq 10 of your paper and much of the Karniadakis approach.



