This makes me wonder: is it possible for ML models to be provably correct?
Or is that completely thrown out the window if you use an ML model rather than a procedural algorithm?
Because if the model is a black box and you use it for some safety system in the real world, how do you know there isn’t some weird combination of inputs that causes the model to exhibit bizarre behaviour?
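To make the concern concrete, here's a minimal sketch (the toy weights and the `|output| <= 5` safety property are purely illustrative, not from any real system) of why testing a black-box model on sampled inputs is not a proof:

```python
import numpy as np

# Toy 2-layer ReLU network with made-up fixed weights -- a stand-in
# for the "black box" model in question.
W1 = np.array([[1.5, -2.0], [0.5, 1.0]])
b1 = np.array([0.1, -0.3])
W2 = np.array([[1.0, -1.0]])
b2 = np.array([0.2])

def model(x):
    h = np.maximum(0.0, W1 @ x + b1)  # ReLU hidden layer
    return float(W2 @ h + b2)

# Naive "verification" by sampling: check the safety property
# (|output| <= 5) over a 200x200 grid of inputs in [-1, 1]^2.
grid = np.linspace(-1.0, 1.0, 200)
violations = [
    (x0, x1)
    for x0 in grid
    for x1 in grid
    if abs(model(np.array([x0, x1]))) > 5.0
]
print(f"violations found on grid: {len(violations)}")

# Even if this prints 0, it proves nothing about the points
# *between* the grid samples -- that gap is exactly the "weird
# combination of inputs" worry, and it's why formal approaches
# (SMT solvers, interval bound propagation) reason over whole
# input regions instead of finite samples.
```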