A genuinely sympathetic paraphrase might be:
"Machine superintelligence may or may not be controllable. If we do nothing to regulate it, or to prevent horrible outcomes, we will, with probability X > [too big], find ourselves doomed.
We need to find a way to reduce X. I propose that regulation is at least unlikely to be counterproductive, and may be strictly, incrementally useful."