The universal approximation theorem guarantees the existence of neural networks that are robust to adversarial attacks. What it doesn’t guarantee is that these networks will play well with scalable adversarial certification methods such as interval analysis. Our paper (ICLR 2020) extends the universal approximation theorem to address this problem.
By demonstrating that interval-certifiable networks are universal approximators, we also show that while a specific network might suffer from a convex relaxation barrier, no such barrier exists for neural networks in general.
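To make "interval analysis" concrete, here is a minimal sketch of interval bound propagation through a ReLU network: each affine layer and activation maps an input interval to an output interval that provably contains every reachable output. The toy network and weights below are made-up illustrations, not taken from the paper.

```python
import numpy as np

def affine_bounds(W, b, lo, hi):
    """Propagate the box [lo, hi] through x -> W @ x + b using interval arithmetic."""
    mid = (lo + hi) / 2.0          # center of the box
    rad = (hi - lo) / 2.0          # per-coordinate radius
    new_mid = W @ mid + b
    new_rad = np.abs(W) @ rad      # worst-case growth of the radius
    return new_mid - new_rad, new_mid + new_rad

def relu_bounds(lo, hi):
    """ReLU is monotone, so it maps interval endpoints to interval endpoints."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

def certify(Ws, bs, x, eps):
    """Bound a ReLU network's outputs over the L-infinity ball of radius eps around x."""
    lo, hi = x - eps, x + eps
    for i, (W, b) in enumerate(zip(Ws, bs)):
        lo, hi = affine_bounds(W, b, lo, hi)
        if i < len(Ws) - 1:        # ReLU after every hidden layer
            lo, hi = relu_bounds(lo, hi)
    return lo, hi
```

If the certified lower bound on the true class's logit margin stays positive over the whole ball, the network is provably robust at that input; the looseness of these interval bounds is exactly why not every accurate network is interval-certifiable without the kind of construction the paper provides.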
Source: Reddit Machine Learning