Demixing a sequence of source signals from the superposition of noisy bilinear measurements is a general mathematical model for blind demixing, which includes blind deconvolution as a special case. This model is prevalent in machine learning, data analysis, and signal and image processing. However, state-of-the-art convex methods for blind demixing via semidefinite programming are computationally infeasible for large-scale problems. Although existing nonconvex algorithms are able to address the scaling issue, they normally require careful regularization and initialization to establish optimality guarantees. To address the limitations of existing methods, we develop a provable nonconvex demixing procedure via Wirtinger flow, much like vanilla gradient descent, which enjoys regularization-free implementation, random initialization, a fast convergence rate, and both computational and statistical guarantees. This is achieved by exploiting the benign geometry of the high-dimensional statistical estimation problem: Wirtinger flow keeps the regularization-free iterates within a region of strong convexity and sufficient smoothness, where the step size can be chosen aggressively. Time permitting, I will discuss how to develop a Riemannian trust-region algorithm with random initialization that solves the blind demixing problem with a fast convergence rate and low per-iteration cost. This algorithm is guaranteed to converge to an approximate local minimum.
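
To make the setting concrete, the sketch below illustrates one common formulation of the blind demixing model and a plain Wirtinger-flow loop with random initialization. It is a minimal toy instance, not the exact setup of the talk: the problem sizes, the shared design vectors b_j and per-source design vectors a_{kj}, and the fixed step size are all illustrative assumptions. Each measurement takes the bilinear form y_j = sum_k (b_j^H h_k)(x_k^H a_{kj}), where the signals h_k and x_k are unknown, and the iterates descend along the Wirtinger gradients of the squared residual loss.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative problem sizes (assumptions, not the talk's setup):
# s sources, signal dimensions K and N, m bilinear measurements.
s, K, N, m = 2, 10, 10, 400

def cn(*shape):
    """Standard complex Gaussian CN(0, 1) entries."""
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

# Unknown ground-truth signals and known design vectors.
h_true, x_true = cn(s, K), cn(s, N)
B = cn(m, K)      # rows b_j, shared across sources
A = cn(s, m, N)   # rows a_{kj}, one design per source

def forward(h, x):
    """Noiseless measurements y_j = sum_k (b_j^H h_k)(x_k^H a_{kj})."""
    y = np.zeros(m, dtype=complex)
    for k in range(s):
        y += (B.conj() @ h[k]) * (A[k] @ x[k].conj())
    return y

y = forward(h_true, x_true)

# Random initialization -- no spectral initialization, no regularization term.
h, x = cn(s, K), cn(s, N)
eta = 0.02  # fixed step size; a conservative guess, not a tuned value

for it in range(3001):
    r = forward(h, x) - y                            # residuals r_j
    grad_h, grad_x = np.zeros_like(h), np.zeros_like(x)
    for k in range(s):
        c = A[k] @ x[k].conj()                       # c_j = x_k^H a_{kj}
        d = B.conj() @ h[k]                          # d_j = b_j^H h_k
        grad_h[k] = B.T @ (r * c.conj()) / m         # Wirtinger gradient in h_k
        grad_x[k] = A[k].T @ (r.conj() * d) / m      # Wirtinger gradient in x_k
    h -= eta * grad_h
    x -= eta * grad_x
    if it % 500 == 0:
        print(f"iter {it:5d}  loss {np.mean(np.abs(r) ** 2):.3e}")
```

In this toy instance the loss typically decreases to numerical noise once m is large relative to s(K + N). Note that recovery is only possible up to the inherent scaling ambiguity of each bilinear term (h_k and x_k can be rescaled as alpha * h_k and x_k / conj(alpha) without changing the measurements), so estimation error should be measured on the rank-one matrices h_k x_k^H. The sketch uses a deliberately small constant step; the point of the geometric analysis in the talk is that the region the iterates occupy permits a far more aggressive choice.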