In many instances in mathematics, and in science more generally, one encounters compositions of randomly selected operations. Examples include the derivatives of the iterates of a diffeomorphism in dynamical systems, or the solution of a difference equation with random coefficients, such as a time evolution in a random environment. A newer context where such compositional products appear is deep learning, both in the initialization and in the training of neural networks. In the fundamental case of linear maps there is Oseledets' multiplicative ergodic theorem from the 1960s. In short, it asserts that a random product of matrices behaves, in a certain asymptotic sense, just like the power of a single matrix, which then would be, as it were, the mean of the random product. In this lecture I will explain a much more general theorem of this type (from joint works with Margulis, Ledrappier and Gouëzel) that applies also to operators and non-linear maps, and that in particular implies the Oseledets theorem, as well as certain results on random walks on groups, Brownian motion, surface homeomorphisms and random mean ergodic theorems.
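In one standard formulation (the notation here is conventional and not taken from the talk): for a stationary ergodic sequence of $d \times d$ matrices $A_1, A_2, \ldots$ with $\mathbb{E}\,\log^{+}\lVert A_1\rVert < \infty$, the limit

$$\lim_{n\to\infty} \bigl( (A_n \cdots A_1)^{\mathsf T} (A_n \cdots A_1) \bigr)^{1/2n} = \Lambda$$

exists almost surely. The limit $\Lambda$ is a fixed positive semidefinite matrix whose eigenvalues $e^{\lambda_1} \geq \cdots \geq e^{\lambda_d}$ are exponentials of the Lyapunov exponents, so asymptotically the random product stretches vectors at the same exponential rates as the powers $\Lambda^n$ of the single matrix $\Lambda$.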
Zoom link: https://us02web.zoom.us/j/86086663267?pwd=ZzRBNTI0bUIzTU9ReHVtdlhXblljZz09
Meeting ID: 860 8666 3267
Passcode: 965677