Theses and Dissertations
Browsing Theses and Dissertations by Author "Takhanov, Rustem"
Now showing 1 - 2 of 2
Item (Open Access)
Experimental study of Pac-Man conditions for learnability of discrete linear dynamical systems
(Nazarbayev University School of Science and Technology, 2019-05-01) Damiyev, Zhaksybek; Takhanov, Rustem; Tourassis, Vassilios D.
In this work, we reconstruct the parameters of a discrete dynamical system with a hidden layer, given by a quadruple of matrices (𝐴, 𝐵, 𝐶, 𝐷), from the system's past behaviour. First, we reproduced experimentally the well-known result of Hardt et al. that the reconstruction can be performed under certain conditions, called the Pac-Man conditions. Then we demonstrated experimentally that the system approaches the global minimum even when the input 𝑥 is a sequence of i.i.d. random variables with a non-Gaussian distribution. We also formulated hypotheses going beyond the Pac-Man conditions, namely that gradient descent solves the problem if the operator norm (or, alternatively, the spectral radius) of the transition matrix 𝐴 is bounded by 1, and obtained a negative result, i.e. a counterexample to those conjectures.

Item (Open Access)
Explorations on chaotic behaviors of Recurrent Neural Networks
(Nazarbayev University School of Science and Technology, 2019-04-29) Myrzakhmetov, Bagdat; Assylbekov, Zhenisbek; Takhanov, Rustem; Tourassis, Vassilios D.
In this thesis we analyzed the dynamics of Recurrent Neural Network architectures. We explored the chaotic nature of state-of-the-art recurrent networks: the Vanilla Recurrent Network, Recurrent Highway Networks and the Structurally Constrained Recurrent Network. Our experiments showed that they exhibit chaotic behavior in the absence of input data. We also proposed a way of removing chaos from Recurrent Neural Networks. Our findings show that the initialization of the weight matrices plays an important role in training: initializing with matrices whose norm is smaller than one leads to non-chaotic behavior of the Recurrent Neural Network. The advantage of non-chaotic cells is their stable dynamics. Finally, we tested our chaos-free version of the Recurrent Highway Network (RHN) in a real-world application. In sequence-to-sequence modeling experiments, in particular the language modeling task, the chaos-free version of the RHN performs on par with the original version with the same hyperparameters.
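
To make the setup of the first thesis concrete, below is a minimal sketch of fitting a hidden-layer linear dynamical system, given by a quadruple (𝐴, 𝐵, 𝐶, 𝐷), to observed input/output sequences by gradient descent on the prediction error. This is an illustration only, not code from the thesis: the dimensions, the PyTorch framework, the Adam optimizer and the number of steps are all assumptions.

    # Minimal sketch (assumed setup, not the thesis code): fit a hidden-state
    # linear dynamical system  h_{t+1} = A h_t + B x_t,  y_t = C h_t + D x_t
    # to observed data by gradient descent on the squared prediction error.
    import torch

    torch.manual_seed(0)
    n_h, n_x, n_y, T = 4, 3, 2, 200          # hidden, input, output sizes and sequence length (assumed)

    # Ground-truth quadruple; the spectral radius of A is scaled below 1 for a stable system.
    A_true = torch.randn(n_h, n_h)
    A_true = 0.9 * A_true / torch.linalg.eigvals(A_true).abs().max()
    B_true = torch.randn(n_h, n_x)
    C_true = torch.randn(n_y, n_h)
    D_true = torch.randn(n_y, n_x)

    def simulate(A, B, C, D, x):
        # Roll out the system from a zero hidden state and collect the outputs.
        h, ys = torch.zeros(A.shape[0]), []
        for t in range(x.shape[0]):
            ys.append(C @ h + D @ x[t])
            h = A @ h + B @ x[t]
        return torch.stack(ys)

    x = torch.randn(T, n_x)                  # i.i.d. inputs; the thesis also considers non-Gaussian ones
    y = simulate(A_true, B_true, C_true, D_true, x)

    # Learnable copy of the quadruple, fitted end to end by gradient descent.
    params = [(0.1 * torch.randn(M.shape)).requires_grad_() for M in (A_true, B_true, C_true, D_true)]
    opt = torch.optim.Adam(params, lr=1e-2)
    for step in range(2000):
        opt.zero_grad()
        loss = ((simulate(*params, x) - y) ** 2).mean()
        loss.backward()
        opt.step()
    print(f"final prediction MSE: {loss.item():.3e}")

The thesis examines under which conditions (the Pac-Man conditions and beyond) such a gradient-based reconstruction reaches the global minimum; the sketch above only sets up the optimization problem.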
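
Similarly, the following small experiment illustrates the norm condition discussed in the second abstract: a vanilla recurrent cell iterated with zero input is a contraction when the operator norm of its weight matrix is below one (tanh is 1-Lipschitz), so nearby hidden states converge, whereas a larger norm typically lets a small perturbation persist. The cell, sizes and norms here are illustrative assumptions, not the thesis experiments.

    # Minimal sketch (assumed setup): compare how a tiny perturbation of the hidden
    # state evolves under h_{t+1} = tanh(W h_t) for small vs. large operator norm of W.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 8                                    # hidden size (assumed)

    def rescale_norm(W, target):
        # Scale W so that its operator norm (largest singular value) equals `target`.
        return W * (target / np.linalg.norm(W, 2))

    def perturbation_after(W, steps=200, eps=1e-6):
        # Iterate the cell with zero input from two nearby initial states
        # and return how far apart the trajectories end up.
        h1 = rng.standard_normal(n)
        h2 = h1 + eps * rng.standard_normal(n)
        for _ in range(steps):
            h1, h2 = np.tanh(W @ h1), np.tanh(W @ h2)
        return np.linalg.norm(h1 - h2)

    W = rng.standard_normal((n, n))
    print("norm 0.9:", perturbation_after(rescale_norm(W, 0.9)))   # contraction: the gap shrinks towards 0
    print("norm 3.0:", perturbation_after(rescale_norm(W, 3.0)))   # the gap typically stays of order 1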