
From the fast Fourier transform algorithm to deep learning neural networks and quantum computing with Alexander Dorogov

We talk with Alexander Yuryevich Dorogov, Doctor of Engineering and professor of the Automation and Control Processes Department at St. Petersburg Electrotechnical University, about applications of fast transform algorithms in neuroinformatics and quantum computing.

A family of algorithms known as the fast Fourier transform (FFT) entered spectral analysis practice in the 1960s, beginning with the work of Cooley and Tukey.

It is well known what a huge role the discovery of the fast Fourier transform played in signal processing.

Its use drastically reduced the number of computational operations required for spectral transformations, which significantly broadened the use of spectral analysis in data processing.
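For a concrete sense of those savings, here is a minimal sketch (an illustration added for this text, not from the interview): a naive DFT costs on the order of N² operations, while the FFT costs about N·log₂N, yet both produce the same spectrum.

```python
# Minimal illustration: naive DFT (~N^2 operations) vs. FFT (~N*log2(N)).
import numpy as np

N = 1024
x = np.random.randn(N) + 1j * np.random.randn(N)

n = np.arange(N)
F = np.exp(-2j * np.pi * np.outer(n, n) / N)   # explicit N x N DFT matrix
X_naive = F @ x                                 # ~N^2 multiply-adds
X_fast = np.fft.fft(x)                          # ~N*log2(N) operations

print(np.allclose(X_naive, X_fast))             # True: identical spectra
print(N ** 2, int(N * np.log2(N)))              # 1048576 vs 10240 operations
```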

How did you apply the principles and advantages of the fast Fourier transform algorithm in the theory of artificial neural networks?

FFT algorithms have a pronounced multilayer structure similar to that of multilayer perceptrons, so using the FFT ideology to build neural networks is a natural step. To do this, it is enough to replace the constant values of the butterfly operations with tunable synaptic weights.
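As a rough sketch of that idea (my own illustration under simplifying assumptions, not Dorogov's published formulation), the snippet below implements a single radix-2 butterfly stage in which each fixed 2×2 butterfly matrix is replaced by a tunable 2×2 kernel.

```python
import numpy as np

def butterfly_layer(x, kernels):
    """One radix-2 stage: element pairs (k, k + N/2) are combined by a 2x2 kernel.
    With the fixed matrix [[1, 1], [1, -1]] this is an ordinary FFT butterfly;
    making each kernel a tunable parameter turns the stage into a neural layer."""
    N = len(x)
    y = np.empty_like(x)
    for k in range(N // 2):
        a, b = x[k], x[k + N // 2]
        W = kernels[k]                        # 2x2 "neural kernel"
        y[k]          = W[0, 0] * a + W[0, 1] * b
        y[k + N // 2] = W[1, 0] * a + W[1, 1] * b
    return y

N = 8
x = np.random.randn(N)
fft_kernels = np.tile([[1.0, 1.0], [1.0, -1.0]], (N // 2, 1, 1))  # constant butterflies
learned_kernels = np.random.randn(N // 2, 2, 2)                    # tunable synaptic weights
print(butterfly_layer(x, fft_kernels))
print(butterfly_layer(x, learned_kernels))
```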

It should be noted that the first step in this direction was taken long ago. An analytical description of fast algorithms for generalized spectral transformations was given in the historical work of Good (1958). From today's perspective, a generalized spectral transformation (or adapted transformation) can be considered a neural network with linear activation functions.

What is the difference between the fast neural networks (FNN) you propose and conventional neural networks?

Unlike conventional feedforward networks, an FNN has a modular organization in which the modules (neural kernels) are connected into a multilayer regular network.

The structural features of FNNs remove the "curse of dimensionality," which makes this type of network applicable to processing large and extra-large data arrays. Moreover, the training procedure converges absolutely in a finite number of steps equal to the number of layers in the network.

In terms of adapted transformations, the learning process is usually called tuning. A typical task for an adapted transformation is to tune its basis functions to a given system, for example the Fourier or Hadamard-Walsh basis. The fact that the FNN structure mirrors the structure of tensor products of vector spaces allows these networks to be used to construct quantum computing algorithms.
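One way to see that tensor-product structure is the Hadamard-Walsh basis: the full transform matrix factors as a Kronecker product of identical 2×2 kernels, one factor per layer of the fast algorithm. The sketch below is an illustration of that decomposition added for this text, not code from Dorogov's work.

```python
import numpy as np
from functools import reduce

H2 = np.array([[1.0,  1.0],
               [1.0, -1.0]]) / np.sqrt(2.0)    # 2x2 orthonormal kernel

def walsh_hadamard(n_factors):
    """Full Hadamard-Walsh transform as a Kronecker product of identical
    2x2 kernels; each factor corresponds to one layer of the fast algorithm."""
    return reduce(np.kron, [H2] * n_factors)

H8 = walsh_hadamard(3)                          # 8 x 8 transform matrix
print(np.allclose(H8 @ H8.T, np.eye(8)))        # True: the transform is orthogonal
```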

What is the idea behind applying reconfigurable FFT algorithms to extract principal components in pattern recognition?

Quite often, the principal component can be extracted directly from the signal as a base image relative to which all variations of the signal are observed. Since the signal energy is the sum of the squares of the spectral coefficients, the value of a spectral coefficient directly indicates the significance of the corresponding informative feature.

An orthogonal transform tuned to a single principal component belongs to the class of adapted transformations. Due to orthogonality, applying the adapted transformation to the base image yields an output vector in which one coordinate equals unity and all other coordinates are zero. The adapted orthogonal transform is therefore ideal for selective recognition of a base signal.
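That behaviour can be reproduced with any orthogonal matrix whose first row is the normalized base image; a Householder reflection is one standard way to build such a matrix. The sketch below illustrates the principle only and is not the adapted-transform construction from the interview.

```python
import numpy as np

def adapted_orthogonal(base):
    """Orthogonal matrix whose first row is the normalized base image,
    built as a Householder reflection that maps the base image to e1."""
    u = base / np.linalg.norm(base)
    e1 = np.zeros_like(u)
    e1[0] = 1.0
    v = u - e1
    if np.linalg.norm(v) < 1e-12:               # base already aligned with e1
        return np.eye(len(u))
    v /= np.linalg.norm(v)
    return np.eye(len(u)) - 2.0 * np.outer(v, v)

base = np.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0])
Q = adapted_orthogonal(base)
y = Q @ (base / np.linalg.norm(base))
print(np.round(y, 6))    # one coordinate equals unity, all others are zero
```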

What is the difference between fast neural networks and wavelet transforms?

Wavelet analysis is based on representing signals as a superposition of scale transformations and shifts of a generating wavelet pulse. The self-similarity of wavelet functions under scale transformations is a distinctive property of fractal sequences; therefore, the wavelet transform can be considered one of the representatives of fractal data-processing methods.

The self-similarity principle can also be used to construct fractal filters. In a generalized sense, filtering is a procedure that projects a signal specified in a functional space into a space of smaller dimension.

Unlike the wavelet transform, fractal filtering is irreversible and is associated with information loss. The purpose of fractal filtering is to identify spatially distributed properties of the studied object at multiple time scales.
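As a rough analogy for such an irreversible multi-scale projection (my sketch under simplifying assumptions, not Dorogov's fractal filter), the snippet below repeatedly averages neighbouring samples: each step halves the dimension, exposes a coarser time scale, and discards the within-pair variation.

```python
import numpy as np

def coarse_grain(x):
    """Project a signal onto a space of half the dimension by averaging
    neighbouring samples; the within-pair variation is irreversibly lost."""
    return x.reshape(-1, 2).mean(axis=1)

x = np.random.randn(256)
scales = [x]
while len(scales[-1]) > 4:
    scales.append(coarse_grain(scales[-1]))     # 256 -> 128 -> 64 -> ... -> 4

for s in scales:
    print(len(s), np.var(s))    # the signal's variability at each time scale
```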

It is known that neural networks can serve as generators of fractal structures. Is it possible to approximate regular fractals with a neural network?

The world around us is filled with fractals to a much greater extent than with the objects "familiar" to us. A good example is the seemingly well-studied field of dynamical systems. Modern studies have shown that fractal attracting sets (attractors) are characteristic of nonlinear dynamical systems in the space of phase coordinates. Fractal attractors are associated with so-called dynamic chaos scenarios, which define the boundary of possible prediction of the behavior of dynamical systems.
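A standard textbook illustration of that prediction limit (not taken from the interview) is the logistic map in its chaotic regime: two trajectories that start almost identically diverge within a few dozen iterations.

```python
# Sensitivity to initial conditions in the chaotic logistic map x -> r*x*(1-x).
r = 3.9
x, y = 0.5, 0.5 + 1e-10          # two almost identical starting points
for step in range(60):
    x, y = r * x * (1 - x), r * y * (1 - y)
    if step % 10 == 9:
        print(step + 1, abs(x - y))   # the gap grows roughly exponentially
```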

FNNs are regular structures by construction, and it can be proved that these structures possess the self-similarity property. Therefore, any regular fractal can be approximated within the FNN class; the type of fractal imposes some restrictions on the neural network parameters.

What is the relationship between fast neural networks and deep learning neural networks?

A small modification of the FNN structural-synthesis algorithm leads to deep-learning network architectures with many planes in the neural layers. The new network topology is ideologically close to that of convolutional deep-learning networks, but it is regular. FNN training methods carry over without modification to the new class of networks, while the number of recognizable images increases dramatically, covering all elements of the output plane.

The proposed solution gives a constructive answer to two fundamental questions of deep-learning neural networks: how to choose the topology and how to reduce the training time.

How are fast neural networks used to build quantum computing algorithms?

In a quantum computer, the functional analog of a bit is the qubit, which is associated with a two-dimensional complex space. In 1994, Peter Shor showed that integer factorization, a problem long considered classically intractable, can be solved on a quantum computer in polynomial time. To build an efficient quantum computation, Shor used a modification of the radix-2 fast Fourier algorithm. When FNNs are used in quantum algorithms, all neural kernels of the network are structurally similar and are 2 × 2 unitary matrices. If all the kernels within a layer are parametrically identical, then the neural layer is uniquely associated with one qubit of the quantum register. In this case, the unitary operator equivalent to the neural network, which describes the evolution of the state of an n-qubit register, is expressed by the Kronecker product of the neural kernel matrices. It should be noted that simulating a quantum register on an ordinary computer requires resources that grow exponentially with the number of qubits.
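A minimal sketch of that correspondence (my illustration, with an arbitrarily chosen 2×2 unitary kernel): when every kernel in a layer is the same 2×2 unitary, the layer's action on an n-qubit register is the n-fold Kronecker product, a 2ⁿ × 2ⁿ matrix, which is exactly why direct classical simulation of large registers blows up.

```python
import numpy as np
from functools import reduce

def qubit_kernel(theta):
    """A parametrized 2x2 unitary playing the role of one neural kernel."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

def layer_operator(kernel, n_qubits):
    """Evolution operator of an n-qubit register when all kernels of the
    layer are parametrically identical: the n-fold Kronecker product."""
    return reduce(np.kron, [kernel] * n_qubits)

n = 3
U = layer_operator(qubit_kernel(0.3), n)             # 2^n x 2^n unitary
print(U.shape)                                        # (8, 8)
print(np.allclose(U.conj().T @ U, np.eye(2 ** n)))    # unitarity is preserved

# The state vector of an n-qubit register holds 2^n complex amplitudes,
# which is what makes classical simulation of large registers intractable.
for m in (10, 20, 30):
    print(m, "qubits ->", 2 ** m, "amplitudes")
```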

In the quantum version, the image is a quantum state of the register. The pattern-recognition task is reduced to constructing a unitary transformation that produces at the output one of the basis states of the register with probability close or equal to one. Training the neural network reduces to a fractal decomposition of the image function and tuning of the neural kernel parameters. The decomposition methods are based on fractal filtering and differ only in the complex-valued nature of the fractal filters.

If the initial state of the quantum register coincides with the image function, then after the neural network operator is applied, the quantum state will exactly coincide with one of the basis states. In the general case, the FNN performs elementary transformations over the states of the quantum register as a whole, rather than over individual qubits.
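To make the recognition statement concrete, here is a hedged sketch (my construction via a QR completion, not the fractal-decomposition training described above): a unitary whose first row is the conjugated, normalized image state sends that state to a single basis state, so a measurement returns it with probability equal to one.

```python
import numpy as np

def recognizing_unitary(image_state):
    """Unitary whose first row is (up to phase) the conjugate of the normalized
    image state, completed to a full orthonormal basis by QR decomposition."""
    psi = image_state / np.linalg.norm(image_state)
    M = np.eye(len(psi), dtype=complex)
    M[:, 0] = psi
    Q, _ = np.linalg.qr(M)          # first column of Q spans the direction of psi
    return Q.conj().T

image = np.array([1 + 1j, 2, 0.5j, 1], dtype=complex)   # a 2-qubit "image" state
psi = image / np.linalg.norm(image)

U = recognizing_unitary(image)
probs = np.abs(U @ psi) ** 2
print(np.round(probs, 6))           # probability 1 on a single basis state
```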

Interview: Ivan Stepanyan
