Making full use of Big Data not yet feasible

14.01.2016

Professor Esa Ollila develops signal processing and machine learning tools and methods for Big Data applications.

Esa Ollila, DSc (Tech), was appointed Associate Professor at the Department of Signal Processing and Acoustics on 1 June 2015. Ollila's professorship is in statistical signal processing, which combines models and methods from applied mathematics, electrical engineering and machine learning.

According to Ollila, signal processing is a field that has successfully expanded its scope.

– The importance of data analysis grows along with the ever-increasing amount of data. Although the smart devices in our everyday lives have given us many ways to collect, process and store large amounts of data very easily, our ability to extract value from that data has not kept pace.

Ollila notes that current technologies lack scalability, which makes it harder to find meaningful patterns, correlations or clusters in big datasets. In addition, we often want to quantify the uncertainty of the estimates and analyses derived from the data.

– At the moment, our Big Data toolbox does not contain methods that would make it possible to attach error bars to estimated parameters. This is one of my current research subjects in the field of Big Data.
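To make the idea of error bars concrete, here is a minimal sketch of the classical bootstrap in Python on synthetic data (a generic textbook technique, not Ollila's method). Plain resampling of this kind is precisely what becomes infeasible at Big Data scale, which is the gap he describes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data (assumed for the demo): 10,000 noisy measurements, true mean 5.0
data = rng.normal(loc=5.0, scale=2.0, size=10_000)

# Point estimate of the parameter of interest (here, the mean)
estimate = data.mean()

# Classical bootstrap: resample with replacement and recompute the
# estimator to approximate its sampling distribution.
n_boot = 1000
boot_estimates = np.array([
    rng.choice(data, size=data.size, replace=True).mean()
    for _ in range(n_boot)
])

# A 95% confidence interval -- the "error bars" for the estimated mean
lo, hi = np.percentile(boot_estimates, [2.5, 97.5])
print(f"estimate = {estimate:.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")
```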

On a larger scale, his aim is to develop computational methods, tools and algorithms for processing and visualising large, multidimensional datasets. The intention is to enhance the usability of Big Data, i.e. enormous data sets.

– These methods can also have far-reaching economic and social significance, Ollila notes.

Wide range of applications for data modelling and analysis

What, then, is the signal processing of today?

– At its simplest, a signal is actually just data, that is, numeric observations from different sensors or measurement systems. That is why the term 'signal' is outdated. Typical examples of Big Data, or signals, are the datasets produced by signal and telecommunication systems, smartphones, multi-sensor systems, computer networks or smart electric grids.

The data, or a large data set, can be used to calculate various estimators, or key figures, that in some way describe the observed population or help us understand the phenomenon the data represents. As the amount of data increases, the number of erroneous measurements and deviating observations grows at the same rate, which in turn introduces errors into the estimators, conclusions and analyses.

– That is why robust signal processing and data analysis methods, that is, methods that are not sensitive to measurement errors or to deviations from the underlying modelling assumptions, play a very important role in Big Data analysis. I have extensive experience in robust methods and statistical modelling, which I am now able to use in my own Big Data research.
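A toy comparison (again a generic illustration, not taken from the interview) shows what robustness means in practice: a handful of grossly erroneous measurements can ruin the sample mean, while a robust estimator such as the median is barely affected.

```python
import numpy as np

rng = np.random.default_rng(42)

# 1,000 clean measurements around a true value of 10.0 (assumed for the demo)
clean = rng.normal(loc=10.0, scale=1.0, size=1000)

# Contaminate 1% of the observations with gross errors, e.g. sensor glitches
contaminated = clean.copy()
contaminated[:10] = 1e6

print(f"clean mean:          {clean.mean():.2f}")               # ~10.0
print(f"contaminated mean:   {contaminated.mean():.2f}")        # ~10000 -- ruined
print(f"contaminated median: {np.median(contaminated):.2f}")    # still ~10.0
```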

Ollila studies the randomness of measurements in signal processing, with a special focus on various noise models that can be used in antenna array and radar signal analysis and in the derivation of optimal estimators. He has also developed techniques for estimating the directions of arrival of signals received by an antenna array and the location of the transmitter.
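For readers unfamiliar with direction-of-arrival (DOA) estimation, the sketch below illustrates the idea with a generic textbook method, the classical (Bartlett) beamformer on a simulated uniform linear array. The array geometry, signal and noise levels are assumptions made for the demo; Ollila's own estimators are more sophisticated.

```python
import numpy as np

rng = np.random.default_rng(1)

# Uniform linear array: M sensors, half-wavelength spacing (assumed setup)
M, d = 8, 0.5            # element count, spacing in wavelengths
true_doa = 25.0          # degrees, direction of the simulated source
n_snapshots = 200

def steering(theta_deg):
    """Array response of the ULA for a plane wave arriving from theta_deg."""
    theta = np.deg2rad(theta_deg)
    return np.exp(-2j * np.pi * d * np.arange(M) * np.sin(theta))

# Simulate snapshots: one narrowband source plus white noise
s = rng.standard_normal(n_snapshots) + 1j * rng.standard_normal(n_snapshots)
noise = 0.1 * (rng.standard_normal((M, n_snapshots))
               + 1j * rng.standard_normal((M, n_snapshots)))
X = np.outer(steering(true_doa), s) + noise

# Sample covariance matrix of the array output
R = X @ X.conj().T / n_snapshots

# Bartlett beamformer: scan candidate angles, measure output power
grid = np.arange(-90.0, 90.0, 0.5)
power = np.array([np.real(steering(t).conj() @ R @ steering(t)) for t in grid])

print(f"estimated DOA: {grid[power.argmax()]:.1f} deg (true: {true_doa} deg)")
```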

There is a wide range of applications for data modelling and analysis. Ollila provides an example in the form of medical imaging.

– Tensor decompositions, for example, are highly useful data processing methods for multi-dimensional datasets and can be used, among other things, in analysing medical data.
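As a rough sketch of what a tensor decomposition does, the example below applies a basic higher-order SVD (HOSVD), one common decomposition, to synthetic three-way data; the dimensions and the "patients x voxels x time" reading are assumptions for the demo, not details from Ollila's work. The tensor is compressed into per-mode factor matrices and a small core, from which the data can be reconstructed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 3-way dataset, e.g. (patients x voxels x time), with low-rank structure
I, J, K, r = 30, 40, 20, 3
A = rng.standard_normal((I, r))
B = rng.standard_normal((J, r))
C = rng.standard_normal((K, r))
T = np.einsum('ir,jr,kr->ijk', A, B, C) + 0.01 * rng.standard_normal((I, J, K))

def unfold(tensor, mode):
    """Flatten a 3-way tensor into a matrix along the given mode."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

# HOSVD: the SVD of each unfolding gives a factor matrix per mode
U = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r] for m in range(3)]

# Core tensor: project T onto the three factor subspaces
G = np.einsum('ijk,ia,jb,kc->abc', T, U[0], U[1], U[2])

# Rank-(r, r, r) reconstruction -- a compressed view of the data
T_hat = np.einsum('abc,ia,jb,kc->ijk', G, U[0], U[1], U[2])
rel_err = np.linalg.norm(T - T_hat) / np.linalg.norm(T)
print(f"relative reconstruction error: {rel_err:.4f}")
```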

And what does the professor do in his free time?

– I spend my free time mostly with my family and my hobbies: skiing in the winter and playing tennis in the summer. I also like to cook, which has proved to be a very nice counterbalance to my work.


Photo: Lasse Lecklin