The world is awash with data. Large data sets have been available for many decades, but in recent years their volumes have grown explosively. With mobile devices and internet connections, data capture is simple, and with powerful computers the analysis of “big data” is feasible [see TM092, or search for “thatsmaths” at irishtimes.com].
But there are challenges: many data sets are too large and too complex to be analysed or understood using traditional data processing methods. Our current armoury of analysis techniques is inadequate and new mathematical methods are needed.
Globally, something like five exabytes of data are created every day. That is about a million million million words. The data come from a multitude of sources. Amongst these are internet traffic, phone calls, education records, medical and health records, court reports, genome sequences, astrophysical observations, stock market movements and social networks. On Twitter, about 6,000 tweets are sent every second, which means some 500 million per day and about 200 billion per year.
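The tweet figures above are easy to sanity-check with a few lines of arithmetic (a rough back-of-envelope calculation, not an official statistic):

```python
# Back-of-envelope check of the tweet figures quoted above.
tweets_per_second = 6_000
per_day = tweets_per_second * 60 * 60 * 24   # seconds in a day
per_year = per_day * 365

print(f"per day:  {per_day:,}")    # 518,400,000  (~500 million)
print(f"per year: {per_year:,}")   # 189,216,000,000  (~200 billion)
```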
Small data sets can be organized into a matrix using a simple spreadsheet. For big data, processing is beyond human capacity and more sophisticated methods are required: new efficient forms of processing are needed to extract value from the data. Big data presents huge management tasks: data capture, verification, storage, sorting, analysis, visualization and presentation.
Hadoop & Presto
Big data is noisy, unstructured and ever-changing. Real-time analysis requires massively parallel processing, with software running simultaneously on thousands of processors. Brute-force analysis alone is ineffective, and modern computer architectures require innovative algorithms to exploit their power. Specialized software tools are available, such as Hadoop, a software framework for distributed storage and processing of very large data sets on computer clusters, and Presto, a system for running interactive analytic queries, developed by Facebook.
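The map-and-reduce idea behind frameworks like Hadoop can be sketched in a few lines of plain Python using the standard multiprocessing module (a toy word count across a handful of worker processes, not Hadoop's actual API):

```python
from collections import Counter
from multiprocessing import Pool

def count_words(chunk):
    # "Map" step: each worker counts the words in its own chunk of text.
    return Counter(chunk.split())

def merge_counts(partials):
    # "Reduce" step: combine the partial counts into one total.
    total = Counter()
    for part in partials:
        total.update(part)
    return total

if __name__ == "__main__":
    chunks = ["big data is big", "data about data"]
    # Each chunk is handed to a separate worker process.
    with Pool() as pool:
        partials = pool.map(count_words, chunks)
    print(merge_counts(partials))   # "data" appears 3 times, "big" twice
```

Real clusters apply the same pattern, but with the chunks and the workers spread over thousands of machines rather than the cores of one.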
Unlocking the information from large data sets yields understanding and enables predictions about future trends. Big data analysis can reveal new links and relationships. Large companies – eBay, Amazon, Netflix, Facebook and Google – are engaged in analysis of customer preferences: patterns of purchases enable them to recommend products that a customer is likely to buy. Other applications include insurance fraud detection, flight analysis and medical diagnosis and prognosis.
Humans are poor at heavy quantitative analysis but brilliant at pattern recognition. For example, we may recall a face seen briefly years ago. So far, machines cannot match us but, as large data analysis progresses and new techniques are developed, substantial advances may be expected. Millions of images can be input to deep learning algorithms to train them to recognize patterns.
Topological data analysis
Many human activities are organized in networks, which can be modelled using graph theory. A graph is just a collection of nodes linked by edges, like an electric circuit diagram or a railway map. The branch of mathematics dealing with connectivity and continuity is called topology, and it includes graph theory. Topological data analysis provides a way of generating structured data sets from unstructured, chaotic data. The structured data can then be processed using machine learning algorithms.
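A graph of this kind is easy to represent directly. The sketch below stores a toy “railway map” as an adjacency list and uses breadth-first search to test connectivity, the basic topological question (station names and routes are purely illustrative):

```python
from collections import deque

# A toy "railway map" as an adjacency list: nodes are stations,
# edges are direct lines between them (routes are illustrative only).
graph = {
    "Dublin":   ["Cork", "Galway"],
    "Cork":     ["Dublin"],
    "Galway":   ["Dublin", "Limerick"],
    "Limerick": ["Galway"],
}

def degree(node):
    # Number of edges meeting at a node.
    return len(graph[node])

def connected(start, goal):
    # Breadth-first search: is there a route from start to goal?
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            return True
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

print(degree("Dublin"))               # 2
print(connected("Cork", "Limerick"))  # True
```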
Often, data is represented by points in a high-dimensional space – difficult to visualize but amenable to algebraic manipulation. Large multi-dimensional data sets can be reduced to a greatly compressed form using methods like singular value decomposition, in which the essential information-bearing components are isolated and the rest discarded.
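A minimal sketch of this compression using NumPy's singular value decomposition, on a synthetic low-rank matrix (sizes and rank chosen purely for illustration):

```python
import numpy as np

# Build a synthetic 100x50 "data matrix" of rank 5 (illustrative only).
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 5)) @ rng.standard_normal((5, 50))

# Singular value decomposition: A = U * diag(s) * Vt.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

k = 5                                  # number of components to keep
A_k = (U[:, :k] * s[:k]) @ Vt[:k]      # rank-k approximation of A

# Storage: 100*50 = 5000 numbers originally; the rank-5 factors need
# only 100*5 + 5 + 5*50 = 755, yet reconstruct A almost exactly.
print(A_k.shape)            # (100, 50)
print(np.allclose(A, A_k))  # True
```

For real, noisy data the small singular values carry mostly noise, so truncating them compresses the data while keeping its essential structure.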
There is an acute shortage of experts in data analysis in many industrial sectors, including health, finance, climate science, pharmaceuticals and online services. Several universities, UCD included, offer postgraduate programmes in data analytics. With so many open problems, this is a promising field for young people.