KIVI Chair
The KIVI Chair has a long tradition. It is awarded by TU Delft for a research period of four years in a significant field of research with applications in various engineering disciplines. The chair is meant to connect scientific research at TU Delft with engineers in professional practice.
Big Data Science
The research explores the topic of Big Data Science. The chair is held by prof. dr. ir. Geert-Jan Houben of TU Delft.
Delft Data Science
This research is closely connected to the Data Science work in Delft: Delft Data Science.
On 27 November 2017, KIVI and Delft Data Science organised three Big Data Science masterclasses: ‘Smart Algorithms for Smart Grids’, ‘Responsible Data Sharing: A vision of human-technology partnership’ and ‘Software Analytics’.
The masterclasses are typically an exchange of cutting-edge research and knowledge developed within innovative companies. They also give engineers and researchers the opportunity to inspire each other and to work together.
See more: KIVI Chair Big Data Science Masterclasses.
As part of the KIVI Chair in Big Data Science, the Royal Netherlands Society of Engineers (KIVI) and TU Delft organised three Big Data Science masterclasses on 10 November 2016.
See more: KIVI Chair Big Data Science Masterclasses.
On Wednesday 10 June 2015, Micaela dos Ramos, Director of the Royal Netherlands Society of Engineers (KIVI), and Prof. Geert-Jan Houben, the KIVI Chair Professor, gave the go-ahead for the KIVI Chair in Big Data Science. The Chair, which will link scientific research at TU Delft with professional engineering practice, was launched at the kick-off symposium in The Hague. At the symposium, Professor Houben described developments in the field of big data science and outlined its impact on society and the engineering sciences. “What makes big data so interesting is that, just like the web, it is both fundamental and experimental at the same time,” said Houben. As a result, it calls for a new type of research, to which computer scientists are often unaccustomed. “It involves social aspects that are absent from a purely technical system. That’s what makes researching it so exciting.”
After the plenary session, there were three parallel masterclasses offering a taster of big data research. In his masterclass, Dr Alexandru Iosup looked at ‘Scalable High Performance Systems’, for example at data centres, one of the most important prerequisites for an information culture. During the masterclass ‘Crowdsourcing in Enterprise Environments’ by Dr Alessandro Bozzon, the focus was on the importance of combining the cognitive and reasoning capacity of individuals and groups with the computational powers of machines. Bozzon argued that this will promote welfare and integration. In his masterclass entitled ‘Acceleration of Personalized Medicine Applications’, Dr Zaid Al-Ars explained how genomics data analysis is used in the diagnosis of genetic diseases such as cancer and outlined the necessity of reducing the calculation time involved.
The symposium ended with a panel discussion in which the engineers present could put questions and comments to the scientists. This marked the start of collaboration in big data science between scientists and engineering professionals.
“KIVI seems much more technology oriented than peers in other European countries, which is a plus for collaboration with academia.”
To cope with increasing computation demands and the data deluge, we have already started to build complex hardware and software ecosystems, exposed as cloud services to a vast user community (possibly tens of millions, worldwide). These users demand high performance or high throughput, and may switch at any time among hundreds of service providers and technologies. Interesting new challenges emerge in operating the datacentres that form the infrastructure of cloud services, and in supporting the dynamic workloads of demanding users.
We discuss here several steps towards addressing these challenges. If we succeed, we may not only enable the advent of big science and engineering, and the almost complete automation of many large-scale processes, but also reduce the ecological footprint of datacentres and the entire ICT industry.
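To give a rough feel for why supporting dynamic workloads is hard, the following Python sketch (our own illustration, not part of the masterclass material) simulates a single server facing a synthetic mix of short and long jobs, and compares two classic scheduling policies, first-come-first-served (FCFS) and shortest-job-first (SJF), on mean response time. All arrival rates and job sizes are invented for the example.

import random

random.seed(42)

# Synthetic dynamic workload: Poisson arrivals, mostly short jobs with
# occasional long ones (all numbers are made up for this illustration).
jobs = []
t = 0.0
for _ in range(2000):
    t += random.expovariate(1.0)                 # inter-arrival time
    size = random.choice([0.2, 0.2, 0.2, 2.0])   # service time
    jobs.append((t, size))

def simulate(jobs, pick):
    """Non-preemptive single-server simulation; `pick` chooses the next
    job among those already arrived. Returns the mean response time."""
    queue, responses, clock, i = [], [], 0.0, 0
    while i < len(jobs) or queue:
        if not queue:                            # server idle: jump ahead
            clock = max(clock, jobs[i][0])
        while i < len(jobs) and jobs[i][0] <= clock:
            queue.append(jobs[i])                # admit new arrivals
            i += 1
        arrival, size = pick(queue)
        queue.remove((arrival, size))
        clock += size                            # serve the chosen job
        responses.append(clock - arrival)        # finish time - arrival
    return sum(responses) / len(responses)

fcfs = simulate(jobs, lambda q: min(q, key=lambda j: j[0]))  # by arrival
sjf = simulate(jobs, lambda q: min(q, key=lambda j: j[1]))   # by job size
print(f"mean response time -- FCFS: {fcfs:.2f}, SJF: {sjf:.2f}")

Even in this toy setting, favouring short jobs markedly reduces mean response time; real datacentre schedulers must balance such gains against fairness, deadlines, and energy use.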
“Participants actively interacted to provide examples of concrete engineering and scientific issues that could be addressed by human computation techniques.”
Roughly defined, crowdsourcing is “an online, distributed problem-solving and production model that leverages the collective intelligence of online communities to serve specific organisation goals”.
More and more, companies turn (directly or indirectly) to online communities to outsource tasks previously performed by their employees or associates. But how can crowds of online users be tapped systematically and consistently to perform such tasks? How can organisations build upon crowdsourcing techniques to valorise their (human) assets, stimulate employee engagement and retention, and, ultimately, provide a more enjoyable and productive workplace?
The masterclass aims to provide answers to such questions, and can target different audiences. On the one hand, Alessandro Bozzon presents examples of successful (and disastrous) case studies where crowdsourcing found real-world application. On the other hand, he introduces the underpinning principles and “theories” of crowdsourcing, providing the “ingredients” needed to successfully implement crowdsourcing for, and in, enterprise environments.
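To make the idea concrete, here is a minimal Python sketch (our illustration, not Bozzon's material) of the simplest quality-control step in a crowdsourcing pipeline: collecting redundant judgements per task and aggregating them by majority vote. All task IDs, worker IDs, and labels below are hypothetical.

from collections import Counter, defaultdict

# Each tuple: (task_id, worker_id, label) -- e.g. image-tagging judgements.
judgements = [
    ("img-001", "w1", "cat"), ("img-001", "w2", "cat"), ("img-001", "w3", "dog"),
    ("img-002", "w1", "dog"), ("img-002", "w4", "dog"), ("img-002", "w2", "dog"),
    ("img-003", "w3", "cat"), ("img-003", "w4", "bird"), ("img-003", "w5", "bird"),
]

def majority_vote(judgements):
    """Collapse redundant judgements per task into one label plus an
    agreement score (fraction of workers backing the winning label)."""
    by_task = defaultdict(list)
    for task, _, label in judgements:
        by_task[task].append(label)
    results = {}
    for task, labels in by_task.items():
        label, votes = Counter(labels).most_common(1)[0]
        results[task] = (label, votes / len(labels))
    return results

for task, (label, agreement) in sorted(majority_vote(judgements).items()):
    # Low agreement flags tasks worth re-publishing to more workers.
    print(f"{task}: {label} (agreement {agreement:.0%})")

Majority voting is only the starting point; production systems typically also weight workers by their historical accuracy, but the aggregation pattern is the same.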
“The audience was keen to understand the issues and the solutions presented. The Q&A session and discussion continued well into the break and had to be taken offline to start the following session.”
The topic of Acceleration of Personalized Medicine Applications relates to the rapidly growing mass of health sensor measurements of individuals, and to the computational bottlenecks we face in using these measurements to enable personalized treatment plans for patients.
One focus we address within this topic is the need to reduce the computational time of genomics data analysis and the way it is used in the diagnosis of genetic diseases such as cancer. The computational complexity stems from many sources, such as the processor, memory, and I/O. We discuss these bottlenecks and identify possible solutions.
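As a toy illustration of attacking the processor bottleneck (our own sketch, not the actual pipeline discussed in the masterclass), the Python snippet below counts k-mers, a common inner step of genomics data analysis, in parallel across processor cores. The random "genome", the value of K, and the chunking scheme are assumptions made for the example; chunks overlap by K-1 bases so no k-mer at a chunk border is missed.

import random
from collections import Counter
from concurrent.futures import ProcessPoolExecutor

K = 8  # k-mer length (chosen arbitrarily for the demo)

def count_kmers(chunk: str) -> Counter:
    """Count all K-length substrings in one chunk of the sequence."""
    return Counter(chunk[i:i + K] for i in range(len(chunk) - K + 1))

def parallel_kmer_count(seq: str, workers: int = 4) -> Counter:
    step = len(seq) // workers
    # Overlapping chunks: each one extends K-1 bases into the next, so
    # every boundary-spanning k-mer is counted exactly once overall.
    chunks = [seq[i * step : (i + 1) * step + K - 1]
              for i in range(workers - 1)]
    chunks.append(seq[(workers - 1) * step :])  # last chunk: the remainder
    with ProcessPoolExecutor(max_workers=workers) as pool:
        total = Counter()
        for partial in pool.map(count_kmers, chunks):
            total.update(partial)               # merge per-core counts
        return total

if __name__ == "__main__":
    random.seed(0)
    genome = "".join(random.choice("ACGT") for _ in range(1_000_000))
    print(parallel_kmer_count(genome).most_common(3))

Real genomics pipelines push the same divide-and-merge idea much further, onto many-core CPUs, GPUs, and FPGAs, which is where most of the acceleration gains discussed in the masterclass come from.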
Visiting address:
Building 28
Room 840 West, 4th floor
Van Mourik Broekmanweg 6
2628 XE Delft
The Netherlands