VACANCY Postdoc Social Data Analytics to Support Vulnerable Youth, 2 years – 0.8 FTE

As part of its research agenda, the LDE Centre for BOLD Cities aims to develop big data solutions and capabilities for city civil servants and policy makers confronted with pressing social issues. In the context of a recently funded national project (NWA Startimpuls - Big Data voor Jongerenbeleid), the BOLD Cities research team will focus specifically on young people vulnerable to, among other things, labour market insecurity, social exclusion, housing shortages, or anti-social and extremist temptations.

The research activity will be carried out under the supervision of Alessandro Bozzon and Prof. Liesbet van Zoonen (Erasmus University Rotterdam), and in collaboration with other researchers from TU Delft and the LDE Centre for BOLD Cities.

The postdoctoral researcher will be embedded in the SocialGlass research team and will be expected to conduct research on social data analytics, investigating new ways to combine and exploit social data sources to support vulnerable groups in the population.

Candidates are required to have a completed PhD, with specialization and a proven track record in one or more of the following subjects: user modeling, web mining, web science, information retrieval, urban analytics, or related fields. Preferential consideration will be given to candidates with a genuine passion for, and commitment to, the social issues addressed by the project.

The full project description and the application details can be found here. Application deadline: December 15, 2017.

VACANCY Postdoc FairNews, 2 years – 1.0 FTE

Equal access to news is a necessary precondition for a functioning democracy. Data analytics, machine learning and personalised recommendations make it possible to filter news based on individual user profiles and 'social sorting'. We are only beginning to understand the implications not only for political knowledge and participation in public debate, but also for the realisation of fundamental freedoms, such as freedom of expression and non-discrimination.

The objective of this joint project with the University of Amsterdam (3 postdocs in total) is to develop solutions that lead to a) more transparency and effective ways of informing users about algorithmic profiling and targeting in the news media and b) guidance for offering personalised recommendations in the news media in a way that is fair, non-discriminatory and accessible.

The postdoctoral researcher will be charged with:

  • investigating algorithmic biases in a data-driven manner (data provided by the use case);
  • proposing algorithmic remedies for the observed algorithmic biases;   
  • engineering user-facing prototypes that contain explainability components and enable users to learn why they receive certain information recommendations;
  • evaluating the proposed solutions in simulated and real-world studies;   
  • disseminating the conducted research through conference papers and journal articles.

The full project description and the application details can be found here. Application deadline: November 15, 2017.

A few topics for which Postdoc positions might be open soon:

Openings for postdocs on these themes and topics may become available soon, so interested candidates are always welcome to contact us, for example via Prof. Geert-Jan Houben:

  • Complex Search and Learning
The learning process is more involved than currently acknowledged in MOOC platforms. Two important aspects of online learning are search (retrieving information) and sensemaking (making sense of the information). Search and sensemaking are an integral part of the learning process and, for many learners, synonymous with accessing and ingesting information through Web search engines.
    In order to support complex search scenarios, we need to design search systems whose effectiveness is optimized over whole search episodes instead of individual search queries. Although existing research in interactive IR has made some headway towards a better understanding of complex search tasks, empirical research approaches are hampered by the fact that they mostly rely on simulated search tasks and a very limited number of lab study participants. In this project, we will design and deploy a Web search system within a number of MOOCs, to collect information on naturally occurring complex information needs and how learners go about solving them. The MOOC setting provides us with explicit information on learners' knowledge and skills, allowing us to explore how search success and search behaviour relate to learning behaviour and knowledge gains - for the first time in a natural and large-scale setting (instead of a small-scale lab setting).
    Contact: Claudia Hauff
  • Gamifying the MOOC experience through deep learning and crowdsourcing
    MOOC learners are often ill-equipped to excel in this new type of online learning environment for a number of reasons, most importantly a lack of self-regulatory learning skills. Learners receive little information about their learning performance beyond the scores they achieve on assignments. Recent advances in natural language processing, in particular deep learning, offer a way forward through neural language models and their ability to generate language. In this project, we will design and deploy a system that - based on relevant text material such as textbooks and scientific papers - automatically generates an unlimited set of assessment questions on the fly, through which learners can self-assess their comprehension of the material not just at fixed assessment periods but continuously. We will explore crowdsourcing approaches to semi-automatically determine the validity of the generated questions. The system will also include gamification elements (leaderboards, streaks and so on) to entice MOOC learners to use it.
    Contact: Claudia Hauff
  • Explanations to support media-literacy in learners
    More and more educators are adopting problem-based learning. One common motivation is to encourage students to take ownership of their own learning; another is growing class sizes. Learners in these settings, possibly with the support of a mentor, direct their own learning and often independently seek additional resources and references. This is largely a positive development, and unlike previous generations of learners, these learners can take advantage of the rich availability of online resources to support their opinions and arguments when completing course work.
    However, there has also been a surge in the number of articles and statements online that are misleading or false, sometimes purely for profit. This sort of sensationalized "fake news" is often widely shared, and has received a lot of attention recently. Stopping the proliferation of fake news is not just the responsibility of the platforms used to spread it. Those who consume news also need to find ways of determining whether what they are reading is true.
    Toward this end, this project addresses ways of helping learners assess the veracity of online resources. It will use explanation facilities to justify why a source may be reliable, or not. For example, it may be shared on a reliable domain (more reliable) or it may supply strong statements with no quotes (less reliable). The project will help develop a set of signals that can be automatically detected and used as explanations to learners. It will also evaluate whether learners can use these explanations to accurately assess whether online articles are truthful or not.
    Contact: Nava Tintarev
  • What am I not seeing? Visualizing Educational blind-spots
    Recommender systems are a familiar part of our everyday online lives, suggesting items to try and helping us deal with information overload online. They are also used in education, to help students navigate a plethora of potential learning resources. Previous research has found that showing users their progress using open learner models can help them decide what to study next. One area that has been under-explored in this regard is helping learners navigate their "unknown unknowns": parts of the learning space the learner is unfamiliar with. These are not just areas where the learner has not made any progress, but also areas that they do not even know exist yet and are unlikely to explore without external influences.
    Consequently, this project will explore the use of interactive visualisations to help users understand their learner profiles. It will explore how these techniques might be used to highlight learning blind-spots, i.e. regions of the recommendation space that the user has yet to be exposed to, or to explore. This project will build on previous successful interfaces for novel content discovery, and look at the more sustained effect of the "nudges" given by such explanatory interfaces. It will also study the difference in the effect of these interfaces when the blind-spots are intentional (e.g., the user is uninterested but the system does not know this yet) and unintentional (e.g., the user is simply unaware of this part of the search space but is interested).
    Contact: Nava Tintarev

Vacancies connected to data science & Delft Data Science

See Delft Data Science.