Conference of European Statistics Stakeholders 2018 (CESS 2018)
Bamberg, Germany, 18–19 October 2018

Thursday, 18 October 2018 - Hegelsaal I - 11:00 - 12:00

Invited Paper Session -


Th-IPS01-01

Effects of attrition on longitudinal EU-LFS estimates

Hannah Kiiver

Eurostat, European Commission, Luxembourg, Luxembourg

All countries that deliver Labour Force Survey data to Eurostat make use of a rotational pattern that results in annual overlaps of data and, with the exception of Germany, also overlaps of quarterly data. Rotational patterns are generally used in sample surveys to reduce the variances of estimators. For the EU Labour Force Survey (LFS), this longitudinal feature can be exploited to produce much-demanded estimates of labour market transitions between the statuses employed, unemployed and economically inactive. When defining the longitudinal sample, Eurostat has to rely on the available overlapping data; this means that information on individuals who drop out of the survey before completing all interviews is lost. The individuals dropping out may have moved away, it may have been impossible to contact them, they may have refused to answer, or they may have died. Eurostat has neither information on the reason why individuals drop out, nor the information that would be necessary to fill in or estimate the missing labour market data for those individuals. If the individuals that drop out are on average no different from those that stay in the sample, this attrition will only lead to less precise estimates. If, however, these individuals differ with regard to their labour market status from the remaining sample, estimates based on this sample would not be representative. In short, transition statistics derived by Eurostat using LFS data may suffer from attrition bias. In this paper, simple binary regression is used to determine whether attrition bias may be a problem, and simple simulations are used to estimate the potential size of the effect that differential attrition might have on the estimates.
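
As a rough, self-contained illustration of the kind of check described above (the data are simulated and the variable names hypothetical, not EU-LFS microdata), a binary logistic regression of a drop-out indicator on first-wave labour market status could look like this:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 5000

# Hypothetical wave-1 labour market status (dummy variables) and a simple
# attrition mechanism in which unemployed and inactive respondents drop out
# more often than the employed.
unemployed = rng.binomial(1, 0.06, n)
inactive = np.where(unemployed == 1, 0, rng.binomial(1, 0.35, n))
p_drop = 0.10 + 0.08 * unemployed + 0.05 * inactive
dropped_out = rng.binomial(1, p_drop)

# Binary (logistic) regression of attrition on wave-1 status: significant
# coefficients on the status dummies signal that attrition is selective,
# i.e. a potential source of bias in transition estimates based only on
# the completed longitudinal sample.
X = sm.add_constant(np.column_stack([unemployed, inactive]))
result = sm.Logit(dropped_out, X).fit(disp=0)
print(result.summary())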


Th-IPS01-02

Innovative tools and sources in European social surveys

Hubertus Cloodt

European Commission - Eurostat - Income and living conditions; Quality of life, Luxembourg, Luxembourg

The 2011 DGINS Wiesbaden Memorandum stressed "the need for better information from time use and household budgets in terms of coverage and comparability". This has been followed up by Eurostat in the context of the modernisation of social statistics, in order to improve responsiveness to users' needs and the efficiency of statistical production using new technological developments.

More generally, the burden that survey data collection places on respondents in Member States, the resulting low and/or decreasing response rates, as well as the data-processing burden for NSIs, stress the need for modernisation by finding more attractive ways of collecting these data, including the use of new data collection tools and new sources in the Member States.

Both the household budget survey (HBS) and the time use survey (TUS) are key elements of the social statistical architecture. They are the source of information for many purposes and their information is used and re-used in several contexts.

The HBS is the basis for the consumption patterns used for weighting price indices including the Harmonised Index of Consumer Prices (HICP), and is used for the consumption side in the National Accounts. In a broader sense, the HBS gives a picture of the living conditions of households by providing detailed information on households' expenditure, useful in a variety of contexts, like social protection, poverty, transport, energy, education, health and health care expenditure, consumer protection, sports, culture, etc. Furthermore, the HBS captures one essential dimension of the material living conditions of households, and plays an important role in the context of a better and joint measurement of income, consumption and wealth.


Th-IPS01-03

The MIMOD project: a platform for sharing knowledge and practices in the ESS

Marina Signore

Istat, Italian National Institute of Statistics, Rome, Italy

The MIMOD project – Mixed Mode Designs for Social Surveys – was awarded a Eurostat grant to support ESS National Statistical Institutes (NSIs) facing the challenges of implementing mixed-mode and multi-device data collection designs.

Istat is leading a Consortium in partnership with CBS (Netherlands), SSB (Norway), STAT (Austria) and Destatis (Germany). A network of supporting countries – INSEE (France), the Czech Statistical Office (Czech Republic), the Central Statistical Office of Poland (Poland), Statistics Finland (Finland) and Statistics Sweden (Sweden) – provides input to the project.

The MIMOD project covers a wide range of topics related to the use of mixed-mode approaches in social surveys, including mode organisation; questionnaire design; mode bias and mode effects; challenges for phone and tablet respondents in CAWI; and case management systems. It aims at sharing knowledge on best practices in use in the ESS and streamlining theoretical findings available in the literature, in order to create a common understanding and reference guidance for ESS NSIs on their path to the modernisation of official statistics.

The presentation will provide information on the main areas covered by the project, as well as the main outputs. It will include a summary of the results of the survey on the state of the art on mixed-mode designs in ESS social surveys carried out within the project framework activities.

Thursday, 18 October 2018 - Hegelsaal II - 11:00 - 12:00

Special Topic Session -


Th-STS01-01

Integrating National Accounts and Balance of Payments Statistics in Austria

Michael Andreasch

Oesterreichische Nationalbank, Vienna, Austria

The presentation will explain the decision to bring financial accounts data and balance of payments data together in an integrated framework, both from a technical and an analytical viewpoint. The presentation will highlight the importance of the respective organisational solutions and the interaction between the two domains within one division as part of the statistics department. Furthermore, the presentation will show the pros and cons of the solution chosen in Austria and will finally demonstrate the usefulness of the integration for the analysis of: a) the functional approach of BoP statistics combined with the breakdown by financial instrument in the financial accounts, and b) the sector analysis comparing the contribution of individual sectors to the overall net lending/net borrowing position of Austria and the direct cross-border activities of various economic sectors.


Th-STS01-02

The importance of high quality national financial accounts and Balance of Payments for European policy making

Henning Ahnert 1, Robert Obrzut 2

1 European Central Bank, Frankfurt, Germany
2 Eurostat, Luxembourg, Luxembourg

The presentation will cover three aspects: the first part will show the use of financial accounts indicators – in particular private sector debt – for the Macroeconomic Imbalance Procedure (MIP) and other policy analyses. The second part will briefly introduce the joint efforts of the ECB and Eurostat, in collaboration with the European System of Central Banks and the European Statistical System, to ensure high quality and a level playing field for the financial accounts and balance of payments statistics used in the MIP process. The third part will shed light on one specific dimension of these two statistics, namely the measures used to assess the integration and consistency of the balance of payments/international investment position statistics with the external sector in the financial and non-financial accounts.


Th-STS01-03

Exchange Rate Effects in the International Investment Position - Methods, Tools and Applications for Germany

Ulf von Kalckreuth, Stefan Hopp, Stephanus Arz

Deutsche Bundesbank, Frankfurt am Main, Germany

Exchange rate movements play an important role in explaining the development and fluctuation of national and sectoral gross and net wealth and the rate of return on foreign investments. The German international investment position (IIP) statistics have long provided and published data on assets and liabilities with foreign counterparties by sector and by financial instrument. Since 2012, all items can additionally be broken down according to seven currencies: the euro and six non-euro denominations. Ex post, this allows calculating the effect of exchange rate changes on the euro value of assets and liabilities, enabling a wide range of analytical work. These exchange rate changes are now collected in an index of exchange rate effects in the IIP, which depicts the influence of individual exchange rate movements on all non-derivative assets and liabilities in the external position at an aggregated level as well as at various disaggregated levels. Ex ante, it is possible to conduct partial sensitivity analyses of exchange rate shocks. Furthermore, the extended IIP approach can, under certain qualifying assumptions, be used to indicate currency mismatches and potential imbalances, and as a basis for delving deeper into sectoral currency risk exposure and potential vulnerabilities at the aggregate level.
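
As a stylised illustration of the ex post calculation sketched above (the notation is introduced here for illustration, not taken from the paper, and intra-period transactions and non-exchange-rate price changes are ignored): if A_{c,t-1} denotes the end-of-period t-1 euro value of non-derivative positions denominated in currency c, and e_{c,t} the euro price of one unit of currency c at the end of period t, then the exchange rate valuation effect over period t is approximately

\[ \Delta V^{FX}_{t} \;\approx\; \sum_{c} A_{c,t-1}\left(\frac{e_{c,t}}{e_{c,t-1}} - 1\right), \]

which can be computed separately for assets and liabilities and aggregated, or broken down by sector and instrument.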

International spillovers of financial shocks can be transmitted by a variety of channels, among them direct financial interlinkages and demand effects. The exchange rate is a summary relative price for traded goods and services as well as for real and financial assets. Thus it will steer trade and financial flows and determine the relative wealth of people, sectors and nations, including their state of solvency. The Asian crisis started out as a series of currency devaluations that triggered stock market declines and made foreign debt positions of a number of countries unsustainable.

The IIP is to national wealth what the current account is to GNP: national wealth is the sum of real capital plus the net foreign position of a country. Thus, in order to categorise and analyse wealth effects of exchange rate fluctuations, the IIP is the point of departure. Obviously, the wealth effects of exchange rate movements on countries, sectors and individuals depend to a large extent not only on their overall gross and net financial positions but also on the currency denomination of their portfolios.

Our paper gives a methodological exposition of the analysis of these instantaneous valuation effects, with a focus on the German IIP. As the statistical data compilation in Germany is methodologically guided by the IMF BPM6 guidelines, which apply worldwide, it can be expected that this type of analysis is feasible in many other countries.


Th-STS01-04

Future challenges for institutional sector accounts and BoP statistics

Peter VAN DE VEN

OECD, Paris, France

In the aftermath of the 2007-2009 economic and financial crisis, user demands for more detailed and timelier statistics on income and finance to detect (financial) risks and vulnerabilities have increased tremendously. The increased attention to the material economic well-being of households has also created a renewed demand for income, consumption and wealth data on households, including their distribution across household groups. The presentation will provide a short overview of the main medium- and long-term challenges when it comes to compiling institutional sector accounts and balance of payments. These issues relate to, e.g., globalisation, tracking new developments in the financial industry, capturing financial interconnectedness and other risks, compiling data for the household sector, and, more generally, the implementation of the relevant recommendations of the G-20 Data Gaps Initiative.

Thursday, 18 October 2018 - Upper Foyer - 11:00 - 12:00

Contributed Paper Session -


Th-CPS01-01

Evidence-based, default-open, risk-managed, user-centred data access

Felix Ritchie

University of the West of England, Bristol, Bristol, United Kingdom

In recent years, there has been an increasing demand from the academic community for more access to confidential data for research purposes, particularly data collected by government departments. This has happened for three reasons. First, both governments and users have become conscious of the research value of data resources and the financial pressure to re-use data. Second, the growing use of administrative data has greatly increased the range of questions that can be answered. Third, over the last decade or so there has been growing evidence that secure academic research use of the most sensitive data can be managed without placing excessive burdens on users or data holders. In addition, the wider user community is increasingly demanding more granular statistics, such as user-generated tables.

However, the supply of secure efficient data access solutions is still a minority sport: data access is dominated by defensive decision-making which has changed little in decades. A major influence is the dominance of downside risk in the literature on statistical disclosure control (SDC). Over fifty years this literature has successfully developed a coherent approach to analysing problems and proposing solutions, to the extent that anyone facing an anonymisation problem can pick an effective, uncontroversial off-the-shelf solution. However, the SDC literature is uniformly defensive, encouraging users of SDC analyses to take the same attitude.

The net result of defensive decision-making in government and defensive SDC literature is to generate a ‘policing’ model of data security, where right, wrong and responsibilities are clearly defined, and the aim of the data owner shifts from user needs to ‘due diligence’. Moreover, defensive decision-making encourages a focus on theoretical worst cases: the evidence base that has been built up on how researchers actually use data plays almost no part in the literature.

This paper argues that a change in attitudes can lead to outcomes which are cost effective, more secure, more sustainable, more resilient, and encourage good relationships with stakeholders. We refer to this as the evidence-based, default-open, risk-managed, user-centred (‘EDRU’) model, and it reflects insights from economics, psychology, criminology, and cybersecurity.

This paper summarises the case for this evidence-based holistic approach to data access management. The common themes are use of evidence, integration of statistical and non-statistical approaches, and the effective use of limited resources. While this approach is no longer novel in some communities, it is still unfamiliar enough to cause concerns amongst those implementing data access strategies. This paper aims to address such concerns and demonstrate the importance of grounding strategy in realistic expectations of risk, uncertainty, cost and incentives.


Th-CPS01-02

The circular market flow as an approach to explain the value of official statistics to users

Margarita Rohr 2, Florabela Carausu 1

1 University of Valencia, Valencia, Spain
2 Gopa Luxembourg, Luxembourg, Luxembourg

Starting from the idea that official statistics are a public good, the authors argue that ensuring the value of official statistics can positively tackle the challenges that official statistics producers are currently facing. From the legal point of view, a public good belongs to or is provided by the State at any level, through all bodies that are part of the public sector. From the economic standpoint, it is a good that is available to the whole population and whose use by one person does not reduce its use by another. In this sense, official statistics are very important for the economic and social development of a country, and their significance can be explained by the circular market flow.

The main objective of this paper is to offer a new way of understanding the importance of the statistical system for the economic and social development of societies, but also to promote a continuous and systematic dialogue between statistics producers and users, focused on communicating the value of official statistics to society in general.

The circular market flow is proposed as a basis for showing the relations that exist within an economic system, as well as the important role that official statistics play within society. Statistical literacy, the communication of the value of official statistics to stakeholders, and a closed feedback loop can engage all actors in the process of producing official statistics, ensuring its sustainability.


Th-CPS01-03

Empowering and interacting with statistical producers: a practical example with Eurostat data as a service

Jacopo Grazzini, Jean-Marc Museux, Martina Hahn

Methodology and innovation in official statistics Unit, Eurostat - European Commission, Luxembourg, Luxembourg

Since policy advice is becoming increasingly supported by data resources [1], public organisations are leveraging the use of these resources to inform decisions. In this context, supporting data analytics, which ultimately aims at extracting valuable information from data and using it in intelligent ways by means of advanced statistical and computational techniques [2], offers big opportunities – and challenges – for enhanced insight and decision-making [3]. Data analytics draws not only on existing and new sources of ever-growing data; it also builds on new methodologies and emerging Information and Communication Technologies tools, and advances thanks to innovative initiatives. In particular, with the increased availability of open data, new developments in open technologies and recent breakthroughs in data science, it is believed that data analytics can help improve current governance processes by enabling policy-driven, data-informed, evidence-based decision-making and potentially reduce the bias, costs and risks of policy decisions [4][5]. Eventually, it seems very logical, and appealing, to complement the statistical offices' toolbox with data analytics tools. Still, the development and deployment of such tools will need sound judgment, as an abundance of data and computing power does not automatically guarantee good decision-making.

Nowadays, political decisions are expected to be accompanied by access to the data analysed, detailed information about the sources, the underlying assumptions (models and methods) and also the tools (software) used to support the decision [4]. At a time when citizens' demands for more transparency in the EU institutions are growing, this also underpins the movement towards not only more open, transparent and defensible [5], but also more participative decision-making systems. In the context of a "post-truth" society, data analytics presents substantial promise for e-government, openness and transparency, and the interaction between governments and produsers, e.g. statisticians, scientists and also citizens [1]. Additionally, it is also necessary to create a framework that provides produsers with the ability to both perform an analysis and repeat it with different hypotheses, parameters, or data, hence translating the questions that are asked into a series of well-understood computational methods [3][6].

While the importance of openness and transparency in statistical processes [6][7], and how these can be supported through open algorithms and open data [4][5], has already been emphasized, this contribution aims at showcasing an approach similar to [8] where algorithms and data are delivered as interactive, reusable and reproducible computing services. This will eventually provide produsers with the necessary tools to perform, for themselves, data analytics on Eurostat data in a straightforward manner.
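
As a minimal sketch of what "Eurostat data as a service" can mean for a produser in practice (the endpoint pattern, dataset code and filter values below are assumptions chosen for illustration, not taken from the paper), a few lines of Python suffice to pull a filtered dataset over Eurostat's dissemination API and inspect the latest observations:

import requests

# Assumed endpoint pattern of Eurostat's JSON dissemination API; the dataset
# code (monthly unemployment rate) and the filters are illustrative only.
BASE = "https://ec.europa.eu/eurostat/api/dissemination/statistics/1.0/data"
dataset = "une_rt_m"
params = {
    "format": "JSON", "lang": "EN",
    "geo": "EA19", "s_adj": "SA", "age": "TOTAL", "sex": "T", "unit": "PC_ACT",
}

resp = requests.get(f"{BASE}/{dataset}", params=params, timeout=30)
resp.raise_for_status()
data = resp.json()  # JSON-stat-like structure: dimension metadata plus a flat value map

# With all non-time dimensions filtered to a single category, the value map
# essentially follows the time axis; print the last few periods and values.
time_labels = list(data["dimension"]["time"]["category"]["label"].values())
values = list(data["value"].values())
print(time_labels[-3:], values[-3:])

Wrapping such calls in reusable, documented notebooks or services is one way of turning open data into the repeatable, reproducible analyses the abstract argues for.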

Thursday, 18 October 2018 - Hegelsaal I - 12:00 - 13:00

Invited Paper Session -


Th-IPS02-01

Time-to-market for providing micro-data and granular data for official statistics

Petra Steiner 1, Ann van Nieuwenhove 2, Christian Steinbauer 3

1 Bureau van Dijk Electronic Publishing GmbH, FRANKFURT, Germany
2 Bureau van Dijk Editions Electroniques S.A., Brussels, Belgium
3 Bureau van Dijk Electronic Publishing GmbH, FRANKFURT, Germany

We will expand on the micro and granular data content that Bureau van Dijk provides to complement official statistics, and give some insight into current projects and the challenges faced.


Th-IPS02-02

The use of granular data to produce high quality statistics: challenges and perspectives

Antonio Matas Mir

European Central Bank, Frankfurt, Germany

The collection of granular data has been one of the major shifts in the compilation of central bank statistics in the last twenty years. The increase in computing power and storage capacity, and the corresponding increasing digitalisation of the economy, have transformed the statistical landscape and the "business as usual" of financial statistics. Granular data allow flexibility and thus make it possible to better meet changing user needs; they reveal the heterogeneity in the economy, the concentration of risks, and the interconnectedness of entities; and they better map the intricacies and flows of a globalised world. Finally, and perhaps crucially, they allow collecting data once to serve a variety of purposes.

Nevertheless, granular data face substantial challenges and necessitate a paradigm shift if they are to achieve their multiple goals. The change in the scale of the statistical units analysed – from sector-level information, to the credit institution, and down to the transaction – requires the identification of entities and objects in a standardised manner. Granular data also require consistency and homogeneity of the data at source (through the use of generic and detailed reporting formats), as well as authoritative reference databases. The globalised nature of financial entities requires increased international cooperation between collecting authorities and close work with the reporting entities. Last but not least, granular data also require a transformation of the internal organisation to collect, process, store, and integrate the data inside the collecting institutions.

These challenges are being addressed at the ECB in different ways. We provide an overview of selected endeavours in the field of securities issuance, holding statistics and financial accounts, showing the path from reference databases, through the collection of granular data and the intermediate outputs, to aggregate statistics with a full granular foundation.


Th-IPS02-03

Financial stability work and micro data: opportunities and challenges

Bruno TISSOT

Bank for International Settlements, BASEL, Switzerland

Public authorities working in the financial stability area have shown an increasing interest in big data. Yet Financial Big Data are quite specific. They primarily consist of very large though relatively well-structured datasets derived from administrative and financial activities, for which new private data sources play a major role. Accessing Financial Big Data can provide many opportunities for public authorities, but also poses specific challenges when handling these data and using them to support policy.

Thursday, 18 October 2018 - Hegelsaal II - 12:00 - 13:00

Special Topic Session -


Th-STS02-01

Disentangling Data Narratives: The Impact of Migrants on European Welfare Systems

Caterina Francesca Guidi, Gaby Umbach

Robert Schuman Centre for Advanced Studies European University Institute, Florence, Italy

Today, migration is one of the key issues in the international and European political as well as public debate. One of the most compelling challenges concerning the integration of migrants into receiving societies consists in the adaptation of national healthcare systems to migrants’ needs.

European Union (EU) member states (MS) differ hugely in terms of their healthcare provision models, contribution systems, and integration policies adopted towards foreigners. Differences in access to and use of healthcare systems by migrants from within the EU and those from outside the EU (i.e. from third countries) are still considerable and further diversified based on migrants’ legal status. To analyse these differences between the traditional types of healthcare systems within the EU, it is necessary to establish and measure the systematic relationship between the costs and performance of healthcare systems, migratory care demand, and the migrants’ contribution to the MS political and economic systems.

At the same time, the debates about the pressures on MS' healthcare systems are overshadowed by the conflict between 'factual', i.e. measurable, and 'post-factual', i.e. perceived, realities. While the former might paint a not too negative picture of the performance of national healthcare systems in reaction to migration, the latter might depict a doomsday scenario ending in the collapse of national healthcare systems.

To provide evidence-based insights into the topic for our case study, the United Kingdom (UK), we will analyse the role of diverging data narratives and contrast 'alternative truth scenarios' in 'Brexit-UK' with the 'real', i.e. measurable, impact of migrants on the British healthcare sectors. The key aim of the paper is to juxtapose factual and perceived evidence in the given case and to elaborate on key methodological strategies for how best to disentangle the two.


Th-STS02-02

The reduction of complexity by means of indicators

Walter J. Radermacher

FENStatS, Wiesbaden, Germany
La Sapienza University, Rome, Italy

STS02: Selection and Synthesis of Indicators for SDGs and Welfare: Strategic Partnership, Methodologies and Applications

“What is sustainability? How can we make sustainable development a reality? How sustainability can be measured?” is a set of questions which received 21,580 reads, 475 followers and 1,770 answers in the social network ResearchGate [1]. Obviously, these questions are related and cannot be answered separately. It is not possible to measure sustainable development independently of the political-social change process that needs to take place to reach this goal. The statistical question behind this is which metrics can improve the decisions on the way to sustainable development, and what quality and condition these metrics must have so that they are robust enough for the social conflicts that have to be resolved. The complexity inherent in sustainable development must be reduced and simplified, but without going too far or risking the credibility of the metrics with covert value judgments. Indicators are a very suitable tool for this. They embody the property of being 'boundary objects', that is, providing a connecting language for different communities (politicians, civil society, scientists, statisticians). It is therefore no coincidence that the Sustainable Development Goals and the corresponding indicators have found each other at UN level. However, the reduction of complexity through indicators also requires that it is underpinned by a scientific approach and framed by informational governance. For statisticians, this means understanding the specific role that indicators play in their portfolio of products.

[1] Extracted on 21.09.2018 from https://www.researchgate.net/post/What_is_sustainability_How_can_we_make_sustainable_development_a_reality_How_sustainability_can_be_measured


Th-STS02-03

The systemic view of the SDG indicators through a statistical model-based approach: an application to EU countries

Carlo Cavicchia, Filomena Maggino, Maurizio Vichi

Department of Statistical Sciences Sapienza University of Rome, Rome, Italy

Studies on indicators for the SDGs and related targets are fully under way. In November 2017 a first set of EU SDG indicators was released, which had been agreed by the European Statistical System (ESS). The objective of this paper is to provide a statistical model-based approach and a series of tools for giving a systematic view of the proposed SDG indicators.

Thursday, 18 October 2018 - Upper Foyer - 12:00 - 13:00

Contributed Paper Session -


Th-CPS02-01

Challenges and Opportunities with Mobile Phone Data in Official Statistics

Tiziana Tuoto 1, Fabrizio De Fausti 1, Roberta Radini 1, Luca Valentino 1, Marcello Savarese 2, Francesco Fabbri 2, Maria Rita Spada 2

1 Istat - Italian National Statistical Institute, Rome, Italy
2 Wind Tre S.p.A, Rome, Italy

In recent years, the use of new data sources for statistical purposes has represented a big challenge for official statistics. Mobile phone data are recognised as having great potential for supplementing much of the statistical output on population estimates and mobility at a very fine scale. The great interest in the use of these data in official statistics is also demonstrated by the several Eurostat-funded projects on the topic. However, statisticians from national statistical institutes rarely have the opportunity to directly handle raw data from telecommunication providers, and thus to investigate directly how raw data can be treated to extract their full potential for official statistics production. This work describes the first results of the cooperation between Istat, the Italian National Statistical Institute, and Wind Tre, a mobile phone operator (MPO), discussing opportunities and challenges. Some uses and the applied methodologies are introduced. Mobile phone data seem promising for further developments and innovative solutions for describing complex behaviour not completely captured by other data sources (e.g. administrative data), and for exploring new digital solutions while always taking privacy constraints into account. This work is also important to encourage the different stakeholders to cooperate in defining a new ecosystem useful for different contexts.

The usability and potential of Mobile Phone Data (MPD) are analysed with respect to the new census framework, underlining the steps in which MPD may add to the information already available via administrative data and social surveys. Currently, Istat and other National Statistical Institutes (NSIs) are implementing a census transformation programme; the new framework provides for leaving the traditional door-to-door decennial census in favour of the combined use of statistical registers based on administrative data and social surveys. Specific aspects, like the coverage of sub-populations and other information that cannot be derived from administrative data and ongoing social surveys, will be investigated by yearly ad-hoc sample surveys. To this aim, MPD can be used in different ways, both as a complementary and as a primary data source, as well as to validate population estimates. In this report, the abovementioned aspects are investigated, after the reliability of MPD is first assessed through a comparison with the official estimates.


Th-CPS02-02

Machine learning methods within the Federal Statistical Office of Germany

Florian Dumpert 1, Lydia Spies 2

1 University of Bayreuth, Bayreuth, Germany
2 Federal Statistical Office of Germany, Wiesbaden, Germany

In 2017, the Federal Statistical Office of Germany developed a Digital Agenda in order to advance the holistic digitalization of the organization. One of its main objectives is the automation of processes. First concrete classification tasks in earnings and craft statistics and in the business register have already been done. Currently, the Federal Statistical Office is working on a comprehensive integration of artificial intelligence and machine learning methods into the statistical production processes.

In the beginning, the focus will be on statistical data editing and imputation. Since those process steps are still often performed manually, there is high potential to improve efficiency through the application of machine learning techniques. The potential and limitations of different machine learning methods for editing and imputation are therefore currently being evaluated.
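
As a hedged sketch of what ML-based imputation of this kind can look like (the data are simulated and the variable names hypothetical; this is not the Office's actual procedure), a regression-tree ensemble can be trained on complete records and used to fill item non-response:

import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Hypothetical business-survey extract: turnover is missing for some units
# and is imputed from auxiliary variables (employees, sector).
rng = np.random.default_rng(0)
n = 1000
employees = rng.integers(1, 500, n)
sector = rng.integers(0, 5, n)
turnover = 50.0 * employees * (1 + 0.1 * sector) + rng.normal(0, 500, n)
df = pd.DataFrame({"employees": employees, "sector": sector, "turnover": turnover})
df.loc[rng.choice(n, 100, replace=False), "turnover"] = np.nan  # simulated item non-response

complete = df[df["turnover"].notna()]
missing = df[df["turnover"].isna()]

# Train on complete cases, predict for the incomplete ones (a model-based
# analogue of regression/donor imputation).
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(complete[["employees", "sector"]], complete["turnover"])
df.loc[df["turnover"].isna(), "turnover"] = model.predict(missing[["employees", "sector"]])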

The presentation will give an overview of the evaluation setup and previous results as well as an outlook especially with regard to a contemplated integration of machine learning into the statistical production processes.


Th-CPS02-03

Obtaining fairness using optimal transport theory

Jean-Michel Loubes, Philippe Besse

Institut de Mathématiques de Toulouse, Toulouse, France

Over the last decade, machine learning methods have become more popular for building decision algorithms. Originally meant for recommendation algorithms on the Internet, they are now widely used in a large number of very sensitive areas such as medicine, human resources (hiring policies), banking and insurance (lending), policing, and justice (criminal sentencing). The decisions made by what is commonly referred to as AI have a growing impact on human lives. The whole machinery of these techniques relies on the fact that a decision rule can be learnt by looking at a set of labelled examples, called the learning sample, and that this decision will then be applied to a population which is assumed to follow the same underlying distribution. So the decision is highly influenced by the choice of the learning set. But this learning sample may present some bias or discrimination that could possibly be learnt by the algorithm and then propagated to the whole population by automatic decisions, and, even worse, this unfair treatment may be given a mathematical legitimacy. Classification algorithms are one particular locus of fairness concerns, since classifiers map individuals to outcomes. Hence, achieving fair treatment in machine learning is one of the growing fields of interest. For this, several definitions of fairness have been considered. In this paper we focus on the notion of disparate impact for protected variables. Some variables, such as sex, age or ethnic origin, are potential sources of unfair treatment, since they convey information that should not be exploited by the algorithm. In the literature, such variables are called protected variables. An algorithm is called fair with respect to these attributes when its outcome does not allow inference on the information they convey. Of course, the naive solution of ignoring these attributes when learning the classifier does not ensure this, since the protected variables may be closely correlated with other features, permitting a classifier to reconstruct them.
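
For reference, the standard definition of disparate impact used in this literature (stated here for orientation, not quoted from the paper): for a classifier g and a binary protected variable S, the disparate impact is

\[ DI(g) \;=\; \frac{P\left(g(X)=1 \mid S=0\right)}{P\left(g(X)=1 \mid S=1\right)}, \]

with values close to 1 indicating similar treatment of the two groups; the well-known "80% rule" flags a classifier as potentially discriminatory when DI(g) < 0.8.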

Thursday, 18 October 2018 - Hegelsaal I - 14:30 - 15:30

Invited Paper Session -


Th-IPS03-01

Making people understand and use statistics

Olof Gränström

Gapminder, Stockholm, Sweden

Gapminder has been working for over a decade to explain the world by making people understand and use statistics. Gapminder has been addressing the problem that most people don't see the progress made in the world, despite the statistics being open data. Gapminder has developed a model to explain the unseen facts of the world.

First we explore what we don't know, to break through the misconceptions we have about the world, and show how we often answer worse than random on simple questions about the world where the data exist as open data. Then we create visualisations, frameworks and narratives that help people see the unseen facts about the world.

During my presentation I will go through Gapminder's misconceptions study, connect the work of Daniel Kahneman and Amos Tversky to how we create our worldview, as Hans Rosling explains in his last book "Factfulness", and finally showcase Gapminder's tools and frameworks to give a crash course in the global facts most people have missed and in how fallacies, instincts and heuristics often cloud the judgement of those looking at statistics.


Th-IPS03-02

Preparing students for a world full of data

Roger Porkess

The Royal Society, London, United Kingdom

This presentation starts with two reports which Roger Porkess wrote for the Royal Statistical Society: The Future of Statistics in our Schools and Colleges (2012) and A World Full of Data (2013). These show a considerable deficit in the statistical skills provided to young people in England, both in terms of the numbers of students involved and in terms of the relevance to other subjects of the statistics that are taught.


Th-IPS03-03

A European effort to explore games and the gamification of official statistics

Christine Kormann 1, Simona Klasinc 2, Maria Jesus Vinuesa Angulo 3, Nektaria Tsiligkaki 4, Antoaneta Ilkova 5, Pedro Campos 6, Marta Jankowska 7, Patrizia Collesi 8, Xenia Caruso 8, Sybille Luhmann 1

1 Eurostat, Luxembourg, Luxembourg
2 SURS, Ljubljana, Slovenia
3 INE Spain, Madrid, Spain
4 Statistics Greece, Athens, Greece
5 BNSI, Sofia, Bulgaria
6 Statistics Portugal, Lisbon, Portugal
7 Statistics Poland, Warsaw, Poland
8 ISTAT, Rome, Italy

Games and the gamification of educational material are no strangers to academic institutions. Pedersen et al. [1] present their interactive virtual learning environment for advanced quantum mechanics. Zamora et al. [2] introduce the gamified learning platform that allows students enrolled in the Master of Economics to check into seminars and classes to earn badges. Vogel et al. [3] showcase the mobile learning environment created by the City University of Hong Kong that encompasses a range of games, spanning from crossword puzzles and quizzes to e-tips and tattoos.

These global efforts to motivate students and help them learn the material should not surprise us. After all, a wealth of literature vouches for the efficacy of such initiatives. Teachers and students alike were enthusiastic about the possibilities for gamification in an Australian research programme. They recognised the "potential for games to impact positively upon learning environments". And they do not stand alone. Seaborn and Fels [6] reviewed 30 studies that specifically used gamification and found that – for those studies that included control groups – the results were measurably positive. Similarly, Su [7] finds that "gamification has a positive effect on learning motivation" and "learning motivation has a positive effect on academic performance". This is confirmed by Smith, who is one of the few researchers to look into the potential of games and the gamification of statistics. She posits that statistics is often perceived as cumbersome by students and that elements of gamification can positively change this attitude. After giving small groups of her students access to learning material that had been gamified to different degrees, she concludes that "there was a positive impact on students' attitudes towards statistics and learning".

Naturally, National Statistical Institutes (NSIs) in Europe also wanted to inspire the students in their countries. In an age of fake news and interpretable facts, it is ever more important to provide students with the tools they need to critically engage with the information presented to them. In many EU countries, however, statistics is an optional course. So, in order to stimulate interest, new solutions had to be found.

The DIGICOM project of the European Statistical System (ESS) set out to do just that. Its goal is to modernise the communication and dissemination of European statistics. This is achieved by exploring and developing innovative dissemination products and services based on experiences within the ESS and concrete needs of European statistics users.

This paper presents a range of the products developed by Eurostat and seven Member States under this framework. It does so by first clearly distinguishing between gamification and games, before presenting the projects that are currently running. These range from mobile apps to online and offline games. Finally, the paper will present some future projects before concluding. The paper is thus limited to a portfolio of products and does not assess the efficiency or potential of games and the gamification of statistics. Rather, it presupposes both, in order to showcase a close, innovative and creative European partnership.

Thursday, 18 October 2018 - Hegelsaal II - 14:30 - 15:30

Special Topic Session -


Th-STS03-01

The Two Different Aspects of Privacy Protection in Indirect Questioning Designs

Andreas Quatember

Johannes Kepler University JKU Linz, Linz, Austria

The motivation behind the application of indirect questioning designs is their possible positive effect on the respondents' willingness to cooperate. Whereas the privacy protection objectively offered by these methods has a direct effect on the estimator's efficiency, it is the subjectively perceived protection that affects the respondents' willingness to cooperate. For the discussion of these different aspects of privacy protection, a family of such designs is presented as representative of indirect questioning designs. Measures are suggested that formalize the difference between the objectively offered and the subjectively perceived privacy protection. Different features of indirect questioning designs that influence the perceived privacy protection are discussed, particularly for the crosswise randomized response variant, in order to avoid underestimation of the true levels of privacy protection.
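
For orientation, the basic algebra of the crosswise randomized response design mentioned above (a standard result, not a claim about this paper's specific measures): each respondent states only whether their answers to the sensitive question and to an innocuous question with known prevalence p (p ≠ 1/2) coincide. If π is the prevalence of the sensitive attribute and λ the probability of the answer "both the same", then

\[ \lambda = \pi p + (1-\pi)(1-p) \quad\Longrightarrow\quad \hat{\pi} = \frac{\hat{\lambda} + p - 1}{2p - 1}, \]

and the estimator's variance grows as p approaches 1/2: objectively stronger privacy protection is paid for with lower efficiency, which is exactly the trade-off between protection and precision discussed in the abstract.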


Th-STS03-02

Preserving privacy protection using indirect questioning techniques in real sensitive surveys

Pier Francesco Perri

Department of Economics, Statistics and Finance, University of Calabria, Arcavacata di Rende (CS), Italy

Nowadays, large-scale surveys are increasingly delving into sensitive topics such as religious prejudice, racism, drug use, sexual behaviour, gambling, alcohol consumption and domestic violence. Sensitive, stigmatizing or even incriminating themes are difficult to investigate by means of standard data-collection survey techniques, since respondents are generally reluctant to release information concerning their personal sphere. Consequently, doing research on delicate topics is not an easy matter, since it is likely to meet with three sources of error: (1) refusal to cooperate (unit non-response); (2) refusal to answer specific questions (item non-response); (3) untruthful answers (measurement error). In particular, dishonest or misleading answers generate a well-known source of bias called social desirability bias, i.e. the tendency of survey participants to present themselves in a positive light. All these errors can seriously flaw the quality of the data and thus jeopardize the usefulness of the collected information for subsequent analyses, including inference on unknown characteristics of the population under study. More specifically, standard survey questioning techniques based on self-reporting or direct questions generally produce overreporting of socially acceptable attitudes which conform to social norms, and underreporting of socially disapproved, undesirable behaviours which deviate from social rules.

The present contribution aims at bringing together methodological and practical aspects of the indirect questioning approach. Specifically, the survey plan and the results of some real surveys about drug use and sexual behaviour will be discussed during the conference. It will be shown how the techniques employed in the surveys can enhance respondents’ cooperation and, according to the “more-is-better” principle, procure more reliable estimates than those stemming from traditional direct questioning (DQ) survey methods.


Th-STS03-03

Item sum: a new technique for asking quantitative sensitive questions

Mark Trappmann 1, 2, Ivar Krumpal 3, Antje Kirchner 4, Ben Jann 5

1 IAB, Nuremberg, Germany
2 University of Bamberg, Bamberg, Germany
3 University of Leipzig, Leipzig, Germany
4 RTI, Research Triangle Park, United States
5 University of Bern, Bern, Switzerland

Asking sensitive questions in surveys is a challenge because respondents are required to self-report behaviors or attitudes that potentially violate social norms. Norm violations are often formally or informally sanctioned, so respondents are reluctant to reveal potentially stigmatizing information in a survey interview. Therefore, respondents may choose to misreport on sensitive topics and adjust their answers in accordance with social norms. Systematic misreporting and item nonresponse may introduce considerable bias to the measurement of sensitive topics and lower the overall data quality of a survey study.

To combat misreporting on sensitive topics, survey designers have developed various data collection strategies ("dejeopardizing techniques") that try to elicit more honest answers from respondents by increasing the anonymity of the question-and-answer process.
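
In its simplest form, the item sum technique works as follows (a brief illustration of the general idea, not the full design of the paper): one random half of the sample receives a long list containing the sensitive quantitative item plus several innocuous items, the other half receives the same list without the sensitive item, and each respondent reports only the total. The mean of the sensitive item is then estimated by the difference of the group means,

\[ \hat{\mu}_S = \bar{Y}_{\text{long}} - \bar{Y}_{\text{short}}, \]

so that no individual answer to the sensitive item is ever revealed.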

Thursday, 18 October 2018 - Upper Foyer - 14:30 - 15:30

Contributed Paper Session -


Th-CPS03-01

Research Reproducibility with Confidential Data: Certifying the Uncertifiable

Christophe Perignon 1, Christophe Hurlin 2

1 HEC Paris, Cernay-la-Ville, France
2 University of Orleans, Orleans, France

A growing fraction of research is nowadays conducted using confidential data, such as highly granular government data which require specific accreditation and controlled secure access. Analyzing such rich datasets allows researchers to conduct extremely innovative research programs and to address research questions that they could not address by relying only on public data, hence significantly pushing the frontiers of knowledge and having a positive impact on society.

However, researchers using confidential data are inexorably challenged when it comes to research reproducibility. Indeed, how can they show that their research is reproducible when their peers, referees, editors, etc. cannot have access to these unique data? How can they signal the reproducible nature of their research?

In this paper, we present a joint initiative conducted in France between the CASD (French Secure Data Access Center) and cascad (Certification Agency for Scientific Code and Data). They jointly propose to CASD users to attest that the numerical results in a given scientific article (tables and figures) can be reproduced from the code and confidential data used by the researcher. This certification consists of a rigorous evaluation process jointly conducted by a referee specialized in the software used by the researcher and by an expert from the particular scientific field (an Editor). At the end of the process, a certification rating is delivered to the researcher, with RRR being the highest potential rating. The researcher can transmit the reproducibility certificate along with the manuscript when submitting a paper to an academic journal. Hence, such a certification process enriches the peer review process of research.


Th-CPS03-02

Instant Access to Microdata - microdata.no

Svein Johansen 2, Ørnulf Risnes 3

1 Statistics Norway, Oslo, Norway
2 Statistics Norway, Kongsvinger, Norway
3 Norwegian Centre for Research Data, Bergen, Norway

Norway has a large number of registers on individuals that have been established for administrative and statistical purposes, covering the entire population or significant subpopulations. The merged registers are used for the production of statistics and represent a valuable source of data for research. Trusted researchers in approved research institutions have been able to apply for access to the data at their own site. The approval procedure is complicated as well as time- and resource-demanding. There has been a desire to simplify the procedure and at the same time make it safer through remote access and other measures. The entry into force of the GDPR this May makes the process even more comprehensive.

In 2012 the Norwegian Research Council funded the project Remote Access Infrastructure for Register Data (RAIRD), then approx. four million €, aiming at creating an analysis server for easier and safer remote access to register data. The project is a joint venture in which the grant was divided equally between NSD – Norwegian Centre for Research Data – and Statistics Norway. Among the conditions for the project were:

1. Online Remote Access (RA)

2. Microdata are invisible; only statistical output will be shown.

3. Users should be allowed to combine data from different sources.

4. All statistical results should be confidentially safe.

In March this year the RAIRD technology was made operative in the research data service microdata.no.

Section 2 will sketch the data structure, the metadata structure, and the user interface.

Section 3 will deal with security and confidentiality and section 4 will present experiences and plans for the future.


Th-CPS03-03

National Statistician's Data Ethics Advisory Committee: Providing assurance that the use of data for research and statistical purposes is ethical

Simon Whitworth

UK Statistics Authority, London, United Kingdom

The UK Statistics Authority (UKSA) has the statutory objective of promoting and safeguarding the production and publication of official statistics that serve the public good, including producing official statistics that inform the public about social and economic matters, and assisting in the evaluation of public policy. The UKSA's Better Statistics Better Decisions Strategy [1] aims to mobilise the power of data to meet the greater demand from policy makers and users for more timely, frequent, accurate and relevant statistics to help Britain make better decisions. This involves making better use of pre-existing administrative, real-time and big data using innovative methods, to produce more frequent, timely and accurate statistics accounting for a wide variety of user needs. As we do this, it is essential that we deliver a professional service, not only in the statistics and analysis we produce but also by considering the ethical issues associated with our use of data.

To ensure that this work is completed to the highest ethical standards, the UKSA has established a robust ethical governance structure to provide transparent and timely ethical advice to the National Statistician that the access, use and sharing of public data for research and statistical purposes is ethical and for the public good. This work has included developing ethical principles and establishing a variety of transparent governance processes to assess proposed uses of data for research and statistical purposes against these ethical principles. This paper will present these principles and governance processes.

Thursday, 18 October 2018 - Hegelsaal I - 15:30 - 16:30

Invited Paper Session -


Th-IPS04-01

From Report to Card-stack: a new way to disseminate research findings

Ineke Stoop, Anouk de Wit

SCP, Den Haag, Netherlands

The Netherlands Institute for Social Research|SCP is a government agency which conducts empirical research into the social aspects of all areas of government policy. The main fields studied are health, welfare, social security, the labour market and education, with a particular focus on the interfaces between these fields. The reports published by SCP are widely used by government, civil servants, local authorities, academics, and the general public. Usually there is a high degree of media interest in SCP reports.

All SCP publications are publicly available at the website (www.scp.nl). They range from factual monitors to in-depth analytical studies, from short research briefs to voluminous monographs, and from wide-ranging social reports to specific methodological notes. As SCP works mainly for the Dutch government, the focus is on the Dutch society and reports are generally in Dutch. Increasingly, the Dutch situation is compared to other European countries, and almost all publications comprise an English summary (https://www.scp.nl/english).


Th-IPS04-02

"Nothing is more important for monetary policy than good statistics" - also in a digitalized world

Aurel Schubert

Vienna University of Economics and Business, Vienna, Austria

Central banks in the European Union are not only power users of statistics, they are also very important producers of European statistics. Their area of competence and expertise lies predominantly in the field of financial institutions, markets and products. These statistics are necessary not only for decision making but also to explain the decisions taken to the public. Unless these policies can be justified with the underlying data, they will not be understood and the central banks will not achieve the necessary credibility. Thus, the addressees of central bank statistics are not only the direct decision makers within the banks but also the public at large, and all those who are central bank watchers. In a digitalized world, these basic facts do not change. What changes, and has to change, are the forms and possibilities of reaching the target audiences. The addressees are increasingly "digital natives" and the messages have to be geared to their ways of modern communication. At the same time, central banks also have to cope with the increasing trend towards "fake news" and other forms of "communication noise", the rise of sentiment-based decisions, and the generally reduced trust in public institutions. In this paper, strategies, possibilities and actual examples, as well as the respective challenges, of reaching the different audiences for central banks in a digitalized world of communication will be addressed.


Th-IPS04-03

Pro-active and present on multiple channels - data communication in the digital age

Matthias Rumpf

OECD, Paris, France

Users of official statistics are a diverse group. They range from specialized researchers with an interest in data mining and comprehensive datasets, to policymakers with a short attention span, and to casual users who are attracted by fact bites and gamified presentations. Financed from public funds, statistical offices and other public data providers should be able to address the needs of all potential users. They should explain the purpose and the main insights of their data while preserving their fact-based and non-partisan approach to communication. At the same time, they should also be able to defend the integrity of their data in the public debate. Based on the experience at the OECD, this paper assesses recent developments in data communication and user engagement for statistics and argues in favour of a pro-active multi-channel communication strategy.


Th-IPS04-04

The future of dissemination: a sound base for targeting professionals

Mike Ackermans

CBS, Den Haag, Netherlands

A new strategy

In 2014, Statistics Netherlands decided to reorganize its dissemination and communication process in a dramatic way. The central question was: what is the use of making and disseminating statistics passively, when nobody uses those statistics? How can we make a statistical institute relevant in society, in such a way that using its statistics, referring to them in public debate, and using them as a factual basis for decisions or scientific research is top of mind with everyone?

Statistics Netherlands designed a new process and adjusted its organization to reach that goal. The starting point was the acknowledgement that the communication of statistics is a strategic part of the mission of the organization. Centralizing all dissemination and communication functions in a separate division, on a par with the other operational and production divisions (though considerably smaller), was the first step. To demonstrate that intention, its executive management position was raised to board level in the CBS organization.

Thursday, 18 October 2018 - Hegelsaal II - 15:30 - 16:30

Special Topic Session -


Th-STS04-01

Statistics Flanders as a new regional statistical authority: a first SWOT-analysis

Roeland Beerten, Dries Verlet

Statistics Flanders, Brussels, Belgium

Statistics Flanders, created in 2016, is a new actor in the landscape of statistical authorities in Europe. The Flemish Statistics Authority forms the core of the decentralised network of official statistics and focuses on the development, coordination, production and publication of official statistics across all government departments and agencies. To realise this, a network approach is used in which coordination is realised through joint working and decision-making.

The ultimate goal of Statistics Flanders is to offer reliable key figures and data about Flanders so that everyone has the right information to make well-founded decisions: citizens, organisations, companies and policy makers at the various policy levels. As official statistics producers, we want to tell stories with numbers about Flanders: about the people who live and work here, our economy, our environment and our place in the world. We want to put our official statistics at the core of an evidence-informed approach to policy making. We also aim to make data openly available so that we maximise the use of our data.


Th-STS04-02

Nonstandard employment and the working poor in five European countries

J. Cok Vrooman

The Netherlands Institute for Social Research|SCP, The Hague, Netherlands
Utrecht University (Dept. of Sociology), Utrecht, Netherlands

The rise of nonstandard and precarious employment is often linked to a growth in poverty among working people. However, theoretically the relationship is not straightforward, and the mechanisms through which new forms of work translate into poverty cannot be tested easily as a result of data limitations. This contribution demonstrates some of the possibilities and pitfalls, based on a very recent comparative study conducted by The Netherlands Institute for Social Research|SCP. It first discusses the demarcation of the working poor and various forms of nonstandard and precarious employment. Subsequently a theoretical model of the nexus between institutions, societal contexts and the occurrence of poverty among the (self-)employed will be presented. The empirical part assesses whether formal (employment regulation, social protection, low wage traps) and informal (work values, gender roles) institutions are related to the poverty risks of various segments of the working population, taking into account the relative size of these groups. This part focuses on five countries that are institutionally different, but rather similar in other respects: The Netherlands, Germany, Belgium, Denmark and the United Kingdom.


Th-STS04-03

Digital economy: Definition, measurement, and classification issues. Existing sources of information, and future plans and prospects

Stylianos Zachariou

ESTAT-F3, Luxembourg, Luxembourg

The presentation will focus on the following main issues:

Definitions of digital platforms, the "gig" economy, etc. Which are the actual elements that differentiate such forms of work from the more traditional ones?

Is it "work" or "job"?

Legal issues (relation to the informal economy, "Uber" conflicts, etc.)

Existing sources of information (mainly in social statistics), plans for future surveys by Eurostat or Member States (and internationally)

Thursday, 18 October 2018 - Upper Foyer - 15:30 - 16:30

Contributed Paper Session -


Th-CPS04-01

Designing Textile Industrial Districts using Equity Maps

Monica Pratesi 1, Lisa Bianco 1, Luca Faustini 2, Linda Porciani 2, Sabina Giampaolo 2, Vincenzo Mauro 1

1 Dipartimento di Economia e Management, Pisa, Italy
2 ISTAT, Firenze, Italy

This work has been prompted by two main ideas. On the one hand, there is increasing interest in the development of measures able to go "beyond GDP". Although this indicator has proved to be a very powerful measure, a growing number of attempts to find tools that better capture the progress of societies have recently been proposed at both international and national level. The "Istanbul Declaration" [1], which launched the Global Project on Measuring the Progress of Societies, and the constitution of the Commission on the Measurement of Economic Performance and Social Progress (CMEPSP, 2008) greatly boosted debate and research. For example, the final report of the CMEPSP strongly encouraged National Statistical Institutes to build up systems of indicators to better measure well-being and sustainability, following a multi-dimensional approach and shifting from production to income and consumption. The Italian National Institute of Statistics (Istat) took up the challenge by implementing the Equitable and Sustainable Well-being Project (BES): a set of 132 indicators that cover 12 dimensions at national and regional level (NUTS 0 and NUTS 2). Starting from the core project, two other branches of BES, oriented to measuring well-being and sustainability at sub-territorial level, have been defined: Provincial BES (NUTS 3 level [2]) and City BES (URBES, at LAU 1 level [3]). The second idea that prompted this work is related to the large diffusion of "Industrial Districts" in the Italian economic fabric. According to Marshall's work [4], an industrial district identifies an area where a concentration of firms has settled down; however, it is not simply a localised industry. Marshall stressed not only the business relationships established in a local environment but also the importance of considering other socio-cultural aspects of this phenomenon. Moreover, Becattini [5][6] further developed the theory of Marshall, who was the first to introduce the concept of the industrial district, revising the concept of "Marshall's externalities" in order to better characterize the performance of Italian industrial districts. He also highlighted that this entity is a meaningful "unit of investigation" for analysis. Indeed, Becattini emphasised how, in these areas, community and firms tend to merge, creating a complex structure of economic and social relationships rooted in cooperation and competition forces.

The aim of this work is to investigate the well-being of industrial districts, primarily exploiting available BES data; in other words, it is an attempt to assess the well-being of industrial districts through the BES lens. Different economic parameters able to represent the "district economic well-being" have been analyzed, and other BES indicators are applied to include multiple social dimensions as elements able to describe the role of "Marshall's externalities", which, according to Marshall [4] and Becattini [7][8], determine the competitiveness and reactive capacity of the district. In summary, our study intends to explore whether the Marshallian district connotation referred to as an "industrial atmosphere" is also an "industrial and equitable well-being atmosphere".


Th-CPS04-02

MAKSWELL: An EU Project on MAKing Sustainable Development and WELL-Being Frameworks Work for Policy Analysis

Fabio Bacchini, Maria Grazia Calza, Marina Gandolfo, Maria Pia Sorvillo, Alessandra Tinto

Istat, Rome, Italy

Prompted by the work of the Stiglitz Commission, the growing attention to well-being and sustainability frameworks is now starting to extend to policy evaluation.

The MAKSWELL project, funded by the European Commission (Open and inclusive societies programme), aims to extend and harmonise the indicators able to capture the main characteristics of the beyond-GDP approach, proposing a new framework that includes them in the evaluation of public policies. At the same time, the MAKSWELL project aims to improve the most relevant traditional indicators by using new data collection tools and modern statistical methods to obtain timely and accurate data.

In particular, WP2, WP3 and WP4 will help in the production of timely indicators, also selecting new data sources (big data) and integrating them with traditional data (registers, survey data), especially where there are data gaps; the production of local estimates of poverty and living conditions is an example of the objectives of these WPs. WP5 will extend the previous results, providing tools for policy making. At a macro level it will provide a framework that includes specific measures of well-being in the traditional macro-econometric models.

At this point, the work on WP1 and the reflection paper intended to shed light on the new Framework Programme for Research and Innovation (FP9) have been concluded, while the activities of WP2, WP3 and WP4 are currently ongoing.


Th-CPS04-03

Including well-being indicators in the economic policy: first results in Italy

Maria Pia Sorvillo

Istat, Rome, Italy

A comprehensive framework to measure well-being has been developed at Istat since 2010, in line with the Stiglitz-Sen-Fitoussi approach (1) and taking into consideration international experiences (2).

The framework’s acronym is BES: B as Benessere (well-being), which is considered in its multidimensional nature; E indicates Equo (equitable), reflecting specific attention to the distributional aspects of well-being; S stands for Sostenibile (sustainable), as the conditions needed to preserve at least the same level of well-being for the next generations are taken into account.

It includes 12 domains, related both to material well-being and to other aspects of quality of life, illustrated by means of about 130 indicators, which are produced yearly and analyzed by Istat in a report that is now at its fifth edition (3).

Since the definition phase, the BES framework has had two main aims: to inform all stakeholders about the state and the evolution of well-being in Italy and its regions, and to support the policy cycle in definition, monitoring and evaluation.

Thursday, 18 October 2018 - AULA - 17:00 - 18:00

Keynote -


Th-KEY02-01

Capacity development for international cooperation in a modern statistics office in the face of new challenges – business as usual or value added?

Dominik Rozkrut, Olga Świerkot-Strużewska

Statistics Poland, Warsaw, Poland

The article argues that building the organization's international cooperation capacity and deepening involvement in this cooperation is an increasingly important factor in ensuring the basic values of official statistics, as expressed in the Fundamental Principles of Official Statistics, the European Statistics Code of Practice and the Recommendation of the OECD Council on Good Statistical Practice.

The on-going processes of globalization and digitalization lead to profound transformations in the environment of public statistics systems, progressive changes in the functioning of information markets and in the economics of information. In these conditions, the importance of public statistics is growing: it needs to take on new roles, change the way it operates, take a new position on the information market, invest in new research methods based on new data sources, and change the way data, information and knowledge are communicated and disseminated. These changes bring not only considerable challenges to official statistics, but also new threats. Therefore, there is a need to discuss how to protect official statistics against new risks.

The article argues that in the new conditions, the development and involvement of statistical offices in cooperation at the international, supranational and global levels is becoming more and more important. The article discusses the role of the fundamental principles of official statistics in the context of new challenges, arguing that international cooperation is not only one of the elements of building an effective statistical system, but now one of the most essential tools for maintaining and implementing the fundamental principles of official statistics. The article presents a number of arguments and pieces of evidence in support of this thesis.


Th-KEY02-02

Of Numbers and Narratives: Evidence for Policy-Making in the 21st Century

Gaby Umbach

European University Institute, Florence, Italy

Friday, 19 October 2018 - Hegelsaal I - 09:00 - 10:00

Invited Paper Session -


Fr-IPS05-01

Urban Statistics 2020: Growing Demand for Geospatial Urban Statistics from the perspectives of the City of Helsinki

Ari Jaakola

City of Helsinki, Helsinki, Finland

The City Council of Helsinki approved the Helsinki City Strategy in September 2017 [1]. It sets out the outlines of city development for the years 2017-2021. The Strategy’s vision is ambitious: Helsinki wants to be “the most functional city in the world”.

The content of the vision is comprehensive. It includes themes such as securing sustainable growth, developing services, responsible management of finances, as well as a faster and more agile organisational culture with continuous development of the city’s own operations and practices. All these objectives require a solid base of statistical information, but at the same time they challenge the existing statistics and information base of the city.

In this abstract I will discuss the growing demand for urban statistics, and especially versatile urban statistics for facilitating well-informed decision making by the city authorities. I will study these issues from the perspective of the City of Helsinki. The key questions are what kind of statistical information the city needs to reach the objectives set out in the City Strategy, what kind of challenges the new needs will bring to the current statistical information base, and how to develop the existing statistics in order to meet these challenges.


Fr-IPS05-02

On the use of mobile phone data for assessing mobility in the Florentine metropolitan area

Alessandra Petrucci 1, Laura Grassini 1, Gianni Dugheri 2, Emilia Rocco 1

1 Department of Statistics, Computer Science, Applications “G. Parenti”, University of Florence, Firenze, Italy
2 Statistical Office – Municipality of Florence, Firenze, Italy

Technological evolution has brought along, in recent years, a significant increase in the diffusion of devices that can record digital footprints of our behaviour on a daily basis, tracking a vast range of activities. The constant and basically unintentional production of such tracks generates huge datasets that contain a precious quantum of information about socio-economic behaviour that may be extracted and used [1]. For example, in relation to tourism, recent studies have been carried out by National Statistical Institutes (NSIs) experimenting with these new data sources to complement official statistical data ([2], [3], [4]). On the other hand, several weaknesses are still recognized in the use of such data in terms of quality, accessibility, applicability, relevance, privacy policy and ownership of the data.

The present paper reports some empirical findings on the use of mobile phone data in quantifying outgoing population mobility flows in the municipality of Florence. In particular, the Statistical Office of the Florence municipality is carrying out a number of studies to assess the use of such data for the estimation of mobility flows among the municipalities of the Florence local labour system (LLS) and for the estimation of visitors and commuters in specific areas of the city. In this contribution, we will address data coherence issues through a comparison with official statistical data on the resident and commuter population derived from the ISTAT ARCH.I.M.E.DE (Integrated archive of economic and demographic micro data) project.


Fr-IPS05-03

Mind the gap - What do users expect and what do we offer in regional and urban statistics

Teodora Brandmuller

Eurostat, Luxembourg, Luxembourg

Eurostat's mission is to provide high-quality statistics for Europe. Subnational data can increase the understanding of the diversity that exists within Member States and across the European Union. Eurostat offers a wide range of regional, city and typology-based statistics to show the complex picture of what is happening at a more detailed geographical level within Europe. One of the principles of the European Statistics Code of Practice is relevance: European statistics meet the needs of users. [1] This paper evaluates to what extent Eurostat's offer in the domain of subnational statistics meets user needs and how it could be improved.

Friday, 19 October 2018 - Hegelsaal II - 09:00 - 10:00

Special Topic Session -


Fr-STS05-01

Session Outline

Walter Radermacher

FENStatS, Wiesbaden, Germany

Principles of official statistics in the era of digitisation

Before going into more detail about some of the important aspects of interaction between users and producers of statistics, the baselines are condensed into a few guiding principles.

High-quality, official statistics strengthen democracy by allowing citizens access to key information that enhances accountability. Access to robust statistics is a fundamental right that facilitates choices and decisions based on valid information. Without statistics, there cannot be a well-grounded, modern, or participatory democracy. Statistics is key for people empowerment.

Official statistics are the cornerstone of public open data; they are the basis of open government. For example, on the EU Open Data Portal, the Eurostat statistical database accounts for the bulk of data on offer. Enhancing access to statistics in open formats enables the free use of data, its interoperability, and its consumption in integrated modalities. As a result, open statistics allow citizens to make sense of complex phenomena and help in their interpretation across borders and without limits. As such, open data and open statistics are a key driver of free dialogue in open societies.

Statistical literacy is critical in ensuring that individuals benefit from the power of statistics, and can benefit from open access to statistical information and its associated services. Data literacy (‘datacy’) is not limited to knowledge of basic statistical information: it entails knowing about the limits of statistics and their use/misuse. The ability to understand statistics, and how they are produced, is a fundamental skill for each individual. ‘Datacy’ is a key enabler for citizens.

Data for statistical services is worthless unless statistical methods are in place to ensure quality. In the digital ecosystem, where data is abundant and a commodity, the value of information is increasingly based on algorithms that generate tailored insights for users. The future is smart statistics.

On the whole, the general public is distanced from official statistics and valuable statistical information. Hence, a bridge must be built between experts and laypeople to overcome this distance and to foster understanding. Providing better information to users and non-users, and being able to counter misjudgements and prejudices with facts, is probably the part of the statistical mission that has the greatest added social value. That mission is about education and providing information that is orientated towards the layperson. However, it should also be about co-design and co-production, with the overall aim of involving the public in the generation of statistical results.

As statistical information is increasingly used for policy decisions, statisticians need to investigate how their services are either used, not used, or misused. They should also examine the ethical implications and the impact of evidence use on the policy cycle. More influence means more responsibilities.


Fr-STS05-02

Old and new risks for the credibility of official statistics: comments from a user

Pilar Martin-Guzman

Universidad Autonoma de Madrid, Madrid, Spain


Fr-STS05-03

Official statistics as clickbait – the new threat in the post-truth society ?

Lyubomira Dimitrova

Sofia University, Department of Public Administration, Sofia, Bulgaria, Sofia, Bulgaria

Post-truth has emerged as a popular term, referring to a particular way in which information is presented to the public. According to the definition given by the Oxford Dictionary, it refers to a situation in which objective facts are set aside in favour of more emotionally shaped information. The role of official statistics in such circumstances is under threat, as previously pointed out by Baldacci et al. [1]. In their paper, official statistics are contrasted with fake news, describing the possible relations between the two and the future actions that need to be taken.

Such practices of online media are harmful for data dissemination, as they threaten to jeopardise trust in official statistical sources. In order to prevent the harmful effects of these actions, the aim of this paper is to present a clickbait-detection model, using data from all the headlines of articles containing press release information issued by the Bulgarian NSI, taken from 21 media websites for 2017. Two models for clickbait detection are compared: the first uses a bag-of-words representation for natural language processing, and the second uses the method applied by Wei et al. [3], which uses type labels to frame the main features that a clickbait contains; for the purposes of this paper these have been converted into parts of speech. These approaches were chosen because the former is simple and easy to implement, while the latter captures the most common features of clickbait (its dynamics, pathos and expression), which can be detected through the parts of speech used. As the dataset is rather small and unbalanced in terms of the share of clickbait vs. non-clickbait headlines (the former are fewer), a support vector machine (SVM) classifier was used. The results show the superiority of the parts-of-speech model, which is accurate in 92% of the cases, compared to the bag-of-words model, which predicted 67% of the tested cases correctly.
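For illustration, the following minimal sketch, assuming invented headline data and not reproducing the author's actual implementation, shows how the two compared approaches could be set up in Python with scikit-learn and NLTK: one SVM on bag-of-words features and one SVM on part-of-speech features.

    # Minimal sketch of the two compared classifiers; headlines and labels are invented placeholders.
    import nltk
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    nltk.download("punkt")                          # tokeniser model
    nltk.download("averaged_perceptron_tagger")     # POS tagger model

    headlines = [
        "You will not believe what the statistics office just admitted",
        "This one chart about unemployment will shock you",
        "Unemployment rate at 6.2% in the fourth quarter of 2017",
        "NSI publishes annual household budget survey results",
    ]
    labels = [1, 1, 0, 0]   # 1 = clickbait, 0 = regular headline (illustrative only)

    def to_pos_tags(text):
        """Replace every token by its part-of-speech tag, keeping only the tag sequence."""
        return " ".join(tag for _, tag in nltk.pos_tag(nltk.word_tokenize(text)))

    bag_of_words_model = make_pipeline(CountVectorizer(), LinearSVC())
    parts_of_speech_model = make_pipeline(CountVectorizer(preprocessor=to_pos_tags), LinearSVC())

    for name, model in [("bag-of-words", bag_of_words_model), ("parts-of-speech", parts_of_speech_model)]:
        model.fit(headlines, labels)
        print(name, model.predict(["Ten secrets hidden in the latest census release"]))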


Fr-STS05-04

Engaging with users to modernise the dissemination of European statistics

Julia Urhausen, Maja Islam

Eurostat Unit B4 Digital dissemination, Luxembourg, Luxembourg

Modernising the Eurostat website and products is driven by the objective to respond better to users’ needs and to facilitate access to official statistics. Following current trends, the aim is to be more visual and attractive and also to provide more structured and precise texts answering the most common user questions. Thus, engaging with users serves as the foundation and impetus for any changes in this modernisation process.

In 2017, several user research activities were launched at Eurostat as part of the DIGICOM project – an ESS project aiming to modernise the dissemination and communication of European statistics. The aim of these user research activities was to learn more about our users and their needs, and get recommendations on what we can do to modernise the dissemination of European statistics. Two methods were used: field studies and usability tests.

In the field studies, sessions covering 40 different users (light, intermediate, advanced) were organized over a period of 6 months. Users were asked about their profile and their use of statistics, and observed as they interacted with a number of dissemination products on the Eurostat website. The outcomes were descriptions of the main user profiles identified, a list of high-level recommendations on how to improve the dissemination products tested, and personas of the users of European statistics. The user profiles identified are valuable as they not only confirm the diversity and often contradictory nature of different users’ needs, but also make it possible to assess specific aspects in more depth.

The usability tests, conducted with smaller groups of users and focusing on a smaller number of products, resulted in more specific recommendations to improve the usability of the tested products. Implementing these recommendations is in reality not just a "copy and paste" process, but has partly proven to be challenging due to different or even contradicting user opinions.

In practice, this is a circular process: Eurostat proposes new or improved dissemination products to users, who then provide their feedback; on this basis recommendations are drawn up which subsequently result in additional improvements to the products. Learning from users now will help Eurostat to disseminate better custom-tailored products in the future. This presentation will include concrete examples of user feedback and its translation into improved dissemination services.

Friday, 19 October 2018 - Upper Foyer - 09:00 - 10:00

Contributed Paper Session -


Fr-CPS05-01

Using web scraped data to verify Egyptian consumer price indices

Mina Gerges

Central Agency for Public Mobilization and Statistics (CAPMAS), Cairo, Egypt
Faculty of Computer and Information Sciences, Ain Shams University, Cairo, Egypt

The purpose of this paper is to provide alternative ways of data collection for NSOs; it also covers the manipulation and analysis of web-scraped data by tracking online prices across market websites and cities in near real time.

Recently, many companies in Egypt have launched e-commerce websites, one of which is souq.com, owned by Amazon, Inc.; this has made data scraping more feasible and has led to the general emergence of web scrapers, software tools for extracting data from web pages. The growth of online markets over recent years means that a great deal of product and price information can be found online and can potentially be scraped.

The consumer price index is an official statistic whose estimates are constructed using the prices of a sample of representative items collected periodically, so it is one of the best examples in this sense: scraping e-commerce websites and websites that publish current product prices makes it possible to collect prices for some products and services automatically, rather than physically visiting stores to collect the prices manually. This offers a range of substantial benefits, including reducing data collection costs, increasing the frequency of collection and the number of products in the basket, and improving our understanding of price behaviour. This paper introduces a generic tool developed to automatically collect online prices ("scraped data"), based on multiple search engines to crawl e-commerce websites for the newest prices. The developed tool aims to reduce data collection costs by relying on big data analytics.

Finally, the methodology of this paper is based on machine learning methods that enable the crawling of market data on the web, automatic price scraping and the evaluation of the scraped data.
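As a rough illustration of the kind of scraper described, the sketch below (with a placeholder URL and hypothetical CSS selectors, not the tool actually developed for CAPMAS) collects product names and prices from one listing page and appends them to a CSV file.

    # Minimal sketch of an online price scraper; URL and CSS classes are placeholders.
    import csv
    from datetime import date

    import requests
    from bs4 import BeautifulSoup

    URL = "https://example.com/category/food"  # hypothetical listing page

    def scrape_prices(url):
        """Return (date, product name, price) rows found on one listing page."""
        html = requests.get(url, timeout=30).text
        soup = BeautifulSoup(html, "html.parser")
        rows = []
        for item in soup.select(".product"):              # hypothetical item container
            name = item.select_one(".title").get_text(strip=True)
            price = item.select_one(".price").get_text(strip=True)
            rows.append((date.today().isoformat(), name, price))
        return rows

    if __name__ == "__main__":
        with open("scraped_prices.csv", "a", newline="", encoding="utf-8") as f:
            csv.writer(f).writerows(scrape_prices(URL))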


Fr-CPS05-02

Perception of individual taxpayers regarding the e-invoice portal in tax compliance

Alexandre Silva 1, 2, Joana Leite 1, Daniela Costa 1, Cidalia Lopes 1

1 Coimbra Business School|Coimbra Polytechnic Institute, Coimbra, Portugal
2 Center for Health Studies and Research (CEISUC) Faculty of Economics of the University of Coimbra, Coimbra, Portugal

The present paper intends to assess and analyse the perception of individual taxpayers regarding the fulfilment of their tax obligations. In particular, it analyses the impact of the introduction of the e-invoice portal on tax compliance, thereby adding value to official statistics, whose focus is restricted to satisfaction assessment and the identification of flaws.

The theme is justified for several reasons. First, the tax compliance paradigm has followed the evolution of the digital economy, and today it is only possible to file tax returns online. Secondly, in 2015 the e-invoice portal was linked to individual taxpayers' tax returns, allowing them to manage their taxation through the validation of their expenses, which are deducted from the tax amount.

This study intends to examine how taxpayers evaluate the e-invoice portal in their relationship with tax obligations. It is our intention to see whether it is perceived as a useful tool to support tax compliance or, instead, as an "obligation" imposed on the taxpayer which increases compliance costs.

A 31-question questionnaire was made available, measuring (perceived) subjective norm, usefulness, reliability, compatibility, ease of use and attitude; a structural equation model (SEM) analysis was performed with AMOS.

We conclude that the variables subjective norm, compatibility, reliability, usefulness and ease of use are positively related to each other, and that, together, they result in a positive attitude of the taxpayer towards the portal.
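AMOS is proprietary software; as a hedged illustration only, the sketch below specifies a comparable structural equation model in the open-source Python package semopy, with hypothetical item names (q1 to q6) and an assumed construct structure rather than the authors' actual model.

    # Hypothetical SEM sketch in semopy (not the authors' AMOS model); item names are assumptions.
    import pandas as pd
    import semopy

    # Each latent construct measured by two questionnaire items; attitude regressed on the others.
    model_desc = """
    usefulness =~ q1 + q2
    ease       =~ q3 + q4
    attitude   =~ q5 + q6
    attitude ~ usefulness + ease
    """

    data = pd.read_csv("survey_responses.csv")   # placeholder item-level responses
    model = semopy.Model(model_desc)
    model.fit(data)
    print(model.inspect())                       # parameter estimates, standard errors, p-values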


Fr-CPS05-03

e-Commerce in European Countries: Hierarchical Clusters Analysis using Eurostat official data

Ksenija Dumičić 1, Berislav Žmuk 1, Mirjana Pejić Bach 1, Augustin Bartolić 2

1 University of Zagreb, Faculty of Economics and Business, Zagreb, Croatia
2 -, Zagreb, Croatia

E-Commerce provides numerous advantages to participants: for example, sellers can offer lower prices, while buyers can overcome geographical and time barriers with the support of the Internet and information and communication technologies (ICTs) [1,2]. Grandon et al. [3] stress that the usage and application of e-Commerce infrastructure allows sellers to target narrow, geographically dispersed market segments, while buyers can take advantage of approaching worldwide markets, with a broader supply of goods and services offered by various suppliers at lower prices. However, a number of authors have indicated in their research that the elements which impact e-Commerce utilization are perceived quality, trust in enterprises, accessibility, security and the role of government [6,7], implying that perceived trust and barriers have a significant impact on e-Commerce development.

The expansion of e-Commerce started during the 1990s with the growth and increasing usage of the Internet. The United Kingdom was the leading European country regarding ICT usage in online markets, a result of the "dot-com bubble" [4]. In the last few years, the United Kingdom's e-Commerce market has been more than double the size of that of the next leading European country [5]. However, the development of e-Commerce is not evenly distributed among European countries, indicating the presence of an enterprise digital divide.

In order to shed some light on the development of e-Commerce in European countries, we use data available in the Eurostat database on the usage of e-Commerce and the perceived barriers to e-Commerce in European enterprises. We utilize cluster analysis and Eurostat data as evidence on the development of e-Commerce and, building on that, we aim to derive possible policy recommendations for the purpose of digital society development. In this endeavour, we use a two-stage approach.
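As an illustration of the approach, the following sketch applies Ward's hierarchical clustering to a hypothetical country-level table of Eurostat e-Commerce indicators; the file and variable names are assumptions, not the authors' actual dataset.

    # Minimal sketch of hierarchical clustering of countries on e-Commerce indicators (placeholder data).
    import matplotlib.pyplot as plt
    import pandas as pd
    from scipy.cluster.hierarchy import dendrogram, fcluster, linkage
    from scipy.stats import zscore

    # Hypothetical indicators, e.g. share of enterprises selling online, share reporting barriers.
    df = pd.read_csv("ecommerce_indicators.csv", index_col="country")

    Z = linkage(df.apply(zscore).to_numpy(), method="ward")   # Ward's hierarchical clustering
    df["cluster"] = fcluster(Z, t=3, criterion="maxclust")    # cut the tree into 3 clusters
    print(df.sort_values("cluster"))

    dendrogram(Z, labels=df.index.to_list())                  # visual inspection of the hierarchy
    plt.show()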

Friday, 19 October 2018 - Hegelsaal II - 10:00 - 11:00

Special Topic Session -


Fr-IPS06-01

The role of data centres in facilitating access to data

Wilhelmus Kloek, Aleksandra Bujnowska

Eurostat, Luxembourg, Luxembourg

The paper presents current initiatives aimed at improving access to European microdata. In particular, the paper focuses on new modes of access, such as remote execution, which allows users to query microdata without seeing them. It also explains how perturbation can be used to protect confidential cells and to compute safe tables.
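The following toy sketch illustrates only the general principle of perturbing table cells with random noise before release; it is not the specific cell-key method used for European microdata access.

    # Toy illustration of cell perturbation: exact counts are masked by small random noise.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(seed=12345)

    table = pd.DataFrame({"region": ["A", "B", "C"], "count": [3, 1250, 47]})
    # Add small integer noise to every cell so exact (possibly confidential) counts
    # cannot be recovered, while aggregates remain approximately correct.
    table["protected_count"] = (table["count"] + rng.integers(-2, 3, size=len(table))).clip(lower=0)
    print(table)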


Fr-IPS06-02

International Network for Exchanging Experience on Statistical Handling of Granular Data (INEXDA)

Christian Hirsch

Deutsche Bundesbank, Frankfurt, Germany

The financial crisis of 2007/08 highlighted the need for using granular data on financial institutions and markets to detect risks and imbalances in the financial sector. Administrative data producers are witnessing a growing need to improve granular data access and sharing. Sharing granular data is fraught with significant legal and technical challenges related to, among others, safeguarding statistical confidentiality. This presentation introduces the international network INEXDA, which provides a platform for administrative data producers to exchange practical experiences on the accessibility of granular data, on metadata, and on techniques for statistical analysis and data protection.

To meet the demand of data users and data compilers for (granular) data sharing and to facilitate the implementation of Recommendation II.20 of DGI-2, a group of central banks established the International Network for Exchanging Experience on Statistical Handling of Granular Data (INEXDA) on 6 January 2017. Current INEXDA members are the Banco de España, the Banca d'Italia, the Banco de Portugal, the Bank of England, the Banque de France, the Deutsche Bundesbank, and the European Central Bank. However, in accordance with the objectives of INEXDA outlined below, participation is open to other central banks, national statistical institutes and international organisations.

Among other developments, this presentation also introduces the metadata schema used by INEXDA to describe granular datasets from different countries. The schema, agreed on by all members, facilitates a comprehensive inventory of existing granular datasets conducted in the member institutions. This inventory, in turn, will foster harmonisation activities between INEXDA members, broaden metadata and potentially future data sharing between institutions represented in the network, and pave the way for metadata on publicly available granular datasets to be shared with external researchers. The INEXDA metadata schema was developed to be easily adaptable for non-INEXDA institutions.


Fr-IPS06-03

Synthetic Longitudinal Business Databases for International Comparisons

Jörg Drechsler 1, Lars Vilhuber 2

1 Institute for Employment Research, Nürnberg, Germany
2 Cornell University, Ithaca, United States

International comparison studies on economic activity are often hampered by the fact that access to business microdata is very limited at the international level. A project launched by the Institute for Employment Research in cooperation with Cornell University, USA, tries to overcome these limitations by improving access to business censuses from multiple countries based on synthetic data. Starting from the synthetic version of the longitudinally edited U.S. Business Register (the Longitudinal Business Database, LBD), the idea is to create similar data products in other countries by applying the synthesis methodology developed for the LBD to generate synthetic replicates that could be distributed without confidentiality concerns. In this talk we discuss some pitfalls encountered while trying to harmonize the German business data collected at the IAB with the structure of the LBD. We also present some first results of the synthesis process and talk about plans for the future.
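As a simplified illustration of the general idea behind regression-based data synthesis (not the actual LBD methodology), the sketch below fits a model on hypothetical confidential microdata and draws synthetic values from it.

    # One illustrative synthesis step: model a variable on the confidential data, then draw synthetic values.
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)

    # Hypothetical confidential microdata: employment and payroll per establishment.
    confid = pd.DataFrame({"employment": rng.poisson(20, 500)})
    confid["payroll"] = 30 * confid["employment"] + rng.normal(0, 50, 500)

    # Fit a model on the confidential data, then replace payroll by model-based synthetic draws.
    model = LinearRegression().fit(confid[["employment"]], confid["payroll"])
    resid_sd = np.std(confid["payroll"] - model.predict(confid[["employment"]]))
    synthetic = confid[["employment"]].copy()
    synthetic["payroll"] = model.predict(synthetic[["employment"]]) + rng.normal(0, resid_sd, len(synthetic))
    print(synthetic.head())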

Friday, 19 October 2018 - Hegelsaal II - 10:00 - 11:00

Special Topic Session -


Fr-STS06-01

The imperious need for an NSI's renewed communication strategy in the era of information over-abundance. The communication strategy has to be global/systemic and innovative. The European ESS VIP programme DIGICOM fosters shared investment of the ESS in this field of activity.

GUILLAUME MORDANT

INSEE, PARIS-Montrouge, France
EUROSTAT, Luxembourg, Luxembourg

Facts / challenge

- information is over-abundant

- people select less information / information is pushed to reach them

- data is pushing away robust official statistics and fake / weak information hides evidence-based facts

- general opinion mistrusts official statistics… Data collection may consequently be undermined.

The communication strategy has to be global/systemic and innovative.

Only a systemic vision of the communication of the NSI can meet the challenge:

- not only a communication designed to support dissemination, but also a specific institutional communication

- not only « statistical » communication, but also pedagogical communication: statistical literacy...

- not only a communication on products, but also a communication on processes, quality and the shared values of official statistics

Only an innovative communication of the NSI can meet the challenge:

- not only an active communication scheduled when releasing information, but a proactive and reactive communication

- not only through traditional channels (website and press media), but also directly to users through social networks

- not only « distant » and passive, but also engaging, having users interact with the information and statistics: data visualisation…

The European ESS VIP programme DIGICOM fosters shared investment of the ESS in this field of activity:

- from data visualisation to gamification

- from pedagogic videos and e-learning material to a European statistics competition for undergraduate students

- from user analytics and segmentation to a communication strategy built around a branding strategy

- from open data access with API development to the promising future of linked open statistics.


Fr-STS06-02

Official statistics through the eyes of students and teachers – the European Statistics Competition

Christine Kormann 1, Sybille Luhmann 1, Alicia Fernández 2

1 Eurostat, Luxembourg, Luxembourg
2 INE Spain, Madrid, Spain

In an age of fake news, it is increasingly important to equip young citizens in particular with the necessary statistical tools to interpret official statistics and debunk false reports. But how can we best go about such a colossal task? One look at the European map quickly revealed that Eurostat and many National Statistical Institutes (NSIs) had already started to address this problem. In the framework of the DIGICOM[1] project, an inventory of statistical literacy activities was carried out in 2016. Statistical competitions among schools were identified as a good practice from INE Spain; similar initiatives were led by statistical societies in other countries, and a joint project could help support their extension to further countries.

Building on this experience, the first European Statistics Competition (ESC) was launched in October 2017 to stimulate the awareness of sound statistics by promoting statistical literacy and curiosity among students, while encouraging teachers to use new educative materials based on official statistics. As a DIGICOM initiative it involved 11 NSIs[2] from the European Statistical System and was coordinated by Eurostat and INE Spain. Over 11 000 students aged 14-18 and 1000 teachers took part in the competition, in which they had to solve statistical problems, search for data and produce statistical analyses. The top 180 students qualified to represent their countries in the final European round which required them to produce short videos on why official statistics matter.

[1]The project for Digital communication, User analytics and Innovative products (DIGICOM) is one of the eight projects of the ESS Vision 2020 implementation portfolio

[2] Bulgaria, Croatia, Cyprus, Finland, France, Greece, Italy, Poland, Portugal, Slovenia, Spain


Fr-STS06-03

Official statistics through the eyes of students and teachers – the European Statistics Competition

Alicia Fernandez 1, Sybille Luhmann 2, Adolfo Gálvez 3

1 National Statistics Institute, Madrid, Spain
2 Eurostat, Luxembourg, Luxembourg
3 National Statistics Institute, Madrid, Spain


Fr-STS06-04

Statistical literacy: a key to comprehend a changing world

Patrizia Collesi

Istat, Italian National Institute of Statistics, Rome, Italy

As often reported in the scientific literature, statistical literacy is the ability of individuals or groups to understand statistics [1]. The task of the Central Directorate for the Development of Statistical Information (DCSI) at Istat is to develop statistical literacy from the demand side, i.e. to promote among citizens, enterprises and institutions the capacity to use statistical information for making informed decisions and evaluations, both for work and for everyday life. Much of the development in demand can already be met by the available statistics. To this must be added the possibility of personalised responses through a second objective: establishing a direct relationship with users so as to give tailor-made answers to their requests and to maintain a relationship with them that lasts over time.

The growth in the skills of Istat as a provider consists in understanding what is lacking in the offer, adapting what is available and bridging the gaps. Consequently, in the relationship between supply and demand, it is necessary to see what Istat already has in its "bundle" of supply as a producer of that particular public service which is statistical knowledge of the world around us. In addition, Istat should be prepared for the latent demand for new information, following new themes, for example by preparing informative learning material on the BES project (Equitable and Sustainable Well-Being [2]), some indicators of which are now included in the Italian Economic and Financial Document.

Friday, 19 October 2018 - Upper Foyer - 10:00 - 11:00

Contributed Paper Session -


Fr-CPS06-01

Data Fitness for Integration

Bernadette Lauro, Raffaella Traverso

European Central Bank, Frankfurt, Germany

Data are at the heart of policy decisions and represent the most valuable asset, after people, at the European Central Bank (ECB). This recognition has driven the institution towards the adoption of a data management strategy oriented towards more common and integrated data processes and a data-centric architecture.

Data quality management is core to the ECB's statistical function. In the last two decades, tremendous efforts have been undertaken to fill the gaps in aggregated and standardised data. Today the focus has shifted to managing high volumes of data, producing high-quality and granular data, and sharing data products. This has driven the ECB towards the adoption of large-scale IT systems that allow data sources and data models to be combined more efficiently and data analysis to be performed with different analytical tools. However, combining data in an IT platform, although necessary, is not sufficient to achieve high-quality integrated data.

Indeed, the quality of integrated data depends not only on the quality of the individual data sources, but also on the quality of all interrelated components of data management. These components encompass the univocal identification of business entities (master data) for which economic transactions and positions need to be analysed. Further components are the conceptual definitions and the methodology of compilation of data collected and provided from different sources; as a consequence, contextual integration requires congruence in the definition of concepts and in the codification of data. Additionally, clear and structured information (metadata) that clarifies the meaning and the structure of the data is relevant for integrating data efficiently. Finally, a state-of-the-art technical infrastructure is essential to enable the integration of all these components.

Integrating data involves different but interdependent data management activities. Therefore, to reach the desired level of quality, not only data but also models, processes and tools must respond to measurable quality indicators, according to a defined and compatible maturity model. In this sense, the fitness for purpose of integrated data is a richer concept than the fitness of a single dataset individually considered.


Fr-CPS06-02

The harmonised Labour Market Areas - the European value added

Dario Buono, Valeriya Angelova-Tosheva, Teodora Brandmueller

Eurostat, Luxembourg, Luxembourg

The European Union has defined far-reaching policy development objectives in the context of the Cohesion Policy, the ten Commission priorities and, more recently, the Sustainable Development Goals and the Urban Agenda. These political initiatives share the challenge of providing adequate statistical information on which to base the necessary policy actions. In order to implement the policy initiatives in the European context, there is a growing need not only for higher geographical detail and references related to administrative units, but also for information that reflects the inherent structure of the social and economic reality at which European decisions and projects need to be targeted. As such, the structuring of information according to functional areas is complementary to the established administrative areas and regions.

With concepts such as Labour Market Areas (LMAs), the established system of the NUTS territorial classification, based on the national administrative structures of Member States, should be supplemented by the concept of “functional geographies”. The concept of LMAs attempts to reflect the phenomenon that, with increasing mobility, administrative regions coincide to a lesser degree with the places where people live and work. Administrative regions cannot always address European and national policy needs: regional administrative boundaries are often the result of historical circumstances; they do not necessarily mirror the present day's social and economic reality; and commuting might distort important regional data at NUTS level, such as GDP, employment/unemployment rates, consumption and environmental statistics.

An LMA can be defined as a geographic area designed for the purposes of compiling, reporting and evaluating employment, unemployment, workforce availability and related topics. It is a statistically defined, economically integrated territory where the majority of people both live and work.

This article aims to present the main principles of the method for the delineation of LMAs and, based on some interesting results, to demonstrate the European value added of this functional geography.
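To illustrate the underlying idea, the sketch below computes a simple self-containment ratio from a toy commuting origin-destination matrix; it is not the harmonised EU delineation algorithm, and the flows are invented.

    # Toy self-containment check for a candidate LMA, based on invented commuting flows.
    import pandas as pd

    # Hypothetical commuting flows: residence municipality -> workplace municipality.
    flows = pd.DataFrame({
        "residence": ["A", "A", "B", "B", "C"],
        "workplace": ["A", "B", "B", "A", "B"],
        "commuters": [800, 200, 500, 100, 300],
    })

    def self_containment(flows, area):
        """Share of jobs located in `area` that are filled by residents of `area`."""
        in_area = flows["workplace"].isin(area)
        lives_and_works = in_area & flows["residence"].isin(area)
        return flows.loc[lives_and_works, "commuters"].sum() / flows.loc[in_area, "commuters"].sum()

    # Grouping municipalities A and B into one candidate LMA.
    print(self_containment(flows, {"A", "B"}))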


Fr-CPS06-03

Creating Comprehensive Data Worlds using Standardisation

Stephan Müller

Deutsche Bundesbank, Frankfurt am Main, Germany

In the aftermath of the global financial crisis micro data have gained considerably in importance. The amount of data being collected is constantly growing, as well as becoming increasingly varied. Parallel to this, statistical authorities are faced with a rising number of data providers. From this it follows that data literacy is becoming an increasingly important factor when working with data.

This presentation therefore aims to illustrate the path towards enhancing the usability of growing data worlds. A key task here is to connect and integrate the data from these different worlds. This implies a substantial need for the standardisation and harmonisation of data. Standardisation refers to the formal and technical aspects, which are understood to mean an order system and the use of a uniform language to describe the data. Harmonisation, on the other hand, refers to semantic considerations, i.e. linking data content in terms of using the same concepts and identifiers to classify the data.

The ISO standard SDMX can provide a suitable framework for data integration, as it offers a well-functioning information model for data. The Bundesbank has been using SDMX as the basis for its central statistics infrastructure since 2003. Our presentation will therefore outline the challenges of and successful approaches to data integration with a focus on statistical standards, drawing on the Bundesbank’s experience.
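As a hedged illustration of working with SDMX data in Python, the sketch below follows the documented quickstart pattern of the pandasdmx package to retrieve an ECB exchange-rate series; this is an assumption about tooling, not the Bundesbank's own infrastructure.

    # Illustrative SDMX retrieval via pandasdmx (assumed tooling, not the Bundesbank's systems).
    import pandasdmx as sdmx

    ecb = sdmx.Request("ECB")                       # one of the data sources bundled with pandasdmx
    msg = ecb.data(
        "EXR",                                      # euro exchange-rate dataflow
        key={"CURRENCY": "USD", "FREQ": "M"},
        params={"startPeriod": "2017"},
    )
    series = sdmx.to_pandas(msg)                    # convert the SDMX data message to pandas
    print(series.head())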


Fr-CPS06-04

Preparing for 2021 Census: Use of the individual administrative data for profiling of the Latvian emigrants

Sigita Šulca

University of Latvia, doctoral student, Ligatne, Latvia

The topicality of this research is related to the decline in population and its impact on the future development of countries. In parallel, the mobility of citizens is seen as an essential economic and personal benefit. The marked rise in migratory flows is a result of the economic crisis, lower welfare support, different levels of income between countries, progressive climate change and unrest from military conflicts. Predictions have pointed to a marked increase in mobility: “…people will be on the move in numbers and ways we have not seen before.” (Clemens et al. 2008).

Changing migration patterns, in the context of increasing migration flows, necessitate the preparation of statistics that support the forecasting of migration processes, the planning of logistics and integration measures for migrants, and the development of policies to reduce migration.

Using administrative data at the individual level from various registers, it is possible to construct a more objective geographic, demographic and socio-economic profile of the emigrant. This paper will shed light on the demographic and geographic profile as well as on socio-economic aspects (education level).
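As a minimal sketch of the kind of individual-level register linkage described above (register names, columns and the person identifier are hypothetical assumptions), two registers are merged on a pseudonymised person identifier and a simple emigrant profile is derived.

    # Illustrative register linkage for emigrant profiling; all file and column names are placeholders.
    import pandas as pd

    population = pd.read_csv("population_register.csv")   # person_id, birth_year, sex, emigration_country
    education = pd.read_csv("education_register.csv")     # person_id, highest_education_level

    # Link the registers on the (pseudonymised) person identifier and keep emigrants only.
    emigrants = (
        population.loc[population["emigration_country"].notna()]
        .merge(education, on="person_id", how="left")
    )

    # Simple demographic / socio-economic profile of emigrants by destination country.
    profile = emigrants.groupby("emigration_country").agg(
        n=("person_id", "size"),
        mean_birth_year=("birth_year", "mean"),
        share_tertiary=("highest_education_level", lambda s: (s == "tertiary").mean()),
    )
    print(profile)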

Friday, 19 October 2018 - AULA - 13:30 - 14:15

Panel -