
NAEC Innovation LAB Workshops and Seminars

 

 

The NAEC Innovation LAB is developing innovative projects drawing on talents from across Directorates and mixing different skills, including as part of the OECD Smartdata Framework. As a platform for collaboration with wider communities, the LAB is helping develop links and make use of expertise and data outside the Organisation, including in national governments, academic institutions, think-tanks and the private sector.

2 July 2020 - The Impact of COVID-19 on Corporate Fragility: Insights from a new Calibrated Firm-Level Agent-Based Model

Robert Hillman and George Wharf (Neuron Capital), Sebastian Barnes (OECD Economics Department) and Duncan MacDonald (OECD Employment, Labour and Social Affairs Directorate) share insights from a new Calibrated Firm-Level Agent-Based Model on the impact of COVID-19 on corporate fragility and the role of policies.


The COVID-19 crisis and lockdowns have put huge pressure on firms’ finances. A number of papers have examined this through stress tests of cashflow or by tracing how impacts propagate through sectoral input-output relationships.

This project builds a new Agent-Based Model with a fully-fledged network of customer-supplier relationships. Firms’ cashflow and solvency depend on their balance sheet positions, production functions, customer-supplier relationships and trade credit. The model is calibrated on ORBIS firm-level data and matched to the OECD Input-Output Tables, the STAN Industry database, the firm distributions from the Structural and Demographic Business Statistics (SDBS) and results from the literature. The model is currently calibrated to the UK.

The model is designed to take into account the full heterogeneity of firms and their interactions. This creates the possibility of non-linear reactions to shocks and cascading failures. It provides a workhorse framework that could be extended or applied to many other questions, including macro-financial interactions, labour market dynamics, and productivity/digitalisation.
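
As a rough illustration of these mechanics (a minimal sketch, not the calibrated model itself), the following snippet simulates firms holding cash buffers, linked through an invented customer-supplier network; a temporary demand shock can then trigger cascading failures. All parameters, the network and the shock profile are assumptions.

```python
# Minimal firm-network cash-flow sketch (illustrative only; not the
# ORBIS-calibrated model). Network, parameters and shock are invented.
import numpy as np

rng = np.random.default_rng(0)
n_firms, n_steps = 100, 50

# Random customer-supplier weights: W[i, j] = share of firm i's input
# spending that goes to supplier j.
W = rng.random((n_firms, n_firms))
np.fill_diagonal(W, 0.0)
W /= W.sum(axis=1, keepdims=True)

cash = rng.uniform(5.0, 20.0, n_firms)    # initial cash buffers
demand = rng.uniform(1.0, 3.0, n_firms)   # final demand per firm
alive = np.ones(n_firms, dtype=bool)
cost_share = 0.8                          # input spending per unit of sales

for t in range(n_steps):
    shock = 0.5 if 10 <= t < 20 else 1.0  # "lockdown" halves final demand
    sales = demand * shock * alive
    spending = cost_share * sales
    # Revenue = final demand plus intermediate sales to surviving customers.
    revenue = sales + spending @ W
    cash += np.where(alive, revenue - spending - 1.0, 0.0)  # 1.0 = fixed cost
    alive &= cash >= 0                    # failed firms stop buying: cascades

print(f"surviving firms: {alive.sum()} / {n_firms}")
```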

The model is applied to the impact of COVID-19 and considers the effect of various types of policy intervention to manage the shock. This presentation covers the main technical aspects of the new model together with preliminary results and policy implications.

18 June 2020 - Leveraging big data and online vacancies to identify transversal skills and their impact on the labour market

Fabio Manca and Jarno Vrolijk (Skills Centre/Employment, Labour and Social Affairs Directorate, OECD)

Fabio Manca and Jarno Vrolijk use a large database of online vacancies collected from the internet to develop a machine-learning, data-driven measure that identifies “transversal skills” and separates them from “technical skills”, and they apply this new measure to estimate the impact of transversal skills on wages and employment in the United Kingdom in 2018. They also present a novel method to account for the sparsity of skill information in online vacancies, and show how the semantic analysis of job adverts can be leveraged to build a skill bundle matrix for more than 700 occupations, with examples of how this matrix can inform labour market transitions and re-training paths for workers.
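
A minimal sketch of how a skill bundle matrix of this kind could be assembled from vacancy text, assuming toy job adverts and invented occupation labels (this is not the authors’ pipeline):

```python
# Toy skill bundle matrix from job adverts (illustrative; adverts and
# occupation labels are invented, and this is not the authors' pipeline).
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer

ads = pd.DataFrame({
    "occupation": ["data analyst", "data analyst", "sales manager"],
    "text": [
        "sql python statistics communication",
        "python teamwork statistics problem solving",
        "negotiation communication leadership crm",
    ],
})

vec = TfidfVectorizer()
X = vec.fit_transform(ads["text"])

# Average the TF-IDF skill vectors of all adverts within each occupation
# to obtain one skill-bundle row per occupation.
skills = pd.DataFrame(X.toarray(), columns=vec.get_feature_names_out())
bundle = skills.groupby(ads["occupation"]).mean()
print(bundle.round(2))
```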

4 June 2020 - Faster economic indicators with data from aircraft and online photos

Tobias Preis, Professor of Behavioural Science & Finance, University of Warwick Business School

The crisis we are currently fighting has underlined how critical fast indicators are for good decision making. This talk outlines some recent highlights of research which demonstrates how alternative data sources can be used to produce more timely economic and social indicators. First, it illustrates how billions of real-time aircraft location broadcasts can provide a leading indicator for aviation’s direct contribution to GDP in both the UK and the US [1]. This analysis draws on an adaptive nowcasting framework originally developed to generate quicker estimates of current flu cases using Google search volume [2]. Second, it describes how metadata from millions of photographs shared online can be used to produce a real-time indicator for global travel flows [3]. These studies contribute to the broader research programme of the Data Science Lab at Warwick Business School, which investigates whether alternative data from the Internet and beyond can be used to measure and even predict human behaviour.
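
The adaptive nowcasting framework of [2] re-estimates a simple predictive regression on a rolling window, so the weight placed on the real-time signal adapts over time. A minimal sketch with synthetic data (window length, the proxy series and all parameters are assumptions):

```python
# Sketch of an adaptive nowcast in the spirit of [2]: regress the current
# target on its own lag plus a real-time proxy, refitting on a rolling
# window so the weights adapt over time. All data here are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
T, window = 200, 40
activity = np.cumsum(rng.normal(0, 1, T))   # slow-moving target series
signal = activity + rng.normal(0, 0.5, T)   # noisy real-time proxy

nowcasts = []
for t in range(window + 1, T):
    X = np.column_stack([activity[t - window - 1:t - 1],  # lagged target
                         signal[t - window:t]])           # current proxy
    y = activity[t - window:t]
    model = LinearRegression().fit(X, y)
    nowcasts.append(model.predict([[activity[t - 1], signal[t]]])[0])

errors = activity[window + 1:] - np.array(nowcasts)
print(f"nowcast RMSE: {np.sqrt(np.mean(errors ** 2)):.3f}")
```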



[1] Miller, S., Moat, H. S. & Preis, T. Using aircraft location data to estimate current economic activity. Scientific Reports 10, 7576 (2020) https://www.nature.com/articles/s41598-020-63734-w.pdf

[2] Preis, T. & Moat, H. S. Adaptive nowcasting of influenza outbreaks using Google searches. Royal Society Open Science 1, 140095 (2014) https://royalsocietypublishing.org/doi/pdf/10.1098/rsos.140095

[3] Preis, T., Botta, F. & Moat, H. S. Sensing global tourism numbers with millions of publicly shared online photographs. Environment and Planning A 52, 471-477 (2020) https://journals.sagepub.com/doi/pdf/10.1177/0308518X19872772

14 May 2020 - Using Google data for nowcasting/forecasting: 3 approaches to predicting the COVID-19 impact

Nicolas Woloszko, Hermes Morgavi, Sebastien Turban (OECD Economics Department)

The speed at which the COVID-19 crisis hit, the different impacts between sectors and the need to understand an unprecedented situation call for new sources of data to inform economic policy. Big data offers high velocity, greater granularity and new possibilities.

This LAB Discussion focuses on new work using Google Trends, indices of Google searches that are available daily across countries, regions and hundreds of categories. Applying this work to COVID-19 impacts has allowed us to learn a lot about what the data can be used for and how to use them.

Short presentations cover on-going work with Google Trends and some wider lessons for working with these data (a minimal code sketch follows the list):

  • Nicolas Woloszko – Real-time tracking of economic activity
  • Hermes Morgavi – GARCH model to nowcast private consumption
  • Sebastien Turban – Identifying the impact of general confinement measures on retail sales
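
The generic pattern behind all three presentations is to regress an activity measure on search indices. A minimal sketch with synthetic data standing in for Google Trends categories (category names and coefficients are invented):

```python
# Generic pattern behind the presentations above: regress an activity
# measure on daily search indices. Data are synthetic; in practice the
# columns would be Google Trends categories.
import numpy as np
import pandas as pd
from sklearn.linear_model import ElasticNetCV

rng = np.random.default_rng(2)
days = pd.date_range("2020-01-01", periods=120, freq="D")
trends = pd.DataFrame(rng.random((120, 5)), index=days,
                      columns=[f"category_{i}" for i in range(5)])
retail_sales = (2.0 * trends["category_0"] - 1.5 * trends["category_3"]
                + rng.normal(0, 0.1, 120))

# Penalised regression copes with many correlated search categories.
model = ElasticNetCV(cv=5).fit(trends, retail_sales)
print(pd.Series(model.coef_, index=trends.columns).round(2))
```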

6 March 2020 - Masterclasses in New Approaches to Economic Challenges

On 6 March 2020, NAEC and its partners held masterclasses with some of the world’s leading practitioners on complexity, network analysis and agent-based modelling - See agenda

-> Watch the webcast

-> Introduction

Alan Kirman, Chief Advisor for NAEC, CAMS-EHESS, Paris

-> Understanding Agent Zero (pdf)

Led by Joshua M. Epstein, Professor of Epidemiology, New York University School of Global Public Health and External Professor, Santa Fe Institute

-> Agent-Based Modelling and the Financial System

Led by Richard Bookstaber, Author of The End of Theory

-> Econophysics

Led by Michael Benzaquen, Chair of Econophysics & Complex Systems, LadHyX - CNRS - Ecole Polytechnique, and José Morán, Ecole Polytechnique

-> Stock-flow consistent models and Climate-Economy Modelling (pdf)

Led by Matheus Grasselli, Professor of Mathematics, McMaster University and the Fields Institute, Toronto

The masterclasses were preceded by a one and a half-day conference under the auspices of the NAEC initiative called Integrative Economics on 5-6 March 2020.

6 March 2020 - Forecasting Financial Crises

Sebastian Poledna & Elena Rovenskaya from the OECD’s Strategic Partnership with the International Institute for Applied Systems Analysis (IIASA) present work on forecasting financial crises using a large-scale agent-based model (ABM).

The financial crisis of 2007-2008 and the subsequent Great Recession sparked widespread discussion among economists on the requirements for future macroeconomic models. The benchmark model in 2008 of Smets and Wouters, which shares many features with models currently used by central banks and large international institutions, did not give any warning of the emergence of a crisis in 2008 and has difficulty explaining both the depth and the slow recovery of the Great Recession. To address these and other shortcomings, we develop an agent-based model for the euro area that fulfills widely recommended requirements for future macroeconomic models by i) incorporating financial frictions, ii) relaxing the requirement of rational expectations, iii) including heterogeneous agents, and iv) building on appropriate micro-foundations.

Using macroeconomic and sectoral data, the model includes all sectors (financial, non-financial, household and general government) and integrates the real side and financial flows, as well as balance sheets, with stock-flow consistency. Moreover, the model incorporates many features considered important for future policy models, such as a financial accelerator with debt-financed investment and a complete GDP identity, and allows for non-linear responses.
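
Stock-flow consistency means the accounts close: every flow leaving one sector enters another, so each row of a transactions-flow matrix sums to zero across sectors. A toy illustration with invented numbers:

```python
# Stock-flow consistency in miniature: each flow row of a transactions-flow
# matrix sums to zero across sectors (one sector's outflow is another's
# inflow). All numbers are invented for illustration.
import pandas as pd

flows = pd.DataFrame(
    {"households": [-30, 100, -70],
     "firms": [30, -100, 70],
     "banks": [0, 0, 0]},
    index=["consumption", "wages", "net lending"],
)
assert (flows.sum(axis=1) == 0).all(), "accounting leak: flows do not balance"
print(flows)
```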

Importantly, we show that the agent-based model can compete with dynamic stochastic general equilibrium and vector autoregression models in out-of-sample prediction. We demonstrate the model on the financial crisis of 2007-2008 and the subsequent Great Recession, as well as the European sovereign debt crisis—two crises that dynamic stochastic general equilibrium models struggled to predict and have difficulty explaining. Making use of out-of-sample forecasting exercises, we show that the model predicts an endogenous crisis around the most intense phase of the Great Recession in the euro area, albeit with lower severity without a global downturn (which is exogenous to the model). With conditional forecasts, which include an exogenous shock on exports from the global downturn, we demonstrate that the model explains both the severity and slow recovery of the Great Recession.

This work was presented during the NAEC Integrative Economics Conference on 5-6 March, but the LAB Discussion focused on the details of the modelling approach.

 

5 February 2020 - Machine learning in the service of policy targeting: The case of public credit guarantees

Emanuele Ciani (Employment, Labour and Social Affairs Directorate, OECD) discusses machine learning work at the Banca d’Italia to improve the targeting of policy.

The impact of a public policy partly depends on how effective it is in selecting its targets. Machine learning (ML) can help by exploiting increasingly available amounts of information. Following this idea, a recent stream of (econometrics) literature has been focusing on how to employ ML to address “prediction policy problems” (Kleinberg et al., 2015). This talk introduces the use of ML for policy targeting by discussing a specific application: the design of the eligibility rule for the Italian public credit guarantee for small and medium-sized enterprises. This application will be used to illustrate the main advantages and limitations of using these methods for selecting beneficiaries.
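
A minimal sketch of a “prediction policy problem” of this kind, with synthetic firm data and an invented eligibility rule (not the Banca d’Italia design):

```python
# Sketch of a prediction policy problem: train a classifier on past loan
# outcomes, then grant the guarantee according to predicted risk. The firm
# data and the eligibility rule are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.normal(size=(2000, 6))  # firm balance-sheet features (synthetic)
default = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(0, 1, 2000)) > 1.0

X_tr, X_te, y_tr, y_te = train_test_split(X, default, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

risk = clf.predict_proba(X_te)[:, 1]        # predicted default probability
eligible = risk < np.quantile(risk, 0.5)    # e.g. guarantee the safer half
print(f"default rate among eligible: {y_te[eligible].mean():.2%} "
      f"vs overall: {y_te.mean():.2%}")
```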

A summary of the work can be found here https://voxeu.org/article/effective-policy-targeting-machine-learning with a link to one of the papers.

 

21 January 2020 - NAEC-Ecole Polytechnique Workshop on Econophysics and Policy

Researchers from Ecole Polytechnique present on Econophysics.

Econophysics is an interdisciplinary research field, applying theories and methods originally developed by physicists in order to solve problems in economics, usually those including uncertainty or stochastic processes and nonlinear dynamics.

Econophysics has tended to concentrate on financial markets, and these represent an ideal laboratory for testing economics concepts using the terabytes of data generated every day by financial markets to compare theories with observations. The dynamics of financial markets, and more generally of economic systems, may reflect the same underlying mechanisms that are familiar to physicists.

As models become more realistic, analytical solutions often have to give way to numerical simulations, something well accepted in physics.

Econophysics - Introduction by Alan Kirman (pdf)

Researchers from Ecole Polytechnique presented on a range of themes, with discussants from OECD directorates.

Presentations by OECD discussants: Boris Cournede ¦ Maria Chiara Cavalleri ¦ Eleonora Mavroeidi

> Watch the webcast

4 December 2019 - Using New Forms of Data & Machine Learning to Understand Cities

Daniel Arribas-Bel, University of Liverpool, discusses his work on big data, machine learning and cities.

His recent work integrates novel data sources and machine learning to better quantify different aspects of urban environments, through two main experiments: Stubbings et al. (2019), which uses street-level imagery, deep learning and hierarchical modelling to build a neighbourhood-level indicator of public exposure to vegetation; and Arribas-Bel, Garcia-Lopez & Viladecans-Marsal (forthcoming), where all building footprints from the Spanish cadastre are used to delineate city boundaries in a meaningful way, counting only territory developed above a minimum density threshold. The two examples highlight the opportunities and challenges of using novel data and methods in the study of cities and urban activities.

References:
Stubbings, P.; Peskett, J.; Rowe, F.; Arribas-Bel, D. A Hierarchical Urban Forest Index Using Street-Level Imagery and Deep Learning. Remote Sens. 2019, 11, 1395. Open Access and available at: https://www.mdpi.com/2072-4292/11/12/1395

Arribas-Bel, D.; Garcia-Lopez, M. A.; Viladecans-Marsal, E. (forthcoming). Building(s and) cities: Delineating urban areas with a machine learning algorithm. Journal of Urban Economics.

28 November 2019 - Applying ABM in Financial Markets

Robert Hillman of www.neuronadvisers.com leads a LAB discussion on applying ABM in financial markets, based on his experience at a hedge fund.

He reviews the recent revival of behavioural heterogeneous agent models in finance, discusses their key features and drivers, and relates them to earlier models, focusing on three potential practical insights. First, the extent to which agent-based models (ABMs) offer a means of generating long-horizon asset price simulations, comparing the results with returns produced by other common simulation methods; ABM-driven simulations can lead to different policy recommendations for those attempting to plan long term, such as pension plan managers and advisors. Second, how ABMs can be used to produce conditional return forecasts (scenarios) and provide useful information about the implications of those outcomes. Finally, how ABMs can provide insights into the growing interest in factor timing. The research demonstrates that ABMs should stand alongside other methods such as bootstrapping and scenario analysis to provide greater insight into the risk and likely performance of investment strategies.
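
A minimal chartist-fundamentalist sketch of the heterogeneous-agent price dynamics discussed above; all parameters are invented and this is not the presenter’s model:

```python
# Minimal chartist-fundamentalist price simulation (parameters invented;
# not the presenter's models). Shifting weight between mean-reversion and
# trend-following generates fat-tailed returns.
import numpy as np

rng = np.random.default_rng(4)
T, p_fund = 2000, 0.0
price = np.zeros(T)

for t in range(2, T):
    fundamentalist = 0.05 * (p_fund - price[t - 1])  # pull towards value
    chartist = 0.9 * (price[t - 1] - price[t - 2])   # trend-following
    # Weight on chartism rises after strong recent moves.
    w = 1.0 / (1.0 + np.exp(-5.0 * abs(price[t - 1] - price[t - 2])))
    price[t] = (price[t - 1] + (1 - w) * fundamentalist + w * chartist
                + rng.normal(0, 0.02))

r = np.diff(price)
kurt = ((r - r.mean()) ** 4).mean() / r.var() ** 2 - 3
print(f"excess kurtosis of simulated returns: {kurt:.2f}")
```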

15 November 2019 - Macroeconomic Random Forests: Applying Machine Learning to Macro Forecasting

Philippe Goulet Coulombe (PhD candidate at UPenn)

Over the last decades, an impressive number of non-linearities has been proposed to reconcile reduced-form macroeconomic models with the data. Many of them amount to having regression coefficients evolve through time: threshold/switching/smooth-transition regressions, structural breaks and random walk time-varying parameters.

While all of these schemes are reasonably plausible in isolation, Coulombe argues that they are much more in agreement with the data if they are combined. For instance, a VAR can both exhibit switching behaviour and be subject to structural breaks. Without proper algorithms, the exploration of the space of time-variations remains wishful thinking. To this end, he proposes Macroeconomic Random Forests (MRFs). The method recasts many standard macroeconomic non-linearities as a tree structure where each terminal node contains a small macro model. MRFs can be understood as Generalized Time-Varying Parameters when taking the view that Random Forests are adaptive kernel estimators. Combined with novel factors specifically extracted to enhance the MRF’s performance, the new approach exhibits clear forecasting gains over both standard non-linear time series models and vanilla Random Forests.
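
A caricature of the core idea, a tree whose terminal nodes each contain a small linear model, on synthetic data. Unlike the actual MRF, which chooses splits by the fit of the leaf-level regressions, this toy adds a level shift so that a plain regression tree can find the regimes:

```python
# Caricature of the MRF idea on synthetic data: partition the sample with
# a shallow tree, then fit a small linear model inside each leaf. The real
# MRF chooses splits by the fit of the leaf-level regressions; here an
# invented level shift lets a plain regression tree find the regimes.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(5)
n = 1000
state = rng.normal(size=(n, 2))   # variables that drive regime switches
x = rng.normal(size=(n, 1))       # regressor of the small macro model
beta = np.where(state[:, 0] > 0, 2.0, -1.0)   # regime-dependent coefficient
y = beta * x[:, 0] + 1.5 * (state[:, 0] > 0) + rng.normal(0, 0.1, n)

tree = DecisionTreeRegressor(max_depth=2, random_state=0).fit(state, y)
leaves = tree.apply(state)
for leaf in np.unique(leaves):
    mask = leaves == leaf
    coef = LinearRegression().fit(x[mask], y[mask]).coef_[0]
    print(f"leaf {leaf}: beta_hat = {coef:+.2f} (n = {mask.sum()})")
```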

 

14 November 2019 - Applying Machine Learning to Housing Markets

Xavier Timbeau, Directeur principal, OFCE/Sciences Po, leads a LAB discussion on his on-going work applying machine learning to housing markets.

Housing cost prediction using random forests or gradient boosting has become popular. Kaggle competitions have been won repeatedly with these methods, their forecast performance is undisputed, and commercial applications are built on it, especially the ability to extrapolate prices from one place to another. The Boston Housing dataset is one famous example of the successful application of machine learning techniques and of the current dominance of the XGBoost algorithm.

We propose to open the black box a little. Economists and social scientists are usually more interested in understanding than in forecasting, and the machine learning toolkit is less developed in this respect than the traditional linear workhorse models of empirics (or their nonlinear extensions). Using partial dependence plots, variable importance and bootstrapping, however, it is possible to recover important tools for assessing the impact of a factor on the variable of interest. Machine learning then displays an important characteristic: it is a very good generalisation of the linear model, with sparse non-linearity and sufficient parsimony. Apart from a very different approach to maximising the likelihood, the final outcome can be viewed as very close to a standard model. The power of the approach is to elegantly solve problems such as variable specification or spatial autocorrelation, and to extract as much information as possible from rich datasets. Some applications, such as calculating a price index, classifying spatial areas and evaluating transport policy, will also be briefly presented.
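
A minimal sketch of this “opening the black box” workflow: gradient boosting on synthetic housing data, followed by permutation importance and a partial dependence curve (features and coefficients are invented):

```python
# "Opening the black box": gradient boosting on synthetic housing data,
# then permutation importance and a partial dependence curve. Features
# and coefficients are invented.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import partial_dependence, permutation_importance

rng = np.random.default_rng(6)
n = 1000
X = np.column_stack([rng.uniform(20, 200, n),   # floor area (m2)
                     rng.integers(1, 6, n),     # number of rooms
                     rng.uniform(0, 30, n)])    # distance to centre (km)
price = (3.0 * X[:, 0] + 10.0 * X[:, 1] - 5.0 * X[:, 2] ** 1.3
         + rng.normal(0, 20, n))

model = GradientBoostingRegressor(random_state=0).fit(X, price)

imp = permutation_importance(model, X, price, n_repeats=5, random_state=0)
print("permutation importances:", imp.importances_mean.round(1))

# Average predicted price as floor area varies, other features averaged out.
pdp = partial_dependence(model, X, features=[0])
print("partial dependence curve shape:", pdp["average"].shape)
```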

15 October 2019 - Impact of Structural Reforms: a Country-centric Assessment with Double Post-Lasso

Eleonora Mavroeidi (OECD Environment Directorate) and Nicolas Woloszko (OECD Economics Department)

A “work in progress” LAB Discussion of on-going work with Double Post-Lasso to explore non-linearities and country-specificities in the impact of structural reforms.

The impact of structural reforms on employment: a country-centric assessment with Double Post-Lasso

The paper uses Double Post-Lasso, a machine learning modelling technique (Belloni, Chernozhukov and Hansen, 2011), to provide a “country-centric” assessment of the effect of structural policies on labour market outcomes. The model produces estimates of the effect of policies for each country that depend on country characteristics, and sheds light on policy interactions. As distinct from most machine learning algorithms, it yields a symbolic representation (i.e. a model), with enhanced interpretability relative to alternative “black-box” non-linear modelling techniques. This approach also has several advantages compared to standard panel regressions. First, estimates of causal effects are significantly more robust. Second, the analysis accounts for non-linearities, policy interactions, cross-country heterogeneity, and differential timing effects across policies. Third, it exploits cross-country variance by safely dropping some of the country fixed effects while preventing omitted variable bias thanks to the use of “double selection”. The analysis of labour market policies using the SPIDER database (Égert, Gal and Wanner, 2017) exhibits heterogeneous country effects while ensuring consistency with previous OECD estimates. An increase in unemployment benefits is shown to have a negative effect on employment in countries with high excess coverage, such as France or Greece, and a strong positive effect on employment in countries with low excess coverage, such as Japan or Korea.
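
The double-selection recipe itself is simple: Lasso the outcome on the candidate controls, Lasso the policy variable on the same controls, take the union of the selected controls, then run OLS of the outcome on the policy plus that union. A minimal sketch on synthetic data (not the paper’s code or data):

```python
# Double selection in miniature (after Belloni, Chernozhukov and Hansen):
# Lasso y on controls, Lasso the policy on controls, take the union of the
# selected controls, then OLS of y on policy + union. Data are synthetic.
import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression

rng = np.random.default_rng(7)
n, p = 500, 50
X = rng.normal(size=(n, p))                    # many candidate controls
policy = X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, n)
y = 1.0 * policy + 2.0 * X[:, 0] - X[:, 2] + rng.normal(0, 1, n)

sel_y = np.abs(LassoCV(cv=5).fit(X, y).coef_) > 1e-6
sel_p = np.abs(LassoCV(cv=5).fit(X, policy).coef_) > 1e-6
union = sel_y | sel_p

Z = np.column_stack([policy, X[:, union]])
effect = LinearRegression().fit(Z, y).coef_[0]
print(f"estimated policy effect: {effect:.2f} (true value 1.0)")
```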

1 October 2019 - MOLES: OECD’s new model for the environmental and economic assessment of urban policies

Walid Oueslati and Ioannis Tikoudis (OECD Environment Directorate)

Discussion of the structure and different applications of MOLES: a new urban Computable General Equilibrium (CGE) model with microsimulation elements developed in the Environment Directorate. It has many features of an Agent-Based Model (ABM) but is also embedded in a more traditional framework.

The model is tailored to examine the economic and environmental impact of policies targeting the housing and transportation markets in modern cities. Such policies include constraints in long run housing supply (e.g. building height restrictions, no-development zones), property taxation and interventions in the transport system (e.g. fuel and kilometre taxes, public transport subsidies, incentives for electromobility).

The presentation covers the final results from the application of MOLES to a sprawling urban area (Auckland, New Zealand). The study explores three central questions. The first is whether the rate of technological change and cost reductions in the electric vehicle industry will be able to achieve significant reductions in greenhouse gas emissions from urban transport. The second concerns the effectiveness of these policy interventions in achieving emission reduction targets. The third question looks at how transport and land use policies affect living costs and housing affordability in urban areas.

The findings indicate that policy inaction would prevent urban transport from reaching carbon neutrality any time soon. Under the reference scenario, in which no substantial policy change occurs, total emissions from road transport will continue increasing, and 60% of the per capita emissions of 2018 will still be produced in 2050. Stringent policies that promote public transport and electric vehicles, as well as interventions that give rise to a more compact urban form, may reduce the latter number to 30%. The economic impact of these interventions is particularly diverse: the various policies may yield very different welfare effects, whose magnitude also depends on when they are implemented. The analysis therefore focuses not only on highlighting the policies that curb greenhouse gas emissions and increase welfare, but also on identifying the order in which these policies should be implemented.

5 September 2019 - Connectivity Counts: How is the Geography of International Trade Linkages Changing and What is the Effect on International Shock Transmission?

Eleonora Mavroeidi, Policy Analyst, OECD Environment Directorate

A discussion of a paper which studies the evolution of the interconnectivity and centrality of international trade before and after the global financial crisis, and the variation in the contagion of pre- and post-crisis trade shocks. Network analysis of the OECD Inter-Country Input-Output (ICIO) data reveals that changes in the geographical patterns of global value chains (GVCs) during the period 2005-2015 were primarily driven by the increased centrality in the production chain of China and its industries, as well as of service sectors across countries. Such changes, together with increased participation in GVCs in 2015 and the evolution towards fewer but larger international clusters, imply that trade restriction shocks transmit faster and that the cost of negative shocks will be higher than before. Lastly, the impact varies across countries: shocks in more central countries, such as China and the United States, result in a larger and faster contagion in 2015 compared to 2005, as well as compared to less central economies.
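
Centrality measures of this kind can be computed directly on a weighted, directed flow matrix. A toy sketch with an invented four-country matrix standing in for the ICIO data:

```python
# Toy centrality computation on an input-output style network; the flow
# matrix is an invented stand-in for the OECD ICIO data.
import networkx as nx
import numpy as np

countries = ["CHN", "USA", "DEU", "JPN"]
# flows[i, j] = value of intermediate inputs shipped from i to j (invented).
flows = np.array([[0, 50, 30, 40],
                  [20, 0, 25, 15],
                  [15, 20, 0, 10],
                  [25, 10, 10, 0]], dtype=float)

G = nx.from_numpy_array(flows, create_using=nx.DiGraph)
G = nx.relabel_nodes(G, dict(enumerate(countries)))

centrality = nx.eigenvector_centrality_numpy(G, weight="weight")
for country, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{country}: {score:.3f}")
```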

> Watch the webcast

1-3 July 2019 - Fundamentals of Machine Learning for Economists

Michal Andrle, Senior Economist at the IMF Research Department

The course will be a set of lectures and example applications (Matlab/Python code examples will be provided) based on a course that Michal and his colleagues gave at the IMF (programme).

The course aims to provide an in-depth overview of the main concepts and techniques of machine learning. It is aimed both at those interested in understanding or eventually applying these statistical techniques and at those with some or no experience of machine learning who want a stronger underlying framework and a view of the breadth of issues. In addition, he will cover “causal inference”, which is of particular interest at the OECD given our efforts to identify the impact of policy on outcomes. This will draw on the very recent literature in this area and will focus on the idea of acknowledging specification/model searching in statistical inference and the concept of “principled” model search using adaptive ML approaches (lasso, random forests).

17 April 2019 - NAEC masterclasses

NAEC and its partners held master classes with some of the world’s leading practitioners on complexity, network analysis and agent-based modelling

>> Agenda (pdf) >> Watch the webcast - am / pm

 

Complexity Economics

Led by Alan Kirman, Chief Advisor for NAEC, CAMS-EHESS, Paris

=> Complexity and Economics (pdf)

Elena Rovenskaya, Programme Director, Advanced Systems Analysis, IIASA

=> Towards a Systems Perspective on National Well-being (pdf)

Agent-Based Modelling

Led by Robert Axtell, Chair of the Department of Computational Social Science at George Mason University, and Santa Fe Institute

Networks and Systemic Risk

Led by Thomas Hurd, Professor of Mathematics, McMaster University, Toronto

=> Introduction to Financial Networks and Systemic Risk (pdf)

Macroeconomics

Led by Matheus Grasselli, Professor of Mathematics, McMaster University and the Fields Institute, Toronto

=> An Introduction to Stock-flow Consistent Models in Macroeconomics (pdf)

8 April 2019 - Causal Machine Learning: Heterogeneous Impacts of a Welfare Experiment

A presentation, organised by the NAEC Innovation LAB and the Centre for Entrepreneurship, SMEs, Regions and Cities, by Anthony Strittmatter, Professor of Econometrics at the Swiss Institute for Empirical Economic Research (SEW-HSG), University of St. Gallen, Switzerland

Abstract: Recent studies have proposed causal machine learning (CML) methods to estimate conditional average treatment effects (CATEs). In this study, I investigate whether CML methods add value compared to conventional CATE estimators by re-evaluating Connecticut’s Jobs First welfare experiment. This experiment entails a mix of positive and negative work incentives. Previous studies show that it is hard to tackle the effect heterogeneity of Jobs First by means of CATEs. I report evidence that CML methods can provide support for the theoretical labor supply predictions. Furthermore, I document reasons why some conventional CATE estimators fail and discuss the limitations of CML methods.
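
For comparison, one of the simplest conventional CATE estimators is a “T-learner”: fit separate outcome models for treated and control units and difference their predictions. A minimal sketch on synthetic data, not the Jobs First experiment:

```python
# A simple CATE estimator (a "T-learner"): fit separate outcome models for
# treated and control units, then difference their predictions. Data are
# synthetic, not the Jobs First experiment.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(8)
n = 2000
X = rng.normal(size=(n, 3))                    # individual characteristics
treated = rng.integers(0, 2, n).astype(bool)   # randomised treatment
effect = np.where(X[:, 0] > 0, 2.0, 0.0)       # heterogeneous true effect
y = X[:, 1] + treated * effect + rng.normal(0, 1, n)

m1 = RandomForestRegressor(random_state=0).fit(X[treated], y[treated])
m0 = RandomForestRegressor(random_state=0).fit(X[~treated], y[~treated])
cate = m1.predict(X) - m0.predict(X)

print(f"mean CATE, X0 > 0: {cate[X[:, 0] > 0].mean():.2f} (true 2.0)")
print(f"mean CATE, X0 <= 0: {cate[X[:, 0] <= 0].mean():.2f} (true 0.0)")
```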

12 February 2019 - Why Big Data Needs Small Data

Professor Roberto Rigobon, Professor of Applied Economics at the MIT Sloan School of Management

A discussion on how to combine big and small data to construct better national statistics, in the context of MIT and Harvard’s Billion Prices project. Professor Rigobon will talk about the advantages and limitations of using big data techniques for economic measurement and the possible extension to our area of expertise: measuring competition with prices in services sectors and the links with policy regulations. He will illustrate with concrete examples the IT architecture put in place to collect unstructured data from the web and online retail platforms, such as Walmart and Amazon, and the process designed to transform these data into a structured dataset suitable for the daily measurement of CPI inflation in a set of countries. Drawing on his experience, we will consider the feasibility of replicating such an approach in the context of the NAEC Innovation LAB and discuss future collaboration.
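
A toy version of the final step, turning matched product prices into a chained daily price index, with invented prices (the Billion Prices production pipeline is far more involved):

```python
# Toy daily price index from scraped product prices: average day-on-day
# log price changes of matched products, then chain them. Prices are
# invented; the Billion Prices production pipeline is far more involved.
import numpy as np
import pandas as pd

rng = np.random.default_rng(9)
days = pd.date_range("2024-01-01", periods=30, freq="D")
products = [f"sku_{i}" for i in range(200)]
# Log prices drift up ~0.05% per day with idiosyncratic noise.
log_p = np.cumsum(rng.normal(0.0005, 0.01, (30, 200)), axis=0)
prices = pd.DataFrame(np.exp(log_p), index=days, columns=products)

daily_inflation = np.log(prices).diff().mean(axis=1)       # matched-model mean
index = 100 * np.exp(daily_inflation.fillna(0).cumsum())   # chained index
print(index.tail(3).round(2))
```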

>> Watch the webcast

7 February 2019 - Brainstorm on Big Data

Big data has the potential to shed new light on policy questions but also requires researchers to approach problems in a different way. The OECD’s Smart Data Strategy is addressing a range of issues around big data and the use of new sources of data. The NAEC Innovation LAB can help in developing applications of big data to policy questions.

A discussion bringing together several OECD directorates to look at how big data could be used to bring new policy insights and how, practically, we can develop some projects that would show this and provide a stronger foundation for working with such data in the future. The aim is to identify the most promising policy questions where big data could be applied in the OECD, what the approach might be and what sources of data would be required.

24 January 2019 - Development Co-operation Directorate (DCD) FLITS Project

Frans Lammersen, Senior Policy Analyst, OECD Development Co-operation Directorate, and colleagues

Discussion about the DCD FLITS project, which aims to break down silos between different DCD communities (i.e. policy, statistics and evaluation).  Through the use of a so-called intelligent reader tool or semantic analysis, the project will contribute to efforts to create the ONE Sight for developing, accessing and sharing information and knowledge on OECD’s work-in-progress and the NAEC Innovation LAB for diversifying and strengthening the OECD’s analytical tools.

14-16 January 2019 - 10 Years After the Crisis - Modelling Meets Policy Making

The 2008 financial crisis posed unprecedented challenges to practitioners and policy makers around the world. Researchers responded in tandem by re-examining the approaches to model financial markets and their interactions with the real economy. Agent-based models, networks, dynamical systems, and mean-field games became part of the emerging research area of systemic risk alongside more traditional economic models.

In a joint OECD NAEC-Fields Institute workshop in Toronto, leading academic experts and policy makers reflected on the lessons learned over the past 10 years and discussed recent advances in modelling of the financial system with the aim of a sustainable, inclusive and stable economy.

=> Summary of conference (pdf)

The first day of the workshop featured mini-courses targeted towards graduate students, postdoctoral fellows and other young researchers

Complexity Economics
Alan Kirman, NAEC Initiative

Agent-Based Models in Economics

Blake LeBaron, Brandeis International Business School
Alissa Kleinnijenhuis, University of Oxford

Asset Price Bubbles: Economics, Mathematics, and Statistics

Matheus Grasselli, McMaster University

Networks and Systemic Risk
Thomas Hurd, McMaster University

Blockchains and Distributed Ledgers in Retrospective and Perspective

Alex Lipton, Co-Founder and CTO, Silamoney and MIT

29 November 2018 - Machine learning and interpretability

Marcin Detyniecki, Head of Data Science and R&D, AXA Data Innovation Lab

Discussion on machine learning and interpretability, which is key to convincing policymakers to use the results of machine learning.

>> Watch the webcast

15 November 2018 - Agent-based modelling/networks

Eleonora Mavroeidi, OECD Economics Department

A discussion on the use of agent-based modelling/networks, drawing on her participation in the Santa Fe Institute Complex Systems Summer School and on-going projects.

23 October 2018 - Policy Experiments

Andy Haldane, Chief Economist and Executive Director of Monetary Analysis and Statistics at the Bank of England

His seminar on "Policy Experiments (pdf)" endorsed the work of the NAEC Innovation LAB.


Economists have gone from telling stories, to using empirics, to developing models. All still have an important role in the policy domain – indeed, the role of each is being reshaped and improved by new data and new technologies. The missing ingredient, at least for macro-economic policy purposes so far, has been experimental methods. These hold great promise for the future, as recent examples in the monetary policy and regulatory policy domains illustrate.

>> Watch the webcast

27 June 2018 - Modelling housing using Agent-Based Modelling (ABM)

Marc Hinterschweiger and Arzu Uluc (Bank of England) and Adrián Carro of the Institute for New Economic Thinking (INET)

The Bank of England/Oxford team led an in-depth discussion of their work using ABM to model the housing market at the Bank of England, covering the design, calibration and simulation of their model.

>> Watch the webcast

7 June 2018 - Semantics

Neil Thompson (MIT)

Neil Thompson presented his paper on “Science Is Shaped by Wikipedia: Evidence From a Randomized Control Trial” with a focus on the technical aspects of his work on semantic analysis and discussed the use of AI-based techniques in economics. Caroline Paunov, of the OECD Science, Technology and Innovation Directorate, introduced the seminar by providing a short discussion of the relevance of semantics for work conducted in the OECD context and giving concrete examples of applications in the field of science, technology and innovation policy analysis.

31 May 2018 - Presentation by DataIKU

The pilot OECD Smart Data Science Platform

The Collaborative Data Science platform from DataIku complements the existing ‘smart data sandbox’ with data science features that are easy for analysts to access and use (machine learning, text mining, policy simulation and exploration of large datasets). It was selected by a panel of OECD experts as part of the 2017 call for tender ‘data services and solutions’. DataIKU presented the platform and illustrated its value with several examples relevant in the OECD context. Analysts were also invited to present their potential uses of the platform, and 10 projects were selected for the pilot.

20 October 2017 - Financial markets, network analysis and ABMs

A technical workshop on methodologies and tools for understanding financial markets with Rick Bookstaber, one of the world’s leading risk managers, and Jean-Philippe Bouchaud, Capital Fund Management and École Polytechnique.
>> Watch the webcast

ABM background paper (pdf)
Presentation - JP Bouchaud (pdf)
Presentation - R Bookstaber (pdf)

29 September 2017 - New perspectives on the labour market: Policy applications using agent-based modelling (ABM)

In a session on macro-economic insights on labour markets using ABM, Jean-Philippe Bouchaud (Capital Fund Management and École Polytechnique) discussed a methodology, inspired by statistical physics, that helps in understanding large macro-economic fluctuations. A session on micro insights on labour markets using ABM, with Gérard Ballot, Université Paris 2 Panthéon-Assas, and Jean-Daniel Kant, Université Pierre et Marie Curie (UPMC), reviewed French labour laws using a model of the recent French labour market (An agent-based approach to evaluate the impact of economic dismissals facilitation on the French labor market).

>> Watch the webcast (OECD only)

ABM background paper (pdf)
Annotated agenda (pdf)
Presentation ABM - J-P Bouchaud / Presentation ABM - A Mourougane / Presentation ABM - G Ballot et JD Kant / Presentation ABM - P Fialho