Work Package 1
The overall goals of the FWG are to integrate and analyze existing paleoflood data at the regional and global scales and to promote and disseminate paleoflood science and data at different levels. To reach these overall goals, the FWG has been structured into three Work Packages (WP).
Collecting, storing and sharing paleoflood data
(Collaborative Flood Database)
Coordination: Michael Kahle, Germany (database management, LiPD format transfer); Neil Macdonald, UK (data management, historical archives); Scott St. George, USA (data management, tree rings); Rhawn Denniston, USA (data management, speleothems); Samuel Munoz, USA (data management, fluvial sediments, ECR); Willem Toonen, Belgium (data management, fluvial sediments, ECR); Bruno Wilhelm, France (data management, lake sediments)
To the best of our knowledge, researchers have produced more than 400 historical and paleoflood records worldwide. However, this large and highly valuable data set is dispersed across different data repositories, databases and personal computers. Collecting, storing and sharing all these existing (published) data through a Collaborative Flood Database requires a collaborative effort, but the payoff will be immense.
This first version of the Collaborative Flood Database will offer a single interactive access point to sort the data by record type and will allow comparison, verification and cross-correlation among them. Combining data from multiple proxies with mathematical methods (e.g. numerical modelling or statistics) will then lead to an integrated product with better coverage and precision in time, space and intensity level than any single archive type could deliver alone.
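The kind of multi-proxy compositing described above can be sketched in a few lines. The record contents below are purely illustrative (hypothetical flood years, not real data), and the pooling rule is a minimal stand-in for the statistical methods the text alludes to:

```python
from collections import Counter

# Hypothetical flood-year lists from three archive types
# (illustrative values only, not real data).
records = {
    "lake_sediment": [1342, 1501, 1784, 1824, 1910],
    "historical":    [1501, 1573, 1784, 1910, 1954],
    "tree_ring":     [1784, 1824, 1910, 1954, 2002],
}

def composite_frequency(records, window=100):
    """Pool events from all archives and count them per time window."""
    counts = Counter()
    for years in records.values():
        for year in years:
            counts[(year // window) * window] += 1
    return dict(sorted(counts.items()))

# Simple cross-verification: years attested by more than one archive.
pooled = [y for years in records.values() for y in years]
corroborated = sorted({y for y in pooled if pooled.count(y) > 1})

print(composite_frequency(records))
print(corroborated)
```

Even this toy composite shows the payoff: events corroborated by several archives stand out, and the pooled series covers windows that no single archive records on its own.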
A single database will enable holistic insights into the causes and effects of floods, including the weather conditions leading to floods and the impacts on human history. This could serve as input for regional or global climate model studies (see WP2). Such a product will also enable the creation of external tools for searching and visualizing floods across geographic areas and/or time slices of extraordinary flood risk.
The challenge for a Collaborative Flood Database is to standardize the different types of archives and to assess their degrees of accuracy, while maintaining a fundamental data structure common to all archive types. Indeed, the various types of flood archives (Fig. 1) exhibit different sensitivities to past flood events due to different environmental settings, temporal resolution, and the nature of the flood proxy. The flood sensitivity of different archives may also depend on different flood characteristics (such as water height, duration, discharge, etc.).
These differences result in distinct types of flood information in terms of their precision in locating an event in space and time, as well as in understanding the underlying meteorological causes and impacts on societies. As a result, biases are inherent in each archive. Noise driven by other parameters, such as geomorphological changes in a watershed, can also affect the signal recorded in the archives.
Finally, the unified flood database will serve as a necessary starting point for expanding the available metadata for each record. The documentation and storage of very specific metadata for each archive type is indeed fundamental to interpreting the measured or observed parameters that provide paleoflood information (tree-ring widths, radiocarbon (14C) dating of sediment layers, quotations from historical documents, etc.). The accessibility of complete metadata for all the records in the database is important in achieving the transparency desirable in open science.
To achieve these goals, the database will be organized into five main thematic data clusters documenting the source, location, time, classification and reference of the records. In parallel, the nature of the data will be distinguished at three levels:
i) the essential and common base data required for each and every archive type (e.g. location coordinates, point in time, etc.);
ii) the optional common data available for all proxies (e.g. flood level, discharge, etc.);
iii) the optional data specific to each type of proxy (e.g. tree ring density, sediment grain size, etc.).
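The three-level structure above can be illustrated with a minimal sketch of a single record. All field names here are hypothetical placeholders, not the final database schema:

```python
# Illustrative sketch of the three data levels for a single record.
# All field names are hypothetical, not the final database schema.
record = {
    # i) essential base data, required for every archive type
    "base": {
        "site_name": "Example Lake",
        "latitude": 45.2,
        "longitude": 5.8,
        "event_year": 1784,
        "archive_type": "lake_sediment",
    },
    # ii) optional data common to all proxies
    "common": {
        "flood_level_m": None,   # not recorded by this archive
        "discharge_m3s": None,
    },
    # iii) optional data specific to this proxy type
    "proxy_specific": {
        "layer_thickness_mm": 12.5,
        "grain_size_um": 63.0,
    },
}

def required_fields_present(rec):
    """Check that all level-i base fields are filled in."""
    return all(v is not None for v in rec["base"].values())

print(required_fields_present(record))
```

Separating the required base data from the optional layers lets every archive type share one validation rule (level i must be complete) while still carrying arbitrarily rich proxy-specific detail.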
Tasks:
i) Collect possible contributions to the paleoflood data pool – ONGOING (existing flood databases could be contacted to inform them about the LiPD enhancement);
ii) Collect parameters used inside paleoflood records across all archive types – DONE;
iii) Establish a common file/data format – PARTIALLY DONE, as the LiPD format will be enhanced:
- Enhance LiPD format to store inferred flood data in a common way (http://wiki.linked.earth/Category:Floods_Working_Group)
- Enhance LiPD format to store observations from historical document archives (http://wiki.linked.earth/Category:Historical_Documents_Working_Group)
All FWG members are invited to join the WG on Linked Earth and add feedback and/or comments there (http://wiki.linked.earth/Category:Floods_Working_Group)
iv) Develop or enhance existing tools to create LiPD files
- Python LiPD Utilities (http://wiki.linked.earth/LiPD_Utilities);
- LiPD Online Portal "lipidifier"
- Export LiPD files from tambora.org
v) Collect LiPD files covering floods, i.e. upload to Linked Earth (http://wiki.linked.earth/Dataset_Tutorial)
Deliverables:
i) LiPD format enhanced to standardize flood-related parameters (see wiki for details);
ii) LiPD format is enhanced to standardize parameters of historical climatology/documents (see wiki for details);
iii) Provide tools to simplify creation of LiPD files
- Python LiPD Utilities (http://wiki.linked.earth/LiPD_Utilities) - LiPD team
- LiPD Online Portal "lipidifier" - LiPD team
- Export LiPD files from tambora.org - Tambora-Team, Michael Kahle
iv) Generated LiPD files all uploaded to Linked Earth - All members with data available