
Research papers, University of Canterbury Library

Semi-empirical models based on in-situ geotechnical tests have become the standard of practice for predicting soil liquefaction. Since the inception of the “simplified” cyclic-stress model in 1971, variants based on various in-situ tests have been developed, including the Cone Penetration Test (CPT). More recently, prediction models based solely on remotely sensed data have been developed. Similar to systems that provide automated content on earthquake impacts, these “geospatial” models aim to predict liquefaction for rapid response and loss estimation using readily available data. This data includes (i) common ground-motion intensity measures (e.g., PGA), which can either be provided in near-real-time following an earthquake or predicted for a future event; and (ii) geospatial parameters derived from digital elevation models, which are used to infer characteristics of the subsurface relevant to liquefaction. However, the predictive capabilities of geospatial and geotechnical models have not been directly compared; such a comparison could elucidate techniques for improving the geospatial models and would provide a baseline for measuring improvements. Accordingly, this study assesses the relative efficacy of liquefaction models based on geospatial vs. CPT data using 9,908 case studies from the 2010-2016 Canterbury earthquakes. While the top-performing models are CPT-based, the geospatial models perform relatively well given their simplicity and low cost. Although further research is needed (e.g., to improve upon the performance of current models), the findings of this study suggest that geospatial models have the potential to provide valuable first-order predictions of liquefaction occurrence and consequence. Towards this end, performance assessments of geospatial vs. geotechnical models are ongoing for more than 20 additional global earthquakes.
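
For context, the core of the “simplified” cyclic-stress procedure referenced above reduces to a single stress-ratio calculation. Below is a minimal Python sketch of that calculation, using the Liao and Whitman (1986) approximation for the depth-reduction factor; the function names and input values are illustrative and are not drawn from this paper.

```python
# Minimal sketch of the "simplified" cyclic stress ratio (CSR) of
# Seed and Idriss (1971); rd uses the Liao and Whitman (1986)
# approximation. Input values are illustrative only.

def depth_reduction_factor(z_m: float) -> float:
    """Stress reduction coefficient rd (Liao & Whitman, 1986)."""
    if z_m <= 9.15:
        return 1.0 - 0.00765 * z_m
    return 1.174 - 0.0267 * z_m  # valid to roughly 23 m depth

def cyclic_stress_ratio(pga_g: float, sigma_v: float,
                        sigma_v_eff: float, z_m: float) -> float:
    """CSR = 0.65 * (a_max / g) * (sigma_v / sigma_v') * rd."""
    return 0.65 * pga_g * (sigma_v / sigma_v_eff) * depth_reduction_factor(z_m)

# Example: PGA = 0.3 g, total/effective stress = 100/60 kPa at 5 m depth
print(round(cyclic_stress_ratio(0.30, 100.0, 60.0, 5.0), 3))
```

In a CPT-based assessment, this demand (CSR) would then be compared against a CPT-derived cyclic resistance ratio to estimate the factor of safety against liquefaction.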

Research papers, University of Canterbury Library

This research explores how the business models of SMEs evolve in the face of a crisis to become resilient. The business model canvas was used as a tool to analyse the business models of SMEs in Greater Christchurch. The purpose was to evaluate the changes SMEs made to their business models after being hit by a series of earthquakes in 2010 and 2011. Interviews of business owners were conducted and analysed using grounded theory methods. Because this method is iterative, a tentative theoretical framework was proposed halfway through the data collection. It became apparent that owner-specific characteristics were more prominent in the data than the elements of the business model. Although the SMEs in this study experienced several operational changes in their business models, such as changes of location and modifications of payment terms, the suggested framework highlights how owner-specific attributes influence the survival of a small business. Small businesses and their owners are so interrelated that the business models personify the owner-specific characteristics. In other words, the adaptation of the business model reflects the extent to which the owner possesses these attributes. These attributes are (a) Mindsets – the attitude and optimism of the business owner; (b) Adaptive coping – the ability of the business owner to take corrective actions; and (c) Social capital – the network of a business owner, including family, friends, neighbours and business partners.

Research papers, University of Canterbury Library

Geospatial liquefaction models aim to predict liquefaction using data that is free and readily available. This data includes (i) common ground-motion intensity measures; and (ii) geospatial parameters (e.g., among many, distance to rivers, distance to coast, and Vs30 estimated from topography), which are used to infer characteristics of the subsurface without in-situ testing. Since their recent inception, such models have been used to predict geohazard impacts throughout New Zealand (e.g., in conjunction with regional ground-motion simulations). While past studies have demonstrated that geospatial liquefaction models show great promise, the resolution and accuracy of the geospatial data underlying these models are notably poor. As an example, mapped rivers and coastlines often plot hundreds of meters from their actual locations. This stems from the fact that geospatial models aim to rapidly predict liquefaction anywhere in the world and thus utilize the lowest common denominator of available geospatial data, even though higher-quality data is often available (e.g., in New Zealand). Accordingly, this study investigates whether the performance of geospatial models can be improved using higher-quality input data. This analysis is performed using (i) 15,101 liquefaction case studies compiled from the 2010-2016 Canterbury earthquakes; and (ii) geospatial data readily available in New Zealand. In particular, we utilize alternative, higher-quality data to estimate: locations of rivers and streams; location of coastline; depth to ground water; Vs30; and PGV. Most notably, a region-specific Vs30 model improves performance (Figs. 3-4), while other data variants generally have little-to-no effect, even when the “standard” and “high-quality” values differ significantly (Fig. 2). This finding is consistent with the greater sensitivity of geospatial models to Vs30, relative to any other input (Fig. 5), and has implications for modeling in locales worldwide where high-quality geospatial data is available.
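
The general form of such geospatial models can be illustrated as a logistic regression over ground-motion and geospatial proxies, with Vs30 entering as one input among several. The sketch below shows that form only: the coefficients and predictor set are hypothetical placeholders, not values from this study.

```python
# Schematic of a geospatial liquefaction model: logistic regression
# over ground-motion intensity and geospatial proxies. The COEFFS
# values are HYPOTHETICAL placeholders for illustration only.
import math

COEFFS = {
    "intercept": 8.0,
    "ln_pgv": 0.3,           # PGV in cm/s
    "ln_vs30": -2.0,         # Vs30 in m/s
    "dist_river_km": -0.05,  # distance to nearest river
    "gw_depth_m": -0.1,      # depth to ground water
}

def liquefaction_probability(pgv, vs30, dist_river_km, gw_depth_m):
    """P(liquefaction) via a logistic link over the proxy inputs."""
    x = (COEFFS["intercept"]
         + COEFFS["ln_pgv"] * math.log(pgv)
         + COEFFS["ln_vs30"] * math.log(vs30)
         + COEFFS["dist_river_km"] * dist_river_km
         + COEFFS["gw_depth_m"] * gw_depth_m)
    return 1.0 / (1.0 + math.exp(-x))

# Swapping a region-specific Vs30 estimate for the topographic proxy
# changes only one input to the model:
print(liquefaction_probability(pgv=20.0, vs30=180.0,
                               dist_river_km=0.4, gw_depth_m=1.5))
```

This structure makes the study's finding intuitive: because Vs30 enters with the largest (log-scaled) weight, substituting a higher-quality Vs30 estimate shifts predictions more than refining the distance or water-table inputs.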

Images, UC QuakeStudies

A photograph of a model city at the Rebuild Central office on Lichfield Street. The model was created by members of the public as part of the Christchurch City Council's Transitional City consultation project.


Research papers, University of Canterbury Library

Background: This study examines the performance of site response analysis via nonlinear total-stress 1D wave propagation for modelling site effects in physics-based ground motion simulations of the 2010-2011 Canterbury, New Zealand earthquake sequence. This approach allows for explicit modelling of 3D ground motion phenomena at the regional scale, as well as detailed nonlinear site effects at the local scale. The approach is compared to a more commonly used empirical method for computing site amplification based on VS30 (the 30 m time-averaged shear-wave velocity), as proposed by Graves and Pitarka (2010, 2015), and to empirical ground motion prediction via a ground motion model (GMM).
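
To make the contrast concrete, the sketch below applies a simple frequency-dependent, Vs30-based amplification to a simulated surface waveform in the Fourier domain. The power-law exponents and crossover frequency are hypothetical placeholders, not the published Graves and Pitarka factors; the nonlinear 1D wave-propagation alternative would instead require a dedicated site-response code and a layered soil profile.

```python
# Illustrative Vs30-based site amplification applied in the Fourier
# domain. The exponents and crossover frequency are HYPOTHETICAL
# placeholders, not the Graves and Pitarka (2010, 2015) factors.
import numpy as np

def vs30_amplified(waveform, dt, vs30, vs30_ref=500.0,
                   alpha_lf=0.7, alpha_hf=0.4, f_split=1.0):
    """Scale Fourier amplitudes by (vs30_ref / vs30) ** alpha, with a
    different illustrative exponent below and above f_split (Hz)."""
    spec = np.fft.rfft(waveform)
    freqs = np.fft.rfftfreq(len(waveform), d=dt)
    amp = np.where(freqs < f_split,
                   (vs30_ref / vs30) ** alpha_lf,
                   (vs30_ref / vs30) ** alpha_hf)
    return np.fft.irfft(spec * amp, n=len(waveform))

# Example: amplify a synthetic record sampled at 100 Hz for a soft site
motion = np.random.default_rng(0).normal(size=4096)
surface = vs30_amplified(motion, dt=0.01, vs30=200.0)
```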

Research papers, University of Canterbury Library

In this paper we apply full waveform tomography (FWT) based on the adjoint-wavefield (AW) method to iteratively invert a 3-D geophysical velocity model for the Canterbury region (Lee, 2017) from a simple initial model. The seismic wavefields were generated by numerically solving the 3-D elastodynamic/visco-elastodynamic equations (using EMOD3D; Graves, 1996), and through the AW method, gradients of the model parameters (compression- and shear-wave velocity) were computed by cross-correlating the forward and backward wavefields. The reversed-in-time displacement residual was utilized as the adjoint source. For the inversion, we also account for near-source/station effects, gradient preconditioning, smoothing (a Gaussian filter in the spatial domain), and optimal step length. Simulation-to-observation misfit measurements based on 191 sources recorded at 78 seismic stations in the Canterbury region (Figure 1) were used in our inversion. The inversion process includes multiple frequency bands, starting from 0-0.05 Hz and advancing to higher frequency bands (0-0.1 Hz and 0-0.2 Hz). Each frequency band was used for up to 10 iterations, or until no optimal step length could be found. After three FWT inversion runs, the simulated seismograms computed using our final model match the observed seismograms well at frequencies from 0-0.2 Hz, and the normalized least-squares misfit error has been significantly reduced. Overall, this synthetic study shows that FWT can improve crustal velocity models built from existing geological models using seismic data from earthquakes in the Canterbury region.
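
The workflow described above can be summarised as a coarse-to-fine gradient-descent loop over frequency bands. The Python sketch below mirrors that loop; every helper function is a hypothetical stub standing in for the real forward solver (EMOD3D), adjoint computation, and post-processing, so this illustrates the structure of the inversion rather than a working implementation.

```python
# Schematic multi-band adjoint-wavefield FWT driver. All helpers are
# HYPOTHETICAL stubs for the real forward/adjoint solvers and
# post-processing; names and signatures are illustrative only.

def simulate_residuals(model, band):   # forward run, band-pass, sim - obs
    raise NotImplementedError
def adjoint_gradient(model, residuals):  # time-reversed residual as the
    raise NotImplementedError             # adjoint source; Vp/Vs kernels
def precondition(gradient):            # near source/station masking plus
    raise NotImplementedError          # Gaussian spatial smoothing
def line_search(model, gradient):      # optimal step length, or None
    raise NotImplementedError

def fwt_invert(model, bands=((0.0, 0.05), (0.0, 0.10), (0.0, 0.20)),
               max_iters=10):
    """Coarse-to-fine inversion: low frequencies first, up to 10
    iterations per band, stopping early if no step length is found."""
    for band in bands:
        for _ in range(max_iters):
            residuals = simulate_residuals(model, band)
            gradient = precondition(adjoint_gradient(model, residuals))
            step = line_search(model, gradient)
            if step is None:
                break
            # gradient-descent update of each parameter field (Vp, Vs)
            model = {k: v - step * gradient[k] for k, v in model.items()}
    return model
```

Starting from the lowest band mitigates cycle-skipping: long-period misfits are matched first, and the recovered long-wavelength structure then seeds the higher-frequency iterations.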

Articles, UC QuakeStudies

A document that outlines how timely and accurate information relating to estimates, actual project costs, future commitments, and total forecast cost will be managed and reported for each project phase in the programme.
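
The relationship among these reported quantities is commonly an additive rollup per phase; the sketch below assumes that standard practice. The field names and the formula are assumptions for illustration, not taken from the document itself.

```python
# Illustrative per-phase cost rollup; field names and the additive
# relationship are ASSUMPTIONS about standard cost-reporting practice.
from dataclasses import dataclass

@dataclass
class PhaseCosts:
    actual_to_date: float        # costs incurred so far
    committed: float             # future commitments (signed contracts)
    estimate_to_complete: float  # remaining uncommitted work

    @property
    def total_forecast_cost(self) -> float:
        return self.actual_to_date + self.committed + self.estimate_to_complete

print(PhaseCosts(1.2e6, 0.8e6, 0.5e6).total_forecast_cost)  # 2500000.0
```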

Research papers, The University of Auckland Library

This thesis presents the application of data science techniques, especially machine learning, for the development of seismic damage and loss prediction models for residential buildings. Current post-earthquake building damage evaluation forms are developed with a particular country in mind. The lack of consistency hinders the comparison of building damage between different regions. A new paper form has been developed to address the need for a globally universal methodology for post-earthquake building damage assessment. The form was successfully trialled in the street ‘La Morena’ in Mexico City following the 2017 Puebla earthquake. Aside from developing a framework for better input data for performance-based earthquake engineering, this project also extended current techniques to derive insights from post-earthquake observations. Machine learning (ML) was applied to seismic damage data of residential buildings in Mexico City following the 2017 Puebla earthquake and in Christchurch following the 2010-2011 Canterbury earthquake sequence (CES). The experience showed that it is readily possible to develop models driven purely by empirical data that can successfully identify key damage drivers and hidden underlying correlations without prior engineering knowledge. With adequate maintenance, such models have the potential to be rapidly and easily updated, allowing improved damage and loss prediction accuracy and greater generalisability. Of the ML models developed for the key events of the CES, the model trained using data from the 22 February 2011 event generalised best for loss prediction. This is thought to be because of the large number of instances available for this event and the relatively limited class imbalance between the categories of the target attribute. For the CES, ML highlighted the importance of peak ground acceleration (PGA), building age, building size, liquefaction occurrence, and soil conditions as the main factors affecting losses in residential buildings in Christchurch. ML also highlighted the influence of liquefaction on the building losses related to the 22 February 2011 event. Further to the ML model development, the application of post-hoc methodologies was shown to be an effective way to derive insights from ML algorithms that are not intrinsically interpretable. Overall, these provide a basis for the development of ‘greybox’ ML models.
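
A minimal sketch of the kind of ‘greybox’ workflow described, assuming a scikit-learn-style pipeline: a tree ensemble is trained on building records, and a post-hoc method (permutation importance) ranks the damage drivers. The feature names and target label are illustrative, not the thesis's actual data schema.

```python
# Hedged sketch: train a tree ensemble on post-earthquake building
# records, then rank damage drivers with a post-hoc method
# (permutation importance). Feature/target names are ILLUSTRATIVE.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

FEATURES = ["pga_g", "building_age_yr", "floor_area_m2",
            "liquefaction_observed", "soil_class"]  # assumed numeric-encoded

def train_and_explain(df: pd.DataFrame, target: str = "loss_band"):
    X, y = df[FEATURES], df[target]
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y,
                                              random_state=0)
    model = RandomForestClassifier(n_estimators=300,
                                   random_state=0).fit(X_tr, y_tr)
    # Post-hoc interpretation: score drop when each feature is shuffled
    imp = permutation_importance(model, X_te, y_te,
                                 n_repeats=10, random_state=0)
    ranking = sorted(zip(FEATURES, imp.importances_mean),
                     key=lambda kv: -kv[1])
    return model, ranking
```

Evaluating the importance ranking on held-out data, as here, is what lets an otherwise opaque ensemble surface drivers such as PGA or liquefaction occurrence without prior engineering assumptions.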