
found 13 results

Research papers, Lincoln University

After 160 years of colonial settlement, Christchurch has recently experienced a sequence of devastating earthquakes and seen the need for widespread de- and re-construction of the central city, as well as many of the surrounding neighbourhoods and peri-urban satellite settlements. This paper offers a view of the opportunities for, and restrictions on, the post-earthquake redevelopment of Christchurch as informed by ‘growth machine’ theory. A case study investigating an illegal dump in central Christchurch is used to assess the applicability of growth machine theory to the current disaster response.

Research papers, University of Canterbury Library

Unreinforced masonry (URM) structures comprise a majority of the global built heritage. The masonry heritage of New Zealand is comparatively younger than its European counterparts. In a country facing frequent earthquakes, URM buildings are prone to extensive damage and collapse. The Canterbury earthquake sequence proved the same, causing damage to over _% of buildings. The ability to assess the severity of building damage is essential for emergency response and recovery. Following the Canterbury earthquakes, the damaged buildings were categorized into various damage states using the EMS-98 scale. This article investigates machine learning techniques such as k-nearest neighbors, decision trees, and random forests to rapidly assess earthquake-induced building damage. The damage data from the Canterbury earthquake sequence are used to train the forecast model, and the performance of each machine learning technique is evaluated using the remaining (test) data. Having achieved high accuracy, the model is then run on a building database collected for Dunedin to predict the expected damage during a rupture of the Akatore Fault.
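The classifier comparison described in this abstract can be sketched with scikit-learn. The data below are synthetic and the predictors (PGA, storey count, construction year) are illustrative assumptions, not the paper's actual Canterbury damage dataset.

```python
# Hedged sketch: comparing the three classifiers named in the abstract on
# synthetic building-damage data. Features and damage grades are invented
# for illustration only.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.uniform(0.1, 1.2, n),      # PGA (g) - assumed predictor
    rng.integers(1, 6, n),         # number of storeys - assumed predictor
    rng.integers(1880, 1990, n),   # year built - assumed predictor
])
# Synthetic EMS-98-style damage grade (0-4), loosely tied to the predictors.
y = np.clip((X[:, 0] * 4 + (1990 - X[:, 2]) / 50
             + rng.normal(0, 0.5, n)).astype(int), 0, 4)

# Hold out test data, as the abstract describes.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
models = {
    "k-NN": KNeighborsClassifier(n_neighbors=5),
    "Decision tree": DecisionTreeClassifier(random_state=0),
    "Random forest": RandomForestClassifier(n_estimators=100, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name}: test accuracy = {model.score(X_te, y_te):.2f}")
```

A model trained this way could, in principle, be applied to a second building inventory (as the paper does for Dunedin), provided the same predictors are available for every building.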

Research papers, University of Canterbury Library

Natural catastrophes are increasing worldwide. They are becoming more frequent, more severe, and more damaging to the built environment, leading to extensive damage and losses. Earthquake events account for the smallest share of natural events; nevertheless, seismic damage caused the most fatalities and significant losses over the period 1981-2016 (Munich Re). Damage prediction is helpful for emergency management and the development of earthquake risk mitigation projects. Recent design efforts have focused on performance-based design, where damage estimation methodologies use fragility and vulnerability functions. However, this approach does not explicitly specify the essential criteria leading to economic losses. There is thus a need for an improved methodology that identifies the critical building elements related to significant losses. The methodology presented here uses data science techniques to identify key building features that contribute to the bulk of losses. It uses empirical data collected on site during earthquake reconnaissance missions to train a machine learning model that can further be used for the estimation of building damage post-earthquake. The first model is developed for Christchurch. Empirical building damage data from the 2010-2011 earthquake events are analysed to find the building features that contributed the most to damage. Once processed, the data are used to train a machine-learning model that can be applied to estimate losses in future earthquake events.

Research papers, University of Canterbury Library

After a high-intensity seismic event, inspections of structural damage need to be carried out as soon as possible in order to optimize emergency management and improve recovery time. In current practice, damage inspections are performed by an experienced engineer who physically inspects the structures. This approach not only requires a significant amount of time and highly skilled human resources, but also raises concerns about the inspector’s safety. A promising alternative is the use of new technologies, such as drones and artificial intelligence, which can perform part of the damage classification task. Drones can safely access high-hazard components of structures, for instance bridge piers or abutments, and perform the reconnaissance using high-resolution cameras. Furthermore, the images can be automatically processed by machine learning algorithms and damage detected. In this paper, the possibility of applying such technologies to inspecting New Zealand bridges is explored. First, a machine-learning model for damage detection through image analysis is presented. Specifically, the algorithm was trained to recognize cracks in concrete members. A sensitivity analysis was carried out to evaluate the algorithm's accuracy using database images. Depending on the confidence level desired, i.e. by allowing manual classification where the algorithm's confidence is below a specific tolerance, the accuracy was found to reach up to 84.7%. In the second part, the model is applied to detect the damage observed on the Anzac Bridge (GPS coordinates -43.500865, 172.701138) in Christchurch through a drone reconnaissance. Results show that the accuracy of the damage detection was 88% and 63% for cracking and spalling, respectively.

Research papers, The University of Auckland Library

The rapid classification of building damage states or placards after an earthquake is vital for enabling an efficient emergency response and informed decision-making for rehabilitation and recovery purposes. Traditional methods rely heavily on inspector-led on-site surveys, which are often time-consuming, resource-intensive, and susceptible to human error. This study introduces a machine learning-supported surrogate model designed to streamline the assessment of building damage, focusing on the automated assignment of damage placards within the context of New Zealand's post-earthquake evaluation frameworks. The study evaluates two key safety evaluation protocols—Rapid Building Assessment (RBA) and Detailed Damage Evaluation (DDE)—and integrates corresponding databases derived from the 2010–2011 Canterbury Earthquake Sequence (CES) in Christchurch. Six ML classifiers—Multilayer Perceptron (MLP), Random Forest (RF), Support Vector Machine (SVM), K-Nearest Neighbours (KNN), Gradient Boosting Classifier (GBC), and Gradient Bagging (GBag)—were rigorously tested across both databases. The results indicate that the RF-based surrogate model outperforms the other classifiers across both RBA and DDE protocols. Two distinct sets of critical predictors have been further identified for each protocol, allowing for the rapid retrieval of essential data for future on-site surveys, while retaining the RF model's predictive accuracy. The developed surrogate model provides a pragmatic tool for practising engineers to rapidly assign placards to damaged structures and for policymakers and building owners to make informed recovery decisions for earthquake-affected buildings.
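A minimal sketch of the random-forest surrogate idea described above: train an RF classifier on placard labels and rank predictors by impurity-based importance, which is one common way to shortlist "critical predictors". All feature names and labels here are synthetic placeholders, not the study's actual RBA/DDE database fields.

```python
# Hedged sketch: random-forest placard classifier with feature ranking.
# Data and feature names are invented; only the technique mirrors the abstract.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 800
features = ["pga", "storeys", "year_built", "wall_type", "roof_type"]
X = rng.random((n, len(features)))
# Synthetic placard label (0=green, 1=yellow, 2=red), driven mostly by "pga".
y = np.digitize(X[:, 0] + 0.2 * rng.standard_normal(n), [0.4, 0.8])

rf = RandomForestClassifier(n_estimators=200, random_state=1).fit(X, y)
# Rank predictors by impurity-based importance (descending).
ranked = sorted(zip(features, rf.feature_importances_), key=lambda t: -t[1])
for name, imp in ranked:
    print(f"{name}: {imp:.3f}")
```

In a real surrogate-model workflow, the top-ranked predictors would define the minimal set of fields to collect in future on-site surveys while keeping the classifier's accuracy, as the abstract describes.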

Research papers, University of Canterbury Library

The Canterbury Earthquake Sequence (CES) induced extensive damage in residential buildings and led to over NZ$40 billion in total economic losses. Due to the unique insurance setting in New Zealand, up to 80% of the financial losses were insured. Over the CES, the Earthquake Commission (EQC) received more than 412,000 insurance claims for residential buildings. The 4 September 2010 earthquake is the event for which most of the claims were lodged, with more than 138,000 residential claims for this event alone. This research project uses the EQC claim database to develop a seismic loss prediction model for residential buildings in Christchurch. It uses machine learning to create a procedure capable of highlighting the critical features that most affected building losses. Further study of those features enables the generation of insights that can be used by various stakeholders, for example, to better understand the influence of a structural system on building loss or to select appropriate risk mitigation measures. Prior to training the machine learning model, the claim dataset was supplemented with additional data sourced from private and open-access databases giving complementary information on building characteristics, seismic demand, liquefaction occurrence and soil conditions. This poster presents the results of a machine learning model trained on a merged dataset using residential claims from the 4 September 2010 event.

Research papers, The University of Auckland Library

During the recent devastating earthquakes in Christchurch, many residential houses were damaged due to widespread liquefaction of the ground. In-situ testing is widely used as a convenient method for evaluating the liquefaction potential of soils. The cone penetration test (CPT) and standard penetration test (SPT) are the two popular in-situ tests widely used in New Zealand for site characterization. The Screw Driving Sounding (SDS) method is a relatively new technique developed in Japan, consisting of a machine that drills a rod into the ground by applying torque at seven steps of axial loading. The machine continuously measures the required torque, load, speed of penetration and rod friction during the test, and can therefore give a clear overview of the soil profile along the depth of penetration. In this paper, based on a number of SDS tests conducted in Christchurch, a correlation was developed between the CPT tip resistance and SDS parameters for layers with different fines contents. Moreover, using the obtained correlation, a chart was proposed that relates the cyclic resistance ratio to the appropriate SDS parameter. Using the proposed chart, the liquefaction potential of soil can be estimated directly from SDS data. As the SDS method is a simpler, faster and more economical test than the CPT and SPT, it can be a reliable alternative in-situ test for soil characterization, especially for residential house construction.

Research papers, Victoria University of Wellington

The suburb of New Brighton in Christchurch, Aotearoa was once a booming retail sector until the end of its exclusive right to Saturday trading in 1980 and the aftermath of the devastating 2011 Christchurch earthquake. The suburb was hit particularly hard and fell into economic collapse, partly brought on by the nature of its economic structure. This implosion created an urban crisis in which people and businesses abandoned the suburb and its once-booming commercial economy. As a result, New Brighton has been left with the residue of abandoned infrastructure and commercial paraphernalia such as billboards, ATMs, commercial facades, and shopping trolleys that, as abandoned fragments, no longer contribute to culture, society and the economy. This design-led research investigation proposes to repurpose the broken objects that were left behind. By strategically selecting objects that are symbols of the root cause of the economic devastation, the repurposed and re-contextualised fragments seek to allegorically expose the city’s destructive economic narrative, while providing a renewed sense of place identity for the people. This design-led thesis investigation argues that the seemingly innocuous icons of commercial industry, such as billboards, ATMs, commercial facades, and shopping trolleys, are intended to act as lures to encourage people to spend money; ultimately, these urban and architectural lures can contribute to economic devastation. The aim of this investigation is to repurpose abandoned fragments of capitalist infrastructure in ways that can help to unveil new possibilities for a disrupted community and enhance their awareness of what led to the urban disruption.
The thesis proposes to achieve this research aim by exploring three principal research objectives: 1) to assimilate and re-contextualise disconnected urban fragments into new architectural interventions; 2) to anthropomorphise these new interventions so that they are recognisable as architectural ‘inhabitants’, the storytellers of the urban context; and 3) to curate these new architectural interventions in ways that enable a community-scale allegorical and didactic experience to be recognised.

Research papers, The University of Auckland Library

This thesis presents the application of data science techniques, especially machine learning, to the development of seismic damage and loss prediction models for residential buildings. Current post-earthquake building damage evaluation forms are developed with a particular country in mind. This lack of consistency hinders the comparison of building damage between different regions. A new paper form has been developed to address the need for a globally universal methodology for post-earthquake building damage assessment. The form was successfully trialled in the street ‘La Morena’ in Mexico City following the 2017 Puebla earthquake. Aside from developing a framework for better input data for performance-based earthquake engineering, this project also extended current techniques to derive insights from post-earthquake observations. Machine learning (ML) was applied to seismic damage data for residential buildings in Mexico City following the 2017 Puebla earthquake and in Christchurch following the 2010-2011 Canterbury earthquake sequence (CES). The experience showcased that it is readily possible to develop models driven purely by empirical data that can successfully identify key damage drivers and hidden underlying correlations without prior engineering knowledge. With adequate maintenance, such models have the potential to be rapidly and easily updated to allow improved damage and loss prediction accuracy and greater ability for models to be generalised. Of the ML models developed for the key events of the CES, the model trained using data from the 22 February 2011 event generalised the best for loss prediction. This is thought to be because of the large number of instances available for this event and the relatively limited class imbalance between the categories of the target attribute.
For the CES, ML highlighted the importance of peak ground acceleration (PGA), building age, building size, liquefaction occurrence, and soil conditions as the main factors affecting losses in residential buildings in Christchurch. ML also highlighted the influence of liquefaction on building losses related to the 22 February 2011 event. Beyond the ML model development, the application of post-hoc methodologies was shown to be an effective way to derive insights from ML algorithms that are not intrinsically interpretable. Overall, these provide a basis for the development of ‘greybox’ ML models.

Research papers, The University of Auckland Library

The Screw Driving Sounding (SDS) method developed in Japan is a relatively new in-situ testing technique to characterise soft shallow sites, typically those required for residential house construction. An SDS machine drills a rod into the ground in several loading steps while the rod is continuously rotated. Several parameters, such as torque, load and speed of penetration, are recorded at every rotation of the rod. The SDS method has been introduced in New Zealand, and the results of its application to characterising local sites are discussed in this study. A total of 164 SDS tests were conducted in Christchurch, Wellington and Auckland to validate and adjust the methodologies originally developed from Japanese practice. Most of the tests were conducted at sites where cone penetration tests (CPT), standard penetration tests (SPT) and borehole logs were available; the comparison of SDS results with this existing information showed that the SDS method has great potential as an in-situ testing method for classifying soils. By compiling the SDS data from the three cities and comparing them with the borehole logs, a soil classification chart was generated for identifying soil type from SDS parameters. A correlation between fines content and SDS parameters was also developed, and a procedure for estimating the angle of internal friction of sand using SDS parameters was investigated. Furthermore, a correlation was made between the tip resistance of the CPT and the SDS data for different percentages of fines content. A relationship between the SPT N value and an SDS parameter was also proposed. This thesis also presents a methodology for identifying liquefiable soil layers using SDS data. SDS tests were performed in both liquefied and non-liquefied areas of Christchurch to find a representative parameter and relationship for predicting the liquefaction potential of soil.
Plots were drawn of the cyclic shear stress ratios (CSR) induced by the earthquakes against the corresponding energy of penetration during the SDS tests. By identifying liquefied and unliquefied layers using three popular CPT-based methods, boundary lines corresponding to various probabilities of liquefaction occurrence were developed for different ranges of fines content using logistic regression analysis; these can then be used for estimating the liquefaction potential of soil directly from SDS data. Finally, the drilling process involved in screw driving sounding was simulated using Abaqus software. The analysis results showed that the model successfully captured the drilling process of the SDS machine in sand. In addition, a chart to predict the peak friction angle of sandy sites from measured SDS parameters at various vertical effective stresses was formulated. As a simple, fast and economical test, the SDS method can be a reliable alternative in-situ test for soil and site characterisation, especially for residential house construction.
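The logistic-regression step described above can be sketched as follows, on synthetic data; the thesis's actual SDS penetration-energy parameter, CPT-based labelling, and fitted boundary lines are not reproduced here.

```python
# Hedged sketch: fit P(liquefaction) as a function of CSR and a penetration-
# energy parameter, then extract a P=0.5 boundary line. All data and units
# are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 500
csr = rng.uniform(0.05, 0.5, n)      # cyclic stress ratio
energy = rng.uniform(10, 200, n)     # penetration energy (illustrative units)
# Synthetic labels: high CSR and low penetration energy -> liquefied.
logit = 10 * csr - 0.03 * energy + rng.normal(0, 0.5, n)
liquefied = (logit > 0).astype(int)

model = LogisticRegression(max_iter=1000).fit(
    np.column_stack([csr, energy]), liquefied)
# The P=0.5 boundary satisfies w0 + w1*csr + w2*energy = 0; solve for CSR.
w1, w2 = model.coef_[0]
w0 = model.intercept_[0]
csr_at = lambda e: -(w0 + w2 * e) / w1
print(f"CSR on the P=0.5 boundary at energy=100: {csr_at(100):.3f}")
```

Boundaries for other probabilities (e.g. P=0.15 or P=0.85, as is common in liquefaction-triggering charts) follow by setting the logit to the corresponding log-odds instead of zero.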

Research papers, The University of Auckland Library

A number of field testing techniques, such as the standard penetration test (SPT), cone penetration test (CPT), and Swedish weight sounding (SWS), are popularly used for in-situ characterisation. The screw driving sounding (SDS) method, recently developed in Japan, is an improved version of the SWS technique and measures more parameters, including the required torque, load, speed of penetration and rod friction; these provide a more robust way of characterising soil stratigraphy. It is a cost-efficient technique which uses a machine-driven and portable device, making it ideal for testing in small-scale and confined areas. Moreover, with a testing depth of up to 10-15 m, it is suitable for liquefaction assessment. Thus, the SDS method has great potential as an in-situ testing method for geotechnical site characterisation, especially for residential house construction. In this paper, the results of SDS tests performed at a variety of sites in New Zealand are presented. The soil database was employed to develop a soil classification chart based on SDS-derived parameters. Moreover, using data obtained following the 2010-2011 Canterbury Earthquake Sequence, a methodology was established for liquefaction potential evaluation using SDS data. http://www.isc5.com.au/wp-content/uploads/2016/09/1345-2-ORENSE.pdf

Research papers, University of Canterbury Library

High-quality ground motion records are required for engineering applications including response history analysis, seismic hazard development, and validation of physics-based ground motion simulations. However, the determination of whether a ground motion record is high-quality is poorly handled by automation with mathematical functions and can become prohibitive if done manually. Machine learning applications are well-suited to this problem, and a previous feed-forward neural network was developed (Bellagamba et al. 2019) to determine high-quality records from small crustal events in the Canterbury and Wellington regions for simulation validation. This prior work was, however, limited by the omission of moderate-to-large magnitude events and those from other tectonic environments, as well as a lack of explicit determination of the minimum usable frequency of the ground motion. To address these shortcomings, an updated neural network was developed to predict the quality of ground motion records for all magnitudes and all tectonic sources—active shallow crustal, subduction intraslab, and subduction interface—in New Zealand. The predictive performance of the previous feed-forward neural network was matched by the new neural network in the domain of small crustal records, and this level of predictive performance is now extended to all source magnitudes and types in New Zealand, making the neural network applicable to global ground motion databases. Furthermore, the neural network provides quality and minimum usable frequency predictions for each of the three orthogonal components of a record, which may then be mapped into a binary quality decision or otherwise applied as desired. This framework provides flexibility for the end user to predict high-quality records with various acceptability thresholds, allowing for this neural network to be used in a range of applications.

Research papers, University of Canterbury Library

According to NZS 1170.5, designing a building to satisfy code-prescribed criteria (e.g., drift limit, member safety, P-Δ stability) at the ultimate limit state and relying on the inherent margins within the design code would lead to an acceptable mean annual frequency of collapse (λ꜀) in the range of 10⁻⁴ to 10⁻⁵. Modern performance objectives, such as λ꜀ and expected annual loss (EAL), are not explicitly considered. Although buckling-restrained braced frame (BRBF) buildings were widely adopted as lateral load-resisting systems for office and car park buildings in the Christchurch rebuild following the Canterbury earthquakes in New Zealand, there are currently no official guidelines for their design. The primary focus of this study is to develop a risk-targeted design framework for BRBF buildings that can achieve the performance objectives desired by stakeholders. To this end, key factors influencing the λ꜀ and EAL of BRBF buildings are identified. These factors include gusset plate design, number of storeys, design drift limit, BRBF beam-column connection, brace configuration, brace angle, brace material grade, and analysis method (equivalent lateral force vs. modal response spectrum). A novel 3D BRBF modelling approach capable of simulating out-of-plane buckling failure of buckling-restrained brace (BRB) gusset plates is developed. Prior experimental studies on sub-assemblies conducted elsewhere have demonstrated that gusset plates and end zones may buckle out of plane prematurely, before BRBs reach their maximum axial compression load carrying capacity. Current 2D BRBF macro models, typically used in research, cannot simulate this failure mode. A conventional 2D BRBF model underestimates the λ꜀ of a case-study 4-storey super-X configured steel BRBF building (designed according to NZS-3404) by a factor of two compared to the estimate from the proposed 3D model.
These findings suggest that the current NZS-3404 gusset plate design method may undersize gusset plates and that using a 2D BRBF model in this case can significantly underestimate λ꜀. Three improved alternative gusset plate design methods that are easy to implement in practice are identified from the literature. Gusset plates in two case-study 4-storey steel BRBF buildings with super-X and diagonal configurations are designed using both the NZS-3404 method and alternative methods. All three alternative design methods are found to be conservative, resulting in an almost three-fold lower λ꜀ for both case-study BRBF buildings compared to those designed using the NZS-3404 method. Analysis results indicate that (i) bidirectional interaction has no significant effect on gusset plate buckling and (ii) mid-span gusset plates are more susceptible to buckling than corner gusset plates. A framework for seismic loss assessment using incremental dynamic analysis (IDA), called loss-oriented hazard-consistent incremental dynamic analysis (LOHC-IDA), is developed. IDA can be conducted with a generic record set, eliminating the arduous site-specific record selection required to conduct multiple stripe analysis (MSA). Traditional IDA, however, is limited in producing hazard-consistent estimates of engineering demand parameters (EDPs), which LOHC-IDA overcomes. LOHC-IDA improves upon existing methods by: (i) incorporating correlations among engineering demand parameters across intensity levels and (ii) using peak ground acceleration (PGA) to predict peak floor acceleration (PFA). For two case-study steel BRBF buildings, LOHC-IDA estimates the EAL and loss distributions conditioned on the intensity level that closely match the MSA results, with an average absolute error of 5%. The influence of factors beyond gusset plate design on the λ꜀ and EAL of 26 case-study steel BRBF buildings (designed in accordance with NZS 1170.5) is examined.
Hazard-consistent λ꜀ and EAL for these buildings are estimated using the FEMA P-58 loss and risk assessment framework. Among the 26 case-study buildings, 23 satisfy the maximum code-specified λ꜀ limit of 10⁻⁴. The EAL, normalised by the total building replacement cost, is highest for 2-storey BRBFs (0.22% on average), followed by 4-storey BRBFs (0.16% on average) and 8-storey BRBFs (0.11% on average). Reducing the design drift limit has the most significant effect on lowering λ꜀ (all BRBF designs were drift governed), followed by transitioning from pinned to moment-resisting beam-column connections, reducing the brace angle, and increasing brace strength. BRBF buildings designed using the equivalent lateral force method, on average, have a lower λ꜀ compared to those designed using the modal response spectrum method. Diagonally configured BRBFs exhibit the lowest λ꜀, followed by super-X and chevron configured BRBFs. Most design variables, apart from drift limit and beam-column connection, have limited influence on EAL. A simple method for EDP-targeted design of steel BRBF buildings is proposed. For this purpose, linear regression and CatBoost machine learning models are developed to predict steel BRBF building EDPs using peak storey drift ratio (PSDR) and PFA estimates from the 26 case-study buildings at intensity levels ranging from 80% to 0.5% probability of exceedance in 50 years. The adjusted R² of these models is around 0.98, while the average prediction error is less than 10%. Fundamental period (T₁), total building height (Hₜ), and pseudospectral acceleration at T₁, denoted as Sₐ(T₁), are selected as the features to predict PSDR, while T₁, Hₜ, and PGA are the features selected to predict PFA.
The EDP-targeted design has three steps: (i) for a given Hₜ value, the PSDR prediction model is used to identify a suitable T₁ that can achieve a desired PSDR target at the design intensity, (ii) a force-based design is then conducted iteratively to achieve the target T₁ by using an appropriate ductility factor and design drift limit, and (iii) based on the T₁ in the final design iteration, the PFA demand estimated by the PFA prediction models is used as a conservative input for the design of acceleration-sensitive non-structural elements. An equation to predict λ꜀ at the design stage is proposed for collapse risk-targeted seismic design of buildings. This equation comprises three principal components: reserve building strength, a proxy for effective structural stiffness, and reserve building deformation capacity. This equation is calibrated for the collapse risk-targeted design of BRBF buildings in New Zealand using results from 26 case-study BRBF buildings. The validity of this equation is demonstrated with three design verification examples designed to specific λ꜀ targets. Considering λ꜀ from hazard-consistent incremental dynamic analysis as the benchmark, the mean absolute percentage error in the design-stage prediction of λ꜀ of the verification buildings is approximately 10%.
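The EDP-prediction idea underlying the design method above can be sketched as an ordinary linear regression of PSDR on T₁, Hₜ, and Sₐ(T₁). The data below are synthetic; the thesis's CatBoost models, feature selection, and reported accuracies are not reproduced.

```python
# Hedged sketch: regress peak storey drift ratio on the three features the
# abstract names. Data and coefficients are invented for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n = 300
T1 = rng.uniform(0.3, 2.0, n)   # fundamental period (s)
Ht = rng.uniform(8, 30, n)      # total building height (m)
Sa = rng.uniform(0.1, 1.5, n)   # Sa(T1) (g)
# Synthetic PSDR (%) rising with Sa and T1; purely illustrative.
psdr = 0.5 * Sa * T1 + 0.01 * Ht + rng.normal(0, 0.05, n)

X = np.column_stack([T1, Ht, Sa])
reg = LinearRegression().fit(X, psdr)
print(f"R^2 on training data: {reg.score(X, psdr):.3f}")

# Step (i) of the design method: for a given Ht and design-intensity Sa,
# scan candidate T1 values for one meeting a target PSDR.
candidates = np.linspace(0.3, 2.0, 50)
preds = reg.predict(np.column_stack([
    candidates, np.full(50, 20.0), np.full(50, 0.8)]))
feasible = candidates[preds <= 0.8]   # e.g. PSDR target of 0.8%
print(f"{len(feasible)} candidate periods satisfy the drift target")
```

An iterative force-based design would then tune member sizes to hit the chosen T₁, with the companion PFA model supplying acceleration demands for non-structural elements, as the abstract's three-step procedure describes.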