
found 169 results

Research papers, University of Canterbury Library

In the period between September 2010 and December 2011, Christchurch was shaken by a series of strong earthquakes including the Mw 7.1 4 September 2010, Mw 6.2 22 February 2011, Mw 6.2 13 June 2011 and Mw 6.0 23 December 2011 earthquakes. These earthquakes produced very strong ground motions throughout the city and surrounding areas that resulted in soil liquefaction and lateral spreading, causing substantial damage to buildings, infrastructure and the community. The stopbank network along the Kaiapoi and Avon Rivers suffered extensive damage, with repairs projected to take several years to complete. This presented an opportunity to undertake a regional-scale case study of the effects of liquefaction on a stopbank system. Ultimately, this information can be used to determine simple performance-based concepts that can be applied in practice to improve the resilience of river protection works. The research presented in this thesis draws from data collected following the 4 September 2010 and 22 February 2011 earthquakes. The stopbank damage is categorised into seven key deformation modes that were interpreted from aerial photographs, consultant reports, damage photographs and site visits. Each deformation mode provides an assessment of the observed mechanism of failure behind liquefaction-induced stopbank damage and the factors that influence a particular style of deformation. The deformation modes have been used to create a severity classification for the whole stopbank system, comprising ‘no or low damage’ and ‘major to severe damage’, in order to discriminate the indicators and factors that contribute to ‘major to severe damage’ from the factors that contribute to all levels of damage. A number of calculated land damage, stopbank damage and geomorphological parameters were analysed and compared at 178 locations along the Kaiapoi and Avon River stopbank systems. A critical liquefiable layer was present at every location, with relatively consistent geotechnical parameters (cone resistance (qc), soil behaviour type (Ic) and Factor of Safety (FoS)) across the study site. In 95% of the cases the critical layer occurred within two times the Height of the Free Face (HFF). A statistical analysis of the geotechnical factors relating to the critical layer was undertaken in order to find correlations between specific deformation modes and geotechnical factors. It was found that each individual deformation mode involves a complex interplay of factors that are difficult to represent through correlative analysis. There was, however, sufficient data to derive the key factors that have affected the severity of deformation. It was concluded that stopbank damage is directly related to the presence of liquefaction in the ground materials beneath the stopbanks, but that liquefaction is not critical in determining the type or severity of damage; it is merely the triggering mechanism. Once liquefaction is triggered, it is the gravity-induced deformation that causes the damage rather than the shaking duration. Lateral spreading, and specifically the depositional setting, was found to be the key aspect in determining the severity and type of deformation along the stopbank system. The presence or absence of abandoned or old river channels and point bar deposits was found to significantly influence the severity and type of deformation. A review of digital elevation models and old maps along the Kaiapoi River found that all of the ‘major to severe’ damage observed occurred within or directly adjacent to an abandoned river channel, while a review of the geomorphology along the Avon River showed that every location within a point bar deposit suffered some form of damage, because the depositional environment created a deposit highly susceptible to liquefaction.
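For context, the CPT-based geotechnical parameters referred to in this abstract (qc, Ic, FoS) are typically tied together through a simplified liquefaction triggering check of the following general form; this is a generic Seed-Idriss-type sketch, not necessarily the exact formulation used in the thesis:

```latex
% Generic simplified liquefaction triggering check (illustrative form only)
\[
\mathrm{CSR} = 0.65\,\frac{a_{\max}}{g}\,\frac{\sigma_{v0}}{\sigma'_{v0}}\,r_d ,
\qquad
\mathrm{FoS} = \frac{\mathrm{CRR}_{M7.5}\cdot \mathrm{MSF}}{\mathrm{CSR}}
\]
```

Here $a_{\max}$ is the peak ground surface acceleration, $\sigma_{v0}$ and $\sigma'_{v0}$ are the total and effective vertical stresses at the depth of the critical layer, $r_d$ is a depth-reduction factor, $\mathrm{CRR}_{M7.5}$ is the cyclic resistance ratio estimated from the CPT cone resistance $q_c$ and soil behaviour type index $I_c$, and $\mathrm{MSF}$ is a magnitude scaling factor; layers with $\mathrm{FoS} < 1$ are predicted to liquefy.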

Research papers, University of Canterbury Library

Recent earthquakes have highlighted the vulnerability of existing structures to seismic loading. Current seismic retrofit strategies generally focus on increasing the strength/stiffness in order to upgrade the seismic performance of a structure or element. A typical drawback of this approach is that the demand on the structural and sub-structural elements can be increased. This is of particular importance when considering the foundation capacity, which may already be insufficient to allow the full capacity of the existing wall to develop (due to early codes being gravity load orientated). In this thesis a counter-intuitive but rational seismic retrofit strategy, termed "selective weakening", is introduced and investigated. This is the first stage of an ongoing research project underway at the University of Canterbury which is focusing on developing selective weakening techniques for the seismic retrofit of reinforced concrete structures. In this initial stage the focus is on developing selective weakening for the seismic retrofit of structural walls. This is performed using a series of experimental, analytical and numerical investigations. A procedure for the assessment of existing structural walls is also compiled, based on the suggestions of currently available code provisions. A selective weakening intervention is performed within an overall performance-based retrofit approach with the aim of improving the inelastic behaviour by first reducing the strength/stiffness of specific members within the structural system. This will be performed with the intention of modifying a shear type behaviour towards a flexural type behaviour. As a result the demand on the structural member will be reduced. Once weakening has been implemented the designer can use the wide range of techniques and materials available (e.g. use of FRP, jacketing or shotcrete) to ensure that adequate characteristics are achieved. While doing so, it must be ensured that the structure meets specific performance criteria and the principles of capacity design. A target of the retrofit technique is the ability to introduce the characteristics of recently developed high-performance seismic resisting systems, combining self-centring and dissipative behaviour (commonly referred to as a hybrid system). In this thesis, results of experimental investigations performed on benchmark and selectively weakened walls are discussed. The investigations consisted of quasi-static cyclic uni-directional tests on two benchmark and two retrofitted cantilever walls. The first benchmark wall is detailed as typical of pre-1970s construction practice. An equivalent wall is retrofitted using a selective weakening approach involving a horizontal cut at foundation level to allow for a rocking response. The second benchmark wall represents a more severe scenario where the inelastic behaviour is dominated by shear. A retrofit solution involving vertically segmenting the wall to improve the ductility and retain gravity-carrying capacity by inducing a flexural response is implemented. Numerical investigations on a multi-storey wall system are performed using non-linear time history analysis on SDOF and MDOF lumped plasticity models, representing an as-built and retrofitted prototype structure. Calibration of the hysteretic response to experimental results is carried out (accounting for pinching and strength degradation). The sensitivity of maximum and residual drifts to p-delta and strength degradation is monitored, along with the sensitivity of the peak base shear to higher mode effects. The results of the experimental and analytical investigations confirmed the feasibility and viability of the proposed retrofit technique, towards improving the seismic performance of structural walls.
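As a minimal illustration of the kind of SDOF non-linear time history analysis described above, the sketch below integrates an elastic-perfectly-plastic oscillator with central-difference time stepping. The record, period, damping and yield force are placeholder values, and the simple EPP spring stands in for the calibrated pinching/degrading hysteresis used in the thesis:

```python
# Illustrative SDOF nonlinear time-history sketch (not the thesis models).
# Elastic-perfectly-plastic hysteresis, central-difference integration.
import numpy as np

def sdof_epp_response(ag, dt, period=0.5, damping=0.05, fy=2.0, mass=1.0):
    """Displacement history of an elastic-perfectly-plastic SDOF oscillator."""
    k = 4.0 * np.pi**2 * mass / period**2        # initial (elastic) stiffness
    c = 2.0 * damping * np.sqrt(k * mass)        # viscous damping coefficient
    n = len(ag)
    u = np.zeros(n)                              # displacement history
    fs = 0.0                                     # hysteretic spring force
    a0 = (-mass * ag[0] - fs) / mass             # initial acceleration (u = v = 0)
    u_im1 = 0.5 * dt**2 * a0                     # fictitious displacement at t = -dt
    k_hat = mass / dt**2 + c / (2.0 * dt)
    for i in range(n - 1):
        if i > 0:
            # incremental elastic update of the spring force, clipped at yield
            fs = np.clip(fs + k * (u[i] - u[i - 1]), -fy, fy)
        p_hat = (-mass * ag[i] - fs
                 + (2.0 * mass / dt**2) * u[i]
                 - (mass / dt**2 - c / (2.0 * dt)) * u_im1)
        u[i + 1] = p_hat / k_hat
        u_im1 = u[i]
    return u

# illustrative run on a synthetic pulse-like acceleration record (m/s^2)
dt = 0.01
t = np.arange(0.0, 20.0, dt)
ag = 3.0 * np.exp(-((t - 5.0) / 1.5)**2) * np.sin(2.0 * np.pi * t)
u = sdof_epp_response(ag, dt)
print(f"peak displacement {u.max():.3f} m, residual displacement {u[-1]:.4f} m")
```

Tracking the residual displacement at the end of the record, as in the last line, is how the sensitivity of residual drifts to p-delta and strength degradation would be monitored in a fuller model.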

Research papers, University of Canterbury Library

Liquefaction of gravelly soils has been documented since the early 19th century; however, the focus of research was on sand-silt mixtures, and the fact that gravels do liquefy was acknowledged only as an observation. Over time, earthquakes have caused increasing damage to lives, property and the natural and built environment, with the ground acting as a buffer between the earthquake source and the structures at a given site. Once liquefaction of soils was acknowledged as a problem, major efforts were undertaken to understand its mechanics, its causes and how to mitigate its ill-effects. Liquefaction of gravelly soils was recognised as a problem in its own right only in the early 20th century, and, being a fairly recent acknowledgement, work on this particle-size range (gravels, >2 mm) is still at an early stage. The outputs of this research are intended to complement that work and improve our understanding of what is happening and how best to address it, given the social (life), environmental (structures), economic (cost-effectiveness) and political (willingness to address the problem) circumstances. Case histories from at least 29 earthquakes worldwide have indicated that liquefaction can occur in gravelly soils (both in natural deposits and manmade reclamations), inducing large ground deformation and causing severe damage to civil infrastructure. However, the evaluation of the liquefaction resistance of gravelly soils remains a major challenge in geotechnical earthquake engineering. To date, laboratory tests aimed at evaluating the liquefaction resistance of gravelly soils are still very limited compared with the large body of investigations carried out on assessing the liquefaction resistance of sandy soils. While there is general agreement that the liquefaction resistance of gravelly soils can be as low as that of clean sands, previous studies suggested that the liquefaction behaviour of gravelly soils is significantly affected by two key factors, namely relative density (Dr) and gravel content (Gc). While it is clear that the liquefaction resistance of gravels increases with increasing Dr, there are inconclusive and/or contradictory results regarding the effect of Gc on the liquefaction resistance of gravelly soils. Aimed at addressing this important topic, an investigation is currently being carried out by researchers at the University of Canterbury (UC). As a first step, a series of undrained cyclic triaxial tests were conducted on selected sand-gravel mixtures (SGMs), and inter-grain state framework concepts such as the equivalent and skeleton void ratios were used to describe the joint effects of Gc and Dr on the liquefaction resistance of SGMs. Following such experimental effort, this study is aimed at providing new and useful insights by developing a critical state-based method, combined with the inter-grain state framework, to uniquely describe the liquefaction resistance of gravelly soils. To do so, a series of monotonic drained triaxial tests will be carried out on selected SGMs. 
The outcomes of this study, combined with those obtained to date by UC researchers, will greatly contribute to the expansion of a worldwide assessment database and to the development of a reliable liquefaction triggering procedure for characterising the liquefaction potential of gravelly soils, which is of paramount importance not only in the New Zealand context but worldwide. This will make it possible for practising engineers to identify liquefiable gravelly soils in advance and make sound recommendations to minimise the impact of such hazards on land and civil infrastructure.
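Critical state-based methods of the kind described above generally rest on relations of the following form; this is a generic sketch, and the specific formulation and equivalent/skeleton void ratio definitions adopted in the UC study may differ:

```latex
% Generic critical-state relations (illustrative form)
\[
e_{cs}(p') = \Gamma - \lambda \ln\!\left(\frac{p'}{p'_{ref}}\right),
\qquad
\psi = e - e_{cs}(p')
\]
```

Here $e$ is the current void ratio, $p'$ the mean effective stress, $\Gamma$ and $\lambda$ define the critical state line obtained from the monotonic drained triaxial tests, and $\psi$ is the state parameter. In an inter-grain state framework, $e$ is replaced by an equivalent or skeleton void ratio so that sand-gravel mixtures with different Gc and Dr can be compared on a common basis, and liquefaction resistance is then correlated with the state measure rather than with Dr or Gc separately.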

Research papers, The University of Auckland Library

The research presented in this thesis investigated the environmental impacts of structural design decisions across the life of buildings located in seismic regions. In particular, the impacts of expected earthquake damage were incorporated into a traditional life cycle assessment (LCA) using a probabilistic method, and links between sustainable and resilient design were established for a range of case-study buildings designed for different seismic performance objectives. These links were quantified using a metric herein referred to as the seismic carbon risk, which represents the expected environmental impacts and resource use indicators associated with earthquake damage during a building’s life. The research was broken into three distinct parts: (1) a city-level evaluation of the environmental impacts of demolitions following the 2010/2011 Canterbury earthquake sequence in New Zealand, (2) the development of a probabilistic framework to incorporate earthquake damage into LCA, and (3) using case-study buildings to establish links between sustainable and resilient design. The first phase of the research focused on the environmental impacts of demolitions in Christchurch, New Zealand following the 2010/2011 Canterbury Earthquake Sequence. This large case study was used to investigate the environmental impact of the demolition of concrete buildings considering the embodied carbon and waste stream distribution. The embodied carbon was considered here as the kilograms of CO2 equivalent emitted in the production, construction, and waste management stages. The results clearly demonstrated the significant environmental impacts that can result from moderate and large earthquakes in urban areas, and the importance of including environmental considerations when making post-earthquake demolition decisions. The next phase of the work introduced a framework for incorporating the impacts of expected earthquake damage, based on a probabilistic approach, into traditional LCA to allow for a comparison of seismic design decisions using a carbon lens. Here, in addition to initial construction impacts, the seismic carbon risk was quantified, including the impacts of seismic repair activities and total loss scenarios assuming reconstruction in case of non-repairability. A process-based LCA was performed to obtain the environmental consequence functions associated with structural and non-structural repair activities for multiple environmental indicators. In the final phase of the work, multiple case-study buildings were used to investigate the seismic consequences of different structural design decisions for buildings in seismic regions. Here, two case-study buildings were designed to multiple performance objectives, and the upfront carbon costs, as well as the seismic carbon risk across the building life, were compared. The buildings were evaluated using the framework established in phase 2, and the results demonstrated that the seismic carbon risk can be significantly reduced with only minimal changes to the upfront carbon for buildings designed for a higher base shear or with seismic protective systems. This provided valuable insight into the links between resilient and sustainable design decisions. Finally, the results and observations from the work across the three phases of research described above were used to inform a discussion on important assumptions and topics that need to be considered when quantifying the environmental impacts of earthquake damage on buildings. 
These include: selection of a non-repairable threshold (e.g. a value beyond which a building would be demolished rather than repaired), the time value of carbon (e.g. when in the building life the carbon is released), the changing carbon intensity of structural materials over time, and the consideration of deterministic vs. probabilistic results. Each of these topics was explored in some detail to provide a clear pathway for future work in this area.
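The probabilistic framework described above amounts to folding a carbon consequence function into the seismic hazard, in the same way expected annual loss is computed in performance-based assessment. The sketch below shows that integration with a hypothetical hazard curve and consequence function; none of the numbers are results from the thesis:

```python
# Sketch of an expected "seismic carbon risk" integration (hypothetical inputs).
import numpy as np

im = np.linspace(0.05, 2.0, 40)                    # intensity measure, e.g. Sa(T1) [g]
annual_rate = 0.02 * im ** -2.5                    # hypothetical hazard curve lambda(IM)
carbon_given_im = 400e3 * (1 - np.exp(-1.5 * im))  # hypothetical E[kgCO2e | IM]

# expected annual carbon: integrate consequence against |d(lambda)/d(IM)|
d_rate = -np.gradient(annual_rate, im)             # annual rate density
expected_annual_carbon = np.sum(carbon_given_im * d_rate) * (im[1] - im[0])

building_life_years = 50
print(f"expected annual carbon risk: {expected_annual_carbon/1e3:.1f} tCO2e/yr")
print(f"life-cycle seismic carbon  : {expected_annual_carbon*building_life_years/1e3:.0f} tCO2e "
      f"over {building_life_years} yr")
```

Comparing this life-cycle figure against the upfront embodied carbon of alternative designs is the kind of trade-off the case-study phase of the research quantifies.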

Research papers, The University of Auckland Library

Though generally considered “natural” disasters, cyclones and earthquakes are increasingly being associated with human activities, incubated through urban settlement patterns and the long-term redistribution of natural resources. As society becomes more urbanized, the risk of human exposure to disasters is also rising. Architecture often reflects the state of society’s health: architectural damage is the first visible sign of emergency, and reconstruction is the final response in the process of recovery. An empirical assessment of architectural projects in post-disaster situations can lead to a deeper understanding of urban societies as they try to rebuild. This thesis offers an alternative perspective on urban disasters by looking at the actions and attitudes of disaster professionals through the lens of architecture, situated in recent events: the 2010 Christchurch earthquake, the 2010 Haiti earthquake, and Hurricane Katrina in 2005. An empirical, multi-hazard, cross-sectional case study methodology was used, employing grounded theory method to build theory, and a critical constructivist strategy to inform the analysis. By taking an interdisciplinary approach to understanding disasters, this thesis positions architecture as a conduit between two divergent approaches to disaster research: the hazards approach, which studies disaster cycles from a scientific perspective; and the sociological approach, which studies the socially constructed vulnerabilities that result from disasters, and the elements of social change that accompany such events. Few studies to date have attempted to integrate the multi-disciplinary perspectives that can advance our understanding of societal problems in urban disasters. To bridge this gap, this thesis develops what will be referred to as the “Rittelian framework”, based on the work of UC Berkeley architecture professor Horst Rittel (1930-1990). The Rittelian framework uses the language of design to transcend the multiple fields of human endeavor in addressing the “design problems” in disaster research. The processes by which societal problems are addressed following an urban disaster involve input from professionals in multiple fields, including economics, sociology, medicine, and engineering, but the contribution from architecture has been minimal to date. The main impetus for my doctoral thesis has been the assertion that most of the decisions related to reconstruction are made in the early emergency recovery stages, where architects are not involved, yet architects’ early contribution is vital to the long-term reconstruction of cities. This precipitated the critical question: “How does the Rittelian framework contribute to the critical design decisions in modern urban disasters?” Comparative research was undertaken in three case studies of recent disasters in New Orleans (2005), Haiti (2010) and Christchurch (2010), by interviewing 51 individuals who were selected on the basis of employing the Rittelian framework in their humanitarian practice. Contextualizing natural disaster research within the robust methodological framework of architecture and the analytical processes of sociology is the basis for evaluating the research proposition that architectural problem solving is of value in addressing the ‘Wicked Problems’ of disasters. 
This thesis has found that (1) the nuances of the way disaster agents interpret the notion of “building back better” can influence the extent to which architectural professionals contribute in urban disaster recovery, (2) architectural design can be used to facilitate but also impede critical design decisions, and (3) framing disaster research in terms of design decisions can lead to innovation where least expected. This empirical research demonstrates how the Rittelian framework can inform a wider discussion about post-disaster human settlements, and improve our resilience through disaster research.

Research papers, Lincoln University

Liquefaction affects late Holocene, loosely packed and water-saturated sediment subjected to cyclical shear stress. Liquefaction features in the geological record are important off-fault markers that inform about the occurrence of moderate to large earthquakes (Mw > 5). The study of contemporary liquefaction features provides a better understanding of where to find past (paleo) liquefaction features, which, if identified and dated, can provide information on the occurrence, magnitude and timing of past earthquakes. This is particularly important in areas with blind active faults. The extensive liquefaction caused by the 2010-2011 Canterbury Earthquake Sequence (CES) gave the geoscience community the opportunity to study the liquefaction process in different settings (alluvial, coastal and estuarine), investigating different aspects (e.g. geospatial correlation with landforms, thresholds for peak ground acceleration, resilience of infrastructure), and to collect a wealth of geospatial data across the broad region of the Canterbury Plains. The research presented in this dissertation examines the sedimentary architecture of two environments, the alluvial and coastal settings, affected by liquefaction during the CES. The novel aim of this study is to investigate how landform and subsurface sedimentary architecture influence liquefaction and its surface manifestation, to provide knowledge for locating studies of paleoliquefaction in future. Two case studies documented in the alluvial setting showed that liquefaction features affected a crevasse splay and point bar ridges. However, the liquefaction source layer was linked to paleochannel floor deposits below the crevasse splay in the first case, and to the point bar deposits themselves in the second case. This research documents liquefaction features in the coastal dune system of the Canterbury Plains in detail for the first time. In the coastal dune setting the liquefiable layer is near the surface. The pore water pressure is vented easily because the coastal dune soil profile is entirely composed of non-cohesive, very well sorted sandy sediment that weakly resists disturbance from fluidised sediment under pressure. As a consequence, the liquefied flow does not need to find a specific crack through which the sediment is vented at the surface; instead, the liquefied sand finds many closely spaced conduits to vent its excess pore water pressure. Therefore, in the coastal dune setting it is rare to observe discrete dikes (as they are defined in the alluvial setting); instead, A-horizon delamination (splitting) and blistering (near-surface sills) are more common. The differences in styles of surface venting lead to contrasts in patterns of ejecta in the two environments. Whereas the alluvial environment is characterised by coalesced sand blows forming lineations, the coastal dune environment hosts apparently randomly distributed isolated sand blows often associated with collapse features. Amongst the techniques tested for the first time to investigate liquefaction features are 3D GPR, which improved the accuracy of the trenching even six years after the liquefaction events; thin section analysis to investigate sediment fabric, which helped to discriminate liquefied sediment from its host sediment, and modern from paleoliquefaction features; and a Random Forest classification based on the CES liquefaction map, which was used to test relationships between surface manifestation of liquefaction and topographic parameters. 
The results from this research will be used to target new study sites for future paleoliquefaction research and thus will improve the earthquake hazard assessment across New Zealand.
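A Random Forest classification of the kind described above can be sketched as follows; the feature names and the synthetic data are placeholders standing in for the CES liquefaction map and the topographic parameters actually used:

```python
# Sketch of a Random Forest classification of liquefaction surface manifestation
# from topographic predictors (synthetic placeholder data, illustrative only).
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "elevation_m":         rng.uniform(0, 30, n),
    "slope_deg":           rng.uniform(0, 10, n),
    "dist_to_channel_m":   rng.uniform(0, 2000, n),
    "water_table_depth_m": rng.uniform(0, 5, n),
})
# synthetic label: manifestation more likely on low, flat ground near channels
p = 1 / (1 + np.exp(0.15 * df.elevation_m + 0.4 * df.slope_deg
                    + 0.002 * df.dist_to_channel_m + 0.8 * df.water_table_depth_m - 4))
df["liquefaction"] = rng.random(n) < p

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="liquefaction"), df["liquefaction"], test_size=0.3, random_state=1)
clf = RandomForestClassifier(n_estimators=300, random_state=1).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
print(dict(zip(X_train.columns, clf.feature_importances_.round(3))))
```

The feature importances printed at the end are the usual way such a model is used to rank which topographic parameters are most associated with surface manifestation.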

Research papers, University of Canterbury Library

The recent Canterbury earthquake sequence in 2010-2011 highlighted a uniquely severe level of structural damage to modern buildings, while confirming the high vulnerability and life-threatening nature of unreinforced masonry and inadequately detailed reinforced concrete buildings. Although the level of damage of most buildings met the expected life-safety and collapse prevention criteria, the structural damage to those buildings was beyond economic repair. The difficulty of post-event assessment of a concrete or steel structure and the uneconomic cost of repair are the main drivers of the adoption of low-damage design. Among several low-damage technologies, post-tensioned rocking systems were developed in the 1990s with applications to precast concrete members and later extended to structural steel members. More recently the technology was extended to timber buildings (Pres-Lam system). This doctoral dissertation focuses on the experimental investigation and analytical and numerical prediction of the lateral load response of dissipative post-tensioned rocking timber wall systems. The first experimental stages of this research consisted of component testing on both external replaceable devices and internal bars. The component testing aimed to further investigate the response of these devices and to provide significant design parameters. Post-tensioned wall subassembly testing was then carried out. Firstly, quasi-static cyclic testing of two-thirds scale post-tensioned single wall specimens with several reinforcement layouts was carried out. Then, an alternative wall configuration to limit displacement incompatibilities in the diaphragm was developed and tested. The system consisted of a Column-Wall-Column configuration, where the boundary columns can provide support to the diaphragm with minimal uplifting and also provide dissipation through coupling to the post-tensioned wall panel with dissipation devices. Both single wall and column-wall-column specimens were subjected to drifts up to 2%, showing excellent performance and limiting the damage to the dissipating devices. One of the objectives of the experimental programme was to assess the influence of construction detailing, and the dissipater connection in particular proved to have a significant influence on the wall’s response. The experimental programmes on dissipaters and wall subassemblies provided extensive data for the validation and refinement of current analytical and numerical models. The current moment-rotation iterative procedure was refined, accounting for detailed response parameters identified in the initial experimental stage. The refined analytical model proved capable of fitting the experimental results with good accuracy. A further stage in this research was the validation and refinement of numerical modelling approaches, which consisted of rotational spring and multi-spring models. Both modelling approaches were calibrated against the experimental results on post-tensioned wall subassemblies. In particular, the multi-spring model was further refined and implemented in OpenSEES to account for the full range of behavioural aspects of the systems. The multi-spring model was used in the final part of the dissertation to validate and refine current lateral force design procedures. Firstly, seismic performance factors for a Force-Based Design procedure were developed in accordance with the FEMA P-695 methodology through extensive numerical analyses. This procedure aims to determine the seismic reduction factor and over-strength factor accounting for the collapse probability of the building. The outcomes of this numerical analysis were also extended to other significant design codes. Alternatively, Displacement-Based Design can be used for the determination of the lateral load demand on a post-tensioned multi-storey timber building. The current DBD procedure was used to develop a further numerical analysis which aimed to validate the procedure and identify the necessary refinements. It was concluded that the analytical and numerical models developed throughout this dissertation provide comprehensive and accurate tools for the determination of the lateral load response of post-tensioned wall systems, also allowing the provision of design parameters in accordance with current standards and lateral force design procedures.
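For context, the Displacement-Based Design procedure mentioned above determines the lateral load demand broadly as follows; this is a generic DDBD outline, not the refined procedure developed in the dissertation:

```latex
% Generic DDBD steps for the equivalent SDOF system (illustrative form)
\[
T_e = T\!\left(\Delta_d,\ \xi_{eq}\right)\ \text{from the damping-reduced displacement spectrum},
\qquad
K_e = \frac{4\pi^2 m_e}{T_e^{2}},
\qquad
V_b = K_e\,\Delta_d
\]
```

Here $\Delta_d$ is the design displacement of the equivalent SDOF system, $m_e$ its effective mass, $\xi_{eq}$ the equivalent viscous damping (which for a post-tensioned rocking wall with supplemental dissipation depends on the ratio of re-centring to dissipative contributions), $T_e$ the effective period, $K_e$ the effective secant stiffness and $V_b$ the design base shear, which is then distributed over the height of the building.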

Research papers, University of Canterbury Library

Christchurch City Council (Council) is undertaking the Land Drainage Recovery Programme in order to assess the effects of the earthquakes on flood risk to Christchurch. In the course of these investigations it has become better understood that floodplain management should be considered in the context of multiple natural hazards. Council have therefore engaged the Jacobs, Beca, University of Canterbury, and HR Wallingford project team to investigate the multiple hazards in eastern areas of Christchurch and develop flood management options which also consider other natural hazards in that context (i.e. how other hazards contribute to flooding through both temporal and spatial coincidence). The study has three stages:
- Stage 1 Gap Analysis – assessment of information known, identification of gaps and studies required to fill the gaps.
- Stage 2 Hazard Studies – a gap-filling stage comprising the studies identified in Stage 1.
- Stage 3 Collating, Optioneering and Reporting – development of options to manage flood risk.
This report documents the findings of Stage 1 and recommends the studies that should be completed in Stage 2. It has also been important to consider how Stage 3 would be delivered, and the gaps are prioritised to provide for this. The level of information available and the number of hazards to consider are extensive, requiring this report to be made up of five parts, each identifying individual gaps. A process of identifying information for individual hazards in Christchurch has been undertaken and documented (Part 1), followed by an assessment of the spatial co-location (Part 2) and probabilistic presence of multiple hazards using available information. Part 3 considers multi-hazard presence both as a temporal coincidence (e.g. an earthquake and flood occurring at one time) and as a cascade sequence (e.g. an earthquake followed by a flood at some point in the future). Council have already undertaken a number of options studies for managing flood risk and these are documented in Part 4. Finally, Part 5 provides the Gap Analysis Summary and Recommendations to Council. The key findings of the Stage 1 gap analysis are:
- The spatial analysis showed eastern Christchurch has a large number of hazards present, with only 20% of the study area not being affected by any of the hazards mapped. Over 20% of the study area is exposed to four or more hazards at the frequencies and data available.
- The majority of the Residential Red Zone is strongly exposed to multiple hazards, with 86% of the area being exposed to 4 or more hazards, and 24% being exposed to 6 or more hazards.
- A wide range of gaps are present; however, prioritisation needs to consider the level of benefit and the risks associated with not undertaking the studies. In light of this, 10 studies ranging in scale are recommended so that the project team can complete the present scope of Stage 3.
- Stage 3 will need to consider a number of engineering options to address hazards and compare them with policy options; however, Council have not established a consistent policy on managed retreat that can be applied for equal comparison, without which substantial assumptions are required. We recommend Council undertake a study to define a managed retreat framework as an option for the city.
- In undertaking Stage 1 with floodplain management as the focal point in a multi-hazard context, we have identified that Stage 3 requires consideration of options in the context of economics, implementation and residual risk. 
Presently the scope of work will provide a level of definition for floodplain options; however, this will not be at equal levels of detail for other hazard management options. Therefore, we recommend Council considers undertaking other studies with those key hazards (e.g. Coastal Hazards) as a focal point and identifies the engineering options to address such hazards. Doing so will provide equal levels of information for Council to make an informed and defendable decision on which options are progressed following Stage 3.
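The spatial co-location figures quoted above (share of the study area exposed to four or more hazards) come from overlaying mapped hazard extents. A minimal sketch of that style of raster overlay count is given below; the synthetic layers stand in for the actual mapped flood, liquefaction, tsunami and other extents used in the study:

```python
# Minimal sketch of a multi-hazard co-location count over gridded hazard extents.
import numpy as np

rng = np.random.default_rng(42)
shape = (500, 500)                                   # study-area grid (cells)
# six placeholder hazard extent layers (True where the hazard is mapped present)
hazard_layers = [rng.random(shape) < p for p in (0.3, 0.25, 0.4, 0.2, 0.15, 0.1)]

exposure_count = np.sum(hazard_layers, axis=0)       # number of hazards per cell
print(f"no hazard    : {np.mean(exposure_count == 0):.0%} of study area")
print(f">= 4 hazards : {np.mean(exposure_count >= 4):.0%} of study area")
```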

Research papers, University of Canterbury Library

Coastal and river environments are exposed to a number of natural hazards that have the potential to negatively affect both human and natural environments. The purpose of this research is to explain that significant vulnerabilities to seismic hazards exist within coastal and river environments and that coasts and rivers, past and present, have played as significant a role as seismic, engineering or socio-economic factors in determining the impacts and recovery patterns of a city following a seismic hazard event. An interdisciplinary approach was used to investigate the vulnerability of coastal and river areas in the city of Christchurch, New Zealand, following the Canterbury Earthquake Sequence, which began on 4 September 2010. This information was used to identify the characteristics of coasts and rivers that make them more susceptible to earthquake-induced hazards including liquefaction, lateral spreading, flooding, landslides and rock falls. The findings of this research are applicable to similar coastal and river environments elsewhere in the world where seismic hazards are also of significant concern. An interdisciplinary approach was used to document and analyse the coastal and river related effects of the Canterbury earthquake sequence on Christchurch city in order to derive transferable lessons that can be used to design less vulnerable urban communities and help to predict seismic vulnerabilities in other New Zealand and international urban coastal and river environments in the future. Methods used to document past and present features and earthquake impacts on coasts and rivers in Christchurch included maps derived from Geographical Information Systems (GIS), photographs, analysis of interviews with coastal, river and engineering experts, and analysis of secondary data on seismicity, liquefaction potential, geology, and planning statutes. The Canterbury earthquake sequence had a significant effect on Christchurch, particularly around rivers and the coast. This was due to the susceptibility of rivers to lateral spreading and the susceptibility of the eastern Christchurch and estuarine environments to liquefaction. The collapse of river banks and the extensive cracking, tilting and subsidence that accompanied liquefaction, lateral spreading and rock falls caused damage to homes, roads, bridges and lifelines. This consequently blocked transportation routes, interrupted electricity and water lines, and damaged structures built in their path. This study found that there are a number of physical features of coastal and river environments from the past and the present that have induced vulnerabilities to earthquake hazards. The types of sediments found beneath eastern Christchurch are unconsolidated fine sands, silts, peats and gravels. Together with the high water tables located beneath the city, these deposits made the area particularly susceptible to liquefaction and liquefaction-induced lateral spreading when an earthquake of sufficient size shook the ground. It was both past and present coastal and river processes that deposited the types of sediments that are easily liquefied during an earthquake. Eastern Christchurch was a coastal and marine environment 6000 years ago, when the shoreline reached about 6 km inland of its present-day location; these conditions deposited fine sand and silt over the area. 
The region was also exposed to large braided rivers and smaller spring-fed rivers, both of which have laid down further fine sediments over the following thousands of years. A significant finding of this study is the recognition that the Canterbury earthquake sequence has exacerbated existing coastal and river hazards and that assessments and monitoring of these changes will be an important component of Christchurch’s future resilience to natural hazards. In addition, patterns of recovery following the Canterbury earthquakes are highlighted to show that coasts and rivers remain vulnerable to earthquakes through their ability to recover. The city’s capacity to incorporate resilience into the recovery efforts is also highlighted in this study. Coastal and river areas have underlying physical characteristics that make them increasingly vulnerable to the effects of earthquake hazards, which have not typically been perceived as a ‘coastal’ or ‘river’ hazard. These findings enhance scientific and management understanding of the effects that earthquakes can have on coastal and river environments, an area of research that has had modest consideration to date. This understanding is important from a coastal and river hazard management perspective as human development around coastlines and river margins in areas of high seismic risk continues to grow.

Research papers, University of Canterbury Library

The purpose of this thesis is to conduct a detailed examination of the forward-directivity characteristics of near-fault ground motions produced in the 2010-11 Canterbury earthquakes, including evaluating the efficacy of several existing empirical models which form the basis of frameworks for considering directivity in seismic hazard assessment. A wavelet-based pulse classification algorithm developed by Baker (2007) is firstly used to identify and characterise ground motions which demonstrate evidence of forward-directivity effects from significant events in the Canterbury earthquake sequence. The algorithm fails to classify a large number of ground motions which clearly exhibit an early-arriving directivity pulse due to: (i) incorrect pulse extraction resulting from the presence of pulse-like features caused by other physical phenomena; and (ii) inadequacy of the pulse indicator score used to carry out binary pulse-like/non-pulse-like classification. An alternative ‘manual’ approach is proposed to ensure 'correct' pulse extraction, and the classification process is also guided by examination of the horizontal velocity trajectory plots and source-to-site geometry. Based on the above analysis, 59 pulse-like ground motions are identified from the Canterbury earthquakes which, in the author's opinion, are caused by forward-directivity effects. The pulses are also characterised in terms of their period and amplitude. A revised version of the B07 algorithm, developed by Shahi (2013), is also subsequently utilised, but without any notable improvement in the pulse classification results. A series of three chapters is dedicated to assessing the predictive capabilities of empirical models for: (i) the probability of pulse occurrence; (ii) the response spectrum amplification caused by the directivity pulse; and (iii) the period and amplitude (peak ground velocity, PGV) of the directivity pulse, using observations from four significant events in the Canterbury earthquakes. Based on the results of logistic regression analysis, it is found that the pulse probability model of Shahi (2013) provides the most improved predictions in comparison to its predecessors. Pulse probability contour maps are developed to scrutinise observations of pulses/non-pulses with predicted probabilities. A direct comparison of the observed and predicted directivity amplification of acceleration response spectra reveals the inadequacy of broadband directivity models, which form the basis of the near-fault factor in the New Zealand loadings standard, NZS1170.5:2004. In contrast, a recently developed narrowband model by Shahi & Baker (2011) provides significantly improved predictions by amplifying the response spectra within a small range of periods. The significant positive bias demonstrated by the residuals associated with all models at longer vibration periods (in the Mw7.1 Darfield and Mw6.2 Christchurch earthquakes) is likely due to the influence of basin-induced surface waves and non-linear soil response. Empirical models for the pulse period notably under-predict observations from the Darfield and Christchurch earthquakes, inferred as being a result of both the effect of nonlinear site response and the influence of the Canterbury basin. In contrast, observed pulse periods from the smaller magnitude June (Mw6.0) and December (Mw5.9) 2011 earthquakes are in good agreement with predictions. Models for the pulse amplitude generally provide accurate estimates of the observations at source-to-site distances between 1 km and 10 km. 
At longer distances, observed PGVs are significantly under-predicted due to their slower apparent attenuation. Mixed-effects regression is employed to develop revised models for both parameters using the latest NGA-West2 pulse-like ground motion database. A pulse period relationship which accounts for the effect of faulting mechanism, using rake angle as a continuous predictor variable, is developed. The use of a larger database in model development, however, does not result in improved predictions of pulse period for the Darfield and Christchurch earthquakes. In contrast, the revised model for PGV provides a more appropriate attenuation of the pulse amplitude with distance, and does not exhibit the bias associated with previous models. Finally, the effects of near-fault directivity are explicitly included in NZ-specific probabilistic seismic hazard analysis (PSHA) using the narrowband directivity model of Shahi & Baker (2011). Seismic hazard analyses are conducted with and without considering directivity for typical sites in Christchurch and Otira. The inadequacy of the near-fault factor in NZS1170.5:2004 is apparent based on a comparison with the directivity amplification obtained from PSHA.
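The pulse-like/non-pulse-like decision discussed above rests on comparing the record with and without its extracted long-period pulse. The sketch below is a crude stand-in for that idea, not Baker's (2007) wavelet algorithm: a low-pass filter extracts a candidate pulse, and the residual PGV and energy ratios are compared against illustrative thresholds (the filter order, corner period and thresholds are assumptions, not published values):

```python
# Simplified stand-in for pulse-like classification of a velocity time series.
import numpy as np
from scipy.signal import butter, filtfilt

def classify_pulse(vel, dt, corner_period=1.0, pgv_ratio_max=0.6, energy_ratio_max=0.7):
    """Crude pulse-like / non-pulse-like call on a velocity trace [m/s]."""
    b, a = butter(4, 1.0 / corner_period, btype="low", fs=1.0 / dt)
    pulse = filtfilt(b, a, vel)                  # long-period "pulse" candidate
    residual = vel - pulse                       # record with the pulse removed
    pgv_ratio = np.max(np.abs(residual)) / np.max(np.abs(vel))
    energy_ratio = np.sum(residual**2) / np.sum(vel**2)
    is_pulse = (pgv_ratio < pgv_ratio_max) and (energy_ratio < energy_ratio_max)
    return is_pulse, pulse, pgv_ratio, energy_ratio

# illustrative use on a synthetic velocity trace with an embedded 2 s pulse
dt = 0.01
t = np.arange(0.0, 30.0, dt)
vel = (0.5 * np.exp(-((t - 10.0) / 1.5)**2) * np.sin(2 * np.pi * t / 2.0)
       + 0.05 * np.random.default_rng(0).standard_normal(t.size))
is_pulse, _, pgv_r, e_r = classify_pulse(vel, dt)
print(is_pulse, round(pgv_r, 2), round(e_r, 2))
```

The underlying logic is the same as in the wavelet approach: if removing the extracted pulse strips away most of the PGV and energy, the motion is dominated by a pulse and is flagged as pulse-like.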

Research papers, University of Canterbury Library

This dissertation addresses several fundamental and applied aspects of ground motion selection for seismic response analyses. In particular, the following topics are addressed: the theory and application of ground motion selection for scenario earthquake ruptures; the consideration of causal parameter bounds in ground motion selection; ground motion selection in the near-fault region where the directivity effect is significant; and methodologies for epistemic uncertainty consideration and propagation in the context of ground motion selection and seismic performance assessment. The paragraphs below outline each contribution in more detail. A scenario-based ground motion selection method is presented which considers the joint distribution of multiple intensity measure (IM) types based on the generalised conditional intensity measure (GCIM) methodology (Bradley, 2010b, 2012c). The ground motion selection algorithm is based on generating realisations of the considered IM distributions for a specific rupture scenario and then finding the prospective ground motions which best fit the realisations using an optimal amplitude scaling factor. In addition, using different rupture scenarios and site conditions, two important aspects of the GCIM methodology are scrutinised: (i) different weight vectors for the various IMs considered; and (ii) the importance of replicate selections for ensembles with different numbers of desired ground motions. As an application of the developed scenario-based ground motion selection method, ground motion ensembles are selected to represent several major earthquake scenarios in New Zealand that pose a significant seismic hazard, namely, Alpine, Hope and Porters Pass ruptures for Christchurch city; and Wellington, Ohariu, and Wairarapa ruptures for Wellington city. A rigorous basis is developed, and sensitivity analyses performed, for the consideration of bounds on causal parameters (e.g., magnitude, source-to-site distance, and site condition) for ground motion selection. The effect of causal parameter bound selection on both the number of available prospective ground motions from an initial empirical as-recorded database and the statistical properties of the IMs of selected ground motions is examined. It is also demonstrated that using causal parameter bounds is not a reliable approach to implicitly account for ground motion duration and cumulative effects when selection is based on only spectral acceleration (SA) ordinates. Specific causal parameter bounding criteria are recommended for general use as 'default' bounding criteria, with possible adjustments from the analyst based on problem-specific preferences. An approach is presented to consider forward directivity effects in seismic hazard analysis which does not separate the hazard calculations for pulse-like and non-pulse-like ground motions. Also, the ability of ground motion selection methods to appropriately select records containing forward directivity pulse motions in the near-fault region is examined. Particular attention is given to ground motion selection which is explicitly based on ground motion IMs, including SA, duration, and cumulative measures, rather than on implicit parameters (i.e., distance, and pulse or non-pulse classifications) that are conventionally used to heuristically distinguish between near-fault and far-field records. No ad hoc criteria, in terms of the number of directivity ground motions and their pulse periods, are enforced for selecting pulse-like records. 
Example applications are presented with different rupture characteristics, source-to-site geometry, and site conditions. It is advocated that the selection of ground motions in the near-fault region based on IM properties alone is preferred to that in which the proportion of pulse-like motions and their pulse periods are specified a priori as strict criteria for ground motion selection. Three methods are presented to propagate the effect of seismic hazard and ground motion selection epistemic uncertainties to seismic performance metrics. These methods differ in the level of rigour with which they propagate the epistemic uncertainty in the conditional distribution of IMs utilised in ground motion selection, the selected ground motion ensembles, and the number of nonlinear response history analyses performed to obtain the distribution of engineering demand parameters. These methods are compared for an example site where it is observed that, for seismic demand levels below the collapse limit, epistemic uncertainty in ground motion selection is a smaller uncertainty contributor relative to the uncertainty in the seismic hazard itself. In contrast, uncertainty in the ground motion selection process increases the uncertainty in the seismic demand hazard for near-collapse demand levels.
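The selection loop described above (generate realisations of the target IM distribution, then pick the record that best matches each realisation under an optimal amplitude scale factor) can be sketched as follows; the IM set, target distribution, weights and record database are synthetic placeholders, and only the amplitude-scalable IMs are shifted by the log scale factor:

```python
# Sketch of a GCIM-style ground motion selection loop (synthetic inputs).
import numpy as np

rng = np.random.default_rng(1)
im_names = ["lnSA(1.0s)", "lnSA(0.5s)", "lnDs575", "lnCAV"]
scalable = np.array([1.0, 1.0, 0.0, 1.0])          # 1 if IM scales with amplitude factor
weights  = np.array([0.4, 0.3, 0.2, 0.1])          # relative importance of each IM

target_mean = np.array([-1.0, -0.6, 2.3, -0.2])    # conditional mean of ln-IMs (placeholder)
target_cov  = 0.3**2 * np.eye(4)                   # conditional covariance (placeholder)
database    = rng.normal(0.0, 0.8, size=(500, 4))  # ln-IMs of prospective records

n_select = 30
realisations = rng.multivariate_normal(target_mean, target_cov, size=n_select)

selected = []
for z in realisations:
    resid = z - database
    # optimal ln scale factor from the amplitude-scalable IMs (weighted least squares)
    lnf = (resid * scalable * weights).sum(axis=1) / (scalable * weights).sum()
    misfit = (weights * (resid - np.outer(lnf, scalable))**2).sum(axis=1)
    misfit[selected] = np.inf                      # disallow re-selection in this sketch
    selected.append(int(np.argmin(misfit)))
print("selected record indices:", selected)
```

Replicate selections, as discussed in the abstract, would correspond to repeating this whole loop with fresh realisations and comparing the resulting ensembles.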

Research papers, Lincoln University

Globally, the maximum elevations at which treelines are observed to occur coincide with a 6.4 °C soil isotherm. However, when observed at finer scales, treelines display a considerable degree of spatial complexity in their patterns across the landscape and are often found occurring at lower elevations than expected relative to the global-scale pattern. There is still a lack of understanding of how the abiotic environment imposes constraints on treeline patterns, the scales at which different effects are acting, and how these effects vary over large spatial extents. In this thesis, I examined abrupt Nothofagus treelines across seven degrees of latitude in New Zealand in order to investigate two broad questions: (1) What is the nature and extent of spatial variability in Nothofagus treelines across the country? (2) How is this variation associated with abiotic variation at different spatial scales? A range of GIS, statistical, and atmospheric modelling methods were applied to address these two questions. First, I characterised Nothofagus treeline patterns at a 15x15 km scale across New Zealand using a set of seven GIS-derived quantitative metrics that describe different aspects of treeline position, shape, spatial configuration, and relationships with adjacent vegetation. Multivariate clustering of these metrics revealed distinct treeline types that showed strong spatial aggregation across the country. This suggests a strong spatial structuring of the abiotic environment which, in turn, drives treeline patterns. About half of the multivariate treeline metric variation was explained by patterns of climate, substrate, topographic and disturbance variability; on the whole, climatic and disturbance factors were most influential. Second, I developed a conceptual model that describes how treeline elevation may vary at different scales according to three categories of effects: thermal modifying effects, physiological stressors, and disturbance effects. I tested the relevance of this model for Nothofagus treelines by investigating treeline elevation variation at five nested scales (regional to local) using a hierarchical design based on nested river catchments. Hierarchical linear modelling revealed that the majority of the variation in treeline elevation resided at the broadest, regional scale, which was best explained by the thermal modifying effects of solar radiation, mountain mass, and differences in the potential for cold air ponding. Nonetheless, at finer scales, physiological and disturbance effects were important and acted to modify the regional trend at these scales. These results suggest that variation in abrupt treeline elevations is due to both broad-scale temperature-based growth limitation processes and finer-scale stress- and disturbance-related effects on seedling establishment. Third, I explored the applicability of a meso-scale atmospheric model, The Air Pollution Model (TAPM), for generating 200 m resolution, hourly topoclimatic data for temperature, incoming and outgoing radiation, relative humidity, and wind speeds. Initial assessments of TAPM outputs against data from two climate station locations over seven years showed that the model could generate predictions with a consistent level of accuracy for both sites, in agreement with other evaluations in the literature. TAPM was then used to generate data for 28 Nothofagus treeline zones of 7x7 km across New Zealand for January (summer) and July (winter) 2002. 
Using mixed-effects linear models, I determined that both site-level factors (mean growing season temperature, mountain mass, precipitation, earthquake intensity) and local-level landform (slope and convexity) and topoclimatic factors (solar radiation, photoinhibition index, frost index, desiccation index) were influential in explaining variation in treeline elevation within and among these sites. Treelines were generally closer to their site-level maxima in regions with higher mean growing season temperatures, larger mountains, and lower levels of precipitation. Within sites, higher treelines were associated with higher solar radiation, and lower photoinhibition and desiccation index values, in January, and lower desiccation index values in July. Higher treelines were also significantly associated with steeper, more convex landforms. Overall, this thesis shows that investigating treelines across extensive areas at multiple study scales enables the development of a more comprehensive understanding of treeline variability and underlying environmental constraints. These results can be used to formulate new hypotheses regarding the mechanisms driving treeline formation and to guide the optimal choice of field sites at which to test these hypotheses.
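The hierarchical and mixed-effects modelling described above can be sketched, in a reduced two-level form (sites within regions), with a random-intercept model; the variable names and synthetic data below are placeholders for the thesis predictors and nested catchment structure:

```python
# Sketch of a mixed-effects (random-intercept) model for treeline elevation.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
regions = np.repeat(np.arange(20), 30)                 # 20 regions x 30 treeline sites
region_effect = rng.normal(0, 80, 20)[regions]         # broad-scale (regional) variation
solar = rng.normal(0, 1, regions.size)                 # local topoclimatic predictor
slope = rng.normal(0, 1, regions.size)                 # local landform predictor
elev = 1300 + region_effect + 40 * solar + 15 * slope + rng.normal(0, 30, regions.size)

df = pd.DataFrame({"elev": elev, "solar": solar, "slope": slope, "region": regions})
model = smf.mixedlm("elev ~ solar + slope", df, groups=df["region"]).fit()
print(model.summary())

# share of the unexplained variation sitting at the regional (group) level
var_region, var_local = float(model.cov_re.iloc[0, 0]), model.scale
print(f"regional share of unexplained variance: {var_region / (var_region + var_local):.0%}")
```

Partitioning the variance between group and residual levels, as in the last lines, is the simplified analogue of the finding that most treeline elevation variation resided at the broadest, regional scale.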

Research papers, University of Canterbury Library

Deconstruction, at the end of the useful life of a building, produces a considerable amount of materials which must be disposed of, or be recycled / reused. At present, in New Zealand, most timber construction and demolition (C&D) material, particularly treated timber, is simply waste and is placed in landfills. For both technical and economic reasons (and despite the increasing cost of landfills), this position is unlikely to change in the next 10 – 15 years unless legislation dictates otherwise. Careful deconstruction, as opposed to demolition, can provide some timber materials which can be immediately re-used (e.g. doors and windows), or further processed into other components (e.g. beams or walls) or recycled (‘cascaded’) into other timber or composite products (e.g. fibre-board). This reusing / recycling of materials is being driven slowly in NZ by legislation, the ‘greening’ of the construction industry and public pressure. However, the recovery of useful material can be expensive and uneconomic (as opposed to land-filling). In NZ, there are few facilities which are able to sort and separate timber materials from other waste, although the soon-to-be commissioned Burwood Resource Recovery Park in Christchurch will attempt to deal with significant quantities of demolition waste from the recent earthquakes. The success (or otherwise) of this operation should provide good information as to how future C&D waste will be managed in NZ. In NZ, there are only a few small-scale facilities which are able to burn waste wood for energy recovery (e.g. timber mills), and none are known to be able to handle large quantities of treated timber. Such facilities, with constantly improving technology, are being commissioned in Europe (often with Government subsidies), and this indicates that similar bio-energy (co)generation will be established in NZ in the future. However, at present, the NZ Government provides little assistance to the bio-energy industry, and the emergence worldwide of shale-gas reserves is likely to push the economic viability of bio-energy further into the future. The behaviour of timber materials placed in landfills is complex and poorly understood. Degrading timber in landfills has the potential to generate methane, a potent greenhouse gas, which can escape to the atmosphere and cancel out the significant benefits of carbon sequestration during tree growth. Improving the security of landfills and more effective and efficient collection and utilisation of methane from landfills in NZ will significantly reduce the potential for leakage of methane to the atmosphere, acting as an offset to the continuing use of underground fossil fuels. Life cycle assessment (LCA), an increasingly important methodology for quantifying the environmental impacts of building materials (particularly energy, and global warming potential (GWP)), will soon be incorporated into the NZ Green Building Council Greenstar rating tools. Such LCA studies must provide a level playing field for all building materials and consider the whole life cycle. Whilst the end-of-life treatment of timber by LCA may establish a present-day base scenario, any analysis must also present a realistic end-of-life scenario for the future deconstruction of any new building, as any building built today will be deconstructed many years in the future, when very different technologies will be available to deal with construction waste. 
At present, LCA practitioners in NZ and Australia place much value on a single research document on the degradation of timber in landfills (Ximenes et al., 2008). This leads to an end-of-life base scenario for timber which many in the industry consider to be an overestimation of the potential negative effects of methane generation. In Europe, the base scenario for wood disposal is cascading timber products and then burning for energy recovery, which normally significantly reduces any negative effects of the end-of-life for timber. LCA studies in NZ should always provide a sensitivity analysis for the end-of-life of timber and strongly and confidently argue that alternative future scenarios are realistic disposal options for buildings deconstructed in the future. Data-sets for environmental impacts (such as GWP) of building materials in NZ are limited and based on few research studies. The compilation of comprehensive data-sets with country-specific information for all building materials is considered a priority, preferably accounting for end-of-life options. The NZ timber industry should continue to ‘champion’ the environmental credentials of timber, over and above those of the other major building materials (concrete and steel). End-of-life should not be considered the ‘Achilles heel’ of the timber story.
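The end-of-life sensitivity analysis argued for above hinges on a first-order carbon balance of broadly the following form; the structure mirrors standard landfill-gas accounting, but the parameter values to be varied are scenario assumptions rather than data from this report:

```latex
% First-order landfill end-of-life carbon balance for timber (illustrative form)
\[
m_{CH_4} = m_{wood}\, C_f \, DOC_f \, F_{CH_4}\, \tfrac{16}{12}\,(1 - R),
\qquad
GWP_{landfill} \approx m_{CH_4}\, GWP_{100,CH_4} \;-\; m_{wood}\, C_{stored}\, \tfrac{44}{12}
\]
```

Here $C_f$ is the carbon fraction of the wood, $DOC_f$ the fraction of that carbon that actually decomposes anaerobically, $F_{CH_4}$ the fraction of decomposed carbon released as methane, $R$ the landfill gas capture (and oxidation) efficiency, $16/12$ and $44/12$ the mass conversions from C to CH$_4$ and CO$_2$, and $C_{stored}$ the carbon fraction permanently sequestered in the landfill. A sensitivity analysis then amounts to varying $DOC_f$ and $R$ (e.g. the low decomposition reported by Ximenes et al., 2008, versus higher-decay assumptions) and comparing the result against a cascading-plus-energy-recovery scenario in which the biogenic carbon is returned as CO$_2$ while displacing fossil fuel use.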

Research papers, University of Canterbury Library

In September 2010 and February 2011, the Canterbury region experienced devastating earthquakes with an estimated economic cost of over NZ$40 billion (Parker and Steenkamp, 2012; Timar et al., 2014; Potter et al., 2015). The insurance market played an important role in rebuilding the Canterbury region after the earthquakes. Homeowners, insurance and reinsurance markets and New Zealand government agencies faced a difficult task to manage the rebuild process. From an empirical and theoretic research viewpoint, the Christchurch disaster calls for an assessment of how the insurance market deals with such disasters in the future. Previous studies have investigated market responses to losses in global catastrophes by focusing on the insurance supply-side. This study investigates both demand-side and supply-side insurance market responses to the Christchurch earthquakes. Despite the fact that New Zealand is prone to seismic activities, there are scant previous studies in the area of earthquake insurance. This study does offer a unique opportunity to examine and document the New Zealand insurance market response to catastrophe risk, providing results critical for understanding market responses after major loss events in general. A review of previous studies shows higher premiums suppress demand, but how higher premiums and a higher probability of risk affect demand is still largely unknown. According to previous studies, the supply of disaster coverage is curtailed unless the market is subsidised, however, there is still unsettled discussion on why demand decreases with time from the previous disaster even when the supply of coverage is subsidised by the government. Natural disaster risks pose a set of challenges for insurance market players because of substantial ambiguity associated with the probability of such events occurring and high spatial correlation of catastrophe losses. Private insurance market inefficiencies due to high premiums and spatially concentrated risks calls for government intervention in the provision of natural disaster insurance to avert situations of noninsurance and underinsurance. Political economy considerations make it more likely for government support to be called for if many people are uninsured than if few people are uninsured. However, emergency assistance for property owners after catastrophe events can encourage most property owners to not buy insurance against natural disaster and develop adverse selection behaviour, generating larger future risks for homeowners and governments. On the demand-side, this study has developed an intertemporal model to examine how demand for insurance changes post-catastrophe, and how to model it theoretically. In this intertemporal model, insurance can be sought in two sequential periods of time, and at the second period, it is known whether or not a loss event happened in period one. The results show that period one demand for insurance increases relative to the standard single period model when the second period is taken into consideration, period two insurance demand is higher post-loss, higher than both the period one demand and the period two demand without a period one loss. To investigate policyholders experience from the demand-side perspective, a total of 1600 survey questionnaires were administered, and responses from 254 participants received representing a 16 percent response rate. Survey data was gathered from four institutions in Canterbury and is probably not representative of the entire population. 
The results of the survey show that the change from full replacement value policies to nominated replacement value policies is a key determinant of the direction of change in the level of insurance coverage after the earthquakes. The earthquakes also highlighted the plight of those who were underinsured, prompting policyholders to update their insurance coverage to reflect the estimated cost of rebuilding their property. The survey adds further evidence to the existing literature; for example, those who have had a recent experience of disaster loss report an increased perception of the risk of a similar event happening in future, with females reporting higher risk perception than males. Of the demographic variables, only gender has a relationship with changes in household cover. On the supply side, this study has built a risk-based pricing model suitable for generating competitive premium rates for natural disaster insurance cover. Using illustrative data from the Christchurch Red-zone suburbs, the model generates competitive premium rates for catastrophe risk. When the proposed model incorporates the new RMS high-definition New Zealand Earthquake Model, for example, insurers can use it to identify losses at a granular level and so calculate competitive premiums. This study observes that the keys to the success of the New Zealand dual insurance system, despite the high prevalence of catastrophe losses, are twofold: firstly, the EQC's flat-rate pricing structure keeps private insurance premiums affordable and sustains very high nationwide homeowner take-up rates of natural disaster insurance; secondly, private insurers and the EQC have an elaborate reinsurance arrangement in place. Efficiently transferring risk to reinsurers considerably reduces the cost of writing primary insurance, ultimately expanding primary insurance capacity and the supply of insurance coverage.
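To make the intertemporal comparison described above concrete, the sketch below evaluates optimal coverage in a simple two-period expected-utility setting in which the perceived loss probability is revised upward after a period-one loss while the premium rate stays fixed. It is an illustrative toy model only, not the thesis's formulation; the utility function, wealth, loss and probability values are all assumptions.

```python
# Illustrative toy model only (not the thesis's formulation): optimal cover chosen by grid
# search over expected utility, evaluated with a period-two loss probability that is revised
# upward after a period-one loss while the premium rate stays fixed. Values are hypothetical.
import numpy as np

def crra(w, gamma=2.0):
    """Constant relative risk aversion utility (gamma is an assumed coefficient)."""
    return np.log(w) if gamma == 1.0 else (w ** (1.0 - gamma) - 1.0) / (1.0 - gamma)

def optimal_cover(wealth, loss, p, premium_rate, gamma=2.0):
    """Coverage level q in [0, loss] maximising expected utility at a fixed premium rate."""
    q = np.linspace(0.0, loss, 601)
    premium = premium_rate * q
    eu = p * crra(wealth - loss + q - premium, gamma) + (1 - p) * crra(wealth - premium, gamma)
    return q[np.argmax(eu)]

W, L, rate = 500_000.0, 300_000.0, 0.05        # hypothetical wealth, loss and premium rate
p_no_loss, p_post_loss = 0.02, 0.05            # assumed perceived loss probabilities

print(f"period-two cover, no period-one loss:    {optimal_cover(W, L, p_no_loss, rate):,.0f}")
print(f"period-two cover, after period-one loss: {optimal_cover(W, L, p_post_loss, rate):,.0f}")
# With the perceived probability revised upward and the premium unchanged, optimal cover is
# higher post-loss, consistent with the qualitative result reported above.
```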

Research papers, University of Canterbury Library

In 2010 and 2011, Christchurch, New Zealand experienced a series of earthquakes that caused extensive damage across the city, but primarily to the Central Business District (CBD) and eastern suburbs. A major feature of the observed damage was extensive and severe soil liquefaction and associated ground damage, affecting buildings and infrastructure. The behaviour of soil during earthquake loading is a complex phenomenon that can be most comprehensively analysed through advanced numerical simulations, which aid engineers in the design of important buildings and critical facilities. These numerical simulations are highly dependent on the capabilities of the constitutive soil model, such as the Stress-Density model, to replicate the salient features of sand behaviour during cyclic loading, including liquefaction and cyclic mobility. For robust analyses, advanced soil models require extensive testing under varying loading conditions to derive engineering parameters for calibration. Prior to this research project little testing on Christchurch sands had been completed, and none on natural samples containing important features such as the fabric and structure of the sand that may be influenced by the unique stress history of the deposit. This research programme is focussed on the characterisation of Christchurch sands, as typically found in the CBD, to facilitate advanced soil modelling in both research and engineering practice, in order to simulate earthquake loading on proposed foundation design solutions including expensive ground improvement treatments. This has involved the use of a new Gel Push (GP) sampler to obtain undisturbed samples from below the ground-water table. Due to the variable nature of fluvial deposition, samples with a wide range of soil gradations, and accordingly soil index properties, were obtained from the sampling sites. The quality of the samples is comprehensively examined using available data from the ground investigation and laboratory testing. A meta-quality assessment was applied whereby each method of evaluation contributed to the final quality index assigned to the specimen. The sampling sites were characterised with available geotechnical field-based test data, primarily the Cone Penetrometer Test (CPT), supported by borehole sampling and shear-wave velocity testing. This characterisation provides a geological context for the sampling sites and the samples obtained for element testing, and also facilitated the evaluation of sample quality. The sampling sites were evaluated for liquefaction hazard using industry-standard empirical procedures, which showed good correlation with observations made following the 22 February 2011 earthquake. However, the empirical method over-predicted liquefaction occurrence during the preceding 4 September 2010 event, and under-predicted it for the subsequent 13 June 2011 event. The reasons for these discrepancies are discussed. The response of the GP samples to monotonic and cyclic loading was measured through triaxial testing at the University of Canterbury geomechanics laboratory. The undisturbed samples were compared to reconstituted specimens formed in the laboratory in an attempt to quantify the effect of fabric and structure in the Christchurch sands. Further testing of moist-tamped reconstituted (MT) specimens was conducted to define important state parameters and state-dependent properties, including the Critical State Line (CSL) and the stress-strain curve for varying state index. 
To account for the wide-ranging soil gradations, selected representative specimens were used to define four distinct CSLs. The input parameters for the Stress-Density model (S-D) were derived from a suite of tests performed on each representative soil, and with reference to available GP sample data. The results of testing were scrutinised by comparing the data against expected trends. The fabric and structure of the GP samples were observed to result in cyclic strength curves similar to those of the reconstituted specimens at the 5 % Double Amplitude (DA) strain criterion; however, on close inspection of the test data, clear differences emerged. The natural samples exhibited higher compressibility during initial loading cycles, but thereafter typically exhibited steady growth of plastic strain and excess pore water pressure towards and beyond the strain criterion and initial liquefaction, and no flow was observed. By contrast, the reconstituted specimens exhibited a stiffer response during initial loading cycles, but exponential growth in strains and associated excess pore water pressure beyond phase transformation, and particularly after initial liquefaction, where large strains were mobilised in subsequent cycles. These behavioural differences were not well characterised by the cyclic strength curve at the 5 % DA strain level, which showed a similar strength for both GP samples and MT specimens. A preliminary calibration of the S-D model for a range of soil gradations is derived from the suite of laboratory test data. Issues encountered include the influence of natural structure on the peak-strength–state index relationship, resulting in much higher peak strengths than typically observed for sands in the literature. For the S-D model this resulted in excessive stiffness being modelled during cyclic mobility, when the state index momentarily becomes large, causing strain development to halt. This behaviour prevented modelling of the observed response of silty sands to large strains, synonymous with “liquefaction”. Efforts to reduce this effect within the current formulation are proposed, as is future research to address this issue.
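The cyclic strength curves referred to above are commonly summarised as a power law between the cyclic stress ratio and the number of cycles to a strain criterion. The sketch below fits such a curve to invented data points; the values, and the use of 15 cycles as a reference, are illustrative assumptions rather than results from this testing programme.

```python
# Illustrative sketch only: fitting a power-law cyclic strength curve, CSR = a * N^(-b),
# to hypothetical cyclic triaxial results (cycles to reach 5 % double-amplitude strain at a
# given cyclic stress ratio). The data points are invented, not the thesis's measurements.
import numpy as np

n_cycles = np.array([3, 7, 15, 30, 60])        # cycles to 5 % DA strain (hypothetical)
csr      = np.array([0.30, 0.25, 0.21, 0.18, 0.16])   # applied cyclic stress ratio

# Linear fit in log-log space: log(CSR) = log(a) - b * log(N)
slope, log_a = np.polyfit(np.log(n_cycles), np.log(csr), 1)
a, b = np.exp(log_a), -slope
print(f"fitted curve: CSR = {a:.3f} * N^(-{b:.3f})")

# Cyclic resistance at 15 cycles, often taken as representative of a magnitude 7.5 event
print(f"CRR at N = 15 cycles: {a * 15 ** (-b):.3f}")
```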

Research papers, University of Canterbury Library

In the last century, seismic design has undergone significant advancements. Starting from the initial concept of designing structures to perform elastically during an earthquake, the modern seismic design philosophy allows structures to respond to ground excitations in an inelastic manner, thereby allowing damage in earthquakes that are significantly less intense than the largest possible ground motion at the site of the structure. Current performance-based multi-objective seismic design methods aim to ensure life-safety in large and rare earthquakes, and to limit structural damage in frequent and moderate earthquakes. As a result, not many recently built buildings have collapsed and very few people have been killed in 21st-century buildings, even in large earthquakes. Nevertheless, the financial losses to the community arising from damage and downtime in these earthquakes have been unacceptably high (for example, reported to be in excess of 40 billion dollars in the recent Canterbury earthquakes). In the aftermath of the huge financial losses incurred in recent earthquakes, the public has unabashedly shown its dissatisfaction with the seismic performance of the built infrastructure. As the current capacity-design-based seismic design approach relies on inelastic response (i.e. ductility) in pre-identified plastic hinges, it implicitly accepts that structures will be damaged (and will inadvertently incur loss in the form of repair and downtime). It has now been widely accepted that while designing ductile structural systems according to the modern seismic design concept can largely ensure life-safety during earthquakes, it also causes buildings to undergo substantial damage (and significant financial loss) in moderate earthquakes. In a quest to match the seismic design objectives with public expectations, researchers are exploring how financial loss can be brought into the decision-making process of seismic design. This has facilitated the conceptual development of loss optimisation seismic design (LOSD), which involves estimating likely financial losses in design-level earthquakes and comparing them against acceptable levels of loss to make design decisions (Dhakal 2010a). Adoption of a loss-based approach in seismic design standards will be a big paradigm shift in earthquake engineering, but it is still a long-term dream, as quantifying the interrelationships between earthquake intensity, engineering demand parameters, damage measures, and different forms of losses for different types of buildings (and, more importantly, simplifying these interrelationships into design-friendly forms) will take a long time. Dissecting the cost of modern buildings suggests that the structural components constitute only a minor portion of the total building cost (Taghavi and Miranda 2003). Moreover, recent research on seismic loss assessment has shown that damage to non-structural elements and building contents contributes dominantly to the total building loss (Bradley et al. 2009). In an earthquake, buildings can incur losses of three different forms (damage, downtime, and death/injury, commonly referred to as the 3Ds), but all three forms of seismic loss can be expressed in terms of dollars. It is also obvious that the latter two loss forms (i.e. downtime and death/injury) are related to the extent of damage, which, in a building, will not be constrained to the load-bearing (i.e. structural) elements. 
As observed in recent earthquakes, even the secondary building components (such as ceilings, partitions, facades, windows, parapets, chimneys, canopies) and contents can undergo substantial damage, which can lead to all three forms of loss (Dhakal 2010b). Hence, if financial losses are to be minimised during earthquakes, not only the structural systems but also the non-structural elements (such as partitions, ceilings, glazing, windows etc.) should be designed for earthquake resistance, and valuable contents should be protected against damage during earthquakes. Several innovative building technologies have been (and are being) developed to reduce building damage during earthquakes (Buchanan et al. 2011). Most of these developments are aimed at reducing damage to buildings' structural systems without due attention to their effects on non-structural systems and building contents. For example, the PRESSS system or Damage Avoidance Design concept aims to enable a building's structural system to meet the required displacement demand by rocking, without the structural elements having to deform inelastically, thereby avoiding damage to these elements. However, as this concept does not necessarily reduce the inter-storey drift or floor acceleration demands, the damage to non-structural elements and contents can still be high. Similarly, the concept of externally bracing/damping building frames reduces the drift demand (and consequently reduces the structural damage and drift-sensitive non-structural damage). Nevertheless, the acceleration-sensitive non-structural elements and contents will still be very vulnerable to damage, as the floor accelerations are not reduced (and are arguably increased). Therefore, these concepts may not be able to substantially reduce the total financial losses in all types of buildings. Among the emerging building technologies, base isolation looks very promising, as it seems to reduce both inter-storey drifts and floor accelerations, thereby reducing the damage to the structural and non-structural components of a building and its contents. Undoubtedly, a base-isolated building will incur substantially reduced loss of all three forms (dollars, downtime, death/injury), even during severe earthquakes. However, base isolating a building or applying any other beneficial technology may incur additional initial costs. In order to provide incentives for builders/owners to adopt these loss-minimising technologies, the real-estate and insurance industries will have to acknowledge the reduced risk posed by (and enhanced resilience of) such buildings in setting their rental/sale prices and insurance premiums.
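The interrelationships mentioned above (earthquake intensity, engineering demand parameters, damage measures and losses) are typically chained together numerically. The following minimal Monte Carlo sketch illustrates that chain for a single hypothetical drift-sensitive component group; the demand model, fragility parameters and repair costs are invented for illustration and are not taken from the cited studies.

```python
# Minimal Monte Carlo sketch of the intensity -> demand -> damage -> dollars chain that
# loss-based design relies on, for one hypothetical drift-sensitive component group.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 100_000

sa = 0.4                                   # assumed spectral acceleration (g) for the scenario
median_drift, beta_d = 0.010, 0.40         # assumed peak inter-storey drift model at this Sa
drift = median_drift * np.exp(rng.normal(0.0, beta_d, n))   # lognormal demand realisations

# Hypothetical damage states: (median drift capacity, dispersion, incremental repair cost)
damage_states = [(0.004, 0.5, 20_000), (0.010, 0.5, 80_000), (0.020, 0.5, 200_000)]

loss = np.zeros(n)
for median, beta, delta_cost in damage_states:
    p_exceed = norm.cdf(np.log(drift / median) / beta)      # lognormal fragility curve
    loss += p_exceed * delta_cost                           # expected incremental repair cost

print(f"expected repair cost for this component group at Sa = {sa:.1f} g: ${loss.mean():,.0f}")
```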

Research papers, University of Canterbury Library

The research is funded by Callaghan Innovation (grant number MAIN1901/PROP-69059-FELLOW-MAIN) and the Ministry of Transport New Zealand in partnership with Mainfreight Limited. Need – The freight industry is facing challenges related to climate change, including natural hazards and carbon emissions. These challenges impact the efficiency of freight networks, increase costs, and negatively affect delivery times. To address these challenges, freight logistics modelling should consider multiple variables, such as natural hazards, sustainability, and emission reduction strategies. Freight operations are complex, involving various factors that contribute to randomness, such as the volume of freight being transported, the location of customers, and truck routes. Conventional methods have limitations in simulating a large number of variables; hence, there is a need to develop a method that can incorporate multiple variables and support sustainable freight development. Method – A minimal viable model (MVM) method was proposed to elicit tacit information from industrial clients for building a minimally sufficient simulation model at the early modelling stages. The discrete-event simulation (DES) method was applied using Arena® software to create simulation models for the Auckland–Christchurch corridor, including regional pick-up and delivery (PUD) models, Christchurch city delivery models, and linehaul models. Stochastic variables in freight operations, such as consignment attributes, customer locations, and truck routes, were incorporated in the simulation. The geographic information system (GIS) software ArcGIS Pro® was used to identify and analyse industrial data, and the results obtained from the GIS software were applied to create the DES models. Life cycle assessment (LCA) models were developed for both diesel and battery electric (BE) trucks to compare their life cycle greenhouse gas (GHG) emissions and total cost of ownership (TCO) and to support GHG emission reduction. The linehaul model also included natural hazards in several scenarios, and the simulation was used to forecast the stock levels of the Auckland and Christchurch depots in response to each scenario. Results – DES is a powerful technique that can be employed to simulate and evaluate freight operations that exhibit high levels of variability, such as regional pick-up and delivery (PUD) and linehaul. Through DES, it becomes possible to analyse multiple factors within freight operations, including transportation modes, routes, scheduling, and processing times, thereby offering valuable insights into the performance, efficiency, and reliability of the system. In addition, GIS is a useful tool for analysing and visualising spatial data in freight operations, exemplified by its ability to solve the travelling salesman problem (TSP) and conduct cluster analysis. Consequently, the integration of GIS into DES modelling is essential for improving the accuracy and reliability of freight operations analysis. The outcomes of the simulation were used to evaluate the ecological impact of freight transport by performing emission calculations and generating low-carbon scenarios to identify approaches for reducing the carbon footprint. LCA models were developed based on the simulation results. 
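As a rough, open-source analogue of the Arena® discrete-event models described above, the sketch below simulates a stochastic pick-up and delivery run with the SimPy library; the fleet size, stop counts and travel and service time distributions are hypothetical, not the study's data.

```python
# Minimal discrete-event sketch of a pick-up and delivery (PUD) run, written with the
# open-source SimPy library as a stand-in for the Arena models described above.
import random
import simpy

def pud_truck(env, name, n_stops, log):
    """One truck visiting a stochastic number of customer stops on its daily run."""
    for _ in range(n_stops):
        yield env.timeout(random.uniform(5, 20))    # travel time to the next customer (min)
        yield env.timeout(random.uniform(2, 10))    # service time at the customer (min)
    log.append((name, env.now))

random.seed(42)
env = simpy.Environment()
completion_log = []
for i in range(5):                                  # hypothetical fleet of five trucks
    stops = random.randint(15, 30)                  # stochastic daily consignment count
    env.process(pud_truck(env, f"truck-{i}", stops, completion_log))
env.run()

for name, finish in completion_log:
    print(f"{name} completed its run after {finish:.0f} minutes")
```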
Results showed that battery electric (BE) trucks produced more greenhouse gas (GHG) emissions in the cradle phase, due to battery manufacturing, but substantially less GHG emissions in the use phase because of New Zealand's mostly renewable electricity generation. While the transition to BE trucks could significantly reduce emissions, the financial case is not compelling: the total cost of ownership (TCO) over ten years was about the same for the BE truck as for the diesel truck, despite the BE truck's higher capital cost. Consequently, external incentives are necessary to justify a shift to BE trucks. By using simulation methods, the effectiveness of response plans for natural hazards can be evaluated, and the system's vulnerabilities can be identified and mitigated to minimise the risk of disruption. Simulation models can also be used to test adaptation plans that enhance the system's resilience to natural disasters. Novel contributions – The study employed a combination of DES and GIS methods to incorporate a large number of stochastic variables and drivers' decisions into freight logistics modelling. Various realistic operational scenarios were simulated, including customer clustering and PUD truck allocation. This showed that complex pick-up and delivery routes with high daily variability can be represented using a model of roads and intersections, and that geographic regions of high customer density, along with high daily variability, can be represented by a two-tier architecture. The method could also identify delivery runs for a whole city, which has potential usefulness for market expansion into new territories. In addition, a model was developed to address the carbon emissions and total cost of ownership of battery electric trucks. This showed that the transition is not straightforward because the economics are not compelling, and that policy interventions (a variety were suggested) could be necessary to encourage the transition to decarbonised freight transport. A model was also developed to represent the effect of natural disasters, such as earthquakes and climate change, on road travel and detour times in the linehaul freight context for New Zealand. From this it was possible to predict the effects on stock levels for a variety of disruption scenarios (ferry interruption, road detours). Results indicated that some centres may face higher pressure and longer-term disturbance than others after the disaster subsides. Remedies including coastal shipping were modelled and shown to have the potential to limit the adverse effects. A philosophical contribution was the development of a methodology for adapting the agile method to the modelling process, which has the potential to improve the clarification of client objectives and the validity of the resulting model.
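The structure of the diesel versus battery electric TCO comparison can be illustrated with a simple discounted-cost calculation; every input value below is a hypothetical placeholder rather than a figure from the study.

```python
# Back-of-envelope sketch of a ten-year total cost of ownership (TCO) comparison for a
# diesel versus battery electric (BE) linehaul truck. All numbers are hypothetical.
def tco(capital_nzd, km_per_year, energy_cost_per_km, maintenance_per_km,
        years=10, discount_rate=0.07, residual_fraction=0.15):
    """Capital cost plus discounted running costs, minus a discounted residual value."""
    running = sum((km_per_year * (energy_cost_per_km + maintenance_per_km))
                  / (1 + discount_rate) ** t for t in range(1, years + 1))
    residual = capital_nzd * residual_fraction / (1 + discount_rate) ** years
    return capital_nzd + running - residual

diesel = tco(capital_nzd=250_000, km_per_year=120_000,
             energy_cost_per_km=0.55, maintenance_per_km=0.18)
battery_electric = tco(capital_nzd=450_000, km_per_year=120_000,
                       energy_cost_per_km=0.25, maintenance_per_km=0.12)
print(f"diesel TCO (10 yr):           ${diesel:,.0f}")
print(f"battery electric TCO (10 yr): ${battery_electric:,.0f}")
```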

Research papers, University of Canterbury Library

In major seismic events, a number of plan-asymmetric buildings that experienced element failure or structural collapse had twisted significantly about their vertical axis during the earthquake shaking. This twist, known as “building torsion”, results in greater demands on one side of a structure than on the other. The Canterbury Earthquakes Royal Commission's reports describe the response of a number of buildings in the February 2011 Christchurch earthquake. As a result of the catastrophic collapse of one multi-storey building with significant torsional irregularity, and significant torsional effects in other buildings, the Royal Commission recommended that further studies be undertaken to develop improved simple and effective guidance for considering torsional effects in buildings that respond inelastically during earthquake shaking. Separately from this, as building owners, the government and other stakeholders plan for possible earthquake scenarios, they need good estimates of the likely performance of both new and existing buildings. These estimates, often made using performance-based earthquake engineering considerations and loss estimation techniques, inform decision making. Since all buildings may experience torsion to some extent, and torsional effects can influence demands on building structural and non-structural elements, it is crucial that demand estimates consider torsion. Building seismic response considering torsion can be evaluated with nonlinear time history analysis; however, such analysis involves significant computational effort, expertise and cost. Therefore, from an engineer's point of view, simpler analysis methods with reasonable accuracy are beneficial. The consideration of torsion in simple analysis methods has been investigated by many researchers. However, many studies are theoretical, without direct relevance to structural design and assessment. Some existing methods also have limited applicability, or they are difficult to use in routine design office practice. In addition, there has been no consensus about which method is best. As a result, there is a notable lack of recommendations in current building design codes for torsion in buildings that respond inelastically. There is a need for building torsion to be considered in yielding structures, and for simple guidance to be developed and adopted into building design standards. This study was undertaken to address this need for plan-asymmetric structures that are regular over their height. Time history analyses are first conducted to quantify the effects of building plan irregularity, which leads to torsional response, on the seismic response of building structures. The effects of some key structural and ground motion characteristics (e.g. hysteretic model and ground motion duration) are considered. Mass eccentricity is found to result in rather smaller torsional response than stiffness/strength eccentricity. Mass rotational inertia generally decreases the torsional response; however, the trend is not clearly defined for torsionally restrained systems (i.e. large λty). Systems with EPP and bilinear hysteretic models have similar displacements, and systems with Takeda, SINA, and flag-shaped models yield almost the same displacements. Damping has no specific effect on the torsional response of single-storey systems with unidirectional eccentricity and excitation. 
Displacements of the single-storey systems subjected to long-duration ground motion records are smaller than those for short-duration records. A method to consider the torsional response of ductile building structures under earthquake shaking is then developed, based on structural dynamics, for a wide range of structural systems and configurations, including those with low and high torsional restraint. The method is then simplified for use in engineering practice. A novel method is also proposed to simply account for the effects of strength eccentricity on the response of highly inelastic systems. A comparison of the accuracy of some existing methods (including the code-based equivalent static method and the modal response spectrum analysis (MRSA) method) and the proposed method is conducted for single-storey structures. It is shown that the proposed method generally provides better accuracy over a wide range of parameters. In general, the equivalent static method is not adequate for capturing torsional effects, while the elastic modal response spectrum analysis method is generally adequate for some common parameters. Record-to-record variation in the maximum displacement demand on structures with different degrees of torsional response is considered in a simple way. Bidirectional torsional response is then considered. Bidirectional eccentricity and excitation have varying effects on the torsional response; however, they generally increase the weak- and strong-edge displacements. The proposed method is then generalised to consider bidirectional torsion due to bidirectional stiffness/strength eccentricity and bidirectional seismic excitation. The method is shown to predict displacements conservatively; however, the conservatism decreases slightly for cases with bidirectional excitation compared to those subjected to unidirectional excitation. It is shown that the roof displacement of multi-storey structures with torsional response can be predicted by considering the first mode of vibration. The method is then further generalised to estimate torsional effects on multi-storey structure displacement demands. The proposed procedure is tested on multi-storey structures and shown to predict the displacements with good accuracy and conservatively. For buildings which twist in plan during earthquake shaking, the effect of P-Δ action is evaluated and recommendations for design are made. P-Δ has more significant effects on systems with small post-yield stiffness. Therefore, the system stability coefficient is shown not to be the best indicator of the importance of P-Δ, and it is recommended to use the post-yield stiffness of the system computed with allowance for P-Δ effects. For systems with torsional response, the global system stability coefficient and post-yield stiffness ratio do not properly reflect the significance of P-Δ effects; therefore, for torsional systems the individual seismic force resisting systems should be considered. The accuracy of MRSA is investigated and it is found that MRSA is not always conservative for estimating the centre-of-mass and strong-edge displacements, or the displacements of ductile systems with strength eccentricity larger than stiffness eccentricity. Some modifications are proposed so that MRSA yields a conservative estimate of displacement demands in all cases.
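As background to the equivalent static treatment of torsion discussed above, the sketch below computes elastic edge displacements for a single-storey, rigid-diaphragm system from its stiffness eccentricity. The wall stiffnesses and plan dimensions are hypothetical, and the calculation is elastic only, unlike the inelastic systems studied in the thesis.

```python
# Elastic equivalent-static sketch of building torsion: a lateral force through the centre
# of mass (CM) is resolved into a translation through the centre of rigidity (CR) plus a
# torque F*e. Wall stiffnesses and plan positions are hypothetical.
import numpy as np

F = 1000.0                               # lateral force (kN), applied at the CM
x_cm = 10.0                              # CM location along the plan (m)
x_walls = np.array([0.0, 20.0])          # plan positions of the two lateral walls (m)
k_walls = np.array([60.0, 30.0])         # wall stiffnesses (kN/mm): stiff and flexible side

x_cr = np.sum(k_walls * x_walls) / np.sum(k_walls)     # centre of rigidity (m)
e = x_cm - x_cr                                        # static stiffness eccentricity (m)
k_theta = np.sum(k_walls * (x_walls - x_cr) ** 2)      # torsional stiffness (kN*m^2/mm)

u_cr = F / np.sum(k_walls)                             # pure translation at the CR (mm)
theta = F * e / k_theta                                # rotation (mm of displacement per m of lever arm)
u_walls = u_cr + theta * (x_walls - x_cr)              # edge displacements (mm)

print(f"eccentricity e = {e:.2f} m")
for x, u in zip(x_walls, u_walls):
    side = "stiff" if x < x_cr else "flexible"
    print(f"wall at x = {x:4.1f} m ({side} side): displacement = {u:.2f} mm")
```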

Research papers, University of Canterbury Library

Natural hazard disasters often have large area-wide impacts, which can cause adverse stress-related mental health outcomes in exposed populations. As a result, increased treatment-seeking may be observed, which puts a strain on limited public health care resources, particularly in the aftermath of a disaster. It is therefore important for public health care planners to know whom to target, but also where and when to initiate intervention programmes that promote emotional wellbeing and prevent the development of mental disorders after catastrophic events. A large body of literature assesses factors that predict and mitigate disaster-related mental disorders at various time periods, but the spatial component has rarely been investigated in disaster mental health research. This thesis uses spatial and spatio-temporal analysis techniques to examine when and where higher and lower than expected mood and anxiety symptom treatments occurred in the severely affected Christchurch urban area (New Zealand) after the 2010/11 Canterbury earthquakes. High-risk groups are identified, and a possible relationship between exposure to the earthquakes and their physical impacts and mood and anxiety symptom treatments is assessed. The main research aim is to test the hypothesis that more severely affected Christchurch residents were more likely to show mood and anxiety symptoms when seeking treatment than less affected ones; in essence, testing for a dose-response relationship. The data consisted of mood and anxiety symptom treatment information from the New Zealand Ministry of Health's administrative databases and demographic information from the National Health Index (NHI) register, which, when combined, formed a unique and rich source for identifying publicly funded stress-related treatments for mood and anxiety symptoms in almost the whole population of the study area. The Christchurch urban area within the Christchurch City Council (CCC) boundary was the area of interest in which spatial variations in these treatments were assessed. Spatial and spatio-temporal analyses were undertaken by applying retrospective space-time and spatial variation in temporal trends analyses using SaTScan™ software, and Bayesian hierarchical modelling techniques for disease mapping using WinBUGS software. The thesis identified an overall earthquake-exposure effect on mood and anxiety symptom treatments among Christchurch residents, who experienced stronger increases in the risk of being treated, especially shortly after the catastrophic February 2011 Christchurch earthquake, compared to the rest of New Zealand. High-risk groups included females, the elderly, children and those with a pre-existing mental illness, with the elderly and children especially at risk in the context of the earthquakes. Examining the spatio-temporal distribution of mood and anxiety symptom treatments in the Christchurch urban area, a high-rate cluster extending from the severely affected central city to the southeast was found post-disaster. Analysis of residential exposure to various earthquake impacts identified living in closer proximity to more affected areas as a risk factor for mood and anxiety symptom treatments, which largely confirms a dose-response relationship between level of affectedness and mood and anxiety symptom treatments. 
However, little change in the spatial distribution of mood and anxiety symptom treatments occurred in the Christchurch urban area over time, indicating that these results may have been biased by pre-existing spatial disparities. Additionally, post-disaster mobility from the severely affected eastern parts to the generally less affected western and northern parts of the city seems to have played an important role, as the strongest increases in treatment rates occurred in less affected northern areas of the city, whereas the severely affected eastern areas tended to show the lowest increases. An investigation into the different effects of mobility confirmed that within-city movers and temporary relocatees were generally more likely to receive care or treatment for mood or anxiety symptoms, but moving within the city was identified as a protective factor over time. In contrast, moving out of the city from minor, moderately or severely damaged plain areas of the city, which are generally less affluent than the Port Hills areas, was identified as a risk factor in the second year post-disaster. Moreover, residents from less damaged plain areas of the city showed a decrease in the likelihood of receiving care or treatment for mood or anxiety symptoms compared to those from undamaged plain areas over time, which also contradicts a possible dose-response relationship. Finally, the effects of the social and physical environment, as well as community resilience, on mood and anxiety symptom treatments among long-term stayers in Christchurch communities indicate an exacerbation of pre-existing mood and anxiety symptom treatment disparities in the city, whereas exposure to ‘felt’ earthquake intensities did not show a statistically significant effect. The findings of this thesis highlight the complex relationship between different levels of exposure to a severe natural disaster and adverse mental health outcomes in a severely affected region. It is one of the few studies with access to area-wide health and impact information that is able to make a pre-disaster/post-disaster comparison and track its sample population in order to apply spatial and spatio-temporal analysis techniques for exposure assessment. Thus, this thesis enhances knowledge about the spatio-temporal distribution of adverse mental health outcomes in the context of a severe natural disaster and informs public health care planners not only about high-risk groups, but also about where and when to target health interventions. The results indicate that such programmes should broadly target residents living in more affected areas, as they are likely to face daily hardship living in a disrupted environment and may already have been the most vulnerable before the disaster. Special attention should be focussed on women, the elderly, children and people with pre-existing mental illnesses, as they are most likely to receive care or treatment for stress-related mental health symptoms. Moreover, permanent relocatees from affected areas and temporary relocatees shortly after the disaster may need special attention, as they face additional stressors due to relocation that may lead to the development of adverse mental health outcomes needing treatment.
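The ‘higher and lower than expected’ comparisons described above rest on observed-to-expected treatment ratios of the kind sketched below. The areas, populations and counts are invented for illustration, and the sketch deliberately omits the SaTScan™ scan statistics and Bayesian spatial smoothing used in the thesis.

```python
# Minimal sketch of indirectly standardised treatment ratios (observed / expected counts),
# with expected counts derived from a reference rate applied to each area's population.
# All areas, rates and counts below are hypothetical, not the thesis's data.
import numpy as np

reference_rate = 0.032                    # assumed city-wide treatment rate per person-year

areas = {                                 # hypothetical small-area aggregates
    "central city":   {"population": 12_000, "observed": 520},
    "eastern suburb": {"population": 18_000, "observed": 640},
    "western suburb": {"population": 15_000, "observed": 430},
}

for name, d in areas.items():
    expected = reference_rate * d["population"]
    smr = d["observed"] / expected        # standardised morbidity ratio
    se_log = 1.0 / np.sqrt(d["observed"]) # crude Poisson approximation for log(SMR)
    lo, hi = smr * np.exp(-1.96 * se_log), smr * np.exp(1.96 * se_log)
    print(f"{name:15s} SMR = {smr:.2f} (approx. 95% CI {lo:.2f}-{hi:.2f})")
```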