Concept of building performance evaluation and model study by reduced sequences


Buildings sector on French scale

In France, the unit energy needs of buildings are decreasing over time (thanks to thermal insulation, the consideration of energy aspects during construction, housing rehabilitation, etc.). However, energy demand in this sector remains very high, mainly because of the large stock of energy-intensive existing buildings. Overall, consumption in the building sector (residential and tertiary) has been practically stable since 2003. It represents 44% of total consumption, far ahead of transport (32%), industry (21%) and other sectors, mainly agriculture (3%) (Figure I-5).
Figure I-5. Final energy use for each sector adapted from French Environmental Energy Agency ADEME (Source: D. Mauree, 2014)
The building sector's consumption breaks down into two thirds for residential buildings (main and secondary residences) and one third for the commercial sector [11]. In 2013, space heating and cooling accounted for 68% of this share, making them the major contributor to energy consumption in buildings. The remaining share was divided between domestic hot water, cooking and specific electricity uses such as lighting and appliances, as shown in Figure I-6.
Figure I-6. Final energy use inside buildings adapted from French Environmental Energy Agency ADEME (Image credit: D. Mauree, 2014)

Concept of building performance simulation (BPS) and optimization

From this point, governments all over the world started adopting new regulations and laws that take environmental impacts into consideration for new projects. In addition, research aimed at improving the performance of the different sectors has received growing government support through increased funding and new policies.
For instance, the European Union adopted a policy committing member states to a 9% reduction in energy use by 2016 under Directive 2006/32/EC [12], in addition to decreasing both greenhouse gas emissions and primary energy consumption by 20% as set out in the climate and energy package legislation [13]. The Paris climate agreement, COP21 [14], commits the participating countries to limiting total CO2 emissions to 40 billion tons per year in order to limit global warming to 1.5°C.
The building sector offers significant potential for improved energy efficiency through high-performance envelopes and energy-efficient systems. Hence, interest in building design studies has risen.
Building performance simulation (BPS), also denoted Building Energy Simulation (BES), is increasingly used to design buildings because of its emphasis on sustainability [15]. The requirements of building design comprise qualitative elements (social impact, aesthetics, spatial planning, etc.) and quantitative elements (cost, yearly energy consumption, amount of daylight, etc.). The design aims at satisfying multiple criteria in addition to measurable performances. Several papers examine how a geometric model can be dynamically operated in relation to BPS [16]–[18].
The potential of the different BPS tools can be categorized according to two design stages, simplified and detailed [19]. Simplified tools have proven useful at certain points of the early design stage but may be of limited use for later design evaluation. Many researchers have published methods based on high-precision calculations, some focusing on manual variations ([20], [21]) while others used Monte Carlo algorithms ([19], [22]). When it comes to optimization, most studies focus on a single objective or very few objectives, such as electrical consumption ([16], [23], [24]). In general, such methods seek high precision of the performance functions, which in turn penalizes calculation speed.
Optimization algorithms run numerical models iteratively, constructing sequences of progressively better solutions up to a point that satisfies pre-defined optimality conditions. This point is not necessarily the globally optimal solution, since the latter may be unreachable due to the nature of the case study [25] or of the program itself [26]. Because of code features, the search space may be non-linear and exhibit discontinuities, requiring special optimization methods that do not compute the derivatives of the objective function [27]. In building optimization studies, the building simulation model is usually coupled with an optimization engine, which runs algorithms and strategies to find what is described as an optimal solution [28]. Two optimization examples used in building optimization, usually applied to simplified models, are described hereunder:
• Pattern search, an iterative search for the optimum that does not require a gradient and can therefore be applied to non-differentiable or discontinuous functions. The step size is halved when no further improvement is possible [29].
• Linear programming, which simplifies the problem into a linear (matrix) form to compute the optimum directly. If all objective functions and constraints are linear, the optimum lies at an extreme point of the feasible region [30].
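As a minimal illustration of the step-halving logic of pattern search described above, the sketch below implements a derivative-free compass search; the quadratic test function and all parameter values are illustrative assumptions, not taken from the cited works:

```python
def pattern_search(f, x0, step=1.0, tol=1e-6, max_iter=1000):
    """Derivative-free compass search: probe +/- step along each axis
    and halve the step when no probe improves the incumbent point."""
    x = list(x0)
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                trial = x.copy()
                trial[i] += d
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step /= 2.0          # halve the step size when stuck
            if step < tol:
                break
    return x, fx

# Illustrative use on a smooth 2-D bowl with minimum at (3, -1):
xopt, fopt = pattern_search(lambda v: (v[0] - 3) ** 2 + (v[1] + 1) ** 2,
                            [0.0, 0.0])
```

No gradient is ever evaluated, which is why this family of methods suits the discontinuous response surfaces mentioned above.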
In their review of simulation-based optimization studies for building performance analysis, Nguyen et al. [31] divided the process into three phases:
• The pre-processing phase, where the optimization problem is formulated: the building model, the objective functions and constraints, the selection of an appropriate optimization algorithm and its coupling with the model. In this phase, the model should be simplified enough not to severely delay the optimization process, but not so simplified that building phenomena are modeled inaccurately [32].
• The optimization phase, where the study is monitored, controlled and checked for errors. It is worth mentioning that in this phase it is almost impossible to estimate the convergence time of the optimization algorithm. Researchers do not usually report the time taken by the algorithm to converge to an optimal solution, since the behavior of optimization algorithms is not trivial. However, several attempts have been made to speed up simulation time while still reaching good final results, such as in [33].
• The post-processing phase, where interpretation, verification and presentation of results, as well as decision making, take place.
In addition to optimization methods based on simplified models, evolutionary algorithms are very common in the building optimization field. They are usually applied to the optimization of dynamic detailed models because their learning process helps them converge faster to optimal solutions based on the results of previous iterations. Such algorithms apply the Darwinian principle of survival of the fittest by maintaining a population of solutions from which the poorest are eliminated. Types of such algorithms include Genetic Algorithms (GA) [34], Evolutionary Programming (EP) [35], [36], Covariance Matrix Adaptation Evolution Strategy (CMA-ES) [37] and Differential Evolution (DE) [38]. Other algorithms that mimic natural processes include Harmony Search (HS) [39], Particle Swarm Optimization (PSO) [40], Ant Colony Optimization (ACO) [41] and Simulated Annealing (SA) [42].
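The selection principle described above (keep a population, eliminate the poorest) can be sketched as a minimal elitist evolutionary loop; the sphere test function and all parameter values below are illustrative assumptions and do not reproduce any of the cited algorithms:

```python
import random

def evolutionary_minimize(f, dim, pop_size=20, generations=100,
                          sigma=0.5, bounds=(-5.0, 5.0), seed=0):
    """Minimal elitist evolutionary loop: mutate the survivors with
    Gaussian noise, then keep the best pop_size individuals of the
    combined parent + offspring population (the poorest are eliminated)."""
    rng = random.Random(seed)
    pop = [[rng.uniform(*bounds) for _ in range(dim)]
           for _ in range(pop_size)]
    for _ in range(generations):
        offspring = [[x + rng.gauss(0.0, sigma) for x in parent]
                     for parent in pop]
        pop = sorted(pop + offspring, key=f)[:pop_size]
    best = min(pop, key=f)
    return best, f(best)

# Illustrative use: minimize the 3-D sphere function
best, fbest = evolutionary_minimize(lambda v: sum(x * x for x in v), dim=3)
```

Real GA, DE or CMA-ES implementations add recombination and step-size adaptation; the sketch only shows the survival-of-the-fittest selection that all of them share.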
As mentioned previously, many building optimization studies use a single-objective approach, where one objective function is optimized per optimization run [43]. In the real world, however, designers have to deal with several contradictory design criteria simultaneously, such as minimizing energy demand while minimizing cost or maximizing indoor comfort [44], [45]. Therefore, multi-objective optimization is more relevant than the single-objective approach, and numerous research papers consider it, as shown in the following paragraphs.
In their review of optimization methods applied to renewable and sustainable energy, Baños et al. [25] shed light on the concepts of single- and multi-objective optimization. They pointed out that in many applications, multi-objective optimization is inevitable because several decision parameters interact.
There exist two main approaches to solve multi-objective problems:
• The scalarization approach, which assigns a weight factor to each criterion and thus brings the problem back to a single objective: the weighted sum of the criteria [46].
• Pareto optimality approaches, where trade-off optimal solutions are sought and appropriate solutions are then determined; the approach is referred to as “Pareto optimization”. The basic principle, established by Pareto in 1896, is as follows: “In a multi-objective problem, there exists such a balance that one cannot improve one criterion without deteriorating at least one of the other criteria.” This equilibrium is called the Pareto optimum: a solution is Pareto optimal if it is not dominated by any other solution, i.e. if no other solution improves one criterion without deteriorating another. The Pareto front is the set of Pareto-optimal solutions. Due to the complexity of building optimization problems, researchers often use up to two objective functions, and very few study three or more, as in [47], which optimized energy consumption, CO2 emissions and initial investment cost, or [48], which optimized energy consumption, thermal comfort and initial investment cost. Selecting the optimal solution from the front is not trivial and is known as multi-criteria decision making; many decision-making techniques have been developed [49], such as “pros and cons”, “simple prioritization” and “bureaucratic”.
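The dominance rule stated above can be written down directly; the two-objective points below (energy use, investment cost, both to be minimized) are purely illustrative:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives
    minimized): a is no worse in every objective and strictly better
    in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Keep only the non-dominated objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points)]

# Illustrative trade-off between energy use and investment cost:
solutions = [(50, 120), (60, 100), (55, 130), (45, 150), (60, 110)]
front = pareto_front(solutions)  # (55,130) and (60,110) are dominated
```

Every point on the resulting front is a legitimate design choice; picking one among them is precisely the multi-criteria decision-making step mentioned above.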
Stadler et al. [50] created a multi-objective process to minimize CO2 emissions by optimizing the energy systems linked to the building. Similarly, Merkel et al. [51], Milan et al. [52], Lauinger et al. [53] and others have studied building and energy supply system optimization by multi-objective approaches.
In multi-objective problems, splitting building design problems into sub-problems (envelope, systems, renewables, etc.) may miss synergies between the different areas. As a result, many researchers have considered the building globally by optimizing variables from different areas, as in [54]. Yet holistic approaches, which take both the envelope and the systems into consideration, lead to complex models, especially when analyzing heat networks for multiple buildings (districts or blocks), and therefore to unaffordable computational costs. Simulating a detailed building model may take several minutes in building energy simulation [32], while simulation-based optimization techniques may require up to thousands of simulations to evaluate the case study. The optimization scheme may therefore become infeasible with such computationally expensive models. Usually, very simplified models are used instead of detailed building models to avoid this issue, as in [54]–[56]. In particular, Lee [57] used a two-step optimization scheme to deal with an expensive CFD model: in the first step, the optimization was performed on a simple CFD model; then a few detailed CFD simulations were run on the optimal candidate solutions found in step 1 to refine the results. Other methods reduce the population size and/or the number of generations. Mancarella et al. [58] used spatial aggregation to reduce the number of nodes in an energy system network study, and Milan et al. [59] reduced nonlinearities and discontinuities to avoid non-convexity of the program. Other work using simplified analytical models can be found in [60]–[64].
However, these reductions significantly lower the performance of optimization algorithms and may result in sub-optimal solutions [65]. Surrogate models are among the promising answers to this problem. A surrogate model (also called a meta-model or emulator) is an approximation of the original model: it mimics the behavior of the original model so as to reproduce its responses at a reduced computational cost.
In the context of optimization, surrogate models can speed up convergence by reducing the cost of function evaluations and/or smoothing noisy response functions [66]. After the surrogate-based optimization, a refined optimization around the optimal points using the original model can be performed to obtain exact solutions. Klemm et al. [67] employed surrogate-based optimization by applying polynomial regression to CFD simulation results in order to derive explicit analytic objective functions, then optimizing them with a simple deterministic optimization method. Magnier and Haghighat [32] used TRNSYS simulations to train an artificial neural network (ANN), then coupled the trained and validated ANN with a genetic algorithm (GA) to optimize thermal comfort and energy consumption. The database for training the ANN consisted of the outputs of 450 simulations; generating it took three weeks, but the optimization time was very small. Had TRNSYS been coupled directly with the GA, the task would have taken 10 years to finish [32]. Chen et al. [68] used a feed-forward neural network to identify temperatures in intelligent buildings, then optimized with particle swarm optimization (PSO). Eisenhower et al. [69] used the Support Vector Machines method to generate several meta-models of a 30-zone EnergyPlus building model and then performed a sensitivity analysis to select the most influential variables for optimization. These authors stated that optimization using the meta-model gives results nearly equivalent to those obtained with the EnergyPlus model.
They also recommended further investigation of Gaussian Process regression, sometimes denoted Kriging, for the optimization of complex buildings. Gengembre et al. [70] minimized the 20-year life-cycle cost of a single-zone building model using a surrogate model and PSO. They concluded that the accuracy of their surrogate model was acceptable and that such a model can further help designers explore the design space at a low simulation cost.
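The two-step surrogate workflow discussed above can be sketched schematically: sample the expensive model, fit a cheap surrogate, optimize the surrogate, then refine around the candidate with the original model. The sketch is only in the spirit of the cited studies; the 1-D stand-in "expensive" function, the polynomial surrogate and all sample counts are illustrative assumptions:

```python
import numpy as np

def expensive_model(x):
    """Stand-in for a costly building simulation (illustrative only)."""
    return (x - 2.0) ** 2 + 0.1 * np.sin(5 * x)

# Step 1: a small design of experiments on the expensive model
xs = np.linspace(0.0, 4.0, 9)
ys = expensive_model(xs)

# Step 2: fit a cheap quadratic surrogate to the samples
surrogate = np.poly1d(np.polyfit(xs, ys, deg=2))

# Step 3: optimize the surrogate exhaustively (cheap evaluations)
grid = np.linspace(0.0, 4.0, 4001)
x_cand = grid[np.argmin(surrogate(grid))]

# Step 4: refine with a few expensive evaluations near the candidate
local = np.linspace(x_cand - 0.2, x_cand + 0.2, 81)
x_best = local[np.argmin(expensive_model(local))]
```

Here only 9 + 81 expensive evaluations are spent, while the thousands of cheap evaluations all hit the surrogate; this is the trade-off that made the ANN- and Kriging-based studies above tractable.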
However, the accuracy and sensitivity of surrogate-based optimization is currently not a well-developed area, especially when the number of input variables is large [71], when the cost function is highly discontinuous, or when many discrete input variables exist.
The strengths and weaknesses of the various surrogate methods are a large research field of computational and statistical science, well beyond the scope of the building simulation community. There is currently no consensus on how to obtain the most reliable estimate of the accuracy of a surrogate model; the coefficient of determination R² is therefore often applied, as in [32], [72]. R² is the proportion of the variance of a dependent variable that is predictable from the independent variable(s). Furthermore, the random sampling method for the inputs and the number of building model evaluations used to construct and validate a surrogate model remain problematic and are often chosen empirically by analysts. More studies are also needed to determine whether significant differences exist between optimization results given by a surrogate model and by the ‘actual’ building model. In addition, the processing time of optimization studies can be severely affected by the balance between the number of variables and their options; computer clusters are usually used for complicated optimization problems with a large number of variables [73]. These questions are explicit challenges for the building research community.
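Since R² is the accuracy measure most often reported for surrogate models, its computation is worth making explicit; the short data vectors below are illustrative:

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination: the share of the variance of
    y_true that is explained by the predictions y_pred."""
    mean = sum(y_true) / len(y_true)
    ss_tot = sum((y - mean) ** 2 for y in y_true)   # total variance
    ss_res = sum((y - p) ** 2                        # residual variance
                 for y, p in zip(y_true, y_pred))
    return 1.0 - ss_res / ss_tot

# Perfect predictions give R² = 1; predicting the mean gives R² = 0.
r_perfect = r_squared([1.0, 2.0, 3.0, 4.0], [1.0, 2.0, 3.0, 4.0])
r_mean = r_squared([1.0, 2.0, 3.0, 4.0], [2.5, 2.5, 2.5, 2.5])
```

A high R² on the training samples alone says little about a surrogate's behavior elsewhere in the design space, which is one reason the validation question raised above remains open.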
On the other hand, using detailed models is very useful for accurate and credible studies. A holistic approach that may address these concerns while still working on detailed models is a current topic of interest. It is based on reducing the input data profiles rather than the model itself: the annual performances of a model are evaluated from a short simulation sequence of selected typical days instead of the complete 365-day input data profiles. Thus, instead of simplifying the model, short sequences are run to reduce the computational cost of a fully dynamic simulation. Figure I-7 illustrates the different simulation approaches adopted in the BPS domain and their relation to the complexity of the model.
Figure I-7. The relation between the complexity of the case study and the type of time sequence used for simulations.
Table I-1 shows the main advantages and disadvantages of these approaches. As mentioned above, reduced-order models such as modal analysis, RC models and metamodels are derived from the numerical model of the case study and are simulated over complete annual profiles. On the other hand, reduced simulation sequences are derived from the input data profiles and fed directly to the complex models for dynamic simulation.
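The reduced-sequence principle can be illustrated by a weighted extrapolation: each typical day stands for a group of similar days, and an annual criterion is estimated as the weighted sum of the results obtained over the short sequence. The daily results and day weights below are purely hypothetical numbers, not taken from any study in this chapter:

```python
# Hypothetical output of a reduced simulation: daily energy use (kWh)
# for 4 typical days, each representing a group of similar days.
daily_results = [55.0, 40.0, 18.0, 30.0]
day_weights = [90, 95, 100, 80]   # number of days each typical day represents

assert sum(day_weights) == 365    # the weights must cover the whole year

# Extrapolate the annual criterion from the short sequence
annual_energy = sum(w * r for w, r in zip(day_weights, daily_results))
```

The quality of the estimate thus depends entirely on how the typical days and their weights are chosen, which is the subject of the selection approaches reviewed below.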


Model study by short sequence

The literature contains various approaches for selecting a representative set of historical periods. As shown in Figure I-8, the process starts from the original annual data and ends with a short sequence that is later used for model testing or optimization. In between, the reduction approach applies day-selection algorithms, or proceeds by continuous testing, to generate a sequence that reproduces the annual performance criteria once the results of the reduced simulation are extrapolated. These approaches can be grouped into three main categories: Heuristic Approaches, Iterative Approaches and Grouping Algorithms.

Heuristic Approaches

Heuristic approaches are practical methods that directly select a set of typical days, strongly influenced by the personal expertise or experience of the developer. The selection is quick but not guaranteed to be optimal (Figure I-9). In their study, Belderbos et al. [74] selected the day containing the minimum demand level of the year, the day containing the maximum demand level, and the day with the largest demand spread within 24 hours. Haller et al. [75] defined short-term fluctuation patterns represented by 13 days: for each of the four seasons, three characteristic days covering low, medium and high renewable energy supply regimes, plus an additional peak-time day representing high demand and low renewable energy supply. Fripp et al. [76] optimized, within investment periods, based on 12 days of sampled data, two for each even-numbered month: one day corresponds to the conditions that occurred on the peak-load day of the same month, and the second is a randomly selected day from that month. Hart et al. [77] reduced the data size for energy generation from variable renewables by selecting eight specific days containing hours with extreme meteorological and load events, plus 20 random days to characterize typical system behavior. Weights for each day were assigned by least squares to best match the annual load, wind speed and irradiance distributions.
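A rule in the spirit of Belderbos et al.'s min/max/spread selection can be sketched directly on an hourly demand series; the synthetic sinusoidal demand below is an illustrative stand-in for real load data, not data from the cited study:

```python
import math

# Synthetic hourly demand for one year: a seasonal swing plus a daily swing
demand = [100.0
          + 40.0 * math.sin(2 * math.pi * h / 8760)        # seasonal term
          + 15.0 * math.sin(2 * math.pi * (h % 24) / 24)   # daily term
          for h in range(8760)]
days = [demand[d * 24:(d + 1) * 24] for d in range(365)]

# Heuristic selection: minimum-demand day, maximum-demand day, and the
# day with the largest demand spread within 24 hours
day_min = min(range(365), key=lambda d: min(days[d]))
day_max = max(range(365), key=lambda d: max(days[d]))
day_spread = max(range(365), key=lambda d: max(days[d]) - min(days[d]))
typical_days = {day_min, day_max, day_spread}
```

On this synthetic series, the extreme days fall where the seasonal and daily peaks coincide; on measured data, the same three rules pick out the stress days that sizing studies care about, which is exactly why such heuristics are popular despite their lack of optimality guarantees.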

Table of contents:

General Introduction
Chapter I Concept of building performance evaluation and model study by reduced sequences
I.1. Introduction
I.2. Buildings sector on worldwide scale
I.3. Buildings sector on French scale
I.4. Concept of building performance simulation (BPS) and optimization
I.5. Model study by short sequence
I.5.1. Heuristic Approaches
I.5.2. Iterative Approaches
I.5.3. Grouping Algorithms
I.6. Extrapolation of results
I.7. Analysis and discussion
I.8. Conclusion
Chapter II Description of the Typical Short Sequence algorithm (TypSS)
II.1. Introduction
II.1.1. Objectives
II.1.2. Case study
II.2. TypSS: The process of the algorithm
II.2.1. Global Methodology
II.2.2. Parameters
II.2.3. Initialization
II.2.4. Period setting phase
II.2.5. Typical days’ enhancement phase
II.3. Conclusion
Chapter III Application of TypSS and sensitivity analysis on its input parameters
III.1. Introduction
III.2. Simulation results of the case study
III.2.1. Single individual I1
III.2.1.1. Algorithm output
III.2.1.2. Temporal profiles of the target criteria
III.2.1.3. Annual values and cumulative profiles of the target criteria
III.2.1.4. Comparison with other approaches
III.2.2. Multiple tested individuals
III.2.2.1. Simulation results
III.3. Sensitivity of the TYPSS algorithm to its main parameters
III.3.1. Length of the initial sequence
III.3.2. Length of the generated sequence
III.3.3. Number of tested individuals
III.3.4. Number and type of the target criteria
III.4. Conclusion
Chapter IV Multi-objective optimization using reduced sequences: Introducing OptiTypSS
IV.1. Introduction
IV.1.1. Objective
IV.1.2. Multi-objective optimization method
IV.1.3. Parametrizing the multi-objective optimization method
IV.2. Sequential multi-objective optimization methodology
IV.3. Adaptive multi-objective optimization methodology (OptiTypSS)
IV.4. Comparison of OptiTypSS with an adaptive metamodel based approach
IV.5. Conclusion
General conclusions and perspectives
References
APPENDICES
Appendix A. Typical Short Sequence (TypSS) algorithm
Appendix B. Fifty samples generated by LHS
Appendix C. Temporal profiles obtained by reduced sequences of different lengths
Appendix D. Periodic values obtained by reduced sequences of different lengths
Appendix E. Cumulative profiles obtained by reduced sequences of different lengths
Appendix F. CVRMSE influenced by the number of tested individuals
Appendix G. Two target criteria, three individuals Pareto front
Appendix H. Considering only three Pareto front individuals
Appendix I. Considering 10 individuals in OptiTy
