Optimization of Intensive Daylight Simulations: A Cloud-Based Methodology Using HPC (High Performance Computing)


What is the Aim

Challenge
Large-scale daylight simulations and representations on a single analysis grid are currently impossible with conventional software and computers. Computational limits related to hardware capacity, together with the maximum grid-node count allowed by daylight simulation software, prohibit daylight-coefficient-based calculations on large-scale analysis grids.

Aim
To develop a workflow that can perform these demanding calculations in an acceptable time.

What We Did 

Approach
The present paper utilizes a real aviation project to develop and test a new workflow.

Method
Radiance-related ray-tracing processes and matrix multiplications run on the cloud using High Performance Computing and custom scripts that facilitate and accelerate the process. The analysis grid is decomposed into manageable fragments; after the calculation is performed, the fragmented values are recomposed into a single list of results that is used to color the analysis grid mesh.
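The decompose-and-recompose step can be sketched as follows. This is a minimal illustration, not the authors' actual scripts: the function names, the chunk size, and the sensor-point layout (position plus direction vector, as in a Radiance points file) are assumptions.

```python
# Hypothetical sketch: fragment a sensor-point grid into manageable chunks
# and recompose per-fragment results back into one ordered list.

def split_grid(sensors, chunk_size):
    """Split a list of sensor points into fragments of at most chunk_size."""
    return [sensors[i:i + chunk_size] for i in range(0, len(sensors), chunk_size)]

def recompose(fragment_results):
    """Concatenate per-fragment result lists back into a single ordered list."""
    combined = []
    for frag in fragment_results:
        combined.extend(frag)
    return combined

# Ten sensors (x, y, z, dx, dy, dz), split into fragments of four:
sensors = [(float(x), 0.0, 0.8, 0.0, 0.0, 1.0) for x in range(10)]
fragments = split_grid(sensors, 4)   # 3 fragments: 4 + 4 + 2 sensors
# After each fragment is simulated independently, recompose preserves order:
ordered = recompose([[s[0] for s in f] for f in fragments])
print(ordered)  # [0.0, 1.0, 2.0, ..., 9.0]
```

Because each fragment is simulated independently, the recomposed list lines up node-for-node with the original grid mesh.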

What We Found

Results
The use of dedicated Unix HPC systems allowed the simulations to be hyper-threaded with almost 99 percent CPU utilization; in contrast, a Windows-based system running Daysim achieves only 26 to 50 percent utilization on a single thread. The ray-tracing processes themselves were sped up by nearly 32 times on a 16-core processor. Simply using a better processor and running the simulation on the cloud proved insufficient for a calculation of this magnitude; the calculation became possible only with the scripted subdivision and recomposition of the grid sensors.

The proposed workflow allows a single simulation to be performed on the whole analysis grid, an immediate benefit to the design team. It also significantly reduces the time required to calculate annual daylight metrics, eliminates errors that often arise from manual post-processing, and provides a more uniform, reliable result that can be represented on a colored grid plane.

Outsourcing intensive calculations to HPC servers allows project work to continue unimpeded on local machines while simulations are processed remotely. Software and hardware costs are also minimized, since all machines run open-source software on an adaptive infrastructure that can scale to the project at hand with little to no modification.

Deliverable
This study resulted in a report, originally published in PLEA 2018 proceedings.

What the Findings Mean

Application
This methodology is not limited to horizontal grids; vertical grids are equally possible using the same workflow as is. The same methodology can also be used for any illuminance-based annual metric, such as Useful Daylight Illuminance (UDI), Daylight Autonomy (DA), or continuous Daylight Autonomy (cDA). The population of the matrices and the illuminance calculations, the intensive part of the overall process, remain the same. The script only needs to be extended at its final stage, once the results have been extracted from Radiance and are being post-processed, to calculate the percentage of space or the percentage of time that meets the set threshold for each dynamic daylight metric.
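That final post-processing stage can be sketched as below: given the annual illuminance results (one list of hourly lux values per sensor), compute Daylight Autonomy per sensor and then the percentage of space that qualifies. The 300 lux threshold and 50 percent target are common illustrative values, not figures from the paper.

```python
def daylight_autonomy(illuminance, threshold_lux=300):
    """Percent of timesteps each sensor meets the illuminance threshold.
    illuminance: list of per-sensor lists of hourly lux values."""
    return [100.0 * sum(1 for lux in hours if lux >= threshold_lux) / len(hours)
            for hours in illuminance]

def percent_of_space(da_values, target_pct=50.0):
    """Percent of sensors whose DA meets the target percentage."""
    qualifying = sum(1 for da in da_values if da >= target_pct)
    return 100.0 * qualifying / len(da_values)

# Two sensors over four timesteps (lux):
grid = [[500, 400, 100, 350],   # meets 300 lux in 3 of 4 hours -> DA 75%
        [200, 100, 150, 320]]   # meets 300 lux in 1 of 4 hours -> DA 25%
da = daylight_autonomy(grid)    # [75.0, 25.0]
print(percent_of_space(da))     # 50.0
```

Swapping the comparison for a band (e.g., 100 to 3000 lux) would yield UDI from the same result arrays, since only this tail of the script changes.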

Future
Furthermore, the same methodology can be used for glare studies; instead of specifying grid points, it requires the generation of input rays for individual pixels. Another application that could benefit from this methodology is the evaluation of active shading systems. Active shading systems modify the nature or shape of the building (e.g., glazing properties) or of the obstruction elements (e.g., shading geometry), one of the two quantities the daylight-coefficient (DC) method assumes to be constant. The results must then be recomposed based on shading controls or a predefined schedule. The benefit of this methodology lies in the significantly reduced time required to run the multiple iterations before recomposing them.
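For the active-shading case, one way to recompose precomputed per-state results against a control schedule can be sketched as follows. This is a hypothetical illustration: the state names, schedule, and values are invented, and a real implementation would iterate over full annual matrices.

```python
def recompose_by_schedule(results_by_state, schedule):
    """Pick, for each timestep, the sensor results of the active shading state.
    results_by_state: dict mapping state name -> per-timestep sensor-value lists.
    schedule: list of state names, one entry per timestep."""
    return [results_by_state[state][t] for t, state in enumerate(schedule)]

# Two precomputed shading states over three timesteps, one sensor each (lux):
open_state   = [[900], [1100], [1000]]
closed_state = [[300], [400],  [350]]
schedule = ["open", "closed", "open"]   # e.g., blinds close at midday
annual = recompose_by_schedule({"open": open_state,
                                "closed": closed_state}, schedule)
print(annual)  # [[900], [400], [1000]]
```

Each shading state needs only one full DC simulation; the per-timestep selection afterwards is cheap, which is where the claimed time saving for multiple iterations comes from.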

Acknowledgments

Team Members:
Mili Kyropoulou
Paul Ferrer
Sarith Subramaniam