Abstract:
This study applied explainable machine learning (XML), an emerging branch of machine learning (ML), to elucidate how ML models make predictions. Three tree-based regression models, Decision Tree (DT), Random Forest (RF), and Extreme Gradient Boosting (XGB), were used to predict the normalized mean (Cp,mean), fluctuating (Cp,rms), minimum (Cp,min), and maximum (Cp,max) external wind pressure coefficients of a low-rise building with fixed dimensions in urban-like settings for several wind incidence angles. Two types of XML were employed: an intrinsic explanation method, which relies on the DT structure itself to expose the inner workings of the model, and SHAP (SHapley Additive exPlanations), a post-hoc explanation technique applied in particular to the structurally complex XGB model. The intrinsic method proved incapable of explaining the deep tree structure of the DT, whereas SHAP provided valuable insights by revealing varying degrees of positive and negative contribution from certain geometric parameters, the wind incidence angle, and the density of the buildings surrounding the low-rise building. SHAP also illustrated the relationships between these factors and wind pressure, and its explanations agreed with generally accepted wind-engineering knowledge, supporting the physical plausibility of the ML models' predictions.
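To make the SHAP idea concrete: SHAP attributions are Shapley values from cooperative game theory, distributing the difference between a model's prediction and a baseline prediction fairly across the input features. Below is a minimal from-scratch sketch of the exact Shapley computation by coalition enumeration (in practice the shap library's TreeExplainer computes these efficiently for tree ensembles such as XGB); the toy linear "pressure" model, its inputs, and the zero baseline are hypothetical stand-ins, not the models or data used in this study.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley attributions for model f at point x.

    Features outside a coalition S are replaced by their baseline
    values; phi[i] is the weighted average marginal contribution of
    feature i over all coalitions of the remaining features.
    """
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            # Shapley kernel weight |S|! (n - |S| - 1)! / n!
            weight = factorial(size) * factorial(n - size - 1) / factorial(n)
            for S in combinations(others, size):
                with_i = [x[j] if (j in S or j == i) else baseline[j]
                          for j in range(n)]
                without_i = [x[j] if j in S else baseline[j]
                             for j in range(n)]
                phi[i] += weight * (f(with_i) - f(without_i))
    return phi

# Hypothetical toy model: a linear surrogate with three features
model = lambda z: 0.5 * z[0] - 0.3 * z[1] + 0.1 * z[2]
x = [1.0, 2.0, 3.0]       # hypothetical input point
base = [0.0, 0.0, 0.0]    # hypothetical baseline
phi = shapley_values(model, x, base)
```

For a linear model the attributions reduce to coefficient times feature deviation from baseline, and they satisfy the efficiency property: the values in `phi` sum exactly to `model(x) - model(base)`. It is this signed, per-feature decomposition that lets SHAP report positive and negative contributions of parameters such as incidence angle or surrounding-building density to a predicted pressure coefficient.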