Meta-learning analysis of deep neural network architectures on diverse numeric datasets via geometric complexity descriptors
| dc.authorid | 0000-0003-2960-8725 | |
| dc.authorid | 0000-0002-8344-1180 | |
| dc.contributor.author | Bulut, Faruk | |
| dc.contributor.author | Dönmez, İknur | |
| dc.date.accessioned | 2026-04-15T11:51:31Z | |
| dc.date.available | 2026-04-15T11:51:31Z | |
| dc.date.issued | 2026 | |
| dc.department | Faculties, Faculty of Engineering and Natural Sciences, Department of Computer Engineering | |
| dc.description.abstract | Meta-learning techniques aim to predict the most suitable learning algorithm for a given dataset based on its intrinsic structural characteristics. These techniques provide a robust framework for understanding algorithmic behavior across diverse data distributions and attributes. Although state-of-the-art deep models (CNNs and transformers) are widely applied in various machine learning tasks, their use on numerical datasets remains underexplored due to the complexity of their internal structures. This study aims not only to predict the performance of two black-box deep learning models on static datasets but also to conduct a behavioral analysis to identify which meta-features most strongly influence their outcomes. It remains unclear which specific attributes of a dataset positively or negatively affect the performance of these deep learning models. To bridge this gap, we constructed a meta-dataset consisting of 296 datasets, each characterized by 20 meta-features describing the dataset’s statistical, geometric, and structural properties. The analysis identifies which intrinsic dataset properties influence model accuracy, without relying on raw data or hyperparameter tuning. Results show that both models perform best on datasets with high feature discriminability, as captured by meta-features such as maximum feature efficiency, collective feature efficiency, and directional separability. In contrast, performance declines with increasing class boundary complexity and nonlinearity, reflected in features such as class separability measures and the linear classifier nonlinearity metric. While CNNs are more sensitive to local geometric complexity, transformers respond more strongly to global statistical measures such as mutual information and entropy, highlighting their distinct inductive biases. The proposed meta-model accurately predicts the performance of both architectures on unseen datasets (0.96 correlation coefficient, 0.019 MAE, and 0.025 RMSE for CNNs; 0.92 correlation coefficient, 0.027 MAE, and 0.036 RMSE for transformers), enabling performance estimation without costly training. These findings emphasize the importance of aligning model architecture with dataset geometry and structure. Additionally, the framework supports more interpretable, efficient, and sustainable deep learning model selection in structured data settings. | |
| dc.description.sponsorship | The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: open access publication was funded by the University of Essex. | |
| dc.identifier.citation | Bulut, F., & Dönmez, İ. (2026). Meta-learning analysis of deep neural network architectures on diverse numeric datasets via geometric complexity descriptors. International Journal of Intelligent Systems, 2026(1), pp. 1-18. https://doi.org/10.1155/int/8573962 | |
| dc.identifier.doi | 10.1155/int/8573962 | |
| dc.identifier.endpage | 18 | |
| dc.identifier.issn | 0884-8173 | |
| dc.identifier.issn | 1098-111X | |
| dc.identifier.issue | 1 | |
| dc.identifier.scopus | 2-s2.0-105033338704 | |
| dc.identifier.scopusquality | Q1 | |
| dc.identifier.startpage | 1 | |
| dc.identifier.uri | https://doi.org/10.1155/int/8573962 | |
| dc.identifier.uri | https://hdl.handle.net/20.500.13055/1411 | |
| dc.identifier.volume | 2026 | |
| dc.identifier.wos | WOS:001718625200001 | |
| dc.identifier.wosquality | Q2 | |
| dc.indekslendigikaynak | Web of Science | |
| dc.indekslendigikaynak | Scopus | |
| dc.indekslendigikaynak | PubMed | |
| dc.indekslendigikaynak.other | SCI-E - Science Citation Index Expanded | |
| dc.institutionauthor | Dönmez, İknur | |
| dc.institutionauthorid | 0000-0002-8344-1180 | |
| dc.language.iso | en | |
| dc.publisher | Wiley | |
| dc.relation.ispartof | International Journal of Intelligent Systems | |
| dc.relation.publicationcategory | Article - International Refereed Journal - Institutional Faculty Member | |
| dc.rights | info:eu-repo/semantics/openAccess | |
| dc.subject | Accuracy Prediction | |
| dc.subject | CNN | |
| dc.subject | Complexity Measures | |
| dc.subject | Dataset Geometry | |
| dc.subject | Meta-Attributes | |
| dc.subject | Model Selection | |
| dc.subject | Transformer | |
| dc.title | Meta-learning analysis of deep neural network architectures on diverse numeric datasets via geometric complexity descriptors | |
| dc.type | Article | |
| dspace.entity.type | Publication | |