Meta-learning analysis of deep neural network architectures on diverse numeric datasets via geometric complexity descriptors

dc.authorid: 0000-0003-2960-8725
dc.authorid: 0000-0002-8344-1180
dc.contributor.author: Bulut, Faruk
dc.contributor.author: Dönmez, İknur
dc.date.accessioned: 2026-04-15T11:51:31Z
dc.date.available: 2026-04-15T11:51:31Z
dc.date.issued: 2026
dc.department: Faculties, Faculty of Engineering and Natural Sciences, Department of Computer Engineering
dc.description.abstract: Meta-learning techniques aim to predict the most suitable learning algorithm for a given dataset based on its intrinsic structural characteristics. These techniques provide a robust framework for understanding algorithmic behavior across diverse data distributions and attributes. Although state-of-the-art models such as CNNs and transformers are widely applied in various machine learning tasks, their use on numerical datasets remains underexplored due to the complexity of their internal structures. This study aims not only to predict the performance of two black-box deep learning models on static datasets but also to conduct a behavioral analysis to identify which meta-features most strongly influence their outcomes. It remains unclear which specific attributes of a dataset positively or negatively affect the performance of these deep learning models. To bridge this gap, we constructed a meta-dataset consisting of 296 datasets, each characterized by 20 meta-features describing its statistical, geometric, and structural properties. The analysis identifies which intrinsic dataset properties influence model accuracy, without relying on raw data or hyperparameter tuning. Results show that both models perform best on datasets with high feature discriminability, as captured by meta-features such as maximum feature efficiency, collective feature efficiency, and directional separability. In contrast, performance declines with increasing class boundary complexity and nonlinearity, reflected in features such as class separability measures and the linear classifier nonlinearity metric. While CNNs are more sensitive to local geometric complexity, transformers respond more strongly to global statistical measures such as mutual information and entropy, highlighting their distinct inductive biases.
The proposed meta-model accurately predicts the performance of both architectures on unseen datasets (0.96 correlation coefficient, 0.019 MAE, and 0.025 RMSE for CNNs; 0.92 correlation coefficient, 0.027 MAE, and 0.036 RMSE for transformers), enabling performance estimation without costly training. These findings emphasize the importance of aligning model architecture with dataset geometry and structure. Additionally, the framework supports more interpretable, efficient, and sustainable deep learning model selection in structured data settings.
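The pipeline described in the abstract — characterize each dataset by meta-features, then fit a meta-model that maps those descriptors to a deep model's accuracy — can be sketched as follows. This is an illustrative toy, not the paper's implementation: the meta-features here (dimensionality, class entropy, mean feature-target correlation) and the synthetic "observed accuracy" target are stand-ins for the 20 complexity descriptors and the measured CNN/transformer accuracies.

```python
# Toy meta-learning sketch: meta-features of many datasets -> regressor
# predicting a model's accuracy, evaluated with r, MAE, and RMSE as in the paper.
import numpy as np

rng = np.random.default_rng(0)

def meta_features(X, y):
    """Toy descriptors: dimensionality, class entropy, mean |feature-target correlation|."""
    counts = np.bincount(y)
    p = counts[counts > 0] / len(y)
    entropy = -(p * np.log2(p)).sum()
    corr = np.mean([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
    return np.array([X.shape[1], entropy, corr])

# Build a synthetic meta-dataset: 60 classification datasets, each paired with
# a stand-in "observed accuracy" that grows with feature discriminability.
M, acc = [], []
for _ in range(60):
    d = int(rng.integers(3, 10))
    X = rng.normal(size=(200, d))
    y = (X @ rng.normal(size=d) + rng.normal(scale=rng.uniform(0.5, 3.0), size=200) > 0).astype(int)
    mf = meta_features(X, y)
    M.append(mf)
    acc.append(0.5 + 0.45 * mf[2])  # hypothetical recorded model accuracy

M, acc = np.array(M), np.array(acc)
A = np.column_stack([M, np.ones(len(M))])        # linear meta-model with bias term
w, *_ = np.linalg.lstsq(A[:45], acc[:45], rcond=None)
pred = A[45:] @ w                                # predict accuracy on unseen datasets

mae = np.mean(np.abs(pred - acc[45:]))
rmse = np.sqrt(np.mean((pred - acc[45:]) ** 2))
r = np.corrcoef(pred, acc[45:])[0, 1]
print(f"r={r:.2f}  MAE={mae:.3f}  RMSE={rmse:.3f}")
```

Once the meta-model is fitted, estimating an architecture's performance on a new dataset costs only the meta-feature computation plus one regression evaluation, which is the source of the "performance estimation without costly training" claim.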
dc.description.sponsorship: The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: the open access publication was funded by the University of Essex.
dc.identifier.citation: Bulut, F., & Dönmez, İ. (2026). Meta-learning analysis of deep neural network architectures on diverse numeric datasets via geometric complexity descriptors. International Journal of Intelligent Systems, 2026(1), pp. 1-18. https://doi.org/10.1155/int/8573962
dc.identifier.doi: 10.1155/int/8573962
dc.identifier.endpage: 18
dc.identifier.issn: 0884-8173
dc.identifier.issn: 1098-111X
dc.identifier.issue: 1
dc.identifier.scopus: 2-s2.0-105033338704
dc.identifier.scopusquality: Q1
dc.identifier.startpage: 1
dc.identifier.uri: https://doi.org/10.1155/int/8573962
dc.identifier.uri: https://hdl.handle.net/20.500.13055/1411
dc.identifier.volume: 2026
dc.identifier.wos: WOS:001718625200001
dc.identifier.wosquality: Q2
dc.indekslendigikaynak: Web of Science
dc.indekslendigikaynak: Scopus
dc.indekslendigikaynak: PubMed
dc.indekslendigikaynak.other: SCI-E - Science Citation Index Expanded
dc.institutionauthor: Dönmez, İknur
dc.institutionauthorid: 0000-0002-8344-1180
dc.language.iso: en
dc.publisher: Wiley
dc.relation.ispartof: International Journal of Intelligent Systems
dc.relation.publicationcategory: Article - International Peer-Reviewed Journal - Institutional Faculty Member
dc.rights: info:eu-repo/semantics/openAccess
dc.subject: Accuracy Prediction
dc.subject: CNN
dc.subject: Complexity Measures
dc.subject: Dataset Geometry
dc.subject: Meta-Attributes
dc.subject: Model Selection
dc.subject: Transformer
dc.title: Meta-learning analysis of deep neural network architectures on diverse numeric datasets via geometric complexity descriptors
dc.type: Article
dspace.entity.type: Publication

Files

Original bundle
Name: Tam Metin / Full Text.pdf
Size: 1.82 MB
Format: Adobe Portable Document Format

License bundle (Closed Access)
Name: license.txt
Size: 1.17 KB
Format: Item-specific license agreed upon to submission