Archive logo
  • Collections
  • DSpace Content
  • Researchers
  • Projects
  • Units
  • Analytics
  • Request/Question

Browsing by Author "Bilmez, Yakuphan"

Now showing 1 - 3 of 3
  • Publication
    A comparative study of deep learning models for automated liver and tumor segmentation in 2d contrast-enhanced MRI images
    (IEEE, 2025) Tokatlı, Nazlı; Bilmez, Yakuphan; Bayram, Mücahit; Bayır, Beyzanur; Özalkan, Helin; Tekin, Zeynep; Örmeci, Necati; Altun, Halis
This paper presents a comprehensive investigation into deep learning techniques for the automated segmentation of the liver and tumors from 2D abdominal contrast-enhanced Magnetic Resonance Imaging (MRI) slices. Addressing a significant challenge in medical image analysis, our study leverages the public ATLAS dataset [1], using a selection of 60 3D abdominal MRI scans, from which we extracted approximately 3,750 2D slices for model training and evaluation. The core objective was the precise identification and delineation of both the liver organ and any intrahepatic lesions. A comparative analysis was conducted on three U-Net-based architectures: the standard Attention U-Net model incorporating EfficientNet-b3 and CBAM but without Focal Loss, the Attention U-Net model with integrated Focal Loss, and the ResNet34-based U-Net model. To optimize performance, we explored the efficacy of different loss functions, namely Dice Loss and a hybrid of Dice Loss with Focal Loss. Our findings are promising: among the evaluated models, the ResNet34-based U-Net demonstrated the highest performance, with a Dice score of 91.36% and an IoU score of 89.52%. It was followed by the Attention U-Net with Focal Loss, which achieved 86.41% Dice and 81.61% IoU scores, and the standard Attention U-Net, which obtained 85.93% Dice and 81.19% IoU scores. These results underscore the significant potential of our 2D-based methodology to enhance the precision and efficiency of liver and tumor detection from abdominal scans, offering a valuable tool to support clinicians in early diagnosis and to alleviate their workload.
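The hybrid Dice/Focal objective mentioned in the abstract above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the focusing parameter gamma and the mixing weight alpha are illustrative assumptions not given in the abstract.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss for a binary mask: 1 - 2|P∩T| / (|P|+|T|)."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def focal_loss(pred, target, gamma=2.0, eps=1e-6):
    """Binary focal loss: down-weights easy pixels via (1 - p_t)^gamma."""
    pred = np.clip(pred, eps, 1.0 - eps)
    p_t = np.where(target == 1, pred, 1.0 - pred)
    return float(np.mean(-((1.0 - p_t) ** gamma) * np.log(p_t)))

def hybrid_loss(pred, target, alpha=0.5):
    """Weighted sum of the Dice and focal terms (alpha is an assumed weight)."""
    return alpha * dice_loss(pred, target) + (1 - alpha) * focal_loss(pred, target)

# Toy 2x2 "segmentation": a confident correct prediction drives both terms toward 0.
target = np.array([[1.0, 0.0], [0.0, 1.0]])
perfect = np.array([[0.999, 0.001], [0.001, 0.999]])
poor = np.array([[0.6, 0.4], [0.4, 0.6]])
assert hybrid_loss(perfect, target) < hybrid_loss(poor, target)
```

The Dice term rewards region overlap while the focal term concentrates the pixel-wise penalty on hard examples, which is why such hybrids are popular for small lesions.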
  • Publication
    An AI-powered mobile application for aroid identification and interactive learning: Enhancing pharmacognosy education through deep learning and NLP
    (IEEE, 2025) Tokatlı, Nazlı; Bilmez, Yakuphan; Kılıç, Yusuf; Alpınar, Abdülkerim
Aroid plants (Araceae family), recognized for their distinct inflorescence, possess significant botanical, pharmaceutical, and practical importance due to their content of both beneficial compounds and toxins such as calcium oxalate crystals. Accurate identification of these species is particularly crucial in pharmacy education; however, morphological similarities among Aroid species often lead to confusion among students. This paper presents a deep learning-based mobile application designed to support both plant identification and interactive learning. The solution leverages EfficientNet and Convolutional Neural Network (CNN) architectures, achieving up to 96% accuracy in classifying Aroid species. The visual classification model, trained on a comprehensive dataset, is deployed via a RESTful API and integrated within a Flutter-based mobile application. In addition, the app incorporates a Natural Language Processing (NLP)-powered chatbot to address user inquiries regarding plant characteristics and care. While technical evaluations demonstrate robust model performance, a comprehensive user evaluation aimed at assessing the system's educational value, usability, and chatbot interaction is planned as future work. This study underscores the potential of AI-driven mobile solutions in advancing pharmacognosy education, with future developments aimed at expanding the app's botanical scope and enhancing user engagement based on forthcoming survey results.
  • Publication
    MelanoTech: Development of a mobile application infrastructure for melanoma cancer diagnosis based on artificial intelligence technologies
    (IEEE, 2024) Tokatlı, Nazlı; Bilmez, Yakuphan; Göztepeli, Gürkan; Güler, Muhammed; Karan, Furkan; Altun, Halis
This preliminary work introduces MelanoTech, an mHealth application designed and implemented to offer a user-friendly and intuitive interface for the early diagnosis of melanoma, a type of skin cancer with a high fatality rate [1]. The application demonstrates promising performance in segmentation and classification tasks by utilizing deep learning models with Generative Adversarial Networks (GANs) for data augmentation. MelanoTech achieves an overall accuracy of 92%, with a segmentation model accuracy of 93% and a lesion detection accuracy of 90%. Notably, incorporating GAN-based data augmentation resulted in a 5% improvement in the model's performance. These findings highlight the potential of MelanoTech as a dependable tool for improving the early diagnosis of melanoma and decreasing the workload of physicians in Turkish public hospitals.
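Segmentation accuracies like those reported above are commonly scored with overlap metrics computed on binarized masks. The sketch below shows standard Dice and IoU computations on toy masks; it is an illustration of the usual definitions, not the authors' evaluation code.

```python
import numpy as np

def dice_score(pred, target):
    """Dice coefficient between two binary masks: 2|P∩T| / (|P|+|T|)."""
    inter = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 2.0 * inter / denom if denom else 1.0

def iou_score(pred, target):
    """Intersection over Union (Jaccard index) between two binary masks."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0

# Toy 2x2 masks: one overlapping pixel out of two predicted / one labeled.
pred = np.array([[1, 1], [0, 0]], dtype=bool)
target = np.array([[1, 0], [0, 0]], dtype=bool)
assert dice_score(pred, target) == 2 / 3   # 2*1 / (2 + 1)
assert iou_score(pred, target) == 0.5      # 1 / 2
```

Dice weights the intersection twice, so it is always at least as large as IoU on the same pair of masks; both reach 1.0 only for a perfect match.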

| İstanbul Sağlık ve Teknoloji Üniversitesi | Library | Open Access Policy | Guide | OAI-PMH |

This site is protected under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.


İstanbul Sağlık ve Teknoloji Üniversitesi, İstanbul, TÜRKİYE
If you notice any errors in the content, please report them to us

DSpace 7.6.1, Powered by İdeal DSpace

DSpace software copyright © 2002-2026 LYRASIS

  • Cookie Settings
  • Privacy Policy
  • End User Agreement
  • Send Feedback