AUTHOR=Soni Piyush Kumar, Rambola Radhakrishna TITLE=TEGAA: transformer-enhanced graph aspect analyzer with semantic contrastive learning for implicit aspect detection JOURNAL=Frontiers in Artificial Intelligence VOLUME=Volume 9 - 2026 YEAR=2026 URL=https://www.frontiersin.org/journals/artificial-intelligence/articles/10.3389/frai.2026.1666674 DOI=10.3389/frai.2026.1666674 ISSN=2624-8212 ABSTRACT=Implicit aspect detection aims to identify aspect categories that are not explicitly mentioned in text, but existing models struggle with four persistent challenges: aspect ambiguity, where multiple latent aspects are implied by the same expression; data imbalance and sparsity of implicit cues; contextual noise and syntactic variability in unstructured user reviews; and aspect drift, where the relevance of implicit cues changes across sentences or domains. To address these issues, this paper proposes the Transformer-Enhanced Graph Aspect Analyzer (TEGAA), a unified framework that tightly integrates dynamic expert routing, semantic representation refinement, and hierarchical graph reasoning. First, a Dynamic Expert Transformer (DET) equipped with a Dynamic Adaptive Expert Engine (DAEE) mitigates syntactic complexity and contextual noise by dynamically routing tokens to specialized expert sub-networks based on contextual and syntactic–semantic cues, enabling robust feature extraction for ambiguous implicit expressions. Second, Semantic Contrastive Learning (SCL) directly addresses data imbalance and weak implicit signals by enforcing semantic coherence among contextually related samples while increasing separability from irrelevant ones, thereby improving discriminability of sparse implicit aspect cues. Third, implicit aspect ambiguity and aspect drift are handled through a Graph-Enhanced Hierarchical Aspect Detector (GE-HAD), which models word- and sentence-level dependencies via context-aware graph attention.
The incorporation of Attention Sinks prevents dominant but irrelevant tokens from overshadowing subtle implicit cues, while Pyramid Pooling aggregates multi-scale contextual information to stabilize aspect inference across varying linguistic scopes. Finally, an iterative feedback loop aligns graph-level reasoning with transformer-level expert routing, enabling adaptive refinement of aspect representations. Experiments on three benchmark datasets—Mobile Reviews, SemEval14, and Sentihood—demonstrate that TEGAA consistently outperforms state-of-the-art methods, achieving F1-scores above 0.88, precision above 0.89, recall above 0.87, accuracy exceeding 89%, and AUC values above 0.89. These results confirm TEGAA’s effectiveness in resolving implicit aspect ambiguity, handling noisy and imbalanced data, and maintaining robust performance across domains.
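The semantic contrastive objective the abstract describes — pulling contextually related samples together while pushing irrelevant ones apart — is a standard supervised contrastive formulation. The sketch below is an illustrative, dependency-free version of that idea, not the paper's exact SCL loss: the `temperature` value and the use of cosine similarity are assumptions for the example.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    """InfoNCE-style loss averaged over anchor-positive pairs: for each
    anchor, samples sharing its label are positives; all other samples
    in the batch act as negatives in the softmax denominator."""
    n = len(embeddings)
    total, count = 0.0, 0
    for i in range(n):
        others = [j for j in range(n) if j != i]
        # Temperature-scaled exponentiated similarities to every other sample.
        sims = [math.exp(cosine(embeddings[i], embeddings[j]) / temperature)
                for j in others]
        denom = sum(sims)
        for k, j in enumerate(others):
            if labels[j] == labels[i]:
                total += -math.log(sims[k] / denom)
                count += 1
    return total / max(count, 1)

# Toy example: two well-separated clusters with hypothetical aspect labels.
emb = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]]
loss_coherent = supervised_contrastive_loss(emb, [0, 0, 1, 1])
loss_scrambled = supervised_contrastive_loss(emb, [0, 1, 0, 1])
```

Label-coherent clusters yield a much lower loss than scrambled labels, which is exactly the gradient signal that sharpens sparse implicit-aspect cues: minimizing the loss increases within-aspect similarity and cross-aspect separability.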