Research Article | Open Access
Volume 13 | Issue 4 | Year 2026 | Article Id. IJECE-V13I4P109 | DOI : https://doi.org/10.14445/23488549/IJECE-V13I4P109

EvoCausal-PhysioNet: Lifelong Physiological Signal Recognition with Continual Causal Graph-Transformer, Neural ODE Memory and Counterfactual Adaptation to Multiple Causal Gap


T L Deepika Roy, Nulaka Srinivasu

Received: 08 Jan 2026 | Revised: 10 Feb 2026 | Accepted: 10 Mar 2026 | Published: 30 Apr 2026

Citation :

T L Deepika Roy, Nulaka Srinivasu, "EvoCausal-PhysioNet: Lifelong Physiological Signal Recognition with Continual Causal Graph-Transformer, Neural ODE Memory and Counterfactual Adaptation to Multiple Causal Gap," International Journal of Electronics and Communication Engineering, vol. 13, no. 4, pp. 119-131, 2026. Crossref, https://doi.org/10.14445/23488549/IJECE-V13I4P109

Abstract

Research on affective intelligence calls for computational models that can capture how emotional states evolve dynamically and individually over time. Conventional multimodal emotion recognition methods, although useful in limited contexts, fall short in (i) modeling directed interdependence between physiological subsystems, (ii) retaining previously acquired emotional knowledge across sessions, and (iii) generalizing under changing affective and sensing conditions. To address these problems, this paper proposes EvoCausal-PhysioNet, a continual multimodal emotion recognition framework that combines causal graph reasoning, continuous-time neural dynamics, and adaptive learning. The model represents several physiological and behavioral modalities, including Electroencephalography (EEG), Electrodermal Activity (EDA), facial dynamics, eye gaze, pupil dilation, and cursor motion, as a time-varying directed graph whose latent representations evolve continuously through a Neural Ordinary Differential Equation (Neural-ODE)-based memory. A graph-transformer with self-adaptive attention learns inter- and intra-modal interactions, while the Neural-ODE memory evolves smoothly over time and mitigates catastrophic inter-session forgetting. To make latent emotional trajectories robust to missing or shifting modalities, a counterfactual adaptation module estimates alternative latent emotional paths under hypothetical sensing conditions, improving cross-subject and cross-session generalization. The framework is evaluated on the AFFEC multimodal dataset, which integrates concurrent physiological measurements, behavioral information, and personality factors.
Experimental results show that EvoCausal-PhysioNet achieves higher accuracy (87.3%), macro-F1 (84.9%), and Cohen's κ (0.84) than CNN-, RNN-, GCN-, and Transformer-based baselines. Moreover, causal attention visualizations provide interpretable information on the relative roles of modalities and personality traits, offering insights into the neuro-psychological processes underlying emotion dynamics. Altogether, EvoCausal-PhysioNet introduces a memory-based, adaptive system for continual emotion recognition that bridges affective computing and explainable AI.
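The Neural-ODE memory described above can be illustrated with a minimal sketch: a latent state that drifts in continuous time between irregularly sampled multimodal observations, then absorbs each new observation. This is not the authors' implementation; the dynamics function, state dimension, and update gain below are all assumptions chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8                       # latent memory dimension (assumed)
W1 = rng.normal(scale=0.1, size=(DIM, DIM))
W2 = rng.normal(scale=0.1, size=(DIM, DIM))

def f(h):
    """Latent dynamics dh/dt = f(h): a tiny two-layer MLP (illustrative)."""
    return np.tanh(h @ W1) @ W2

def evolve(h, dt, n_steps=20):
    """Integrate the ODE forward by dt with fixed-step Euler."""
    step = dt / n_steps
    for _ in range(n_steps):
        h = h + step * f(h)
    return h

# Irregularly sampled multimodal feature vectors (e.g. EEG/EDA windows).
times = [0.0, 0.3, 1.1, 1.15]             # seconds; note the uneven gaps
obs = [rng.normal(size=DIM) for _ in times]

h = np.zeros(DIM)                          # initial memory state
t_prev = 0.0
for t, x in zip(times, obs):
    h = evolve(h, t - t_prev)              # continuous-time drift between samples
    h = h + 0.1 * (x - h)                  # simple observation update (assumed gain)
    t_prev = t
```

Because the memory is integrated over the actual elapsed time between samples, the same machinery handles regular and irregular sampling alike, which is the property the Neural-ODE memory exploits for smooth inter-session evolution.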

Keywords

EEG, EDA, ODE, CNN, RNN.

References

  1. Sander Koelstra et al., “DEAP: A Database for Emotion Analysis; Using Physiological Signals,” IEEE Transactions on Affective Computing, vol. 3, no. 1, pp. 18-31, 2011.
  2. Mohammad Soleymani et al., “A Multimodal Database for Affect Recognition and Implicit Tagging,” IEEE Transactions on Affective Computing, vol. 3, no. 1, pp. 42-55, 2011.
  3. Juan Abdon Miranda-Correa et al., “AMIGOS: A Dataset for Affect, Personality and Mood Research on Individuals and Groups,” IEEE Transactions on Affective Computing, vol. 12, no. 2, pp. 479-493, 2018.
  4. Stamos Katsigiannis, and Naeem Ramzan, “DREAMER: A Database for Emotion Recognition through EEG and ECG Signals From Wireless Low-cost Off-the-Shelf Devices,” IEEE Journal of Biomedical and Health Informatics, vol. 22, no. 1, pp. 98-107, 2017.
  5. Wei-Bang Jiang et al., “SEED-VII: A Multimodal Dataset of Six Basic Emotions with Continuous Labels for Emotion Recognition,” IEEE Transactions on Affective Computing, vol. 16, no. 2, pp. 969-985, 2024.
  6. Minghao Xiao et al., “MEEG and AT-DGNN: Improving EEG Emotion Recognition with Music Introducing and Graph-based Learning,” 2024 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Lisbon, Portugal, pp. 4201-4208, 2024.
  7. Tengfei Song et al., “EEG Emotion Recognition using Dynamical Graph Convolutional Neural Networks,” IEEE Transactions on Affective Computing, vol. 11, no. 3, pp. 532-541, 2018.
  8. Weifeng Li, Wenbin Shi, and Chien-Hung Yeh, “Spatiotemporal Graph Convolutional Networks for EEG-Based Emotion Recognition,” 2024 IEEE International Conference on Signal, Information and Data Processing (ICSIDP), Zhuhai, China, pp. 1-6, 2024.
  9. Jingcong Li et al., “Cross-Subject EEG Emotion Recognition with Self-Organized Graph Neural Network,” Frontiers in Neuroscience, vol. 15, pp. 1-10, 2021.
  10. Zhuoqing Chang et al., “Time-Aware Neural Ordinary Differential Equations for Incomplete Time Series Modeling,” The Journal of Supercomputing, vol. 79, pp. 18699-18727, 2023.
  11. YongKyung Oh et al., “Comprehensive Review of Neural Differential Equations for Time Series Analysis,” arXiv preprint, pp. 1-11, 2025.
  12. Tongjie Pan et al., “Online Multi-Hypergraph Fusion Learning for Cross-Subject Emotion Recognition,” Information Fusion, vol. 108, 2024.
  13. Jinhao Zhang et al., “Subject-Independent Emotion Recognition based on EEG Frequency Band Features and Self-Adaptive Graph Construction,” Brain Sciences, vol. 14, no. 3, pp. 1-19, 2024.
  14. Weitong Sun et al., “MSDSANet: Multimodal Emotion Recognition based on Multi-Stream Network and Dual-Scale Attention Network Feature Representation,” Sensors, vol. 25, no. 7, pp. 1-23, 2025.
  15. Hua Jin et al., “Multimodal Emotion Recognition in Conversations Using Transformer and Graph Neural Networks,” Applied Sciences, vol. 15, no. 22, pp. 1-16, 2025.
  16. A.S. Agrawal, “Bouncing Scenario and Cosmic Dynamics in Modified Theories of Gravity,” arXiv preprint, pp. 1-150, 2024.
  17. Farshad Safavi, Kulin Patel, and Ramana Vinjamuri, “Facial Expression Recognition with an Efficient Mix Transformer for Affective Human-Robot Interaction,” IEEE Transactions on Affective Computing, vol. 16, no. 4, pp. 3081-3094, 2025.
  18. Haoran Yang, “Graph Contrastive Learning and Its Applications in Recommendation Systems,” UTS Digital Thesis Collection, pp. 1-142, 2024.
  19. Ricky T.Q. Chen et al., “Neural Ordinary Differential Equations,” Advances in Neural Information Processing Systems, pp. 1-18, 2018.
  20. Yulia Rubanova, Ricky T.Q. Chen, and David Duvenaud, “Latent Ordinary Differential Equations for Irregularly-Sampled Time Series,” Advances in Neural Information Processing Systems, pp. 1-21, 2019.
  21. Meisam J. Sekiavandi et al., “Advancing Face-to-Face Emotion Communication: A Multimodal Dataset (AFFEC),” arXiv preprint, pp. 1-18, 2025.