Andreas Triantafyllopoulos
University of Augsburg
Dawn of the transformer era in speech emotion recognition: closing the valence gap
J Wagner, A Triantafyllopoulos, H Wierstorf, M Schmitt, F Burkhardt, ...
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023
Towards robust speech emotion recognition using deep residual networks for speech enhancement
A Triantafyllopoulos, G Keren, J Wagner, I Steiner, B Schuller
Deep speaker conditioning for speech emotion recognition
A Triantafyllopoulos, S Liu, BW Schuller
2021 IEEE international conference on multimedia and expo (ICME), 1-6, 2021
Probing speech emotion recognition transformers for linguistic knowledge
A Triantafyllopoulos, J Wagner, H Wierstorf, M Schmitt, U Reichel, ...
arXiv preprint arXiv:2204.00400, 2022
An evaluation of speech-based recognition of emotional and physiological markers of stress
A Baird, A Triantafyllopoulos, S Zänkert, S Ottl, L Christ, L Stappen, ...
Frontiers in Computer Science 3, 750284, 2021
Marvel: Multimodal extreme scale data analytics for smart cities environments
D Bajovic, A Bakhtiarnia, G Bravos, A Brutti, F Burkhardt, D Cauchi, ...
2021 International Balkan Conference on Communications and Networking …, 2021
A review of automatic recognition technology for bird vocalizations in the deep learning era
J Xie, Y Zhong, J Zhang, S Liu, C Ding, A Triantafyllopoulos
Ecological Informatics, 101927, 2022
The role of task and acoustic similarity in audio transfer learning: Insights from the speech emotion recognition case
A Triantafyllopoulos, BW Schuller
ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and …, 2021
An overview of affective speech synthesis and conversion in the deep learning era
A Triantafyllopoulos, BW Schuller, G İymen, M Sezgin, X He, Z Yang, ...
Proceedings of the IEEE, 2023
audEERING's approach to the one-minute-gradual emotion challenge
A Triantafyllopoulos, H Sagha, F Eyben, B Schuller
arXiv preprint arXiv:1805.01222, 2018
Multistage linguistic conditioning of convolutional layers for speech emotion recognition
A Triantafyllopoulos, U Reichel, S Liu, S Huber, F Eyben, BW Schuller
Frontiers in Computer Science 5, 1072479, 2023
Redundancy reduction twins network: A training framework for multi-output emotion regression
X Jing, M Song, A Triantafyllopoulos, Z Yang, BW Schuller
arXiv preprint arXiv:2206.09142, 2022
Personalised depression forecasting using mobile sensor data and ecological momentary assessment
A Kathan, M Harrer, L Küster, A Triantafyllopoulos, X He, M Milling, ...
Frontiers in Digital Health 4, 964582, 2022
A personalised approach to audiovisual humour recognition and its individual-level fairness
A Kathan, S Amiriparian, L Christ, A Triantafyllopoulos, N Müller, A König, ...
Proceedings of the 3rd International on Multimodal Sentiment Analysis …, 2022
Distinguishing between pre- and post-treatment in the speech of patients with chronic obstructive pulmonary disease
A Triantafyllopoulos, M Fendler, A Batliner, M Gerczuk, S Amiriparian, ...
arXiv preprint arXiv:2207.12784, 2022
Robust speech emotion recognition under different encoding conditions
C Oates, A Triantafyllopoulos, I Steiner, B Schuller
Fatigue prediction in outdoor running conditions using audio data
A Triantafyllopoulos, S Ottl, A Gebhard, E Rituerto-González, M Jaumann, ...
2022 44th Annual International Conference of the IEEE Engineering in …, 2022
An overview and analysis of sequence-to-sequence emotional voice conversion
Z Yang, X Jing, A Triantafyllopoulos, M Song, I Aslan, BW Schuller
arXiv preprint arXiv:2203.15873, 2022
Personalised deep learning for monitoring depressed mood from speech
M Gerczuk, A Triantafyllopoulos, S Amiriparian, A Kathan, J Bauer, ...
2022 E-Health and Bioengineering Conference (EHB), 1-5, 2022
Daily Mental Health Monitoring from Speech: A Real-World Japanese Dataset and Multitask Learning Analysis
M Song, A Triantafyllopoulos, Z Yang, H Takeuchi, T Nakamura, A Kishi, ...
ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and …, 2023