Zhao Ren
Title · Cited by · Year
The INTERSPEECH 2018 computational paralinguistics challenge: atypical and self-assessed affect, crying and heart beats
B Schuller, S Steidl, A Batliner, PB Marschik, H Baumeister, F Dong, ...
Cited by 86, 2018
AVEC 2019 workshop and challenge: state-of-mind, detecting depression with AI, and cross-cultural affect recognition
F Ringeval, B Schuller, M Valstar, N Cummins, R Cowie, L Tavabi, ...
Proceedings of the 9th International on Audio/Visual Emotion Challenge and …, 2019
Cited by 61, 2019
Deep Scalogram Representations for Acoustic Scene Classification
Z Ren, K Qian, Z Zhang, V Pandit, A Baird, B Schuller
IEEE/CAA Journal of Automatica Sinica 5 (3), 662-669, 2018
Cited by 59, 2018
Exploring Deep Spectrum Representations via Attention-based Recurrent and Convolutional Neural Networks for Speech Emotion Recognition
Z Zhao, Z Bao, Y Zhao, Z Zhang, N Cummins, Z Ren, B Schuller
IEEE Access, 2019
Cited by 37, 2019
Deep sequential image features on acoustic scene classification
Z Ren, V Pandit, K Qian, Z Yang, Z Zhang, B Schuller
Universität Augsburg, 2017
Cited by 34, 2017
Attention-based Convolutional Neural Networks for Acoustic Scene Classification
Z Ren, Q Kong, K Qian, MD Plumbley, BW Schuller
DCASE Workshop, 39-43, 2018
Cited by 30, 2018
An early study on intelligent analysis of speech under COVID-19: Severity, sleep quality, fatigue, and anxiety
J Han, K Qian, M Song, Z Yang, Z Ren, S Liu, J Liu, H Zheng, W Ji, ...
arXiv preprint arXiv:2005.00096, 2020
Cited by 27, 2020
Wavelets revisited for the classification of acoustic scenes
K Qian, Z Ren, V Pandit, Z Yang, Z Zhang, B Schuller
Universität Augsburg, 2017
Cited by 25, 2017
The University of Passau open emotion recognition system for the multimodal emotion challenge
J Deng, N Cummins, J Han, X Xu, Z Ren, V Pandit, Z Zhang, B Schuller
Chinese Conference on Pattern Recognition, 652-666, 2016
Cited by 25, 2016
Learning image-based representations for heart sound classification
Z Ren, N Cummins, V Pandit, J Han, K Qian, B Schuller
Proceedings of the 2018 International Conference on Digital Health, 143-147, 2018
Cited by 23, 2018
Attention-based Atrous Convolutional Neural Networks: Visualisation and Understanding Perspectives of Acoustic Scenes
Z Ren, Q Kong, J Han, M Plumbley, BW Schuller
2019 Proceedings IEEE International Conference on Acoustics, Speech and …, 2019
Cited by 21, 2019
Towards conditional adversarial training for predicting emotions from speech
J Han, Z Zhang, Z Ren, F Ringeval, B Schuller
2018 IEEE International Conference on Acoustics, Speech and Signal …, 2018
Cited by 18, 2018
Bags in bag: Generating context-aware bags for tracking emotions from speech
J Han, Z Zhang, M Schmitt, Z Ren, F Ringeval, B Schuller
Interspeech 2018, 3082-3086, 2018
Cited by 11, 2018
Sincerity and Deception in Speech: Two Sides of the Same Coin? A Transfer- and Multi-Task Learning Perspective
Y Zhang, F Weninger, Z Ren, BW Schuller
INTERSPEECH, 2041-2045, 2016
Cited by 11, 2016
Implicit fusion by joint audiovisual training for emotion recognition in mono modality
J Han, Z Zhang, Z Ren, B Schuller
ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and …, 2019
Cited by 10, 2019
EmoBed: Strengthening Monomodal Emotion Recognition via Training with Crossmodal Emotion Embeddings
J Han, Z Zhang, Z Ren, B Schuller
IEEE Transactions on Affective Computing, 2019
Cited by 8, 2019
Machine Listening for Heart Status Monitoring: Introducing and Benchmarking HSS–the Heart Sounds Shenzhen Corpus
F Dong, K Qian, Z Ren, A Baird, X Li, Z Dai, B Dong, F Metze, ...
IEEE Journal of Biomedical and Health Informatics, 1-13, 2019
Cited by 5, 2019
Extending the FOV from Disparity and Color Consistencies in Multiview Light Fields
Z Ren, Q Zhang, H Zhu, Q Wang
Proc. ICIP, 1157-1161, 2017
Cited by 5, 2017
Evaluation of the Pain Level from Speech: Introducing a Novel Pain Database and Benchmarks
Z Ren, N Cummins, J Han, S Schnieder, J Krajewski, B Schuller
Proc. ITG, 56-60, 2018
Cited by 4, 2018
A comparison of acoustic and linguistics methodologies for Alzheimer’s dementia recognition
N Cummins, Y Pan, Z Ren, J Fritsch, VS Nallanthighal, H Christensen, ...
Interspeech 2020, 2182-2186, 2020
Cited by 3, 2020
Articles 1–20