Takuma Yagi
Project Researcher, Institute of Industrial Science, The University of Tokyo
Verified email at iis.u-tokyo.ac.jp - Homepage
Title · Cited by · Year
Future Person Localization in First-Person Videos
T Yagi, K Mangalam, R Yonetani, Y Sato
2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2018
154 · 2018
Ego4d: Around the world in 3,000 hours of egocentric video
K Grauman, A Westbury, E Byrne, Z Chavis, A Furnari, R Girdhar, ...
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2022
129 · 2022
GO-finder: a registration-free wearable system for assisting users in finding lost objects via hand-held object discovery
T Yagi, T Nishiyasu, K Kawasaki, M Matsuki, Y Sato
26th International Conference on Intelligent User Interfaces, 139-149, 2021
6 · 2021
Foreground-aware stylization and consensus pseudo-labeling for domain adaptation of first-person hand segmentation
T Ohkawa, T Yagi, A Hashimoto, Y Ushiku, Y Sato
IEEE Access 9, 94644-94655, 2021
4 · 2021
Hand-Object Contact Prediction via Motion-Based Pseudo-Labeling and Guided Progressive Label Correction
T Yagi, MT Hasan, Y Sato
32nd British Machine Vision Conference (BMVC), 2021
2 · 2021
Style Adapted DataBase: Generalizing Hand Segmentation via Semantics-aware Stylization
T Ohkawa, T Yagi, Y Sato
IEICE Technical Report; IEICE Tech. Rep. 120 (187), 26-31, 2020
1 · 2020
Fine-Grained Affordance Annotation for Egocentric Hand-Object Interaction Videos
Z Yu, Y Huang, R Furuta, T Yagi, Y Goutsu, Y Sato
Proceedings of the IEEE/CVF Winter Conference on Applications of Computer …, 2023
2023
GO-Finder: A Registration-free Wearable System for Assisting Users in Finding Lost Hand-held Objects
T Yagi, T Nishiyasu, K Kawasaki, M Matsuki, Y Sato
ACM Transactions on Interactive Intelligent Systems 12 (4), 1-29, 2022
2022
Precise Affordance Annotation for Egocentric Action Video Datasets
Z Yu, Y Huang, R Furuta, T Yagi, Y Goutsu, Y Sato
arXiv preprint arXiv:2206.05424, 2022
2022
Object Instance Identification in Dynamic Environments
T Yagi, MT Hasan, Y Sato
arXiv preprint arXiv:2206.05319, 2022
2022
Egocentric Pedestrian Motion Prediction by Separately Modeling Body Pose and Position
D Wu
IEICE Technical Report; IEICE Tech. Rep. 119 (481), 39-44, 2020
2020
Human-Computer Interaction: a User Evaluation Perspective
T Yagi, S Shinagawa, K Akiyama, K Hirotaka, R Shimamura, T Matayoshi
IEICE Technical Report; IEICE Tech. Rep. 118 (260), 1-4, 2018
2018
Egocentric Pedestrian Motion Forecasting for Separately Modelling Pose and Location
D Wu, T Yagi, Y Matsui, Y Sato
Future Person Localization in First-Person Videos: Supplementary Material
T Yagi, K Mangalam, R Yonetani, Y Sato
Articles 1–14