Wangchunshu Zhou
Title · Cited by · Year
CommonGen: A constrained text generation challenge for generative commonsense reasoning
BY Lin, W Zhou, M Shen, P Zhou, C Bhagavatula, Y Choi, X Ren
EMNLP 2020 (Findings), 2019
382 · 2019
BERT loses patience: Fast and robust inference with early exit
W Zhou, C Xu, T Ge, J McAuley, K Xu, F Wei
Advances in Neural Information Processing Systems 33, 18330-18341, 2020
328 · 2020
BERT-of-Theseus: Compressing BERT by progressive module replacing
C Xu*, W Zhou*, T Ge, F Wei, M Zhou
EMNLP 2020, 2020
219 · 2020
BERT-based lexical substitution
W Zhou, T Ge, K Xu, F Wei, M Zhou
Proceedings of the 57th annual meeting of the association for computational …, 2019
127 · 2019
RoleLLM: Benchmarking, eliciting, and enhancing role-playing abilities of large language models
ZM Wang, Z Peng, H Que, J Liu, W Zhou, Y Wu, H Guo, R Gan, Z Ni, ...
ACL 2024 Findings, 2023
126 · 2023
A Survey on Green Deep Learning
J Xu*, W Zhou*, Z Fu*, H Zhou, L Li
arXiv preprint arXiv:2111.05193, 2021
118* · 2021
BERT learns to teach: Knowledge distillation with meta learning
W Zhou*, C Xu*, J McAuley
Proceedings of the 60th Annual Meeting of the Association for Computational …, 2022
81 · 2022
Pre-training text-to-text transformers for concept-centric common sense
W Zhou*, DH Lee*, RK Selvam, S Lee, BY Lin, X Ren
ICLR 2021, 2020
74 · 2020
Controlled Text Generation with Natural Language Instructions
W Zhou, YE Jiang, E Wilcox, R Cotterell, M Sachan
ICML 2023, 2023
63 · 2023
Agents: An Open-source Framework for Autonomous Language Agents
W Zhou*, YE Jiang*, L Li*, J Wu*, T Wang, S Qiu, J Zhang, J Chen, R Wu, ...
arXiv preprint arXiv:2309.07870, 2023
60 · 2023
To repeat or not to repeat: Insights from scaling LLM under token-crisis
F Xue, Y Fu, W Zhou, Z Zheng, Y You
Advances in Neural Information Processing Systems 36, 2024
59 · 2024
X2-VLM: All-In-One Pre-trained Model For Vision-Language Tasks
Y Zeng, X Zhang, H Li, J Wang, J Zhang, W Zhou
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023
56 · 2023
Interactive natural language processing
Z Wang, G Zhang, K Yang, N Shi, W Zhou, S Hao, G Xiong, Y Li, MY Sim, ...
arXiv preprint arXiv:2305.13246, 2023
55 · 2023
How many unicorns are in this image? A safety evaluation benchmark for vision LLMs
H Tu, C Cui, Z Wang, Y Zhou, B Zhao, J Han, W Zhou, H Yao, C Xie
arXiv preprint arXiv:2311.16101, 2023
52 · 2023
RecurrentGPT: Interactive Generation of (Arbitrarily) Long Text
W Zhou, YE Jiang, P Cui, T Wang, Z Xiao, Y Hou, R Cotterell, M Sachan
arXiv preprint arXiv:2305.13304, 2023
51 · 2023
OpenMoE: An early effort on open mixture-of-experts language models
F Xue, Z Zheng, Y Fu, J Ni, Z Zheng, W Zhou, Y You
ICML 2024, 2024
50 · 2024
Beyond preserved accuracy: Evaluating loyalty and robustness of BERT compression
C Xu*, W Zhou*, T Ge, K Xu, J McAuley, F Wei
EMNLP 2021, 2021
47 · 2021
Towards interpretable natural language understanding with explanations as latent variables
W Zhou*, J Hu*, H Zhang*, X Liang, M Sun, C Xiong, J Tang
NeurIPS 2020, 2020
40 · 2020
Scheduled DropHead: A Regularization Method for Transformer Models
W Zhou, T Ge, K Xu, F Wei, M Zhou
EMNLP 2020 (Findings), 2020
40 · 2020
Learning to compare for better training and evaluation of open domain natural language generation models
W Zhou, K Xu
Proceedings of the AAAI Conference on Artificial Intelligence 34 (05), 9717-9724, 2020
39 · 2020
Articles 1–20