Hao Zhang
Associate Professor, Information Science and Engineering, Ocean University of China
Verified email at ouc.edu.cn
Title
Cited by
Year
Retinal blood vessel segmentation using fully convolutional network with transfer learning
Z Jiang, H Zhang, Y Wang, SB Ko
Computerized Medical Imaging and Graphics 68, 1-15, 2018
Cited by 209 · 2018
Breast cancer classification in automated breast ultrasound using multiview convolutional neural network with transfer learning
Y Wang, EJ Choi, Y Choi, H Zhang, GY Jin, SB Ko
Ultrasound in medicine & biology 46 (5), 1119-1132, 2020
Cited by 129 · 2020
Efficient multiple-precision floating-point fused multiply-add with mixed-precision support
H Zhang, D Chen, SB Ko
IEEE Transactions on Computers 68 (7), 1035-1048, 2019
Cited by 53 · 2019
Efficient Posit Multiply-Accumulate Unit Generator for Deep Learning Applications
H Zhang, J He, SB Ko
2019 IEEE International Symposium on Circuits and Systems (ISCAS), 1-5, 2019
Cited by 47 · 2019
Design of Power Efficient Posit Multiplier
H Zhang, SB Ko
IEEE Transactions on Circuits and Systems II: Express Briefs 67 (5), 861-865, 2020
Cited by 41 · 2020
New flexible multiple-precision multiply-accumulate unit for deep neural network training and inference
H Zhang, D Chen, SB Ko
IEEE Transactions on Computers 69 (1), 26-38, 2019
Cited by 34 · 2019
Deep learning for the classification of small (≤ 2 cm) pulmonary nodules on CT imaging: a preliminary study
KJ Chae, GY Jin, SB Ko, Y Wang, H Zhang, EJ Choi, H Choi
Academic radiology 27 (4), e55-e63, 2020
Cited by 31 · 2020
Efficient fixed/floating-point merged mixed-precision multiply-accumulate unit for deep learning processors
H Zhang, HJ Lee, SB Ko
2018 IEEE International Symposium on Circuits and Systems (ISCAS), 1-5, 2018
Cited by 27 · 2018
Novel convolutional neural network architecture for improved pulmonary nodule classification on computed tomography
Y Wang, H Zhang, KJ Chae, Y Choi, GY Jin, SB Ko
Multidimensional Systems and Signal Processing 31, 1163-1183, 2020
Cited by 23 · 2020
Area- and power-efficient iterative single/double-precision merged floating-point multiplier on FPGA
H Zhang, D Chen, SB Ko
IET Computers & Digital Techniques 11 (4), 149-158, 2017
Cited by 19 · 2017
High performance and energy efficient single‐precision and double‐precision merged floating‐point adder on FPGA
H Zhang, D Chen, SB Ko
IET Computers & Digital Techniques 12 (1), 20-29, 2018
Cited by 15 · 2018
Segmentation for document layout analysis: not dead yet
L Markewich, H Zhang, Y Xing, N Lambert-Shirzad, Z Jiang, RKW Lee, ...
International Journal on Document Analysis and Recognition (IJDAR), 1-11, 2022
Cited by 13 · 2022
A Real-Time Architecture for Pruning the Effectual Computations in Deep Neural Networks
M Asadikouhanjani, H Zhang, L Gopalakrishnan, HJ Lee, SB Ko
IEEE Transactions on Circuits and Systems I: Regular Papers 68 (5), 2030-2041, 2021
Cited by 11 · 2021
Efficient spiking neural network training and inference with reduced precision memory and computing
Y Wang, K Shahbazi, H Zhang, KI Oh, JJ Lee, SB Ko
IET Computers & Digital Techniques 13 (5), 397-404, 2019
Cited by 10 · 2019
Energy efficient spiking neural network processing using approximate arithmetic units and variable precision weights
Y Wang, H Zhang, KI Oh, JJ Lee, SB Ko
Journal of Parallel and Distributed Computing 158, 164-175, 2021
Cited by 8 · 2021
Variable-Precision Approximate Floating-Point Multiplier for Efficient Deep Learning Computation
H Zhang, SB Ko
IEEE Transactions on Circuits and Systems II: Express Briefs 69 (5), 2503-2507, 2022
Cited by 7 · 2022
Decimal floating-point fused multiply-add with redundant internal encodings
L Han, H Zhang, SB Ko
IET Computers & Digital Techniques 10 (4), 147-156, 2016
Cited by 7 · 2016
Area and power efficient decimal carry-free adder
L Han, H Zhang, SB Ko
Electronics Letters 51 (23), 1852-1854, 2015
Cited by 7 · 2015
FPGA-Based Approximate Multiplier for Efficient Neural Computation
H Zhang, H Xiao, H Qu, SB Ko
2021 IEEE International Conference on Consumer Electronics-Asia (ICCE-Asia), 1-4, 2021
Cited by 6 · 2021
Efficient Multiple-Precision Posit Multiplier
H Zhang, SB Ko
2021 IEEE International Symposium on Circuits and Systems (ISCAS), 1-5, 2021
Cited by 6 · 2021