Botty Dimanov
Verified email at cl.cam.ac.uk
Title | Cited by | Year
You shouldn't trust me: Learning models which conceal unfairness from multiple explanation methods
B Dimanov, U Bhatt, M Jamnik, A Weller
IOS Press, 2020
Cited by 101 | 2020
Now You See Me (CME): Concept-based Model Extraction
D Kazhdan, B Dimanov, M Jamnik, P Lio, A Weller
Advances in Interpretable Machine Learning and Artificial Intelligence (AIMLAI), 2020
Cited by 62 | 2020
Is Disentanglement all you need? Comparing Concept-based & Disentanglement Approaches
D Kazhdan, B Dimanov, HA Terre, M Jamnik, P Lio, A Weller
International Conference on Learning Representations (ICLR) Workshop on …, 2021
Cited by 15 | 2021
MEME: Generating RNN Model Explanations via Model Extraction
D Kazhdan, B Dimanov, M Jamnik, P Lio
NeurIPS 2020 Workshop HAMLETS, 2020
Cited by 11 | 2020
REM: an integrative rule extraction methodology for explainable data analysis in healthcare
Z Shams, B Dimanov, S Kola, N Simidjievski, HA Terre, P Scherer, ...
medRxiv 2021.01.25.21250459, 2021
Cited by 10 | 2021
Failing conceptually: Concept-based explanations of dataset shift
MA Wijaya, D Kazhdan, B Dimanov, M Jamnik
arXiv preprint arXiv:2104.08952, 2021
Cited by 7 | 2021
Interpretable Deep Learning: Beyond Feature-Importance with Concept-based Explanations
B Dimanov
Cited by 4 | 2021
GCI: A (G)raph (C)oncept (I)nterpretation Framework
D Kazhdan, B Dimanov, LC Magister, P Barbiero, M Jamnik, P Lio
arXiv preprint arXiv:2302.04899, 2023
Cited by 3 | 2023
Step-Wise Sensitivity Analysis: Identifying Partially Distributed Representations For Interpretable Deep Learning
B Dimanov, M Jamnik
ICLR 2019 Workshop on Debugging Machine Learning Models, 2019
Cited by 1 | 2019
Explainer Divergence Scores (EDS): Some Post-Hoc Explanations May be Effective for Detecting Unknown Spurious Correlations
S Cardozo, GI Montero, D Kazhdan, B Dimanov, M Wijaya, M Jamnik, ...
arXiv preprint arXiv:2211.07650, 2022
2022
Method for inspecting a neural network
BT Dimanov, M Jamnik
US Patent 11,449,578, 2022
2022