Klaudia Bałazy
Verified email at doctoral.uj.edu.pl
Title
Cited by
Year
Zero time waste: Recycling predictions in early exit neural networks
M Wołczyk, B Wójcik, K Bałazy, IT Podolak, J Tabor, M Śmieja, T Trzcinski
Advances in Neural Information Processing Systems 34, 2516-2528, 2021
Cited by 43 · 2021
LoRA-XS: Low-Rank Adaptation with Extremely Small Number of Parameters
K Bałazy, M Banaei, K Aberer, J Tabor
arXiv preprint arXiv:2405.17604, 2024
Cited by 11 · 2024
Exploiting transformer activation sparsity with dynamic inference
M Piórczyński, F Szatkowski, K Bałazy, B Wójcik
arXiv preprint arXiv:2310.04361, 2023
Cited by 7 · 2023
Zero time waste in pre-trained early exit neural networks
B Wójcik, M Przewięźlikowski, F Szatkowski, M Wołczyk, K Bałazy, ...
Neural Networks 168, 580-601, 2023
Cited by 6 · 2023
Step by Step Loss Goes Very Far: Multi-Step Quantization for Adversarial Text Attacks
P Gaiński, K Bałazy
Proceedings of the 17th Conference of the European Chapter of the …, 2023
Cited by 6 · 2023
Direction is what you need: Improving Word Embedding Compression in Large Language Models
K Bałazy, M Banaei, R Lebret, J Tabor, K Aberer
Proceedings of the 6th Workshop on Representation Learning for NLP (RepL4NLP …, 2021
Cited by 5 · 2021
r-softmax: Generalized Softmax with Controllable Sparsity Rate
K Bałazy, Ł Struski, M Śmieja, J Tabor
Computational Science – ICCS 2023 14074 (Lecture Notes in Computer Science …, 2023
Cited by 2 · 2023
Finding the optimal network depth in classification tasks
B Wójcik, M Wołczyk, K Bałazy, J Tabor
Machine Learning and Knowledge Discovery in Databases: European Conference …, 2021
Cited by 1 · 2021
Revisiting Offline Compression: Going Beyond Factorization-based Methods for Transformer Language Models
M Banaei, K Bałazy, A Kasymov, R Lebret, J Tabor, K Aberer
Findings of the Association for Computational Linguistics: EACL 2023, 1788–1805, 2023
2023