Ghada Sokar
Title
Cited by
Year
SpaceNet: Make Free Space For Continual Learning
G Sokar, DC Mocanu, M Pechenizkiy
Elsevier Neurocomputing Journal, 2020
Cited by 63, 2020
Deep Ensembling with No Overhead for either Training or Testing: The All-Round Blessings of Dynamic Sparsity
S Liu, T Chen, Z Atashgahi, X Chen, G Sokar, E Mocanu, M Pechenizkiy, ...
ICLR 2022. The Tenth International Conference on Learning Representations, 2021
Cited by 40, 2021
Topological Insights in Sparse Neural Networks
S Liu, T Van der Lee, A Yaman, Z Atashgahi, D Ferraro, G Sokar, ...
ECML PKDD 2020, 2020
Cited by 32*, 2020
The Dormant Neuron Phenomenon in Deep Reinforcement Learning
G Sokar, R Agarwal, PS Castro, U Evci
ICML 2023, International Conference on Machine Learning, Oral, 2023
Cited by 28, 2023
Dynamic Sparse Training for Deep Reinforcement Learning
G Sokar, E Mocanu, DC Mocanu, M Pechenizkiy, P Stone
IJCAI-ECAI 2022. The 31st International Joint Conference on Artificial …, 2021
Cited by 26, 2021
A generic OCR using deep siamese convolution neural networks
G Sokar, EE Hemayed, M Rehan
2018 IEEE 9th Annual Information Technology, Electronics and Mobile …, 2018
Cited by 20, 2018
Quick and robust feature selection: the strength of energy-efficient sparse training for autoencoders
Z Atashgahi, G Sokar, T van der Lee, E Mocanu, DC Mocanu, R Veldhuis, ...
Machine Learning, 1-38, 2022
Cited by 18, 2022
Self-Attention Meta-Learner for Continual Learning
G Sokar, DC Mocanu, M Pechenizkiy
AAMAS 2021. 20th International Conference on Autonomous Agents and …, 2021
Cited by 14, 2021
Learning Invariant Representation for Continual Learning
G Sokar, DC Mocanu, M Pechenizkiy
AAAI Workshop on Meta-Learning for Computer Vision (AAAI-2021), 2020
Cited by 13, 2020
Quick and robust feature selection: the strength of energy-efficient sparse training for autoencoders
Z Atashgahi, G Sokar, T van der Lee, E Mocanu, DC Mocanu, R Veldhuis, ...
Machine Learning Journal, 2020
Cited by 13, 2020
Where to Pay Attention in Sparse Training for Feature Selection?
G Sokar, Z Atashgahi, M Pechenizkiy, DC Mocanu
NeurIPS2022, 36th Annual Conference on Neural Information Processing Systems, 2022
Cited by 11, 2022
Avoiding Forgetting and Allowing Forward Transfer in Continual Learning via Sparse Networks
G Sokar, DC Mocanu, M Pechenizkiy
ECML PKDD 2022, 2021
Cited by 8*, 2021
Automatic Noise Filtering with Dynamic Sparse Training in Deep Reinforcement Learning
B Grooten, G Sokar, S Dohare, E Mocanu, ME Taylor, M Pechenizkiy, ...
AAMAS 2023. 22nd International Conference on Autonomous Agents and …, 2023
Cited by 6, 2023
FreeTickets: Accurate, Robust and Efficient Deep Ensemble by Training with Dynamic Sparsity
S Liu, T Chen, Z Atashgahi, X Chen, G Sokar, E Mocanu, M Pechenizkiy, ...
ICLR 2022. The Tenth International Conference on Learning Representations, 2021
Cited by 3, 2021
Mixtures of Experts Unlock Parameter Scaling for Deep RL
J Obando-Ceron, G Sokar, T Willi, C Lyle, J Farebrother, J Foerster, ...
arXiv preprint arXiv:2402.08609, 2024
Cited by 2, 2024
Continual Learning with Dynamic Sparse Training: Exploring Algorithms for Effective Model Updates
MO Yildirim, EC Gok, G Sokar, DC Mocanu, J Vanschoren
Conference on Parsimony and Learning, 94-107, 2024
Cited by 1, 2024
Continual Lifelong Learning for Intelligent Agents
G Sokar
IJCAI 2021. International Joint Conferences on Artificial Intelligence (IJCAI), 2021
Cited by 1, 2021
Learning Continually Under Changing Data Distributions
G Sokar
2023
Department of Mathematics and Computer Science
MD Keep, M Pechenizkiy, DC Mocanu, Z Atashgahi, G Sokar
2023
Continual Learning with Dynamic Sparse Training: Exploring Algorithms for Effective Model Updates
M Onur Yildirim, E Ceren Gok Yildirim, G Sokar, D Constantin Mocanu, ...
arXiv e-prints, arXiv: 2308.14831, 2023
2023