Parsa Nooralinejad
Sparsity and heterogeneous dropout for continual learning in the null space of neural activations
A Abbasi, P Nooralinejad, V Braverman, H Pirsiavash, S Kolouri
Conference on Lifelong Learning Agents, 617-628, 2022
Cited by 16 · 2022
A simple baseline for low-budget active learning
K Pourahmadi, P Nooralinejad, H Pirsiavash
arXiv preprint arXiv:2110.12033, 2021
Cited by 14 · 2021
PRANC: Pseudo RAndom Networks for Compacting deep models
P Nooralinejad, A Abbasi, S Abbasi Koohpayegani, ...
ICCV 2023
Cited by 6 · 2022
NOLA: Networks as linear combination of low rank random basis
SA Koohpayegani, KL Navaneet, P Nooralinejad, S Kolouri, H Pirsiavash
arXiv preprint arXiv:2310.02556, 2023
Cited by 4 · 2023
TaxoNN: a light-weight accelerator for deep neural network training
R Hojabr, K Givaki, K Pourahmadi, P Nooralinejad, A Khonsari, ...
2020 IEEE International Symposium on Circuits and Systems (ISCAS), 1-5, 2020
Cited by 3 · 2020
Multi-Agent Lifelong Implicit Neural Learning
S Kolouri, A Abbasi, SA Koohpayegani, P Nooralinejad, H Pirsiavash
IEEE Signal Processing Letters, 2023
Cited by 1 · 2023
TaxoNN: a light-weight accelerator for training deep neural networks on the edge
M Najaf, R Hojabr, K Givaki, K Pourahmadi, P Nooralinejad, A Khonsari, ...
US Patent App. 17/961,396, 2024
BrainWash: A Poisoning Attack to Forget in Continual Learning
A Abbasi, P Nooralinejad, H Pirsiavash, S Kolouri
arXiv preprint arXiv:2311.11995, 2023