Research - synopsis
(References used in the text on this page can be found in the List of Journal papers and books, where D denotes peer-reviewed journal articles, A research monographs, C patents and B edited books, or in the List of peer reviewed conference papers for references starting with E.)
The focus of Prof. Angelov's research over more than three decades is learning from data, aiming to automatically extract human-understandable models (machine learning and explainable AI). This includes addressing problems such as, but not limited to:
- the fastest reported t-SNE visualisation algorithm (5x or more faster than the fastest state-of-the-art method, Barnes-Hut t-SNE), recently published in TMLR: A. Aghasanli, P. Angelov, Recursive SNE: Fast Prototype-Based t-SNE for Large-Scale and Online Data, Transactions on Machine Learning Research (TMLR), 2025. The Python code is published on GitHub and the Matlab code in the Mathworks repository (see below)
- interpretability and explainability of models, including complex models such as deep neural networks, based on human-intelligible prototypes [D2, D35, D30, D28, D5, D48; E13, E22, E27, E31, E32, E40; F1, F5, F7, F9, F10, F12, F14-F16]
- continual learning from non-stationary data streams [D89; E50, E112, E129, E136], domain adaptation [E10, E15, E8; F7], class-incremental learning [D22], and addressing concept drift and shift [D93; E26, E111]
- evolving clustering [E87] and classifiers (the eClass family) [D100, D102, E56, E76; E98], self-evolving predictive models [D77, D111, D112; E81], self-evolving controllers [D31, D110, E46, E102]
- methods for adversarial attacks [E4, E6-E9, E14, E15, E21], deepfakes [E9, E13], and anomaly and fault detection [D73; E51, E66, E70, E77, E92]
- demonstrating that transfer learning can be applied successfully across data from inorganic and organic matter; the method presented in the paper is also highly accurate (over 98%) in identifying the origin of tusks (legal, historic mammoth versus endangered, poached elephant, African or Indian) without the need for destructive DNA tests: A. Aghasanli, P. Angelov, D. Kangin, J. Kerns, R.F. Shepherd, Transfer learning from inorganic materials to ivory detection, Nature Scientific Reports, 15 (1): 15536, April 2025
- self-calibrating soft sensors [D99, E118, E121, E126, E131]
- applications to autonomous driving [E31-E32], Earth Observation and remote sensing [D15, D16, D52; E7], aerospace research and defence [E1, E71], COVID detection based on CT scans [E9], etc.
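The evolving-clustering idea listed above can be illustrated with a minimal sketch. This is not Prof. Angelov's published algorithm: the fixed distance threshold and the simple recursive mean update are simplifying assumptions made here for illustration only. The key property it shares with evolving clustering is that cluster prototypes are created on the fly from streaming data, with no number of clusters fixed in advance.

```python
import numpy as np

class EvolvingClusterer:
    """Minimal online clustering sketch: prototypes appear as data streams in,
    so the cluster structure evolves rather than being fixed up front."""

    def __init__(self, radius=1.0):
        self.radius = radius      # assumed fixed threshold (a simplification)
        self.prototypes = []      # cluster centres
        self.counts = []          # number of samples assigned to each centre

    def partial_fit(self, x):
        """Assign one streaming sample; return the index of its cluster."""
        x = np.asarray(x, dtype=float)
        if not self.prototypes:
            self.prototypes.append(x.copy())
            self.counts.append(1)
            return 0
        d = [np.linalg.norm(x - p) for p in self.prototypes]
        j = int(np.argmin(d))
        if d[j] > self.radius:    # far from all centres: spawn a new cluster
            self.prototypes.append(x.copy())
            self.counts.append(1)
            return len(self.prototypes) - 1
        # recursive (incremental) mean update of the winning prototype
        self.counts[j] += 1
        self.prototypes[j] += (x - self.prototypes[j]) / self.counts[j]
        return j
```

Streaming two well-separated groups of points through `partial_fit` yields two prototypes without the number of clusters ever being specified.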
Continual (open-ended, "lifelong") learning from non-stationary data streams takes a particular place in his research interests (e.g. at the turn of the century he coined the term "evolving" model structure). By 2002-2004 he had published a range of pioneering results in the area now known as evolving intelligent systems [A5, E138]. He introduced a number of concepts and modelling methodologies, in particular dynamically evolving fuzzy rule-based models (1998-2004) [A3; E148, E146] and classifiers (2006-2008) [D100].
Prof. Angelov is particularly interested in human-intelligible machine learning and explainable AI [D2, D35, D30, D28, D5; E13, E22, E27, E31, E32, E40; F1, F5, F7, F9, F10, F12, F14-F16] that mimics human-like reasoning, such as Semantically meaningfUl Primitives (SuPrimes) and anthropomorphic machine learning [D46]. Based on this new approach, several new methods were developed: i) xDNN (explainable Deep Neural Networks) [D35]; ii) IDEAL [D2]. These new methods are all interpretable-by-design and synergistically combine machine learning and reasoning. Due to their prototype-based nature, they can dynamically self-evolve: they can continue to learn and adapt to new data with little training (weak supervision, one-shot learning) or none (unsupervised, zero-shot learning). For example, xClass [D22] not only learns from a few new data samples, but also detects, and can learn, classes that are unknown and were not used in training.
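The prototype-based, self-evolving behaviour described above (one-shot learning of new classes, and refusing to force a sample into a known class when it fits none) can be sketched minimally as follows. This is an illustrative toy, not xDNN, IDEAL or xClass themselves; the single-prototype-per-class design and the fixed rejection threshold are assumptions made here for clarity.

```python
import numpy as np

class PrototypeClassifier:
    """Toy nearest-prototype classifier with an 'unknown' option:
    one prototype (class mean) per label; samples far from every
    prototype are flagged as unknown rather than misclassified."""

    def __init__(self, reject_distance=2.0):
        self.reject_distance = reject_distance  # assumed threshold
        self.prototypes = {}                    # label -> (mean, count)

    def learn(self, x, label):
        """One-shot for a new class; recursive mean update otherwise."""
        x = np.asarray(x, dtype=float)
        if label not in self.prototypes:
            self.prototypes[label] = (x.copy(), 1)
        else:
            mean, n = self.prototypes[label]
            self.prototypes[label] = (mean + (x - mean) / (n + 1), n + 1)

    def predict(self, x):
        if not self.prototypes:
            return "unknown"
        x = np.asarray(x, dtype=float)
        label, d = min(
            ((lb, np.linalg.norm(x - m)) for lb, (m, _) in self.prototypes.items()),
            key=lambda t: t[1],
        )
        return "unknown" if d > self.reject_distance else label
```

Because a prediction is a comparison against a small set of prototypes, the decision is directly inspectable ("closest to the prototype of class a"), which is the interpretable-by-design flavour the text refers to.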
Another direction of his research is the degree of autonomy of machine learning methods [A2; D47; E95]. Traditional methods for modelling and learning from data involve a lot of handcrafting (selecting features, model type, parameters, thresholds, etc.). In addition, they rest on many assumptions, e.g. about the data generation model, randomness or determinism, data independence, an infinite amount of data, etc., many of which do not hold in practice. The aim of autonomous learning is to have methods, algorithms and tools that require no or very light user involvement in these choices and are driven entirely by the data pattern. This is especially important for streaming data. Prof. Angelov has developed different methods and algorithms and published papers as well as a research monograph on this topic.
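As a flavour of what "driven entirely by the data pattern" can mean, the sketch below computes a density value for each streaming sample from recursively updated statistics (a running mean and a running mean of squared norms), so no user-set thresholds enter the estimate itself. The Cauchy-type formula used here is an assumption chosen for illustration and is not claimed to reproduce the published methods.

```python
import numpy as np

class RecursiveDensity:
    """Sketch of recursive, parameter-free density estimation for streams:
    only two statistics are kept and updated incrementally, so each new
    sample's density is computed from the data seen so far alone."""

    def __init__(self):
        self.n = 0
        self.mean = None   # running mean of samples
        self.msq = 0.0     # running mean of squared norms

    def update(self, x):
        """Fold one sample into the statistics; return its density."""
        x = np.asarray(x, dtype=float)
        self.n += 1
        if self.mean is None:
            self.mean = x.copy()
        else:
            self.mean += (x - self.mean) / self.n
        self.msq += (float(x @ x) - self.msq) / self.n
        # Cauchy-type density: 1 at the data mean, decreasing both with
        # distance from the mean and with the spread of the data
        var = self.msq - float(self.mean @ self.mean)
        dist2 = float((x - self.mean) @ (x - self.mean))
        return 1.0 / (1.0 + dist2 + max(var, 0.0))
```

A sample close to the bulk of the stream receives a higher density than an outlier, which is the kind of data-derived signal that can drive cluster creation or anomaly flags without hand-tuned thresholds.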
The algorithms and software for many of these methods can be found in repositories such as:
- GitHub, in Python: https://github.com/Aghasanli-Angelov/RSNE and https://github.com/Plamen-Eduardo/xDNN---Python as well as https://github.com/lira-centre
- GitHub, in Matlab: https://github.com/ashwin0306/Angelov-Aghasanli-Ashwin and https://github.com/Plamen-Eduardo/xDNN-SARS-CoV-2-CT-Scan
- the Mathworks repository: https://www.mathworks.com/matlabcentral/profile/authors/8333192
Further details about the software can be found at https://angeloventelsensys.wixsite.com/plamenangelov/software-downloads