Research - synopsis
The focus of Prof. Angelov's research over more than three decades has been learning from data with the aim of automatically extracting human-understandable models (machine learning and explainable AI). This includes addressing problems such as, but not limited to:
- interpretability and explainability of models;
- concept drift and shift, out-of-distribution and non-stationary data streams;
- zero- and few-shot learning;
- recursive, non-iterative methods;
- evolving systems, including clustering, classifiers, predictive models and controllers.
Continual (open-ended, "lifelong") learning from non-stationary data streams takes a particular place in his research interests; at the turn of the century he coined the term "evolving" model structure. By 2002-2004 he had published a range of pioneering results in the area now known as evolving intelligent systems. He introduced a number of concepts and modelling methodologies, in particular dynamically evolving fuzzy rule-based models (1998-2004) and classifiers (2006-2008). Another topic of interest is what he called Machine Learning (collaborative systems), US patent 8250004B2, granted 21 Aug. 2012, priority date 23 Oct. 2007. In essence, it describes a method for federated machine learning, applicable to classification, to predictive and control systems, and to clustering, when the data is distributed and thus available only to the local node/agent/user.
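As a rough illustration of that federated setting (a generic model-averaging sketch, not the patented method itself), the fragment below trains a linear model across three nodes whose data never leave them; only the locally updated weights are shared and averaged.

```python
# A minimal, generic sketch of federated learning with model averaging.
# This is NOT the patented method; it only illustrates the general idea
# of training on data that never leaves the local nodes.
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    """One node fits a linear model on its private data via gradient steps."""
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # squared-loss gradient
        w = w - lr * grad
    return w

def federated_round(w_global, nodes):
    """Each node trains locally; only weights (not data) are shared and averaged."""
    local_weights = [local_update(w_global.copy(), X, y) for X, y in nodes]
    sizes = np.array([len(y) for _, y in nodes], dtype=float)
    return np.average(local_weights, axis=0, weights=sizes)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
nodes = []                         # three nodes, each with a private partition
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    nodes.append((X, y))

w = np.zeros(2)
for _ in range(20):                # communication rounds
    w = federated_round(w, nodes)
print(w)                           # ~ [2.0, -1.0]
```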
Prof. Angelov is particularly interested in human-intelligible machine learning, explainable AI, and computationally efficient, recursive computational intelligence techniques that mimic human-like reasoning and the workings of the brain.
More recently, he pioneered (2014) and further developed, with his PhD students Xiaowei Gu and Eduardo Soares Almeida (both recipients of Outstanding PhD Dissertation Awards, in 2020 and 2023, respectively) and Dmitry Kangin, and in collaboration with Prof. Jose C. Principe from the University of Florida within a Royal Society funded project (2015-2018), a new, entirely data-centered approach (called "empirical"). This approach combines the discriminative power of locally valid prototype-based models with the elegance of an analytical description that is globally valid within a (latent) data space. That space may be defined by powerful pre-trained foundation deep learning models, e.g. based on transformers, but, through the "duality" approach he introduced, the models can also be paired directly with the raw/original features, which are clearly interpretable. This work resulted in new, computationally efficient methods and algorithms that can be highly accurate. The new approach makes it possible for the multi-modal distribution of typicality (which can be considered a conditional pdf) to be derived directly from the data. Based on it, several new methods were developed: i) xDNN (explainable Deep Neural Networks; 300+ citations within a few years); ii) ALMMo (autonomous learning multi-model) systems, themselves a further development of the evolving Takagi-Sugeno (eTS) systems introduced by Angelov earlier (2002); and, more recently, iii) IDEAL.
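To make the notion of a data-derived, multi-modal typicality concrete, the minimal sketch below uses a simple normalised RBF density as a stand-in; the exact formulation of the published "empirical" approach differs, so this illustrates the behaviour rather than the method itself.

```python
# A minimal sketch of a data-derived "typicality" score, in the spirit of the
# empirical approach: no prior distributional assumptions, everything is
# estimated from the observed samples. Here a simple RBF local density,
# normalised over the data so it behaves like a discrete pdf, stands in
# for typicality; the published formulation differs.
import numpy as np

def typicality(X, x, bandwidth=1.0):
    """Normalised local density of query point x w.r.t. the data X."""
    d2 = np.sum((X - x) ** 2, axis=1)
    density = np.exp(-d2 / (2 * bandwidth ** 2)).mean()
    # Normalise by the densities of the data points themselves so the
    # scores sum to ~1 over X, like a discrete pdf.
    ref = np.array([np.exp(-np.sum((X - xi) ** 2, axis=1) /
                           (2 * bandwidth ** 2)).mean() for xi in X])
    return density / ref.sum()

rng = np.random.default_rng(0)
# Two clusters -> the empirically derived typicality is naturally multi-modal
X = np.vstack([rng.normal(0, 0.5, (100, 2)), rng.normal(5, 0.5, (100, 2))])
print(typicality(X, np.array([0.0, 0.0])))   # high: near a mode
print(typicality(X, np.array([2.5, 2.5])))   # low: between the modes
```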
These new methods require a fraction of the computational and time resources to train a model and are explainable, because they combine machine learning and reasoning in synergy. They can dynamically self-evolve (continue to learn and adapt to new data without re-training). In addition, zero-shot and one-shot learning versions of IDEAL are also possible. Furthermore, xClass not only learns from a few new data samples, but also detects, and can learn, classes that are unknown and were not used in training (see the sketch after this paragraph). Prof. Angelov co-organised a number of workshops at top-rated conferences (e.g. DALI 2019, NeurIPS 2019, HCML at ICML 2020, ELLIS-HCML 2021, NeurIPS 2021, PerConAI 2022, NeurIPS 2022, PerConAI 2023, ICCV 2023, PerConAI 2024, CVPR 2024).
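The sketch below (a generic nearest-prototype classifier, not xClass itself) illustrates the two behaviours just described: inputs far from every known prototype are flagged as unknown, and a newly discovered class can then be absorbed from a single labelled example without re-training on the old data.

```python
# A generic sketch of open-set, self-evolving classification: (i) inputs too
# far from every known prototype are reported as "unknown" and (ii) a new
# class can be learned from a single sample, with no retraining of old classes.
# This is an illustration of the idea, not the xClass algorithm.
import numpy as np

class PrototypeClassifier:
    def __init__(self, threshold=2.0):
        self.prototypes = {}          # label -> prototype vector
        self.threshold = threshold    # beyond this distance: unknown

    def add_class(self, label, example):
        """One-shot: a single sample becomes the prototype of a new class."""
        self.prototypes[label] = np.asarray(example, dtype=float)

    def predict(self, x):
        if not self.prototypes:
            return "unknown"
        label, proto = min(self.prototypes.items(),
                           key=lambda kv: np.linalg.norm(kv[1] - x))
        return label if np.linalg.norm(proto - x) <= self.threshold else "unknown"

clf = PrototypeClassifier()
clf.add_class("cat", [0.0, 0.0])
clf.add_class("dog", [5.0, 5.0])
print(clf.predict(np.array([0.3, -0.2])))   # cat
print(clf.predict(np.array([10.0, 0.0])))   # unknown -> a class not seen before
clf.add_class("fox", [10.0, 0.0])           # learned from one sample
print(clf.predict(np.array([9.7, 0.4])))    # fox
```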
He publishes in TPAMI, Information Fusion, IEEE Transactions on Cybernetics and other journals, as well as at CVPR, ICCV, ECCV, ICLR, IJCNN and other leading conferences.
Another direction of his research is the degree of autonomy of machine learning methods. Traditional methods for modelling and learning from data involve a lot of handcrafting (selecting features, model type, parameters, thresholds, etc.). In addition, they rest on many assumptions, e.g. about the data generation model, randomness or determinism, data independence, an infinite amount of data, etc. Many of these do not hold in practice. The aim of autonomous learning is to have methods, algorithms and tools that require no, or only very light, user involvement in these choices and are driven entirely by the data pattern. This is especially important for streaming data. Prof. Angelov has developed a range of such methods and algorithms and published papers as well as a research monograph on this topic; a toy sketch of the idea follows.
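The sketch below is a deliberately simplified illustration of the autonomous-learning idea, not one of the published algorithms: an online clusterer whose structure is not fixed in advance but evolves with the stream, and whose decision radius is estimated recursively from the data rather than handcrafted.

```python
# A toy sketch of autonomous learning from a stream: no preset number of
# clusters, and the decision radius is itself estimated recursively from
# the data (running mean nearest-centre distance) rather than set by hand.
import numpy as np

class EvolvingClusterer:
    def __init__(self):
        self.centers, self.counts = [], []
        self.mean_dist, self.n_dist = 0.0, 0   # recursive scale estimate

    def update(self, x):
        x = np.asarray(x, dtype=float)
        if not self.centers:
            self.centers.append(x); self.counts.append(1); return 0
        dists = [np.linalg.norm(c - x) for c in self.centers]
        i = int(np.argmin(dists))
        # Recursively update the data-derived distance scale
        self.n_dist += 1
        self.mean_dist += (dists[i] - self.mean_dist) / self.n_dist
        if dists[i] > 3 * self.mean_dist:      # far from all structure seen so far
            self.centers.append(x); self.counts.append(1)
            return len(self.centers) - 1
        # Otherwise shift the winning centre recursively (no stored history)
        self.counts[i] += 1
        self.centers[i] += (x - self.centers[i]) / self.counts[i]
        return i

rng = np.random.default_rng(0)
stream = np.vstack([rng.normal(0, 0.3, (100, 2)), rng.normal(4, 0.3, (100, 2))])
clu = EvolvingClusterer()
for x in stream:
    clu.update(x)
print(len(clu.centers))   # number of clusters discovered from the stream
```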