Progress report: VALIS

Hello dear patrons,

In this post, I will discuss some experimental results obtained while testing VALIS. The original version of the algorithm dates back to 2009. Back then, I demonstrated the feasibility of the approach by testing it on a couple of problems and left it at that. This time, I have rewritten the implementation from scratch and made some important modifications to the algorithm, which together allowed me to test VALIS more thoroughly.

The above chart is a visualization of the experimental results. Each hexagonal region represents a single algorithm. Scikit-learn implementations of the following classifiers were tested: k-nearest neighbours (kNN), logistic regression (LR), linear and quadratic discriminant analysis (LDA and QDA), naive Bayes (NB), support vector machines (SVM), classification and regression trees (CART), random forest (RF), and AdaBoost (AB). The position of each polygon vertex corresponds to the performance of the algorithm on one of the six benchmark problems, where by performance I mean classification accuracy relative to the highest accuracy attained by any of the algorithms. Dashed lines mark 5% steps.
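For the curious, a comparison of this kind is easy to set up in scikit-learn. The snippet below is a minimal sketch only: it uses the iris dataset as a stand-in for the actual benchmark problems (which I don't list here), default hyperparameters, and 5-fold cross-validation, none of which necessarily match my experimental setup.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.discriminant_analysis import (
    LinearDiscriminantAnalysis, QuadraticDiscriminantAnalysis)
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier

# The nine baseline classifiers from the chart, with default settings.
classifiers = {
    "kNN": KNeighborsClassifier(),
    "LR": LogisticRegression(max_iter=1000),
    "LDA": LinearDiscriminantAnalysis(),
    "QDA": QuadraticDiscriminantAnalysis(),
    "NB": GaussianNB(),
    "SVM": SVC(),
    "CART": DecisionTreeClassifier(random_state=0),
    "RF": RandomForestClassifier(random_state=0),
    "AB": AdaBoostClassifier(random_state=0),
}

# Stand-in benchmark problem; the real experiments used six datasets.
X, y = load_iris(return_X_y=True)
accuracies = {name: cross_val_score(clf, X, y, cv=5).mean()
              for name, clf in classifiers.items()}
for name, acc in sorted(accuracies.items(), key=lambda kv: -kv[1]):
    print(f"{name:4s} {acc:.3f}")
```

Running the same loop over each benchmark dataset yields the accuracy table from which the chart is drawn.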

As you can see, VALIS compares favourably to many well-established classifiers. In fact, it ranks first by both the geometric mean and the minimum of relative accuracy (96% and 93%, respectively)! This level of performance is not something I had anticipated. VALIS was initially conceived as an experimental proof-of-concept algorithm based on the idea of self-organization. The emergence of sensible population-level behaviour from purely local antibody interactions still seems miraculous to me. The present experimental results are an important milestone, demonstrating that VALIS is an effective algorithm, competitive with other classifiers.
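The two aggregate scores are straightforward to compute once each algorithm's per-problem accuracies are known: divide by the per-problem maximum to obtain relative accuracies, then take their geometric mean and their minimum. The sketch below uses made-up accuracy numbers purely for illustration; they are not the actual benchmark results.

```python
from math import prod

# Hypothetical raw accuracies: one row per algorithm, one column per
# benchmark problem. Illustrative numbers only, not real results.
raw = {
    "VALIS": [0.95, 0.90, 0.97, 0.88, 0.93, 0.91],
    "RF":    [0.96, 0.85, 0.95, 0.90, 0.89, 0.92],
    "SVM":   [0.93, 0.88, 0.96, 0.80, 0.94, 0.90],
}

n_problems = len(next(iter(raw.values())))
# Highest accuracy attained by any algorithm on each problem.
best = [max(accs[i] for accs in raw.values()) for i in range(n_problems)]

for name, accs in raw.items():
    rel = [a / b for a, b in zip(accs, best)]
    geo_mean = prod(rel) ** (1 / len(rel))
    print(f"{name:6s} geometric mean {geo_mean:.3f}, minimum {min(rel):.3f}")
```

An algorithm that achieves the best accuracy on every problem would score 100% on both measures, so these scores directly quantify how close each classifier stays to the per-problem winner.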