Obstructing Classification via Projection


  • Pantea Haghighatkhah, TU Eindhoven
  • Wouter Meulemans, TU Eindhoven
  • Bettina Speckmann, TU Eindhoven
  • Jérôme Urhausen, Utrecht University
  • Kevin Verbeek, TU Eindhoven




Machine learning and data mining techniques are effective tools to classify large amounts of data. But they tend to preserve any inherent bias in the data, for example with regard to gender or race. Removing such bias from the data or the learned representations is quite challenging. In this paper we study a geometric problem which models a possible approach for bias removal. Our input is a set of points P in Euclidean space R^d, where each point is labeled with k binary-valued properties. A priori we assume that it is "easy" to classify the data according to each property. Our goal is to obstruct the classification according to one property by a suitable projection to a lower-dimensional Euclidean space R^m (m < d), while classification according to all other properties remains easy.
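To make the setting concrete, "easy to classify" in the linear case means the two label classes of a property can be strictly separated by a hyperplane. This can be decided exactly with a linear-programming feasibility check; the sketch below (using scipy, with a hypothetical helper name `linearly_separable` not taken from the paper) illustrates the test:

```python
import numpy as np
from scipy.optimize import linprog

def linearly_separable(X, labels):
    """Decide strict linear separability of points X (n x d) with +/-1 labels.

    Feasibility LP: find w in R^d and b such that
    y_i * (w . x_i + b) >= 1 for all i. Since w and b are unbounded,
    any strictly separable point set admits such a scaled solution.
    """
    n, d = X.shape
    y = np.asarray(labels, dtype=float)
    # Variables z = [w, b]; constraint rows: -y_i * [x_i, 1] . z <= -1.
    A_ub = -y[:, None] * np.hstack([X, np.ones((n, 1))])
    b_ub = -np.ones(n)
    res = linprog(c=np.zeros(d + 1), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (d + 1), method="highs")
    return res.success  # feasible LP <=> strictly separable
```

For example, the XOR configuration in the plane fails this test, while two points with distinct labels pass it.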

What it means for classification to be easy depends on the classification model used.
We first consider classification by linear separability, as employed by support vector machines. We use Kirchberger's Theorem to show that, under certain conditions, a simple projection to R^(d-1) suffices to eliminate the linear separability of one of the properties whilst maintaining the linear separability of the other properties. We also study the problem of maximizing the linear "inseparability" of the chosen property.
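The projection step can be sketched in code: projecting points of R^d onto the hyperplane orthogonal to a chosen direction v yields a point set in R^(d-1). Note this is only the mechanics of the projection; the paper's Kirchberger-based conditions for choosing v so that one property becomes inseparable are not reproduced here, and the helper name `project_out` is hypothetical.

```python
import numpy as np

def project_out(X, v):
    """Project points X (n x d) onto the hyperplane orthogonal to v,
    expressed in coordinates of R^(d-1)."""
    v = v / np.linalg.norm(v)
    # QR of [v | I] gives an orthonormal basis whose first column is +/- v;
    # the remaining d-1 columns span the orthogonal complement of v.
    Q, _ = np.linalg.qr(np.column_stack([v, np.eye(len(v))]))
    basis = Q[:, 1:len(v)]          # d x (d-1), columns orthogonal to v
    return X @ basis                # drop the component along v
```

By construction the result is invariant under translating the input along v, so all information in the direction v is discarded.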
Second, we consider more complex forms of separability and prove a connection between the number of projections required to obstruct classification and the Helly-type properties of such separabilities.




How to Cite

Haghighatkhah, P., Meulemans, W., Speckmann, B., Urhausen, J., & Verbeek, K. (2022). Obstructing Classification via Projection. Computing in Geometry and Topology, 1(1), 2:1–2:21. https://doi.org/10.57717/cgt.v1i1.8


