From one perspective, facial recognition is a purely computational exercise – a complex process of object scanning, data correlation and applying machine learning techniques to recognise and learn from patterns in digital image data streams. The human face is just one of hundreds of different ‘objects’ that might be scanned and processed by a smart-camera – all instantaneously converted to a set of data-points that is then subjected to complex mathematical manipulations. The fact that this computational process might result in calculations being made about your own face (and, by association, yourself) is of no mathematical significance, let alone interest. It does not matter whether FRT is processing data relating to a human face, a cat face, or a clock face. As soon as any face has been scanned, it ceases to be anything but a series of numbers. Understood in these terms, then, we can see why tech engineers and software developers are sometimes bemused by talk of racial bias or the ethical dubiousness of facial recognition. What could be more neutral and objective than numbers?
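The reduction described here can be sketched in a few lines of code. The following toy example (pure NumPy, with hypothetical pixel values) shows that, from the software’s perspective, a scanned ‘face’ is simply a grid of intensities flattened into a vector – indistinguishable in kind from any other scanned object:

```python
import numpy as np

# A hypothetical 4x4 grayscale crop -- it could equally be a human
# face, a cat face or a clock face; the software cannot tell.
face_crop = np.array([
    [52,  60,  61, 50],
    [70, 120, 118, 68],
    [66, 110, 112, 64],
    [55,  63,  62, 54],
], dtype=np.float32)

# Min-max normalise to the [0, 1] range, as a recognition pipeline
# typically would before any further processing.
normalised = (face_crop - face_crop.min()) / (face_crop.max() - face_crop.min())

# Flatten into a feature vector: from here on, the 'face' is
# nothing but a one-dimensional series of numbers.
feature_vector = normalised.flatten()
print(feature_vector.shape)
```

At this point the original object is gone; all subsequent ‘recognition’ operates on the 16 floats in `feature_vector`.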
Indeed, the total disconnect between a human face and the computational exercise of matching face-data is an inherent feature of FRT. In essence, facial recognition involves making measurements of a large number of key ‘landmark’ features extracted from a face object, such as upper-cheeks, eye valleys and mouth corners. These measurements are commonly taken from photographic images, but can also be made by sensors that shine light beams onto the key facial features without even technically producing an image of the face at all. In fact, FRT systems will extract data from only small rectangular segments of a face. These data are then usually analysed in terms of their ‘fitness’ with composite ‘eigenface’ datasets compiled from analysis of often more than 100 different facial images – sometimes computer-generated ‘synthetic’ faces. At no point is one human face being compared to another in the manner that a person might ‘recognise’ someone else. This is an utterly dispassionate and dehumanised statistical procedure.
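The mechanics of this ‘eigenface’ fitness test can be illustrated with a minimal sketch. Here a synthetic gallery of 100 random vectors stands in for real face data (the numbers are arbitrary, chosen purely to show the procedure): the principal components of the gallery serve as ‘eigenfaces’, and a probe face is reduced to its weights on those components before being matched by simple distance:

```python
import numpy as np

rng = np.random.default_rng(0)

# A synthetic 'gallery' of 100 flattened face vectors (64 values each).
# Real systems use normalised crops of landmark regions; random data
# stands in here purely to show the mechanics.
gallery = rng.normal(size=(100, 64))

# Centre the data and derive 'eigenfaces' via singular value
# decomposition -- the principal components of the gallery.
mean_face = gallery.mean(axis=0)
centred = gallery - mean_face
_, _, vt = np.linalg.svd(centred, full_matrices=False)
eigenfaces = vt[:10]  # keep only the 10 strongest components

# A probe face is described solely by its weights on these
# components -- a handful of numbers, nothing a human could 'see'.
probe = rng.normal(size=64)
weights = eigenfaces @ (probe - mean_face)

# 'Fitness' is simply the distance between weight vectors.
gallery_weights = centred @ eigenfaces.T
distances = np.linalg.norm(gallery_weights - weights, axis=1)
best_match = int(np.argmin(distances))
```

Note that the comparison never involves faces as such – only the ten-element weight vectors derived from them.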
It is therefore important for us to remain mindful that computer vision systems do not ‘see’ faces (or even pictures of faces) in any way comparable to the way that humans see faces. The adjusted and normalised images that facial recognition systems construct contain no meaningful information to a human observer – these are “utterly uninterpretable” (Offart and Bell 2020) abstract images that favour shapes over texture, mathematical patterns over human perception. FRT is not designed and deployed to recognise you as a unique human being. Instead, FRT is a correlational exercise in moving beyond all difference, distinctiveness and diffuseness – the aim here is to flatten and detect similarity. Moreover, as with any data-driven process, facial recognition presumes a residual error rate – any return of a ‘false-negative’ or ‘false-positive’ is not seen as the system ‘failing’, but as an expected statistical feature. All told, distilling a face object down to a set of data points which is then compared to other data points in an almost endless recursive loop is a computationally sophisticated task, but also a socially empty exercise.
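The point about residual error rates can be made concrete with a toy verification step (hypothetical embeddings and an arbitrary threshold of 0.8). A ‘match’ is never a declaration of identity, only a similarity score clearing a threshold – and wherever that threshold sits, some rate of false positives or false negatives is designed in:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, in [-1, 1]."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe: np.ndarray, enrolled: np.ndarray,
           threshold: float = 0.8) -> bool:
    """A 'match' is just a score clearing a threshold -- nothing more.

    Raise the threshold and false negatives rise; lower it and
    false positives rise. Some error rate is always presumed.
    """
    return cosine_similarity(probe, enrolled) >= threshold

# Hypothetical 128-dimensional embeddings: an enrolled face, a noisy
# re-scan of the same face, and an unrelated face.
rng = np.random.default_rng(1)
enrolled = rng.normal(size=128)
same_person = enrolled + rng.normal(scale=0.1, size=128)
stranger = rng.normal(size=128)

print(verify(same_person, enrolled))  # scores near 1.0
print(verify(stranger, enrolled))     # scores near 0.0
```

The system returns only `True` or `False` against a tunable cut-off; the ‘person’ never enters the calculation.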
Throughout our research it will be important for us to examine these dehumanising (and therefore alienating) connotations of FRT as a statistical process. Of course, all forms of surveillance have traditionally tended toward the “dehumanisation of the observed” (Brighenti 2007, p.337), yet FRT seems to dial this dehumanisation up to another level altogether. As Ruha Benjamin (2019, p.45) reminds us, FRT only ever offers the promise of ‘reading’ a person at a surface level in a way that inherently marginalises any thicker descriptions that might arise from a human gaze. As such, our discussions need to be mindful of this thinness. In part, this requires a considered use of language. We need to avoid the misleading equivalency of computer ‘vision’ with the visual processes that take place within a biological brain. We need to eschew anthropomorphic talk of FRT in terms of ‘seeing’, ‘recognising’, ‘remembering’ and ‘noticing’. When evoking a sense of machine ‘recognition’, we need to avoid a false sense of equivalency between the statistical processing of FRT and the human capacity to transfer what is recognised in one domain to another domain. As Nora Khan (2019) puts it: “We must stop with understanding the machine’s seeing as anything like human seeing. This comparison is a fallacy”.
All told, facial recognition is best understood as a computational process that is completely estranged from the people who fall under the scope of the camera. Facial recognition does not care about you, your face or your feelings. One’s facial contours are scanned in the same way that any other object or landscape is digitally mapped, modelled and rendered. One’s face is immediately flattened out into a computational problem of matching complex datasets with other datasets. As such, facial recognition is not a process of ‘recognising’ a person’s face – facial recognition is a process of scanning a series of geometric locations on an object. Facial recognition does not see your laughter lines and beauty-spots – facial recognition locates a predefined series of data-points. This is an utterly emotionless exercise, with all context, nuance and humanity immediately stripped away as a single digital image is reduced to the basic constituents of all things digital – zeros, ones and no points in-between. Facial recognition is nothing to take personally, nor even something to presume is connected to one’s personhood.