And below is my own summary, written in one sitting (as usual!). As always, any inaccuracy or misapprehension of what was presented is entirely my fault. Hope this all makes sense to you!
The talk placed the notion of the phoneme at its centre, together with all the debates around its "existence". The first minutes of the talk were a nice overview of the "phoneme" and the related notions and ideas leading up to it through time: from the contributions of the Sanskrit author Patañjali in the 2nd century, who recognised abstract categories of sound that show variability at the physical level, and the first Icelandic grammarians in the 12th century, to the writings of Sapir in the 1920s and the "phoneme slices" that people claim to have in their languages.
More modern conceptions of what the phoneme came to be understood as were then discussed, as developed by Dufriche-Desgenettes (1873), Louis Havet, and Baudouin de Courtenay (1871), with his distinction between psychophonetics and physiophonetics, and of course by Henry Sweet in the 1870s and Daniel Jones as early as 1911. In the US in the early 20th century, the notion of the phoneme came to the surface thanks to Bloomfield.
A few definitions of the phoneme were revisited by Watt, especially those by Jones (1957) and Watt (2009), and a memorable quote by Pike (1947): "Phonetics gathers raw material. Phonemics cooks it".
A very useful metaphor to discuss phonemes and allophones was recalled by Dom: Clark Kent and Superman are in complementary distribution (allophones of one phoneme), while Superman and Spiderman, for example, represent two different phonemes. (It reminded me that I used to refer to phonemes as any of us, and allophonic variants as us in our roles and attires: at school, at a party... Lately I've turned to Johnny Depp as the phoneme, and his million characters as his allophones, his "realisations" in films...)
Other interesting comparisons were introduced, such as the grapheme–allograph relation in Arabic, or even the many ways we can write a certain letter, say "A", which poses a very interesting question: at what point does a sound stop being "the same" sound? How far can variation be stretched before the boundary is crossed?
Alternative analyses of the phoneme included Trubetzkoy's (1939) phonemic oppositions grounded in phonetics, and formal notions of phonemes as bundles of features, such as those put forward by Jakobson, Fant and Halle in 1952, based on acoustic analyses of instantaneous "time slices" (somehow looking for the centre of events in the signal). Watt also mentioned a game-changer, the work of Chomsky and Halle (1968), which abandons strict binarity and allows for phonetic gradation by introducing articulatory features into the description.
Watt continued the presentation by referring to the debates on the nature and existence of the phoneme, which included quotes from Ladd (2013:370) and Dresher (2011:241). The work by Fowler, Shankweiler and Studdert-Kennedy (2016), who revisit a paper they themselves wrote in 1967, was given special attention, since it provides nine forms of evidence for the existence of the phoneme as an entity, including phonemic awareness, adult visual word recognition, the presence of systematic phonological and morphological processes, the existence of speech errors (spoonerisms), and the fact that co-articulation does not, as was previously claimed, really eliminate the presence of the phoneme.
Of course, as Dom remarked, when we look at MRIs, spectrograms and waveforms, we may not so easily see discrete units, although machines seem to be programmed to treat the signal as composed of chunks. It was interesting to see a cochleagram, because, as Watt pointed out, it perhaps shows more continuity than a wide-band spectrogram, for instance.
The second part of the talk discussed phonemes in phonetic work done through speech technology, for forensic and also sociophonetic purposes. It presented some of the findings of (the absolutely brilliant!) PhD student Georgina Brown, who has adapted the ACCDIST program by Mark Huckvale at UCL into Y-ACCDIST as part of her PhD research. One of the achievements of Y-ACCDIST is that the software can be used for speaker comparison even when the data are not necessarily comparable (ACCDIST works well when all speakers have read the same text). I cannot fully do justice to this part of the talk because there are some technical bits that I am not familiar with, and I don't have a head for statistics, but I'll report on what I could follow:
Some examples of the use of the program were presented, including the measurement of the distance between all possible pairs of phonemes, combined with what is known as Feature Selection, a process in which features are left out so as to focus on the most relevant or least redundant ones, which helps the modelling.
Comparisons across speakers were run through the program, and Y-ACCDIST was able to assign speakers to a particular accent with almost 90% accuracy. It was interesting to hear that the program was more accurate when particular features (and not the whole set) were compared, and also when human intervention was added to the filtering of the features used in the speaker accent allocation process.
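I don't know the internals of Y-ACCDIST, so here is only a toy sketch of the general ACCDIST idea as I understood it: represent each speaker by the table of distances between their vowel segments, and assign the speaker to the reference accent whose distance table correlates best with theirs. All the names, feature values and accent labels below are invented for illustration.

```python
# Toy sketch of an ACCDIST-style accent classifier (hypothetical, not Y-ACCDIST).
# Each speaker is a mapping from vowel labels to feature vectors (e.g. mean
# F1/F2 in Hz); speakers are compared via their tables of inter-vowel distances.
from itertools import combinations
import math

def distance_table(segments):
    """Map each pair of vowel labels to the Euclidean distance between them."""
    return {
        (a, b): math.dist(segments[a], segments[b])
        for a, b in combinations(sorted(segments), 2)
    }

def correlation(t1, t2):
    """Pearson correlation between two distance tables over their shared pairs."""
    pairs = sorted(set(t1) & set(t2))
    x = [t1[p] for p in pairs]
    y = [t2[p] for p in pairs]
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def classify(speaker_segments, reference_accents):
    """Return the accent label whose distance table best matches the speaker's."""
    table = distance_table(speaker_segments)
    return max(
        reference_accents,
        key=lambda accent: correlation(table, reference_accents[accent]),
    )

# Invented (F1, F2) means for three lexical-set vowels in two toy accents.
north = {"TRAP": (760, 1500), "BATH": (750, 1520), "STRUT": (580, 1300)}
south = {"TRAP": (750, 1550), "BATH": (620, 1100), "STRUT": (700, 1200)}
refs = {"north": distance_table(north), "south": distance_table(south)}

speaker = {"TRAP": (770, 1490), "BATH": (755, 1530), "STRUT": (575, 1310)}
print(classify(speaker, refs))  # prints "north" for this northern-like speaker
```

Because the classification depends on the pattern of distances rather than the raw values, speakers need not have read the same text; and "Feature Selection" in this picture would amount to dropping some of the (vowel-pair) entries from the table before correlating, keeping only the most informative ones.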
All in all, Watt concluded, the findings from tools like Y-ACCDIST and the evidence provided in Fowler et al. suggest that it is premature to declare the demise of the phoneme.
The question period was interesting too. Comments touched on issues like the fact that many approaches to speech analysis begin from the notion of the phoneme but fail to see what happens in naturalistic speech and what participants themselves feel is relevant, and that there are considerable phenomena that cannot be explained through the notion of the phoneme. There is always a search for robustness in experimental settings, one that loses sight of the fact that what should count as robust is what is actually done in natural situations.
All in all, a fascinating talk, with a lot of food for thought. If you ask me, does the phoneme exist? I would say it's like magic: you feel it's there, but at times you cannot pinpoint the actual trick that makes it work.