On when a face veil is and isn’t a problem in lectures.
Some lecturers will rightly encourage forms of student interaction that are impossible for those covering their faces, argues Eric Heinze, professor of law and humanities at Queen Mary University of London. His book Hate Speech and Democratic Citizenship is published by Oxford University Press.
In France, seaside towns tried to ban “burkinis” and Nicolas Sarkozy, the former president who is eyeing another bid for office, wants to crack down on veils in universities.
Politicians’ claims of responding to security threats are scarcely credible. There are a thousand ways to explode a bomb. In none of the major attacks in Europe have the perpetrators worn burkas. Since the French banned face-covering in public places in 2010, attacks have actually increased.
The controversy is not about security but rather symbolism. To many Westerners, veiled women seem off-putting, hostile or alien to our values. Yet all sorts of people in modern public spaces look off-putting, hostile and alien to our values. If those are to be our criteria for imposing bans, the police will be busy indeed.
When the French introduced their ban, the government cited, among other reasons, the importance of reciprocal exposure of faces. That was hardly a knock-down argument. In Paris, as in London, you can navigate oceans of faces without reciprocally interacting with a single one. You’ll scarcely take two seconds to notice the uncovered faces, so why ban the covered ones?
Still, it would be wrong to conclude that face coverings should be admitted in all circumstances. We need something more nuanced than the all-or-nothing approaches. Universities offer examples of where burkas do and do not pose problems.
Many lecture theatres resemble urban centres. Students stomp in and out, noticed neither by their instructor nor by each other. For the lecturer who needs to explain cellular photosynthesis or atomic half-life, it may matter little whether the auditorium seats 30 or 3,000, or whether anyone is present at all. Students can easily watch a taped lecture months later, thousands of miles away. They can wrap themselves in a dozen veils or can sit at their computers stark naked. Lecturers may not feel that such topics require canvassing students’ individual opinions.
But other lecturers may seek models of communication whereby students interact not as individual data absorbers but as fully fledged citizens. Those lecturers must retain the prerogative to insist on facial exposure when they launch discussions on themes illustrative of citizens’ self-government, such as reintroducing the death penalty or legalising hard drugs.
Book of the week: Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, by Cathy O’Neil.
If all-seeing ‘miracle’ tech is making the decisions, we must demystify the tricks, says Danny Dorling, Halford Mackinder professor of geography, University of Oxford, and author of A Better Politics: How Government Can Make Us Happier (2016).
All-knowing deities: ‘Facebook is more like the Wizard of Oz: we do not see the human beings involved’, explains Cathy O’Neil of a modern world ordered by algorithms.
Many years ago I attended an event at a packed lecture theatre at the annual meeting of the American Association of Geographers. A huge audience had gathered to hear a little-known academic speak on the subject “What your credit card record tells them about you”. But the speaker never appeared. Was it a successful stunt to illustrate just how paranoid we all are about what they know about us? Perhaps the speaker had been trying to show the audience just how little social scientists knew about “big data”, long before the phrase had ever been thought of – or perhaps he had just slept in.
Cathy O’Neil, an academic and former hedge-fund quant, or quantitative analyst, has a story to tell, and it is a story about you. She draws from that same deep well of fear that helped to draw crowds at the AAG conference: the suspicion that we are all being observed by hidden forces, algorithms we cannot understand, designed by faceless quants who work to maximise the bottom line for their masters. In the past, there was just one all‑seeing god we had to fear. Now we live in a world with multiple all-knowing deities, each a little different, each oblivious to the fate of most individuals, each unbelievably powerful and each potentially malign.
As O’Neil explains of one of the biggest and most ubiquitous of those deities, “Facebook is more like the Wizard of Oz: we do not see the human beings involved.” We can’t see the quants who decide which of our many friends’ posts we view first, and it turns out that the quants play games with our emotions, testing to see how some groups react to being fed, say, more bad news than good. A majority of users (62 per cent, according to the data O’Neil cites) are completely unaware of this.
These are newly emerging gods, and currently most of them are thought of as benign corporations distributing their software for free, presumably to enhance the common good. A majority (73 per cent) of Americans believe that the search results offered up to them by Google are both accurate and impartial. O’Neil asks how anyone could know if the results we see have been skewed to “favour one political outcome over another”. She reports that Google has prohibited researchers from creating scores of fake profiles in order to map the biases of its search engines. But then again – if they had done so, how would Google know?
US voters, O’Neil claims, have been “microtargeted” by political parties and other unknown groups, which for her explains why 43 per cent of Republicans continue to believe that Barack Obama is a Muslim because “microtargeting does its work in the shadows”. Evidence for these and similar claims made in the book is scant; references are generally restricted to the name of a researcher and the university at which they work, and so ironically the reader has to rely on Google to find the source material. Google’s quants could map out who had most likely read this book and found it most interesting by focusing on such searches. But do they really have the time or inclination? Or do they abide by Google’s infamous dictum, “don’t be evil”?
The pressure to be evil comes from that famous root of all kinds of it – money. Given the peculiarly undemocratic nature of the American presidential voting system, only 1 per cent of swing voters living in swing states can be key to the outcome. According to O’Neil, “the money from the financial 1 percent underwrites the microtargeting to secure the votes of the political 1 percent”. But can such voters be targeted that effectively, and where is this book’s reference to the smoking gun – the political quant who came in from the cold and explained how it was all done?
Source: Times Higher Education