Interesting: synchronized clapping isn’t as loud as unsynchronized clapping. Here’s the article: physics of the rhythmic applause. (Here’s the non-gated arXiv version: “Self-organization in the concert hall: the dynamics of rhythmic applause.”)
We report on a series of measurements aimed to characterize the development and the dynamics of the rhythmic applause in concert halls. Our results demonstrate that while this process shares many characteristics of other systems that are known to synchronize, it also has features that are unexpected and unaccounted for in many other systems. In particular, we find that the mechanism lying at the heart of the synchronization process is the period doubling of the clapping rhythm. The characteristic interplay between synchronized and unsynchronized regimes during the applause is the result of a frustration in the system. All results are understandable in the framework of the Kuramoto model.
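The abstract invokes the Kuramoto model of coupled oscillators. A minimal sketch of that model (my own illustration, not the paper’s actual analysis): each clapper is an oscillator with its own natural frequency, nudged toward the average phase of the crowd. When coupling is weak, phases stay scattered; above a critical coupling strength, they lock together.

```python
import numpy as np

def kuramoto_step(theta, omega, K, dt=0.01):
    """One Euler step of the Kuramoto model:
    dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)."""
    N = len(theta)
    coupling = (K / N) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
    return theta + dt * (omega + coupling)

def order_parameter(theta):
    """r = |mean of exp(i*theta)|; r near 1 means synchronized clapping."""
    return abs(np.exp(1j * theta).mean())

rng = np.random.default_rng(0)
N = 200
theta = rng.uniform(0, 2 * np.pi, N)   # random initial clap phases
omega = rng.normal(0.0, 0.5, N)        # spread of natural clapping rates

for K in (0.0, 2.0):                   # no coupling vs strong coupling
    th = theta.copy()
    for _ in range(5000):
        th = kuramoto_step(th, omega, K)
    print(f"K={K}: r = {order_parameter(th):.2f}")
```

With no coupling (K=0) the order parameter stays near zero; with strong coupling it climbs toward one. The paper’s period-doubling finding adds a twist this toy model leaves out: clappers slow down (and reduce frequency dispersion) to make synchrony reachable, at the cost of loudness.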
It’s the end of the year and many profs are writing letters of recommendation. Here are a few links. First, know that there are differing codes (also some discussion at orgtheory on this). And biases (e.g., linked to the attractiveness of the student) may play a role in receiving a positive recommendation (but don’t worry, being attractive isn’t always a good thing – sometimes it’s a disadvantage). In short, the signal from letters of recommendation is hard to read. Here’s a piece on the Big 5 personality characteristics and letters of recommendation. Here’s a paper that says letters of recommendation are helpful for medical school admission. This paper says no. And no, I haven’t read all of the above papers (they were published in journals of varying quality) – I just quickly searched Google Scholar for various papers related to letters of recommendation.
And an off-topic, end-of-year bonus tip: if you’re already behind on grading student papers – the ol’ staircase method can quickly fix things.
Via Karim’s Twitter feed – perverse incentives in academia.
The Turing Test is a key test of artificial intelligence: can machines fool humans into thinking they are intelligent? Despite optimistic projections for AI (e.g., Herbert Simon made some wild predictions), AI still underwhelms. Well, in most areas (in chess, for example, computers now beat humans).
Perhaps the Turing Test isn’t the right measure of ‘intelligence.’ But chatbots have yet to fool humans into thinking they actually are human. The yearly Loebner Prize puts this to the test: here’s the 2010 winner, Suzette, and the 2011 winner, Rosette. Chat with either of the bots, or any other for that matter, and you’ll quickly see the problems. Over at orgtheory.net we have ‘tested’ the winning chatbots several times – and inevitably they fail.
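To see why the seams show so quickly, here’s a toy Eliza-style bot (my own illustrative sketch – not Weizenbaum’s original script, and nothing like the Loebner winners’ rule bases): keyword patterns mapped to canned reflections, with a generic fallback when nothing matches.

```python
import re

# Illustrative Eliza-style rules: regex patterns -> response templates.
RULES = [
    (r"\bi need (.+)", "Why do you need {0}?"),
    (r"\bi am (.+)", "How long have you been {0}?"),
    (r"\bmy (\w+)", "Tell me more about your {0}."),
    (r"\byes\b", "You seem certain."),
]

def respond(text):
    text = text.lower()
    for pattern, template in RULES:
        m = re.search(pattern, text)
        if m:
            return template.format(*m.groups())
    return "Please go on."  # fallback: this is where the illusion breaks

print(respond("I am tired of waiting"))  # → How long have you been tired of waiting?
print(respond("What is the weather?"))   # → Please go on.
```

A few exchanges are enough to expose the trick: anything outside the rule set collapses into the same evasive fallback, which is exactly the failure mode you see when ‘testing’ the prize winners.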
The clip below is a game show with some AI bots (ol’ Eliza versus Deep Blue versus an evolutionary algorithm – ok, it’s not the actual bots).
The best clip still is the chatbot v chatbot discussion at Cornell.
Here’s Ray Kurzweil a few weeks ago (at the 2011 Singularity Summit) talking about “from Eliza to Watson to passing the Turing Test.”
The Apple iPhone launch today is setting records. I’ve never stood in line for a product. But here I am, waiting in a thankfully short line for my iPhone 4S. Sorta ridiculous. It feels like the line is an homage of sorts to Steve Jobs. Or something like that.