>In 2010, the Academy sought to combat this verbosity with a new 45-second rule. In response, some winners sped through their acknowledgements, while others used humour or emotion to buy extra time before the music signalled them off. Occasionally, the orchestra was ignored entirely, with speeches like Adrien Brody’s 2003 win for The Pianist running well over the limit.
Brody so clairvoyant that he can ignore limits that don't even exist yet.
Cool. As mentioned at the end, the Oscars has a site:
https://aaspeechesdb.oscars.org/
"This database contains more than 1,500 transcripts of onstage acceptance speeches given by Academy Award winners and acceptors."
This was an enjoyable article, but the conclusion where he finds the most-thanked woman in Oscar speeches and gets a response from her puts it over the top. Amazing.
Very cool! I think mentions per word could be a good metric for some of these; otherwise the main takeaway is just "everyone crams in more stuff now."
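A tiny illustration of that normalization, with made-up numbers:

    def mentions_per_word(mentions, word_count):
        # Normalize mention counts by speech length so longer modern speeches don't dominate.
        return mentions / word_count

    # e.g. 3 family mentions in a 120-word 1950s speech vs. 5 in a 400-word modern one
    print(mentions_per_word(3, 120))  # 0.025
    print(mentions_per_word(5, 400))  # 0.0125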
How exactly was the data evaluated? I would assume that manually checking every speech would be too labor-intensive?
Not really? It's a lot of work, a multi-week project, but reading a couple-hundred-word speech can be done in 5 minutes; following a checklist in hand, probably 10. Times 12 categories and 80 years of history, that's roughly 960 speeches and about 160 hours of reading, a working month. A lot of effort, but humanly doable.
That's true, but that assumes you have the checklist of what data to analyze in hand when you start out. If you only decide after the fact which familial relationships have interesting trends, you'd have to start over again. It seems more reasonable to start by transcribing everything to text, annotating that text, and then running a lot of scripting to automatically query that data.
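A rough sketch of what that annotate-then-query step could look like (the keyword lists and directory layout are placeholders, not whatever the author actually did):

    import re
    from pathlib import Path
    from collections import Counter

    # Hypothetical keyword lists per category -- not the author's real checklist.
    CATEGORIES = {
        "family": {"mother", "father", "wife", "husband", "kids", "parents"},
        "god": {"god", "lord", "faith"},
        "agent": {"agent", "manager"},
    }

    def annotate(text):
        # Return the set of categories whose keywords appear in a speech.
        words = set(re.findall(r"[a-z']+", text.lower()))
        return {cat for cat, kws in CATEGORIES.items() if words & kws}

    counts = Counter()
    for path in Path("transcripts").glob("*.txt"):  # assumed folder of plain-text speeches
        counts.update(annotate(path.read_text()))

    print(counts.most_common())

Once everything is text, re-running with a new category is a one-line change instead of another pass through 960 speeches.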
They probably just used the speech database that the Academy hosts? https://aaspeechesdb.oscars.org/
Ok, obviously it's _doable_, but is it worth it? Using LLMs for this purpose would have been significantly cheaper, easier, and, with the right configuration, just as reliable. Once the setup works, you could extend the analysis to all kinds of other interesting branches without having to look at a single speech by hand.
I would even go so far as to say that _not_ using LLMs for this task would be fairly odd, unless I'm missing something or the author really enjoys a month of manually classifying documents to write an interesting and well-written but not exceedingly outstanding article.
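To make that concrete, a rough sketch of the LLM route using the OpenAI Python client; the model name, prompt, and output fields here are placeholders, not anything the author used:

    import json
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    PROMPT = (
        "You label Oscar acceptance speeches. Reply with a JSON object containing "
        "boolean fields: thanks_family, thanks_god, thanks_agent, thanks_director."
    )

    def classify(speech_text):
        # Placeholder model name; any cheap model with JSON output would do.
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content": PROMPT},
                {"role": "user", "content": speech_text},
            ],
            response_format={"type": "json_object"},
        )
        return json.loads(resp.choices[0].message.content)

    print(classify("I'd like to thank my mother, my agent, and the Academy."))

Run that over the ~1,500 transcripts in the Academy's database and spot-check a sample by hand to estimate the error rate.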
Some people like doing stuff.
> I have added more details in the notes at the end of the article to explain how I found God, but for now, just have faith that I did.
> God cannot give them their next job - Steven Spielberg can
This is a very interesting project. I like it when technology is used to analyze cultural or political events.
Great dive into the nature of the speeches and some interesting tidbits.
Counting the instances of the word “amazing” would be a fun follow up. That was our drinking game cue word. We inevitably stopped at some point because…poisoning became likely.