Prototyping Weeknotes #100
This is the Prototyping team's 100th weeknote. We began the experiment at the start of 2010. At that time we were kicking off two projects: one resulted in Music Trends, one became Zeitgeist, and Music Resolver has been informing some more recent work in the BBC. Since then we've built scores of prototypes, pushed a few things through to production and standardisation, expanded, changed and moved offices. And what about now? I just went round the office asking people.
(The week in photos, clockwise from top-left: our publications this year, some windows on nearby Riding House Street, Akua's pineapple cake, whiteboarding, Radio 3 vs R&D, pile of post-its)
One of the project teams is preoccupied with sound. Matt is building a recreation of an old BBC gunshot-effect unit with the Web Audio API. It was a bit of kit used by the BBC for sound effects so they didn't have to fire blank cartridges at microphones, which scared the actors and damaged the microphones.
"The noise produced by a stage-pistol, firing blanks, does not sound at all convincing when picked up by normal means in a studio. Because of their comparatively loud peak energy, they had to be used a long way from the microphone and they sounded unreal. Then occasionally they 'misfired' (disastrous in tense situations), they scared the performers, and they required a special license and safe keeping to comply with Home Office regulations."
He's doing this by recreating the signal processing chain as published in the old monograph. At the moment it sounds somewhat like being slapped with a ruler, but under correct recording conditions on a sound stage it should sound like a gun. This is all to prove that we can simulate these things using the Web Audio APIs, as a way of flexing and testing them. The team of Matt, Chris L, Olivier and Pete are starting to put their prototypes into a narrative to tell a story around them. Chris L is still searching for an ancient mythical journal to help them, and Matt is off to play with an old synthesizer next week.
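The kind of chain Matt is recreating can be sketched in miniature: a gunshot-style effect is essentially a burst of noise shaped by a fast-decaying envelope and run through a filter. The team's actual work uses the Web Audio API in the browser; what follows is a hypothetical pure-Python analogue of that sort of chain, with every parameter (decay rate, cutoff, duration) invented for illustration rather than taken from the monograph.

```python
import math
import random

def gunshot(sample_rate=44100, duration=0.5, decay=18.0, cutoff=1200.0, seed=1):
    """Enveloped noise burst through a one-pole low-pass filter --
    a toy stand-in for the kind of signal chain the monograph describes."""
    random.seed(seed)
    n = int(sample_rate * duration)
    # One-pole low-pass coefficient derived from the cutoff frequency.
    alpha = 1.0 - math.exp(-2.0 * math.pi * cutoff / sample_rate)
    out, state = [], 0.0
    for i in range(n):
        env = math.exp(-decay * i / sample_rate)    # fast exponential decay
        noise = random.uniform(-1.0, 1.0) * env     # envelope-shaped noise
        state += alpha * (noise - state)            # low-pass filter
        out.append(state)
    return out

samples = gunshot()
```

A Web Audio version would express the same graph as an AudioBufferSourceNode feeding a GainNode and a BiquadFilterNode; the one-pole filter here merely stands in for whatever filtering the monograph actually specifies.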
Yves gives one of his regular ABC demos on Thursday. As usual he shows us a terminal view, but this time he's doing automatic speaker segmentation, and it appears to be quite successful. The tricky next step is to identify who the speakers actually are. Theo is typing up notes from whiteboards from our initial testing on understanding how users tag audio and video content. Most of the people weren't familiar with tagging as a practice; some tagged while others wrote sentences and phrases. He and Penny have also been comparing 1) auto-generated tags from a perfect transcription, 2) auto-generated tags from a speech-to-text transcription and 3) user-generated tags from the audio. Next up, they're going to move away from paper and build a quick prototype to test these things better and make the analysis easier. Andrew N's been brought on board for that, and he's getting up to speed and reading through the research.
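One way to score those three tag sources against each other is simple set overlap: treat one set of tags as the reference and measure the precision and recall of another against it. A minimal sketch of that idea, assuming normalised free-text tags; the example tag lists below are invented, not the study's data:

```python
def tag_overlap(auto_tags, user_tags):
    """Precision/recall of auto-generated tags against user-generated tags,
    after simple normalisation (lower-casing, whitespace trimming)."""
    auto = {t.strip().lower() for t in auto_tags}
    user = {t.strip().lower() for t in user_tags}
    shared = auto & user
    precision = len(shared) / len(auto) if auto else 0.0
    recall = len(shared) / len(user) if user else 0.0
    return precision, recall

# Invented example: two of three auto tags match the users' four tags.
p, r = tag_overlap(["Budget", "economy", "tax"],
                   ["tax", "politics", "economy", "osborne"])
```

In practice a prototype like the one Andrew N is joining to build would also want fuzzier matching (stemming, synonyms), but exact overlap is a reasonable first baseline.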
Also at the team demo session on Thursday, Chris Newell showed the drag/drop like/hate prototype that demonstrates a quick start and tuning of programme recommendations. Sean's huge PC needed its power supply replacing earlier in the week. Now he's trying to connect to live TV streams from within a browser using the HTML5 video element; next up will be testing channel changes. Olivier, Yves, Chris L and Michael have been preparing talks. Joanne and Pete have been finishing up the report on people's attitudes to personal data, tracking and TV viewing. Duncan is installing a system to run background tasks with queues and workers; he's using it initially to tweak one of his existing projects. Andrew McP has been reading about subtitles and identifiers, and Michael Smethurst's been thinking and talking about how to use person identifiers to link across news, sports and programmes. Chris G's aim is not to appear in weeknotes: if he's doing his job and everything's going smoothly, he won't.
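The queue-and-workers setup Duncan describes (the tool's name didn't survive the original post's links) follows a common pattern: producers push tasks onto a shared queue, and a small pool of workers pulls them off and runs them. A minimal stdlib sketch of that pattern, with the function names and worker count invented for illustration:

```python
import queue
import threading

def run_background_tasks(tasks, worker_count=3):
    """Run callables from a shared queue on a small pool of worker threads."""
    q = queue.Queue()
    results = []
    lock = threading.Lock()

    def worker():
        while True:
            task = q.get()
            if task is None:            # sentinel: shut this worker down
                q.task_done()
                return
            with lock:
                results.append(task())
            q.task_done()

    threads = [threading.Thread(target=worker) for _ in range(worker_count)]
    for t in threads:
        t.start()
    for task in tasks:
        q.put(task)
    for _ in threads:                   # one sentinel per worker
        q.put(None)
    q.join()                            # wait until every item is processed
    for t in threads:
        t.join()
    return results

done = run_background_tasks([lambda i=i: i * i for i in range(5)])
```

A production task system would persist the queue and run workers in separate processes so jobs survive restarts, but the shape of the code is the same.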
That's everyone in our London office, but some of the team aren't in London, so to the emails...
Michael Sparks has been wrapping up the social bookmarking system, "...extracting a usable base layer of second screen sync, interactive push, audience feedback from the work I've done in those areas, and documenting that as I go". He's been testing it on The Apprentice and the Budget this week as they're both high volume test cases. "The apprentice peaked / had a constant throughput of around 2500 tweets/minute and a total of ~130000 tweets in the hour, whereas the budget had a peak of 1100/1500 or so, a mean throughput of ~ 300-350 tweets/minute and a total of ~67000 tweets in the 3 hours."
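Those figures roughly check out with back-of-the-envelope arithmetic: ~67,000 tweets over the Budget's 3 hours averages to about 372 tweets/minute, slightly above the quoted 300-350 mean (plausible, since this simple average includes the peak), while The Apprentice's ~130,000 in an hour averages to about 2,167/minute, consistent with its ~2,500 peak. The quick calculation:

```python
def mean_tweets_per_minute(total_tweets, duration_minutes):
    """Average throughput over the whole monitored window."""
    return total_tweets / duration_minutes

budget_mean = mean_tweets_per_minute(67000, 3 * 60)      # ~372 tweets/minute
apprentice_mean = mean_tweets_per_minute(130000, 60)     # ~2167 tweets/minute
```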
Libby and Vicky B are organising next week's NoTube final review meeting, and in her penultimate week Vicky wrote up her findings.
And down the end of the office I can hear Vicky S on a video link from Salford. They're just finishing up a meeting on ABC. She's also been developing a few project ideas, and on Friday she's down here at UCL with the design team, spending some time with the HCI students and seeing their final presentations from a two-week project we set them on enhancing the experience of radio while keeping production costs low.
Finally, we've changed our team name: we're now the Internet Research & Future Services (IRFS) section of BBC R&D, and we will continue to improve technology and design through research, prototyping and open experimentation. You can find us on Twitter or contact us at irfs@bbc.co.uk. We'll still be writing weeknotes, but from now on they'll be called the "IRFS weeknotes". See you next week.