This week I’ve been preparing material for two virtual events that are happening at the same time: SHARE and IBM’s Integration Technical Conference. In normal times, I’d have had to make a choice as to which conference to attend, but both are now using remote presentations. So I can give talks (different ones) at both.
The TechCon event is using “live” presentations, streamed with the presenter talking from home. I’ve been doing a lot of presentations like that already over the last few months – I talked about that in another recent post. While the platform running the TechCon stream is different, and some details will change, it’s probably not really going to affect how I work with that material.
The SHARE event is a little more interesting technically, as they want the sessions to be prerecorded and submitted as a video, with just a short live Q&A at the end. I guess that helps with risks like bandwidth restrictions or network failures at the presenter’s end. Running everything centrally makes it more controllable. But I found a couple of challenges doing the recordings.
Just talk? Or write a script?
The first session I have to do for SHARE is an update on MQ features, with perfect timing since MQ 9.2 has just been announced. That talk is one I give regularly, with changes happening each time a new version comes out. But there were a couple of problems. The first is that when I talk to a real audience, that session takes about an hour whether I’m standing in front of them or I’m on the other end of a network link. And SHARE want the videos to be no more than 40 minutes.
I started out by simply recording into Camtasia as if I was doing it live. Even though I was aware of the timing requirement, it still took about an hour; I was hoping to be able to edit it down. But apart from the timing, once I listened back, I didn’t like what I was hearing. The repetitions, digressions, hesitations and emphases that are part of normal speech all added up, and couldn’t be cleaned up easily.
Scripting
So I decided to write a script instead. Although I’ve written scripts for short videos before, this was on a larger scale. But at least I could (I hoped) use the original recording as the basis of the text. I ended up with about 5000 words, typed over a morning, that I then read into a microphone. The script was much tighter and more precise than the “just talking” version. But it also turned out to be a lot shorter – far too short. In trying to get an hour down to 40 minutes, I’d actually chopped it to under 25, even though the effective content was identical. To bring it closer to the available time, I added some slides and recorded another page of text, with paragraphs inserted throughout the original. That was still a bit too short, so a further extra page went in.
Editing
The various recordings and inserts were then edited in Camtasia, along with a short video where I spoke a few linking sentences to join the different sections.
Because the slides had animations that needed to be timed against the audio, one further step was needed: play the edited audio while recording the PowerPoint slideshow, manually clicking through the slides to match what I was saying on the soundtrack. That recording could then be added to the audio tracks and synchronised.
One final step was to extend the opening title slide for about 15 seconds and put a countdown on it, before I started to talk. I don’t know how useful that will be, but the idea was that people may not join the session exactly on time so that gives a bit of a delay. And I added my old MQ Theme music on the closing slide.
Unsocial Distancing
The other SHARE session I had to record had different challenges. When it’s done in front of an audience, there are two speakers and we tend to swap rapidly between who is talking. It’s not like having a two-part presentation where one of us does the first half and the other does the second. Here, it’s more like a double act, with control passing easily between us through eye contact or hand-waves. But now Lyn and I were about 4000 miles apart. How could that work?
We decided to try it.
To get decent audio quality, we both set up Audacity to record our own speech (but not the other person’s). We then joined a web conference and performed as best we could, intending that recording to be a master copy of both sides. I had planned to get both of us to CLAP simultaneously to give a sync point across the different audio recordings, but forgot – and it turned out not to be necessary anyway.
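If the clap had been needed, lining the two recordings up could even be automated: a sharp transient produces a strong peak in the cross-correlation of the tracks. A minimal sketch of the idea – `find_offset` is a hypothetical helper working on short plain-Python sample lists; real audio would use numpy or an editor’s own sync feature:

```python
def find_offset(ref, other, max_lag=None):
    """Return the lag (in samples) at which `other` best matches `ref`.

    A positive result means `other` is delayed relative to `ref`,
    i.e. other[i] ~ ref[i - lag]. Brute-force cross-correlation,
    fine for illustration but far too slow for real recordings."""
    n = min(len(ref), len(other))
    if max_lag is None:
        max_lag = n - 1
    best_lag, best_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        score = sum(ref[i - lag] * other[i]
                    for i in range(n)
                    if 0 <= i - lag < n)
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# A sharp "clap" spike, and the same signal delayed by 3 samples.
clap    = [0, 0, 1.0, 0.2, 0, 0, 0, 0, 0, 0]
delayed = [0, 0, 0, 0, 0, 1.0, 0.2, 0, 0, 0]
print(find_offset(clap, delayed))  # prints 3
```

Once the lag is known, one track just gets shifted by that many samples before mixing.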
The recording went better than I expected. Even the webex version could have been usable in an emergency. Except for one thing – just as in the first attempt at the other session, this was far too long. It took an hour, which is the length the material was designed for when it’s usually run. But this was not something that could easily be rerecorded from a script. So I took a different route: brutal editing!
Editing
It was always the plan to merge the two separate recordings. The minimum I wanted to do was place the two voices onto a stereo track so they were not both in the centre. A hard pan to left or right would have been going too far, but some separation could be good.
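In Audacity that separation is just each track’s pan slider, but the underlying mix is simple enough to sketch. Here `pan_pair` is a hypothetical helper on illustrative sample lists (no real audio I/O), showing how a partial pan keeps both voices on both channels:

```python
def pan_pair(voice_a, voice_b, separation=0.4):
    """Mix two mono tracks into a (left, right) stereo pair.

    separation=0.0 leaves both voices dead centre; 1.0 is a hard
    pan. Something in between gives the gentler placement
    described above."""
    near = (1 + separation) / 2  # gain on a voice's own side
    far = (1 - separation) / 2   # leakage to the opposite side
    left = [near * a + far * b for a, b in zip(voice_a, voice_b)]
    right = [far * a + near * b for a, b in zip(voice_a, voice_b)]
    return left, right

# Two short illustrative mono "tracks", one per speaker.
me, lyn = [1.0, 0.0, 0.5], [0.0, 1.0, 0.5]
left, right = pan_pair(me, lyn)
print(left)   # my voice dominates the left channel, but Lyn's is still there
```

With `separation=0.4` each voice sits at 70% on its own side and 30% on the other, so neither ever disappears from one ear.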
Then I got to the trimming. The handovers from me to Lyn or back again seemed always to have a second or two gap that could be removed. The much shorter silences now sound a lot more natural to me.
Lots of other hesitations could also be taken out without too much of an obvious blip. A few digressions and extraneous comments hit the floor and one slide has been cut completely from the final version. Other editing removed noises like clicks and breathing from the non-talking partner, though those changes of course did not affect the overall length.
It took a while to work through the whole thing but eventually I got 55 minutes down to 38 without substantive losses. I’ve often heard film directors and editors talk about decisions and debates on cutting 2 or 3 frames. This might not have been quite so finely-balanced but it was still an interesting exercise to go through.
Final production required more mechanical work – making sure the slides lined up with the edited audio, adding the same theme music at the end – and then we had a talk we could submit.
Lessons Learned
One lesson is just how much shorter a scripted session can be. I now have to think about whether to write scripts for the ITC sessions I’m due to deliver, instead of just talking. Though since the plan for those includes being on-camera in a small image the whole time, reading a script might look odd – especially without a teleprompter to allow me to look straight at the camera.
I’ve also learned more about audio editing, and have some thoughts about how to improve some of the material if we have to do it again.
This post was last updated on July 26th, 2020 at 05:22 pm