Non-linear editing is editing film and video using software programs such as Final Cut Pro and Adobe Premiere to arrange static and time-based media assets into a presentation that tells a story or communicates a message. The media assets are created at different times and locations. The editor uses these pieces, which are stored on the computer in a project folder, to build a narrative that has a beginning, middle, and end.
Today’s editing programs recreate many of the tools of early editing techniques in digital form, such as the razor blade tool. The program’s razor blade tool digitally cuts the image much as an actual razor blade would slice film during machine-to-machine editing.
The four parts of an edited sequence are:
- Scripted action and dialog, when the video has a narrative progression.
- Unscripted action and dialog, when the video is a piece of journalism or a documentary.
- Titles and graphics.
- Music and sound effects.
Scripted action and dialog involve a performer reciting a written dialog off-book, or committed to memory. Performing does not always go smoothly, so multiple attempts, or takes, are made. These takes are assembled in the editing room before the performance is presented to the audience. Sometimes prompters are used, such as during an awards show like the Oscars or Grammys. A prompter is similar to a television, except that a devised script scrolls down the screen. The performer rehearses with the device, reciting the dialog as it moves down the screen. When the show is filmed, the performer recites the dialog from the prompter. This prevents the performer from forgetting or omitting critical parts of the dialog.
Unscripted action and dialog are created during a documentary or journalistic interview. Involving a non-performer, the information is recorded and a storyline is built around the interview. Another type of unscripted dialog is the voiceover (VO), in which an individual comments on a series of images without being seen. Reporters and narrators use this technique in order to remain unobserved during a critical point of the video. B-roll consists of additional shots used when actual images cannot be shown, such as when the crime itself could not be filmed, or the building where a private celebrity party is held.
Titles and graphics are static text and type that add to the storyline. Sometimes introductory, as in title credits at the beginning of a video, or closing, as in ending credits, they add information to the images and story being presented. They can be informative, such as a graph or the name of the individual being interviewed. Graphics can also present additional written information, such as product details in commercials or the dates of historical events.
Music and sound effects are often prepared off-camera and recorded separately. These need to be added at precise moments in the video. To maintain sound quality, the uncompressed WAV and AIFF formats are used without regard to file size. MP3s, the consumer compression format, lose fidelity because data discarded during the compression process cannot be retrieved. The lost data creates a noticeable distortion.
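To see why uncompressed WAV and AIFF files grow so large, the size can be computed directly from the audio parameters. A minimal sketch, using standard CD-quality figures (44,100 samples per second, 16 bits, stereo), which are an assumption rather than numbers from the text:

```python
# Sketch: size of uncompressed PCM audio, the kind of data a WAV or
# AIFF file stores without lossy compression.

def pcm_size_bytes(sample_rate, bit_depth, channels, seconds):
    """Uncompressed size: samples/sec * bytes/sample * channels * time."""
    return sample_rate * (bit_depth // 8) * channels * seconds

one_minute = pcm_size_bytes(44_100, 16, 2, 60)
print(one_minute)                   # 10584000 bytes
print(round(one_minute / 1e6, 1))   # roughly 10.6 MB per minute
```

An MP3 at a typical consumer bit rate stores the same minute in roughly a tenth of that space, which is exactly where the irretrievable data loss occurs.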
Redundancy is repeated information that wastes space in the storage media when pictorial information is recorded digitally. Redundancy occurs in two ways: temporal redundancy and spatial redundancy. When a video image is taken, the image is shot in frames per second. In this series of images, a pixel can remain unchanged in a fixed location for a period of time. This is temporal redundancy.
Spatial redundancy occurs within a single frame. Large regions of identical or near-identical pixels repeat the same information, so a frame can consume more storage than it needs. Encoding those repeated pixels more compactly reduces the file size.
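The two kinds of redundancy can be illustrated with tiny “frames” of pixel values. A minimal sketch, where the frame data is invented for illustration:

```python
# Sketch of the two kinds of redundancy using flat lists of pixel values.

def temporal_redundancy(frame_a, frame_b):
    """Count pixels that stayed the same between two consecutive frames."""
    return sum(1 for a, b in zip(frame_a, frame_b) if a == b)

def spatial_redundancy(frame):
    """Count pixels identical to their left-hand neighbor within one frame."""
    return sum(1 for a, b in zip(frame, frame[1:]) if a == b)

frame1 = [7, 7, 7, 7, 3, 3, 9, 9]   # e.g. a row of sky, grass, roof
frame2 = [7, 7, 7, 7, 3, 3, 9, 5]   # only the last pixel changed

print(temporal_redundancy(frame1, frame2))  # 7 pixels unchanged over time
print(spatial_redundancy(frame1))           # 5 repeated-neighbor pixels
```

A compressor can skip re-storing the 7 unchanged pixels between frames (temporal) and encode the runs of identical neighbors within a frame more compactly (spatial).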
The format most commonly used for uploading video to an online hosting site is AVCHD, which is based on MPEG-4 AVC/H.264. AVCHD works well with the flash memory card formats Secure Digital (SD) and CompactFlash (CF), which are inexpensive types of memory. Many editing programs can work with AVCHD directly, without the additional step of transcoding before editing can begin. The AVCHD standard makes it easier for video files to be recognized by editing software through the use of the .MTS (MPEG Transport Stream) file extension. The file names can be altered, but the videos remain intact. This allows consumer electronic devices such as prosumer cameras, gaming consoles, Blu-ray players, and others to read the coding and render the video and audio with fewer flaws than a “rookie” operator might introduce by misidentifying the file formats.
The AVCHD format allows the operator to define the rate at which the video signal is recorded to the flash medium. The cameraperson can set image quality ranging from low to high bit rates. The higher the bit rate, the better the image quality and the lower the amount of compression. But reduced compression means larger file sizes and creates the need for a larger SD card. Yet large flash cards are becoming less expensive, allowing longer videos to be shot on a prosumer camera.
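The trade-off between bit rate and card capacity is simple arithmetic. A minimal sketch, where the example bit rates are typical AVCHD-style settings assumed for illustration, not figures from the text:

```python
# Sketch: how the bit rate chosen on the camera translates into card space.

def recording_size_gb(bitrate_mbps, minutes):
    """Size in GB: megabits/sec * seconds, divided by 8 bits/byte and 1000."""
    return bitrate_mbps * minutes * 60 / 8 / 1000

for mbps in (5, 17, 24):   # low, medium, high quality settings (assumed)
    print(mbps, "Mbps ->", recording_size_gb(mbps, 60), "GB per hour")
```

At 24 Mbps an hour of footage needs roughly 10.8 GB, while 5 Mbps fits the same hour in about 2.25 GB, which is why a high-bit-rate shoot demands the larger card.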
The two methods of video compression are intraframe (I-frame) compression and interframe compression. Compression itself is a mathematically based algorithm that reduces the file size of images while maintaining visual quality. It works much like compression in photographic and audio files. Video compression takes advantage of a physical characteristic of filming: a vast number of individual images called frames. Frames are similar to movie film, which is actually individual photos taken by a camera at a high rate of speed, except that frames are constructed of computer pixels rather than a chemical emulsion exposed to light.
In compression, the software program relies on the principle of redundant information in multiple images. Redundancy occurs in two types: spatial redundancy and temporal redundancy. If a pixel remains in the same location with the same value across a number of images, the program records it in the first image and omits it from subsequent images until the pixel changes. The program is using temporal redundancy to compress the image. With the second type of redundancy, spatial, the program encodes regions of repeated pixels within a single frame more compactly rather than storing every pixel individually.
Video software uses either intraframe compression or interframe compression to reduce file size. Intraframe compression addresses spatial redundancy by compressing each I-frame at roughly a 10:1 ratio while maintaining the integrity of the data within the image. But the files remain large because the program makes no use of temporal redundancy. Interframe compression addresses both spatial and temporal redundancy.
A software program using interframe compression starts by creating I-frames just as an intraframe program would, but it goes further by analyzing the frames and creating I-frames at fixed intervals, typically every 15 frames. These frames are organized into a GOP, or group of pictures. In the interframe method, the I-frame is called a keyframe, which acts as the reference frame, holding the color values of the pixels in place, similar to a sample in PCM.
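The keyframe-plus-differences idea can be sketched in a toy encoder. This is a simplified illustration of the principle, not any real codec: frames are flat lists of pixel values, and the GOP size is shrunk from the text’s 15 frames to 3 for readability.

```python
# Toy sketch of interframe compression: store a full keyframe at the start
# of each GOP, then only the changed pixels for the frames in between.

GOP_SIZE = 3  # the text describes GOPs of 15 frames; 3 keeps the demo small

def encode(frames, gop_size=GOP_SIZE):
    """Return ('key', frame) or ('delta', {index: new_value}) entries."""
    encoded, reference = [], None
    for i, frame in enumerate(frames):
        if i % gop_size == 0:
            encoded.append(("key", list(frame)))   # keyframe: full picture
        else:
            delta = {j: v for j, v in enumerate(frame) if v != reference[j]}
            encoded.append(("delta", delta))       # only the pixels that moved
        reference = list(frame)
    return encoded

def decode(encoded):
    """Rebuild the full frames from keyframes and deltas."""
    frames, reference = [], None
    for kind, payload in encoded:
        if kind == "key":
            reference = list(payload)
        else:
            reference = list(reference)
            for j, v in payload.items():
                reference[j] = v
        frames.append(list(reference))
    return frames

clip = [[1, 1, 1], [1, 2, 1], [1, 2, 3], [4, 2, 3]]
print(encode(clip)[1])                 # ('delta', {1: 2})
print(decode(encode(clip)) == clip)    # True: frames round-trip losslessly
```

Each delta frame stores only the pixels that changed against the reference, which is where the large savings over intraframe-only compression come from.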
The omnidirectional microphone has a polar pattern that picks up sound from every direction. This polar pattern requires a quiet room with little background noise. The omnidirectional microphone is used for orchestra, theatre, and choir performances. The microphones are hung from the rafters of the performance hall in strategic locations to capture the range of all the instruments and/or voices. Audience members are generally quiet during a performance, although if someone coughs or sneezes during a quiet passage, that will be picked up as well.
The bidirectional microphone has a polar pattern that picks up sound from two directions, the front and the back of the microphone element. This pattern blocks sound waves entering from the sides of the microphone. The bidirectional microphone is used for interviews, so extraneous noise from the audience is blocked and the microphone can be shared. The interviewer and interviewee can hold their discussion without an additional setup.
The cardioid microphone has a “heart-shaped” polar pattern in which only sound waves coming from one direction are picked up by the microphone element. This type of microphone is used by speakers during presentations and by vocalists needing to isolate the sound of their voices.
Dynamic range is the difference in loudness within a performance. A music composition uses different levels of loudness for artistic emphasis of emotion. On a sheet of music, written markings such as piano (“soft”) or forte (“loud”) tell the singers or instrumentalists how the composer wants the piece to sound. An audio engineer needs to consider how to maintain both the fidelity of the music and its dynamic range, keeping the two balanced for the aural effect the composer or presenter is creating.
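Engineers usually express dynamic range in decibels, the logarithmic ratio between the loudest and quietest amplitudes. A minimal sketch, using invented amplitude figures for illustration:

```python
# Sketch: dynamic range in decibels, the span an engineer must balance
# between a forte passage and a piano passage.
import math

def dynamic_range_db(loudest, quietest):
    """Dynamic range in dB: 20 * log10 of the amplitude ratio."""
    return 20 * math.log10(loudest / quietest)

# A forte passage at 100x the amplitude of a piano passage (invented figures):
print(round(dynamic_range_db(1.0, 0.01)))  # 40 dB
```

The logarithmic scale matches how hearing works: each doubling of amplitude adds about 6 dB, so a 40 dB span is a very wide gap for the engineer to keep balanced.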