As we all know, theatrical sound film releases are typically projected at 24 or 25 frames per second. Film is a progressive medium: each film frame captures an image at a discrete point in time. However, film must be developed before it can be exhibited and must be handled by experienced technicians, making it a costly medium in which to produce artistic works. To reduce flicker, the shutter in the projector opens and closes two or three times for each frame, flashing each frame on screen more than once.
Prior to the advent of television, celluloid film was the only commercial means of displaying moving images. The introduction of wholly-electronic television broadcast and receiver systems changed that massively. TV broadcast cameras achieved acceptable image quality by transmitting images in an interlaced format. A broadcast camera captures, and a TV picture tube displays, an image as a set number of lines; the electron scanning beam inside the tube scans or displays each line sequentially, then returns and draws the next line. (Think of a typewriter.) When it gets to the bottom of the tube, it returns to the top and draws the next set of lines.
In order to give the electron beam sufficient time to draw all the lines, interlacing was used. In an interlaced format, "frames" become "fields". A field captures only the odd or only the even lines of the image. After all the odd lines of the first field are captured, all the even lines of the second field are captured. In NTSC countries, 59.94 color fields (formerly 60 fields for B&W NTSC) are captured each second. In PAL and SECAM countries, 50 fields are captured each second. When this is broadcast to a TV set, the fields are displayed as they were captured, and the high field rate avoids flicker on the screen.
A common misconception about analog interlaced video is that both fields capture the image at the same point in time; they do not. Blending the fields together can give a film-like quality to a series of moving images, but that was not how the technology worked. In an NTSC system, an even field is captured roughly 16.7ms after the preceding odd field; in the PAL system, 20ms after. When the captured image is stationary, the effect is film-like or still-picture-like. When there is movement on the screen, however, motion from analog video sources appears much more fluid than it would on film, because movement is sampled twice to two and a half times as often on video as on film.
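To make those numbers concrete, here is a quick back-of-the-envelope sketch in plain Python (my own illustration, not anything from a standards document):

```python
# Sampling intervals for the field rates discussed above.
ntsc_gap_ms = 1000 / 59.94   # ~16.68 ms between successive NTSC fields
pal_gap_ms = 1000 / 50       # 20 ms between successive PAL fields
film_gap_ms = 1000 / 24      # ~41.67 ms between 24 fps film frames

print(f"NTSC field interval: {ntsc_gap_ms:.2f} ms")
print(f"PAL field interval:  {pal_gap_ms:.2f} ms")
# Motion on NTSC video is sampled ~2.5x as often as on 24 fps film:
print(f"Sampling ratio, NTSC vs. film: {film_gap_ms / ntsc_gap_ms:.2f}")
```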
Television relies on the high field display rate and on phosphor decay to trick the viewer's eyes into seeing a continuous picture. Our eyes fill in the information missing from the odd and even scanlines based on what came before and what comes afterwards. The Cathode Ray Tube, the television display technology of choice for almost seventy years, is designed to display interlaced video at the standard NTSC (525/480-line) or PAL (625/576-line) resolution. CRTs can also display low-resolution progressive scan signals at 240 or 288 lines, respectively, a mode used mostly by video game consoles.
In the beginning of broadcast TV, once a signal was transmitted and received by the televisions that could pick it up, it would disappear into space, gone forever. Repeating a TV program meant performing it again on another evening. In the United Kingdom, which effectively spans a single time zone, interest in recording centered on archiving broadcasts of important events like the coronation of Queen Elizabeth II.
The U.S., however, faced a much more practical problem. The country spans four time zones, and it would hardly do to broadcast a TV program on the East Coast at 8:00 P.M. and have the same program air at 5:00 P.M. on the West Coast. Without a means of recording the program, the networks would have had to perform their shows twice an evening, an added expense. Some productions, like I Love Lucy, were shot entirely on film and could be pre-recorded, but this was a more expensive method of production.
Engineers and technicians on both sides of the pond developed a practical, if cumbersome, means of preserving TV broadcasts: the kinescope (U.S.) or the telerecording (U.K.). In a telerecording, a film camera, synchronized to a TV monitor, is pointed at the screen and records the image being broadcast. Once the program has finished recording, the film is developed and duplicated for sale or transport.
In the early days of TV broadcasting in the U.S., most programs were made on the East Coast, and the kinescope would be physically shipped out to the West Coast, where the program would be watched approximately one week later. By the mid-50s, transcontinental coaxial cable allowed programs to be transmitted electronically. The recorded program's kinescope negative then had to be rushed through developing and duplication within three hours so that the program could be shown on the West Coast at the same scheduled time as it was broadcast on the East Coast. But while the East Coast got a real video feed, the West Coast had to put up with the less-than-stellar quality of an image broadcast not from a video camera but from a quickly developed 16mm kinescope.
By the late 1950s, most hour-long U.S. dramas and many half-hour comedies were shot on film in Hollywood. Game shows, news programs, soap operas, children's shows, and local-interest programs were often shot on video, or a mix of video and film, to keep costs down. Many sitcoms returned to video in the 1970s, beginning with All in the Family, and by the '80s video cameras were sufficiently portable that location scenes could be shot on video as well, matching the look of the studio sessions.
In the U.S., the kinescope acquired a poor reputation when it came to video quality. One technical obstacle was that while the TV monitor displayed 59.94 fields per second, a standard film camera could only capture 24 images per second. At first, kinescopes would simply merge each pair of fields into a film frame and leave roughly 12 fields per second uncaptured, which could produce a dark bar across part of the image if the TV and film camera drifted out of sync. Even when done right, the result was jerky compared to the original.
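A rough sketch of that arithmetic, using the figures from the paragraph above:

```python
# The monitor shows ~59.94 fields per second, but a 24 fps film
# camera can merge at most two fields into each frame it exposes.
fields_shown = 59.94
fields_filmed = 24 * 2                 # two fields merged per film frame
print(fields_shown - fields_filmed)    # ~11.94 fields/s never make it to film
```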
In the U.K., film was rarely used for television production, being reserved mainly for prestige dramas and location shooting. The telerecording process was refined from the mid-50s to the mid-60s as a way to archive TV shows. An earlier approach, the suppressed field system, exposed the camera to only one set of fields; the results were rather poor in resolution, with the TV line structure very visible. A better approach, the stored field system, adjusted the display circuitry of the TV screen so that the lines from one field persisted longer than the lines from the other, allowing both fields to be captured within a single film frame.
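For illustration, here is a toy numpy sketch (my own approximation, not period hardware) of how the two approaches differ in what ends up on the film frame:

```python
import numpy as np

# An 8-line test image, split into its two fields.
frame = np.arange(64, dtype=float).reshape(8, 8)
odd_field = frame[0::2]    # lines 1, 3, 5, 7
even_field = frame[1::2]   # lines 2, 4, 6, 8

# Suppressed field: the film sees only one field, so each captured
# line is doubled -- half the vertical resolution, visible line structure.
suppressed = np.repeat(odd_field, 2, axis=0)

# Stored field: one field persists on the tube long enough for the
# next one to join it, so the film frame weaves both together.
stored = np.empty_like(frame)
stored[0::2] = odd_field
stored[1::2] = even_field
```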
When Ampex introduced the first practical videotaping system in 1956, it did not mean the end of the kinescope/telerecording. Videotape could store a program for rebroadcast at a later time without processing and developing, it could be degaussed and reused many times, and the same tape could be used for color or B&W, whereas kinescopes were almost always B&W. However, broadcast-quality videotape was very expensive in the 1950s and 1960s, and until the 1970s it was also costly and difficult to edit compared to film. Finally, incompatibilities in broadcasting standards meant that exporting TV programs had to be done on film.
The kinescope/telerecording process eliminates the fluidity of the broadcast: what originally went out as a fluid, "play in your living room" experience comes back looking like film. Of course, if a program was sold to another country, as Doctor Who was, a telerecording was how foreign audiences had to watch it until the 1970s. Eventually, almost all U.K. videotaped programs from the 1950s and 1960s were wiped so that the tape could be reused, leaving telerecordings as the only archive copies, and many telerecordings were themselves purged from 1972-78. Many of the surviving telerecordings have been rebroadcast or transferred to home video. Some lost programs survive only through fan recordings of the broadcast audio and through telesnaps, still photographs taken of the TV screen.
In the late 1990s, computer technology became sufficiently powerful and affordable to attempt restoring the video look. The process matured into VidFIRE (VIDeo Field Interpolation Restoration Effect). In VidFIRE, the frame rate is doubled through interpolation to approximate what the original video looked like. Because the fields were blended together on film, simply splitting each film frame back into odd and even fields does not work well. Instead, an intermediate frame is constructed between each pair of frames, and each resulting pair of frames is then "interlaced" for transmission or recording onto DVD.
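As a rough illustration of the idea, here is a hedged numpy sketch; a plain average stands in for VidFIRE's actual motion-compensated interpolation, which is far more sophisticated, and the function name is mine:

```python
import numpy as np

def vidfire_like(frames):
    """Double the rate of `frames` (a list of 2-D luma arrays) by
    synthesizing in-between frames, then weave successive pairs back
    into interlaced frames."""
    doubled = [frames[0]]
    for a, b in zip(frames, frames[1:]):
        doubled.append(0.5 * (a + b))  # crude stand-in for motion interpolation
        doubled.append(b)
    interlaced = []
    for first, second in zip(doubled[0::2], doubled[1::2]):
        woven = np.empty_like(first)
        woven[0::2] = first[0::2]      # odd lines sample one instant...
        woven[1::2] = second[1::2]     # ...even lines the next
        interlaced.append(woven)
    return interlaced
```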
Modern display technologies, LCDs in particular, are inherently progressive. Like film, an LCD always displays whole frames, which makes for difficulties when trying to view analog interlaced video as it was meant to be seen. Typically, a player will combine a pair of interlaced fields and then display the resulting "frame" twice. The result is combing artifacts whenever there is movement between a pair of fields. The fluidity is not completely lost, but the effect is rather ugly and not representative of what the video should have looked like. You can also see this issue with film sources that have been converted to 59.94i through 3:2 pulldown, which turns film material into video material by repeating odd and even fields across some film frames to make up the difference in frame rates.
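Here is a small sketch of the 3:2 pulldown cadence; the frame labels are mine:

```python
# Four film frames (A-D) spread across ten fields, i.e. five
# interlaced "frames", turning 24 fps film into ~59.94 fields/s.
film = ["A", "B", "C", "D"]
cadence = [3, 2, 3, 2]               # fields taken from each film frame
fields = [f for frame, n in zip(film, cadence) for f in [frame] * n]
pairs = list(zip(fields[0::2], fields[1::2]))
print(pairs)  # [('A','A'), ('A','B'), ('B','C'), ('C','C'), ('D','D')]
# The mixed ('A','B') and ('B','C') frames are where an LCD shows combing.
```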
YouTube does something similar to what an LCD does with interlaced video, but its method avoids the combing artifacts: it essentially drops half the fields and interpolates the missing lines in the fields it keeps, turning high-field-rate material into standard-frame-rate material. In essence, YouTube performs a computerized cross between the stored and suppressed field processes.
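A toy numpy sketch of that half-rate approach, as I understand it (my approximation, not YouTube's actual code):

```python
import numpy as np

def half_rate_deinterlace(frame):
    """Keep the lines of one field (rows 0, 2, 4, ...) and rebuild the
    discarded field's lines by averaging the lines above and below."""
    out = frame.astype(float).copy()
    out[1:-1:2] = 0.5 * (out[0:-2:2] + out[2::2])
    return out
```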
In order to reverse this process, you must first convert your interlaced source to a high-frame-rate progressive source, then upscale it to HD. A deinterlacing filter called YADIF is ideal for this. YADIF constructs a separate frame for each field of video, using information from before and after each field, so each resulting frame consists partly of original video and partly of interpolated video. The end result is high-frame-rate video, which you then convert to 720p to keep the high frame rate intact for YouTube. This YouTube tutorial shows how to do this very inexpensively using cheap capture devices: https://www.youtube.com/watch?v=sn_TDa9zY1c
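As a concrete example, here is one way the deinterlace-and-upscale step might be driven from Python with ffmpeg; the file names and encoder settings are placeholders, and the capture side is covered in the tutorial above:

```python
import subprocess

# "yadif=1" emits one frame per field, doubling the frame rate
# (59.94i -> 59.94p, 50i -> 50p); "scale=-2:720" resizes to 720p
# while preserving the aspect ratio.
subprocess.run([
    "ffmpeg", "-i", "capture.mkv",
    "-vf", "yadif=1,scale=-2:720",
    "-c:v", "libx264", "-crf", "18",
    "restored_720p.mp4",
], check=True)
```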
Another great post! One aspect of these restorations I thought was very clever was recovering colour information from the chroma dots with HD scans of the telerecording. Wikipedia has a little on it at https://en.m.wikipedia.org/wiki/Colour_recovery#From_chroma_crawl