ViewCast 4100 User Guide
Advanced Operations
Figure 63. Custom fields
Note: Choosing a size larger than 1280 x 420 is not recommended due to the high data rates and CPU usage required.
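To see why larger frame sizes drive up the data rate, a rough back-of-the-envelope calculation helps. The sketch below is illustrative only (the function name and the assumption of ~2 bytes per pixel for YUV 4:2:2 video are ours, not from the manual):

```python
# Illustrative sketch: approximate uncompressed data rate for a capture size.
# Assumes ~2 bytes per pixel, typical of YUV 4:2:2 video.
def data_rate_mb_per_s(width, height, fps, bytes_per_pixel=2):
    """Return the approximate uncompressed data rate in MB/s."""
    return width * height * bytes_per_pixel * fps / 1_000_000

print(data_rate_mb_per_s(720, 480, 30))    # ~20.7 MB/s at SD resolution
print(data_rate_mb_per_s(1280, 720, 30))   # ~55.3 MB/s, over 2.5x higher
```

Doubling the frame dimensions roughly quadruples the data the system must move and encode, which is why large sizes tax the CPU.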
5. Select the Frame Rate from the drop-down list.
6. Drag the sliders to adjust the Gamma, Brightness, Contrast, Hue, and Saturation (Figure 64).
Figure 64. Video Filter Settings
Note: Click the Restore button to the right of the filter to reset the settings to the default.
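The sliders map onto standard per-pixel adjustments. As a rough sketch of what the Gamma, Brightness, and Contrast controls do (a simplification we supply for illustration; the card's actual processing happens inside the driver):

```python
import numpy as np

# Hypothetical sketch of the per-pixel math behind the Gamma, Brightness,
# and Contrast sliders. Pixel values are normalized to [0, 1].
def apply_filters(frame, gamma=1.0, brightness=0.0, contrast=1.0):
    out = np.power(frame, 1.0 / gamma)   # gamma correction
    out = (out - 0.5) * contrast + 0.5   # contrast, pivoting on mid-gray
    out = out + brightness               # brightness offset
    return np.clip(out, 0.0, 1.0)        # keep values in legal range

frame = np.full((2, 2), 0.25)            # a uniform dark gray test frame
print(apply_filters(frame, gamma=2.0, brightness=0.1))  # every pixel -> 0.6
```

Hue and Saturation work similarly but operate on the color components rather than luminance.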
7. Click the De-Interlace setting you want to apply (Figure 65). Options include:
None
– Performs no de-interlacing of any kind.
Bob0
– Applies inverse telecine de-interlacing to all telecine video.
– Applies motion adaptive de-interlacing to all video that is not telecine.
– Switches dynamically between the two modes as the content changes.
– Available for NTSC video only.
Bob1
– Drops the redundant fields and reassembles the video in a 24 fps progressive format.
– Applies inverse telecine de-interlacing to all telecine video.
– Performs no de-interlacing of video that is not telecine.
– Available for NTSC video only.
Advanced
– Is an algorithm for de-interlacing pure video (non-telecine) content.
– Applies motion adaptive de-interlacing to all video. It detects which portions of the image are still and which portions are in motion, then applies different processing to each scenario.
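The inverse telecine step the modes above refer to can be pictured simply. Telecined (3:2 pulldown) material carries 4 original film frames in every 5 video frames, so restoring 24 fps means dropping the one redundant frame in five. The sketch below is a deliberate simplification we supply for illustration (real inverse telecine operates on interlaced fields and must first detect the cadence):

```python
# Simplified sketch of inverse telecine (3:2 pulldown removal), assuming the
# cadence is already known: every 5th frame of 30 fps telecined video is
# redundant, so dropping it restores the original 24 fps progressive stream.
def inverse_telecine(frames):
    """Keep 4 of every 5 frames, yielding 24 fps from 30 fps input."""
    return [f for i, f in enumerate(frames) if i % 5 != 4]

video = list(range(30))           # one second of 30 fps telecined video
film = inverse_telecine(video)
print(len(film))                  # 24 frames -> 24 fps progressive
```

This is why the telecine-aware modes apply only to NTSC video: the 3:2 pulldown cadence exists only in film transferred to 30 fps (NTSC) video.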