What Does a Video Processing Chip Do?
A video processing chip performs a variety of functions such as upconversion, deinterlacing, frame rate conversion and noise reduction. It can also perform 2D stabilization and provide a range of display enhancements.
As the complexity of video coding standards increases, software-programmable processor techniques become increasingly important. A highly programmable processor can rapidly execute the specialized control code these algorithms require.
Video processing chips enable AV systems to scale content, making it easy to drive displays of different resolutions from the same source. They also handle related tasks such as upconversion, deinterlacing, frame rate conversion and edge enhancement.
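To make the scaling step concrete, here is a minimal sketch of nearest-neighbor resampling, one of the simplest techniques a scaler can use. The function name and the frame representation (a list of rows of grayscale pixel values) are illustrative assumptions; real scaler chips typically use multi-tap polyphase filters rather than nearest-neighbor sampling.

```python
def scale_nearest(frame, new_w, new_h):
    """Resize a frame (list of rows of pixel values) to new_w x new_h
    by picking, for each output pixel, the nearest source pixel."""
    old_h = len(frame)
    old_w = len(frame[0])
    return [
        [frame[y * old_h // new_h][x * old_w // new_w] for x in range(new_w)]
        for y in range(new_h)
    ]

# Upscale a 2x2 frame to 4x4: each source pixel becomes a 2x2 block.
small = [[1, 2],
         [3, 4]]
big = scale_nearest(small, 4, 4)
```

Downscaling works with the same function (`scale_nearest(big, 2, 2)` recovers the original), which is why a single resampling core can serve both upconversion and downconversion paths.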
A video processor is used in a wide range of devices, including matrix switchers, amplifiers/splitters, multiviewers and video walls. It is the brains behind a display wall, providing real-time visualization by routing the desired signal to each panel.
The computational requirements of video processing algorithms are often too high for standard embedded RISC processors, so specialized video processors are needed. The Moustique MPP ME processor is one example: it is optimized to accelerate motion estimation, enabling it to outperform even high-end RISC processors on that workload.
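Motion estimation is a good illustration of why dedicated hardware helps: the core operation is an exhaustive block search over sum-of-absolute-differences (SAD) costs, repeated for every block of every frame. The sketch below shows that search in plain Python under assumed conventions (grayscale frames as lists of rows, a hypothetical `full_search` helper); a real motion estimation engine evaluates these SADs in parallel in silicon.

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equal-sized blocks."""
    return sum(abs(a - b)
               for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def block_at(frame, y, x, size):
    """Extract a size x size block with its top-left corner at (y, x)."""
    return [row[x:x + size] for row in frame[y:y + size]]

def full_search(cur, ref, y, x, size, radius):
    """Exhaustively search ref within +/-radius of (y, x) for the block
    best matching the current block; return (dy, dx, best_sad)."""
    target = block_at(cur, y, x, size)
    best = None
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            ry, rx = y + dy, x + dx
            if 0 <= ry <= len(ref) - size and 0 <= rx <= len(ref[0]) - size:
                cost = sad(target, block_at(ref, ry, rx, size))
                if best is None or cost < best[2]:
                    best = (dy, dx, cost)
    return best

# Toy 6x6 frames: a bright 2x2 patch moves down-right by one pixel.
ref = [[0] * 6 for _ in range(6)]
cur = [[0] * 6 for _ in range(6)]
for dy in (0, 1):
    for dx in (0, 1):
        ref[2 + dy][2 + dx] = 9
        cur[3 + dy][3 + dx] = 9
motion = full_search(cur, ref, 3, 3, 2, 2)
```

The search correctly reports that the block at (3, 3) came from one pixel up and to the left in the reference frame, with a residual SAD of zero.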
A video processor needs to deinterlace interlaced video into a progressive scan. This critical task is magnified on the larger, wall-sized displays now used in boardrooms and showrooms.
Interlaced video received from standard analog television, cable television (CATV) broadcasters, DVDs and even direct broadcast satellite systems must be deinterlaced before digital display devices can present it properly. This is the job of the video processing chip.
A high-quality deinterlacer is needed to eliminate jagged edges, combing and other artifacts that become visible on larger display screens. Crucial IP's VPC-1 is a full video processor and deinterlacer with line-doubled output that supports motion-adaptive deinterlacing, film cadence detection and low-angle directional interpolation. It also implements both blend and bob deinterlacing techniques. Inverse telecine (IVTC) is supported; in Microsoft's Direct3D 11 video API, for example, the ITelecineCaps member of the D3D11_VIDEO_PROCESSOR_RATE_CONVERSION_CAPS structure specifies which IVTC modes a driver supports. Bob deinterlacing works by line-doubling a single field into a full frame, blend deinterlacing averages the two fields of an interlaced image into a single frame, and adaptive deinterlacing chooses between spatial and temporal interpolation on a field-by-field basis.
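The difference between bob and blend can be shown in a few lines. This is a simplified sketch under assumed conventions (a field is a list of rows of grayscale values, and bob repeats lines rather than interpolating between them, which real hardware would do):

```python
def bob(field):
    """Bob: line-double one field into a full-height frame by repeating
    each line. (Hardware interpolates between lines instead of repeating.)"""
    return [list(line) for line in field for _ in range(2)]

def blend(top_field, bottom_field):
    """Blend: average the two fields line-by-line, then line-double.
    This removes combing but blurs vertical detail on motion."""
    averaged = [[(a + b) // 2 for a, b in zip(ra, rb)]
                for ra, rb in zip(top_field, bottom_field)]
    return bob(averaged)

top = [[10, 10], [30, 30]]       # odd lines of the interlaced frame
bottom = [[20, 20], [40, 40]]    # even lines, captured 1/60 s later
bobbed = bob(top)
blended = blend(top, bottom)
```

Bob preserves the values of one field exactly but halves vertical resolution; blend uses both fields but smears them together, which is why motion-adaptive schemes that switch between strategies produce better results.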
A video processing chip can convert input signals from a variety of sources. It is used in a wide range of AV equipment, such as matrix switchers, amplifiers/splitters and multiviewers, as well as in presentation systems such as video walls and on-screen displays (OSD).
Video processors often feature noise reduction capabilities to remove unwanted grain from a video clip. Grain is a common problem when footage has been shot in poor lighting and is particularly noticeable on small-sensor devices such as smartphones and action cameras.
Noise reduction is computationally intensive, so the speed of the video processor is critical. Two hardware components have the biggest impact on that speed: the CPU and the GPU. These processors perform millions of calculations every second, so overall system performance depends on how well they work together.
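One common family of techniques is temporal noise reduction, which averages each pixel with its own history so random grain cancels out while static detail is preserved. The recursive filter below is a minimal sketch under assumed conventions (frames as lists of rows of grayscale values; the function name and the `alpha` blend factor are illustrative, not a specific chip's design):

```python
def temporal_denoise(frames, alpha=0.5):
    """Recursive temporal filter: out = alpha*current + (1-alpha)*previous_out.
    Random per-frame noise averages out; static content passes through."""
    out = [row[:] for row in frames[0]]
    result = [[row[:] for row in out]]
    for frame in frames[1:]:
        out = [[alpha * c + (1 - alpha) * p for c, p in zip(rc, rp)]
               for rc, rp in zip(frame, out)]
        result.append([row[:] for row in out])
    return result

# A static 1-pixel scene whose true value is 100, with +/-8 alternating noise.
noisy = [[[108.0]], [[92.0]], [[108.0]], [[92.0]]]
denoised = temporal_denoise(noisy)
```

After a few frames the output hovers much closer to the true value than the raw input does. The catch, and the reason real denoisers are motion-compensated, is that this same averaging smears anything that actually moves.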
Taken together, these functions (upconversion, deinterlacing, frame rate conversion, noise reduction and artifact removal) let a video processing chip enhance the visual quality of a display. It can also provide edge enhancement, which is important for LED display applications.
A basic way to avoid combing artifacts is to discard the even fields and recreate them from the odd fields alone. However, this approach compromises image quality by throwing away half of the information. Silicon Optix's HQV processing technology provides no-compromise deinterlacing and scaling performance for standard-definition and high-definition signals.
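A better compromise is motion-adaptive deinterlacing: keep (weave) the real even-field lines wherever the scene is static, and interpolate only where motion is detected. The sketch below makes that decision per line for simplicity; the threshold, function names and frame representation are illustrative assumptions, and production deinterlacers decide per pixel with far more sophisticated motion detection.

```python
def motion_adaptive(top, bottom, prev_bottom, threshold=10):
    """Build a full frame from a top field and a bottom field.
    For each bottom-field line: if it matches the previous frame's bottom
    field (static), weave the real line; otherwise interpolate from the
    top field to avoid combing."""
    frame = []
    for i, t in enumerate(top):
        frame.append(list(t))  # top-field lines are always present
        diff = max(abs(a - b) for a, b in zip(bottom[i], prev_bottom[i]))
        if diff <= threshold:
            frame.append(list(bottom[i]))          # static: keep real detail
        else:
            nxt = top[i + 1] if i + 1 < len(top) else t
            frame.append([(a + b) // 2 for a, b in zip(t, nxt)])  # moving
    return frame

top = [[0, 0], [100, 100]]
bottom = [[50, 50], [50, 50]]
prev_bottom = [[50, 50], [200, 200]]   # second line changed => motion there
frame = motion_adaptive(top, bottom, prev_bottom)
```

In this example the first bottom line is unchanged between frames, so it is woven in verbatim (full vertical detail), while the second line moved and is replaced by an interpolated value, trading detail for combing-free output only where necessary.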
The dominant bit-rate control knob in any video encoding algorithm is the quantization parameter (QP). Smart QP choices can improve visual quality at a lower bit rate, but this requires intelligent algorithms that execute efficiently on a programmable platform. Ideally, these algorithms can be upgraded in software without changing the bitstream syntax, allowing the processor to keep up with improvements in coding algorithms that reduce distortion and improve bit-rate efficiency.
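To see why QP is such a powerful knob, consider how it maps to the quantizer step size. In H.264, for example, the step size roughly doubles every 6 QP units, so a small QP change has a large effect on how coarsely transform coefficients are represented (and therefore on bits spent). The sketch below uses that approximate relationship; the exact integer-arithmetic quantization in a real codec differs in detail.

```python
def qstep(qp):
    """Approximate H.264 quantizer step size: doubles every 6 QP units,
    anchored at Qstep = 0.625 for QP = 0."""
    return 0.625 * 2 ** (qp / 6)

def quantize(coeff, qp):
    """Scalar-quantize a transform coefficient, then reconstruct it.
    Higher QP -> larger step -> smaller levels (fewer bits), more error."""
    step = qstep(qp)
    level = round(coeff / step)
    return level, level * step

# The same coefficient (100.0) becomes cheaper but less accurate as QP rises.
coarse = [quantize(100.0, qp) for qp in (10, 22, 34)]
```

The quantized levels shrink from 50 to 13 to 3 as QP climbs, which is exactly the trade-off a rate-control algorithm negotiates: it lowers QP (spending bits) where the eye will notice distortion and raises it where it will not.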