1 .. SPDX-License-Identifier: GPL-2.0
3 i.MX Video Capture Driver
4 =========================
Introduction
------------
9 The Freescale i.MX5/6 contains an Image Processing Unit (IPU), which
handles the flow of image frames to and from capture devices and
display devices.
13 For image capture, the IPU contains the following internal subunits:
15 - Image DMA Controller (IDMAC)
16 - Camera Serial Interface (CSI)
17 - Image Converter (IC)
18 - Sensor Multi-FIFO Controller (SMFC)
- Image Rotator (IRT)
20 - Video De-Interlacing or Combining Block (VDIC)
22 The IDMAC is the DMA controller for transfer of image frames to and from
23 memory. Various dedicated DMA channels exist for both video capture and
24 display paths. During transfer, the IDMAC is also capable of vertical
25 image flip, 8x8 block transfer (see IRT description), pixel component
26 re-ordering (for example UYVY to YUYV) within the same colorspace, and
27 packed <--> planar conversion. The IDMAC can also perform a simple
28 de-interlacing by interweaving even and odd lines during transfer
29 (without motion compensation which requires the VDIC).
31 The CSI is the backend capture unit that interfaces directly with
32 camera sensors over Parallel, BT.656/1120, and MIPI CSI-2 buses.
34 The IC handles color-space conversion, resizing (downscaling and
35 upscaling), horizontal flip, and 90/270 degree rotation operations.
37 There are three independent "tasks" within the IC that can carry out
38 conversions concurrently: pre-process encoding, pre-process viewfinder,
39 and post-processing. Within each task, conversions are split into three
40 sections: downsizing section, main section (upsizing, flip, colorspace
41 conversion, and graphics plane combining), and rotation section.
43 The IPU time-shares the IC task operations. The time-slice granularity
44 is one burst of eight pixels in the downsizing section, one image line
in the main processing section, and one image frame in the rotation section.
47 The SMFC is composed of four independent FIFOs that each can transfer
captured frames from sensors directly to memory concurrently via four
IDMAC channels.
51 The IRT carries out 90 and 270 degree image rotation operations. The
52 rotation operation is carried out on 8x8 pixel blocks at a time. This
53 operation is supported by the IDMAC which handles the 8x8 block transfer
54 along with block reordering, in coordination with vertical flip.
56 The VDIC handles the conversion of interlaced video to progressive, with
57 support for different motion compensation modes (low, medium, and high
58 motion). The deinterlaced output frames from the VDIC can be sent to the
59 IC pre-process viewfinder task for further conversions. The VDIC also
contains a Combiner that combines two image planes, with alpha blending
and color keying.
In addition to the IPU internal subunits, there are two units outside
the IPU that are also involved in video capture on i.MX:
66 - MIPI CSI-2 Receiver for camera sensors with the MIPI CSI-2 bus
67 interface. This is a Synopsys DesignWare core.
- Two video multiplexers for selecting among multiple sensor inputs
to send to a CSI.
71 For more info, refer to the latest versions of the i.MX5/6 reference
72 manuals [#f1]_ and [#f2]_.
Features
--------
78 Some of the features of this driver include:
- Many different pipelines can be configured via the media controller
API that correspond to the hardware video capture pipelines supported
in the i.MX.
- Supports parallel, BT.656, and MIPI CSI-2 interfaces.
86 - Concurrent independent streams, by configuring pipelines to multiple
87 video capture interfaces using independent entities.
89 - Scaling, color-space conversion, horizontal and vertical flip, and
90 image rotation via IC task subdevs.
- Many pixel formats supported (RGB, packed and planar YUV, partial
planar YUV).
95 - The VDIC subdev supports motion compensated de-interlacing, with three
96 motion compensation modes: low, medium, and high motion. Pipelines are
97 defined that allow sending frames to the VDIC subdev directly from the
CSI. Sending frames to the VDIC from memory buffers via an output or
mem2mem device is also planned for the future.
101 - Includes a Frame Interval Monitor (FIM) that can correct vertical sync
102 problems with the ADV718x video decoders.
Entities
--------

imx6-mipi-csi2
--------------
111 This is the MIPI CSI-2 receiver entity. It has one sink pad to receive
112 the MIPI CSI-2 stream (usually from a MIPI CSI-2 camera sensor). It has
113 four source pads, corresponding to the four MIPI CSI-2 demuxed virtual
114 channel outputs. Multiple source pads can be enabled to independently
115 stream from multiple virtual channels.
117 This entity actually consists of two sub-blocks. One is the MIPI CSI-2
118 core. This is a Synopsys Designware MIPI CSI-2 core. The other sub-block
119 is a "CSI-2 to IPU gasket". The gasket acts as a demultiplexer of the
120 four virtual channels streams, providing four separate parallel buses
121 containing each virtual channel that are routed to CSIs or video
122 multiplexers as described below.
124 On i.MX6 solo/dual-lite, all four virtual channel buses are routed to
125 two video multiplexers. Both CSI0 and CSI1 can receive any virtual
126 channel, as selected by the video multiplexers.
On i.MX6 Quad, virtual channel 0 is routed to IPU1-CSI0 (after being selected
129 by a video mux), virtual channels 1 and 2 are hard-wired to IPU1-CSI1
130 and IPU2-CSI0, respectively, and virtual channel 3 is routed to
131 IPU2-CSI1 (again selected by a video mux).
ipuX_csiY_mux
-------------
136 These are the video multiplexers. They have two or more sink pads to
137 select from either camera sensors with a parallel interface, or from
MIPI CSI-2 virtual channels from the imx6-mipi-csi2 entity. They have a
139 single source pad that routes to a CSI (ipuX_csiY entities).
141 On i.MX6 solo/dual-lite, there are two video mux entities. One sits
142 in front of IPU1-CSI0 to select between a parallel sensor and any of
143 the four MIPI CSI-2 virtual channels (a total of five sink pads). The
144 other mux sits in front of IPU1-CSI1, and again has five sink pads to
select between a parallel sensor and any of the four MIPI CSI-2 virtual
channels.
148 On i.MX6 Quad, there are two video mux entities. One sits in front of
149 IPU1-CSI0 to select between a parallel sensor and MIPI CSI-2 virtual
150 channel 0 (two sink pads). The other mux sits in front of IPU2-CSI1 to
select between a parallel sensor and MIPI CSI-2 virtual channel 3 (two
sink pads).

ipuX_csiY
---------
157 These are the CSI entities. They have a single sink pad receiving from
either a video mux or from a MIPI CSI-2 virtual channel as described
above.
161 This entity has two source pads. The first source pad can link directly
162 to the ipuX_vdic entity or the ipuX_ic_prp entity, using hardware links
163 that require no IDMAC memory buffer transfer.
165 When the direct source pad is routed to the ipuX_ic_prp entity, frames
from the CSI can be processed by one or both of the IC pre-processing
tasks.
169 When the direct source pad is routed to the ipuX_vdic entity, the VDIC
170 will carry out motion-compensated de-interlace using "high motion" mode
171 (see description of ipuX_vdic entity).
173 The second source pad sends video frames directly to memory buffers
174 via the SMFC and an IDMAC channel, bypassing IC pre-processing. This
175 source pad is routed to a capture device node, with a node name of the
176 format "ipuX_csiY capture".
178 Note that since the IDMAC source pad makes use of an IDMAC channel,
179 pixel reordering within the same colorspace can be carried out by the
180 IDMAC channel. For example, if the CSI sink pad is receiving in UYVY
181 order, the capture device linked to the IDMAC source pad can capture
182 in YUYV order. Also, if the CSI sink pad is receiving a packed YUV
format, the capture device can capture a planar YUV format such as
YUV420.
186 The IDMAC channel at the IDMAC source pad also supports simple
187 interweave without motion compensation, which is activated if the source
188 pad's field type is sequential top-bottom or bottom-top, and the
189 requested capture interface field type is set to interlaced (t-b, b-t,
190 or unqualified interlaced). The capture interface will enforce the same
191 field order as the source pad field order (interlaced-bt if source pad
192 is seq-bt, interlaced-tb if source pad is seq-tb).
This subdev can generate the following event when enabling the second
IDMAC source pad:
197 - V4L2_EVENT_IMX_FRAME_INTERVAL_ERROR
199 The user application can subscribe to this event from the ipuX_csiY
200 subdev node. This event is generated by the Frame Interval Monitor
201 (see below for more on the FIM).
203 Cropping in ipuX_csiY
204 ---------------------
206 The CSI supports cropping the incoming raw sensor frames. This is
207 implemented in the ipuX_csiY entities at the sink pad, using the
208 crop selection subdev API.
210 The CSI also supports fixed divide-by-two downscaling independently in
211 width and height. This is implemented in the ipuX_csiY entities at
212 the sink pad, using the compose selection subdev API.
214 The output rectangle at the ipuX_csiY source pad is the same as
215 the compose rectangle at the sink pad. So the source pad rectangle
cannot be negotiated directly; it must be set using the compose
selection API at the sink pad (if /2 downscaling is desired; otherwise
the source pad rectangle is equal to the incoming rectangle).
220 To give an example of crop and /2 downscale, this will crop a
221 1280x960 input frame to 640x480, and then /2 downscale in both
222 dimensions to 320x240 (assumes ipu1_csi0 is linked to ipu1_csi0_mux):
226 media-ctl -V "'ipu1_csi0_mux':2[fmt:UYVY2X8/1280x960]"
227 media-ctl -V "'ipu1_csi0':0[crop:(0,0)/640x480]"
228 media-ctl -V "'ipu1_csi0':0[compose:(0,0)/320x240]"
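The result can be checked by querying the active format on the
ipu1_csi0 IDMAC source pad, which should now report the composed
320x240 size (a quick sketch; the exact output depends on the
media-ctl version):

.. code-block:: none

   media-ctl --get-v4l2 "'ipu1_csi0':2"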
230 Frame Skipping in ipuX_csiY
231 ---------------------------
233 The CSI supports frame rate decimation, via frame skipping. Frame
234 rate decimation is specified by setting the frame intervals at
235 sink and source pads. The ipuX_csiY entity then applies the best
frame skip setting to the CSI to achieve the desired frame rate at the
source pad.
239 The following example reduces an assumed incoming 60 Hz frame
240 rate by half at the IDMAC output source pad:
244 media-ctl -V "'ipu1_csi0':0[fmt:UYVY2X8/640x480@1/60]"
245 media-ctl -V "'ipu1_csi0':2[fmt:UYVY2X8/640x480@1/30]"
247 Frame Interval Monitor in ipuX_csiY
248 -----------------------------------
250 The adv718x decoders can occasionally send corrupt fields during
NTSC/PAL signal re-sync (too few or too many video lines). When
252 this happens, the IPU triggers a mechanism to re-establish vertical
253 sync by adding 1 dummy line every frame, which causes a rolling effect
254 from image to image, and can last a long time before a stable image is
255 recovered. Or sometimes the mechanism doesn't work at all, causing a
permanent split image (one frame contains lines from two consecutive
captured images).
259 From experiment it was found that during image rolling, the frame
260 intervals (elapsed time between two EOF's) drop below the nominal
value for the current standard, by about one line time (60 usec),
262 and remain at that value until rolling stops.
264 While the reason for this observation isn't known (the IPU dummy
265 line mechanism should show an increase in the intervals by 1 line
266 time every frame, not a fixed value), we can use it to detect the
267 corrupt fields using a frame interval monitor. If the FIM detects a
268 bad frame interval, the ipuX_csiY subdev will send the event
V4L2_EVENT_IMX_FRAME_INTERVAL_ERROR. Userland can subscribe to
270 the FIM event notification on the ipuX_csiY subdev device node.
271 Userland can issue a streaming restart when this event is received
272 to correct the rolling/split image.
274 The ipuX_csiY subdev includes custom controls to tweak some dials for
275 FIM. If one of these controls is changed during streaming, the FIM will
276 be reset and will continue at the new settings.
278 - V4L2_CID_IMX_FIM_ENABLE
280 Enable/disable the FIM.
282 - V4L2_CID_IMX_FIM_NUM
284 How many frame interval measurements to average before comparing against
285 the nominal frame interval reported by the sensor. This can reduce noise
286 caused by interrupt latency.
288 - V4L2_CID_IMX_FIM_TOLERANCE_MIN
290 If the averaged intervals fall outside nominal by this amount, in
291 microseconds, the V4L2_EVENT_IMX_FRAME_INTERVAL_ERROR event is sent.
293 - V4L2_CID_IMX_FIM_TOLERANCE_MAX
295 If any intervals are higher than this value, those samples are
296 discarded and do not enter into the average. This can be used to
297 discard really high interval errors that might be due to interrupt
298 latency from high system load.
300 - V4L2_CID_IMX_FIM_NUM_SKIP
302 How many frames to skip after a FIM reset or stream restart before
303 FIM begins to average intervals.
305 - V4L2_CID_IMX_FIM_ICAP_CHANNEL
306 - V4L2_CID_IMX_FIM_ICAP_EDGE
308 These controls will configure an input capture channel as the method
309 for measuring frame intervals. This is superior to the default method
310 of measuring frame intervals via EOF interrupt, since it is not subject
311 to uncertainty errors introduced by interrupt latency.
313 Input capture requires hardware support. A VSYNC signal must be routed
314 to one of the i.MX6 input capture channel pads.
316 V4L2_CID_IMX_FIM_ICAP_CHANNEL configures which i.MX6 input capture
317 channel to use. This must be 0 or 1.
319 V4L2_CID_IMX_FIM_ICAP_EDGE configures which signal edge will trigger
320 input capture events. By default the input capture method is disabled
321 with a value of IRQ_TYPE_NONE. Set this control to IRQ_TYPE_EDGE_RISING,
322 IRQ_TYPE_EDGE_FALLING, or IRQ_TYPE_EDGE_BOTH to enable input capture,
323 triggered on the given signal edge(s).
When input capture is disabled, frame intervals will be measured via
the EOF interrupt.
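These controls can be listed and adjusted with v4l2-ctl on the
ipuX_csiY subdev device node. A minimal sketch, assuming the CSI subdev
is at /dev/v4l-subdev5 (check media-ctl -p for the actual node) and
using illustrative control names (use the exact names reported by -L):

.. code-block:: none

   # List the FIM custom controls and their menu values
   v4l2-ctl -d /dev/v4l-subdev5 -L
   # Enable the FIM and average 8 intervals per measurement (control
   # names here are illustrative placeholders)
   v4l2-ctl -d /dev/v4l-subdev5 --set-ctrl=fim_enable=1,fim_num=8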
ipuX_vdic
---------

The VDIC carries out motion compensated de-interlacing, with three
333 motion compensation modes: low, medium, and high motion. The mode is
334 specified with the menu control V4L2_CID_DEINTERLACING_MODE. The VDIC
335 has two sink pads and a single source pad.
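For example, the motion mode could be selected with v4l2-ctl on the
VDIC subdev node before streaming. This is only a sketch: the subdev
node number, the v4l2-ctl control name, and the menu index for "high
motion" are assumptions, so list the control first to confirm them:

.. code-block:: none

   # Show the deinterlacing mode menu exposed by ipu1_vdic, then select
   # a motion compensation mode by menu index (node number and index
   # are examples)
   v4l2-ctl -d /dev/v4l-subdev6 -L
   v4l2-ctl -d /dev/v4l-subdev6 --set-ctrl=deinterlacing_mode=2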
337 The direct sink pad receives from an ipuX_csiY direct pad. With this
338 link the VDIC can only operate in high motion mode.
340 When the IDMAC sink pad is activated, it receives from an output
341 or mem2mem device node. With this pipeline, the VDIC can also operate
342 in low and medium modes, because these modes require receiving
343 frames from memory buffers. Note that an output or mem2mem device
344 is not implemented yet, so this sink pad currently has no links.
346 The source pad routes to the IC pre-processing entity ipuX_ic_prp.
ipuX_ic_prp
-----------
351 This is the IC pre-processing entity. It acts as a router, routing
352 data from its sink pad to one or both of its source pads.
354 This entity has a single sink pad. The sink pad can receive from the
355 ipuX_csiY direct pad, or from ipuX_vdic.
357 This entity has two source pads. One source pad routes to the
358 pre-process encode task entity (ipuX_ic_prpenc), the other to the
359 pre-process viewfinder task entity (ipuX_ic_prpvf). Both source pads
360 can be activated at the same time if the sink pad is receiving from
361 ipuX_csiY. Only the source pad to the pre-process viewfinder task entity
362 can be activated if the sink pad is receiving from ipuX_vdic (frames
363 from the VDIC can only be processed by the pre-process viewfinder task).
ipuX_ic_prpenc
--------------
368 This is the IC pre-processing encode entity. It has a single sink
369 pad from ipuX_ic_prp, and a single source pad. The source pad is
370 routed to a capture device node, with a node name of the format
371 "ipuX_ic_prpenc capture".
373 This entity performs the IC pre-process encode task operations:
374 color-space conversion, resizing (downscaling and upscaling),
375 horizontal and vertical flip, and 90/270 degree rotation. Flip
376 and rotation are provided via standard V4L2 controls.
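For example, flip and rotation can be requested with the standard
rotate/hflip controls, either on the ipu1_ic_prpenc subdev node or on
its linked capture video node, which inherits the controls (the device
node below is a placeholder):

.. code-block:: none

   # 90 degree rotation combined with horizontal flip (placeholder node)
   v4l2-ctl -d /dev/video1 --set-ctrl=rotate=90,horizontal_flip=1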
378 Like the ipuX_csiY IDMAC source, this entity also supports simple
379 de-interlace without motion compensation, and pixel reordering.
ipuX_ic_prpvf
-------------
384 This is the IC pre-processing viewfinder entity. It has a single sink
385 pad from ipuX_ic_prp, and a single source pad. The source pad is routed
386 to a capture device node, with a node name of the format
387 "ipuX_ic_prpvf capture".
389 This entity is identical in operation to ipuX_ic_prpenc, with the same
390 resizing and CSC operations and flip/rotation controls. It will receive
391 and process de-interlaced frames from the ipuX_vdic if ipuX_ic_prp is
392 receiving from ipuX_vdic.
394 Like the ipuX_csiY IDMAC source, this entity supports simple
395 interweaving without motion compensation. However, note that if the
396 ipuX_vdic is included in the pipeline (ipuX_ic_prp is receiving from
397 ipuX_vdic), it's not possible to use interweave in ipuX_ic_prpvf,
398 since the ipuX_vdic has already carried out de-interlacing (with
399 motion compensation) and therefore the field type output from
400 ipuX_vdic can only be none (progressive).
Capture Pipelines
-----------------

The following sections describe the various use cases supported by the
pipelines.
407 The links shown do not include the backend sensor, video mux, or mipi
csi-2 receiver links. These depend on the type of sensor interface
409 (parallel or mipi csi-2). So these pipelines begin with:
411 sensor -> ipuX_csiY_mux -> ...
413 for parallel sensors, or:
415 sensor -> imx6-mipi-csi2 -> (ipuX_csiY_mux) -> ...
417 for mipi csi-2 sensors. The imx6-mipi-csi2 receiver may need to route
418 to the video mux (ipuX_csiY_mux) before sending to the CSI, depending
on the mipi csi-2 virtual channel, hence ipuX_csiY_mux is shown in
parentheses.
422 Unprocessed Video Capture:
423 --------------------------
425 Send frames directly from sensor to camera device interface node, with
426 no conversions, via ipuX_csiY IDMAC source pad:
428 -> ipuX_csiY:2 -> ipuX_csiY capture
430 IC Direct Conversions:
431 ----------------------
433 This pipeline uses the preprocess encode entity to route frames directly
434 from the CSI to the IC, to carry out scaling up to 1024x1024 resolution,
435 CSC, flipping, and image rotation:
437 -> ipuX_csiY:1 -> 0:ipuX_ic_prp:1 -> 0:ipuX_ic_prpenc:1 -> ipuX_ic_prpenc capture
439 Motion Compensated De-interlace:
440 --------------------------------
442 This pipeline routes frames from the CSI direct pad to the VDIC entity to
443 support motion-compensated de-interlacing (high motion mode only),
444 scaling up to 1024x1024, CSC, flip, and rotation:
446 -> ipuX_csiY:1 -> 0:ipuX_vdic:2 -> 0:ipuX_ic_prp:2 -> 0:ipuX_ic_prpvf:1 -> ipuX_ic_prpvf capture
Usage Notes
-----------
452 To aid in configuration and for backward compatibility with V4L2
453 applications that access controls only from video device nodes, the
454 capture device interfaces inherit controls from the active entities
455 in the current pipeline, so controls can be accessed either directly
456 from the subdev or from the active capture device interface. For
457 example, the FIM controls are available either from the ipuX_csiY
458 subdevs or from the active capture device.
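For example, listing controls on both nodes should show the same
inherited set when ipuX_csiY is part of the active pipeline (the device
node numbers below are placeholders):

.. code-block:: none

   # Controls exposed directly by the CSI subdev...
   v4l2-ctl -d /dev/v4l-subdev5 -l
   # ...are also reachable from the linked capture video node
   v4l2-ctl -d /dev/video4 -l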
The following are specific usage notes for the Sabre* reference
boards.
464 SabreLite with OV5642 and OV5640
465 --------------------------------
467 This platform requires the OmniVision OV5642 module with a parallel
468 camera interface, and the OV5640 module with a MIPI CSI-2
469 interface. Both modules are available from Boundary Devices:
471 - https://boundarydevices.com/product/nit6x_5mp
472 - https://boundarydevices.com/product/nit6x_5mp_mipi
474 Note that if only one camera module is available, the other sensor
475 node can be disabled in the device tree.
477 The OV5642 module is connected to the parallel bus input on the i.MX
internal video mux to IPU1 CSI0. Its i2c bus connects to i2c bus 2.
480 The MIPI CSI-2 OV5640 module is connected to the i.MX internal MIPI CSI-2
481 receiver, and the four virtual channel outputs from the receiver are
482 routed as follows: vc0 to the IPU1 CSI0 mux, vc1 directly to IPU1 CSI1,
483 vc2 directly to IPU2 CSI0, and vc3 to the IPU2 CSI1 mux. The OV5640 is
484 also connected to i2c bus 2 on the SabreLite, therefore the OV5642 and
485 OV5640 must not share the same i2c slave address.
487 The following basic example configures unprocessed video capture
488 pipelines for both sensors. The OV5642 is routed to ipu1_csi0, and
489 the OV5640, transmitting on MIPI CSI-2 virtual channel 1 (which is
490 imx6-mipi-csi2 pad 2), is routed to ipu1_csi1. Both sensors are
configured to output 640x480, and the OV5642 outputs YUYV2X8, the
OV5640 UYVY2X8:
496 # Setup links for OV5642
497 media-ctl -l "'ov5642 1-0042':0 -> 'ipu1_csi0_mux':1[1]"
498 media-ctl -l "'ipu1_csi0_mux':2 -> 'ipu1_csi0':0[1]"
499 media-ctl -l "'ipu1_csi0':2 -> 'ipu1_csi0 capture':0[1]"
500 # Setup links for OV5640
501 media-ctl -l "'ov5640 1-0040':0 -> 'imx6-mipi-csi2':0[1]"
502 media-ctl -l "'imx6-mipi-csi2':2 -> 'ipu1_csi1':0[1]"
503 media-ctl -l "'ipu1_csi1':2 -> 'ipu1_csi1 capture':0[1]"
504 # Configure pads for OV5642 pipeline
505 media-ctl -V "'ov5642 1-0042':0 [fmt:YUYV2X8/640x480 field:none]"
506 media-ctl -V "'ipu1_csi0_mux':2 [fmt:YUYV2X8/640x480 field:none]"
507 media-ctl -V "'ipu1_csi0':2 [fmt:AYUV32/640x480 field:none]"
508 # Configure pads for OV5640 pipeline
509 media-ctl -V "'ov5640 1-0040':0 [fmt:UYVY2X8/640x480 field:none]"
510 media-ctl -V "'imx6-mipi-csi2':2 [fmt:UYVY2X8/640x480 field:none]"
511 media-ctl -V "'ipu1_csi1':2 [fmt:AYUV32/640x480 field:none]"
513 Streaming can then begin independently on the capture device nodes
514 "ipu1_csi0 capture" and "ipu1_csi1 capture". The v4l2-ctl tool can
515 be used to select any supported YUV pixelformat on the capture device
516 nodes, including planar.
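For example, a planar format can be selected and a short capture run
performed with v4l2-ctl (a sketch; the /dev/videoN numbers are board
and probe-order specific, so check v4l2-ctl --list-devices for the
node named "ipu1_csi0 capture"):

.. code-block:: none

   # Capture 100 frames of planar YUV 4:2:0 from the OV5642 pipeline
   # (placeholder device node)
   v4l2-ctl -d /dev/video0 --set-fmt-video=pixelformat=YU12 \
            --stream-mmap --stream-count=100 --stream-to=/tmp/ov5642.yuv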
518 i.MX6Q SabreAuto with ADV7180 decoder
519 -------------------------------------
521 On the i.MX6Q SabreAuto, an on-board ADV7180 SD decoder is connected to the
522 parallel bus input on the internal video mux to IPU1 CSI0.
524 The following example configures a pipeline to capture from the ADV7180
525 video decoder, assuming NTSC 720x480 input signals, using simple
526 interweave (unconverted and without motion compensation). The adv7180
527 must output sequential or alternating fields (field type 'seq-bt' for
528 NTSC, or 'alternate'):
533 media-ctl -l "'adv7180 3-0021':0 -> 'ipu1_csi0_mux':1[1]"
534 media-ctl -l "'ipu1_csi0_mux':2 -> 'ipu1_csi0':0[1]"
535 media-ctl -l "'ipu1_csi0':2 -> 'ipu1_csi0 capture':0[1]"
537 media-ctl -V "'adv7180 3-0021':0 [fmt:UYVY2X8/720x480 field:seq-bt]"
538 media-ctl -V "'ipu1_csi0_mux':2 [fmt:UYVY2X8/720x480]"
539 media-ctl -V "'ipu1_csi0':2 [fmt:AYUV32/720x480]"
540 # Configure "ipu1_csi0 capture" interface (assumed at /dev/video4)
541 v4l2-ctl -d4 --set-fmt-video=field=interlaced_bt
543 Streaming can then begin on /dev/video4. The v4l2-ctl tool can also be
544 used to select any supported YUV pixelformat on /dev/video4.
546 This example configures a pipeline to capture from the ADV7180
547 video decoder, assuming PAL 720x576 input signals, with Motion
548 Compensated de-interlacing. The adv7180 must output sequential or
549 alternating fields (field type 'seq-tb' for PAL, or 'alternate').
554 media-ctl -l "'adv7180 3-0021':0 -> 'ipu1_csi0_mux':1[1]"
555 media-ctl -l "'ipu1_csi0_mux':2 -> 'ipu1_csi0':0[1]"
556 media-ctl -l "'ipu1_csi0':1 -> 'ipu1_vdic':0[1]"
557 media-ctl -l "'ipu1_vdic':2 -> 'ipu1_ic_prp':0[1]"
558 media-ctl -l "'ipu1_ic_prp':2 -> 'ipu1_ic_prpvf':0[1]"
559 media-ctl -l "'ipu1_ic_prpvf':1 -> 'ipu1_ic_prpvf capture':0[1]"
561 media-ctl -V "'adv7180 3-0021':0 [fmt:UYVY2X8/720x576 field:seq-tb]"
562 media-ctl -V "'ipu1_csi0_mux':2 [fmt:UYVY2X8/720x576]"
563 media-ctl -V "'ipu1_csi0':1 [fmt:AYUV32/720x576]"
564 media-ctl -V "'ipu1_vdic':2 [fmt:AYUV32/720x576 field:none]"
565 media-ctl -V "'ipu1_ic_prp':2 [fmt:AYUV32/720x576 field:none]"
566 media-ctl -V "'ipu1_ic_prpvf':1 [fmt:AYUV32/720x576 field:none]"
567 # Configure "ipu1_ic_prpvf capture" interface (assumed at /dev/video2)
568 v4l2-ctl -d2 --set-fmt-video=field=none
570 Streaming can then begin on /dev/video2. The v4l2-ctl tool can also be
571 used to select any supported YUV pixelformat on /dev/video2.
573 This platform accepts Composite Video analog inputs to the ADV7180 on
574 Ain1 (connector J42).
576 i.MX6DL SabreAuto with ADV7180 decoder
577 --------------------------------------
579 On the i.MX6DL SabreAuto, an on-board ADV7180 SD decoder is connected to the
580 parallel bus input on the internal video mux to IPU1 CSI0.
582 The following example configures a pipeline to capture from the ADV7180
583 video decoder, assuming NTSC 720x480 input signals, using simple
584 interweave (unconverted and without motion compensation). The adv7180
585 must output sequential or alternating fields (field type 'seq-bt' for
586 NTSC, or 'alternate'):
591 media-ctl -l "'adv7180 4-0021':0 -> 'ipu1_csi0_mux':4[1]"
592 media-ctl -l "'ipu1_csi0_mux':5 -> 'ipu1_csi0':0[1]"
593 media-ctl -l "'ipu1_csi0':2 -> 'ipu1_csi0 capture':0[1]"
595 media-ctl -V "'adv7180 4-0021':0 [fmt:UYVY2X8/720x480 field:seq-bt]"
596 media-ctl -V "'ipu1_csi0_mux':5 [fmt:UYVY2X8/720x480]"
597 media-ctl -V "'ipu1_csi0':2 [fmt:AYUV32/720x480]"
598 # Configure "ipu1_csi0 capture" interface (assumed at /dev/video0)
599 v4l2-ctl -d0 --set-fmt-video=field=interlaced_bt
601 Streaming can then begin on /dev/video0. The v4l2-ctl tool can also be
602 used to select any supported YUV pixelformat on /dev/video0.
604 This example configures a pipeline to capture from the ADV7180
605 video decoder, assuming PAL 720x576 input signals, with Motion
606 Compensated de-interlacing. The adv7180 must output sequential or
607 alternating fields (field type 'seq-tb' for PAL, or 'alternate').
612 media-ctl -l "'adv7180 4-0021':0 -> 'ipu1_csi0_mux':4[1]"
613 media-ctl -l "'ipu1_csi0_mux':5 -> 'ipu1_csi0':0[1]"
614 media-ctl -l "'ipu1_csi0':1 -> 'ipu1_vdic':0[1]"
615 media-ctl -l "'ipu1_vdic':2 -> 'ipu1_ic_prp':0[1]"
616 media-ctl -l "'ipu1_ic_prp':2 -> 'ipu1_ic_prpvf':0[1]"
617 media-ctl -l "'ipu1_ic_prpvf':1 -> 'ipu1_ic_prpvf capture':0[1]"
619 media-ctl -V "'adv7180 4-0021':0 [fmt:UYVY2X8/720x576 field:seq-tb]"
620 media-ctl -V "'ipu1_csi0_mux':5 [fmt:UYVY2X8/720x576]"
621 media-ctl -V "'ipu1_csi0':1 [fmt:AYUV32/720x576]"
622 media-ctl -V "'ipu1_vdic':2 [fmt:AYUV32/720x576 field:none]"
623 media-ctl -V "'ipu1_ic_prp':2 [fmt:AYUV32/720x576 field:none]"
624 media-ctl -V "'ipu1_ic_prpvf':1 [fmt:AYUV32/720x576 field:none]"
625 # Configure "ipu1_ic_prpvf capture" interface (assumed at /dev/video2)
626 v4l2-ctl -d2 --set-fmt-video=field=none
628 Streaming can then begin on /dev/video2. The v4l2-ctl tool can also be
629 used to select any supported YUV pixelformat on /dev/video2.
631 This platform accepts Composite Video analog inputs to the ADV7180 on
632 Ain1 (connector J42).
634 SabreSD with MIPI CSI-2 OV5640
635 ------------------------------
637 Similarly to SabreLite, the SabreSD supports a parallel interface
638 OV5642 module on IPU1 CSI0, and a MIPI CSI-2 OV5640 module. The OV5642
639 connects to i2c bus 1 and the OV5640 to i2c bus 2.
641 The device tree for SabreSD includes OF graphs for both the parallel
642 OV5642 and the MIPI CSI-2 OV5640, but as of this writing only the MIPI
643 CSI-2 OV5640 has been tested, so the OV5642 node is currently disabled.
The OV5640 module connects to MIPI connector J5 (the compatible module
part number or URL is not available).
647 The following example configures a direct conversion pipeline to capture
648 from the OV5640, transmitting on MIPI CSI-2 virtual channel 1. $sensorfmt
649 can be any format supported by the OV5640. $sensordim is the frame
650 dimension part of $sensorfmt (minus the mbus pixel code). $outputfmt can
651 be any format supported by the ipu1_ic_prpenc entity at its output pad:
656 media-ctl -l "'ov5640 1-003c':0 -> 'imx6-mipi-csi2':0[1]"
657 media-ctl -l "'imx6-mipi-csi2':2 -> 'ipu1_csi1':0[1]"
658 media-ctl -l "'ipu1_csi1':1 -> 'ipu1_ic_prp':0[1]"
659 media-ctl -l "'ipu1_ic_prp':1 -> 'ipu1_ic_prpenc':0[1]"
660 media-ctl -l "'ipu1_ic_prpenc':1 -> 'ipu1_ic_prpenc capture':0[1]"
662 media-ctl -V "'ov5640 1-003c':0 [fmt:$sensorfmt field:none]"
663 media-ctl -V "'imx6-mipi-csi2':2 [fmt:$sensorfmt field:none]"
664 media-ctl -V "'ipu1_csi1':1 [fmt:AYUV32/$sensordim field:none]"
665 media-ctl -V "'ipu1_ic_prp':1 [fmt:AYUV32/$sensordim field:none]"
666 media-ctl -V "'ipu1_ic_prpenc':1 [fmt:$outputfmt field:none]"
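For instance, the three variables used above could be assigned as
follows before running the commands, assuming the OV5640 is set to
produce UYVY2X8 at 640x480 and a downscaled AYUV32 output from the IC
is wanted (hypothetical example values only):

.. code-block:: none

   # Example values; substitute formats supported by the sensor and by
   # the ipu1_ic_prpenc output pad
   sensorfmt='UYVY2X8/640x480'
   sensordim='640x480'
   outputfmt='AYUV32/320x240'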
Streaming can then begin on the "ipu1_ic_prpenc capture" node. The
v4l2-ctl tool can be used to select any supported YUV or RGB
pixelformat on the capture device node.

Known Issues
------------
676 1. When using 90 or 270 degree rotation control at capture resolutions
677 near the IC resizer limit of 1024x1024, and combined with planar
678 pixel formats (YUV420, YUV422p), frame capture will often fail with
679 no end-of-frame interrupts from the IDMAC channel. To work around
680 this, use lower resolution and/or packed formats (YUYV, RGB3, etc.)
681 when 90 or 270 rotations are needed.
File list
---------
687 drivers/staging/media/imx/
689 include/linux/imx-media.h
References
----------
694 .. [#f1] http://www.nxp.com/assets/documents/data/en/reference-manuals/IMX6DQRM.pdf
695 .. [#f2] http://www.nxp.com/assets/documents/data/en/reference-manuals/IMX6SDLRM.pdf
Authors
-------
701 - Steve Longerbeam <steve_longerbeam@mentor.com>
702 - Philipp Zabel <kernel@pengutronix.de>
703 - Russell King <linux@armlinux.org.uk>
705 Copyright (C) 2012-2017 Mentor Graphics Inc.