Advice on Choosing a Camera Module for Beginners

12 Apr.,2024

 

The camera module is one of the most critical parts of an AI-based embedded system. With so many camera module choices on the market, the selection process may seem overwhelming. This post breaks down the process to help you make the right selection for an embedded application, including the NVIDIA Jetson.

Camera selection considerations

Camera module selection involves consideration of three key aspects: sensor, interface (connector), and optics. 

Sensor 

The two main types of electronic image sensors are the charge-coupled device (CCD) and the complementary metal-oxide-semiconductor (CMOS) active-pixel sensor. For a CCD sensor, pixel values can only be read on a per-row basis: each row of pixels is shifted, one by one, into a readout register. For a CMOS sensor, each pixel can be read individually and in parallel.

CMOS is less expensive and consumes less energy without sacrificing image quality, in most cases. It can also achieve higher frame rates due to the parallel readout of pixel values. However, there are some specific scenarios in which CCD sensors still prevail—for example, when long exposure is necessary and very low-noise images are required, such as in astronomy. 

Electronic shutter 

There are two options for the electronic shutter: global or rolling. A global shutter exposes each pixel to incoming light at the same time. A rolling shutter exposes the pixel rows in a certain order (top to bottom, for example) and can cause distortion (Figure 1).

Figure 1. Distortion of rotor blades caused by rolling shutter

A global shutter avoids the distortion that object movement causes with a rolling shutter. It is also much easier to sync multiple cameras with a global shutter because there is a single point in time when exposure starts. However, sensors with a global shutter are much more expensive than those with a rolling shutter.

Color or monochrome 

In most cases, a monochrome image sensor is sufficient for typical machine vision tasks like fault detection, presence monitoring, and recording measurements.

With a monochrome sensor, each pixel is usually described by eight bits. With a color sensor, each pixel has eight bits for the red channel, eight bits for the green channel, and eight bits for the blue channel. The color sensor requires processing three times the amount of data, resulting in a higher processing time and, consequently, a slower frame rate.  
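The 3x data-volume difference is easy to quantify. The following sketch follows the per-channel accounting in the text (8 bits per channel, no compression); the function name and example resolution are illustrative, not from the post:

```python
# Rough per-frame data volume for monochrome vs. color sensors,
# assuming 8 bits per channel and an uncompressed stream.
def data_rate_mbps(width, height, channels, fps, bits_per_channel=8):
    bits_per_frame = width * height * channels * bits_per_channel
    return bits_per_frame * fps / 1e6  # megabits per second

mono = data_rate_mbps(1920, 1080, channels=1, fps=30)
color = data_rate_mbps(1920, 1080, channels=3, fps=30)
print(f"mono:  {mono:.0f} Mbps")   # ~498 Mbps
print(f"color: {color:.0f} Mbps")  # three times the mono rate
```

For the same interface bandwidth and processing budget, the color stream leaves only a third of the headroom, which is why monochrome sensors can sustain higher frame rates.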

Dynamic range 

Dynamic range is the ratio between the maximum and minimum signal that can be acquired by the sensor. At the upper limit, pixels appear white for higher values of intensity (saturation), while pixels appear black at the lower limit and below. A dynamic range of at least 80 dB is needed for indoor applications, and up to 140 dB for outdoor applications.
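The decibel figures follow from the standard conversion 20 log10(max/min). A minimal sketch (the function name and the full-well/read-noise example values are mine, chosen for illustration):

```python
import math

# Dynamic range in decibels from the ratio of the maximum signal
# (full-well capacity) to the minimum detectable signal (noise floor).
def dynamic_range_db(max_signal, min_signal):
    return 20 * math.log10(max_signal / min_signal)

# e.g. a sensor with 10,000 e- full-well capacity and 1 e- read noise:
print(round(dynamic_range_db(10_000, 1)))  # 80 dB
```

So the 80 dB indoor figure corresponds to a 10,000:1 signal ratio, and 140 dB to 10,000,000:1, which is why outdoor scenes with direct sun and deep shadow demand HDR sensor modes.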

Resolution 

Resolution is a sensor’s ability to reproduce object details. It can be influenced by factors such as the type of lighting used, the sensor pixel size, and the capabilities of the optics. The smaller the object detail, the higher the required resolution. 

Pixel resolution translates to how many millimeters of the scene each pixel covers. The higher the resolution, the sharper your image will be. The camera or sensor's resolution should be high enough that the smallest feature of interest covers at least two pixels.

CMOS sensors with high resolutions tend to have low frame rates. While a sensor may achieve the resolution you need, it will not capture the quality images you need without achieving enough frames per second. It is important to evaluate the speed of the sensor. 

A general rule of thumb to determine the resolution needed for the use case is shown in Figure 2: resolution (pixels per axis) = 2 x FOV / size of the smallest feature of interest. The multiplier (2) represents the typical goal of having a minimum of two pixels on an object in order to successfully detect it.

Figure 2. Sensor resolution required is determined by lens field of view and feature of interest size

For example, suppose you want to detect an injury around the eye of a boxer.

  • FOV, mm = 2000mm 
  • Size of feature of interest (the eye), mm = 4mm

The calculation gives 2 x 2000 / 4 = 1000 pixels per axis, so a 1000 x 1000 (one-megapixel) camera should be sufficient to detect the eye using a CV or AI algorithm.
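The rule of thumb above can be sketched as a short helper (the function name is mine; the values are the boxer example from the text):

```python
# Rule-of-thumb sensor resolution: cover the smallest feature of
# interest with at least two pixels (the multiplier from Figure 2).
def required_pixels(fov_mm, feature_mm, pixels_per_feature=2):
    return pixels_per_feature * fov_mm / feature_mm

px = required_pixels(fov_mm=2000, feature_mm=4)
print(px)  # 1000.0 pixels per axis -> a 1000 x 1000 (1 MP) sensor
```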

Note that a sensor is made up of multiple rows of pixels. These pixels are also called photosites. The number of photons collected by a pixel is directly proportional to the area of the pixel. Selecting a larger pixel may seem tempting, but it is not the optimal choice in all cases.

            | Small pixel                                         | Large pixel
Noise       | Sensitive to noise (-)                              | Less sensitive to noise (+)
Resolution  | Higher spatial resolution for same sensor size (+)  | Lower spatial resolution for same sensor size (-)

Table 1. Pros and cons of small and large pixel size
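Since photon collection scales with pixel area, the low-light side of the trade-off in Table 1 can be quantified. A sketch with illustrative pixel pitches (the function and values are mine, not from the post):

```python
# Photon collection scales with pixel area: doubling the pixel pitch
# quadruples the light gathered per pixel, all else being equal.
def relative_light(pitch_um, reference_um=2.0):
    return (pitch_um / reference_um) ** 2

print(relative_light(4.0))  # 4.0x the light of a 2 um pixel
```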

Back-illuminated sensors maximize the amount of light being captured and converted by each photodiode. In front-illuminated sensors, metal wiring above the photodiodes blocks off some photons, hence reducing the amount of light captured.

Figure 3. Cross-section of a front-illuminated structure (left) and a back-illuminated structure (right)

Frame rate and shutter speed 

The frame rate refers to the number of frames (or images captured) per second (FPS). The frame rate should be determined based on the number of inspections required per second. This correlates with the shutter speed (or exposure time), which is the time that the camera sensor is exposed to capture the image. 

Theoretically, the maximum frame rate is equal to the inverse of the exposure time. But achievable FPS is lower because of latency introduced by frame readout, sensor resolution, and the data transfer rate of the interface including cabling. 
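That inverse relationship is simple to state in code. A minimal sketch (function name and exposure value are illustrative):

```python
# Theoretical ceiling: frame rate cannot exceed 1 / exposure time.
# Achievable FPS is lower due to readout and transfer overhead.
def max_fps(exposure_time_s):
    return 1.0 / exposure_time_s

print(max_fps(0.010))  # a 10 ms exposure caps the sensor at 100 fps
```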

FPS can be increased by reducing the required exposure time (for example, by adding lighting) or by binning pixels.

CMOS sensors can achieve higher FPS, as the process of reading out each pixel can be done more quickly than with the charge transfer in a CCD sensor’s shift register. 

Interface

There are multiple ways to connect the camera module to an embedded system. Typically, for evaluation purposes, cameras with USB and Ethernet interfaces are used because custom driver development is not needed. 

Other important parameters for interface selection are transmission length, data rate, and operating conditions. Table 2 lists the most popular interfaces. Each option has its pros and cons. 

Feature                | USB 3.2   | Ethernet (1 GbE) | MIPI CSI-2                                | GMSL2              | FPD-Link III
Bandwidth              | 10 Gbps   | 1 Gbps           | D-PHY 2.5 Gbps/lane; C-PHY 5.71 Gbps/lane | 6 Gbps             | 4.2 Gbps
Cable length supported | <5 m      | Up to 100 m      | <30 cm                                    | <15 m              | <15 m
Plug-and-play          | Supported | Supported        | Not supported                             | Not supported      | Not supported
Development costs      | Low       | Low              | Medium to high                            | Medium to high     | Medium to high
Operating environment  | Indoor    | Indoor           | Indoor                                    | Indoor and outdoor | Indoor and outdoor

Table 2. Comparison of various camera interfaces
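A quick sanity check when shortlisting interfaces is whether the link rate in Table 2 can carry your uncompressed stream. A sketch under simplifying assumptions (link rate treated as usable bandwidth, ignoring protocol overhead; the dictionary and example stream are mine):

```python
# Link rates from Table 2 (Gbps); real usable payload is lower due
# to protocol overhead, so treat this as a first-pass filter only.
INTERFACES_GBPS = {"USB 3.2": 10, "1 GbE": 1, "GMSL2": 6, "FPD-Link III": 4.2}

def stream_gbps(width, height, bits_per_pixel, fps):
    return width * height * bits_per_pixel * fps / 1e9

stream = stream_gbps(1920, 1080, bits_per_pixel=24, fps=60)  # ~2.99 Gbps
for name, cap in INTERFACES_GBPS.items():
    verdict = "ok" if cap > stream else "too slow"
    print(f"{name}: {verdict}")
```

For this example stream, 1 GbE falls short while USB 3.2, GMSL2, and FPD-Link III have headroom; the final choice then comes down to cable length, environment, and driver effort.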

Optics 

The basic purpose of an optical lens is to collect the light scattered by an object and recreate an image of the object on a light-sensitive image sensor (CCD or CMOS). The following factors should be considered when selecting a lens: focal length, sensor format, field of view, aperture, chief ray angle, resolving power, and distortion.

Lenses are manufactured with a limited number of standard focal lengths. Common lens focal lengths include 6mm, 8mm, 12.5mm, 25mm, and 50mm. 

Once you choose a lens with a focal length closest to the focal length required by your imaging system, you need to adjust the working distance to get the object under inspection in focus. Lenses with short focal lengths (less than 12mm) produce images with a significant amount of distortion. 

If your application is sensitive to image distortion, try to increase the working distance and use a lens with a higher focal length. If you cannot change the working distance, you are somewhat limited in choosing an optimized lens. 
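A common first pass is the pinhole approximation, focal length ~ sensor dimension x working distance / field of view, followed by snapping to the nearest standard focal length from the list above. A sketch (function name and example values are mine):

```python
# Standard focal lengths mentioned in the text (mm).
STANDARD_FOCAL_LENGTHS_MM = [6, 8, 12.5, 25, 50]

# Pinhole approximation: f ~ sensor dimension * working distance / FOV.
def estimate_focal_length(sensor_mm, working_distance_mm, fov_mm):
    return sensor_mm * working_distance_mm / fov_mm

# e.g. a 6.4 mm wide sensor, 1 m working distance, 500 mm FOV:
f = estimate_focal_length(sensor_mm=6.4, working_distance_mm=1000, fov_mm=500)
nearest = min(STANDARD_FOCAL_LENGTHS_MM, key=lambda s: abs(s - f))
print(f, nearest)  # ~12.8 mm -> choose the 12.5 mm lens
```

After picking the nearest standard lens, you adjust the working distance slightly to bring the object back into the desired field of view, as described above.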

             | Wide-angle lens | Normal lens       | Telephoto lens
Focal length | <=35 mm         | ~50 mm            | >=70 mm
Use case     | Nearby scenes   | Same as human eye | Far-away scenes

Table 3. Main types of camera lenses

To attach a lens to a camera requires some type of mounting system. Both mechanical stability (a loose lens will deliver an out-of-focus image) and the distance to the sensor must be defined. 

To ensure compatibility between different lenses and cameras, the following standard lens mounts are defined. 

                                  | M12/S-mount (most popular) | C-mount (industrial applications)
Flange focal length               | Non-standard               | 17.526 mm
Thread pitch (mm)                 | 0.5                        | 0.75
Sensor size accommodated (inches) | Up to 2/3                  | Up to 1

Table 4. Common lens mounts used in embedded space

NVIDIA camera module partners 

NVIDIA maintains a rich ecosystem of partnerships with highly competent camera module makers all over the world. See Jetson Partner Supported Cameras for details. These partners can help you design imaging systems for your application from concept to production for the NVIDIA Jetson. 

Figure 4. NVIDIA Jetson in combination with camera modules can be used across industries for various needs

Summary

This post has explained the most important camera characteristics to consider when selecting a camera for an embedded application. Although the selection process may seem daunting, the first step is to understand your key constraints based on design, performance, environment, and cost. 

Once you understand the constraints, then focus on the characteristics most relevant to your use case. For example, if the camera will be deployed away from the compute or in a rugged environment, consider using the GMSL interface. If the camera will be used in low-light conditions, consider a camera module with larger pixel and sensor sizes. If the camera will be used in a motion application, consider using a camera with a global shutter. 

To learn more, watch Optimize Your Edge Application: Unveiling the Right Combination of Jetson Processors and Cameras. For detailed specs on AI performance, GPU, CPU, and more for both Xavier and Orin-based Jetson modules, visit Jetson Modules. 

Since the camera module plays an increasingly important role in electronic products, the rest of this post offers practical tips for choosing a camera module and an overview of how camera modules are manufactured, so that you can make the right decisions for your product.

How to choose a proper camera module

  • Early in the design phase, determine the pixel count, size, and type of the camera module. For some products, it is close to impossible to change these parameters once they are decided.
  • Check that the camera module you select is compatible with the platform, or more precisely, with the SoC (system on chip) the device uses.
  • Determine the imaging direction based on the structure and placement of the camera module. Software can correct a 180° rotation of the image, but not a 90° rotation.
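The reason software can fix a 180° mounting but not a 90° one is geometric: a 180° rotation preserves the frame's width x height, while 90° swaps them, which the downstream pipeline is typically not sized for. A toy illustration (the function is mine, using nested lists in place of a real image buffer):

```python
# Rotating a frame by 180° = reversing row order and column order.
# The frame keeps its original width x height, so no pipeline changes.
def rotate_180(frame):
    return [row[::-1] for row in frame[::-1]]

frame = [[1, 2, 3],
         [4, 5, 6]]            # 2 rows x 3 columns
flipped = rotate_180(frame)
print(flipped)                 # [[6, 5, 4], [3, 2, 1]] - still 2 x 3
```

A 90° rotation of the same frame would produce a 3 x 2 buffer, so a sensor mounted sideways has to be handled in the mechanical design, not in software.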

Two different ways to package the sensor

Before we get into the manufacturing process of a camera module, it is important to understand how the sensor, a key component of the camera module, is packaged, because the packaging method affects the manufacturing process.

In the manufacturing process of a camera module, there are two ways to package the sensor: chip scale package (CSP) and chip on board (COB).

Chip scale package (CSP)

CSP means the sensor chip's package has an area no greater than 1.2 times that of the chip itself. Packaging is done by the sensor manufacturer, and there is usually a layer of glass covering the chip.

Chip on board (COB)

COB means the sensor chip is bonded directly to a PCB (printed circuit board) or an FPC (flexible printed circuit). The COB process is part of camera module production, so it is done by the camera module manufacturer.

Comparing the two packaging options: CSP is faster and more accurate but more expensive, and the cover glass may reduce light transmittance. COB saves space and costs less, but the process takes longer, yield problems are more common, and a defective module cannot be repaired.

 

Manufacturing process of a camera module

For camera module using CSP:

1. SMT (surface mount technology): first prepare the FPC, then mount the CSP sensor onto it. This is usually done at scale on a large panel.

2. Cleaning and segmentation: clean the large circuit board, then cut it into individual pieces.

3. VCM (voice coil motor) assembly: attach the VCM to the holder with glue, then bake the module to cure the glue. Solder the pins.

4. Lens assembly: attach the lens to the holder with glue, then bake the module.

5. Whole module assembly: attach the lens module to the circuit board with an ACF (anisotropic conductive film) bonding machine.

6. Lens inspection and focusing.

7. QC inspection and packaging.

 

For camera module using COB:

1. SMT: prepare the FPC.

2. Conduct COB process:

  • Die bonding: attach the sensor chip to the FPC.
  • Wire bonding: connect the sensor's bond pads to the circuit with fine wires.

3. Continue with VCM assembly; the rest of the procedure is the same as for the CSP module.

 

This is the end of this post. If you want to know more about camera modules, leave a comment below or contact us. We're glad to hear from you!

Note: We do not own the images used in this post. Feel free to contact us if they belong to you, and we’ll take them down as quickly as we possibly can.
