Translating data rate and frame rate
Before an image from a vision system can be stored or processed in a host computer, it has already passed through several processing and transport steps. In this two-part article we will clarify the data flow and show you how to estimate how the data rate correlates with the frame rate. We will also provide some tips on how to discover the data-rate-limiting factor in your vision system.
Data generation at pixel level
To be able to say something about a frame rate, we first have to answer the question of how much data we actually need to transport. Data generation starts at the pixel level, in block 1 of the schematic overview. How much data we generate depends on the pixel bit depth and the resolution of the image sensor. The possible bit depths depend on your sensor; for today's machine vision sensors the bit depth is mostly 8 or 10 bit.
With respect to resolution, it is important to note that in this article we only consider the full sensor resolution. In the case of a region of interest (ROI) you have to be careful when estimating the frame rate along the lines described here: due to the sensor-specific read-out scheme, the ROI frame rate might not always be the frame rate you would expect based on the pixel count alone.
To illustrate, let’s take a 25 Mp camera as an example. A typical 25 Mp camera has a resolution of 5,120 by 5,120, or 26,214,400 pixels. The amount of data for a single image is now easily obtained by multiplying the pixel count by the pixel bit depth. For an 8 bit pixel depth the image of a 25 Mp camera will consist of 209,715,200 bits, while for a 10 bit pixel depth it will consist of 262,144,000 bits.
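The calculation above can be sketched in a few lines of Python, using the 25 Mp resolution from the article:

```python
# Image data size for a 25 Mp sensor (5,120 x 5,120 pixels).
width = height = 5_120              # pixels per side
bit_depth_8, bit_depth_10 = 8, 10   # bits per pixel

pixel_count = width * height                 # 26,214,400 pixels
bits_8 = pixel_count * bit_depth_8           # 209,715,200 bits per 8 bit image
bits_10 = pixel_count * bit_depth_10         # 262,144,000 bits per 10 bit image

print(f"{pixel_count=:,} {bits_8=:,} {bits_10=:,}")
```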
Internal sensor architecture and camera processing
The first subsystem of the total vision solution that has to deal with the image data is the image sensor itself, i.e. we stay in block 1 of the schematic overview. It has to read out the pixels, convert the analog signal to a digital signal, and supply the digital image data at the output pins of the sensor.
To get a feeling for the numbers, let us take a real 25 Mp sensor as an example: the Vita 25K from ON Semiconductor, which can be found in our S-25A70 camera. The sensor specifications state that the pixel data is read out over 32 channels at a rate of 620 Mbit/s per channel at a 10 bit pixel depth. With this information and the total data size of a 10 bit 25 Mp image, as calculated in the previous section, the maximum frame rate supported by the image sensor can be calculated. To do this, multiply the number of channels by the data rate per channel and divide by the image size: 32 x 620,000,000 / 262,144,000 = 75.7 frames per second (fps).
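As a quick check, the sensor-limited frame rate can be computed directly from the Vita 25K figures quoted above:

```python
# Maximum frame rate supported by the sensor read-out (Vita 25K figures).
channels = 32                        # parallel output channels
rate_per_channel = 620_000_000       # bit/s per channel at 10 bit depth
image_bits_10 = 5_120 * 5_120 * 10   # one 10 bit image: 262,144,000 bits

sensor_max_fps = channels * rate_per_channel / image_bits_10
print(f"sensor-limited frame rate: {sensor_max_fps:.1f} fps")  # ~75.7 fps
```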
It should be noted that the calculations in this article are estimates. Due to overhead data, the actual frame rates might deviate slightly from the reported values.
Although often hidden from the user, between the sensor and the interface sits the camera-specific, and thus manufacturer-specific, processing electronics, i.e. block 2 of the schematic overview. This part contains the manufacturer-specific features. Its processing rate has to match the sensor data rate.
From camera processing to frame grabber
After the camera processing, the interface (block 3 in the schematic overview) that is used to transfer the image data from camera to frame grabber becomes important. Here we will focus on the CoaXPress (CXP) interface. In the case of the Vita 25K it is interesting to consider two different situations, namely transporting the image data in 8 bit and in 10 bit over a CXP-6 (which stands for 6.25 Gbit/s) interface with four cables (CXP-6 x4).
There is another aspect of data transmission that requires some more explanation: the pixel bit depth with which the sensor is read out does not have to be equal to the bit depth at which the data is transferred over the interface. The bit depth at which the sensor digitizes the analog pixel signal determines the maximum useful bit depth, but in the camera processing step the pixel data can be re-sampled at a lower bit depth to reduce the amount of data. The bit depth at which the sensor reads out the pixels is often referred to as the sensor bit depth, while the bit depth at which the data is transported over the interface is often referred to as the pixel format.
Within the CoaXPress standard there are several speed grades that define the data rate per connection. The advantage of a lower data rate is that a longer cable can be used. Continuing with the CXP-6 x4 CoaXPress configuration in our example: the CXP-6 designation refers to a data rate of 6.25 Gbit/s per cable. However, not all of this bandwidth can be used for image data transfer. 20% of the bandwidth is used to encode the data so that correct data transfer is guaranteed (the so-called 8b/10b encoding). Furthermore, around 4% has to be reserved for overhead data. The effective CXP-6 bandwidth per cable that can be used for image data transfer is therefore 6.25 Gbit/s x 0.80 x 0.96 = 4.8 Gbit/s. With four cables, 19.2 Gbit/s (or 2,400 MByte/s; remember, 8 bits are 1 byte) can be used for image data transfer.
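The effective-bandwidth calculation can be sketched as follows, using the 8b/10b and overhead factors from the article:

```python
# Effective CXP-6 x4 bandwidth after 8b/10b encoding (20%) and ~4% overhead.
line_rate_cxp6 = 6.25e9   # bit/s per cable (CXP-6 speed grade)
cables = 4

effective_per_cable = line_rate_cxp6 * 0.80 * 0.96   # 4.8 Gbit/s
effective_total = effective_per_cable * cables        # 19.2 Gbit/s

print(f"{effective_per_cable / 1e9:.1f} Gbit/s per cable, "
      f"{effective_total / 1e9:.1f} Gbit/s total")
```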
Let’s go back to the two cases mentioned above: 8 bit and 10 bit pixel format over CXP-6 x4. Now that we have calculated the CXP-6 x4 bandwidth that is available for image data transfer, we can calculate the frame rate supported by the interface for the two pixel formats by dividing the available data rate by the amount of data per image:
| Pixel format | Data 25 Mp (bits) | Interface | Data rate available for image transfer | Interface supported max frame rate |
|---|---|---|---|---|
| 8 bit | 209,715,200 | CXP-6 x4 | 19.2 Gbit/s | 91.6 fps |
| 10 bit | 262,144,000 | CXP-6 x4 | 19.2 Gbit/s | 73.2 fps |
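The frame rates in this table follow directly from dividing the effective bandwidth by the image size:

```python
# Interface-limited frame rates over CXP-6 x4 (19.2 Gbit/s effective).
effective_bandwidth = 19.2e9     # bit/s available for image data
pixels = 5_120 * 5_120           # 25 Mp sensor

fps_by_format = {}
for bits_per_pixel in (8, 10):
    image_bits = pixels * bits_per_pixel
    fps_by_format[bits_per_pixel] = effective_bandwidth / image_bits
    print(f"{bits_per_pixel} bit: {fps_by_format[bits_per_pixel]:.1f} fps")
```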
With the 8 bit pixel format the interface supports a maximum frame rate that lies above the maximum frame rate of the Vita 25K sensor (75.7 fps). This means that in 8 bit the sensor is the data-rate-limiting factor. In 8 bit pixel format, a CXP-6 x4 camera with a Vita 25K sensor using the full sensor ROI will thus not reach a frame rate above 75.7 fps.
The only way to increase the frame rate in this situation is to take a smaller ROI or to use an analog on-sensor binning mode; binning in the digital domain in the camera processing step will not help in this case, because the sensor read-out remains the bottleneck.
With the 10 bit pixel format the maximum frame rate supported by the interface is actually lower than the maximum frame rate supported by the sensor. This means that in 10 bit pixel format the interface is the data-rate-limiting factor. Although the sensor supports 75.7 fps, the interface limits the achievable frame rate to 73.2 fps.
When the interface is the limiting factor, reducing the ROI, analog on-sensor binning, and digital binning in the camera processing step will all help to increase the frame rate.
For this specific case of a 25 Mp camera, we have listed below the theoretical frame rates that could be achieved with the various CXP configurations if the interface were the data-rate-limiting factor.
| Configuration | Total data rate (Gbit/s) | Available for image data (Gbit/s) | fps for 25 Mp camera (8 bit) | fps for 25 Mp camera (10 bit) |
|---|---|---|---|---|
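Rows for such a table can be generated with a short script. Note that the set of configurations below is an assumption (the CXP 1.x speed grades in single- and four-cable variants); the original table data is not reproduced here.

```python
# Sketch: frame rates per CXP configuration, assuming the CXP 1.x speed
# grades (an assumption, not taken from the article's table) and the same
# 8b/10b (20%) and ~4% overhead factors used earlier.
speed_grades = {"CXP-1": 1.25e9, "CXP-2": 2.5e9, "CXP-3": 3.125e9,
                "CXP-5": 5.0e9, "CXP-6": 6.25e9}
pixels = 5_120 * 5_120

rows = {}
for name, line_rate in speed_grades.items():
    for cables in (1, 4):
        total = line_rate * cables
        available = total * 0.80 * 0.96       # after encoding + overhead
        fps8 = available / (pixels * 8)
        fps10 = available / (pixels * 10)
        rows[(name, cables)] = (total, available, fps8, fps10)
        print(f"{name} x{cables}: {total/1e9:.2f} Gbit/s total, "
              f"{available/1e9:.2f} Gbit/s available, "
              f"{fps8:.1f} fps (8 bit), {fps10:.1f} fps (10 bit)")
```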
We have now discussed the path from the image sensor through the camera processing and are halfway through the interface, exactly in the middle of the cable connecting the camera to the frame grabber. This is a good moment to stop for now; next week we will continue with the last steps and, in particular, discuss the transfer from frame grabber to host PC!