Videoarchiver Tool

© 2018-2019, Kevan Hashemi, Open Source Instruments Inc.

Contents

Introduction
Warnings
Operation
Time Synchronization
File Sizes
Exposure Compensation
Color Balance
Design
Troubleshooting
Coming Soon
Version Changes

Introduction

Note: This manual applies to Videoarchiver 16 with LWDAQ 9.1.6+. See Version Changes for new features if you are using an older version.

The Videoarchiver is a component of our LWDAQ Software. It runs alongside the Neuroarchiver to record video from Animal Cage Cameras (ACCs) and biometric signals from Subcutaneous Transmitters (SCTs) simultaneously and synchronously with one another. The Videoarchiver allows us to view live video from ACCs, to change their IP addresses, and to turn on and off each camera's white and infrared illumination. The Videoarchiver makes sure that the clocks on the cameras remain synchronized to within ±50 ms with the clock of the data acquisition computer. The Neuroarchiver does the same thing for its own recordings. The result is continuous video and biometric recordings synchronized to within ±50 ms.


Figure: Videoarchiver on MacOS, Recording from Five A3034B Cameras. Each camera has its own control bar. In this example, each camera is named after its serial number, and each IP address is derived from the serial number.

We use a live video display when we adjust the picture acquired by each camera. Each camera has its own Live button to bring up a live display. This display is an un-compressed stream from the camera, showing us what will later be compressed and stored during recording. The delay between movement in the field of view of the camera and its appearance in the live display should be barely noticeable to the human eye. We use the live display to determine the effect of illumination, adjust the focus of the camera, and make sure we have no distracting reflections in the field of view. Once we are satisfied with the picture obtained from the camera, we turn off the live display and start recording to disk with the Rec button.

During recording, the un-compressed video is not sent to the data acquisition computer. Instead, the un-compressed stream is stored temporarily within the camera in one-second segments, each named after the time in Unix seconds. The camera compresses these segments, reducing their size by a factor of ten with no appreciable loss in image quality. It is these one-second compressed segments that the Videoarchiver downloads to the data acquisition computer. The delay between movement in the field of view and the arrival of compressed video in the data acquisition computer is the recording lag. The Videoarchiver displays the recording lag in units of seconds on the right side of each camera's control bar.
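
Because the segments are named after their Unix-second timestamps, the lag is easy to estimate: it is roughly the difference between the computer clock and the timestamp of the newest segment received. The following Tcl sketch illustrates the calculation; the variable names are ours, not taken from the Videoarchiver source.

    # Sketch: estimate the recording lag from the name of the newest
    # compressed segment we have downloaded. Segment names are Unix
    # seconds, so the lag is roughly the local time minus the segment
    # timestamp. Variable names are illustrative.
    set newest_segment_s 1525694641
    set lag_s [expr {[clock seconds] - $newest_segment_s}]
    puts "Recording lag: $lag_s s"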

We can view the incoming compressed segments with the MRec button, but the display will be delayed by the recording lag. Every minute, the Videoarchiver adds new segments to its video recording file. Every ten minutes, it creates a new recording file. We must refrain from viewing the recorded files while the Videoarchiver is still adding to them. Doing so can cause the Videoarchiver to stall.

Warnings

All: We recommend you use one instance of LWDAQ to run the Videoarchiver alone for recording video, and another instance of LWDAQ to run the Neuroarchiver for recording SCT signals.

Windows: On Windows, you may encounter a color scheme error when you try to display live video. Our mplayer video display tool does not work with the more sophisticated color schemes you can select for your Windows desktop. To avoid delays and notifications, we suggest you right-click on the desktop, select "Personalize", and change your "Color Theme" to "Windows 7 Basic", which you find by scrolling down.

All: Do not attempt to play the video file that is being prepared on disk by the Videoarchiver. The Videoarchiver deletes this file periodically. If you try to play the file, the Videoarchiver will encounter a file access error and recording will stall. Use the recording monitor to view the incoming video from one camera at a time, with the MRec button.

All: The Videoarchiver is under active development. Do not be alarmed if the version you download looks different from the version presented in the figures of this manual. Check the change log for when changes are introduced.

Operation

The Videoarchiver is written in TclTk. It is available in the Tool Menu of the LWDAQ Program. Before you can use it, you must download the Videoarchiver libraries, which are available in Videoarchiver.zip. When you open the Videoarchiver for the first time, you will see the following message. Click on the link to download the Videoarchiver software.


Figure: Videoarchiver on Linux, Requesting Download of Video-Handling Software.

Download and decompress the zip file. The result is the Videoarchiver directory. Place the Videoarchiver directory in your LWDAQ directory, next to the Tools, Sources, and Build directories. Having followed our Animal Cage Camera set-up instructions, the Videoarchiver should be able to communicate with your camera. The cameras and the data acquisition computer will be connected to the same power-over-ethernet (PoE) switch, forming a local ethernet. When we first open the Videoarchiver, we will be presented with a camera list consisting of only one camera, as shown above. We enter the IP address of one of our cameras in the IP entry box. When we ship a camera, its IP address will be 10.0.0.X, where X is the last three digits of the serial number, with leading zeros dropped. You can always reset the IP address of a camera to the universal default value of 10.0.0.34 with the help of the camera's configuration switch.
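
The mapping from serial number to as-shipped IP address takes one line of Tcl. The following sketch assumes a serial number ending in digits; the procedure name is ours, not the Videoarchiver's.

    # Sketch: derive the as-shipped IP address from a camera serial
    # number by taking its last three digits and dropping leading
    # zeros. The procedure name is illustrative.
    proc default_ip {serial} {
        set x [string trimleft [string range $serial end-2 end] "0"]
        return "10.0.0.$x"
    }
    puts [default_ip "0157"]   ;# prints 10.0.0.157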

We can check communication with a camera at any time, and identify which of several we are communicating with, by turning on its white LEDs. We select "5" from the White LED menu button, and either the LEDs on the camera will turn on or we will see a timeout error in the Videoarchiver text window. Once we have established communication, we can change the IP address of a camera using the IP button.

The Live and Rec buttons cause the Videoarchiver to configure a camera for streaming or recording. It is when we start streaming or recording that the Videoarchiver applies our choice of camera version, rotation, and exposure compensation. We can adjust these parameters at other times, but the changes have no effect until the next start. Thus we cannot change the rotation of a live display once the display has begun. We must stop the live display and start it again. We can, however, turn on and off the white and infrared LEDs at any time. The Record_All and Stop_All buttons allow us to begin and end recording from all cameras in the list. We add cameras to the list with the Add_Camera button, and we can save and load camera lists with Save_List and Load_List. The Configure button opens a configuration panel. The Directory button allows us to choose a master directory for video recording, in which the Videoarchiver will create directories named after the cameras and store its video files.


Figure: Videoarchiver on Windows 7, Recording from Five ACCs.

Each camera has its own Live button to initiate live display of the camera image. After a few seconds, a window with the live display will open. Close the window to terminate the display, or use the camera's Stop button to terminate it. You should notice a delay of no more than one frame period between movements in the field of view of the camera and their appearance on the display. Use the live display to check that your camera is in the right place, has the correct image rotation, is focused well, and that its video configuration provides adequate resolution and contrast. In theory, we can open live displays for every camera in our list by pressing all their Live buttons. With ten cameras, however, it is not clear how we would view each display in detail at one time. If we have multiple displays open, we can close them all with the Stop_All button.

Each camera has its own version parameter, by which we can select high or low resolution video. The low-resolution video typically has a higher frame rate than the high-resolution video. The following table gives the currently-defined camera versions.

Version     Resolution       Frame Rate   Quality   File Size
            (Width×Height)   (fps)        (crf)     (kByte/s)
A3034B-HR   820×416          20           23        40
A3034B-LR   410×308          30           15        20
Table: Camera Version Parameters.

We specify the quality of the compression with the "constant rate factor", or "crf", used by the H264 compression algorithm to control the extent to which moving objects will be blurred in the compressed video. At crf=1 there is no blurring at all, and at crf=51 all movement and edges in the picture are blurry. The standard value for high-quality compression is crf=23, and this is the value we use for our higher-resolution video. When we choose lower resolution, we increase the compression quality still further to crf=15, so that we can see every pixel in the video output and take advantage of the higher frame rate the lower-resolution video provides.
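
As a rough illustration, compressing a segment at our standard quality corresponds to an ffmpeg invocation like the one below, called here from Tcl. The file names, the input frame rate, and the exact option set used on the camera are our assumptions, not the camera's actual command line.

    # Sketch: compress a raw 20-fps H264 segment at crf=23, as for the
    # high-resolution camera version. File names and options are
    # assumptions. The -loglevel option keeps ffmpeg quiet, so Tcl's
    # exec does not mistake its banner on stderr for an error.
    exec ffmpeg -loglevel error -y -r 20 -i segment.h264 \
        -c:v libx264 -crf 23 segment.mp4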

When we display video with Live or MRec, the Videoarchiver scales the display by a factor display_zoom. By default, display_zoom is 1.0, which draws each video pixel on one computer screen pixel. Full-resolution images will take up most of your computer screen. Set display_zoom to 0.5 for a compact display of full-resolution video, or 2.0 for an enlarged display of low-resolution video.
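
The displayed window size is simply the video resolution multiplied by display_zoom, as in this Tcl sketch:

    # Sketch: window size for high-resolution video at display_zoom 0.5.
    set display_zoom 0.5
    set width 820
    set height 416
    puts "[expr {int($width * $display_zoom)}] x [expr {int($height * $display_zoom)}]"
    # Prints "410 x 208".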

Each camera has its own rotation parameter, by which we rotate the image. The rotation parameter can be 0°, 90°, 180°, or 270°. We can set the camera down on any of its three sides that have no cable emerging, and then choose the rotation value so that the image is upright as soon as it comes out of the image sensor.

Each camera has its own exposure compensation parameter, which controls how the camera adjusts its exposure time. Higher values increase the exposure time so as to give a better view of dimly-lit objects. Lower values decrease the exposure time so as to give a better view of brightly-lit objects. The default value is four (4), which we deem to be a good starting point for viewing animals in cages.

Each camera has its own Rec button that starts recording video from the camera. There is also a Record_All button that will start recording from all cameras in the list. When it starts recording from a camera, the Videoarchiver creates a directory named after the camera in the master directory. We specify the master directory with the Directory button. Because the camera names will be used as directory names, do not use spaces or special characters other than dashes and underscores in your camera names. We recommend you name each camera after its serial number, because this will simplify both your IP address assignments and your association between cameras and recording archives.
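
In Tcl, the directory creation and the camera-name restriction look something like the following sketch; the directory and camera names are ours.

    # Sketch: create a camera's recording directory inside the master
    # directory, first rejecting names with characters other than
    # letters, digits, dashes, and underscores. Names are illustrative.
    set master_dir "/data/video"
    set camera_name "0157"
    if {![regexp {^[A-Za-z0-9_-]+$} $camera_name]} {
        error "Camera name \"$camera_name\" contains unsafe characters."
    }
    file mkdir [file join $master_dir $camera_name]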

The Videoarchiver's recording process is designed to provide continuous, clock-synchronized recording of video to disk. It is not designed to provide simultaneous viewing of video during recording. Nevertheless, the MRec button will open a recording monitor window that will display a delayed version of the video being produced by the camera. The delay will be five or six seconds, and is the time it takes the camera to compress a video segment, and for the Videoarchiver to download and write the segment to disk. The recording monitor window operates for only one camera at a time. We can press the MRec buttons of consecutive cameras and have the recording monitor switch between them as we go. Close the recording monitor when you are done viewing the video: the fewer processes the Videoarchiver has to manage during continuous recording, the more reliable it will be.


Figure: Two Animal Cage Cameras (A3034B). Left: white LEDs on dim. Right: infrared LEDs on full power, seen through the infrared blocking filter of our mobile phone camera.

If we unplug a camera during recording, the Videoarchiver will attempt to re-start recording some time later, as dictated by its restart_wait_s parameter. Any time communication with a camera takes longer than the time given by the Videoarchiver's connection_timeout_s parameter, the Videoarchiver will assume that the camera needs to be re-started. When recording is interrupted, we say it is stalled, and the Videoarchiver marks the state of the camera as "Stalled". In Videoarchiver 11, the restart wait is thirty seconds, and the connection timeout is three seconds. We can view and adjust these parameters, and other Videoarchiver parameters, in the Configuration Panel, which we open with the Configure button.


Figure: The Videoarchiver Configuration Panel.

The Videoarchiver keeps a log of all the times a camera stalls, and all the times it is able to re-start recording. We can view this log with the View_Restart_Log button, and clear the log with the Clear_Restart_Log button. The Save button saves the Videoarchiver parameters, but does not save the camera list.

The Q button provided for each camera queries the camera for its log files and prints them out to the text window. We can use these log files to track down problems with compression and communication. The R button allows us to reboot the camera without disconnecting its power. The U button causes the Videoarchiver to update the camera's firmware. If you see errors that are not timeout errors when you communicate with a camera, it could be that the camera firmware needs to be updated to be compatible with your version of the Videoarchiver. In that case, press the U button. The update takes less than ten seconds and does not require a re-boot.

We construct a new camera list with the Add_Camera button. We can delete cameras from a list with the X button at the end of each list entry. We can save and load lists to and from disk. The lists are saved as text files containing a sequence of Tcl commands that configure the Videoarchiver's internal camera list. When we open the Videoarchiver, it does not load a camera list automatically. We must load the list with the Load_List button, having previously saved it with the Save_List button. You can save any number of different camera lists, but if you want to load a particular camera list when you open the Videoarchiver, make sure cam_list_file in the Configuration Panel points to your camera list, and press the Save button to save the Videoarchiver configuration.
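
A saved camera list is therefore just a Tcl script. The actual command names are internal to the Videoarchiver, so the following file contents are a hypothetical illustration of the format, not the real commands.

    # Hypothetical camera list file: a sequence of Tcl commands that
    # rebuild the internal camera list when the file is loaded. The
    # command name and arguments are invented for illustration.
    add_camera -name "0157" -addr "10.0.0.157" -version "A3034B-HR" -rotation 0
    add_camera -name "0158" -addr "10.0.0.158" -version "A3034B-LR" -rotation 180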

We turn on and off the camera's built-in white and infrared illumination with the Wht and IR buttons. We can set them to any one of six power levels, zero to five, where zero is off and five is full power. In the future, the Videoarchiver will provide an auxiliary twenty-four hour cycle program that allows us to set the dimming and brightening of white and infrared LEDs to simulate day and night.

Time Synchronization

The files recorded to disk are H264-compressed videos stored within MP4 containers. The files are saved in a subdirectory of the master directory. You specify the master directory yourself, but the Videoarchiver creates the subdirectories itself. If the directories already exist, it re-uses them. There is one subdirectory for each camera, named after the camera's name in the Videoarchiver camera list.

The video files are named in the format Vx.mp4, where x is a ten-digit Unix time. The first frame of the video was generated by the camera at the start of this Unix time, as reported by the clock on the data acquisition computer. Video file V1525694641.mp4 has Unix time 1525694641, which is "Mon May 07 08:04:01 EDT 2018". The video contains twenty frames per second. Its first frame was generated some time between zero and one twentieth of a second after the start of Unix second 1525694641. Because the videos are time-stamped in this way, and the frames within the video arrive at a constant rate, we can navigate to a particular time within a video by counting frames. The time defined by the video file is what we call video time.
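
The following Tcl sketch recovers the start time from a file name and finds the frame that corresponds to a moment within the video; the variable names are ours.

    # Sketch: extract the Unix start time from a video file name, and
    # compute the frame index of a moment 3.65 s into the file at
    # twenty frames per second. Variable names are illustrative.
    set fname "V1525694641.mp4"
    regexp {^V(\d{10})\.mp4$} $fname -> start_s
    puts [clock format $start_s]        ;# prints the start time in local time
    set frame [expr {int(3.65 * 20)}]   ;# frame 73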

We also have computer time, which is the time on the data acquisition computer clock. The computer time may or may not be kept accurate by a network time server. Whether the computer clock is accurate or not, it is what we use to set the time on the camera clocks, and it is the camera clock that dictates the segmentation of video at one-second boundaries and assigns the Unix time of the segment name. The Videoarchiver attempts to keep all the camera clocks synchronized with the computer clock to within ±25 ms. It does this by correcting the camera clock by a few tens of milliseconds every time it starts a new recording file. With record_length_s set to 600, this correction will take place every ten minutes, and each video file has length six hundred video-time seconds.

Meanwhile, we might be recording biometric signals with SCTs and an Octal Data Receiver (ODR). The ODR has its own temperature-compensated clock that keeps time to ±1 ppm. The Neuroarchiver records biometric signals in NDF files that are time-stamped in the same way as the video files. The default Neuroarchiver output file is in the format Mx.ndf, where x is the Unix second that began immediately before the first sample recorded in the NDF file. Because the NDF recordings contain clock messages from the ODR, we can navigate to a particular time within an NDF by counting clock messages. The time defined by the NDF file is what we call signal time.

The computers used to record video and telemetry from laboratory animals are not usually connected to the internet at large, and so do not have access to a network time server. A computer clock left to run without network correction will, in our experience, drift by ±1 s/day, or 10 ppm. The ODR clock is ten times more accurate than most computer clocks. We might think to correct the computer clock using the ODR clock. But correcting a computer clock is a delicate business. When we move the clock backwards, some actions that have already taken place will be presumed not to have taken place, and will be repeated. When we move the clock forwards, some actions that have not taken place will be presumed to have taken place, and will be skipped. When a computer adjusts its time to match a network, it does so by slowly speeding up or slowing down. Even if we were willing to embark upon the development of such a synchronization algorithm for the data acquisition computer, what would we do when we wanted the computer synchronized with respect to absolute time with the help of a network server? And in any case, even a 1 ppm drift is 100 ms per day, and after a year, this amounts to half a minute. Even the ODR clock will be wrong by a significant amount eventually. If our objective is to synchronize our recordings to an absolute universal time, we must have access to a network time server. But that is not our objective.

Our objective is to synchronize signal time with video time so that we can view signals and video simultaneously in the Neuroarchiver, and be sure that the two are simultaneous to within ±50 ms. It does not matter to us that the computer time is wrong, so long as video time and signal time are synchronized with one another. Our strategy is to synchronize both video time and signal time with computer time. As we have already mentioned, we synchronize the signal time on the camera clocks every ten minutes by default. The Neuroarchiver, meanwhile, by default records one-hour NDF files, as measured by counting ODR clock messages. When the Neuroarchiver has received exactly one signal-time hour of data, it stops recording and waits until the start of the next second of the computer clock. When the new second begins, the Neuroarchiver immediately resets the ODR clock and begins recording another signal-time hour of data, naming the new NDF file after the new second. If the computer clock is a fraction of a second faster than the ODR clock after one hour, this process results in a minimal loss of data. But if the computer clock is a fraction of a second slower, we will lose up to one second of data every hour, and consecutive NDF file timestamps will differ by 3601 s instead of 3600 s.
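
We can detect such lost seconds by scanning the NDF file names: consecutive timestamps of one-hour files should differ by exactly 3600 s. Here is a minimal Tcl sketch, assuming the files sit in one directory; the directory name is ours.

    # Sketch: report NDF recordings whose timestamps do not follow
    # their predecessors by exactly 3600 s. A gap of 3601 s indicates
    # a lost second, as described above. Directory name illustrative.
    set times {}
    foreach f [glob -nocomplain -directory "/data/ndf" M*.ndf] {
        if {[regexp {^M(\d{10})\.ndf$} [file tail $f] -> t]} {
            lappend times $t
        }
    }
    set times [lsort -integer $times]
    for {set i 1} {$i < [llength $times]} {incr i} {
        set gap [expr {[lindex $times $i] - [lindex $times $i-1]}]
        if {$gap != 3600} {
            puts "Gap of $gap s before M[lindex $times $i].ndf"
        }
    }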

During the recording process, the ACC is recording its own live, uncompressed video stream in one-second, time-stamped segment files. It is also running several parallel compression processes that read in the un-compressed segments and produce compressed segments, each with a key frame at the beginning, so each may be played separately. The Videoarchiver queries each camera frequently enough that as soon as a new compressed segment becomes available, the Videoarchiver downloads the segment to the data acquisition computer and deletes the segment from the camera. Every transfer_period_s, the Videoarchiver adds the new segments to its existing video file, until the file is, by default, ten minutes long in video time.

When we concatenate the segments into a longer video, the key frames remain in place, and we can navigate quickly and accurately to any one-second boundary in video time. The video file V1525694641.mp4 was constructed out of 1-s segments. It contains a key frame at the start of every second of video time. The time on the phone clock at the beginning is 08:04:01 and at the end is 08:05:01. The file itself is 7.4 MBytes, or 120 kBytes/s on average. When we are viewing video and signals in the Neuroarchiver, we must use a playback interval length that is a multiple of one second, or else the video time navigation will be inaccurate and slow. In order to get to a half-second video time, the video player must find a key frame preceding the half-second time, play the video from the key frame until the half-second time to reconstruct the image at the intermediate moment, and then stop. While it is doing this, it either displays the unwanted video, or it displays a flat color. Neither behavior makes for easy viewing.
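
One way to concatenate segments without re-encoding, so that the key frames remain in place, is ffmpeg's concat demuxer, called here from Tcl. The Videoarchiver's actual procedure may differ, and the file names are ours.

    # Sketch: concatenate one-second segments into a single file with
    # ffmpeg's concat demuxer, copying streams so key frames survive.
    # File names are illustrative; the Videoarchiver's own method may
    # differ.
    set f [open "segments.txt" w]
    foreach seg [lsort [glob -nocomplain seg_*.mp4]] {
        puts $f "file '$seg'"
    }
    close $f
    exec ffmpeg -loglevel error -y -f concat -safe 0 -i segments.txt \
        -c copy V1525694641.mp4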

File Sizes

To investigate the effect of resolution and frame rate upon recording file size, we set up a prototype ACC to record video of one or two people moving about on a couch, in an attempt to simulate a close-up view of animals in a cage. The following table gives measured values of the network bandwidth required to live-stream un-compressed video from the camera, and the size of the video when compressed with our standard compression quality.

Video                   Live Streaming Bandwidth   H264 Compressed Size
                        (MBit/s)                   (MByte/min)
410 × 308 at 20 fps     5                          1.1
410 × 308 at 40 fps     11                         1.1
820 × 616 at 20 fps     15                         3.1
820 × 616 at 40 fps     17                         3.8
1640 × 1232 at 10 fps   16                         23
1640 × 1232 at 20 fps   18                         26
Table: Live Streaming Bandwidth and Compressed Video Size for Various Video Resolution Settings. When live-streaming, the ACC transmits individual frames compressed with JPEG, but with no inter-frame compression. The compressed video is produced by the H264 algorithm with crf=23.

The compressed video size does not increase in proportion to frame rate. The H264 compression carries information from one frame to the next, so that each new frame is a list of modifications to the previous frame, rather than a new presentation of an entire frame. When we navigate within a video, we move first to a key frame, and then play forward. The ACC video files have key frames at every one-second boundary of video time, and by synchronization, these one-second boundaries are simultaneous with Neuroarchiver signal time to ±50 ms.

The record_length_s parameter sets the length of the recorded video files. When we add new segments to the end of a video file, the process is not as simple as adding new data to an NDF file. We cannot simply write the new segments onto the end of the old file. Instead, we must read the entire existing file into memory, followed by the new segments, and concatenate them to create a new, self-consistent video file, with which we replace the original. The longer the video file becomes, the more effort we must spend to extend it with new segments. On the other hand, the shorter the video files, the larger the number of files we have to deal with in our file system. We recommend that you leave the recording file length at 600 s, which is the default. This length limits the number of video files to six per hour per camera, but at the same time avoids copying longer files. The length of time between segment transfers is also important, because the longer we wait to transfer segments, the fewer times per hour we have to copy the video file. We recommend a transfer_period_s of 60. The transfer process takes roughly one second with a five-minute video file. If we are recording from ten cameras, the transfer time amounts to ten seconds. If we attempted to transfer every ten seconds, we would have no time left for downloading segments. By transferring every sixty seconds, the transfer time is kept to less than 20% of the time available.
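
The arithmetic behind this recommendation is simple, as in the following sketch:

    # Sketch: fraction of time occupied by transfers for the numbers
    # quoted above: ten cameras, roughly one second per transfer, one
    # transfer per camera every sixty seconds.
    set num_cameras 10
    set transfer_time_s 1.0
    set transfer_period_s 60
    set duty [expr {100.0 * $num_cameras * $transfer_time_s / $transfer_period_s}]
    puts "Transfers occupy [format %.1f $duty]% of the available time"   ;# 16.7%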

Exposure Compensation

The camera adjusts its exposure time automatically to suit the ambient illumination. As we describe in more detail elsewhere, we can control how the camera chooses its exposure time, so as to favor bright or dark parts of the image. In the Videoarchiver, set the EC value to something between -10 and +10 to make the image darker or brighter respectively. The exposure compensation value is applied only when you start live capture or start recording; changing it at other times has no effect.

Color Balance

The ACC comes with a lens that has no infrared blocking filter. It can see in the light of its own infrared illumination, or its own white illumination. As we describe in more detail elsewhere, a mixture of infrared and white light disrupts the color fidelity of the image. In particular, green objects appear red or blue. In a room with fluorescent and white LED lighting, the camera colors will be accurate. When the lights are off and we use the ACC's infrared illumination, the image will be monochrome. You can force the image to be monochrome by setting the saturation to −100. We achieve normal color balance with saturation set to zero, and amplified color with saturation set to +100.

Design

Videoarchiver.tcl: The Videoarchiver Tool script, in TclTk.
Raspberry Pi: Home page for embedded Linux modules.
Camera V2: Manual for the Camera Module Version Two with IMX219 image sensor.
raspivid: The command-line utility provided by the Raspberry Pi to control the IMX219 image sensor.
MPlayer: The player we use to display camera video.
ffmpeg: The video encoder we use to generate video files.

Troubleshooting

The Verbose checkbutton turns on additional reporting in the Videoarchiver's text window. We get to see the temperature of the microprocessor on the camera, the time taken to download each segment, and the time by which the incoming segments lag behind the local clock. This lag is an indication of how well compression is proceeding on the camera. If the camera is overheating, or activity in its field of view is excessive, it is possible that the camera will not be able to compress the video stream in real time. The lag should be between four and five seconds. If it is greater than ten seconds, or is increasing steadily, some problem is slowing down either the compression on the camera or the transfer of the compressed segments to the data acquisition computer. The Q button, when provided, downloads and prints out a set of log files from the camera. These files can offer clues as to what might be going wrong when video display or recording fails.

Coming Soon

We plan to enhance the Videoarchiver in the near future with the following features.

  1. The Videoarchiver will provide an auxiliary twenty-four hour cycle program that allows us to set the dimming and brightening of white and infrared LEDs to simulate day and night.

Version Changes

Here we list changes in recent versions that will be most noticeable to the user. You will find the source code in the Tool directory of the LWDAQ distribution.