Animal Cage Camera (A3034)

© 2018-2019 Kevan Hashemi, Open Source Instruments Inc.
© 2018-2019 Michael Collins, Open Source Instruments Inc.

Contents

Description
Power Supply
Set-Up
Lenses
Exposure Compensation
Color Balance
Design
Modifications
Development

Description

[04-MAY-18] The Animal Cage Camera (ACC) works with the Videoarchiver Tool to provide video recording and monitoring of small animals in cages. The A3034A is a rectangular circuit board that spans the width of a mouse cage, supported by four legs on its corners. The A3034A camera is at the center of the board, looking down into the cage, and equipped with a fish-eye lens. Around the edge of the circuit board are twelve white LEDs and twelve infra-red LEDs that provide variable illumination. An embedded computer on the circuit board provides continuous upload of compressed video, as well as illumination control, both over TCPIP. The embedded computer is a Raspberry Pi with an accompanying camera module. The entire circuit runs on a single 24-V power input, and communicates through one RJ-45 socket over a local area network.


Figure: Animal Cage Camera (A3034A). The A3034A mounts by cable ties to the pipes behind an IVC mouse cage, as shown here. The Ethernet socket is at the rear, on the embedded computer.

The A3034 image sensor is the IMX219 by Sony Semiconductor. Its diagonal is 4.6 mm. It provides an array of 3280×2464 pixels. By default, the Videoarchiver instructs the A3034 to bin these pixels into 4×4 blocks so as to produce an array of 820×616 pixels, each 4.48-μm square. By binning the pixels, we reduce the effect of pixel noise, making the sensor more sensitive to objects seen in dim light. We also reduce the network and computational cost of transferring and compressing the images into a synchronous and chronologically exact recording, and reduce the space occupied on disk by the recordings. The Videoarchiver does, however, support 1640×1232 resolution if we want it.
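
To check the arithmetic: 3280 ÷ 4 = 820 and 2464 ÷ 4 = 616, and each 4×4 block of the native 1.12-μm pixels acts as one 4.48-μm square pixel.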

We can equip the A3034 with a variety of lenses, our preference being high-resolution, wide-angle lenses for viewing animal cages at close range. The photograph above shows the DSL215, which provides the A3034 with a 120° field of view. We discuss the choice of lenses below. We ship the A3034 with the lens focused at the optimal range for viewing an animal cage, but if you want to adjust the focus, rotate the lens in its mount.


Figure: The DSL215 Wide-Angle Lens. Turn the lens clockwise to focus on farther objects. When objects at infinity are in focus, there is no point in going farther clockwise.

The Videoarchiver Tool in our LWDAQ software provides live display of camera video on the computer screen, or synchronous recording of camera video to disk. When recording to disk, the Videoarchiver provides an optional time-delayed display of the video on the computer screen, which we call the monitor display. In both cases, the Videoarchiver receives twenty picture frames per second from the camera. When displaying live video, the Videoarchiver directs these frames to an MPlayer window, which displays them immediately. When recording to disk, the Videoarchiver directs the video frames to an ffmpeg segmentation process that stores the data in one-second segments, time-synchronized to within one twentieth of a second of the local computer clock. The Videoarchiver generates the final video recording file by compressing and combining synchronized segments into a longer video that begins and ends on one-second boundaries of the computer clock.
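
The segmentation is an ffmpeg call of the following form. This is a sketch only: the camera address and port are examples, and the exact options evolve with the Videoarchiver. A working command from our development log appears in the Development section below.

ffmpeg -i tcp://10.0.0.234:2222 -framerate 20 -f segment -segment_atclocktime 1 \
  -segment_time 1 -reset_timestamps 1 -c copy -strftime 1 S%Y-%j-%H-%M-%S.mp4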


Figure: Example Animal Cage Camera Recording. A one-minute recording made with an A3034X 150 mm above a table top, exposure compensation set to two (EV = 2), and a DSL215 wide-angle lens. The video is compressed with ffmpeg's H264 codec with "-preset veryfast". The file is 7.4 MBytes long.

The A3034X is designed to operate in a faraday enclosure. Here is the A3034X installed over an Animal Location Tracker (A3032) in an FE2F faraday enclosure, with a plastic tub representing a mouse cage.


Figure: Animal Cage Camera (A3034X) and Animal Location Tracker (A3032C).

In the photograph above, we supply the A3034X with power through a feedthrough at the back of the enclosure, using a black power cable. Its Ethernet connection enters through an RJ-45 feedthrough with a blue cable. The red cable is the LWDAQ root cable of the ALT (Animal Location Tracker) platform. This cable passes through an RJ-45 feedthrough in the back of the faraday enclosure and then to a LWDAQ Driver (A2071E). The gray cable is the logic cable that connects the ALT to an Octal Data Receiver (A3027). The ALT gets its power from its LWDAQ cable. On either side of the platform are two Loop Antennas (A3015C). Their coaxial cables leave through BNC feedthroughs at the back of the enclosure, and so connect to the A3027 antenna inputs.

Power Supply

[13-APR-18] The A3034 requires its own 24-V power supply. We use the same power adaptor we provide with the LWDAQ Driver (A2071E) and Command Transmitter (A3029C). The power socket is marked +24V on the printed circuit board. The power plug is 5.5 mm outer diameter and 2.1 mm inner diameter, with the inner contact positive.


Figure: Power Adaptor. Connect 100-250 VAC, 50-60 Hz with a computer power cable. These power supplies are generic; here is an example data sheet.

We bring this 24-V power into a faraday enclosure with a power jack bulkhead connector. The power socket of the connector faces outward and receives the plug of the power adaptor. Within the enclosure, the cable attached to the bulkhead connector plugs into the A3034.


Figure: Power Cable and Bulkhead Connector.

The A3034 will operate at full performance for input voltages of 18-24 V. For a 24-V input, when streaming live video with ambient lighting, the A3034 consumes 100 mA. When we turn on the visible LEDs to full power, the current consumption increases by 30 mA. When we turn on the infrared LEDs, the current consumption increases by 40 mA.

Set-Up

[23-APR-18] The diagram below shows how one Animal Cage Camera (A3034) and one Animal Location Tracker (ALT, A3032) are powered and controlled for animal tracking. The animal cage goes on the ALT platform, with the camera on top. You will need an Ethernet Hub with at least three sockets so that your computer can communicate with the camera and a LWDAQ Driver at the same time.


Figure: ACC and ALT Connections. The animals themselves live in a cage on the ALT platform within a faraday enclosure. (1) The Neuroarchiver and Videoarchiver tools run on the data acquisition computer. (2) An Ethernet hub allows us to connect three things together with a local Ethernet: the computer, the driver, and the camera. (3) The LWDAQ Driver provides power and communication to the Octal Data Receiver and Animal Location Tracker. (4) The driver and the camera each get their power from an identical 24-V adaptor. (5) The Ethernet hub has its own power adaptor. (6) The Octal Data Receiver picks up the signals from the implanted transmitters and decodes their channel numbers and sample values. (7) Loop antennas pick up transmitter signals. (8) The Animal Location Tracker platform sits under the animal cage and provides fifteen tracker coils for measuring the position of the implanted transmitters. (9) The Animal Cage Camera sits above the animal cage, looking down with a wide-angle lens. (10) CAT-5 Ethernet cables, one of which passes through an RJ-45 feedthrough into the faraday enclosure. (11) Coaxial 50-Ω cable carries radio frequency signals. (12) Feedthrough connectors bring signals and power into the faraday enclosures. (13) A USB cable brings transmitter sample timing and content to the Animal Location Tracker.

The power for the camera enters the faraday enclosure via its own feedthrough. The feedthrough has a two-wire power cable soldered permanently to its inner contacts. We plug the far end of this cable into the camera.

We ship each A3034A/X with its own IP address. A typical address is 10.0.0.234. This address is marked on the circuit board with a label. Our LWDAQ Drivers have a default IP address 10.0.0.37. Set up your computer to use its wired Ethernet connection to communicate with the 10.0.0 subnet. Consult the Configurator Manual for instructions on setting up communication with a solitary LWDAQ Driver. Once you have communication with the LWDAQ Driver, you can unplug the driver and plug in the camera in its place.
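
Before you open the Videoarchiver, you can confirm that your computer reaches the camera by pinging it from a terminal, assuming the factory address:

ping 10.0.0.234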

When you connect power to the camera, it boots up. The visible and infrared LEDs turn on, although you can see only the visible LEDs with your eye. Wait for one minute until the LEDs turn off. Now attempt to communicate with the camera. Open the Videoarchiver Tool in the LWDAQ Program. Before you use the Videoarchiver for the first time, you must download and install the Videoarchiver Libraries. Try one of the On and Off buttons for the visible LEDs. If the camera responds, your connection is working.

The A3034A provides three two-pin plugs on the front side that allow us to connect external LED arrays to replace the LED arrays mounted on the circuit board. An array of five white LEDs connected to P4 will disable LEDs D1-D6, so that these five external white LEDs can provide alternate illumination. The same goes for P5, the second visible LED connection. An array of ten infra-red LEDs on P3 will disable the twelve on-board infra-red LEDs and replace them with ten external LEDs.

[23-APR-19] We can change the A3034 IP address by logging into its Raspberry Pi embedded computer. From a terminal on your data acquisition computer, use ssh, or "secure shell", to log in as the user pi. If the current IP address is 10.0.0.234, you use the command "ssh pi@10.0.0.234" from a Linux or Unix terminal. If you are running Windows, use a DOS command prompt, navigate to the Videoarchiver's Windows/ssh folder, and execute ssh.exe pi@10.0.0.234. Enter the password "osicamera". When you are logged into the Pi, execute "cat /etc/dhcpcd.conf". You should see a print-out of the internet configuration file. Look for an un-commented line like "static ip_address=10.0.0.234/24". This is the line you have to change. You can edit the file with the VIM editor using "vim /etc/dhcpcd.conf", but if you don't know VIM, you will have trouble editing and saving the file. Another option is to download the original file, edit it on your own computer, and upload it again using scp, or "secure copy". Edit the IP address to match your requirement. If you want to put the A3034 on a local area network, you will need to modify the routers value as well. With the new file in place, execute "sudo reboot" to apply the new values. Once you change the IP address, you will no longer be able to contact the Pi with the original IP address. If you forget the IP address, there is no way to contact the Pi other than guessing the IP address.
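
As a sketch of the scp route, assuming the factory address and the osicamera password:

scp pi@10.0.0.234:/etc/dhcpcd.conf .
# Edit the static ip_address line in the local copy, then:
scp dhcpcd.conf pi@10.0.0.234:/tmp/dhcpcd.conf
ssh pi@10.0.0.234 "sudo cp /tmp/dhcpcd.conf /etc/dhcpcd.conf && sudo reboot"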

[09-MAY-19] We can record from more than one ACC on the same data acquisition computer, but we must be wary of using up the entire bandwidth of our Ethernet connection for the video streams, and of using up the computer's entire processing capacity for the compression of those streams. Even with resolution 820×616 and twenty frames per second, the data rate from each A3034 is around 2 MBytes/s, or 20 MBits/s. Five cameras streaming images across a 100-Mbit/s Ethernet will consume the entire Ethernet bandwidth. Either we must make sure we have Gigabit Ethernet, or we must install the cameras on separate networks. Another option is to reduce the resolution of the camera images to 410×308, which will reduce the network bandwidth by a factor of four. If you want to try this lower resolution, let us know and we will add support for it to the Videoarchiver.

Lenses

[07-MAY-19] Our preferred supplier of lenses is Sunex. The table below summarises the properties of a few lenses when used with the A3034's IMX219 image sensor. This sensor is 3.7 mm wide and 2.8 mm high. The horizontal field of view of a lens when combined with the IMX219 image sensor is the angular field of view of the camera along the 3.7-mm width of the sensor. The diagonal field of view is the field of view along the 4.6-mm diagonal of the sensor. The image formed by a lens is always a circle. If the circle just fills the width of the sensor, our image will have dark corners, and the horizontal and diagonal fields of view will be the same. If the circle just fills the entire sensor, the diagonal field of view will be greater than the horizontal. We prefer to equip the A3034 with a lens that fills the entire sensor with its image.

Lens     Effective Focal   Horizontal Field   Diagonal Field   Comments
         Length (mm)       of View (deg)      of View (deg)
DSL212   2.0               102                140              Full sensor image, cost $50
DSL215   1.6               135                180              Full sensor image, sharp focus, cost $100
DSL216   1.3               187                187              Circular image, sharp focus, cost $100
DSL219   2.0               116                160              Full sensor image, sharp focus, cost $100
DSL224   2.2               98                 134              Full sensor image, cost $50
DSL227   2.0               108                148              Full sensor image, sharp focus, cost $100
DSL853   8.0               24                 32               Full sensor image, sharp focus, cost $50

Table: Summary of Lens Performance in the A3034 Camera. By default, we equip the ACC with lenses that have no infrared-blocking filter. But if you are going to use the ACC to record video in sunlight or incandescent light, consider asking us to provide a lens with infrared blocking to correct the image color balance.

The lenses that provide both a wide field of view and sharp focus cost twice as much as those that provide only one of these two qualities. The DSL212, for example, provides a wide field of view, but at $50, it does not provide resolution to match our image sensor.


Figure: DSL212 Images in White (Left) and Infrared (Right) Light.

The DSL219, on the other hand, provides sharp images and great depth of field over its 116° field of view, but at a cost of $100. We include a lens like the DSL219 with each A3034 camera we ship. If you have space to look down upon a rectangular animal cage from a height of 30 cm, this is the lens we recommend.


Figure: DSL219 Images in White (Left) and Infrared (Right) Light.

The DSL227 provides a wide field of view, splendidly sharp focus, and great depth of field. It is also a $100 lens. If you want to place your lens close to a rectangular animal cage, this is the lens we recommend.


Figure: DSL227 Images in White (Left) and Infrared (Right) Light.

We suggest you consider the depth of field and angular field of view your application demands, and then we will work with you to choose a lens that makes the best use of the image sensor, providing bright, sharp images of your animals.

Exposure Compensation

[08-MAY-19] The A3034 camera adjusts its exposure time automatically to suit the illumination in the image. But it can favor the dimmer parts of the image or the brighter parts of the image, and we control which parts it favors by setting its exposure compensation value.


Figure: Images with Exposure Compensation Value Minus Ten (Left), Zero (Center), and Plus Ten (Right). Note that the color balance in this image is poor: we have natural light entering the windows of our office, bringing infrared light into the field of view. The shirt I am wearing is in fact blue, not purple.

The Videoarchiver allows us to set the exposure compensation value to any integer between −10 and +10 with an entry box, at the time we start recording. Once recording begins, we cannot adjust the compensation value without stopping and starting again. We recommend an exposure compensation of +4 for looking at animals in dimly-lit cages, but we leave it to the experimenter to find the value that gives the best contrast for animal viewing.

Color Balance

[12-MAY-19] By default, we equip the ACC with a lens that focuses images of both visible and infrared light. In white light, such as that provided by the ACC's white LEDs, the IMX219 image sensor provides bright, accurate color images like the one shown on the left below.


Figure: Images of Brightly-Colored Object in White LED Light (Left), Infrared LED Light (Center), and White Plus Infrared Light (Right).

In infrared light, the camera provides a monochrome image. Its red, green, and blue pixels are all equally sensitive to infrared light, so no colors emerge in infrared illumination. When we mix the two, we get shades of red and blue, but green objects appear gray. Sunlight and incandescent lights provide a mixture of visible and infrared light. The ACC with its standard lenses provides poor color balance when viewing objects in natural light or in rooms lit with light bulbs. We can, however, equip the ACC with an infra-red blocking lens if your application uses natural light exclusively.


Figure: Relative Response of Red, Green, and Blue Pixels versus Wavelength for IMX334C Sony Semiconductor Image Sensor.

We do not have the relative response of the IMX219C image sensor, but we expect it to be much the same as that of the IMX334C, shown above. The A3034A's infra-red emitter is the APT2012F3C, with peak emission wavelength 940 nm. Our white LEDs are the L130-2780, which appears to the human eye to be the same color as a black body at 2700 K. But the actual emission spectrum of the LED is not a black-body emission spectrum, as shown below.


Figure: Spectra of Various White LEDs.

The white LED is actually a blue LED covered with yellow phosphor. The result is a combination of blue and yellow light that appears white to our eyes. The colors of objects in this light may not be the same as they would appear in incandescent or fluorescent light.

Design

S3034_1: Animal Cage Camera Schematic.
A303401A.jpg: View of A303401A printed circuit board.
A303401A.zip: Gerber files of PCB for A3034X.
A303401B.zip: Gerber files of PCB for A3034A.
A303401B Top: Drawing of A303401B printed circuit board top-side.
A303401B Bottom: Drawing of A303401B printed circuit board bottom-side.
Raspberry Pi: Home page for embedded Linux modules.
Camera V2: Manual for the Camera Module Version Two with IMX219 image sensor.
raspivid: The command-line utility provided by the Raspberry Pi to control the IMX219 image sensor.
MPlayer: The player we use to display camera video.
ffmpeg: The video encoder we use to generate video files.
Code: Animal Cage Camera Programs.

Modifications

[11-MAY-18] The A303401A circuit board requires that we displace L1 and add wire links to accommodate an inverted footprint. The result is our A3034X with half and full-power visible illumination and full-power infrared illumination.

For the A303401B we propose to add a bridge rectifier to protect the circuit from reversal of its 24V power supply. We will populate the four-bit DACs to give full control of brightness of both LED arrays. We must add a couple of 0-V pads for scope probes, and test points for all signals. Move vias at least 25 mils from pads.

[21-MAR-19] We have a new layout designed for mounting behind a cage in an IVC rack, by strapping to the two vertical pipes behind the cage. The A3034A is 140 mm × 100 mm. It provides the same visible and infra-red LEDs as the A3034X. Its DAC resistor arrays are fully-populated. The embedded processor sits in an enclosure mounted on the bottom-side of the board, while the camera looks out through the top-side. The LEDs are mounted on the top-side as well. We forgot to include a bridge rectifier to protect the converter. We add three two-pin connectors P3-P5 that allow us to add external infra-red and visible LED arrays to take the place of those on the circuit board.

Development

[25-MAR-18] We assemble the first prototype A3034A with circuit board A303401A. At power-up, the white LEDs shine brightly for less than a second, then go out. Two resistors overheat and burn. We find that U1, the current mirror, is suffering from thermal run-away. Our original circuit runs 2 mA through U1-6 at maximum brightness and expects 2 mA to flow through U1-3 as well. The two transistors are in the same SOT-323 and our assumption was they would remain at the same temperature. If U1-3 heats up, we expect U1-6 to heat up too, dropping its base-emitter voltage, and so controlling the current through U1-6. The base-emitter voltage drop for a given collector current decreases with temperature by roughly 2.4 mV/°C, as we show for diodes here. We find that for currents larger than 500 μA into U1-6, the current through U1-3 increases during the first few seconds. The LEDs turn off because the voltage at U1-3 drops below the minimum 18 V required to provide current to the LEDs through Q2 and Q3. Instead, current flows through the base junctions of Q2 and Q3 into U1-3. When we have 20 mA flowing through U1-3 with 10 V across it, power dissipation in U1-3 is 200 mW, which exceeds the maximum for the UMX1 dual transistor.

[26-MAR-18] We change R1-R4 to 100 kΩ, 50 kΩ, 27 kΩ, and 14 kΩ respectively. At full brightness we have 400 μA flowing through U1-6. We remove Q3. We are powering only D1-D6. We have R6 = 270 Ω and R5 = 2.2 kΩ. We observe 1.7 V across R5, which implies 770 μA through U1-3. We do not have thermal run-away, but the U1-3 current is twice that of U1-6, which implies that the junction of U1-3 is around 7°C hotter than that of U1-6 (VT ln(770/400) ÷ 2.4 mV/°C ≈ 7°C). We assume the voltage across R6 is also around 1.7 V, so we have 6.3 mA flowing through the white LEDs. If Q2, a ZXTP2025F, has typical current gain 380, we expect base current Q2-3 to be 20 μA ≪ 770 μA. We could decrease R6 to 100 Ω and so increase the LED current to 10 mA. The power dissipation in R6 will then be 10 mW, which is fine.

[27-MAR-18] We have R1-R4 all driven by the same 3.3 V and their resistance in parallel is 7.2 kΩ. The voltage across them all is 2.7 V for a current of 370 μA. Base-emitter voltage drop is 0.61 V. We have R6 = 100 Ω. Voltage across R5 = 2.2 kΩ is 1.5 V for 680 μA. Voltage across R6 is 1.4 V for 14 mA. We remove R6 and still see 1.5 V across R5, suggesting the base current drawn by Q2 is negligible.

We replace R1-R4 with a single 18 kΩ and see 2.8 V across it for 150 μA. We have R5 = 5 kΩ and 0.8 V across it, so 160 μA. The voltage across R6 = 100 Ω is also 0.8 V for 8 mA into the LEDs. We load Q3, D7-D12, and R7 = 100 Ω and see 8 mA flowing into the new diodes. The voltage drop across both chains of LEDs is 16 V for average forward drop of 2.7 V per diode. We load 5 kΩ for R15 and 18 kΩ for R10, to which we connect 3.3 V. We load D15-D26. We get 8 mA through the twelve infra-red diodes. Pin Q5-1 is at 15 V, making the average forward drop of the diodes 1.25 V.

We test the visible and infra-red illumination for image-taking in a cage. The visible illumination is bright. Our visible LED is the white L130-2780 of the Luxeon 3014 series. It is a 2700K warm white emitter in a P1206 package. The infra-red illumination is too dim for us to obtain a blob image of a toy mouse. The infra-red LED is the XZTHI54W. It is an 880-nm emitter in a P0805 package. According to its data sheet, this LED should emit a minimum of 2π × 0.8 mW = 5 mW of infra-red light at 20 mA forward current, or 2 mW at 8 mA. We drop R14 from 100 Ω to 27 Ω and R10 from 18 kΩ to 10 kΩ. We now see 1.0 V across R14, so 37 mA flowing through the LEDs. The LED forward voltage is now 1.38 V. We put an SD445 photodiode up against one of the LEDs and get 3.4 mA ÷ 0.6 mA/mW = 5.7 mW of infra-red light for an input power of 50 mW, or 11%. We drop the current to 10 mA and see 1.9 mW or 13%. We try an HSDL-4400 with 37 mA and get 4.1 mW. We restore the original LED. Our white LEDs at 8 mA give us photocurrent 2.8 mA. Assuming an average wavelength of 500 nm this is 11 mW. The electrical input power is 100 mW, so efficiency is around 11%.

[04-APR-18] We choose new DAC resistor values R4 = 40.2 kΩ up to R1 = 316 kΩ for the visible light control and R13 = 20.0 kΩ up to R10 = 160 kΩ for the infra red light control. Assuming the U1 and U2 base-emitter drop is around 0.6 V and the logic HI is around 3.3 V, we expect the following control currents versus DAC count.
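
For example, a logic HI on R4 alone drives (3.3 V − 0.6 V) ÷ 40.2 kΩ ≈ 67 μA into U1-6, while a HI on R1 alone drives 2.7 V ÷ 316 kΩ ≈ 8.5 μA, so the four bits produce a binary progression of control currents.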


Figure: Control Current versus DAC Count. The visible light control current flows into U1-6. The infrared control current flows into U3-1.

Assuming that the control currents are mirrored exactly by U1 and U3, we calculate the visible and infra-red LED current versus DAC count. We have R5 = R15 = 4.7 kΩ, R6 = R7 = 100 Ω, and R14 = 27 Ω.


Figure: Expected LED Current versus DAC Count. The visible light current flows through two parallel chains of LEDs.

The maximum forward current of our infra-red LED is 50 mA. We expect to be just under the maximum at 44 mA for full brightness. The maximum current through the white LED is 120 mA and our maximum current is 6 mA. We remove our photodiodes D13 and D14 and replace them with phototransistors one hundred times more sensitive to light, and set R8 = R9 = 20 kΩ.

[13-APR-18] We have two A3034X, W0381 and W0382. We are shipping W0381 to ION along with ALT V0385. The Raspberry Pi username is pi@10.0.0.234 and password is "osicamera".


Figure: Shipment 2048, A3034X and Accessories.

In the figure above we see two Ethernet cables and an RJ-45 feedthrough to carry the Ethernet connection from a local area network hub to the A3034X. The power adaptor is in a white box, and its bulkhead connector is in a bag. We have standoffs to raise and lower the camera, cable ties to fasten the Ethernet cable to the circuit board, extra flex cables for the camera connection, and a wider-angle lens for use with the camera.

[24-APR-18] We have two videos of a cell phone clock, one 14-s long, the other 100-s long. We compress both with all eight ffmpeg compression speed settings, which we activate with options like "-preset veryslow". We leave the image quality at its default value, which we specify with "-crf 23".

ffmpeg -i inputfile -c:v libx264 -crf 23 -preset veryfast output.mp4

The "crf" stands for "constant rate factor". When this parameter is 0, the compression is lossless. When it is 51, the quality is the lowest possible with the H264 encoder.


Figure: Compression Time and Compressed File Size for FFMPEG Preset Values. We compress a 14-s MJPEG video and a 100-s MJPEG video of a cell phone clock.

We are surprised to see that veryfast gives the smallest file. We try a 30-s video, half of which is taken up with our hands moving and adjusting the phone under the camera. We use a script stored in Scripts.tcl.


Figure: Compression Time and Compressed File Size for FFMPEG Preset Values. We compress a 30-s MJPEG video with some hand movement. Original file size 59.7 MByte.

We make a 30-s video in which our hands are moving the phone continuously, with a diagram as a background, and repeat our measurement.


Figure: Compression Time and Compressed File Size for FFMPEG Preset Values. We compress a 30-s MJPEG video with continuous hand movement. Original file size 63.8 MByte.

We pick "veryfast" as our preset value. It's three times faster than the default, and the files are the same size or smaller. We expect the maximum size of the compressed videos to be around 150 kBytes/s when many objets are moving quickly, and the minimum size to be 10 kBytes/s when nothing is moving.

[08-MAY-18] We consolidate all scripts into a single directory. We make all ffmpeg and mplayer calls directly from Tcl. A watchdog process, defined in Tcl, runs independently and monitors the segment directory. If the ffmpeg segmentation process is abandoned by the Videoarchiver, the watchdog will terminate the segmenter when there are more than a maximum number of files in the segment directory. We record for fifteen minutes on MacOS and obtain fifteen 20-fps H264 video files each exactly one minute long, each beginning with our cell phone clock at 01 seconds. We do the same thing on Windows, but the files vary in length from 55 s to 65 s. In one example, ffprobe tells us that the video length is 64.1 s, there are 1282 frames, and the frame rate is 20 fps. We combine 9 such videos together to form one of 542.85 s (9:03) duration and 10857 frames at 20 fps. The time on our phone clock is 8:43:12 at the start and 8:52:14 at the end.

With nothing moving in the field of view, our compressed 1-s video segments are 56 kBytes long, with our set-up diagram as a background. With the phone clock in view, they are 62 kBytes long. With our hands spinning the phone the files are 250 kBytes.

To help with off-line development of the Videoarchiver Tool, we implement a virtual video feed in the Videoarchiver that we can activate with virtual_video. The feed reads a video file in the Virtual directory once and streams it to a local TCP port. We use the ffmpeg -re option to request that the input file be streamed at 20 fps, but it appears that this frame rate is not enforced. The files we record with the virtual feed are marked as having twenty frames per second, but they are stretched out in time. A one-minute, 20-fps video loses the first three seconds and lasts for 64 s.

[08-MAY-18] We compress a five-second movie of five white rats moving around in a cage. With the veryfast algorithm, the file is 1.3 MBytes. With veryslow it is 1.2 MBytes. We can crop a video stream with ffmpeg, and extract sections of a video as well. The following command extracts the interval from time 00:00 to 01:17 and preserves only the rectangle with top-left corner x=0, y=100, width 720 and height 900 (pixels).

ffmpeg -i V1.mp4 -ss 00:00:00 -t 00:01:17 -filter:v "crop=720:900:0:100" V1_cut.mp4

In mplayer we can jump to a particular time in a video with:

mplayer -ss 01:30 Blob_Track.mp4

These features will permit us to navigate through video files to particular locations to match EEG recordings. We can operate mplayer in slave mode to have it play video files that do not exist at the time we open the player window.

mplayer -slave -quiet -idle -input nodefault-bindings -noconfig all
loadfile V1525694641.mp4
pausing seek 30 0

We start mplayer in slave mode and tell it to idle when it's done playing a video. We also override all video screen key bindings so the user cannot quit, pause, or otherwise divert the playback with the mouse and keyboard in the video window. We deliver commands via stdin in this example (the keyboard). We load a video file, then seek absolute time 30 s and pause.

[10-MAY-18] We have nine recorded files each nominally 600 s long. Their names all end with 488. According to ffprobe, the frame rate is 20 fps. Eight have either 12004 or 12005 frames and one has 11981 frames, for a total of 108016, an average of 12001.8 frames in each 600-s video.
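
A command of the following form counts frames with ffprobe. The exact options are our sketch, and the file name is an example; decoding every frame makes the count slow but exact.

ffprobe -v error -select_streams v:0 -count_frames \
  -show_entries stream=nb_read_frames -of csv=p=0 V1525694641.mp4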

[07-JUN-18] At ION/UCL we record clear and synchronous video from an A3034X and A3032C of one mouse in a cage.

[12-APR-19] We connect a PT17-21C-L41-TR8 phototransistor to 3.3 V under our desk lamp and see 200 μA of collector current. In overhead lights we see 130 μA, under a cloth 0.0 μA, with the desk lamp directly on top 1.1 mA, and in shadow 10 μA. We have both D13 and D14 on the same side of the A3034B. We load 20 kΩ for R8 and R9.

[19-APR-19] We ship A3034A numbers W0384 and W0385 with IP addresses 10.0.0.235 and 10.0.0.234 respectively to ION. We are left with A3034A number W0386 and A3034X number W0382. There is one more A3034X at ION, number W0381, with IP 10.0.0.234.

[07-MAY-19] Concatenating videos with ffmpeg is not as simple as listing two video files as input and specifying one output file. Here is how we concatenate two mp4 files in which the video is encoded with h264.

ffmpeg -i out1.mp4 -i out2.mp4 -filter_complex "[0:v:0] [0:a:0] [1:v:0] [1:a:0] concat=n=2:v=1:a=1 [v] [a]" \
  -map "[v]" -map "[a]" -c:v libx264 combined.mp4

The "filter_complex" that controls the concatination is the string with lots of brackets inside. The [0:v:0] tells ffmpeg to take the first file (0:) and extract its first video stream (v:0), after which [0:a:0] extracts that same file's audio, and [1:v:0] and [1:a:0] do the same for the second input file. We want a "concat" filter function defined by the string on the right of the first equal sign, so the concat filter is "n=2:v=1:a=1" meaning two input files, one video stream, one audio stream. We have a final [v] [a] to give a name to the video and audio streams produced by the concatination. Later, we have map commands to map the [v] and [a] streams into the output file. And for good measure we use the libx264 codec.

[11-MAY-19] We test the ACC with 410×308, 1640×1232, and 3280×2464 resolution, and frame rates 5 to 40 fps. We find that the Raspberry Pi will not supply the 3280×2464 images. At 1640×1232, recording at 40 fps lags behind the incoming video stream, and we get video files that are shorter than their specified length, losing around four seconds every minute. At 20 fps, recording works fine. We implement rotation of the image stream using the raspivid rotate option. Videoarchiver 3.6 provides menu buttons to set rotation, resolution, frame rate, and exposure compensation.

[21-MAY-19] We enable the A3034's wireless interface by editing /boot/config.txt and commenting out the line that disables the wireless interface.

pi@raspberrypi:/boot $ cat config.txt
start_x=1
gpu_mem=128
dtoverlay=pi3-disable-wifi

We reboot to implement this change. We set the wireless network name and password with the raspi-config utility. The wireless interface connects to our local router, but our connection to servers outside our local network is intermittent. We disable the wired ethernet with ifconfig eth0 down and the wireless interface provides a reliable connection. We use apt-get to update the repository lists, then upgrade. We install ffmpeg and confirm that tclsh is installed. We now find we can use ffmpeg to segment the video stream into one-second mpeg files on the Pi. The following command runs in the background (it needs -nostdin for that) and writes its error-only output to a segmentation log file.

ffmpeg -nostdin -loglevel error -i tcp://10.0.0.236:2222 -framerate 20 -f segment \
	-segment_atclocktime 1 -segment_time 1 -reset_timestamps 1 -c copy \
	-strftime 1 S%Y-%j-%H-%M-%S.mp4 >& segmentation_log.txt &

We then use the following script to compress these segments.

set fl [glob S*.mp4]
set t [clock seconds]
foreach fn $fl {
  puts "compressing [file tail $fn]"
  exec ffmpeg -loglevel error -i $fn -c:v libx264 -preset veryfast V[file tail $fn]
}
puts "compressed [llength $fl] files in [expr [clock seconds]-$t] seconds."

It takes 74 seconds to compress 64 files. We are using only one of the Pi's four cores. We use the following tcl script to compress individual files.

set fn [file tail [lindex $argv 0]]
puts $fn
exec /usr/bin/ffmpeg -i $fn -c:v libx264 -preset veryfast V[file tail $fn] >& L[file root $fn].txt

And call it with this line using four cores.

find ./ -name "S*.mp4" -print | xargs -n1 -P4 tclsh single.tcl

The job is done in 40 s. With two cores, 49 s.

[22-MAY-19] We boot up a new Raspberry Pi with a keyboard, mouse, and monitor. We connect to the Brandeis University wired network. We add a key directory .ssh to the pi home directory, and we write the Videoarchiver's public key to a file authorized_keys. When we connect to the Pi without a password, we pass the Videoarchiver's private key as authorization, which matches the Videoarchiver public key saved on the Pi.
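
The key installation amounts to a few shell commands on the Pi. The permission-tightening lines are our addition, because sshd can refuse keys in files with permissive access; we assume id_rsa.pub is in the working directory.

mkdir -p /home/pi/.ssh
cp id_rsa.pub /home/pi/.ssh/authorized_keys
chmod 700 /home/pi/.ssh
chmod 600 /home/pi/.ssh/authorized_keys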

[23-MAY-19] We compress 74 MPEG 1-s segments on a Raspberry Pi 3B+. With -P1 to -P4 the process takes 97 s, 62 s, 53 s, 50 s. We set up two compression processes running continuously looking for segment files, while ffmpeg creates the segments from the camera stream, all in the Pi's Segments directory. With 20 fps and 820 × 616, the compression keeps up with segmentation. But when we ask for 40 fps, the compression cannot keep up.

[24-MAY-19] We record 60 ten-minute videos overnight and use ffmpeg to measure the duration of each. They are each either 600.20 s or 600.25 s long. We synchronize the W0383 system clock with our laptop clock and measure a 300-ms delay between starting a secure shell and setting the camera clock. We account for this latency and get the two clocks within 50 ms. Five hours later, the camera clock is lagging 1.5 s behind the computer clock.
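
The clock-setting step itself, as a minimal sketch from a Tcl console, assuming key-based login to a camera at 10.0.0.234 and neglecting the latency correction:

exec ssh pi@10.0.0.234 sudo date -s @[clock seconds]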

[30-MAY-19] We measure the time it takes one compressor process on our Raspberry Pi 3B+ to compress one second of video at various resolutions and frame rates. The camera is viewing one person sitting at a desk and performing the measurements. Each measurement we make with fresh video.


Table: Segment Compression Times on Raspberry 3B+ versus Resolution and Framerate.

With three compressor processes, we can double the rate of compression. We want the average compression rate to be greater than one segment per second. We see that 1640 × 1232 cannot be sustained by the 3B+ even with 10 fps.

[31-MAY-19] The size of our 820 × 616, 20 fps video varies with content. In darkness 0.1 MByte/min, slowly-changing illumination but no movement 3.5 MByte/min, steady illumination with some movement 4.5 MByte/min, steady illumination with constant full-field movement of arms and body 8.4 MByte/min. In the last video, the compressed one-second segments varied in size from 100-200 kByte.

When we leave the recording of 820×616 20 fps running for a while, with compression on the camera, we notice it starts to lag behind. With "vcgencmd measure_temp" and "vcgencmd measure_clock arm" we can measure the CPU temperature and clock speed. When we turn on recording, the CPU warms from 37°C to 60°C in a minute. The CPU clock speed is 1.2 GHz. After a few minutes, the CPU is at 82°C and running at 926 MHz. After ten minutes, at 83°C, the clock drops to 600 MHz.
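
To watch the throttling as it happens, we can poll both values in a loop on the Pi; a sketch:

while true; do vcgencmd measure_temp; vcgencmd measure_clock arm; sleep 10; done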

[02-JUN-19] Yesterday we allowed the camera CPU to heat up to 85°C, its clock dropped to 600 MHz. We stopped the Videoarchiver and compression. Today its temperature is 45°C and clock is still 600 MHz. We reboot. Clock frequency remains 600 MHz. We cycle power. Clock frequency is still 600 MHz. We start recording. Temperature is soon 56°C and clock speed has increased to 1.2 GHz.

[03-JUN-19] We run three compressors, -preset veryfast, 820×616, 20 fps, on a Raspberry Pi 3B+ without heat sinks, in a plastic case, board vertical, then add shiny heat sinks and repeat. We record clock speed and CPU temperature.


Figure: Effect of Shiny Heat Sink Upon Warm-Up of Raspberry Pi 3B. Temperature (C) and Frequency (MHz) versus Time (s). Start compression at time zero.

[04-JUN-19] Unless we specify otherwise, all our H264 compression is done with constant rate factor set to its default value of 23, roughly half-way from maximum quality (-crf 0) to minimum quality (-crf 51). We record 410×308 at 20 fps with -crf 23, and monitor temperature and clock speed. We move around in the field of view. The ten-minute videos are around 15 MBytes. We switch to 820×616 at 20 fps, but with -crf 40, and monitor temperature and clock speed.


Figure: Heating with High-Quality, Low-Resolution Video, and Low-Quality, High-Resolution Video. Start compression at time zero. Ambient temperature is 22°C. The Raspberry Pi 3B is in a plastic box, vertical, with shiny heat sinks. HQLR is 410×308 20 fps -crf 23. LQHR is 820×616 20 fps -crf 40.

We compress a 10-minute 410×308 20 fps video from -crf 23 to -crf 40. Size drops from 15 MBytes to 1.5 MBytes. We compress a one-minute 820×616 at 20 fps video V1525694641.mp4 of size 7.4 MByte with -preset veryfast and -crf 23. Speed ×12.1, size 5.9 MByte, output looks the same as the input. We repeat with -crf 30. Speed ×13.8, size 2.5 MByte, quality slightly degraded. With -crf 40, speed ×15.7, size 0.9 MByte, quality greatly degraded, including some compression artifacts. We shrink the image with the following, and get ×27.4, size 2.1 MByte.

ffmpeg -i V1525694641.mp4 -c:v libx264 -preset veryfast -crf 23 -vf scale=410:308 shrunk.mp4

We expand the shrunken image with:

ffmpeg -i shrunk.mp4 -c:v libx264 -preset veryfast -crf 23 -vf scale=820:616 big.mp4

Speed is ×14.0 and size is 4.9 MByte. Quality about the same as -crf 30 on original image with no shrink and expand. We shrink with -crf 10 (speed ×22.0) and expand with -crf 23 (speed ×12.4) and the result is almost as good as the original video.

[05-JUN-19] We find that our Raspberry 3B+ wireless interface is disabled, but there is no instruction in /boot/config.txt disabling the interface. It turns out that the interface is being blocked by rfkill. We unblock with "sudo rfkill unblock 0", reboot, and the wlan0 interface is up. We add the line "dtoverlay=pi3-disable-wifi" to config.txt. Now wlan0 is disabled on reboot.

We want to make 820×616 our default resolution with the IMX219. But if the CPU overheats, we want to kill the compressor processes and re-start them with lower resolution and slightly higher quality. When we display these in the Neuroarchiver, we will scale them by a factor of two.

# Make a list of lines, one per process whose command line contains "compressor".
set plist [split [exec ps -ef | grep compressor] \n]
# The second field of each ps line is the process ID, which we kill.
foreach p $plist {exec kill [lindex $p 1]}

[06-JUN-19] We record from an IMX219 attached to a Raspberry Pi 3B+ with shiny heat sinks. We monitor temperature and clock speed. Our HQLR test is 410×308 20 fps -crf 15. Our MQHR test is 820×616 at 20 fps -crf 23. The 1-s segments for both recordings are the same size: 60 kByte for no movement, 120 kByte for lots of movement.


Figure: Heating with High-Quality, Low-Resolution Video, and Medium-Quality, High-Resolution Video. Start compression at time zero. Ambient temperature is 24°C. The Raspberry Pi 3B+ is in a plastic box, vertical, with shiny heat sinks. HQLR is 410×308 20 fps -crf 15. MQHR is 820×616 20 fps -crf 23.

So far as we can tell, the compression time lag in both recordings never gets above 7 s. The output files are a few seconds shorter than we specify. The verbose output in the Videoarchiver reveals that the transfer process is receiving and transferring segments out of order, so that a later segment arrives first; we close one video with too few seconds and a few jumps.

To support remote compression we now have a directory /home/pi/Videoarchiver that contains three essential files: compressor.tcl, interface.tcl, and videoarchiver.config. The latter contains the line "sensor IMX219" or "sensor OV5647" depending upon the camera module, and another line "platform 3B" or "platform 3B+". Videoarchiver 4.1 uses the configuration file to adjust its resolution settings. The interface script creates a TCPIP server to accelerate communication with the camera. The compressor script is what does the local compression of the segments created by a local ffmpeg segmenter. Both these scripts must be present on the camera for remote compression to work. If they are not present, remote compression will fail, but local compression will work fine.

We test sustained recording with local and remote compression with IMX219/Pi3B and OV5647/Pi3B+. We specify 20 fps. The one-minute files from the IMX219/Pi3B contain 1201 frames, and the ten-minute files contain 12005 frames. Thus the files appear to be 60.05 and 600.25 s long respectively. The frame rate from the IMX219/Pi3B is 20.008 fps. The one-minute files from the OV5647/3B+ are 1294 frames long, or 64.70 s assuming 20 fps.

[07-JUN-19] We connect an IMX219 to our Pi3B+ and record at 20 fps, asking for one-minute videos. One contains 1186 frames, the other 1161, when both should contain 1200 frames. The operating system running on the Pi3B+ has a graphical user interface, desktop, and many other installed features that our minimal-Linux, terminal-interface 3B system does not have. We continue with the A3034A, with the 3B and minimal Linux. The result of "uname -a" on this system is "Linux raspberrypi 4.19.42-v7+ #1219 SMP Tue May 14 21:20:58 BST 2019 armv7l GNU/Linux".


Figure: Requested and Observed Frames per Minute from A3034A. Local compression, -crf 23, Raspberry Pi 3B, IMX219 camera.

We equip our A3034A with a dhcpcd.conf so it first tries to connect to a wired network with DHCP, then falls back to its static address. The wireless interface remains disabled by /boot/config.txt.

[10-JUN-19] We take the SD card out of our IMX219/3B camera and put it into our OV5647/3B+ camera. We request 20 fps and record one-minute videos. Four such videos each have either 1296 or 1297 frames. We take the SD card out of our OV5647/3B+ and put it in our IMX219/3B camera. We obtain 11 one-minute videos at 20 fps and get 1200 or 1201 frames in each. Average is 1201.4. In ten-minute files we have 12004-12006 frames, so ffmpeg says they are 600.25 s long.

We compare heating in a Raspberry Pi 3B with shiny heat sinks for 820×616 20 fps -crf 23 with local and remote compression. Heating with local compression is less than 5°C. With remote compression and no movement in the image, the temperature settles to 80°C and the lag is 4 s. But when we start moving in the field of view again, the temperature climbs to 84°C and clock speed drops to 800 MHz. The lag increases past 15 s and images are arriving out of order.


Figure: Heating with Local and Remote Compression. Start recording at time zero. Ambient temperature is 24°C. The Raspberry Pi 3B is in a plastic box, vertical, with shiny heat sinks. LC3B is local compression, RC is remote compression. Video in both cases is 820×616 20 fps -crf 23.

We prepare a new Raspberry Pi 3B+ for use with the Videoarchiver and the IMX219. We load it into an enclosure and attach copper square heat spreaders and black anodized heat sinks. We connect the SD card to our laptop, create a Videoarchiver directory in the boot partition, and copy into this directory compressor.tcl, interface.tcl, id_rsa.pub, videoarchiver.config (which says what kind of camera we have), and monitor.tcl (which we use to monitor temperature). We move the SD card to the Pi, to which we have a monitor connected. We use raspi-config to enable the camera interface, enable ssh, and set the password of the pi user. We add dtoverlay=pi3-disable-wifi to /boot/config.txt. We add net.ifnames=0 to /boot/cmdline.txt. We move the Videoarchiver directory into the pi home directory. We overwrite /etc/dhcpcd.conf with our dhcpcd.conf. We edit dhcpcd.conf to set the static IP address. We create the folder /home/pi/.ssh and copy id_rsa.pub into this folder, renaming it authorized_keys. We can now get live video, and we can record video with local compression, using Videoarchiver 4.3. We cannot perform a photo download because we don't have tclsh installed. We cannot perform recording with remote compression because we don't have tclsh and ffmpeg installed.
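
As a condensed sketch of the file manipulations, run on the Pi itself; we assume our customized dhcpcd.conf travels in the Videoarchiver directory, and the key installation is as described in the 22-MAY-19 entry above.

echo "dtoverlay=pi3-disable-wifi" | sudo tee -a /boot/config.txt
sudo sed -i 's/$/ net.ifnames=0/' /boot/cmdline.txt
sudo mv /boot/Videoarchiver /home/pi/Videoarchiver
sudo chown -R pi:pi /home/pi/Videoarchiver
sudo cp /home/pi/Videoarchiver/dhcpcd.conf /etc/dhcpcd.conf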

[11-JUN-19] We update Raspbian on our new Pi with "apt-get update" and "apt-get dist-upgrade", which takes thirty minutes. We follow with "apt-get install tclsh" and "apt-get install ffmpeg". We can now record with remote compression. We do so for forty minutes at 820×616 20 fps -crf 23. The four ten-minute files contain 11599, 11563, 11573, and 11555 frames respectively. We repeat for forty minutes at 410×308 20 fps -crf 15. These ten-minute videos have 11503-11552 frames each.


Figure: Heating with Remote Compression, Medium-Quality High-Resolution (MQHR) and High-Quality Low-Resolution (HQLR). Start recording at time zero. Ambient temperature is 24°C. The Raspberry Pi 3B+ in plastic box, vertical, with copper heat spreaders and black heat sinks. The rise in MQHR temperature at 1800 s is us waving our hand in front of the camera for one minute.

[21-JUN-19] We have a Raspberry Pi 3 in plastic enclosure with shiny heat sinks. We try combinations of resolution, frame rate, and quality, and observe heating over ten minutes.


Figure: Heating for Resolution, Framerate, and Quality with Remote Compression. HR = high resolution = 820×616. LR = low resolution = 410×308.