Hello. Your project is interesting but also something of a challenge. Several prototypes will likely be required, and you would need to be prepared to test-run them and provide feedback. This is because there are system and performance constraints that have yet to be clarified.
First and foremost is the architecture of your Meerkat system. An optimal camera interface can only be designed once the Meerkat class system is reviewed. I can imagine the LiDAR calling a method of a new CameraControl (or some similar name) class, passing the desired timestamp of a frame to be saved to disk. The CameraControl class would select the frame with the nearest timestamp. But such a design would depend on the camera subsystem running as a separate thread of control capturing images as frequently as possible. Would the current Meerkat design support this?
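If the Meerkat design can accommodate it, the scheme above might look something like this minimal Python sketch. The class and method names are mine, and the capture call is a stand-in for a real V4L2 read; treat it as an illustration of the nearest-timestamp idea, not a proposed implementation:

```python
import threading
import time
from collections import deque

class CameraControl:
    """Illustrative sketch: a capture thread fills a ring buffer of
    (timestamp, frame) pairs; the LiDAR-facing call returns the frame
    whose timestamp is nearest the requested one."""

    def __init__(self, maxlen=90, capture_fn=None):
        # Ring buffer: once full, the oldest frames drop off automatically.
        self._buffer = deque(maxlen=maxlen)
        self._lock = threading.Lock()
        # Stand-in for a real camera read (e.g. via V4L2).
        self._capture_fn = capture_fn or (lambda: b"")
        self._running = False
        self._thread = None

    def start(self):
        self._running = True
        self._thread = threading.Thread(target=self._capture_loop, daemon=True)
        self._thread.start()

    def stop(self):
        self._running = False
        if self._thread:
            self._thread.join()

    def _capture_loop(self):
        # Separate thread of control capturing as frequently as possible.
        while self._running:
            frame = self._capture_fn()
            with self._lock:
                self._buffer.append((time.monotonic(), frame))

    def frame_nearest(self, timestamp):
        """Return the (ts, frame) pair with ts closest to `timestamp`."""
        with self._lock:
            if not self._buffer:
                return None
            return min(self._buffer, key=lambda tf: abs(tf[0] - timestamp))
```

The LiDAR side would then call `frame_nearest(ts)` and hand the returned frame off to be written to disk.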
Power requirements need to be considered. It would be ideal to have the camera taking timestamped images continuously. However, this will significantly increase power consumption.
We would also need to test that the system's performance, especially with regard to the LiDAR, is not impacted by the additional load of driving the camera and capturing frames. CPU, interrupts, RAM, etc. all need to be considered. I have no idea at present of the spare resource capacity of your Meerkat system.
The new camera control subsystem could call V4L2 directly. But perhaps development could be accelerated, with potential benefits in flexibility and hardware compatibility, if the FFmpeg video processing and device control libraries were installed. These options need to be considered in light of the packages and kernel modules already installed from the Stretch distro.
Being able to stream images from the camera module could be achieved by a stand-alone streaming service run only for that purpose when needed. Small, free, open source software already exists to provide this capability. I would propose installing such a "shrink wrapped" product to be used when aligning the device.
I hope that I have conveyed how this project could be a work in progress for several weeks as various design options are considered and trialed. This project will involve some learning-by-doing. I am prepared for this if you are. Are you interested in discussing this further?
Hi Christopher,
Thanks for your interest!
Yes, separate thread of control capturing images as frequently as possible. Our process could "pipe" the timestamp/trigger into a reduced version of motion (by ccrisan)?
Power is not an issue.
I could test additional load impact. Loads of CPU remaining, but RAM may be an issue. "motion" is efficient. Is that what you have used?
Hi Peter,
I was not familiar with Motion. And it isn't clear to me why ccrisan forked it. But I still have a couple of days to look into it. My knee-jerk response is that it is probably overkill for a simple ring buffer of 320x200 YUV frames at 30fps over a few seconds at most. (By the way, such a ring buffer could consume several megabytes; almost 2MB per second of duty cycle. Could that exhaust available RAM?) But to be fair, I haven't yet looked at Motion's daemon. I expect that hitting the V4L2 or FFmpeg libraries should be much more efficient than interfacing to any daemon, but I shouldn't preempt that decision. (I'm thinking, perhaps Motion provides a simple camera interface library which would be very convenient.) So I will look into Motion over the next couple of days. Thanks for prompting me to look at it.
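For concreteness, the buffer-sizing arithmetic works out as follows. The pixel formats here are my assumptions: 1 byte/pixel (Y plane only) reproduces the "almost 2MB per second" figure, while full-colour YUV formats would be higher.

```python
# Back-of-envelope RAM sizing for the frame ring buffer.
# All bytes-per-pixel figures are assumptions, not measurements.
width, height, fps = 320, 200, 30

formats = {
    "Y plane only (1 B/px)": 1.0,       # ~1.9 MB/s
    "YUV 4:2:0 planar (1.5 B/px)": 1.5, # ~2.9 MB/s
    "YUYV 4:2:2 (2 B/px)": 2.0,         # ~3.8 MB/s
}

for name, bpp in formats.items():
    mb_per_s = width * height * bpp * fps / 1e6
    print(f"{name}: {mb_per_s:.2f} MB/s, {mb_per_s * 5:.1f} MB for a 5 s buffer")
```

So even in the worst of these cases, a few seconds of buffer stays under ~20MB, which puts the RAM question in perspective.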
Christopher,
Not using much RAM at the moment (133/748M), so the GPU could possibly be given 512M, leaving over 300M for the buffer.
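For reference, if Meerkat runs on a Raspberry Pi (which the Stretch image suggests, though that is my assumption), the memory split Peter mentions is set via `gpu_mem` in `/boot/config.txt`:

```
# /boot/config.txt -- Raspberry Pi memory split (assumption: Pi hardware)
# Value is MB allocated to the GPU; takes effect after a reboot.
gpu_mem=512
```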
No probs. Thank you
Hello,
We have chosen a bid/proposal based on general experience, understanding of the problem and solution, experience with embedded image streaming, and time for completion.
Unfortunately, we won't be progressing with your proposal.
Thank you for your time!