Meerkat Revision (1b)
Featured

Bids
4
Status
Ended

Location
Remote (anywhere)
Position
Freelance / Casual

Daintree Systems is looking for someone with C++ ability and availability for a short-term contract (1 to 2 weeks). Briefly, we need to capture camera images of passing vehicles when they are detected by a LiDAR connected to a Raspberry Pi running Linux.

Please see the uploaded file.

File attachments
Requested by
Peter B.
Joined Aug 2020
Submitted 21 Aug 2020 at 07:49
Expired 3 years ago
Bidding guide $1500 - $3000
Average bid $2,525.00

Dub Bartolec

I can help you with this, but we need to discuss some details, as the expected bidding guide is not realistic. Here are my details: www.proiotware.com

Regards

Dub

(0)
$100
Deliver in 12 days
Scholarshelp

Hello

(0)
$2600
Deliver in 5 days
Jeremy Ardley

Hi,

I'm a long-term C++ developer in real-time embedded systems, primarily using pthreads on Linux-based OSes to handle real-time digital media.

I am proposing a stand-alone application that can be run as a service. The application works directly with the camera using libmmal and receives triggers from a separate LiDAR application via a pipe or IP port.
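
As a rough sketch of what the pipe option might look like (the FIFO path and the one-timestamp-per-line message format here are illustrative assumptions, not a defined interface):

```cpp
// Hypothetical trigger reader: the LiDAR application writes one microsecond
// timestamp per line into a named pipe, e.g.
//   mkfifo /tmp/lidar_trigger
//   echo 1597996175000000 > /tmp/lidar_trigger
#include <cstdint>
#include <cstdio>
#include <fstream>
#include <string>

int main() {
    std::ifstream fifo("/tmp/lidar_trigger");   // blocks until a writer opens the FIFO
    std::string line;
    while (std::getline(fifo, line)) {          // ends at EOF when the writer closes
        int64_t triggerUs = std::stoll(line);
        std::printf("LiDAR trigger at %lld us\n", (long long)triggerUs);
        // ...hand triggerUs to the image selection logic...
    }
    return 0;
}
```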

I have developed my own C++ encapsulation of POSIX pthreads using resource-management (RAII) principles to prevent resource leaks in high-reliability, long-duration operation.
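
A minimal sketch of this kind of RAII encapsulation (the class name and interface are illustrative assumptions, not the actual library):

```cpp
// Minimal sketch of an RAII pthread wrapper: the thread is started on
// construction and joined on destruction, so the handle can never leak.
#include <pthread.h>
#include <stdexcept>

class ScopedThread {
public:
    // Start the thread on construction; fail loudly rather than leak.
    ScopedThread(void* (*fn)(void*), void* arg) {
        if (pthread_create(&handle_, nullptr, fn, arg) != 0)
            throw std::runtime_error("pthread_create failed");
    }
    // Join on destruction, so the thread is reclaimed even if the owning
    // scope exits via an exception.
    ~ScopedThread() { pthread_join(handle_, nullptr); }

    // A thread handle must not be copied.
    ScopedThread(const ScopedThread&) = delete;
    ScopedThread& operator=(const ScopedThread&) = delete;

private:
    pthread_t handle_{};
};
```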

I spent several years on commercial surveillance and motion-detection projects, including operating Linux-based adaptive motion detection and video capture software. I worked with a Melbourne company to commercialise their software, visited partners in video-based security software and systems in the UK and Singapore multiple times, and kept current with the latest technology through regular visits to overseas trade shows. I am intimately familiar with the algorithms and constraints of motion-detection systems, e.g. adapting to light changes, or algorithmically classifying regions as 'noisy'.

I have deployed and operated MOTION software with PoE IP cameras, using a Linux-based video storage system as the host. My experience with MOTION is that it is very difficult to eliminate false triggers; as a result, I prefer a hybrid sensing system.

I'm in the process of developing a camera/door area controller with PIR and microwave sensors linked into an ARM (Orange Pi) local controller powered over PoE. I may improve it with a digital I/O board to provide interfaces to door strikes as well as card readers. I will probably also put an RTSP client on the unit to receive video and images from cameras and then relay them via multicast to remote storage systems.

I note that my project has some similarities to yours, where you use LiDAR for object detection rather than an image-based algorithm.

I have developed and delivered multiple risk-of-life systems, including the triple-zero (000) Fire and Emergency Services command and control systems in Perth. My fire-station embedded Linux software and systems had five-nines (99.999%) reliability, considerably better than the Telstra communications network they operated over.

Just to reinforce that: for the fire station project there were 50+ fire stations connected by a Telstra DSL network to two redundant control centres. Every station ran my embedded processors with local station control functions coded in C++ (station power and security via industrial controllers), VOIP phones, software telephony functions, local voice servers, redundant DNS, and capacity for redundant communications paths to redundant control rooms and to other stations. Each control room ran servers with my management software, voice servers, voice logging and retrieval, and multiple operator positions with VOIP phones. That whole network of 50+ sites and comms links, with its need to fail gracefully, was ultimately five-nines reliable as a system. I'm pretty proud of that.

Another relevant project in 2015 was a railway risk-of-life warning system using embedded processors to control a radio interface and generate audio warning messages when the channel was clear. I developed the hardware electronic radio interface, all the software, and a web server to allow upstream systems to command the radio interface via POST commands to engage the radio channel when idle and deliver one of many possible warning messages. This was another 5 nines system.

I also have very good knowledge of Linux on embedded systems, including the Raspberry Pi, and of using Linux packages to achieve goals. This includes VOIP, DNS, firewalls, mail, web servers, databases, etc.

I developed my own Linux live distribution for high-reliability real-time applications on x86 processors. This is discontinued, as Armbian is now equally suitable.

I have good IPv4 and IPv6 deployment and firewall skills. I presently run dual-stack IPv4 and IPv6 in my network, and the majority of my traffic is now IPv6. I provide public DNS, web, and mail services over IPv6 using my own equipment running Armbian (I could do it in Raspbian, but I have faster/larger Armbian-based processors).

I possess and use Raspberry Pi boards, including a 3B running Raspbian, as well as several Orange Pi boards under Armbian and one NanoPi dual-homed gigabit router.

I have significant experience in digital media, including the capture, ingest, and archiving of video, still images, and audio.

I have extensive experience in digital evidence, specifically establishing and verifying the chain of custody of evidence using cryptographic functions and signatures.

I am a beta tester for Grandstream digital media devices: IP video communications using VOIP.

I design and build support hardware, electronic and mechanical. I am competent with KiCad and LibreCAD. I am an MIEEE (Member of the IEEE).

My Approach to Your Project:

I see the primary issue as the selection of the 'best' image, of several, to capture the most detail of a target identified by the LiDAR system. This will generally mean choosing the image with the target closest to the centre of the field of view.

The required elements are as follows (a sketch of the core figure-of-merit step appears after the list):

  • A reference matrix image of the scene without a target, at low monochrome resolution with, say, 32x32 tiles (this can be parameterized)

  • A mask matrix to eliminate noisy areas

  • A centre-weighted matrix to convolve with captured images

  • An edge-weighted matrix to monitor overall lighting changes

  • A capture thread sampling the camera at full resolution at a frame rate matching the capacity of the host processor and RAM

  • A processing thread that generates low-resolution tiled matrices from the captured images (probably the same thread as capture)

  • An STL queue of tiled images, raw images, and metadata

  • A convolution function to weight the difference between captured and reference and generate a figure of merit

  • A selection function to select the highest figure-of-merit image from the queue, based on the LiDAR trigger time

  • A publishing thread to publish a selected image and metadata either to file, pipe, or database

  • A responder thread to respond to triggers from the lidar system

  • A noise assessment function to update the noise mask matrix

  • A lighting assessment function using the edge weighted matrix to update scene brightness information

  • A reference matrix update function to use captured images to slowly update the reference matrix

  • A pruning function to limit the image queue size

  • A cryptographic function to sign published images and metadata for chain of custody purposes

  • If time permits, a function or external process to 'box' a selected image (in my opinion, not really necessary)

  • If time permits, provision of a Debian (.deb) packaged service for the application (probably unlikely in the proposed time frame)
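
The sketch below shows the figure-of-merit step referred to above: compare a captured low-resolution tile matrix against the reference, suppress noisy tiles with the mask, and weight differences toward the centre of the frame. The 32x32 tiling and all names here are illustrative assumptions:

```cpp
// Hypothetical figure-of-merit computation over tiled matrices.
#include <array>
#include <cmath>

constexpr int TILES = 32;   // tiles per side (parameterizable)
using TileMatrix = std::array<std::array<float, TILES>, TILES>;

float figureOfMerit(const TileMatrix& captured,
                    const TileMatrix& reference,
                    const TileMatrix& noiseMask,      // 0 = noisy tile, 1 = usable
                    const TileMatrix& centreWeight)   // peaks at centre of field
{
    float merit = 0.0f;
    for (int y = 0; y < TILES; ++y)
        for (int x = 0; x < TILES; ++x)
            merit += std::fabs(captured[y][x] - reference[y][x])
                     * noiseMask[y][x] * centreWeight[y][x];
    return merit;   // higher = more target detail near the centre
}
```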

I propose to use some of my own software, specifically my encapsulation of pthreads. I will not use much other software, as there is no guarantee that any particular library is thread-safe and leak-free.

I will not use any compressed format unless and until the selected image is published. I would use a library or package to do that, and by preference I would use PNG output.

Resources

I have the latest KDevelop system and can cross-develop for the Pi or even develop on the Pi itself. Incidentally, I use the Docker version, which I see as the way system development needs to go.

I will use CMake and SVN. Documentation will be in Doxygen format.

I do not have a Pi camera. I can source one locally or you can ship one to me.

Schedule

I propose joint development of a project definition document defining interfaces to the camera system and constraints such as frame rates and expected operating conditions. This will take a maximum of 2 days.

Following that will be development of the system, with testing aimed for the last part of the second week.

It is likely the work will take longer than 2 weeks elapsed, so I will simply quote a fixed figure.

Terms

I will do the work as a non-GST subcontractor. Payment: 50% at the end of week 1 and 50% on completion, plus costs such as buying a camera or other hardware.

(0)
$3000
Deliver in 10 days
Christopher Martin

Hello, Peter.

This bid is following on from our discussion in the Public Questions section. The paragraphs that follow repeat much of what I already said there. I haven't yet had time to study Motion properly, and I am concerned that I will not be able to do so before this project closes in 1 day. So I am placing this bid now. If Motion turns out to provide a distinct advantage over libraries such as V4L2 or the FFmpeg suite, then I will gladly use it instead.

I have considerable experience in Linux, C++ programming and video processing. I ported the FFmpeg video processing suite to RISC OS, the original ARM operating system. I have also used FFmpeg's utilities and libraries on several projects.

I could not promise to deliver a shrink-wrapped product after just a week or two. For this project, I think it likely that several prototypes will be required. The product will need to be developed progressively in stages, and you would need to be prepared to test-run prototypes and provide feedback. This is because there are system and performance constraints that have yet to be fully tested and clarified.

First and foremost is the architecture of your Meerkat system. An optimal camera interface can only be designed once the Meerkat class system is reviewed. I can imagine the LiDAR calling a method of a new CameraControl (or some similar name) class, passing the desired timestamp of a frame to be saved to disk. The CameraControl class would select the frame with the nearest timestamp. But such a design would depend on the camera subsystem running as a separate thread of control capturing images as frequently as possible. Would the current Meerkat design support this?
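
To make the idea concrete, here is a minimal sketch of such a class under the stated assumptions. CameraControl is the hypothetical name from the paragraph above; the Frame type, buffer size, and method names are illustrative, and locking between the capture and LiDAR threads is omitted for brevity:

```cpp
// Hypothetical CameraControl: a small ring buffer of timestamped frames, from
// which the frame nearest a requested LiDAR timestamp is selected.
#include <cstdint>
#include <cstdlib>
#include <optional>
#include <vector>

struct Frame {
    int64_t timestampUs;            // capture time in microseconds
    std::vector<uint8_t> pixels;    // raw image data
};

class CameraControl {
public:
    // Called from the capture thread for every frame grabbed.
    void push(Frame f) {
        if (buffer_.size() == kCapacity)
            buffer_.erase(buffer_.begin());   // drop the oldest frame
        buffer_.push_back(std::move(f));
    }
    // Called by the LiDAR side with the desired timestamp; returns the
    // buffered frame whose capture time is closest to the request.
    std::optional<Frame> nearest(int64_t wantedUs) const {
        std::optional<Frame> best;
        int64_t bestDelta = INT64_MAX;
        for (const Frame& f : buffer_) {
            int64_t delta = std::llabs(f.timestampUs - wantedUs);
            if (delta < bestDelta) { bestDelta = delta; best = f; }
        }
        return best;   // empty if no frames have been buffered yet
    }
private:
    static constexpr size_t kCapacity = 128;   // a few seconds at 30 fps
    std::vector<Frame> buffer_;
};
```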

I note that you said that power requirements are not a concern, so it would be ideal to have the camera taking timestamped images continuously. We would then need to test that the system's performance, especially with regard to the LiDAR and interrupt or thread-switching latency, is not impacted by the additional load of driving the camera and capturing sufficient frames per second.

The new camera control subsystem could directly call V4L2. But perhaps development could be accelerated, with potential benefits in flexibility and hardware compatibility, if the FFmpeg video processing and device control libraries were installed. These options need to be considered in light of the packages and kernel modules already installed from the Stretch distro.

Being able to stream images from the camera module could be achieved by a stand-alone streaming service run only for that purpose when needed. Small, free, open source software already exists to provide this capability. I would propose installing such a "shrink wrapped" product to be used only when aligning the device, if indeed that will be done infrequently.

I hope that I have conveyed how this project could be a work in progress for several weeks as various design options are considered and tested. This project will involve some learning-by-doing. I am prepared for this, and I hope that you are too. My estimated hours could be overly optimistic but will be sufficient if the early prototypes perform as hoped and the organisation of the existing Meerkat code is accommodating.

$44/hr
Estimated 100 hours
Christopher Martin

Hello. Your project is interesting but also something of a challenge. For this project, several prototypes will likely be required, and you would need to be prepared to test-run prototypes and provide feedback. This is because there are system and performance constraints that have yet to be clarified.

First and foremost is the architecture of your Meerkat system. An optimal camera interface can only be designed once the Meerkat class system is reviewed. I can imagine the LiDAR calling a method of a new CameraControl (or some similar name) class, passing the desired timestamp of a frame to be saved to disk. The CameraControl class would select the frame with the nearest timestamp. But such a design would depend on the camera subsystem running as a separate thread of control capturing images as frequently as possible. Would the current Meerkat design support this?

Power requirements need to be considered. It would be ideal to have the camera taking timestamped images continuously. However, this will significantly increase power consumption.

We would also need to test that the system's performance, especially with regard to the LiDAR, is not impacted by the additional load of driving the camera and capturing frames. CPU, interrupts, RAM, etc. all need to be considered. I have no idea at present of the spare resource capacity of your Meerkat system.

The new camera control subsystem could directly call V4L2. But perhaps development could be accelerated, with potential benefits in flexibility and hardware compatibility, if the FFmpeg video processing and device control libraries were installed. These options need to be considered in light of the packages and kernel modules already installed from the Stretch distro.

Being able to stream images from the camera module could be achieved by a stand-alone streaming service run only for that purpose when needed. Small, free, open source software already exists to provide this capability. I would propose installing such a "shrink wrapped" product to be used when aligning the device.

I hope that I have conveyed how this project could be a work in progress for several weeks as various design options are considered and trialed. This project will involve some learning-by-doing. I am prepared for this if you are. Are you interested in discussing this further?

Last updated 4 years ago
Peter B.

Hi Christopher,

Thanks for your interest!

Yes, a separate thread of control capturing images as frequently as possible. Our process could "pipe" the timestamp/trigger into a reduced version of Motion (by ccrisan)?

Power is not an issue.

I could test the additional load impact. Loads of CPU remaining, but RAM may be an issue. "Motion" is efficient. Is that what you have used?

Christopher Martin

Hi Peter,

I was not familiar with Motion. And it isn't clear to me why ccrisan forked it. But I still have a couple of days to look into it. My knee-jerk response is that it is probably overkill for a simple ring buffer of 320x200 YUV frames at 30fps over a few seconds at most. (By the way, such a ring buffer could consume several megabytes; almost 2MB per second of duty cycle. Could that exhaust available RAM?) But to be fair, I haven't yet looked at Motion's daemon. I expect that hitting the V4L2 or FFmpeg libraries should be much more efficient than interfacing to any daemon, but I shouldn't preempt that decision. (I'm thinking, perhaps Motion provides a simple camera interface library which would be very convenient.) So I will look into Motion over the next couple of days. Thanks for prompting me to look at it.
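
For reference, a quick check of that buffer arithmetic under assumed pixel formats: the "almost 2MB per second" figure matches a luma-only (1 byte/pixel) buffer, while full YUV 4:2:0 at 1.5 bytes/pixel would be closer to 2.7 MB/s. A small sketch:

```cpp
// Ring-buffer memory arithmetic for 320x200 frames at 30 fps:
//   1.0 B/px (Y only)    = 64,000 B/frame, ~1.8 MiB/s
//   1.5 B/px (YUV 4:2:0) = 96,000 B/frame, ~2.7 MiB/s
//   2.0 B/px (YUV 4:2:2) = 128,000 B/frame, ~3.7 MiB/s
#include <cstdio>

int main() {
    const double pixels = 320.0 * 200.0;
    const double fps = 30.0;
    for (double bytesPerPixel : {1.0, 1.5, 2.0}) {
        double perFrame = pixels * bytesPerPixel;                 // bytes per frame
        double perSecond = perFrame * fps / (1024.0 * 1024.0);   // MiB per second
        std::printf("%.1f B/px: %.0f B/frame, %.2f MiB/s\n",
                    bytesPerPixel, perFrame, perSecond);
    }
    return 0;
}
```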

Peter B.

Christopher,

Not using much RAM at the moment (133/748M), so the GPU could possibly be given 512M, leaving over 300M for the buffer.

No probs. Thank you

Last updated 4 years ago
Peter B.

Hello,

We have chosen a bid/proposal based on general experience, understanding of the problem and solution, experience with embedded image streaming, and time for completion.

Unfortunately, we won't be progressing with your proposal.

Thank you for your time!