D-Link Forums

The Graveyard - Products No Longer Supported => IP Cameras => DCS-932L => Topic started by: acellier on November 19, 2011, 02:03:33 PM

Title: "Motion detection" vs. edge detection
Post by: acellier on November 19, 2011, 02:03:33 PM
     Having gained a couple of months' experience with a DCS-930L and a DCS-932L, both uploading via FTP to a server based on the camera's notion of "Motion detection", I see it's quite a waste of bandwidth and storage (not to mention the time to sift through the results).
     It appears that the "Motion detection" may really just be "changed average light level in the selected box(es)".
     My question, for the signal-processing experts among you, is this: since the pictures are apparently converted to JPEG immediately, and since JPEG uses cosine transforms that store information according to perceived spatial frequency, couldn't better motion detection be implemented by observing the amplitude of the high-frequency components in the JPEG file - corresponding closely, I would think, to edge detection?
     There are more sophisticated computer-based software packages available, but the idea is to avoid having a computer running 24/7. It seems like examining selected JPEG coefficients could be done in-camera as easily as the average-light comparisons?
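For anyone curious what "watching the high-frequency coefficients" might look like, here is a rough Python sketch. Everything here (block size, the index-sum cutoff, the function names) is my own invention for illustration - it is not anything from the camera's firmware or the JPEG encoder itself, just the general idea of comparing high-spatial-frequency DCT energy between frames:

```python
import numpy as np

def dct2(block):
    """Unnormalized 2-D DCT-II of a square block (the transform JPEG uses)."""
    n = block.shape[0]
    k = np.arange(n)
    basis = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    return basis @ block @ basis.T

def high_freq_energy(frame, box=8, cutoff=4):
    """Sum of squared DCT coefficients whose index sum is >= cutoff
    (the 'edge-carrying' high-spatial-frequency terms), over all boxes."""
    h, w = frame.shape
    i, j = np.meshgrid(np.arange(box), np.arange(box), indexing="ij")
    mask = (i + j) >= cutoff
    total = 0.0
    for y in range(0, h - box + 1, box):
        for x in range(0, w - box + 1, box):
            coeffs = dct2(frame[y:y + box, x:x + box].astype(float))
            total += np.sum(coeffs[mask] ** 2)
    return total

def edge_motion_score(prev, curr):
    """Relative change in high-frequency energy between consecutive frames.
    A flat scene scores near zero; an appearing/moving edge scores high."""
    e0, e1 = high_freq_energy(prev), high_freq_energy(curr)
    return abs(e1 - e0) / (e0 + 1e-9)
```

A uniformly-lit frame has essentially zero high-frequency energy, so an object edge entering the scene produces a large relative score even if the average light level in the box barely changes - which is exactly the case the camera's averaging approach misses.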
Title: Re: "Motion detection" vs. edge detection
Post by: JavaLawyer on November 19, 2011, 02:16:51 PM
I have some experience developing edge-detection algorithms, as well as the more traditional pixel-delta comparison between adjacent frames of video.

To minimize processing, I presume the algorithm simply parses each frame into boxes (x-by-x pixels), averages the pixel magnitudes within each box, and compares the result to the corresponding box in the adjacent video frame. Using this approach, the sensitivity level would define the required delta (i.e. magnitude difference) between two boxes to trigger a motion event.
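In Python terms, the presumed averaging scheme would look something like this (box size and threshold are hypothetical stand-ins for whatever the camera's sensitivity setting maps to - this is my guess at the algorithm, not D-Link's actual code):

```python
import numpy as np

def motion_detected(prev, curr, box=16, threshold=10.0):
    """Presumed box-average motion test: average each box, compare to the
    same box in the previous frame, trigger when any delta exceeds the
    sensitivity threshold."""
    h, w = prev.shape
    for y in range(0, h - box + 1, box):
        for x in range(0, w - box + 1, box):
            a = prev[y:y + box, x:x + box].mean()
            b = curr[y:y + box, x:x + box].mean()
            if abs(a - b) > threshold:
                return True
    return False
```

Note the weakness the original poster is complaining about: a cloud passing over the sun changes the box averages just as much as an intruder does, so this trips on global light changes.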

Older MPEG compression algorithms used a similar approach. If a movie had a scene in a room where the camera was still, the compression algorithm would reuse the footage from the previous frame for the parts of the new frame that didn't change, reducing the video file size. Only the moving (action) regions would need to be encoded in subsequent frames.
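That "reuse the unchanged parts" idea (conditional replenishment) is easy to sketch as a toy encoder/decoder. This is a drastically simplified illustration - real MPEG adds motion compensation, DCT coding, and quantization on top - and all the names and tolerances here are mine:

```python
import numpy as np

def encode_delta(prev, curr, box=8, tol=1.0):
    """Toy conditional-replenishment encoder: emit only the boxes of the
    current frame that differ from the previous frame beyond a tolerance."""
    patches = []
    for y in range(0, prev.shape[0], box):
        for x in range(0, prev.shape[1], box):
            blk = curr[y:y + box, x:x + box]
            if np.abs(blk - prev[y:y + box, x:x + box]).max() > tol:
                patches.append((y, x, blk.copy()))
    return patches

def decode_delta(prev, patches):
    """Rebuild the current frame by patching the changed boxes onto the
    previous frame; unchanged boxes carry over for free."""
    frame = prev.copy()
    for y, x, blk in patches:
        frame[y:y + blk.shape[0], x:x + blk.shape[1]] = blk
    return frame
```

A still scene encodes to an empty patch list, which is why static-camera footage compresses so well under this scheme.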
Title: Re: "Motion detection" vs. edge detection
Post by: acellier on November 19, 2011, 02:22:16 PM
Yes but do you get my point, or disagree, that perhaps there is significant information in the changes in the transformed domain?
Title: Re: "Motion detection" vs. edge detection
Post by: JavaLawyer on November 19, 2011, 02:40:58 PM
Yes, your suggested approach would provide greater sensitivity in a way that seems more meaningful for identifying motion events than simply averaging pixel magnitudes between images. The ease of implementation would depend on where (and how) the detection is measured in the image capture/processing pipeline. On its face, the workload appears roughly similar between the two detection methods.