
Author Topic: "Motion detection" vs. edge detection  (Read 3545 times)

acellier

  • Level 4 Member
  • ****
  • Posts: 417
"Motion detection" vs. edge detection
« on: November 19, 2011, 02:03:33 PM »

     Having gained a couple of months' experience with a DCS-930L and a DCS-932L, both uploading via FTP to a server based on the camera's notion of "Motion detection", I've found it quite a waste of bandwidth and storage (not to mention the time to sift through the results).
     It appears that the "Motion detection" may really just be "changed average light level in the selected box(es)".
     My question, for the signal processing experts among you, is this: since the pictures are apparently converted to JPEG right away, and since JPEG uses cosine transforms that store information by perceived spatial frequency, couldn't better motion detection be implemented by watching the amplitude of the high-frequency components in the JPEG file? That would correspond closely, I would think, to edge detection.
     There are more sophisticated computer-based software packages available, but the idea is to avoid having a computer running 24/7. Couldn't examining selected JPEG coefficients be done in-camera as easily as the average-light comparisons?
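A rough sketch of what I have in mind (my own illustration, not anything the camera actually exposes): compute an 8×8 DCT per block, as JPEG does, and sum the magnitudes of the high-frequency coefficients as an edge measure. The `cutoff` index and the summed-absolute-magnitude metric are assumptions on my part.

```python
import numpy as np

def dct2(block):
    """2-D type-II DCT of a square block via the orthonormal DCT matrix."""
    N = block.shape[0]
    k = np.arange(N)[:, None]
    n = np.arange(N)[None, :]
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
    C[0, :] /= np.sqrt(2.0)  # scale the DC row for orthonormality
    return C @ block @ C.T

def high_freq_energy(frame, box=8, cutoff=4):
    """Sum |DCT coefficient| over entries whose row+col index is >= cutoff,
    for every box-by-box tile -- a crude per-frame edge-content measure."""
    h, w = frame.shape
    total = 0.0
    mask = np.add.outer(np.arange(box), np.arange(box)) >= cutoff
    for y in range(0, h - box + 1, box):
        for x in range(0, w - box + 1, box):
            coeffs = dct2(frame[y:y+box, x:x+box].astype(float))
            total += np.abs(coeffs[mask]).sum()
    return total
```

A sharp edge inside a block pushes energy into the higher-index coefficients, so a jump in this total between frames would suggest edge content appeared or moved.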
Logged

JavaLawyer

  • BETA Tester
  • Level 15 Member
  • *
  • Posts: 12190
  • D-Link Global Forum Moderator
    • FoundFootageCritic
Re: "Motion detection" vs. edge detection
« Reply #1 on: November 19, 2011, 02:16:51 PM »

I have some experience developing edge detection algorithms, as well as the more traditional pixel-delta comparison between adjacent frames of video.

To minimize processing, I presume the algorithm simply parses each frame into boxes (n × n pixels), averages the pixel intensity of each box, and compares the result to the corresponding box in the adjacent video frame. With this approach, the sensitivity level defines the delta (i.e. the difference in average intensity) required between two boxes to trigger a motion event.
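For illustration, here's a minimal sketch of that box-averaging scheme in Python (my own approximation of what the camera might do; the 8-pixel box size and the threshold value are assumed, not D-Link's actual numbers):

```python
import numpy as np

def motion_detected(prev, curr, box=8, threshold=10.0):
    """Split two grayscale frames into box-by-box tiles, average each
    tile's brightness, and flag motion if any tile's average changes
    by more than the threshold (the 'sensitivity' setting)."""
    h, w = prev.shape
    for y in range(0, h - box + 1, box):
        for x in range(0, w - box + 1, box):
            a = prev[y:y+box, x:x+box].mean()
            b = curr[y:y+box, x:x+box].mean()
            if abs(a - b) > threshold:
                return True
    return False
```

Note the weakness acellier is pointing out: an object moving within a box can leave the box's average almost unchanged, while a gradual lighting shift can trip it.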

Older MPEG compression algorithms used a similar approach. If a movie had a scene in a room where the camera was still, the codec would reuse the unchanged parts of the previous frame in the new frame, reducing the video file size; only the moving (action) regions needed to be re-encoded in subsequent frames.
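A toy version of that carry-over idea (greatly simplified; real MPEG uses motion-compensated prediction with residuals, not just verbatim block reuse):

```python
import numpy as np

def changed_blocks(prev, curr, box=8, tol=1.0):
    """Return the (y, x) offsets of blocks whose pixels changed between
    frames -- the only blocks this simplified codec would re-encode;
    every other block is carried over from the previous frame."""
    out = []
    h, w = prev.shape
    for y in range(0, h - box + 1, box):
        for x in range(0, w - box + 1, box):
            diff = np.abs(curr[y:y+box, x:x+box].astype(float)
                          - prev[y:y+box, x:x+box].astype(float))
            if diff.max() > tol:
                out.append((y, x))
    return out
```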
« Last Edit: November 19, 2011, 02:19:14 PM by JavaLawyer »
Logged
Find answers here: D-Link ShareCenter FAQ | D-Link Network Camera FAQ
There's no such thing as too many backups FFC

acellier

  • Level 4 Member
  • ****
  • Posts: 417
Re: "Motion detection" vs. edge detection
« Reply #2 on: November 19, 2011, 02:22:16 PM »

Yes, but do you get my point, or do you disagree, that there may be significant information in the changes in the transformed domain?
Logged

JavaLawyer

  • BETA Tester
  • Level 15 Member
  • *
  • Posts: 12190
  • D-Link Global Forum Moderator
    • FoundFootageCritic
Re: "Motion detection" vs. edge detection
« Reply #3 on: November 19, 2011, 02:40:58 PM »

Yes, your suggested approach would provide greater sensitivity, in a way that seems more meaningful for identifying motion events than simply averaging pixel magnitudes between images. The ease of implementation would depend entirely on where (and how) detection is measured in the image capture/processing pipeline. On its face, the workload for the two detection methods should be roughly similar.
« Last Edit: November 19, 2011, 02:56:37 PM by JavaLawyer »
Logged
Find answers here: D-Link ShareCenter FAQ | D-Link Network Camera FAQ
There's no such thing as too many backups FFC