
Author Topic: DCS-2132L experiences, lessons and workarounds  (Read 15171 times)

RiverWay

  • Level 1 Member
  • *
  • Posts: 21
DCS-2132L experiences, lessons and workarounds
« on: February 23, 2013, 10:55:10 AM »

I have a DCS-2132L that is installed as the primary camera in a flash flood/debris flow monitoring and warning system. Here are some lessons learned and workarounds for issues that came up during design, build and installation.

On-board infrared is always going to cause false or unwanted motion events. Examples are snowfall, rainfall, flying insects, bats, owls and blowing organic debris. Turn off the on-board infrared. Use external infrared illuminators, and mount them far from the camera. Close-in extraneous objects will never cause a night-time false motion event again. When it snows at night with remote illuminators, no falling snow is visible whatsoever. Note that this applies to an implementation with a distant target, about 120 feet from the camera. I am using multiple infrared illuminators made by SuperCircuits and they are superb.

Use a light-colored motion target, and draw your motion windows on the target. This will further reduce false motion events.

FTP delivery of a single snapshot is by far the fastest and most reliable method for getting a motion flag to a computer or server. Using FTP also allows a network configuration that avoids many of the security hazards of Windows networking, such as NetBIOS. There is no utility in D-ViewCam for programmatically using an event flag, i.e., to run an executable when a motion event image or video is output by the camera. Pity. Surely D-Link will address this someday.
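Since D-ViewCam offers no hook to run an executable on an event, one way to fill the gap is a small watcher on the FTP drop directory: whenever the camera writes a new snapshot, treat that as the motion flag and launch the program you need. A minimal sketch; the directory path and siren command are hypothetical placeholders, not the actual warning-system code:

```python
import subprocess
import time
from pathlib import Path

POLL_SECONDS = 2  # how often to scan the FTP drop directory

def new_snapshots(drop_dir: Path, seen: set[str]) -> list[str]:
    """Return snapshot filenames not seen before, updating `seen` in place."""
    current = {p.name for p in drop_dir.glob("*.jpg")}
    fresh = sorted(current - seen)
    seen |= current
    return fresh

if __name__ == "__main__":
    # Hypothetical path and command -- substitute your own.
    drop_dir = Path("C:/ftp/camera_drop")  # where the camera FTPs snapshots
    siren_cmd = ["siren_trigger.exe"]      # executable to run per motion event
    seen: set[str] = set()
    new_snapshots(drop_dir, seen)  # prime with files already present at startup
    while True:
        for name in new_snapshots(drop_dir, seen):
            print(f"motion event: {name}")
            subprocess.run(siren_cmd)
        time.sleep(POLL_SECONDS)
```

This keeps the warning logic completely independent of D-ViewCam, which only serves video.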

When using FTP to write images triggered by motion events, a text file notification is sometimes sent as well. I have not been able to figure out why, because the text file is sent very rarely. It may be because true motion is very rare, and the event triggers right now are almost always the result of aperture perturbations, not real motion.

Aperture perturbations: I'm using this as a generalized term for camera hardware changes that are picked up as undesired and uninteresting motion events. An example is the persistent flipping at dawn and dusk between night and day modes until the quantity of light allows the camera to settle into one mode or the other. I had to develop programmatic algorithms that enforce consecutive-event thresholds which must be met before the sirens are triggered.

On the Event Setup/Event page, the right help panel says: "Video motion detection: select the windows which need to be monitored." I have not found that this is implemented, and it sure would be handy. An example is spider webs that appear overnight and then blow in the wind, causing false positives. Until a long hike can be made (every day in the summer?) to remove the webs, they cause false events/alarms. If separate motion windows could be drawn and managed, with each delivering a uniquely identifiable motion flag, this type of extraneous motion could be handled easily by refining the motion criteria that trigger events, i.e., if windows A (critical), B and C all detect motion, ignore the event; if only window A (critical) detects motion, trigger the event.
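The consecutive-event threshold idea can be sketched as a simple counter: fire only after N events arrive, each within a short gap of the previous one, so that isolated perturbations (a single day/night flip, one web flutter) never trip the siren. Class name and thresholds here are illustrative, not the actual warning-system code:

```python
class ConsecutiveEventFilter:
    """Fire only after `threshold` motion events, each within `max_gap`
    seconds of the previous one; isolated perturbations reset the run."""

    def __init__(self, threshold: int = 3, max_gap: float = 30.0):
        self.threshold = threshold
        self.max_gap = max_gap
        self.count = 0
        self.last_time = None

    def event(self, t: float) -> bool:
        """Register a motion event at time t; return True if the siren should fire."""
        if self.last_time is None or (t - self.last_time) <= self.max_gap:
            self.count += 1
        else:
            self.count = 1  # gap too long: start a new run
        self.last_time = t
        return self.count >= self.threshold
```

With threshold 3 and a 30-second gap, three quick events fire the siren, while events spaced minutes apart never accumulate.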

The camera is mounted in a custom-built enclosure for protection from weather. The enclosure is then mounted to a very heavy platform at the edge of a high cliff. The lesson here is stability: nothing ever moves in the wind. Otherwise, even the slightest mount-oriented movement can cause false motion positives.

The system is wireless and offers live streaming video to client computers up to 1/2 mile from the 2132L mount location. A yagi antenna is used to cover the 1000 feet from the wireless access point to the 2132L. On the 2132L, 30 fps is nice, but it causes excessive jitter when the quantity of motion is high; examples are snowfall, rainfall and streamflow. Bandwidth is not the limitation, nor is the server - an 8-core machine with ample video RAM. I have not been able to identify the source of the high-motion video jitter, but dropping to 15 fps lets the camera better handle large amounts of viewed motion without impacting utility and purpose.

D-ViewCam is being used as the video server. No client is allowed direct access to the camera, eliminating any possibility that clients could overload the camera-side of the wireless system - which is mission critical. In the camera, Advanced/Access List allows for limiting direct camera access.

The remote 2132L installation (including illuminators) is battery-powered, and charge is maintained by a solar module/charge controller system. For stress-testing, adaptation to and recovery from power outages is critical. Thus, the camera is always on and never loses power. However, a power interruption at the server/network system that causes loss of the wireless signal for more than 5 or 10 minutes causes an issue at the camera when server power, and hence wireless access point power, is restored: the camera cannot relocate the wireless network. The only solution has been a long hike and climb to manually reboot (power cycle) the camera. I have found no resolution. It would be nice if the camera could rejoin the newly-energized network without manual intervention. Fortunately, in this installation, sustained power loss beyond the capacity of backup power is a very rare event. 942Ls do not have this issue - they will rejoin the network no matter how long the wireless access points have been de-energized.

On the camera, in Setup/Image Setup/Image Settings/Exposure Mode, the ability to customize exposure settings has been very useful, particularly for high-contrast nighttime conditions.

For the sharpness setting on the same configuration panel, any value greater than zero increases video image size and hence bandwidth requirements. What you get out of this depends on your needs. Unsharpened image quality from the 2132L is so good that there is no point in using sharpening that would put pressure on bandwidth in this application. Also, given the inherent nature of greyscale vs. color video, the higher bandwidth pressure is meaningful during daylight hours, not at night.

Audio-in gain settings are very limited - there are just two options. This does not matter in this application, but it may matter for other uses.

Accurate time is mission-critical. A slow client computer accessing D-ViewCam/RemoteView on the server will experience delayed video playback, so that what is viewed is history rather than real-time. In benchmarking, a slow computer could easily be showing video that lags by 20 seconds or more. A time server is installed on the server computer. All hardware, including cameras and client computers, refers to the time server to keep accurate and, most importantly, uniform time. On the 2132L, point the camera at a time server in Setup/Time and Date/Automatic Time Configuration.

To help client computer owners understand whether they have a video delay issue, they move a digital bar clock under the date and time displayed on the video stream by the camera. If their own system clock is ahead of the camera, i.e., the camera video is lagging, they know they have an issue and what they are seeing is not real-time. The same hardware limitation will also affect the client software that monitors the server for qualified motion events - their sirens may be delayed. Bottom line: consider using a time server in networked applications. This also solves camera time synchronization issues for time-critical matched video comparisons.
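When the camera's on-screen timestamp is available in machine-readable form, the same lag check can be automated: parse the frame's time and compare it with the NTP-synced local clock. A hypothetical sketch; the timestamp format and the 5-second "real-time" threshold are assumptions, not settings from the camera:

```python
from datetime import datetime

LAG_LIMIT_S = 5.0  # illustrative threshold for calling a stream "real-time"

def stream_lag_seconds(frame_stamp: str, now: datetime) -> float:
    """Seconds the video stream lags the (NTP-synced) local clock.
    Assumes the camera overlays time as 'YYYY-MM-DD HH:MM:SS'."""
    frame_time = datetime.strptime(frame_stamp, "%Y-%m-%d %H:%M:%S")
    return (now - frame_time).total_seconds()

def is_realtime(frame_stamp: str, now: datetime) -> bool:
    """True if the lag is small enough to treat the view as live."""
    return stream_lag_seconds(frame_stamp, now) <= LAG_LIMIT_S
```

A frame stamped 20 seconds behind the local clock, as in the benchmark above, would fail this check.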

Setup/Event Setup requires study to understand. Time invested is worthwhile. It is a very flexible utility and truly useful.

SD Card: I've never found this to be useful because images can be saved on a computer. I might change my opinion someday, particularly in regard to having a secondary backup image repository.

Maintenance/System/Save Configuration: don't delay. Backup your settings. A lot of work can be invested in configuring the camera. If a problem occurs and the camera must be reconfigured, your backup will save much time.

An example: in Setup/Network Setup/LAN Settings, QoS Settings have been configured to ensure that Event/Alarm (mission-critical) and Management have the highest priorities. Thinking that CoS Settings would then be a useful companion for prioritizing traffic, I changed the CoS settings. The camera freaked out, required a manual power cycle to reboot (a 1-hr roundtrip task, which is problematic), and all settings were lost.

That saved configuration saved a potential great deal of additional hassle.

The best route for client computer owners to access live streaming video delivered by D-ViewCam is with MS Internet Explorer because it is simple.

The inability of RemoteView to provide multiple camera views, when the software is written to do so, is silly.

Two workarounds...

Open multiple Explorer windows, and access the same user account in each. Open a different camera in each window. Turn off toolbars that cause clutter. This is actually more useful than being stuck with multiple camera views in fixed-size windows, because the windows can be overlaid, one on top of another.

Alternatively, when using D-ViewCam or RemoteView, open different instances of the program in different sandboxes (Sandboxie, etc.). Bingo, different cameras are viewable simultaneously.

But because D-ViewCam is written to graphically take over the monitor screen (it is a full-screen application), the advantage of sizeable windows is lost. To address this, use a Windows utility such as WinSplit Revolution. The approach is not perfect, but it does the job and gets rid of the Explorer clutter that comes with the first method.

The sandbox/WinSplit method is going to be too complicated for many casual users, so have them stick to the browser/RemoteView/multiple-windows method. For admin folks and those who will take a little time to understand their system, it may be worthwhile.

The 2132L has operated flawlessly down to -15F. It is manually turned off at colder temperatures, because -15F was the limit I needed and I did not want to cause unnecessary stress too far beyond design limits.

Finally, two 942Ls are also connected to the warning system. They are being used strictly for delivery of live streaming video at two important locations, and not for motion events. The configuration options on the 942Ls are fairly crude compared to the 2132L - which is why I jumped at the opportunity to buy, try, and ultimately use the 2132L as the primary monitoring camera. The 942Ls have little flexibility for detecting and using motion events, are of lower video resolution, and are significantly inferior to the 2132L.

As the price point for the 2132L declines, I will likely replace the 942Ls with 2132Ls in the monitoring and warning system.

Logged

RYAT3

  • Level 10 Member
  • *****
  • Posts: 2110
Re: DCS-2132L experiences, lessons and workarounds
« Reply #1 on: February 25, 2013, 08:34:34 PM »


I don't see a thumbups button here, so I will just say thanks for the write up.

I did notice when I had my 5222L focused very close that any dust/lint flying in the air would get cataloged in the video archive... (i.e., this relates to your snow).

The 5222L, now with only PIR on and focused further away, doesn't pick up these objects, but I could see how snow flakes 5-10 feet away would get showcased.

Well, at least I don't have any Roswell Rods floating around in my place that I am aware of!!!  :o

How long is your battery/solar setup good for? Care to share details of what you are using?

I've had no issues with unplugging the 2132L or 5222L on my home network and having it reconnect. Your access point might be the issue?

Have you considered the motion jpeg link instead of dviewcam, in a couple browser windows? 

dviewcam hogs down my dual core desktop (2006 build) and dual core laptop machine (2008)...

Or maybe pulling the image yourself every 1-2 seconds with a request to the jpeg image?

This should stream:

http://internalIPaddress/video/mjpg.cgi?profileid=1

for current snapshot:

http://internalIPaddress/image/jpeg.cgi


If accessing externally, use the externalIPaddress:port
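The "pull the image yourself every 1-2 seconds" idea can be done with the Python standard library alone. This is only a sketch: the host address and output directory are placeholders, and the 2132L will likely also require HTTP authentication, which is omitted here:

```python
import time
import urllib.request
from pathlib import Path

def snapshot_url(host: str, external: bool = False, port: int = 80) -> str:
    """Build the still-image URL; append the port for external access."""
    if external:
        return f"http://{host}:{port}/image/jpeg.cgi"
    return f"http://{host}/image/jpeg.cgi"

def poll_snapshots(url: str, out_dir: Path,
                   interval: float = 2.0, count: int = 5) -> None:
    """Fetch `count` snapshots, `interval` seconds apart, into out_dir."""
    out_dir.mkdir(parents=True, exist_ok=True)
    for i in range(count):
        data = urllib.request.urlopen(url, timeout=5).read()
        (out_dir / f"snap_{i:04d}.jpg").write_bytes(data)
        time.sleep(interval)

# Example (placeholder internal address):
# poll_snapshots(snapshot_url("192.168.0.20"), Path("snaps"))
```

This avoids D-ViewCam entirely for machines that just need an occasional still rather than a full stream.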

Maybe dviewcam is giving you access to the alert system, or is that separate?  ???



Logged

RiverWay

  • Level 1 Member
  • *
  • Posts: 21
Re: DCS-2132L experiences, lessons and workarounds
« Reply #2 on: February 26, 2013, 09:55:09 AM »

The reason for choosing a cam/motion event system for triggering warnings is that any "permanent" in-stream installation, such as a float switch, would be immediately destroyed by the first flood/debris flow. That would require immediate infrastructure replacement, which will not happen. A camera mounted out of the way, using motion as the event trigger, seemed ideal. This meant there were many issues to identify and build workarounds for, particularly uninteresting motion events. With a couple of D-Link cams in place prior to the flood threat, I had a fair grasp of which false motion events would need attention; snow and rain are two of them. There are large differences in day and night motion sensitivity, night being the most sensitive because of precipitation/aerial debris highlighting in high-contrast views, but either way the goal was to find the sweet spot that absolutely minimizes extraneous motion events while maintaining sensitivity to large volumes of flowing water and debris. The on-board 2132L utilities and features, all major improvements over the 942Ls, made the project feasible.

The power system on the remote camera side is designed to provide 13 Ah per 24-hr period to accommodate the most extreme sun-limited circumstances - the longest nights and longest stretches of stormy weather. At this site and extreme, this allows a match between daily available solar power and daily power usage. Technically, the system will provide about 7 days of autonomy (powered by battery alone) during the spring. It is overdesigned because it must not fail. A rodent-cut cable is an example of where that autonomy time would be applied. Max power used by the 2132L, two infrared illuminators and supporting electronics, around 12/21 when stormy stretches occurred, was in fact 13.0 Ah over one 24-hr period, but all other stress days maxed out at 12.5 Ah.
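The autonomy arithmetic above can be checked in a few lines. The battery bank size and usable-capacity fraction are assumptions chosen to illustrate the stated ~7 days of autonomy; the post does not give the actual bank capacity:

```python
DAILY_LOAD_AH = 13.0   # worst observed 24-hr draw (camera + illuminators + electronics)
BATTERY_AH = 100.0     # hypothetical bank size, not stated in the post
USABLE_FRACTION = 0.9  # avoid deep discharge; illustrative figure

def autonomy_days(battery_ah: float, daily_load_ah: float,
                  usable_fraction: float = USABLE_FRACTION) -> float:
    """Days the system can run on battery alone (no solar input)."""
    return battery_ah * usable_fraction / daily_load_ah

# With these assumed numbers this lands near the 7 days quoted above.
print(round(autonomy_days(BATTERY_AH, DAILY_LOAD_AH), 1))
```

The same function, run with the real bank capacity, gives a quick sanity check when sizing replacements.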

A 12V-to-5V regulated step-down power supply feeds the camera. The illuminator feed is a direct 12-V pass from the power controller at the control panel. The controller manages battery charging, as well as the voltage passed to the camera site.

I do not understand the 2132L reconnect problem after extended AP power loss/wireless signal loss; it was a surprise. Cycling the wireless access point off and on after initial power restoration made no difference. The camera simply decides it can't find the AP - or maybe the AP can't find the camera. Only a camera reboot solved it. A 942L tied to the same access point as the 2132L does not have this problem, and never has. It's not signal quality or strength - the access point is picking up the 2132L at -60 dBm, and the signal at the camera is about the same.

I tried every possible combination of camera settings to make the video stream as efficient as possible for slower client computers. An 11-yr-old Pentium 4 with too little video memory was used for benchmarking. The reason is that some folks, for whatever reason, may be stuck with a slow computer, and I needed to be able to recommend a minimum configuration, or at least a minimum PC warning-system configuration, that would meet needs. Ideally, a client computer must handle 3 streaming videos, several utility programs for communications with the server, and a client warning program - all simultaneously and with as little lag as possible.

In the end, the primary monitoring camera (the 2132L) must be configured for optimal warning system performance (event detection, streaming video). It must not fail, and must not be stressed in any way (bandwidth, access count, etc.). Optimization for client computers is of secondary importance. H.264 at 1280x800 is ideal for the quality/bandwidth balance. FPS is dropped to 15 because 30 is not necessary and bogs down slower machines, and could also unnecessarily slow the client side of the wireless network during high-threat periods (more motion equals higher bandwidth needs). No direct client computer access to the camera is permitted, to protect the system, the camera, and camera performance. This eliminates any ability to use anything but H.264.

The system has no discernible impact at all on the 8-core processor in the server. This was the goal. The server must do all of the processing, as well as serve everything to up to a dozen client computers, and do so efficiently during server updates and backups, and all in the instance of the busiest anticipated video events. And it must do so while having ample room to provide timely siren triggers for flood and debris flow events, i.e., no impact on required companion data processing.

In the end, for client computers on this warning network, my recommended minimum PC configuration is quad-core, 4 GB of RAM (8-12 GB is far better), and a minimum of 1 GB video RAM. Dual-core just does not cut it, as you have experienced. My dual-core laptop quickly starts to overheat and lag when the 2132L video stream is fullscreen and motion increases (i.e., precipitation, wind or flows).

The greatest PC limitation is when viewing full-screen video. Also, on that 11-yr-old PC, running just the 2132L stream and the client warning-system software simultaneously, audio on vs. audio off is the threshold difference between no lag and substantial lag in the streaming video. I was surprised, because the incurred lag seems disproportionate to audio bandwidth, but it might be that the full-screen stream is right on the edge of the system's capabilities when audio is off. Moving out of full-screen mode to a smaller video window improves performance considerably.

I never really considered transferring intermittent frames to clients. The objective was/is to provide a solid live stream to help folks get past the head-case issue of never knowing what will occur when. A glance at the video is assurance (or the visual egress indicator), but the filtered motion events/siren triggers are the real warning. Rephrased: streaming video is for comfort, but motion events are the formal warning of trouble. The warning system software does not tax even the slowest systems, so streaming video is nice but not required.

-----

Plenty of rods here. I ignore them, except for the ones in the middle of winter when it is 20 below zero, there is heavy snowpack on the ground, there is no wind and there is nothing to blow around. When they fly upward and around obstacles, well, I just don't know what is going on. Large snowflies, I guess. As most know, they are event triggers, which is why we have captured video of them. They were part of the consideration for design of the warning system.

-----

Edit: D-ViewCam is used to serve video, but it is of no utility for the warning system. There is no way to run an executable using an event trigger. Instead, I have to use the delivery of a motion event file on the server as an event flag. On the one hand this was a royal pain to program for. On the other hand, it allowed for the building of a robust program, and the warning system itself is completely independent of D-ViewCam.

« Last Edit: February 26, 2013, 10:00:09 AM by RiverWay »
Logged