
Author Topic: How to send audio to camera?  (Read 4517 times)

efrecon

  • Level 1 Member
  • Posts: 4
How to send audio to camera?
« on: September 03, 2014, 03:57:53 AM »

(I also own a DCS-2230, on which I have tried most of the same stuff without more luck, so I will be cross-posting there as well.)

I have spent a number of days trying to figure out how to send AUDIO to the camera so that the audio stream is played on the internal speaker, without any luck. Has anyone been able to do that? Could anyone point me to a knowledgeable person at D-Link (?) or elsewhere? Here is what I've been trying:

  • My first attempt was to use the documented NIPCA API/SDK, and in particular the /dev/speaker.cgi entry point. Basically, what should work is to POST S16_LE PCM data (signed 16 bits per sample, little-endian) at a rate of 8000 samples per second, mono. The data should be packetised using the (A)CAS format. Since I have code to get data from the camera, I have been able to verify that there isn't any problem with my ACAS packetisation (I've packetised and de-packetised local data to verify that it comes back identical). However, when I send all of this to the camera, with proper timing, the result is sound on the speaker, but hardly what I sent! I've tried many other sound formats, etc., without any luck; those were just a last resort, since what's documented is what I've described above...
  • My second attempt has been to understand what the Internet Explorer interface is doing. When running in IE, the ActiveX component performs a continuous POST operation to /ipcam/speakstream.cgi. I've captured a stream sent from my PC using Wireshark, removed the header from the POST operation and thus been able to get at the raw binary data of the stream being sent to the camera through that entry point. I've got as far as understanding that the audio is coded using some sort of ADPCM, without understanding which exact variant it uses. The closest I got was G.726: when I listen back to the raw data as G.726, I can hear myself, albeit with an enormous amount of background noise.

So which entry point should I use? Is there yet another one? And in which specific format should the sound and packets be? POSTing using regular HTTP is the only real option that I have; UDP is outside my scope of operation. (A sketch of my current attempt against /dev/speaker.cgi is below.)
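
For reference, this is a minimal sketch of the kind of sender I am testing against /dev/speaker.cgi: raw S16_LE, 8 kHz, mono PCM streamed in timed chunks over a single chunked HTTP POST. The camera address, credentials, Content-Type and the acas_packetise() stub are placeholders of mine; the real (A)CAS framing is exactly the part I am not sure about.

Code:
# Minimal sketch: stream raw S16_LE, 8 kHz, mono PCM to the camera's
# /dev/speaker.cgi entry point in timed chunks over one HTTP POST.
# CAMERA, USER, PASSWORD, the Content-Type and the acas_packetise()
# stub are assumptions -- the real (A)CAS framing is the open question.
import time
import requests

CAMERA = "http://192.168.0.20"   # hypothetical camera address
USER, PASSWORD = "admin", "secret"

CHUNK_MS = 125                              # send 125 ms of audio at a time
CHUNK_BYTES = 8000 * 2 * CHUNK_MS // 1000   # 8000 Hz * 2 bytes/sample, mono

def acas_packetise(pcm_chunk):
    """Placeholder: wrap a raw PCM chunk in an (A)CAS packet.
    The real header layout is the part I have not pinned down."""
    return pcm_chunk

def timed_chunks(path):
    """Yield ACAS-wrapped chunks of the PCM file, paced in real time."""
    with open(path, "rb") as f:
        while True:
            chunk = f.read(CHUNK_BYTES)
            if not chunk:
                break
            yield acas_packetise(chunk)
            time.sleep(CHUNK_MS / 1000.0)   # keep roughly real-time pacing

# Passing a generator makes requests use chunked transfer encoding,
# which seems closest to the continuous POST the ActiveX component does.
resp = requests.post(
    CAMERA + "/dev/speaker.cgi",
    data=timed_chunks("me_talking_s16le_8k_mono.raw"),
    auth=(USER, PASSWORD),
    headers={"Content-Type": "application/octet-stream"},
)
print(resp.status_code)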

Logged

efrecon

  • Level 1 Member
  • Posts: 4
Re: How to send audio to camera?
« Reply #1 on: September 03, 2014, 07:15:05 AM »

I finally managed to make some progress on this. The audio sent to /ipcam/speakstream.cgi does indeed seem to be in G.726 format, unlike what I stated in my previous message. I've been able to reconstruct a .WAV header, prepend it to the raw data and open the result successfully in a number of programs (e.g. Audacity, avplay and mplayer). That is good news... (A sketch of the header is below.)
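
This is a minimal sketch of the kind of header I prepend, assuming 32 kbit/s G.726 at 8000 Hz, mono, and the RIFF format tag 0x0045; the tag, the 4 bits per sample and the absence of a 'fact' chunk are assumptions on my side and may need adjusting depending on the player.

Code:
# Minimal sketch: prepend a RIFF/WAVE header to a raw G.726 capture so
# that players can identify the codec. The format tag 0x0045 (assumed to
# mean ITU-T G.726), the 32 kbit/s byte rate and 4 bits per sample are
# assumptions; some players may also want a 'fact' chunk for non-PCM data.
import struct
import sys

def g726_wav_header(data_size, sample_rate=8000, bits_per_sample=4):
    byte_rate = sample_rate * bits_per_sample // 8   # 4000 B/s = 32 kbit/s
    header = struct.pack(
        "<4sI4s4sIHHIIHH",
        b"RIFF", 36 + data_size, b"WAVE",
        b"fmt ", 16,
        0x0045,            # WAVE format tag assumed to mean ITU-T G.726
        1,                 # mono
        sample_rate,
        byte_rate,
        1,                 # block align
        bits_per_sample,
    ) + struct.pack("<4sI", b"data", data_size)
    return header

if __name__ == "__main__":
    raw = open(sys.argv[1], "rb").read()          # e.g. me_talking.g726
    with open(sys.argv[1] + ".wav", "wb") as out:
        out.write(g726_wav_header(len(raw)))
        out.write(raw)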

The bad news is that I still can't send any audio properly to the camera. I am now running into two issues when POSTing to the camera:

  • I am sending data in small chunks, trying to mimic the G.726 timing, i.e. 125 ms of data at a time, which at 32 kbit/s is 500 bytes. After a while the camera closes the connection. Why?
  • I still haven't been able to send, and hear back, data that I have generated myself. I captured audio with
Code:
avconv -f alsa -i hw:1 -acodec g726 -ar 8000 -ac 1 -f s16le - > me_talking.g726
and sent this raw data to the camera in chunks, as described above (see the sketch after this list). But I can't hear anything. As far as I can tell the data is valid: if I prepend the .WAV header, I can play it back in those various programs. Any idea?
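
For reference, the sending loop I described looks roughly like this: the captured G.726 file is POSTed to /ipcam/speakstream.cgi in 500-byte chunks, one every 125 ms. The camera address, credentials and Content-Type are placeholders, and whether the raw G.726 bytes need any additional framing is exactly what I am unsure about.

Code:
# Minimal sketch of the sending loop described above: POST the captured
# G.726 file to /ipcam/speakstream.cgi in 500-byte chunks, one every
# 125 ms. Address, credentials and Content-Type are placeholders; any
# extra framing the camera may expect is the open question.
import time
import requests

CAMERA = "http://192.168.0.20"   # hypothetical camera address
USER, PASSWORD = "admin", "secret"

def g726_chunks(path, chunk_bytes=500, interval_s=0.125):
    """Yield 500-byte slices of the raw G.726 capture, paced in real time."""
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_bytes)
            if not chunk:
                break
            yield chunk
            time.sleep(interval_s)

resp = requests.post(
    CAMERA + "/ipcam/speakstream.cgi",
    data=g726_chunks("me_talking.g726"),
    auth=(USER, PASSWORD),
    headers={"Content-Type": "application/octet-stream"},
)
print(resp.status_code)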
Logged