Recording stream to output file.

Oct 9, 2009 at 7:14 PM

Jer, great work on this, as media tools for WPF are sorely lacking!

A little background... I started out trying to use WME and then Expression Encoder for a video recording project I'm working on.  As both require having those applications on the client machine, I'm trying to move away from them.  I've been able to get my camera working with your lib to show the stream, but where I'm running into issues is in recording the stream.  If my understanding is correct, I need to mix this lib with DirectShow.NET.  I've been looking at their examples and have a rudimentary understanding of what the process does to record.  Where I'm hitting a snag is that even with their examples, my Logitech webcam records fine but my company's device that I'm trying to record from returns an error.  I ran into this same issue with an earlier release of yours, where it somehow just didn't recognize that camera's feed (it recognized that the camera existed but wouldn't show the feed).  The error I'm getting: when it tries to start the capture graph, it throws a -2147023729 error code (this is with their capwmv sample that uses the AsfFilter to record).  Knowing you know this technology pretty well, I'm hoping you can at least point me in the right direction with this.  Thanks again!
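For reference, that error code decodes mechanically: -2147023729 is 0x8007048F, a FACILITY_WIN32 HRESULT wrapping Win32 error 1167, which is ERROR_DEVICE_NOT_CONNECTED. A quick sketch of the decode (plain C#, nothing DirectShow-specific):

```csharp
using System;

class DecodeHresult
{
    static void Main()
    {
        int hr = -2147023729;                      // the failure from the capture graph
        uint u = unchecked((uint)hr);              // reinterpret as unsigned: 0x8007048F
        int facility = (int)((u >> 16) & 0x1FFF);  // 7 = FACILITY_WIN32
        int code = (int)(u & 0xFFFF);              // 1167 = ERROR_DEVICE_NOT_CONNECTED
        Console.WriteLine($"0x{u:X8} facility={facility} code={code}");
        // → 0x8007048F facility=7 code=1167
    }
}
```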

Coordinator
Oct 10, 2009 at 11:35 PM
Edited Oct 10, 2009 at 11:38 PM

DirectShow.NET is included in the interop folder of the source code, so you are free to use that if you need.

There are a couple of routes you can go with MediaKit and recording video.

The first: if you just want to use Windows Media, you can use http://windowsmedianet.sourceforge.net/.  The VideoCaptureElement has a flag called EnableSampleGrabbing and an event, NewVideoSample.  You could use those to get the raw bitmap and send the samples to the Windows Media SDK using the interop lib linked above.

I did something similar here:  http://jmorrill.hjtcentral.com/Home/tabid/428/EntryId/278/Vista-DWM-Hacking-Capture-and-Stream-D3D-WPF-Windows-Real-Time.aspx

That project no longer works as it used undocumented features in DWM for Vista (which are now broken after some service packs), but look at the WMVFile.cs from the source I posted.  That class takes in a raw bitmap and will either stream it over TCP or encode it to disk... or both.  That class was taken from the wm.net interop lib samples... I just hacked it up to fit my specific needs... so use with caution!

The other route you can take is quite a bit tougher if you're not a DShow geek, but it would be capable of more than just WM.  You would either have to copy and paste a new VideoCaptureElement/Player or edit the VideoCaptureElement/Player code (mostly VideoCapturePlayer.cs).  The basic idea is a graph that looks like this:

capture_source -> [Tee splitter] -> (Splits into two streams)

(From Tee Splitter) -> Stream 0 -> Video Render (like it currently uses)

(From Tee Splitter) -> Stream 1 -> WM ASF Writer

To do this you will have to manually instantiate the WM ASF Writer, then use some interfaces from the wm.net interop lib to configure the filter (i.e., the encoding profile, whether you are streaming over TCP, writing to disk, etc.) before you hook it up to anything.  This configuration code would use similar or the same code as you would see in the WMVFile.cs I mentioned earlier.
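As a rough illustration of that manual setup, here is an untested sketch. The filter and interface names are the stock DirectShowLib ones (WMAsfWriter, IConfigAsfWriter, IFileSinkFilter); the profile GUID is a placeholder, since real code would build an IWMProfile with the wm.net interop lib the way WMVFile.cs does:

```csharp
// Sketch only: add and configure the WM ASF Writer *before* connecting it.
// myProfileGuid is hypothetical -- substitute a real system profile GUID or
// an IWMProfile built via the wm.net interop lib.
IBaseFilter asfWriter = (IBaseFilter)new WMAsfWriter();
int hr = m_graph.AddFilter(asfWriter, "WM ASF Writer");
DsError.ThrowExceptionForHR(hr);

// Pick the encoding profile first; pin media types depend on it
var config = (IConfigAsfWriter)asfWriter;
hr = config.ConfigureFilterUsingProfileGuid(myProfileGuid); // placeholder GUID
DsError.ThrowExceptionForHR(hr);

// Writing to disk; streaming over TCP would be configured here instead
var fileSink = (IFileSinkFilter)asfWriter;
hr = fileSink.SetFileName(@"C:\output.wmv", null);
DsError.ThrowExceptionForHR(hr);
```

Only after this configuration would you connect the tee's second output pin into the writer.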

Now, if you replace the "WM ASF Writer" with an MPEG-4 encoder, a muxer, and a file writer, you could potentially write to MPEG-4.

Oct 14, 2009 at 3:11 PM

Jer,  You rock if I haven't said that before! 

I was able to take that first project and run with it.  I modified the D3DImage to public, as stated in a different post, to grab the bitmap for snapshots of the video, but for the WMV section I ran into an ugly error telling me the following...

Unable to cast COM object of type 'System.__ComObject' to interface type 'WindowsMediaLib.IWMInputMediaProps'. This operation failed because the QueryInterface call on the COM component for the interface with IID '{96406BD5-2B2B-11D3-B36B-00C04F6108FF}' failed due to the following error: No such interface supported (Exception from HRESULT: 0x80004002 (E_NOINTERFACE)).

The code worked great off the examples using WinForms, but in the WPF world it wasn't happy... I switched the call that appends a new frame to use a delegate, like so: this.Dispatcher.BeginInvoke(System.Windows.Threading.DispatcherPriority.Normal, new appendDelegate(this.myVidFile.AppendNewFrame), e.VideoFrame); ...and it works great.  One weird thing I saw, though, was that the video frame coming from NewVideoSample was upside down.  I just used the .RotateFlip method on it to turn it, so no biggie.

My next question in this process is audio.  I know how to find the devices and get the audio, but I'm not sure whether the code I'm currently using to build the WMV is appropriate for the audio too.  If not, I'm assuming I'll need to head down the second route you mentioned and build a graph with the audio as well as the video splits.  If there's a way to push the audio into the WMV I'm currently building, you'll have made my day.

Coordinator
Oct 22, 2009 at 2:57 AM

Sorry for the really late reply!

"This operation failed because the QueryInterface call on the COM component for the interface with IID..." usually happens to me when I create COM objects on an STA thread and then access them from another thread.  If you create the COM object on an STA thread, the only thread ever allowed to touch it is that SAME thread.  If you create it on the MTA (if supported), any MTA thread can access it.

I'm surely not an audio guy, but you could (in theory) add the audio input device to the graph, hook up another sample grabber for the audio samples, and pass them to the WM libraries.  It should be entirely possible to make another class that just deals with audio, so you wouldn't have to modify MediaKit (if you find that easier).

Oct 23, 2009 at 4:33 PM

Don't sweat the delay; this isn't your day job (but thanks for doing it).  In the time that passed I started jumping head first into option 2 and learning more about DShow.  I've been hacking away at the VideoCapturePlayer, trying to add in tees for audio and video (I've assumed each would get its own tee).  I'm able to get it working fine with preview, but as soon as I try to hook the capture graph up for output, the preview stops displaying.  I'm not really sure why this would happen unless it has to do with the code after the capture device is added to the graph and the call is made to SetVideoCaptureParameters.  Here's the code I've changed (I've commented out the items I'm looking to add in, since I can't get past the SetOutputFileName yet).  Thanks!  (I'll give the other option a try, but I'm pretty invested in this direction, so I'd still like to pursue it.)

 

private void SetupGraph()
{
    /* Clean up any messes left behind */
    FreeResources();

    try
    {
        /* Create a new graph */
        m_graph = (IGraphBuilder)new FilterGraphNoThread();

#if DEBUG
        m_rotEntry = new DsROTEntry(m_graph);
#endif

        /* Create a capture graph builder to help
         * with rendering a capture graph */
        var captureGraph = (ICaptureGraphBuilder2)new CaptureGraphBuilder2();

        mediaController = (IMediaControl)this.m_graph;

        /* Set our filter graph to the capture graph */
        int hr = captureGraph.SetFiltergraph(m_graph);
        DsError.ThrowExceptionForHR(hr);

        /* Add our capture device source to the graph */
        if (m_videoCaptureSourceChanged)
        {
            m_captureDevice = AddFilterByName(m_graph,
                                              FilterCategory.VideoInputDevice,
                                              VideoCaptureSource);
            m_videoCaptureSourceChanged = false;
        }
        else if (m_videoCaptureDeviceChanged)
        {
            m_captureDevice = AddFilterByDevicePath(m_graph,
                                                    FilterCategory.VideoInputDevice,
                                                    VideoCaptureDevice.DevicePath);
            m_videoCaptureDeviceChanged = false;
        }

        /* If we have a null capture device, we have an issue */
        if (m_captureDevice == null)
            throw new Exception(string.Format("Capture device {0} not found or could not be created", VideoCaptureSource));

        if (UseYuv && !EnableSampleGrabbing)
        {
            /* Configure the video output pin with our parameters and if it fails
             * then just use the default media subtype */
            if (!SetVideoCaptureParameters(captureGraph, m_captureDevice, MediaSubType.YUY2))
                SetVideoCaptureParameters(captureGraph, m_captureDevice, Guid.Empty);
        }
        else
        {
            /* Configure the video output pin with our parameters */
            SetVideoCaptureParameters(captureGraph, m_captureDevice, Guid.Empty);
        }

        /* Add our audio capture device source to the graph */
        if (m_audioCaptureSourceChanged)
        {
            m_audioCaptureDeviceFilter = AddFilterByName(m_graph,
                                                         FilterCategory.AudioInputDevice,
                                                         AudioCaptureSource);
            m_audioCaptureSourceChanged = false;
        }
        else if (m_audioCaptureDeviceChanged)
        {
            m_audioCaptureDeviceFilter = AddFilterByDevicePath(m_graph,
                                                               FilterCategory.AudioInputDevice,
                                                               AudioCaptureDevice.DevicePath);
            m_audioCaptureDeviceChanged = false;
        }

        var rendererType = VideoRendererType.VideoMixingRenderer9;

        /* Creates a video renderer and registers the allocator with the base class */
        m_renderer = CreateVideoRenderer(rendererType, m_graph, 2);

        if (rendererType == VideoRendererType.VideoMixingRenderer9)
        {
            var mixer = m_renderer as IVMRMixerControl9;

            if (mixer != null && !EnableSampleGrabbing && UseYuv)
            {
                VMR9MixerPrefs dwPrefs;
                mixer.GetMixingPrefs(out dwPrefs);
                dwPrefs &= ~VMR9MixerPrefs.RenderTargetMask;

                /* Prefer YUV */
                dwPrefs |= VMR9MixerPrefs.RenderTargetYUV;
                mixer.SetMixingPrefs(dwPrefs);
            }
        }

        if (EnableSampleGrabbing)
        {
            m_sampleGrabber = (ISampleGrabber)new SampleGrabber();
            SetupSampleGrabber(m_sampleGrabber);
            hr = m_graph.AddFilter(m_sampleGrabber as IBaseFilter, "SampleGrabber");
            DsError.ThrowExceptionForHR(hr);
        }

        /* Add an infinite tee for each stream so preview and capture can split */
        InfTee vidTee = new InfTee();
        hr = m_graph.AddFilter((IBaseFilter)vidTee, "vidTee");
        DsError.ThrowExceptionForHR(hr);

        InfTee audTee = new InfTee();
        hr = m_graph.AddFilter((IBaseFilter)audTee, "audTee");
        DsError.ThrowExceptionForHR(hr);

        var videoOutPin = DsFindPin.ByDirection(m_captureDevice, PinDirection.Output, 0);
        var audioOutPin = DsFindPin.ByDirection(m_audioCaptureDeviceFilter, PinDirection.Output, 0);

        if (videoOutPin == null)
            throw new Exception("Could not query the video output pin on source filter");
        if (audioOutPin == null)
            throw new Exception("Could not query the audio output pin on source filter");

        var vidTeeInPin = DsFindPin.ByDirection((IBaseFilter)vidTee, PinDirection.Input, 0);
        var audTeeInPin = DsFindPin.ByDirection((IBaseFilter)audTee, PinDirection.Input, 0);
        var vidTeeOutPin = DsFindPin.ByDirection((IBaseFilter)vidTee, PinDirection.Output, 0);
        var vidCapTeeOutPin = DsFindPin.ByDirection((IBaseFilter)vidTee, PinDirection.Output, 1);
        var audTeeOutPin = DsFindPin.ByDirection((IBaseFilter)audTee, PinDirection.Output, 0);
        var audCapTeeOutPin = DsFindPin.ByDirection((IBaseFilter)audTee, PinDirection.Output, 1);

        m_graph.Connect(videoOutPin, vidTeeInPin);
        m_graph.Connect(audioOutPin, audTeeInPin);

        /* Intelligently connect the pins in the graph to the renderer */
        hr = m_graph.Render(vidTeeOutPin);
        DsError.ThrowExceptionForHR(hr);

        hr = m_graph.Render(audTeeOutPin);
        DsError.ThrowExceptionForHR(hr);

        Marshal.ReleaseComObject(vidTeeOutPin);
        Marshal.ReleaseComObject(vidTeeInPin);
        Marshal.ReleaseComObject(videoOutPin);
        Marshal.ReleaseComObject(audTeeOutPin);
        Marshal.ReleaseComObject(audTeeInPin);
        Marshal.ReleaseComObject(audioOutPin);

        // Add Capture Compression Filters
        IBaseFilter videoCompression = CreateFilter(FilterCategory.VideoCompressorCategory, "MJPEG Compressor");
        IBaseFilter audioCompression = CreateFilter(FilterCategory.AudioCompressorCategory, "MPEG Layer-3");

        hr = m_graph.AddFilter(videoCompression, "VideoCompression");
        DsError.ThrowExceptionForHR(hr);

        hr = m_graph.AddFilter(audioCompression, "AudioCompression");
        DsError.ThrowExceptionForHR(hr);

        IBaseFilter mux;
        IFileSinkFilter sink;

        hr = captureGraph.SetOutputFileName(MediaSubType.Asf, @"C:\MyCapVid.WMV", out mux, out sink);
        DsError.ThrowExceptionForHR(hr);

        //hr = captureGraph.RenderStream(PinCategory.Capture,
        //                               MediaType.Video,
        //                               m_captureDevice,
        //                               videoCompression,
        //                               mux);
        //DsError.ThrowExceptionForHR(hr);

        //hr = captureGraph.RenderStream(PinCategory.Capture, MediaType.Audio, m_audioCaptureDeviceFilter, audioCompression, mux);
        //DsError.ThrowExceptionForHR(hr);

        /* Register the filter graph
         * with the base classes */
        SetupFilterGraph(m_graph);

        /* Sets the NaturalVideoWidth/Height */
        SetNativePixelSizes(m_renderer);

        HasVideo = true;

        /* Make sure we Release() this COM reference */
        Marshal.ReleaseComObject(captureGraph);
    }
    catch (Exception ex)
    {
        /* Something got fuct up */
        FreeResources();
        InvokeMediaFailed(new MediaFailedEventArgs(ex.Message, ex));
        return;
    }

    /* Success */
    InvokeMediaOpened();
}