Tutorial
You can wrap an already existing native RGB image source. BGR is supported mainly for interoperability with the OpenCV API, where you can reference a BGR Mat directly. You can also reference a managed byte array.
IntPtr nativeRGBImage = ..some native source;
var rgb = new RgbImage(H264Sharp.ImageFormat.Rgb, w, h, nativeRGBImage);
// Or from a managed byte array
byte[] managedImageBytes = ..some managed source;
var rgbManaged = new RgbImage(H264Sharp.ImageFormat.Rgb, w, h, managedImageBytes, offset, count);
You can reference a native YUV I420 planar image source.
IntPtr Y = ..;
IntPtr U = ..;
IntPtr V = ..;
var yuvp = new YUVImagePointer(Y, U, V, width, height, strideY, strideUV);
// Or a single contiguous I420 buffer
IntPtr Yuv = ..some YUV source;
var yuvContiguous = new YUVImagePointer(Yuv, width, height);
You can also reference NV12 images, where the U and V samples are interleaved into a single plane. NV12 is an extremely common native camera output format.
IntPtr Y = ..;
IntPtr UV = ..;
var yuvNv12 = new YUVNV12ImagePointer(Y, UV, width, height, strideY, strideUV);
// Or a single contiguous NV12 buffer
IntPtr YuvNV12Contiguous = ..;
var yuvNv12Contiguous = new YUVNV12ImagePointer(YuvNV12Contiguous, width, height);
YUV formats provide more efficient encoding because they bypass the RGB-to-YUV conversion.
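For example, if your capture pipeline already produces NV12 frames, you can feed them to the encoder directly. A hedged sketch: it assumes an Encode overload that accepts a YUVNV12ImagePointer, an already-initialized encoder (created later in this tutorial), and `nativeY`, `nativeUV`, the strides, and `transport` are placeholders for your own capture and networking code.

```csharp
// Sketch: encode straight from an NV12 camera buffer, skipping RGB conversion.
// nativeY, nativeUV, width, height, strideY and strideUV are assumed to come
// from your capture API; the YUVNV12ImagePointer Encode overload is assumed.
var nv12 = new YUVNV12ImagePointer(nativeY, nativeUV, width, height, strideY, strideUV);
if (encoder.Encode(nv12, out EncodedData[] ec))
{
    foreach (var data in ec)
    {
        transport.Send(data.GetBytes()); // transport is a placeholder
    }
}
```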
For output from the decoder, you can allocate an RGB format image container.
// Will allocate w*h*4 native bytes
var rgba = new RgbImage(H264Sharp.ImageFormat.Rgba, w, h);
var bgra = new RgbImage(H264Sharp.ImageFormat.Bgra, w, h);
// Will allocate w*h*3 native bytes
var rgb = new RgbImage(H264Sharp.ImageFormat.Rgb, w, h);
var bgr = new RgbImage(H264Sharp.ImageFormat.Bgr, w, h);
You can also refer to existing native memory or a managed byte array.
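For instance, to decode into a managed buffer you own, you can reuse the byte-array overload shown earlier. A sketch, assuming that overload is also valid as a decode target:

```csharp
// Decode into a caller-owned managed buffer (3 bytes per pixel for Rgb).
byte[] buffer = new byte[w * h * 3];
var rgbOut = new RgbImage(H264Sharp.ImageFormat.Rgb, w, h, buffer, 0, buffer.Length);
```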
For YUV output where you want to own the bytes:
// Will allocate (width * height) for Y + ((width * height) / 2) bytes for U and V
YuvImage yuvOut = new YuvImage(width, height);
Otherwise, you can use YUVImagePointer to refer to the internal memory of the decoder.
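A hedged sketch of that zero-copy path, assuming a Decode overload that takes `out YUVImagePointer`:

```csharp
// Sketch: zero-copy YUV output. The pointer refers to the decoder's internal
// buffers, so it is only valid until the next Decode call.
if (decoder.Decode(encoded, 0, encoded.Length, noDelay: true,
                   out DecodingState ds, out YUVImagePointer yuv))
{
    // Consume the frame before decoding the next one.
}
```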
Optionally, configure your converter if needed. The converter is used by both the decoder and encoder, and settings are global.
// Read the current native-side configuration
var config = Converter.GetCurrentConfig();
config.EnableSSE = 1;
config.EnableNeon = 1;
config.EnableAvx2 = 1;
config.NumThreads = 32;
config.EnableCustomThreadPool = 1;
Converter.SetConfig(config);
Create your encoder, decoder, or both.
H264Encoder encoder = new H264Encoder();
H264Decoder decoder = new H264Decoder();
You can change the path of the OpenH264 DLL from Cisco. The default one is stored in Defines.CiscoDllName, and you can assign it globally or locally.
// For global
Defines.CiscoDllName = "/yourPath/openh264-2.4.1-win64.dll";
// For local
H264Encoder encoder = new H264Encoder("/yourPath/openh264-2.4.1-win64.dll");
H264Decoder decoder = new H264Decoder("/yourPath/openh264-2.4.1-win64.dll");
It is recommended to use the default decoder initialization.
decoder.Initialize();
encoder.Initialize(width, height, bitrate: 2_500_000, fps: 30, ConfigType.CameraCaptureAdvanced);
Encode your source images.
List<byte[]> encodedFrames = new List<byte[]>();
if (encoder.Encode(source, out EncodedData[] ec))
{
    foreach (var encoded in ec)
    {
        encodedFrames.Add(encoded.GetBytes());
    }
}
Decode your encoded frames.
RgbImage rgb = new RgbImage(H264Sharp.ImageFormat.Rgb, w, h);
foreach (byte[] encoded in encodedFrames)
{
    if (decoder.Decode(encoded, 0, encoded.Length, noDelay: true, out DecodingState ds, ref rgb))
    {
        // Do something with rgb
    }
}
You can render your RGB output into a WPF Image control:
<Image x:Name="Decoded" HorizontalAlignment="Left" VerticalAlignment="Top" Margin="10,30,10,10"></Image>
void DrawEncodedImg(RgbImage frame)
{
    Dispatcher.BeginInvoke(new Action(() =>
    {
        if (Decoded.Source == null)
        {
            Decoded.Source = new WriteableBitmap(frame.Width, frame.Height, 96, 96,
                PixelFormats.Bgr24, null);
        }
        var dst = (WriteableBitmap)Decoded.Source;
        dst.Lock();
        int width = frame.Width;
        int height = frame.Height;
        int step = frame.Stride;
        int range = frame.Stride * frame.Height;
        dst.WritePixels(new Int32Rect(0, 0, width, height), frame.NativeBytes, range, step);
        dst.Unlock();
        // Pool the RgbImage containers, since BeginInvoke switches to the UI thread
        // and the frame outlives the caller. Alternatively, synchronize and reuse
        // the same container.
        pool.Add(frame);
    }));
}
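The pool mentioned in the comment can be as simple as a ConcurrentBag. A minimal sketch — the pool class and its member names are illustrative, not part of H264Sharp:

```csharp
using System.Collections.Concurrent;
using H264Sharp;

// Minimal container pool: recycling RgbImage instances avoids a native
// allocation per frame when frames are handed across threads.
class RgbImagePool
{
    private readonly ConcurrentBag<RgbImage> bag = new ConcurrentBag<RgbImage>();
    private readonly int width, height;

    public RgbImagePool(int width, int height)
    {
        this.width = width;
        this.height = height;
    }

    public RgbImage Rent() =>
        bag.TryTake(out var img) ? img : new RgbImage(ImageFormat.Bgr, width, height);

    public void Return(RgbImage img) => bag.Add(img);
}
```

Rent a container before decoding, pass it to Decode, and return it from the UI callback once WritePixels has copied the data out.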
You can convert your Mat into encoder input without any copy. Here is an example using OpenCvSharp with camera capture:
var capture = new VideoCapture(0, VideoCaptureAPIs.WINRT);
Mat frame = new Mat();
while (captureActive)
{
    if (capture.Read(frame))
    {
        var g = new RgbImage(ImageFormat.Bgr, frame.Width, frame.Height,
            (int)frame.Step(), frame.Data);
        bool encodedSuccess = encoder.Encode(g, out EncodedData[] ec);
    }
}
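You can also go the other way and view a decoded frame as a Mat without copying, for example to display it with Cv2.ImShow. A sketch, assuming `rgb` is a decoded RgbImage allocated as ImageFormat.Bgr (so the channel order matches OpenCV's expectation):

```csharp
// Wrap the decoded frame's native memory in a Mat header (no copy).
using var view = new Mat(rgb.Height, rgb.Width, MatType.CV_8UC3,
                         rgb.NativeBytes, rgb.Stride);
Cv2.ImShow("decoded", view);
Cv2.WaitKey(1);
```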