Kinesis Video Streams - Just Getting Started

Just started playing with Amazon Kinesis Video Streams.

I’ve been meaning to get around to Kinesis Video Streams but… well, life. And my new job at AWS that’s kept me busy studying for both the SysOps Administrator and Developer certifications. Those are done now (at the Associate level, at least), so I played a bit as a reward.

Architecture

A full video pipeline has a producer and a consumer, and perhaps some intermediate processing steps. In the example I’m building, the producer side is a camera plus something that reads the RTSP stream from that camera and writes it to an Amazon Kinesis Video Stream. To read from the stream I’ll use a simple web-based viewer.

Building the Plugin

The code to start with is the GStreamer plugin. The build instructions were confusing so I submitted a Pull Request to make them clearer. That may be accepted by the time you read this, but if not, here’s what you do:

git clone https://github.com/awslabs/amazon-kinesis-video-streams-producer-sdk-cpp.git
mkdir -p amazon-kinesis-video-streams-producer-sdk-cpp/build
cd amazon-kinesis-video-streams-producer-sdk-cpp/build
cmake -DBUILD_GSTREAMER_PLUGIN=ON ..
make

This is ALMOST what was in the docs. The key part is that you MUST include -DBUILD_GSTREAMER_PLUGIN=ON or it does not build the plugin.

Run the Example

To run this you need to have an RTSP camera handy. I’m a fan of the Ubiquiti UniFi camera line and have several. You will need to find the RTSP URL for your own camera.
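RTSP URLs are vendor-specific, but they generally follow this shape (placeholders only; check your camera’s settings or NVR for the real host, port, and path — 554 is just the default RTSP port):

```
rtsp://<username>:<password>@<camera-ip>:554/<stream-path>
```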

I hate typing commands over and over, so I’m a fan of makefiles. In the build folder I made a new file called Makefile.run:

ACCESS="<your AWS access key>"
SECRET="<your AWS secret key>"
REGION="<your AWS region>"
STREAM="MyKVStream"

RTSP="<your RTSP URL>"

create:
	aws kinesisvideo create-stream --stream-name ${STREAM} --data-retention-in-hours "24"
	# note this assumes you have set up the AWS CLI already

list:
	aws kinesisvideo list-streams

# note: each recipe line runs in its own shell, so the export and the
# gst-launch call must be joined into one command
kvssink:
	export GST_PLUGIN_PATH=.; \
	gst-launch-1.0 rtspsrc location=${RTSP} short-header=TRUE ! rtph264depay ! h264parse ! video/x-h264, format=avc,alignment=au ! kvssink stream-name=${STREAM} storage-size=512 access-key=${ACCESS} secret-key=${SECRET} aws-region=${REGION}

fake:
	export GST_PLUGIN_PATH=.; \
	gst-launch-1.0 rtspsrc location=${RTSP} short-header=TRUE ! rtph264depay ! h264parse ! video/x-h264, format=avc,alignment=au ! fakesink

Create the Stream

You should first create the stream (make -f Makefile.run create). You can verify that it was properly created (make -f Makefile.run list).
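If you want more detail than list-streams gives, a describe target is a handy addition to Makefile.run (a sketch; describe-stream is another subcommand in the same kinesisvideo CLI namespace, and it reports the stream’s ARN, status, and retention period):

```makefile
describe:
	aws kinesisvideo describe-stream --stream-name "MyKVStream"
```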

Connect the Camera Feed to the Stream

You then connect the feed to the stream (make -f Makefile.run kvssink). If you get errors you can check your pipeline (make -f Makefile.run fake) using a fake sink. I used that to discover that I needed to add h264parse to the GStreamer pipeline.
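For my own notes, here is what I understand each element in that pipeline to be doing (roles as I read them from the GStreamer element docs; the string below just reassembles the same pipeline for readability):

```shell
# rtspsrc      - connects to the camera's RTSP URL and pulls RTP packets
# rtph264depay - strips the RTP payload headers, leaving raw H.264
# h264parse    - parses the H.264 stream so the caps below can be negotiated
# video/x-h264,format=avc,alignment=au
#              - caps filter: AVC byte format, aligned on access units,
#                which is what kvssink expects
# kvssink      - the plugin we built; uploads fragments to Kinesis Video Streams
# (${RTSP} and ${STREAM} are left as literal placeholders here)
PIPELINE='rtspsrc location=${RTSP} short-header=TRUE ! rtph264depay ! h264parse ! video/x-h264,format=avc,alignment=au ! kvssink stream-name=${STREAM}'
echo "${PIPELINE}"
```

Swapping kvssink for fakesink at the end is exactly what the fake target does, which is why it is such a cheap way to isolate pipeline problems from AWS problems.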

Viewing the Video

I used the excellent Kinesis Video Stream Viewer. It’s a static web site hosted on GitHub. Fill in the fields with your AWS credentials, region, and your stream name and hit play!
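If you’d rather not paste credentials into a web page, KVS can also hand out an HLS playback URL you can open in any HLS-capable player. A Makefile.run target along these lines should work (a sketch using the get-data-endpoint and get-hls-streaming-session-url CLI calls; I haven’t wired this into my own Makefile yet):

```makefile
hls:
	ENDPOINT=$$(aws kinesisvideo get-data-endpoint --stream-name "MyKVStream" --api-name GET_HLS_STREAMING_SESSION_URL --query DataEndpoint --output text); \
	aws kinesis-video-archived-media get-hls-streaming-session-url --endpoint-url $$ENDPOINT --stream-name "MyKVStream" --playback-mode LIVE
```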

Next Steps for Me

It’s been years since I played with video, and GStreamer has matured amazingly over that time. I really need to dig into how it works, what the pipelines really do. And of course, on the AWS side I need to understand what I can do with the streams there. I want to plug into Amazon Rekognition and do some really cool stuff!
