
Author Topic: Technical notes: How to do video recording, and to do it well.  (Read 4044 times)


Technical notes: How to do video recording, and to do it well.
« on: November 18, 2015, 06:33:01 pm »
This is a technical document which I've been intending to write for a while, and I figured now would be as good a time as any to actually do it. Technically this is related to the video requests section, but it's more of a general knowledge document, understanding the technical side of things.

If you actually want to UNDERSTAND how it works, and how to get the absolute best out of your equipment, then you'll want to read on.

If you want a place to start, you can read through the document and see where you can end up, for what you're willing to invest in time and resources.


Before we go any further...

"I already have X setup, and it's good enough! Why the hell do I need to understand any of this technical crap?"

If you are satisfied with good enough, then yes, what you have will suffice. You may be surprised by the price differentials and the quality (or quality of life) improvements you can get for relatively cheap though.

That, and people who want to improve and produce higher quality work will keep working their way up, and eventually make your work look bad by comparison. Unless you already know all this stuff, and more, in which case 'you wouldn't be asking that question'.

... Any other questions? No? Good, let us begin.

Starting at the bottom: Where do I start, and how?

Now, if we're dealing with the absolute basics, all you REALLY need is a device that will capture video to your PC. There's quite a few of those, but you know, they all vary in quality and a whole bunch of other things.

Now, the easiest way to understand what a device DOES to achieve its capture is to look at what it requires to operate. Unsurprisingly, the better the device, the more the device requires to work.

So how does this thing work anyway?

This is your first port of call. How you install the device will usually tell you a great deal about how it operates, or more accurately, what you should expect out of it in quality terms.

I do need to take a detour before we start though.


Quick lesson before we move on - understanding how video is recorded digitally.

Now, when you see video, there's a resolution attached to it, as well as a framerate. I'm pretty sure most people would be aware that 720p video looks sharper than NTSC or PAL, and 1080p video looks sharper and cleaner than 720p. 60 frames per second feels smoother than 30 frames per second, or 15, and so on.

Thing is, to perfectly capture a video, the device needs to write to disk all the data in question.

Funny thing is, the math for this is KNOWN, because what you need to do is store the colour information for each and every one of those pixels. The notation you'll see, like 4:2:2 or 4:2:0, describes chroma subsampling in a Y'CbCr colourspace: 4:2:2 keeps the brightness (luma) channel at full resolution but samples the two colour channels at half the horizontal resolution, while 4:2:0 halves them vertically as well, using less data per pixel. If you ever watched an anime rip that looks washed out, then got the Bluray and found it more vibrant, chroma handling is part of the reason why.

There's 1280 pixels across and 720 down for a 720p source, and you'll also need some overhead to put the data in a matrix and link each still frame one after another.

The math gets complicated, but it amounts to 'If you want pixel perfect capture at 720p60, the device that's writing the data needs to be able to dump about 240-260 MB (That's MegaBYTES, not Megabits) per second, every second. At 1080p60, you'll need about 550-600MB/s'
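If you want to check the arithmetic yourself, here's a quick sketch. The exact bytes-per-pixel depends on the colourspace and bit depth you assume, so these are illustrative figures, not gospel:

```python
def raw_rate_mb_s(width, height, fps, bytes_per_pixel):
    """Uncompressed video data rate, in decimal megabytes per second."""
    return width * height * fps * bytes_per_pixel / 1_000_000

print(raw_rate_mb_s(1280, 720, 60, 3))   # 165.888 -> ~166 MB/s (24-bit RGB)
print(raw_rate_mb_s(1280, 720, 60, 4))   # 221.184 -> ~221 MB/s (32 bits/pixel)
print(raw_rate_mb_s(1920, 1080, 60, 4))  # 497.664 -> ~498 MB/s
```

Container overhead and higher bit depths are what push these toward the 240-260 and 550-600 MB/s figures quoted above.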

What happens if you can't record the data at that speed? Simple, you lose data because you missed it. In short? You lose frames.

Now, for pretty practical reasons (Namely, that's a LOT of data per second of footage!) we don't actually store our video this way. Or basically, there's a reason why I don't normally send the 'raw footage' of im@s video recordings - downloading 16GB for 2 minutes and 15 seconds of footage is insane, if only because entire GAMES aren't that big in a lot of cases.

So we run an encoding process, which means we compress the video. In a lot of cases, because we're interested in keeping it visually close, we compress the video with what's known as a lossy codec. This means when we run our compression, we drop a lot of detail that most people wouldn't honestly notice (eg subtle gradients), which in turn brings the filesize down.

As an example, most anime subs at 720p24/30 are what, 300-500MB for 25 minutes? That's a lot less insane than the lossless (If I was to record an anime using the capture equipment, you'd be looking at oh just say 80GB for that same episode?) and is more practical for transport.

Bear in mind that this sort of compression is done AFTER you have a final video. You don't do it on the fly if trying to get the smallest file size and best quality is your goal.

Now, if you're going to ask 'So why can't I compress lossless on the fly and try to cheat the requirements a little?' well...

The kindest way to put it is this way - the processing power required to do so is immense, and the moment the CPU stalls because the codec doesn't know how to handle a certain image in the timeframe it has, you lose that frame. And probably six or eight either side.

If you've ever watched a TV broadcast, and the image goes weird with pixelated blocks, bad colour or a whole raft of weirdness, that's one reason why it could have happened. Generally the codec that governs this equipment (while very good) cannot handle certain colour or light patterns. In the TV world (and real life) there's a lot of things that don't happen nearly often enough to cause the codec to choke, so you can get surprisingly good on the fly encoding if you're prepared to fork over the money.

Funnily enough, streaming uses a variant of that, and gaming is also the one thing that can throw very unnatural patterns. In fact, I usually use a particular game (Atelier Totori) to test compression codecs with their robustness. Most of them fail miserably.


So what did this aside teach us? The fact that if you CAN'T write that fast, you MUST be compressing on the fly, and consequently you must be suffering various degrees of information loss.

Now, we start to cover the first step in determining how good a device is - How you get it to work with your computer.

USB 2.0 vs USB 3.0 vs Thunderbolt vs PCI-e

If the above statement made absolutely no sense to you, the fastest way to cover it is this:

Does your device plug into any USB port?

Does your device require you to plug it into a very SPECIFIC type (namely a blue marked) of USB port?

Does your device require you to use a (until recently) Mac-specific port?

Do you have to open your computer and plug it into a slot that could also fit your graphics card, or maybe your sound card?

How you answer those questions will determine what the device HAS to do to record.

If you can plug it into any USB port, it's a USB 2.0 device. This means (because I can read tech specs) that the device is limited to a signalling rate of 480Mbps, because that's what the port ITSELF supports. That's Megabits, not MegaBYTES - don't be off by a factor of 8.
Add in protocol overheads, encoding and a whole bunch of other stuff, and it REALLY amounts to 'USB 2.0 can transfer, on the best of days, about 30-40MB/s'.

The good news is that the device can work on most computers because you don't need a write speed that fast.
The bad news? Well, you're compressing 82.5% of all the data in 720p, and somewhere close to 96% of all the data at 1080p. A lot of information's going to go missing.
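To put numbers on that claim, a quick sketch using the raw-rate figures quoted earlier and a practical USB 2.0 range (the exact percentages shift with whichever figures you assume):

```python
def fraction_lost(raw_mb_s, link_mb_s):
    """Fraction of the raw video data that must be thrown away (compressed
    out) to squeeze the stream through a link of the given speed."""
    return 1 - link_mb_s / raw_mb_s

# 720p60 raw ~240 MB/s over a 40 MB/s USB 2.0 link:
print(round(fraction_lost(240, 40), 3))  # 0.833 -> ~83% of the data is gone
# 1080p60 raw ~600 MB/s over a 25 MB/s link on a bad day:
print(round(fraction_lost(600, 25), 3))  # 0.958 -> ~96% gone
```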

If you're just planning to show your friends some quick and dirty videos, or streaming, it's probably good enough. If you intend to do anything to the video (like say a mashup, or some other video work like a mini documentary or something) you'll probably be griping about how those bits look worse compared to what you're working on.

If you're holding a USB 3.0 device AND the device says you need 200+MB/s write, then you might be in luck, assuming you have an SSD in the PC that's doing the operation in question, and it writes fast enough.

Maybe. If the stars align.

There's been... shall we say 'issues' with the implementation of USB 3.0

I could go on for a while about the problems, but it amounts to this - the maximum throughput of a USB 3.0 port is SUPPOSED to be 5Gbps. If you're wondering, after encoding overhead that's about 500MB/s.

To do this, what mainboard manufacturers do is use the PCI-e lanes to provide the throughput to support the port. All well and good so far right?

Well, here's the kicker. A LOT of the implementations limit you to 1 PCI-e lane. Maybe 2. Why? Because PCI-e lanes are at a premium, configuring the board to properly send data everywhere is expensive, and a lot of manufacturers don't actually expect people to tax the USB 3.0 port to its theoretical maximum.

Each PCI-e v1 lane carries roughly 250MB/s of usable data; v2 doubles that to about 500MB/s, and v3 roughly doubles it again to about 985MB/s (before you ask, v3's raw clock is only a little faster - most of the gain comes from a more efficient encoding). To feed a full-speed USB 3.0 port properly, you need at least one v2 or v3 lane, or two v1 lanes.
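As a back-of-envelope lane-budget check (the per-lane figures here are the commonly quoted approximations after encoding overhead):

```python
import math

# Approximate usable bandwidth per PCI-e lane in MB/s, after encoding
# overhead (v1: 2.5 GT/s 8b/10b, v2: 5 GT/s 8b/10b, v3: 8 GT/s 128b/130b)
LANE_MB_S = {1: 250, 2: 500, 3: 985}

def lanes_needed(target_mb_s, gen):
    """How many lanes of a given PCI-e generation carry target_mb_s."""
    return math.ceil(target_mb_s / LANE_MB_S[gen])

# A full-speed USB 3.0 port wants ~500 MB/s:
for gen in (1, 2, 3):
    print(f"PCI-e v{gen}: {lanes_needed(500, gen)} lane(s)")
# v1 needs 2 lanes; v2 and v3 need only 1 -- so a board that hangs its
# USB 3.0 controller off a single v1 lane can never deliver full speed.
```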

Considering the total PCI-e lane budget on a consumer board is typically somewhere in the 20 to 40 range, and you need to allocate lanes for stuff like graphics cards and the like, you can see why manufacturers cheat, and hope no one REALLY cares.

To be fair, if you're transferring onto a USB 3.0 thumb drive, you probably wouldn't REALLY notice.

Your capture card will though, and this is why USB 3.0 is basically shooting in the dark as a reliable medium - you have to be SURE your board (be it laptop or desktop) does it right, because so many manufacturers do it wrong. Generally, those who need one know exactly what they're doing, and buy from places that ARE willing to accept returns over this guessing game.

Yes, I've played the guessing game before, and for the record, it sucks (And to complicate matters, we even had a malfunctioning unit to throw into the mix).

Now, if you're told 'You need a thunderbolt port/New model Macbook/pro to use this device with a SSD or other device that can write at 200+MB/s' then lucky you, you don't have the USB 3.0 conundrum. I can also hazard a guess which of the two devices on the market you have too. (They're not that common.)

Why? It's pretty simple - every implementation of Thunderbolt is required to deliver a guaranteed 10Gbps per channel (Thunderbolt 2 bonds two channels for 20Gbps - and for the record, 10Gbps is about 1.25GB, as in Gigabytes, per second). This was in part because Thunderbolt was also designed to feed Apple displays at 1080p, 2k and 4k WITHOUT the use of HDMI. Yep, it can carry HD signals if you know how to do it, and if you remember the math, you'd know how much data needs to go through this port to do so losslessly.

Intel developed the spec (with Apple as the launch partner), and it was kept very rigid - by the time it became available to non-Macs, the design was locked down, so there's no need to pray.

Of course, if you need the write speed, well you still need the write speed to record. Otherwise... compression city folks.

If you have to have a desktop, and you have to open up the device and install a card, you have a PCI-e card.

Now, if the card says it absolutely needs the ability to write fast enough because it doesn't compress, that's what it says. Compression or write speed is YOUR problem, and you can choose to go lossy, or write raw, or try your hand at lossless on the fly. The device will often not do anything other than show the signal.

If it insists it DOESN'T need a particular write speed, then the card itself has a compression chip (or set of chips) forcing lossy compression. Since it's dedicated hardware it can be surprisingly good, but well, the device still needs to do so on the fly.

Now in terms of pricing, devices that compress tend to be relatively cheap (barring a couple of very specialised exceptions) since, well, you're wanting a signal, not a GOOD one. You shouldn't be paying more than about $200 US for a device that forces compression, unless you absolutely know you need it (say, broadcasting with a laptop you know can't handle the compression itself - that sort of device runs about $450 US, but is used by TV stations around the world for snap setups).

Why? Because you can get devices above 200 USD that do BOTH compressed and uncompressed. Unless you're working with a very old computer (as in say Core 2 Duo or older) you probably can do the compressed option just fine.

If you want to capture lossless, you want to read the box, and read the requirements. If in doubt ask, and in the process become friends with your supplier, because he'll realise you're not a gaming moron who he can milk money out of.

If you're willing to play the guessing game with USB 3.0, you can get decent uncompressed USB3.0 devices for what 200-250 USD?

If you have a spare PCI-e slot in your computer that is 4x or bigger (Read your mainboard specs for that one) you can get capture cards for 200ish USD that can do lossless 1080p60 if you want it to.

Now, why do I recommend this as an option?

Because of the fact it's now relatively cheap to get the write speed.

For an idea, a 250 or 256GB SSD can be had at about 150 US, as long as you have a computer that supports SATAIII (at 6Gbps per drive). That's enough to hold roughly 19-25 minutes of 720p60 lossless recording, depending on the pixel format. They are extremely fast, and basically as long as you don't buy some dodgy SSD (once again, be friends with your retailer) you'll do just fine.
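You can sanity-check the recording time against the raw-rate figures from earlier (again, the answer depends on the bytes-per-pixel you assume):

```python
def recording_minutes(capacity_gb, rate_mb_s):
    """Minutes of footage a drive of capacity_gb (decimal GB) holds when
    written at rate_mb_s megabytes per second."""
    return capacity_gb * 1000 / rate_mb_s / 60

print(round(recording_minutes(250, 166), 1))  # 25.1 min at 3 bytes/pixel
print(round(recording_minutes(250, 221), 1))  # 18.9 min at 4 bytes/pixel
```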

You can get bigger ones for more money of course, and if you have the space and the ports to connect the drives you can RAID 0 them up, and basically use them as one super drive.

Congratulations, you now know how easy it is to retrofit virtually any system made in the last 3-4 years into a recording station, without spending a mint on a new PC.

Now this should end Part 1.

What, there's more, and I'll be covering a topic that will drive everyone nuts, because, well it does. How do the inputs work? What the hell is HDCP, and why the hell am I seeing slight rings on my recordings anyway?

Don't go shopping just yet, we'll be right back after I get some sleep. No, really don't, because you might want to understand the next bit BEFORE you commit to this.


Re: Technical notes: How to do video recording, and to do it well.
« Reply #1 on: November 21, 2015, 01:37:54 pm »
Part 2 - Understanding connections, delay, ghosting, and the evil elephant in the room.

Connections: What the hell is half these cables?

Now, you generally have a way of connecting your particular device to your particular display.

With a capture device, it's more or less the same way - You have RF, composite, component, HDMI, DVI, VGA, Display port, etc, etc.
You will also note, if you read the tech specs of a device, that it'll also list its supported resolutions. They're listed like 1080p60 and so on.

To keep things relatively simple, you have to keep in mind what you're trying to capture. A NES doesn't use component for example, and a DVD player may not have HDMI. Unless you want to do a signal conversion (I'll cover that in another post) stick with what you have to.

Now, to really speed things up, I'm going to run a mini cheat sheet, because I'm a little lazy.

RF and composite carry standard-definition interlaced signals - 480i59.94 (NTSC) and 576i50 (PAL) - plus the 240p/288p 'double-strike' modes older consoles used.
Component supports all of the above, and adds 480p59.94, 576p50, 720p24, 720p25, 720p29.97, 720p30, 720p50, 720p59.94, 720p60, 1080i25, 1080i29.97, 1080i30, 1080i59.94, 1080i60, and (on some gear) 1080p24 through 1080p60.*
HDMI supports all of the above AND 2k, 4k and 8k connections, as well as extending sound support beyond stereo (such as Dolby Digital surround, 5.1/7.1 channel sound or other variants).
VGA will support just about anything up to somewhere around 1600x1200 (don't quote me on it), and DVI will support virtually any signal beyond that. DVI's kinda weird; we'll get there.
DisplayPort is convertible to HDMI or DVI, and can do the same resolutions as either.

*Note that there's a whole bunch of OTHER framerates that aren't used very often, and I also probably missed a few somewhere along the line.

The cliff notes of the above cheat sheet is:

- Use the best cables the device AND the item you're playing will support and can get your hands on if you have a choice between RF, Composite, or Component.

- If you have to choose resolution for recording purposes it depends on what the purpose is. For raw footage, honestly, it doesn't matter too much unless you're expecting people to pause.

If you intend to pause the footage for ANY reason (screenshots, splicing video into a mashup, or whatever), pick a progressive option (you'll notice I tag each resolution with an i or a p - p is progressive). Interlacing is easier on the displaying device, but pausing will reveal very strange pictures, ruining the effect.

You'll note a LOT of capture cards will not support 1080p59.94 or 1080p60 or above. The reason is that interlacing (the i) cheats by weaving two fields together to create a single output frame. Load-wise, it means you can get VERY big pictures with no apparent loss in signal quality for those watching the video (for the technically minded, you can instantly turn a 720p60 image into a perceptually convincing 1080i60 one). Just don't pause the thing, or you'll see why it's cheating.
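For the technically minded, here's a toy sketch of the weave, and of why a paused interlaced frame looks wrong. The 'frames' here are made-up strings purely for illustration:

```python
def weave(field_a, field_b):
    """Interleave two fields (lists of scanlines) into one full frame:
    field_a supplies the even lines, field_b the odd lines."""
    frame = []
    for even, odd in zip(field_a, field_b):
        frame.append(even)
        frame.append(odd)
    return frame

# Two fields captured 1/60s apart, while an object (X) moves right:
field_t0 = ["..X..", "..X.."]   # even lines, earlier moment
field_t1 = ["...X.", "...X."]   # odd lines, 1/60s later
for line in weave(field_t0, field_t1):
    print(line)
# The frozen frame shows the object 'combed' across two positions:
# ..X..
# ...X.
# ..X..
# ...X.
```

In motion, your eye blends the two fields and sees one smooth image; paused, the two moments sit on alternating lines and the cheat is exposed.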

Right now, you understand your options and you'll pick your poison (or you might want HDMI as your signal carrier of choice, because it looks so easy), and now you're going to plug the thing straight into your device... and watch the gameplay through your capture device's program or your streaming program.

... Except all your movements are sluggish and you can't do awesome shootemup videos because there's a significant delay on your actions. What the hell?


Delay, splitting, ghosting and the big elephant in the room

We're going to split this into two parts, digital splitting problems, and analog splitting problems.
But you may ask, what's analog, and what's digital, and why the hell didn't you define it earlier?
Answer's simple - the resolutions are enough to give most people a headache, so you probably stopped to digest what I said above before moving on.

It's pretty simple really - anything that's not HDMI, DisplayPort (or Ethernet or a few other weird things) only runs in analog mode.
Anything in that first set is digital by its very nature.

DVI is the magical exception, only because some DVI cables can run in VGA standard OR in HDMI standard. DVI-A is an analog-only DVI cable (using the VGA standard) and DVI-D is digital-only (functionally similar to HDMI, with one important difference).
The fun part is you can get what's known as DVI-I, which can do EITHER with the right converter.

Splitting - Why should I split the signal, and pay for all these extra cables and this black or white box anyway?

Now, the reason you get a delay is pretty simple - each 'end device' you send your data to takes time to get the image onto your screen.

On a TV with all its processing off, it'll probably get the video onto the screen for your reactions within a few milliseconds.

On REALLY expensive TVs that are huge and support really big resolutions, they're designed for cinematic experiences. What this means is that it'll do post processing BEFORE displaying it onto the screen, making it look better (in theory anyway) because the assumption is that an extra 30-100ms isn't going to hurt your enjoyment of whatever you're watching.

Sometimes you can't turn processing off, which can make it poor for gameplaying, cause you keep missing every note in im@s, because you're about 1/5th of a second too late.

Now, in comparison, the capture card has to process the signal, convert it via the driver, and send the data to a program which then displays it for you (all while the PC may be encoding video for streaming, or writing to the hard drive). Needless to say, that takes more time - hence the lag.

Now, you'd ideally want to play off a TV (or monitor) because of the low latency of the screen, and if you're recording or streaming, the capture side can tolerate a second or two of delay, since anyone watching doesn't need your fast reflexes.

So what we do is send the same identical signal to both our TV or monitor and our capture card with as little delay as possible.

The easiest way to split an analog cable into two is to literally just split it in two. You can even do it at home with a knife if you REALLY want to. (You'd rather not though, since you can buy cables that do this without the mess.) Doing the split this way is what's known as an unpowered split.

It functionally works, since you're sending the same signal, just at reduced intensity. To keep things simple, analog video works by encoding the picture in the amplitude and frequency (and phase) of a signal. On composite, a single cable handles everything; on component, one cable (Y) carries the brightness and sync, and the other two carry the colour information. You can get some really funky effects if you plug the cables into the wrong spots, because the colour information will be wrong, but it'll still work.

Of course, doing it this way introduces problems namely the following:

Since you're sending the same electrical signal down TWO lines, and the source device doesn't know it needs to amplify the signal to compensate, you're going to have weaker signals at BOTH ends.

The exact math is something an electrical engineer can tell you, but roughly, the signal strength that arrives down a particular line is 50% minus (the attenuation of the cable per metre, multiplied by the total length of the split cable - note it doesn't matter whether the split itself is at the very beginning or the end of the run, the loss is the same).
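Written out as code, that rough model looks like this. To be clear, this is the back-of-envelope approximation from the text, not real transmission-line math, and the attenuation figure is a made-up example:

```python
def unpowered_split_strength(atten_per_metre, cable_length_m):
    """Rough fraction of the original signal arriving at each end of an
    unpowered split: half the signal, minus loss over the cable run.
    (Back-of-envelope model, not transmission-line theory.)"""
    return 0.5 - atten_per_metre * cable_length_m

# e.g. a cable losing 5% of the signal per metre, over a 4 m run:
print(unpowered_split_strength(0.05, 4))  # 0.3 -> 30% of original strength
```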

Now this can cause problems (if say for example the cable run is above a metre/3 feet) because you now have a signal that's very weak, and the device may not be able to accurately identify the aspects of the signal and approximates instead. It could be very dark, or the colours could be incorrect.

We can fix this problem by getting what's known as a powered splitter.
What this does is receive the incoming signal, then send identical, regenerated signals down the two (or more) lines. They're more expensive, as you can guess.
They solve the first problem fairly well, and the image itself looks more correct...

Except now you look closer at the screen and your recording, and you swear you're starting to see after images on the screen.

This is a second problem that's known as 'ghosting', and it's a real pain to fix. In fact, it's never quite 100% fixable, since you need the stars to align, a LOT.

What causes this phenomenon? Well, basically, the first thing you learn about RCAs (which is what these cables actually are) is that there is no 'plug this end into this side'. Signals travel from one end to the other, no matter which end you start at.

The next thing you will learn (from an engineering textbook most likely, or maybe an OH&S class regarding electricity) is that a voltage sent down a wire that can't be absorbed at the far end will travel back the way it came. I'm simplifying this, but it's known as reflection, and it happens whenever the end of the line is badly matched. You can REALLY see this happen if you decide not to plug one branch of an unpowered split into anything, because the entire signal WILL bounce back the way it came.

So what the hell does this have to do with our video signal?

Pretty easy - what you're actually seeing is a constant after image of what happened previously, slightly shifted. Call it a corruption of the signal actually.

This happens for the reasons below:

1. You're using an unpowered split, and one end isn't plugged into anything at all. (See explanation above)
2. Your end devices aren't absorbing the voltage sent across the devices properly. Basically you're seeing the result of a little excess voltage that bounced around for a bit, corrupting the signal.

Fixing 1 is easy - don't leave any end unplugged.
Fixing 2 requires a lot of work and is WAY beyond the scope of this document. You'll need good quality everything (cables that work right, a capture device that doesn't cheap out, a TV that doesn't cheap out, a splitter that does things perfectly and a console that isn't even slightly finicky) and/or an electrical engineer to measure (at least precisely enough) the voltage sent down the lines, figure out exactly how much the devices at the end actually absorb, and tailor the whole thing to within something like 99% accuracy.

Or just hope you hit the magic combination that reduces the reflections to something not downright terrible.

So given the above, you're probably saying 'Well, screw this, I'm going to go digital!'
You'd be right, except...

Now, for a quick explanation of how digital works: basically, everything's converted to binary (0/1), and each bit sent down the line is determined by whether the voltage is below an established threshold, or above it. There are complications (such as encoding), but for the most part the device doesn't have to figure out whether the intensity is at 200 or 198 - just which side of the threshold the voltage is on.
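A minimal sketch of the idea (illustrative voltages only - real HDMI uses TMDS differential signalling, not a single threshold like this):

```python
def decode_bits(voltages, threshold=1.0):
    """Decide each sample as a 1 or 0 by which side of the threshold it's on."""
    return [1 if v > threshold else 0 for v in voltages]

clean = [2.0, 0.0, 2.0, 2.0, 0.0]
noisy = [1.8, 0.3, 2.2, 1.7, 0.2]   # same signal with noise added
print(decode_bits(clean))  # [1, 0, 1, 1, 0]
print(decode_bits(noisy))  # [1, 0, 1, 1, 0] -- identical despite the noise
```

The same noise on an analog line would shift the picture's brightness or colour directly; on the digital line it decodes to exactly the same bits, until the noise gets big enough to cross the threshold - at which point things fail hard.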

Bear in mind that it still has to determine the baseline, so that's why HDMI cables (and all other digital cables) have known maximum transmission lengths (because you can't outrun attenuation), but they can be easily boosted by a repeater which, as you'd expect, repeats the signal.

Digital signals are generally more reliable at delivering what you're supposed to be seeing (unlike analog, where it's a lot easier to corrupt the signal), but beyond a certain level of corruption, the signal tends to crash and burn entirely rather than degrade gracefully.

So clearly your best option is to go digital (HDMI) and get this nice reliability. You're not trying to record to a device in the next suburb, so just get a HDMI repeater, split the signal to both devices and you're good... right?


HDCP - Why AV experts for the last 10 years have been wishing that they could nuke Hollywood off the face of the planet.

You might remember that I mentioned that DVI could support digital signals. You'd be quite correct, because DVI-D is, electrically, essentially HDMI 1.0 video (the two are compatible over single-link). Yes, this means if your graphics card supports it, you can in fact send sound down a DVI cable, and a passive DVI-D <-> HDMI adaptor just works.

There's a catch though. All versions of HDMI 1.1 up implemented (by insistence of Hollywood) what's known as HDCP.

For HDCP 1.0 -1.4

Any device that is 'permitted' to decode HDCP will have a key to decrypt the signal. Any device that is allowed to send to an end device (like a TV) will have another set of keys.

Before the device will send any of the encrypted data, the end device must complete the handshake by sending back the XOR (I think) of its key combined with the challenge. If the answer matches, the device will then send on the encrypted data, which in turn will be viewable as a HDMI 1.0 signal once the decryption key is applied.

Now, the implementation has had issues, and the fact I can tell you all of this means that someone figured out how the process works. Then someone released what was supposedly the master key for HDCP up to 1.4, which can be used to create a decryption key with simple algebra (well, as simple as these things get), because we know what the RESULT of the key combined with a challenge must be (and fetching a challenge key is as simple as grabbing any BR device with HDCP 1.0 to 1.4 on it).
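The challenge-response idea can be sketched as a toy XOR handshake. To be very clear, this is NOT the real HDCP algorithm (HDCP 1.x key exchange is based on Blom's scheme and is far more involved); every key and number here is made up for illustration:

```python
def respond(device_key: int, challenge: int) -> int:
    """Toy sink response: XOR the (hypothetical) device key with the challenge."""
    return device_key ^ challenge

def handshake_ok(expected_key: int, challenge: int, response: int) -> bool:
    """Toy source-side check of the sink's response."""
    return response == (expected_key ^ challenge)

KEY = 0xCAFE                      # made-up key for illustration
challenge = 0x1234
resp = respond(KEY, challenge)
print(handshake_ok(KEY, challenge, resp))      # True  -> encrypted data flows
print(handshake_ok(KEY, challenge, resp ^ 1))  # False -> handshake failure

# And the weakness the 'simple algebra' point above hints at: anyone who
# observes both challenge and response can recover the key by XORing them.
print(resp ^ challenge == KEY)                 # True
```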

Of course, any device that is deemed to NOT be allowed to decrypt was never granted a license. Like oh, every single capture device out there, to prevent you from making pirate copies of Titanic in 1080p 3D sidepacking before shooting yourself after listening to Celine Dion.

Without said license, and with the backing of the DMCA (which provides for stiff penalties for anyone circumventing encryption without an exemption), that's the regime that reigned for the X360, PS3 and I believe the WiiU.

There are items that can forcibly decrypt HDCP. Since I'm in Australia, and I had legal reason to, I actually had a fully fledged HDCP breaker up to HDCP 1.4 and I think 2.0. (Never really tested that though, before the thing went up in flames). I'm pretty sure if someone REALLY wanted to make a fuss, I could be arrested upon entry to the US over it, but frankly, there's a million other things I could be done in for, so reaching for 'Broken encryption' is probably overkill.

Granted, I had to ask for one of these to be made by HAND, because this is the sort of thing you're not supposed to do, and to be honest, if you really wanted to pirate Titanic in glorious 1080p24 (before finding a cliff to jump off) it's actually a lot faster and easier to just rip the files off the disc with a BR drive in a PC, without ever hitting HDCP. You really have to know you need a HDCP breaker, and not something else, before you go looking for one.

Since all the above is true... there were a few developments built around HDCP. If I told you the latest revision of HDCP is 2.3, with 2.4 in development, you might get an idea what happened.

- They made a new master key and new resulting keys for new items, meaning any device that wants to read the signal needed to have a new key.

- They also made a new encryption algorithm and interaction which hasn't been figured out yet.

- They also (finally) decided to add in the ability to remotely revoke keys. This feature isn't entirely new (it's been around since 1.4, and I think was introduced in 1.2), but previously, getting a 'naughty list' out relied on sticking the bad key list onto individual BRs for distribution. Since 2.0+ devices can be connected for interactivity (due to revisions in HDMI - I think we're finally at HDMI 2.0 now), getting a key (and the resulting end device) rendered nonfunctional is so much easier.

- They made all devices on a chain identify themselves AND force all their end targets into HDCP handshaking. I'm not entirely sure how this part works, but previously (on 1.4), if a device failed the handshake, it simply would not be able to decrypt the data (since it probably didn't have the ability to). On 2.0 and above, any device that fails to decrypt will make the playing device (say, a very recent BR player) assume the chain has been compromised and display a HDCP failure screen.

Challenges are also issued continually, as opposed to once, meaning that failing to answer a challenge for whatever reason will fail you halfway through a movie.

Of course, this means you can fail because of a faulty cable, and this is one more reason AV experts worldwide want to shoot Hollywood.


'But I can buy this one device that magically takes HDCP off without doing any of that fancy stuff you said! I swear it works!'

That is true - You CAN get items that remove HDCP by accident. There's two ways to do it actually.

a) Trick the broadcasting device into thinking the handshake passed, then fail to encrypt the signal by falling back to HDMI 1.0, without triggering a HDCP failure event.
b) Trick the broadcasting device into thinking the device in question doesn't actually support HDMI 1.1 or above, forcing a fallback to HDMI 1.0. This is done at the split, with the splitter reporting itself as the end device instead of passing the challenge on.

I won't go into detail about how it works. (I'd have to get my hands on one and examine the device's reporting pattern (how it handles EDID and other things) to determine which of the two it is.)

In theory, though, the PS3 DOES support DVI-D even though you don't actually see a DVI port (but now you know what HDMI 1.0 actually IS...), so if you trick the PS3 into letting you use that for compatibility purposes, it'll work. (I think this is a knock-on effect of the OtherOS feature that used to be present on the PS3.)

I don't know anyone who's tested it with a PS4, though, but preliminary reports say that particular trick won't work anymore if the PS4 deems certain content worthy of HDCP protection. From what I can tell, the PS4 (and Xbox One) DOES in fact use the key revocation feature from 1.4, so we know the HDCP used in the PS4 and Xbox One is at least 1.4.

I do have it on very good authority (namely someone I've worked with who handles a PS4) that the PS4 is an HDCP 1.4 or HDCP 2.0 device.

Needless to say, if you're relying on a device to do your capturing because something works NOT how it's supposed to, don't be surprised when it stops working because it got patched.

Me personally? I like having things work as designed, not because of an exploit. Of course there are costs involved, but I don't like surprises either, and I know how to handle things when they don't go according to plan.


Now, I hope things don't get much harder than they are now, and the signs that Microsoft and Sony are moving towards something resembling sane HDCP application are helpful (for gaming at least). Although, like I said, it's very easy to take that away, since HDCP has been updated to effectively patch itself now.

With all of this, you might have a solid footing into understanding the hardware side of things, to ensure a good foundation (for your level of quality of course) to record the best video you can manage.

If you're an editor, you should know the basic rule of editing - make your material using the finest source you have, then encode to suit.

Or if you don't know that, you'd probably understand this - Garbage in, Garbage out.
Games are streamed at
No focus, any platform, suggestions welcome

Currently accepting Platinum Stars requests: - My technical notes on good quality recording.


Re: Technical notes: How to do video recording, and to do it well.
« Reply #2 on: November 22, 2015, 06:54:20 pm »
Part 3 - Wait, there's more? Conversions, workflow and designs.

There's always more (and surprisingly, the previous post had to be edited down to fit the character limit), and here we'll cover a few things you probably need to consider.

You might be splitting (if you don't want to play off your capture card) and find that the capture card can't handle the same resolution as the display device you're using. Splitting requires sending the exact same signal to both outputs - a splitter duplicates, it doesn't convert.

A common example would be a lot of PS3 RPGs, which suffer greatly readability-wise if you try to play them at 480i or 576i (NTSC and PAL respectively), because the text can get borderline unreadable - but your capture card doesn't support 720p or greater.

You might find that you need (or just want) to play a game at 1080p59.94, but your capture card doesn't support 1080p59.94. This is a very common scenario, because you need a certain level of spec (see the bandwidth requirements) for this one.

You may actually have a source device that's too backward for the device you're trying to capture with. An example: you find out that you need component cables to capture a PSP Go or a PSP 2000/3000, but your card only supports HDMI. Or you find out that the device only outputs 480p, and... well, you find out that capture devices that actually accept 480p are thin on the ground.

There are ways around this problem of course. What we can do is forcibly convert the signal from one format to another, even from analog to digital, or from digital to analog.

Conversions - How to handle things when you can't just plug X into Y, and how do you do so effectively?

The first thing I need to say before anything else, because there will be an idiot or two out there who doesn't understand how signal carriage works:

"If you do a format change, or a carrier change, you MUST get a converter. They are by their very definition powered, and are effectively mini capture cards."

This means:

If you want to move between an analog signal (say, component) and a digital signal (say, HDMI), you will need a device that takes the analog signal, does a whole bunch of processing, then outputs an HDMI signal at the other end.

If you want to do a format change (eg, turn a 480i30 signal into a 720p59.94 one), you will need the same kind of thing (a device that takes the signal and spits out a new one). This is known as upscaling or downscaling.
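For a rough picture of what a scaler actually does (and why output quality varies), here's a minimal nearest-neighbour upscaler over a toy frame stored as a list of rows. This is a deliberately crude sketch; real scalers use better interpolation (bilinear, bicubic, motion-adaptive deinterlacing), which is exactly where the extra processing and delay come from.

```python
def upscale_nearest(frame, out_w, out_h):
    """Nearest-neighbour upscale: each output pixel copies the closest input pixel."""
    in_h, in_w = len(frame), len(frame[0])
    return [
        [frame[y * in_h // out_h][x * in_w // out_w] for x in range(out_w)]
        for y in range(out_h)
    ]

# A 2x2 toy "frame" scaled to 4x4: each source pixel becomes a 2x2 block.
small = [[1, 2],
         [3, 4]]
big = upscale_nearest(small, 4, 4)
```

Note that 480 -> 720 is a 1.5x ratio, so a real scaler can't just duplicate pixels cleanly the way this 2x example does - it has to interpolate, and how well it does that is a big part of what separates cheap boxes from expensive ones.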


Word of warning before we continue - Cables that seem to break the above rule.

If you EVER get told 'Oh, you can just get this cable to convert component (RGB usually, instead of the YUV that, say, a PS3 uses) to HDMI' (which is a pretty common claim), it SEEMS like that breaks the rule stated above. There's an answer to this - it actually doesn't.

Why? Because the handful of devices it works on do a tricky little thing to make it work.

In fact, the signal sent through the 'HDMI' is actually analog, because of DVI. Namely DVI-I, with a bit of engineering on the device so it knows how to handle where the electrical signals sit on the HDMI output. The cable WON'T work unless the device knows how to handle this signal.


Stupid warnings out of the way, now we need to understand where to put these devices.

The first thing you need to remember is this: any and ALL conversions you make take TIME. A converter is basically a capture card that passes the signal on.
The second thing you need to remember is this: upscaling and downscaling are not perfect, so the quality of the output will vary. This also takes more time.

If your converter or scaler is designed for entertainment (you know, like mixers and the like), keep in mind that it's designed for cinematic experiences, and it will add SIGNIFICANT delay to the signal. That doesn't matter when you're watching a movie, but it may matter if you're playing Smash Brothers. Granted, if you're lucky enough to have one around, you can deploy it and save yourself some money, using it to get the recorded video into the right shape and size, and probably make it look better too.

This means you want to put any conversions in places where the extra delay will not hurt, and convert only when it makes sense.
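One way to reason about placement is to treat every hop as a delay budget and count only the hops that sit between the console and the display you actually play on. The millisecond figures below are invented placeholders, not measurements of any real device; the point is the comparison.

```python
# Sum the delay only over the path the player actually sees.
# All millisecond figures here are made-up placeholders for illustration.
DELAYS_MS = {"splitter": 1, "scaler": 50}

def path_delay(devices, delays=DELAYS_MS):
    """Total added latency (ms) along one signal path."""
    return sum(delays[d] for d in devices)

# Scaler before the split: the player eats its delay too.
before_split = path_delay(["scaler", "splitter"])  # play path: 51 ms
# Scaler after the split, on the capture branch only: play path stays fast.
after_split = path_delay(["splitter"])             # play path: 1 ms
```

Same hardware, very different play-path delay: put the scaler on the capture branch (after the split) and the player never sees its latency.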

Putting the theory into practice - Let's build the right recording scenarios for our requirements.

Example 1: I want to record the Xbox 360 game Deathsmiles II to upload my play videos to YouTube, and I need footage to study to get better. I can afford a full outlay, and I use a desktop PC running Windows for my work that can play games released in the last two years at high graphics settings.

Currently the connection is: Xbox 360 HDMI -> HDTV

For those who don't know, Deathsmiles II is a horizontal scroller by Cave that runs natively at 720p60. Stages run at about 15 minutes each.


This is surprisingly a fairly straightforward scenario. The game runs at 720p60, which most modern capture devices support, and the requirements aren't too demanding. Since the player wants to STUDY the gameplay, and shooters require pixel-perfect accuracy, lossy compression may not be a good idea: it can blur the lines, and misrepresenting the firing fields just means trouble.
The player can re-encode the footage for upload to YouTube themselves later.

The Xbox 360 does not apply HDCP on its end, so you don't need an HDCP breaker, nor do you need to worry about signal quality issues or any sort of conversion. This is the simplest HDMI network you'll ever build.


You will need to purchase for the signal network: 2x HDMI cables and any decent 1-in, 2-out HDMI splitter. Call it '40-50 US dollars for the set, somewhere in that ballpark'.

Signals network:

Xbox 360 HDMI-> HDMI 2x1 splitter
                           HDMI port 1 -> HDTV
                           HDMI port 2 -> Capture device.

Capture device:
You will need a computer that has:

1. SATA III capability, and the ability to RAID 0 (ideally)
2. Spare HDD bays to put in at least 1 (probably 2) SSDs
3. Ideally a spare PCI-e 4x slot or better that you can actually fit a card into (sometimes you have spares, but GPUs are known to take up two slots, physically covering one of the ones you have spare)
If you don't have that, you can fall back to a USB 3.0 device, but bear in mind the risk involved.

Space-wise, you need at least one 256GB SSD at a minimum. Ideally you'd want two, and considering I've seen them for about 100-130 each, it's not an expensive purchase. You can start with one, and when you find the money, drop it on another, then RAID 0 (or JBOD, if your board supports it) the two drives together.
Alternatively, you could just get a single bigger drive if it's fast enough, which also leaves more space if/when you decide to expand things. I'll outline some options below.

Actual suggested products: (All items are at RRP, so keep an eye out and go shopping for discounts/bargains.)

- SSDs that meet the speed cap. As long as your combination clears 350MB/s you're fine, although interestingly enough, you'd be able to record 1080p60 in short bursts if you can get a collective total of 500MB/s or better.

eg: 2x Samsung 850 Pro 256GB drives are 130 each at the moment, but interestingly, the 512GB variant is 219. (Before you ask, the 1TB variant is 429, so doubling up again isn't that cost-effective as of writing.) You could just grab a single 512, then grab another later if you want more record time.
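If you want to sanity-check throughput figures like these yourself, the uncompressed data rate is just width x height x bytes-per-pixel x frame rate. The sketch below assumes 8-bit YUV 4:2:2 capture (2 bytes per pixel), a common uncompressed capture format; other pixel formats will shift the numbers.

```python
def uncompressed_mb_per_s(width, height, fps, bytes_per_pixel=2):
    """Raw video data rate in MB/s (8-bit YUV 4:2:2 assumed by default)."""
    return width * height * bytes_per_pixel * fps / 1e6

r720p60 = uncompressed_mb_per_s(1280, 720, 60)    # ~110 MB/s
r1080p30 = uncompressed_mb_per_s(1920, 1080, 30)  # ~124 MB/s
r1080p60 = uncompressed_mb_per_s(1920, 1080, 60)  # ~249 MB/s
```

On those assumptions, a 350MB/s array clears 1080p60 with headroom, which is consistent with the caps quoted above; 10-bit capture formats push the numbers roughly a quarter to a third higher.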

Capture device:
- A Blackmagic Intensity 4K (199 US) or a Decklink 4K (499 US, though this might be a little overkill)
- If you want a USB 3.0 device (and you are certain your USB 3.0 ports are up to scratch), an AVerMedia CV710 ExtremeCap U3 (which is 180 US)
- If you're running a Mac or otherwise have a Thunderbolt port, a Blackmagic Intensity Shuttle will work, as that does 1080p60. (Remember how I said there are only two Thunderbolt devices? The only OTHER Thunderbolt device, which I might add is awesome and does 2K on top of 1080p60 etc., is made by AJA, and it's about 1500 US. Keep that in mind before you start accusing me of bias towards one company.)

Rationale on the capture devices:

Personally, I'd look into something like a Blackmagic Intensity 4K as a starting block IF you have the PCI-e x4 slot to work with. If you REALLY have the money, I would strongly advise splurging on the Decklink 4K, but you ARE looking at spending about another 300 dollars.

I honestly can't (from a professional standpoint) recommend anything else at the 200ish US price point, for one reason: most everything else either has built-in encoding (which is a major problem) or doesn't support 1080p60. The devices that are easier to use are only 20-30 dollars cheaper, if that, and missing 1080p50/59.94/60 support is a really big deal.

Other companies DO offer 1080p60 uncompressed, if you don't mind blowing 600-700 dollars (ie, where the Decklink sits, and where actual competition starts). AVerMedia offers uncompressed 1080p60 only through their USB 3.0 line (why, I don't know), and that's... well, about it.

We will cover when skipping pixel perfect is a practical option later.

Adding in the extension is pretty cheap. (It's 30 dollars, and we're spending at least 400 here; if you're scrimping that badly, you might want to reconsider recording altogether until you have the 30 dollars, assuming uncompressed is a requirement.)

All in cost:
Minimum - 130 + 50 + 189. Under $400 US, with expandability via picking up more SSDs to add to your array. As a side effect, this setup is ALSO good for about 15 minutes of uncompressed 1080p60 recording on 256GB.
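That 15-minute figure is easy to sanity-check: divide usable capacity by the raw data rate. The sketch below again assumes 8-bit YUV 4:2:2 capture (2 bytes per pixel) and decimal gigabytes; real-world usable capacity runs a little lower than the sticker says.

```python
def record_minutes(capacity_gb, width, height, fps, bytes_per_pixel=2):
    """Minutes of uncompressed recording that fit in capacity_gb (decimal GB)."""
    rate_bytes_per_s = width * height * bytes_per_pixel * fps
    return capacity_gb * 1e9 / rate_bytes_per_s / 60

minutes_1080p60 = record_minutes(256, 1920, 1080, 60)  # ~17 minutes on paper
```

After filesystem overhead and a safety margin, "about 15 minutes" is the honest number.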

When I said the cost of entry these days was low, I really wasn't kidding.

Example 2: I want to record im@s performances from im@s2 and OfA, because Setsuna makes this stuff look so easy and I want to be like her (said no one), and frankly, I could do a better job than she can, cause this is so easy, and she's a lazy bitch!
(Let it be said that I'm more than willing to poke fun at myself.)

Current setup:


The capture device and drive hardware I won't cover - It's identical.

Now the fun part is... You're now dealing with HDCP.

So you have a few options.

Option 1:

Change to the PS3's component output. HDCP is an HDMI-only thing, remember?

PS3 Component -> Component splitter
Component Splitter 1 -> HDTV
Component Splitter 2 -> Capture device.

This works if you're using a capture card with component input. It's the easiest way to do it. And once you realise that the difference between a component splitter and a composite splitter is just the number of ports on the board, you can find a composite splitter that does a really good job and just get two of them! Just hope ghosting isn't a thing.

But what if you're using something that doesn't HAVE component ports? (The AVerMedia ExtremeCap U3 comes to mind.)

Then you'll need to introduce a component to HDMI converter.

In im@s' case, you don't really need to worry about input lag (unless you're doing a gameplay video), so you can just do this (using Option 1 as a base):

PS3 component -> Component to HDMI converter

HDMI -> HDMI splitter etc.

If you're playing something that DOES require more responsive controls, you instead want to insert the component-to-HDMI converter at the LAST step (using Option 1 as a base):


Component Splitter 2 -> Component to HDMI Converter -> Capture device.

Since recording (or streaming) doesn't require low latency, we don't care whether there's a second of delay or not.
The converter in question is around 30-60 dollars, but quality may vary.

If you have some way to either a) trick the PS3 into going into DVI-D + audio (ie HDMI 1.0) mode, do that instead, or b) use an HDCP breaker. For legal reasons, I cannot discuss the breaker in the United States of America.

Tricking the PS3 into DVI-D mode is fairly simple, though it does require a fair bit of trial and error: you need to find a splitter that reports itself as an HDMI 1.0-only compatible device (like a DVI-D splitter) and trick the PS3 into falling back to that.

Example 3.

I'm a Final Fantasy nut, and I want to do long plays of my PS2 games, starting with a perfect run of FFVII using the PS2's backward compatibility, then moving up towards FF12. I intend to do 10-hour sessions at a time, before editing them into let's plays for YouTube.

Well, here is a case where it's simply impractical to record lossless. (You'd need some absurdly large array anyway!)
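How absurd? Multiply the raw rate by the session length. The figures below assume the PS2 signal is captured either at 480p60 or after upscaling to 720p60, both as 8-bit YUV 4:2:2 (2 bytes per pixel); these are illustrative assumptions, not a statement about any particular card.

```python
def session_gb(width, height, fps, hours, bytes_per_pixel=2):
    """Decimal GB of uncompressed video for one recording session."""
    return width * height * bytes_per_pixel * fps * hours * 3600 / 1e9

gb_480p = session_gb(720, 480, 60, 10)    # ~1,493 GB for a 10-hour 480p60 session
gb_720p = session_gb(1280, 720, 60, 10)   # ~3,981 GB if you upscale to 720p60 first
```

So a single 10-hour session runs from roughly 1.5TB to 4TB uncompressed. That's the "absurdly large array" in numbers.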

Bear in mind that all the lossless options (the AVerMedia ExtremeCap U3 and the BM cards) can just tell the CPU to compress on the fly anyway, so if you want the flexibility, you might as well stick with them.

If you DON'T...

Other noteworthy capture devices in the 150-200 range:

AVerMedia C985 - Compressed (1080p30 footage), but it has an onboard encoder on the card itself. The upper cap is 60,000 kbps, which is pretty good. It's first-pass only, but in a lot of cases, frankly, most people won't notice unless you're sending footage to someone like me (who will just compare it to a raw version) or to a 60fps-or-higher master-racer.

AVerMedia GC550 Live Gamer Extreme - I'm guessing this is the newer version of the U3. It's about the same price, too.

Sadly, at these price points, I really can't recommend anything else. The alternatives are either too similar (the Hauppauge) or inferior feature- or performance-wise (the Elgato) compared to other products on the market.

If you want to suggest other devices, I'm all ears, since considering the entry price points, going lossless and telling the CPU to do the encoding instead is a much more practical option.

Now, if you don't want editing problems galore, you REALLY should be converting the signal you're capturing into progressive, since from memory most PS2 games run at 480i or 576i.

So ideally, you want something that can rescale and do a signal conversion.

If you're NOT particularly picky, you could pick up a really cheap component converter with a forced 720p50-60 output for about 70 dollars. It's probably not the best, but it's basic and does the job.

If you want something better, you'd ideally get a proper scaler with the ability to set the output you want, but those are in the 200-300 range.

Apart from that, stick this conversion in as the LAST step, and only on the capture device's path, unless the delay is low enough that it doesn't matter and you really want a mostly digital path.


This should cover some of the more common requirements for capture. There's still more (as always) where we talk about alternative setups, building a PC from scratch, macs and what happens when you can get good hardware on the cheap.


Re: Technical notes: How to do video recording, and to do it well.
« Reply #3 on: March 09, 2016, 10:36:20 am »
Part 4 - Muhahahaha! ... Or what happens when you get given a list of requirements and a flexible budget, as well as more reasonable setups for those without bundles of cash burning a hole in their pocket.

This is late, but someone recently hit me up for advice on how to build a PC, which reminded me that I had to finish this up, and in the process I got the opportunity to finish my research.


Building a fail-proof system - I wanna build a dedicated PC for broadcast and/or recording, but where do I start, or hell, why should I?

When building a workflow, you have to consider your requirements and expected work environment.

Or in plain English - Just what the hell are you using this for and what do you intend to do with it?

If your goal is to just stream at low quality, then any old thing will do.

If you want to record at lossless 720p59.94 then upload somewhere, you'll need to ensure your setup allows for that.
If you want perfect 1080p59.94 or 60, or maybe 2K30 3D (side-by-side), you'll need the requisite write speeds and processing power to handle all of that.
If you want to be able to broadcast and record a 4K signal while playing without the threat of lag spikes due to processor load, you'll need to figure out how to handle that.
If you want to process lots of video without locking your PC up for days at a time, you'll need to account for that too.

Solution 1: Sufficiently powered PC with capture device attached.

This is the easiest solution, and the cheapest. Apart from the choice of capture device, any extra requirements (HDCP breaking) and supporting drives (assuming lossless), there should be no issues if you're capturing a console. However, there are a few drawbacks.

- If you're trying to record the PC you're on, the encoding and recording are going to significantly impact your PC's performance. There's a reason why so many PC streams of upper-end titles suffer so much: most of the streamers end up overtaxing the CPU, and streaming is particularly strenuous at higher resolutions.

- If you're encoding video, remember that the encoding process WILL chew up stupid amounts of CPU time and do a lot of IO on the drive(s) handling the process. So if you want to play a taxing title while encoding, well... be prepared to suffer performance-wise, and you run the risk of corrupting the encodes in the process. Heck, I've had to redo some im@s recordings because I was once doing text editing in Notepad++, and that managed to spike the CPU at an inopportune time. You have been warned.

Solution 2 - build a secondary PC to handle the above issues.

The benefits of building a secondary PC are obvious: the work runs on a dedicated machine, and as long as that machine is configured correctly, it'll do what you want it to do. You don't have to worry about it doing anything OTHER than what you want, so there's less chance of a hiccup.

The catch is, it'll cost money to do so. (Well, duh.) The good news is that all the previous notes (ie card selection, supporting drives) transfer over to the new machine, so it's not like you HAVE to start from scratch. Modularity was part of the design: if you want to go further, there are costs, but you do NOT have to scrap your previous investments until you're ready to move on to higher quality equipment and are prepared to make the older gear redundant.

If you want to invest money into building a dedicated solution, then read on.


Understanding encoding - What components for this PC matter the most for encoding and streaming anyway?

The first thing you HAVE to remember about encoding is that the CPU being used really does matter.

In fact, for streaming, about 90% of the load lands on your CPU, because there just isn't enough time to hand the work off to anything else. Besides, most graphics cards are NOT optimised for this sort of fast number crunching. (And to add insult to injury, technically some cards COULD do it... but most streaming programs and codecs don't use even a small fraction of that power, because no broadcasting software actually EXPECTS a specialised GPU to show up in the first place!)

For encoding work (ie recording the raw absolute -> encoding for distribution, eg YouTube), a GPU can pick up more of the slack, but unless you own a specialised graphics card, the contribution is ~5-10% of the overall workload. (An Nvidia Quadro line card would get you much more, but of course, they also cost an arm and a leg (as in 1k+ for the good models).)

You won't use all that much RAM for streaming. (Think about it: you're encoding and then punting out a stream of roughly 1-10 Mbit/s, which is barely more than 1 MB/s of actual data. As you can guess, the overheads are tiny compared to the amount of RAM you can get cheaply.) For encoding you can get good benefits from more RAM, but you'll just end up caching things anyway, unless for some reason the entire clip you're trying to encode fits into RAM (which gets less likely the higher the resolution is).
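The bits-versus-bytes conversion trips people up constantly, so here's the arithmetic spelled out: divide megabits per second by 8 to get megabytes per second.

```python
def mbit_to_mb_per_s(mbit_per_s):
    """Convert a stream bitrate in Mbit/s to MB/s (8 bits per byte)."""
    return mbit_per_s / 8

low = mbit_to_mb_per_s(1)    # 0.125 MB/s
high = mbit_to_mb_per_s(10)  # 1.25 MB/s
```

Even a 10 Mbit/s stream is only 1.25 MB/s of actual data, which is why RAM is nowhere near the bottleneck for streaming.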

So for all you PC enthusiasts out there who are smart enough to build PCs on your own, your expenditure emphasis is simple.

Once you clear your recording requirements (namely your method of capture and required supporting hardware): CPU over all else. You get a GPU to lessen the load of displaying the screen, and of course, if you're bored, you can convert the PC into a gaming PC by swapping in a decent (or good) graphics card, but I'm assuming you have a budget here.

Also, be warned: overclocking WILL bite you in the backside more often than not. Overclock conservatively, or not at all, because errors WILL ruin your day (you'll have to restart everything from scratch), even when they're not fatal.


Let's play - Building a PC based off actual specifications.

If you want to play along, I'll be using Newegg US prices, as of the 9th of March 2016. I'm making ZERO attempt to find good pricing, and brand selection is mostly random, from personal experience. If you know of better equipment, by all means, use it.

Entry level - i7 4th gen system

Intel i7 4790 - $310 - CPU

GIGABYTE G1 Gaming GA-Z97X - $165 - Mainboard. Options are surprisingly limited due to phasing out of 4th gen boards.

Corsair Vengeance Blue 8GB (2x4GB) DDR3 - $150

Graphics card - Seriously? Any old thing. Hell, use this as an excuse to upgrade your existing graphics card. You're only adding the card to lessen the load on the CPU by not using the onboard graphics. $0-75
If you want to have this serve as a half decent gaming PC, throw in a GTX 750 Ti at absolute most, and you'll get decent to good mileage. They're in the $120 range.

Samsung 850 EVO 250GB - $120 - OS drive. Note: if you have a brand preference and/or cost requirements, the rule is just 'you need something to run the OS off'. You could even use a mechanical.

Additional drives for lossless - Read the earlier parts. Namely 'if cost restricted but want raw record, SSDs in RAID. If you can afford it, SAS. If you're recording lossy, a decent mechanical.' I'll include example modulars (with the capture set) in a separate section, and you just tack it on top.

Capture device - Will be included as a modular.

750W Corsair CX750M - $130 - Power supply. There's a lot of theory in this, but it amounts to: 'Your base load says 600W is enough. When you're firing on all cylinders, uh, you'll need that extra 150W, particularly if you intend on some of the better setups and/or a decent-to-good graphics card.'

CPU fan+thermal paste - $50-100? Pick the one you like I suppose. You may need this because not all CPUs come with stock fans, and you need a fan of SOME kind at least. Just make sure it fits.

Case - ATX Mid or Full case (ideally full) - 100-200 US

Shipping - 50 US

Assembly - Your problem. Learn to do it yourself or find someone who can.

All in (assuming my math is right): ~$1,100 US.
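If you want to check the sum yourself, here it is with the minimum end of every range above (and the $0 reuse-your-own graphics card option):

```python
# Minimum-end prices from the parts list above (USD).
parts = {
    "cpu_i7_4790": 310,
    "mainboard": 165,
    "ram_8gb": 150,
    "gpu": 0,          # reuse whatever you already have
    "os_ssd": 120,
    "psu": 130,
    "cpu_fan": 50,
    "case": 100,
    "shipping": 50,
}
total = sum(parts.values())  # 1075 -- call it ~1100 with the odd cable
```

Picking the upper end of the fan and case ranges, or adding a GTX 750 Ti, pushes you well past 1,200 before you've bought a capture device.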

Sound expensive? Well... wait until you see what the 6th gen i7 version will cost you.


6th gen i7 setup.

CPU - i7 6700K - $379

Mainboard - ASUS Z170 - $155 (! I did warn you how odd this is)

RAM - G.SKILL Ripjaws V Series 16GB (2 x 8GB) 288-Pin DDR4 - $93 (! I wasn't kidding the first time)

All other notes the same.

Cost - ~1100 US.

... See what I mean?


Adjustment of the above only for a very peculiar setup (This gets its own special mention):

Thunderbolt port enabled Mainboard for an i7 6th gen - GIGABYTE G1 Gaming GA-Z170X-Gaming G1 (rev. 1.0) LGA 1151 Intel Z170 - $480 US. (Basically +320 over the other i7 6th gen setup.)



Lossy record:
1x Mechanical drive (anything from 100US+ depending on the size. Please don't use Green drives. Really.)
1x Capture device (AVerMedia GC550, with CPU to handle encoding on the fly, 170ish?)

Lossless record 1080p60 @60m:
2x 512GB SSDs (Samsung 850PRO @ 220 each)
1x capture device (AVerMedia GC550 running lossless (170ish), or a Blackmagic Decklink Studio 4K ($600; note: 2K + 4K + 3D, but as a specialist card it requires knowledge to work well))

And finally...

The record anything you damn well please in the future with ease option - REQUIRES THUNDERBOLT:

2 or more 512GB SSDs or higher (1TBs maybe, @ 440 each)

One Ultrastudio 4K - 995 US

The 4K Extreme model is a little overkill, since we'll be using the PC to do the writing and encoding at our end anyway.

The main advantage of this is that it's a plug-and-play solution that is self-adjusting, can up/downscale on command, does most of the processing itself, and handles most future formats. Including, say, a PS4's 3D signal, if you wanted to record Project Diva or... other games with 3D support. (I said nothing. Particularly about Plat- Yeah, nothing.)

All you need is a Thunderbolt connection and a PC that can keep up with the writes. And of course, you can take the unit to any place with a Thunderbolt connection (like, say, a recent Mac Pro or another PC with Thunderbolt) and work there, without having to open up the machine to install a card.

Of course, this IS sort of overkill, and it essentially gives you full control over virtually any modern signal (and I could go on with some very inappropriate mental comparisons). But considering that this level of entry used to cost 2-4k US AND came with significant configuration issues (namely PCI-e laning nightmares), it's pretty much one of those things you could really use, and the cost of entry is a lot lower than it used to be.

... As you can guess, I really, really want one. (I could make all SORTS of inappropriate mental images concerning this device.)


Ahem. Well, that should cover everything you'll probably ever need to know about how to do recording properly, at least from a hardware perspective.

If there are any further questions, I'll field them, mostly because I probably forgot something here and there.