Sunday, January 8, 2017

YouTube Playthrough and Demonstration Series

This Christmas, I got a capture device: an I-O Data GV-USB2.  It accepts composite or S-Video input and has stereo audio inputs.  The manual is in Japanese, but the drivers are in English.

One of the reasons I acquired this device is that I found a disturbing lack of video game footage captured from real hardware on YouTube.  While there are plenty of playthroughs and longplays of various games, many of them come from emulators.  Footage captured directly from consoles tends to be older and is often reduced to 30 frames per second.  The heyday of 480i at 30 frames per second was the PlayStation 2 era.  Before the PlayStation 2 and the Dreamcast, 480i was rarely used, and almost never by the SNES or Genesis.  Those systems used 240p and ran at 60 frames per second, as did many vintage computers from Apple, TI, Commodore, and Atari.  Even 320x200 256-color VGA graphics is just double-scanned 240p.

As many people know, 240p is a hack of 480i.  TV tubes were designed to display 480 interlaced lines 60 times per second (in NTSC countries).  The odd lines of an image would be displayed, followed by the even lines, and your eyes would see fluid motion: 30 times per second the TV would draw the odd lines, and 30 times per second the even lines.  240p works by telling the TV to draw the odd lines every time, 60 times per second.  Because the even lines are never drawn, there is a space between the lines, which can be noticed at times as scanlines.  The console or computer sends a complete frame for the TV to draw on the odd lines.
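
The odd/even scheduling described above can be sketched in a few lines of Python.  This is a deliberately simplified line-printer model (the comments below this post argue real CRTs are messier), and the function names are mine, not from any capture tool:

```python
def interlaced_fields(num_fields=2, lines=480):
    """480i: fields alternate between the odd-numbered and even-numbered
    physical lines, 60 fields (30 full frames) per second."""
    return [list(range(1 if f % 2 == 0 else 2, lines + 1, 2))
            for f in range(num_fields)]

def progressive_fields(num_fields=2, lines=480):
    """240p: every field lands on the same odd lines, so the console
    delivers 60 complete frames per second and the even lines stay dark."""
    return [list(range(1, lines + 1, 2)) for _ in range(num_fields)]

odd_field, even_field = interlaced_fields()
print(odd_field[:3], even_field[:3])   # [1, 3, 5] [2, 4, 6]
```

In the 240p case, both fields cover the identical set of lines, which is exactly why the even lines never light up.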

240p works on every regular NTSC CRT TV.  It is not used in broadcast TV or on home media, whether VHS, DVD, or Laserdisc.  The resolution of a 240p signal is too poor for lifelike images but perfect for video game pixel art.  Fortunately, most composite capture devices will capture a 240p signal, but they treat it as an interlaced 480i signal.  My device treats the lines from the first frame as the odd field and the lines from the second frame as the even field, and the monitor usually displays both at the same time.  Motion will show interlacing artifacts wherever the image differs from one frame to the next.  Thanks to AviSynth's SeparateFields function, you can recover the full 60 frames per second from an analog video capture.
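
SeparateFields itself is an AviSynth filter that operates on real clips; the underlying operation, though, is simple.  Here is a toy Python sketch of the idea, using lists of scanlines in place of real video frames (all names here are my own):

```python
def separate_fields(woven):
    """Split a woven 480-line 'interlaced' frame into its two 240-line
    fields.  For a 240p source these are two distinct, complete frames."""
    return woven[0::2], woven[1::2]

# Fake capture: two different 240-line game frames woven line by line,
# which is how a 480i capture device stores a 240p signal.
frame_a = [["A"] * 320 for _ in range(240)]   # game frame at time t
frame_b = [["B"] * 320 for _ in range(240)]   # game frame at t + 1/60 s
woven = [None] * 480
woven[0::2] = frame_a
woven[1::2] = frame_b

top, bottom = separate_fields(woven)
assert top == frame_a and bottom == frame_b   # both 60fps frames recovered
```

The real filter also has to get the field order right (top field first versus bottom field first), which varies by capture device.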

YouTube is a useful source of game information.  Unfortunately, the number of game videos made with an emulator vastly outnumbers the videos made with real hardware.  Moreover, most of those real-hardware videos were posted before YouTube's 60p support or simply don't bother with a proper conversion.  Because YouTube only supports 60p at a resolution of 1280x720 or better (currently also 1920x1080, 2560x1440, or 3840x2160), these captures need to be upscaled properly.  I have learned how to do this in order to give the best presentation of videos I can.
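
For 240p pixel art, the usual approach is nearest-neighbor (point) scaling by an integer factor so pixel edges stay crisp.  A minimal sketch, assuming a 320x240 source tripled to 960x720; a real pipeline also has to handle the 4:3 aspect ratio and pad out to 1280 wide with black pillarbox bars:

```python
def point_scale(image, factor):
    """Integer nearest-neighbor scaling: repeat every pixel `factor`
    times horizontally and every row `factor` times vertically."""
    scaled = []
    for row in image:
        wide_row = [px for px in row for _ in range(factor)]
        scaled.extend([list(wide_row) for _ in range(factor)])
    return scaled

src = [[(x + y) % 256 for x in range(320)] for y in range(240)]
big = point_scale(src, 3)            # 320x240 -> 960x720
print(len(big), len(big[0]))         # 720 960
```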

As of this writing, I have already put several videos on YouTube, captured from my real hardware.  I am initially going to place my videos into two categories, and each will be added to a playlist.  The first category will be Playthroughs, where I show a game from its beginning to its completion.  Most of these playthroughs will not win me any awards in the speedrunning, no-hit, or no-death categories.  The second category will be Demonstrations, where I show off a substantial portion of a game I cannot beat, or just demonstrate some interesting feature or effect from a game.

Capturing Atari or Famicom requires routing the RF video through a VCR, which converts it to composite video and audio.  TV tuner devices with an analog coaxial input do exist, but they often have severe problems with the kind of RF a video game machine outputs.  A VCR will output proper composite video and separate audio.  While it won't look or sound any better than the RF output, it is simple to capture.

NES video will look very gritty.  The NES not only offsets the color artifacts of each line in a pattern that repeats every three lines, it also shifts odd frames by one pixel compared to even frames.  This makes the image sharper than the Sega Master System's, but it is also less friendly to analog capture devices.  It is the nature of the beast; the only improvement I could conceivably make is to mod a system with a NESRGB board.  The NESRGB board will output S-Video.
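
The three-line pattern falls out of the NES clocks: the pixel clock is 1.5x the NTSC color subcarrier, so each 341-pixel scanline spans 227 1/3 chroma cycles and the phase slips by a third of a cycle per line.  A toy model of just the phase arithmetic (my own simplification, not capture code):

```python
PIXELS_PER_LINE = 341   # NES PPU pixels per scanline
THIRDS_PER_PIXEL = 2    # one NES pixel lasts 2/3 of a chroma cycle

def chroma_phase(line):
    """Chroma phase at the start of a scanline, in thirds of a cycle."""
    return (line * PIXELS_PER_LINE * THIRDS_PER_PIXEL) % 3

# The phase cycles 0, 1, 2 and repeats every three lines: the stairstep.
print([chroma_phase(n) for n in range(6)])   # [0, 1, 2, 0, 1, 2]
```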

For SNES video, I am fortunate to have an official Nintendo S-Video cable, so the output will look not only significantly sharper than NES video, but it also won't have the three-line stairstep caused by the filter inside the NES PPU.  Unfortunately, I do not have a modded 1-Chip console, so it will still look a little fuzzy.

Sega Genesis video will show lots of composite artifacting, which many Genesis games exploited to show more colors than the system could officially display.  Sonic 2 in particular goes nuts with this to show transparencies.  Even the Master System could have issues with artifact colors.

IBM PC CGA captures can use old or new CGA cards, and I discovered a simple trick to tame old CGA cards so they can be captured: https://www.youtube.com/watch?v=8k-U4LiL4GI I have also captured from the IBM PCjr's composite output.  While I could capture from the Tandy 1000 SX or TX's composite output, I find it very washed out compared to CGA or the PCjr.

Game Boy captures will have to be done with a Super Game Boy for now.  Someday soon I may be able to acquire a Super Game Boy 2 from Japan for proper speed.  I also hope to acquire an SD Card Launcher for the Gamecube so I can use my Game Boy Player with the Game Boy Interface Ultra Low-Latency version for true 240p captures.

Before I acquired my capture device, I thought I could perhaps do this using a camera pointed at the TV screen.  This simply didn't work for a variety of reasons, although I do have a few videos up where I believe it is important to show some physical hardware or feature that cannot be experienced through a capture card.

I considered whether I should add commentary to my videos.  Often game commentary is mostly about filling dead air with talk, and talking masks the game audio.  If YouTube had a commentary track feature that could be turned on and off, I would consider it.  Since it does not, I believe a game's audio is preferable to my nasal tones.

So far, I have used footage of games that can be completed in about an hour.  There are games that take hours upon hours to finish.  While YouTube can host 10-hour videos, I cannot store 10 hours of YUV-encoded video on my system!  Well, I can, but it would have to be precompressed, which limits the resulting video quality.  So for now, that playthrough of SNES Final Fantasy III will have to remain a long-term goal.
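
The storage math is easy to check.  Assuming uncompressed 4:2:2 YUV (2 bytes per pixel) at a typical 720x480 capture frame size and 60 frames per second after field separation (your device's numbers may differ):

```python
def yuv422_bytes(width, height, fps, seconds):
    """Uncompressed 4:2:2 YUV stores 2 bytes per pixel."""
    return width * height * 2 * fps * seconds

ten_hours = yuv422_bytes(720, 480, 60, 10 * 3600)
print(ten_hours / 10**12)   # about 1.49 terabytes
```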

Here are the links to the playlists.  First are the Demonstrations:

https://www.youtube.com/playlist?list=PLvqpAsa7-dlxkXzRZIo3pn_BHSuGG_iWe

Second are the Playthroughs:

https://www.youtube.com/playlist?list=PLvqpAsa7-dlzF0dR7Wzi1tWD7jeD37oBt

5 comments:

  1. I couldn't let "Because the even lines are never being drawn, there is a space between the lines which can be noticed at times as scanlines" lie; the idea of pervasive spaces is folklore.

    A real CRT scans diagonally. After distinguishing between the leading edge of the sync level (triggering horizontal) and a sufficiently long period of the sync level (triggering vertical), there's no connection between horizontal and vertical motion. Every sweep from left to right also descends by slightly less than the height of a line — they're diagonal sweeps, going downhill.

    The vertical scanning has no concept of the current location of the horizontal scanning and vice versa. Interlaced lines appear 'between' those of the previous field not because the CRT acts like a line printer (printing one line, skipping one line, printing one line, etc) but because vertical sync is triggered in the middle of a horizontal period rather than at the end. This means the next set of diagonals starts from the middle of a line at the top rather than from the left edge. And because they're diagonals, their centres end up placed in between.
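
    That half-line offset falls directly out of NTSC's 262.5 line periods per field, which a couple of lines of Python can illustrate (my toy model, not a claim about any specific set):

```python
# NTSC: 525 lines per frame, two fields per frame -> 262.5 line periods
# per field.  Track where, within a line, each field's vertical sync lands.
LINES_PER_FIELD = 525 / 2

offsets = [(f * LINES_PER_FIELD) % 1.0 for f in range(4)]
# Alternating 0.0 (line boundary) and 0.5 (mid-line): successive fields'
# diagonal sweeps start half a line apart, which is all interlace needs.
print(offsets)   # [0.0, 0.5, 0.0, 0.5]
```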

    To repeat and expand: a classic CRT has no mechanism coupling horizontal and vertical scanning locations, has no special-case tests for different fields, and has no idea which field it's in.

    If you look at a classic shadow mask, you'll see it's based on circles, with a tiled pattern. The diagonal scans do not align with the printed pattern, they just fly by and hit whichever phosphors they hit.

    You'll also notice that the majority of the surface area of a shadow mask is the black area in between the circles of colour. That equates to lost light. One of the early manufacturing challenges of colour TVs compared to black and white was firing proportionally more energy at the screen to get acceptable brightness when most of it was going to be thrown away. Black and white screens don't have any sort of grille.

    So: there's no row-like mechanism to support interlaced video (a single scan is as tall as the beam firing it), and getting decent brightness out of a colour screen was an early engineering challenge. What do you think most manufacturers did? They just made the beam sufficiently large that each line joined up with the chronologically next one. Then the next field is an overpaint.

    Possibly only Trinitron screens are likely to have a space between lines, and they were patent-protected until the late '90s. So if you were in the classic period and weren't using a Sony, you probably didn't see scan lines.

    Why does this idea persist? My guess: Trinitrons were widely cloned once the patent expired, and CRTs have a finite lifespan, so the surviving sample set isn't representative. Also, emulator authors tend to think digitally, believing in horizontal scans and the line-printer model, and reason backwards from there. Then what you see in an emulator affects your earlier memories. That's human perception.

    ReplyDelete
  2. I think the truth is somewhere in between, Tom. The sets are certainly going to sync with an offset on even fields, so there is merit in the OP. However, as you point out, beam spot size is definitely a factor, and so the 480i effective resolution of a normal signal is exaggerated, IMO. Regarding scan lines being diagonal: technically true; however, you ignore yoke position. I am sure many sets have a yoke rotation that effectively compensates for the continuous vertical sweep. Thus scan lines can end up level.

    ReplyDelete
  3. @Greg yes, you're absolutely right. I emphasised that the scans are diagonal because it helps drive home the point that vertical and horizontal scanning run simultaneously, without interconnection. So it's not: draw horizontally left-to-right, signal the vertical to perform a discrete step while retracing, repeat. There are no discrete steps.

    Otherwise, my main contention is merely that "there is a space between the lines", stated as an absolute, is incorrect. There isn't necessarily. Links not being permitted, Google e.g. 'a real space invader the reel todd' and check out the image attached to the article you find. You should see an extreme close-up of one of the enemies from the Atari 2600 Space Invaders on a real CRT. You won't see any spaces between lines.

    There are plenty of photos with gaps between lines, and it's unambiguously true that some screens showed gaps; it's just not true in the absolute that CRTs showed gaps between lines.

    To be explicit though: I'm aware that I'm pedantically splitting hairs, not disagreeing.

    ReplyDelete
  4. This comment has been removed by the author.

    ReplyDelete
  5. You say there is no 240p in home media, but I'm pretty sure early VideoCD/Philips CDi content was non-interlaced MPEG at 240p (NTSC) or 288p (PAL).

    ReplyDelete