
Capturing MDA/Hercules video using a logic analyzer

Introduction

The purpose of this guide is to understand how the MDA/Hercules video signal works. I will try to capture the monochrome TTL video signals coming from a Hercules Graphics Card, using a cheap 8-channel 24 MHz logic analyzer.

The main goal is not to capture video in real time, but rather to understand all the signals involved and, if possible, re-generate a screen frame from the data captured by the logic analyzer along with some minimal Python scripting.

Hardware used

  • JUKO XT + NEC v20
  • Winbond Hercules Graphics Card
  • Monochrome green phosphor monitor
  • 24 MHz 8CH clone logic analyzer

Getting started

The first thing to do is to add an extra DB9 connector "in parallel" with our Hercules Graphics Card video output, and to connect the logic analyzer (LA) to it following this pinout:

 

And the final result might look like this:

 

First capture

The next step is to boot the PC and start capturing data. The initial boot screen shows this:

 

And data captured by the logic analyzer looks like this:

 

Understanding the signals

Let's first take a look at IBM's Technical Reference manual specs for Screen Display:

 

And now let's do some measurements over the acquired data:

Horizontal Sync

 

Horizontal Sync ~= 18.141 kHz

Vertical Sync

 

Vertical Sync ~= 49 Hz

The measured values for hsync and vsync are quite close to the specs, even though we are using a cheap logic analyzer.
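As a quick sanity check, here is a minimal sketch (not part of the capture tooling) of how a frequency falls out of the transition timestamps; the two values are the consecutive HSYNC rising edges from the csv excerpt shown later in this guide:

def frequency_hz(rising_edge_1_ns, rising_edge_2_ns):
    # the period is the time between two consecutive rising edges
    return 1e9 / (rising_edge_2_ns - rising_edge_1_ns)

print(frequency_hz(461882708, 461937833))  # ~18140 Hz ~= 18.14 kHz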

Video and Intensity signals

From the seasip.info website, we can find the following notes regarding the "high intensity" signal:

 

If we look again at the screen we'll notice that there are 6 "rows" that have bold text:

 

and if we take a closer look at the LA signals for a single frame (one vsync interval), we will notice that there are also 6 "columns" with high levels on the video + intensity signals:

 

So, with just a glance at the signals, you can get an idea of where on the screen there should be normal or bright text.

Generating a screen frame from logic data

The idea here is to use a Python graphics library that can draw pixels in a window at (x, y) coordinates. For that task I will use pygame.

Exporting logic data

Let's export the data captured by the logic analyzer in csv format. I will export a single frame (one VSYNC period):

 

The output format looks like this (using ISO 8601 timestamps):

Time [s],HSYNC,VSYNC,INTENSITY,VIDEO
2023-07-18T21:18:33.461836583+00:00,0,1,0,0
2023-07-18T21:18:33.461882708+00:00,1,1,0,0
2023-07-18T21:18:33.461891167+00:00,0,1,0,0
2023-07-18T21:18:33.461937833+00:00,1,1,0,0
...

As we are focusing on a single frame (one VSYNC period), which is (50 Hz)^(-1) ~= 0.02 seconds < 1 second, we can discard everything above one second (2023-07-18T21:18:33), and also the +00:00 suffix. That simplifies our .csv file by leaving the timestamp in nanoseconds:

Time [ns],HSYNC,VSYNC,INTENSITY,VIDEO
461836583,0,1,0,0
461882708,1,1,0,0
461891167,0,1,0,0
461937833,1,1,0,0
...

Working in nanoseconds will simplify the calculations a bit.
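One simple way to do that conversion (a sketch; the original files may have been trimmed differently) is to keep only the fractional-second part of each timestamp:

def iso_to_ns(timestamp):
    # "2023-07-18T21:18:33.461836583+00:00" -> 461836583
    fractional = timestamp.split(".")[1].split("+")[0]
    return int(fractional.ljust(9, "0"))  # pad in case trailing zeros were trimmed

print(iso_to_ns("2023-07-18T21:18:33.461836583+00:00"))  # 461836583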

Parsing the csv file

Something to note about the csv file created by this logic analyzer software is that it only records transitions (e.g. hsync going from low to high at time N). This has some implications:

  • video/intensity pixel count and X position will need to be calculated based on signal transitions
  • some approximations will need to be done (e.g. rounding)

For simplicity, I will only work with the video signal (one of the two data bits) for the moment. The first approximation we have to make is the "pixel period", which can be derived from the MDA/Hercules Bandwidth value:

Bandwidth = 16.257 MHz -> pixel_period ~= 61.5 ns

So, given the time width of a video pulse and the pixel period, we should be able to get the "pixel count" within a transition. E.g.:

467740000,0,1,1,1
467740250,0,1,0,0
video_high_start = 467740000
video_high_end = 467740250
video_high_length = 467740250 - 467740000 = 250 ns

With this value we can calculate the number of pixels:

pixel_count = video_high_length / pixel_period = 250 ns / 61.5 ns ~= 4.065

This is not an integer, but we could round it. However, it might also give us a clue about the real pixel_clock (or Bandwidth) value: if we use 62.5 ns instead, we get exactly 4.0.
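Expressed as a small helper (a sketch; the constant name is mine, not from the original script):

PIXEL_PERIOD_NS = 1e9 / 16.257e6  # ~61.5 ns

def pixel_count(video_high_start, video_high_end, period=PIXEL_PERIOD_NS):
    # number of whole pixels covered by a video-high pulse
    return round((video_high_end - video_high_start) / period)

print(pixel_count(467740000, 467740250))  # 4 (250 / 61.5 ~= 4.065, rounded)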

In a similar way, we can get the x coordinate within that row, taking the first timestamp after the hsync pulse as 0:

x_pos = (video_high_start - line_start_time) / pixel_period

where video_high_start is the timestamp of the video signal's transition from 0 to 1, and line_start_time is the timestamp of the hsync transition from 1 to 0.
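The matching helper could look like this (again a sketch, with the ~61.5 ns period as default):

def x_position(video_high_start, line_start_time, period=61.5):
    # pixel offset of the pulse start from the beginning of the line
    return round((video_high_start - line_start_time) / period)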

Refresh screen frame logic

The function used to draw (or refresh) a single frame could look like this:

def refresh_screen(screen, screen_buffer):
    video_prev = 0
    video_high_start = 0
    video_high_end = 0
    hsync_prev = 0
    line_number = 0
    line_start_time = 0

    for row in screen_buffer:
        # Time [ns],HSYNC,VSYNC,INTENSITY,VIDEO
        t = int(row[0])
        hsync = int(row[1])
        vsync = int(row[2])
        intensity = int(row[3])
        video = int(row[4])

        # reset line number if vsync is low
        if vsync == 0:
            line_number = 0
            continue

        # hsync transition (high -> low)
        if hsync == 0 and hsync_prev == 1:
            hsync_prev = 0
            line_number += 1
            line_start_time = t
            continue

        # hsync transition (low -> high)
        if hsync == 1 and hsync_prev == 0:
            hsync_prev = 1
            continue

        # video transition (low -> high)
        if video == 1 and video_prev == 0:
            video_prev = 1
            video_high_start = t

        # video transition (high -> low)
        if video == 0 and video_prev == 1:
            video_prev = 0
            video_high_end = t

            pixel_count = round(
                (video_high_end - video_high_start) / pixel_period
            )
            x_pos = round((video_high_start - line_start_time) / pixel_period)

            for _ in range(pixel_count):
                draw_pixel(screen, x_pos, line_number, (color_r, color_g, color_b))
                x_pos = x_pos + 1
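draw_pixel, pixel_period and the color components are defined elsewhere in the script; a minimal pygame version of the pixel write could be as simple as this (an assumption, not necessarily the original implementation):

def draw_pixel(screen, x, y, color):
    # set a single pixel on the pygame surface
    screen.set_at((x, y), color)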

And in the main function we parse the file into a buffer, which is then passed as an argument to the refresh function:

import signal
import sys

import pygame


def main():
    signal.signal(signal.SIGINT, signal_handler)
    args = sys.argv[1:]

    if len(args) < 1:
        print("\nYou must specify a csv file to process")
        sys.exit(0)

    filename = args[0]

    # Initialize pygame
    pygame.init()
    pygame.display.set_caption("Logic Analyzer Video Capture Tool")

    # Create the window
    screen = pygame.display.set_mode((DEFAULT_SCREEN_WIDTH, DEFAULT_SCREEN_HEIGHT))
    clock = pygame.time.Clock()

    # Parse file and create screen_buffer
    screen_buffer = parse_csv(filename)

    refresh_screen(screen, screen_buffer)

    # Main loop
    running = True
    while running:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False
        pygame.display.flip()
        clock.tick(60)  # Limit to 60 frames per second
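parse_csv is not shown above; a minimal version that works with the simplified nanosecond csv format could look like this (a sketch, not the original code):

import csv

def parse_csv(filename):
    # returns rows of [Time [ns], HSYNC, VSYNC, INTENSITY, VIDEO] as strings;
    # refresh_screen converts each field to int
    with open(filename, newline="") as f:
        reader = csv.reader(f)
        next(reader)  # skip the header line
        return list(reader)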

First attempt to generate an image

After putting all the parts together and running the script on test_boota.csv, I got this image:

 

The result is far from perfect, but it is quite good for a first attempt.

Improving the results

After the first attempt, I tried to improve the generated image through some "trial and error" tests, along with extra measurements.

Vertical line patterns as source

To get a better view of the issue, I decided to create a vertical line pattern using BGI (Borland Graphics Interface):

#include <graphics.h>
#include <conio.h>

int main(void)
{
   int gd = DETECT, gm;
   int x;

   /* init graphics */
   initgraph(&gd, &gm, "C:/TC/BGI");

   /* draw a vertical line every 4 pixels across the 720x348 screen */
   for (x = 0; x < 720; x += 4) {
      line(x, 0, x, 347);
   }
   line(719, 0, 719, 347);

   getch();
   closegraph();
   return 0;
}

This generates the following pattern:

 

Adding back porch

I then did the following measurements:

 

This "back porch" would be part of the "horizontal blanking" period, and it is around 1452 ns. So, I took that into account after the hsync transition:

back_porch = 1542

        # hsync transition (high -> low)
        if hsync == 0 and hsync_prev == 1:
            hsync_prev = 0
            line_number += 1 + skip_lines  # skip_lines is used later for scanline emulation
            line_start_time = t + back_porch
            continue

And I also added some key events, so I can easily change that value, along with the clock period, by pressing keys:
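The bindings could be wired into the pygame event loop along these lines (a sketch; the actual keys used are an assumption):

# inside the main loop's event handling
if event.type == pygame.KEYDOWN:
    if event.key == pygame.K_UP:
        back_porch += 1
    elif event.key == pygame.K_DOWN:
        back_porch -= 1
    elif event.key == pygame.K_RIGHT:
        pixel_period += 0.5
    elif event.key == pygame.K_LEFT:
        pixel_period -= 0.5
    refresh_screen(screen, screen_buffer)  # redraw with the new values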

 

And the values that worked best in that experiment were:

pixel_period = 61.5
back_porch = 1531

And I could improve the first capture a bit more:

 

However, I could never get perfect results, and sometimes I got better image quality using 1532 ns instead of 1531.

I came to the conclusion that some error is to be expected given the quality and sampling rate of this cheap logic analyzer.

Aspect ratio and scanlines emulation

Let's see how Prince of Persia looks:

 

The aspect ratio of the rendered image is not correct. I could try to stretch the image using pygame, but I decided to keep it simple for the moment and just try two different approaches:

Scanlines emulation

This should be straightforward: just leave one horizontal line black between every two drawn scanlines, as in the sketch below:
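With the skip_lines variable already used in the hsync handler above, this can be as simple as (assuming skip_lines defaults to 0 elsewhere):

skip_lines = 1  # each hsync now advances line_number by 2, leaving a black line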

 

Modifying pixel size

This also involves leaving one horizontal black line between scanlines, but I also stretched the pixel's y size by a factor of two, which results in a stretched and brighter image (see the sketch below):
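A sketch of the stretched pixel write (the exact spacing is an assumption; it reuses the draw_pixel hook from before):

def draw_pixel(screen, x, y, color):
    # each source pixel becomes a 1x2 block on the pygame surface
    pygame.draw.rect(screen, color, pygame.Rect(x, y, 1, 2))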

 

Again, this is not the correct aspect ratio; but it doesn't look bad.

Colors

Changing colors can be accomplished by just setting different RGB values, as sketched below:

  • white: (170, 170, 170)
  • green: (0, 170, 0)
  • amber: (170, 91, 0)
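As a small sketch, the palette could simply be a dict the script picks from (the names and structure are mine, not from the original):

PALETTES = {
    "white": (170, 170, 170),
    "green": (0, 170, 0),
    "amber": (170, 91, 0),
}

color_r, color_g, color_b = PALETTES["green"]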

 

Next steps

In a future experiment I might try to capture video in "real time". sigrok-cli could be a good candidate for capturing the logic data, but a different format and approach would be needed to reduce lag.

Source code and examples

The Python script can be found here. I also included .csv example files in this folder.

Gallery