System Of Levers

Solving problems no one else seems to be having


Named Pipes to Turn CLI Programs Into Python Functions

TL;DR

A named pipe is like a file that doesn't store anything. It has a path (the name, I guess). It can be opened, read from, and written to, but the content is temporary, held in an in-memory buffer rather than on disk.

Since a named pipe is also a pipe, it acts a bit differently than a normal file. When you open it you can only open it read-only or write-only, not read-write (POSIX leaves a read-write open of a FIFO undefined). The idea is that you'd have one process with it open for write (the producer) and one process with it open for read (the consumer). This matches a pipe, since pipes have a read end and a write end. At least on Unix.

A call to open a named pipe for read will block until some other process opens it for write, and vice versa. Calls to read and write will also block if the named pipe is empty or full, respectively.
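
Here's a tiny Unix-only sketch of that dance (the path and message are made up). The write end is opened from a thread, because a single thread opening both ends one after the other would block forever on the first open:

import os
import tempfile
import threading

fifo_path = os.path.join(tempfile.mkdtemp(), 'demo_fifo')
os.mkfifo(fifo_path)  # create the named pipe on the filesystem

def producer():
    # this open blocks until the read end is opened below
    with open(fifo_path, 'w') as wf:
        wf.write('hello through a pipe')

t = threading.Thread(target=producer)
t.start()

# this open blocks until the producer thread opens the write end
with open(fifo_path, 'r') as rf:
    print(rf.read())  # returns once the write end is closed
t.join()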

Named pipes can be useful if you need to create CLI pipelines with programs that consume or produce multiple inputs and outputs. You have to be careful not to create deadlocks due to the blocking behaviour though. You also can't use this with a program that needs to seek in a file or that reads and writes the same file.

I'm using them to run file-based CLI programs from Python without writing to the disk.

What I Was Trying To Do

I'm using two command line programs, rgbasm and rgblink. They're part of the RGBDS Game Boy assembler toolchain. If I were running them from the command line I would use the following two commands:

rgbasm A.asm -o A.o
rgblink A.o B.o -o A.gb -m A.map -n A.sym

I want my Python script to be able to provide A.asm and get back the contents of A.gb, A.map, and A.sym.

So I can just have my script write A.asm, use subprocess.run() to run the two commands, then read in the three files!
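
That version would look something like this. A sketch, assuming rgbasm and rgblink are on the PATH, and linking only the one object file to keep it short:

import os
import subprocess
import tempfile

def assemble_on_disk(asm_source):
    with tempfile.TemporaryDirectory() as d:
        paths = {name: os.path.join(d, 'A.' + name)
                 for name in ('asm', 'o', 'gb', 'map', 'sym')}
        with open(paths['asm'], 'w') as f:
            f.write(asm_source)
        subprocess.run(['rgbasm', paths['asm'], '-o', paths['o']], check=True)
        subprocess.run(['rgblink', paths['o'], '-o', paths['gb'],
                        '-m', paths['map'], '-n', paths['sym']], check=True)
        results = {}
        for name in ('gb', 'map', 'sym'):
            with open(paths[name], 'rb') as f:
                results[name] = f.read()
        return results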

Unnecessary Constraints!

I didn't want to write anything to disk.

Why?

My justification was that I wanted to run this on a server, and having it write to and read from the disk felt wrong. That's a bad reason though. This is for a personal toy project. Making scalable, production quality software is not the goal! I'm not saying I achieved scalable, production quality software, just that it's not the goal.

The actual reason was that I thought it would be possible to do and I wanted to figure out how. That's a good reason! At least for a personal toy project. Even if it is a bit of a detour.

What Did I Know Going In

Named Pipes / FIFOs

I quickly came across named pipes and how to create them in Python with os.mkfifo, or from the command line with mkfifo (I don't know about Windows though). After being briefly distracted by going overboard with context managers to create and delete a temp directory and a bunch of FIFOs, I ran into my first gotcha: my program froze when I tried to open a FIFO.
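
The context manager detour was roughly this shape (a reconstruction, not the exact code): a temp directory full of FIFOs that all get cleaned up on exit.

import contextlib
import os
import tempfile

@contextlib.contextmanager
def fifo_dir(*names):
    # temp directory holding one FIFO per name; everything is
    # deleted when the with-block exits
    with tempfile.TemporaryDirectory() as d:
        paths = [os.path.join(d, name) for name in names]
        for path in paths:
            os.mkfifo(path)
        yield paths

# with fifo_dir('fifo_in', 'fifo_middle', 'fifo_out') as (fin, fmid, fout):
#     ...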

The problem was that I was only opening one end of the FIFO/pipe, and that blocks until the other end is opened.

This was lucky! Confusing, but lucky! You see, I didn't know what I was doing, and I very well could have accidentally written my code in a way that missed this problem. Then I never would have learned that opening a FIFO/named pipe will block unless the other end is also open. I also wouldn't have realized that I knew even less about what I was doing than usual and that I'd need to pay more attention.

You can use os.open() to open without blocking, and that's what I did at first. Something like this (from rough memory, probably not exactly what I did):

import os
import subprocess

def opener(path, flags):
    # force O_NONBLOCK so the opens below don't block waiting for the other end
    return os.open(path, flags | os.O_NONBLOCK)

with open('fifo_in', 'w', opener=opener) as wf, \
        open('fifo_out', 'rb', opener=opener) as rf:
    wf.write(input_data)
    p = subprocess.Popen(['rgbasm', 'fifo_in', '-o', 'fifo_middle'])
    subprocess.run(['rgblink', 'fifo_middle', '-o', 'fifo_out'])
    p.wait()
    output = rf.read()

I was surprised when it worked. I'm pretty sure it can deadlock in a few ways though. Here's what's actually going on.

So What's the Problem?

For one thing, it only works because the input isn't very big! Pipes/named pipes/FIFOs have a maximum capacity. If you try to write to one that's full, the write will block, or fail if O_NONBLOCK is set. Reading from one that's empty has similar results. There's more info in the pipe(7) man page, in the "I/O on pipes and FIFOs", "Pipe capacity", and "PIPE_BUF" sections. So if the input data were bigger, the Python script would crash on wf.write(input_data).

Similarly, if the output were too big then rgblink would block when it tries to write to a full pipe. And since the Python script is stuck inside subprocess.run() waiting for rgblink to exit, it never reaches rf.read() to drain the pipe, so everything is just stuck!
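
You can watch the capacity limit in action with an anonymous pipe, which has the same capacity rules as a FIFO. A small sketch (the 65536 figure is the default pipe capacity on Linux; other systems may differ):

import os

r, w = os.pipe()  # anonymous pipe, same capacity behaviour as a FIFO
os.set_blocking(w, False)  # writes to a full pipe now raise instead of hanging

total = 0
try:
    while True:
        total += os.write(w, b'x' * 4096)
except BlockingIOError:
    # on Linux this typically prints 65536, the default pipe capacity
    print('pipe filled up after', total, 'bytes')
os.close(r)
os.close(w)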

Maybe Don't Use O_NONBLOCK?

Ok, I could change it to something like:

# start the processes first so the blocking opens below have partners
p_rgbasm = subprocess.Popen(['rgbasm', 'fifo_in', '-o', 'fifo_middle'])
p_rgblink = subprocess.Popen(['rgblink', 'fifo_middle', '-o', 'fifo_out'])
with open('fifo_in', 'w') as wf, \
        open('fifo_out', 'rb') as rf:
    wf.write(input_data)
    p_rgblink.wait()
    output = rf.read()

Now we're not using O_NONBLOCK, so at least reads and writes won't fail in the Python code. I even think this might work for this case, at least when the output is small enough to fit in the pipe.

If the output is too big, though, we're back to the earlier deadlock: rgblink blocks writing to a full fifo_out while the script sits in p_rgblink.wait(). I could probably fix that by getting rid of p_rgblink.wait(), but I don't actually know if rf.read() will always return the entire output. I think it will as long as rgblink doesn't do something strange like close the file and reopen it to write some more. The reason I think this is that read() will read until it hits an end-of-file, and I think that only happens when all the write ends of the pipe are closed.
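
A quick experiment backs that up. Using an anonymous pipe again (FIFOs behave the same way), read() with no size argument stays blocked across multiple writes and only returns once the write end is closed:

import os
import threading
import time

r, w = os.pipe()

def writer():
    os.write(w, b'part one, ')
    time.sleep(0.1)  # the reader stays blocked through this pause
    os.write(w, b'part two')
    os.close(w)  # closing the last write end is what produces end-of-file

t = threading.Thread(target=writer)
t.start()

with os.fdopen(r, 'rb') as rf:
    print(rf.read())  # prints b'part one, part two' only after the close
t.join()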

Anyway, we have other problems.

Other Problems

The Python script could have blocked on open('fifo_in', 'w'), because we have no guarantee about the order in which rgbasm and rgblink open their files. For example, suppose:

rgbasm opens its files in the order

  1. fifo_middle
  2. fifo_in

and rgblink opens its files in the order

  1. fifo_out
  2. fifo_middle

So now we're in a deadlock:

  1. The Python script is blocked waiting for rgbasm to open fifo_in.
  2. rgbasm won't open fifo_in since it's blocked waiting for rgblink to open fifo_middle.
  3. rgblink won't open fifo_middle since it's blocked waiting for the Python script to open fifo_out.
  4. The Python script won't open fifo_out because of point 1!

Grumble

Maybe there's something in asyncio that could help me?

Ok, no, that's more complicated than I thought it would be and I don't see a magic wand for this anyway.

Fine, I'll Use Threads

I already have deadlocks. I can't make it much worse, right?

import threading
import subprocess
import io

def doRGBStuff(input_data):
    # the three FIFOs are assumed to already exist (e.g. made with os.mkfifo)
    def do_write(file_name, data):
        with open(file_name, 'w') as wf:
            wf.write(data)

    def do_read(file_name, out_stream):
        with open(file_name, 'rb') as rf:
            out_stream.write(rf.read())

    # feed the input from a thread so its blocking open and write
    # can't hang the main script
    write_thread = threading.Thread(target=do_write, args=('fifo_in', input_data))
    write_thread.start()

    p_rgbasm = subprocess.Popen(['rgbasm', 'fifo_in', '-o', 'fifo_middle'])
    p_rgblink = subprocess.Popen(['rgblink', 'fifo_middle', '-o', 'fifo_out'])

    out_stream = io.BytesIO()
    read_thread = threading.Thread(target=do_read, args=('fifo_out', out_stream))
    read_thread.start()
    read_thread.join()  # returns once rgblink closes fifo_out

    # tidy up the helper thread and reap the processes
    write_thread.join()
    p_rgbasm.wait()
    p_rgblink.wait()
    return out_stream.getvalue()

Now we have write_thread to write data into fifo_in and read_thread to read from fifo_out into an io.BytesIO stream. Everything that interacts with the FIFOs is in either a thread or a subprocess, so when they block they won't block the main Python script. The script blocks at the end with read_thread.join(), which should wait until rgblink finishes writing its output. It still won't work properly if rgblink closes and reopens the output file in the middle.
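
For completeness, calling it looks something like this. asm_source is whatever assembly text you want to build (made up for the example), and the FIFOs have to exist before the call:

import os

for name in ('fifo_in', 'fifo_middle', 'fifo_out'):
    os.mkfifo(name)  # doRGBStuff assumes these already exist

rom = doRGBStuff(asm_source)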

I can't think of anything that would let this deadlock, but maybe I'm missing something.

To complete the original task of getting three outputs, I'll just need two more FIFOs and reading threads.
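
A sketch of that extension, with the same untested caveat as everything else here. The output FIFO names are made up, and the -m and -n flags are the ones from the original rgblink command:

import threading
import subprocess
import io

def doRGBStuffAllOutputs(input_data):
    # assumes all five FIFOs already exist
    def do_write(file_name, data):
        with open(file_name, 'w') as wf:
            wf.write(data)

    def do_read(file_name, out_stream):
        with open(file_name, 'rb') as rf:
            out_stream.write(rf.read())

    write_thread = threading.Thread(target=do_write, args=('fifo_in', input_data))
    write_thread.start()

    p_rgbasm = subprocess.Popen(['rgbasm', 'fifo_in', '-o', 'fifo_middle'])
    p_rgblink = subprocess.Popen(['rgblink', 'fifo_middle', '-o', 'fifo_gb',
                                  '-m', 'fifo_map', '-n', 'fifo_sym'])

    # one reading thread per output FIFO, so a blocked read on one
    # can't hold up the others
    streams = {name: io.BytesIO() for name in ('fifo_gb', 'fifo_map', 'fifo_sym')}
    read_threads = [threading.Thread(target=do_read, args=(name, stream))
                    for name, stream in streams.items()]
    for t in read_threads:
        t.start()
    for t in read_threads:
        t.join()
    write_thread.join()
    p_rgbasm.wait()
    p_rgblink.wait()
    return {name: stream.getvalue() for name, stream in streams.items()}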

Also I haven't tested any of the specific code in this post! It's based on some other code that DOES work though!