Designing a high-frequency-trading system/simulation lab

Part 0 - Introduction
Part 1 - Hardware
Part 2 - Replay the market
Part 3 - Build the LOB
Part 4 - Measurements
Part 5 - Overclocking
Part 6 - Linux Tuning
Appendix - Costs of opening a one-man HFT firm
Changelog


Introduction

This text is a primer on how to develop a high-frequency-trading system/simulation lab, with a focus on the Nasdaq exchange and the ITCH protocol.

The code is entirely written in C and follows the data-oriented-design methodology.
The reason for picking C instead of C++, even though the latter is the de facto language in the industry, is that C is a very simple language to understand (and optimize), which makes this primer more accessible than if I were to write the same in C++ (which would require you to be very familiar with advanced concepts of that language).

The topics covered are:

  • The necessary hardware to build a simulation lab, as well as advice on ideal PC components.
  • How to “replay the market” by sending market data from a “fake exchange” to a “trading server”.
  • Parsing incoming data and building the limit order book (LOB).
  • Performance measurement techniques.
  • RAM and CPU overclocking for optimal use of the hardware.  
  • SIMD instructions.  
  • Possibly other topics that will later come to mind.  

By the end of this text, you will know how to set up your own simulation lab, how to build a very fast LOB, how to measure the performance of your code and will be in a great position to do research on market microstructure and experiment with HFT strategies.

I am still working on this text so feel free to check for updates every few weeks.


Hardware

Note that even if you can't build a lab similar to the one described below, you will still benefit from reading this text,
since you will learn how to write a very fast ITCH parser and order book (which is faster than the fastest C++
implementation available in the open-source domain), as well as pick up a few tips on how to write high-performance modern C
code.

To be able to get a simulation lab as close as possible to the live environment, you will need two computers (running Linux), two network cards (NICs) with both kernel bypass and hardware timestamping functionality, as well as two cables to connect them (1 in and 1 out on each NIC).
One of the computers (computer A) will act as a fake exchange by sending the historical market data to the second computer (computer B - the trading server), which itself handles the parsing of the market data, builds the LOB and does the desired computations to decide when it should trade.

Regarding which Linux distro to use on “computer B”, I recommend that you pick Arch Linux if you have experience with Linux: since it is a rolling distro, you will very easily be able to test updates made to compilers and see if they improve your code, and additionally you can run it without the X server (the graphical display server), in which case the base system occupies only around 300 MB of RAM.
If you are a less experienced user, Manjaro is a good alternative.
For “computer A”, any distro will do.

The objective of this setup is to accurately simulate a real world live trading situation, which means that you will have zero control over computer A (the fake exchange) but you have full control over computer B (the trading server).
Frameworks, such as those that you find on GitHub, won’t cut it because when you are designing a low latency system you have to consider, measure and optimize every single detail of your system, since your success/profitability will very likely depend on it.
Your system must be tuned, both hardware and software, and it must only do one thing (the trading logic), and not have rogue processes running in the background interfering with the CPU cache.
To have such high precision measurements you need to physically simulate what happens once your server is live, and that is the reason why you need two physically independent computers.

As mentioned above, computer A simulates the exchange by sending historical market data over the NIC (in other words, it replays the market), and computer B acts as the trading server by receiving the data sent by computer A, parsing it, building the order book, doing any desired analysis and eventually sending a message (i.e. an order) back to computer A.

It’s now easy to understand that if you were reading data from disk and processing it on the same computer, you would be unable to get proper measurements, since the computer would be performing other tasks (reading data, piping it to another process, etc.) that would skew the results.

You would also be unable to test and get an answer for questions such as:
How do you deal with the network side when you are trading in the real world and no longer reading data from disk?
How can you test if your server is able to handle spikes of millions of packets per second?
How much time does it take to copy the data from the NIC buffer to user space? And does that extra time have a negative impact on your trading algorithms?
on and on…

Before going a bit more in depth on how to accurately measure the performance of computer B (the trading server), I would first like to clearly define “performance”.

Performance, in this context, is the sum of the time it takes to retrieve the data from the NIC, plus the time to do all the computations you want to do, plus the time to send data out of the NIC.

Now, there are two ways you can truly measure the performance of a system. (I alluded to one before - the one I took - but I will clarify both now):

  • The first approach is to use a switch with hardware time stamping functionality, and connect both computers to it.
    This switch simply timestamps every packet that passes by it, both coming from “computer A” and “computer B”, and from there you can calculate how much time “computer B” takes to process, and reply, to incoming data.
    These switches can cost from around $4k for the “old” Metamako units to $40k for the new Arista/Cisco ones.

  • The second approach is the one I took, and which I recommend you do.
    I bought two second-hand NICs for around $200 each on eBay, and these NICs offer the same hardware timestamping functionality that you will find on the much more expensive switches, with a precision of around 10ns.
    The specific models I bought were the following: Exablaze X2 and Exablaze X4.
    You can buy two X2, or two X4, it doesn’t matter, I picked one of each because those were the ones that were for sale at the moment.
    If you end up purchasing the Exablaze cards, check their documentation online since they go over installation and some tips on how to get the most out of the cards and most importantly, remember to update the firmware.
    Note: You can also check some of the Solarflare cards, they can be had pretty cheap too - just make sure that the model you buy supports OpenOnload / ef_vi.

After plugging both cards into the respective computer motherboards (use the PCIe slot closest to the CPU!), download and install the libraries that allow you to access them, plug the two cables between the cards and run:

$ sudo exanic-config
Device exanic0:
  Hardware type: ExaNIC X25
  Serial number: 643F5F0255EA
  Temperature: 44.8 C   VCCint: 0.84 V   VCCaux: 1.83 V
  Function: network interface
  Firmware date: 20190805 (Mon Aug  5 23:54:48 2019)
  PPS port: input, termination disabled
  Port 0:
    Interface: enp1s0
    Port speed: 10000 Mbps
    Port status: enabled, SFP present, signal detected, no link
    Bypass-only mode: on
    MAC address: 64:3f:5f:02:55:ea
    RX packets: 0  ignored: 0  error: 0  dropped: 0
    TX packets: 0
  Port 1:
    Interface: enp1s0d1
    Port speed: 10000 Mbps
    Port status: enabled, SFP present, signal detected, no link
    Bypass-only mode: on
    MAC address: 64:3f:5f:02:55:eb
    RX packets: 0  ignored: 0  error: 0  dropped: 0
    TX packets: 0

If you see something similar to the above, you are ready to follow along: you will be able to replicate the C code that I will share below and will end up with a professional, high-accuracy simulation lab, where you can test multiple ideas/algorithms and do research on market microstructure.

I will now discuss which PC components are ideal for a low latency server.

Remember that for computer A (the fake exchange) you really can use any computer (i.e. an old one).
If you can upgrade some components, my recommendation is that you purchase the fastest NVMe drive you can afford, since the faster you can read from disk, the faster you can push data out of the NIC.
In the next section, I will show you how to load the historical market data into memory and just push it out of the card as fast as possible - this is a great way to test how computer B (the trading server) behaves when you saturate it with data - however, it’s not always possible to have the entire dataset in memory, and that is when having a very fast disk really helps.

For computer B, however, choices of components will very much impact the performance you can get. Let’s go one by one.

CPU: You will want an Intel “K” CPU, that is, a CPU that allows you to overclock (OC) its core frequency and its cache frequency via the BIOS.
Personally, I have an 11900K which, while it didn’t receive great feedback from the public due to its poor multi-core performance, is the fastest single-core processor you can buy at the moment, and hence perfect for an HFT application.
In a later section I will go in depth into overclocking and show you how much techniques such as disabling extra (unnecessary) cores, disabling sleep states and raising the aforementioned frequencies improve the overall performance of the system.
Another important feature of the latest-gen Intel CPUs is that they support AVX-512 (while previous generations only supported up to AVX2), and I will also show you how using such intrinsics can lead to extremely high-performance code when doing real-time analysis of data.
Alternatives: The previous-gen 10850K is also a very decent CPU and, with a good cooling solution (and some luck on the specific bin), can be pushed up to 5.4 GHz on all cores or 5.5 GHz in a one/two-core configuration.

Motherboard: Here you should definitely consider a 2-DIMM-slot motherboard: 2-DIMM motherboards have shorter traces to the CPU, each of the two slots is connected directly to one memory channel of the CPU, and since there are fewer DIMMs there is less signal loss.
These motherboards are commonly used for overclocking (do you see a pattern?), not only for the reasons mentioned above but also because their BIOSes allow super fine tuning of RAM/CPU parameters, so you can really get the best out of your system.
On my current system I have an Asus Apex XIII installed and in my opinion the Asus BIOS is second to none in terms of organization, accessibility and fine tuning abilities.
Alternatives: Asus Apex XII, Gigabyte Aorus Tachyon, EVGA (z590) Dark and the AsRock OC Formula.

RAM: For memory you will want a 2x8GB B-die (Samsung chip) kit.
B-die kits are the most stable to overclock (in comparison to something like Micron Rev-E), which means you can push more voltage through them / increase the frequency and they won’t crash.
If you know you will need 32GB of RAM (2x16), note that they will be around 5/6ns slower than the 2x8 variants.

The best 2x8 kits:
G.Skill Trident Z Royal DDR4-4800 CL18
Team T-Force XTREEM 16GB 4500 CL18
OLOy Blade RGB 4000 CL14

The best 2x16 kits:
G.Skill Trident Z RGB 4266 CL17
G.Skill Ripjaws V 4000 CL16
Team T-Force XTREEM ARGB 3600 CL14

I currently own a pair of each of the XTREEMs and they are excellent to OC.

Disk: Any NVMe/SSD will do. If you are looking for an NVMe drive, either the Samsung 980 Pro or the Sabrent Rocket 4 Plus are great picks.
If you want to store market data, which comes with a few caveats by itself (see Appendix - Costs of opening a one-man HFT firm), you could look into the enterprise Micron 9300 Pro, perhaps the best SSD-format disk, with up to 15 TB of capacity.

Cooling: This is an extremely important part of the system, because when you OC both RAM and CPU they will get hotter and you will need to cool them down, otherwise the motherboard will either lower their frequencies or just shut down the system to avoid burning the components.
There are a few parts that you have to cool down to achieve maximum system performance and stability and those are:
CPU cooler: I recommend that you go with an AIO (All in One) cooling solution with 360mm (3 x 120) fans. I currently own the Arctic Liquid Freezer II 360 and it’s an excellent cooler which is also very well priced.
Alternatively, you could build your own liquid-cooling loop by purchasing and assembling the necessary components from a place like AquaTuning, but a problem arises if you have to ship the server to a colocation site: you would need to have someone fill up the cooling liquid there, as shipping the server with the liquid already in the loop could cause leaks.
You could also go with a “normal” air cooler, but I discourage it since it won’t be able to cool the CPU as well as the AIO counterparts.
To cool the RAM chips and the NIC you should place a fan facing each of the components.
This will greatly improve memory stability while OC’ing and while the cooling for the NIC is not as important, it does make a difference and provides extra airflow inside the case.
You can also add one or two small fans to exhaust the air at the back of the case.
Regarding which fans to purchase, I have always had great results with Noctua fans (and the Arctic fans included with the AIO); they are very capable and silent, so you can easily work with the system by your side. However, if noise is not a concern (i.e. you have the server in a different room or it is ready to be shipped to the trading venue), you can look into more powerful fans such as those by Delta Electronics.

PSU: Any 650W+ model from a known brand such as EVGA, Corsair, XPG, etc will be a good pick. I do recommend however that you pick one with a full modular design (i.e. you only attach the cables that you need to the PSU) as this will lead to much less clutter inside the case and consequently improve airflow.
I currently have a XPG 750W and it’s been great.

Case: Assuming that you want to colocate the server at a trading venue, you will need a 4U case, since otherwise the CPU cooler I recommended won’t fit.
Now, finding decent 4U cases is not the easiest task, and even if you find one it will probably be cluttered with unnecessary things.
I recommend that you remove the front panel and purchase some metallic mesh from a hardware store and use that instead, and while this procedure takes a bit of work, it will greatly improve airflow. (Have a look at a few pictures of my system below, where I did just that).
If your system is meant to be at home, any ATX desktop case with decent airflow will do.  

I will end this section with a few pictures of my own lab. Hope you are enjoying the text so far.

(Pictures 1-5.)


Replay the market

In this section I will show you how to “replay the market” by sending market data from computer A (the fake exchange) to computer B (the trading server).

For the sake of accessibility and applicability of this primer, I made some decisions on how to proceed, and I will explain them now.

When market data is captured, it is captured in a format called pcap (packet capture), which means that you store the ethernet packets exactly as they arrive, with the information from all the different OSI layers, instead of only storing the payload (the messages from the exchange).
This is done mainly because having the raw data allows you to troubleshoot and do a deep analysis of your network.
If you have the Exablaze/Cisco NICs, you can use this optimized library to perform this capture.

However, there are some disadvantages of working with .pcap files.
First, these files are huge (a single day of market data for a single exchange can be 80+ GB) and second, acquiring market data in this format is quite expensive and very few data vendors sell it.
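
For context, here is roughly what a classic pcap file looks like on disk: a 24-byte global header followed by, for every captured packet, a 16-byte record header and then the raw frame bytes (ethernet/IP/UDP headers included). It is this per-packet framing, on top of the full network headers, that makes the files so large. A sketch of the two headers (standard libpcap layout, shown only to make the file-size argument concrete):

// pcap_sketch.h

#include <stdint.h>

typedef struct pcap_file_header {  // Once, at the start of the file (24 bytes).
  uint32_t magic_number;           // 0xa1b2c3d4 (0xa1b23c4d for ns timestamps).
  uint16_t version_major;
  uint16_t version_minor;
  int32_t  thiszone;               // GMT offset, usually 0.
  uint32_t sigfigs;                // Usually 0.
  uint32_t snaplen;                // Maximum stored length per packet.
  uint32_t network;                // Link-layer type, 1 = Ethernet.
} pcap_file_header;

typedef struct pcap_record_header {  // Before every captured packet (16 bytes).
  uint32_t ts_sec;                   // Timestamp, seconds.
  uint32_t ts_subsec;                // Microseconds (or nanoseconds).
  uint32_t incl_len;                 // Number of bytes actually stored.
  uint32_t orig_len;                 // Number of bytes on the wire.
} pcap_record_header;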

What I decided to do was the following:
Instead of showing you how to send .pcap market data from computer A to computer B, I will show you how to send the actual payload that those files contain, and because Nasdaq makes some of these samples available for download at ftp://emi.nasdaq.com/ITCH/, you will be able to grab them for free and follow along, implementing the code in this series.

Now, to clarify, the aforementioned payload follows a specific protocol called ITCH, which is a protocol developed by Nasdaq for disseminating market data.  

Because the ITCH protocol is binary, it is extremely fast and easy to parse when compared to a protocol such as FIX
(which is ASCII based), and this particularity makes it ideal to parse on an FPGA, since all fields/types have an exact
length and appear in exactly the same order on every message of the same type.
These two aspects allow you to write FPGA logic that parses all message fields at the same time, which results in a
massive performance increase.
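
To make that concrete, here is a sketch of the wire layout of the “Add Order” (type 'A') message, following the ITCH 5.0 specification and matching the offsets used by the parser later in this text. Note that all integer fields are big-endian on the wire, which is why the parser byte-swaps them (bswap_*) on x86:

// itch_add_order_sketch.h

#include <stdint.h>

typedef struct __attribute__((packed)) itch_add_order {
  char     message_type;     // 'A'                        (offset 0)
  uint16_t stock_locate;     // Per-day ID of the stock    (offset 1)
  uint16_t tracking_number;  //                            (offset 3)
  uint8_t  timestamp[6];     // Nanoseconds since midnight (offset 5)
  uint64_t order_reference;  // Unique (per day) order ID  (offset 11)
  char     buy_sell;         // 'B' or 'S'                 (offset 19)
  uint32_t shares;           //                            (offset 20)
  char     stock[8];         // Right-padded with spaces   (offset 24)
  uint32_t price;            // 4 implied decimal places   (offset 32)
} itch_add_order;            // 36 bytes total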

I made one more decision that I must clarify prior to continuing:
That decision is that computer A will be sending only one message in each ethernet frame, instead of (possibly) multiple messages as you would see in a real-world environment.
I decided to do this because it slightly simplifies the code on both computer A and computer B.
Basically, what will happen on computer B is that you receive one frame, parse the single message within it, receive another frame, parse it, and so on, while in a live environment you might receive multiple messages in the same ethernet frame and would have to iterate over the receive buffer and parse all the messages within.
The change is minimal, but I argue that it aids the readability of both computer A’s and computer B’s code, and at this point that is more important in my opinion.

When I finish this primer, I will write an appendix where I will show you how to both send .pcap data from computer A to computer B, with multiple messages in the same frame, as well as how to handle multiple messages in the same frame in computer B.  
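
In the meantime, here is a rough idea of what that inner loop could look like on computer B, assuming each message inside a frame is preceded by its 2-byte big-endian length (as in the ITCH file format used by replay.c below); parse_message() is a hypothetical stand-in for the dispatch you will see in main.c later:

// Hypothetical sketch only - not part of the code used in this primer.
// 'frame' holds one received payload of 'frame_len' bytes.
const char *p = frame;
const char *end = frame + frame_len;
while (p + 2 <= end) {
  const uint16_t msg_len = bswap_16(*(const uint16_t *)p);
  p += 2;
  if (p + msg_len > end) {
    break;  // Truncated frame.
  }
  parse_message(p);  // Dispatch on *p, exactly as main.c does further below.
  p += msg_len;
}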

With that out of the way, here’s the code you should run on computer A:  

// replay.c

#include <exanic/fifo_tx.h>  // exanic_*
#include <stdio.h>           // fprintf, fopen, fread, fclose
#include <stdlib.h>          // malloc, free, exit, EXIT_FAILURE
#include <string.h>          // memcpy
#include <sys/stat.h>        // struct stat, stat()

int main(void) {
  // ------------------------------------------------ Acquire NIC + ring buffer

  exanic_t *nic_handle = exanic_acquire_handle("exanic0");
  if (!nic_handle) {
    fprintf(stderr, "ERROR: Couldn't acquire NIC handle %s",
            exanic_get_last_error());
    return -1;
  }

  exanic_tx_t *nic_buffer = exanic_acquire_tx_buffer(nic_handle, 0, 0);
  if (!nic_buffer) {
    fprintf(stderr, "ERROR: Couldn't acquire NIC buffer %s",
            exanic_get_last_error());
    return -1;
  }

  // ----------------------------------------------- Load ITCH file into memory

  const char *file = "/home/diogo/data/07302019.NASDAQ_ITCH50";

  // Get the file size
  struct stat st;
  stat(file, &st);
  size_t file_sz = (size_t)st.st_size;

  // Allocate a buffer to hold the whole file in memory.
  unsigned char *ptr = malloc(file_sz + 1);
  if (ptr == NULL) {
    fprintf(stderr, "ERROR: Couldn't allocate memory.\n");
    return EXIT_FAILURE;
  }

  // Since I will be iterating over the buffer, I keep an extra pointer to
  // the beginning of the buffer so I can free it later on.
  unsigned char *buf = ptr;

  // Open the file and copy its contents to the previously allocated buffer.
  FILE *f = fopen(file, "r");
  fread(buf, 1, file_sz, f);
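  // Terminating 0 byte (hence the file_sz + 1 allocated above); the replay
  // loop below relies on reading a 0 here to stop after the last message.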
  buf[file_sz] = 0;

  size_t size = 0;

  // Holds the message that will be sent over the NIC.
  char message[100];

  // -------------- Iterate over the file and send each individual ITCH message
  for (;;) {
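    // Each message in the NASDAQ_ITCH50 file is preceded by a 2-byte
    // (big-endian) length field; skip it, since the message size is instead
    // derived from the message type below.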
    buf += 2;
    if (*buf) {
      switch (*buf) {
        case 'A': {  // --------------------------------------------- Add order
          size = 36;
        } break;
        case 'D': {  // ------------------------------------------ Delete order
          size = 19;
        } break;
        case 'U': {  // ----------------------------------------- Replace order
          size = 35;
        } break;
        case 'E': {  // ----------------------------------------- Execute order
          size = 31;
        } break;
        case 'X': {  // ------------------------------------------ Reduce order
          size = 23;
        } break;
        case 'I': {  // ----------------------------------- Net order imbalance
          size = 50;
        } break;
        case 'F': {  // ---------------------------------------- Add order MPID
          size = 40;
        } break;
        case 'P': {  // ------------------------------------------------- Trade
          size = 44;
        } break;
        case 'L': {  // ----------------------------------------- MPID position
          size = 26;
        } break;
        case 'C': {  // ------------------------------ Execute order with price
          size = 36;
        } break;
        case 'Q': {  // ------------------------------------------- Cross Trade
          size = 40;
        } break;
        case 'Y': {  // -------------------------------------- Reg Sho Restrict
          size = 20;
        } break;
        case 'H': {  // ---------------------------------------- Trading Action
          size = 25;
        } break;
        case 'R': {  // --------------------------------------- Stock Directory
          size = 39;
        } break;
        case 'J': {  // ------------------- Process LULD Auction Collar Message
          size = 35;
        } break;
        case 'V': {  // ------------------------------------------ MWCB Decline
          size = 35;
        } break;
        case 'W': {  // ------------------------------------------- MWCB Status
          size = 12;
        } break;
        case 'K': {  // -------------------------------------- IPO Quote Update
          size = 28;
        } break;
        case 'B': {  // ------------------------------------------ Broken trade
          size = 19;
        } break;
        case 'N': {  // ------------------------------ Retail price improvement
          size = 20;
        } break;
        case 'S': {  //  ----------------------------------------- System Event
          size = 12;
        } break;
        case 'h': {  //----------------------------------------- Operation halt
          size = 21;
        } break;
        default: {
          fprintf(stderr, "ERROR: Wrong message type - %c", *buf);
          return EXIT_FAILURE;
        }
      }

      // Copy the data to the message buffer that will be sent over the wire.
      memcpy(message, buf, size);
      buf += size;

      // Send it.
      if (exanic_transmit_frame(nic_buffer, message, size) != 0) {
        fprintf(stderr, "ERROR: Failed to transmit message");
        exit(EXIT_FAILURE);
      }

    } else {
      break;
    }
  }

  // ----------------------------------------------------------------- Clean-up
  fclose(f);
  free(ptr);
  exanic_release_tx_buffer(nic_buffer);
  exanic_release_handle(nic_handle);
  return 0;
}

Compile with: clang -O3 -Weverything -Werror replay.c -o replay -lexanic
And turn on Kernel Bypass on the NIC: $ exanic-config exanic0:0 bypass-only on
Note that this is the simplest “fake exchange” you can have, because as of now it doesn’t receive orders from computer B, nor does it build an order book representation (which you could then trade against).
Later on in this primer, or in a follow-up post, I will show you how to build a “fake exchange” that addresses those issues.

Before moving on to the next section, I would like to clarify what exactly “kernel bypass” is, why it is a necessary technique when you are designing a low latency system, and why you need a specific NIC to achieve it.

When you receive data in a regular network card (such as an ethernet card on your motherboard or a wifi adapter), what happens is that the Linux Kernel reads/copies those bytes from the hardware to a Kernel buffer (which resides in the Kernel space) and then copies them again from that buffer to a buffer in user space (so whatever application that is waiting on that data can use it).
These copies take a considerable amount of time (in HFT terms) and also the Kernel has a limited buffer size, which means that if you are being flooded by packets, these will have to wait in a queue until they eventually get copied to user space, resulting in extra delay.
Similarly, when you want to send data, a system call is issued behind the scenes and what happens is that the Kernel takes control of the execution, and on behalf of the user space process copies the data to kernel space and eventually sends it.

What kernel bypass libraries/NICs allow you to do is bypass the copies made by the Kernel by giving you direct access to the hardware (NIC) buffer, which lets you copy the bytes directly from the hardware to user space and from user space to the hardware without having to go through the Kernel.
In other words, you bypass the Kernel, and hence the name.
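
For contrast, here is a minimal sketch of the standard (kernel) receive path that a regular application would use; the port number is an arbitrary example, and the comments mark where the kernel gets involved. Compare it with the exanic_receive_frame() call used in main.c further below, which reads frames straight from the NIC buffer mapped into user space:

// kernel_path_sketch.c

#include <arpa/inet.h>   // htons
#include <netinet/in.h>  // struct sockaddr_in, INADDR_ANY
#include <stdio.h>       // perror
#include <sys/socket.h>  // socket, bind, recv
#include <sys/types.h>   // ssize_t

int main(void) {
  int fd = socket(AF_INET, SOCK_DGRAM, 0);  // UDP socket managed by the kernel.
  if (fd < 0) {
    perror("socket");
    return 1;
  }

  struct sockaddr_in addr = {0};
  addr.sin_family = AF_INET;
  addr.sin_addr.s_addr = INADDR_ANY;
  addr.sin_port = htons(26400);  // Arbitrary example port.
  if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
    perror("bind");
    return 1;
  }

  char message[1536];
  for (;;) {
    // recv() is a system call: the NIC places the packet in a kernel buffer,
    // the kernel runs it through its network stack, and only then copies the
    // payload into 'message' (user space). Those extra steps - and the
    // queueing that happens under load - are what kernel bypass removes.
    ssize_t n = recv(fd, message, sizeof(message), 0);
    if (n <= 0) {
      continue;
    }
    // ... parse 'message' ...
  }
}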


Build the LOB

Before you start this section, I recommend that you go over the ITCH protocol document I linked above, as this will help you understand the data parsing that you will see in the code below.

I will first start by explaining some of the design choices I made in terms of data structures and overall architecture, and then go in detail over each function used.
By the end of this section you will have a very minimal (i.e. easily expandable) foundation of a simulation lab, as well as a deep understanding of the mechanics of an order book.

Before the deep-dive, however, I want to clarify a few points:  

  • A limit order book represents the will of all market participants at any given moment, and because of that it is the only source of truth at (any) time T.

  • An exchange, such as Nasdaq, hosts a LOB for each tradable asset and disseminates each update to each LOB (which can be a buy order, delete order, etc) to all market participants at the same time, via multicast, through a data feed called ITCH (same name of the protocol).

  • The ITCH feed is a firehose of data, as all the updates to each LOB will be pushed through it, which means that if you are receiving data from the ITCH feed you will receive updates on tradable assets that you might not even be interested in trading yourself.

  • To handle that, you will have to filter out the incoming messages from tradable assets that you are not interested in trading, and you can do that either in the NIC (by programming the logic in Verilog/VHDL directly on the card) or in software. (In this post I will cover the latter option.)

Another reason why you might want to filter messages is that you might decide to have specific servers only trading
specific assets, which would allow you to load-balance the whole trading operation between different servers and reduce
the latency on each individual server (since each will only be processing a subset of the messages received).

It’s also important to note that not all types of messages received via the ITCH feed are useful to build the LOB, since not all of them directly represent a change in a LOB (some messages just signal the beginning/end of trading day, if there’s a halt in trading etc), and while the parser should still parse them they are not mandatory.
In an attempt at brevity, I will only show you the functions that parse the messages that actually matter for building the LOB; however, writing the remaining functions should be trivial for you after this section (and with the help of the ITCH protocol specification document).

Now, when you receive a message that affects the LOB you must store it somewhere; for example, if you receive an “add order” message, you need to store it so that if you later receive a “delete” message (with the same ID), you can make the appropriate changes to the order book.
Since all such messages have an ID, the most common data structure to use is a hash-map, using the ID as the key.
This is what you will see in many LOB implementations, and it is a valid option in particular if you are developing in C++ and pick a good hash-map implementation such as google-dense-hashmap or similar.
I will, however, show you a different approach to building the LOB, which has several performance benefits (and beauty) in comparison to a hash-map, at the expense of some complexity.

Remember that regardless of how fast the software is, it will never come even close to a full hardware system, but if
you are OK with that and are not trying to run an HFT market-making strategy, you can be on the second tier in terms of
speed, and you gain the ability to run more complex computations.
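
To give you a rough idea before the code: the approach below stores every order in one big flat array and uses the ITCH order reference number directly as the array index, so a lookup is a single indexed load instead of a hash, a probe and (typically) a pointer chase. In essence (a simplification of what delete_order() does further below, with payload being the raw ITCH message):

// The order reference number read from the message is used directly
// as an index into the pre-allocated order_ids array.
const uint64_t id = bswap_64(*(const uint64_t *)(payload + 11));
*order_ids[id].order_book_volume -= order_ids[id].quantity;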

Without further ado, here’s how you build a super optimized LOB.

I will start to show you the header file - don’t try to grasp everything right now as it will only make sense when you see the implementation files.

// parser.h

#include <stdbool.h>  // bool
#include <stdint.h>   // uint32_t

// --------------------------------------------------------------------- Macros

#define DEPTH 5000  // $50 = 5000 levels

// ---------------------------------------------------------------------- Stock

typedef struct stocks {
  char symbol[8];
  uint32_t previous_day_closing_price;
} stocks;  // 12 bytes

// ---------------------------------------------------------------------- Order

typedef struct order {
  uint32_t* order_book_volume;
  uint32_t quantity;          // Number of shares.
  uint32_t side;              // Bid or Ask. (No order = 0, Bid = 1, Ask = 2)
} order;  // 16 bytes

// ----------------------------------------------------------------- Order Book

typedef struct order_book {
  uint32_t* highest_bid_price;
  uint32_t* highest_bid_volume;
  uint32_t* minimum_bid_price;
  uint32_t* minimum_bid_volume;

  uint32_t* lowest_ask_price;
  uint32_t* lowest_ask_volume;
  uint32_t* maximum_ask_price;
  uint32_t* maximum_ask_volume;

  // For every dollar there are 100 cents/possible price levels.
  uint32_t prices[2 * DEPTH + 1];
  uint32_t volume[2 * DEPTH + 1];

} order_book;  // Size: ~80 KB

// ------------------------------------------------------------------ Functions

__attribute__((nonnull)) void add_order(const char* payload,
                                        bool* interested_in_trading,
                                        order_book* ob, order* order_ids);

__attribute__((nonnull)) void delete_order(const char* payload,
                                           order* order_ids);

__attribute__((nonnull)) void reduce_order(const char* payload,
                                           order* order_ids);

__attribute__((nonnull)) void replace_order(const char* payload,
                                            bool* interested_in_trading,
                                            order_book* ob, order* order_ids);

__attribute__((nonnull)) void parse_stock_id_and_initialize_orderbook(const char* payload,
                                                  bool* interested_in_trading,
                                                  order_book* ob);

There are a few points that I want to explain from the code above:

The first one is “DEPTH”, which is the number of price levels each side of the order book can hold.
Each $1 has 100 price levels (0, 1, …, 99), so if you want to hold information for $10 of depth on each side of the book, you will need 1000 levels for the Bid side, 1000 levels for the Ask side, and 1 level for the initial spread. (With DEPTH set to 5000 above, the book holds $50 of depth per side.)
You can see that formula, 2 * DEPTH + 1, used in the order_book struct.

In the order_book struct, you see that there are multiple pointers; these point exclusively to addresses that belong to either the prices or the volume arrays.

In the order struct, the pointer within points exclusively to an address in an order_book volume array.
(Note that I said “an” and not “the”, as, in my implementation, an order has no idea that it belongs to its symbol’s order book; it only knows at which address its quantity is stored - you will see why soon.)
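
To make the price-to-slot mapping concrete, here is a small worked example, using NVDA’s previous closing price from the valid_stocks table further below (ITCH prices carry 4 implied decimal places, so $174.38 is stored as 1743800):

// prices[DEPTH] / volume[DEPTH] start out at the previous day's close.
// A new bid at $174.30 (price = 1743000) arrives:
//
//   diff      = (1743800 - 1743000) / 100 = 8      // 8 one-cent levels lower
//   volume_at = highest_bid_volume - 8             // i.e. &volume[DEPTH - 8]
//
// The order's quantity is added at that slot, and order_book_volume (in the
// order struct) is set to the same address, so a later delete/execute for
// this ID can decrement the book without any searching.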

Also note that there are only 5 functions, and these are all the functions you need to build and maintain a LOB.
I will go over each of their implementations in detail soon, but before that, here’s the “main” file of the program, which runs the loop that gets data from the NIC, checks the message type and delegates it to the appropriate function to be parsed.  

// main.c

#include <byteswap.h>        // bswap_*
#include <exanic/fifo_rx.h>  // exanic_*
#include <stdbool.h>         // bool
#include <stdint.h>          // uint*_
#include <stdio.h>           // fprintf
#include <stdlib.h>          // malloc
#include <string.h>          // memset

#include "parser.h"  	     // declarations of all functions/data structures

__attribute__((noreturn)) static void run_program(exanic_rx_t* nic_buffer,
                                                  order* order_ids,
                                                  order_book* order_books,
                                                  bool* interested_in_trading) {
  // ---------------------------------------------------- Parse incoming frames
  // Maximum length of an ethernet frame is 1518 bytes.
  // Using 1536 for alignment purposes: 3 * 512.
  char message[1536];

  for (;;) {
    // Negative values mean that an error occurred while retrieving the frame.
    if (!(exanic_receive_frame(nic_buffer, message, 1536, NULL) > 0)) {
      continue;
    }
    switch (*message) {
      case 'A': {  // ----------------------------------------------- Add order
        add_order(message, interested_in_trading, order_books, order_ids);
      } break;
      case 'D': {  // -------------------------------------------- Delete order
        delete_order(message, order_ids);
      } break;
      case 'U': {  // ------------------------------------------- Replace order
        replace_order(message, interested_in_trading, order_books, order_ids);
      } break;
      case 'E': {  // ------------------------------------------- Execute order
        // When trades happen, they are considered "executed orders", and we
        // must reduce the number of shares that were executed from the
        // original add_order().
        reduce_order(message, order_ids);
      } break;
      case 'X': {  // -------------------------------------------- Cancel order
        reduce_order(message, order_ids);
      } break;
      case 'I': {  // ------------------------------------- Net order imbalance
      } break;
      case 'F': {  // ------------------------------------------ Add order MPID
        add_order(message, interested_in_trading, order_books, order_ids);
      } break;
      case 'P': {  // --------------------------------------------------- Trade
      } break;
      case 'L': {  // ------------------------------------------- MPID position
      } break;
      case 'C': {  // -------------------------------- Execute order with price
        reduce_order(message, order_ids);
      } break;
      case 'Q': {  // --------------------------------------------- Cross Trade
      } break;
      case 'Y': {  // ---------------------------------------- Reg Sho Restrict
      } break;
      case 'H': {  // ------------------------------------------ Trading Action
      } break;
      case 'R': {  // ----------------------------------------- Stock Directory
        // These are the first messages sent
        // by the Nasdaq feed that are of interest to us;
        // Basically, each day each stock is
        // given a different id number called "locate"
        // and we must parse this "locate" every day so we
        // can check it after on every incoming
        // message, and only parse information
        // for incoming messages that correspond
        // to the stocks we are interested in
        // trading.
        parse_stock_id_and_initialize_orderbook(message, interested_in_trading,
                                                order_books);
      } break;
      case 'J': {  // --------------------- Process LULD Auction Collar Message
      } break;
      case 'V': {  // -------------------------------------------- MWCB Decline
      } break;
      case 'W': {  // --------------------------------------------- MWCB Status
      } break;
      case 'K': {  // ---------------------------------------- IPO Quote Update
      } break;
      case 'B': {  // -------------------------------------------- Broken trade
      } break;
      case 'N': {  // -------------------------------- Retail price improvement
      } break;
      case 'S': {  // -------------------------------------------- System Event
      } break;
      case 'h': {  // ------------------------------------------ Operation Halt
      } break;
      default: {
        fprintf(stderr, "ERROR: Wrong message type: %d\n", *message);
        exit(1);
      }
    }
  }
}

int main(void) {
  // ------------------------------------------------ Acquire NIC + ring buffer
  exanic_t* exanic = exanic_acquire_handle("exanic0");
  if (!exanic) {
    fprintf(stderr, "ERROR: Couldn't acquire NIC handle %s",
            exanic_get_last_error());
    return EXIT_FAILURE;
  }

  exanic_rx_t* nic_buffer = exanic_acquire_rx_buffer(exanic, 0, 0);
  if (!nic_buffer) {
    fprintf(stderr, "ERROR: Couldn't acquire NIC buffer %s",
            exanic_get_last_error());
    return EXIT_FAILURE;
  }

  // ----------------------------------------------- Allocate the necessary RAM
  bool interested_in_trading[10000] = {0};
  const unsigned long number_of_order_books = 10000 * sizeof(order_book);
  const unsigned long number_of_order_ids = 1000000000LU * sizeof(order);

  // Using malloc + an explicit memset instead of calloc: with calloc, Linux
  // typically won't back the allocation with physical pages until each page
  // is first written to, so touching every byte up front avoids page faults
  // (and the associated jitter) later, on the hot path.
  order* order_ids = malloc(number_of_order_ids);
  order_book* order_books = malloc(number_of_order_books);

  if (order_ids == NULL || order_books == NULL) {
    fprintf(stderr, "ERROR: Couldn't allocate memory for program to run.\n");
    free(order_ids);
    free(order_books);
    exanic_release_rx_buffer(nic_buffer);
    exanic_release_handle(exanic);
    return EXIT_FAILURE;
  }
  memset(order_ids, '\0', number_of_order_ids);
  memset(order_books, '\0', number_of_order_books);

  // ----------------------- Assume all requirements are met to run the program
  printf("\nRunning\n");
  printf("-------\n");
  run_program(nic_buffer, order_ids, order_books, interested_in_trading);

  return EXIT_SUCCESS;
}

The most important things to understand from the code above are the following:

  • The array “interested_in_trading” holds 10000 bools, because Nasdaq has 8-9 thousand tradable assets (fewer than 10000).
    What will happen is that in the initial messages of every trading day, information about the ID of each particular stock for that day is provided (e.g. NVDA = 5823). If we are interested in trading NVDA stock on that day, we set “interested_in_trading[5823] = 1;”, which allows other functions, such as add_order(), to check whether the particular message being received belongs (via its symbol ID) to one of the stocks we are interested in trading. If so, we parse the message; otherwise we don’t proceed further.
    This, if you remember, is what I meant earlier by filtering messages.

  • For the same reasons as above, I create 10000 “order_books”, and again only some of them will actually be used, but it is much preferable to pre-allocate everything.

  • In the array “order_ids”, I allocate 1 billion orders; a common day will have around 500-600 million orders, but it’s important to have some headroom in case there’s a trading day with more activity than usual.
    Note that order IDs start from 0 every day, and if an order was left in the order book from the previous day, it will be re-entered the following day and given a new order ID.

And here’s the final piece of the puzzle, the code which parses the ITCH protocol and builds the LOB.

// parser.c

#pragma clang diagnostic ignored "-Wcast-align"

#include <assert.h>    // assert
#include <byteswap.h>  // bswap_*
#include <stdint.h>    // uint_*
#include <stdio.h>     // printf
#include <stdlib.h>    // malloc
#include <string.h>    // strcmp

#include "parser.h"    // declarations of all functions/data structures

// -------------------------------- List of stocks we are interested in trading
static stocks valid_stocks[] = {
    {"AMD\0", 340200},   {"ASML\0", 2271500},
    {"CSCO\0", 530500},  {"INTC\0", 491200},
    {"MSFT\0", 1382800}, {"MU\0", 474900},
    {"NVDA\0", 1743800}, {"QCOM\0", 715000},
};

void add_order(const char *payload, bool *interested_in_trading,
               order_book *order_books, order *order_ids) {
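  // ITCH integer fields are big-endian on the wire, hence the bswap_* calls.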
  const uint16_t stock = bswap_16(*(const uint16_t *)(payload + 1));
  if (interested_in_trading[stock]) {
    const uint64_t id = bswap_64(*(uint64_t const *)(payload + 11));
    const char side = *(payload + 19);
    const uint32_t number_of_shares =
        bswap_32(*((const uint32_t *)(payload + 20)));
    const uint32_t price = bswap_32(*((const uint32_t *)(payload + 32)));

    order_book *ob = &order_books[stock];

    // Check if the price is within the pre-defined range.
    if (price < *ob->minimum_bid_price || price > *ob->maximum_ask_price) {
      // Even if it's not within range, record the order 'side', as we
      // need this information in case the order is later replaced by a new
      // order we would actually like to add to the order_book.
      order_ids[id].side = (side == 'B' ? 1 : 2);
      return;
    }

    if (side == 'B') {  // bid
      // Get the location (price) of this order in respect to the
      // highest_bid_price.
      const int diff = (((int)*ob->highest_bid_price - (int)price) / 100);

      // Add the order quantity to such location.
      uint32_t *volume_at = ob->highest_bid_volume - diff;
      *volume_at += number_of_shares;
      order_ids[id].order_book_volume = volume_at;

      order_ids[id].quantity = number_of_shares;
      order_ids[id].side = 1;

      // If the new bid is higher than the current bid, let the new bid be
      // the highest/best.
      if (diff < 0) {
        ob->highest_bid_price -= diff;
        ob->highest_bid_volume -= diff;
      }

      // Make sure the lowest_ask_price is larger than the highest_bid_price
      if (!(ob->lowest_ask_price > ob->highest_bid_price)) {
        ob->lowest_ask_price = ob->highest_bid_price + 1;
        ob->lowest_ask_volume = ob->highest_bid_volume + 1;
        for (; ob->lowest_ask_price != ob->maximum_ask_price;
             ob->lowest_ask_price++, ob->lowest_ask_volume++) {
          if (*ob->lowest_ask_volume) {
            break;
          }
        }
      }
    } else {  // ask
      // Get the location (price) of this order in respect to the
      // lowest_ask_price.
      const int diff = (((int)*ob->lowest_ask_price - (int)price) / 100);

      // Add the order quantity to such location.
      uint32_t *volume_at = ob->lowest_ask_volume - diff;
      *volume_at += number_of_shares;
      order_ids[id].order_book_volume = volume_at;
      order_ids[id].quantity = number_of_shares;
      order_ids[id].side = 2;

      // If the new ask price is lower than the current best ask, let the new
      // ask be the best/lowest, only if such ask is not equal to the
      // highest bid, because in that case there will be a cross and the order
      // will be executed and because of that the highest ask will be the
      // previous highest ask.
      if (diff > 0) {
        ob->lowest_ask_price -= diff;
        ob->lowest_ask_volume -= diff;
      }

      // Make sure the highest_bid_price is less than the lowest_ask_price
      if (!(ob->highest_bid_price < ob->lowest_ask_price)) {
        ob->highest_bid_price = ob->lowest_ask_price - 1;
        ob->highest_bid_volume = ob->lowest_ask_volume - 1;
        for (; ob->highest_bid_price != ob->minimum_bid_price;
             ob->highest_bid_price--, ob->highest_bid_volume--) {
          if (*ob->highest_bid_volume) {
            break;
          }
        }
      }
    }

    // Make sure that the highest bid_price has volume > 0
    for (; ob->highest_bid_volume != ob->minimum_bid_volume;
         ob->highest_bid_volume--, ob->highest_bid_price--) {
      if (*ob->highest_bid_volume) {
        break;
      }
    }

    // Make sure the lowest_ask_price has volume > 0
    for (; ob->lowest_ask_volume != ob->maximum_ask_volume;
         ob->lowest_ask_volume++, ob->lowest_ask_price++) {
      if (*ob->lowest_ask_volume) {
        break;
      }
    }
  }
}

void delete_order(const char *payload, order *order_ids) {
  const uint64_t id = bswap_64(*(uint64_t const *)(payload + 11));
  if (order_ids[id].quantity) {
    *order_ids[id].order_book_volume -= order_ids[id].quantity;
  }
}

void reduce_order(const char *payload, order *order_ids) {
  const uint64_t id = bswap_64(*(uint64_t const *)(payload + 11));
  const uint32_t number_of_shares =
      bswap_32(*((const uint32_t *)(payload + 19)));

  if (order_ids[id].quantity) {
    order_ids[id].quantity -= number_of_shares;
    *order_ids[id].order_book_volume -= number_of_shares;
  }
}

void replace_order(const char *payload, bool *interested_in_trading,
                   order_book *order_books, order *order_ids) {
  const uint16_t stock = bswap_16(*(const uint16_t *)(payload + 1));
  if (interested_in_trading[stock]) {
    const uint64_t original_id = bswap_64(*(uint64_t const *)(payload + 11));

    // If we didn't keep track of the original order then we cannot replace
    // it because we don't know its side. (The reason for this to happen
    // could be a missed network packet.)
    // Side = 0 means we don't have the order, bid = 1, ask = 2
    if (order_ids[original_id].side) {
      const uint32_t side = order_ids[original_id].side;

      *order_ids[original_id].order_book_volume -=
          order_ids[original_id].quantity;

      // Add a new order:
      const uint64_t new_id = bswap_64(*(uint64_t const *)(payload + 19));
      const uint32_t number_of_shares =
          bswap_32(*((const uint32_t *)(payload + 27)));
      const uint32_t price = bswap_32(*((const uint32_t *)(payload + 31)));

      order_book *ob = &order_books[stock];

      // Check if the price is within the pre-defined range.
      if (price < *ob->minimum_bid_price || price > *ob->maximum_ask_price) {
        // Even if it's not within range, record the order 'side', as we
        // need this information in case the order is later replaced by a new
        // order we would actually like to add to the order_book.
        order_ids[new_id].side = side;
        return;
      }

      if (side == 1) {  // bid
        // Get the location (price) of this order in respect to the
        // highest_bid_price.
        const int diff = (((int)*ob->highest_bid_price - (int)price) / 100);

        // Add the order quantity to such location.
        uint32_t *volume_at = ob->highest_bid_volume - diff;
        *volume_at += number_of_shares;
        order_ids[new_id].order_book_volume = volume_at;
        order_ids[new_id].quantity = number_of_shares;
        order_ids[new_id].side = side;

        // If the new bid is higher than the current bid, let the new bid be
        // the highest/best.
        if (diff < 0) {
          ob->highest_bid_price -= diff;
          ob->highest_bid_volume -= diff;
        }

        // Make sure the lowest_ask_price is larger than the highest_bid_price
        if (!(ob->lowest_ask_price > ob->highest_bid_price)) {
          ob->lowest_ask_price = ob->highest_bid_price + 1;
          ob->lowest_ask_volume = ob->highest_bid_volume + 1;
          for (; ob->lowest_ask_price != ob->maximum_ask_price;
               ob->lowest_ask_price++, ob->lowest_ask_volume++) {
            if (*ob->lowest_ask_volume) {
              break;
            }
          }
        }
      } else {  // ask
        // Get the location (price) of this order in respect to the
        // lowest_ask_price.
        const int diff = (((int)*ob->lowest_ask_price - (int)price) / 100);

        // Add the order quantity to such location.
        uint32_t *volume_at = ob->lowest_ask_volume - diff;
        *volume_at += number_of_shares;
        order_ids[new_id].order_book_volume = volume_at;
        order_ids[new_id].quantity = number_of_shares;
        order_ids[new_id].side = side;

        // If the new ask price is lower than the current best ask, let the
        // new ask be the best/lowest.
        if (diff > 0) {
          ob->lowest_ask_price -= diff;
          ob->lowest_ask_volume -= diff;
        }

        // Make sure the highest_bid_price is less than the lowest_ask_price
        if (!(ob->highest_bid_price < ob->lowest_ask_price)) {
          ob->highest_bid_price = ob->lowest_ask_price - 1;
          ob->highest_bid_volume = ob->lowest_ask_volume - 1;
          for (; ob->highest_bid_price != ob->minimum_bid_price;
               ob->highest_bid_price--, ob->highest_bid_volume--) {
            if (*ob->highest_bid_volume) {
              break;
            }
          }
        }
      }

      // Make sure that the highest bid_price has volume > 0
      for (; ob->highest_bid_volume != ob->minimum_bid_volume;
           ob->highest_bid_volume--, ob->highest_bid_price--) {
        if (*ob->highest_bid_volume) {
          break;
        }
      }

      // Make sure the lowest_ask_price has volume > 0
      for (; ob->lowest_ask_volume != ob->maximum_ask_volume;
           ob->lowest_ask_volume++, ob->lowest_ask_price++) {
        if (*ob->lowest_ask_volume) {
          break;
        }
      }
    }
  }
}

void parse_stock_id_and_initialize_orderbook(const char *payload, bool *interested_in_trading,
                         order_book *order_books) {
  const uint16_t locate = bswap_16(*(const uint16_t *)(payload + 1));
  char stock[9] = {0};

  // Parse stock name.
  for (int i = 0; i < 8; i++) {
    // 11 is the starting position of stock name in msgs of type 'R'
    if (!(payload[i + 11] == ' ')) {
      stock[i] = payload[i + 11];
    }
  }

  stocks *ptr = valid_stocks;
  stocks *endptr =
      valid_stocks + sizeof(valid_stocks) / sizeof(valid_stocks[0]);

  while (ptr < endptr) {
    // Compare the current stock name with the stocks we are interested in
    // trading; on a match, set interested_in_trading[locate] to 1 and prepare
    // the order_book for that stock.
    if (!strcmp(stock, ptr->symbol)) {
      interested_in_trading[locate] = 1;
      order_book *ob = &order_books[locate];
	
      // Point the highest_bid price/volume to the middle of the respective arrays.
      ob->highest_bid_price = &ob->prices[DEPTH];
      ob->highest_bid_volume = &ob->volume[DEPTH];
      // Point the minimum bid volume to the first element of the array.
      ob->minimum_bid_volume = ob->volume;
      // Set the value of the highest_bid_price to the previous day's closing price.
      // This is necessary because when new bids arrive at the book, they are
      // placed at a location relative to the location of the highest_bid_price.
      *ob->highest_bid_price = ptr->previous_day_closing_price;

      // Same logic as above.
      ob->lowest_ask_price = &ob->prices[DEPTH + 1];
      ob->lowest_ask_volume = &ob->volume[DEPTH + 1];
      ob->maximum_ask_volume = &ob->volume[2 * DEPTH];  // Last element of the array.
      *ob->lowest_ask_price = ptr->previous_day_closing_price + 100;

      // Fill the order book with prices increasing on the 'Ask' side.
      // The loop stops at the last element of the array, which becomes
      // the maximum_ask_price.
      uint32_t *value = ob->lowest_ask_price;
      uint32_t *end = ob->lowest_ask_price + DEPTH;
      for (uint32_t i = 0; value < end; i++, value++) {
        *value = (*ob->lowest_ask_price) + (i * 100);
        ob->maximum_ask_price = value;
      }

      // and decreasing on the 'Bid' side.
      value = ob->highest_bid_price;
      end = ob->highest_bid_price - DEPTH;
      for (int i = 0; value >= end; i++, value--) {
        int v = ((int)*ob->highest_bid_price) - (i * 100);
        *value = (v > 0 ? (uint32_t)v : 0);
        ob->minimum_bid_price = ob->highest_bid_price - i;
        *ob->minimum_bid_price = *value;
      }
    }
    ptr++;
  }
}

Compile with: clang -O3 parser.c main.c -Weverything -Werror -o program -lexanic
Enable Kernel Bypass: $ exanic-config exanic0:0 bypass-only on
Enable promiscuous mode: $ exanic-config exanic0:0 promisc on

A few notes on the code above:

  • Start by reading the parse_stock_id_and_initialize_orderbook() function and then move on to add_order() and the remaining functions.

  • Still in the parse_stock_id_and_initialize_orderbook() function, it is important to understand that the value ptr->previous_day_closing_price would normally be requested from a database.

  • The function replace_order() is identical to the function add_order() except for one extra check. However, the reason why you can’t just call add_order() from replace_order() after performing that check is that the message fields to be parsed are at different positions within the payload (see the offsets below).
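
For quick reference, these are the payload offsets the two functions read (taken directly from the code above):

// 'A' (add order):     stock locate @ 1, order id @ 11, side @ 19, shares @ 20, price @ 32
// 'U' (replace order): stock locate @ 1, original id @ 11, new id @ 19, shares @ 27, price @ 31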


Measurements

Work in Progress


Overclocking

Work in Progress


Linux Tuning

Work in Progress


Appendix: Costs of opening a one-man HFT firm

Going from theory to practice in HFT is not straightforward, and below is what you should know (from a purely technical perspective, disregarding bureaucracy, etc.) if you intend to open your own HFT shop.

In the simplest of setups, you will need to host a minimum of one server with a colocation provider such as TSN/Pico/Options-IT that provides hosting at Carteret (the data center where the Nasdaq Matching Engine (NME) is located), and for hosting a single server expect to pay around $3.5K MRC plus a $2K NRC.
This will include layer 1 access to UDP full book market data (1 hop), which means that your server will be connected to one very fast switch (either a Cisco or Arista L1 switch) which itself is directly connected to the NME.
The average latency from the market data being sent by the NME to it arriving at your network interface card (NIC) will be around 80ns.

To be able to place orders you will have to go through a broker and if you are capital constrained, probably your only option here will be Lime Execution (which will cost you around $7K MRC per server).
Your server will be connected to a (not as fast) switch owned by the colocation provider you chose, which itself is connected to an equally not-so-fast switch owned by Lime, which will receive your order, perform some checks (i.e. can you afford what you want to buy, do you own what you want to sell, etc.) and finally send it to the NME.
From the data leaving your server to it reaching Lime’s server will take around 2 * 380ns on average, plus at least a few microseconds for the checks.
Assuming that it takes 3us for the checks to be made and the order to be sent to the NME, your order will take 760ns in transit + 3us in processing, for a total of almost 4us from leaving your server to arriving at the NME.

That is ~4000 nanoseconds, and it is an important figure to understand because there are currently systems (hardware/FPGA based) that have direct-market-access (DMA) to the exchange and can send an order to the exchange, in response to market data, in ~20ns (in those 20ns they parse the data, make some simple computations and send the order), which means that while your order is in transit to the NME, these systems would have been able to place 200 orders (serially) in the same period of time, which could alter the market to the point that when your order arrives at the NME, the price/quantity you were considering trading at is no longer available.

Note that unless you have around $5M to invest you won’t be able to get “sponsored access”/DMA to the exchange, which is provided by one of the big investment banks, and so you are constrained to going through a broker and incurring the latency mentioned above.

There is also a somewhat hidden cost that you will have to pay in one form or another, and that is related to market data collection.

If you are recording the market data yourself you have a couple of options:

  1. On your main process (where your parsing/trading logic is) you will have to add extra functionality to save the data (and even if you are writing to RAM you will eventually have to save to disk) which likely implies the use of threads.  

  2. You have a second process reading the data from the NIC and saving it. (You will have to synchronize it with the main process so that both get to read the data before it’s evicted from the NIC.)

Whatever option you choose, you will eventually have to make system calls to save the data (which will add latency), you will also be putting more stress on the kernel task scheduler (which is non-deterministic), and additionally you will be adding more complexity to the code, which by itself will put more stress on the instruction and data caches.

If you can’t accept the added latency/jitter, you also have a couple of options:

  1. Colocate another server which basically only records market data (it doesn’t trade).
    This will cost you another 3.5k MRC + 2k NRC + the cost of the server components (and any necessary maintenance).

  2. Purchase the data from a data provider such as Maystreet (also check with your colocation provider as they might offer such an add-on service) and get it by the end of the trading day.
    This will cost around $500-600 MRC and it’s probably the way to go if you are not in a position where you have rented a whole rack at the data center and have space for an extra server tasked only with recording market data.  

As you can see, it is a substantial investment just to be able to “play the game”, however if you have the capital and the expertise to keep pushing the boundaries, it is definitely worth considering.


Changelog

August 5th, 2021: Initial commit, covering the first 4 sections of the primer + the appendix “costs of opening a one-man hft firm”.

August 10th, 2021: Minor change in the valid_stocks struct and in the parse_stock_id_and_initialize_orderbook() function. Thank you for the feedback, Anton.
Files edited: parser.h, parser.c

August 14th, 2021: Improved tracking of orders that won’t be added to the order_book (because they fall outside of the predefined range).
Files edited: parser.h, parser.c


Diogo Flores