I have seen a number of jobs asking for experience with low-latency performance tuning of the Linux kernel. These positions are for the financial industry.
I imagine a financial trading company makes more money if it can execute more trades per day than it did previously, and more than the competition. But isn’t there a point at which everyone is running their systems as fast as possible? Or is this an ongoing process, making them even faster all the time?
Anyone have experience with actually doing this type of work for the financial industry?
This is an industry where people locate their servers in certain buildings to literally take advantage of the speed of light. Any optimization they can do, both to trade-execution speed and internal computation speed, is potentially valuable to them.
An advantage of several milliseconds is literally worth millions to financial institutions. That’s why a company spent several hundred million dollars to lay a new communication cable between Chicago and New York City. The cable reduced the communication time between the cities by THREE MILLISECONDS and they are making a handsome profit by leasing the use of the cable to financial institutions.
Even after exchanges tried to standardize cable lengths, people just started fighting over who was closest to the cooling outlets so that they could run faster.
As long as high-frequency trading is around there will be this need, especially as general computing moves more and more towards concurrency.
Geeky additional detail: the Linux scheduler (CFS) keeps its run queue in a red-black tree, while Windows, Solaris, and most other operating systems use multilevel feedback queues. The red-black tree keeps scheduling decisions cheap (O(log n) insertion, with the leftmost node cached), but the design targets fairness rather than minimum latency for any single task.
As others have pointed out, in this context the preoccupation is not with transactions per minute or per day (you can just add more servers to cover that); it’s with how few microseconds it takes to evaluate incoming information and output a buy/sell order.
I imagine these trades are in the millions at a time, or whatever the maximum limit is, if there is one. Optimized OS, fast processors, co-location, location, the physical link… each individually might not be worth much, but factor in the cumulative benefit and you might have an edge of 20–30 ms, which could be advantageous.
I’m very intrigued by this and am going to read up on it some more.
There is commercial software available for low latency. Everyone can buy the fastest hardware and the fastest connectivity, so it seems like the fast solution is common knowledge. So what is there to do on a daily basis if you are doing kernel performance tuning for a financial services company? What is there to do that is different? Because right now, in 2018, it seems like this job is mostly maintenance and updating vendor software. Is it merely a glorified Linux administration position?
It is interesting, and I’ve watched a YouTube presentation about changing coding techniques. But none of these things seem to be a secret, which makes me wonder whether there really is anything new to be done in this area. Do they hire people for the position based on the metrics they demonstrated at previous jobs?
I’d be interested to hear from someone who actually has a job doing this.
I haven’t done this specifically for financial processing, but I’ve done a lot of Linux kernel tuning for embedded applications like video processing. If you’re building a dedicated system that has a single purpose (processing financial transactions), you can do things to improve latency that wouldn’t be appropriate in a general-purpose kernel. For example, maybe you poll something in a tight loop rather than wait for an interrupt, which improves latency but stalls all other activity in the system.
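To make that concrete, here is a minimal userspace sketch of the trade-off, not anything finance-specific; the UDP port and the empty packet handler are just placeholders:

```c
/* Poll-instead-of-wait sketch: spin on a non-blocking recv() instead of
 * sleeping in a blocking call. Assumes a UDP feed on port 9000 (made up). */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(9000);
    if (fd < 0 || bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        return 1;

    char buf[2048];
    for (;;) {
        /* MSG_DONTWAIT makes recv() return immediately instead of sleeping.
         * The loop burns a whole core, but the packet is picked up as soon as
         * it lands, with no wakeup/reschedule latency. A plain blocking recv()
         * is the "sleep until the interrupt wakes us" alternative. */
        ssize_t n = recv(fd, buf, sizeof(buf), MSG_DONTWAIT);
        if (n > 0) {
            /* handle the packet here */
        } else if (n < 0 && errno != EAGAIN && errno != EWOULDBLOCK) {
            perror("recv");
            break;
        }
    }
    close(fd);
    return 0;
}
```

That loop dedicates an entire core to one socket just to shave off wakeup latency, which is exactly the kind of trade-off a general-purpose system would never make by default.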
Is there a reason to do this activity on an ongoing basis? If so, what changes such that it would require constant tuning? I’m guessing you have a development/test system in place and with each new kernel update you work on it there first. Does that much really change after you have it established? Is Linux kernel tuning only part of your job?
You are confusing the general need for concurrency and parallelism, to increase bandwidth utilization and to work around limitations like the bandwidth-delay product, with the very specific needs of HFT.
Modern systems are designed to minimize overall latency, or more specifically to hide latency from interactive users.
While admittedly pessimistic, Amdahl’s law may be the simplest explanation of why that is still a challenge in general computing, even when it comes to OLTP or interactive users.
Even in this general case, if 95% of a program can run in parallel but 5% is still serial, then no matter how many CPUs or cores you have, you will only ever be able to speed up the process by 20 times.
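Spelled out, that ceiling is just Amdahl’s law with a 5% serial fraction; the 64-core figure below is only an illustrative data point:

```latex
% Amdahl's law: speedup on N processors with serial fraction s
\[
  S(N) \;=\; \frac{1}{\,s + \frac{1 - s}{N}\,},
  \qquad
  \lim_{N \to \infty} S(N) \;=\; \frac{1}{s} \;=\; \frac{1}{0.05} \;=\; 20
\]
% Even with 64 cores: S(64) = 1 / (0.05 + 0.95/64) ~= 15.4x, already close to the wall.
```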
Whenever you have a shared resource or state you will have a serial operation; whenever you have a mutable shared resource you will need locks to prevent race conditions, and those locks are in effect a serialized portion of the code.
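A minimal sketch of that serialized portion; the names are made up and there is nothing finance-specific about it:

```c
/* The lock below is exactly the "serial portion" Amdahl's law punishes:
 * each thread can do its private work in parallel, but every update to the
 * shared state has to pass through the mutex one thread at a time. */
#include <pthread.h>
#include <stdio.h>

static long shared_position = 0;                 /* shared, mutable state */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        /* parallel part: per-thread computation would go here */

        pthread_mutex_lock(&lock);               /* serialized part begins */
        shared_position += 1;                    /* one thread at a time   */
        pthread_mutex_unlock(&lock);             /* serialized part ends   */
    }
    return NULL;
}

int main(void)
{
    pthread_t t[4];
    for (int i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);
    printf("%ld\n", shared_position);            /* 4,000,000 regardless of scheduling */
    return 0;
}
```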
HFT requires knowing several different bits of information, needs to keep transactions ordered, and in general cannot be broken out into discrete, easy-to-distribute work items without shared dependencies. Concurrency control in modern databases uses MVCC, for example, which allows some parallelization but still does not allow overlapping execution on critical state objects without resorting to locks or risking a deadlock.
Simply reordering the recorded transactions in HFT has actually been proven to be an NP-hard problem. Others, like the subgraph isomorphism problem, are NP-complete and will either never be solved efficiently or, if they are, there will be no stock market.
To be clear, the only reason there is a chance to make money in HFT is because modeling it is NP-Complete.
General computing is biased toward Little’s Law:
mean response time = mean number in system / mean throughput
or, equivalently,
latency = concurrency / throughput
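As a quick worked example (the in-flight count and throughput below are made up for illustration):

```latex
% Little's law: L = \lambda W   (items in system = throughput x time in system)
% Rearranged for latency, with an illustrative order-gateway load:
\[
  W \;=\; \frac{L}{\lambda}
    \;=\; \frac{32\ \text{requests in flight}}{8000\ \text{requests/s}}
    \;=\; 4\ \text{ms}
\]
% More concurrency (a bigger L) buys throughput, but it does nothing to shrink
% the W seen by any individual order.
```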
As Moore’s law comes to an end, and due to power constraints, the industry has moved away from focusing on single-threaded throughput and is now working on concurrency. This is why we have multi-core CPUs, hyper-threading, etc.
The industry moving to this model actually runs counter to the biggest challenges in reducing latency for HFT. While networking companies and other hardware vendors now produce products directly targeted at the HFT market, the core CPUs, motherboard chipsets and operating systems are making changes that actually introduce latency into the system.
With the security mess around speculative execution and hyper-threading, this will only get worse for HFT.
This is simply a niche market, but one that can be much more profitable with minor reductions in latency, and it will always need developers to work on the OS kernels.
The cause lies in the fundamentals of computer science, not in product offerings. Unless there are major advancements in fundamental technology, these decisions will always involve trade-offs, and the vast majority of use cases would be negatively impacted if HFT’s needs were treated as the primary use case.
While directed at Python, which is obviously not usable for HFT’s latency-sensitive needs, there is a talk by Raymond Hettinger that will help explain the fundamental issues and challenges with concurrency.
It is about an hour long, so it requires investing some time, but it helps explain concepts that are pretty opaque to most people without getting in too deep.
I don’t know much about the financial industry, but I would guess that parallelism is fairly unimportant. If you want to handle 10x more trades, you just buy 9 more computers. Hardware costs are, I imagine, completely insignificant. What’s important is the latency – being able to process some data and spit out a trade in the smallest possible number of milliseconds.
There are a few factors that make this an ongoing task. One is handling kernel updates, which in some cases can break the tweaks we’ve made. It may also happen that requirements change – in my field, for example, we may need to support a new video codec. The biggest issue (again in my field) is supporting new hardware. When a new chipset comes along and we want to incorporate it in our product, we need to rewrite any tweaks that were hardware specific (in device drivers for example). Of course, some of this may not apply to the financial industry, but there may be other issues in that area that I’m not aware of.
Parallelism is critical; HFT is based on finding deviations from market equilibrium while simultaneously processing large volumes of information.
They need to perform tasks like catching announcements or news before others do, or detecting a lag between quotes and orders. The most time-critical needs relate to price discrepancies that appear simultaneously across several markets.
If you look into the Linux scheduler (CFS) you will see that the run queue is just a red-black tree ordered by virtual runtime, and the scheduler just pops the leftmost node off that tree. This is awesome for scaling, time complexity and fairness, but it is not great for HFT. Mix that with the shared state above and the implications of concurrency vs. parallelism, and it is a challenge.
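Here is a toy sketch of that pick-the-leftmost policy; a plain array and linear scan stand in for the kernel’s red-black tree, and the task names are invented:

```c
/* Toy illustration of the CFS "pick the leftmost task" idea. The real kernel
 * keeps runnable tasks in a red-black tree keyed by vruntime and caches the
 * leftmost node; a linear scan is enough to show the policy itself:
 * whoever has run the least runs next. */
#include <stdio.h>

struct task {
    const char *name;
    unsigned long long vruntime;   /* weighted "how long have I run" counter */
};

static struct task *pick_next(struct task *rq, int n)
{
    struct task *leftmost = &rq[0];
    for (int i = 1; i < n; i++)
        if (rq[i].vruntime < leftmost->vruntime)
            leftmost = &rq[i];     /* conceptually, the rbtree's leftmost node */
    return leftmost;
}

int main(void)
{
    struct task rq[] = {
        { "market-data",  5000 },
        { "order-engine", 1200 },  /* lowest vruntime: has run the least */
        { "logger",       9100 },
    };
    struct task *next = pick_next(rq, 3);
    /* Fairness means the order engine wins only until its vruntime catches up;
     * nothing here lets a latency-critical task monopolize the CPU. */
    printf("next task: %s\n", next->name);
    return 0;
}
```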
This assumes an ideal situation has already been achieved. There is always room for improvement. So you hire smart people and hope your smart people are smarter than the other companies’ smart people. And, barring that, hope they are at least as smart.
If other companies are working to improve their systems, it is a bad thing to not improve yours.
Unix/Linux is a great operating system but it isn’t optimal for real-time processing. To get the fastest response, I wonder if the most time-critical tasks should be done on a separate non-Linux processor. (Or done completely within the Linux device drivers.) Disclaimer: I’ve no idea what I’m talking about and realize that, even if that suggestion is remotely feasible, it would be a huge software chore.
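From what I gather, the usual compromise entirely within Linux is to give the time-critical thread a real-time scheduling class and pin it to a core kept away from everything else. A rough sketch, where the core number and priority are arbitrary choices rather than recommendations:

```c
/* Rough sketch (not finance-specific): run the time-critical work as a
 * SCHED_FIFO thread pinned to one core. Core 3 is an arbitrary choice; in
 * practice it would be a core isolated from the general-purpose scheduler. */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *critical_loop(void *arg)
{
    (void)arg;
    /* time-critical work (e.g. the polling loop from earlier) goes here */
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_attr_t attr;
    struct sched_param sp = { .sched_priority = 80 };   /* 1..99 for SCHED_FIFO */
    cpu_set_t cpus;

    pthread_attr_init(&attr);
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
    pthread_attr_setschedpolicy(&attr, SCHED_FIFO);     /* real-time, no timeslicing */
    pthread_attr_setschedparam(&attr, &sp);

    CPU_ZERO(&cpus);
    CPU_SET(3, &cpus);                                  /* pin to core 3 */
    pthread_attr_setaffinity_np(&attr, sizeof(cpus), &cpus);

    int rc = pthread_create(&t, &attr, critical_loop, NULL);
    if (rc != 0) {
        fprintf(stderr, "pthread_create failed (%d); SCHED_FIFO usually needs root or CAP_SYS_NICE\n", rc);
        return 1;
    }
    pthread_join(t, NULL);
    return 0;
}
```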
HFT emphasizes speed so much that it sounds like even one good idea might turn its inventor into a hero. Three decades ago I consulted for a Fortune-500 company which employed some of the biggest names in Unix kernel development. I discovered several easily-fixed driver flaws that were causing huge performance degradations in some cases. Based on that experience, I’d not be surprised if there’s still a “good new idea” to be found in the low-latency kernel.
This also brings up the matter of managing expectations. When is faster fast enough? Does management have metrics to know what an acceptable speed is for transactions in the industry? Do they set goals to increase the speed, and if so, to what? How would someone in that position know they are doing a great job?
IIUC, your competition is other high-speed traders. All the algorithms have deduced that some institution is about to offer $43.23 for 1000 shares of Pfizer and are racing to get to the top of the $43.23 queue. You want to be the one to sell shares at that price. You can buy them back at a more reasonable price like $43.20 a few minutes later, but you want to grab that easy $30, and then another $40 when that desperate institution bids $43.24. Like hyenas fighting over a downed zebra, all the high-speed traders are sending in those sell orders as fast as they can.
If one of your competitors can place his order in 1900 microseconds, you want to place the order in 1850 microseconds. He’ll respond by hiring an ace programmer who can generate that decision in 1800 microseconds; now you need to respond in 1750 microseconds to beat him. Next week one of your competitors will get some hardware upgrade and you have to be faster still. How fast is fast enough? If you outrace all the competitors for 100% of the smart HFT trades, your company will be making millions of dollars per day.
(Of course there’s always the risk that the Pfizer buyer isn’t some institution doing routine rebalancing, but someone who’s read a press release and knows that a new drug has passed a test. Now instead of winning 3¢ per share you’ll be losing $1 per. But that’s not your problem. )