Previously in this series:
- The “lost boarding pass” puzzle
- The “deadly board game” puzzle
- The “knight on an infinite chessboard” puzzle
I recently came across an interview problem from "A Cool SQL Problem: Avoiding For-Loops". Avoiding loops is a topic I always enjoy reading about, and the blog post didn't disappoint. I'll quote that post's description of the problem:
You have a table of trading days (with no gaps) and close prices for a stock.
Find the highest and lowest profits (or losses) you could have made if you bought the stock at one close price and sold it at another close price, i.e. a total of exactly two transactions.
You cannot sell a stock before it has been purchased. Your solution can allow buying and selling on the same trading_date (i.e. profit or loss of $0 is always, by definition, an available option); however, for some bonus points, you may write a more general solution for this problem that requires you to hold the stock for at least N days.
The blog post provided a SQL solution, and noted the importance of thinking with tables rather than loops. But the post made me think about how I'd solve the problem in R, and how I'd make the solution as computationally efficient as possible. As in many of my posts, we'll take a "tidy approach" that focuses on the dplyr, tidyr, and ggplot2 packages.
Setup: Getting stock data with the tidyquant package
The original blog post worked on just one stock symbol, but let’s make the problem slightly harder by examining data for several stocks at the same time. To gather some real data, I’ll use the excellent tidyquant package from Business Science. I’ll pick four stocks: Apple, Microsoft, Netflix, and Tesla.
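The exact download code isn't reproduced here, but a minimal sketch with tq_get() might look like the following (the date range, and the name stock_prices, are assumptions of mine):

```r
library(tidyquant)
library(dplyr)

# Daily closing prices for four symbols; the date range here is an assumption
stock_prices <- tq_get(c("AAPL", "MSFT", "NFLX", "TSLA"),
                       get = "stock.prices",
                       from = "2010-01-01",
                       to = "2019-12-31") %>%
  select(symbol, date, close)

stock_prices
```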
Notice that the result from tq_get is tidy: one row per symbol per day. This isn't part of the interview question, but we can visualize the stocks over time using ggplot2.
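A sketch of that plot (the original's exact aesthetics aren't shown, so this is just one way to draw it):

```r
library(ggplot2)

# One line per stock, with each symbol in its own panel
stock_prices %>%
  ggplot(aes(date, close, color = symbol)) +
  geom_line() +
  facet_wrap(~ symbol, scales = "free_y") +
  labs(x = "Date", y = "Closing price")
```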
Solving the problem with a self-join
One way to approach this problem (the one taken in SQL by the original blog post) is to join the data with itself, and examine every pairing of a date with a future date.
In this case, since we have the symbol column and we only want to compare symbols to themselves, we'll want to join based on the "symbol" column.¹
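A sketch of that join (stock_pairs is a name I'm introducing; the suffix argument labels the columns that come from the "future" copy of the table):

```r
# Join each day to every other day of the same symbol
stock_pairs <- stock_prices %>%
  inner_join(stock_prices, by = "symbol", suffix = c("", "_future"))

stock_pairs
```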
This creates all combinations of present and future dates, ending up in columns close and close_future. Now we'll need a few dplyr steps:
- Filter for cases where date_future is greater than date (filter)
- Look at the change if you bought on date and sold on date_future (mutate)
- Aggregate to find the maximum and minimum change within each symbol (group_by/summarize)
So our solution looks like this:
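Here's a sketch of that pipeline (column names like change, biggest_gain, and biggest_loss are my own choices):

```r
stock_prices %>%
  inner_join(stock_prices, by = "symbol", suffix = c("", "_future")) %>%
  filter(date_future > date) %>%
  mutate(change = close_future - close) %>%
  group_by(symbol) %>%
  summarize(biggest_gain = max(change),
            biggest_loss = min(change))
```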
I don't know if this metric is at all meaningful from a finance perspective (of course nobody could know in advance the best days to buy and sell a stock to maximize profit), but it's interesting to see how much Netflix has grown since 2010, and how Tesla has seen a large drop (though based on the graph above it has since recovered).
The above lines up with the original blog post: join the data to itself while constraining comparisons to future dates. However, in R, on a dataset this size, it's not a very fast approach. It had to create a 25-million-row table (~6.3 million rows for each symbol: the number of trading days squared), and it had to make millions of comparisons. On my machine it takes about 3 seconds to run, and with a dataset of thousands of symbols it would have been computationally infeasible.
Solving with vectorized functions: cummax and cummin
A much faster approach in R is to use vectorized functions. In particular, we can use cummax() (cumulative max) and cummin() (cumulative min) to quickly discover the highest and lowest future prices from each date.
We first sort by the stock symbol² and by descending date. We also group by the symbol, which means we'll be looking at changes within each stock.
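A sketch of that step (highest_future and lowest_future are column names I'm assuming):

```r
cumulative_prices <- stock_prices %>%
  arrange(symbol, desc(date)) %>%
  group_by(symbol) %>%
  # With dates in descending order, cummax/cummin give the highest/lowest
  # close on or after each date
  mutate(highest_future = cummax(close),
         lowest_future = cummin(close))

cumulative_prices
```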
Look, for instance, at the 10th row of the table. The closing price of AAPL on 2019-12-10 was $268; from that date onward, the highest it ever reaches is $284 and the lowest is $268. This means that someone buying on 2019-12-10 could gain up to $16, and couldn't lose money.
By finding the largest/smallest difference between each day’s price and the highest/lowest future price, we can get the highest possible gain or the biggest possible loss. This gives us a new approach to the interview problem.
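Putting it together, a sketch of the vectorized solution looks like this:

```r
stock_prices %>%
  arrange(symbol, desc(date)) %>%
  group_by(symbol) %>%
  mutate(highest_future = cummax(close),
         lowest_future = cummin(close)) %>%
  # Best gain: buy on the day with the largest (future max - close);
  # worst loss: buy on the day with the smallest (future min - close)
  summarize(biggest_gain = max(highest_future - close),
            biggest_loss = min(lowest_future - close))
```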
In R, this is roughly 600X faster than the self-join approach (for me it takes about 5 milliseconds, compared to about 3 seconds for the self-join). cummin and cummax are fast (they're base R functions implemented in C), and memory use is linear in the number of observations.
It's worth mentioning that the vectorized approach isn't specific to dplyr (though dplyr makes it especially convenient to perform within each stock symbol). If x is a vector of prices ordered by date, you can use rev(cummax(rev(x))) to get the highest future price from each position, which gives us a one-liner for solving the problem on a vector of prices.
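For example, with a toy vector of closing prices (values made up for illustration):

```r
# Closing prices in ascending date order
x <- c(10, 7, 12, 9, 15, 11)

# Highest close on or after each day
rev(cummax(rev(x)))
#> [1] 15 15 15 15 15 11

# One-liner for the highest possible gain: buy at 7, sell at 15
max(rev(cummax(rev(x))) - x)
#> [1] 8
```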
Variation: waiting at least N days?
There's one more variation from the original blog post that's worth thinking through. What if you set the rule that you have to hold the stock for some number N of trading days (say, 100) before selling it?
You can solve this in R with dplyr's lag() window function, which shifts each value in a vector a given number of positions later.
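A sketch with N = 100, keeping the descending-date ordering from before (lag() produces NAs for the most recent 100 days of each stock, hence na.rm = TRUE):

```r
stock_prices %>%
  arrange(symbol, desc(date)) %>%
  group_by(symbol) %>%
  # Lagging by 100 positions gives the highest/lowest close at least
  # 100 trading days after each date
  mutate(highest_future = lag(cummax(close), 100),
         lowest_future = lag(cummin(close), 100)) %>%
  summarize(biggest_gain = max(highest_future - close, na.rm = TRUE),
            biggest_loss = min(lowest_future - close, na.rm = TRUE))
```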
From this we can see that if we had to wait 100 days, our "highest possible gain" doesn't change, but our "biggest possible loss" shrinks for some of the stocks. This makes sense, because these stocks have generally been growing over time.
What if we wanted to see how N (the gap we have to wait) changes the biggest gain and biggest loss? Well, one of my favorite tidyr tricks is crossing(), which duplicates a dataset multiple times. By duplicating the data for many values of N, then doing lag(cummax(close), N[1]), we can calculate a solution for many values of N and visualize them.
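A sketch of that approach; the particular sequence of N values and the plot design are my own choices:

```r
library(tidyr)

stock_prices %>%
  # Duplicate the data once for each value of N
  crossing(N = seq(25, 500, 25)) %>%
  arrange(N, symbol, desc(date)) %>%
  group_by(N, symbol) %>%
  # Within each group N is constant, so N[1] is the scalar lag() needs
  mutate(highest_future = lag(cummax(close), N[1]),
         lowest_future = lag(cummin(close), N[1])) %>%
  summarize(biggest_gain = max(highest_future - close, na.rm = TRUE),
            biggest_loss = min(lowest_future - close, na.rm = TRUE)) %>%
  ggplot(aes(N, biggest_gain, color = symbol)) +
  geom_line() +
  labs(x = "N: minimum number of trading days to hold the stock",
       y = "Highest possible gain")
```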
This extension was made possible by two features of our approach:
- Advantage of the tidyverse: Since we solved the problem with dplyr, it was fairly straightforward to examine the effect of another parameter (N, the number of days we had to wait) by adding an extra group_by variable.
- Advantage of computational speed: Since we managed to solve the problem in milliseconds rather than seconds, it wasn't much of a hassle to solve it for a hundred different input parameters.
1. Fun fact: if there were only one stock and it didn't have a symbol column, we could have used crossing() to find all combinations of dates.
2. Technically, we don't have to sort by symbol; as long as we're sorted by descending date, the grouped operation will work within each symbol. However, sorting by symbol makes the data easier to understand when viewing the first few rows.