## Quantitative Modeling for Algorithmic Traders – Primer

Quantitative modeling techniques enable traders to identify mathematically what makes data “tick” – no pun intended 🙂 .

They rely heavily on the following core attributes of any sample data under study:

1. Expectation – The mean or average value of the sample
2. Variance – The observed spread of the sample
3. Standard Deviation – The observed deviation from the sample’s mean
4. Covariance – The linear association of two data samples
5. Correlation – Covariance normalized into a dimensionless measure

## Why a dedicated primer on Quantitative Modeling?

Understanding how to use the five core attributes listed above in practice will enable you to:

1. Construct diversified DARWIN portfolios using Darwinex’s proprietary Analytical Toolkit.
2. Conduct mean-variance analysis for validating your DARWIN portfolio’s composition.
3. Build a solid foundation for implementing more sophisticated quantitative modeling techniques.
4. Potentially improve the robustness of trading strategies deployed across multiple assets.

Hence, a post dedicated to defining these core attributes, with practical examples in R (a statistical computing language), should serve as good reference material to accompany existing and future posts.

### Why R?

1. It facilitates the analysis of large price datasets in short periods of time.
2. Calculations that would require multiple lines of code in other languages can often be done in a single call, as R has a mature base of libraries for many quantitative finance applications.

* Sample data (EUR/USD and GBP/USD End-of-Day Adjusted Close Price) used in this post was obtained from Yahoo, where it is freely available to the public.

### Before progressing any further, we need to download EUR/USD and GBP/USD sample data from Yahoo Finance (time period: January 01 to March 31, 2017)

In R, this can be achieved with the following code:

```r
library(quantmod)

getSymbols("EUR=X", src="yahoo", from="2017-01-01", to="2017-03-31")
getSymbols("GBP=X", src="yahoo", from="2017-01-01", to="2017-03-31")
```

Note: “EUR=X” and “GBP=X” provided by Yahoo are in terms of US Dollars, i.e. the data represents USD/EUR and USD/GBP respectively. Hence, we will need to convert base currencies first.

To achieve this, we will first extract the Adjusted Close Price from each dataset, convert base currency and merge both into a new data frame for use later:

```r
# Extract the Adjusted Close and convert to EUR/USD
# (symbols containing "=" must be wrapped in backticks in R)
eurAdj <- unclass(`EUR=X`$`EUR=X.Adjusted`)
eurAdj <- 1/eurAdj

# Extract the Adjusted Close and convert to GBP/USD
gbpAdj <- unclass(`GBP=X`$`GBP=X.Adjusted`)
gbpAdj <- 1/gbpAdj

# Extract EUR dates for plotting later
eurDates <- index(`EUR=X`)

# Create merged data frame
eurgbp_merged <- data.frame(eurAdj, gbpAdj)
```

EUR/USD and GBP/USD (Jan 01 – Mar 31, 2017)

Finally, we merge the prices and dates to form one single dataframe, for use in the remainder of this post:

```r
eurgbp_merged <- data.frame(eurDates, eurgbp_merged)
colnames(eurgbp_merged) <- c("Dates", "EURUSD", "GBPUSD")
```

### The mean μ of a price series is its average value.

It is calculated by adding all elements of the series, then dividing this sum by the total number of elements in the series.

Mathematically, the mean μ of a price series P, where elements p ∈ P, with n number of elements in P, is expressed as:

$$μ = E(p) = \frac{1}{n} \sum_{i=1}^{n} p_i = \frac{1}{n}(p_1 + p_2 + \dots + p_n)$$

In R, the mean of a sample can be calculated using the mean() function.
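As a quick sanity check, the formula above can be verified against R's built-in mean() on a small vector (the values below are purely illustrative, not market data):

```r
# A small, purely illustrative price vector
p <- c(1.05, 1.06, 1.07, 1.06)

# Mean computed directly from the formula: sum all elements, divide by n
mu_manual <- sum(p) / length(p)

# R's built-in equivalent
mu_builtin <- mean(p)

# The two agree (up to floating-point tolerance)
isTRUE(all.equal(mu_manual, mu_builtin))
```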

For example, to calculate the mean price observed in our sample of EUR/USD data, ranging from January 01 to March 31, 2017, we execute the following code to arrive at mean 1.065407:

```r
mean(eurgbp_merged$EURUSD)
[1] 1.065407
```

Using the plotly library in R, here’s the mean overlaid graphically on this EUR/USD sample:

```r
library(plotly)

plot_ly(name="EUR/USD Price", x = eurgbp_merged$Dates,
        y = as.numeric(eurgbp_merged$EURUSD),
        type="scatter", mode="lines") %>%
  add_trace(name="EUR/USD Mean",
            y=(as.numeric(mean(eurgbp_merged$EURUSD))),
            mode="lines")
```

EUR/USD Mean R Plotly Chart (Jan 01 – Mar 31, 2017)

### The variance σ² of a price series is the mean, or expectation, of the squared deviation of price from its mean.

It characterises the range of movement around the mean, or “spread” of the price series.

Mathematically, the variance σ² of a price series P, with elements p ∈ P, and mean μ, is expressed as:

$$σ²(p) = E[(p - μ)²]$$

Standard Deviation is simply the square root of variance, expressed as σ:

$$σ = \sqrt{σ²(p)} = \sqrt{E[(p - μ)²]}$$

In R, the standard deviation of a sample can be calculated using the sd() function.
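One caveat worth flagging: R's var() and sd() use the sample (n − 1) denominator rather than the population expectation shown above. The square-root relationship between them holds either way, as this small illustrative check shows:

```r
# A small, purely illustrative price vector
p <- c(1.05, 1.06, 1.07, 1.06)

# Note: var() and sd() in R use the sample (n - 1) denominator,
# not the population formula E[(p - mu)^2] shown above
v <- var(p)
s <- sd(p)

# Standard deviation is the square root of variance
isTRUE(all.equal(s, sqrt(v)))
```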

For example, to calculate the standard deviation observed in our sample of EUR/USD data, ranging from January 01 to March 31, 2017, we execute the following code to arrive at s.d. 0.00996836:

```r
sd(eurgbp_merged$EURUSD)
[1] 0.00996836
```

Using the plotly library in R again, we can overlay one (or more) positive and negative standard deviations from the mean, as follows:

```r
plot_ly(name="EUR/USD Price", x = eurgbp_merged$Dates,
        y = as.numeric(eurgbp_merged$EURUSD),
        type="scatter", mode="lines") %>%
  add_trace(name="+1 S.D.",
            y=(as.numeric(mean(eurgbp_merged$EURUSD)) + sd(eurgbp_merged$EURUSD)),
            mode="lines", line=list(dash="dot")) %>%
  add_trace(name="-1 S.D.",
            y=(as.numeric(mean(eurgbp_merged$EURUSD)) - sd(eurgbp_merged$EURUSD)),
            mode="lines", line=list(dash="dot")) %>%
  add_trace(name="EUR/USD Mean",
            y=(as.numeric(mean(eurgbp_merged$EURUSD))),
            mode="lines")
```

EUR/USD Mean +/- 1 Standard Deviation R Plotly Chart (Jan 01 – Mar 31, 2017)

### The sample covariance of two price series, in this case EUR/USD and GBP/USD, each with its respective sample mean, describes their linear association, i.e. how they move together in time.

Let’s denote EUR/USD by the variable ‘e’ and GBP/USD by the variable ‘g’.

These price series then have sample means $$\overline{e}$$ and $$\overline{g}$$ respectively.

Mathematically, their sample covariance, Cov(e, g), where both have n number of data points $$(e_i, g_i)$$, can be expressed as:

$$Cov(e,g) = \frac{1}{n-1}\sum_{i=1}^{n}(e_i - \overline{e})(g_i - \overline{g})$$

In R, sample covariance can be calculated easily using the cov() function.
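As a quick check on cov(), the formula above can be computed by hand on two short vectors (purely illustrative values, standing in for two price series):

```r
# Short, purely illustrative stand-ins for two price series
e <- c(1.05, 1.06, 1.07, 1.06)
g <- c(1.22, 1.24, 1.25, 1.23)

n <- length(e)

# Sample covariance computed directly from the formula
cov_manual <- sum((e - mean(e)) * (g - mean(g))) / (n - 1)

# Matches R's built-in cov()
isTRUE(all.equal(cov_manual, cov(e, g)))
```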

Before we calculate covariance, let’s first use the plotly library to draw a scatter plot of EUR/USD and GBP/USD.

To visualize linear association, we will also perform a linear regression on the two price series, followed by drawing this as a line of best fit on the scatter plot.

This can be achieved in R using the following code:

```r
# Perform linear regression of EUR/USD on GBP/USD
fit <- lm(EURUSD ~ GBPUSD, data=eurgbp_merged)

# Draw scatter plot with line of best fit
plot_ly(name="Scatter Plot", data=eurgbp_merged, y=~EURUSD, x=~GBPUSD,
        type="scatter", mode="markers") %>%
  add_trace(name="Linear Regression", data=eurgbp_merged, x=~GBPUSD,
            y=fitted(fit), mode="lines")
```

EUR/USD and GBP/USD Scatter Plot with Linear Regression

Based on this plot, EUR/USD and GBP/USD have a positive linear association.

To calculate the sample covariance of EUR/USD and GBP/USD between January 01 and March 31, 2017, we execute the following code to arrive at covariance 7.629787e-05:

```r
cov(eurgbp_merged$EURUSD, eurgbp_merged$GBPUSD)
[1] 7.629787e-05
```

Problem: Covariance is dimensional in nature; its magnitude depends on the scale of each series, which makes it difficult to compare price series with significantly different variances.

Solution: Calculate Correlation, which is Covariance normalized by the standard deviations of each price series, hence making it dimensionless and a more interpretable ratio of linear association between two price series.

Mathematically, Correlation ρ(e,g) of EUR/USD and GBP/USD, where $$σ_e$$ and $$σ_g$$ are their respective standard deviations, can be expressed as:

$$ρ(e,g) = \frac{Cov(e,g)}{σ_e σ_g} = \frac{\frac{1}{n-1}\sum_{i=1}^{n}(e_i - \overline{e})(g_i - \overline{g})}{σ_e σ_g}$$

* Correlation = +1 indicates EXACT positive linear association.
* Correlation = -1 indicates EXACT negative linear association.
* Correlation = 0 indicates NO linear association.

In R, correlation can be calculated easily using the cor() function.
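To sketch the normalization above on the same illustrative vectors as before, correlation is simply covariance divided by the product of the two standard deviations, which agrees with cor():

```r
# Short, purely illustrative stand-ins for two price series
e <- c(1.05, 1.06, 1.07, 1.06)
g <- c(1.22, 1.24, 1.25, 1.23)

# Correlation = covariance normalized by both standard deviations,
# making it dimensionless
rho_manual <- cov(e, g) / (sd(e) * sd(g))

# Matches R's built-in cor()
isTRUE(all.equal(rho_manual, cor(e, g)))
```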

For example, to calculate the correlation between EUR/USD and GBP/USD, from January 01 to March 31, 2017, we execute the following code to arrive at 0.5169411:

```r
cor(eurgbp_merged$EURUSD, eurgbp_merged$GBPUSD)
[1] 0.5169411
```

A value of 0.5169411 implies a moderate positive correlation between EUR/USD and GBP/USD, consistent with what we visualized earlier in our scatter plot and line of best fit.

In future blog posts, we will examine how to construct diversified DARWIN Portfolios using the information above in practice.

The Darwinex Team


## Hidden Markov Models & Regime Change: S&P500

In this post, we will employ a statistical time series approach using Hidden Markov Models (HMMs) to obtain visual evidence of regime change in the S&P500.

Detecting significant, unforeseen changes in underlying market conditions (termed “market regimes“) is one of the greatest challenges faced by algorithmic traders today. It is therefore critical that traders account for shifts in these market regimes during trading strategy development.

## Why use Hidden Markov Models?

Hidden Markov Models for Detecting Market Regime Change (Source: Wikipedia)

Hidden Markov Models infer “hidden states” in data by using observations (in our case, returns) correlated to these states (in our case, bullish, bearish, or unknown).

They are hence a suitable technique for detecting regime change, enabling algorithmic traders to optimize entries/exits and risk management accordingly.

We will make use of the depmixS4 package in R to analyse regime change in the S&P500 Index.

Hidden Markov Model – State Space Model (Source: StackExchange)

With any state-space modelling effort in quantitative finance, there are usually three main types of problems to address:

1. Prediction – forecasting future states of the market
2. Filtering – estimating the present state of the market
3. Smoothing – estimating the past states of the market

We will be using the Filtering approach.

Additionally, we will assume that since S&P500 returns are continuous, the probability of observing a particular return R at time t, with market regime M in state m, where the model used has parameter-set P, is described by a normal (Gaussian) distribution with state-dependent mean μ and standard deviation σ [1].

Mathematically, this can be expressed as:

$$p(R_t | M_t = m, P) = N(R_t | μ_m, σ_m)$$
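As an aside, this density can be evaluated directly in R with dnorm(). The return, state mean and state standard deviation below are purely illustrative numbers, not parameters fitted by the model:

```r
r_t <- 0.005      # observed return (illustrative)
mu_m <- 0.001     # state mean (illustrative)
sigma_m <- 0.008  # state standard deviation (illustrative)

# Likelihood of observing r_t when the market regime is in state m
likelihood <- dnorm(r_t, mean = mu_m, sd = sigma_m)
likelihood > 0  # a positive density value
```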

As noted earlier, we will utilize the Dependent Mixture Models package in R (depmixS4) for the purposes of:

1. Fitting a Hidden Markov Model to S&P500 returns data.
2. Determining posterior probabilities of being in one of three market states (bullish, bearish or unknown), at any given time.

We will then use the plotly R graphing library to plot both the S&P500 returns, and the market states the index was estimated to have been in over time.

You may replicate the following R source code to conduct this analysis on the S&P500.

#### Step 1: Load required R libraries

```r
library(quantmod)
library(plotly)
library(depmixS4)
```

#### Step 2: Get S&P500 data from June 2014 to March 2017

```r
getSymbols("^GSPC", from="2014-06-01", to="2017-03-31")
```

#### Step 3: Calculate differenced logarithmic returns using S&P500 EOD Close prices.

```r
sp500_temp <- diff(log(Cl(GSPC)))
sp500_returns <- as.numeric(sp500_temp)
```

#### Step 4: Plot returns from (3) above on plot_ly scatter plot.

```r
plot_ly(x = index(GSPC), y = sp500_returns, type="scatter", mode="lines") %>%
  layout(xaxis = list(title="Date/Time (June 2014 to March 2017)"),
         yaxis = list(title="S&P500 Differenced Logarithmic Returns"))
```

## S&P500 Differenced Logarithmic Returns (June 2014 to March 2017)


#### Step 5: Fit Hidden Markov Model to S&P500 returns, with three “states”

```r
hidden_markov_model <- depmix(sp500_returns ~ 1, family = gaussian(), nstates = 3,
                              data = data.frame(sp500_returns = sp500_returns))

model_fit <- fit(hidden_markov_model)
```

#### Step 6: Calculate posterior probabilities for each of the market states

```r
posterior_probabilities <- posterior(model_fit)
```

#### Step 7: Overlay calculated probabilities on S&P500 cumulative returns

```r
# Convert to gross returns, drop the leading NA from differencing,
# then compound into a cumulative return series
sp500_gret <- 1 + sp500_returns
sp500_gret <- sp500_gret[-1]
sp500_cret <- cumprod(sp500_gret)

plot_ly(name="Unknown", x = index(GSPC), y = posterior_probabilities$S1,
        type="scatter", mode="lines", line=list(color="grey")) %>%
  add_trace(name="Bullish", y = posterior_probabilities$S2, line=list(color="blue")) %>%
  add_trace(name="Bearish", y = posterior_probabilities$S3, line=list(color="red")) %>%
  add_trace(name="S&P500", y = c(rep(NA,1), sp500_cret-1), line=list(color="black"))
```

## S&P500 Market Regime Probabilities (June 2014 to March 2017)

S&P500 Hidden Markov Model States (June 2014 to March 2017)

Interpretation: In any one “market regime”, the corresponding line/curve will “cluster” towards the top of the y-axis (i.e. near a probability of 100%).

For example, during a brief bullish run starting on 01 June 2014, the blue line clustered near a y-axis value of 1.0. As you can see, this correlates with movement in the S&P500 (black line). The same applies to the bearish and “unknown” market states.

An interesting insight one can draw from this graphic is how the Hidden Markov Model reveals the high volatility in the market between June 2014 and March 2015, with the index frequently switching between the bullish, bearish and unknown states.

References:

[1] Murphy, K.P. (2012) Machine Learning – A Probabilistic Perspective, MIT Press.
https://www.cs.ubc.ca/~murphyk/MLbook/

Influences:

The honourable Mr. Michael Halls-Moore. QuantStart.com
http://www.quantstart.com/