
DARWIN API: What’s Been & What’s To Come (2019)

Earlier this year marked a significant milestone in Darwinex’s evolution… the Beta-state launch of the DARWIN API.

This included the following sub APIs:

  1. DARWIN Info API (to access Quote and Attributes data)
  2. DARWIN Quotes API (to stream Quotes from active DARWINs in real-time via REST)
  3. Quote Websocket API (to stream Quotes from active DARWINs in real-time via Web Sockets)
  4. DARWIN Trading API (to trade DARWINs via REST as you would via the platform)
  5. Investor Accounts Info API (to retrieve account and portfolio performance details, e.g. equity, position data, etc.)

All 5 sub APIs have now been rolled out to everyone.


Get Access To The DARWIN API


The entire suite of APIs is covered in great detail in a dedicated video tutorial series on the Darwinex YouTube channel! If you haven’t watched it yet, it’s well worth bookmarking.


What did we achieve?

The API’s launch enabled, for the very first time, programmatic access to the Darwinex Community dataset.

It enabled anyone and everyone to analyse and trade trader talent algorithmically, and to build custom indicators, automated trading robots, analysis tools and even full-fledged DARWIN Trading Terminals from scratch, to name just a few possibilities.

Algorithmic and discretionary/manual traders alike, quants, data scientists and practitioners across the board could now access a trader behaviour-powered, multivariate financial time series offering a richer feature-space than the OHLCV (Open, High, Low, Close, Volume) price data found in traditional asset classes.
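To make this concrete, here is a minimal Python sketch of the kind of authenticated REST call involved. The base URL, endpoint path, field names and token below are hypothetical placeholders rather than the API’s actual contract; the official DARWIN API documentation specifies the real endpoints and schemas.

```python
import json
from urllib.request import Request

API_BASE = "https://api.darwinex.com"  # hypothetical base URL
TOKEN = "YOUR_ACCESS_TOKEN"            # placeholder OAuth bearer token

def build_quote_request(darwin_symbol):
    """Construct an authenticated GET request for a DARWIN's latest quote
    (illustrative path, not the documented endpoint)."""
    url = f"{API_BASE}/darwininfo/products/{darwin_symbol}/quote"
    return Request(url, headers={"Authorization": f"Bearer {TOKEN}"})

def parse_quote(payload):
    """Extract the product name and quote from a JSON payload
    of the assumed (illustrative) shape."""
    data = json.loads(payload)
    return data["productName"], float(data["quote"])

# A sample payload of the assumed shape:
sample = '{"productName": "THA", "quote": 245.67}'
name, quote = parse_quote(sample)
print(name, quote)  # THA 245.67
```

The same request-building and parsing pattern applies across the Info, Quotes and Trading sub APIs; only the paths and payload schemas differ.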


Why does that matter?

…because information is power.

The more informed your investments, the better your odds of survival.

The means to address several evergreen trading challenges became a reality.

The API exposed endpoints that enabled anyone to create custom DARWIN filters and indexes, both to identify new investments and to inform existing ones.

For instance:

  1. Would you work with just session-sensitive over-the-counter tick volume or would the time-weighted order frequency of high performance DARWINs offer better insights into potential mispricing events? Watch this video for more information.
  2. Would your volume-spread strategy be served better by saturated “smart-money” assumptions about volume/price differentials or the direction performance DARWINs took when those differentials took place? Watch this video for insight.
  3. What seems more reliable… the Quote evolution of a DARWIN that’s successfully traded volatility after transaction costs, in continuously changing market conditions, navigating news, black swans, market sweeps and more, for 5+ years with a Darwinex-verified track record? …or the hyperbolic backtest of a volatility strategy with invariant market conditions?
  4. Would it make sense to apply technical analysis to trader behaviour?
  5. How do good traders react to major economic news releases?
  6. Would it make more sense to set a BUY STOP order with a DARWIN that’s consistently traded the Non-Farm Payroll successfully, or a straddle of BUY/SELL STOP orders around the EUR/USD as a more educated gamble than 50/50?
  7. What does a portfolio composed of intraday, swing or night-scalper DARWINs look like?
  8. What is the correlation of your strategy’s returns with those of the Darwinex Community?
  9. …the list goes on and on.

Depth of available data

With historical end-of-day data available for all 12 DARWIN investment attributes, Quote data available in multiple timeframes down to tick level, and another 200+ diagnostic attributes available via FTP to complement the DARWIN API, the scope for trading strategy and DARWIN portfolio R&D expanded many times over.
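To illustrate the timeframe point, here is a small pandas sketch (with made-up quote data, not real DARWIN quotes) that resamples tick-level quotes into hourly OHLC bars:

```python
import pandas as pd

# Made-up tick-level quotes for a hypothetical DARWIN.
ticks = pd.DataFrame(
    {"quote": [100.0, 100.4, 99.8, 101.2, 101.0, 100.6]},
    index=pd.to_datetime([
        "2019-06-03 09:00:05", "2019-06-03 09:20:40", "2019-06-03 09:59:59",
        "2019-06-03 10:15:00", "2019-06-03 10:30:30", "2019-06-03 10:45:10",
    ]),
)

# Resample the ticks into hourly open/high/low/close bars.
bars = ticks["quote"].resample("1h").ohlc()
print(bars)
```

The same call with `"1D"` produces end-of-day bars, matching the coarsest timeframe mentioned above.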

API users are now empowered to build proprietary solutions with the DARWIN asset class… be they filters, portfolios, indicators or platform features, the possibilities span as far as your imagination can take them.
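As a toy illustration of such a filter, the sketch below screens a universe of DARWINs by minimum attribute scores. The DARWIN names, attribute codes and values here are made-up examples, not real DARWINs or their actual scores:

```python
def filter_darwins(darwins, min_scores):
    """Return the names of DARWINs whose attribute scores meet
    every minimum threshold in min_scores."""
    return [d["name"] for d in darwins
            if all(d["attributes"].get(attr, 0) >= score
                   for attr, score in min_scores.items())]

# A hypothetical universe with (made-up) attribute scores.
universe = [
    {"name": "AAA", "attributes": {"Ex": 9.1, "Rs": 8.4, "Mc": 7.0}},
    {"name": "BBB", "attributes": {"Ex": 4.2, "Rs": 9.0, "Mc": 8.1}},
    {"name": "CCC", "attributes": {"Ex": 8.5, "Rs": 7.9, "Mc": 9.3}},
]

# Keep DARWINs scoring at least 8 on "Ex" and 7 on "Rs":
print(filter_darwins(universe, {"Ex": 8.0, "Rs": 7.0}))  # ['AAA', 'CCC']
```

In a real application, the attribute values would come from the Info API or the FTP dataset rather than hard-coded dictionaries.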


To support users in this quest, Darwinex Labs will continue to publish detailed video tutorials and source code on a weekly basis, as well as API wrappers via GitHub, all open source.

And as the API matures over time, its feature set will continue to grow!

As always, we’ll publish all beta and release candidate features on the Darwinex Community Forum, where we’ll also rely heavily on your valuable feedback and experience over time.


Related links

More information and access to the APIs

API Walkthrough

Darwinex API Store

API T&C

Darwinex Collective Slack Workspace for Algorithmic R&D

LVQ and Machine Learning for Algorithmic Traders – Part 3

In the last two posts, LVQ and Machine Learning for Algorithmic Traders – Part 1, and LVQ and Machine Learning for Algorithmic Traders – Part 2, we demonstrated how to use:

  1. Learning Vector Quantization (LVQ)
  2. Correlation testing

…to determine the relevance/importance of, and the correlation between, strategy parameters respectively.

Yet another technique we can use to estimate the best features to include in our trading strategies or models is Recursive Feature Elimination (RFE), an automatic feature selection approach.


What is Automatic Feature Selection?

Automatic feature selection enables algorithmic traders to construct multiple quantitative models using different segments of a given dataset, identifying which combination of features (strategy parameters) results in the most accurate model.

Recursive Feature Elimination


One such method of automatic feature selection is Recursive Feature Elimination (RFE).

To find the feature-space that yields the most accurate model, the technique repeatedly fits a Random Forest to the input feature data (strategy parameters), ranking features by importance and recursively eliminating the weakest candidates while measuring model accuracy at each subset size.

The end-outcome is a list of features that produce the most accurate model.

Using RFE, algorithmic traders can refine and speed up trading strategy optimization significantly (subject to this list being smaller than the total number of input parameters of course).


R (Statistical Computing Environment)


We’ll make use of the caret (Classification and Regression Training) package in R once again.

It contains functions to perform RFE conveniently, allowing us to spend more time in analysis instead of writing the functionality ourselves.

Recursive Feature Elimination – Step by Step Process

  1. As before, run “raw” backtests without any optimization, employing all features (parameters), and save your results in a suitable data structure (e.g. a CSV table), then load the caret and randomForest libraries.
  2. Specify the algorithm control using a Random Forest selection method.
  3. Execute the Recursive Feature Elimination algorithm.
  4. Output the algorithm’s chosen features (strategy parameters).

 

Step 1: Load the data + “randomForest” and “caret” machine learning libraries in R

> library(caret)
> library(randomForest)
> train.blogpost <- read.csv("data.csv", header=TRUE, nrows=1000)
> train.blogpost <- train.blogpost[,grep("feature|target",names(train.blogpost))]

Step 2: Specify the control using Random Forest selection function

> rfe.control <- rfeControl(functions=rfFuncs, method="cv", number=10)

Step 3: Execute the Recursive Feature Elimination algorithm

> rfe.output <- rfe(train.blogpost[,1:21], train.blogpost[,22], sizes=c(1:21), rfeControl = rfe.control)

Step 4: Output chosen features (strategy parameters)

> print(rfe.output)
> predictors(rfe.output)
> plot(rfe.output, type=c("o", "g"))
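For readers who prefer Python to R, the same workflow can be sketched with scikit-learn’s RFE class wrapped around a random-forest estimator. The data below is synthetic, generated so that only 5 of 21 candidate features are informative; in practice you would substitute your own backtest results, with strategy parameters as features and performance as the target.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import RFE

# Synthetic stand-in for the backtest table: 21 candidate parameters,
# of which only 5 actually drive the target.
X, y = make_regression(n_samples=200, n_features=21, n_informative=5,
                       random_state=42)

# Recursively eliminate the weakest features, one per step,
# until 5 remain, ranking them by random-forest importance.
selector = RFE(RandomForestRegressor(n_estimators=100, random_state=42),
               n_features_to_select=5)
selector.fit(X, y)

chosen = [i for i, keep in enumerate(selector.support_) if keep]
print("Selected feature indices:", chosen)
```

Note that caret’s rfe also cross-validates across subset sizes; scikit-learn’s RFECV provides the analogous behaviour, choosing the subset size automatically.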

Recursive Feature Elimination – Output Predictors

Recursive Feature Elimination – RMSE Plot


Conclusion

From these results, it is apparent that:

  1. A model using only the first two parameters is the least accurate.
  2. The algorithm’s 5 selected parameters (out of a total of 21) produce the most accurate model.
  3. Using more than 5 parameters yields comparable but slightly lower accuracy, so adding further parameters would bring no value to the model.

Based on this, an algorithmic trader could significantly reduce their optimization overhead by culling the number of strategy parameters employed in backtesting and optimization.


Additional Resource: Measuring Investments’ Risk: Value at Risk (VIDEO)
* please activate CC mode to view subtitles.

Do you have what it takes? – Join the Darwinex Trader Movement!

Darwinex – The Open Trader Exchange