DataCrunch

This weekly prediction contest ranks the 3,000 most liquid US equities for DataCrunch's Hedge Fund.

DataCrunch uses the quantitative research of the CrunchDAO to manage its systematic market-neutral portfolio. DataCrunch built a dataset covering thousands of publicly traded U.S. companies.

The long-term strategic goal of the fund is capital appreciation by capturing idiosyncratic returns at low volatility.

To achieve this goal, DataCrunch needs the community to assess the relative performance of all assets in a subset of the Russell 3000 universe. In other words, DataCrunch expects your model to rank the constituents of its investment universe.

Prize

Rewards are split across targets as follows. Each target represents an investment horizon and can be predicted using the DataCrunch dataset. Rewards are distributed every month based on crunchers' performance:

  • 60,000 $USDC yearly on target_b, plus a $10,000 bonus for the cumulative alpha target.

  • 20,000 $USDC yearly on target_g

  • 20,000 $USDC yearly on target_r

  • 10,000 $USDC yearly on target_w

Weekly Crunches

Each week has two phases:

  • The Submission Phase: from Friday at 8 PM UTC to Tuesday at 12 PM UTC, the system releases an additional moon. Competitors can submit their code or model.

  • The Out-Of-Sample Phase: the models are run on the Out-Of-Sample data (the live data). The scores for each target are published as they are resolved against live market data, and rewards are computed from them.

Data

Each row of the dataset describes a stock at a certain date.

The dataset is composed of four files: X_train, y_train, X_test, and y_test.

X_train

  • moon: A sequentially increasing integer representing a date. The time between subsequent dates is constant, reflecting the fixed weekly frequency at which the data is sampled.

  • id: A unique identifier representing a stock at a given moon. Note that the same asset has a different id in different moons.

  • (gordon_Feature_1, …, dolly_Feature_30): Anonymised features that describe the state of assets on a given date. They are grouped into several families, or ways of assessing the relative performance of each stock on a given date.

Note: All features have the string "Feature" in their name, prefixed by a code name for the feature family.
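For instance, here is a minimal sketch of loading and inspecting these columns. The Parquet file name is an assumption; adapt it to the format you actually download.

```python
import pandas as pd

# Minimal sketch, assuming the data ships as Parquet (hypothetical name).
X_train = pd.read_parquet("X_train.parquet")

# All feature columns contain the string "Feature", per the note above.
feature_cols = [c for c in X_train.columns if "Feature" in c]
print(X_train["moon"].nunique(), "moons,", len(feature_cols), "feature columns")
```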

y_train

  • moon: Same as in X_train.

  • id: Same as in X_train.

  • (target_w, …, target_b): the targets that may help you build your models. target_w, target_r, target_g, and target_b correspond to compounded returns over 7, 28, 63, and 91 days, respectively.
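As a purely illustrative aside (the targets ship pre-computed in y_train), an n-day compounded return can be derived from daily returns like this:

```python
import numpy as np

def compounded_return(daily_returns: np.ndarray) -> float:
    # Compound a sequence of daily returns into a single n-day return.
    return float(np.prod(1.0 + daily_returns) - 1.0)

# e.g. a 7-day horizon, as in target_w
week = np.array([0.01, -0.002, 0.003, 0.0, 0.004, -0.001, 0.002])
print(compounded_return(week))  # ≈ 0.0161
```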

X_test - y_test

X_test and y_test have the same structure as X_train and y_train but comprise only 13 moons. These files are used to simulate the submission process locally via crunch.test() (within the code) or crunch test (via the CLI). The aim is to help participants debug their code and make successful submissions. A successful local test usually means the code will run without errors on the submission platform. These files contain the 13 moons for which the longest target (target_b) is not yet resolved; the missing values for each target are replaced with -1.
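Beyond crunch.test(), you can hand-roll local checks on these files. A minimal sketch, where the Parquet file name is an assumption:

```python
import pandas as pd

# y_test file name is hypothetical, mirroring the X_train sketch above.
y_test = pd.read_parquet("y_test.parquet")

# Unresolved targets are encoded as -1; drop them before any local
# scoring, e.g. for target_b:
resolved = y_test[y_test["target_b"] != -1]
```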

Note: the features are split into two groups: the legacy features and the v2 features, which are suffixed with "_v2".
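Separating the two groups is straightforward, for example:

```python
# Continuing from the loading sketch above: split features by suffix.
feature_cols = [c for c in X_train.columns if "Feature" in c]
v2_features = [c for c in feature_cols if c.endswith("_v2")]
legacy_features = [c for c in feature_cols if not c.endswith("_v2")]
```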

The Performance Metric

The infer function from your code will return your predictions.
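A minimal sketch of such a function follows; the output column name is an assumption, and the Quickstarter notebook defines the exact interface.

```python
import pandas as pd

def infer(X_test: pd.DataFrame) -> pd.DataFrame:
    # Return one score per (moon, id) pair. The `prediction` column name
    # is hypothetical; follow the Quickstarter for the exact schema.
    prediction = X_test[["moon", "id"]].copy()
    # Placeholder signal: mean of all feature columns; replace with a model.
    prediction["prediction"] = X_test.filter(like="Feature").mean(axis=1)
    return prediction
```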

A Spearman rank correlation will be computed against the live targets.
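You can approximate this score locally. This sketch rank-correlates predictions with a resolved target within each moon; the aggregation across moons is an assumption, since live scores are published per target as they resolve.

```python
import pandas as pd
from scipy.stats import spearmanr

def spearman_per_moon(pred: pd.DataFrame, truth: pd.DataFrame,
                      target: str = "target_b") -> pd.Series:
    # Spearman rank correlation between predictions and the target,
    # computed within each moon.
    merged = pred.merge(truth, on=["moon", "id"])
    return merged.groupby("moon").apply(
        lambda g: spearmanr(g["prediction"], g[target])[0]
    )
```

Averaging the resulting per-moon series gives a single local summary number per target.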

Reward Scheme

All rewards are computed on the leaderboards.

The Historical Rewards are the sum of every payout you have received from the DataCrunch competition.

The Projected Rewards are the current estimated rewards yet to be distributed.

Payouts calculation

Payouts are computed based on the rank of your prediction for each target. The higher the Spearman rank correlation between your prediction and the market realisation, the higher your rank on the leaderboard.

Payouts are distributed according to an exponential function of your position on the leaderboards; as shown in the graph below, the top 20 crunchers earn approximately 30% of the total rewards.
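For illustration only, here is an exponential payout curve with a hypothetical decay rate, tuned so the top 20 take roughly that 30% share; the fund's actual parameters are not published here.

```python
import numpy as np

def payouts(pool: float, n_crunchers: int, k: float = 0.018) -> np.ndarray:
    # Exponentially decaying weights over leaderboard ranks, normalised
    # so the whole pool is distributed. The decay rate k is hypothetical.
    weights = np.exp(-k * np.arange(n_crunchers))
    return pool * weights / weights.sum()

p = payouts(120_000, 500)
print(f"top-20 share: {p[:20].sum() / p.sum():.0%}")  # ≈ 30%
```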

Computing Resources

Competitors are allocated a fixed quantity of computing resources in the cloud environment to execute their code.

During the Submission phase, you are entitled to 10 hours of GPU or CPU computing time per week; during the Out-Of-Sample phase, this allocation is increased by 10%.

Quickstarter Notebook

A Quickstarter notebook is available below so you can get familiar with what is expected from you.

Competition Host

Crunch Capital

Prize Pool

$120,000/year

Tags

Alternative Risk Premia, Equity, Finance, Long Short, Ranking, Tabular
