# Financial Research Data Services

`frds`, *Financial Research Data Services*, is a Python library that simplifies
the complexities often encountered in financial research. It provides a collection
of ready-to-use methods for computing a wide array of measures from the literature.

It is developed by Dr. Mingze Gao from Macquarie University, and initially started as a personal project during his postdoctoral research fellowship at the University of Sydney.

> **Important**
>
> This project is under active development. Breaking changes may be expected.
>
> If there’s any issue (likely), please contact me at mingze.gao@mq.edu.au.

## Quick start

`frds` is available on PyPI and can be installed via `pip`.

```
pip install frds
```

The structure of `frds` is simple:

- `frds.algorithms` provides a collection of algorithms.
- `frds.measures` provides a collection of measures.
- `frds.datasets` provides example datasets.

## Read more

- Supported Measures
  - Absorption Ratio
  - Contingent Claim Analysis
  - Distress Insurance Premium
  - Kyle’s Lambda
  - Lerner Index (Banks)
  - Limit Order Book Slope
  - Long-Run Marginal Expected Shortfall (LRMES)
  - Marginal Expected Shortfall
  - Option Prices
  - Probability of Informed Trading (PIN)
  - Spread and Price Impact
  - SRISK
  - Systemic Expected Shortfall
  - Z-score
- Algorithms
- Datasets

## Examples

Some simple examples.

### Measure

`frds.measures.DistressInsurancePremium` estimates the Distress Insurance
Premium, a systemic risk measure of a hypothetical insurance premium against
systemic financial distress, which is defined as total losses that exceed a
given threshold, e.g., 15%, of total bank liabilities.

```
>>> import numpy as np
>>> from frds.measures import DistressInsurancePremium
>>> # hypothetical implied default probabilities of 6 banks
>>> default_probabilities = np.array([0.02, 0.10, 0.03, 0.20, 0.50, 0.15])
>>> correlations = np.array(
... [
... [ 1.000, -0.126, -0.637, 0.174, 0.469, 0.283],
... [-0.126, 1.000, 0.294, 0.674, 0.150, 0.053],
... [-0.637, 0.294, 1.000, 0.073, -0.658, -0.085],
... [ 0.174, 0.674, 0.073, 1.000, 0.248, 0.508],
... [ 0.469, 0.150, -0.658, 0.248, 1.000, -0.370],
... [ 0.283, 0.053, -0.085, 0.508, -0.370, 1.000],
... ]
... )
>>> dip = DistressInsurancePremium(default_probabilities, correlations)
>>> dip.estimate()
0.2865733550799999
```

### Algorithm

Use `frds.algorithms.GARCHModel` to estimate a GARCH(1,1) model.
The results are as good as those obtained from other software or libraries.

```
>>> import pandas as pd
>>> from pprint import pprint
>>> from frds.algorithms import GARCHModel
>>> data_url = "https://www.stata-press.com/data/r18/stocks.dta"
>>> df = pd.read_stata(data_url, convert_dates=["date"])
>>> nissan = df["nissan"].to_numpy() * 100
>>> model = GARCHModel(nissan)
>>> res = model.fit()
>>> pprint(res)
Parameters(mu=0.019315543596552513,
omega=0.05701047522984261,
alpha=0.0904653253307871,
beta=0.8983752570013462,
loglikelihood=-4086.487358003049)
```

Use `frds.algorithms.GARCHModel_CCC` to estimate a bivariate GARCH(1,1)-CCC model.
The results are as good as those obtained in Stata, if not better (based on log-likelihood).

```
>>> from frds.algorithms import GARCHModel_CCC
>>> toyota = df["toyota"].to_numpy() * 100
>>> model_ccc = GARCHModel_CCC(toyota, nissan)
>>> res = model_ccc.fit()
>>> pprint(res)
Parameters(mu1=0.02745814255283541,
omega1=0.03401400758840226,
alpha1=0.06593379740524756,
beta1=0.9219575443861723,
mu2=0.009390068254041505,
omega2=0.058694325049554734,
alpha2=0.0830561828957614,
beta2=0.9040961791372522,
rho=0.6506770477876749,
loglikelihood=-7281.321453218112)
```

Use `frds.algorithms.GARCHModel_DCC` to estimate a bivariate GARCH(1,1)-DCC model.
The results are as good as those obtained in Stata/R, if not better (based on log-likelihood).

```
>>> from frds.algorithms import GARCHModel_DCC
>>> model_dcc = GARCHModel_DCC(toyota, nissan)
>>> res = model_dcc.fit()
>>> from pprint import pprint
>>> pprint(res)
Parameters(mu1=0.039598837827953585,
omega1=0.027895534722110118,
alpha1=0.06942955278530698,
beta1=0.9216715294923623,
mu2=0.019315543596552513,
omega2=0.05701047522984261,
alpha2=0.0904653253307871,
beta2=0.8983752570013462,
a=0.04305972552559641,
b=0.894147940765443,
loglikelihood=-7256.572183143142)
```