Quantlab v3.9.2: Strategy Collection

The strategy set accumulated so far:

The code has been published to the Planet (our Knowledge Planet community):

Today's strategy is risk parity:

name = "全球大类资产风险平价"
desc = "全球大类资产风险平价"
symbols = ["563300.SH", "510300.SH", "510500.SH", "512100.SH", "159915.SZ", "159967.SZ", "159920.SZ", "513100.SH", "513500.SH", "518880.SH", "159985.SZ", "513520.SH", "510050.SH"]
algo_period = "RunMonthly"
algo_period_days = 20  # only used when algo_period is RunDays
rules_buy = []
at_least_buy = 1
rules_sell = []
at_least_sell = 1
order_by = ""
topK = 1
dropN = 0
b_ascending = 0
algo_weight = "WeightERC"
algo_weight_fix = []
feature_names = []
features = []
start_date = "20100101"
end_date = ""
commission = 0.0001
slippage = 0.0001
init_cash = 1000000
benchmark = "510300.SH"

The operator code is as follows (imports added so the snippet is self-contained; calc_erc_weights is ffn's solver, as the docstring notes, and Algo is the engine's base operator class, presumably defined alongside this code in engine/algos.py):

import pandas as pd
from ffn import calc_erc_weights  # ffn's equal-risk-contribution solver


class WeightERC(Algo):
    """
    Sets temp['weights'] based on equal risk contribution algorithm.

    Sets the target weights based on ffn's calc_erc_weights. This
    is an extension of the inverse volatility risk parity portfolio in
    which the correlation of asset returns is incorporated into the
    calculation of risk contribution of each asset.

    The resulting portfolio is similar to a minimum variance portfolio
    subject to a diversification constraint on the weights of its components
    and its volatility is located between those of the minimum variance and
    equally-weighted portfolios (Maillard 2008).

    See:
        https://en.wikipedia.org/wiki/Risk_parity

    Args:
        * lookback (DateOffset): lookback period for estimating covariance
        * initial_weights (list): Starting asset weights [default inverse vol].
        * risk_weights (list): Risk target weights [default equal weight].
        * covar_method (str): method used to estimate the covariance. See ffn's
          calc_erc_weights for more details. (default ledoit-wolf).
        * risk_parity_method (str): Risk parity estimation method. see ffn's
          calc_erc_weights for more details. (default ccd).
        * maximum_iterations (int): Maximum iterations in iterative solutions
          (default 100).
        * tolerance (float): Tolerance level in iterative solutions (default 1E-8).


    Sets:
        * weights

    Requires:
        * selected

    """

    def __init__(
            self,
            lookback=pd.DateOffset(months=3),
            initial_weights=None,
            risk_weights=None,
            covar_method="ledoit-wolf",
            risk_parity_method="ccd",
            maximum_iterations=100,
            tolerance=1e-8,
            lag=pd.DateOffset(days=0),
    ):
        super(WeightERC, self).__init__()
        self.lookback = lookback
        self.initial_weights = initial_weights
        self.risk_weights = risk_weights
        self.covar_method = covar_method
        self.risk_parity_method = risk_parity_method
        self.maximum_iterations = maximum_iterations
        self.tolerance = tolerance
        self.lag = lag

    def __call__(self, target):
        selected = target.temp["selected"]
        curr_symbols = target.df_bar.index

        selected = [s for s in selected if s in curr_symbols]

        if len(selected) == 0:
            target.temp["weights"] = {}
            return True

        if len(selected) == 1:
            target.temp["weights"] = {selected[0]: 1.0}
            return True

        t0 = target.now - self.lag
        prc = target.df_close.loc[t0 - self.lookback: t0, selected]

        returns = prc.pct_change().dropna()
        if len(returns) < 10:
            return False

        tw = calc_erc_weights(
            returns,
            initial_weights=self.initial_weights,
            risk_weights=self.risk_weights,
            covar_method=self.covar_method,
            risk_parity_method=self.risk_parity_method,
            maximum_iterations=self.maximum_iterations,
            tolerance=self.tolerance,
        )

        target.temp["weights"] = tw.dropna().to_dict()
        return True
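
To make "equal risk contribution" concrete: with weights w and covariance matrix S, asset i's fractional risk contribution is w_i * (S w)_i / (wᵀ S w), and the ERC solution equalizes these. A standalone sketch with made-up returns (the asset names and data are purely illustrative):

import numpy as np
import pandas as pd
from ffn import calc_erc_weights

# toy daily returns for three hypothetical assets with different volatilities
rng = np.random.default_rng(42)
returns = pd.DataFrame(rng.normal(0.0, [0.01, 0.02, 0.03], size=(250, 3)),
                       columns=['A', 'B', 'C'])

w = calc_erc_weights(returns)            # pandas Series of portfolio weights
cov = returns.cov().values
wv = w.values
rc = wv * (cov @ wv) / (wv @ cov @ wv)   # fractional risk contribution per asset
print(w.round(3).to_dict())
print(rc.round(3))                       # roughly equal (~1/3 each); calc_erc_weights uses a
                                         # ledoit-wolf covariance internally, so this
                                         # sample-covariance check is only approximate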

Daily self-reflection

I want to give quant beginners a more accessible, out-of-the-box platform, so over the past few days I have been researching frontend development.

I considered uni-app, Ionic, and Flutter.

uni-app and Ionic are similar: both are web solutions, which can of course be packaged as apps, while Flutter builds native apps. An app is not a direct hard requirement for us, though having one would certainly be nice.

The drawback of all three is that each introduces a new tech stack. uni-app and Ionic both support Vue 3, while Flutter is built on Dart.

I was initially quite pleased with Dart as a language. But after wrestling with Android Studio, the Android SDK, the Flutter SDK, and so on (the packages are huge, often several gigabytes, and even slower to download from here), I finally gave up while trying to start the Android emulator. It is simply too heavy.

That reminded me of Flet, which I had tinkered with for a few days before.

Coming back to it after a while brought some pleasant surprises.

I had tried both Flet and NiceGUI earlier (see 工欲善其事,必先利其器-NiceGUI:AI量化投资研究开发). The immediate reason I chose NiceGUI at the time was that Flet, surprisingly, had no DateTime Picker control. The reason I later dropped NiceGUI is that its controls are oriented toward desktop programs, so the resulting web pages look like neither one thing nor the other, while our real need is mobile H5 first and a native app second.

Screenshots of the Flet build; decent enough.

Previous articles:

Quantlab v3.9.2:策略集合——创成长与红利低波动的智能Beta策略(年化29.3%,最大回撤24%)(附源码)

Quantlab3.9代码:内置大模型LLM因子挖掘,全A股数据源以及自带GUI界面

AI量化实验室——2024量化投资的星辰大海

Continuing with strategies: large/small-cap rotation between the CSI 300 ETF and the ChiNext ETF. Hold whichever has the greater momentum; if both are negative, hold cash (a config sketch follows the results below):

Annualized return 18.1%, maximum drawdown 28.2%.
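
A hedged sketch of how this rotation maps onto the Task config format (the field names are those of engine/task.py; the 20-day momentum window and the zero threshold are my assumptions based on the description above):

name = "大小盘轮动"
symbols = ["510300.SH", "159915.SZ"]
algo_period = "RunDaily"
rules_buy = ["roc_20>0"]    # only hold when momentum is positive, otherwise stay in cash
rules_sell = ["roc_20<0"]
order_by = "roc_20"         # rank by 20-day momentum and take the stronger leg
topK = 1
feature_names = ["roc_20"]
features = ["roc(close,20)"]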

In fact, as you may have noticed, taken together with yesterday's three strategies:

Quantlab v3.9.2:策略集合——创成长与红利低波动的智能Beta策略(年化29.3%,最大回撤24%)(附源码)

developing a strategy with a 20-30% annualized return is not hard.

So why does investing still feel so difficult?

Because of uncertainty; more precisely, because of drawdowns.

During a drawdown you cannot tell whether the strategy or factor has genuinely stopped working, or whether it is just a normal pullback.

A domestic (China) version of the "All Weather" strategy:

The strategy config is as follows:

name = "中国版全天候策略"
desc = "中国版全天候策略"
symbols = ['159928.SZ','510050.SH','512010.SH','513100.SH','518880.SH','511220.SH','511010.SH','161716.SZ']
algo_period = "RunQuarterly"
algo_period_days = -1  # only used when algo_period is RunDays
rules_buy = []
at_least_buy = 1
rules_sell = []
at_least_sell = 1
order_by = ""
topK = 1
dropN = 0
b_ascending = 0
algo_weight = "WeightFix"
algo_weight_fix = [0.03,0.06,0.08,0.05,0.1,0.32,0.26,0.1]  # used when algo_weight is WeightFix
feature_names = []
features = []
start_date = "20100101"
end_date = ""
commission = 0.0001
slippage = 0.0001
init_cash = 1000000
benchmark = "510300.SH"

The system source code (strategies and data) can be downloaded from the Planet:

All strategy parameters live in this directory (still being updated continuously):

Our Planet revolves around one core goal: developing effective, live-tradable strategies.

Code update (strategy collection updated).

The Task definition lives in engine/task.py; only a handful of config fields are needed:

name = "创业板动量择时"
desc = "创业板动量择时:在动量大的时候买入,动量小的时候卖出"
symbols = ['159915.SZ']
algo_period = "RunDaily"
algo_period_days = 20  # only used when algo_period is RunDays
rules_buy = ['roc_20>N1']
at_least_buy = 1
rules_sell = ['roc_20<N2']
at_least_sell = 1
order_by = ""
topK = 1
dropN = 0
b_ascending = 0
algo_weight = "WeightEqually"
algo_weight_fix = []
feature_names = ['roc_20']
features = ['roc(close,20)']
start_date = "20100101"
end_date = ""
commission = 0.0001
slippage = 0.0001
init_cash = 1000000
benchmark = "510300.SH"
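
Equivalently, the same task can be constructed directly in Python with the Task dataclass shown below (N1/N2 above are placeholder thresholds, so the 0.08 and 0.0 used here are purely illustrative):

task = Task(
    name='创业板动量择时',
    desc='创业板动量择时:在动量大的时候买入,动量小的时候卖出',
    symbols=['159915.SZ'],
    algo_period='RunDaily',
    rules_buy=['roc_20>0.08'],   # illustrative value for N1
    rules_sell=['roc_20<0.0'],   # illustrative value for N2
    feature_names=['roc_20'],
    features=['roc(close,20)'],
)
algos = task.get_algos()  # RunDaily -> SelectAll -> SelectBySignal -> WeightEqually -> Rebalance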

The backtest results are as follows:

The ChiNext Growth plus Dividend Low-Volatility smart-beta strategy (annualized 29.3%, max drawdown 24%):

Global multi-asset rotation, annualized 26.1%:

And then ChiNext timing:

import importlib
import json
from dataclasses import dataclass, field, asdict

from config import DATA_DIR
from datafeed.dataloader import CSVDataloader
from engine.algos import *  # algos (RunAlways, SelectAll, ...); logger is assumed to come from here too

@dataclass
class Task:
    name: str = '策略名称'
    desc: str = '策略描述'

    # instrument universe
    symbols: list[str] = field(default_factory=list)

    algo_period: str = 'RunDaily'
    algo_period_days: int = 20  # only needed when algo_period is RunDays

    # rule-based selection: if rules_buy/rules_sell contain at least one rule,
    # a SelectBySignal algo is added after SelectAll
    rules_buy: list[str] = field(default_factory=list)  # e.g. roc(close,20)>0.08
    at_least_buy: int = 1
    rules_sell: list[str] = field(default_factory=list)  # e.g. roc(close,20)<0
    at_least_sell: int = 1

    # ranking: when order_by is non-empty, a SelectTopK algo is added after selection
    order_by: str = ''  # e.g. roc(close,20), or roc(close,20) + slope(close,20)
    topK: int = 1
    dropN: int = 0
    b_ascending: bool = 0  # descending by default; 0 = False

    # position weighting, equal weight by default
    algo_weight: str = 'WeightEqually'
    algo_weight_fix: list = field(default_factory=list)  # required when algo_weight is WeightFix

    feature_names: list = field(default_factory=list)  # e.g. roc_20
    features: list = field(default_factory=list)  # e.g. roc(close,20)

    # backtest settings the user may override; defaults apply otherwise,
    # and an empty string means unset
    start_date: str = '20100101'
    end_date: str = ''
    commission: float = 0.0001
    slippage: float = 0.0001
    init_cash: int = 100 * 10000
    benchmark: str = '510300.SH'

    def __str__(self):
        return self.name

    def load_datas(self):
        logger.info('Loading data...')
        loader = CSVDataloader(DATA_DIR.joinpath('universe'), self.symbols, start_date=self.start_date,
                               end_date=self.end_date)
        df = loader.load(fields=self.features, names=self.feature_names)
        df['date'] = df.index
        df.dropna(inplace=True)
        return df

    def _parse_period(self):
        module = importlib.import_module('engine.algos')
        if self.algo_period == 'RunDays':
            return getattr(module, self.algo_period)(self.algo_period_days)
        if self.algo_period in ['RunWeekly', 'RunOnce', 'RunMonthly', 'RunQuarterly', 'RunYearly']:
            return getattr(module, self.algo_period)()
        return RunAlways()

    def _parse_weights(self):
        if self.algo_weight == 'WeightEqually':
            return WeightEqually()
        if self.algo_weight == 'WeightFix':
            if len(self.symbols) != len(self.algo_weight_fix):
                logger.error('len(algo_weight_fix) != len(symbols)')
                return None
            return WeightFix(self.algo_weight_fix)
        if self.algo_weight == 'WeightERC':
            return WeightERC()
        return None

    def get_algos(self):
        algos = []
        algos.append(self._parse_period())
        algos.append(SelectAll())
        if len(self.rules_buy) or len(self.rules_sell):
            algos.append(SelectBySignal(rules_buy=self.rules_buy,
                                        buy_at_least_count=self.at_least_buy,
                                        rules_sell=self.rules_sell,
                                        sell_at_least_count=self.at_least_sell
                                        ))

        if len(self.order_by):
            algos.append(SelectTopK(factor_name=self.order_by, K=self.topK, drop_top_n=self.dropN,
                                    b_ascending=self.b_ascending))

        algos.append(self._parse_weights())
        algos.append(Rebalance())
        return algos

    def to_toml(self, name):
        import toml
        with open(DATA_DIR.joinpath('tasks').joinpath(name + '.toml'), "w", encoding='UTF-8') as f:
            toml.dump(asdict(self), f)

    def to_json(self, name):
        with open(DATA_DIR.joinpath('tasks').joinpath(name + '.json'), "w", encoding='UTF-8') as f:
            json.dump(asdict(self), f, ensure_ascii=False)


def task_from_json(name):
    with open(DATA_DIR.joinpath('tasks').joinpath(name), "r", encoding='UTF-8') as f:
        json_data = json.load(f)
        return Task(**json_data)
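
A quick round trip with the two helpers above (the task name is hypothetical):

task = Task(name='demo', symbols=['510300.SH'])
task.to_json('demo')                # writes <DATA_DIR>/tasks/demo.json
same = task_from_json('demo.json')  # note: the loader expects the full file name
assert same.name == task.name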


@dataclass
class TaskAssetsAllocation(Task):  # asset-allocation template: select all, weight, rebalance

    def get_algos(self):
        return [
            self._parse_period(),
            SelectAll(),
            self._parse_weights(),
            Rebalance()
        ]


@dataclass
class TaskRolling(Task):  # rotation strategy template
    def get_algos(self):
        return [
            RunAlways(),
            SelectBySignal(rules_buy=self.rules_buy,
                           buy_at_least_count=self.at_least_buy,
                           rules_sell=self.rules_sell,
                           sell_at_least_count=self.at_least_sell
                           ),
            SelectTopK(factor_name=self.order_by, K=self.topK, drop_top_n=self.dropN,
                       b_ascending=self.b_ascending),
            self._parse_weights(),
            Rebalance()
        ]


@dataclass
class TaskRolling_Model(Task):  # rotation template with model-based selection
    model_name: str = ''  # referenced by SelectByModel below but missing from the base Task
    def get_algos(self):
        return [
            RunAlways(),
            SelectBySignal(rules_buy=self.rules_buy,
                           buy_at_least_count=self.at_least_buy,
                           rules_sell=self.rules_sell,
                           sell_at_least_count=self.at_least_sell
                           ),
            SelectByModel(model_name=self.model_name, feature_names=self.feature_names),
            SelectTopK(factor_name=self.order_by, K=self.topK, drop_top_n=self.dropN,
                       b_ascending=self.b_ascending),
            self._parse_weights(),
            Rebalance()
        ]


@dataclass
class TaskPickTime(Task):  # market-timing strategy template
    def get_algos(self):
        return [
            RunAlways(),
            SelectBySignal(rules_buy=self.rules_buy,
                           buy_at_least_count=self.at_least_buy,
                           rules_sell=self.rules_sell,
                           sell_at_least_count=self.at_least_sell
                           ),
            WeightEqually(),
            Rebalance()
        ]
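
These templates differ only in the pipeline that get_algos assembles. For example, the all-weather config from earlier maps naturally onto TaskAssetsAllocation (a sketch; this mapping is my assumption, not stated in the original configs):

task = TaskAssetsAllocation(
    name='中国版全天候策略',
    symbols=['159928.SZ', '510050.SH', '512010.SH', '513100.SH',
             '518880.SH', '511220.SH', '511010.SH', '161716.SZ'],
    algo_period='RunQuarterly',
    algo_weight='WeightFix',
    algo_weight_fix=[0.03, 0.06, 0.08, 0.05, 0.10, 0.32, 0.26, 0.10],
)
algos = task.get_algos()  # RunQuarterly -> SelectAll -> WeightFix -> Rebalance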

Daily self-reflection

Focus: a single needle can pierce the sky.

Do some things, refrain from others; some things must be done.

The circle of competence versus the circle of action.

The essence of quantitative investing is still investing.

There is no holy grail in investing; it is an infinite game.

Speculation is like a casino: it fascinates so many people because some really do win, and win big.

As Taleb puts it in Fooled by Randomness, among seven billion people on Earth there really are a few lucky ones whose success is pure luck.

What is most dangerous is when such a person writes books and lectures, presenting his personal history as experience for us to take at face value.

The "god of stocks" does not actually trade stocks. Eighty percent of his company's assets sit in wholly owned subsidiaries, woven together through alliances; you could say he runs a huge conglomerate spanning multiple industries and supply chains that support one another. That legend cannot be replicated.

Some ordinary people do make a living from trading, but only they know what it really costs them.

I spent a good stretch of my career as a product manager, and I run this platform as a product rather than as so-called paid knowledge.

The most important question for any product: whose problem does it solve, and which problem?

Several kinds of users:

1. Users: people who cannot code, are learning to invest, perhaps know a little quantitative investing, and can subscribe to others' strategies. Their path: obtain strategies, understand strategies, create strategies, share strategies.

2. Strategy or portfolio managers, who earn money by sharing strategy parameters.
