vn.py Quantitative Community
By Traders, For Traders.
Member
Joined:
Posts: 77
Reputation: 17

This is a bit of a rough ride, since I am still learning myself and the code is being polished gradually. It was first implemented under Python 2.7 with vnpy 1.9.2, and later largely ported to Python 3.7 with xgboost support added. Apologies for the messy code.

First, a few words about Scikit-learn: it is an open-source Python framework built specifically for machine learning. Unlike deep-learning libraries such as TensorFlow, Scikit-learn supports neither deep learning nor GPU acceleration; on the other hand, compared with TensorFlow's near-black-box multi-layer neural networks, its models are much easier to analyze and explain mathematically.

Classification means identifying which category a given object belongs to; it falls under supervised learning, with spam detection and image recognition among the most common applications. Algorithms already implemented in Scikit-learn include support vector machines (SVM), nearest neighbors, logistic regression, random forests, decision trees, and multi-layer perceptron (MLP) neural networks. Here we will use logistic regression, a decision tree, an MLP neural network, and an SVM; each is really just a couple of lines of code.

Setting aside the complex mathematics inside, we use Scikit-learn here purely for its feature-based classification capability.
1) Feature selection: to keep the values well-behaved, we avoid absolute price levels (such as highs) and instead use indicators that are independent of specific price levels: ATR, CCI, RSI, standard deviation, and the bar's percentage change.
2) Labeling: for futures price action there are three classes: 1 means the price rises afterwards, 0 means no clear pattern, and -1 means it falls. A linear regression is fitted to the 6 bars following the current one; if the slope is significantly negative the bar is labeled -1 (down), if significantly positive it is labeled 1 (up), and if the p-value is too weak or the slope too shallow it is labeled 0.
So the features are ATR, CCI, RSI, standard deviation, and the bar's percentage change (you can of course add KDJ, MACD, and more), and the classes are 1, 0, and -1. Machine learning is then used to uncover the hidden mapping from features to classes and guide trading. It is all very rough.
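The labeling rule described above can be sketched as follows. This is a minimal standalone version; the post's actual implementation lives in the addTrend method below, and the thresholds here (p-value and minimum slope) are illustrative defaults:

```python
import numpy as np
import scipy.stats as st

def label_trend(closes, horizon=6, p_max=0.025, min_slope=0.5):
    """Label each bar by the linear-regression slope of the next `horizon` closes:
    1 = uptrend, -1 = downtrend, 0 = no significant trend."""
    labels = np.zeros(len(closes))
    x = np.arange(horizon) + 1
    for i in range(len(closes) - horizon):
        res = st.linregress(x=x, y=closes[i:i + horizon])
        # only label a trend when the fit is significant AND the slope is steep enough
        if res.pvalue < p_max:
            if res.slope > min_slope:
                labels[i] = 1
            elif res.slope < -min_slope:
                labels[i] = -1
    return labels
```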

The overall code logic is outlined below, borrowing heavily from this article: https://mp.weixin.qq.com/s?__biz=MjM5MDEzNDAyNQ==&mid=2650314212&idx=1&sn=0f04627d34f4305e0386fc7562563bff&chksm=be454f828932c694f8ce107249457e0ffba705e6e9531807e86a1f468a3a549001c00bb5389e&scene=21

I. Futures K-line data: import the 1-minute bars, then add the feature and class columns.
• Use the DataAnalyzer built earlier to read 1-minute bars from MongoDB or CSV into a DataFrame, then merge them into n-minute bars (n = 5 in the code below).
• Still in DataAnalyzer, use ta-lib to add ATR, CCI, RSI, standard deviation, MACD, and percentage-change columns to the DataFrame.
• A new method, addTrend, fits a linear regression (scipy.stats) over the bars following the current one and assigns the class label -1, 0, or 1.

II. Data processing: split out the feature array and the class array for the machine-learning step, then divide them into training and test sets.
(1) Split out the feature array X and the class array y
(2) Split into training and test sets
• model_selection.train_test_split()
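A minimal sketch of steps (1) and (2) on a toy feature frame (the column names here are illustrative, matching the indicators used later):

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split

np.random.seed(0)
# toy frame: two feature columns plus the label column "tradeindictor"
df = pd.DataFrame({
    "atr": np.random.rand(100),
    "rsi": np.random.rand(100),
    "tradeindictor": np.random.choice([-1, 0, 1], size=100),
})
y = df["tradeindictor"].values                 # class array
X = df.drop(["tradeindictor"], axis=1).values  # feature array
# 70/30 train/test split with a fixed seed for reproducibility
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
```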

III. Feature engineering: many features were computed above, but some show no pattern at all (or are purely random), contribute nothing, and can be dropped.
• Select by p-value: feature_selection.SelectFpr()
• Select the top-scoring features by percentile: feature_selection.SelectPercentile()
SelectPercentile is used here.
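A small self-contained example of SelectPercentile on synthetic data, mirroring the percentile=70 setting used in the code below:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectPercentile, mutual_info_classif

# synthetic 10-feature classification problem
X, y = make_classification(n_samples=200, n_features=10, n_informative=4, random_state=0)
selector = SelectPercentile(mutual_info_classif, percentile=70)
X_new = selector.fit_transform(X, y)  # keeps the 7 highest-scoring features
```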

IV. Model definition, tuning, and selection
The following models are evaluated, with grid search for tuning:
1) LogisticRegression (logistic regression)
2) DecisionTreeClassifier (decision tree)
3) SVC (support vector classification)
4) MLP (neural network)
• Cross-validation + grid search: model_selection.GridSearchCV()
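As a minimal illustration of cross-validated grid search (a single decision-tree grid here; the full four-model search appears in the code below):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=8, random_state=7)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=7)
grid = GridSearchCV(DecisionTreeClassifier(random_state=0),
                    {"max_depth": [1, 2, 3, 4, 5, 6]},
                    scoring="accuracy", cv=cv, n_jobs=-1)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)  # best depth and its CV accuracy
```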

V. Model testing and evaluation: take the best model from the grid search, test it, and inspect the results.
• Prediction: model.predict()
• Accuracy: metrics.accuracy_score()
• Precision: metrics.precision_score()
• Recall: metrics.recall_score()
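A toy illustration of the three metrics (the labels are made up). Note that with average="micro" on single-label multiclass data, precision and recall both reduce to plain accuracy, which is why the run output further down prints three identical numbers:

```python
from sklearn import metrics

# toy three-class labels, matching the -1/0/1 scheme
y_true = [1, 0, -1, 0, 1, 0]
y_pred = [1, 0, 0, 0, 1, -1]

acc = metrics.accuracy_score(y_true, y_pred)
prec = metrics.precision_score(y_true, y_pred, average="micro")
rec = metrics.recall_score(y_true, y_pred, average="micro")
# micro-averaging pools all classes, so precision == recall == accuracy here
```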

VI. Model saving and loading
• Save: joblib.dump()
• Load: joblib.load(); the loaded model can then be used inside vnpy.
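A minimal save/load round trip. Note that recent scikit-learn versions removed the bundled sklearn.externals.joblib (used in the code below), so the standalone joblib package is imported directly here:

```python
import joblib  # modern replacement for sklearn.externals.joblib
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=50, n_features=4, random_state=0)
model = DecisionTreeClassifier(random_state=0).fit(X, y)

joblib.dump(model, "clf_selected.m")      # save to disk
restored = joblib.load("clf_selected.m")  # reload, e.g. in a vnpy strategy's init
```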

The tidied-up code follows below. One last caveat: although accuracy reaches roughly 70%, the model mostly tells you to stay flat. Very wise of it...


The code is below; it is also on my GitHub.

# encoding: UTF-8
import warnings
warnings.filterwarnings("ignore")
from pymongo import MongoClient, ASCENDING
import pandas as pd
import numpy as np
from datetime import datetime
import talib
import matplotlib.pyplot as plt
import scipy.stats as st

from sklearn.model_selection import train_test_split
# LogisticRegression (logistic regression)
from sklearn.linear_model import LogisticRegression
# DecisionTreeClassifier (decision tree)
from sklearn.tree import DecisionTreeClassifier
# SVC (support vector classification)
from sklearn.svm import SVC
# MLPClassifier (neural network)
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import GridSearchCV

class DataAnalyzerforSklearn(object):
    """
    Prepare data for sklearn classification. The class label is derived from a
    linear regression over the slope of the next 6 bars.
    Instead of raw HLOC, the following price-level-independent features are used:
    1. Percentage change
    2. Standard deviation
    3. MACD
    4. CCI
    5. ATR
    6. Moving-average slope before the bar
    7. RSI
    """

    def __init__(self, exportpath="C:\\Project\\", datformat=['datetime', 'high', 'low', 'open', 'close','volume']):
        self.mongohost = None
        self.mongoport = None
        self.db = None
        self.collection = None
        self.df = pd.DataFrame()
        self.exportpath = exportpath
        self.datformat = datformat
        self.startBar = 2
        self.endBar = 12
        self.step = 2
        self.pValue = 0.015
    #-----------------------------------------data import-------------------------------------------------
    def db2df(self, db, collection, start, end, mongohost="localhost", mongoport=27017, export2csv=False):
        """Read bar records from MongoDB into a DataFrame."""
        self.mongohost = mongohost
        self.mongoport = mongoport
        self.db = db
        self.collection = collection
        dbClient = MongoClient(self.mongohost, self.mongoport, connectTimeoutMS=500)
        db = dbClient[self.db]
        cursor = db[self.collection].find({'datetime':{'$gte':start, '$lt':end}}).sort("datetime",ASCENDING)
        self.df = pd.DataFrame(list(cursor))
        self.df = self.df[self.datformat]
        self.df = self.df.reset_index(drop=True)
        path = self.exportpath + self.collection + ".csv"
        if export2csv == True:
            self.df.to_csv(path, index=True, header=True)
        return self.df

    def csv2df(self, csvpath, dataname="csv_data", export2csv=False):
        """Read bar data from a CSV file into a DataFrame."""
        csv_df = pd.read_csv(csvpath)
        self.df = csv_df[self.datformat]
        self.df["datetime"] = pd.to_datetime(self.df['datetime'])
        self.df = self.df.reset_index(drop=True)
        path = self.exportpath + dataname + ".csv"
        if export2csv == True:
            self.df.to_csv(path, index=True, header=True)
        return self.df

    def df2Barmin(self, inputdf, barmins, crossmin=1, export2csv=False):
        """Merge 1-minute bars into X-minute bars (e.g. 3- or 5-minute). If the session starts at 9:01 use crossmin=0; if it starts at 9:00 use crossmin=1."""
        dfbarmin = pd.DataFrame()
        highBarMin = 0
        lowBarMin = 0
        openBarMin = 0
        volumeBarMin = 0
        datetime = 0
        for i in range(0, len(inputdf) - 1):
            bar = inputdf.iloc[i, :].to_dict()
            if openBarMin == 0:
                openBarMin = bar["open"]
            if highBarMin == 0:
                highBarMin = bar["high"]
            else:
                highBarMin = max(bar["high"], highBarMin)

            if lowBarMin == 0:
                lowBarMin = bar["low"]
            else:
                lowBarMin = min(bar["low"], lowBarMin)
            closeBarMin = bar["close"]
            datetime = bar["datetime"]
            volumeBarMin += int(bar["volume"])
            # the X-minute bar is complete once the minute is divisible by barmins
            if not (bar["datetime"].minute + crossmin) % barmins:
                # emit the finished X-minute bar
                barMin = {'datetime': datetime, 'high': highBarMin, 'low': lowBarMin, 'open': openBarMin,
                          'close': closeBarMin, 'volume': volumeBarMin}
                dfbarmin = dfbarmin.append(barMin, ignore_index=True)
                highBarMin = 0
                lowBarMin = 0
                openBarMin = 0
                volumeBarMin = 0
        if export2csv == True:
            dfbarmin.to_csv(self.exportpath + "bar" + str(barmins) + str(self.collection) + ".csv", index=True, header=True)
        return dfbarmin
    #-----------------------------------------indicator calculation-------------------------------------------------
    def dfcci(self, inputdf, n, export2csv=True):
        """Compute CCI via talib and append it to the DataFrame."""
        dfcci = inputdf
        dfcci["cci"] = None
        for i in range(n, len(inputdf)):
            df_ne = inputdf.loc[i - n + 1:i, :]
            cci = talib.CCI(np.array(df_ne["high"]), np.array(df_ne["low"]), np.array(df_ne["close"]), n)
            dfcci.loc[i, "cci"] = cci[-1]

        dfcci = dfcci.fillna(0)
        dfcci = dfcci.replace(np.inf, 0)
        if export2csv == True:
            dfcci.to_csv(self.exportpath + "dfcci" + str(self.collection) + ".csv", index=True, header=True)
        return dfcci

    def dfatr(self, inputdf, n, export2csv=True):
        """Compute ATR via talib and append it to the DataFrame."""
        dfatr = inputdf
        for i in range((n+1), len(inputdf)):
            df_ne = inputdf.loc[i - n :i, :]
            atr = talib.ATR(np.array(df_ne["high"]), np.array(df_ne["low"]), np.array(df_ne["close"]), n)
            dfatr.loc[i, "atr"] = atr[-1]
        dfatr = dfatr.fillna(0)
        dfatr = dfatr.replace(np.inf, 0)
        if export2csv == True:
            dfatr.to_csv(self.exportpath + "dfatr" + str(self.collection) + ".csv", index=True, header=True)
        return dfatr

    def dfrsi(self, inputdf, n, export2csv=True):
        """Compute RSI via talib and append it to the DataFrame."""
        dfrsi = inputdf
        dfrsi["rsi"] = None
        for i in range(n+1, len(inputdf)):
            df_ne = inputdf.loc[i - n :i, :]
            rsi = talib.RSI(np.array(df_ne["close"]), n)
            dfrsi.loc[i, "rsi"] = rsi[-1]

        dfrsi = dfrsi.fillna(0)
        dfrsi = dfrsi.replace(np.inf, 0)
        if export2csv == True:
            dfrsi.to_csv(self.exportpath + "dfrsi" + str(self.collection) + ".csv", index=True, header=True)
        return dfrsi

    def Percentage(self, inputdf, export2csv=True):
        """Compute the bar-to-bar percentage change of the close and append it to the DataFrame."""
        dfPercentage = inputdf
        for i in range(1, len(inputdf)):
            # guard against division by zero on a missing previous close
            if dfPercentage.loc[i - 1, "close"] == 0.0:
                percentage = 0
            else:
                percentage = ((dfPercentage.loc[i, "close"] - dfPercentage.loc[i - 1, "close"]) / dfPercentage.loc[i - 1, "close"]) * 100.0
            dfPercentage.loc[i, "Percentage"] = percentage

        dfPercentage = dfPercentage.fillna(0)
        dfPercentage = dfPercentage.replace(np.inf, 0)
        if export2csv == True:
            dfPercentage.to_csv(self.exportpath + "Percentage_" + str(self.collection) + ".csv", index=True, header=True)
        return dfPercentage


    def dfMACD(self, inputdf, n, export2csv=False):
        """Compute MACD via talib and append it to the DataFrame."""
        dfMACD = inputdf
        for i in range(n, len(inputdf)):
            df_ne = inputdf.loc[i - n + 1:i, :]
            macd,signal,hist = talib.MACD(np.array(df_ne["close"]),12,26,9)
            dfMACD.loc[i, "macd"] = macd[-1]
            dfMACD.loc[i, "signal"] = signal[-1]
            dfMACD.loc[i, "hist"] = hist[-1]

        dfMACD = dfMACD.fillna(0)
        dfMACD = dfMACD.replace(np.inf, 0)
        if export2csv == True:
            dfMACD.to_csv(self.exportpath + "macd" + str(self.collection) + ".csv", index=True, header=True)
        return dfMACD

    def dfSTD(self, inputdf, n, export2csv=False):
        """Compute the standard deviation via talib and append it to the DataFrame."""
        dfSTD = inputdf
        for i in range(n, len(inputdf)):
            df_ne = inputdf.loc[i - n + 1:i, :]
            std = talib.STDDEV(np.array(df_ne["close"]),n)
            dfSTD.loc[i, "std"] = std[-1]

        dfSTD = dfSTD.fillna(0)
        dfSTD = dfSTD.replace(np.inf, 0)
        if export2csv == True:
            dfSTD.to_csv(self.exportpath + "dfSTD" + str(self.collection) + ".csv", index=True, header=True)
        return dfSTD

    #-----------------------------------------trend labeling-------------------------------------------------
    def addTrend(self, inputdf, trendstep=6, export2csv=False):
        """Label each bar by the linear-regression slope of the next `trendstep` bars."""
        dfTrend = inputdf
        for i in range(1, len(dfTrend) - trendstep - 1):
            histRe = np.array(dfTrend["close"])[i:i + trendstep]
            xAixs = np.arange(trendstep) + 1
            res = st.linregress(y=histRe, x=xAixs)
            if res.pvalue < self.pValue + 0.01:
                if res.slope > 0.5:
                    dfTrend.loc[i, "tradeindictor"] = 1
                elif res.slope < -0.5:
                    dfTrend.loc[i, "tradeindictor"] = -1
        dfTrend = dfTrend.fillna(0)
        dfTrend = dfTrend.replace(np.inf, 0)
        if export2csv == True:
            dfTrend.to_csv(self.exportpath + "addTrend" + str(self.collection) + ".csv", index=True, header=True)
        return dfTrend

def GirdValuate(X_train, y_train):
    """Grid-search four classifiers:
    1) LogisticRegression (logistic regression)
    2) DecisionTreeClassifier (decision tree)
    3) SVC (support vector classification)
    4) MLPClassifier (neural network)"""
    clf_DT=DecisionTreeClassifier()
    param_grid_DT= {'max_depth': [1,2,3,4,5,6]}

    clf_Logit=LogisticRegression()
    param_grid_logit= {'solver': ['liblinear','lbfgs','newton-cg','sag']}

    clf_svc=SVC()
    param_grid_svc={'kernel':('linear', 'poly', 'rbf', 'sigmoid'),
                    'C':[1, 2, 4],
                    'gamma':[0.125, 0.25, 0.5 ,1, 2, 4]}

    clf_mlp = MLPClassifier()
    param_grid_mlp= {"hidden_layer_sizes": [(100,), (100, 30)],
                                 "solver": ['adam', 'sgd', 'lbfgs'],
                                 "max_iter": [20],
                                 "verbose": [False]
                                 }


    # bundle the models and their parameter grids
    clf=[clf_DT,clf_Logit,clf_mlp,clf_svc]
    param_grid=[param_grid_DT,param_grid_logit,param_grid_mlp,param_grid_svc]
    from sklearn.model_selection import StratifiedKFold  # cross-validation
    kfold = StratifiedKFold(n_splits=10, shuffle=True, random_state=7)  # 10 stratified folds, convenient for parallel testing

    # grid search over each model
    for i in range(0, 4):
        grid = GridSearchCV(clf[i], param_grid[i], scoring='accuracy', n_jobs=-1, cv=kfold)
        grid.fit(X_train, y_train)
        print(grid.best_params_, ': ', grid.best_score_)


if __name__ == '__main__':
    # load the data

    # exportpath = "C:\\Users\\shui0\\OneDrive\\Documents\\Project\\"
    exportpath = "C:\\Project\\"
    DA = DataAnalyzerforSklearn(exportpath)
    # import from the database
    start = datetime.strptime("20180501", '%Y%m%d')
    end = datetime.strptime("20190501", '%Y%m%d')
    df = DA.db2df(db="VnTrader_1Min_Db", collection="rb8888", start=start, end=end)
    df5min = DA.df2Barmin(df, 5)
    df5minAdd = DA.addTrend(df5min, export2csv=True)
    df5minAdd = DA.dfMACD(df5minAdd, n=34, export2csv=True)
    df5minAdd = DA.dfatr(df5minAdd, n=25, export2csv=True)
    df5minAdd = DA.dfrsi(df5minAdd, n=35, export2csv=True)
    df5minAdd = DA.dfcci(df5minAdd,n = 30,export2csv=True)
    df5minAdd = DA.dfSTD(df5minAdd, n=30, export2csv=True)
    df5minAdd = DA.Percentage(df5minAdd,export2csv = True)

    # train/test split
    df_test = df5minAdd.loc[60:, :]        # start from row 60; most earlier rows are empty
    y = np.array(df_test["tradeindictor"]) # keep only the trend label, converted to an array
    X = df_test.drop(["tradeindictor", "close", "datetime", "high", "low", "open", "volume"], axis=1).values  # drop raw HLOC so only the features remain, converted to an array


    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)  # 70/30 split
    print("Training set size: %s, test set size: %s" % (len(X_train), len(X_test)))

    from sklearn.feature_selection import SelectKBest
    from sklearn.feature_selection import SelectPercentile
    from sklearn.feature_selection import mutual_info_classif

    # Feature engineering: keep the top 70% of features by score (SelectKBest can select a fixed count instead)
    print(X_train.shape)
    selectPer = SelectPercentile(mutual_info_classif, percentile=70)
    # selectPer = SelectKBest(mutual_info_classif, k=7)
    X_train = selectPer.fit_transform(X_train, y_train)
    print(X_train.shape)
    X_test = selectPer.transform(X_test)
    # SelectFpr is an alternative:
    # selectFea=SelectFpr(alpha=0.01)
    # X_train_new = selectFea.fit_transform(X_train, y_train)
    # X_test_new = selectFea.transform(X_test)

    # Run the grid search; once the best model is known, this call can be commented out
    GirdValuate(X_train,y_train)


    # test the selected best model
    # • prediction: model.predict()
    # • accuracy: metrics.accuracy_score()
    # • precision: metrics.precision_score()
    # • recall: metrics.recall_score()
    from sklearn import metrics
    clf_selected = MLPClassifier(hidden_layer_sizes=(100, 30), max_iter=20, solver='adam')  # fill in the best model and parameters from the grid search
    # {'hidden_layer_sizes': (100, 30), 'max_iter': 20, 'solver': 'adam', 'verbose': False} :  0.9897016507648039
    clf_selected.fit(X_train, y_train)

    y_pred = clf_selected.predict(X_test)
    #accuracy
    accuracy=metrics.accuracy_score(y_true=y_test, y_pred=y_pred)
    print ('accuracy:',accuracy)

    #precision
    precision=metrics.precision_score(y_true=y_test, y_pred=y_pred,average="micro")
    print ('precision:',precision)

    #recall
    recall=metrics.recall_score(y_true=y_test, y_pred=y_pred,average="micro")
    print ('recall:',recall)

    # actual vs. predicted values
    print (y_test)
    print (y_pred)
    dfresult = pd.DataFrame({'Actual':y_test,'Predict':y_pred})
    dfresult.to_csv(exportpath + "result"  + ".csv", index=True, header=True)


    from sklearn.externals import joblib
    # save the model to disk
    joblib.dump(clf_selected,'clf_selected.m')
    # restore the model
    clf_tmp=joblib.load('clf_selected.m')

Output:
Training set size: 11673, test set size: 5003
(11673, 8)
(11673, 5)
('accuracy:', '0.7833300019988008')
('precision:', '0.7833300019988008')
('recall:', '0.7833300019988008')
[ 1. 0. 0. ... 0. 0. -1.]
[0. 0. 0. ... 0. 0. 0.]


A quick note on using this in vnpy: in the strategy's init method, load the model with clf_tmp = joblib.load('clf_selected.m'); then in onXminBar, compute the features with ArrayManager and call clf_tmp.predict() to get the predicted class: open a long on 1, a short on -1, and do nothing on 0.
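The wiring described above can be sketched as a plain helper (the function name and feature order here are illustrative; in a real strategy the inputs would come from the ArrayManager inside onXminBar, and the feature order must match the training data exactly):

```python
import numpy as np

def trade_signal(model, atr, cci, rsi, std, pct):
    """Map the classifier's -1/0/1 prediction to a trading action:
    1 -> open long, -1 -> open short, 0 -> do nothing."""
    features = np.array([[atr, cci, rsi, std, pct]])
    label = int(model.predict(features)[0])
    return {1: "long", -1: "short", 0: "pass"}[label]
```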


Studying this; respect to the master.


Hats off!


Why label with the slope of the next 6 bars rather than the last bar's percentage change?


量化爱好者 wrote:

Why label with the slope of the next 6 bars rather than the last bar's percentage change?

Either works. Buy in a bull market, sell in a bear market; the hardest part is deciding what actually counts as bull or bear.


Gave it a try: it runs, but I didn't let it finish. The machine practically froze and I couldn't do anything else; I'll try again when I have time.


simonb2277e35e4954c70 wrote:

Gave it a try: it runs, but I didn't let it finish. The machine practically froze; I'll try again when I have time.

SVM training is very slow; my little laptop ran for almost two days.


Then I'll have to try it on another machine with a GPU.


Studying this; respect to the master!


张国平 wrote:

量化爱好者 wrote:

Why label with the slope of the next 6 bars rather than the last bar's percentage change?

Either works. Buy in a bull market, sell in a bear market; the hardest part is deciding what actually counts as bull or bear.

I ran into the same problem and have not found an effective solution yet either.


In the end, the hardest part is still the logic that actually makes money; the model itself hardly matters...


Having read a few strategies, data aggregation really is tedious. @用Python的交易员, it would be much more convenient to add a function like JoinQuant's or RQ's get_price(security, start_date=None, end_date=None, frequency='daily', fields=None, skip_paused=False, fq='pre', count=None).


When calling this from vnpy, how do you compute those features with ArrayManager? Could you post the relevant code?


Is there a GitHub link?


rongtail wrote:

When calling this from vnpy, how do you compute those features with ArrayManager? Could you post the relevant code?

Same as the indicators vnpy ships with: just call ta-lib. For example, KDJ:

# -----------------------------Billy------------------------------------------
def kdj(self, fastk_period, slowk_period, slowk_matype, slowd_period, slowd_matype, array=False):
    """KDJ indicator"""

    slowk, slowd = talib.STOCH(self.high, self.low, self.close, fastk_period, slowk_period,
                               slowk_matype, slowd_period, slowd_matype)

    # compute J: J = (3 * D) - (2 * K)
    slowj = list(map(lambda x, y: 3 * x - 2 * y, slowk, slowd))
    if array:
        return slowk, slowd, slowj
    return slowk[-1], slowd[-1], slowj[-1]

lusic2019 wrote:

Is there a GitHub link?

https://github.com/BillyZhangGuoping/MarketDataAnaylzerbyDataFrame
It is all here.


I'm not quite clear on how this is called from vnpy. My understanding: replace the MongoDB data with data from ArrayManager, run the training to produce the model clf_selected, then use clf_selected.predict() to get 1, -1, or 0 and open positions accordingly. Is that right?


rongtail wrote:

I'm not quite clear on how this is called from vnpy. My understanding: replace the MongoDB data with data from ArrayManager, run the training to produce the model clf_selected, then use clf_selected.predict() to get 1, -1, or 0 and open positions accordingly. Is that right?

When running live there is no need to retrain: just load the already-trained model with joblib.load and call predict().

© 2015-2019 Shanghai Weina Software Technology Co., Ltd.
ICP filing: 沪ICP备18006526号-3