Thread Rating:
  • 0 Vote(s) - 0 Average
  • 1
  • 2
  • 3
  • 4
  • 5

Pandas For Traders
#1

Pandas is a very powerful Python library that you must master if you want to use Python in algorithmic trading. Pandas provide you with the ability to slice and dice a dataframe in many ways that can make your job very easy. Pandas is written in C. It is very fast. Dataframe is a very important concept. Dataframe is basically a table with many rows and columns. Each column represents an attribute. Each row is a record. 

When developing algorithmic trading strategies, you will have to deal with dataframes a lot. Reading a dataframe is very easy with Pandas. I have written the following python code that you can use to read the data csv files into a dataframe. I define a function that reads the csv file. We just specify the currency pair and the timeframe. CSV file should be saved on your hard drive. If you have Meta Trader 4 installed on your computer, you can download the csv files for different currency pairs for different timeframes from its history center:

# Data fetching
def get_data(currency_pair, timeframe):
    link='D:/Shared/MarketData/{}{}.csv'.format(currency_pair,\
                               timeframe)
    data1 = pd.read_csv(link, header=None)
    data1.columns=['Date', 'Time', 'Open', 'High', 'Low',
                'Close', 'Volume']
    # We need to merge the data and time columns
    # convert that column into datetime object
    data1['Datetime'] = pd.to_datetime(data1['Date'] \
         + ' ' + data1['Time'])
    #rearrange the columnss with Datetime the first
    data1=data1[['Datetime', 'Open', 'High',
                 'Low', 'Close', 'Volume']]
    #set Datetime column as index
    data1 = data1.set_index('Datetime')
    return(data1)

Above function can read the csv files saved on your hard drive.

df = get_data('GBPUSD', 1440)
df.shape
df.head()

Out[12]:
                    Open    High     Low   Close  Volume
Datetime                                          
2011-05-19  1.6156  1.6241  1.6129  1.6227   12390
2011-05-20  1.6226  1.6303  1.6165  1.6229   11879
2011-05-22  1.6224  1.6228  1.6209  1.6227     663
2011-05-23  1.6226  1.6232  1.6057  1.6072   12852
2011-05-24  1.6073  1.6207  1.6067  1.6177   12635

You can see the format of the dataframe that Python has read. Datetime is the index. While defining the function, we have converted date and time into a datetime column that is picked up by Pandas and used as an index. Below is the command that you can use to check if datetime is the index or not:

isinstance(df.index, pd.DatetimeIndex)

Resampling the Dataframe
With pandas we have the ability to resample the dataframe into different frequencies mostly higher. For example, you have read the GBPUSD 15 minute Open, High, Low, Close OHLC data. We can use pandas to easily resample that dataframe into a 30 minute OHLC data, 60 minute OHLC data, 240 minute, 1440 minute and more. First we need to define a dictionary.

ohlc_dict = {
    'Open':'first',
    'High':'max',
    'Low':'min',
    'Close':'last',
    'Volume':'sum'
    }

Now we can easily resample the dataframe with the following commands:

#resample the intraday OHLC into Daily OHLC
df240Mn=df.resample('240min').agg(ohlc_dict).dropna()
dfDaily=df.resample('1D').agg(ohlc_dict).dropna()
dfDaily.head()
#resample the intraday OHLC into daily OHLC
dfWeekly=df.resample('W-Fri').agg(ohlc_dict).dropna()
dfWeekly.head()

Below we read GBPUSD 15 minute data:
Out[16]:
                        Open     High      Low    Close  Volume
Datetime                                                       
2017-11-23 14:00:00  1.33113  1.33147  1.33066  1.33078     716
2017-11-23 14:15:00  1.33080  1.33110  1.33058  1.33085     620
2017-11-23 14:30:00  1.33087  1.33113  1.33070  1.33094     476
2017-11-23 14:45:00  1.33103  1.33149  1.33038  1.33046     273
2017-11-23 15:00:00  1.33045  1.33065  1.32988  1.33042     351

Using the above code we easily converted this GBPUSD 15 minute data into GBPUSD daily data:
Out[19]:
                        Open     High      Low    Close   Volume
Datetime                                               
2017-11-23  1.33113  1.33149  1.32913  1.33045  11292.0
2017-11-24  1.33046  1.33588  1.32777  1.33365  36115.0
2017-11-26  1.33224  1.33314  1.33170  1.33229   1550.0
2017-11-27  1.33228  1.33820  1.33044  1.33199  36031.0
2017-11-28  1.33200  1.33860  1.32199  1.33634  51388.0

Everything is done by pandas internally. More on pandas tomorrow so stay tuned!

Subscribe My YouTube Channel:
https://www.youtube.com/channel/UCUE7VPo...F_BCoxFXIw

Join Our Million Dollar Trading Challenge:
https://www.doubledoji.com/million-dolla...challenge/
Reply
#2

Dataframe Indexing Operator
Let's start our pandas training. The first thing is the indexing operator of a dataframe. We can use the indexing operator to select a single column as a series. We can also use the indexing operator to select a number of columns. For example:

>>> df1=get_data("EURUSD", 60)
>>> df1[['Close']].head()
                       Close
Datetime
2017-01-06 21:00:00  1.05312
2017-01-06 22:00:00  1.05344
2017-01-06 23:00:00  1.05333
2017-01-09 00:00:00  1.05313
2017-01-09 01:00:00  1.05298

We just selected the Close column. We are using datetime as the indexing column. More on that in the future. For now just focus on the Close column. We can also select the Open, High, Low and Close column. 

>>> df1[['Close', 'Open', 'Low', 'High']].head()
                                       Close      Open         Low       High
Datetime
2017-01-06 21:00:00  1.05312  1.05263  1.05251  1.05344
2017-01-06 22:00:00  1.05344  1.05322  1.05278  1.05354
2017-01-06 23:00:00  1.05333  1.05342  1.05303  1.05356
2017-01-09 00:00:00  1.05313  1.05290  1.05272  1.05324
2017-01-09 01:00:00  1.05298  1.05310  1.05262  1.05363

In the above example, I have intentionally chose the order of Open, High, Low and Close different. You can see I used Close, Open, Low and High and the indexing operator fetched these columns in the order that I had specified.
Dataframe Indexing Operator is very flexible. If you chose a string, it will output a one dimensional series and if you pass a list , it will output the dataframe with the columns in the list in the specified order. You can make your code more readable by first defining a list variable:

>>> cols=['Close', 'Open', 'Low', 'High']
>>> df1[cols].head()
                                       Close       Open       Low         High
Datetime
2017-01-06 21:00:00  1.05312  1.05263  1.05251  1.05344
2017-01-06 22:00:00  1.05344  1.05322  1.05278  1.05354
2017-01-06 23:00:00  1.05333  1.05342  1.05303  1.05356
2017-01-09 00:00:00  1.05313  1.05290  1.05272  1.05324
2017-01-09 01:00:00  1.05298  1.05310  1.05262  1.05363

Now you should know about pandas datatypes:

>>> df1.get_dtype_counts()
float64    4
int64      1
dtype: int64

int64 is the datatime column. Let's check it with the select_dtypes method:

>>> df1.select_dtypes(include=['int']).head()
Empty DataFrame
Columns: []
Index: [2017-01-06 21:00:00, 2017-01-06 22:00:00, 2017-01-06 23:00:00, 2017-01-09 00:00:00, 2017-01-09 01:00:00]

We can also use the filter mthod to filter the dataframe columns:

>>> df1.filter(like='Volume').head()
                     Volume
Datetime
2017-01-06 21:00:00     926
2017-01-06 22:00:00     813
2017-01-06 23:00:00     263
2017-01-09 00:00:00     400
2017-01-09 01:00:00     858

Since our dataframe has got only 5 columns, we may not appreciate the utility of these methods. But suppose our dataframe has got 100-500 columns then these methods can become very effective.  Filter method has three parameters: like, items, regex. regex method uses regular expressions to search for that particular expression in the string. items parameter is just like the indexing operator. If you specify a column that is not there, it will not raise an error like that in the indexing operator. You can also check the columns:

>>> df1.columns

Index(['Open', 'High', 'Low', 'Close', 'Volume'], dtype='object')

Count and describe methods are also useful most of the time when doing data exploration:

>>> df1.count()
Open      4922
High      4922
Low       4922
Close     4922
Volume    4922
dtype: int64
>>> df1.describe()
              Open         High          Low        Close
  Volume
count  4922.000000  4922.000000  4922.000000  4922.000000   4922.000000
mean      1.119891     1.120616     1.119198     1.119914   4145.585128
std       0.049810     0.049851     0.049776     0.049814   4067.799585
min       1.047460     1.048210     1.045380     1.047470
7.000000
25%       1.070008     1.070835     1.069315     1.069955   1413.000000
50%       1.117825     1.118525     1.117310     1.117865   2549.000000
75%       1.174777     1.175500     1.173918     1.174768   5807.250000
max       1.207950     1.209250     1.206960     1.207940  46398.000000

Pandas describe method is very powerful. You can specify your own quantiles as well. If there are missing values in the column, pandas will silently ignore that column so you should be know this thing when using the describe method.

Subscribe My YouTube Channel:
https://www.youtube.com/channel/UCUE7VPo...F_BCoxFXIw

Join Our Million Dollar Trading Challenge:
https://www.doubledoji.com/million-dolla...challenge/
Reply
#3

How do you check the dataframe for null values:

>>> df1.isnull().any()
Open      False
High      False
Low       False
Close     False
Volume    False
dtype: bool

We can sum the number of missing values also:
>>> df1.isnull().sum()
Open      0
High      0
Low       0
Close     0
Volume    0
dtype: int64

So we have zero missing values.Let's see if closing price went above 1.5:

>>> df1.Close.ge(1.5).sum()

0

Subscribe My YouTube Channel:
https://www.youtube.com/channel/UCUE7VPo...F_BCoxFXIw

Join Our Million Dollar Trading Challenge:
https://www.doubledoji.com/million-dolla...challenge/
Reply
#4

Let's do some exploratory data analysis (EDA). Metadata means data about data like number of columns, data type of the column data, memory usage. With this simple command, you can get a good idea on the dataframe metadata:

>>> df1.info()
<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 4922 entries, 2017-01-06 21:00:00 to 2017-10-20
23:00:00
Data columns (total 5 columns):
Open      4922 non-null float64
High      4922 non-null float64
Low       4922 non-null float64
Close     4922 non-null float64
Volume    4922 non-null int64
dtypes: float64(4), int64(1)
memory usage: 230.7 KB

You can see the index column is DateTimeIndex while the rest of the columns are float64 and int64. Memory usage is only 230.7KB which is very good. There are 4922 rows in the dataframe. Now we can explore the dataframe more:

>>> df1.describe(include=[np.number]).T
         count         mean          std      min          25%
         50%  \
Open    4922.0     1.119891     0.049810  1.04746     1.070008
    1.117825
High    4922.0     1.120616     0.049851  1.04821     1.070835
    1.118525
Low     4922.0     1.119198     0.049776  1.04538     1.069315
    1.117310
Close   4922.0     1.119914     0.049814  1.04747     1.069955
    1.117865
Volume  4922.0  4145.585128  4067.799585  7.00000  1413.000000
 2549.000000

                75%          max
Open       1.174777      1.20795
High       1.175500      1.20925
Low        1.173918      1.20696
Close      1.174768      1.20794
Volume  5807.250000  46398.00000

You can see in the above data a number of things like the minimum close and the maximum close in the dataframe. With a simple describe command you know your data more. Like:

>>> df1.describe(include=[np.number],\
... percentiles=[0.01,0.05,0.1,0.25,0.5,0.75,0.9,0.95,0.99]).T
         count         mean          std      min          1%
        5%  \
Open    4922.0     1.119891     0.049810  1.04746    1.052654
  1.057192
High    4922.0     1.120616     0.049851  1.04821    1.053470
  1.057840
Low     4922.0     1.119198     0.049776  1.04538    1.052095
  1.056560
Close   4922.0     1.119914     0.049814  1.04747    1.052740
  1.057212
Volume  4922.0  4145.585128  4067.799585  7.00000  490.000000
715.000000

               10%          25%          50%          75%
    90%  \
Open      1.060443     1.070008     1.117825     1.174777     1.187729
High      1.061171     1.070835     1.118525     1.175500     1.188458
Low       1.059822     1.069315     1.117310     1.173918     1.186830
Close     1.060441     1.069955     1.117865     1.174768     1.187728
Volume  916.000000  1413.000000  2549.000000  5807.250000  9208.600000

                 95%           99%          max
Open        1.193987      1.201265      1.20795
High        1.194874      1.202000      1.20925
Low         1.193180      1.200417      1.20696
Close       1.194009      1.201260      1.20794
Volume  11447.450000  19455.110000  46398.00000

This is what we have done, we told describe to give the percentiles in full. Let's drill down and find how much memory each dataframe column is using:

>>> df1.memory_usage(deep=True)
Index     39376
Open      39376
High      39376
Low       39376
Close     39376
Volume    39376
dtype: int64

If the memory usage is excessive we can change the datatypes and see if the memory usage decreases. This helps when you are dealing with very big dataframes. In our case we are okay with the memory usage.

Subscribe My YouTube Channel:
https://www.youtube.com/channel/UCUE7VPo...F_BCoxFXIw

Join Our Million Dollar Trading Challenge:
https://www.doubledoji.com/million-dolla...challenge/
Reply
#5

How To Calculate The Trailing Stop Order Price Using Pandas?
As a trader, this is something important. I will show you how you can calculate the trailing stop loss price using pandas. This can be very useful when you are using python for algorithmic trading. Stop orders are very useful in algorithmic trading. You can place a stop order telling the algorithm to only open the buy/sell order when price reaches the stop order level. 

Trailing stop orders help you maximize the profit per trade. This is what we do with a trailing stop loss order. We will the algorithm to trail price at a certain distance. It can be a percentage of the current market price, it can be number of pips from the current price. Trailing stop order follows the current market price keep the safe distance that you have specified. If the price moves down in a buy order, trailing stop order doesn't move down from the highest price reached and when price retraces and goes below the trailing stop order, the sell order gets executed. 

>>> df1['Close'].cummax().head(5)
Datetime
2017-01-06 21:00:00    1.05312
2017-01-06 22:00:00    1.05344
2017-01-06 23:00:00    1.05344
2017-01-09 00:00:00    1.05344
2017-01-09 01:00:00    1.05344
Name: Close, dtype: float64

We use the cummax method to trail the closing price.  As you can see closing price reached 1.05344 and then it started retracing. This is how we will create a trailing stop order using pandas:

>>> #create trailing stop loss
...
>>> df1Cummax=df1['Close'].cummax()
>>> df1TrailingStop=df1Cummax*0.95
>>> df1TrailingStop.head(10)
Datetime
2017-01-06 21:00:00    1.000464
2017-01-06 22:00:00    1.000768
2017-01-06 23:00:00    1.000768
2017-01-09 00:00:00    1.000768
2017-01-09 01:00:00    1.000768
2017-01-09 02:00:00    1.000825
2017-01-09 03:00:00    1.001509
2017-01-09 04:00:00    1.001509
2017-01-09 05:00:00    1.001509
2017-01-09 06:00:00    1.001509
Name: Close, dtype: float64

I multiplied cummax with 0.95 so that we are always 5% behind the closing price. So this is how easily we can create a trailing stop order using pandas. Use can develop a trading strategy based on a trailing stop order. Now with some imagination, we can also use the above cummax and cummin methods with the high price and low price and develop breakout and continuation trading strategies.

Subscribe My YouTube Channel:
https://www.youtube.com/channel/UCUE7VPo...F_BCoxFXIw

Join Our Million Dollar Trading Challenge:
https://www.doubledoji.com/million-dolla...challenge/
Reply
#6

Pandas Indexing Operator
We can use pandas indexing operator to slice and dice a dataframe.
>>> df[-300:].tail()
                        Open     High      Low    Close  Volume
Datetime

2018-06-10 20:00:00  1.34057  1.34132  1.34038  1.34122    1079
2018-06-11 00:00:00  1.34093  1.34242  1.34071  1.34085    3091
2018-06-11 04:00:00  1.34088  1.34404  1.34004  1.34208    6426
2018-06-11 08:00:00  1.34203  1.34298  1.33437  1.33657    9221
2018-06-11 12:00:00  1.33659  1.33778  1.33616  1.33764    2698

But there are many times when we want to select some columns as well. In this case we use the .iloc and .loc operators. Both .iloc and .loc are known as Indexers. .iloc makes selection by integer location:
>>> df.iloc[-300:,2:4].head()
                                     Low    Close
Datetime
2018-04-06 00:00:00  1.39991  1.40008
2018-04-06 04:00:00  1.39822  1.39973
2018-04-06 08:00:00  1.39921  1.40026
2018-04-06 12:00:00  1.40020  1.40875
2018-04-06 16:00:00  1.40749  1.40888

In the above example we selected the last 300 rows in the dataframe and the Low and Close columns. 
>>> df.loc['2017'].head()
                                      Open     High      Low    Close  Volume
Datetime

2017-01-02 04:00:00  1.23383  1.23453  1.23383  1.23435    349
2017-01-02 08:00:00  1.23436  1.23449  1.22887  1.22983    9200
2017-01-02 12:00:00  1.22983  1.23003  1.22799  1.22906    7632
2017-01-02 16:00:00  1.22906  1.22939  1.22771  1.22840    6358
2017-01-02 20:00:00  1.22844  1.22909  1.22707  1.22793    5644

In the above example, if we have used .iloc we would have got a traceback error. Using .loc we can select the rows for the year 2017.
>>> df.loc['2017-03-31', ['Close','Low']]
                       Close      Low
Datetime
2017-03-31 00:00:00  1.24834  1.24717
2017-03-31 04:00:00  1.24624  1.24440
2017-03-31 08:00:00  1.24848  1.24315
2017-03-31 12:00:00  1.25335  1.24521
2017-03-31 16:00:00  1.25273  1.25204
2017-03-31 20:00:00  1.25429  1.25257

Here we have used 2017-03-31 date to extract Close and Low price. So both .iloc and .loc are powerful indexers.  Now use can speed up the scalar selection with .iat and .at indexers.

Subscribe My YouTube Channel:
https://www.youtube.com/channel/UCUE7VPo...F_BCoxFXIw

Join Our Million Dollar Trading Challenge:
https://www.doubledoji.com/million-dolla...challenge/
Reply
#7

Analyzing Currency Prices Using Boolean Selection
If you are a currency trader, you need to analyze many currency pairs on a regular basis. There are more than 50 currency pairs that I analyze. You need to analyze each currency pair on different timeframes. Manually this becomes a cumbersome process. Pandas can help a lot in this regard. Let's see how we can gain perspective on stock prices. First we read the data using pandas then select the close price:

>>> df_close=df['Close']
>>> df_summary=df_close.describe(percentiles=[0.1,0.9])
>>> df_summary
count    13747.000000
mean         1.501193
std          0.129704
min          1.202720
10%          1.293870
50%          1.544690
90%          1.636504
max          1.716360
Name: Close, dtype: float64

In the above code we asked pandas to describe the close price series especially 10th and 90th percentiles.

>>> df_10=df_summary.loc['10%']
>>> df_10
1.2938700000000001
>>> df_90=df_summary.loc['90%']
>>> condition=(df_close < df_10) | (df_close > df_90)
>>> df_close_extreme=df_close[condition]
>>> df_close.plot(color='black', figsize=(12,6))
<matplotlib.axes._subplots.AxesSubplot object at 0x000001E96A018DD8>
>>> df_close_extreme.plot(marker='o', style='', ms=4, color='lightgray')
<matplotlib.axes._subplots.AxesSubplot object at 0x000001E96A018DD8>
>>> minClose=condition.index[0]
>>> maxClose=condition.index[-1]
>>> plt.hlines(y=[df_10, df_90], xmin=minClose, xmax=maxClose,color='black')
<matplotlib.collections.LineCollection object at 0x000001E969FB8208>
>>> plt.show()

This is the plot that we get:
[Image: pandas_1.png]
Coding is all practice. if you want to master pandas, you need to practice and code different scenarios with it. It's just like playing football or for that matter golf. If you don't practice, you lose the edge.

Subscribe My YouTube Channel:
https://www.youtube.com/channel/UCUE7VPo...F_BCoxFXIw

Join Our Million Dollar Trading Challenge:
https://www.doubledoji.com/million-dolla...challenge/
Reply
#8

Are Currency Market Returns Normal?
This is a very important question that you should ask. Are currency market returns normally distributed? If currency market returns are normally distributed, then there will very less extreme events in the market. So events like the Black Monday, October 19th 1987 and the 2008 stock market crash as well as the frequent flash crashes would be very rare. The truth is these extreme events are not rare at all and keep on occurring again and again. Currency market returns are not normal at all. This means extreme events can occur with much more frequency as compared to the frequency of these events predicted by the normal distribution. Let's start with GBPUSD 240 minutes intraday returns and check whether they are normally distributed.

>>> df = get_data('GBPUSD', 1440)
>>> dailyReturn=df_close.pct_change()
>>> dailyReturn=dailyReturn.dropna()
>>> dailyReturn.hist(bins=30)
<matplotlib.axes._subplots.AxesSubplot object at 0x000001E96AB510F0>

>>> plt.hist(dailyReturn, bins=20)
(array([  1.00000000e+00,   0.00000000e+00,   0.00000000e+00,
         0.00000000e+00,   0.00000000e+00,   0.00000000e+00,
         0.00000000e+00,   0.00000000e+00,   0.00000000e+00,
         0.00000000e+00,   1.00000000e+00,   4.00000000e+00,
         3.00000000e+01,   4.37000000e+02,   1.03900000e+04,
         2.73100000e+03,   1.41000000e+02,   9.00000000e+00,
         1.00000000e+00,   1.00000000e+00]), array([-0.07648239, -0.07130676, -0.06613112, -0.06095549, -0.05577986,
       -0.05060423, -0.0454286 , -0.04025297, -0.03507733, -0.0299017 ,
       -0.02472607, -0.01955044, -0.01437481, -0.00919918, -0.00402355,
        0.00115209,  0.00632772,  0.01150335,  0.01667898,
0.02185461,
        0.02703024]), <a list of 20 Patch objects>)
>>> plt.show()
[Image: pandas_2.png]
In a normal distribution, 68% of the returns should fall within 1 standard deviation from the mean, 95% of the returns should fall between 2 standard deviations and 99.7% of the returns should fall between 3 standard deviations. 

>>> mean=dailyReturn.mean()
>>> standardDeviation=dailyReturn.std()
>>> mean
-1.4257920029287978e-05
>>> standardDeviation
0.0022830348817239462

Calculate the absolute value of the z-score:
>>> absZScore=dailyReturn.sub(mean).abs().div(standardDeviation)

Calculate the percentage of daily returns that are within 1, 2 and 3 standard deviations:
>>> absZScore=dailyReturn.sub(mean).abs().div(standardDeviation)
>>> percentage=[absZScore.lt(i).mean() for i in range(1,4)]
>>>
>>> print('{:.3f} fall within 1 standard deviation.'
...       '{:.3f} within 2 and {:.3f} within 3'.format(*percentage))
0.804 fall within 1 standard deviation. 0.949 within 2 and 0.984 within 3

So you can see 80% of the returns fall within 1 standard deviation, 95% fall within 2 standard deviation and 98% fall within 3 standard deviations. If this was a normal distributiion, around 68% of the daily returns should have fallen within 1 standard deviation, 95% within 2 standard deviation and 99.7% within 3 standard deviation which is clearly not the case here. So we can safely say that GBPUSD daily returns are not normally distributed.

Subscribe My YouTube Channel:
https://www.youtube.com/channel/UCUE7VPo...F_BCoxFXIw

Join Our Million Dollar Trading Challenge:
https://www.doubledoji.com/million-dolla...challenge/
Reply
#9

You should learn pandas. Pandas is used a lot in backtesting trading strategy and in building trading systems. Watch the video below where I explain pandas. This is a very brief introduction to pandas for traders. But it will get you going. You don't need to master the intricacies of pandas a lot just some understanding will get you going in building you trading strategy.


Subscribe My YouTube Channel:
https://www.youtube.com/channel/UCUE7VPo...F_BCoxFXIw

Join Our Million Dollar Trading Challenge:
https://www.doubledoji.com/million-dolla...challenge/
Reply
#10

MetaQuotes Corporation the maker of MetaTrader 5 platform has provided an API that we can use to connect python with MT5 and open and close trades on MT5 using python script. You can easily install that api using pip install MetaTrader5. This package is available on PyPi. I have made a video that you can watch below and learn how easily it is connect python with MT5 now. I develop a function that connects with MT5 and download the most recent bars from it and then returns those bars as a pandas object. We can then use that pandas object to build trading indicators and strategies and then we can open and close positions on MT5 as well.


This api opens the door for us to develop algorithmic trading strategies in python for MT5. We can easily now use tensorflow to develop deep learning models that can make predictions that we can use. You should read the thread on tensorflow for traders where I have posted video on a deep learning neural network that downloads data from MT5 and then makes predictions about the next candle whether it is going to be bullish or bearish. Pandas uses numpy a lot behind the scenes. Infact pandas has been built upon numpy. Tensorflow also uses numpy. Whatever with this python api, we can use python to do algorithmic trading on MT5 which is something great. In the next thread I will show how we can code a multi pair multi timeframe currency strength meter in python for MT5 that uses pandas, Coding this currency strength meter in MQL5 would have been a tedious exercise but python makes it easy.

Subscribe My YouTube Channel:
https://www.youtube.com/channel/UCUE7VPo...F_BCoxFXIw

Join Our Million Dollar Trading Challenge:
https://www.doubledoji.com/million-dolla...challenge/
Reply


Forum Jump:


Users browsing this thread: 1 Guest(s)