some questions #10
Oh sorry, another question: can the model.zip be used with a strongly correlated pair?
My idea is to combine multiple data sources (different types of crypto) to gain more data for simulating the trading environment, in order to train a more robust agent, so I only use normalized features that can cross different markets.
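Concretely, this is the kind of cross-market normalization I have in mind (a minimal sketch; the column names and window size are my own assumptions, not from the repo):

```python
import numpy as np
import pandas as pd

def normalize_features(df: pd.DataFrame, window: int = 96) -> pd.DataFrame:
    """Scale-free features that stay comparable across different markets."""
    out = pd.DataFrame(index=df.index)
    # log returns instead of raw prices, so BTC and altcoins share a scale
    out["ret"] = np.log(df["close"]).diff()
    # rolling z-score of volume rather than absolute volume
    vol = df["volume"]
    out["vol_z"] = (vol - vol.rolling(window).mean()) / vol.rolling(window).std()
    # candle range relative to its own close (dimensionless)
    out["hl_range"] = (df["high"] - df["low"]) / df["close"]
    return out.dropna()
```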
Yes, it is for compatibility reasons.
Yes, of course. The win/loss condition is to make sure that each training episode won't take too long.
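Roughly, the idea looks like this (an illustrative sketch assuming the cumulative episode reward is compared against the `game_win`/`game_loss` thresholds in freqtradegym.py, not the exact code):

```python
# Illustrative only: self.total_reward accumulates the episode reward;
# game_win / game_loss are the thresholds set in __init__.
def _check_done(self) -> bool:
    if self.total_reward >= self.game_win:    # early win ends the episode
        return True
    if self.total_reward <= self.game_loss:   # early loss ends the episode
        return True
    # hard cap so an episode never runs past simulate_length steps
    return self.current_step >= self.simulate_length
```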
Yes
Well, not exactly. For each simulation I randomly select an entry point in the observation space.
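In gym terms that happens in reset(); a minimal sketch with illustrative names (not the exact code):

```python
import numpy as np

# Illustrative reset(): pick a random entry point so each episode
# sees a different slice of the data.
def reset(self):
    max_start = len(self.data) - self.simulate_length - 1
    self.start_step = np.random.randint(0, max_start)
    self.current_step = self.start_step
    self.total_reward = 0.0
    return self._get_observation()
```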
Sure, I will find some time to take a look at the lib and come up with an example.
I can't say for sure, but you can do some paper trading to find out.
@Payback80 I have added a basic example for rllib.
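For reference, the general shape of an rllib setup is something like this (a sketch with assumed names such as `TradingEnv`, not the repo's example itself):

```python
import ray
from ray import tune
from ray.tune.registry import register_env

from freqtradegym import TradingEnv  # assumed class name, for illustration

def env_creator(env_config):
    # rllib calls this with the env_config dict from the trainer config
    return TradingEnv(env_config)

register_env("freqtrade-gym", env_creator)

ray.init()
tune.run(
    "PPO",
    stop={"timesteps_total": 100_000},
    config={
        "env": "freqtrade-gym",
        "framework": "torch",
    },
)
```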
Hey, that's great! I've worked hard on RL in the past year; if you like, we can exchange ideas.
Sure, my email is:
I am still curious about this: there are 10 days in the example config_rl.json, from 2020/11/20 to 2020/11/30. 10 days * 24 hours * 4 (15-min intervals) = 960 steps. I feel like 200 steps is really short (50 hours, about 2 days). I think 1 week might be a better choice.
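To spell out the arithmetic (the value 672 is my suggestion, not from the repo):

```python
# Candles available in the example timerange vs. the episode length.
days = 10                # 20201120-20201130
per_day = 24 * 4         # 15-min candles per day
print(days * per_day)    # 960 candles in the data, vs simulate_length = 200
print(7 * per_day)       # 672 candles = one week, my suggested simulate_length
```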
Can you explain why you enter randomly? Is it because of the risk of overfitting?
Any advancement on rllib?
Hi,
I have tried your example and it works! Now I'd like to ask some questions for clarification.
In freqtradegym.py, why are

```python
obs = np.array([
    # row.open,
    # row.high,
    # row.low,
    # row.close,
    # row.volume,
```

commented out of the observation space? Why do you consider at least open and close not informative?
edit: at least when normalized
In IndicatorforRL.py the buy and sell conditions are still there. Are they there for compatibility reasons, or for something else? I ask because in LoadRLmodel.py the buy/sell conditions call the model.zip.
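For context, this is how I understand the model.zip being used at inference time (a sketch assuming stable-baselines3-style `load()`/`predict()`, not the actual LoadRLmodel.py code):

```python
# Sketch only: assumes stable-baselines3; the repo may use the older
# stable-baselines, but load()/predict() look the same in both.
import numpy as np
from stable_baselines3 import PPO

model = PPO.load("model.zip")                           # the trained agent
observation = np.zeros(model.observation_space.shape)   # placeholder features
action, _ = model.predict(observation, deterministic=True)
print(action)  # e.g. 0 = hold, 1 = buy, 2 = sell (this mapping is an assumption)
```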
In freqtradegym I see

```python
self.stake_amount = self.config['stake_amount']
self.reward_decay = 0.0005
self.not_complete_trade_decay = 0.5
self.game_loss = -0.5
self.game_win = 1.0
self.simulate_length = self.config['gym_parameters']['simulate_length']
```

the reward parameters, which drives me to ask: is profit the objective function?
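To make the question concrete, here is my rough guess at how these could combine (pure speculation on my part, not the actual code in freqtradegym.py):

```python
# Speculative illustration of how the parameters above might be wired
# together; the real logic lives in freqtradegym.py and may differ.
def step_reward(self, profit_ratio, trade_closed):
    if trade_closed:
        # realized profit drives the reward, making profit the objective
        return profit_ratio
    # small per-step penalty while a trade stays open, discouraging
    # the agent from holding unfinished trades forever
    return -self.reward_decay * self.not_complete_trade_decay
```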
In the config,

```json
"gym_parameters": {
    "indicator_strategy": "IndicatorforRL",
    "fee": 0.0015,
    "timerange": "20201120-20201130",
    "simulate_length": 200
},
```

is timerange the observation space? If yes, is simulate_length the maximum number of episodes trained on that timerange? Am I right? So, regarding your example on the 15-min timeframe, are those 200 15-min timesteps?
Do you mind switching to rllib?