Hi Antonio,
A quick question:
In the distributed_nsgaii_with_dask.py example, the number of workers is given explicitly when setting up the Dask client: # setup Dask client
client = Client(LocalCluster(n_workers=10)).
In the synchronous NSGA-II example, on the other hand, no worker count is specified; the Dask evaluator is simply passed as a parameter to the algorithm: population_evaluator=DaskEvaluator().
How can we decide the number of workers for synchronous NSGA-II?
Additionally, is it straightforward to adapt SPEA2 or HypE for asynchronous distributed search? At the moment they are only available in synchronous form.
Thanks
Hi.
I have just updated the DaskEvaluator so you can indicate the number of cores.
Developing an asynchronous version of an algorithm is not straightforward, as you have to redesign it (we wrote a paper about this matter many years ago: https://ieeexplore.ieee.org/abstract/document/4536375). Just take a look at the DistributedNSGAII class; in particular, at the run() method.
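To illustrate the synchronous model being discussed, here is a minimal sketch (not jMetalPy's actual implementation) of a population evaluator with an explicit worker count, the quantity a parameter like the one added to DaskEvaluator would control. It uses the standard library's thread pool instead of Dask so the example is self-contained; the function names and the toy objective are hypothetical:

```python
# Sketch only: a synchronous population evaluator with a configurable
# number of workers, analogous in spirit to DaskEvaluator. Hypothetical
# helper names; real jMetalPy evaluators wrap Solution objects instead.
from concurrent.futures import ThreadPoolExecutor


def evaluate(solution):
    # Stand-in objective: sum of squares of the decision variables.
    return sum(x * x for x in solution)


def evaluate_population(population, n_workers=4):
    # Synchronous model: dispatch the whole population, then block until
    # every evaluation has finished before the algorithm proceeds to the
    # next generation. An asynchronous design would instead consume
    # results as they arrive, which is why it requires a redesign.
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return list(pool.map(evaluate, population))


if __name__ == "__main__":
    population = [[1.0, 2.0], [3.0, 4.0], [0.0, 0.0]]
    print(evaluate_population(population, n_workers=2))
```

The key point the sketch makes is that in the synchronous case the worker count only bounds parallelism within one generation; the generational loop itself is unchanged.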