ipfs add horrendously slow #898

Closed
anarcat opened this issue Mar 8, 2015 · 14 comments · Fixed by #4296
Labels
topic/repo Topic repo

Comments

@anarcat
Contributor

anarcat commented Mar 8, 2015

Forgot to file that one, it seems. One of the first things I tried in ipfs was adding a 3.2GB file. The setup: an external HDD connected through a SATA-to-USB2 enclosure on my laptop running Debian Jessie amd64. The transfer took over an hour.

Doing the same copy with rsync takes a few minutes - I am able to get around 60-70MB/s transfer rates on this drive. Here's a copy of the IRC log for more information:

21:52 <anarcat> what does ipfs actually *do*?
21:52 <anarcat> it uploads the file to the dht?
21:57 <anarcat> i am trying to add a 3.2GB file and it's taking a long time, and a lot of memory
21:58 <anarcat>   PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
21:58 <anarcat> 29800 anarcat   20   0 2079848 959980   9224 S  30,9 26,3   8:00.67 ipfs
21:58 <anarcat> 863.50 MB / 3.19 GB [=======>----------------------] 26.45 % 33m14s
22:00 <anarcat> at least the memory usage isn't growing so much so far
22:01 <anarcat> uh, and that's the daemon too
22:06 <dPow> Haha, be happy if it completes. Not too long ago a gig would break it
22:07 <anarcat> wow okay :)
22:07 <anarcat> that's the "alpha" part, i guess
22:07 <anarcat> so many promises, this project is exciting :)
22:07 <anarcat> 1.24 GB / 3.19 GB [============>-------------------] 38.87 % 32m55s
22:07 <dPow> whyrusleeping can probably give a very quick explanation to it
22:07 <anarcat> it seems to asymptotically approach a ETA of 30 minutes :P
22:07 <anarcat> dPow: thanks
22:10 <anarcat> this has got to be one of the most frustrating progress bar i've seen in a while
22:10 <anarcat> it just goes in bursts then stalls
22:10 <anarcat> 1.39 GB / 3.19 GB [=============>------------------] 43.50 % 30m24s
22:14 <@whyrusleeping> hey anarcat!
22:14 <@whyrusleeping> the memory usage is actually being addressed in PR 860
22:14 <@whyrusleeping> we had a bug :/
22:14 <joeyh> oh, was that my bug?
22:15 <@whyrusleeping> joeyh: it was indeed :)
22:15 <@whyrusleeping> anarcat: the bursts and stalling is actually leveldb being a terrible database
22:15 <@whyrusleeping> we're going to move away from it soon

I don't remember exactly how long it took to add the file, but I do remember the timer going up from 15 minutes to 30 minutes and fluctuating there for a while. I believe it took around an hour.

Some references:

@whyrusleeping
Member

@anarcat question: was your .go-ipfs directory located on the USB hard drive? Or was the file you were adding on the USB hard drive?

@anarcat
Contributor Author

anarcat commented Mar 9, 2015

On 2015-03-08 20:07:08, Jeromy Johnson wrote:

@anarcat question: was your .go-ipfs directory located on the USB hard drive? Or was the file you were adding on the USB hard drive?

the file was on the USB drive; the .go-ipfs directory was on an SSD in the laptop.

@jbenet jbenet added the topic/repo Topic repo label Mar 28, 2015
@daviddias daviddias removed the backlog label Jan 2, 2016
@whyrusleeping
Member

closing, ipfs add is like, fast and stuff now

@NeoTheFox

NeoTheFox commented Sep 10, 2016

For me ipfs add is still pretty slow when there are lots of files. Long story short: I am trying to host an Arch Linux mirror on ipfs, which involves having 60GB+ of packages. I cloned the repo with rsync, which took ~3 hours, but ipfs add has now been adding these files for three days. It added the first 10GB quickly, but it seems to take progressively more time to add new files - I am now at 18GB and it is still going really slowly. One thing I've noticed is that it slowly uses more and more RAM as time goes on: my server has 1GB of RAM, and ipfs has filled all of it and is now filling up swap space. I believe this is caused by a memory leak, since there are no files that large in the repo.
I tried copying the files into the FUSE folder, but that is not an option, since there is no symlink support.
screenshot

@jbenet
Member

jbenet commented Sep 10, 2016

@NeoTheFox

  • get 0.4.3-rc4 from https://dist.ipfs.io/go-ipfs
  • Try doing this with the daemon off: kill the daemon, run ipfs add without it, THEN turn it back on.
  • Some config options were added recently to remove "providing" from the hot path of adding, which may help you.

@whyrusleeping
Member

@NeoTheFox Which version of ipfs are you using? Also, do you mind filing a new issue to track this? There are a few things to try (some of which @jbenet mentions above).

@NeoTheFox

@jbenet
I am on 0.4.2; I'll try installing the latest git version after I try running it with no daemon, thanks.

@NeoTheFox

Yes, this time it consumes 288MB of RAM and does not grow. Looks like it is fixed in 0.4.3, thanks for the help! @whyrusleeping no need to open a new issue, everything is resolved.

@whyrusleeping
Member

@NeoTheFox Awesome!! It's really great to get confirmations that we're fixing these issues :)

Please do report any other perf issues as you encounter them.

@Calmarius

Adding files is still slow for me. It happens in 8 MB chunks, then there is a 1-2 second pause.
Previously cached files get re-added quickly.

@Stebalien
Member

@Calmarius just to check, try ipfs add --local. Currently, IPFS broadcasts to the network that it has pieces of files while adding them and can end up bottlenecking on this process (a bug we're working on).
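
To illustrate the bottleneck described above, here is a minimal, hypothetical Go sketch (not the go-ipfs code or API; provide() and cid are placeholder names) of queuing announcements for a background worker instead of announcing every block inline, so the add loop runs at disk speed rather than network speed:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

type cid string

// provide simulates a DHT announcement; on a real node this can take far
// longer than writing the block to the local datastore.
func provide(c cid) { time.Sleep(100 * time.Millisecond) }

func main() {
	queue := make(chan cid, 1024) // buffered so the add loop rarely blocks

	var wg sync.WaitGroup
	wg.Add(1)
	go func() { // background provider: drains announcements off the hot path
		defer wg.Done()
		for c := range queue {
			provide(c)
		}
	}()

	start := time.Now()
	for i := 0; i < 20; i++ {
		c := cid(fmt.Sprintf("block-%d", i))
		// ...write the block to the local datastore here...
		queue <- c // enqueue instead of calling provide(c) inline
	}
	fmt.Println("add loop finished in", time.Since(start))

	close(queue)
	wg.Wait() // announcements keep going after the add itself has returned
}
```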

@skzap

skzap commented Oct 7, 2017

@Calmarius I can confirm this behavior; I see the same, including with --local. It goes up by 8MB quickly, then pauses for 1-2 seconds until the next 8MB. For example, with a 346MB file it takes about 2min30sec to do 'ipfs --local add'.

@Stebalien
Member

Ah, this is a disk-write problem. I am seeing a pause every 8MiB as well, but more like a 200-500ms pause.

This happens because we batch writes to the datastore in 8MiB chunks (to avoid lots of small writes). However, instead of flushing in the background, we pause on flush. Using the experimental badger datastore should reduce this pause but I'll see if I can make this a bit more parallel.
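
As a rough illustration of that idea (a sketch only, with made-up sizes and a stubbed flush(), not the actual adder code or PR #4296), a full 8MiB batch can be handed to a background goroutine so the next batch fills while the previous one is still being written:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

const batchSize = 8 << 20 // 8 MiB, the batch size mentioned above

// flush stands in for the expensive datastore commit that causes the pause.
func flush(batch [][]byte) { time.Sleep(200 * time.Millisecond) }

func main() {
	// Simulated stream of 1 MiB chunks from the file being added.
	blocks := make(chan []byte)
	go func() {
		for i := 0; i < 64; i++ {
			blocks <- make([]byte, 1<<20)
		}
		close(blocks)
	}()

	pending := make(chan [][]byte, 1) // one batch may flush while the next fills
	var wg sync.WaitGroup
	wg.Add(1)
	go func() { // background flusher
		defer wg.Done()
		for batch := range pending {
			flush(batch)
		}
	}()

	var batch [][]byte
	var size int
	for b := range blocks {
		batch = append(batch, b)
		size += len(b)
		if size >= batchSize {
			pending <- batch // previously: flush(batch) here, stalling the adder
			batch, size = nil, 0
		}
	}
	if len(batch) > 0 {
		pending <- batch // flush the final partial batch
	}
	close(pending)
	wg.Wait()
	fmt.Println("done adding")
}
```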

Stebalien added a commit that referenced this issue Oct 11, 2017
1. Modern storage devices (i.e., SSDs) tend to be highly parallel.
2. Allows us to read and write at the same time (avoids pausing while flushing).

fixes #898 (comment)

License: MIT
Signed-off-by: Steven Allen <[email protected]>
@Stebalien
Member

@skzap, @Calmarius if you have a moment, can you see if #4296 works better?

Stebalien added a commit to Stebalien/go-ipld-format that referenced this issue Oct 16, 2017
(ipfs/kubo#4296)

1. Modern storage devices (i.e., SSDs) tend to be highly parallel.
2. Allows us to read and write at the same time (avoids pausing while flushing).

fixes ipfs/kubo#898 (comment)
Jorropo pushed a commit to ipfs/boxo that referenced this issue Mar 15, 2023
1. Modern storage devices (i.e., SSDs) tend to be highly parallel.
2. Allows us to read and write at the same time (avoids pausing while flushing).

fixes ipfs/kubo#898 (comment)

License: MIT
Signed-off-by: Steven Allen <[email protected]>


This commit was moved from ipfs/go-merkledag@888d58c