
Playground build is slow, even without changes #2158

Closed
richb-hanover opened this issue Mar 13, 2023 · 17 comments

@richb-hanover
Contributor

What's up?

@eitsupi - This isn't quite a bug, but the devcontainer for commit d427c86 takes a long time (many minutes) to start up after it has already been built. Here's what I saw:

  • Check out prql d427c86 (compiler 0.6.0)
  • Start VS Code; it grinds away building the new Dev Container. When it's done...
  • Open a zsh terminal and enter task run-book; when it's done, you'll be able to work on the book
  • Quit from VS Code

Then...

  • Launch VS Code, start the Dev Container
  • 20-30 seconds later, you see the Dev Container CLI prompt vscode ➜ /workspaces/prql (main) $
  • Type task run-book, see the Blocking waiting for file lock on build directory message

This has persisted for 10+ minutes this morning. I will also note that during this time, the CPU on my computer was pretty heavily used, even to the point that it interfered with GUI interactions.

Am I missing something? Any recommendations about what to do? Many thanks.
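
(An aside for anyone who hits the same message: cargo prints "Blocking waiting for file lock on build directory" when another cargo process already holds the lock, which in this setup is typically rust-analyzer's automatic check/build. A rough way to see what is churning inside the container, assuming a standard Linux userland:)

  # List build-related processes currently running in the container
  ps aux | grep -E 'rust-analyzer|cargo|rustc|cc1plus' | grep -v grep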

@richb-hanover
Contributor Author

Update: After about 15-18 minutes of startup, VS Code displayed an error that it couldn't run the build scripts. Here's the output.

At that point, I could task run-book. Any thoughts? Thanks.

[ERROR rust_analyzer::lsp_utils] failed to run build scripts:
The following warnings were emitted during compilation:


error: failed to run custom build command for `libduckdb-sys v0.7.0`

Caused by:
  process didn't exit successfully: `/workspaces/prql/target/debug/build/libduckdb-sys-f72a7fb08369c695/build-script-build` (exit status: 1)
  --- stdout
  cargo:rerun-if-changed=duckdb/duckdb.hpp
  cargo:rerun-if-changed=duckdb/duckdb.cpp
  TARGET = Some("x86_64-unknown-linux-gnu")
  OPT_LEVEL = Some("0")
  HOST = Some("x86_64-unknown-linux-gnu")
  cargo:rerun-if-env-changed=CXX_x86_64-unknown-linux-gnu
  CXX_x86_64-unknown-linux-gnu = None
  cargo:rerun-if-env-changed=CXX_x86_64_unknown_linux_gnu
  CXX_x86_64_unknown_linux_gnu = None
  cargo:rerun-if-env-changed=HOST_CXX
  HOST_CXX = None
  cargo:rerun-if-env-changed=CXX
  CXX = None
  cargo:rerun-if-env-changed=CXXFLAGS_x86_64-unknown-linux-gnu
  CXXFLAGS_x86_64-unknown-linux-gnu = None
  cargo:rerun-if-env-changed=CXXFLAGS_x86_64_unknown_linux_gnu
  CXXFLAGS_x86_64_unknown_linux_gnu = None
  cargo:rerun-if-env-changed=HOST_CXXFLAGS
  HOST_CXXFLAGS = None
  cargo:rerun-if-env-changed=CXXFLAGS
  CXXFLAGS = None
  cargo:rerun-if-env-changed=CRATE_CC_NO_DEFAULTS
  CRATE_CC_NO_DEFAULTS = None
  DEBUG = Some("true")
  CARGO_CFG_TARGET_FEATURE = Some("fxsr,sse,sse2")
  cargo:rerun-if-env-changed=CXX_x86_64-unknown-linux-gnu
  CXX_x86_64-unknown-linux-gnu = None
  cargo:rerun-if-env-changed=CXX_x86_64_unknown_linux_gnu
  CXX_x86_64_unknown_linux_gnu = None
  cargo:rerun-if-env-changed=HOST_CXX
  HOST_CXX = None
  cargo:rerun-if-env-changed=CXX
  CXX = None
  cargo:rerun-if-env-changed=CXXFLAGS_x86_64-unknown-linux-gnu
  CXXFLAGS_x86_64-unknown-linux-gnu = None
  cargo:rerun-if-env-changed=CXXFLAGS_x86_64_unknown_linux_gnu
  CXXFLAGS_x86_64_unknown_linux_gnu = None
  cargo:rerun-if-env-changed=HOST_CXXFLAGS
  HOST_CXXFLAGS = None
  cargo:rerun-if-env-changed=CXXFLAGS
  CXXFLAGS = None
  cargo:rerun-if-env-changed=CRATE_CC_NO_DEFAULTS
  CRATE_CC_NO_DEFAULTS = None
  CARGO_CFG_TARGET_FEATURE = Some("fxsr,sse,sse2")
  cargo:rerun-if-env-changed=CXX_x86_64-unknown-linux-gnu
  CXX_x86_64-unknown-linux-gnu = None
  cargo:rerun-if-env-changed=CXX_x86_64_unknown_linux_gnu
  CXX_x86_64_unknown_linux_gnu = None
  cargo:rerun-if-env-changed=HOST_CXX
  HOST_CXX = None
  cargo:rerun-if-env-changed=CXX
  CXX = None
  cargo:rerun-if-env-changed=CXXFLAGS_x86_64-unknown-linux-gnu
  CXXFLAGS_x86_64-unknown-linux-gnu = None
  cargo:rerun-if-env-changed=CXXFLAGS_x86_64_unknown_linux_gnu
  CXXFLAGS_x86_64_unknown_linux_gnu = None
  cargo:rerun-if-env-changed=HOST_CXXFLAGS
  HOST_CXXFLAGS = None
  cargo:rerun-if-env-changed=CXXFLAGS
  CXXFLAGS = None
  cargo:rerun-if-env-changed=CRATE_CC_NO_DEFAULTS
  CRATE_CC_NO_DEFAULTS = None
  CARGO_CFG_TARGET_FEATURE = Some("fxsr,sse,sse2")
  cargo:rerun-if-env-changed=CXX_x86_64-unknown-linux-gnu
  CXX_x86_64-unknown-linux-gnu = None
  cargo:rerun-if-env-changed=CXX_x86_64_unknown_linux_gnu
  CXX_x86_64_unknown_linux_gnu = None
  cargo:rerun-if-env-changed=HOST_CXX
  HOST_CXX = None
  cargo:rerun-if-env-changed=CXX
  CXX = None
  cargo:rerun-if-env-changed=CXXFLAGS_x86_64-unknown-linux-gnu
  CXXFLAGS_x86_64-unknown-linux-gnu = None
  cargo:rerun-if-env-changed=CXXFLAGS_x86_64_unknown_linux_gnu
  CXXFLAGS_x86_64_unknown_linux_gnu = None
  cargo:rerun-if-env-changed=HOST_CXXFLAGS
  HOST_CXXFLAGS = None
  cargo:rerun-if-env-changed=CXXFLAGS
  CXXFLAGS = None
  cargo:rerun-if-env-changed=CRATE_CC_NO_DEFAULTS
  CRATE_CC_NO_DEFAULTS = None
  CARGO_CFG_TARGET_FEATURE = Some("fxsr,sse,sse2")
  cargo:rerun-if-env-changed=CXX_x86_64-unknown-linux-gnu
  CXX_x86_64-unknown-linux-gnu = None
  cargo:rerun-if-env-changed=CXX_x86_64_unknown_linux_gnu
  CXX_x86_64_unknown_linux_gnu = None
  cargo:rerun-if-env-changed=HOST_CXX
  HOST_CXX = None
  cargo:rerun-if-env-changed=CXX
  CXX = None
  cargo:rerun-if-env-changed=CXXFLAGS_x86_64-unknown-linux-gnu
  CXXFLAGS_x86_64-unknown-linux-gnu = None
  cargo:rerun-if-env-changed=CXXFLAGS_x86_64_unknown_linux_gnu
  CXXFLAGS_x86_64_unknown_linux_gnu = None
  cargo:rerun-if-env-changed=HOST_CXXFLAGS
  HOST_CXXFLAGS = None
  cargo:rerun-if-env-changed=CXXFLAGS
  CXXFLAGS = None
  cargo:rerun-if-env-changed=CRATE_CC_NO_DEFAULTS
  CRATE_CC_NO_DEFAULTS = None
  CARGO_CFG_TARGET_FEATURE = Some("fxsr,sse,sse2")
  running: "c++" "-O0" "-ffunction-sections" "-fdata-sections" "-fPIC" "-gdwarf-4" "-fno-omit-frame-pointer" "-m64" "-std=c++11" "-o" "/workspaces/prql/target/debug/build/libduckdb-sys-3cb014c9e6227cca/out/duckdb/duckdb.o" "-c" "duckdb/duckdb.cpp"
  cargo:warning=c++: fatal error: Killed signal terminated program cc1plus
  cargo:warning=compilation terminated.
  exit status: 1

  --- stderr


  error occurred: Command "c++" "-O0" "-ffunction-sections" "-fdata-sections" "-fPIC" "-gdwarf-4" "-fno-omit-frame-pointer" "-m64" "-std=c++11" "-o" "/workspaces/prql/target/debug/build/libduckdb-sys-3cb014c9e6227cca/out/duckdb/duckdb.o" "-c" "duckdb/duckdb.cpp" with args "c++" did not execute successfully (status code exit status: 1).

@eitsupi
Member

eitsupi commented Mar 13, 2023

As I mentioned below, unfortunately you are following the correct steps and this is a known limitation.
#1893 (comment)

In short, a build that includes duckdb (which rust-analyzer runs automatically) requires a lot of machine power and takes a long time.
And as you have seen, duckdb builds often fail!

However, I think duckdb is only used in the tests, so I don't think it has anything to do with the book build.
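
(If it helps to confirm that, cargo can list which crates pull in the libduckdb-sys crate named in the log above; this is only a diagnostic, not a fix:)

  # Show the reverse dependency tree for libduckdb-sys,
  # i.e. which crates cause it to be built
  cargo tree -i libduckdb-sys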

@max-sixty
Member

If you give it more memory, and it completes successfully, does it still take a long time on the second launch?
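
(For context, "c++: fatal error: Killed signal terminated program cc1plus" in the log above is the usual sign that the compiler was killed for running out of memory. If the kernel log is reachable from your setup, a check along these lines can confirm it; dmesg may need extra privileges inside a container, and the output format varies:)

  # Best-effort check for out-of-memory kills in the kernel log
  dmesg | grep -iE 'out of memory|killed process' | tail -n 20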

@richb-hanover
Contributor Author

Update: I think I gave it more memory successfully. I bumped Memory from 10 to 12 GBytes and Virtual Disk from 64 to 72 GBytes. See image below. (I am not sure that this made a difference.) Here's what I see:

From a fresh pull of current state of the main branch:

  1. Start VS Code. Open the Dev Container. Let it start up, doing whatever it does; maybe it's rebuilding the Dev Container. I show the log for the Dev Container to watch the proceedings.
  2. Open a new terminal window, then task run-website; this starts pretty quickly.
  3. ^C and then task run-book; this gives the "Blocking waiting..." message. If I ^C and wait a while (many minutes), the command proceeds, requires a rebuild of a few cargo items (prql? mdbook? I don't remember exactly), and takes a minute or two before it starts.
  4. ^C and then task run-playground; this requires an npm install for two or three minutes, then the Playground opens.

Quit VS Code, then relaunch it. Here's what I see:

  1. Startup OK, show Dev Container log.
  2. task run-website in a new window is pretty fast (that is, it's available acceptably quickly)
  3. task run-book - this time, acceptably fast, no rebuild.
  4. task run-playground - always seems to perform an npm install that takes several minutes

[Image: Docker Desktop Advanced settings showing the updated Memory and Virtual disk limits]

@max-sixty
Member

FYI, as a benchmark: cd web/playground/ && npm install --install-links=false takes 29 seconds on my system on an additional run. Compiling prql-js is cached; wasm-opt takes 6.58s, and the rest is npm. It should be live-updating after every change, so there shouldn't be a need to re-run it in order to iterate on changes.

I'm not surprised that it's quite slow to compile the first time. Unsure about having to interrupt the process; does it complete without that?

Assuming I'm right in interpreting this as running successfully in reasonable time on each change, I think this is a great success!
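
(To reproduce that benchmark elsewhere, the commands quoted above can simply be wrapped in time; the numbers will of course differ by machine:)

  # Time a repeat run of the playground install step
  cd web/playground/
  time npm install --install-links=false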

@richb-hanover
Contributor Author

richb-hanover commented Mar 18, 2023

Update: Running on my 2019 2.3 GHz 8-core Intel Core i9 MacBook Pro, from a cold start (launching VS Code):

  • website started almost immediately after the command line prompt was available
  • book took 5-10 seconds to start
  • playground took just over 4 minutes to open the web browser and display the Playground
  • cd web/playground/ && npm install --install-links=false took just under 4 minutes to run the npm install, run the bundler, compile/optimize wasm, and get back to the command line.

My take is that both the Book and Website are in terrific shape. I wonder if it would be possible to cache some of the build products for the Playground. Thanks!

PS You're right - changes to Playground source files get reflected in the browser quite quickly - a second or two max.

@eitsupi
Member

eitsupi commented Mar 18, 2023

My take is that both the Book and Website are in terrific shape. I wonder if it would be possible to cache some of the build products for the Playground.

My understanding is that these caches are in the project directory. (Let me know if I'm wrong!)
In your case, is the project directory bind-mounted to the local file system?
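
(One quick way to check from a terminal inside the container, assuming the /workspaces/prql path seen in the logs above; a bind mount shows up backed by the host share rather than the container's overlay filesystem:)

  # Show how the workspace directory is mounted and what backs it
  mount | grep /workspaces
  df -hT /workspaces/prql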

@richb-hanover
Contributor Author

In your case, is the project directory bind-mounted to the local file system?

I don't know. (Here's where I show my lack of knowledge in all this stuff.) I'm simply using the Dev Container that was created when I pulled the repo. VS Code said, "Hey! There's a Dev Container here! Want to open it?" So I said yes.

@eitsupi
Member

eitsupi commented Mar 18, 2023

OK, it should be bind-mounted by default, so the cache should remain in the git-ignored directory of your local filesystem.

@max-sixty
Member

I had a look at this — some minor speed up + an explanation in #2242

OK to close?

@richb-hanover
Contributor Author

richb-hanover commented Mar 19, 2023

I'm not quite ready to close this: The speedups do seem to have made a difference on the second startup, but the Playground is still painful. Here's how I tested:

  • rm -rf prql to remove the entire PRQL repository
  • git clone https://github.com/richb-hanover/prql.git to get commit 7125b33
  • Use Docker Desktop to delete all containers, volumes, and networks related to PRQL
  • Launch VS Code and open the new PRQL repo
  • Say yes when it asks "Re-open in Dev Container?"
  • About 40 minutes later the container build completes. I can tell because the status at the bottom of the VS Code window gives these hints:
    • CPU utilization decreases from > 50% to < 3%
    • "rust-analyzer" spinner stops
    • the spinners for "Loading: building proc-macros: compile files", "roots scanned", and "cargo check: <crate-name>" all disappear.

At this point, I can open a new terminal window (I chose zsh), run these commands, and get these results:

  • task run-website starts almost immediately
  • task run-book takes less than a minute to build ~147 crates, then starts right up.
  • task run-playground takes about four minutes before the Playground appears in the browser.

Second run: Quitting VS Code and relaunching is a little faster, except for the Playground.

  • The zsh command line is available in a few seconds
  • task run-website (again) starts almost immediately
  • task run-book does not require rebuilding, and starts in 5-8 seconds.
  • task run-playground still takes about four minutes before the Playground appears in the browser.
  • ^C to stop the Playground and then restart it with task run-playground: this again takes about 4 minutes to start.

Here's a link to a video of starting VS Code the second time (after the container had been built) that shows the messages while the Playground is starting up: https://youtu.be/ZbKG-Vu7Xoo

Question: Why is it stuck at the reify:fsevents: sill reify mark deleted [ '/workspaces/prql/web/playground/node_modules/fsevents' ] message for almost three minutes?

@max-sixty @eitsupi If this is as good as we can get with the Playground, then it's OK to close this. But it sure would be nice to cache some of the Playground build products. Many thanks.

@max-sixty
Member

Appreciate the full timings, that's v helpful.

For the playground build, quoting from #2242:

...but most of the time is still taken up by re-running wasm-opt.

There's an issue for this in rustwasm/wasm-pack#1152. But probably wasm-opt should be run by a build.rs, which knows whether it needs re-running based on changed input files: https://doc.rust-lang.org/cargo/reference/build-scripts.html

I think the current state is probably OK — we still get instant feedback when developing the playground; this is only required when starting the playground or changing prql-js

So we'd need upstream changes (or lots of work, which should then be upstreamed) to make that fast.
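
As a sketch of the idea, not PRQL's actual build and with placeholder file names: the check a build.rs would express via cargo:rerun-if-changed amounts to "only re-run wasm-opt when the input .wasm is newer than the optimized output". In shell terms:

  # Placeholder paths; the point is skipping wasm-opt when the input hasn't changed
  in=pkg/prql_js_bg.wasm
  out=pkg/prql_js_bg.opt.wasm
  if [ ! -e "$out" ] || [ "$in" -nt "$out" ]; then
    wasm-opt -O2 -o "$out" "$in"
  fi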

I'll adjust the title and mark it as postponed, is that reasonable?

@max-sixty changed the title from "DevContainer: 'Blocking waiting for file lock on build directory' on second start" to "Playground build is slow, even without changes" on Mar 19, 2023
@eitsupi
Member

eitsupi commented Mar 19, 2023

Thanks for the detailed analysis!

I suspect that these time-consuming builds could be skipped by using Taskfile's sources and generates functionality.
https://taskfile.dev/usage/#prevent-unnecessary-work
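
(For illustration only, not the repo's actual Taskfile: the mechanism would look roughly like this, and the hard part is getting the sources list right:)

  # Hypothetical Taskfile.yml fragment using Task's up-to-date checks
  version: '3'
  tasks:
    run-playground:
      # Deliberately incomplete: a correct list would also need every prql-js
      # and prql-compiler source that feeds into the wasm build
      sources:
        - web/playground/src/**/*
        - web/playground/package.json
      generates:
        - web/playground/node_modules/.package-lock.json
      cmds:
        - cd web/playground && npm install && npm start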

@max-sixty
Member

I suspect that these time-consuming builds could be skipped by using Taskfile's sources and generates functionality. https://taskfile.dev/usage/#prevent-unnecessary-work

It's at a different level though — at the moment npm install in playground runs npm install on prql-js, which runs wasm-pack, which runs wasm-opt. So unless we tag every source for npm install of playground, including prql-js and therefore anything in prql-compiler (!), we can't use Taskfile here.

I'm quite confident of this one

@eitsupi
Member

eitsupi commented Mar 19, 2023

It's at a different level though — at the moment npm install in playground runs npm install on prql-js, which runs wasm-pack, which runs wasm-opt. So unless we tag every source for npm install of playground, including prql-js and therefore anything in prql-compiler (!), we can't use Taskfile here.

Thanks for explaining this.
I think I understand your explanation, and I hope wasm-pack will improve.

@richb-hanover
Contributor Author

Thank you all for the careful analysis. Since there's no "simple fix", I believe it is reasonable to postpone this for now.

@max-sixty
Member

max-sixty commented Mar 19, 2023

FYI this is the upstream issue: rustwasm/wasm-pack#251

Some folks there are less bullish on a full build.rs-driven workflow. The final commenter has a "put a whole command into build.rs" solution working. Ideally there would be an approach that calls wasm-bindgen APIs rather than running shell commands or even using wasm-pack...

Edit: here's a library that is actually doing this!

https://crates.io/crates/substrate-wasm-builder

It requires nightly, which we won't ever require, but it also looks like it may be moving away from requiring nightly? paritytech/substrate#13580

We're 2-3 months behind on the toolchain, so assuming this works, we could use this in a few months to replace wasm-pack. That would simplify the build a lot as well as solving this perf issue.
