Background
Yesterday I tried running the following, where I specified -y to skip confirmations:

pop up contract --constructor new --args false --suri //Alice --dry-run -y

But it didn't skip the confirmations. Given that I specified the -y argument to skip confirmations, I would have expected it to just assume that I want to source the binary automatically. Instead I was prompted:

...
▲ ⚠️ The substrate-contracts-node binary is not found.
│
◆ 📦 Would you like to source it automatically now?
│ ● Yes / ○ No

I reached out to the Pop Support group on Telegram, where @al3mart confirmed there was a small bug where skipping confirmations was being ignored, and that a fix was to be made soon: #396.
Proposed Enhancement
As a user, my initial requirement is to build an end-to-end MVP solution using Pop! CLI, where I have control of the output of each command I run.
The Pop! CLI prompts that require confirmation are pretty and convenient for new users, but if I want to build automation at scale using Pop! CLI, I don't have time to manually choose my next step. Instead I want control of each output, so I can use it as the input to the next step.
For example, in situations like that I want to be able to specify the default options in the Pop! command itself, and for it to just get the job done without prompting me any further when I pass the -y argument to skip confirmations. That behaviour could be guaranteed if the codebase had 100% test coverage of CLI outputs, to empower end-to-end automation. I also want the output of each Pop! CLI command to be easy to parse, so I can use parts of the output in subsequent commands.
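To make this concrete, here is a sketch of the kind of parse-friendly handoff I mean, assuming jq is installed. The JSON shape, the file name, and the idea that a Pop! command would emit it are hypothetical illustrations, not existing Pop! CLI behaviour.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical example: suppose a Pop! command could emit JSON like this.
# (This shape is an assumption for illustration, not real Pop! CLI output.)
cat > upload-output.json <<'EOF'
{ "codeId": 42, "codeHash": "0xabc123", "contractAddress": "5GrwvaEF..." }
EOF

# Parse exactly the fields needed by the next step with jq.
ADDR=$(jq -r '.contractAddress' upload-output.json)
HASH=$(jq -r '.codeHash' upload-output.json)

# Feed them into a subsequent (hypothetical) command.
echo "next step would call: pop call contract --contract ${ADDR}"
echo "code hash: ${HASH}"
```

With output like this, each command's result becomes the input to the next, with no interactive prompt in between.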
For example, in the past I created an end-to-end script that allowed me to run substrate-contracts-node, and to build, upload, and instantiate multiple ink! contracts with a single command.
I wanted that automation solution for myself and others because I didn't want to waste time manually answering CLI prompts when I could just run one command, with the assurance that if I went for a coffee break it would be finished by the time I came back.
Back then I used sed to get the deployed contract address: https://github.com/ltfschoen/XCMTemplate/blob/main/docker/quickstart-unnamed.sh#L110. In cases where such a bash script gets quite long, an improvement could be to use a library like https://github.com/rust-shell-script/rust_cmd_lib instead.

More recently I was using jq to parse JSON configuration files that I'd created, which gave me more control than the initial .env files (https://github.com/svub/nunya/blob/main/scripts/set-relayer.sh#L20), and piping the outputs to yq, which I used to update YAML configuration files in other related projects (https://github.com/svub/nunya/blob/main/scripts/set-relayer.sh#L53). Note that in that situation I was running a local Ethereum development node and a local Secret Network development node, uploading and instantiating contracts on each of those networks, and I needed the latest code hash of a gateway contract that I had deployed to be added to the YAML configuration file of a relayer that listened for events emitted by each network and broadcast transactions onto the other.

In that project I was initially parsing the .env file using dotenv in this config.ts file (https://github.com/svub/nunya/blob/main/packages/secret-contracts-scripts/src/config/config.ts), and then, after uploading a contract and getting its code id and contract hash as output, I'd load and update a JSON configuration file in TypeScript (https://github.com/svub/nunya/blob/main/packages/secret-contracts-scripts/src/uploadAndInstantiateGateway.ts#L230), which I could then load elsewhere, for example https://github.com/svub/nunya/blob/main/packages/secret-contracts-scripts/src/functions/evm/requestNunya.ts#L24.
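The jq-to-yq handoff above can be sketched like this. The file names and key paths are made up for illustration, and the yq step (v4 syntax) is guarded so the sketch falls back to sed if yq isn't installed.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Sample JSON config written by an earlier step (illustrative shape).
cat > config.json <<'EOF'
{ "secretGateway": { "codeHash": "0xdeadbeef" } }
EOF

# Sample relayer YAML to be updated (illustrative shape).
cat > relayer.yaml <<'EOF'
relayer:
  code_hash: "0x00"
EOF

# Read the latest gateway code hash with jq...
CODE_HASH=$(jq -r '.secretGateway.codeHash' config.json)

# ...and write it into the relayer's YAML config with yq (v4 syntax).
if command -v yq >/dev/null 2>&1; then
  yq -i ".relayer.code_hash = \"${CODE_HASH}\"" relayer.yaml
else
  # Fallback so the sketch still works without yq installed.
  sed -i "s|code_hash: \".*\"|code_hash: \"${CODE_HASH}\"|" relayer.yaml
fi

grep 'code_hash' relayer.yaml
```

This is the same pattern as the set-relayer.sh script linked above: one tool extracts a value from JSON, another writes it into the YAML that a different service consumes.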
In that project I was specifically using Secret JS rather than Secret CLI, in case it gave me more control of the outputs.
Perhaps, instead of just printing the output metadata in JSON format in the terminal for each command, an enhancement could be to update a JSON or YAML configuration file, added to .gitignore, that the user has access to for subsequent commands. For example, after uploading and instantiating a contract, that file could be updated to include: the name of the process used by substrate-contracts-node, so the user knows which task to find using grep and kill if necessary; the temporary database folder being used; the uploaded code id and code hash; and the instantiated contract address. The user could then parse that file to use those outputs in later steps.
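A sketch of what such a state file might look like and how it would be consumed; the file name .pop-state.json and all of its keys are my own invention for illustration, not an existing Pop! CLI feature.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical state file the CLI could maintain after upload/instantiate.
cat > .pop-state.json <<'EOF'
{
  "node": {
    "process": "substrate-contracts-node",
    "pid": 12345,
    "basePath": "/tmp/pop-node-db"
  },
  "contract": {
    "codeId": 7,
    "codeHash": "0xfeed",
    "address": "5Fictional..."
  }
}
EOF

# Subsequent steps could then pull exactly what they need:
PID=$(jq -r '.node.pid' .pop-state.json)
DB=$(jq -r '.node.basePath' .pop-state.json)
ADDR=$(jq -r '.contract.address' .pop-state.json)

echo "to stop the node:   kill ${PID}"
echo "to reset the chain: rm -rf ${DB}"
echo "to call the contract, target ${ADDR}"
```

The point is that nothing here needs grep over free-form terminal output: every value a follow-up command needs is addressable by a stable key.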
I'm not sure if it's possible, but ideally a Pop! CLI command should be able to run a substrate-contracts-node and give the user the flexibility to restart the node, to reset it so it deletes previously deployed contracts, and to not get confused if the user is running more than one instance of it. Previously I did most of that like this:
https://github.com/ltfschoen/XCMTemplate/blob/main/docker/run.sh#L113
https://github.com/ltfschoen/XCMTemplate/blob/main/docker/run-scn.sh
https://github.com/ltfschoen/XCMTemplate/blob/main/docker/reset.sh
https://github.com/ltfschoen/XCMTemplate/blob/main/docker/restart.sh
I think the CLI should offer a more automated way to stop, restart, and reset the node than just asking the user to grep for the service and stop it with kill -9.
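The stop/restart/reset flow in the scripts linked above can be reduced to a few shell functions. A minimal sketch, assuming a pid file and a --base-path chain-database directory; the binary is only started if it's actually installed, so the reset demo runs anywhere.

```shell
#!/usr/bin/env bash
set -euo pipefail

NODE_BIN="substrate-contracts-node"
BASE_PATH="${TMPDIR:-/tmp}/contracts-node-db"

start_node() {
  # Only start if the binary exists; log to a file so the shell stays usable.
  if command -v "$NODE_BIN" >/dev/null 2>&1; then
    "$NODE_BIN" --base-path "$BASE_PATH" >node.log 2>&1 &
    echo $! > node.pid
  fi
}

stop_node() {
  # Prefer the recorded pid over grep + kill -9.
  [ -f node.pid ] && kill "$(cat node.pid)" 2>/dev/null || true
  rm -f node.pid
}

reset_node() {
  # Wipe the chain database so previously deployed contracts are gone.
  stop_node
  rm -rf "$BASE_PATH"
}

# Demo of the reset semantics using a dummy database directory.
mkdir -p "$BASE_PATH" && touch "$BASE_PATH/db.sqlite"
reset_node
[ ! -d "$BASE_PATH" ] && echo "database wiped"
```

Tracking the pid and base path per instance (for example in the state file idea above) is also what would let the CLI manage more than one running node without confusion.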
But in hindsight it would have been better to just run the node in a separate Docker container; then, to clean and restart the development node, I could stop and remove that Docker container and start it again from scratch. In the project where I was using Secret Network, I was able to use their Makefile and run make start-server-daemon (https://github.com/svub/nunya/blob/main/packages/secret-contracts/secret-gateway/Makefile#L38) to start a local Secret development node. It's just annoying that the source code of that Docker image isn't open-source. I was able to kill it with docker stop secretdev && docker rm secretdev && sleep 5 (https://github.com/svub/nunya/blob/main/scripts/run.sh#L38).
For some reason, adding a Docker container so users can just run docker up has been an open issue for almost a year: paritytech/substrate-contracts-node#177. But the source code of https://github.com/paritytech/substrate-contracts-node is open-source, so if Pop! CLI can't do that, then what's on my mind is to update the substrate-contracts-node codebase and its documentation to show developers how to run substrate-contracts-node independently using Docker.
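What I have in mind is roughly the following untested sketch. The base images, the cargo install contracts-node step, the binary path, and the exposed port are all assumptions to verify against the project's actual build requirements (native dependencies such as clang and protobuf are omitted here).

```dockerfile
# Hypothetical multi-stage build for substrate-contracts-node (untested sketch).
FROM rust:bookworm AS builder
# Assumed install step; native build deps (clang, protobuf, etc.) may also
# be required and are not included in this sketch.
RUN cargo install contracts-node

FROM debian:bookworm-slim
COPY --from=builder /usr/local/cargo/bin/substrate-contracts-node /usr/local/bin/
# Default Substrate RPC port.
EXPOSE 9944
ENTRYPOINT ["substrate-contracts-node", "--dev", "--rpc-external"]
```

A container like this could then be started with docker run --rm -d -p 9944:9944 --name contracts-node and removed with docker rm -f contracts-node, mirroring the docker stop secretdev flow above.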
Note that there appear to be annoying bugs in yq though. For example, if I modify a line in a YAML file, it removes all the empty rows in that file, as highlighted here: https://stackoverflow.com/questions/57627243/how-to-prevent-yq-removing-comments-and-empty-lines.
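One commonly suggested workaround for this is to protect blank lines with a placeholder comment before running yq and strip it afterwards. A sketch, assuming GNU sed and yq v4; the yq step is guarded so the sketch still runs where yq isn't installed.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Sample YAML with a blank line between sections (illustrative).
cat > cfg.yaml <<'EOF'
relayer:
  port: 9000

network:
  name: dev
EOF

# 1. Turn blank lines into placeholder comments, which yq v4 preserves.
sed -i 's/^[[:space:]]*$/#__BLANK__/' cfg.yaml

# 2. Make the edit (yq v4 syntax); skipped if yq is not installed.
if command -v yq >/dev/null 2>&1; then
  yq -i '.relayer.port = 8080' cfg.yaml
fi

# 3. Restore the blank lines.
sed -i 's/^#__BLANK__$//' cfg.yaml

grep -c '^$' cfg.yaml
```

This keeps the file's visual grouping intact at the cost of an extra pre/post pass, which is exactly the kind of friction a CLI-maintained state file would avoid.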