Proposed 1.11.0-rc2 #4534
Merged
Conversation
- Implement `operator==` and `operator<=>` (aka the spaceship operator) in `base_uint`, `Issue`, and `Book`.
- C++20-compliant compilers automatically provide the remaining comparison operators (e.g. `operator<`, `operator<=`, ...).
- Remove the function `compare()` because it is no longer needed.
- Maintain the same semantics as the existing code.
- Add some unit tests to gain further confidence.
- Fix XRPLF#2525.
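For illustration, a minimal sketch of the C++20 mechanism using a hypothetical type (not the actual `base_uint` code): defaulting `operator<=>` lets the compiler synthesize `<`, `<=`, `>`, `>=`, and `==`.

```
#include <cassert>
#include <compare>

// Hypothetical value type used only to illustrate the C++20 mechanism.
struct Amount
{
    long long drops;

    // Defaulting operator<=> also gives a defaulted operator==; the compiler
    // then rewrites a < b as (a <=> b) < 0, and similarly for <=, >, >=.
    auto operator<=>(Amount const&) const = default;
};

int main()
{
    assert(Amount{5} < Amount{7});
    assert(Amount{7} >= Amount{5});
    assert(Amount{5} == Amount{5});
}
```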
Port numbers can now be specified using either a colon or a space. Examples: `1.2.3.4:51235` or `1.2.3.4 51235`

- In the configuration file, an annoying "gotcha" for node operators is accidentally specifying IP:PORT combinations using a colon. The code previously expected a space, not a colon, and did not provide good feedback when this operator error was made.
- This change simply allows this mistake (using a colon) to be fixed automatically, preserving the intention of the operator.
- Add unit tests that exercise the functionality when specifying IP:PORT in the configuration file.
- The RPCCall test regime is not specific enough to test this functionality, so it has been tested by hand.
- Ensure IPv6 addresses are not confused for IP:PORT.

---------

Co-authored-by: Elliot Lee <[email protected]>
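For example, in a hypothetical `[ips_fixed]` stanza of `rippled.cfg` (illustrative addresses), both of the following forms are now accepted:

```
[ips_fixed]
1.2.3.4:51235
1.2.3.4 51235
```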
- MSVC 19.x reported a warning about import paths in Boost for the function_output_iterator class (boost::function_output_iterator).
- Eliminate that warning by updating the import paths, as suggested by the compiler warnings.
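The updated include path is the one Boost's deprecation warning points to; roughly (illustrative, not the exact diff):

```
// Deprecated header that triggers the warning:
// #include <boost/function_output_iterator.hpp>

// Replacement suggested by the warning:
#include <boost/iterator/function_output_iterator.hpp>
```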
* Prevent internal error by catching overflow exception in `gateway_balances`.
* Treat `gateway_balances` obligations overflow as max (largest valid) `STAmount`.
* Note that very large sums of `STAmount` are approximations regardless.

---------

Co-authored-by: Scott Schurr <[email protected]>
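A simplified sketch of the clamping behavior (illustrative only; the real code operates on `STAmount`, not `int64_t`, and the amounts are assumed non-negative here):

```
#include <cstdint>
#include <limits>
#include <stdexcept>
#include <vector>

// Sum a gateway's obligations, clamping to the largest representable value
// instead of surfacing an internal error when the running total overflows.
std::int64_t
sumObligations(std::vector<std::int64_t> const& trustLines)
{
    std::int64_t total = 0;
    try
    {
        for (auto const amount : trustLines)
        {
            if (total > std::numeric_limits<std::int64_t>::max() - amount)
                throw std::overflow_error("gateway_balances obligations overflow");
            total += amount;
        }
    }
    catch (std::overflow_error const&)
    {
        // Report the maximum valid amount; huge sums are approximate anyway.
        total = std::numeric_limits<std::int64_t>::max();
    }
    return total;
}
```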
(XRPLF#4420) In rare circumstances, both `onRequestTimeout` and the response handler (`onSiteFetch` or `onTextFetch`) can get queued and processed. In all observed cases, the response handler processes a network error. `onRequestTimeout` usually runs first, but on rare occasions, the response handler runs first, which leaves `activeResource` empty.
When writing objects to the NodeStore, we need to convert them from the in-memory format to the binary format used by the node store. The conversion is handled by the `EncodedBlob` class, which is only instantiated on the stack. Coupled with the fact that most objects are under 1024 bytes in size, this presents an opportunity to elide a memory allocation in a critical path.

This commit also simplifies the interface of `EncodedBlob` and eliminates a subtle corner case that could result in dangling pointers.

These changes are not expected to cause a significant reduction in memory usage. The change avoids the use of a `std::shared_ptr` when unnecessary and tries to use stack-based memory allocation instead of the heap whenever possible. This is a net gain both in terms of memory usage (lower fragmentation) and performance (less work to do at runtime).
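The general technique here is a small-buffer optimization: keep a fixed-size stack buffer for the common case and fall back to the heap only for larger payloads. A rough sketch with hypothetical names (not the actual `EncodedBlob` interface):

```
#include <array>
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <memory>

// Keep a 1024-byte buffer inline; most serialized objects fit, so the common
// path performs no heap allocation.
class SmallEncodedBlob
{
    std::array<std::uint8_t, 1024> inline_;
    std::unique_ptr<std::uint8_t[]> heap_;
    std::size_t size_ = 0;

public:
    void
    assign(std::uint8_t const* data, std::size_t size)
    {
        size_ = size;
        if (size <= inline_.size())
        {
            heap_.reset();
            std::memcpy(inline_.data(), data, size);
        }
        else
        {
            heap_ = std::make_unique<std::uint8_t[]>(size);
            std::memcpy(heap_.get(), data, size);
        }
    }

    std::uint8_t const*
    data() const
    {
        return heap_ ? heap_.get() : inline_.data();
    }

    std::size_t
    size() const
    {
        return size_;
    }
};
```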
There were situations where `marker`s returned by `account_lines` did not work on subsequent requests, returning "Invalid Parameters". This was caused by the optimization implemented in "Enforce account RPC limits by account objects traversed": XRPLF@e289896?diff=unified&w=1

Previously, the ledger traversal would find up to `limit` account lines, and if there were more, the marker would be derived from the key of the next account line. After the change, ledger traversal would _consider_ up to `limit` account objects of any kind found in the account's directory structure. If there were more, the marker would be derived from the key of the next object, regardless of type. With this optimization, it is expected that `account_lines` may return fewer than `limit` account lines - even 0 - along with a marker indicating that there may be more available.

The problem is that this optimization did not update the `RPC::isOwnedByAccount` helper function to handle those other object types. Additionally, XLS-20 added `ltNFTOKEN_OFFER` ledger objects to the set of objects kept in the account's directory structure, but did not update `RPC::isOwnedByAccount` to be able to handle those objects. The `marker` provided in the example for XRPLF#4354 includes the key for an `ltNFTOKEN_OFFER`. When that `marker` is used on subsequent calls, it is not recognized as valid, and so the request fails.

* Add a unit test that walks all the object types and verifies that all of their indexes can work as a marker.
* Fix XRPLF#4340
* Fix XRPLF#4354
Without the protocol amendment introduced by this commit, an NFT ID can be reminted in this manner:

1. Alice creates an account and mints an NFT.
2. Alice burns the NFT with an `NFTokenBurn` transaction.
3. Alice deletes her account with an `AccountDelete` transaction.
4. Alice re-creates her account.
5. Alice mints an NFT with an `NFTokenMint` transaction with params `NFTokenTaxon` = 0, `Flags` = 9. This will mint an NFT with the same `NFTokenID` as the one minted in step 1.

The params that construct the NFT ID will cause a collision in `NFTokenID` if their values are equal before and after the remint.

With the `fixNFTokenRemint` amendment, there is a new sequence number construct which avoids this scenario:

- A new `AccountRoot` field, `FirstNFTSequence`, stays constant over time.
- This field is set to the current account sequence when the account issues its first NFT. Otherwise, it is not set.
- The sequence of a newly-minted NFT is computed by: `FirstNFTSequence + MintedNFTokens`.
- `MintedNFTokens` is then incremented by 1 for each mint.

Furthermore, there is a new account deletion restriction:

- An account can only be deleted if `FirstNFTSequence + MintedNFTokens + 256` is less than the current ledger sequence.
- 256 was chosen because it already exists in the current account deletion constraint.

Without this restriction, an NFT may still be remintable. Example scenario:

1. Alice's account sequence is at 1.
2. Bob is Alice's authorized minter.
3. Bob mints 500 NFTs for Alice. The NFTs will have sequences 1-501, as NFT sequence is computed by `FirstNFTokenSequence + MintedNFTokens`.
4. Alice deletes her account at ledger 257 (as required by the existing `AccountDelete` amendment).
5. Alice re-creates her account at ledger 258.
6. Alice mints an NFT. `FirstNFTokenSequence` initializes to her account sequence (258), and `MintedNFTokens` initializes as 0. This newly-minted NFT would have a sequence number of 258, which is a duplicate of what she issued through authorized minting before she deleted her account.

---------

Signed-off-by: Shawn Xie <[email protected]>
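A simplified sketch of the two rules (illustrative names and types, not the actual rippled implementation):

```
#include <cstdint>
#include <optional>

struct AccountRoot
{
    std::uint32_t sequence = 0;                     // current account sequence
    std::uint32_t mintedNFTokens = 0;               // MintedNFTokens
    std::optional<std::uint32_t> firstNFTSequence;  // FirstNFTSequence (set on first mint)
};

// Sequence assigned to a newly minted NFT under fixNFTokenRemint.
std::uint32_t
nextNFTokenSequence(AccountRoot& acct)
{
    if (!acct.firstNFTSequence)
        acct.firstNFTSequence = acct.sequence;  // fixed at the first mint
    std::uint32_t const seq = *acct.firstNFTSequence + acct.mintedNFTokens;
    ++acct.mintedNFTokens;
    return seq;
}

// New deletion restriction added by the amendment.
bool
mayDeleteAccount(AccountRoot const& acct, std::uint32_t currentLedgerSeq)
{
    if (!acct.firstNFTSequence)
        return true;  // the restriction only applies once the account has minted
    return *acct.firstNFTSequence + acct.mintedNFTokens + 256 < currentLedgerSeq;
}
```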
* Remove obsolete build instructions.
* By using Conan, builders can choose which dependencies specifically to build and link as shared objects.
* Refactor the build instructions based on the plan in XRPLF#4433.
Make it easy for projects to depend on libxrpl by adding an `ALIAS` target named `xrpl::libxrpl` for projects to link. The name was chosen because:

* The current library target is named `xrpl_core`. There is no other "non-core" library target against which we need to distinguish the "core" library. We only export one library target, and it should just be named after the project to keep things simple and predictable.
* Underscores in target or library names are generally discouraged.
* Every target exported in CMake should be prefixed with the project name.

By adding an `ALIAS` target, existing consumers who use the `xrpl_core` target will not be affected.

* In the future, there can be a migration plan to make `xrpl_core` the `ALIAS` target (and `libxrpl` the "real" target, which will affect the filename of the compiled binary), and eventually remove it entirely.

Also:

* Fix the Conan recipe so that consumers using Conan import a target named `xrpl::libxrpl`. This way, every consumer can use the same instructions.
* Document the two easiest methods to depend on libxrpl. Both have been tested.
* See XRPLF#4443.
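Roughly, the alias and its consumer-side usage look like the following sketch of the CMake involved (illustrative target and project names, not the exact project files):

```
# In the exporting project: keep the existing target, add a namespaced alias.
add_library(xrpl_core STATIC ${XRPL_SOURCES})
add_library(xrpl::libxrpl ALIAS xrpl_core)

# In a consuming project, after find_package() locates libxrpl (directly or
# via a Conan-generated config), the same namespaced name works everywhere:
find_package(xrpl REQUIRED)
add_executable(my_app main.cpp)
target_link_libraries(my_app PRIVATE xrpl::libxrpl)
```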
Fix a case where `ripple::Expected` returned a json array, not a value. The problem was that `Expected` invoked the wrong constructor for the expected type, which resulted in a constructor that took multiple arguments being interpreted as an array. A proposed fix was provided by @godexsoft, which involved a minor adjustment to three constructors that replaces the use of curly braces with parentheses. This makes `Expected` usable for [Clio](https://github.com/XRPLF/clio). A unit test is also included to ensure that the issue doesn't occur again in the future.
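The underlying pitfall is the usual braces-vs-parentheses difference in C++; an analogous standard-library example (not the `Expected` code itself):

```
#include <cassert>
#include <vector>

int main()
{
    // Curly braces prefer an initializer_list constructor when one exists,
    // so the arguments become the *elements* of the container:
    std::vector<int> a{3, 2};  // two elements: 3 and 2
    // Parentheses select the (count, value) constructor instead:
    std::vector<int> b(3, 2);  // three elements, each equal to 2

    assert(a.size() == 2);
    assert(b.size() == 3);
}
```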
Add the ability to mark amendments as obsolete. There are some known amendments that should not be voted for because they are broken (or for similar reasons). This commit marks four amendments as obsolete:

1. `CryptoConditionsSuite`
2. `NonFungibleTokensV1`
3. `fixNFTokenDirV1`
4. `fixNFTokenNegOffer`

When an amendment is `Obsolete`, voting for the amendment is prevented. A determined operator can still vote for the amendment by changing the source, and doing so does not break any protocol rules. The "feature" command no longer modifies the vote for obsolete amendments.

Before this change, there were two options for an amendment's `DefaultVote` behavior: yes and no. After this change, there are three options for an amendment's `VoteBehavior`: DefaultYes, DefaultNo, and Obsolete.

To be clear, if an obsolete amendment were to (somehow) be activated by consensus, the server still has the code to process transactions according to that amendment, and would not be amendment blocked. It would function the same as if it had been voting "no" on the amendment.

Resolves XRPLF#4014. Incorporates review feedback from @scottschurr.
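Conceptually, the change replaces a two-state default vote with a three-state behavior; a sketch (illustrative, not the exact declarations in the codebase):

```
// Before: only a default vote of yes or no could be expressed.
enum class DefaultVote { no, yes };

// After: amendments can also be marked obsolete, which prevents voting for
// them (short of modifying the source).
enum class VoteBehavior { DefaultNo, DefaultYes, Obsolete };
```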
* Create the FeeSettings object in genesis ledger.
* Initialize with default values from the config. Removes the need to pass a Config down into the Ledger initialization functions, including setup().
* Drop the undocumented fee config settings in favor of the [voting] section.
* Fix XRPLF#3734.
* If you previously used fee_account_reserve and/or fee_owner_reserve, you should change to using the [voting] section instead. Example:
```
[voting]
account_reserve=10000000
owner_reserve=2000000
```
* Because old Mainnet ledgers (prior to 562177 - yes, I looked it up) don't have FeeSettings, some of the other ctors will default them to the config values before setup() tries to load the object.
* Update default Config fee values to match Mainnet.
* Fix unit tests:
  * Updated fees: Some tests are converted to use computed values of the fee object, but the default Env config was also updated to fix the rest.
  * Unit tests that check the structure of the ledger have updated hashes and counts.
In the release notes (current and historical), there is a link to the `Builds` directory. By creating `Builds/README.md` with a link to `BUILD.md`, it is easier to find the build instructions.
Log exception messages at several locations. Previously, these were locations where an exception was caught, but the exception message was not logged. Logging the exception messages can be useful for analysis or debugging. The additional logging could have a small negative performance impact. Fix XRPLF#3213.
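A generic sketch of the pattern (not a specific call site; `std::cerr` stands in for rippled's journal, where the equivalent would be `JLOG(j.warn()) << ...`):

```
#include <exception>
#include <iostream>
#include <stdexcept>

void
processLedgerData()  // hypothetical operation that may throw
{
    throw std::runtime_error("corrupt entry");
}

int main()
{
    try
    {
        processLedgerData();
    }
    catch (std::exception const& e)
    {
        // Previously such catch blocks were often silent; logging e.what()
        // preserves the message for later analysis.
        std::cerr << "Exception while processing ledger data: " << e.what()
                  << '\n';
    }
}
```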
Previously, the object `account_data` in the `account_info` response contained a single field `Flags` that contained the account's flags. API consumers had to perform bitwise operations on this field to retrieve the account flags. This change adds a new object, `account_flags`, at the top level of the `account_info` response `result`. The object contains the relevant flags of the account. This makes it easier to write simple code to check a flag's value. The flags included may depend on the amendments that are enabled. Fix XRPLF#2457.
Change `ledger_data` to return an empty list when all entries are filtered out. When the `type` field is specified for the `ledger_data` method, it is possible that no objects of the specified type are found. This can even occur if those objects exist, but not in the section that the server checked while serving your request. Previously, the `state` field of the response had the value `null`, instead of an empty array `[]`. By changing this to an empty array, the response has the same data type in all cases, so clients can handle it consistently. For example, in Python, `for entry in state` now works correctly; it would raise an exception if `state` were `null` (or `None`). This could break client code that explicitly checks for null. However, this fix aligns the response with the documentation, where the `state` field is an array. Fix XRPLF#4392.
- Include NFTokenPages in account_objects to make it easier to understand an account's Owner Reserve and simplify app development.
- Update related tests and documentation.
- Fix XRPLF#4347.

For info about the Owner Reserve, see https://xrpl.org/reserves.html

---------

Co-authored-by: Scott Schurr <[email protected]>
Co-authored-by: Ed Hennis <[email protected]>
Add Clio-specific JSS constants to ensure a common vocabulary of keywords in Clio and this project. By providing visibility of the full API keyword namespace, it reduces the likelihood of developers introducing minor variations on names used by Clio, or unknowingly claiming a keyword that Clio has already claimed. This change moves this project slightly away from having only the code necessary for running the core server, but it is a step toward the goal of keeping this server's and Clio's APIs similar. The added JSS constants are annotated to indicate their relevance to Clio. Clio can be found here: https://github.com/XRPLF/clio

Signed-off-by: ledhed2222 <[email protected]>
When instantiating a large number of fixed-size objects on the heap, the overhead that dynamic memory allocation APIs impose quickly becomes significant. In some cases, allocating a large amount of memory at once and using a slabbing allocator to carve the large block into fixed-size units used to service requests for memory can reduce memory fragmentation significantly and, potentially, improve overall performance.

This commit introduces a new `SlabAllocator<>` class that exposes an API that is _similar_ to the C++ concept of an `Allocator`, but it is not meant to be a general-purpose allocator. It should not be used unless profiling and analysis of specific memory allocation patterns indicates that the additional complexity introduced will improve the performance of the system overall, and subsequent profiling proves it.

A helper class, `SlabAllocatorSet<>`, simplifies handling of variably sized objects that benefit from slab allocations.

This commit incorporates improvements suggested by Greg Popovitch (@greg7mdp).

Commit 1 of 3 in XRPLF#4218.
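A minimal sketch of the slab idea (a toy free-list carve-up, far simpler than the `SlabAllocator<>` in this commit):

```
#include <cstddef>
#include <cstdint>
#include <vector>

// Carve one large block into fixed-size chunks and hand them out from a
// free list, avoiding a trip to the general-purpose allocator per object.
class ToySlab
{
    std::vector<std::uint8_t> block_;
    std::vector<void*> free_;
    std::size_t const itemSize_;

public:
    ToySlab(std::size_t itemSize, std::size_t itemCount)
        : block_(itemSize * itemCount), itemSize_(itemSize)
    {
        for (std::size_t i = 0; i != itemCount; ++i)
            free_.push_back(block_.data() + i * itemSize_);
    }

    void*
    allocate()
    {
        if (free_.empty())
            return nullptr;  // caller falls back to operator new
        void* p = free_.back();
        free_.pop_back();
        return p;
    }

    void
    deallocate(void* p)
    {
        free_.push_back(p);  // assumes p came from this slab
    }
};
```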
The `SHAMapItem` class contains a variable-sized buffer that holds the serialized data associated with a particular item inside a `SHAMap`. Prior to this commit, the buffer for the serialized data was allocated separately. Coupled with the fact that most instances of `SHAMapItem` were wrapped in a `std::shared_ptr`, this meant that an instantiation might result in up to three separate memory allocations.

This commit switches away from `std::shared_ptr` for `SHAMapItem` and uses `boost::intrusive_ptr` instead, allowing the reference count for an instance to live inside the instance itself. Coupled with using a slab-based allocator to optimize memory allocation for the most commonly sized buffers, the net result is significant memory savings. In testing, the reduction in memory usage hovers between 400MB and 650MB. Other scenarios might result in larger savings. In performance testing with NFTs, this commit reduces memory size by about 15%, sustained over long durations.

Commit 2 of 3 in XRPLF#4218.
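The core of the `boost::intrusive_ptr` technique is an embedded reference count plus two free functions found by argument-dependent lookup; a bare-bones sketch using a hypothetical type (not `SHAMapItem`):

```
#include <atomic>
#include <boost/intrusive_ptr.hpp>

// The reference count lives inside the object itself, so boost::intrusive_ptr
// needs no separate control block (unlike std::shared_ptr).
class Item
{
public:
    Item() = default;

private:
    mutable std::atomic<int> refs_{0};

    friend void
    intrusive_ptr_add_ref(Item const* p)
    {
        p->refs_.fetch_add(1, std::memory_order_relaxed);
    }

    friend void
    intrusive_ptr_release(Item const* p)
    {
        if (p->refs_.fetch_sub(1, std::memory_order_acq_rel) == 1)
            delete p;
    }
};

int main()
{
    boost::intrusive_ptr<Item> p(new Item);  // count stored inside *p
    boost::intrusive_ptr<Item> q = p;        // copying just bumps that count
}
```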
The `Ledger` class contains two `SHAMap` instances: the state and transaction maps. Previously, the maps were dynamically allocated using `std::make_shared` despite the fact that they did not require lifetime management separate from the lifetime of the `Ledger` instance to which they belong. The two `SHAMap` instances are now regular member variables. Some smart pointers and dynamic memory allocations were avoided by using stack-based alternatives. Commit 3 of 3 in XRPLF#4218.
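In outline, the change is simply from heap-allocated members to plain members (a toy comparison with a stand-in type, not the actual `Ledger` class):

```
#include <memory>

struct SHAMap {};  // stand-in for the real class

// Before: each map required a separate heap allocation and shared ownership.
struct LedgerBefore
{
    std::shared_ptr<SHAMap> stateMap = std::make_shared<SHAMap>();
    std::shared_ptr<SHAMap> txMap = std::make_shared<SHAMap>();
};

// After: the maps live directly inside the Ledger and share its lifetime.
struct LedgerAfter
{
    SHAMap stateMap;
    SHAMap txMap;
};
```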
Add a `NetworkID` field to help prevent replay attacks on and from side-chains. The new field must be used when the server is using a network id > 1024.

To preserve legacy behavior, all chains with a network ID less than 1025 retain the existing behavior. This includes Mainnet, Testnet, Devnet, and hooks-testnet. If `sfNetworkID` is present in any transaction submitted to any of the nodes on one of these chains, then `telNETWORK_ID_MAKES_TX_NON_CANONICAL` is returned. Since chains with a network ID less than 1025, including Mainnet, retain the existing behavior, there is no need for an amendment.

The `NetworkID` helps to prevent replay attacks because users specify a `NetworkID` field in every transaction for that chain.

This change introduces a new UINT32 field, `sfNetworkID` ("NetworkID"). There are also three new local error codes for transaction results:

- `telNETWORK_ID_MAKES_TX_NON_CANONICAL`
- `telREQUIRES_NETWORK_ID`
- `telWRONG_NETWORK`

To learn about the other transaction result codes, see: https://xrpl.org/transaction-results.html

Local error codes were chosen because a transaction is not necessarily malformed if it is submitted to a node running on the incorrect chain. This is a local error specific to that node and could be corrected by switching to a different node or by changing the `network_id` on that node. See: https://xrpl.org/connect-your-rippled-to-the-xrp-test-net.html

In addition to using `NetworkID`, it is still generally recommended to use different accounts and keys on side-chains. However, people will undoubtedly use the same keys on multiple chains; for example, this is common practice on other blockchain networks. There are also some legitimate use cases for this.

An `app.NetworkID` test suite has been added, and `core.Config` was updated to include some network_id tests.
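The rule described above, in sketch form (assumed logic with stand-in types; not the actual transaction-checking code):

```
#include <cstdint>
#include <optional>

// Stand-ins for the rippled result codes named in this change.
enum class TxResult {
    tesSUCCESS,
    telNETWORK_ID_MAKES_TX_NON_CANONICAL,
    telREQUIRES_NETWORK_ID,
    telWRONG_NETWORK,
};

constexpr std::uint32_t lastLegacyNetworkID = 1024;

TxResult
checkNetworkID(
    std::optional<std::uint32_t> txNetworkID,  // sfNetworkID, if present
    std::uint32_t nodeNetworkID)               // the node's configured network_id
{
    if (nodeNetworkID <= lastLegacyNetworkID)
    {
        // Legacy chains (Mainnet, Testnet, Devnet, ...): the field must be absent.
        if (txNetworkID)
            return TxResult::telNETWORK_ID_MAKES_TX_NON_CANONICAL;
    }
    else
    {
        // Newer side-chains: the field is required and must match the chain.
        if (!txNetworkID)
            return TxResult::telREQUIRES_NETWORK_ID;
        if (*txNetworkID != nodeNetworkID)
            return TxResult::telWRONG_NETWORK;
    }
    return TxResult::tesSUCCESS;
}
```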
Newer compilers, such as Apple Clang 15.0, have removed `std::result_of` as part of C++20. The build instructions provided a fix for this (by adding a preprocessor definition), but the fix was broken. This fixes the fix by:

* Adding the `conf` prefix for tool configurations (which had been forgotten).
* Passing `extra_b2_flags` to the `boost` package to fix its build.
* Defining `BOOST_ASIO_HAS_STD_INVOKE_RESULT` in order to build boost 1.77 with a newer compiler.
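In a Conan profile, the corrected form looks roughly like this (an illustrative profile fragment; the authoritative flags are the ones in BUILD.md):

```
[conf]
tools.build:cxxflags=["-DBOOST_ASIO_HAS_STD_INVOKE_RESULT"]
```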
Add instructions for installing rippled using the package managers APT and YUM. Some steps were adapted from xrpl.org.

---------

Co-authored-by: Michael Legleux <[email protected]>
If the `--quorum` setting is present on the command line, use the specified value as the minimum quorum. This allows for the use of a potentially fork-unsafe quorum, but it is sometimes necessary for small and test networks. Fix XRPLF#4488.

---------

Co-authored-by: RichardAH <[email protected]>
Address issues related to the removal of `std::{u,bi}nary_function` in C++17 and some warnings with Clang 16. Some warnings appeared with the upgrade to Apple clang version 14.0.3 (clang-1403.0.22.14.1).

- `std::{u,bi}nary_function` were removed in C++17. They were empty classes with a few associated types. We already have conditional code to define the types. Just make it unconditional.
- libc++ checks a cast in an unevaluated context to see if a type inherits from a binary function class in the standard library, e.g. `std::equal_to`, and this causes an error when the type privately inherits from such a class. Change these instances to public inheritance.
- We don't need a middle-man for the empty base optimization. Prefer to inherit directly from an empty class rather than from `beast::detail::empty_base_optimization`.
- Clang warns when all the uses of a variable are removed by conditional compilation of assertions. Add a `[[maybe_unused]]` annotation to suppress it.
- As a drive-by clean-up, remove commented code.

See related work in XRPLF#4486.
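The `[[maybe_unused]]` case looks roughly like this (a generic example of the warning and its fix, not a specific call site in the codebase):

```
#include <cassert>

int
computeChecksum()
{
    return 42;  // placeholder computation
}

void
verify()
{
    // Without the annotation, Clang warns that `sum` is unused in NDEBUG
    // builds, where the assert (its only use) compiles away.
    [[maybe_unused]] int const sum = computeChecksum();
    assert(sum == 42);
}

int main()
{
    verify();
}
```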
This change makes progress on the plan in XRPLF#4371. It does not replicate the full [matrix] implemented in XRPLF#3851, but it does replicate the 1.ii section of the Linux matrix. It leverages "heavy" self-hosted runners, and demonstrates a repeatable pattern for future matrices. [matrix]: https://github.com/XRPLF/rippled/blob/d794a0f3f161bb30c74881172fc38f763d7d46e8/.github/README.md#continuous-integration
SOCI is the C++ database access library. The SOCI recipe was updated in Conan Center Index (CCI), and it breaks for our choice of options. This breakage occurs when you build with a fresh Conan cache (e.g. when you submit a PR, or delete `~/.conan/data`).

* Add a custom Conan recipe for SOCI v4.0.3
* Update dependency building to handle exporting and installing Snappy and SOCI
* Fix workflows to use the custom SOCI recipe
* Update BUILD.md to include instructions for exporting the SOCI Conan recipe:
  * `conan export external/soci soci/4.0.3@`

This solution has been verified on Ubuntu 20.04 and macOS.

Context:

* There is a compiler error that the `sqlite3.h` header is not available when building soci.
* When package B depends on package A, it finds the pieces it needs by importing the Package Configuration File (PCF) that Conan generates for package A.
* Read the CMake written by package B to check that it is importing the PCF correctly and linking its exports correctly.
* Since this can be difficult, it is often more efficient to check https://github.com/conan-io/conan-center-index/issues for package B to see if anyone else has seen a similar problem.
* One of the issues points to a problem area in soci's CMake. To confirm the diagnosis, review soci's CMake (after any patches are applied) in the Conan build directory `build/$buildId/src/`.
* Review the Conan-generated PCF in `build/$buildId/build/$buildType/generators/`.
* In this case, the problem was likely (re)introduced by conan-io/conan-center-index#17026
* If there is a problem in the source or in the Conan recipe, the fastest fix is to copy the recipe and either:
  * Add a source patch to fix any problems in the source.
  * Change the recipe to fix any problems in the recipe.
* In this case, this can be done by finding soci's Conan recipe at https://github.com/conan-io/conan-center-index/tree/master/recipes/soci and then copying the `all` directory as `external/$packageName` in our project. Then, make any changes.
* Test packages can be removed from the recipe folder as they are not needed.
* If adding a patch in the `patches` directory, add a description for it in `conandata.yml`.
* Since `conanfile.py` has no `version` property on the recipe class, builders need to pass a version on the command line (like they do for our `snappy` recipe).
* Add an example command to `BUILD.md`.

Future work: It may make sense to refer to recipes by revision, by checking in a lockfile.
On macOS, if you have not installed something that depends on `xz`, then your system may lack `lzma`, resulting in a build error similar to:

```
Downloading libarchive-3.6.0.tar.xz completed [6250.61k]
libarchive/3.6.0: ERROR: libarchive/3.6.0: Error in source() method, line 120
	get(self, **self.conan_data["sources"][self.version], strip_root=True)
	ReadError: file could not be opened successfully:
	- method gz: ReadError('not a gzip file')
	- method bz2: ReadError('not a bzip2 file')
	- method xz: CompressionError('lzma module is not available')
	- method tar: ReadError('invalid header')
```

The solution is to ensure that `lzma` is installed by installing `xz`.
(XRPLF#4404) The API would allow seeds (and public keys) to be used in place of accounts at several locations in the API. For example, when calling account_info, you could pass `"account": "foo"`. The string "foo" is treated like a seed, so the method returns `actNotFound` (instead of `actMalformed`, as most developers would expect). In the early days, this was a convenience to make testing easier. However, it allows for poor security practices, so it is no longer a good idea. Allowing a secret or passphrase is now considered a bug. Previously, it was controlled by the `strict` option on some methods. With this commit, since the API does not interpret `account` as a seed, the `strict` option is no longer needed and is removed.

Removing this behavior from the API is a [breaking change](https://xrpl.org/request-formatting.html#breaking-changes). One could argue that it shouldn't be done without bumping the API version; however, in this instance, there is no evidence that anyone is using the API in the "legacy" way. Furthermore, it is a potential security hole, as it allows users to send secrets to places where they are not needed, where they could end up in logs, error messages, etc. There's no reason to take such a risk with a seed/secret, since only the public address is needed.

Resolves: XRPLF#3329, XRPLF#3330, XRPLF#4337

BREAKING CHANGE: Remove non-strict account parsing (XRPLF#3330)
Three new fields are added to the `Tx` responses for NFTs:

1. `nftoken_id`: This field is included in the `Tx` responses for `NFTokenMint` and `NFTokenAcceptOffer`. It indicates the `NFTokenID` for the `NFToken` that was modified on the ledger by the transaction.
2. `nftoken_ids`: This array is included in the `Tx` response for `NFTokenCancelOffer`. It provides a list of all the `NFTokenID`s for the `NFToken`s that were modified on the ledger by the transaction.
3. `offer_id`: This field is included in the `Tx` response for `NFTokenCreateOffer` transactions and shows the OfferID of the `NFTokenOffer` created.

The fields make it easier to track specific tokens and offers. The implementation includes code (by @ledhed2222) from the Clio project to extract NFTokenIDs from mint transactions.
Global variables in different TUs are initialized in an undefined order. At least one global variable was accessing a global switchover variable. This caused the switchover variable to be accessed in an uninitialized state. Since the switchover is always explicitly set before transaction processing, this bug cannot affect transaction processing, but it could affect unit tests (and potentially the value of some global variables). Note: at the time of this patch, the offending bug is not yet in production.
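The hazard and a common remedy, in sketch form (illustrative code only; the actual fix in rippled may differ):

```
// Sketch of the usual remedy for the "static initialization order fiasco":
// a function-local static is initialized on first use, so any global whose
// initializer calls switchover() sees a fully constructed value, regardless
// of which translation unit it lives in.
bool&
switchover()
{
    static bool flag = false;  // constructed the first time this runs
    return flag;
}

// A global whose initializer depends on the switchover value. With a plain
// namespace-scope `bool` defined in another TU, this read could instead
// happen before that variable's dynamic initialization.
int const encodedWidth = switchover() ? 64 : 32;

int main()
{
    return encodedWidth == 32 ? 0 : 1;
}
```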
This assert was put in the wrong place, but it only triggers if shards are configured. This change moves the assert to the right place and updates it to ensure correctness. The assert could be hit after the server downloads some shards. It may be necessary to restart after the shards are downloaded. Note that asserts are normally checked only in debug builds, so release packages should not be affected. Introduced in: XRPLF#4319 (66627b2)
seelabs approved these changes · May 24, 2023
👍
HowardHinnant approved these changes · May 24, 2023
High Level Overview of Change
Compared to rc1, this release candidate includes:
This software has a quarterly release cadence. The 1.11 release is expected to land the week of June 20, 2023. This release candidate allows for approximately 1 month of testing, including soak-in time on production servers.
Any other features can be included in the next release (1.12) - there is always another train coming.
Context of Change
There have been about 40 changes (PRs) since the last release (1.10.1). Review the comparison for details.
Type of Change