
PriceOracle: Price Oracle (XLS-47d) #4789

Merged: 1 commit merged into XRPLF:develop on Feb 26, 2024

Conversation

@gregtatcam (Collaborator) commented Oct 26, 2023

High Level Overview of Change

Implement native support for Price Oracles

Add a new ledger object: Oracle

Add two new transactions:

  • OracleSet: create or update the Oracle object.
  • OracleDelete: delete the Oracle object.

Add one new API

  • get_aggregate_price: get the aggregate price for a token pair across the specified oracles (see the illustrative parameters below).
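For context, a request to the new API would carry parameters along these lines. This is a hypothetical illustration only; the field names (base_asset, quote_asset, oracles, account, oracle_sequence) are taken from the handler code discussed later in this PR and from the XLS-47d draft, and may change.

// Hypothetical sketch of the parameters the get_aggregate_price handler
// receives (rippled's Json::Value / jss:: conventions assumed).
Json::Value params;
params[jss::base_asset] = "XRP";
params[jss::quote_asset] = "USD";
Json::Value oracle;
oracle[jss::account] = "rExampleOracleOwner";  // hypothetical owner account
oracle[jss::oracle_sequence] = 1;              // identifies the owner's Oracle object
params[jss::oracles] = Json::arrayValue;
params[jss::oracles].append(oracle);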

Context of Change

Notes for reviewers:

Type of Change

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • Refactor (non-breaking change that only restructures code)
  • Tests (you added tests for code that already exists, or your new feature included in this PR)
  • Documentation update
  • Chore (no impact to binary, e.g. .gitignore, formatting, dropping support for older tooling)
  • Release

API Impact

  • Public API: New feature (new methods and/or new fields)
  • Public API: Breaking change (in general, breaking changes should only impact the next api_version)
  • libxrpl change (any change that may affect libxrpl or dependents of libxrpl)
  • Peer protocol change (must be backward compatible or bump the peer protocol version)

Test Plan

Added tests for the new features:

  • Oracle
  • GetAggregatePrice

@gregtatcam gregtatcam requested a review from mvadari October 26, 2023 22:35
@mvadari (Collaborator) commented Nov 13, 2023

I get this error when I try to compile locally (Mac M1):

src/ripple/rpc/handlers/GetAggregatePrice.cpp:72:41: error: member access into incomplete type 'LedgerMaster'
                    context.ledgerMaster.getLedgerBySeq(prevSeq))
                                        ^
src/ripple/app/main/Application.h:79:7: note: forward declaration of 'ripple::LedgerMaster'
class LedgerMaster;

@gregtatcam (Collaborator Author):
I get this error when I try to compile locally (Mac M1):

src/ripple/rpc/handlers/GetAggregatePrice.cpp:72:41: error: member access into incomplete type 'LedgerMaster'
                    context.ledgerMaster.getLedgerBySeq(prevSeq))
                                        ^
src/ripple/app/main/Application.h:79:7: note: forward declaration of 'ripple::LedgerMaster'
class LedgerMaster;

Thanks. I built with unity=ON; I should have built with unity=OFF as well. This file needs #include <ripple/app/ledger/LedgerMaster.h>. Will update.
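For reference, the fix described above is simply the missing include at the top of the handler (a sketch of the stated change, not the exact final diff):

// src/ripple/rpc/handlers/GetAggregatePrice.cpp
#include <ripple/app/ledger/LedgerMaster.h>  // completes the LedgerMaster type so
                                             // context.ledgerMaster.getLedgerBySeq() compiles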

auto const sleSetter =
ctx.view.read(keylet::account(ctx.tx.getAccountID(sfAccount)));
if (!sleSetter)
return {terNO_ACCOUNT};
Collaborator:
Suggested change:
- return {terNO_ACCOUNT};
+ return terNO_ACCOUNT;

Comment on lines 75 to 87
// lastUpdateTime must be within 30 seconds of the last closed ledger
using namespace std::chrono;
std::size_t const closeTime =
duration_cast<seconds>(ctx.view.info().closeTime.time_since_epoch())
.count();
std::size_t const lastUpdateTime = ctx.tx[sfLastUpdateTime];
if (lastUpdateTime < closeTime ||
lastUpdateTime > (closeTime + maxLastUpdateTimeDelta))
return tecINVALID_UPDATE_TIME;
Collaborator:
What if the ledger slows down and there are a couple of minutes between ledgers?

Collaborator Author:

Could be. What is a reasonable range to use for the validation? 30 seconds is roughly six times the expected 5-second close interval.

Collaborator:

A thought: is this field necessary? Seems like the same info is provided by PreviousTxnLgrSeq

Collaborator Author:

How is PreviousTxnLgrSeq providing the same info as LastUpdateTime?

Collaborator:

It provides the ledger that the object was last updated in - my understanding is that that's basically providing the same information. But perhaps I'm misunderstanding the objective of that field.

Collaborator Author:

Well, it's the ledger sequence, not the time. How do we get the time out of it: sequence * average close time? That gives an estimate, but not the actual time. LastUpdateTime provides a time accurate to the second.

Collaborator:

No, I'd look up that specific ledger's close time, which will be accurate within 10 seconds (possibly more accurate than LastUpdateTime, if a few ledgers go by before it's accepted).

Collaborator:

@mvadari Additionally, the PriceOracle Set transaction might not get included in the next ledger due to problems in consensus, network health, etc. We will need to account for arbitrary delays in the publication of the oracle data.

Collaborator Author:

It's an indirection, and 10 seconds might not be good enough.
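One way to absorb slow or skipped ledgers, along the lines of this discussion, is to validate LastUpdateTime against a window on both sides of the last closed ledger's close time rather than only ahead of it. A sketch under that assumption, reusing the names from the snippet above:

// Accept LastUpdateTime within maxLastUpdateTimeDelta seconds on either
// side of the last closed ledger's close time.
using namespace std::chrono;
std::size_t const closeTime =
    duration_cast<seconds>(ctx.view.info().closeTime.time_since_epoch())
        .count();
std::size_t const lastUpdateTime = ctx.tx[sfLastUpdateTime];
if (lastUpdateTime + maxLastUpdateTimeDelta < closeTime ||
    lastUpdateTime > closeTime + maxLastUpdateTimeDelta)
    return tecINVALID_UPDATE_TIME;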

src/ripple/protocol/TER.h (outdated; thread resolved)
@gregtatcam gregtatcam requested review from ckeshava and thejohnfreeman and removed request for ckeshava November 15, 2023 18:23
@thejohnfreeman (Collaborator):
I find the description of the OracleID confusing. The spec says this:

We compute the PriceOracle object ID, a.k.a., OracleID, as the SHA-512Half of the following values, concatenated in order:

  • The Oracle space key (0x52)
  • The Owner Account ID
  • The Oracle Sequence. This field must be passed to the transactions and it describes a unique Price Oracle sequence for the given account.
  • OracleID is not mentioned in the description of the fields of PriceOracle, but it is included in the JSON of an example PriceOracle. Is it included in the ledger object or not?
  • It is implied but not explicitly stated that the OracleID is used as the SHAMap key for a PriceOracle object. Is it? How are these objects supposed to be laid out in the ledger? The aggregation transaction suggests that it is possible to iterate PriceOracle objects for a given account. How?
  • Oracle Sequence is not mentioned in the fields of PriceOracle or shown in the example. Is it possible to recover the OracleID for a PriceOracle using only the fields contained within a PriceOracle? If so, how?
  • "Sequence" implies to me that it is a number that increments by one with each change, like a Ledger Sequence or an Account Sequence, but OracleID implies a constant identifier. Which is it? Are there any rules on the sequence number?

In the spec, the list following "The transaction fails if" includes:

  • The URI field length exceeds 64 bytes.
  • The Provider field length exceeds 64 bytes.

But the section describing the PriceOracle on-ledger object fields says:

  • Provider identifies an Oracle Provider. It can be URI or any data, for instance chainlink. It is a string of up to 256 ASCII hex encoded characters (0x20-0x7E).
  • URI is an optional URI field to reference price data off-chain. It is limited to 256 bytes.

Which is correct? Can we fix it in the spec?

I find this explanation unclear:

If an object with the OracleID Object ID already exists then the new token pairs are added to the Oracle instance PriceDataSeries array. Note that the order of the token pairs in the PriceDataSeries array is not important since the token pair uniquely identifies location in the PriceDataSeries array of the PriceOracle object. Also note that not every token pair price has to be updated. I.e., even though the PriceOracle may define ten token pairs, OracleSet transaction may contain only one token pair price update. In this case the missing token pair will not include AssetPrice and Scale fields. PreviousTxnID can be used to find the last updated Price Data for this token pair.

"If an object with the ID exists, then the new token pairs are added..." I'm assuming "new token pairs" are ones that do not appear in the PriceOracle. If so, then why no mention of "old token pairs"? They should be updating what appears in the PriceOracle, correct?

"In this case the missing token pair..." What missing token pair? The token pairs that are in the object, but not the transaction? If yes, then what is meant by "[they] will not include AssetPrice and Scale fields"? If they are missing, won't they "not include" every field?

I feel like there is an implied behavior for this transaction that is not well described by the existing language. Here is how I would describe that behavior. Is it correct?

An OracleSet transaction uniquely identifies a PriceOracle object with its Account and OracleSequence fields. If such an object does not yet exist in the ledger, it is created. Otherwise, the existing object is updated. The Provider, URI, and AssetClass fields are copied directly from the transaction, if present. Provider and AssetClass must be included in the transaction if the object is being created.

The PriceDataSeries of the transaction is copied to a newly created PriceOracle object, or updates an existing object, like so:

  • PriceData objects for (BaseAsset, QuoteAsset) token pairs that appear in the transaction but not the object are copied to the object.
  • PriceData objects for token pairs that appear in both the transaction and the object are overwritten in the object.
  • PriceData objects for token pairs that appear only in the object are left unchanged.

The order of token pairs in the transaction is not important because the token pair uniquely identifies the location of the PriceData object in the PriceDataSeries array of the PriceOracle object.

LastUpdateTime, PreviousTxnID, and PreviousTxnLgrSeq are set in the same manner as for an AccountSet transaction.

The owner reserve of the account is updated according to the difference in the size of the PriceDataSeries before and after the transaction is applied: 0 for missing, 1 for 1 - 5 objects, 2 for 6 - 10 objects.
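A minimal sketch of the reserve rule described above, assuming the 0/1/2 owner-count charge is what the implementation applies (the helper name is made up for illustration):

// Owner-reserve units implied by the PriceDataSeries size:
// 0 with no token pairs, 1 for 1-5 pairs, 2 for 6-10 pairs.
std::uint32_t
oracleOwnerCount(std::size_t tokenPairs)
{
    if (tokenPairs == 0)
        return 0;
    return tokenPairs <= 5 ? 1 : 2;
}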

AssetClass is a required field on a PriceOracle object and an optional field in an OracleSet transaction, just like Provider, but its field description does not say that it must be included in the transaction when creating a new instance of PriceOracle, unlike Provider. It must be included in the transaction, correct?

A side note on formatting. I think the presentation in the section on PriceOracle fields is best: one table with all the fields, followed by a bulleted list of field descriptions. Not like the transaction sections with separate tables (repeating the same header row) for each field. I think the example should go directly after or before the table, not at the end of the section.

@gregtatcam (Collaborator Author):
@thejohnfreeman thanks for the feedback on the specs. I copied your comments and responded on XRPLF/XRPL-Standards#138.

src/ripple/app/tx/impl/SetOracle.cpp (thread resolved)
@@ -1207,6 +1207,7 @@ class RPCParser
{"fetch_info", &RPCParser::parseFetchInfo, 0, 1},
{"gateway_balances", &RPCParser::parseGatewayBalances, 1, -1},
{"get_counts", &RPCParser::parseGetCounts, 0, 1},
{"get_aggregate_price", &RPCParser::parseAsIs, 3, 5},
Collaborator:

regarding the naming of the function, can we keep it as &RPCParser::parseGetAggPrice or something like that? I feel this name is more readable.

if there is some other reason for using parseAsIs, please let me know.

Collaborator Author:

We just need to get the RPC parameters. There is no need to have any kind of transformation.

Collaborator:

I believe the parse function is only needed when using the command-line versions of RPCs.

Collaborator Author:

I'm wondering now if it makes sense to have get_aggregate_price as a command-line option. There are too many parameters that have to be provided.

Collaborator Author:

I'll delete it.

Comment on lines 54 to 61
auto invalidLength = [&](auto const& sField, std::size_t length) {
return ctx.tx.isFieldPresent(sField) &&
ctx.tx[sField].length() > length;
};
Collaborator:

Are the fields allowed to be empty strings? Should there be an error in that case?

Collaborator Author:

The empty string doesn't make much sense, but these fields are not validated in any way. Say, how is 'a' any more valid than '' in URI? It's up to the oracle's user to interpret; '' might carry some meaning. I'll change it if you feel strongly about this.

Collaborator Author:

I added a check for zero length.
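For illustration, the length lambda from the snippet above, extended with that zero-length check, might look like this (a sketch, not the exact committed code):

// Reject a field that is present but empty, or longer than the allowed maximum.
auto invalidLength = [&](auto const& sField, std::size_t maxLength) {
    return ctx.tx.isFieldPresent(sField) &&
        (ctx.tx[sField].length() == 0 || ctx.tx[sField].length() > maxLength);
};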

for (auto const& entry : ctx.tx.getFieldArray(sfPriceDataSeries))
{
if (!entry.isFieldPresent(sfAssetPrice))
return temMALFORMED;
Collaborator:

Why doesn't this check happen in preflight? Ditto with the other tem checks in this function

Collaborator Author:

I wanted to reduce the number of redundant iterations. The other tem checks require the ledger access, which is usually done in preclaim.

Collaborator:

Wouldn't doing the check in preflight be worth the extra set of iterations, since preflight runs on the submitting node instead of waiting for the transaction to be broadcast?
Also, shouldn't any check requiring ledger access return a tec code instead of a tem code?

Collaborator Author:

I can move some checks to preflight: checking that AssetPrice is included and that there are no duplicate entries.
I wouldn't need to repeat those in preclaim for create, but for update I still need to iterate over the submitted array to see whether token pairs are going to be added, so I can check the resulting array size and the reserve. The double iteration can't be avoided for update. In reality it's not a big deal, since the submitted array size is at most ten. Would you rather have it handled this way? I'd have to iterate in doApply as well, but again the array is so small that I don't think it matters. As for tec and tem codes, I don't think it's a hard rule that tem can't appear in preclaim; for instance, CreateOffer, CancelOffer, and NFTokenAcceptOffer return tem in preclaim.
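For example, the preflight-only portion being discussed (AssetPrice presence plus duplicate token pairs) could look roughly like this; tokenPairHash is the helper that appears later in this PR, and its exact signature and return type are assumed here:

// Preflight pass over the submitted series: every entry must carry
// AssetPrice, and no token pair may appear twice.
std::set<uint256> seen;
for (auto const& entry : ctx.tx.getFieldArray(sfPriceDataSeries))
{
    if (!entry.isFieldPresent(sfAssetPrice))
        return temMALFORMED;
    if (!seen.insert(tokenPairHash(entry)).second)
        return temMALFORMED;  // duplicate BaseAsset/QuoteAsset pair
}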

src/ripple/rpc/handlers/GetAggregatePrice.cpp (thread resolved)
Env env(*this);
env.fund(
env.current()->fees().accountReserve(1) +
env.current()->fees().base * 2,
Collaborator:

Apart from the Account Reserve, why is the second component of the fees necessary? I don't understand why we need that.

Collaborator Author:

Just factoring in the transaction fee. There is a create and a set, so the account should have enough fee for two transactions.

Collaborator:

ok, that makes sense 👍
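For readers following along, the funding under discussion amounts to something like this in the test (a restatement of the snippet above; the test account name is made up):

// Fund the oracle owner with the reserve for one owned object plus
// fees for two transactions: the create and the subsequent update.
Account const owner{"owner"};
Env env{*this};
env.fund(
    env.current()->fees().accountReserve(1) + env.current()->fees().base * 2,
    owner);
env.close();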

src/ripple/app/tx/impl/SetOracle.cpp (thread resolved)
duration_cast<seconds>(ctx.view.info().closeTime.time_since_epoch())
.count();
std::size_t const lastUpdateTime = ctx.tx[sfLastUpdateTime];
if (lastUpdateTime <= epoch_offset.count())
Collaborator:

I believe this condition precludes any pre-1 Jan, 2000 date from being used in the PriceDataSeries. I couldn't test this with the unit tests because the lastUpdateTime is automatically set to current date/time.

Is my understanding correct? This would make it difficult to work with historical data (like old stock market prices).

Collaborator Author:

Yes, and there is also a validation in place that is not going to allow setting a timestamp older than the last closed ledger. If someone is interested in prices predating 1/1/2000, they need to look at other databases rather than the blockchain's Price Oracle. The Price Oracle reflects the current market price, and that's what dApps are going to use it for.

Collaborator:

LastUpdateTime also must be within 30 seconds of the last ledger close time, so before 1/1/2000 is a moot issue.

Collaborator:

I agree; I was thinking about the limitations of the PriceOracle model.

src/ripple/protocol/impl/SField.cpp (thread resolved)
}

TER
DeleteOracle::doApply()
@mvadari (Collaborator) commented Dec 7, 2023:

It looks like AccountDelete doesn't handle oracles - are they a deletion blocker or should they be deleted with the account? IMO they probably should be deleted with the account.

Collaborator:

nice observation!

Collaborator Author:

It should be deleted. Thanks for pointing it out. Will update.

@@ -1207,6 +1207,7 @@ class RPCParser
{"fetch_info", &RPCParser::parseFetchInfo, 0, 1},
{"gateway_balances", &RPCParser::parseGatewayBalances, 1, -1},
{"get_counts", &RPCParser::parseGetCounts, 0, 1},
{"get_aggregate_price", &RPCParser::parseAsIs, 3, 5},
Collaborator:

This RPC should be added to the help message in Main.cpp

Collaborator Author:

We are not going to offer this command in the CLI. Will delete.

Comment on lines 621 to 643
try
{
uNodeIndex = beast::zero;
auto const& oracle = context.params[jss::oracle];
auto const sequence =
oracle[jss::oracle_sequence].isConvertibleTo(
Json::ValueType::uintValue)
? std::make_optional(
oracle[jss::oracle_sequence].asUInt())
: std::nullopt;
auto const account =
parseBase58<AccountID>(oracle[jss::account].asString());
if (!account || account->isZero())
jvResult[jss::error] = "malformedAddress";
else if (!sequence)
jvResult[jss::error] = "malformedSequence";
else
uNodeIndex = keylet::oracle(*account, *sequence).key;
}
catch (std::runtime_error const&)
{
jvResult[jss::error] = "malformedRequest";
}
Collaborator:

Why is this in a big try block? Why not handle each specific issue individually?
Or is this more of a just-in-case thing?

Collaborator Author:

I don't think this try/catch is needed. Thanks for pointing it out. I'll delete.
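A sketch of the same ledger_entry lookup without the try/catch, in the spirit of this thread (error strings and parameter names mirror the original snippet; this is not the final committed code):

// Build the Oracle keylet directly from the request parameters,
// reporting malformed input without a surrounding try/catch.
uNodeIndex = beast::zero;
auto const& oracle = context.params[jss::oracle];
auto const account =
    parseBase58<AccountID>(oracle[jss::account].asString());
if (!account || account->isZero())
    jvResult[jss::error] = "malformedAddress";
else if (!oracle[jss::oracle_sequence].isConvertibleTo(
             Json::ValueType::uintValue))
    jvResult[jss::error] = "malformedSequence";
else
    uNodeIndex =
        keylet::oracle(*account, oracle[jss::oracle_sequence].asUInt()).key;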

Comment on lines 113 to 141
if (ctx.tx.isFieldPresent(sfProvider) ||
ctx.tx.isFieldPresent(sfAssetClass))
return temMALFORMED;
Collaborator:

I'd prefer that this check makes sure that if these fields exist then they match what's already in the object, because otherwise it becomes more difficult to use - you have to include these fields the first time you update the oracle (when you create the object), but you have to remove them for every subsequent update.
Otherwise, I'd rather have one OracleCreate transaction that sets all these fields, and a separate OracleUpdate transaction that updates the prices.

@gregtatcam (Collaborator Author) commented Dec 7, 2023:

Will update per first suggestion.
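Illustrating that first suggestion: on update the fields could be accepted when present as long as they match the stored object. A sketch only; the comparison and the error code are assumptions, not the committed code:

// On update, Provider/AssetClass may be supplied, but must equal the
// values already stored on the Oracle object.
auto fieldMismatch = [&](auto const& field) {
    return ctx.tx.isFieldPresent(field) &&
        ctx.tx[field] != sle->getFieldVL(field);
};
if (fieldMismatch(sfProvider) || fieldMismatch(sfAssetClass))
    return temMALFORMED;  // error code chosen here for illustration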

Comment on lines 110 to 111
if (ctx.tx[sfAccount] != sle->getAccountID(sfOwner))
return tecNO_PERMISSION;
Collaborator:

Isn't this already a given, since the keylet is owner + document ID?

Collaborator:

The keylet is obtained from ctx.tx.getAccountID(sfAccount). It doesn't have to be identical to sle->getAccountID(sfOwner), right?

I can try to update your PriceOracle object with an OracleSet transaction. In that case, your account would be sfOwner and I would be the sfAccount field.

Collaborator Author:

@mvadari is right. If we can get sle for the Account then this account is the Owner. This check is redundant.

Comment on lines 117 to 153
for (auto const& entry : sle->getFieldArray(sfPriceDataSeries))
{
auto const hash = tokenPairHash(entry);
if (!pairs.contains(hash))
pairs.emplace(hash);
}
@mvadari (Collaborator) commented Dec 7, 2023:

Wouldn't it be simpler to just do this test in doApply as you're making the modifications, instead of doing it again in preclaim?

Collaborator Author:

I want to check the array size, which could grow during the update, and consequently the reserve.

}
else
{
// add a token pair with the price
@mvadari (Collaborator) commented Dec 7, 2023:

How do I remove a token pair? Seems like you can only add them. One example usecase for this: FTX goes under and you no longer want a pair with FTT.

Collaborator Author:

Decided not to have this option. If you delete and then create again then is it really the same pair or different? How to treat the historical data in this case?

Collaborator Author:

Decided to add delete option after discussing with PM. A token pair is going to be deleted on update if it doesn't include the AssetPrice.
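A sketch of the update rule just described; the pairs container and the tokenPairHash helper are assumed from the surrounding code in this PR:

// Merge the submitted series into the stored one: an entry that includes
// AssetPrice adds or overwrites its token pair; an entry without it
// deletes that pair.
for (auto const& entry : ctx_.tx.getFieldArray(sfPriceDataSeries))
{
    auto const hash = tokenPairHash(entry);
    if (entry.isFieldPresent(sfAssetPrice))
        pairs[hash] = entry;  // add or update the price for this pair
    else
        pairs.erase(hash);    // drop the pair from the oracle
}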

sle->setFieldArray(sfPriceDataSeries, updatedSeries);
if (ctx_.tx.isFieldPresent(sfURI))
sle->setFieldVL(sfURI, ctx_.tx[sfURI]);
// LastUpdateTime is Unix time, store internally as Ripple Epoch
Collaborator:

Why convert? Seems more confusing to have it be different in the object and the transaction.

Collaborator Author:

I thought you suggested storing it internally as Ripple Epoch, or did I misunderstand? See: XRPLF/XRPL-Standards#138 (review)

Collaborator:

I meant it should be Ripple Epoch in both.

Collaborator Author:

Oracle providers don't deal with the Ripple Epoch, and Unix Time makes a lot of sense for them. That's why it was changed.

Collaborator:

My thought was that it's easy enough to convert on the tooling side.

Collaborator:

I don't think there's any other transaction that takes Unix Time as a parameter.

Collaborator Author:

Are there transactions that take epoch time?

Collaborator:

Yes, EscrowCreate and PaymentChannelCreate.

Collaborator Author:

Let me check with Jazzi if she is ok with changing to Ripple Epoch.

Collaborator Author:

Though those transactions have more to do with the ledger itself, whereas the price oracle carries external data. So I'm not sure other transactions are a precedent that has to be followed.
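For reference, the conversion being debated is a fixed offset between the two epochs. A sketch, assuming epoch_offset is the 946,684,800-second gap between 1970-01-01 (Unix) and 2000-01-01 (Ripple) used in the snippet earlier in this review:

// LastUpdateTime arrives as Unix time (seconds since 1970-01-01).
// Subtracting epoch_offset yields Ripple time (seconds since 2000-01-01);
// adding it converts back for clients.
std::uint32_t const unixTime = ctx_.tx[sfLastUpdateTime];
std::uint32_t const rippleTime = unixTime - epoch_offset.count();
std::uint32_t const backToUnixTime = rippleTime + epoch_offset.count();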

@intelliot intelliot added this to the 2.2.0 (Apr 2024) milestone Feb 5, 2024
@intelliot (Collaborator):
  • Price oracles haven't been partner tested or released on Devnet yet
  • Let's put this in 2.2 (or later)

@gregtatcam (Collaborator Author):
  • Price oracles haven't been partner tested or released on Devnet yet
  • Let's put this in 2.2 (or later)

Is it available on devnet?

@intelliot (Collaborator):
No. I would suggest running it in a new Devnet (similar to what was done with amm-devnet). If that isn't possible, then we can do a custom build with this branch and give it a special version number, like 2.1.0+PriceOracle or the like.

@gregtatcam (Collaborator Author):
No. I would suggest running it in a new Devnet (similar to what was done with amm-devnet). If that isn't possible, then we can do a custom build with this branch and give it a special version number, like 2.1.0+PriceOracle or the like.

Is someone scheduled to work on this? It would be good if the partners could start testing this.

@gregtatcam (Collaborator Author):
@thejohnfreeman could you please review when you get a chance? There is a minor patch to fix trim validation in get_aggregate_price. I also merged with the latest develop and resolved the merge conflicts with AMM. Thanks.

@intelliot (Collaborator):
per the request for a quick re-review (see above), this is awaiting @thejohnfreeman

@manojsdoshi manojsdoshi self-requested a review February 22, 2024 00:39
@q73zhao commented Feb 22, 2024

Performance sign-off

Testing of the Price Oracle feature on the XRPL confirmed good performance and reliability under simulated MainNet conditions.

The get_aggregate_price API efficiently processed queries on the token pairs with the oldest updates among 200 oracle objects, averaging a response time of 6.24 ms under medium payment loads.

Additionally, running OracleSet transactions at around 15 per second, with a concurrent payment load of around 100 TPS, showed no significant ledger validation delays or fee escalation, with an average response time of 5.71 ms.

A more detailed report can be found here.

@intelliot added the "Perf SignedOff" label (RippleX Performance Team has approved) and removed the "Perf Attn Needed" label (attention needed from the RippleX Performance Team) on Feb 22, 2024
 Implement native support for Price Oracles.

 A Price Oracle is used to bring real-world data, such as market prices,
 onto the blockchain, enabling dApps to access and utilize information
 that resides outside the blockchain.

 Add Price Oracle functionality:
 - OracleSet: create or update the Oracle object
 - OracleDelete: delete the Oracle object

 To support this functionality add:
 - New RPC method, `get_aggregate_price`, to calculate the aggregate price for a token pair across the specified oracles
 - `ltOracle` object

 The `ltOracle` object maintains:
 - Oracle Owner's account
 - Oracle's metadata
 - Up to ten token pairs with the scaled price
 - The time the token pairs were last updated

 Add Oracle unit-tests
@intelliot intelliot requested review from sgramkumar and SaxenaKaustubh and removed request for SaxenaKaustubh and manojsdoshi February 26, 2024 01:31
@seelabs seelabs merged commit e718378 into XRPLF:develop Feb 26, 2024
17 checks passed
@seelabs seelabs mentioned this pull request Feb 29, 2024
legleux added a commit to legleux/rippled that referenced this pull request Apr 12, 2024
* Price Oracle (XLS-47d): (XRPLF#4789) (XRPLF#4789)

gregtatcam added a commit to gregtatcam/rippled that referenced this pull request Apr 29, 2024
This reverts commit e718378 in order to add input validation
in get_aggregate_price and make error handling consistent across the Oracle
APIs.
@seelabs seelabs mentioned this pull request Jun 4, 2024
sophiax851 pushed a commit to sophiax851/rippled that referenced this pull request Jun 12, 2024
pkcs8 added a commit to pkcs8/xrpl-dev-portal that referenced this pull request Nov 1, 2024
Updated the description of the `LastUpdateTime` field as per the [PR comments](XRPLF/rippled#4789 (review)). This field accepts UNIX time instead of Ripple time because it must integrate with third-party systems that may not measure time in Ripple time.
Labels: Amendment, API Change, Clio Reviewed, Feature Request, Perf SignedOff, Testable, Will Need Documentation
Status: 🚢 Released in 2.2.0