
Allow sending historic states and forecasts #3597

Merged: kaikreuzer merged 1 commit into openhab:main from feature-historic on Nov 8, 2023

Conversation

J-N-K
Member

@J-N-K J-N-K commented May 9, 2023

Fixes #844
Supersedes #3000

This implements the results of the discussion in #3000.

  • Adds a new type TimeSeries which contains a collection of timestamp/state pairs and a policy (REPLACE, ADD). Timestamps can be anywhere in time (past and future).
  • The REPLACE policy removes all values from persistence that are in the time range of the time-series (first element - last element) before adding the values.
  • The ADD policy just adds all elements to the persistence service.
  • The BaseThingHandler has a new method sendTimeSeries(ChannelUID, TimeSeries) that can be used by bindings to send time series to items (see the sketch after this list).
  • A new TimeSeriesProfile has been introduced; it extends the StateProfile with a new method to pass the time series to the item. Profiles that do not implement TimeSeriesProfile do not pass time series to the item, and a warning is logged.
  • GenericItem has been extended with a new setTimeSeries method. The same logic that is used before a single state is set via setState is applied to each single value in the time series. Time series that contain states that are invalid for the given item are discarded; otherwise listeners are notified and an ItemTimeSeriesUpdatedEvent is issued (similar to the ItemStateUpdatedEvent).
  • Persistence configurations with strategy everyUpdate accept time series updates and persist them when the service supports it (i.e. it implements ModifiablePersistenceService). Values in the future are persisted as well, but they are only persisted; the item state is not updated.
  • A strategy forecast has been added. Configurations with that strategy ONLY accept time series updates and set the item's state to the stored value when the system time reaches a previously sent timestamp. If this strategy is applied to a service that does not implement ModifiablePersistenceService, a warning is logged.
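
For illustration, here is a minimal sketch of the binding-side API described above. It is not taken from this PR's diff: the package locations, the TimeSeries constructor and the add(Instant, State) signature are assumptions based on the description, and the handler, channel and price values are made up.

```java
import java.time.Instant;

import org.openhab.core.library.types.DecimalType;
import org.openhab.core.thing.ChannelUID;
import org.openhab.core.thing.Thing;
import org.openhab.core.thing.ThingStatus;
import org.openhab.core.thing.binding.BaseThingHandler;
import org.openhab.core.types.Command;
import org.openhab.core.types.TimeSeries;

public class PriceForecastHandler extends BaseThingHandler {

    public PriceForecastHandler(Thing thing) {
        super(thing);
    }

    @Override
    public void initialize() {
        updateStatus(ThingStatus.ONLINE);
    }

    @Override
    public void handleCommand(ChannelUID channelUID, Command command) {
        // the forecast channel is read-only, nothing to handle here
    }

    // Builds an hourly price forecast and hands it over to the framework,
    // which forwards it through the profile to the linked items.
    private void publishForecast(ChannelUID channelUID) {
        TimeSeries series = new TimeSeries(TimeSeries.Policy.REPLACE);
        Instant base = Instant.parse("2023-05-20T12:00:00Z");
        series.add(base, new DecimalType(-0.00067));
        series.add(base.plusSeconds(3600), new DecimalType(0.0));
        series.add(base.plusSeconds(7200), new DecimalType(0.00983));
        sendTimeSeries(channelUID, series); // the new BaseThingHandler method
    }
}
```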

@J-N-K J-N-K added the "enhancement" and "work in progress" labels May 9, 2023
@openhab-bot
Collaborator

This pull request has been mentioned on openHAB Community. There might be relevant details there:

https://community.openhab.org/t/bringing-electricity-information-from-eloverblik-dk-and-energidataservice-dk-into-openhab/143470/16

@J-N-K J-N-K force-pushed the feature-historic branch from eb2f374 to ce2223a on May 13, 2023 20:25
@J-N-K
Member Author

J-N-K commented May 14, 2023

@openhab/core-maintainers and especially @kaikreuzer (since we discussed this issue in the other PR), I have some questions:

  1. Persistence: Which strategy should persist time series? I currently implemented "everyUpdate", but I'm not sure if it would be better to create a new one. One argument for doing so is that it can be used to identify those items that need monitoring for "applying" forecasted values.
  2. Persistence: We have filters (e.g. time-based, value-based). IMO we should NOT apply these filters; it just makes no sense, e.g., to rate-limit time series or to discard values below a certain relative threshold. It's also difficult to decide what should be used for comparison: the previous value in the series or the current value of the item?
  3. Do we need support for removing all forecasts from the database from a thing handler? For something like energy prices this is probably not necessary because they are fixed once published, but it could be useful for weather forecasts (e.g. outside temperature), which are subject to change.

@J-N-K J-N-K force-pushed the feature-historic branch 2 times, most recently from e347f51 to 9633490 on May 17, 2023 17:23
@kaikreuzer
Member

Hey @J-N-K, thanks for picking up this topic.
Regarding your questions:

  1. I would introduce a new strategy ("timeseries"?). This allows more flexibility in the configuration, as some databases (like the default rrd4j) cannot handle timeseries data anyway, and we would be able to log a warning in case a user nonetheless tries to configure persistence this way. The user would also be able to use a dedicated database only for timeseries while logging "normal" events to other destinations.
  2. I agree, filters sound difficult here. This would also speak for a new strategy for which we could document that filters are simply not considered.
  3. I would say no. We already allow overwriting values (if a more recent forecast has a different value) and it might be nice if overwriting a value with NULL would simply delete it; this way, a handler would be able to selectively delete single entries, which should be enough.

@J-N-K
Member Author

J-N-K commented May 18, 2023

  1. Strategy: I implemented forecast. This persists ONLY TimeSeries updates and sets the item's state accordingly when the time has come. The everyUpdate strategy also persists TimeSeries updates, but does not set values - this can be used to retrospectively add values that arrived late.

  2. Overwriting is difficult. None of our persistence services supports that (and at least for InfluxDB it is very difficult to implement). Much easier (and with core code only) would be adding a policy ("replace", "add") to the time-series. If it is "add", the values are just added; if it is "replace", all values (starting from the first timestamp, ending with the last timestamp) are removed before the new values are persisted.

@kaikreuzer
Member

  1. Not sure if I like the split of TimeSeries events into different strategies. Events with timestamps in the past will anyhow never trigger a state change. And a database that is capable of handling TimeSeries has much higher requirements than one that currently processes everyUpdate. Also, what will you do if you receive a TimeSeries event that has entries from the past AND the future?
  2. "then the values are just added": What does this mean when there is already a record for the exact same timestamp? Wouldn't it simply replace it? My assumption is that we will almost always have the exact same timestamps as before when receiving updated forecast values - but maybe some records might be missing, so replace could potentially delete entries that should not be deleted.

@J-N-K
Member Author

J-N-K commented May 18, 2023

My idea was to make it as comfortable as possible for users:

  • If you have an energy meter and use everyUpdate, the binding could still send "historic states" using the time series. This will fail (silently) if the persistence service does not implement ModifiablePersistenceService, but in that case you just don't have any benefit. However, it'll work OOTB when it is possible.
  • The forecast OTOH requires ModifiablePersistenceService and will log a warning if this is not available.

In both cases all values will be persisted (if possible), without checking if they are in the past or the future.

That probably depends on the persistence service. Regarding replacing single values and keeping others, we should have a look at the possible use-cases:

  • historical data (like the energy meter): I can't imagine why someone would like to use REPLACE here. The use-case is to add missing-data.
  • forecasts that may change (like weather): IMO it's very unlikely that we would send new forecasts for the same time range and want to keep some of the old values. If we get a new forecast, we want to use that one, not a mix of the old and the new one, so it's fine to remove the old values first.
  • forecasts that do not change (like prices): Usually we'll have non-overlapping intervals. If we send the same values again, no harm is done, because they are replaced with the same values.

What would, in your opinion, be a case where we have the mixed situation and need to replace single values while keeping others?

@J-N-K J-N-K force-pushed the feature-historic branch from bb63574 to a17f6e7 on May 18, 2023 15:31
@kaikreuzer
Member

kaikreuzer commented May 19, 2023

  1. If it silently fails for past values, it could also silently fail for future values and we could handle all TimeSeries through everyUpdate only. Where do you see the benefit of introducing another strategy here?
  2. a) You might have a binding that does "postprocessing" of historical data - e.g. removing outliers or softening the curves. I could hence imagine that also past values are potentially changed.
    b) We might only get a new more precise forecast for tomorrow and want to update those values, while keeping the older forecasts for the rest of the week.
    c) This is the behavior I would expect in all cases - updates should be idempotent, i.e. you should be able to do an update with the same value for the same timestamp and if it is the same, all is fine. If it is different, it would simply be updated.

It might be that I haven't yet fully understood the technical difficulties of storing values for timestamps for which there is already a persisted value. My hope was that it should be possible with every db to simply update existing values, which then essentially results in an "update or add if not existent" functionality.

@J-N-K
Member Author

J-N-K commented May 19, 2023

  1. (additional strategy for forecasts)

If we use everyUpdate together with setting values from persistence, it is a bit more difficult. It also changes the semantics of everyUpdate. That's why I would prefer to have:

  • everyUpdate: Store all updates (like before) and time series (if possible, i.e. the service is modifiable), do not set values from persistence. Discard time series silently if non-modifiable.
  • forecast: Store only time series and set values when the time is equal to the timestamp; log an error when set on a non-modifiable service (because modifiable is required for storing future values).

So the real benefit of forecast is that it checks that the service is capable of the new features, and it is clear that "normal" updates are not persisted.
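
To make the distinction concrete, a hypothetical persistence configuration along these lines could route the two strategies to different items (the file name and item names are made up):

```
// influxdb.persist (hypothetical example)
Strategies {
    default = everyChange
}

Items {
    // only time series are stored, and the item state is set once a stored timestamp is reached;
    // requires a ModifiablePersistenceService
    SpotPrice : strategy = forecast

    // stores every update and, if the service is modifiable, past-dated time series as well
    EnergyMeter_Total : strategy = everyUpdate
}
```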

  2. (replacing)

a) Currently difficult, because there is no way for a binding to get access to historical data (Edit: Thinking about it, this is not possible at all. Persistence is done on an item and bindings do not have any knowledge about items. Reading persisted data for a channel from a binding is not possible, because several items could be linked to the same channel). But I don't see an issue here. Let's say we have (timestamp/value):

100/5 - 105/6 - 110/10 - 115/1000 - 120/7 - 125/4 - 130/9 - 135/2

The value 115/1000 is clearly an outlier. With the current implementation storing a time-series with REPLACE policy and a value set of

110/10 - 120/7 - 125/4

will result in

100/5 - 105/6 - 110/10 - 120/7 - 125/4 - 130/9 - 135/2

which is probably the expected result. This works, because the first value in the "update" has a timestamp 110, the last one has 125. Before persisting the values, all values with timestamps in this range are removed (because the policy is REPLACE).

b) See above: If the new forecast only includes values for tomorrow, the day after tomorrow stays the same, because REPLACE only affects the time-range of the enclosed time-series.

The difficulty is that how and whether values can be updated/replaced depends heavily on the service itself. For MySQL, INSERTing a new value with the same timestamp (which is the index) will just fail; you have to UPDATE the old record. OTOH, UPDATEing will fail if there is no existing record. The service would need to check for each individual value whether it exists and then use the appropriate method. This needs to be implemented in each service.

Replacing by DELETEing the given time-range (using FilterCriteria) and INSERTing the new values is already supported; it does not require modifications to the persistence services and is universal.
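
For illustration, a rough sketch of that REPLACE path in manager-side code. This is hypothetical and not taken from the PR: the TimeSeries accessors (getPolicy, getBegin, getEnd, getStates and the Entry record) are assumptions, while FilterCriteria and ModifiablePersistenceService.remove/store are existing core APIs.

```java
import java.time.ZoneId;
import java.time.ZonedDateTime;

import org.openhab.core.items.Item;
import org.openhab.core.persistence.FilterCriteria;
import org.openhab.core.persistence.ModifiablePersistenceService;
import org.openhab.core.types.TimeSeries;

class TimeSeriesPersister {

    // REPLACE: delete everything between the first and last timestamp of the series,
    // then store each entry; ADD: just store each entry.
    void persist(ModifiablePersistenceService service, Item item, TimeSeries series) {
        ZoneId zone = ZoneId.systemDefault();
        if (series.getPolicy() == TimeSeries.Policy.REPLACE) {
            FilterCriteria filter = new FilterCriteria();
            filter.setItemName(item.getName());
            filter.setBeginDate(ZonedDateTime.ofInstant(series.getBegin(), zone));
            filter.setEndDate(ZonedDateTime.ofInstant(series.getEnd(), zone));
            service.remove(filter);
        }
        series.getStates().forEach(entry -> service.store(item,
                ZonedDateTime.ofInstant(entry.timestamp(), zone), entry.state()));
    }
}
```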

@J-N-K J-N-K marked this pull request as ready for review May 19, 2023 13:23
@J-N-K J-N-K requested a review from a team as a code owner May 19, 2023 13:23
@J-N-K J-N-K removed the "work in progress" label May 19, 2023
@J-N-K
Member Author

J-N-K commented May 19, 2023

@jlaur Since you are working on a binding that sends forecasts, do you think this is working for you?

@J-N-K
Member Author

J-N-K commented May 19, 2023

@openhab/add-ons-maintainers I would like to hear your thoughts on this.

@jlaur
Contributor

jlaur commented May 20, 2023

Since you are working on a binding that sends forecasts, do you think this is working for you?

I started getting back to this topic and also had an initial look at some of the code. If I understand it correctly, the scope and objective (for now) is to provide a way for bindings to publish historic/future states in order to persist them, and nothing else - correct?

For openhab/openhab-addons#14376 this would mean that it will be possible to persist future prices. This would allow users to see graphs in the future. Again, if I understand it correctly, only persistence will gain "access" to them, so until something like #3478 is implemented, this is the scope. So for now they cannot be used for calculations, for example - in rules or anywhere else, right?

I'm not making any points, just trying to establish the scope for my own understanding. 🙂

I would need to add the time series update to the binding, but I should keep the hourly job for updating the current price, so this is still pushed to the event bus. With persistence strategy everyUpdate, items will be persisted in advance by time series, but also each hour as the prices become current? Example - now is 20.05.2023 13:15. Time series update of spot price:

  • 20.05.2023 14:00:00: -0.00067
  • 20.05.2023 15:00:00: 0.0
  • 20.05.2023 16:00:00: 0.00983

The hourly job is then triggered at, for example, 14:00:00.030 and will update:

  • 20.05.2023 14:00:00.030: -0.00067

So I will now have this persisted:

  • 20.05.2023 14:00:00.000: -0.00067
  • 20.05.2023 14:00:00.030: -0.00067

(whereas today the end result would be only 20.05.2023 14:00:00.030: -0.00067)

But this can be prevented by using the forecast policy, correct?

A strategy forecast has been added. Configurations with that strategy ONLY accept time series updates and set the item's state to the stored value when the system time reaches a previously sent timestamp.

Can you elaborate a bit here? I don't fully understand the criteria for accepting to persist, i.e. the "when the system time reaches a previously sent timestamp" part.

Adds a new type TimeSeries which contains a collection of timestamp/state pairs and a policy (REPLACE, ADD). Timestamps can be anywhere in time (past and future).

I'm not sure I understand the purpose of the policy. Why would we need anything but REPLACE? If I update the state normally (not as time series), the latest value will always replace the previous value. I would intuitively expect the same for states having a timestamp, i.e. time series.

The REPLACE policy removes all values from persistence that are in the time range of the time-series (first element - last element) before adding the values

I'm not sure, but perhaps it would be simpler and safer to remove all values >= the first received timestamp? Replacing something "in the middle" could be tricky. For example, weather forecast 1:

  • 21.05.2023 10:00:00 20 °C
  • 21.05.2023 13:00:00 21 °C (i.e. from 10-13 it will stay 20 °C)

Now, weather forecast 2:

  • 21.05.2023 10:00:00 19 °C
  • 21.05.2023 12:00:00 20 °C

With the second forecast, the expected temperature decreased by 1 °C, but the increase was moved forward from 13:00 to 12:00. This is now the full amount of information. So this is wrong:

  • 21.05.2023 10:00:00 19 °C
  • 21.05.2023 12:00:00 20 °C
  • 21.05.2023 13:00:00 21 °C

@J-N-K
Member Author

J-N-K commented May 20, 2023

For openhab/openhab-addons#14376 this would mean that it will be possible to persist future prices. This would allow users to see graphs in the future. Again, if I understand it correctly, only persistence will gain "access" to them, so until something like #3478 is implemented, this is the scope. So for now they cannot be used for calculations, for example - in rules or anywhere else, right?

Rules should be able to use them; you should be able to use something like minimumBetween(ZonedDateTime.now(), ZonedDateTime.now().plusHours(24)) or any other methods provided by PersistenceExtensions. Bindings are agnostic of items, so they'll probably never be able to USE this data, but other add-ons can implement their own persistence extensions.
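
As a sketch of what such a lookup could look like (assuming the Between variants of PersistenceExtensions take an Item plus begin/end ZonedDateTime, and using a made-up item):

```java
import java.time.ZonedDateTime;

import org.openhab.core.items.Item;
import org.openhab.core.persistence.HistoricItem;
import org.openhab.core.persistence.extensions.PersistenceExtensions;

class ForecastQueries {

    // Returns the persisted entry with the lowest value within the next 24 hours,
    // or null if nothing is persisted for that range.
    static HistoricItem cheapestHour(Item spotPriceItem) {
        ZonedDateTime now = ZonedDateTime.now();
        return PersistenceExtensions.minimumBetween(spotPriceItem, now, now.plusHours(24));
    }
}
```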

Using them in charts is also already possible (with the default chart provider), at least if you set begin/end dates. "period" needs an extension, so we know whether "1D" means [now-1d,now] or [now,now+1d]. I'm not sure about the best syntax for that. @lolodomo since you are more familiar with sitemaps: any good idea?

I would need to add the time series update to the binding, but I should keep the hourly job for updating the current price, so this is still pushed to the event bus. With persistence strategy everyUpdate, items will be persisted in advance by time series, but also each hour as the prices become current? Example - now is 20.05.2023 13:15. Time series update of spot price:

No, with everyUpdate they are just persisted, but not further processed. This should only be used for adding missing past data.

Only the forecast policy "restores" values (when the system time is equal to the persisted timestamp). "restore" in this context means "update the item state to the persisted value".
forecast does NOT store any values except TimeSeries, therefore the value is not persisted again, because that would be a simple item state update, not a TimeSeries.

Adds a new type TimeSeries which contains a collection of timestamp/state pairs and a policy (REPLACE, ADD). Timestamps can be anywhere in time (past and future).

I'm not sure I understand the purpose of the policy. Why would we need anything but REPLACE? If I update the state normally (not as time series), the latest value will always replace the previous value. I would intuitively expect the same for states having a timestamp, i.e. time series.

For forecasts always REPLACE would be fine. But when we think of adding "missing data" (e.g. stored in the device, but not transmitted to openHAB because of network issues) replacing might not be desired.

I'm not sure, but perhaps it would be simpler and safer to remove all values >= the first received timestamp? Replacing something "in the middle" could be tricky. For example, weather forecast 1:

  • 21.05.2023 10:00:00 20 °C
  • 21.05.2023 13:00:00 21 °C (i.e. from 10-13 it will stay 20 °C)

Now, weather forecast 2:

  • 21.05.2023 10:00:00 19 °C
  • 21.05.2023 12:00:00 20 °C

With the second forecast, the expected temperature decreased by 1 °C, but the increase was moved forward from 13:00 to 12:00. This is now the full amount of information. So this is wrong:

  • 21.05.2023 10:00:00 19 °C
  • 21.05.2023 12:00:00 20 °C
  • 21.05.2023 13:00:00 21 °C

As @kaikreuzer pointed out above: What if we have a forecast for a week and then receive a "better" forecast for tomorrow. If we replace all future values, we'll add information for tomorrow, but lose the information for the day after tomorrow. IMO the way it is now is more flexible.

@jlaur
Contributor

jlaur commented May 20, 2023

So for now they cannot be used for calculations, for example - in rules or anywhere else, right?

Rules should be able to use them; you should be able to use something like minimumBetween(ZonedDateTime.now(), ZonedDateTime.now().plusHours(24)) or any other methods provided by PersistenceExtensions.

I guess there could be a need to extend PersistenceExtensions to retrieve the time series, since there are currently no methods to get the actual values. But currently I can't think of any use-case, since the energy services should implement the needed calculations on electricity prices. However, if some custom calculation would be needed, this would only be possible if the item values could be retrieved. This is clearly outside the scope of this PR.

Bindings are agnostic of items, so they'll probably never be able to USE this data, but other add-ons can implement their own persistence extension.

👍

I would need to add the time series update to the binding, but I should keep the hourly job for updating the current price, so this is still pushed to the event bus. With persistence strategy everyUpdate, items will be persisted in advance by time series, but also each hour as the prices become current? Example - now is 20.05.2023 13:15. Time series update of spot price:

No, with everyUpdate they are just persisted, but not further processed. This should only be used for adding missing past data.

Only the forecast policy "restores" values (when the system time is equal to the persisted timestamp). "restore" in this context means "update the item state to the persisted value". forecast does NOT store any values except TimeSeries, therefore the value is not persisted again, because that would be a simple item state update, not a TimeSeries.

Okay, so I misunderstood. For the EDS binding I could then completely remove the hourly job updating current prices, since state updates will automatically be pushed to the event bus as they become current?

One question, though: Will this require persistence, or will time series be stored in memory also? If not, then I think I would still update states manually as they become current so that the channels can be used without persistence. There is some redundancy here, but I think it would be needed.

Users would then either not configure any persistence and just get current values provided by the binding, or they would configure persistence with the forecast policy. However, since the binding is not aware of this, in this configuration there would be double updates each hour, one from the binding, and one from the time series state being pushed to the event bus, right?

I'm not sure I understand the purpose of the policy. Why would we need anything but REPLACE? If I update the state normally (not as time series), the latest value will always replace the previous value. I would intuitively expect the same for states having a timestamp, i.e. time series.

For forecasts always REPLACE would be fine. But when we think of adding "missing data" (e.g. stored in the device, but not transmitted to openHAB because of network issues) replacing might not be desired.

OK, so this is to prevent overwrites after being unsynchronized?

With the second forecast, the expected temperature decreased by 1 °C, but the increase was moved forward from 13:00 to 12:00. This is now the full amount of information. So this is wrong:

  • 21.05.2023 10:00:00 19 °C
  • 21.05.2023 12:00:00 20 °C
  • 21.05.2023 13:00:00 21 °C

As @kaikreuzer pointed out above: What if we have a forecast for a week and then receive a "better" forecast for tomorrow. If we replace all future values, we'll add information for tomorrow, but lose the information for the day after tomorrow. IMO the way it is now is more flexible.

Perhaps we could consider a method overload to control this behavior? I guess the desired behavior can vary. @kaikreuzer's scenario sounds like a mix of a 1 day forecast and a week forecast. I'm not into weather services, but from the weather web UI's I can remember, the short-term and long-term forecasts are two different things. So if mixing them, yes, the short-term forecast could overwrite the long-term forecast.

@J-N-K
Member Author

J-N-K commented May 20, 2023

I guess there could be a need to extend PersistenceExtensions to retrieve the time series, since there are currently no methods to get the actual values. But currently I can't think of any use-case, since the energy services should implement the needed calculations on electricity prices. However, if some custom calculation would be needed, this would only be possible if the item values could be retrieved. This is clearly outside the scope of this PR.

There was a PR for that, but it was unfortunately closed: #3466

Okay, so I misunderstood. For the EDS binding I could then completely remove the hourly job updating current prices, since state updates will automatically be pushed to the event bus as they become current?

Exactly.

One question, though: Will this require persistence, or will time series be stored in memory also? If not, then I think I would still update states manually as they become current so that the channels can be used without persistence. There is some redundancy here, but I think it would be needed.

This requires persistence, but I think this can be noted in the documentation. I was already thinking about some sort of "in-memory-persistence" service, that could be used to achieve that.

Users would then either not configure any persistence and just get current values provided by the binding, or they would configure persistence with the forecast policy. However, since the binding is not aware of this, in this configuration there would be double updates each hour, one from the binding, and one from the time series state being pushed to the event bus, right?

For forecasts always REPLACE would be fine. But when we think of adding "missing data" (e.g. stored in the device, but not transmitted to openHAB because of network issues) replacing might not be desired.

OK, so this is to prevent overwrites after being unsynchronized?

Exactly. An example would be a network-attached energy meter. The binding could then send missed data as a TimeSeries. If the persistence service does not support it, the values are discarded and a gap remains in the data (as it is now), but if it does support it, the gap will be filled. This does not require any changes on the user side; it just works if the pre-requisites are fulfilled.

Perhaps we could consider a method overload to control this behavior? I guess the desired behavior can vary. @kaikreuzer's scenario sounds like a mix of a 1 day forecast and a week forecast. I'm not into weather services, but from the weather web UI's I can remember, the short-term and long-term forecasts are two different things. So if mixing them, yes, the short-term forecast could overwrite the long-term forecast.

How do you imagine that to work? The problem is that a binding can never know what was configured by the user. The user could link two channels (the one with daily forecasts and the one with hourly forecasts) to the same item; in that case "replace the future" would be wrong.

@jlaur
Contributor

jlaur commented May 20, 2023

One question, though: Will this require persistence, or will time series be stored in memory also? If not, then I think I would still update states manually as they become current so that the channels can be used without persistence. There is some redundancy here, but I think it would be needed.

This requires persistence, but I think this can be noted in the documentation. I was already thinking about some sort of "in-memory-persistence" service, that could be used to achieve that.

The only issue I see here is that in any case this would require the user to configure persistence in order to use the basic and most important channels in the binding. But it can be easily worked around by just keeping the hourly update job in the binding and still update the state the old-fashioned way for backwards compatibility and "out of the box" functionality without persistence configured. The data is already cached in the binding anyway to reduce network calls and to be used by calculations.

I'm now also thinking about an interesting case for time series, namely the update of electricity taxes and net transmission tariffs, since this data is very static in nature. Here I may push a time series with very few states valid months into the future.

Example:

{
	"total": 2,
	"filters": "{\"ChargeType\":[\"D03\"],\"GLN_Number\":[\"5790000432752\"],\"Note\":[\"Elafgift\"]}",
	"dataset": "DatahubPricelist",
	"records": [
		{
			"ValidFrom": "2023-07-01T00:00:00",
			"ValidTo": null,
			"ChargeTypeCode": "EA-001",
			"Price1": 0.697,
			"Price2": null,
			"Price3": null,
			"Price4": null,
			"Price5": null,
			"Price6": null,
			"Price7": null,
			"Price8": null,
			"Price9": null,
			"Price10": null,
			"Price11": null,
			"Price12": null,
			"Price13": null,
			"Price14": null,
			"Price15": null,
			"Price16": null,
			"Price17": null,
			"Price18": null,
			"Price19": null,
			"Price20": null,
			"Price21": null,
			"Price22": null,
			"Price23": null,
			"Price24": null
		},
		{
			"ValidFrom": "2023-01-01T00:00:00",
			"ValidTo": "2023-07-01T00:00:00",
			"ChargeTypeCode": "EA-001",
			"Price1": 0.008,
			"Price2": null,
			"Price3": null,
			"Price4": null,
			"Price5": null,
			"Price6": null,
			"Price7": null,
			"Price8": null,
			"Price9": null,
			"Price10": null,
			"Price11": null,
			"Price12": null,
			"Price13": null,
			"Price14": null,
			"Price15": null,
			"Price16": null,
			"Price17": null,
			"Price18": null,
			"Price19": null,
			"Price20": null,
			"Price21": null,
			"Price22": null,
			"Price23": null,
			"Price24": null
		}
	]
}

In the binding this will cause the channel to be updated from 0.008 to 0.697 on 2023-06-30T22:00:00Z.

For forecasts always REPLACE would be fine. But when we think of adding "missing data" (e.g. stored in the device, but not transmitted to openHAB because of network issues) replacing might not be desired.

OK, so this is to prevent overwrites after being unsynchronized?

Exactly. An example would be a network-attached energy meter. The binding could then send missed data as a TimeSeries. If the persistence service does not support it, the values are discarded and a gap remains in the data (as it is now), but if it does support it, the gap will be filled. This does not require any changes on the user side; it just works if the pre-requisites are fulfilled.

Just out of curiosity, what would be the harm in using REPLACE in this case?

Perhaps we could consider a method overload to control this behavior? I guess the desired behavior can vary. @kaikreuzer's scenario sounds like a mix of a 1 day forecast and a week forecast. I'm not into weather services, but from the weather web UI's I can remember, the short-term and long-term forecasts are two different things. So if mixing them, yes, the short-term forecast could overwrite the long-term forecast.

How do you imagine that to work? The problem is that a binding can never know what was configured by the user. The user could link two channels (the one with daily forecasts and the one with hourly forecasts) to the same item; in that case "replace the future" would be wrong.

I imagined that each channel in the binding would have a specific purpose, so that the code could make that decision. But I get the point, if users would mix such channels into the same item, it can't work. But it also means we'll have no bullet-proof way of invalidating all future states, or did I miss something?

@J-N-K
Member Author

J-N-K commented May 21, 2023

The only issue I see here is that in any case this would require the user to configure persistence in order to use the basic and most important channels in the binding. But it can be easily worked around by just keeping the hourly update job in the binding and still update the state the old-fashioned way for backwards compatibility and "out of the box" functionality without persistence configured. The data is already cached in the binding anyway to reduce network calls and to be used by calculations.

IMO an example configuration for persistence would be fine. I don't think it's a good idea to mix hourly updates and forecasts on the same channel. If you want to keep that, you should probably add two channels: one "forecast" and a second one "current". You can then update "current" every hour and send the time-series to "forecast", with a note in the doc that this requires persistence. The user can then choose which one to use, but only "forecast" supports charting in the future.

I'm now also thinking about an interesting case for time series, namely the update of electricity taxes and net transmission tariffs, since this data is very static in nature. Here I may push a time series with very few states valid months into the future.
[...]

This should be fine. I think we should probably add methods to send time-series from rules, so a simple HTTP request can be converted to a time-series.

Exactly. An example would be a network-attached energy meter. The binding could then send missed data as a TimeSeries. If the persistence service does not support it, the values are discarded and a gap remains in the data (as it is now), but if it does support it, the gap will be filled. This does not require any changes on the user side; it just works if the pre-requisites are fulfilled.

Just out of curiosity, what would be the harm in using REPLACE in this case?

That depends on the configuration. Maybe not in that case, but if you have a String item that aggregates "messages" from different systems (similar to a log), messages from other systems might already be in the same time range and would be removed.

How do you imagine that to work? The problem is that a binding can never know what was configured by the user. The user could link two channels (the one with daily forecasts and the one with hourly forecasts) to the same item; in that case "replace the future" would be wrong.

I imagined that each channel in the binding would have a specific purpose, so that the code could make that decision. But I get the point, if users would mix such channels into the same item, it can't work. But it also means we'll have no bullet-proof way of invalidating all future states, or did I miss something?

Yes, I think we can't cover all cases. One thing we could do is allow manually setting beginDate and endDate; in that case the binding could set the endDate to Instant.MAX, which would result in removing all future data. WDYT?

@jlaur
Contributor

jlaur commented May 21, 2023

The only issue I see here is that in any case this would require the user to configure persistence in order to use the basic and most important channels in the binding. But it can be easily worked around by just keeping the hourly update job in the binding and still update the state the old-fashioned way for backwards compatibility and "out of the box" functionality without persistence configured. The data is already cached in the binding anyway to reduce network calls and to be used by calculations.

IMO an example configuration for persistence would be fine. I don't think it's a good idea to mix hourly updates and forecasts on the same channel. If you want to keep that, you should probably add two channels: one "forecast" and a second one "current". You can then update "current" every hour and send the time-series to "forecast", with a note in the doc that this requires persistence. The user can then choose which one to use, but only "forecast" supports charting in the future.

That would introduce a complete parallel universe where states differing only in timestamp (now vs. historic/future timestamp) cannot co-exist.

I don't really like my own proposal to still have the redundant hourly job updating current prices; this would only be a (hopefully temporary) work-around because of the current dependency on persistence for publishing future states on the event bus when they become current.

Ideally this would happen automatically without requiring persistence, but I didn't want to push for this now as it would extend the scope and the required work. But I would like to hear your opinion about this. It would be like your "in-memory persistence" proposal, except it would be integrated and work independently of persistence configurations. It's not only a matter of requiring more configuration by the user, which of course can be documented. It's also the actual dependency, which feels strange if the user doesn't need persistence at all. It's still possible to display current prices and even perform useful calculations (within the near past and until tomorrow) without any persistence.

I even prepared for this with commit openhab/openhab-addons@3506e60. For me the channels are exactly the same, except that this PR makes it possible to update states ahead of time. There is no conceptual difference, it's just different ways of updating the same state - which of course also unlocks new possibilities.

Keeping only the same set of channels, with the hourly job, my understanding is it would work like this (having duplicated state updates, i.e. hourly + time series):

Non-persisted items or items configured with any policy other than forecast:

  • Price items are updated as they become current (by the binding). This will be close to the new hour, but not exact - for example 21.05.2023 21:00:00.027.
  • When persisted, the persisted timestamp will also not be exact.
  • Since future state updates are not pushed back to the event bus when they become current, there will only be one state update per item per hour.
  • All works fine and exactly like before this PR.

Persisted items with policy forecast:

  • Price items are updated as they become current (by core) - handled automatically by the core framework, not by the binding.
  • Price items are persisted ahead of time with exact timestamps - for example 21.05.2023 21:00:00.000.
  • Price items will also be updated by the binding at approximately the same time. However, this state update will not be persisted, so there will be duplication on the event bus (in timestamps, not value), but not in persistence.
  • All in all everything still works fine, except the hourly redundant updates on the event bus.

I really think we should try to make this just another dimension of states and try to eliminate redundancy rather than adding more of it.

Just a side note: I'm now also thinking about the "normalization" of prices. Let's say hourly tariffs will be A from 17:00-20:00 and B for all remaining hours - and this is valid for the next three months. That means that I actually have just one record for this three month period, but I'm pushing a state update two times per day, i.e. 17:00 (value A) and 20:00 (value B). Currently this is done by the hourly job, so it's not an issue. But if I would do this for three months ahead, it would be approximately 180 future state updates. I think in this case I would not do this, but limit the binding to only push time series until the end of tomorrow. You can't make much use of the tariffs anyway without also having the spot prices.

How do you imagine that to work? The problem is that a binding can never know what was configured by the user. The user could link two channels (the one with daily forecasts and the one with hourly forecasts) to the same item; in that case "replace the future" would be wrong.

I imagined that each channel in the binding would have a specific purpose, so that the code could make that decision. But I get the point, if users would mix such channels into the same item, it can't work. But it also means we'll have no bullet-proof way of invalidating all future states, or did I miss something?

Yes, I think we can't cover all cases. One thing we could do is allow manually setting beginDate and endDate; in that case the binding could set the endDate to Instant.MAX, which would result in removing all future data. WDYT?

Wouldn't that result in the same problem where the short-term forecast would overwrite the long-term forecast if the user linked the two sets of channels to the same items?

@openhab-bot
Collaborator

This pull request has been mentioned on openHAB Community. There might be relevant details there:

https://community.openhab.org/t/bringing-electricity-information-from-eloverblik-dk-and-energidataservice-dk-into-openhab/143470/37

@J-N-K J-N-K force-pushed the feature-historic branch from 3cea526 to ee25a2f on May 29, 2023 12:59
@J-N-K
Member Author

J-N-K commented May 29, 2023

Regarding persistence: I don't think that the requirement to configure persistence is too high. With #2871 we added core support for managing persistence from the UI, and we can provide example configurations in the respective binding doc (similar to the .things, .items, ... examples).

With the proposed InMemoryPersistenceService the user can also decide not to have external systems and still use what the binding supports, without scheduled jobs in the binding. This also simplifies support for that in bindings; they just need to schedule the TimeSeries updates.

With my latest commit I also added support for sending TimeSeries from JSR-223. I believe it's easy to have that in the helper library, is that correct @florian-h05?

@florian-h05
Contributor

florian-h05 commented May 29, 2023

With my latest commit I also added support for sending TimeSeries from JSR-223. I believe it's easy to have that in the helper library, is that correct @florian-h05?

Those methods are part of the BusEvent actions, right? If yes, it should be very easy to include them in the helper library to ease use, but all actions are automatically exported on the actions namespace, so we don’t need adjustments to have it work. actions.BusEvent.sendTimeSeries should work out of the box with no changes in the helper library.

@kaikreuzer
Member

Thanks!

@kaikreuzer kaikreuzer merged commit cdbca4d into openhab:main Nov 8, 2023
3 checks passed
@kaikreuzer kaikreuzer added this to the 4.1 milestone Nov 8, 2023
@jlaur
Contributor

jlaur commented Nov 9, 2023

@J-N-K - congrats on the merge! 🚀

I was writing some documentation and considered linking to the persistence documentation. The forecast strategy should probably be documented here:
https://next.openhab.org/docs/configuration/persistence.html

And you are probably the best to describe it. 🙂 When running some tests I noticed mixed updates when using strategy everyChange, e.g.:

2023-11-09 21:00:00.000, 1.76230003375
2023-11-09 21:00:00.009, 1.76230003375

This was expected. However, after publishing a new time series starting before and ending after this timestamp, this "odd" row was still present. In this case I probably would have expected it to be replaced by the time series. However, after changing the strategy to forecast and publishing a new time series again, the row was removed. This is just a detail, but it might be worth mentioning somehow in the documentation.

@openhab-bot
Collaborator

This pull request has been mentioned on openHAB Community. There might be relevant details there:

https://community.openhab.org/t/best-approach-for-energy-consumption-usage-queries/151393/23

@shikousa

Is there already complete documentation available for this feature?

What I'm looking for is an InMemoryPersistenceService, a persistence service that is available only in RAM.
At certain timestamps, like midnight, I would like to evaluate my persisted energy readings in a rule and create an additional virtual persisted item using the InMemoryPersistenceService with only the required energy readings from the past. This I could use for the charts. The big advantage is that the charts would be shown a lot faster. At the moment OH takes all data, and it always takes a while until the chart is shown.

@J-N-K J-N-K deleted the feature-historic branch November 25, 2023 10:39
@J-N-K
Member Author

J-N-K commented Nov 25, 2023

@shikousa Please have a look here: openhab/openhab-addons#15063

What documentation would you expect?

@shikousa

I think in the future there will be functions like these that can be used in rules:

.store(ZonedDateTime,State)
.remove(FilterCriteria)
....

Should this be here?
https://next.openhab.org/docs/configuration/persistence.html

What I did not find is a function like .removePersistenceData(). Is this available?
Thanks!

florian-h05 added a commit to florian-h05/openhab-addons that referenced this pull request Nov 26, 2023
Implement time series support introduced by openhab/openhab-core#3597.

Signed-off-by: Florian Hotze <[email protected]>
florian-h05 added a commit to florian-h05/openhab-addons that referenced this pull request Nov 26, 2023
Implement time series support introduced by openhab/openhab-core#3597.

Signed-off-by: Florian Hotze <[email protected]>
florian-h05 added a commit to florian-h05/openhab-webui that referenced this pull request Nov 26, 2023
florian-h05 added a commit to openhab/openhab-webui that referenced this pull request Nov 26, 2023
@shikousa

shikousa commented Dec 1, 2023

I'm not 100% sure. May I ask if we can use the InMemoryPersistenceService in rules and also add values with timestamps to it?

florian-h05 added a commit to florian-h05/openhab-addons that referenced this pull request Dec 14, 2023
Implement time series support introduced by openhab/openhab-core#3597.

Signed-off-by: Florian Hotze <[email protected]>
jlaur pushed a commit to openhab/openhab-addons that referenced this pull request Dec 14, 2023
austvik pushed a commit to austvik/openhab-addons that referenced this pull request Mar 27, 2024
…enhab#15963)

Implement time series support introduced by openhab/openhab-core#3597.

Signed-off-by: Florian Hotze <[email protected]>
Signed-off-by: Jørgen Austvik <[email protected]>
joni1993 pushed a commit to joni1993/openhab-addons that referenced this pull request Oct 15, 2024