Historical access to existing database #2396
You built the historical access layer. |
Hello nicolasr75, Yes, your use case is supported with open62541. Please look into tutorial_server_historicaldata.c. The easiest way for your scenario is to implement the data backend yourself. To use your backend you can take the tutorial and change just line 87. A second way is to implement the database plugin itself, but that is much more complicated and I would not recommend it. Please first try whether route one is suitable for you. It is much easier to implement. Thanks, Peter |
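For orientation, the swap could look roughly like this. This is only a sketch against the tutorial's structure as of early 2019; `myCustomBackend()` is a hypothetical factory returning a `UA_HistoryDataBackend` whose callbacks you implemented yourself, and the exact line number may differ between versions:

```c
/* Sketch only: myCustomBackend() is a hypothetical function returning a
 * UA_HistoryDataBackend filled with your own callbacks. In the tutorial
 * this replaces the UA_HistoryDataBackend_Memory(...) call. */
UA_HistorizingNodeIdSettings setting;
setting.historizingBackend = myCustomBackend();  /* was: UA_HistoryDataBackend_Memory(3, 100) */
setting.maxHistoryDataResponseSize = 100;        /* max values returned per response */
setting.historizingUpdateStrategy = UA_HISTORIZINGUPDATESTRATEGY_VALUESET;
gathering.registerNodeId(server, gathering.context, &outNodeId, setting);
```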
Thanks so much Peter. That is more information than I hoped for :-) I will start with route one and report back how it works. Nicolas. |
If you intend to upstream your work, feel free to add either @peter-rustler-basyskom or @basyskom-meerkoetter to the review. |
Hello @nicolasr75 , I have written a test which implements a database backend with simulation of random indexes. This PR fixes them and adds the test. You can also use this test backend as an example. |
Great, thanks! I will need some time to go through all of this and will have to read up on some things in the specification... Some conceptual questions: I still have some difficulties with the indexing. At least in our legacy database the primary key is the combination of timestamp and measuring point ID (I have seen it done similarly in other systems). Should I combine these into a (virtual) index? Does the index have to be chronological? What is the "gathering" for? Is that some kind of in-memory cache? Should it be configured to optimize the usual queries? E.g. my user may query all data for the previous 24h once a day. For a measuring point with a 15-minute period that would mean 96 values. But there could be 500 measuring points, so around 50,000 records altogether every day. |
Like I said before, the index does not need to be in order. It also does not need to start at 0 or end at LLONG_MAX. It is totally up to you what the index represents, as long as the backend can identify a data tuple by it. Gathering is the part that stores node data into the database (if you want that) and sets up and stores which backend to ask for data when a client makes a history read. |
I am in the middle of implementing my own backend for connecting to an SQLite database. |
For getDateTimeMatch, lastIndex and firstIndex: if you cannot find a value, because of a database error or because the table is empty, you should return the same index as returned by getEnd. getEnd should return whatever your backend can identify as the value after lastIndex; that means it is the first index that is invalid. It is not required that new values will ever get that value. So it is totally up to you what to return there, as long as you can identify it as an invalid value. If I were to implement a database backend I would return LLONG_MAX, for example. In getResultSize, if you get LLONG_MAX as startIndex or lastIndex you should return 0 (no data is present for the request). |
@nicolasr75 See also randomindex_test.h for a fully implemented backend which uses random indexes. |
@peter-rustler-basyskom Thanks for all the tips. LLONG_MAX for getEnd seems to be a good idea. I have already studied your test with random indices, but the point is this: my database does not have a separate ID column as a primary key; instead it uses timestamp and measuring point ID as a composite primary key. Instead of creating an extra index just for transporting the data via OPC, I would like to calculate an index on the fly from my timestamps by converting them to Unix epoch time. That should serve as a good index and fit the data type 'size_t'. It is also automatically chronological, which may prove helpful in debugging and error analysis... I'll keep you updated! |
@peter-rustler-basyskom @basyskom-meerkoetter I still have one question though:
As far as I understand, the 'gathering' is used to collect data from e.g. sensors that should be written to the database. But when reading from existing databases, as in my example, there is nothing to gather. Peter also comments in his code that UA_HistoryDataGathering_Default() could be expensive when used with multiple nodes. In my example there may be 500 measuring points, i.e. 500 nodes. Since I cannot leave out the above lines without getting errors, that would mean I have to create 500 gatherings! I wrote the example in pure C like the rest of open62541. For my actual project I will now continue in C++, though, because I also have to link against existing C++ code. |
@nicolasr75 I am working on the same task: integrating the open62541 HDA function with an SQLite DB. |
@lkyptw If you tell me your e-mail I could send you my test code, but I haven't worked on it for months so I don't know its state... |
@nicolasr75 Thanks very much! |
Hi @nicolasr75 |
Hi |
@lkyptw @mojaxx Here you go: https://github.com/nicolasr75/open62541_sqlite |
Hi nicolasr75 |
Hi nicolasr,
Thanks a lot! You are a kind person.
|
My pleasure! I did promise to provide some code once I got it working :-) Really, we should thank the people who created and maintain this project! |
@peter-rustler-basyskom Thanks |
Hi @nicolasr75 , First of all thanks a lot for your contributions! I was taking a look at the SQLiteBackend.h code and I found that to convert UA_DateTime to a size_t value (seconds since epoch) you are using some defines like "WINDOWS_TICK" and "SEC_TO_UNIX_EPOCH". I think you can avoid that and use the open62541 defines directly. In particular:

```c
size_t OpcTimestampToUnixSeconds(UA_DateTime timestamp)
{
    return (size_t)(timestamp / WINDOWS_TICK - SEC_TO_UNIX_EPOCH);
}
```

could become

```c
int64_t OpcTimestampToUnixSeconds(UA_DateTime timestamp)
{
    return (int64_t)((timestamp - UA_DATETIME_UNIX_EPOCH) / UA_DATETIME_SEC);
}
```

and you can easily convert also to milliseconds:

```c
int64_t OpcTimestampToUnixMilliseconds(UA_DateTime timestamp)
{
    return (int64_t)((timestamp - UA_DATETIME_UNIX_EPOCH) / UA_DATETIME_MSEC);
}
```

and microseconds:

```c
int64_t OpcTimestampToUnixMicroseconds(UA_DateTime timestamp)
{
    return (int64_t)((timestamp - UA_DATETIME_UNIX_EPOCH) / UA_DATETIME_USEC);
}
``` |
@luibass92 Thanks for your input, I will add your code next week |
#define UA_DATETIME_UNIX_EPOCH (11644473600LL * UA_DATETIME_SEC) |
UA timestamps are the number of 100-nanosecond intervals since 1/1/1601 (UTC). The magic number you ask about is the difference in seconds between this date and the start of the Unix epoch, which is 1/1/1970. See e.g. https://stackoverflow.com/questions/6161776/convert-windows-filetime-to-second-in-unix-linux |
Thank you @nicolasr75, I just wanted to know how it was calculated, because I got another number. Theoretically, if we have 92 leap years between 1601 and 1970, so... |
Very interesting. I'm ashamed I didn't bother to calculate it on my own ;-) but now that you bring it up... |
Wow, thank you very much. I never knew this exception! |
Hello, |
The UA_DateTime type represents the timestamp, but it is defined as a 64-bit signed integer. Does a negative timestamp value represent something? Should it not be defined as a 64-bit unsigned integer? |
You may want to have a look at the specs here: https://reference.opcfoundation.org/v104/Core/docs/Part6/5.2.2/#5.2.2.5 There is no explicit mention of negative timestamps, but there are some rules about earliest timestamps. |
Hi all, I have improved the project of nicolasr75 so that it can be used on Linux; tests on Ubuntu and Debian are OK. https://github.com/happybruce/opcua/tree/main |
Hi all, I am trying to implement an OPC UA server with historical access capabilities. I was testing the example from @nicolasr75 (thank you a lot for making it available!) and it works well if I manually register every node using these two lines of code. My question is about registering multiple nodes without having to write the two lines for every node. How can I get a list of all the variables and NodeIds in my server that have "attr.historizing = true;"? Is there already a way of doing this by exploring the address space, searching for nodes with the "historizing" attribute set to true? Many thanks. |
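One possible direction, sketched under assumptions: open62541 exposes a server-side browse (`UA_Server_browse`) and generated attribute accessors such as `UA_Server_readHistorizing`, which you should verify against your version's headers. The recursion below is simplified (no cycle guard, minimal error handling) and the settings struct is left for you to fill in as in the tutorial:

```c
/* Sketch: recursively browse the address space and register every node
 * whose Historizing attribute is true. Simplified; production code needs
 * a visited-node guard against reference cycles. */
static void registerHistorizingNodes(UA_Server *server,
                                     UA_HistoryDataGathering gathering,
                                     UA_NodeId startNode) {
    UA_BrowseDescription bd;
    UA_BrowseDescription_init(&bd);
    bd.nodeId = startNode;
    bd.browseDirection = UA_BROWSEDIRECTION_FORWARD;
    bd.includeSubtypes = true;
    bd.nodeClassMask = UA_NODECLASS_OBJECT | UA_NODECLASS_VARIABLE;
    bd.resultMask = UA_BROWSERESULTMASK_ALL;

    UA_BrowseResult br = UA_Server_browse(server, 0, &bd);
    for(size_t i = 0; i < br.referencesSize; i++) {
        UA_NodeId child = br.references[i].nodeId.nodeId;
        UA_Boolean historizing = false;
        /* readHistorizing fails for non-variable nodes; we just skip those */
        if(UA_Server_readHistorizing(server, child, &historizing) ==
               UA_STATUSCODE_GOOD && historizing) {
            UA_HistorizingNodeIdSettings setting;
            memset(&setting, 0, sizeof(setting));
            /* ...fill in your backend and update strategy as in the tutorial... */
            gathering.registerNodeId(server, gathering.context, &child, setting);
        }
        registerHistorizingNodes(server, gathering, child); /* recurse */
    }
    UA_BrowseResult_clear(&br);
}
```

Starting it from the Objects folder (`UA_NODEID_NUMERIC(0, UA_NS0ID_OBJECTSFOLDER)`) should cover the usual variable nodes.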
@peter-rustler-basyskom in this thread (on Jan 29, 2019) you stated:
But if I look at the file, I'm just trying to make sense of it, as I would like to access my own db and the low-level interface seems to be a bit clunky. Any bit of information on how the memory example was designed would be a huge help. Thanks. UPDATE: I see now that the 2019 version of the demo backend did use the low-level interface (v1.0). At some point it was refactored to use the high-level API. I will see if I can dig into some PRs to get some orientation. In the meantime, if anyone has any advice on the high-level API (I'm just trying to read from a REST endpoint, no need to write), I'd appreciate it. |
I am very excited to see historical services being added to this library.
I have seen the historical server example but feel somewhat lost in the feature jungle especially since I currently would like to realize only one specific scenario:
I have a very old database and would like to provide an existing OPC UA client a way to access this database within a corporate network. The database contains multiple measuring points. The measuring points usually have measuring periods of 1, 15 or 60 minutes. The database itself gets filled with new values for the measuring points usually once or twice a day. I would like the OPC UA client to be able to query the newest available data whenever it wants to, probably with a redundant lookback of at least 3 days.
Is this already possible with the current version and could you maybe give me a short sketch of the steps involved or where to find information?
btw: I would be more than happy to write a full example with something like SQLite or PostgreSQL in return, once I got my project running.