• terrypacker

    Milo,

    BACnet4J can be compiled using Maven OR Gradle. I use Maven and have no problems, as the pom references our Maven repository at maven.mangoautomation.net.

    You have two options. If you are using Maven in the project that requires BACnet4J, you can review the pom in the git repo here:

    https://github.com/infiniteautomation/BACnet4J/blob/master/pom.xml

    That should show you how to include all the necessary libraries (note the repository and pluginRepository configuration).

    The other option is to download what you need directly from the Bintray and maven.mangoautomation.net repositories online.
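    For the first option, a minimal sketch of the pieces your own pom would need might look like this (the repository id, URL path, coordinates, and version below are assumptions for illustration — take the real values from the pom linked above):

    ```xml
    <!-- Sketch only: verify every value against the BACnet4J pom linked above -->
    <repositories>
      <repository>
        <id>ias-release</id> <!-- hypothetical repository id -->
        <url>https://maven.mangoautomation.net/repository/ias-release/</url> <!-- assumed path -->
      </repository>
    </repositories>

    <dependencies>
      <dependency>
        <groupId>com.serotonin</groupId> <!-- assumed groupId -->
        <artifactId>bacnet4j</artifactId>
        <version>5.0.0</version> <!-- assumed; use the current release -->
      </dependency>
    </dependencies>
    ```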

    posted in BACnet4J general discussion
  • terrypacker

    Just a thought, but do you have more than 300 data points? The free version might let you import them, but after a restart it could be raising an event about a license violation. I am not sure what the REST API would do in this situation, and you could be seeing the effects of this.

    posted in User help
  • terrypacker

    It definitely sounds like you don't have enough memory for your configuration. If you allocate the JVM too much memory, you run the risk of having the process killed by the OS.

    If you intend to run with 4GB of system memory, I would take a look at throttling the Persistent publishers via the setting on the receiving Mango. Phillip suggested setting it to 5 million, but it seems like your system would run out of memory before there are 5 million values waiting to be written. I would keep an eye on that value and see when you start to experience GC thrashing (high CPU and OOM errors in the logs), then set the throttle threshold below that number of waiting values.

    From the graph you posted you could set it to 10,000 (but that was with less memory, so the value will be higher now).

    posted in User help
  • terrypacker

    To see both API versions in Swagger, just set:

    swagger.mangoApiVersion=v[12]
    

    You must restart to see the changes. Also, Swagger isn't really designed for use in production environments, especially if you are running thin on memory, as it will eat up some of your precious RAM.

    posted in User help
  • terrypacker

    In addition to those metrics, you can also request all of the information found on the Internal Metrics page via the /rest/v1/system-metrics/ and /rest/v1/system-metrics/{id} endpoints.

    The most useful of these for your current problem would be the id com.serotonin.m2m2.db.dao.PointValueDao$BatchWriteBehind.ENTRIES_MONITOR, which will show you how many values are currently waiting to be written to the database (cached in memory).

    This information can also be logged by the Internal Metrics data source.
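    As a rough illustration, a small script can build those endpoint URLs for polling the batch-write-behind counter (the host and port are placeholders, and an authenticated session or token would be needed for the actual GET; the paths and monitor id are the ones above):

    ```python
    # Sketch: build system-metrics URLs for polling. Host/port are assumptions;
    # authenticate the real requests with your Mango credentials or token.
    BASE = "http://localhost:8080"  # assumed Mango host/port
    MONITOR_ID = ("com.serotonin.m2m2.db.dao.PointValueDao"
                  "$BatchWriteBehind.ENTRIES_MONITOR")

    def metric_url(metric_id=None):
        """URL for all metrics, or for a single metric when an id is given."""
        url = BASE + "/rest/v1/system-metrics/"
        if metric_id is not None:
            url += metric_id
        return url

    print(metric_url())            # all internal metrics
    print(metric_url(MONITOR_ID))  # just the values-waiting-to-be-written count
    ```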

    posted in User help
  • terrypacker

    Adam,

    I'd be a little wary of hitting the /v2/server/system-info endpoint frequently; some of the data returned is computationally expensive for Mango to calculate. For example, it will compute the database size by recursively accessing every file to get its size. For NoSQL there will be one file for every two-week period in which a data point has data.

    I would strip down the request to only get what you want:

    GET /rest/v2/server/system-info/noSqlPointValueDatabaseStatistics
    GET /rest/v2/server/system-info/loadAverage
    

    I would avoid requesting noSqlPointValueDatabaseSize because of how intensive that request is on the server.
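    If you poll on a schedule, restricting yourself to the cheap sub-resources can be sketched like this (the host, the placeholder token, and the bearer-token auth style are assumptions on my part; the two paths are the ones above):

    ```python
    import json
    from urllib.request import Request, urlopen

    BASE = "http://localhost:8080"  # assumed host/port

    # Only the inexpensive sub-resources; deliberately omits
    # noSqlPointValueDatabaseSize, which is costly for the server to compute.
    CHEAP_ENDPOINTS = [
        "/rest/v2/server/system-info/noSqlPointValueDatabaseStatistics",
        "/rest/v2/server/system-info/loadAverage",
    ]

    def fetch(path, token="YOUR-TOKEN"):  # placeholder credentials
        """GET one sub-resource; the auth header style here is an assumption."""
        req = Request(BASE + path, headers={"Authorization": "Bearer " + token})
        with urlopen(req) as resp:
            return json.load(resp)

    # e.g. poll on a timer: for p in CHEAP_ENDPOINTS: print(p, fetch(p))
    ```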

    posted in User help
  • terrypacker

    adamlevy,

    I'll throw in my 2 cents here also. First, I agree with Phillip that a core upgrade is the first thing to do.

    It looks like you are receiving data from multiple Persistent Publishers. If your system cannot write the data to disk as fast as it is coming in, it will run out of memory. One symptom of this: right after startup, all the Publishers will connect and dump their queues of data to the receiving machine. This can cause the Point Values Waiting to Be Written count to skyrocket and eat up memory. You should be able to see this on the Internal Metrics page. Basically, if you are not writing faster than the data is coming in, you will run out of memory.

    If this is the case you have a few options:

    1. Tune the publishers to slow down their queue dumping on connection; this is controlled via the Persistent Synchronization system settings.

    2. Tune the batch writing for the MangoNoSQL database, which can be done via the NoSQL system settings.

    posted in User help
  • terrypacker

    You shouldn't need any configuration changes; the lost backdates were likely due to a corrupt shard that was repaired while the system was running and inserting data. This is a rare scenario and has been fixed for the next release.

    posted in Mango Automation
  • terrypacker

    This likely points to a recent bug we found and have fixed in the NoSQL module, where discarded samples could make that count incorrect. If your 'Writes per second during database batches' is low and does not appear to be draining any of the > 200,000, then this is the cause. Those values are not actually in memory; the counter is simply wrong.

    The only way to reset the counter is to restart Mango. The fix for this bug will be in the 3.2.2 mangoNoSQLDatabase release next week.

    The 200,000 missing values were discarded due to a problem writing to disk, so you will likely need to re-run the sync. You will find the reasons for the discards in the log file.

    posted in Mango Automation
  • terrypacker

    With regard to the values of 0 with the date set to 1969: I would think that could be due to an error in your script. This is the exact result you will get if one were to set both the timestamp and value of a point to 0.
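    A quick way to confirm this interpretation: epoch timestamp 0 is 1970-01-01 00:00:00 UTC, which displays as December 31, 1969 in any timezone west of UTC (shown here with a fixed UTC-5 offset):

    ```python
    from datetime import datetime, timezone, timedelta

    # Timestamp 0 is the Unix epoch in UTC
    utc = datetime.fromtimestamp(0, tz=timezone.utc)
    print(utc.isoformat())   # 1970-01-01T00:00:00+00:00

    # The same instant rendered at UTC-5 falls on the last day of 1969,
    # which is why a zeroed timestamp shows up as a 1969 date
    west = datetime.fromtimestamp(0, tz=timezone(timedelta(hours=-5)))
    print(west.isoformat())  # 1969-12-31T19:00:00-05:00
    ```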

    posted in Development general discussion