• C
    craig

    Curious: for 7800 points, how often are they read from the data sources, how often are they logged to disk, and how much RAM are you allowing for the JVM on the 6-core CPU?
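    A back-of-envelope sketch of why those numbers matter together; the interval below is a hypothetical placeholder, not the actual configuration being asked about:

```javascript
// Rough sizing for a poll-and-log workload. The logging interval here
// is a placeholder assumption, not a measurement from this thread.
function writesPerSecond(pointCount, logIntervalSeconds) {
  return pointCount / logIntervalSeconds;
}

// e.g. 7800 points each logged once a minute would mean 130 values/second
// arriving at the time-series database, sustained.
console.log(writesPerSecond(7800, 60)); // 130
```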

    posted in User help
  • C
    craig

    Maybe a limit could be implemented so that if "select all" is clicked, it only selects up to 30.

    posted in Wishlist
  • C
    craig

    Hi Phil, no failed network requests and no errors in the Firebug console, and it is on version 3.6.4. I guess it is a web browser thing.

    posted in User help
  • C
    craig

    Firefox 68, or more exactly: userAgent=Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:68.0) Gecko/20100101 Firefox/68.0

    posted in User help
  • C
    craig

    Hi,

    When I try to save a plot from a watchlist, the JPG, PNG, and PDF exports are all blank white space. The PDF has a URL at the top. The XLSX export does provide data, and there is data in the plot.

    It looks like this exception occurs in the log:

    WARN  2019-10-07T09:42:16,831 (com.infiniteautomation.mango.rest.v2.ServerRestV2Controller.postClientError:319) - Client error
    [user=admin, cause=Possibly unhandled rejection: {}, location=http://192.168.172.11:8080/ui/administration/system-status/logging-console, userAgent=Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:68.0) Gecko/20100101 Firefox/68.0, language=en-US, date=2019-10-07T09:42:35.800-07:00, timezone=Africa/Abidjan]
    TypeError: this.activeEvents is undefined
    	at subscribeToEvents/< (http://192.168.172.11:8080/ui/mangoUi~ngMango~ngMangoServices.js?v=f41ab677f738e8ffe5b8:72:97514)
    	at u/</< (http://192.168.172.11:8080/ui/mangoUi~ngMango~ngMangoServices.js?v=f41ab677f738e8ffe5b8:78:94777)
    	at u/< (http://192.168.172.11:8080/ui/mangoUi~ngMango~ngMangoServices.js?v=f41ab677f738e8ffe5b8:78:94915)
    	at $digest (http://192.168.172.11:8080/ui/mangoUi~ngMango~ngMangoServices.js?v=f41ab677f738e8ffe5b8:78:100341)
    	at $apply (http://192.168.172.11:8080/ui/mangoUi~ngMango~ngMangoServices.js?v=f41ab677f738e8ffe5b8:78:102518)
    	at $applyAsync/r< (http://192.168.172.11:8080/ui/mangoUi~ngMango~ngMangoServices.js?v=f41ab677f738e8ffe5b8:78:102663)
    	at Qr/this.$get</</this.completeTask (http://192.168.172.11:8080/ui/mangoUi~ngMango~ngMangoServices.js?v=f41ab677f738e8ffe5b8:78:122819)
    	at un/this.$get</</a.defer/r< (http://192.168.172.11:8080/ui/mangoUi~ngMango~ngMangoServices.js?v=f41ab677f738e8ffe5b8:78:34332)
    

    Actually, that exception might be related to viewing the log rather than the empty plots.

    posted in User help
  • C
    craig

    No points have been disabled, no event messages about disabled points, just some meta points not starting back up properly after a dirty shutdown.

    The meta point has a cron of 2 minutes as the update event, the script is "return p.previous(MINUTE,2).average", and it is set to average interval logging every 2 minutes.
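    Outside Mango, the calculation that script performs can be sketched in plain JavaScript; the `{ts, value}` sample array here is a stand-in for `p.previous(MINUTE,2)`, which this example has no access to:

```javascript
// Stand-in for p.previous(MINUTE, 2).average: the mean of a point's
// values over the `minutes` before `now`. The sample shape {ts, value}
// is an assumption for this sketch, not Mango's actual API.
const MINUTE = 60 * 1000;

function previousAverage(samples, now, minutes) {
  const cutoff = now - minutes * MINUTE;
  const window = samples.filter(s => s.ts >= cutoff && s.ts < now);
  if (window.length === 0) return null; // no source data, nothing to log
  return window.reduce((sum, s) => sum + s.value, 0) / window.length;
}
```

    The cron update event runs this every 2 minutes, and average interval logging stores the result on the same period, so the meta point only produces values when that trigger actually fires.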

    The Excel report is using rollups.

    Let me know if there is any other information I can provide before I start changing things that will make any further troubleshooting of the root cause impossible.

    posted in User help
  • C
    craig

    Thanks for the details on the corruption scan. Maybe not related at all!

    Part of the problem appears to be that we have used meta points for one reason or another that I don't understand. The Modbus data point looks fine; all the data is there except for the 3-hour period when the machine was powered off.

    In the watchlist, some of the meta points are flatlined and have not updated since the machine was powered off. Here is a screenshot showing orange penstock flow and yellow PHDS gauge flow (meta points) flatlined starting at 3:25 when the machine was powered off, not recovering until Mango was restarted, whereas others carried on just fine from September 26 onwards:
    0_1570126090100_017a912a-5142-4a12-8581-490a2b17daa6-image.png

    However, on the Modbus data source it is all there:
    0_1570130569800_026fc08f-d133-46d4-9005-8513fb3fdca1-image.png

    Oddly, the Excel report that uses the meta points doesn't show any flatline except where data is actually missing, so it doesn't make sense to me how it gets the right data from the meta point when the watchlist shows flatlines for the same meta point over the same time period:
    0_1570130677400_bfac3734-9d49-4ccb-8815-1c7e7763308e-image.png

    And lastly, the e-mail report (not Excel), also using the meta point, is not able to produce a plot with even a flatline:
    0_1570130965900_97584401-1c65-4538-a536-08617282f481-image.png

    Probably re-generating the point history for the meta points will fix all this. Since I don't understand why we are using meta points at all, we can probably just delete them, use the Modbus points directly, and stop having these issues; otherwise I will probably have to re-generate meta point history after the next dirty shutdown.

    Before I re-generate meta history or get rid of the meta points altogether, let me know if you'd like to take a closer look at the data and configuration in case anything looks like an issue with Mango that you would like to fix.

    Thanks for your help and continued work on Mango.
    Craig

    posted in User help
  • C
    craig

    The plug was pulled on the computer. Upon plugging it back in, some point values are plotting flatlines. We are using the TSDB.

    In env.properties: db.nosql.runCorruptionOnStartupIfDirty=false

    So no corruption scan happened when Mango restarted. Changing that property to true and restarting Mango does not go back and fix corruption, because it was shut down cleanly. A kill -9 of Mango and then restarting does initiate the corruption scan, but I still have flatlines in the logs for the time between the computer being plugged back in and my cleanly restarting Mango.

    Is there a way to force Mango to go back and run a corruption scan on the whole TSDB database so the data from the period between (plug pulled) and (Mango cleanly restarted) can be plotted, or is that data not even recorded, since part of the database for those point values was corrupted?

    Is there any reason runCorruptionOnStartupIfDirty is set to false by default?
    Is there any way I can detect when data is not being logged due to a corrupted database?
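    On that second question: I don't know of a built-in check, but one generic approach is to compare each point's newest stored timestamp against its expected logging interval. A sketch under that assumption; the point shape and XIDs here are made up for illustration, not a Mango API:

```javascript
// Flag points whose latest stored value is older than `intervalsLate`
// logging intervals; a flatlined or corruption-blocked series looks like
// this from the outside. The point objects here are hypothetical.
function stalePoints(points, now, intervalsLate = 3) {
  return points
    .filter(p => now - p.lastValueTs > intervalsLate * p.logIntervalMs)
    .map(p => p.xid);
}
```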

    Thanks.

    posted in User help
  • C
    craig

    I am trying to use the units MVAR and MVARh, but Mango keeps changing them to MV*A and MVAh.

    Can MVAR and MVARh be added to the list of units?

    posted in User help
  • C
    craig

    Hi Phil,

    This was on Mango 3 at all times. I had upgraded to the latest Mango 3 a week ago, after the problem happened the first time.

    I have just upgraded the MangoNoSQL module. No update to the Excel reports module appeared in the list of newly released modules.

    We will make sure to upgrade all of the MangoES units on version 2.8 to the latest 2.8 as well.

    Thanks for such a quick turnaround!

    posted in User help
  • C
    craig

    Hi Phil,

    Thanks for getting in touch. I can report back to our customer that you are working on it.

    We made it a couple of days before running into this issue again. The first exception is in the Excel report purge; after that the corruption scanner has an exception, no more reports run, and then the queue fills up.

    I had the same problem of not being able to shut Mango down cleanly after the exceptions occurred.

    I did change runCorruptionOnStartupIfDirty to true after the first bout of corruption occurred.

    Exception in the Excel report purge:

    Exception in thread "high-pool-2-thread-10622" java.lang.NullPointerException
            at java.io.File.<init>(File.java:360)
            at com.infiniteautomation.mango.excelreports.ExcelReportsCommon.getReport(ExcelReportsCommon.java:174)
            at com.infiniteautomation.mango.excelreports.dao.ExcelReportDao.purgeReportsBefore(ExcelReportDao.java:193)
            at com.infiniteautomation.mango.excelreports.ExcelReportPurgeDefinition.execute(ExcelReportPurgeDefinition.java:30)
            at com.serotonin.m2m2.rt.maint.DataPurge.executeImpl(DataPurge.java:93)
            at com.serotonin.m2m2.rt.maint.DataPurge.execute(DataPurge.java:61)
            at com.serotonin.m2m2.rt.maint.DataPurge$DataPurgeTask.run(DataPurge.java:289)
            at com.serotonin.timer.Task.runTask(Task.java:179)
            at com.serotonin.timer.TaskWrapper.run(TaskWrapper.java:23)
            at com.serotonin.timer.OrderedThreadPoolExecutor$OrderedTaskCollection.run(OrderedThreadPoolExecutor.java:307)
            at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
            at java.lang.Thread.run(Thread.java:745)
    

    Exception in the corruption scanner, after which no reports run:

    WARN  2017-07-16T06:01:09,297 (com.infiniteautomation.tsdb.impl.IasTsdbImpl.repairShard:1214) - Corruption detected in series 68 shard 698, repairing now.
    WARN  2017-07-16T06:01:10,401 (com.serotonin.m2m2.util.timeout.TaskRejectionHandler.rejectedTask:77) - Rejected task: Interval logging: DP_194949 because Task Queue Full
    WARN  2017-07-16T06:01:11,441 (com.serotonin.m2m2.util.timeout.TaskRejectionHandler.rejectedTask:77) - Rejected task: Interval logging: DP_745002 because Task Queue Full
    ERROR 2017-07-16T06:01:11,443 (com.infiniteautomation.tsdb.impl.CorruptionScanner.findCorruption:531) - Map failed
    java.io.IOException: Map failed
            at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:907) ~[?:1.8.0_33]
            at com.infiniteautomation.tsdb.impl.ChecksumMappedByteBufferInputStream.<init>(ChecksumMappedByteBufferInputStream.java:41) ~[ias-tsdb-1.3.2.jar:?]
            at com.infiniteautomation.tsdb.impl.CorruptionScanner.openDataIn(CorruptionScanner.java:791) ~[ias-tsdb-1.3.2.jar:?]
            at com.infiniteautomation.tsdb.impl.CorruptionScanner.findCorruption(CorruptionScanner.java:498) [ias-tsdb-1.3.2.jar:?]
            at com.infiniteautomation.tsdb.impl.CorruptionScanner.checkFile(CorruptionScanner.java:467) [ias-tsdb-1.3.2.jar:?]
            at com.infiniteautomation.tsdb.impl.CorruptionScanner.checkShard(CorruptionScanner.java:434) [ias-tsdb-1.3.2.jar:?]
            at com.infiniteautomation.tsdb.impl.IasTsdbImpl.repairShard(IasTsdbImpl.java:1226) [ias-tsdb-1.3.2.jar:?]
            at com.infiniteautomation.tsdb.impl.IasTsdbImpl.multiQuery(IasTsdbImpl.java:529) [ias-tsdb-1.3.2.jar:?]
            at com.infiniteautomation.nosql.MangoNoSqlPointValueDao.getPointValuesBetween(MangoNoSqlPointValueDao.java:238) [mangoNoSqlDatabase-3.1.1.jar:?]
            at com.infiniteautomation.mango.excelreports.rt.ExcelReportWorkItem.execute(ExcelReportWorkItem.java:501) [excel-reports-3.1.2.jar:?]
            at com.serotonin.m2m2.rt.maint.BackgroundProcessing$RejectableWorkItemRunnable.run(BackgroundProcessing.java:556) [mango-3.1.1.jar:?]
            at com.serotonin.timer.Task.runTask(Task.java:179) [mango-3.1.1.jar:?]
            at com.serotonin.timer.TaskWrapper.run(TaskWrapper.java:23) [mango-3.1.1.jar:?]
            at com.serotonin.timer.OrderedThreadPoolExecutor$OrderedTaskCollection.run(OrderedThreadPoolExecutor.java:307) [mango-3.1.1.jar:?]
            at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_33]
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_33]
            at java.lang.Thread.run(Thread.java:745) [?:1.8.0_33]
    Caused by: java.lang.OutOfMemoryError: Map failed
            at sun.nio.ch.FileChannelImpl.map0(Native Method) ~[?:1.8.0_33]
            at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:904) ~[?:1.8.0_33]
            ... 16 more
    INFO  2017-07-16T06:01:12,047 (com.infiniteautomation.tsdb.impl.IasTsdbImpl.repairShard:1237) - Corruption repair report available at: /opt/mango/logs/tsdb-series-68-shard-698-scan-report_2017-07-16_06-01-09.log
    WARN  2017-07-16T07:49:21,011 (com.serotonin.m2m2.util.timeout.TaskRejectionHandler.rejectedTask:77) - Rejected task: Interval logging: DP_299411 because Task Queue Full
    WARN  2017-07-16T12:00:24,185 (com.serotonin.m2m2.util.timeout.TaskRejectionHandler.rejectedTask:77) - Rejected task: Interval logging: DP_452922 because Task Queue Full
    WARN  2017-07-16T18:01:00,131 (com.serotonin.m2m2.util.timeout.TaskRejectionHandler.rejectedTask:77) - Rejected task: Generating report: Hourly Compliance because Task Queue Full
    WARN  2017-07-16T19:01:00,168 (com.serotonin.m2m2.util.timeout.TaskRejectionHandler.rejectedTask:77) - Rejected task: Generating report: Hourly Compliance becaus
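    The failure signatures in the log above are regular enough to watch for mechanically. A sketch that classifies lines by the exact phrases appearing in this thread's excerpts; the categories and helper are my own naming, not anything Mango provides:

```javascript
// Classify Mango log lines by the failure signatures seen above.
// The pattern phrases are taken from this thread's log excerpts.
const SIGNATURES = [
  { name: 'corruption', re: /Corruption detected in series/ },
  { name: 'queue-full', re: /because Task Queue Full/ },
  { name: 'map-failed', re: /Map failed/ },
];

function classify(line) {
  const hit = SIGNATURES.find(s => s.re.test(line));
  return hit ? hit.name : null; // null: not one of the known signatures
}
```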
    

    Shutdown that won't complete; the last line repeats forever:

    mango@mangoES2147:/opt/mango/bin$ sudo ./ma.sh stop
    mango@mangoES2147:/opt/mango/bin$ INFO  2017-07-17T10:41:05,062 (com.serotonin.m2m2.Lifecycle.terminate:361) - Mango Lifecycle terminating...
    INFO  2017-07-17T10:41:05,138 (com.serotonin.m2m2.rt.DataSourceGroupTerminator.terminate:72) - Terminating 8 NORMAL priority data sources in 8 threads.
    INFO  2017-07-17T10:41:05,147 (com.serotonin.m2m2.rt.RuntimeManager.stopDataSourceShutdown:418) - Data source 'Cameras' stopped
    INFO  2017-07-17T10:41:05,147 (com.serotonin.m2m2.rt.RuntimeManager.stopDataSourceShutdown:418) - Data source 'Mango Performance' stopped
    INFO  2017-07-17T10:41:05,154 (com.serotonin.m2m2.rt.RuntimeManager.stopDataSourceShutdown:418) - Data source 'PLCi' stopped
    INFO  2017-07-17T10:41:05,179 (com.serotonin.m2m2.rt.RuntimeManager.stopDataSourceShutdown:418) - Data source 'Mango Internal' stopped
    INFO  2017-07-17T10:41:05,184 (com.serotonin.m2m2.rt.RuntimeManager.stopDataSourceShutdown:418) - Data source 'Unit 1' stopped
    INFO  2017-07-17T10:41:05,191 (com.serotonin.m2m2.rt.RuntimeManager.stopDataSourceShutdown:418) - Data source 'Unit 2' stopped
    INFO  2017-07-17T10:41:05,191 (com.serotonin.m2m2.rt.RuntimeManager.stopDataSourceShutdown:418) - Data source 'PLC0' stopped
    INFO  2017-07-17T10:41:05,213 (com.serotonin.m2m2.rt.RuntimeManager.stopDataSourceShutdown:418) - Data source 'MangoES System' stopped
    INFO  2017-07-17T10:41:05,252 (com.serotonin.m2m2.rt.DataSourceGroupTerminator.terminate:102) - Termination of 8 NORMAL priority data sources took 114ms
    INFO  2017-07-17T10:41:13,291 (com.serotonin.m2m2.rt.maint.BackgroundProcessing.joinTermination:462) - BackgroundProcessing waiting 60 more seconds for 1 active and 1 queued low priority tasks to complete.
    INFO  2017-07-17T10:41:18,292 (com.serotonin.m2m2.rt.maint.BackgroundProcessing.joinTermination:462) - BackgroundProcessing waiting 55 more seconds for 1 active and 1 queued low priority tasks to complete.
    INFO  2017-07-17T10:41:23,293 (com.serotonin.m2m2.rt.maint.BackgroundProcessing.joinTermination:462) - BackgroundProcessing waiting 50 more seconds for 1 active and 1 queued low priority tasks to complete.
    INFO  2017-07-17T10:41:28,294 (com.serotonin.m2m2.rt.maint.BackgroundProcessing.joinTermination:462) - BackgroundProcessing waiting 45 more seconds for 1 active and 1 queued low priority tasks to complete.
    INFO  2017-07-17T10:41:33,295 (com.serotonin.m2m2.rt.maint.BackgroundProcessing.joinTermination:462) - BackgroundProcessing waiting 40 more seconds for 1 active and 1 queued low priority tasks to complete.
    INFO  2017-07-17T10:41:38,296 (com.serotonin.m2m2.rt.maint.BackgroundProcessing.joinTermination:462) - BackgroundProcessing waiting 35 more seconds for 1 active and 1 queued low priority tasks to complete.
    INFO  2017-07-17T10:41:43,297 (com.serotonin.m2m2.rt.maint.BackgroundProcessing.joinTermination:462) - BackgroundProcessing waiting 30 more seconds for 1 active and 1 queued low priority tasks to complete.
    INFO  2017-07-17T10:41:48,298 (com.serotonin.m2m2.rt.maint.BackgroundProcessing.joinTermination:462) - BackgroundProcessing waiting 25 more seconds for 1 active and 1 queued low priority tasks to complete.
    INFO  2017-07-17T10:41:53,299 (com.serotonin.m2m2.rt.maint.BackgroundProcessing.joinTermination:462) - BackgroundProcessing waiting 20 more seconds for 1 active and 1 queued low priority tasks to complete.
    INFO  2017-07-17T10:41:58,300 (com.serotonin.m2m2.rt.maint.BackgroundProcessing.joinTermination:462) - BackgroundProcessing waiting 15 more seconds for 1 active and 1 queued low priority tasks to complete.
    INFO  2017-07-17T10:42:03,301 (com.serotonin.m2m2.rt.maint.BackgroundProcessing.joinTermination:462) - BackgroundProcessing waiting 10 more seconds for 1 active and 1 queued low priority tasks to complete.
    INFO  2017-07-17T10:42:08,302 (com.serotonin.m2m2.rt.maint.BackgroundProcessing.joinTermination:462) - BackgroundProcessing waiting 5 more seconds for 1 active and 1 queued low priority tasks to complete.
    INFO  2017-07-17T10:42:13,305 (com.serotonin.m2m2.rt.maint.BackgroundProcessing.joinTermination:473) - BackgroundProcessing waiting 60 more seconds for 18 active and 0 queued high priority tasks to complete.
    INFO  2017-07-17T10:42:18,306 (com.serotonin.m2m2.rt.maint.BackgroundProcessing.joinTermination:473) - BackgroundProcessing waiting 55 more seconds for 18 active and 0 queued high priority tasks to complete.
    INFO  2017-07-17T10:42:23,307 (com.serotonin.m2m2.rt.maint.BackgroundProcessing.joinTermination:473) - BackgroundProcessing waiting 50 more seconds for 18 active and 0 queued high priority tasks to complete.
    INFO  2017-07-17T10:42:28,308 (com.serotonin.m2m2.rt.maint.BackgroundProcessing.joinTermination:473) - BackgroundProcessing waiting 45 more seconds for 18 active and 0 queued high priority tasks to complete.
    INFO  2017-07-17T10:42:33,309 (com.serotonin.m2m2.rt.maint.BackgroundProcessing.joinTermination:473) - BackgroundProcessing waiting 40 more seconds for 18 active and 0 queued high priority tasks to complete.
    INFO  2017-07-17T10:42:38,310 (com.serotonin.m2m2.rt.maint.BackgroundProcessing.joinTermination:473) - BackgroundProcessing waiting 35 more seconds for 18 active and 0 queued high priority tasks to complete.
    INFO  2017-07-17T10:42:43,311 (com.serotonin.m2m2.rt.maint.BackgroundProcessing.joinTermination:473) - BackgroundProcessing waiting 30 more seconds for 18 active and 0 queued high priority tasks to complete.
    INFO  2017-07-17T10:42:48,312 (com.serotonin.m2m2.rt.maint.BackgroundProcessing.joinTermination:473) - BackgroundProcessing waiting 25 more seconds for 18 active and 0 queued high priority tasks to complete.
    INFO  2017-07-17T10:42:53,313 (com.serotonin.m2m2.rt.maint.BackgroundProcessing.joinTermination:473) - BackgroundProcessing waiting 20 more seconds for 18 active and 0 queued high priority tasks to complete.
    INFO  2017-07-17T10:42:58,314 (com.serotonin.m2m2.rt.maint.BackgroundProcessing.joinTermination:473) - BackgroundProcessing waiting 15 more seconds for 18 active and 0 queued high priority tasks to complete.
    INFO  2017-07-17T10:43:03,315 (com.serotonin.m2m2.rt.maint.BackgroundProcessing.joinTermination:473) - BackgroundProcessing waiting 10 more seconds for 18 active and 0 queued high priority tasks to complete.
    INFO  2017-07-17T10:43:08,316 (com.serotonin.m2m2.rt.maint.BackgroundProcessing.joinTermination:473) - BackgroundProcessing waiting 5 more seconds for 18 active and 0 queued high priority tasks to complete.
    INFO  2017-07-17T10:43:12,317 (com.serotonin.m2m2.rt.maint.BackgroundProcessing.joinTermination:481) - All high priority tasks exited gracefully.
    INFO  2017-07-17T10:43:12,318 (com.serotonin.m2m2.rt.maint.BackgroundProcessing.joinTermination:487) - All medium priority tasks exited gracefully.
    INFO  2017-07-17T10:43:12,318 (com.serotonin.m2m2.rt.maint.BackgroundProcessing.joinTermination:494) - 1 low priority tasks forcefully terminated.
    INFO  2017-07-17T10:43:12,763 (com.infiniteautomation.nosql.MangoNoSqlProxy.shutdown:115) - Terminating NoSQL Batch Write Manager.
    INFO  2017-07-17T10:43:12,764 (com.infiniteautomation.nosql.MangoNoSqlBatchWriteBehindManager.terminate:242) - Terminating NoSQL Point Value Mover.
    INFO  2017-07-17T10:43:12,764 (com.infiniteautomation.nosql.MangoNoSqlBatchWriteBehindManager.terminate:249) - Terminating 16 Batch Writer Tasks.
    INFO  2017-07-17T10:43:12,764 (com.infiniteautomation.nosql.MangoNoSqlBatchWriteBehindManager.terminate:249) - Terminating 16 Batch Writer Tasks.
    INFO  2017-07-17T10:43:12,767 (com.infiniteautomation.nosql.MangoNoSqlBatchWriteBehindManager.terminate:258) - 16 Batch Writer Tasks awaiting termination.
    INFO  2017-07-17T10:43:12,767 (com.infiniteautomation.nosql.MangoNoSqlBatchWriteBehindManager.terminate:258) - 16 Batch Writer Tasks awaiting termination.
    WARN  2017-07-17T10:43:22,767 (com.infiniteautomation.nosql.MangoNoSqlBatchWriteBehind.joinTermination:169) - Waiting for Batch Writer Task 0 to stop
    WARN  2017-07-17T10:43:22,767 (com.infiniteautomation.nosql.MangoNoSqlBatchWriteBehind.joinTermination:169) - Waiting for Batch Writer Task 0 to stop
    WARN  2017-07-17T10:43:32,768 (com.infiniteautomation.nosql.MangoNoSqlBatchWriteBehind.joinTermination:169) - Waiting for Batch Writer Task 0 to stop
    WARN  2017-07-17T10:43:32,768 (com.infiniteautomation.nosql.MangoNoSqlBatchWriteBehind.joinTermination:169) - Waiting for Batch Writer Task 0 to stop
    WARN  2017-07-17T10:43:42,769 (com.infiniteautomation.nosql.MangoNoSqlBatchWriteBehind.joinTermination:169) - Waiting for Batch Writer Task 0 to stop
    
    

    posted in User help read more
  • C
    craig

    Mango won't shut down cleanly:

    mango@mangoES2147:/opt/mango$sudo ./bin/ma.sh stop
    INFO  2017-07-14T13:57:47,780 (com.serotonin.m2m2.Lifecycle.terminate:361) - Mango Lifecycle terminating...
    mango@mangoES2147:/opt/mango$INFO  2017-07-14T13:57:47,822 (com.serotonin.m2m2.rt.DataSourceGroupTerminator.terminate:72) - Terminating 8 NORMAL priority data sources in 8 threads.
    INFO  2017-07-14T13:57:47,838 (com.serotonin.m2m2.rt.RuntimeManager.stopDataSourceShutdown:418) - Data source 'Cameras' stopped
    INFO  2017-07-14T13:57:47,843 (com.serotonin.m2m2.rt.RuntimeManager.stopDataSourceShutdown:418) - Data source 'Mango Performance' stopped
    INFO  2017-07-14T13:57:47,847 (com.serotonin.m2m2.rt.RuntimeManager.stopDataSourceShutdown:418) - Data source 'Unit 2' stopped
    INFO  2017-07-14T13:57:47,851 (com.serotonin.m2m2.rt.RuntimeManager.stopDataSourceShutdown:418) - Data source 'Unit 1' stopped
    INFO  2017-07-14T13:57:47,852 (com.serotonin.m2m2.rt.RuntimeManager.stopDataSourceShutdown:418) - Data source 'PLC0' stopped
    INFO  2017-07-14T13:57:47,855 (com.serotonin.m2m2.rt.RuntimeManager.stopDataSourceShutdown:418) - Data source 'PLCi' stopped
    INFO  2017-07-14T13:57:47,856 (com.serotonin.m2m2.rt.RuntimeManager.stopDataSourceShutdown:418) - Data source 'Mango Internal' stopped
    INFO  2017-07-14T13:57:47,880 (com.serotonin.m2m2.rt.RuntimeManager.stopDataSourceShutdown:418) - Data source 'MangoES System' stopped
    INFO  2017-07-14T13:57:47,926 (com.serotonin.m2m2.rt.DataSourceGroupTerminator.terminate:102) - Termination of 8 NORMAL priority data sources took 104ms
    INFO  2017-07-14T13:57:55,953 (com.serotonin.m2m2.rt.maint.BackgroundProcessing.joinTermination:462) - BackgroundProcessing waiting 60 more seconds for 1 active and 6 queued low priority tasks to complete.
    INFO  2017-07-14T13:58:00,954 (com.serotonin.m2m2.rt.maint.BackgroundProcessing.joinTermination:462) - BackgroundProcessing waiting 55 more seconds for 1 active and 6 queued low priority tasks to complete.
    INFO  2017-07-14T13:58:05,955 (com.serotonin.m2m2.rt.maint.BackgroundProcessing.joinTermination:462) - BackgroundProcessing waiting 50 more seconds for 1 active and 6 queued low priority tasks to complete.
    
    INFO  2017-07-14T13:58:20,958 (com.serotonin.m2m2.rt.maint.BackgroundProcessing.joinTermination:462) - BackgroundProcessing waiting 35 more seconds for 1 active and 6 queued low priority tasks to complete.
    INFO  2017-07-14T13:58:25,959 (com.serotonin.m2m2.rt.maint.BackgroundProcessing.joinTermination:462) - BackgroundProcessing waiting 30 more seconds for 1 active and 6 queued low priority tasks to complete.
    INFO  2017-07-14T13:58:30,960 (com.serotonin.m2m2.rt.maint.BackgroundProcessing.joinTermination:462) - BackgroundProcessing waiting 25 more seconds for 1 active and 6 queued low priority tasks to complete.
    INFO  2017-07-14T13:58:35,961 (com.serotonin.m2m2.rt.maint.BackgroundProcessing.joinTermination:462) - BackgroundProcessing waiting 20 more seconds for 1 active and 6 queued low priority tasks to complete.
    INFO  2017-07-14T13:58:40,962 (com.serotonin.m2m2.rt.maint.BackgroundProcessing.joinTermination:462) - BackgroundProcessing waiting 15 more seconds for 1 active and 6 queued low priority tasks to complete.
    INFO  2017-07-14T13:58:45,963 (com.serotonin.m2m2.rt.maint.BackgroundProcessing.joinTermination:462) - BackgroundProcessing waiting 10 more seconds for 1 active and 6 queued low priority tasks to complete.
    INFO  2017-07-14T13:58:50,964 (com.serotonin.m2m2.rt.maint.BackgroundProcessing.joinTermination:462) - BackgroundProcessing waiting 5 more seconds for 1 active and 6 queued low priority tasks to complete.
    INFO  2017-07-14T13:58:55,966 (com.serotonin.m2m2.rt.maint.BackgroundProcessing.joinTermination:473) - BackgroundProcessing waiting 60 more seconds for 24 active and 0 queued high priority tasks to complete.
    INFO  2017-07-14T13:59:00,967 (com.serotonin.m2m2.rt.maint.BackgroundProcessing.joinTermination:473) - BackgroundProcessing waiting 55 more seconds for 24 active and 0 queued high priority tasks to complete.
    INFO  2017-07-14T13:59:05,968 (com.serotonin.m2m2.rt.maint.BackgroundProcessing.joinTermination:473) - BackgroundProcessing waiting 50 more seconds for 24 active and 0 queued high priority tasks to complete.
    INFO  2017-07-14T13:59:10,969 (com.serotonin.m2m2.rt.maint.BackgroundProcessing.joinTermination:473) - BackgroundProcessing waiting 45 more seconds for 24 active and 0 queued high priority tasks to complete.
    INFO  2017-07-14T13:59:15,970 (com.serotonin.m2m2.rt.maint.BackgroundProcessing.joinTermination:473) - BackgroundProcessing waiting 40 more seconds for 24 active and 0 queued high priority tasks to complete.
    INFO  2017-07-14T13:59:20,971 (com.serotonin.m2m2.rt.maint.BackgroundProcessing.joinTermination:473) - BackgroundProcessing waiting 35 more seconds for 24 active and 0 queued high priority tasks to complete.
    INFO  2017-07-14T13:59:25,972 (com.serotonin.m2m2.rt.maint.BackgroundProcessing.joinTermination:473) - BackgroundProcessing waiting 30 more seconds for 24 active and 0 queued high priority tasks to complete.
    INFO  2017-07-14T13:59:30,973 (com.serotonin.m2m2.rt.maint.BackgroundProcessing.joinTermination:473) - BackgroundProcessing waiting 25 more seconds for 24 active and 0 queued high priority tasks to complete.
    INFO  2017-07-14T13:59:35,974 (com.serotonin.m2m2.rt.maint.BackgroundProcessing.joinTermination:473) - BackgroundProcessing waiting 20 more seconds for 24 active and 0 queued high priority tasks to complete.
    INFO  2017-07-14T13:59:40,975 (com.serotonin.m2m2.rt.maint.BackgroundProcessing.joinTermination:473) - BackgroundProcessing waiting 15 more seconds for 24 active and 0 queued high priority tasks to complete.
    INFO  2017-07-14T13:59:45,976 (com.serotonin.m2m2.rt.maint.BackgroundProcessing.joinTermination:473) - BackgroundProcessing waiting 10 more seconds for 24 active and 0 queued high priority tasks to complete.
    INFO  2017-07-14T13:59:50,977 (com.serotonin.m2m2.rt.maint.BackgroundProcessing.joinTermination:473) - BackgroundProcessing waiting 5 more seconds for 24 active and 0 queued high priority tasks to complete.
    INFO  2017-07-14T13:59:54,978 (com.serotonin.m2m2.rt.maint.BackgroundProcessing.joinTermination:481) - All high priority tasks exited gracefully.
    INFO  2017-07-14T13:59:54,978 (com.serotonin.m2m2.rt.maint.BackgroundProcessing.joinTermination:487) - All medium priority tasks exited gracefully.
    INFO  2017-07-14T13:59:54,979 (com.serotonin.m2m2.rt.maint.BackgroundProcessing.joinTermination:494) - 6 low priority tasks forcefully terminated.
    INFO  2017-07-14T13:59:56,248 (com.infiniteautomation.nosql.MangoNoSqlProxy.shutdown:115) - Terminating NoSQL Batch Write Manager.
    INFO  2017-07-14T13:59:56,248 (com.infiniteautomation.nosql.MangoNoSqlBatchWriteBehindManager.terminate:242) - Terminating NoSQL Point Value Mover.
    INFO  2017-07-14T13:59:56,249 (com.infiniteautomation.nosql.MangoNoSqlBatchWriteBehindManager.terminate:249) - Terminating 22 Batch Writer Tasks.
    INFO  2017-07-14T13:59:56,251 (com.infiniteautomation.nosql.MangoNoSqlBatchWriteBehindManager.terminate:258) - 22 Batch Writer Tasks awaiting termination.
    WARN  2017-07-14T14:00:06,252 (com.infiniteautomation.nosql.MangoNoSqlBatchWriteBehind.joinTermination:169) - Waiting for Batch Writer Task 0 to stop
    WARN  2017-07-14T14:00:16,253 (com.infiniteautomation.nosql.MangoNoSqlBatchWriteBehind.joinTermination:169) - Waiting for Batch Writer Task 0 to stop
    

    The last message repeats forever, so I have to kill the process.

    Upon restarting, there are some out-of-memory exceptions:

    INFO  2017-07-14T14:40:45,060 (com.infiniteautomation.tsdb.impl.CorruptionScanner.seriesComplete:1022) - Scan of /opt/mango/databases/mangoTSDB/5/191 completed in 00:00:00.04
    INFO  2017-07-14T14:40:45,623 (com.infiniteautomation.tsdb.impl.CorruptionScanner.seriesComplete:1008) - Completed folder 62 of 94
    INFO  2017-07-14T14:40:45,624 (com.infiniteautomation.tsdb.impl.CorruptionScanner.seriesComplete:1022) - Scan of /opt/mango/databases/mangoTSDB/91/50 completed in 00:00:15.137
    ERROR 2017-07-14T14:40:49,905 (com.infiniteautomation.tsdb.impl.CorruptionScanner.findBadTs:693) - Map failed
    java.io.IOException: Map failed
            at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:907) ~[?:1.8.0_33]
            at com.infiniteautomation.tsdb.impl.ChecksumMappedByteBufferInputStream.<init>(ChecksumMappedByteBufferInputStream.java:41) ~[ias-tsdb-1.3.2.jar:?]
            at com.infiniteautomation.tsdb.impl.CorruptionScanner.openDataIn(CorruptionScanner.java:791) ~[ias-tsdb-1.3.2.jar:?]
            at com.infiniteautomation.tsdb.impl.CorruptionScanner.findBadTs(CorruptionScanner.java:671) [ias-tsdb-1.3.2.jar:?]
            at com.infiniteautomation.tsdb.impl.CorruptionScanner.checkFile(CorruptionScanner.java:484) [ias-tsdb-1.3.2.jar:?]
            at com.infiniteautomation.tsdb.impl.CorruptionScanner.checkShard(CorruptionScanner.java:434) [ias-tsdb-1.3.2.jar:?]
            at com.infiniteautomation.tsdb.impl.CorruptionScanner.checkSeriesDir(CorruptionScanner.java:336) [ias-tsdb-1.3.2.jar:?]
            at com.infiniteautomation.tsdb.impl.CorruptionScanner$CorruptionCheckTask.run(CorruptionScanner.java:300) [ias-tsdb-1.3.2.jar:?]
            at java.lang.Thread.run(Thread.java:745) [?:1.8.0_33]
    Caused by: java.lang.OutOfMemoryError: Map failed
            at sun.nio.ch.FileChannelImpl.map0(Native Method) ~[?:1.8.0_33]
            at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:904) ~[?:1.8.0_33]
            ... 8 more
    ERROR 2017-07-14T14:40:49,929 (com.infiniteautomation.tsdb.impl.CorruptionScanner.findCorruption:531) - Map failed
    java.io.IOException: Map failed
            at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:907) ~[?:1.8.0_33]
            at com.infiniteautomation.tsdb.impl.ChecksumMappedByteBufferInputStream.<init>(ChecksumMappedByteBufferInputStream.java:41) ~[ias-tsdb-1.3.2.jar:?]
            at com.infiniteautomation.tsdb.impl.CorruptionScanner.openDataIn(CorruptionScanner.java:791) ~[ias-tsdb-1.3.2.jar:?]
            at com.infiniteautomation.tsdb.impl.CorruptionScanner.findCorruption(CorruptionScanner.java:498) [ias-tsdb-1.3.2.jar:?]
            at com.infiniteautomation.tsdb.impl.CorruptionScanner.checkFile(CorruptionScanner.java:467) [ias-tsdb-1.3.2.jar:?]
            at com.infiniteautomation.tsdb.impl.CorruptionScanner.checkShard(CorruptionScanner.java:434) [ias-tsdb-1.3.2.jar:?]
            at com.infiniteautomation.tsdb.impl.CorruptionScanner.checkSeriesDir(CorruptionScanner.java:336) [ias-tsdb-1.3.2.jar:?]
            at com.infiniteautomation.tsdb.impl.CorruptionScanner$CorruptionCheckTask.run(CorruptionScanner.java:300) [ias-tsdb-1.3.2.jar:?]
            at java.lang.Thread.run(Thread.java:745) [?:1.8.0_33]
    Caused by: java.lang.OutOfMemoryError: Map failed
            at sun.nio.ch.FileChannelImpl.map0(Native Method) ~[?:1.8.0_33]
            at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:904) ~[?:1.8.0_33]
            ... 8 more
    INFO  2017-07-14T14:40:51,325 (com.infiniteautomation.tsdb.impl.CorruptionScanner.seriesComplete:1008) - Completed folder 63 of 94
    INFO  2017-07-14T14:40:51,327 (com.infiniteautomation.tsdb.impl.CorruptionScanner.seriesComplete:1022) - Scan of /opt/mango/databases/mangoTSDB/18/93 completed in 00:00:14.739
    INFO  2017-07-14T14:40:51,362 (com.infiniteautomation.tsdb.impl.CorruptionScanner.seriesComplete:1008) - Completed folder 64 of 94
    INFO  2017-07-14T14:40:51,363 (com.infiniteautomation.tsdb.impl.CorruptionScanner.seriesComplete:1022) - Scan of /opt/mango/databases/mangoTSDB/13/199 completed in 00:00:00.34
    INFO  2017-07-14T14:40:51,367 (com.infiniteautomation.tsdb.impl.CorruptionScanner.seriesComplete:1008) - Completed folder 65 of 9
    

    posted in User help read more
  • C
    craig

    It seems for some reason there is some corruption in the Mango NoSQL database, as there are log entries about repairing the corruption. From that time on, none of the scheduled (hourly) reports complete successfully, so the queue fills up.

    WARN  2017-07-13T06:00:50,656 (com.serotonin.m2m2.util.timeout.TaskRejectionHandler.rejectedTask:77) - Rejected task: Interval logging: DP_452922 because Task Queue Full 
    WARN  2017-07-13T06:01:11,041 (com.infiniteautomation.tsdb.impl.IasTsdbImpl.repairShard:1214) - Corruption detected in series 47 shard 698, repairing now. 
    INFO  2017-07-13T06:01:11,869 (com.infiniteautomation.tsdb.impl.IasTsdbImpl.repairShard:1237) - Corruption repair report available at: /opt/mango/logs/tsdb-series-47-shard-698-scan-report_2017-07-13_06-01-11.log 
    WARN  2017-07-13T18:01:00,199 (com.serotonin.m2m2.util.timeout.TaskRejectionHandler.rejectedTask:77) - Rejected task: Generating report: Hourly Compliance because Task Queue Full 
    WARN  2017-07-13T19:01:00,152 (com.serotonin.m2m2.util.timeout.TaskRejectionHandler.rejectedTask:77) - Rejected task: Generating report: Hourly Compliance because Task Queue Full 
    

    One thing that showed up in the console, though not in the log, is this NullPointerException; it appears to have happened before the database corruption:
    0_1500050991765_CTN mango NPE 2017-07-14.png

    Here is the contents of the corruption scan reports:

    mango@mangoES2147:/opt/mango/logs$ cat  tsdb-series-47-shard-698-scan-report_2017-07-13_06-01-11.log
    -- Shard Scan Report Start: 07/13/2017 06:01:11.868 --
    Callback was disordered by corruption: false
    
    **********  SERIES 47 **********
    Path: /opt/mango/databases/mangoTSDB/67/47
    Runtime: 00:00:00.825
    Shard Repair: None
    mango@mangoES2147:/opt/mango/logs$cat  tsdb-series-178-shard-698-scan-report_2017-07-11_17-01-10.log
    -- Shard Scan Report Start: 07/11/2017 17:01:11.897 --
    
    
    **********  SERIES 178 **********
    Path: /opt/mango/databases/mangoTSDB/50/178
    Runtime: 00:00:01.03
    Shard Repair: None
    

    Now, in the Excel Reports tab, there are a number of reports running and a number queued.

    The forum won't let me upload the ma.log or threads.json, so they are here:
    https://pastebin.com/fQ92iAGm
    https://pastebin.com/C4HDzLZv

    posted in User help read more
  • C
    craig

    Any fanless hardware on the horizon?

    posted in MangoES Hardware read more
  • C
    craig

    If you want to work strictly in Mango, you could read the 4-byte value as an unsigned long integer, then use a meta point to mask off each byte and try reassembling different arrangements of bytes, converting to float, until you find the arrangement that gives the correct value.

    Here is an excerpt from another scada system's help file:
    Control byte order for floating point values (the Modbus driver supports floating point values).
    Some systems expect to use a different byte order for their floating point data.
    Allowable Values: 0 to 3, where:
    0 - Byte order = 1 0 3 2
    1 - Byte order = 3 2 1 0
    2 - Byte order = 0 1 2 3
    3 - Byte order = 2 3 0 1
    Default Value: 0

    Mango supports options 0 and 1, I think, so it is also possible that your system is using option 2 or 3, if you are sure you aren't off by one and neither "4 byte float" nor "4 byte float swapped" worked.
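Outside of Mango, the four byte orders from that excerpt are easy to experiment with. Here is a minimal Python sketch (my own helper names and test value, not from any SCADA product) that reorders the four big-endian bytes of an IEEE 754 float according to each option; conveniently, all four permutations are their own inverse, so the same reordering both scrambles and unscrambles:

```python
import struct

# Byte-order options from the help-file excerpt, as index permutations
# over the big-endian bytes b0 b1 b2 b3 of an IEEE 754 float.
BYTE_ORDERS = {
    0: (1, 0, 3, 2),  # byte-swapped within each 16-bit word
    1: (3, 2, 1, 0),  # full little-endian
    2: (0, 1, 2, 3),  # plain big-endian
    3: (2, 3, 0, 1),  # word-swapped big-endian
}

def decode_float(raw: bytes, option: int) -> float:
    """Undo the device's byte order, then decode as a big-endian float.
    Each permutation above is an involution, so applying it again
    restores plain big-endian order."""
    reordered = bytes(raw[i] for i in BYTE_ORDERS[option])
    return struct.unpack(">f", reordered)[0]

# Demo: scramble a known value per each option, then recover it.
raw_be = struct.pack(">f", 42.5)
for opt, perm in BYTE_ORDERS.items():
    scrambled = bytes(raw_be[i] for i in perm)
    assert decode_float(scrambled, opt) == 42.5
```

In practice you would try decoding the device's raw bytes with each of the four options and keep the one that produces a plausible value.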

    posted in Hardware read more
  • C
    craig

    A byte is 8 bits, so the flow rate is a 4-byte float and the total is an 8-byte float. Mango has two settings for each of these: "4 byte float" / "4 byte float swapped", and "8 byte float" / "8 byte float swapped". Try both and see which, if any, works.

    Make sure you are not off by one with the register addresses: find a simple register of INT type and make sure you can read it correctly, and that you are not getting the register before or after it.

    If neither works, I would use the program QModMaster to connect to the slave and see what the exact bytes are at each address; then you can try different arrangements of bytes as floating point until you find the one that works.
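To show what "swapped" typically means here, this is a hedged Python sketch (my own helper, not Mango or QModMaster code) that combines raw 16-bit Modbus registers into a 4-byte float or 8-byte double, with and without reversing the register (word) order. I'm assuming "swapped" means the 16-bit word order is reversed; some devices byte-swap within each register instead:

```python
import struct

def registers_to_float(regs, word_swapped=False):
    """Combine big-endian 16-bit Modbus registers into an IEEE 754 value.

    regs: two registers for a 4-byte float, four for an 8-byte double.
    word_swapped: reverse the register order before decoding (assumed
    meaning of 'swapped'; check your device's manual).
    """
    if word_swapped:
        regs = list(reversed(regs))
    raw = b"".join(struct.pack(">H", r) for r in regs)
    fmt = {4: ">f", 8: ">d"}[len(raw)]
    return struct.unpack(fmt, raw)[0]

# A flow-rate float of 42.5 split into two registers, straight word order:
regs = list(struct.unpack(">HH", struct.pack(">f", 42.5)))
assert registers_to_float(regs) == 42.5
# The same value sent word-swapped decodes with word_swapped=True:
assert registers_to_float(list(reversed(regs)), word_swapped=True) == 42.5
```

Once you can see the raw register values in QModMaster, feeding them through a helper like this makes it quick to test each arrangement.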

    posted in Hardware read more
  • C
    craig

    Check out the Modbus data source in the old Mango source code for an example of how to use the Modbus4J library:

    https://sourceforge.net/projects/scadabr/files/Software/mango-src/

    posted in Modbus4J general discussion read more
  • C
    craig

    As phildunlap found, it looks like a problem with the USB serial port and Linux or RPi hardware, not a Mango problem. Try a powered USB hub?

    posted in User help read more
  • C
    craig

    It isn't a point of major concern, but it would be better for us if the events raised in Mango related only to the process, so that an operator can deal with them. Timeouts, reconnects, retries, etc. should go in a log file where they can be reviewed to diagnose faulty equipment, improve settings, and so on; for us they don't belong in the same interface as events generated from the process, since they are for a different audience.

    Shouldn't "Rejected Work Item Log Level = None" make it so DataPointRT rejections don't show up in Mango?

    With Log Level = None, do the events still get recorded in ma.log?

    Thanks for the help.

    posted in User help read more
  • C
    craig

    I set defaultTaskQueueSize=5 and the Rejected Work Item log level to None, and I still get two DataPointRT rejections every night around 3 AM.

    Anything else I can do?

    posted in User help read more