• terrypacker

    @HSAcontrols I took a look at your log files and, from what I could tell, there is no indication of the source of the problem. That is what I expected, though. I believe your problem (as you identified) is with one or more of the Meta point scripts: their executions are backing up for some reason, and eventually this consumes all of your memory.

    The queues were predominantly point events, is it normal for a meta point to generate a queue like this when executing a script?

    It is not normal for there to be this many Point Events in the queue. Whenever a data point's value changes it will notify any interested parties (listeners) of the change via the Point Events you mention. So in your situation the Meta points are the listeners.

    I suspect that the chain of listeners is being held up by a blocking script, or perhaps just a very long-running one. I would start by checking which of the Point Event queues are actually executing. It appears that many of them have the same number of executions while some have fewer. You may be able to find a deadlock-type situation by seeing which ones are 'stuck' and not executing.

    posted in Development general discussion
  • terrypacker

    @Turbo

    Here is some general information on what you are seeing; I have been tracking these known problems for some time, trying to find a solution.

    1. Excel Reports Module can use up all the JVM memory.

    This is mostly out of our control and is a known problem with the Apache POI library that is the backbone of that module. They have recently released a more performant version that will eventually be part of our Excel Reports module. We have also reworked our REST API using a new algorithm to manage memory, which will in the not-too-distant future be ported over to the Excel Reports module and should solve all of the problems outside of the POI library's limitations.

    2. Open Files - MangoNoSQL
      The time series database will open many files and leave them open for some time. They do eventually get closed, but for performance reasons we don't close them immediately after use. As @CraigWeb mentioned, better tuning can help during intense syncs. As for the problem being worse in 3.6: I was not aware of that, and I can confirm that the logic for opening and closing the files has not changed. It is likely that your system is just getting larger and you are only now noticing that this happens. There is a whole slew of settings for very large systems in the env.properties file with the prefix db.nosql.*. I don't have time to go into them all here, but I would suggest experimenting with them and posting questions about specific properties later.

    As for the JVM general memory use:

    We are starting to use AdoptOpenJDK 13 and are finding that it is much better at memory management. As for the -Xms and -Xmx settings: depending on what you are trying to achieve, they may not be ideal. Setting both to the same value gives you a fully allocated JVM at startup, which is likely ideal on a machine that only runs Mango. But on a system with other processes that need memory at other times, the JVM cannot release memory with that configuration. We have found that on a very large machine AdoptOpenJDK is quite good at expanding and contracting the JVM's memory use to match Mango's needs, but we are still evaluating the best configuration for it at this point.
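    To illustrate the trade-off described above, here is a hedged sketch of the two heap configurations. The heap sizes are placeholders, not recommendations, and in practice these flags would be set in Mango's startup script rather than typed by hand.

```shell
# Machine dedicated to Mango: pre-allocate the entire heap at startup.
# java -Xms8g -Xmx8g ...

# Shared machine: start small, let the heap grow up to the cap, and let
# the JVM return memory to the OS when demand drops.
# java -Xms1g -Xmx8g ...
```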

    posted in User help
  • terrypacker

    @petermcs please run jmap -histo <pid> to get the object count.

    posted in User help
  • terrypacker

    @petermcs the best way to analyze OOM problems is to take a heap dump and load it into an analysis tool such as JVisualVM or MAT. However, that file will generally be gigabytes in size on a system like yours, so before we get into that I would suggest taking some samples of the Java process while it is running. From those we may be able to see what is eating up all the memory.

    By using the jmap tool packaged with the JDK you should be able to get the necessary info:

    https://docs.oracle.com/javase/8/docs/technotes/guides/troubleshoot/tooldescr014.html

    Step 1. Find the PID of Mango while it is running. This can be done with something like ps aux | grep java, or by looking in the ma.pid file in MA_HOME/bin.

    Step 2. Run jmap -histo <mango pid> every so often until Mango crashes, writing the output to a file each time so you can refer to it later.

    By looking at the classes that are using the most memory you should be able to get an idea of what the problem is. Please post a summary of your findings here and I can try to see what the problem is.

    NOTE: jmap likely won't be on your PATH, so you may need to execute it directly from the JDK installation's bin directory.
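    The steps above can be sketched as a shell session. The pgrep command and the five-minute sampling interval are my assumptions, and the histogram file at the end is a made-up sample included only to show the shape of jmap -histo output (rank, instance count, byte count, class name):

```shell
# Step 1: find Mango's pid (either approach works)
# MANGO_PID=$(pgrep -f java | head -n 1)
# MANGO_PID=$(cat "$MA_HOME/bin/ma.pid")

# Step 2: snapshot the histogram every 5 minutes until the process exits
# while kill -0 "$MANGO_PID" 2>/dev/null; do
#     jmap -histo "$MANGO_PID" >> jmap-histo.log
#     sleep 300
# done

# Afterwards, pull the top memory consumers out of a snapshot.
# This sample file is fabricated; real jmap -histo rows look like it.
cat > histo-sample.txt <<'EOF'
 num     #instances         #bytes  class name
----------------------------------------------
   1:        500000       64000000  [B
   2:        300000       14400000  java.lang.String
   3:         10000         960000  java.util.HashMap
EOF

# Skip the two header lines, print "#bytes class-name", biggest first
awk 'NR > 2 { print $3, $4 }' histo-sample.txt | sort -rn | head -n 2
```

    Sorting on the #bytes column like this quickly surfaces the top consumers; for instance, byte arrays ([B) often dominate when buffers or strings are accumulating.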

    posted in User help
  • terrypacker

    You are importing a non-CSV-formatted file. It looks like you are trying to import the MultiColumnCsvImporter.java source file itself.

    The 'Path to data file or folder' setting must point to a directory containing the CSV files to import, not the source code you use to build the importers.

    posted in User help
  • terrypacker

    @etantonio it looks like you have a syntax error. The other DAO classes have a static instance member for accessing their methods, but this DAO is different in Mango 2.8.

    The method you are using is not static, so you first need to get a reference to the created DAO. I believe there is a DaoRegistry class you may be able to use. Just be careful and make sure you understand why some methods are static and others are not.

    posted in Mango Automation general Discussion
  • terrypacker

    Stefano,

    There is no tool built into the NoSQL module to export the data to an SQL database. Depending on your level of technical skill, you could extract the data via the REST API and then import it into an SQL database using one of many 3rd-party tools.

    If we are talking about converting data from a Mango instance running on NoSQL to a Mango instance running on MySQL:

    1. You could export and import the data using the export/import as CSV functionality.
    2. You could publish all the data to a new instance running on MySQL using the Persistent TCP module.
    3. You could also write Meta data source or Scripting data source scripts to do this, but again this is quite technical so I won't go into it here.

    posted in User help
  • terrypacker

    Hi Rebeca,

    I would suggest playing with some of the example classes included with the data source to try to understand how it works. The data will be saved into data points on the data source; these either need to be created first, or you can check the Create missing points checkbox.

    If you look in the MA_HOME/web/modules/dataFile/web/CompilingGrounds/CSV directory you can see a few example classes and data files you can test with.

    posted in User help
  • terrypacker

    @mihairosu

    I assume you upgraded to Mango 3.6 and the script then started breaking, though you did not say so. The reason it no longer works is that in Mango 3.6 we wrap all Meta scripts in a function block and then execute that function. So your script actually gets translated (internally in Mango) into this code:

    function __scriptExecutor__() {
    /*
    //Script by Phil Dunlap to automatically generate lost history
    if( my.time + 60000 < source.time ) { //We're at least a minute after
      var metaEditDwr = new com.serotonin.m2m2.meta.MetaEditDwr();
      metaEditDwr.generateMetaPointHistory(
          my.getDataPointWrapper().getId(), my.time+1, CONTEXT.getRuntime(), false);
      //Arguments are, dataPointId, long from, long to, boolean deleteExistingData
      //my.time+1 because first argument is inclusive, and we have value there
    }
    //Your regular script here.*/
    
    
    return CwConsumed.value-CwConsumed.ago(DAY, 1); //Subtracts this moment's gallons used minus one day ago } __scriptExecutor__();
    

    As you can see, the closing } of the wrapping function and the call that executes it, __scriptExecutor__();, are appended to the last line of your script.

    Since your last line ends in a single-line comment, the end of the wrapping function and the command that executes it are commented out, which causes the problem. The reason for wrapping scripts this way is that it makes validation and testing messages easier to relate to the script being tested, since the JavaScript engine returns the line number and column number of a failure. I will take a look at making this more robust in Mango 3.7.0.

    So just move the comment off the end of the last line and place it on its own line above the return statement, and the script will work.

    posted in Scripting general Discussion
  • terrypacker

    @jflores13 said in AI, Machine Learning, Neural Networks, TensorFlow:

    Interesting, it's pretty cool that you've tried it. Is it possible that something cloud-based, maybe a combo similar to lambda functions + Amazon API Gateway, could make the ML model more available and reduce the need for Python or R support in the same environment?

    Something to look into in the future, for sure, but that approach would have its own problems to overcome, for example getting the data from Mango to the model.

    posted in Development general discussion