Navision Database - Optimization - ?

Kaetchen Member Posts: 106
Hi,

From time to time we run a procedure called Optimize on our tables.

Optimize
To optimize the location of data in the table, that is, to compress it, removing empty space from the data


The reason for this exercise is to increase the performance of Navision.

Could anybody explain the architecture of this C/SIDE database?
If this is a dynamic database, why would I need to compress?

I know relational databases from the AS/400 (now the i5 server), and C/SIDE looks to me pretty much like the file-system storage on the IBM S/36, with physical and logical files that needed to be re-indexed to maintain acceptable access performance.

I would be grateful to know more about this subject. So thanks for any reply.

Kaetchen

Comments

  • kine Member Posts: 12,562
    Data are stored in B-trees. These trees need to be optimized from time to time (they have a tendency to degenerate into a linear structure) so that you have a balanced tree with a minimum number of levels. The data are stored in fixed-size units in the file, and Optimize will "defragment" these units. Optimization speeds up reading, BUT writing will be slower (this is hard to describe in a simple way... you need to understand what a B-tree is and how data are written into it).
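    To illustrate the "linear structure" tendency, here is a toy sketch (Python, and a plain unbalanced binary search tree rather than the real B-tree C/SIDE uses - B-trees stay balanced but fragment instead; this only shows why insertion order matters to tree shape):

```python
# Toy demonstration: an unbalanced binary search tree degenerates into
# a linear chain when keys arrive in sorted order, while random order
# keeps it shallow. (Intuition only - not Navision's actual structure.)
import random

class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    """Iterative BST insert (no rebalancing)."""
    node = Node(key)
    if root is None:
        return node
    cur = root
    while True:
        if key < cur.key:
            if cur.left is None:
                cur.left = node
                return root
            cur = cur.left
        else:
            if cur.right is None:
                cur.right = node
                return root
            cur = cur.right

def height(root):
    """Iterative depth-first height, safe for degenerate trees."""
    best, stack = 0, ([(root, 1)] if root else [])
    while stack:
        node, d = stack.pop()
        best = max(best, d)
        if node.left:
            stack.append((node.left, d + 1))
        if node.right:
            stack.append((node.right, d + 1))
    return best

chain = None
for k in range(1000):            # keys arrive already sorted
    chain = insert(chain, k)

keys = list(range(1000))
random.shuffle(keys)
bushy = None
for k in keys:                   # keys arrive in random order
    bushy = insert(bushy, k)

print(height(chain))             # 1000 - effectively a linked list
print(height(bushy))             # far smaller, O(log n) on average
```

    A lookup in the shallow tree takes a handful of comparisons; in the degenerate one it can touch every node, which is the kind of read slowdown a rebalanced, defragmented index avoids.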

    Search MIBUSO for "optimize"; there are more posts about that, and maybe more details somewhere...

    And optimize on SQL is another thing...
    Kamil Sacek
    MVP - Dynamics NAV
    My BLOG
    NAVERTICA a.s.
  • Kaetchen Member Posts: 106
    As such, should it not be done at the physical storage level rather than at the logical or base-relation level?

    I think this C/SIDE database isn't a real RDB in terms of physical data independence, and I really hope that changing to SQL Server will improve access performance - or will it? Do you have experience with Navision on SQL Server?
    I appreciate your professional input, as I find it hard to get an inside look at this application from our previous Navision partner. It was badly set up, and add-ons we paid for don't work.

    BTW, I couldn't find many postings on this issue.

    Thanks ;-)
    Kaetchen
  • Savatage Member Posts: 7,142
    I'll throw in that I optimize our big files almost every day - it keeps our database in check (I'm using Native). I find the writing might be slower, but it's so insignificant that you probably wouldn't notice. If you're having performance issues, I would post your server specs so we know what you have.
  • kine Member Posts: 12,562
    Kaetchen wrote:
    I think this C/SIDE database isn't a real RDB in terms of physical data independence, and I really hope that changing to SQL Server will improve access performance - or will it? Do you have experience with Navision on SQL Server?

    1) The Navision DB is optimized for Navision's style of work...
    2) The Navision DB is very good in performance...
    3) The Navision DB is limited to one CPU and a limited memory size, which is why MS SQL is better for bigger solutions.
    4) SQL is not a cure for performance problems - you will have another problem: locking... :-)
    5) I recommend using SQL for bigger installations (for example, 40 or more users - but do not take this as a rule...) or when you need to connect external applications to the DB.
    Kamil Sacek
    MVP - Dynamics NAV
    My BLOG
    NAVERTICA a.s.
  • Kaetchen Member Posts: 106
    Here are some issues with Navision from "real life":

    1. MS Query
    Working with MS Query is impossible when you have more than 500,000 records in Table 17 and you use more than two criteria and/or join tables.
    With the C/SIDE database you can't use outer joins.
    The query times out as records increase, i.e. designed queries become obsolete.

    2. Crystal Reports
    After the date field changed from dd/mm/yy to dd/mm/yy hh/ss, all the Crystal reports failed to run.

    We have now reached 1,400,000 records in Table 17, at a size of 655,944 KB.

    There are maintenance tools to decrease the size, such as Date Compression, but I don't know whether this will improve the situation.

    Any recommendations?
  • Savatage Member Posts: 7,142
    My Table 17 is at 2,952,256 records - no problems with Crystal. I do keep date fields as date fields and don't combine date & time.
  • Kaetchen Member Posts: 106
    I haven't changed anything - I just tried to run the Crystal reports and realised they didn't work. When I looked in the Navision table, I noticed that the date field had changed to a new field description - obviously an upgrade done by our Navision partner.
    However, here is the info from our IT guys:

    dual 3.06 Xeon, 2 GB RAM (Navision Live is allocated the maximum RAM, which is 1 GB), 4 x 18 GB SCSI disks in RAID 1+0, gigabit Ethernet on a gigabit switch.

    Server specs are not an issue. I have noticed Navision is hard on the network; it's like it pulls all the data across to the client.
  • Savatage Member Posts: 7,142
    Maybe you can give more details about "Crystal didn't work".

    Did it time out?
    Did it crash?
    Did it give you any error messages?

    Just out of curiosity:
    when you go to File->Database->Information->Tables,
    what is the optimization % of Table 17?

    I'm assuming you think it's because of the size of the table that you are having problems, correct?
  • Kaetchen Member Posts: 106
    I stopped using Crystal a year ago and changed to MS Query.

    From memory, it produced an error message which definitely had to do with the change of the date field.

    We're going to SQL Reporting Services, and I won't spend more time on Crystal. It was just an example.

    We're basically maintaining two different accounting systems:

    1. Navision for processing
    2. MS Query and Excel for reports

    Not very efficient, but report writing in Navision needs programming expertise and training...

    I've worked with many different accounting systems - customized and standard - but still haven't figured out the global concept of Navision.

    Are there any references I could use?

    Optimization is 93.9% - doesn't sound too bad, or does it?
  • bbrown Member Posts: 3,268
    If Crystal is having problems working with your Navision database, wouldn't you expect the same issues with Reporting Services, since it will be accessing the database in the same manner (ODBC)?
    There are no bugs - only undocumented features.
  • Savatage Member Posts: 7,142
    SQL is a different animal, so I can't help - but I'm at 95.6% in case you want to compare. I really don't think it would matter.
  • Kaetchen Member Posts: 106
    You mentioned that you optimise your database or some tables daily.

    It takes up to 45 min, and you need exclusive use of the system. How do you manage that?

    Are you doing a backup before?
    Is it possible to block users from logging in for this purpose?
  • Savatage Member Posts: 7,142
    Kaetchen wrote:
    You mentioned that you optimise your database or some tables daily.

    It takes up to 45 min, and you need exclusive use of the system. How do you manage that?

    Are you doing a backup before?
    Is it possible to block users from logging in for this purpose?

    I wish it were a more complicated answer - but it's my company & I'm simply the last one in the building. My backup usually takes about 25 mins with a 17 GB database. I just copy the complete database (as is) to another PC in the building on the same network, which makes things much faster. If for some reason I need to restore, it's as simple as copying it back - saves tons of time compared to restoring. The two main tables are Item Ledger Entry (which is huge - it can go from 95% to 75% within a week if I don't optimize) and Value Entry. These are very big tables & I like to keep them in check.

    Do you use HotCopy?
    There are others too - for example, ExpandIt has one:
    http://www.expandit.com/templates/expan ... log_id=616
  • Kaetchen Member Posts: 106
    I have yet to do this for the first time, but I wonder whether you can run a timed batch at night to back up and optimize in a subroutine.

    If this is an ongoing maintenance issue, I would like to do it automatically.
    What else are computer systems for???
  • kuya Member Posts: 6
    Hi all,

    Currently my client has 35 GB and 60 concurrent users on Navision SP1 and SQL 2000. It went live in January 2006, and in just 5 months the data has blown up. Their machine is an ML370, dual processor 3.2 GHz, RAID 0+1, 10k rpm SCSI; transaction bandwidth is 600 MB per day.

    Optimisation takes us 4 hours, running weekly.
    A full backup takes 1 hour; we have not tested incremental backups yet.
    Restore and re-indexing took 12 hours for 35 GB.
    Adjust Cost for big-transaction items with serial numbers can take 9 hours, and some items go beyond 12 hrs; we do this weekly and are cutting it down to daily.

    Has anyone here ever used MSA network storage, or perhaps more than a dual processor, or maintained a database larger than 100 GB? If so, please kindly share with me how to make the most of our time against a jumbo database.

    Thanks
  • kine Member Posts: 12,562
    1) Size of RAM
    2) You need to optimize the SIFT tables and indexes
    3) 8 GB RAM is optimal for you, and you can use 4 CPUs
    Kamil Sacek
    MVP - Dynamics NAV
    My BLOG
    NAVERTICA a.s.
  • Kaetchen Member Posts: 106
    Is it not possible to run this as a batch job?

    Backup
    Optimisation
  • johnson_alonso Member Posts: 690
    Kine wrote:
    1) Size of RAM
    2) You need to optimize the SIFT tables and indexes
    3) 8 GB RAM is optimal for you, and you can use 4 CPUs

    Have you considered the costing method?
    Will it have an influence on it?


    Rgds,
    JOhnson
  • Savatage Member Posts: 7,142
    Kaetchen wrote:
    Is it not possible to run this as a batch job?

    Backup
    Optimisation

    http://www.mibuso.com/forum/viewtopic.php?t=4934
  • kuya Member Posts: 6
    Here come the odds. The PDF about MBS Navision went out on how big the server needs to be in order to handle how large the transactions are. We just did a simple backup/restore procedure from the NAV application with a 25 GB data size, and the results were:

    DL380 G4
    3.2 GHz XEON dual processor
    4 GB RAM, 6 x hard drives totaling 300 GB, 15k rpm
    RAID 0+1

    full backup: 1 hr, full restore: 12 hrs

    DL580 G4
    3.2 GHz DUAL CORE, 4 processors
    4 GB RAM, 4 x SAS hard drives, 10k rpm
    full backup: 1 hr 45 min, full restore: >24 hrs *I cancelled it; it was just abnormal*

    RX2600 Itanium 64-bit
    MSA1000 network storage with fibre channel
    10 x 72.8 GB hard drives, 15k rpm (7 x data - 3 x log)

    I just did a full restore - still abnormal, the restore took >24 hrs.

    With a large database, what I thought would really be useful was having many hard drives to maintain it, but the peculiar thing was that Navision performance was even slower. Sure I was confused: using a small machine like the DL380 dual processor was even faster compared to the DL580 with 4 DUAL CORE processors!!!! While watching the performance monitor, the restore activity was very high on avg. write queue length, while memory and processor time were really stable. I have not yet dared to try this on the production DB; I am afraid the performance will be even slower for activities like Adjust Cost and optimisation.

    Has anyone ever tried to compare, or perhaps has a machine as big as 4 processors with Navision v4 SP1 on it? How does it actually perform?

    regards
    Andre
  • kine Member Posts: 12,562
    Please do not use Hyperthreading if possible (I do not know if you have it enabled or disabled); it causes problems if a single application is the main running process. Have you watched the CPU performance during the restore? Was it at 100% or less? If less, the HDDs are the bottleneck. Which filesystem are you using on the disks? NTFS? NTFS is too slow... FAT32 is faster...
    Kamil Sacek
    MVP - Dynamics NAV
    My BLOG
    NAVERTICA a.s.
  • Savatage Member Posts: 7,142
    Here's a white paper I've had for a while -
    maybe someone can find it useful for SQL:
    Microsoft SQL Server

    Processor

    There are three factors to consider for processing:

    - Number of CPUs
    - Size of Second Level Cache (SLC)
    - Clock rate

    Number of CPUs

    The number of CPUs is by far the most important of the aforementioned parameters. The Navision Attain Server requires only a minimum of CPU power, and it is possible to use only one CPU. This changes with the Microsoft SQL Server Option for Navision Attain, as the server can now theoretically be scaled up to 32 CPUs. The processors should not be loaded more than 80% on average during peak times. If the load is higher, the system is most likely CPU-limited.


    DB Size/CCU    50    100    150    200    250

    12 GB           2     3     3-4     4      4
    24 GB           2     3      4      4      4
    36 GB           3     4      4      6     6-8

    Large databases can have an impact on the CPU load if extensive reports are run, but basically the size of the database has no impact on the CPU load, as the table above shows.

    Size of Second Level Cache (SLC)

    The amount of SLC has an impact on performance, because of Microsoft SQL Server’s enhanced memory utilization. All frequently accessed tables are stored in the memory for fast retrieval. The SLC stores the information that is accessed the most and this access is dependent on the speed of the CPU. Because of the high price of SLC, the selection of a CPU with more SLC should only be considered when it is not possible to add more CPUs.

    Clock Rate

    The clock rate of the processors is least important, since an increase in the clock rate by 10% will effectively increase performance by only 2-3%. Acceptable performance increases can be achieved by using the latest model processors.

    Memory

    Microsoft SQL Server is very good at utilizing the available memory. All tables are loaded into memory upon first use. When tables are present in memory, all read and write transactions are committed to memory without having to use the slower disk system. Minimizing communication with the disks is one of the primary means of obtaining better performance. On checkpoints, Microsoft SQL Server flushes any changes to the tables in memory to the disks.

    The minimum amount of memory is 512 MB. However, the maximum amount of memory that can be afforded should be configured because it increases performance. Reducing the amount of memory will increase the load on the disk system, which will slow down the overall performance.

    DB Size/CCU    50      100     150     200     250

    12 GB          1 GB    1 GB    2 GB    2 GB    3 GB
    24 GB          1.5 GB  2 GB    3 GB    4 GB    4 GB
    36 GB          2 GB    3 GB    4 GB    4 GB    6 GB

    The Navision Attain cache does not exist on the Microsoft SQL Server Option for Navision Attain. All the cache is controlled by Microsoft SQL Server.

    Storage System

    To ensure acceptable performance at peak times, the storage system should be sized for the maximum workload created by the Microsoft SQL Server Option for Navision Attain. A sufficient amount of memory should be configured before the storage system is designed. If the memory is insufficient, the excessive workload on the storage system will create a demand for a faster and more expensive storage system than needed. The type of RAID protection to use for each file type (database, transaction log, tempdb, etc.) needs to be taken into consideration. Below is a chart with our suggestions for which RAID level to use for each file type used with Microsoft SQL.

    File Types              RAID 1    RAID 0+1

    OS + SQL Server Files     XX
    Database Files                       XX
    Log Files                 XX         XX
    TempDB                    XX         XX

    Network

    The Navision Attain client creates a lot of network traffic when it runs on Microsoft SQL Server. The network's limiting factor is the number of packets that can be transmitted per second. A properly segmented 100-Mbit network is suggested. If the Microsoft SQL Server Option for Navision Attain is deployed in a 10-Mbit network, server-based computing (Terminal Server or Citrix) should be used to avoid a bottleneck in the network.
  • Savatage Member Posts: 7,142
    Here's a white paper I've had for a while -
    maybe someone can find it useful for NATIVE:
    Navision Server

    RAM

    A rule of thumb to follow - though not an absolute rule - is 1.5 MB of RAM per user + 10 MB of RAM per every 500 MB of database space + 32 MB of RAM for the operating system.
    Here is an example of what might be used for a 60 user system with 4 GB database:
    RAM = (1.5 MB/User * 60 Users) + (10 MB/ 500 MB * 1000 MB/1GB * 4 GB) + 32 MB
    RAM = 202 MB

    The next standard increment of RAM would be 256 MB, so the Navision Server should initially be configured with 256 MB of RAM. The amount of suggested CACHE would then be set to 202 MB. If the CACHE setting is the same as the system RAM, then set the CACHE equal to the calculated amount minus the RAM required for the operating system. This value will probably change as you fine-tune the system for the client.
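    As a quick sanity check, the rule of thumb above can be written as a one-line formula (a Python sketch of the white paper's arithmetic; the 1000 MB-per-GB conversion follows the worked example):

```python
def suggested_ram_mb(users, db_gb):
    """Navision Server RAM rule of thumb:
    1.5 MB per user + 10 MB per 500 MB of database + 32 MB for the OS."""
    db_mb = db_gb * 1000              # the worked example uses 1000 MB per GB
    return 1.5 * users + 10 * (db_mb / 500) + 32

print(suggested_ram_mb(60, 4))        # 202.0 - matches the 60-user / 4 GB example
```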

    Hard Drives

    Our server is very dependent on disk reads and writes. The faster the disk setup, the faster the performance of the server process. This is probably where the biggest gains in server performance can be had. We suggest that a RAID 1 array be used with our database. RAID 1 provides the best redundancy for the system by duplicating each drive (mirroring). Because of the way our Database Manager works, RAID 1 also allows our server to stripe the data over multiple drives. This permits the server to read and write data over multiple drives simultaneously, which results in better throughput. When putting together the RAID 1 array, we suggest that you use the fastest drives and interface available (Ultra SCSI 3, 15k RPM).

    Processor

    We suggest that you get the fastest processor the client can afford. We do not require multiple processors, since the Navision server cannot take advantage of them.

    Network

    We suggest that a 100 Mb/s or faster network be utilized. For WAN connections, we suggest the Citrix or Microsoft Terminal Services be used.
  • johnson_alonso Member Posts: 690
    It is very nice to have your explanation. I am really following this topic, since it is very crucial: if the problem stays unsolved, it means Navision is a 100% FAIL in implementation for the company, but if it succeeds, everyone will admit Navision is the best. I don't know who Kuya is, although he is from Indonesia, the same as me.

    Rgds,
    Johnson
  • ovicash Member Posts: 141
    johnson_alonso wrote:
    It is very nice to have your explanation. I am really following this topic, since it is very crucial: if the problem stays unsolved, it means Navision is a 100% FAIL in implementation for the company, but if it succeeds, everyone will admit Navision is the best. I don't know who Kuya is, although he is from Indonesia, the same as me.

    I am in the same situation as you. By next year one of our clients will probably have 150 users. Navision will interact with 4 or 5 external applications - you can imagine that we need a "space ship" for the Navision SQL Server :).
    ovidiu

    Best Regards
  • kuya Member Posts: 6
    Further checks:
    1. SQL Profiler did not show any odd slowness for any processed event, but we found that a large amount of paging file being written to disk may cause slowness.
    2. Since we checked the 3 machines just doing backup/restore, we suspect the Navision application code written against the machine architecture. However, we do not know how to check this... anyone? Likewise, iterative processing such as Adjust Cost and optimisation is affected by the 3 different machines' specs, as stated above.
    3. Hyperthreading did not cause any slowdown.
    4. Memory plays a big role in paging/swap files. Going from one to two processors increases performance greatly, by 60-70%, but increasing to 4 gives only a slight improvement. More disks are supposed to increase write performance, but when we tested with the MSA-1000 it failed to do so *I am going to re-test this on different machines*. Four disks for data, 2 disks for logs, 1 for tempdb, and 1 for the OS is good. Higher disk RPM gives more performance. RAID 0+1 is best.

    I really do not understand why a simple equation such as BIGGER MACHINES are supposed to give FAR BETTER PERFORMANCE is returning a failing answer.

    Does anyone know how to fully utilise the processor's FSB speed? Say an iterative process in Navision only uses 8-20% of the processor - can we tune it to 50-80% so we are not wasting any of the processor's speed? Or, put another way: how can we speed things up from the application side of tuning, rather than with hardware?

    Please help ... many thanks

    Regards,
    Andre
  • johnson_alonso Member Posts: 690
    On the other forum, I've seen many members in euphoria welcoming Microsoft Dynamics NAV 5.0, but here there is a tough unsolved problem that some will use as a tool to hurl insults at Navision. This problem is really serious, I think, and there is still no solution.


    Rgds,
  • kine Member Posts: 12,562
    I can say just what I know from experience. A customer (70-80 sessions) with SAN disks and 4 CPUs had a performance problem. After performance measuring, we found that there were many recompile actions on the MS SQL side (2 GB RAM). After upgrading to 8 GB RAM (and using the appropriate switches on the OS and on MS SQL!), the performance boost was great. Each write into a table with SIFT means compilation of the triggers on that table. If you do not have enough RAM, the trigger is not kept in memory and must be recompiled. The trigger is not easy to compile...

    Another thing with a SAN is to use the correct stripe size. If you use 8 KB blocks when the SAN is connected over fibre channel, the overhead will be too big; on fibre channel, use a 64 KB stripe size... (of course, this can differ for different SAN architectures). For common work use MAX DEGREE OF PARALLELISM = 1; for the statistics update job etc., set it to 0. You can disable the swap file on your server - if you do not have enough RAM to run the server without a swap file, add RAM...
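    As a rough illustration of the stripe-size point (a sketch with hypothetical numbers - real controllers coalesce I/O, so this only counts the stripe-sized units a transfer touches, each of which carries some fixed per-request overhead on the fibre channel link):

```python
def stripe_units(transfer_kb, stripe_kb):
    """How many stripe-sized units a sequential transfer touches."""
    return -(-transfer_kb // stripe_kb)   # ceiling division

# Reading 1 MB sequentially:
print(stripe_units(1024, 8))    # 128 units with an 8 KB stripe size
print(stripe_units(1024, 64))   # 16 units with a 64 KB stripe size
```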

    Do not forget to set the correct parameters for the OS and SQL depending on the RAM size:

    RAM size:
    2-4 GB    OS: /3GB
    4 GB+     OS: /PAE /3GB    SQL: /AWE
    16 GB+    OS: /PAE         SQL: /AWE

    After you have checked all that, check whether the clients have enough RAM and CPU and a fast enough network.

    After that, you can start optimizing the code. It is best to use 4.00 SP1 or SP2, so you have the new commands and properties for optimizing the app for MS SQL.

    And read the Performance Troubleshooting Guide and try to understand everything that is in there...
    Kamil Sacek
    MVP - Dynamics NAV
    My BLOG
    NAVERTICA a.s.