Nav 2009 R2 Database Size

paramb Member Posts: 85
edited 2014-04-17 in NAV Three Tier
Hi All,

The size of database in my current project is around 75 GB (63 GB ndf and 9 GB ldf).

What size is considered extreme for NAV SQL databases? What are the recommended data compression methods? Is there a recommended configuration that will keep the growth of the database size under control?

Any reply is really appreciated. Are there any documents or blogs on this topic?

Thanks,
Siyab

Comments

  • mdPartnerNL Member Posts: 802
    80 GB is big but normal for a company with more than 5 years of data.

    I call it big because all jobs, backups, and restores now take more and more time. Maybe investigate which tables are this large and whether they still need to be.
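    A quick way to see which tables take the space, assuming SQL Server 2005 or later, is a query over sys.dm_db_partition_stats, run in the NAV database (a sketch; no NAV-specific names are assumed):

    ```sql
    -- Top 20 tables by reserved space, split into data vs. index pages
    SELECT TOP (20)
        t.name AS table_name,
        SUM(CASE WHEN ps.index_id IN (0, 1) THEN ps.row_count ELSE 0 END) AS row_count,
        SUM(ps.reserved_page_count) * 8 / 1024 AS reserved_mb,
        SUM(CASE WHEN ps.index_id IN (0, 1) THEN ps.reserved_page_count ELSE 0 END) * 8 / 1024 AS data_mb,
        SUM(CASE WHEN ps.index_id > 1 THEN ps.reserved_page_count ELSE 0 END) * 8 / 1024 AS index_mb
    FROM sys.dm_db_partition_stats AS ps
    JOIN sys.tables AS t ON t.object_id = ps.object_id
    GROUP BY t.name
    ORDER BY reserved_mb DESC;
    ```

    In NAV databases the usual suspects are the big entry tables (Value Entry, G/L Entry, Change Log Entry) and their secondary indexes.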
  • bbrown Member Posts: 3,268
    To begin with, you can't simply look at a database and say whether it is too big or not. If the client's size and transaction volume justify the database's growth, then it is what it is. If the database is growing because it is over-indexed, then that is something that should be addressed. But in general terms, 80 GB is nothing. I generally work with systems that are multiples larger, including ones that grow that much in a year. I would not lose sleep over figuring out how to shrink this database. Rather, I'd invest that effort into understanding how to support it, including guiding your client toward providing and maintaining a proper infrastructure.

    If the growth is due to over-indexing, that should be addressed.

    Features like SQL data compression can be useful with very large databases (this is not a very large DB). But before you get too excited, that feature only comes with SQL Server Enterprise Edition, which is not cheap.
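    For reference, estimating and then applying PAGE compression looks roughly like this on SQL Server 2008 Enterprise (the table name below is just an example; real NAV tables are prefixed with the company name):

    ```sql
    -- Estimate the savings first; compression is not free (CPU cost on reads/writes)
    EXEC sp_estimate_data_compression_savings
         @schema_name = 'dbo',
         @object_name = 'CRONUS$G_L Entry',   -- example name
         @index_id = NULL,
         @partition_number = NULL,
         @data_compression = 'PAGE';

    -- Apply it to the table and all its indexes (Enterprise Edition only on this SQL generation)
    ALTER INDEX ALL ON dbo.[CRONUS$G_L Entry]
    REBUILD WITH (DATA_COMPRESSION = PAGE);
    ```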

    I do agree with the statement that backups, restores, and other jobs will take longer as the database gets larger. However, if your system can't run these tasks in a reasonable time, the problem is not the size of the database.
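    One cheap win on backup time, if the edition allows it (Enterprise on SQL 2008; Standard and up from SQL 2008 R2), is backup compression. The database name and path below are placeholders:

    ```sql
    BACKUP DATABASE [NAV_Live]               -- example database name
    TO DISK = N'D:\Backup\NAV_Live.bak'      -- example path
    WITH COMPRESSION, CHECKSUM, STATS = 10;
    ```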
    There are no bugs - only undocumented features.
  • paramb Member Posts: 85
    Thank you all for the replies. I am collecting more info based on the inputs from you guys.

    Thanks Again.
  • paramb Member Posts: 85
    Hi,

    Please find attached screenshots with more info on the current status of the database.

    I am trying to understand at what point this database should be considered large, given the infrastructure, the number of users, and the number of years since implementation.

    Also, I have listed the disk usage by top tables with % of data and index. Please advise if there are any measures to keep the growth of the table size under control.

    Looking forward to expert advice.
  • bbrown Member Posts: 3,268
    You list 3 drives. Could you tell us what is on each of them?

    As I've said before, this database isn't even close to being big. Could you elaborate on what is leading you to think it is too big? Are you seeing performance issues?
    There are no bugs - only undocumented features.
  • paramb Member Posts: 85
    We have encountered performance issues and recently moved to the current infrastructure. At this point performance is good and there are no concerns. But management does not want to wait until we reach the next critical stage in terms of database size. That is why, with the given information, I am trying to understand what size would be alarming. Once we know the alarming size, we can monitor the growth of the database in the coming days and get an understanding of where we stand.

    I will post the requested drive info shortly.

    Thanks
    Paramb
  • bbrown Member Posts: 3,268
    I suspect that your real issue is not database size, but rather database or application performance bottlenecks, all of which likely still exist. It's just that now you have enough extra hardware resources that they get masked. My suggestion would be to investigate and do some performance tuning. You'll find a lot of helpful information on this site.
    There are no bugs - only undocumented features.
  • paramb Member Posts: 85
    Yes I agree. Along with the new infrastructure we have taken measures to enhance the performance and we have achieved the result.

    All I am trying to understand is whether a number like a 200 GB data file is considered huge. I would really appreciate it if you could shed some light on what size is considered huge, given this information.

    Thanks,
    Paramb
  • Alex_Chow Member Posts: 5,063
    We have a client that has 140GB.

    The largest database that I've seen is 200GB, and that was back in version 4.0.
  • bbrown Member Posts: 3,268
    The database I'm working on today is 600 GB. I routinely see databases of 150 GB or larger. Also see those that are only 20 GB after 5 years.
    There are no bugs - only undocumented features.
  • paramb Member Posts: 85
    Thank you all for posting replies. I understand all these 140, 200, and 600 GB figures are the sizes of the data files and not the log files.
  • bbrown Member Posts: 3,268
    paramb wrote:
    Thank you all for posting replies. I understand all these 140, 200, and 600 GB figures are the sizes of the data files and not the log files.

    The numbers I gave you are the amount of data (used space). The physical files are larger.
    There are no bugs - only undocumented features.
  • paramb Member Posts: 85
    Hi bbrown,

    There are three Storage drives, two of them RAID 10 and one RAID 1. RAID 1 is used for operating system and the other two RAID 10 drives are used one each for storing Data files and Log files.
  • waynenl Member Posts: 39
    I have a client with 925 GB of data and a 400 GB log.
    They use a SAN; memory is just 96 GB.
    Ledger Dimension Entry: 1.4 billion
    GL Entry: 92 million
    RTC: 130 users
    Web Services: 300 users

    Currently the server is too slow to respond during rush hour.

    My question is always: how much memory and CPU do I need for this system? It always gets bottlenecked on RAM and HDD.
  • bbrown Member Posts: 3,268
    waynenl wrote:
    I have a client with 925 GB of data and a 400 GB log.
    They use a SAN; memory is just 96 GB.
    Ledger Dimension Entry: 1.4 billion
    GL Entry: 92 million
    RTC: 130 users
    Web Services: 300 users

    Currently the server is too slow to respond during rush hour.

    My question is always: how much memory and CPU do I need for this system? It always gets bottlenecked on RAM and HDD.

    Rather than throwing more hardware at it, you might look into some performance tuning of the database and NAV code. Constant high CPU utilization can be a sign of a database that is not indexed efficiently.
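    One way to spot over-indexing, assuming SQL Server 2005 or later, is to compare reads against writes per index since the last restart (a sketch; tune the filters for your database):

    ```sql
    -- Non-clustered indexes that are written to but rarely read
    SELECT o.name AS table_name, i.name AS index_name,
           ISNULL(s.user_seeks, 0) + ISNULL(s.user_scans, 0) + ISNULL(s.user_lookups, 0) AS reads,
           ISNULL(s.user_updates, 0) AS writes
    FROM sys.indexes AS i
    JOIN sys.objects AS o ON o.object_id = i.object_id
    LEFT JOIN sys.dm_db_index_usage_stats AS s
           ON s.object_id = i.object_id
          AND s.index_id = i.index_id
          AND s.database_id = DB_ID()
    WHERE o.is_ms_shipped = 0 AND i.index_id > 1
    ORDER BY reads ASC, writes DESC;
    ```

    In NAV 2009, don't drop such indexes directly in SQL; instead, disable them via the key's MaintainSQLIndex property in the table designer so the change survives object changes.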
    There are no bugs - only undocumented features.
  • David_Singleton Member Posts: 5,479
    My largest client is 3 TB, and I have a few clients with databases over 1 TB.

    It really annoys me that there is all this "marketing" info out there claiming that Navision cannot be used in these types of environments.

    Having said that, paramb, I must say that you are doing exactly the right thing. It is important to be aware of what is going on, and it is a very professional attitude you are taking to look at the system BEFORE there are problems. At SQL Perform, we generally see systems late in the game. It is much better to look at performance from day one and to manage it as the system grows.
    David Singleton
  • paramb Member Posts: 85
    I would like to thank you all for the contributions to this thread. It really helped me learn a lot, and I hope it is helping the many who are viewing this thread.
  • paramb Member Posts: 85
    I can see there are NAV databases of 600 GB and even in the TB range. I am sure these databases have huge tables (from a number-of-records perspective) like G/L Entry, Value Entry, and the various ledger entry tables. Doesn't this impact the performance of the system when the number of records in the database grows beyond a limit? Does the number of records really matter from a performance perspective?