MySQL file limit, table cache and max_connections

The MySQL variables open_files_limit, table_open_cache and max_connections are
interrelated, for an obvious reason: all of them deal with file descriptors in
one way or another.

If one of these values is provided but the others are left out, mysqld
calculates the others using a formula and, in some cases, emits a warning if
that is not possible.

The whole calculation behind obtaining the final file descriptor limit is a bit
byzantine and is as follows (for Linux):

EDIT: This applies to MySQL 5.5. In 5.6, as Daniël pointed out in the comments,
a few things have changed; check the comments for details. I will probably write
a follow-up post on the differences.

  1. First it tries to calculate a minimum requirement based on max_connections
    and table_open_cache. As with other variables, if a value is not provided in
    the cnf file or on the command line, the default is used.

        /* MyISAM requires two file handles per table. */
        wanted_files= 10+max_connections+table_cache_size*2;

  2. Next, it checks how much it can actually allocate:

    • Here, it assumes that at most 5 file handles will be used by each
      connection, and then takes the maximum of that, the wanted_files from the
      previous step, and open_files_limit (note: the latter is considered to be
      0 if not set).

        max_open_files= max(max(wanted_files, max_connections*5), open_files_limit);
        files= my_set_max_open_files(max_open_files);
  3. After this, it takes the minimum of max_open_files and the maximum file
    limit supported by the platform (OS_FILE_LIMIT). For Linux, that is UINT_MAX.

    limit = min(max_file_limit, OS_FILE_LIMIT);
  4. This is where it gets interesting [1].
    • Before setting with setrlimit [2], it obtains the current limit and, if
      that limit is unlimited (in ulimit parlance), it tries to lower it to the
      limit obtained earlier (good guy MySQL!).

    • If the current limit is not unlimited and is already higher than limit, it
      simply keeps the current limit and returns it without calling setrlimit.

    • If neither of the above conditions is satisfied, it actually sets the
      limit with setrlimit.

      • If this fails, it returns the value obtained with earlier getrlimit.

      • If this succeeds, it does a getrlimit again [3] and returns the limit
        it reports (let's call it plimit).

  5. Now, if the limit it received is lower than what it wanted (plimit <
    wanted_files):

    • If open_files_limit is not set, then it readjusts max_connections and
      table_cache_size based on the plimit obtained before, and prints a warning
      (subject to log_warnings).

    The math in the formula above is left as an exercise to the reader.

    • If open_files_limit is set, then it prints the familiar warning
      “Could not increase number of max_open_files to more than …”.
  6. Finally, it sets the variable open_files_limit to plimit. This is why
    you may often see open_files_limit differ at runtime from what you set in
    your configuration.

Now, what are the implications of this?

  • MySQL can only raise the limit up to the hard limit imposed by the OS. If it
    hits the hard limit, it falls back as described above (since it is not root,
    it cannot raise the hard limit by itself). On a capability-based system,
    MySQL should be able to change the hard limit if it has CAP_SYS_RESOURCE.
    You can check the hard and soft limits for the current user with ulimit -Hn
    and ulimit -Sn. For daemons, you can set them in /etc/security/limits.conf
    (or limits.d).

  • If you set max_connections or table_open_cache (or both) AND
    open_files_limit, and MySQL is not able to obtain the desired limit due to
    OS restrictions, your connections can fail unexpectedly [4] later on if and
    when it runs out of file descriptors (at max_connections or with a large
    number of open tables).

    • Hence, one good thing to do here is to not set open_files_limit at all.
      Just look after the global limits; MySQL is wise enough to figure out how
      much it needs (for instance, it lowers the limit if getrlimit returns
      unlimited). For the global limits, set a sane upper bound and set the
      max_connections/table_cache variables appropriately based on application
      needs.

    • The only downside of this approach is that max_connections and
      table_cache will be adjusted downwards (visible with log_warnings) if they
      exceed what the OS limits allow, but that is still better than failing
      later on!

  • You can set open_files_limit up to UINT_MAX (assuming the OS allows it)! The
    MySQL documentation says “platform dependent”, but on a 64-bit Linux system
    that is about 4 billion!

  • This should be obvious, but if you just set open_files_limit and not
    max_connections or table_open_cache, mysqld won’t ramp these up to match
    open_files_limit (even if the limit is higher than what would be calculated
    from the latter).

  • You may still want to set open_files_limit in some cases. Xtrabackup, for
    instance, uses this variable to set its own limits (it has to open all files
    at start due to DDL restrictions) and it doesn’t base this on
    max_connections or table_cache (since those don’t make sense for
    xtrabackup). Refer to the bug report on this.

  • You may also want to set it if mysqld is running in a shared environment
    with other services like Apache etc. and you want to cap its usage for some
    reason. In that case, pay attention to the warnings in the error log!
    (Maybe there should be an option to fail hard rather than just log to the
    error log [5].)

  • max_connections and table_open_cache are dynamic variables; open_files_limit
    is not. If you change max_connections at runtime, none of the above
    recalculation is done. Don’t set them too high at runtime if your
    open_files_limit is low (check the variable at runtime).

  • Based on the previous point, you may also want to set open_files_limit if
    you want to keep a buffer between what mysqld feels it wants and what you
    feel may be needed later on (especially if you are going to change the
    dynamic variables at runtime).

  • In case of MariaDB and Percona Server, due to thread pool requirements (when
    used), an additional parameter extra_max_connections is also used.
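The implications above are easy to explore interactively: Python's resource module wraps the same getrlimit/setrlimit interface mysqld uses, so you can check what NOFILE limits a process started under the mysql user would inherit. A small sketch, Unix-only; the values are of course system-dependent:

```python
# A sketch (not MySQL code): inspect and raise RLIMIT_NOFILE the way
# mysqld's step 4 does. Unix-only.
import resource

# Soft and hard limits for open file descriptors (RLIMIT_NOFILE),
# the same pair mysqld's getrlimit call sees.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("soft:", soft, "hard:", hard)

# An unprivileged process can move its soft limit anywhere up to the hard
# limit; raising the hard limit itself needs root or CAP_SYS_RESOURCE.
if hard != resource.RLIM_INFINITY:
    resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
```

Run this as the mysql user and you see exactly the ceiling the calculation in this post will run into.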


  1. Assuming getrlimit (and RLIMIT_NOFILE) is available, which it should be on
    any Linux. If not, it uses min(max_file_limit, OS_FILE_LIMIT) as a fallback.
  3. This looks redundant to me. 
  4. With “EAGAIN: Resource temporarily unavailable”. 
  5. MySQL is notorious in this respect; in the case of SSL, for instance.

4 Replies to “MySQL file limit, table cache and max_connections”

  1. The behaviour changed in 5.6.12. For more details see this bug report (which I consider fixed):

    There is also a small patch attached to that bug which changes the behaviour and sets the limit to the hard limit instead of the soft limit if setting the requested limit fails.

    And for troubleshooting, something like this should be used instead of the more classical ‘switch to the user and do a ulimit -a’. With cgroups in Linux, the limit for the mysql user and the mysql process might be different.
    egrep '^(Limit|Max open files)' /proc/`pidof mysqld`/limits

    And this one is also a classic:

  2. @Daniël,

    Thanks for the details and the link.

    Yes, I see a few changes in 5.6 as per the commit introduced in fix of Bug#16430532.

    For the issue with diagnostics printed to the error log, 5.5 also has it now.
    (I see the bug was filed in 2011, so maybe it wasn’t there then.)

    sql_print_warning("Changed limits: max_open_files: %u max_connections: %ld table_cache: %ld",
    files, max_connections, table_cache_size);

    sql_print_warning("Could not increase number of max_open_files to more than %u (request: %u)", files, wanted_files);

    But regarding the patch: I see that in it, you don’t try to set above the
    hard limit (the ‘(max_file_limit > rlimit.rlim_max)’ check) to avoid failing
    and falling back to the original value.

    Going by the same ulimit example in that issue:

    $ ulimit -Sn
    $ ulimit -Hn
    $ ulimit -n 66000
    -bash: ulimit: open files: cannot modify limit: Operation not permitted
    $ ulimit -Sn

    Suppose mysql needs 90000, mysql falls back here to 1024 (both 5.5/5.6), so that
    max_connections etc. are adjusted appropriately and logged.

    Otherwise, with your patch there, it resets its goal to 65536 (based on
    rlim_max) and raises soft limit to 65536, and still adjusts max_connections and
    others, and logs them too (because it still ended up getting less than 90000).

    Now, since it is failing in both cases, it depends on which approach may be
    considered good. (either to settle for less or not change at all).

    One more distinction I noted is that in 5.5 it checks wanted_files against
    the final limit it obtained (after setrlimit), whereas in 5.6 it checks
    request_open_files (which is max_open_files in 5.5) against that final limit
    (effective_open_files in 5.6).


    Regarding cgroups and ulimit, I am not sure you can restrict the number of
    open file descriptors with a cgroup. Can you point me to the appropriate
    setting for that?

    Also, if you are speaking of memory footprint with a cgroup, it doesn’t work
    as expected either.

    This is because rlimits are inherited across processes through the
    parent-child relationship, which doesn’t change when you move a process into
    a cgroup (unless the child itself calls setrlimit, as mysqld does).
    Moreover, the limits of a cgroup are enforced at a different (higher) level
    than ulimit: a process is throttled at the cgroup level rather than
    hard-limited, and this is mostly opaque to the process being throttled.

    But, yes, for a running process, checking /proc/$pid/limits is indeed the
    accurate way to obtain it (in the MPB blog post you linked they even write
    to it with echo -n “Max processes=SOFT_LIMIT:HARD_LIMIT” > /proc/`pidof mysqld`/limits).
    Otherwise, you may have to resort to a login shell to obtain the current
    user’s rlimits. Since mysqld is a daemon whose shell is usually set to
    /bin/false, checking limits.conf/limits.d is the only other way.

    As for the first problem with nproc and 90-nproc.conf,

    “Add a new file in /etc/security/limits.d with a higher number (e.g.,
    91-mysql.conf) and set your limits in there” is indeed the right solution.

    Changing limits.conf directly should be completely avoided and thoroughly
    discouraged (unless the limits.d functionality is not available, as in older
    distros).

    Also, yes, the snafu with the limits there (reboot vs. service restart) is
    due to how the legacy init system functioned; a similar issue shouldn’t be
    there with systemd.
