
I do not know much about how operating systems work internally, so why does Linux impose limits on the maximum number of open files and running processes?



I would appreciate it if anyone could help me understand.



 Answers

This is mainly for historical reasons. On older Linux mainframes, many users would connect and share the machine's resources. It was necessary to limit consumption per user, and since resources such as file handles and processes are managed by the kernel, the limits were enforced there. Limits also help contain attacks like the fork bomb; a defense against the fork bomb using process limits is shown here.
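By way of illustration, here is a minimal C sketch of that kind of defense using the standard getrlimit()/setrlimit() interface. The soft limit of 100 processes is an arbitrary value chosen for the example, not a recommended setting.

    /* Minimal sketch: cap the number of processes this user may own.
     * Once the cap is reached, fork() fails with EAGAIN, so a fork
     * bomb cannot exhaust the process table. */
    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl;

        /* Read the current process-count limit for this user. */
        if (getrlimit(RLIMIT_NPROC, &rl) != 0) {
            perror("getrlimit");
            return 1;
        }
        printf("soft limit: %llu, hard limit: %llu\n",
               (unsigned long long)rl.rlim_cur,
               (unsigned long long)rl.rlim_max);

        /* Lower the soft limit; 100 is purely illustrative. */
        rl.rlim_cur = 100;
        if (setrlimit(RLIMIT_NPROC, &rl) != 0) {
            perror("setrlimit");
            return 1;
        }
        return 0;
    }

The same effect is available from the shell with ulimit -u.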



These limits also keep complex services and daemons in check by preventing runaway forking and file opening, which is essentially what a fork bomb does deliberately.
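To watch the open-file limit do that job, here is a short sketch (assuming a Linux system; /dev/null is used only as a convenient file to open, and the limit of 16 is arbitrary) that lowers RLIMIT_NOFILE and then opens files until the kernel refuses:

    /* Sketch: lower the per-process open-file cap, then open files
     * until the kernel returns EMFILE, demonstrating how the limit
     * stops runaway descriptor leaks. */
    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        /* Lowering both soft and hard limits needs no privilege. */
        struct rlimit rl = { .rlim_cur = 16, .rlim_max = 16 };
        if (setrlimit(RLIMIT_NOFILE, &rl) != 0) {
            perror("setrlimit");
            return 1;
        }

        /* Descriptors 0-2 are already open, so this loop succeeds
         * until descriptor 15 is handed out, then fails. */
        int count = 0;
        while (open("/dev/null", O_RDONLY) != -1)
            count++;

        if (errno == EMFILE)
            printf("kernel refused after %d extra descriptors\n", count);
        return 0;
    }

The shell equivalent of the cap itself is ulimit -n.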



Also worth noting are limits imposed by the available RAM and CPU, and the fact that a 32-bit counter can only reference so much (4,294,967,296 entries). But such ceilings sit far above the limits usually set by programmers and system administrators. In any case, long before you reached 4,294,967,296 processes, your machine would have been rebooted, either as planned or because it began to lock up from starving some other resource.



Unless you run Titan with its 584 TiB of memory (you won't, since Linux cannot run as a single instance across a supercomputer), you won't reach this process limit anytime soon. Even then, the average process could only have roughly 146 KiB of memory, assuming no shared memory.
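For the curious, the 146 KiB figure is just the total memory divided by the maximum process count, a back-of-the-envelope estimate:

    584 TiB / 2^32 processes = (584 x 2^40 bytes) / 2^32 = 584 x 2^8 bytes = 149,504 bytes ≈ 146 KiB per process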

