
Previously, I had a 4 GB RAM server with Linode. Several times I ran the "free -m" command to check available memory, and most of the time it showed less than 200 MB free. The major processes I run continuously on the server are:



1) Apache server, serving around 1000 hits a day
2) Tomcat server, fewer than 100 hits a day
3) Solr
4) Three Java programs that should not consume more than 2 GB of RAM



(I am not passing any -Xmx parameter to the Java processes.)
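For reference, this is one way to check the default maximum heap size each JVM will pick when no -Xmx is given, and how a heap cap could be passed when launching a program; myapp.jar below is only a placeholder name, not one of the actual programs:

# Show the maximum heap size the JVM will use by default on this machine
java -XX:+PrintFlagsFinal -version 2>/dev/null | grep -i maxheapsize

# Example of capping the heap explicitly (myapp.jar is a placeholder)
java -Xms256m -Xmx512m -jar myapp.jar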



So I moved to another dedicated host, but here I am getting the same kind of problem. My Solr process gets "Killed" if I try to run any additional Java program (one that doesn't need more than 512 MB). Sometimes it even gets "Killed" on its own, perhaps when the other Java processes are working hard.



Here is the output I found in /var/log/kern.log when I tried to find out why Solr gets "Killed" for no apparent reason.



Dec 14 20:25:03 xyzserver kernel: [4680101.245182] Out of memory: Kill process 7481 (java) score 184 or sacrifice child
Dec 14 20:25:03 xyzserver kernel: [4680101.246851] Killed process 7481 (java) total-vm:22841896kB, anon-rss:987160kB, file-rss:0kB
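For reference, the kernel chooses its OOM victim by oom_score, which can be inspected through /proc; the commands below are only a sketch, <solr-pid> is a placeholder for Solr's current PID, and oom_score_adj requires a reasonably recent kernel:

# See how likely each java process is to be picked by the OOM killer
for pid in $(pgrep java); do
    printf "%s %s\n" "$pid" "$(cat /proc/$pid/oom_score)"
done

# Make one process (e.g. Solr) a less attractive OOM target
echo -500 > /proc/<solr-pid>/oom_score_adj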


I am not sure why I always see less than 200 MB of free memory.



free -m output:

root@xyzserver:/home# free -m
             total       used       free     shared    buffers     cached
Mem:          7963       7805        157         24          1         57
-/+ buffers/cache:       7746        216
Swap:         3813       2420       1393
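Roughly, the -/+ buffers/cache line is derived from the Mem: line by shifting buffers and cache from "used" to "free", so the numbers above relate like this (give or take a MB of rounding):

used - buffers - cached = 7805 - 1 - 57 ≈ 7746   (the -/+ buffers/cache "used" value)
free + buffers + cached =  157 + 1 + 57 ≈  216   (the -/+ buffers/cache "free" value)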


python ps_mem.py output



root@xyzserver:/home# python ps_mem.py
Private + Shared = RAM used Program

4.0 KiB + 9.5 KiB = 13.5 KiB acpid
4.0 KiB + 20.5 KiB = 24.5 KiB upstart-socket-bridge
4.0 KiB + 21.0 KiB = 25.0 KiB upstart-file-bridge
4.0 KiB + 24.5 KiB = 28.5 KiB atd
4.0 KiB + 25.0 KiB = 29.0 KiB upstart-udev-bridge
4.0 KiB + 27.5 KiB = 31.5 KiB vsftpd
4.0 KiB + 37.5 KiB = 41.5 KiB init
4.0 KiB + 44.5 KiB = 48.5 KiB dbus-daemon
4.0 KiB + 47.5 KiB = 51.5 KiB systemd-logind
4.0 KiB + 51.5 KiB = 55.5 KiB systemd-udevd
24.0 KiB + 117.0 KiB = 141.0 KiB getty (6)
104.0 KiB + 48.5 KiB = 152.5 KiB flock (5)
120.0 KiB + 49.5 KiB = 169.5 KiB sh (5)
156.0 KiB + 41.0 KiB = 197.0 KiB irqbalance
264.0 KiB + 183.5 KiB = 447.5 KiB sshd (2)
480.0 KiB + 46.5 KiB = 526.5 KiB rsyslogd
524.0 KiB + 123.0 KiB = 647.0 KiB screen (4)
384.0 KiB + 369.0 KiB = 753.0 KiB cron (6)
840.0 KiB + 123.0 KiB = 963.0 KiB bash (5)
73.0 MiB + 138.0 KiB = 73.2 MiB mysqld
58.1 MiB + 27.4 MiB = 85.5 MiB apache2 (31)
3.2 GiB + 3.0 MiB = 3.2 GiB java (7)
---------------------------------
3.4 GiB
=================================
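A rough cross-check of these per-process numbers can be done with plain ps sorted by resident set size; unlike ps_mem.py it does not account for shared pages, so treat it as an approximation:

# PID, resident set size (KiB), virtual size (KiB) and name of the biggest processes
ps -eo pid,rss,vsz,comm --sort=-rss | head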


cat /proc/meminfo



root@xyzserver:/home# cat /proc/meminfo
MemTotal: 8154636 kB
MemFree: 180992 kB
Buffers: 692 kB
Cached: 36560 kB
SwapCached: 142536 kB
Active: 2775768 kB
Inactive: 1070008 kB
Active(anon): 2765376 kB
Inactive(anon): 1059320 kB
Active(file): 10392 kB
Inactive(file): 10688 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 3905532 kB
SwapFree: 613012 kB
Dirty: 0 kB
Writeback: 1916 kB
AnonPages: 3667288 kB
Mapped: 28880 kB
Shmem: 15796 kB
Slab: 59552 kB
SReclaimable: 22052 kB
SUnreclaim: 37500 kB
KernelStack: 3592 kB
PageTables: 42956 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 7982848 kB
Committed_AS: 8087572 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 281716 kB
VmallocChunk: 34359421140 kB
HardwareCorrupted: 0 kB
AnonHugePages: 14336 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 55572 kB
DirectMap2M: 8310784 kB
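Since most of the swap is in use above, it may also help to see which processes are swapped out the most; on kernels that report VmSwap in /proc/<pid>/status, this one-liner is a quick approximation:

# Per-process swap usage in kB, largest last
grep VmSwap /proc/[0-9]*/status 2>/dev/null | sort -t: -k3 -n | tail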


As I can see, ps_mem.py is showing less than 4 GB in use. Why is free -m showing almost all memory as consumed? How do I control this behavior? Apparently, I am not utilizing all the memory; how can I do that? Do I need to change the swap size?



 Answers

Don't do anything: rather than letting free memory sit idle doing nothing, the Linux kernel uses as much of it as possible for the disk cache.



You can control the cache, but doing so would make your system perform worse, as every disk access would then actually hit the disk instead of being served from the cache.
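If you want to see this effect for yourself, the page cache can be dropped through the /proc interface; treat it as a test only, since performance will suffer until the cache warms up again:

# Flush dirty data to disk, then drop the page cache, dentries and inodes (run as root)
sync
echo 3 > /proc/sys/vm/drop_caches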



(Actually, you should have moved to a server with even more memory, as you are running into swap.)
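One knob worth checking before adding RAM is vm.swappiness, which controls how eagerly the kernel swaps application memory out in favor of cache; the value below is only a suggestion, and the right setting depends on your workload:

# Show the current swappiness (the default is usually 60)
cat /proc/sys/vm/swappiness

# Prefer keeping application memory in RAM over growing the file cache
sysctl -w vm.swappiness=10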



Here's my system:



free -m
             total       used       free     shared    buffers     cached
Mem:          3886       3777        109        256         24       2572
-/+ buffers/cache:       1180       2705
Swap:         9755          0       9755


Also very little free memory, but a ton of it is used for cache, and when applications need memory it will be released from the cache.

