How to fix a Gluster-backed hyperconverged cluster when the thinpool is 100% full

1. The VMs are in a non-responsive state with a `?` mark in the Hosted Engine, no response is seen from the hosts, and event errors like the one below appear:

{"resource": "jsonrpc", "current_tasks": 80, "reason": "Too many tasks"}

2. Check with `lvs -a` whether the thinpool LV is completely full, as shown in the example below.
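
For illustration, a 100% full thinpool looks like this in the lvs output; the VG name, LV names, and sizes below are hypothetical, so look at the Data% column of your own thinpool LV:

[root@host1 ~]# lvs -a
LV                              VG             Attr       LSize  Pool                            Data%  Meta%
gluster_thinpool_gluster_vg_sdb gluster_vg_sdb twi-aotz-- 10.00t                                 100.00 4.20
gluster_lv_data                 gluster_vg_sdb Vwi-aotz-- 10.00t gluster_thinpool_gluster_vg_sdb 100.00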

3. If the thinpool is full, you need to add a new disk, extend the volume group with vgextend, and then grow the thinpool LV with lvextend.

4. Prior to running vgextend, you have to comment out the filter line below in the /etc/lvm/lvm.conf file:

filter = ["a|^/dev/disk/by-id/lvm-pv-uuid

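A minimal sketch of that edit, assuming lvm-pv-uuid appears only in the filter line you want to disable (back up lvm.conf first, and verify with grep afterwards):

[root@host1 ~]# cp /etc/lvm/lvm.conf /etc/lvm/lvm.conf.bak
# Assumption: lvm-pv-uuid occurs only in the filter line to be disabled
[root@host1 ~]# sed -i '/lvm-pv-uuid/ s/^/# /' /etc/lvm/lvm.conf
[root@host1 ~]# grep lvm-pv-uuid /etc/lvm/lvm.conf
# filter = ["a|^/dev/disk/by-id/lvm-pv-uuid
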
5. If you try to extend the VG without performing step 4, it will fail as below:

[root@host1 ~]# vgextend gluster_vg_sdb /dev/sdc
Device /dev/sdc excluded by a filter.

6. Perform vgextend as below and wait for it to succeed:
[root@host1 ~]# vgextend gluster_vg_sdb /dev/sdc
Physical volume "/dev/sdc" successfully created.
Volume group "gluster_vg_sdb" successfully extended

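Before resizing, you can optionally confirm that the new disk has joined the volume group; pvs is a standard LVM command, and the sizes below are illustrative:

[root@host1 ~]# pvs /dev/sdc
PV         VG             Fmt  Attr PSize  PFree
/dev/sdc   gluster_vg_sdb lvm2 a--  15.00t 15.00t
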
7. Next, perform lvextend to increase the thinpool size as below. Here I am increasing the thinpool LV size by 15T, as I have added a 15T disk to the machine.

[root@host1 ~]# lvextend -L+15T gluster_vg_sdb/gluster_thinpool_gluster_vg_sdb

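Once lvextend returns, lvs should show the grown pool and a Data% back below 100. The output below is illustrative, continuing the hypothetical 10T pool from step 2:

[root@host1 ~]# lvs gluster_vg_sdb/gluster_thinpool_gluster_vg_sdb
LV                              VG             Attr       LSize  Data%  Meta%
gluster_thinpool_gluster_vg_sdb gluster_vg_sdb twi-aotz-- 25.00t 40.00  4.20
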
8. If you have multiple hosts in the hyperconverged cluster, perform the above steps on each host, then wait a few minutes until all the hosts and all the storage domains are shown online.
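
To confirm recovery on the Gluster side, you can check brick status with the standard gluster volume status command from any host in the cluster:

[root@host1 ~]# gluster volume status
# Every brick should report Online as "Y"; in the Hosted Engine UI, the
# hosts and storage domains should return to an Up/Active state.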