**F5OS-C 1.0.0 - OKD, Docker and kubevirt Basics**

With the move to a software system that is intended for cloud architectures and nuke-and-pave of tenants, but managed on F5 hardware, it is important to know some of the basic commands for gathering data and investigating issues that may stem from the OKD, Docker, and kubevirt installations on one or both of the system controllers. These include logs that are in a non-standard location compared to previous F5 software systems. As always, there is the base Linux /var/log directory, but there are also locations in each container, as well as logs tied to the various subsystems that are only accessible via those subsystems' commands.

As a quick overview, the VELOS system is made up of the chassis, system controllers, and blades. The system controllers and blades are considered 'nodes' in the OKD subsystem. This becomes important when reviewing information at the F5OS partition level, because nodes are used instead of blades or slots, although all are nearly synonymous (when the 1/4-width blades are used). More often than not, nodes will be the reference in F5OS when speaking of the blades, since they are the 'compute nodes' used by tenants in a partition. This will be important when attempting to troubleshoot nodes that do not come up in the compute cluster under the partition - the node is a reference to a particular blade that is having issues and may need further investigation. Node labels are visible in `oc describe nodes` output, for example:

```
~ ] # oc describe nodes
  bladeready = true
  cpumanager = true
  kubernetes.io/hostname =
```

Pods are another term of note, referring to multiple Docker containers in the OKD subsystem that are deployed together - also the smallest compute unit that can be defined, deployed, and managed.

**Note:** When troubleshooting F5OS subsystem issues with OKD, Docker, and kubevirt, it is requested not to discuss these subsystems in detail with the customer. The expectation is that they will configure the unit using the ConfD CLI and the WebUI that are provided for F5OS.
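As a minimal sketch of how the node/blade mapping above might be checked, the snippet below scans node labels for a blade that is not ready. This is an illustration only: the here-string stands in for real `oc describe nodes` output, and the hostname value `blade-1` is invented.

```shell
# Hypothetical sketch: flag a node (blade) whose bladeready label is not true.
# The sample text stands in for 'oc describe nodes' output on a system
# controller; the hostname value is made up for illustration.
labels='bladeready=true
cpumanager=true
kubernetes.io/hostname=blade-1'

if echo "$labels" | grep -q '^bladeready=true$'; then
  echo "node ready"
else
  echo "node NOT ready - investigate this blade"
fi
```

On a live system the same filter could be applied to the real command's output, e.g. `oc describe nodes | grep bladeready`.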
Some useful commands:

- `docker ps` (with formatting so it's easier to spot a container that's restarting)
- `docker container ls --format "table " -a`
- `docker exec -it partition4_manager /opt/bin/confd-master-key/getConfdMasterKey -storage confd`
- `/usr/share/omd/kubevirt/virtctl console -n` (accessing a tenant virtual console)
- `kubectl get endpoints -n kubevirt kubevirt-prometheus-metrics -o yaml`
- `docker exec -it controller_rsyslogd bash`
- `docker exec -it partition_rsyslogd bash`
- `docker exec -t controller_rsyslogd cat /var/log/velos.log`
- `docker exec -t partition_rsyslogd cat /var/log/velos.log`

I searched around a bit a while back and found stuff about using the loop device, which we are not, and using fstrim to free space. I ended up just rebuilding my kubernetes worker nodes to clear the space back up last time, and switching from XFS to EXT4 after reading some things about potential issues with XFS releasing space with thin provisioning. We appear to be creeping back up to the same point now, though, and I am wondering what is going on. My suspicion is that the device mapper is being used to create thin volumes which are then being orphaned, but I can't see them using the dmsetup command. This problem seems to be unique to the kubernetes worker nodes; I guess it could be something with Rancher and Kubernetes and how it is interacting with Docker?

```
docker_host:~ # cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.0 (Maipo)
```

Output fragments (apparently from `docker info`):

```
Containerd version: 4ab9917febca54791c5f071a9d1f404867857fcc
Operating System: Red Hat Enterprise Linux Server 7.0 (Maipo)
Network: bridge host macvlan null overlay
```

And from `lvs`:

```
LV                VG       Attr      LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
docker_containers k8s_vol1 -wi-ao-    50.00g
docker_thinpool   k8s_vol1 twi-aotz- 200.00g             22.50  38.28
```
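For the thin-pool question above, one quick check is to watch the Data% column of the pool LV before it fills. A minimal sketch, using the sample `lvs` line from the output above in place of a live call (the 80% threshold is an assumption, not from the source):

```shell
# Minimal sketch: extract the Data% column for the docker_thinpool LV.
# The sample line is copied from the lvs output above; on a live host
# you might instead run: lvs --noheadings k8s_vol1/docker_thinpool
lvs_line='docker_thinpool k8s_vol1 twi-aotz- 200.00g 22.50 38.28'

data_pct=$(echo "$lvs_line" | awk '{print $5}')
echo "thin pool data usage: ${data_pct}%"

# Warn once usage crosses a chosen threshold (80% here, an assumption).
awk -v p="$data_pct" 'BEGIN { exit !(p >= 80) }' \
  && echo "WARN: thin pool filling up" \
  || echo "usage OK"
```

Watching this number over time would show whether space is steadily leaking (as suspected with orphaned thin volumes) or being reclaimed after container deletion.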