
Thursday, December 21, 2023

Prerequisites for On-premises MAS Installation

1) Cluster Information:

Node details: The cluster consists of 3 master nodes, 5 worker nodes, and 3 infra/storage nodes. In addition, an installer machine running RHEL hosts all the installation artifacts and load-balances requests to the master and infra nodes.

2) DNS configurations

3) Load balancer details

API Load Balancer
Ports: 6443, 22623
Server pool: bootstrap, master0, master1, master2
Type: Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI) for the API routes. Use a stateless load balancing algorithm; the options vary based on the load balancer implementation.

Application Ingress Load Balancer
Ports: 80, 443
Server pool: infra0, infra1, infra2
Type: Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI) for the Ingress routes. Connection-based or session-based persistence is recommended, based on the options available and the types of applications that will be hosted on the platform.
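Once both load balancers are in place, it is worth confirming that their frontends are actually listening before moving on. Below is a minimal sketch using nc; the VIP hostnames are placeholders for whatever names your environment uses:

```bash
#!/bin/bash
# Probe the load balancer frontends. The hostnames below are
# placeholders; substitute the VIPs/FQDNs used in your cluster.
API_LB="api.ocp.example.com"      # API load balancer VIP
APPS_LB="apps.ocp.example.com"    # application ingress VIP

for port in 6443 22623; do
  nc -z -w 5 "$API_LB" "$port" \
    && echo "OK:   $API_LB:$port" \
    || echo "FAIL: $API_LB:$port"
done

for port in 80 443; do
  nc -z -w 5 "$APPS_LB" "$port" \
    && echo "OK:   $APPS_LB:$port" \
    || echo "FAIL: $APPS_LB:$port"
done
```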



Prerequisites

System requirements

1. All Linux systems (VMs) must be registered to the Subscription Manager.

2. Ensure that the package manager is configured to allow package updates across all nodes.

3. The installation requires root access privileges.

4. Memory and CPU must meet the minimum requirements given in the table under cluster node details.
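These checks can be scripted. The following is a minimal pre-flight sketch, assuming RHEL nodes managed with subscription-manager and yum; run it on each node:

```bash
#!/bin/bash
# Minimal pre-flight check for the system requirements above.

# 1. Confirm the system is registered to the Subscription Manager.
subscription-manager status

# 2. Confirm the package manager can reach enabled repositories.
yum repolist enabled

# 3. Confirm we are running with root privileges.
[ "$(id -u)" -eq 0 ] && echo "running as root" || echo "root access required"

# 4. Report memory and CPU so they can be compared against the
#    minimums in the cluster node details table.
free -h
nproc
```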

 

Network requirements

The network topology is unique to each cluster and depends on its requirements; the cluster can be deployed across different VLANs or within a single VLAN.

• Reserve a block of IP addresses from which pod IP addresses are allocated. This block must not overlap with existing physical networks; these addresses are used for the pod network.

• Similarly, reserve a block of IP addresses for service IP addresses. This block must also not overlap with existing physical networks.

• Instances (systems) on the network should be able to see each other as neighbours, and each machine must be able to resolve the hostnames of all other machines in the cluster (a quick check follows this list). Note: Changes to the network after the OCP cluster is deployed are not supported by Red Hat.
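The mutual-resolution requirement is easy to verify from any one machine. A small sketch, with placeholder node names and a placeholder domain (ocp.example.com):

```bash
#!/bin/bash
# Verify that this machine can resolve every other cluster member.
# Node names and domain below are placeholders; replace with your
# actual FQDNs, and run the script from each node in turn.
NODES="master0 master1 master2 worker0 worker1 worker2 worker3 worker4 \
infra0 infra1 infra2"

for node in $NODES; do
  if getent hosts "$node.ocp.example.com" > /dev/null; then
    echo "resolved:   $node.ocp.example.com"
  else
    echo "UNRESOLVED: $node.ocp.example.com"
  fi
done
```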

DNS requirements

In OpenShift Container Platform deployments, DNS name resolution is required for the following components:

• The Kubernetes API

• The OpenShift Container Platform application wildcard

• The bootstrap, control plane, and compute machines

Reverse DNS resolution is also required for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines.

DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the hostnames for all the nodes, unless the hostnames are provided by DHCP. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate.
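These records can be spot-checked with dig before installation. The sketch below assumes the standard api, api-int, and *.apps naming conventions; the cluster name, base domain, and node IP are placeholders:

```bash
#!/bin/bash
# Spot-check forward and reverse DNS. Substitute your own
# cluster name and base domain.
CLUSTER="ocp"
DOMAIN="example.com"

# Forward (A) records: Kubernetes API, internal API, and the
# application wildcard (any name under *.apps should resolve).
dig +short "api.$CLUSTER.$DOMAIN"
dig +short "api-int.$CLUSTER.$DOMAIN"
dig +short "test.apps.$CLUSTER.$DOMAIN"

# Reverse (PTR) record for a node: RHCOS uses these to set node
# hostnames, so each node IP needs a PTR entry.
dig +short -x 10.0.0.10   # replace with a real node IP
```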

 

Port requirements

The following ports must be available for the installation and configuration of an OpenShift cluster. Open these ports before you start installing OpenShift.

 

The following link lists the ports that need to be open:

https://docs.openshift.com/container-platform/4.10/installing/installing_vsphere/installing-restricted-networks-vsphere.html#installation-network-connectivity-user-infra_installing-restricted-networks-vsphere
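A quick way to validate connectivity against that matrix is a simple host:port loop, run from a node that should have access. The pairs below (API, etcd, kubelet) are illustrative only; build the full list from the linked documentation:

```bash
#!/bin/bash
# Check a sample of required node-to-node ports. Host:port pairs
# below are illustrative placeholders; derive the full list from
# the official port matrix for your topology.
CHECKS="
master0.ocp.example.com:6443
master0.ocp.example.com:2379
master0.ocp.example.com:10250
worker0.ocp.example.com:10250
"

for check in $CHECKS; do
  host="${check%:*}"
  port="${check#*:}"
  nc -z -w 5 "$host" "$port" \
    && echo "open:    $host:$port" \
    || echo "blocked: $host:$port"
done
```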

 

Other firewall requirements:

• Ports must be open from all cluster nodes to the NTP servers.

• In connected OpenShift Container Platform environments, all nodes require internet access to pull images for platform containers and to provide telemetry data to Red Hat.

• Ensure that ports are open between the proxy and the GLB for all OCP cluster nodes.
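A short sketch to verify the first two points, assuming chrony is used for NTP on the nodes; the registry endpoints shown are the ones commonly required for a connected install:

```bash
#!/bin/bash
# Verify NTP reachability and outbound internet access.

# NTP: each configured time source should show as reachable.
chronyc -n sources

# Internet access: nodes must be able to reach the image
# registries; a 200/301 response code indicates connectivity.
for url in https://quay.io https://registry.redhat.io; do
  curl -sS -o /dev/null -w "%{http_code} $url\n" "$url"
done
```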

