Designed for cloud infrastructures and web-scale object storage, Red Hat® Ceph Storage is a massively scalable, open, software-defined storage platform that combines the most stable version of Ceph with a Ceph management platform, deployment tools, and support services. Providing the tools to flexibly and cost-effectively manage petabyte-scale data deployments in the enterprise, Red Hat Ceph Storage manages cloud data so enterprises can focus on managing their businesses.

This document provides procedures for installing Red Hat Ceph Storage v1.2.3 for the x86_64 architecture on Ubuntu Precise and Ubuntu Trusty.

To simplify installation and to support deployment scenarios where security measures preclude direct Internet access, Red Hat Ceph Storage v1.2.3 is installed from a single software build delivered as an ISO with the ice_setup package, which installs the ice_setup script. When you execute the ice_setup script, it installs a local repository, the Calamari monitoring and administration server, and the Ceph installation scripts, including a configuration (.conf) file pointing ceph-deploy to the local repository.

We expect that you will have a dedicated administration node that will host the local repository and the Calamari monitoring and administration server. The following instructions assume you will install (or update) the repository on that dedicated administration node.

The administration/Calamari server hardware requirements vary with the size of your cluster. A minimum recommended hardware configuration for a Calamari server includes at least 4 GB of RAM, a dual-core CPU on the x86_64 architecture, and enough network throughput to handle communication with Ceph hosts. The hardware requirements scale linearly with the number of Ceph servers, so if you intend to run a fairly large cluster, ensure that you have enough RAM, processing power, and network throughput.

You MUST adjust your firewall settings on the Calamari node to allow inbound requests on port 80 so that clients in your network can access the Calamari web user interface. Calamari also communicates with Ceph nodes via ports 2003, 4505, and 4506, so you MUST open ports 80, 2003, and 4505-4506 on your Calamari node (replace <iface> with your network interface, and <ip-address>/<netmask> with the address and netmask of the subnet you are allowing):

sudo iptables -I INPUT 1 -i <iface> -p tcp -s <ip-address>/<netmask> --dport 80 -j ACCEPT
sudo iptables -I INPUT 1 -i <iface> -p tcp -s <ip-address>/<netmask> --dport 2003 -j ACCEPT
sudo iptables -I INPUT 1 -i <iface> -m multiport -p tcp -s <ip-address>/<netmask> --dports 4505:4506 -j ACCEPT

You MUST open port 6789 on your public network on ALL Ceph monitor nodes:

sudo iptables -I INPUT 1 -i <iface> -p tcp -s <ip-address>/<netmask> --dport 6789 -j ACCEPT

Finally, you MUST also open ports for OSD traffic (e.g., 6800-7100). Each OSD on each Ceph node needs three ports: one for talking to clients and monitors (public network); one for sending data to other OSDs (cluster network, if available; otherwise, public network); and one for heartbeating (cluster network, if available; otherwise, public network). For example, if you have 4 OSDs, open 4 x 3 ports (12):

sudo iptables -I INPUT 1 -i <iface> -m multiport -p tcp -s <ip-address>/<netmask> --dports 6800:6811 -j ACCEPT

Once you have finished configuring iptables, make the changes persistent on each node so that they remain in effect when your nodes reboot. On Ubuntu this is typically done with the iptables-persistent package: a terminal UI will open up, and you should select yes for the prompts to save the current IPv4 iptables rules to /etc/iptables/rules.v4 and the current IPv6 iptables rules to /etc/iptables/rules.v6. The IPv4 iptables rules that you set in the earlier steps will then be loaded from /etc/iptables/rules.v4 and will persist across reboots.
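The per-OSD port arithmetic above (three ports per OSD, starting at 6800) can be sketched as a small helper script. This is not part of the guide: the default interface, subnet, and OSD count are placeholders, and the script only prints the resulting iptables command rather than running it, so you can review it before applying it as root.

```shell
#!/bin/sh
# Hypothetical helper (not from the Red Hat guide): given an interface,
# a source subnet, and an OSD count, print the iptables rule that opens
# three consecutive ports per OSD starting at 6800.
IFACE="${1:-eth0}"                # placeholder interface name
SUBNET="${2:-192.168.0.0/24}"     # placeholder public/cluster subnet
NUM_OSDS="${3:-4}"                # number of OSDs on this node

FIRST=6800
LAST=$((FIRST + 3 * NUM_OSDS - 1))  # e.g., 4 OSDs -> 6800:6811 (12 ports)

echo "sudo iptables -I INPUT 1 -i $IFACE -m multiport -p tcp -s $SUBNET --dports ${FIRST}:${LAST} -j ACCEPT"
```

For 4 OSDs this prints a rule covering ports 6800:6811, matching the example command in the text.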
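The save prompts and the /etc/iptables/rules.v4 and rules.v6 paths described above correspond to Ubuntu's iptables-persistent package; a minimal sketch of the commands involved (run on each node, as an administrator):

```shell
# Install iptables-persistent; its debconf prompts ask whether to save
# the current IPv4 and IPv6 rules (answer yes, as described above).
sudo apt-get install -y iptables-persistent

# If you change rules later, they can be re-saved manually:
sudo sh -c 'iptables-save > /etc/iptables/rules.v4'
sudo sh -c 'ip6tables-save > /etc/iptables/rules.v6'
```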