diff --git a/_data/sidebars/docs_sidebar.yml b/_data/sidebars/docs_sidebar.yml index 7732774c..b8e4b8e7 100644 --- a/_data/sidebars/docs_sidebar.yml +++ b/_data/sidebars/docs_sidebar.yml @@ -32,6 +32,8 @@ entries: url: /concept_hci_volume_access_groups.html - title: Initiators url: /concept_hci_initiators.html + - title: Custom protection domains + url: /concept_hcc_custom_protection_domains.html # url: /concept_hci_storage.html - title: Licensing url: /concept_cg_hci_licensing.html diff --git a/docs/concept_hcc_custom_protection_domains.adoc b/docs/concept_hcc_custom_protection_domains.adoc new file mode 100644 index 00000000..dfa34c89 --- /dev/null +++ b/docs/concept_hcc_custom_protection_domains.adoc @@ -0,0 +1,32 @@ +--- +permalink: docs/concept_hcc_custom_protection_domains.html +sidebar: docs_sidebar +keywords: protection domain layout,user-defined,custom layout +summary: 'You can define a custom protection domain layout, where each node is associated with one and only one custom protection domain. By default, each node is assigned to the same default custom protection domain.' +--- += Custom protection domains +:icons: font +:imagesdir: ../media/ + +[.lead] +You can define a custom protection domain layout, where each node is associated with one and only one custom protection domain. By default, each node is assigned to the same default custom protection domain. + +If no custom protection domains are assigned: + +* Cluster operation is unaffected. +* Custom level is neither tolerant nor resilient. + +If more than one custom protection domain is assigned, each subsystem will assign duplicates to separate custom protection domains. If this is not possible, it reverts to assigning duplicates to separate nodes. Each subsystem (for example, bins, slices, protocol endpoint providers, and ensemble) does this independently. + +NOTE: Using custom protection domains assumes that no nodes share a chassis. 
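As a hedged illustration, a protection domain layout query could be issued against the cluster's Element JSON-RPC endpoint. This sketch only builds the request (it does not send it); the `https://<mvip>/json-rpc/<version>` endpoint shape follows the usual Element API convention, and the cluster address and API version shown are placeholder assumptions:

```python
import json

def element_rpc_request(mvip, method, params=None, api_version="12.3", req_id=1):
    """Build (but do not send) an Element JSON-RPC request.

    The endpoint shape https://<mvip>/json-rpc/<version> is the standard
    Element API convention; mvip and api_version here are assumptions.
    """
    url = f"https://{mvip}/json-rpc/{api_version}"
    body = {"method": method, "params": params or {}, "id": req_id}
    return url, json.dumps(body)

# GetProtectionDomainLayout is one of the methods this feature exposes.
url, body = element_rpc_request("10.0.0.10", "GetProtectionDomainLayout")
print(url)   # https://10.0.0.10/json-rpc/12.3
print(body)
```

Sending the request additionally requires cluster administrator credentials, which are omitted from this sketch.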
+ +The following Element API methods expose these new protection domains: + +* GetProtectionDomainLayout - shows which chassis and which custom protection domain each node is in. +* SetProtectionDomainLayout - allows a custom protection domain to be assigned to each node. + +Contact NetApp support for further details on using custom protection domains. + +== Find more information + +link:api/index.html[Manage storage with the Element API] diff --git a/docs/reference_hcc_config_maximums.adoc b/docs/reference_hcc_config_maximums.adoc index 3200c087..4b3d591d 100644 --- a/docs/reference_hcc_config_maximums.adoc +++ b/docs/reference_hcc_config_maximums.adoc @@ -23,4 +23,4 @@ If you exceed these tested maximums, you might experience issues with NetApp Hyb .Configuration maximums -NetApp Hybrid Cloud Control supports VMware vSphere environments with up to 100 ESXi hosts and 1000 virtual machines (comparable to a small vCenter Server Appliance configuration). +NetApp Hybrid Cloud Control supports VMware vSphere environments with up to 500 NetApp compute nodes. It supports up to 20 NetApp Element Software based storage clusters. diff --git a/docs/reference_notices_hci.adoc b/docs/reference_notices_hci.adoc index af2a62f7..c746aa35 100644 --- a/docs/reference_notices_hci.adoc +++ b/docs/reference_notices_hci.adoc @@ -49,6 +49,7 @@ Notice files provide information about third-party copyright and licenses. 
* link:../media/storage_firmware_bundle_2.27_notices.pdf[Notice for Storage Firmware Bundle 2.27^] * link:../media/compute_iso_notice.pdf[Notice for compute firmware ISO^] * link:../media/H610S_BMC_notice.pdf[Notice for H610S BMC^] +* link:../media/2.18_notice.pdf[Notice for Management Services 2.18.91^] * link:../media/2.17.56_notice.pdf[Notice for Management Services 2.17.56^] * link:../media/2.17_notice.pdf[Notice for Management Services 2.17.52^] * link:../media/2.16_notice.pdf[Notice for Management Services 2.16^] diff --git a/docs/rn_relatedrn.adoc b/docs/rn_relatedrn.adoc index 255d8872..56653d52 100644 --- a/docs/rn_relatedrn.adoc +++ b/docs/rn_relatedrn.adoc @@ -37,6 +37,7 @@ NOTE: You will be prompted to log in using your NetApp Support Site credentials. * https://kb.netapp.com/Advice_and_Troubleshooting/Data_Storage_Software/Management_services_for_Element_Software_and_NetApp_HCI/Management_Services_Release_Notes[Management Services Release Notes] == NetApp Element Plug-in for vCenter Server +* https://library.netapp.com/ecm/ecm_download_file/ECMLP2876748[vCenter Plug-in 4.7 Release Notes^] * https://library.netapp.com/ecm/ecm_download_file/ECMLP2874631[vCenter Plug-in 4.6 Release Notes] * https://library.netapp.com/ecm/ecm_download_file/ECMLP2873396[vCenter Plug-in 4.5 Release Notes] * https://library.netapp.com/ecm/ecm_download_file/ECMLP2866569[vCenter Plug-in 4.4 Release Notes] diff --git a/docs/task_hcc_edit_bmc_info.adoc b/docs/task_hcc_edit_bmc_info.adoc index 68dced76..f36d54e2 100644 --- a/docs/task_hcc_edit_bmc_info.adoc +++ b/docs/task_hcc_edit_bmc_info.adoc @@ -20,7 +20,7 @@ You can change Baseboard Management Controller (BMC) administrator credentials i Cluster administrator permissions to change BMC credentials. -NOTE: If you set BMC credentials during a health check, there can be a delay of up to 15 minutes before the change is reflected on the *Nodes* page. 
+NOTE: If you set BMC credentials during a health check, there can be a delay of up to 2 minutes before the change is reflected on the *Nodes* page. .Options diff --git a/docs/task_hcc_manage_accounts.adoc b/docs/task_hcc_manage_accounts.adoc index 7376227a..46c80e8e 100644 --- a/docs/task_hcc_manage_accounts.adoc +++ b/docs/task_hcc_manage_accounts.adoc @@ -89,7 +89,7 @@ If you configured users on the storage cluster with LDAP, those accounts show a . Log out of NetApp Hybrid Cloud Control. . link:task_mnode_manage_storage_cluster_assets.html#edit-the-stored-credentials-for-a-storage-cluster-asset[Update the credentials] for the authoritative cluster asset using the NetApp Hybrid Cloud Control API. + -NOTE: It might take the NetApp Hybrid Cloud Control UI up to 15 minutes to refresh the inventory. To manually refresh inventory, access the REST API UI inventory service `https://[management node IP]/inventory/1/` and run `GET /installations​/{id}` for the cluster. +NOTE: It might take the NetApp Hybrid Cloud Control UI up to 2 minutes to refresh the inventory. To manually refresh inventory, access the REST API UI inventory service `https://[management node IP]/inventory/1/` and run `GET /installations​/{id}` for the cluster. . Log into NetApp Hybrid Cloud Control. diff --git a/docs/task_hcc_manage_storage_clusters.adoc b/docs/task_hcc_manage_storage_clusters.adoc index 56320fab..f5b5ea16 100644 --- a/docs/task_hcc_manage_storage_clusters.adoc +++ b/docs/task_hcc_manage_storage_clusters.adoc @@ -48,7 +48,7 @@ NOTE: Only remote storage clusters that are not currently managed by a managemen . Select *Add*. + -NOTE: After you add the storage cluster, the cluster inventory can take up to 15 minutes to refresh and display the new addition. You might need to refresh the page in your browser to see the changes. +NOTE: After you add the storage cluster, the cluster inventory can take up to 2 minutes to refresh and display the new addition. 
You might need to refresh the page in your browser to see the changes. . If you are adding Element eSDS clusters, enter or upload your SSH private key and SSH user account. diff --git a/docs/task_hcc_manage_vol_management.adoc b/docs/task_hcc_manage_vol_management.adoc index 78082396..24b3f27c 100644 --- a/docs/task_hcc_manage_vol_management.adoc +++ b/docs/task_hcc_manage_vol_management.adoc @@ -24,11 +24,11 @@ You can manage volumes in NetApp Hybrid Cloud Control in the following ways: * <> * <> * <> +* <> * <> * <> * <> - == Create a volume You can create a storage volume using NetApp Hybrid Cloud Control. @@ -47,16 +47,19 @@ NOTE: The default volume size selection is in GB. You can create volumes using s 1GiB = 1 073 741 824 bytes . Select a block size for the volume. -. From the Account list, select the account that should have access to the volume. +. From the *Account* list, select the account that should have access to the volume. + -If an account does not exist, click *Create New Account*, enter a new account name, and click *Create*. The account is created and associated with the new volume. +If an account does not exist, click *Create New Account*, enter a new account name, and click *Create Account*. The account is created and associated with the new volume in the *Account* list. + NOTE: If there are more than 50 accounts, the list does not appear. Begin typing and the auto-complete feature displays values for you to choose. -. To set the Quality of Service, do one of the following: -.. Select an existing QoS policy. -.. Under QoS Settings, set customized minimum, maximum, and burst values for IOPS or use the default QoS values. +. To configure the Quality of Service for the volume, do one of the following: + +* Under *Quality of Service Settings*, set customized minimum, maximum, and burst values for IOPS or use the default QoS values. 
+* Select an existing QoS policy by enabling the *Assign Quality of Service Policy* toggle and choosing an existing QoS policy from the resulting list. +* Create and assign a new QoS policy by enabling the *Assign Quality of Service Policy* toggle and clicking *Create New QoS Policy*. In the resulting window, enter a name for the QoS policy and then enter QoS values. When finished, click *Create Quality of Service Policy*. + + Volumes that have a Max or Burst IOPS value greater than 20,000 IOPS might require high queue depth or multiple sessions to achieve this level of IOPS on a single volume. @@ -64,23 +67,17 @@ Volumes that have a Max or Burst IOPS value greater than 20,000 IOPS might requi == Apply a QoS policy to a volume -You can apply a QoS policy to an existing storage volume by using NetApp Hybrid Cloud Control. - +You can apply a QoS policy to existing storage volumes by using NetApp Hybrid Cloud Control. If instead you need to set custom QoS values for a volume, you can <>. To create a new QoS policy, see link:task_hcc_qos_policies.html[Create and manage volume QoS policies^]. .Steps . Log in to NetApp Hybrid Cloud Control by providing the NetApp HCI or Element storage cluster administrator credentials. . From the Dashboard, expand the name of your storage cluster on the left navigation menu. . Select *Volumes* > *Overview*. -. In the *Actions* column in the volumes table, expand the menu for the volume and select *Edit*. -. Change the Quality of Service by doing one of the following: -.. Select an existing policy. -.. Under Custom Settings, set the minimum, maximum, and burst values for IOPS or use the default values. -+ -NOTE: If you are using QoS policies on a volume, you can set custom QoS to remove the QoS policy affiliation with the volume. Custom QoS override QoS policy values for volume QoS settings. +. Select one or more volumes to associate with a QoS policy. +. 
Click the *Actions* drop-down list at the top of the volumes table, and select *Apply QoS Policy*. +. In the resulting window, select a QoS policy from the list and click *Apply QoS Policy*. + -TIP: When you change IOPS values, increment in tens or hundreds. Input values require valid whole numbers. Configure volumes with an extremely high burst value. This enables the system to process occasional large block, sequential workloads more quickly, while still constraining the sustained IOPS for a volume. - -. Select *Save*. +NOTE: If you are using QoS policies on a volume, you can set custom QoS to remove the QoS policy affiliation with the volume. Custom QoS values override QoS policy values for volume QoS settings. == Edit a volume @@ -155,36 +152,36 @@ NOTE: Cloned volumes do not inherit volume access group membership from the sour . Log in to NetApp Hybrid Cloud Control by providing the NetApp HCI or Element storage cluster administrator credentials. . From the Dashboard, expand the name of your storage cluster on the left navigation menu. . Select the *Volumes* > *Overview* tab. -. Select each volume you want to clone and click the *Clone* button that appears. -. Do one of the following: -* To clone a single volume, perform the following steps: -.. In the *Clone Volume* dialog box, enter a volume name for the volume clone. +. Select each volume you want to clone. +. Click the *Actions* drop-down list at the top of the volumes table, and select *Clone*. +. In the resulting window, do the following: + +.. Enter a volume name prefix (this is optional). +.. Choose the access type from the *Access* list. +.. Choose an account to associate with the new volume clone (by default, *Copy from Volume* is selected, which will use the same account that the original volume uses). +.. If an account does not exist, click *Create New Account*, enter a new account name, and click *Create Account*. The account is created and associated with the volume. 
+ TIP: Use descriptive naming best practices. This is especially important if multiple clusters or vCenter Servers are used in your environment. -.. Select an account access level: -** Read Only -** Read/Write -** Locked -** Replication Target - -.. Select a size in GB or GIB for the volume clone. + NOTE: Increasing the volume size of a clone results in a new volume with additional free space at the end of the volume. Depending on how you use the volume, you may need to extend partitions or create new partitions in the free space to make use of it. -.. Select an account to associate with the volume clone. + -If an account does not exist, click *Create New Account*, enter a new account name, and click *Create*. The account is created and associated with the volume. - .. Click *Clone Volumes*. +NOTE: The time to complete a cloning operation is affected by volume size and current cluster load. Refresh the page if the cloned volume does not appear in the volume list. -* To clone multiple volumes, perform the following steps: -.. In the *Clone Volumes* dialog box, enter an optional prefix for the volume clones in the *New Volume Name Prefix* field. -.. Select a new type of access for the volume clones or copy the access type from the active volumes. -.. Select a new account to associate with the volume clones or copy the account association from the active volumes. -.. Click *Clone Volumes*. +== Add volumes to a volume access group +You can add a single volume or a group of volumes to a volume access group. + +.Steps +. Log in to NetApp Hybrid Cloud Control by providing the NetApp HCI or Element storage cluster administrator credentials. +. From the Dashboard, expand the name of your storage cluster on the left navigation menu. +. Select *Volumes* > *Overview*. +. Select one or more volumes to associate with a volume access group. +. Click the *Actions* drop-down list at the top of the volumes table, and select *Add to Access Group*. +. 
In the resulting window, select a volume access group from the *Volume Access Group* list. +. Click *Add Volume*. -NOTE: The time to complete a cloning operation is affected by volume size and current cluster load. Refresh the page if the cloned volume does not appear in the volume list. == Delete a volume You can delete one or more volumes from an Element storage cluster. @@ -202,11 +199,8 @@ IMPORTANT: Persistent volumes that are associated with management services are c . From the Dashboard, expand the name of your storage cluster on the left navigation menu. . Select *Volumes* > *Overview*. . Select one or more volumes to delete. -. Do one of the following: -+ -* If you selected multiple volumes, click the *Delete* quick filter at the top of the table. -* If you selected a single volume, in the *Actions* column of the Volumes table, expand the menu for the volume and select *Delete*. -. Confirm the delete by selecting *Yes*. +. Click the *Actions* drop-down list at the top of the volumes table, and select *Delete*. +. In the resulting window, confirm the action by clicking *Yes*. == Restore a deleted volume After a storage volume is deleted, you can still restore it if you do so before eight hours after deletion. diff --git a/docs/task_hcc_nodes.adoc b/docs/task_hcc_nodes.adoc index 8a37d7b0..4475f352 100644 --- a/docs/task_hcc_nodes.adoc +++ b/docs/task_hcc_nodes.adoc @@ -16,7 +16,7 @@ keywords: netapp, hci, on premise, cluster, inventory, nodes, storage, compute [.lead] You can view both your storage and compute assets in your system and determine their IP addresses, names, and software versions. -You can view storage information for your multiple node systems and any NetApp HCI Witness Nodes associated with two-node or three-node clusters. +You can view storage information for your multiple node systems and any NetApp HCI Witness Nodes associated with two-node or three-node clusters. 
If link:concept_hcc_custom_protection_domains.html[custom protection domains^] are assigned, you can see which protection domains are assigned to specific nodes. Witness Nodes manage quorum within the cluster; they are not used for storage. Witness Nodes are applicable only to NetApp HCI and not to all-flash storage environments. For more information about Witness Nodes, see link:concept_hci_nodes.html[Nodes definitions]. diff --git a/docs/task_hcc_update_management_services.adoc b/docs/task_hcc_update_management_services.adoc index 25d6c8f4..626bcc61 100644 --- a/docs/task_hcc_update_management_services.adoc +++ b/docs/task_hcc_update_management_services.adoc @@ -59,7 +59,7 @@ NOTE: For a list of available services for each service bundle version, see the + The Management Services tab shows the current and available versions of management services software. + -NOTE: If your installation cannot access the internet, only the current software version is shown. +NOTE: If your installation cannot access the internet, only the current software version is shown. If you have external connectivity but NetApp HCI is unable to access the NetApp online repository, check your link:task_mnode_configure_proxy_server.html[proxy configuration^]. . If your installation can access the internet and if a management services upgrade is available, click *Begin Upgrade*. . If your installation cannot access the internet, do the following: diff --git a/docs/task_hcc_upgrade_element_prechecks.adoc b/docs/task_hcc_upgrade_element_prechecks.adoc index aeb4be79..6875448a 100644 --- a/docs/task_hcc_upgrade_element_prechecks.adoc +++ b/docs/task_hcc_upgrade_element_prechecks.adoc @@ -151,7 +151,7 @@ If all health checks are successful, the return is similar to the following exam You can verify that the storage cluster is ready to be upgraded by using the `sfupgradecheck` command. This command verifies information such as pending nodes, disk space, and cluster faults. 
-If your management node is at a dark site, the upgrade readiness check needs the `metadata.json` file you downloaded during link:task_upgrade_element_latest_healthtools.html[HealthTools upgrades] to run successfully. +If your management node is at a dark site without external connectivity, the upgrade readiness check needs the `metadata.json` file you downloaded during link:task_upgrade_element_latest_healthtools.html[HealthTools upgrades] to run successfully. .About this task diff --git a/docs/task_hcc_upgrade_element_software.adoc b/docs/task_hcc_upgrade_element_software.adoc index acb69b83..58100bd2 100644 --- a/docs/task_hcc_upgrade_element_software.adoc +++ b/docs/task_hcc_upgrade_element_software.adoc @@ -132,7 +132,7 @@ Here are the different states that the *Upgrade Status* column in the UI shows b |An error has occurred during the upgrade. You can download the error log and send it to NetApp Support. After you resolve the error, you can return to the page, and click *Resume*. When you resume the upgrade, the progress bar goes backwards for a few minutes while the system runs the health check and checks the current state of the upgrade. |Unable to Detect -|NetApp Hybrid Cloud Control shows this status instead of *Versions Available* when it does not have external connectivity to reach the online software repository. +|NetApp Hybrid Cloud Control shows this status instead of *Versions Available* when it does not have external connectivity to reach the online software repository. If you have external connectivity but still see this message, check your link:task_mnode_configure_proxy_server.html[proxy configuration^]. |Complete with Follow-up |Only for H610S nodes upgrading from Element version earlier than 11.8. 
After phase 1 of the upgrade process is complete, this state prompts you to perform phase 2 of the upgrade (see the https://kb.netapp.com/Advice_and_Troubleshooting/Hybrid_Cloud_Infrastructure/H_Series/NetApp_H610S_storage_node_power_off_and_on_procedure[KB article^]). After you complete phase 2 and acknowledge that you have completed it, the status changes to *Up to Date*. @@ -529,7 +529,7 @@ Starting light cluster block service check IMPORTANT: If you are upgrading an H610S series node to Element 12.3 and the node is running a version of Element earlier than 11.8, you will need to perform additional upgrade steps (<>) for each storage node. If you are running Element 11.8 or later, the additional upgrade steps (phase 2) are not required. == Upgrade Element software at dark sites using HealthTools -You can use the HealthTools suite of tools to update NetApp Element software at a dark site. +You can use the HealthTools suite of tools to update NetApp Element software at a dark site that has no external connectivity. .What you'll need diff --git a/docs/task_hcc_upgrade_management_node.adoc b/docs/task_hcc_upgrade_management_node.adoc index d4fc214f..6a0d8440 100644 --- a/docs/task_hcc_upgrade_management_node.adoc +++ b/docs/task_hcc_upgrade_management_node.adoc @@ -195,7 +195,7 @@ NOTE: The script retains previous management services configuration, including c sudo /sf/packages/mnode/redeploy-mnode -mu ---- -IMPORTANT: If you had previously disabled SSH functionality on the management node, you need to link:task_mnode_ssh_management.html[disable SSH again] on the recovered management node. SSH capability that provides link:task_mnode_enable_remote_support_connections.html[NetApp Support remote support tunnel (RST) session access] is enabled on the management node by default. 
+IMPORTANT: SSH capability that provides link:task_mnode_enable_remote_support_connections.html[NetApp Support remote support tunnel (RST) session access] is disabled by default on management nodes running management services 2.18 and later. If you had previously enabled SSH functionality on the management node, you might need to link:task_mnode_ssh_management.html[disable SSH again] on the upgraded management node. == Upgrade a management node to version 12.3 from 11.3 through 11.8 @@ -275,7 +275,7 @@ NOTE: The script retains previous management services configuration, including c sudo /sf/packages/mnode/redeploy-mnode -mu ---- -IMPORTANT: If you had previously disabled SSH functionality on the management node, you need to link:task_mnode_ssh_management.html[disable SSH again] on the recovered management node. SSH capability that provides link:task_mnode_enable_remote_support_connections.html[NetApp Support remote support tunnel (RST) session access] is enabled on the management node by default. +IMPORTANT: SSH capability that provides link:task_mnode_enable_remote_support_connections.html[NetApp Support remote support tunnel (RST) session access] is disabled by default on management nodes running management services 2.18 and later. If you had previously enabled SSH functionality on the management node, you might need to link:task_mnode_ssh_management.html[disable SSH again] on the upgraded management node. == Upgrade a management node to version 12.3 from 11.1 or 11.0 You can perform an in-place upgrade of the management node from 11.0 or 11.1 to version 12.3 without needing to provision a new management node virtual machine. 
diff --git a/docs/task_hcc_upgrade_storage_firmware.adoc b/docs/task_hcc_upgrade_storage_firmware.adoc index 334b85e2..6c018777 100644 --- a/docs/task_hcc_upgrade_storage_firmware.adoc +++ b/docs/task_hcc_upgrade_storage_firmware.adoc @@ -51,7 +51,7 @@ You can use the NetApp Hybrid Cloud Control UI to upgrade the firmware of the st CAUTION: For potential issues while upgrading storage clusters using NetApp Hybrid Cloud Control and their workarounds, see the https://kb.netapp.com/Advice_and_Troubleshooting/Hybrid_Cloud_Infrastructure/NetApp_HCI/Potential_issues_and_workarounds_when_running_storage_upgrades_using_NetApp_Hybrid_Cloud_Control[KB article^]. -TIP: The upgrade process takes approximately 30 minutes per node. +TIP: The upgrade process takes approximately 30 minutes per storage node. If you are upgrading an Element storage cluster to storage firmware newer than version 2.76, individual storage nodes will only reboot during the upgrade if new firmware was written to the node. .Steps diff --git a/docs/task_hci_h410srepl.adoc b/docs/task_hci_h410srepl.adoc index ec04b283..9fe2eba3 100644 --- a/docs/task_hci_h410srepl.adoc +++ b/docs/task_hci_h410srepl.adoc @@ -216,7 +216,7 @@ The cluster membership changes from Available to Pending. . Select *Pending* from the drop-down list to view the list of available nodes. . Select the node you want to add, and select *Add*. + -NOTE: It might take up to 15 minutes for the node to be added to the cluster and displayed under Nodes > Active. +NOTE: It might take up to 2 minutes for the node to be added to the cluster and displayed under Nodes > Active. + IMPORTANT: Adding the drives all at once can lead to disruptions. For best practices related to adding and removing drives, see https://kb.netapp.com/Advice_and_Troubleshooting/Data_Storage_Software/Element_Software/What_is_the_best_practice_on_adding_or_removing_drives_from_a_cluster_on_Element%3F[this KB article] (login required). . Select *Drives*. 
diff --git a/docs/task_hci_hserieschassisrepl.adoc b/docs/task_hci_hserieschassisrepl.adoc index e61a5e52..0d82c042 100644 --- a/docs/task_hci_hserieschassisrepl.adoc +++ b/docs/task_hci_hserieschassisrepl.adoc @@ -136,7 +136,7 @@ CAUTION: Ensure that you do not force the cables into the ports; you might damag . After the node powers on, add the node to the cluster. + -NOTE: It might take up to 15 minutes for the node to get added and be displayed under *Nodes > Active*. +NOTE: It might take up to 2 minutes for the node to get added and be displayed under *Nodes > Active*. . Add the drives. . Perform these steps for all the storage nodes in the chassis. @@ -232,7 +232,7 @@ a| . In the Element plug-in for vCenter server, select *Cluster > Nodes > Pending*. . Select the node, and select *Add*. + -NOTE: It might take up to 15 minutes for the node to get added and be displayed under *Nodes > Active*. +NOTE: It might take up to 2 minutes for the node to get added and be displayed under *Nodes > Active*. . Select *Drives*. . From the Available list, add the drives. diff --git a/docs/task_mnode_enable_activeIQ.adoc b/docs/task_mnode_enable_activeIQ.adoc index 2ecdd3fc..524eee0f 100644 --- a/docs/task_mnode_enable_activeIQ.adoc +++ b/docs/task_mnode_enable_activeIQ.adoc @@ -21,7 +21,7 @@ The Active IQ collector service forwards configuration data and Element software .Before you begin * Your storage cluster is running NetApp Element software 11.3 or later. * You have deployed a management node running version 11.3 or later. -* You have internet access. The Active IQ collector service cannot be used from dark sites. +* You have internet access. The Active IQ collector service cannot be used from dark sites that do not have external connectivity. .Steps . 
Get the base asset ID for the installation: diff --git a/docs/task_mnode_enable_remote_support_connections.adoc b/docs/task_mnode_enable_remote_support_connections.adoc index c9b90b33..49737b17 100644 --- a/docs/task_mnode_enable_remote_support_connections.adoc +++ b/docs/task_mnode_enable_remote_support_connections.adoc @@ -16,9 +16,13 @@ keywords: netapp, mnode, management node, connect to support, support tunnel, rs [.lead] If you require technical support for your NetApp HCI or SolidFire all-flash storage system, NetApp Support can connect remotely with your system. To start a session and gain remote access, NetApp Support can open a reverse Secure Shell (SSH) connection to your environment. -.About this task -You can open a TCP port for an SSH reverse tunnel connection with NetApp Support. This connection enables NetApp Support to log in to your management node. If your management node is behind a proxy server, the following TCP ports are required in the sshd.config file: +You can open a TCP port for an SSH reverse tunnel connection with NetApp Support. This connection enables NetApp Support to log in to your management node. +.Before you begin +* For management services 2.18 and later, the capability for remote access is disabled on the management node by default. To enable remote access functionality, see link:task_mnode_ssh_management.html[Manage SSH functionality on the management node]. + +* If your management node is behind a proxy server, the following TCP ports are required in the sshd.config file: ++ [cols=3*,options="header",cols="15,25,60"] |=== | TCP port @@ -28,8 +32,6 @@ You can open a TCP port for an SSH reverse tunnel connection with NetApp Support | 22 | SSH login access | Management node to storage nodes or from storage nodes to management node |=== -NOTE: By default, the capability for remote access is enabled on the management node. 
To disable remote access functionality, see link:task_mnode_ssh_management.html[Manage SSH functionality on the management node]. You can enable remote access functionality again, if needed. - .Steps * Log in to your management node and open a terminal session. * At a prompt, enter the following: @@ -40,6 +42,9 @@ NOTE: By default, the capability for remote access is enabled on the management + `rst --killall` +* (Optional) Disable link:task_mnode_ssh_management.html[remote access functionality] again. ++ +NOTE: SSH remains enabled if you do not disable it. SSH enabled configuration persists on the management node through updates and upgrades until it is manually disabled. [discrete] == Find more information diff --git a/docs/task_mnode_multi_vcenter_config.adoc b/docs/task_mnode_multi_vcenter_config.adoc index cef820d9..f4fb8757 100644 --- a/docs/task_mnode_multi_vcenter_config.adoc +++ b/docs/task_mnode_multi_vcenter_config.adoc @@ -34,7 +34,7 @@ NOTE: You might need to link:task_hcc_edit_bmc_info.html[change BMC credentials https://[management node IP]/inventory/1/ ---- + -NOTE: As an alternative, you can wait 15 minutes for the inventory to update in NetApp Hybrid Cloud Control UI. +NOTE: As an alternative, you can wait 2 minutes for the inventory to update in NetApp Hybrid Cloud Control UI. .. Click *Authorize* and complete the following: ... Enter the cluster user name and password. diff --git a/docs/task_mnode_recover.adoc b/docs/task_mnode_recover.adoc index e3a6a4e8..7e4a2854 100644 --- a/docs/task_mnode_recover.adoc +++ b/docs/task_mnode_recover.adoc @@ -185,8 +185,7 @@ NOTE: You can add the user name or allow the script to prompt you for the inform .. Run the `redeploy-mnode` command. The script displays a success message when the redeployment is complete. .. 
If you access Element or NetApp HCI web interfaces (such as the management node or NetApp Hybrid Cloud Control) using the Fully Qualified Domain Name (FQDN) of the system, link:task_hcc_upgrade_management_node.html#reconfigure-authentication-using-the-management-node-rest-api[reconfigure authentication for the management node^]. -IMPORTANT: If you had previously disabled SSH functionality on the management node, you need to link:task_mnode_ssh_management.html[disable SSH again] on the recovered management node. SSH capability that provides link:task_mnode_enable_remote_support_connections.html[NetApp Support remote support tunnel (RST) session access] is enabled on the management node by default. - +IMPORTANT: SSH capability that provides link:task_mnode_enable_remote_support_connections.html[NetApp Support remote support tunnel (RST) session access] is disabled by default on management nodes running management services 2.18 and later. If you had previously enabled SSH functionality on the management node, you might need to link:task_mnode_ssh_management.html[disable SSH again] on the recovered management node. [discrete] == Find more Information diff --git a/docs/task_mnode_ssh_management.adoc b/docs/task_mnode_ssh_management.adoc index c0b7ee66..58da3abe 100644 --- a/docs/task_mnode_ssh_management.adoc +++ b/docs/task_mnode_ssh_management.adoc @@ -14,7 +14,7 @@ keywords: netapp, mnode, management node, ssh, disable, enable, rest api :imagesdir: ../media/ [.lead] -You can disable, re-enable, or determine the status of the SSH capability on the management node (mNode) using the REST API. SSH capability that provides link:task_mnode_enable_remote_support_connections.html[NetApp Support remote support tunnel (RST) session access] is enabled on the management node by default. +You can disable, re-enable, or determine the status of the SSH capability on the management node (mNode) using the REST API. 
SSH capability that provides link:task_mnode_enable_remote_support_connections.html[NetApp Support remote support tunnel (RST) session access] is disabled by default on management nodes running management services 2.18 or later. .What you'll need * *Cluster administrator permissions*: You have permissions as administrator on the storage cluster. @@ -29,7 +29,7 @@ You can do any of the following tasks after you link:task_mnode_api_get_authoriz * <> == Disable or enable the SSH capability on the management node -You can disable or re-enable SSH capability on the management node. SSH capability that provides link:task_mnode_enable_remote_support_connections.html[NetApp Support remote support tunnel (RST) session access] is enabled on the management node by default. Disabling SSH does not terminate or disconnect existing SSH client sessions to the management node. If you disable SSH and elect to re-enable it at a later time, you can do so using the same API. +You can disable or re-enable SSH capability on the management node. SSH capability that provides link:task_mnode_enable_remote_support_connections.html[NetApp Support remote support tunnel (RST) session access] is disabled by default on management nodes running management services 2.18 or later. Disabling SSH does not terminate or disconnect existing SSH client sessions to the management node. If you disable SSH and elect to re-enable it at a later time, you can do so using the same API. .API command ---- @@ -51,11 +51,11 @@ https:///mnode/ .. Close the window. . From the REST API UI, select *PUT /settings​/ssh*. .. Click *Try it out*. -.. Set the *enabled* parameter to `false` to disable SSH or `true` to re-enable SSH capability that you previously disabled. +.. Set the *enabled* parameter to `false` to disable SSH or `true` to re-enable SSH capability that was previously disabled. .. Click *Execute*. 
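The `PUT /settings​/ssh` step that this hunk documents can also be driven from a script instead of the REST API UI. The following is a minimal sketch that only builds the request; the management node IP, the bearer-token header, and the full `/mnode/settings/ssh` path are assumptions inferred from the REST API UI URL shown in the procedure, not a documented client:

```python
import json
from urllib import request

def build_ssh_settings_request(mnode_ip: str, enabled: bool, token: str) -> request.Request:
    """Build a PUT /settings/ssh request like the REST API UI step above.

    mnode_ip and token are placeholders; the /mnode/settings/ssh path is an
    assumption based on the REST API UI URL shown in this procedure.
    """
    req = request.Request(
        f"https://{mnode_ip}/mnode/settings/ssh",
        data=json.dumps({"enabled": enabled}).encode(),
        method="PUT",
    )
    req.add_header("Content-Type", "application/json")
    req.add_header("Authorization", f"Bearer {token}")
    return req

# Example: build (but do not send) a request that disables SSH.
req = build_ssh_settings_request("10.0.0.5", False, "<access-token>")
print(req.get_method())      # PUT
print(json.loads(req.data))  # {'enabled': False}
```

In a real session you would send the request with `urllib.request.urlopen(req)` after obtaining a valid bearer token, as described in the authorization step.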
== Determine status of the SSH capability on the management node -You can determine whether or not SSH capability is enabled on the management node using a management node service API. SSH is enabled by default on the management node. +You can determine whether SSH capability is enabled on the management node using a management node service API. SSH is disabled by default on management nodes running management services 2.18 or later. .API command ---- diff --git a/docs/task_post_deploy_credentials.adoc b/docs/task_post_deploy_credentials.adoc index 89745ba9..f2c55184 100644 --- a/docs/task_post_deploy_credentials.adoc +++ b/docs/task_post_deploy_credentials.adoc @@ -13,12 +13,10 @@ Depending on the security policies in the organization that deployed NetApp HCI If you change credentials for one component of a NetApp HCI or NetApp SolidFire deployment, the following table provides guidance as to the impact on other components. - NetApp HCI component interactions: image:../media/diagram_credentials_hci.png[NetApp HCI components] - [options="header",cols="10a,60a,30a"] |=== | Credential Type and Icon diff --git a/docs/task_rancher_upgrades.adoc b/docs/task_rancher_upgrades.adoc index cfa4cbf9..ad692848 100644 --- a/docs/task_rancher_upgrades.adoc +++ b/docs/task_rancher_upgrades.adoc @@ -43,7 +43,7 @@ Using the NetApp Hybrid Cloud Control UI, you can upgrade any of these component * Node OS security updates .What you'll need -* A good internet connection. Dark site upgrades are not available. +* A good internet connection. Dark site upgrades (upgrades at a site without external connectivity) are not available. .Steps @@ -65,7 +65,7 @@ https:// + NOTE: For node OS, unattended upgrades for security patches are run on a daily basis but the node is not rebooted automatically. By applying upgrades, you are rebooting each node for the security updates to take effect. -A banner appears indicating the component upgrade is successful.
There could be up to a 15 minute delay before NetApp Hybrid Cloud Control UI shows the updated version number. +A banner appears indicating the component upgrade is successful. There could be up to a 2-minute delay before NetApp Hybrid Cloud Control UI shows the updated version number. == Use NetApp Hybrid Cloud Control API to upgrade a Rancher deployment diff --git a/docs/task_vcp_upgrade_plugin.adoc b/docs/task_vcp_upgrade_plugin.adoc index 17d780f8..ba2f9207 100644 --- a/docs/task_vcp_upgrade_plugin.adoc +++ b/docs/task_vcp_upgrade_plugin.adoc @@ -2,7 +2,7 @@ sidebar: docs_sidebar permalink: docs/task_vcp_upgrade_plugin.html summary: As part of a NetApp HCI or SolidFire storage system upgrade, you can upgrade the Element Plug-in for vCenter Server. -keywords: netapp, vcp, vCenter plug-in, cluster, 4.4 upgrade, 4.5 upgrade, 4.6 upgrade +keywords: netapp, vcp, vCenter plug-in, upgrade, 4.4, 4.5, 4.6, 4.7 --- = Upgrade the Element Plug-in for vCenter Server @@ -20,11 +20,11 @@ You can update the plug-in registration on vCenter Server Virtual Appliance (vCS This upgrade procedure covers the following upgrade scenarios: -* You are upgrading to VCP 4.6, 4.5, or 4.4. +* You are upgrading to VCP 4.7, 4.6, 4.5, or 4.4. * You are upgrading to a 7.0, 6.7, or 6.5 HTML5 vSphere Web Client. * You are upgrading to a 6.7 or 6.5 Flash vSphere Web Client. -IMPORTANT: The plug-in is compatible with vSphere Web Client version 6.7 U2 for Flash, 6.7 U3 (Flash and HTML5), and 7.0 U1. The plug-in is not compatible with version 6.7 U2 of the HTML5 vSphere Web Client. For more information about supported vSphere versions, see the release notes for https://mysupport.netapp.com/documentation/productlibrary/index.html?productID=62701[your version of the plug-in]. +IMPORTANT: The plug-in is compatible with vSphere Web Client version 6.7 U2 for Flash, 6.7 U3 (Flash and HTML5), and 7.0 U1. The plug-in is not compatible with version 6.7 U2 of the HTML5 vSphere Web Client.
For more information about supported vSphere versions, see the release notes for https://docs.netapp.com/us-en/vcp/rn_relatedrn_vcp.html#netapp-element-plug-in-for-vcenter-server[your version of the plug-in]. .What you'll need @@ -80,8 +80,8 @@ NOTE: If the vCenter Plug-in icons are not visible, see Element Plug-in for vCen + You should see the following version details or details of a more recent version: ---- -NetApp Element Plug-in Version: 4.6 -NetApp Element Plug-in Build Number: 29 +NetApp Element Plug-in Version: 4.7 +NetApp Element Plug-in Build Number: 10 ---- NOTE: The vCenter Plug-in contains online Help content. To ensure that your Help contains the latest content, clear your browser cache after upgrading your plug-in. diff --git a/media/2.18_notice.pdf b/media/2.18_notice.pdf new file mode 100644 index 00000000..120ec668 Binary files /dev/null and b/media/2.18_notice.pdf differ diff --git a/media/diagram_credentials_hci.png b/media/diagram_credentials_hci.png index 95d10e32..591fdefe 100644 Binary files a/media/diagram_credentials_hci.png and b/media/diagram_credentials_hci.png differ
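Earlier in this change set, `GET /settings​/ssh` is described as the way to determine the SSH status on the management node. A caller could interpret the response as sketched below; the `{"enabled": ...}` response shape is a guess based on the `enabled` parameter of the matching PUT call, not a documented schema:

```python
import json

def ssh_is_enabled(response_body: str) -> bool:
    """Interpret a GET /settings/ssh response body.

    The {"enabled": ...} shape is assumed from the PUT parameter of the
    same name; the real response may carry additional fields.
    """
    return bool(json.loads(response_body).get("enabled", False))

print(ssh_is_enabled('{"enabled": false}'))  # False
```

With the management services 2.18 defaults described in this diff, a freshly deployed management node would be expected to report `false` until SSH is explicitly enabled.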