
When a device sends a pause frame to another device, the pause frame contains the time for which the other device must stop sending packets. The device that sent the pause frame drains its buffer below the threshold value and then resumes accepting data packets.
Dynamic ingress buffering enables pause frames to be sent at different thresholds based on the number of ports that experience congestion at a time. This behavior affects the total buffer size used by a particular lossless priority on an interface. The pause and resume thresholds can also be configured dynamically. You can configure a buffer size, pause threshold, ingress shared threshold weight, and resume threshold to control and manage the total amount of buffers used in your network environment.
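The following running-configuration sketch shows how such a buffer threshold profile might look. The profile name pfc-buffers, the priority number, and all values are placeholders, and the exact keywords (for example, resume-threshold versus resume-offset) and supported ranges should be verified against the command reference for your Dell Networking OS release.

! Illustrative sketch only -- names, keywords, and values are placeholders.
dcb-buffer-threshold pfc-buffers
 ! Priority 2: 100 KB buffer, pause at 50 KB, resume at 40 KB, shared weight 7
 priority 2 buffer-size 100 pause-threshold 50 resume-threshold 40 shared-threshold-weight 7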
Buer Sizes for Lossless or PFC Packets
You can configure up to a maximum of 64 lossless (PFC) queues. By configuring 64 lossless queues, you can configure multiple priorities and assign a particular priority to each application that your network processes. For example, you can assign a higher priority to time-sensitive applications and a lower priority to other services, such as file transfers. You can configure the amount of buffer space allocated to each priority and the pause or resume thresholds for the buffer. This method of configuration enables you to effectively manage and administer the behavior of lossless queues.
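As a sketch of how these pieces fit together, a DCB map can enable PFC on the priority group that carries time-sensitive traffic, and the buffer threshold profile can then be applied to an interface. The map name, profile name, interface, bandwidth percentages, and priority-to-group mapping shown are illustrative assumptions; confirm the syntax for your release.

! Illustrative sketch only -- names and values are placeholders.
dcb-map pfc-map
 ! Priority group 0 is lossless (PFC on) for time-sensitive traffic
 priority-group 0 bandwidth 60 pfc on
 ! Priority group 1 is lossy (PFC off) for services such as file transfers
 priority-group 1 bandwidth 40 pfc off
 ! Map priorities 0-2 to group 0 and priorities 3-7 to group 1
 priority-pgid 0 0 0 1 1 1 1 1
!
interface tengigabitethernet 1/1
 dcb-map pfc-map
 dcb-policy buffer-threshold pfc-buffers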
Although the system contains 4 MB of space for shared buffers, a minimum guaranteed buffer is provided to all the internal and external ports in the system for both unicast and multicast traffic. This minimum guaranteed buffer reduces the total available shared buffer to 3399 KB. This shared buffer can be used for lossy and lossless traffic.
By default, up to a maximum of 2656 KB of this shared buffer is used for PFC-related traffic, and the remaining space of approximately 744 KB can be used by lossy traffic. You can allocate all of the remaining 744 KB to lossless PFC queues, but doing so degrades the performance of lossy traffic. Although you can allocate a maximum buffer size, it is used only if a PFC priority is configured and applied on the interface.
The number of lossless queues supported on the system depends on the total buffer available for PFC. The default configuration guarantees a minimum of 9 KB (for 10G) per queue even if all 64 queues are congested. However, modifying the buffer allocation per queue affects this default behavior.
The default pause threshold is 9 KB for all interfaces.
This default behavior is also affected if you modify the total buffer available for PFC or assign static buffer configurations to the individual PFC queues.
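If the default split between lossless and lossy buffering does not suit your deployment, the total buffer reserved for PFC can be adjusted globally. The command shown below and the 2800 KB value are assumptions for illustration only; verify the actual command name and supported range in the command reference for your release before use.

! Illustrative assumption -- verify the exact command for your release.
dcb pfc-shared-buffer-size 2800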
Shared Headroom for Lossless or PFC Packets
In switches that require lossless frame delivery, a fixed buffer is set aside to absorb any bursty traffic that arrives after flow control (PFC in this case) is triggered. This extra buffer space is called the PG (priority group) headroom. This additional buffer space is reserved per PG on each ingress port. Because the buffer is reserved per ingress port and per PG, the total reserved headroom buffer is the sum of the PG headroom buffers reserved for all PGs configured across all ingress ports on the switch.
PG headroom is allocated conservatively to guarantee lossless operation in worst-case scenarios where large amounts of bursty traffic arrive at the ingress ports. However, allocating headroom buffer per PG and per ingress port can waste the reserved PG headroom buffer, because much of it may never be used.
To address this issue, Dell Networking OS enables you to configure a shared headroom buffer for the entire device. Each PG can use up to the peak headroom configured per PG as part of the buffer threshold profile. The traditional threshold for any in-flight or bursty traffic is still set per ingress port and per PG. While retaining the same ingress admission control capabilities, the headroom pool can also be used to manage the headroom buffer as a shared resource.
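A purely hypothetical sketch of such a configuration is shown below: a device-wide headroom pool plus a per-PG peak inside the buffer threshold profile. The keywords pfc-headroom-buffer-size and shared-headroom-peak, and all values, are hypothetical placeholders used only to illustrate the concept; consult the command reference for the actual shared headroom commands supported on your release.

! Hypothetical sketch -- command keywords below are placeholders.
dcb pfc-headroom-buffer-size 1000
dcb-buffer-threshold pfc-buffers
 ! Each PG may draw up to this peak from the shared headroom pool
 priority 2 buffer-size 100 pause-threshold 50 resume-threshold 40 shared-headroom-peak 60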