Juniper EX8200-40XS Data Sheet

Features and Benefits
The EX8200 PFE2 complex comprises two ASICs: the packet processor and the switch fabric interface. The hardware pipeline on the packet processor ASIC supports approximately 960 Mpps of Layer 2 and Layer 3 IPv4 and IPv6 traffic in the EX8208, and more than 1900 Mpps in the EX8216. Wire-speed performance is maintained regardless of packet size, from 64- to 9216-byte jumbo frames, across both L2 and L3 interfaces. Firewall (access control list) filtering, marking, and rate limiting also occur at wire rate, with up to 64,000 entries across L2-L4 packet headers that can be applied per port, per VLAN, and per routed interface. The packet processor ASIC also supports generic routing encapsulation (GRE) tunneling and three-label MPLS in hardware at line rate. Additional packet processor ASIC capabilities include multiple queues for CPU-bound control traffic to protect the Routing Engine from denial of service (DoS) attacks, and support for up to seven mirrored analyzer sessions directed to individual ports, VLANs, or tunneled interfaces.
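To make the filtering, marking, and rate-limiting capability concrete, the sketch below shows a minimal Junos firewall filter with an attached policer, applied inbound on a routed 10-Gigabit Ethernet port. The filter, term, and policer names are hypothetical, the counters and limits are illustrative only, and the exact hierarchy can vary by Junos release; the same kind of filter can also be applied per VLAN or per routed VLAN interface.

    firewall {
        policer ICMP-1M {
            /* hypothetical policer: limit matching traffic to 1 Mbps */
            if-exceeding {
                bandwidth-limit 1m;
                burst-size-limit 15k;
            }
            then discard;
        }
        family inet {
            filter EDGE-IN {
                term block-telnet {
                    from {
                        protocol tcp;
                        destination-port telnet;
                    }
                    then {
                        count telnet-drops;
                        discard;
                    }
                }
                term limit-icmp {
                    from {
                        protocol icmp;
                    }
                    then {
                        policer ICMP-1M;
                        accept;
                    }
                }
                term accept-rest {
                    then accept;
                }
            }
        }
    }
    interfaces {
        xe-0/0/0 {
            unit 0 {
                family inet {
                    /* applied per port here; VLANs and routed interfaces accept filters the same way */
                    filter {
                        input EDGE-IN;
                    }
                }
            }
        }
    }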
The switch fabric interface ASIC of the EX-PFE2 manages the large ingress and egress buffers that provide congestion avoidance and traffic prioritization. On ingress, each switch fabric interface queues packets based on destination, using dedicated high- and low-priority buffers for each wire-speed 10-Gigabit Ethernet egress port, or each group of 12 Gigabit Ethernet ports, in the system. These weighted random early detection (WRED) virtual output queues (up to 8,192 in an EX8216 chassis) prevent “head-of-line blocking” among ports on the same line card, ensuring complete independence of traffic flows among all 10-Gigabit Ethernet ports in the system.
The switch fabric interface also manages the transfer of data across the distributed, single-tier crossbar switch fabric. Data is evenly distributed across the fabric to balance traffic load and ensure graceful degradation of performance in the event of a non-redundant switch fabric failure. Multicast traffic is also balanced across the system using the same line-rate, binary-tree replication process as the Juniper Networks T Series Core Routers and the Juniper Networks MX Series 3D Universal Edge Routers, minimizing fabric congestion while reducing latency.
On egress, the switch fabric interface provides eight dedicated queues per port, mapped according to class of service (CoS) or DiffServ code point (DSCP) values. A WRED scheduler is used for congestion avoidance within each queue, while administrator-configured strict and weighted round-robin priority options are available between queues on a single port. Multicast traffic is managed independently of unicast traffic.
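As an illustrative sketch of how these queues are normally driven from configuration (the names below are hypothetical and statement details can differ by Junos release), a class-of-service stanza ties the pieces together: a DSCP classifier selects the forwarding class on ingress, a WRED drop profile provides congestion avoidance within a queue, and strict-high plus weighted scheduling arbitrates between the queues on an egress port.

    class-of-service {
        classifiers {
            dscp DSCP-IN {
                forwarding-class expedited-forwarding {
                    loss-priority low code-points ef;
                }
                forwarding-class best-effort {
                    loss-priority high code-points be;
                }
            }
        }
        drop-profiles {
            /* hypothetical segmented WRED profile */
            WRED-AGGRESSIVE {
                fill-level 40 drop-probability 50;
                fill-level 80 drop-probability 100;
            }
        }
        schedulers {
            SCHED-VOICE {
                priority strict-high;
                transmit-rate percent 20;
            }
            SCHED-DATA {
                priority low;
                transmit-rate percent 80;
                drop-profile-map loss-priority high protocol any drop-profile WRED-AGGRESSIVE;
            }
        }
        scheduler-maps {
            SMAP-EDGE {
                forwarding-class expedited-forwarding scheduler SCHED-VOICE;
                forwarding-class best-effort scheduler SCHED-DATA;
            }
        }
        interfaces {
            xe-0/0/0 {
                scheduler-map SMAP-EDGE;
                unit 0 {
                    classifiers {
                        dscp DSCP-IN;
                    }
                }
            }
        }
    }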
Total buffer size is 512 MB on each EX8200-8XS 10-Gigabit Ethernet port or each EX8200-40XS port group, and 42 MB on each EX8200-48T and EX8200-48F Gigabit Ethernet port, providing 50-100 ms of bandwidth delay buffering. These deep buffers and ingress and egress queuing mechanisms are critical to managing mission-critical data, handling bursty traffic, and limiting TCP/IP retries at the application level, freeing up bandwidth, reducing latency, and allowing a greater number of both unicast and multicast application flows across the network.
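As a rough sanity check rather than a platform specification, bandwidth delay buffering time is just buffer size divided by the rate at which the buffer drains. The 80 Gbps aggregate below is an assumed illustration of a buffer shared across a port group, not a quoted figure:

    T = \frac{8B}{R},
    \qquad\text{e.g.}\qquad
    T = \frac{8 \times 512\ \text{MB}}{80\ \text{Gb/s}}
      = \frac{4.096\ \text{Gb}}{80\ \text{Gb/s}}
      \approx 51\ \text{ms}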
All packets pass through the entire EX-PFE2 ingress pipeline, the switch fabric, and the EX-PFE2 egress pipeline. This consistency of packet processing ensures that the EX-PFE2 is capable of delivering port-to-port latencies of under 10 μs, regardless of ingress or egress port location.
Up to 255 link aggregation groups (LAGs) are supported, ensuring that the large number of high-density Gigabit Ethernet LAGs found in campus and data center core and aggregation deployments can be accommodated. Up to 12 ports may be bundled into a single LAG, allowing 120 Gbps logical interfaces to be created using a full L2-L4 hash algorithm for optimal load balancing. Ports in a LAG may be distributed across line cards within an EX8200 switch for an added level of resiliency. Automatic detection, recovery, and redistribution of LAG traffic in the event of a port, link, or line card failure are supported for highly reliable connections.
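A minimal Junos sketch of such a LAG is shown below, with two member ports intentionally placed on different line cards; the slot numbers, device count, and address are hypothetical, and LACP and hashing behavior can be tuned further.

    chassis {
        aggregated-devices {
            ethernet {
                device-count 64;
            }
        }
    }
    interfaces {
        /* member links on different line cards (slots 1 and 3) for resiliency */
        xe-1/0/0 {
            ether-options {
                802.3ad ae0;
            }
        }
        xe-3/0/0 {
            ether-options {
                802.3ad ae0;
            }
        }
        ae0 {
            aggregated-ether-options {
                lacp {
                    active;
                }
            }
            unit 0 {
                family inet {
                    address 192.0.2.1/30;
                }
            }
        }
    }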
Each line card contains a local CPU that is connected to the chassis' redundant Routing Engines over dedicated internal gigabit control-plane links. This CPU manages the local line card components, distributes forwarding table and other control plane data from the Routing Engine to the local EX-PFE2 ASICs, and returns line card status and CPU-directed control plane packets to the Routing Engine. A second processor resident on each line card aggregates flow-based statistics and analyzes sampled packets without impacting control plane performance. Finally, hot insertion and removal of all line cards are supported for online maintenance and support.
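The flow statistics function is typically exercised through packet sampling such as sFlow. The sketch below is a minimal, hypothetical example; the collector address and rates are placeholders, and the exact sample-rate hierarchy varies by Junos release.

    protocols {
        sflow {
            polling-interval 20;
            sample-rate {
                ingress 2048;
            }
            collector 198.51.100.10 {
                udp-port 6343;
            }
            interfaces xe-0/0/0.0;
        }
    }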