
Appendix A: Cisco Unified Communications Architecture Basics

This appendix provides a high-level overview of some of the basic architectural concepts and elements upon which the Cisco Unified Communications System is built.

Additional information regarding Voice over IP technologies is available at:
http://www.cisco.com/en/US/tech/tk652/tk701/tsd_technology_support_protocol_home.html

Overview

The Cisco Unified Communications System provides support for the transmission of voice, video, and data over a single, IP-based network, which enables companies to consolidate and streamline communications. The Cisco Unified Communications System is a key part of the Cisco Unified Communications Solution, which also includes network infrastructure, security, and network management products, wireless connectivity, third-party communications applications, and a lifecycle services approach for preparing, planning, designing, implementing, operating, and optimizing (PPDIOO) the system.

The Cisco Unified Communications System leverages an existing IP infrastructure (built on the Open System Interconnection [OSI] reference model) and adds support for voice- and video-related devices, features, and applications. Support for major signaling protocols, such as the Session Initiation Protocol (SIP), the Media Gateway Control Protocol (MGCP), and H.323, is provided, as is the ability to integrate with legacy voice and video networks.

Table A-1 shows the relationship between the OSI reference model and the voice and video protocols and functions of the Cisco Unified Communications System.

Table A-1    Voice and Video over IP in the OSI Reference Model

• Layer 7, Application—Voice: Unified IP Phone, Unified Personal Communicator, etc. Video: video endpoint, Unified Video Advantage, etc.
• Layer 6, Presentation—Voice: G.711, G.722, G.723, G.729. Video: H.261, H.263, H.264.
• Layer 5, Session—Voice and video signaling: SCCP, H.323, MGCP, SIP.
• Layer 4, Transport—Voice and video: RTP/UDP, TCP.

• Layer 3, Network—IP.
• Layer 2, Data Link—Frame Relay, ATM, Ethernet, PPP, MLP, and more.

Following this model:

• Layer 6—Digital signal processors (DSPs) compress/encode (decompress/decode) the voice or video signal using the chosen codec. The DSP then segments the compressed/encoded signal into frames and stores them in packets.
• Layer 5—The packets are transported in compliance with a signaling protocol, such as Skinny Client Control Protocol (SCCP), H.323, MGCP, or SIP.
• Layer 4—Signaling traffic (call setup and teardown) uses TCP as its transport medium. Media streams use Real-time Transport Protocol (RTP) over UDP as the transport protocol. RTP is used because it inserts timestamps and sequence numbers in each packet to enable synchronization at the receiving end. UDP is used because TCP would introduce delays (due to acknowledgements) that are not easily tolerated by real-time traffic. (A small packet-construction sketch follows the Voice over IP component list below.)
• Layer 3—The IP layer provides routing and network-level addressing.
• Layer 2—The data-link layer protocols control and direct the transmission of the information over the physical medium.

Voice over IP

In general, the components of a VoIP network fall into the following categories:

• Infrastructure—Provides the foundation for the transmission of voice over an IP network. In addition to routers and switches, this includes the interfaces, devices, and features necessary to integrate VoIP devices, legacy PBX, voicemail, and directory systems, and to connect to other VoIP and legacy telephony networks. Typical products used to build the infrastructure include Cisco voice gateways (non-routing, routing, and integrated), Cisco IOS and Catalyst switches, and Cisco routers, as well as security devices, such as firewalls, virtual private networks (VPNs), and intrusion detection systems. In addition, Quality of Service (QoS), high availability, and bandwidth provisioning (for WAN devices) should be deployed.
• Call processing—Provides signaling and call control services from the time a call is initiated until the time a call is terminated. The call processing component also provides feature services, such as call transfer and forwarding capabilities. In the Cisco Unified Communications System, call processing is performed by Cisco Unified Communications Manager or Communications Manager Express.
• Applications—Includes components that supplement basic call processing to provide users with a complete suite of communications options. Applications in the Cisco Unified Communications System include Cisco Unity voice messaging products, Cisco Unified MeetingPlace conference scheduling software, Cisco Emergency Responder, and applications that enhance the usability of the system and allow users to be more productive, such as Cisco Unified Presence.
• Voice-enabled endpoints—Includes IP phones, soft phones, wireless IP phones, and analog gateways, which provide access to the PSTN and enable interoperability with legacy telephony devices (such as a Plain Old Telephone Service [POTS] phone). For IP phones and softphones, the supported protocols are SCCP, H.323, and SIP. For gateways, the supported protocols are SCCP, H.323, SIP, and MGCP.
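As noted in the Layer 4 discussion above, RTP adds sequence numbers and timestamps to each media packet so the receiver can detect loss, reorder packets, and schedule play-out. The following sketch is purely illustrative and not taken from this guide; the destination address, port, SSRC value, and starting sequence number are made-up placeholders. It packs the fixed 12-byte RTP header defined in RFC 3550 and sends a few 20 ms G.711 mu-law frames over UDP:

    # Minimal sketch: build an RTP header (RFC 3550) for a G.711 mu-law (PCMU)
    # stream and send it over UDP. Destination address, port, and SSRC are
    # placeholders; a real stack would also randomize the initial sequence
    # number and timestamp.
    import socket
    import struct

    def rtp_header(seq, timestamp, ssrc, payload_type=0, marker=0):
        """Pack the fixed 12-byte RTP header: V=2, no padding/extension/CSRC."""
        byte0 = 2 << 6                      # version 2, P=0, X=0, CC=0
        byte1 = (marker << 7) | payload_type
        return struct.pack("!BBHII", byte0, byte1, seq, timestamp, ssrc)

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    ssrc = 0x12345678
    for seq in range(3):                    # three 20 ms packets
        payload = b"\xff" * 160             # 160 samples at 8 kHz = 20 ms of PCMU
        packet = rtp_header(seq, seq * 160, ssrc) + payload
        sock.sendto(packet, ("192.0.2.10", 16384))   # placeholder destination

A receiver uses the sequence numbers to detect loss and reordering and the timestamps to drive its jitter buffer and play-out clock, which is exactly the synchronization role described above.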

For a more in-depth discussion of Voice over IP, see Voice over IP Fundamentals from Cisco Press.

Video over IP

Typical IP videoconferencing components include:

• Gateways—Performs translation between different protocols, audio encoding formats, and video encoding formats that may be used by the various standards. The Cisco Unified Videoconferencing gateways enable conferences using H.323, H.320, SCCP, or SIP endpoints.
• Gatekeepers—Works with the call-processing component to provide management of H.323 endpoints. The gatekeeper handles all Registration, Admission, and Status (RAS) signaling, while the call-processing component handles all of the call signaling and media negotiations.
• Conference bridges—Enables conferencing between three or more participants. Video endpoints are generally point-to-point devices, allowing only two participants per conversation. A conference bridge or multipoint conference unit (MCU) is required to extend a video conference to three or more participants.
• Video-enabled endpoints—Includes stand-alone video terminals, IP phones with integrated video capabilities, and video conferencing software on a PC. These endpoints can be H.323, H.320, SCCP, or SIP.

For additional information about videoconferencing, see the IP Videoconferencing Solution Reference Network Design guide.

Fax over IP

Fax over IP enables the interworking of standard fax machines over packet-based networks. With fax over IP, the fax image is extracted from the analog signal and converted to digital data for transmission over the IP network.

The components of the Cisco Unified Communications System support three methods for transmitting fax over IP: real-time fax, store-and-forward fax, and fax pass-through.

• For real-time fax, Cisco supports Cisco fax relay and T.38 fax relay (from the International Telecommunications Union [ITU-T]). With this method, the DSP breaks down the fax tones from the sending fax machine into their specific frames (demodulation), transmits the information across the IP network using the fax relay protocol, and then converts the bits back into tones at the far side (modulation). The fax machines on either end send and receive tones as they would over the PSTN and are not aware that the information is actually going across an IP network.
• For store-and-forward fax, Cisco supports T.37 (from the ITU-T). With this method, the on-ramp gateway receives a fax from a traditional fax device and converts it into a Tagged Image File Format (TIFF) file attachment. The gateway then creates a standard Multipurpose Internet Mail Extension (MIME) e-mail message and attaches the TIFF file to the e-mail. The gateway forwards the e-mail, now called a fax mail, and its attachment to the messaging infrastructure of a designated Simple Mail Transfer Protocol (SMTP) server.
Store-and-forward fax allows fax transmissions to be stored and transmitted across a packet-based network in a bulk fashion, which allows faxes to use least-cost routing and enables faxes to be stored and transmitted when toll charges are more favorable. When using store-and-forward fax, however, the user must be willing to accept fax delays that range from seconds to hours, depending upon the particular method of deployment.

• For fax pass-through, fax data is not demodulated or compressed for its transit through the packet network. With this method, the fax traffic is carried between the two gateways in RTP packets using an uncompressed format resembling the G.711 codec. The gateway does not distinguish fax calls from voice calls.

VoIP Protocols

For signaling and call control, the Cisco Unified Communications System supports the Cisco-proprietary VoIP protocol, SCCP, as well as the major industry-standard protocols H.323, SIP, and MGCP. These protocols can be categorized as using either a client-server or a peer-to-peer model.

• The client-server model is similar to that used in traditional telephony, in which dumb endpoints (telephones) are controlled by centralized switches. With a client-server model, the majority of the intelligence resides in the centralized call processing component, which handles the switching logic and call control, and very little processing is done by the phone itself.
The advantages of the client-server model are that it centralizes management, provisioning, and call control; it simplifies call flows for replicating legacy voice features; it reduces the amount of memory and CPU required on the phone; and it is easier for legacy voice engineers to understand.
MGCP and SCCP are examples of client-server protocols.
• The peer-to-peer model allows network intelligence to be distributed between the endpoints and call-control components. Intelligence in this instance refers to call state, calling features, call routing, provisioning, billing, or any other aspect of call handling. The endpoints can be VoIP gateways, IP phones, media servers, or any device that can initiate and terminate a VoIP call.
The advantages of the peer-to-peer model are that it is more flexible, more scalable, and more easily understood by engineers who are accustomed to running IP data networks.
SIP and H.323 are examples of peer-to-peer protocols.

Table A-2    Protocols Supported by Cisco Unified Communications Components

SCCP—A proprietary protocol from Cisco Systems. SCCP uses the client-server model. Call control is provided by Cisco Unified Communications Manager or Communications Manager Express. Unified IP Phones run a "skinny" client, which requires very little processing to be done by the phone itself.
SCCP is supported by all Cisco IP Phones, by Cisco Unified Video Advantage, by many third-party video endpoints, and by select Cisco gateways.

MGCP—A protocol defined by the Internet Engineering Task Force (IETF) for controlling media gateways. MGCP uses the client-server model and is used primarily to communicate with gateways.
MGCP provides easier configuration and centralized management. It is supported by most Cisco gateways.

Appendix ACisco Unified Communications Architecture BasicsVoice and Video CodecsProtocolDescriptionSIPA recommendation from the Internet Engineering Task Force (IETF) formultimedia communications over LANs. SIP uses the peer-to-peer model. Callcontrol is provided through a SIP proxy or redirect server. In Cisco UnifiedCommunications Manager, SIP call control is provided through a built-inback-to-back user agent (B2BUA).SIP uses a simple messaging scheme and is highly scalable. It is supported by anincreasing number of Cisco IP phones, by a number of third-party video endpoints,and on the trunk side of many Cisco gateways.H.323The recommendation from the ITU-T for multimedia communications over LANs.H.323 uses the peer-to-peer model. It is based on the Integrated Services DigitalNetwork (ISDN) Q.931 protocol. Call control is provided through a gatekeeper.H.323 provides robust support for interfaces and interoperates easily with PSTNand SS7. It is supported by a number of third-party video endpoints and by mostCisco gateways.Voice and Video CodecsAs previously mentioned, codecs are used to encode and compress analog streams (such as voice orvideo) into digital signals that can then be sent across an IP network.TipAs a general recommendation, if bandwidth permits, it is best use a single codec throughout thecampus to minimize the need for transcoding resources, which can add complexity to network design.Characteristics of a codec are as follows: Codecs are either narrowband or wideband. Narrowband (used by traditional telephony systems)refers to the fact that the audio signals are passed in the range of 300-3500 Hz. With wideband, theaudio signals are passed in the range of 50 to 7000 Hz. Therefore, a wideband codec allows for audiowith richer tones and better quality. The sampling rate (or frequency) corresponds to the number of samples taken per second, expressedin Hz or kHz. For digital audio, typical sampling rates are 8 kHz (narrowband), 16 kHz (wideband)and 32 kHz (ultra-wideband). For digital video, typical sampling rates are 50Hz (forPhase-Alternating Line, PAL, used largely in Western Europe) and 59.94 Hz (for NationalTelevision System Committee, NTSC, used largely in North America). Both rates are supported byall the video codec listed in Table A-3. The compression ratio indicates the relative difference between the original size and the compressedsize of the audio or video stream. Lower compression ratios yield better quality but require greaterbandwidth. In general, low-compression codecs are best suited for voice over LANs and are capableof supporting DTMF and fax. High-compression codecs are better suited for voice over WANs. The complexity refers to the amount of processing required to perform the compression. Codeccomplexity affects the call density—the number of calls reconciled on the DSPs. With higher codeccomplexity, fewer calls can be handled.The components of the Cisco Unified Communications System support one or more of the audio andvideo codecs described in Table A-3.Cisco Unified Communications System Description Release 8.5(1)OL-23305-01A-5
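Before turning to the codec table, a small worked example makes the bandwidth trade-off noted above concrete. This is a sketch under common assumptions rather than figures taken from this guide: a 20 ms packetization interval (50 packets per second) and 40 bytes of IP (20) + UDP (8) + RTP (12) header overhead per packet, with Layer 2 overhead excluded.

    # Rough per-call, one-way bandwidth estimate for two common codecs.
    # Assumptions (illustrative defaults): 20 ms packets, 40 B IP/UDP/RTP headers.
    def voip_bandwidth_kbps(codec_rate_kbps, packet_interval_ms=20, overhead_bytes=40):
        payload_bytes = codec_rate_kbps * 1000 / 8 * (packet_interval_ms / 1000)
        packets_per_sec = 1000 / packet_interval_ms
        return (payload_bytes + overhead_bytes) * 8 * packets_per_sec / 1000

    print(voip_bandwidth_kbps(64))  # G.711:  160 B payload + 40 B headers -> 80.0 kbps
    print(voip_bandwidth_kbps(8))   # G.729a:  20 B payload + 40 B headers -> 24.0 kbps

Under these assumptions, a low-compression G.711 call consumes roughly 80 kbps in each direction before Layer 2 overhead, while a high-compression G.729a call consumes roughly 24 kbps, which illustrates why high-compression codecs are generally positioned for bandwidth-limited WAN links.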

Table A-3    Codecs Supported by Cisco Unified Communications Components

G.711—A narrowband audio codec defined by the ITU-T that provides toll-quality audio at 64 Kbps. It uses pulse code modulation (PCM) and samples audio at 8 kHz. G.711 supports two companding algorithms: mu-law (used in the US and Japan) and a-law (used in Europe and other parts of the world).
G.711 is a low-compression, medium-complexity codec.

G.722—A wideband audio codec defined by the ITU-T that provides high-quality audio at 32 to 64 Kbps. It uses Adaptive Differential PCM (ADPCM) and samples audio at 16 kHz.
G.722 is similar to G.711 in compression and complexity, but provides higher quality audio.

G.722.1—A wideband audio codec defined by the ITU-T that provides high-quality audio at 24 and 32 Kbps. It uses Modulated Lapped Transform (MLT) and samples audio at 16 kHz.
G.722.1 is a high-compression, low-complexity codec. It provides better quality than G.722 at lower bit rates.

G.723.1—A narrowband audio codec defined by the ITU-T for videoconferencing that provides near toll-quality audio at 6.3 or 5.3 Kbps. It uses Algebraic Code Excited Linear Prediction (ACELP) and Multi Pulse-Maximum Likelihood Quantization (MP-MLQ) and samples audio at 8 kHz.
G.723.1 is a high-compression, high-complexity codec. However, the quality is slightly lower than that of G.711.

G.726—A narrowband codec defined by the ITU-T that provides toll-quality audio at 32 Kbps. It uses ADPCM and samples audio at 8 kHz.
G.726 is a medium-complexity codec. It requires half the bandwidth of G.711, while providing nearly the same quality. Note that G.726 supersedes G.723, but has no effect on G.723.1.

G.728—A narrowband codec defined by the ITU-T that provides near toll-quality audio at 16 Kbps. It uses Low Delay CELP (LD-CELP) and samples audio at 8 kHz.
G.728 is a high-compression, high-complexity codec.

G.729a—A narrowband audio codec defined by the ITU-T that provides toll-quality audio at 8 Kbps. It uses Conjugate-Structure ACELP (CS-ACELP) and samples audio at 8 kHz.
G.729a is a high-compression, medium-complexity codec. The quality is lower than that of G.711 and it is not appropriate for DTMF, but it is good for situations where bandwidth is limited.

iLBC (internet Low Bitrate Codec)—A narrowband audio codec standardized by the IETF that provides better than toll-quality audio at either 13.33 or 15.2 Kbps. It uses block-independent linear-predictive coding (LPC) and samples audio at 8 kHz.
iLBC provides higher basic quality than G.729 and is royalty free. It enables graceful speech quality degradation in a lossy network. This codec is suitable for real-time communications, streaming audio, archival, and messaging.

AAC (Advanced Audio Codec)—A wideband audio codec standardized by the Moving Pictures Experts Group (as MPEG-4 AAC). It provides high-quality audio at rates of 32 Kbps and above. It uses AAC-LD (low delay) and samples audio at 20 kHz.

L16—A wideband audio codec defined by the IETF as a MIME subtype. It provides reasonable quality audio at 256 Kbps. It is based on PCM and samples audio at 16 kHz.

GSM-FR (Global System for Mobile Communications Full Rate)—An audio codec defined by the European Telecommunications Standards Institute (ETSI). It was originally designed for GSM digital mobile phone systems and provides somewhat less than toll-quality audio at 13 Kbps. It uses Regular Pulse Excitation with Long-Term Prediction (RPE-LTP) and samples audio at 8 kHz.
GSM-FR is a medium-complexity codec.

GSM-EFR (Enhanced Full Rate)—An audio codec defined by the ETSI for digital voice that provides toll-quality audio at 12.2 Kbps. It uses ACELP and samples audio at 8 kHz.
GSM-EFR is a high-complexity codec and provides better sound quality than GSM-FR.

QCELP (Qualcomm Code Excited Linear Prediction)—An audio codec defined by the Telecommunications Industry Association (TIA) for wideband spread spectrum digital communication systems that provides toll-quality audio at either 8 or 13 Kbps. As indicated by the name, it uses CELP and samples audio at 8 kHz.
QCELP is a high-complexity codec.

H.261—One of the first video codecs defined by the ITU-T. It was originally used for video over ISDN. It is designed to support data rates in multiples of 64 Kbps. H.261 supports Common Intermediate Format (CIF, 352 x 288) and QCIF (176 x 144) resolutions.
H.261 is similar to MPEG; however, H.261 requires significantly less computing overhead than MPEG for real-time encoding. Because H.261 uses constant bitrate encoding, it is better suited for use with relatively static video.

H.263—A video codec defined by the ITU-T as an improvement to H.261. It is used in H.323, H.320, and SIP networks. In addition to CIF and QCIF, H.263 supports SQCIF (128 x 96), 4CIF (704 x 576), and 16CIF (1408 x 1152) resolutions.
H.263 provides lower bitrate communication, better performance, and improved error recovery. It uses half-pixel precision and variable bitrate encoding, which makes H.263 better suited to accommodate motion in video.

H.264—The next in the evolution of video codecs. It was defined by the ITU-T in conjunction with the MPEG (as MPEG-4 Part 10) and is designed to provide higher-quality video at lower bit rates.
H.264 provides better video quality, compression efficiency, and resilience to packet and data loss than that of H.263. It also makes better use of bandwidth, resulting in the ability to run more channels over existing systems.
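To put the video codecs in Table A-3 in perspective, a short back-of-the-envelope calculation shows why compression is essential for even a modest CIF stream. This is a sketch; the 30 fps frame rate and 4:2:0 chroma subsampling (12 bits per pixel) are common assumptions, not figures from this guide.

    # Uncompressed bit rate of a CIF (352 x 288) video stream at 30 frames per
    # second with 4:2:0 chroma subsampling (effectively 12 bits per pixel).
    width, height = 352, 288
    bits_per_pixel = 12          # 8 bits luma + 4 bits averaged chroma (4:2:0)
    frames_per_second = 30

    bits_per_second = width * height * bits_per_pixel * frames_per_second
    print(f"{bits_per_second / 1_000_000:.1f} Mbps")   # ~36.5 Mbps uncompressed

An H.263 or H.264 encoder typically reduces a stream like this to somewhere between a few hundred kilobits per second and a few megabits per second, depending on motion and quality targets, which is what makes videoconferencing over WAN links practical.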

Voice- and Video-enabled Infrastructure

By default, an IP data network transmits data based on the concept of "best effort." Depending on the volume of traffic and the bandwidth available, data networks can often experience delays. However, these delays are typically a matter of seconds (or fractions of seconds) and go unnoticed by users and applications, such as e-mail or file transfers. In the event of significant network congestion or minor route outages, receiving devices can wait and reorder any out-of-sequence packets, and sending devices can simply resend any dropped packets.

Voice and video are very time-dependent media, which suffer greatly when subjected to the delays that data applications easily tolerate. In the event of significant congestion or outages, voice applications can only attempt to conceal dropped packets, often resulting in poor quality. Therefore, voice and video require an infrastructure that provides for smooth, guaranteed delivery.

A network infrastructure that transmits voice and video, especially that delivered in real time, requires special mechanisms and technologies to ensure the safety and quality of the media as well as the efficient use of network resources. In a voice- or video-enabled network, the following must be built into the infrastructure:

• Quality of service
• High availability
• Voice security
• Multicast capabilities

Quality of Service

Quality of Service (QoS) is defined as the measure of performance for a transmission system that reflects its transmission quality and service availability. The transmission quality of the network is determined by the following factors:

• Loss—Also known as packet loss, loss is a measure of the packets faithfully transmitted and received compared to the total number that were transmitted. Loss is expressed as the percentage of packets that were dropped.
Loss is typically a function of availability (see the "High Availability" section later in this appendix). If the network is highly available, then loss (during periods of non-congestion) would essentially be zero. During periods of congestion, however, QoS mechanisms can be employed to selectively determine which packets are more suitable to be dropped.
• Delay—Also known as latency, delay is the finite amount of time it takes a packet to reach the receiving endpoint after being transmitted from the sending endpoint. In the case of voice, this equates to the amount of time it takes for sounds to leave the speaker's mouth and be heard in the listener's ear. This time period is termed the "end-to-end delay."
There are three types of delay:
– Packetization delay—The time required to sample and encode analog voice signals and digitize them into packets.
– Serialization delay—The time required to place the packet bits onto the physical media.
– Propagation delay—The time required to transmit the packet bits across the physical media.

• Delay Variation—Also known as interpacket delay or jitter, delay variation is the difference in the end-to-end delay between packets. For example, if one packet required 100 ms to traverse the network from the source endpoint to the destination endpoint and the following packet required 125 ms to make the same trip, then the delay variation would be calculated as 25 ms.
Each end station in a VoIP or Video over IP conversation has a jitter buffer. Jitter buffers are used to smooth out changes in the arrival times of data packets containing voice. A jitter buffer is dynamic and adaptive, and can adjust for up to a 30 ms average change in the arrival times of packets. If instantaneous changes in the arrival times of packets exceed the jitter buffer's ability to compensate, jitter buffer over-runs and under-runs occur.
– A jitter buffer under-run occurs when the arrival times of packets increase to the point where the jitter buffer has been exhausted and contains no packets to be processed by the DSPs when it is time to play out the next piece of voice or video.
– A jitter buffer over-run occurs when packets containing voice or video arrive faster than the jitter buffer can dynamically resize itself to accommodate. When this happens, packets are dropped when it is time to play out the voice or video samples, resulting in degraded voice quality.

Cisco provides a QoS toolset that allows network administrators to minimize the effects of loss, delay, and delay variation. These tools (as shown in Figure A-1) enable the classification, scheduling, policing, and shaping of traffic—the goal being to give preferential treatment to voice and video traffic.

Figure A-1    Cisco QoS Toolkit
[Figure shows the QoS tool categories: classification and marking, scheduling (queuing and dropping), policing and markdown, and traffic shaping.]

• Classification tools mark a frame or packet with a specific value. This marking (or remarking) establishes a trust boundary on which the scheduling tools depend.
• Scheduling tools determine how traffic exits a device. Whenever traffic enters a device faster than it can exit it (as with speed mismatches), a point of congestion develops. Scheduling tools use various buffers to allow higher-priority traffic to exit sooner than lower-priority traffic. This behavior is controlled by queueing algorithms, which are activated only when a device is experiencing congestion and are deactivated when the congestion clears.

• Policers and shapers are the oldest forms of QoS mechanisms. These tools have the same objective—to identify and respond to traffic violations. Policers and shapers identify traffic violations in an identical manner; however, they respond differently to these violations. A policer typically drops traffic; a shaper typically delays the excess traffic, using a buffer to hold packets and shape the flow when the data rate of the source is higher than expected.

For more information about QoS considerations and tools, see the Enterprise QoS Solution Reference Network Design Guide.

High Availability

The objective of high availability is to prevent or minimize network outages. This is particularly important in networks that carry voice and video. More than a single technology, high availability is an approach to implementing a mixture of policies, technologies, and inter-related tools to ensure end-to-end availability for services, clients, and sessions. High availability relies heavily on network redundancy and software availability.

Network redundancy depends on redundant hardware, processors, line cards, and links. The network should be designed so that it has no single points of failure for critical hardware (for example, core switches). Hardware elements, such as cards, should be "hot swappable," meaning they can be replaced without causing disruption to the network. Power supplies and sources should also be redundant.

Software availability depends on reliability-based protocols, such as Spanning Tree and Hot Standby Router Protocol (HSRP). Spanning Tree, HSRP, and other protocols provide instructions to the network and/or to components of the network on how to behave in the event of a failure. Failure in this case could be a power outage, a hardware failure, or a disconnected cable. These protocols provide rules to reroute packets and reconfigure paths. The speed at which these rules are applied is called convergence. A converged network is one that, from a user standpoint, has recovered from a failure and can now process instructions and/or requests.

For more information about high availability, see the Campus Network for High Availability Design Guide.

Security

As with important data traffic, voice (and often video) traffic on an IP network must be secured. In some cases, the same technologies that can be used to secure a data network are employed in a VoIP network. In other cases, unique technologies must be implemented. In both cases, one of the key objectives is to protect the voice or video stream without impacting quality.

When securing the network, it is important to consider all possible areas of vulnerability. This means protecting the network from internal and external threats, securing internal and remote connectivity, and limiting network access to devices, applications, and users that can be trusted. Comprehensive security is achieved first by securing the network itself, and then by extending that security to endpoints and applications. For voice and video communications, security must protect four critical elements:

• Network infrastructure—The switches, routers, and connecting links comprising the foundation network that carries all IP data, voice, and video traffic. This includes using tools such as:
– Firewalls
– Network intrusion detection and prevention systems
– Voice- and video-enabled VPNs
– VLAN segmentation
– Port security

– Access control server/user authentication and authorization
– Dynamic Address Resolution Protocol (ARP) inspection
– IP source guard and Dynamic Host Configuration Protocol (DHCP) snooping
– Wireless security technologies, such as Wired Equivalent Privacy (WEP) and Lightweight Extensible Authentication Protocol (LEAP)
• Call processing systems—Servers and
