CCNA Security Chapter 7 - Cryptographic Systems

Network traffic traversing the public Internet is protected through cryptographic methods.
Cryptology is the science of making and breaking secret codes.
The development and use of codes is called cryptography, and breaking codes is called cryptanalysis.

Secure communication requires a guarantee that:
- integrity: no one has intercepted and altered the message; provided by hash functions such as MD5 or SHA-1. (An unbroken wax seal on an envelope ensures integrity.)
- authentication: the message is not a forgery and actually comes from the stated sender; provided by HMAC. (Example: a PIN for banking at an ATM.)
- confidentiality: if the message is captured, it cannot be deciphered; provided by symmetric algorithms such as DES, 3DES, and AES, or asymmetric algorithms such as RSA together with the public key infrastructure (PKI).


Symmetric encryption algorithms: each communicating party knows the pre-shared key.
Asymmetric encryption algorithms: two communicating parties have not previously shared a secret and must establish a secure method to do so.

When enabling encryption, readable data is called plaintext, or cleartext, while the encrypted version is called ciphertext.

Using a hash function is a way to ensure data integrity.
A hash function transforms a string of characters into a usually shorter, fixed-length value or key that represents the original string. Encryption guarantees confidentiality, so that only authorized entities can read the message; hashing guarantees integrity, so that any alteration of the message can be detected.

A cipher is a series of well-defined steps that can be followed as a procedure to encrypt and decrypt messages.

Cryptography
Various cipher methods, physical devices, and aids have been used to encrypt and decrypt text:
- In transposition ciphers, no letters are replaced; they are simply rearranged.
Modern encryption algorithms, such as the Data Encryption Standard (DES) and the Triple Data Encryption Standard (3DES), still use transposition as part of the algorithm. 
FLANK EAST ATTACK AT DAWN
F...K...T...A...T...N.
.L.N.E.S.A.T.C.A.D.W..
..A...A...T...K...A...
- Substitution ciphers substitute one letter for another.
The Caesar cipher was a simple substitution cipher.
Plain:    ABCDEFGHIJKLMNOPQRSTUVWXYZ
Cipher:   DEFGHIJKLMNOPQRSTUVWXYZABC

Plaintext:  the quick brown fox jumps over the lazy dog
Ciphertext: WKH TXLFN EURZQ IRA MXPSV RYHU WKH ODCB GRJ
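The shift-of-three mapping above can be sketched in a few lines of Python (a minimal illustration of the Caesar substitution, not a secure cipher):

```python
def caesar(text: str, shift: int = 3) -> str:
    """Shift each letter by `shift` positions; non-letters pass through."""
    out = []
    for ch in text.upper():
        if ch.isalpha():
            out.append(chr((ord(ch) - ord('A') + shift) % 26 + ord('A')))
        else:
            out.append(ch)
    return ''.join(out)

# Encrypt with shift 3; decrypt by shifting back with -3.
print(caesar("the quick brown fox jumps over the lazy dog"))
# -> WKH TXLFN EURZQ IRA MXPSV RYHU WKH ODCB GRJ
```

Because the keyspace is only 25 usable shifts, the cipher falls to a trivial brute-force attack.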
To illustrate how the Vigenère cipher table works, suppose that a sender and receiver share a secret key composed of these letters: SECRETKEY. The sender uses this key to encode the plaintext FLANK EAST ATTACK AT DAWN. Each plaintext letter is combined with the key letter above it:
F = SxF => X,   L = ExL => P,   A = CxA => C ...

Plaintext:   FLANK EAST ATTACK AT DAWN
Secret key:  SECRE TKEY SECRET KE YSEC
Ciphertext:  XPCEO XKWR SXVRGD KX BSAP
Although the Vigenère cipher uses a longer key than the Caesar cipher, it can still be cracked. For this reason, a better cipher method was required.
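A minimal sketch of the standard Vigenère rule, C = (P + K) mod 26, where the key advances only on letters (so spaces pass through unchanged):

```python
def vigenere(plaintext: str, key: str) -> str:
    """Classic Vigenère encryption: add each key letter to each plaintext
    letter modulo 26. The key index advances only on letters."""
    out, i = [], 0
    for ch in plaintext.upper():
        if ch.isalpha():
            k = ord(key[i % len(key)].upper()) - ord('A')
            out.append(chr((ord(ch) - ord('A') + k) % 26 + ord('A')))
            i += 1
        else:
            out.append(ch)
    return ''.join(out)

print(vigenere("FLANK EAST ATTACK AT DAWN", "SECRETKEY"))
```

Decryption is the same loop with subtraction instead of addition.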
-  Gilbert Vernam was an AT&T Bell Labs engineer who, in 1917, invented and patented the stream cipher and later co-invented the one-time pad (OTP) cipher.
Each tape was used only once, hence the name one-time pad.
The pads are generally exchanged in advance, before the plaintext even exists.

These pads are a stack of paper sheets. On each sheet a series of apparently random numbers is printed.
The pads are then used, in conjunction with a code book, to encode/encrypt a message.
Given a copy of the one-time pad and the code book, the recipient can easily unscramble an encrypted message.
Provided the rules are followed, messages encrypted in this way are truly unbreakable, because the resulting ciphertext is indistinguishable from random numbers.
The main problem with the one-time pad is that pairs of pads must be (secretly) printed and distributed, and each sheet can be used only once. This becomes cumbersome with many spies (or diplomats) sending many messages. To get around this, modern systems replace the paper pads with a recipe for generating an apparently random sequence of numbers. Computers are ideal for this, so most modern systems use a combination of a recipe (algorithm) and some key values to encrypt information.
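A modern software sketch of the pad idea XORs each message byte with one pad byte; decryption is the same XOR with the same pad. Note that `os.urandom` is a cryptographically strong generator seeded from the OS entropy pool, not a true hardware source, so this illustrates the mechanics rather than a real one-time pad:

```python
import os

def otp_xor(data: bytes, pad: bytes) -> bytes:
    """XOR each data byte with a pad byte. The pad must be at least as long
    as the message, unpredictable, and never reused."""
    assert len(pad) >= len(data)
    return bytes(d ^ k for d, k in zip(data, pad))

message = b"FLANK EAST ATTACK AT DAWN"
pad = os.urandom(len(message))       # one pad byte per message byte
ciphertext = otp_xor(message, pad)   # encrypt
recovered = otp_xor(ciphertext, pad) # decrypt: XOR is its own inverse
```

Reusing a pad breaks the scheme: XORing two ciphertexts made with the same pad cancels the pad and leaks the XOR of the two plaintexts.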

Several difficulties are inherent in using one-time pads in the real world. One difficulty is the challenge of creating random data.
A pseudo-random number generator (PRNG), like what your computer normally uses to randomly generate numbers for games, will not work here.
A hardware random number generator is a device that generates random numbers from a physical process, rather than a computer program.
Such devices are often based on microscopic phenomena that generate a low-level, statistically random "noise" signal,
such as thermal noise or the photoelectric effect or other quantum phenomena.
These processes are, in theory, completely unpredictable.

Teletype Cipher:
- It was used by the US and Russian governments to exchange information,
- After a message was encrypted, the key tape was destroyed,
- At the receiving end, the process was reversed using an identical key tape to decode the message.

Cryptanalysis
Cryptanalysis is the practice and study of determining the meaning of encrypted information (cracking the code), without access to the shared secret key.

Throughout history, there have been many instances of cryptanalysis:
- The Vigenère cipher was considered unbreakable until it was broken in the middle of the 19th century by the English cryptographer Charles Babbage.
- The Enigma-encrypted communications were used by the Germans to navigate and direct their U-boats in the Atlantic. The Polish and British cryptanalysts broke the German Enigma code. Winston Churchill was of the opinion that it was a turning point in WWII.

A variety of methods are used in cryptanalysis.
 - Brute-Force Attack, an attacker tries every possible key with the decryption algorithm knowing that eventually one of them will work. The objective of modern cryptographers is to have a keyspace large enough that it takes too much money and too much time to accomplish a brute-force attack.
- Ciphertext-Only Attack, the attacker has the ciphertext of several messages, all of which have been encrypted using the same encryption algorithm, but the attacker has no knowledge of the underlying plaintext. These kinds of attacks are no longer practical, because modern algorithms produce pseudorandom output that is resistant to statistical analysis.
- Known-Plaintext Attack, the attacker has access to the ciphertext of several messages, but also knows something about the plaintext underlying that ciphertext. Modern algorithms with enormous keyspaces make it unlikely for this attack to succeed because, on average, an attacker must search through at least half of the keyspace to be successful.
- Chosen-Plaintext Attack, the attacker chooses which data the encryption device encrypts and observes the ciphertext output. This attack is not very practical, because it requires access to the encryption device inside the trusted network.
- Chosen-Ciphertext Attack, the attacker can choose different ciphertext to be decrypted and has access to the decrypted plaintext. This attack is analogous to the chosen-plaintext attack and is likewise not very practical, because it requires that the trusted network has been breached and that the attacker already has access to confidential information.
- Meet-in-the-Middle, The attacker knows a portion of the plaintext and the corresponding ciphertext.

Cryptographic hashes
A hash function takes binary data, called the message, and produces a condensed representation, called the message digest (digest = catalogue, summary).

Hashing is based on a one-way mathematical function that is relatively easy to compute, but significantly harder to reverse. Grinding coffee is a good example of a one-way function. It is easy to grind coffee beans, but it is almost impossible to put all of the tiny pieces back together to rebuild the original beans.

The cryptographic hashing function is designed to verify and ensure data integrity.

Hashing is similar to calculating cyclic redundancy check (CRC) checksums, but it is much stronger cryptographically.

Every time the data is changed or altered, the hash value also changes.
Because of this, cryptographic hash values are often called digital fingerprints. They can be used to detect duplicate data files, file version changes, and similar applications. These values are used to guard against an accidental or intentional change to the data and accidental data corruption.

A cryptographic hash function should have the following properties:
 - The input can be any length.
 - The output has a fixed length.
 - H(x) - hash function, is relatively easy to compute for any given x. 
 - H(x) is one way and not reversible.
 - H(x) is collision free, meaning that it is computationally infeasible to find two different input values that produce the same hash value.
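Two of these properties, fixed-length output and sensitivity to any input change (the avalanche effect), are easy to demonstrate with Python's hashlib:

```python
import hashlib

# Fixed-length output: the digest size is the same for any input length.
short = hashlib.sha256(b"a").hexdigest()
long_ = hashlib.sha256(b"a" * 1_000_000).hexdigest()

# Avalanche effect: changing one character yields an unrelated digest.
d1 = hashlib.sha256(b"FLANK EAST ATTACK AT DAWN").hexdigest()
d2 = hashlib.sha256(b"FLANK EAST ATTACK AT DAWM").hexdigest()
```

SHA-256 always produces 256 bits (64 hex characters), whether hashing one byte or a megabyte.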

MD5 and SHA-1
These are two well-known hash functions:
 - Message Digest 5 (MD5) with 128-bit digests,
    produces a 128-bit hash from a complex sequence of simple binary operations,
    was designed to be collision resistant, but practical collisions have since been demonstrated, so MD5 is no longer considered secure.
 - Secure Hash Algorithm 1 (SHA-1) with 160-bit digests.
    Takes an input message of less than 2^64 bits and produces a 160-bit message digest.
    The algorithm is slightly slower than MD5.
    SHA-1 is a revision that corrected an unpublished flaw in the original SHA.
    SHA-224, SHA-256, SHA-384, and SHA-512 are more secure versions of SHA and are collectively known as SHA-2.
    SHA-3, selected in 2012, is the newest and most secure version of SHA.
Cleartext:  CCNA Security
MD5:    d62d75a4d484befc14cf61f60d02eb74
SHA-1:  7e73851fffa9564d5d0ac491439850e130a6b0ed
SHA224: 4d0398c6dc369665852598b48b3b18e64e44e1fab66e459710844e24
SHA384: f68f1439ee7d8f63a26227c4dc9487bcf0348a75f521b786af7d31a62d2fa0bc564589cf64bd9c1ff348575f650feb1a
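The digests above can be reproduced with Python's hashlib, assuming the cleartext is the ASCII string "CCNA Security" with no trailing newline; the comments show each digest length:

```python
import hashlib

text = b"CCNA Security"
digests = {
    "MD5":     hashlib.md5(text).hexdigest(),     # 128 bits -> 32 hex chars
    "SHA-1":   hashlib.sha1(text).hexdigest(),    # 160 bits -> 40 hex chars
    "SHA-224": hashlib.sha224(text).hexdigest(),  # 224 bits -> 56 hex chars
    "SHA-384": hashlib.sha384(text).hexdigest(),  # 384 bits -> 96 hex chars
}

for name, digest in digests.items():
    print(f"{name}: {digest}")
```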

HMAC
In cryptography, a keyed-hash message authentication code (HMAC or KHMAC) is a type of message authentication code (MAC). An HMAC is calculated using a specific algorithm that combines a cryptographic hash function with a secret key. Hash functions are the basis of the protection mechanism of HMACs.
Only the sender and the receiver know the secret key, and the output of the hash function now depends on the input data and the secret key.

Cisco technologies use two well-known HMAC functions:
Keyed MD5 (HMAC-MD5), based on the MD5 hashing algorithm
Keyed SHA-1 (HMAC-SHA-1), based on the SHA-1 hashing algorithm
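A sketch of the HMAC mechanism using Python's hmac module; the key and message here are made-up examples. Only a party holding the secret key can produce or check the tag:

```python
import hashlib
import hmac

key = b"shared-secret"           # known only to sender and receiver (example)
message = b"routing update #42"  # hypothetical payload

# Sender computes the tag over the message with the secret key.
tag = hmac.new(key, message, hashlib.sha1).hexdigest()

def verify(key: bytes, message: bytes, tag: str) -> bool:
    """Receiver recomputes the tag and compares in constant time
    (compare_digest avoids timing side channels)."""
    expected = hmac.new(key, message, hashlib.sha1).hexdigest()
    return hmac.compare_digest(expected, tag)
```

An attacker who alters the message, or who lacks the key, cannot produce a tag that verifies.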

Cisco products use hashing for entity authentication, data integrity, and data authenticity purposes:
 - Cisco IOS routers use hashing with secret keys in an HMAC-like manner to add authentication information to routing protocol updates.
 - IPsec gateways and clients use hashing algorithms, such as MD5 and SHA-1 in HMAC mode.
 - Cisco software images that are downloaded from Cisco.com have an MD5-based checksum available so that customers can check the integrity of downloaded images.
 - Hashing can also be used in a feedback-like mode to provide a shared secret key to encrypt data. For example, TACACS+ uses an MD5 hash as the key to encrypt the session.

Digital signatures are an alternative to HMAC.

Key management
Key management is often considered the most difficult part of designing a cryptosystem.

There are several essential characteristics of key management to consider:
 - Generation - In a modern cryptographic system, key generation is usually automated and not left to the end user. Good random number generators are needed to ensure that all keys are equally likely to be generated, so that an attacker cannot predict which keys are more likely to be used.
 - Verification - Some keys are better than others. Almost all cryptographic algorithms have some weak keys that should not be used. With the help of key verification procedures, such keys can be regenerated when they occur. With the Caesar cipher, using a key of 0 or 25 does not encrypt the message, so these keys should not be used.
 - Storage - On a modern multi-user operating system that uses cryptography, a key can be stored in memory. This presents a possible problem when that memory is swapped to the disk, because a Trojan Horse program installed on the PC of a user could then have access to the private keys of that user.
 - Exchange - Key management procedures should provide a secure key exchange mechanism that allows secure agreement on the keying material with the other party, probably over an untrusted medium.
 - Revocation and Destruction - Revocation notifies all interested parties that a certain key has been compromised and should no longer be used. Destruction erases old keys in a manner that prevents malicious attackers from recovering them.

Two terms that are used to describe keys are key length and keyspace. 
As key lengths increase, the keyspace increases exponentially:
A 2-bit (2^2) key length = a keyspace of 4, because there are four possible keys (00, 01, 10, and 11).
A 4-bit (2^4) key length = a keyspace of 16 possible keys.
A 40-bit (2^40) key length = a keyspace of 1,099,511,627,776 possible keys.
Almost every algorithm has some weak keys in its keyspace that enable an attacker to break the encryption via a shortcut. Weak keys show regularities in encryption or poor encryption.
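The exponential growth of the keyspace is easy to check (every added key bit doubles the number of possible keys an attacker must try):

```python
# Keyspace doubles with every bit added to the key length.
keyspace = {bits: 2 ** bits for bits in (2, 4, 40, 56, 128)}

for bits, size in keyspace.items():
    print(f"{bits}-bit key -> {size:,} possible keys")
```

A 128-bit key gives roughly 3.4 x 10^38 possibilities, which is why brute force against modern symmetric keys is impractical.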

Several types of cryptographic keys can be generated:
 - Symmetric keys, which can be exchanged between two routers supporting a VPN
 - Asymmetric keys, which are used in secure HTTPS applications
 - Digital signatures, which are used when connecting to a secure website
 - Hash keys, which are used in symmetric and asymmetric key generation, digital signatures, and other types of applications.

With modern algorithms that are trusted, the strength of protection depends solely on the length of the key.
Performance is another issue that can influence the choice of a key length. The rule "the longer the key, the better" is valid, except for possible performance reasons.

Encryption
    Acronyms
DES     Data Encryption Standard
3DES    Triple Data Encryption Standard
AES     Advanced Encryption Standard
CTR     Counter Mode
CFB     Cipher Feedback Mode
ECB     Electronic Codebook Mode
CBC     Cipher Block Chaining Mode
OFB     Output Feedback Mode
PKC     Public Key Cryptography
NIST    National Institute of Standards and Technology
Cryptographic encryption can provide confidentiality at several layers of the OSI model by incorporating various tools and protocols:
 - Proprietary link-encrypting devices provide Data Link Layer confidentiality.
 - Network Layer protocols, such as the IPsec protocol suite, provide Network Layer confidentiality.
 - Protocols such as Secure Sockets Layer (SSL) or Transport Layer Security (TLS) provide Session Layer confidentiality.
 - Secure email, secure database session (Oracle SQL*net), and secure messaging (Lotus Notes sessions) provide Application Layer confidentiality.

Symmetric encryption algorithms use the same key, sometimes called a secret key, to encrypt and decrypt data. The key must be pre-shared.
A pre-shared key (PSK) is known by the sender and receiver before any encrypted communications commence. Because both parties are guarding a shared secret, the encryption algorithms used can have shorter key lengths. Shorter key lengths mean faster execution. Symmetric algorithms are generally much less computationally intensive than asymmetric algorithms.
The usual key length is 80 - 256 bits.
Examples of symmetric encryption algorithms are DES, 3DES, AES, IDEA, RC2/4/5/6, and Blowfish.
Block ciphers - transform a fixed-length block of plaintext into a fixed-length block of ciphertext, commonly 64 or 128 bits. Block size refers to how much data is encrypted at any one time.
 - Electronic Codebook (ECB) mode - serially encrypts each 64-bit plaintext block using the same 56-bit key. 
 - Cipher Block Chaining (CBC) mode - the encryption of each block depends on previous blocks. CBC mode can help guard against certain attacks, but it cannot help against sophisticated cryptanalysis or an extended brute-force attack.

Stream ciphers encrypt plaintext one byte or one bit at a time. Stream ciphers can be thought of as a block cipher with a block size of one bit.
To encrypt or decrypt more than 64 bits of data, DES uses two common stream cipher modes:
 - Cipher feedback (CFB), which is similar to CBC and can encrypt any number of bits, including single bits or single characters.
 - Output feedback (OFB) generates keystream blocks, which are then XORed with the plaintext blocks to get the ciphertext.
CBC is the most widely used mode of DES.
More info @ http://en.wikipedia.org/wiki/Block_cipher_modes_of_operation
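The practical difference between ECB and CBC can be seen with a toy "cipher" (a plain XOR with the key; insecure, for illustration only): under ECB, identical plaintext blocks produce identical ciphertext blocks, while CBC's chaining hides the repetition:

```python
def toy_encrypt(block: bytes, key: bytes) -> bytes:
    """Stand-in for a real block cipher such as DES: XOR with the key."""
    return bytes(b ^ k for b, k in zip(block, key))

def ecb(blocks, key):
    """ECB: each block is encrypted independently with the same key."""
    return [toy_encrypt(b, key) for b in blocks]

def cbc(blocks, key, iv):
    """CBC: each block is XORed with the previous ciphertext block
    (or the IV for the first block) before encryption."""
    out, prev = [], iv
    for b in blocks:
        mixed = bytes(x ^ y for x, y in zip(b, prev))
        prev = toy_encrypt(mixed, key)
        out.append(prev)
    return out

blocks = [b"ATTACKAT", b"ATTACKAT"]  # two identical 8-byte plaintext blocks
key = b"\x13\x37\x42\x99\x01\x02\x03\x04"
iv = b"\xaa" * 8
ecb_ct = ecb(blocks, key)
cbc_ct = cbc(blocks, key, iv)
```

With ECB the two ciphertext blocks are identical, leaking the repetition in the plaintext; with CBC they differ.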

Asymmetric encryption algorithms use different keys to encrypt and decrypt data. Secure messages can be exchanged without having to have a pre-shared key. Because both parties do not have a shared secret, very long key lengths must be used to thwart (stop) attackers.
These algorithms are resource intensive and slower to execute.
In practice, asymmetric algorithms are typically hundreds to thousands of times slower than symmetric algorithms.
The usual key length is 512–4096 bits.
A sender and receiver do not share a secret key.
Examples of asymmetric encryption algorithms are RSA, ElGamal, elliptic curves, and DH.

DES
Because of its short key length, DES is suitable only for protecting data for a very short time. 3DES is a better choice, because its algorithm is well trusted and has higher security strength.
 - Change keys frequently to help prevent brute-force attacks.

3DES
The technique of applying DES three times in a row to a plaintext block is called 3DES.
3DES uses a method called 3DES-Encrypt-Decrypt-Encrypt (3DES-EDE) to encrypt plaintext. 
Today, brute-force attacks on 3DES are considered infeasible, and the basic algorithm has been well tested in the field for more than 35 years. It is considered very trustworthy.
The Cisco IPsec implementation uses DES and 3DES in CBC mode.
Although 3DES is very secure, it is also very resource intensive.

AES
 In 1997, the AES initiative was announced, and the public was invited to propose encryption schemes to replace DES. After a five-year standardization process in which 15 competing designs were presented and evaluated, the U.S. National Institute of Standards and Technology (NIST) selected the Rijndael block cipher as the AES algorithm.
Rijndael is an iterated block cipher, which means that the initial input block and cipher key undergo multiple transformation cycles before producing output.
The AES algorithm has been analyzed extensively and is now used worldwide.
It can be used in high-throughput, low-latency environments, especially when 3DES cannot handle the throughput or latency requirements.
The golden rule of cryptography states that a mature algorithm is always more trusted. 3DES is therefore a more trusted choice in terms of strength, because it has been tested and analyzed for 35 years.
AES is available in the following Cisco VPN devices as an encryption transform:
 - IPsec-protected traffic using Cisco IOS Release 12.2(13)T and later
 - Cisco PIX Firewall software version 6.3 and later
 - Cisco ASA software version 7.0 and later
 - Cisco VPN 3000 software version 3.6 and later

SEAL
The Software-optimized Encryption Algorithm (SEAL) is an alternative algorithm to software-based DES, 3DES, and AES. SEAL has a lower impact on the CPU compared to other software-based algorithms. SEAL support was added to Cisco IOS Software Release 12.3(7)T.
SEAL has several restrictions:
- The Cisco router and the peer must support IPsec.
- The Cisco router and the other peer must run an IOS image with k9 long keys (the k9 subsystem).
- The router and the peer must not have hardware IPsec encryption.

RC algorithms
The RC algorithms were designed all or in part by Ronald Rivest, who also invented MD5.
The RC algorithms are widely deployed in many networking applications because of their favorable speed and variable key-length capabilities. 

 - RC2 - Variable key-size block cipher that was designed as a "drop-in" replacement for DES.
 - RC4 - World's most widely used stream cipher. This algorithm is a variable key-size Vernam stream cipher that is often used in file encryption products and for secure communications, such as within SSL. It is not considered a one-time pad, because its key is not random. The cipher runs very quickly in software and was long considered secure, although serious statistical biases have since been found and it can be implemented insecurely, as in Wired Equivalent Privacy (WEP).
 - RC5 - A fast block cipher that has a variable block size and key size. RC5 can be used as a drop-in replacement for DES if the block size is set to 64-bit.
 - RC6 - Developed in 1997, RC6 was an AES finalist (Rijndael won). A block cipher with a 128-bit block size and key sizes of 128, 192, or 256 bits, designed by Rivest, Sidney, and Yin and based on RC5. Its main design goal was to meet the requirements of AES.

Diffie-Hellman
Whitfield Diffie and Martin Hellman invented the Diffie-Hellman (DH) algorithm in 1976.
The DH algorithm is the basis of most modern automatic key exchange methods and is one of the most common protocols used in networking today.
Diffie-Hellman key exchange (D-H) is a cryptographic protocol that allows two parties that have no prior knowledge of each other to securely agree on a shared secret key over an insecure communications channel.

It is a method to securely exchange the keys that encrypt data.
Asymmetric key systems use two keys. One key is called the private key, and the other is the public key.
DH is a mathematical algorithm that allows two computers to generate an identical shared secret on both systems, without having communicated before.
Unfortunately, asymmetric key systems are extremely slow for any sort of bulk encryption.

To start a DH exchange, Alice and Bob must agree on two non-secret numbers.
 - The first number, g, is a base number (also called the generator).
 - The second number, p, is a prime number that is used as the modulus. These numbers are usually public and are chosen from a table of known values. Typically, g is a very small number, such as 2, 3, 4, or 5, and p is a large prime number.

This is why it is common to encrypt the bulk of the traffic using a symmetric algorithm such as DES, 3DES, or AES and use the DH algorithm to create keys that will be used by the encryption algorithm.
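The exchange can be sketched with toy numbers (real deployments use primes of 2048 bits or more, per DH group 14 and higher; the private values here are fixed for illustration but would normally be random and secret):

```python
# Public, non-secret parameters agreed by both sides.
p, g = 23, 5          # p: prime modulus, g: generator (base)

a = 6                 # Alice's private value (kept secret)
b = 15                # Bob's private value (kept secret)

A = pow(g, a, p)      # Alice computes A = g^a mod p and sends it to Bob
B = pow(g, b, p)      # Bob computes B = g^b mod p and sends it to Alice

# Each side combines the other's public value with its own private value.
alice_secret = pow(B, a, p)   # (g^b)^a mod p
bob_secret = pow(A, b, p)     # (g^a)^b mod p
```

Both sides arrive at the same shared secret, while an eavesdropper who sees only p, g, A, and B faces the discrete logarithm problem.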

Public Key Cryptography: Diffie-Hellman Key Exchange  http://www.youtube.com/watch?v=3QnD2c4Xovk

IPsec, Internet Protocol Security, is a set of protocols defined by the IETF, Internet Engineering Task Force, to provide IP security at the network layer.
An IPsec-based VPN is made up of two parts:
 - Internet Key Exchange protocol (IKE)
 - IPsec protocols (AH/ESP/both)
 The first part, IKE, is the initial negotiation phase, where the two VPN endpoints agree on which methods will be used to provide security for the underlying IP traffic.
Furthermore, IKE is used to manage connections, by defining a set of Security Associations, SAs, for each connection. SAs are unidirectional, so there will be at least two SAs per IPsec connection.
  The other part is the actual IP data being transferred, using the encryption and authentication methods agreed upon in the IKE negotiation.
 This can be accomplished in a number of ways; by using IPsec protocols ESP, AH, or a combination of both.

Cisco no longer recommends using DES, 3DES, MD5 (including HMAC variant), and Diffie-Hellman (DH) groups 1, 2 and 5; instead, you should use AES, SHA-256 and DH Groups 14 or higher. For more information about the latest Cisco cryptographic recommendations, see the Next Generation Encryption (NGE) white paper.

Symmetric vs Asymmetric
Asymmetric algorithms, also sometimes called public-key algorithms, are designed so that the key that is used for encryption is different from the key that is used for decryption. The decryption key cannot, in any reasonable amount of time, be calculated from the encryption key and vice versa.
There are four protocols that use asymmetric key algorithms:
 - Internet Key Exchange (IKE), protocol - a fundamental component of IPsec VPNs
 - Secure Sockets Layer (SSL), now implemented as the IETF standard TLS
 - SSH
 - Pretty Good Privacy (PGP), a computer program that provides cryptographic privacy and authentication and is often used to increase the security of email communications.
Public Key (Encrypt) + Private Key (Decrypt) = Confidentiality
Private Key (Encrypt) + Public Key (Decrypt) = Authentication
Asymmetric algorithms can be up to 1,000 times slower than symmetric algorithms.

Digital Signatures
Digital signatures provide three basic security services:
 - Authenticity of digitally signed data - Digital signatures authenticate a source, proving that a certain party has seen and signed the data in question.
 - Integrity of digitally signed data - Digital signatures guarantee that the data has not changed from the time it was signed.
 - Nonrepudiation of the transaction - The recipient can take the data to a third party, and the third party accepts the digital signature as a proof that this data exchange did take place. The signing party cannot repudiate that it has signed the data.
(nonrepudiation = the inability of a party to deny having performed an action).
Nonrepudiation of the transaction means:
 - A service that provides proof of the integrity and origin of data.
 - An authentication that with high assurance can be asserted to be genuine. 


Example 1: The network administrator for an e-commerce website requires a service that prevents customers from claiming that legitimate orders are fake. 
Example 2: A customer purchases an item from an e-commerce site. The e-commerce site must maintain proof that the data exchange took place between the site and the customer. 
Nonrepudiation of the transaction feature of digital signatures is required.

Many Cisco products use digital signatures:
 - IPsec gateways and clients use digital signatures to authenticate their Internet Key Exchange (IKE) sessions if the administrator chooses digital certificates and the IKE RSA signature authentication method.
 - Cisco SSL endpoints, such as Cisco IOS HTTP servers, and the Cisco Adaptive Security Device Manager (ASDM) use digital signatures to prove the identity of the SSL server.
 - Some of the service provider-oriented voice management protocols for billing and settlement use digital signatures to authenticate the involved parties.

A modern digital signature is based on a hash function and a public-key algorithm.
There are six steps to the digital signature process:
1. The sending device (signer) creates a hash of the document.
2. The sending device encrypts the hash with the private key of the signer.
3. The encrypted hash, known as the signature, is appended to the document.
4. The receiving device (verifier) accepts the document with the digital signature and obtains the public key of the sending device.
5. The receiving device decrypts the signature using the public key of the sending device. This step unveils the assumed hash value of the sending device.
6. The receiving device makes a hash of the received document, without its signature, and compares this hash to the decrypted signature hash. If the hashes match, the document is authentic; it was signed by the assumed signer and has not changed since it was signed.
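The six steps can be sketched with textbook RSA on tiny primes (p = 61, q = 53; illustration only — real implementations use 2048-bit moduli and a padding scheme such as RSASSA-PSS):

```python
import hashlib

# Textbook RSA parameters: n = 61 * 53, e public, d private.
n, e, d = 3233, 17, 2753

def sign(document: bytes) -> int:
    """Steps 1-2: hash the document, then encrypt the hash with the
    signer's private key. The hash is reduced mod n to fit the toy modulus."""
    h = int.from_bytes(hashlib.sha256(document).digest(), "big") % n
    return pow(h, d, n)

def verify(document: bytes, signature: int) -> bool:
    """Steps 5-6: decrypt the signature with the public key and compare
    it to a freshly computed hash of the document."""
    h = int.from_bytes(hashlib.sha256(document).digest(), "big") % n
    return pow(signature, e, n) == h

doc = b"transfer 100 credits"   # hypothetical document
sig = sign(doc)                  # steps 3-4: signature travels with the doc
```

A forged or corrupted signature fails verification, because only the holder of d can produce a value that decrypts to the document's hash under e.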
Digital signatures are widely used for code signing:
 - The publisher of the software attaches a digital signature to the executable, signed with the signature key of the publisher.
 - The user of the software needs to obtain the public key of the publisher or the CA certificate of the publisher if PKI is used.

Well-known asymmetric algorithms, such as RSA or Digital Signature Algorithm (DSA), are typically used to perform digital signing.

DSA (Digital Signature Algorithm) 1994
DSA is based on the discrete logarithm problem and can only provide digital signatures. 
A network administrator must decide whether RSA or DSA is more appropriate for a given situation.
DSA, however, has had several criticisms. Critics claim that DSA lacks the flexibility of RSA.
DSA signature generation is faster than DSA signature verification. On the other hand, RSA signature verification is much faster than signature generation.
Advantages       Signature generation is fast
Disadvantages    Signature verification is slow

RSA (Rivest, Shamir, Adleman) invented in 1977
RSA is one of the most common asymmetric algorithms.
The RSA algorithm is based on a public key and a private key. Key sizes typically range from 512 to 2048 bits.
The public key can be published and given away, but the private key must be kept secret.  
It is not possible to determine the private key from the public key using any computationally feasible algorithm and vice versa.
Of all the public-key algorithms that were proposed over the years, RSA is by far the easiest to understand and implement.
The RSA algorithm is very flexible because it has a variable key length, so the key can be shortened for faster processing. There is a tradeoff; the shorter the key, the less secure it is.
The security of RSA is based on the difficulty of factoring very large numbers.
Advantages       Signature verification is fast
Disadvantages    Signature generation is slow
In software, RSA is about a hundred times slower than DES; in hardware, it is about a thousand times slower.
This performance problem is the main reason that RSA is typically used only to protect small amounts of data.
RSA is mainly used to ensure confidentiality of data by performing encryption, and to perform authentication of data or nonrepudiation of data, or both, by generating digital signatures.
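A toy key generation and encryption round trip shows why factoring matters: anyone who can factor n = p * q can recompute the private exponent d. The primes here are trivially small; real keys use primes hundreds of digits long:

```python
from math import gcd

# Key generation from two (toy) primes.
p, q = 61, 53
n = p * q                    # public modulus: 3233
phi = (p - 1) * (q - 1)      # 3120, kept secret along with p and q
e = 17                       # public exponent, must be coprime to phi
assert gcd(e, phi) == 1
d = pow(e, -1, phi)          # private exponent: modular inverse of e mod phi

# Encrypt with the public key (n, e); decrypt with the private exponent d.
m = 65                       # message encoded as a number < n
c = pow(m, e, n)             # ciphertext
m2 = pow(c, d, n)            # recovered plaintext
```

The public key (n, e) can be published freely; recovering d requires knowing phi, which in turn requires factoring n.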

Public Key Infrastructure
A public-key infrastructure (PKI) is a set of hardware, software, people, policies, and procedures needed to create, manage, distribute, use, store, and revoke digital certificates.

Good and simple explanation of PKI, symmetric/asymmetric encryption, and CAs: http://www.youtube.com/watch?v=EizeExsarH8

With trusted third-party protocols, all individuals agree to accept the word of a neutral third party.
Certificate servers are an example of a trusted third party.
For example, with a driver's license: the bank trusts the government agency that issued the license, verifies the customer's identity, and cashes the check.
Certificate servers function like the driver's license bureau.
The driver's license is analogous to a certificate in a Public Key Infrastructure (PKI) or another technology that supports certificates.

PKI is a service framework (hardware, software, people, policies and procedures) needed to support large-scale public key-based technologies.
 - Certificate - A document that binds together the name of the entity and its public key and has been signed by the CA. Certificates are public information.
 - Certificate authority (CA) - The trusted third party that signs the public keys of entities in a PKI-based system.

There are five main components of a PKI:
 - PKI users, such as people, devices, and servers
 - CAs for key management
 - Storage and protocols
 - Supporting organizational framework, known as practices and user authentication using Local Registration Authorities (LRAs)
 - Supporting legal framework

Many vendors offer CA servers as a managed service or as an end user product, including VeriSign, Entrust Technologies, RSA, CyberTrust, Microsoft, and Novell.

A certificate class is usually identified by a number. The higher the number, the more trusted the certificate:
 - Class 0 is for testing purposes in which no checks have been performed.
 - Class 1 is for individuals with a focus on verification of email.
 - Class 2 is for organizations for which proof of identity is required.
 - Class 3 is for servers and software signing for which independent verification and checking of identity and authority is done by the issuing certificate authority.
 - Class 4 is for online business transactions between companies.
 - Class 5 is for private organizations or governmental security.

For example, a class 1 certificate might require an email reply from the holder to confirm the wish to enroll. This kind of confirmation is a weak authentication of the holder. For a class 3 or 4 certificate, the future holder must prove identity and authenticate the public key by showing up in person with at least two official ID documents.

Some PKIs offer the possibility, or even require the use, of two key pairs per entity.
The first public and private key pair is intended only for encryption operations.
The second public and private key pair is intended for digital signing operations.
These keys are sometimes called usage or special keys. 
The following scenarios typically employ usage keys:
- When an encryption certificate is used much more frequently than a signing certificate, the public and private key pair is more exposed because of its frequent usage. In this case, it might be a good idea to shorten the lifetime of the key pair and change it more often, while having a separate signing private and public key pair with a longer lifetime.
- When different levels of encryption and digital signing are required because of legal, export, or performance issues, usage keys allow an administrator to assign different key lengths to the two pairs.
- When key recovery is desired, such as when a copy of a user's private key is kept in a central repository for various backup reasons. Usage keys allow the user to back up only the private key of the encrypting pair. The signing private key remains with the user, enabling true nonrepudiation.

The state of interoperability of PKI standards is very basic, even after 10 years of PKI software development. To address this interoperability concern, the IETF formed the Public-Key Infrastructure X.509 (PKIX) workgroup, which is dedicated to promoting and standardizing PKI in the Internet. This workgroup has published a set of draft standards based on X.509, detailing common data formats and PKI-related protocols in a network.

X.509v3 is a standard that defines the format of a digital certificate.
X.509v3 is used with:
 - Secure web servers: SSL and TLS
 - Web browsers: SSL and TLS
 - Email programs: S/MIME (Secure/Multipurpose Internet Mail Extensions)
 - IPsec VPNs: IKE with RSA-based authentication
 - Pretty Good Privacy (PGP): lets end users engage in confidential communications using encryption; its most frequent use has been securing email.

Certificates are also used at the Network Layer or Application Layer by network devices.
Cisco routers, Cisco VPN concentrators, and Cisco PIX firewalls can use certificates to authenticate IPsec peers.
Cisco switches can use certificates to authenticate end devices connecting to LAN ports. Authentication uses 802.1X between the adjacent devices. The authentication can be proxied to a central ACS via the Extensible Authentication Protocol with TLS (EAP-TLS).
Cisco routers also provide TN3270 support, which traditionally includes no encryption or strong authentication; routers can now use SSL to establish secure TN3270 sessions.

Another important PKI standard is the Public-Key Cryptography Standards (PKCS).
PKCS provides basic interoperability of applications that use public-key cryptography.
RSA PKCS:
PKCS #1: RSA Cryptography Standard
PKCS #3: DH Key Agreement Standard
PKCS #5: Password-Based Cryptography Standard
PKCS #6: Extended-Certificate Syntax Standard
PKCS #7: Cryptographic Message Syntax Standard
PKCS #8: Private-Key Information Syntax Standard
PKCS #10: Certification Request Syntax Standard
PKCS #12: Personal Information Exchange Syntax Standard
PKCS #13: Elliptic Curve Cryptography Standard
PKCS #15: Cryptographic Token Information Format Standard
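PKCS #5, for example, defines PBKDF2, the standard way to derive an encryption key from a password. Python's standard library implements it directly, so the standard can be exercised in a few lines:

```python
# PKCS #5 / PBKDF2 sketch: stretch a password into a fixed-length key.
import hashlib
import os

salt = os.urandom(16)                       # random per-password salt
key = hashlib.pbkdf2_hmac(
    "sha256",                               # underlying PRF: HMAC-SHA256
    b"correct horse battery staple",        # the password
    salt,
    600_000,                                # iteration count slows brute force
    dklen=32,                               # derive a 256-bit key
)
print(len(key))                             # 32 bytes, usable as an AES-256 key
```

The salt and iteration count are what make the derivation resistant to precomputed-dictionary and brute-force attacks; both must be stored alongside the derived key so the same key can be re-derived later.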

The Simple Certificate Enrollment Protocol (SCEP), published through the IETF, was designed to make the issuance and revocation of digital certificates as scalable as possible.
SCEP is now being referenced by network equipment manufacturers and software companies who are developing simplified means of handling certificates for large-scale implementation to everyday users.

PKIs can form different topologies of trust, including single-root PKI topologies, hierarchical CA topologies, and cross-certified CA topologies.
 - Single-root PKI topology (certificates issued by one CA; centralized trust decisions; single point of failure)
 - Hierarchical CA topology (delegation and distribution of trust; certification paths). CAs can issue certificates to end users and to subordinate CAs, which in turn issue their certificates to end users, other CAs, or both. The main benefits of a hierarchical PKI topology are increased scalability and manageability. One issue with hierarchical PKI topologies lies in finding the certification path for a certificate: it can be difficult to determine the chain of the signing process.
 - Cross-certified CA topology (multiple, flat, single-root CAs establish trust relationships horizontally by cross-certifying their own CA certificates)
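The certification-path problem in a hierarchical topology amounts to walking each certificate's issuer link upward until a trusted root is reached. A minimal Python illustration, using a hypothetical toy subject-to-issuer map rather than real X.509 parsing:

```python
# Toy certification-path building: follow issuer links to a trusted root.
ISSUER = {                          # subject -> issuer (hypothetical chain)
    "alice.example.com": "SubCA-East",
    "SubCA-East": "RootCA",
    "RootCA": "RootCA",             # root CA certificates are self-signed
}

def build_path(subject: str, trusted_roots: set) -> list:
    """Return the chain from the subject up to a trusted root CA."""
    path = [subject]
    while path[-1] not in trusted_roots:
        issuer = ISSUER[path[-1]]
        if issuer == path[-1]:      # self-signed but not in the trust store
            raise ValueError("chain ends at an untrusted root")
        path.append(issuer)
    return path

print(build_path("alice.example.com", {"RootCA"}))
# -> ['alice.example.com', 'SubCA-East', 'RootCA']
```

A real path builder must also verify each signature in the chain and check validity periods and revocation status at every step; the sketch shows only the chain-discovery part that makes deep hierarchies hard to manage.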

When a PKI is hierarchical, the issuing certificate authority may be a root CA (the top-level CA in the hierarchy) or a subordinate CA.
The PKI might employ additional hosts, called registration authorities (RAs) to accept requests for enrollment in the PKI. RAs are employed to reduce the burden on CAs in an environment that supports a large number of certificate transactions or where the CA is offline.

Usually, these tasks are offloaded to the RA:
 - Authentication of users when they enroll with the PKI,
 - Key generation for users that cannot generate their own keys,
 - Distribution of certificates after enrollment.

A Certificate Revocation List (CRL) is one of two common methods a public key infrastructure uses to check whether a certificate is still valid. The other, newer method, which has superseded the CRL in some cases, is the Online Certificate Status Protocol (OCSP).
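In Python's standard `ssl` module, CRL checking of the peer's (leaf) certificate can be switched on with a verify flag; a minimal configuration sketch:

```python
# Sketch: require CRL checking of the peer certificate during a TLS handshake.
import ssl

ctx = ssl.create_default_context()               # loads the system trust store
ctx.verify_flags |= ssl.VERIFY_CRL_CHECK_LEAF    # also demand a CRL for the leaf cert
# A CRL file must then be loaded with ctx.load_verify_locations(); without one,
# every handshake fails because no CRL covering the peer certificate is found.
print(bool(ctx.verify_flags & ssl.VERIFY_CRL_CHECK_LEAF))
```

OCSP, by contrast, queries the certificate's status online per connection instead of distributing a periodically updated list, which is why it has displaced CRLs in many deployments.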

It is important to note that the RA only has the power to accept registration requests and forward them to the CA. It is not allowed to issue certificates or publish CRLs. The CA is responsible for these functions.

CA and signature procedure:
 1. Alice and Bob request the CA certificate that contains the CA public key.
 2. Upon receipt of the CA certificate, each system (of Alice and Bob) verifies the validity of the certificate using public key cryptography.
 3. Alice and Bob follow up the technical verification done by their system by telephoning the CA administrator and verifying the public key and serial number of the certificate.

Having installed certificates signed by the same CA, Bob and Alice are now ready to authenticate each other.

If two users must authenticate each other using digital certificates and a CA, they must first obtain the certificate of the CA and then their own certificates.

CA certificates are retrieved in-band over a network, and the authentication is done out-of-band using the telephone.
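The out-of-band step usually amounts to comparing certificate fingerprints: each side hashes the CA certificate bytes it received in-band and compares the digest with the value the CA administrator reads over the phone. A sketch (the certificate bytes here are a hypothetical placeholder, not a real DER-encoded certificate):

```python
# Fingerprint sketch for out-of-band CA certificate verification.
import hashlib

cert_der = b"...DER-encoded CA certificate..."   # placeholder bytes
fingerprint = hashlib.sha256(cert_der).hexdigest()

# Display in the colon-separated form typically read out over the phone.
print(":".join(fingerprint[i:i + 2] for i in range(0, len(fingerprint), 2)).upper())
```

Because the hash is computed independently by each party over the bytes actually received, a certificate substituted in transit produces a mismatching fingerprint and is detected before any trust is placed in it.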

PKI as an authentication mechanism has several characteristics:
 - To authenticate each other, users have to obtain the certificate of the CA and their own certificate.
 - Public-key systems use asymmetric keys in which one is public and the other one is private. This provides nonrepudiation.
 - Key management is simplified because two users can freely exchange the certificates. The validity of the received certificates is verified using the public key of the CA, which the users have in their possession.
 - Because of the strength of the algorithms that are involved, administrators can set a very long lifetime for the certificates, typically a lifetime that is measured in years.

The disadvantages of using trusted third parties relate to key management:
 - A user certificate is compromised (stolen private key).
 - The certificate of the CA is compromised (stolen private key).
 - The CA administrator makes an error (the human factor).

Which type of PKI to implement varies depending on the needs of the organization. Administrators might need to combine public-key authentication with another authentication mechanism to increase the level of security and provide more authorization options. For example, IPsec using certificates for authentication and Extended Authentication (XAUTH) with one-time password hardware tokens is a superior authentication scheme when compared to certificates alone.