Saturday, August 31, 2013

DBMS-Relational Algebra-Set Operations



Set Operations

The following standard operations on sets are also available in relational algebra: union (∪), intersection (∩), set-difference (−), and cross-product (×).

Union: R ∪ S returns a relation instance containing all tuples that occur in either relation instance R or relation instance S (or both). R and S must be union-compatible, and the schema of the result is defined to be identical to the schema of R.

Two relation instances are said to be union-compatible if the following conditions hold:
- they have the same number of fields, and
- corresponding fields, taken in order from left to right, have the same domains.

Note that field names are not used in defining union-compatibility. For convenience, we will assume that the fields of R ∪ S inherit names from R, if the fields of R have names. (This assumption is implicit in defining the schema of R ∪ S to be identical to the schema of R, as stated earlier.)

Intersection: R ∩ S returns a relation instance containing all tuples that occur in both R and S. The relations R and S must be union-compatible, and the schema of the result is defined to be identical to the schema of R.

Set-difference: R − S returns a relation instance containing all tuples that occur in R but not in S. The relations R and S must be union-compatible, and the schema of the result is defined to be identical to the schema of R.

Cross-product: R × S returns a relation instance whose schema contains all the fields of R (in the same order as they appear in R) followed by all the fields of S (in the same order as they appear in S). The result of R × S contains one tuple ⟨r, s⟩ (the concatenation of tuples r and s) for each pair of tuples r ∈ R, s ∈ S. The cross-product operation is sometimes called Cartesian product.

We will use the convention that the fields of R × S inherit names from the corresponding fields of R and S. It is possible for both R and S to contain one or more fields having the same name; this situation creates a naming conflict. The corresponding fields in R × S are unnamed and are referred to solely by position.
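As a concrete illustration (not from the source text), here is a small C sketch in which a relation instance is an array of two-field tuples. Union-compatibility reduces to "both operands use the same tuple layout", union removes duplicates, and the cross-product concatenates each pair of tuples. The relation contents are invented for the example.

#include <stdio.h>

/* Illustrative relation instance: tuples with two integer fields.
   Union-compatibility here means both operands use this same layout
   (same number of fields, same domains). */
typedef struct { int f1; int f2; } Tuple;

static int tuple_eq(Tuple a, Tuple b) {
    return a.f1 == b.f1 && a.f2 == b.f2;
}

/* R union S: every tuple that occurs in R or S, duplicates removed. */
static int rel_union(const Tuple *R, int nr, const Tuple *S, int ns, Tuple *out) {
    int n = 0;
    for (int i = 0; i < nr; i++) out[n++] = R[i];
    for (int j = 0; j < ns; j++) {
        int dup = 0;
        for (int k = 0; k < n; k++)
            if (tuple_eq(S[j], out[k])) { dup = 1; break; }
        if (!dup) out[n++] = S[j];
    }
    return n;
}

/* R x S: one concatenated tuple <r, s> for each pair r in R, s in S. */
typedef struct { Tuple r; Tuple s; } Pair;

static int rel_cross(const Tuple *R, int nr, const Tuple *S, int ns, Pair *out) {
    int n = 0;
    for (int i = 0; i < nr; i++)
        for (int j = 0; j < ns; j++) {
            out[n].r = R[i];
            out[n].s = S[j];
            n++;
        }
    return n;
}

int main(void) {
    Tuple R[] = { {1, 10}, {2, 20} };
    Tuple S[] = { {2, 20}, {3, 30} };
    Tuple u[4];
    Pair  x[4];

    int nu = rel_union(R, 2, S, 2, u);   /* 3 tuples: (2,20) appears once */
    int nx = rel_cross(R, 2, S, 2, x);   /* 4 tuples: every pair <r, s>  */

    printf("R union S has %d tuples, R x S has %d tuples\n", nu, nx);
    return 0;
}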

DBMS-Relational Algebra


Introduction

Relational algebra is one of the two formal query languages associated with the relational model. Queries in algebra are composed using a collection of operators. A fundamental property is that every operator in the algebra accepts (one or two) relation instances as arguments and returns a relation instance as the result. This property makes it easy to compose operators to form a complex query: a relational algebra expression is recursively defined to be a relation, a unary algebra operator applied to a single expression, or a binary algebra operator applied to two expressions.
Each relational query describes a step-by-step procedure for computing the desired answer, based on the order in which operators are applied in the query. The procedural nature of the algebra allows us to think of an algebra expression as a recipe, or a plan, for evaluating a query, and relational systems in fact use algebra expressions to represent query evaluation plans.

DBMS-Level of Abstraction



Levels of Abstraction in a DBMS


The data in a DBMS is described at three levels of abstraction, as illustrated in the figure below. The database description consists of a schema at each of these three levels of abstraction: the conceptual, physical, and external schemas.

[Figure: Levels of Abstraction in a DBMS, showing external schemas 1-3 above the conceptual schema, the physical schema, and the disk.]

A data definition language (DDL) is used to define the external and conceptual schemas. We will discuss the DDL facilities of the most widely used database language, SQL. Information about the conceptual, external, and physical schemas is stored in the system catalogs.
Conceptual Schema
The conceptual schema (sometimes called the logical schema) describes the stored data in terms of the data model of the DBMS. In a relational DBMS, the conceptual schema describes all relations that are stored in the database. In our sample university database, these relations contain information about entities, such as students and faculty, and about relationships, such as students' enrollment in courses. All student entities can be described using records in a Students relation, as we saw earlier. In fact, each collection of entities and each collection of relationships can be described as a relation, leading to the following conceptual schema:
Students(sid: string, name: string, login: string, age: integer, gpa: real)
Faculty(fid: string, fname: string, sal: real)
Courses(cid: string, cname: string, credits: integer)
Rooms(rno: integer, address: string, capacity: integer)
Enrolled(sid: string, cid: string, grade: string)
Teaches(fid: string, cid: string)
Meets In(cid: string, rno: integer, time: string)
The choice of relations, and the choice of fields for each relation, is not always obvious; the process of arriving at a good conceptual schema is called conceptual database design.

Physical Schema
The physical schema specifies additional storage details. Essentially, the physical schema summarizes how the relations described in the conceptual schema are actually stored on secondary storage devices such as disks and tapes.
We must decide what file organizations to use to store the relations, and create auxiliary data structures called indexes to speed up data retrieval operations. A sample physical schema for the university database follows:
Store all relations as unsorted files of records. (A file in a DBMS is either a collection of records or a collection of pages, rather than a string of characters as in an operating system.) Create indexes on the first column of the Students, Faculty, and Courses relations, the sal column of Faculty, and the capacity column of Rooms.
Decisions about the physical schema are based on an understanding of how the data is typically accessed. The process of arriving at a good physical schema is called physical database design. 
External Schema
External schemas, which usually are also in terms of the data model of the DBMS, allow data access to be customized (and authorized) at the level of individual users or groups of users. Any given database has exactly one conceptual schema and one physical schema because it has just one set of stored relations, but it may have several external schemas, each tailored to a particular group of users. Each external schema
consists of a collection of one or more views and relations from the conceptual schema. 
The external schema design is guided by end user requirements. For example, we might want to allow students to find out the names of faculty members teaching courses, as well as course enrollments. This can be done by defining the following view: 
Courseinfo(cid: string, fname: string, enrollment: integer)
A user can treat a view just like a relation and ask questions about the records in the view. Even though the records in the view are not stored explicitly, they are computed as needed. We did not include Courseinfo in the conceptual schema because we can compute Courseinfo from the relations in the conceptual schema; storing it in addition would be redundant and could lead to inconsistencies. For example, a tuple may be inserted into the Enrolled relation, indicating that a particular student has enrolled in some course, without incrementing the value in the enrollment field of the corresponding record of Courseinfo (if the latter also is part of the conceptual schema and its tuples are stored in the DBMS).

Database


A database is a collection of data, typically describing the activities of one or more related organizations. For example, a university database might contain information about the following:
Entities such as students, faculty, courses, and classrooms.
Relationships between entities, such as students' enrollment in courses, faculty teaching courses, and the use of rooms for courses.

DBMS-Advantages



ADVANTAGES OF A DBMS
Using a DBMS to manage data has many advantages:

Data independence: Application programs should be as independent as possible from details of data representation and storage. The DBMS can provide an abstract view of the data to insulate application code from such details. 
Efficient data access: A DBMS utilizes a variety of sophisticated techniques to store and retrieve data efficiently. This feature is especially important if the data is stored on external storage devices.

Data integrity and security: If data is always accessed through the DBMS, the DBMS can enforce integrity constraints on the data. For example, before inserting salary information for an employee, the DBMS can check that the department budget is not exceeded. Also, the DBMS can enforce access controls that govern
what data is visible to different classes of users.

Data administration: When several users share the data, centralizing the administration of data can offer significant improvements.

Concurrent access and crash recovery: A DBMS schedules concurrent accesses to the data in such a manner that users can think of the data as being accessed by only one user at a time. Further, the DBMS protects users from the effects of system failures.

Reduced application development time: Clearly, the DBMS supports many important functions that are common to many applications accessing data stored in the DBMS. This, in conjunction with the high-level interface to the data, facilitates quick development of applications. 

Database Management System-DBMS


A database management system, or DBMS, is software designed to assist in maintaining and utilizing large collections of data, and the need for such systems, as well as their use, is growing rapidly. The alternative to using a DBMS is to use ad hoc approaches that do not carry over from one application to another; for example, to store the data in files and write application-specific code to manage it.



Thursday, August 29, 2013

Network Layer-Functions


Forwarding: moving packets from a router's input port to the appropriate output port (a minimal table-lookup sketch follows this list).
• car analogy: the process of getting through a single interchange

Routing: determining the route taken by packets from source to destination.
• car analogy: the process of planning a trip from source to destination
• uses routing algorithms

Connection setup: like in TCP, but involving all routers on the path.
• used in ATM, frame relay and X.25, but not in IP
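As a rough illustration of the forwarding function, here is a minimal C sketch of a longest-prefix-match lookup against a forwarding table. The table contents, prefix lengths, and port numbers are invented for the example; they do not describe any real router.

#include <stdio.h>
#include <stdint.h>

/* Hypothetical forwarding-table entry: destination prefix, prefix length,
   and the output port to use. */
typedef struct {
    uint32_t prefix;   /* network prefix, host byte order */
    int      len;      /* prefix length in bits */
    int      port;     /* output port */
} FwdEntry;

/* Longest-prefix-match lookup: forwarding picks the most specific entry
   that covers the destination address. */
static int forward(const FwdEntry *tab, int n, uint32_t dst) {
    int best_len = -1, best_port = -1;
    for (int i = 0; i < n; i++) {
        uint32_t mask = tab[i].len ? ~0u << (32 - tab[i].len) : 0;
        if ((dst & mask) == tab[i].prefix && tab[i].len > best_len) {
            best_len = tab[i].len;
            best_port = tab[i].port;
        }
    }
    return best_port;  /* -1 means no route (drop, or use a default route) */
}

int main(void) {
    /* Invented example table: 10.0.0.0/8 -> port 1, 10.1.0.0/16 -> port 2 */
    FwdEntry tab[] = {
        { 0x0A000000u,  8, 1 },
        { 0x0A010000u, 16, 2 },
    };
    uint32_t dst = 0x0A010203u;   /* 10.1.2.3: both entries match, /16 wins */
    printf("forward to port %d\n", forward(tab, 2, dst));
    return 0;
}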

Network Layer-Services


  • transports segments from sending to receiving host
  • on sending side, encapsulates segments into datagrams.
  • on receiving side, delivers segments to transport layer.
  • network layer protocols in every host, router.
  • router examines header fields in every IP datagram that passes through it.

Network Layer-Principles


Principles behind network layer services

  • routing (path selection)
  • scalability
  • routers
Example implementation: the Internet (and ATM briefly)

Network Layer-Introduction


The network layer is concerned with getting packets from the source all the way to the destination. The packets may need to make many hops at intermediate routers on the way to the destination. This is the lowest layer that deals with end-to-end transmission. To achieve its goals, the network layer must know about the topology of the communication network. It must also take care to choose routes that avoid overloading some of the communication lines while leaving others idle.

Saturday, August 24, 2013

Transmission Control Protocol-TCP



The Transmission Control Protocol (TCP) was initially defined in RFC 793. Several parts of the protocol have been improved since the publication of the original protocol specification [1]. However, the basics of the protocol remain, and an implementation that only supports RFC 793 should inter-operate with today's implementations.

TCP provides a reliable bytestream, connection-oriented transport service on top of the unreliable connectionless network service provided by IP. TCP is used by a large number of applications, including :
Email (SMTP, POP, IMAP)
World Wide Web (HTTP, ...)
Most file transfer protocols (ftp, peer-to-peer file sharing applications, ...)
Remote computer access: telnet, ssh, X11, VNC, ...
Non-interactive multimedia applications: flash
On the global Internet, most of the applications used in the wide area rely on TCP. Many studies [2] have reported that TCP was responsible for more than 90% of the data exchanged in the global Internet.
To provide this service, TCP relies on a simple segment format that is shown in the figure below. Each TCP segment contains a header described below and, optionally, a payload. The default length of the TCP header is twenty bytes, but some TCP headers contain options.

A TCP header contains the following fields :
Source and destination ports. The source and destination ports play an important role in TCP, as they allow the identification of the connection to which a TCP segment belongs. When a client opens a TCP connection, it typically selects an ephemeral TCP port number as its source port and contacts the server by using the server's port number. All the segments that are sent by the client on this connection have the same source and destination ports. The server sends segments that contain as source (resp. destination) port the destination (resp. source) port of the segments sent by the client (see figure Utilization of the TCP source and destination ports). A TCP connection is always identified by five pieces of information :
the IP address of the client
the IP address of the server
the port chosen by the client
the port chosen by the server
the transport protocol (here, TCP)
the sequence number (32 bits), acknowledgement number (32 bits) and window (16 bits) fields are used to provide a reliable data transfer, using a window-based protocol. In a TCP bytestream, each byte of the stream consumes one sequence number. Their utilisation will be described in more detail in section TCP reliable data transfer
the Urgent pointer is used to indicate that some data should be considered as urgent in a TCP bytestream. However, it is rarely used in practice and will not be described here. Additional details about the utilisation of this pointer may be found in RFC 793, RFC 1122 or [Stevens1994]
the flags field contains a set of bit flags that indicate how a segment should be interpreted by the TCP entity receiving it :
the SYN flag is used during connection establishment
the FIN flag is used during connection release
the RST is used in case of problems or when an invalid segment has been received
when the ACK flag is set, it indicates that the acknowledgment field contains a valid number. Otherwise, the content of the acknowledgment field must be ignored by the receiver
the URG flag is used together with the Urgent pointer
the PSH flag is used as a notification from the sender to indicate to the receiver that it should pass all the data it has received to the receiving process. However, in practice TCP implementations do not allow TCP users to indicate when the PSH flag should be set and thus there are few real utilizations of this flag.
the checksum field contains the value of the Internet checksum computed over the entire TCP segment and a pseudo-header as with UDP
the Reserved field was initially reserved for future utilization. It is now used by RFC 3168.
the TCP Header Length (THL) or Data Offset field is a four-bit field that indicates the size of the TCP header in 32-bit words. The maximum value of this field is 15, so the maximum size of the TCP header is 60 bytes.
the Optional header extension is used to add optional information to the TCP header. Thanks to this header extension, it is possible to add new fields to the TCP header that were not planned in the original specification. This allowed TCP to evolve since the early eighties. The details of the TCP header extension are explained in sections TCP connection establishment and TCP reliable data transfer.
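For reference, the fixed part of the header described above can be sketched as a C struct. This is an illustrative layout only (no options shown); on the wire, all multi-byte fields are carried in network byte order, and the 4-bit Data Offset, the reserved bits, and the flag bits are packed into the two bytes shown here as off_flags.

#include <stdio.h>
#include <stdint.h>

/* Sketch of the fixed 20-byte TCP header described above. */
struct tcp_header {
    uint16_t src_port;    /* source port */
    uint16_t dst_port;    /* destination port */
    uint32_t seq;         /* sequence number */
    uint32_t ack;         /* acknowledgement number */
    uint16_t off_flags;   /* data offset (4 bits), reserved bits, flags */
    uint16_t window;      /* receive window */
    uint16_t checksum;    /* Internet checksum over header, payload and pseudo-header */
    uint16_t urgent_ptr;  /* urgent pointer, used together with the URG flag */
};

/* Flag masks within the low byte of off_flags. */
#define TCP_FIN 0x01
#define TCP_SYN 0x02
#define TCP_RST 0x04
#define TCP_PSH 0x08
#define TCP_ACK 0x10
#define TCP_URG 0x20

int main(void) {
    /* The fixed header is twenty bytes, matching the default length above. */
    printf("fixed TCP header size: %zu bytes\n", sizeof(struct tcp_header));
    return 0;
}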

TCP connection establishment

A TCP connection is established by using a three-way handshake. The connection establishment phase uses the sequence number, the acknowledgment number and the SYN flag. When a TCP connection is established, the two communicating hosts negotiate the initial sequence number to be used in both directions of the connection. For this, each TCP entity maintains a 32-bit counter, which is supposed to be incremented by one at least every 4 microseconds and after each connection establishment [3]. When a client host wants to open a TCP connection with a server host, it creates a TCP segment with :
the SYN flag set
the sequence number set to the current value of the 32-bit counter of the client host's TCP entity
Upon reception of this segment (which is often called a SYN segment), the server host replies with a segment containing :
the SYN flag set
the sequence number set to the current value of the 32-bit counter of the server host's TCP entity
the ACK flag set
the acknowledgment number set to the sequence number of the received SYN segment incremented by 1 (modulo 2^32). When a TCP entity sends a segment having x+1 as acknowledgment number, this indicates that it has received all data up to and including sequence number x and that it is expecting data having sequence number x+1. As the SYN flag was set in a segment having sequence number x, this implies that setting the SYN flag in a segment consumes one sequence number.
This segment is often called a SYN+ACK segment. The acknowledgment confirms to the client that the server has correctly received the SYN segment. The sequence number of the SYN+ACK segment is used by the server host to verify that the client has received the segment. Upon reception of the SYN+ACK segment, the client host replies with a segment containing :
the ACK flag set
the acknowledgment number set to the sequence number of the received SYN+ACK segment incremented by 1 (modulo 2^32)
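The sequence and acknowledgment numbers exchanged in the three segments can be traced with a toy C sketch. The initial sequence numbers below are invented for the example, and unsigned 32-bit arithmetic supplies the modulo 2^32 behaviour automatically.

#include <stdio.h>
#include <stdint.h>

/* Toy illustration of the three-way handshake arithmetic.
   client_isn and server_isn stand for the 32-bit counters of the two
   TCP entities; the values are made up. */
int main(void) {
    uint32_t client_isn = 1000;   /* client's initial sequence number */
    uint32_t server_isn = 5000;   /* server's initial sequence number */

    /* 1. SYN: seq = client_isn */
    printf("client -> server : SYN, seq=%u\n", (unsigned)client_isn);

    /* 2. SYN+ACK: seq = server_isn, ack = client_isn + 1 (mod 2^32),
          because the SYN flag consumes one sequence number. */
    printf("server -> client : SYN+ACK, seq=%u, ack=%u\n",
           (unsigned)server_isn, (unsigned)(client_isn + 1u));

    /* 3. ACK: ack = server_isn + 1 (mod 2^32) */
    printf("client -> server : ACK, ack=%u\n", (unsigned)(server_isn + 1u));
    return 0;
}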

Transport Layer Protocols-UDP


User Datagram Protocol (UDP)
UDP is a standard protocol with STD number 6. UDP is described by RFC 768 – User Datagram Protocol. Its status is recommended, but in practice every TCP/IP implementation that is not used exclusively for routing will include UDP.
UDP is basically an application interface to IP. It adds no reliability, flow-control, or error recovery to IP. It simply serves as a multiplexer/demultiplexer for sending and receiving datagrams, using ports to direct the datagrams. UDP provides a mechanism for one application to send a datagram to another. The UDP layer can be regarded as being extremely thin and consequently has low overheads, but it requires the application to take responsibility for error recovery and so on. Applications sending datagrams to a host need to identify a target that is more specific than the IP address, since datagrams are normally directed to certain processes and not to the system as a whole. UDP provides this by using ports.


UDP datagram format

Each UDP datagram is sent within a single IP datagram. Although the IP datagram may be fragmented during transmission, the receiving IP implementation will reassemble it before presenting it to the UDP layer. All IP implementations are required to accept datagrams of 576 bytes, which means that, allowing for a maximum-size IP header of 60 bytes, a UDP datagram of 516 bytes is acceptable to all implementations.

• Source Port: Indicates the port of the sending process. It is the port to which replies should be addressed.
• Destination Port: Specifies the port of the destination process on the destination host.
• Length: The length (in bytes) of this user datagram, including the header.
• Checksum: An optional 16-bit one's complement of the one's complement sum of a pseudo-IP header, the UDP header, and the UDP data. The pseudo-IP header contains the source and destination IP addresses, the protocol, and the UDP length:
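A sketch of the corresponding C declarations may help: the 8-byte header as a struct, plus a one's complement checksum routine of the kind described above. This is illustrative only; a real implementation must assemble the pseudo-header (source and destination IP addresses, protocol, UDP length) exactly as RFC 768 specifies before feeding it to the routine.

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

/* The 8-byte UDP header described above (network byte order on the wire). */
struct udp_header {
    uint16_t src_port;
    uint16_t dst_port;
    uint16_t length;     /* header + data, in bytes */
    uint16_t checksum;   /* 0 if the sender does not compute it */
};

/* One's complement sum of 16-bit words (the Internet checksum). The caller
   is expected to feed the pseudo-header, then the UDP header with its
   checksum field set to 0, then the data; an odd trailing byte is padded
   with zero. */
static uint16_t inet_checksum(const uint8_t *buf, size_t len) {
    uint32_t sum = 0;
    while (len > 1) {
        sum += (uint32_t)buf[0] << 8 | buf[1];
        buf += 2;
        len -= 2;
    }
    if (len == 1)                      /* pad the odd byte with zero */
        sum += (uint32_t)buf[0] << 8;
    while (sum >> 16)                  /* fold carries back into 16 bits */
        sum = (sum & 0xFFFF) + (sum >> 16);
    return (uint16_t)~sum;             /* one's complement of the sum */
}

int main(void) {
    /* Toy check over a 4-byte buffer, not a real datagram. */
    uint8_t sample[] = { 0x12, 0x34, 0x56, 0x78 };
    printf("checksum = 0x%04x\n", inet_checksum(sample, sizeof sample));
    return 0;
}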

UDP application programming interface
The application interface offered by UDP is described in RFC 768. It provides for:
• The creation of new receive ports.
• The receive operation that returns the data bytes and an indication of source port and source IP address.
• The send operation that has, as parameters, the data, source, and destination ports and addresses.
The way this interface should be implemented is left to the discretion of each vendor.
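On most systems this interface is exposed through BSD sockets. The following minimal C sketch shows the send operation; the destination address 127.0.0.1 and port 9999 are placeholders chosen for the example.

#include <stdio.h>
#include <string.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

/* Minimal sketch of UDP's application interface using BSD sockets:
   create a socket, name the destination, and hand a datagram to UDP
   with sendto(). */
int main(void) {
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    if (s < 0) { perror("socket"); return 1; }

    struct sockaddr_in dst;
    memset(&dst, 0, sizeof dst);
    dst.sin_family = AF_INET;
    dst.sin_port = htons(9999);                     /* destination port (placeholder) */
    inet_pton(AF_INET, "127.0.0.1", &dst.sin_addr); /* destination address (placeholder) */

    const char *msg = "hello";
    if (sendto(s, msg, strlen(msg), 0,
               (struct sockaddr *)&dst, sizeof dst) < 0)
        perror("sendto");

    close(s);
    return 0;
}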



Transport Layer Protocols

This chapter provides an overview of the most important and common protocols of the TCP/IP transport layer. These include:
• User Datagram Protocol (UDP)
• Transmission Control Protocol (TCP)

Transport Layer-Services


The transport layer services:

Connection-Oriented Communication:
Devices at the end-points of a network communication establish a handshake protocol to ensure a connection is robust before data is exchanged. The weakness of this method is that for each delivered message, there is a requirement for an acknowledgment, adding considerable network load compared to self-error-correcting packets. The repeated requests cause significant slowdown of network speed when defective byte streams or datagrams are sent.

Same Order Delivery:
Ensures that packets are always delivered in strict sequence. Although the network layer is responsible for delivery, the transport layer can fix any discrepancies in sequence caused by packet drops or device interruption.

Data Integrity:
Using checksums, the data integrity across all the delivery layers can be ensured. These checksums guarantee that the data transmitted is the same as the data received, through repeated attempts made by other layers to have missing data resent.

Flow Control:
Devices at each end of a network connection often have no way of knowing each other's capabilities in terms of data throughput and can therefore send data faster than the receiving device is able to buffer or process it. In these cases, buffer overruns can cause complete communication breakdowns. Conversely, if the receiving device is not receiving data fast enough, this causes a buffer underrun, which may well cause an unnecessary reduction in network performance.

Traffic Control: 
Digital communications networks are subject to bandwidth and processing speed restrictions, which can mean a huge amount of potential for data congestion on the network. This network congestion can affect almost every part of a network. The transport layer can identify the symptoms of overloaded nodes and reduced flow rates.

Multiplexing: 
The transmission of multiple packet streams from unrelated applications or other sources (multiplexing) across a network requires some very dedicated control mechanisms, which are found in the transport layer. This multiplexing allows the use of simultaneous applications over a network such as when different internet browsers are opened on the same computer. In the OSI model, multiplexing is handled in the service layer.
Byte orientation: Some applications prefer to receive byte streams instead of packets; the transport layer allows for the transmission of byte-oriented data streams if required.


Transport Layer-Introduction


Definition
The transport layer is the layer in the open system interconnection (OSI) model responsible for end-to-end communication over a network. It provides logical communication between application processes running on different hosts within a layered architecture of protocols and other network components.

The transport layer is also responsible for the management of error correction, providing quality and reliability to the end user. This layer enables the host to send and receive error-corrected data, packets, or messages over a network and is the network component that allows multiplexing.

ADA-Algorithm Design and Analysis Notes



Algorithm

Definition

A step-by-step procedure designed to perform an operation, which (like a map or flowchart) will lead to the sought result if followed correctly. Algorithms have a definite beginning and a definite end, and a finite number of steps. An algorithm produces the same output given the same input, and several short algorithms can be combined to perform complex tasks such as writing a computer program. A cookbook recipe, a diagnosis, and a problem-solving routine are common examples of simple algorithms. Algorithms are suitable for solving structured problems (amenable to sequential analysis) but unsuitable for problems where value judgments are required. See also heuristics and lateral thinking.

Networking Devices-Proxies


Proxies

A proxy device (running either on dedicated hardware or as software on a general-purpose machine) may act as a firewall by responding to input packets (connection requests, for example) in the manner of an application, whilst blocking other packets.
Proxies make tampering with an internal system from the external network more difficult, and misuse of one internal system would not necessarily cause a security breach exploitable from outside the firewall (as long as the application proxy remains intact and properly configured). Conversely, intruders may hijack a publicly-reachable system and use it as a proxy for their own purposes; the proxy then masquerades as that system to other internal machines. While use of internal address spaces enhances security, crackers may still employ methods such as IP spoofing to attempt to pass packets to a target network.

Networking Devices-Firewalls


Firewalls

In computing, a firewall is a piece of hardware and/or software that functions in a networked environment to block communications forbidden by the security policy, analogous to the function of firewalls in building construction.


A firewall has the basic task of controlling traffic between different zones of trust. Typical zones of trust include the Internet (a zone with no trust) and an internal network (a zone with high trust). The ultimate goal is to provide controlled connectivity between zones of differing trust levels through the enforcement of a security policy and connectivity model based on the least privilege principle.

There are three basic types of firewalls depending on:
whether the communication is being done between a single node and the network, or between two or more networks
whether the communication is intercepted at the network layer, or at the application layer
whether the communication state is being tracked at the firewall or not
With regard to the scope of the filtered communication, these types of firewall exist:
Personal firewalls, a software application which normally filters traffic entering or leaving a single computer through the Internet.
Network firewalls, normally running on a dedicated network device or computer positioned on the boundary of two or more networks or DMZs (demilitarized zones). Such a firewall filters all traffic entering or leaving the connected networks.
In reference to the layers where the traffic can be intercepted, three main categories of firewalls exist:
Network layer firewalls. An example would be iptables.
Application layer firewalls. An example would be TCP Wrapper.
Application firewalls. An example would be restricting ftp services through the /etc/ftpaccess file.
These network-layer and application-layer types of firewall may overlap, even though the personal firewall does not serve a network; indeed, single systems have implemented both together.
There's also the notion of application firewalls which are sometimes used during wide area network (WAN) networking on the world-wide web and govern the system software. An extended description would place them lower than application layer firewalls, indeed at the Operating System layer, and could alternately be called operating system firewalls.
Lastly, depending on whether the firewalls track packet states, two additional categories of firewalls exist:
stateful firewalls
stateless firewalls
Network layer firewalls
Network layer firewalls operate at a (relatively low) level of the TCP/IP protocol stack as IP-packet filters, not allowing packets to pass through the firewall unless they match the rules. The firewall administrator may define the rules; or default built-in rules may apply (as in some inflexible firewall systems).
A more permissive setup could allow any packet to pass the filter as long as it does not match one or more "negative rules", or "deny rules". Today, network firewalls are built into most computer operating systems and network appliances.
Modern firewalls can filter traffic based on many packet attributes like source IP address, source port, destination IP address or port, destination service like WWW or FTP. They can filter based on protocols, TTL values, netblock of originator, domain name of the source, and many other attributes.
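A packet filter of this kind can be pictured as an ordered rule list checked top to bottom. The C sketch below is purely illustrative; the rule fields, addresses, and default-deny policy are assumptions for the example, not a description of any particular firewall product.

#include <stdio.h>
#include <stdint.h>

/* Hypothetical packet-filter rule: match on source address/prefix and
   destination port, then apply an action. Values are illustrative. */
typedef struct {
    uint32_t src_prefix;   /* source network, host byte order */
    int      src_len;      /* prefix length in bits */
    uint16_t dst_port;     /* 0 means "any port" */
    int      allow;        /* 1 = accept, 0 = drop */
} Rule;

static int match(const Rule *r, uint32_t src, uint16_t dport) {
    uint32_t mask = r->src_len ? ~0u << (32 - r->src_len) : 0;
    if ((src & mask) != r->src_prefix) return 0;
    if (r->dst_port != 0 && r->dst_port != dport) return 0;
    return 1;
}

/* The first matching rule decides; anything unmatched is dropped. */
static int filter(const Rule *rules, int n, uint32_t src, uint16_t dport) {
    for (int i = 0; i < n; i++)
        if (match(&rules[i], src, dport))
            return rules[i].allow;
    return 0;  /* default deny */
}

int main(void) {
    Rule rules[] = {
        { 0xC0A80100u, 24, 80, 1 },  /* allow 192.168.1.0/24 to port 80 */
        { 0x00000000u,  0,  0, 0 },  /* drop everything else */
    };
    /* 192.168.1.5 contacting port 80 is accepted by the first rule. */
    printf("%s\n", filter(rules, 2, 0xC0A80105u, 80) ? "accept" : "drop");
    return 0;
}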
Application-layer firewalls
Application-layer firewalls work on the application level of the TCP/IP stack (i.e., all browser traffic, or all telnet or ftp traffic), and may intercept all packets traveling to or from an application. They block other packets (usually dropping them without acknowledgement to the sender). In principle, application firewalls can prevent all unwanted outside traffic from reaching protected machines.
By inspecting all packets for improper content, firewalls can even prevent the spread of the likes of viruses. In practice, however, this becomes so complex and so difficult to attempt (given the variety of applications and the diversity of content each may allow in its packet traffic) that comprehensive firewall design does not generally attempt this approach.


Networking Devices-Transceivers (media converters)


Transceivers (media converters)

A transceiver, short for transmitter-receiver, is a device that both transmits and receives analog or digital signals. The term is used most frequently to describe the component in local-area networks (LANs) that actually applies signals onto the network wire and detects signals passing through the wire. For many LANs, the transceiver is built into the network interface card (NIC). Some types of networks, however, require an external transceiver.
In Ethernet networks, a transceiver is also called a Medium Access Unit (MAU). Media converters interconnect different cable types (twisted pair, fiber, and thin or thick coax) within an existing network. They are often used to connect newer 100-Mbps, Gigabit Ethernet, or ATM equipment to existing networks, which are generally 10BASE-T, 100BASE-T, or a mixture of both. They can also be used in pairs to insert a fiber segment into copper networks to increase cabling distances and enhance immunity to electromagnetic interference (EMI).

Networking Devices-Modems


Modems

A modem is a device that makes it possible for computers to communicate over telephone lines. The word modem comes from Modulate and Demodulate. Because standard telephone lines use analog signals and computers use digital signals, a sending modem must modulate its digital signals into analog signals. The modem on the receiving end must then demodulate the analog signals back into digital signals.
Modems can be external, connected to the computer's serial port by an RS-232 cable, or internal, installed in one of the computer's expansion slots. Modems connect to the phone line using standard telephone RJ-11 connectors.


Networking Devices-NICs (Network Interface Card)


NICs (Network Interface Card)

A Network Interface Card, or NIC, is a hardware card installed in a computer so it can communicate on a network. The network adapter provides one or more ports for the network cable to connect to, and it transmits and receives data onto the network cable.


Every networked computer must also have a network adapter driver, which controls the network adapter. Each network adapter driver is configured to run with a certain type of network adapter.
Network Interface Adapter Functions
Network interface adapters perform a variety of functions that are crucial to getting data to and from the computer over the network.
These functions are as follows:
Data encapsulation
The network interface adapter and its driver are responsible for building the frame around the data generated by the network layer protocol, in preparation for transmission. The network interface adapter also reads the contents of incoming frames and passes the data to the appropriate network layer protocol.
Signal encoding and decoding
The network interface adapter implements the physical layer encoding scheme that converts the binary data generated by the network layer (now encapsulated in the frame) into electrical voltages, light pulses, or whatever other signal type the network medium uses, and converts received signals to binary data for use by the network layer.
Transmission and reception
The primary function of the network interface adapter is to generate and transmit signals of the appropriate type over the network and to receive incoming signals. The nature of the signals depends on the network medium and the data-link layer protocol. On a typical LAN, every computer receives all of the packets transmitted over the network, and the network interface adapter examines the destination address in each packet, to see if it is intended for that computer. If so, the network interface adapter passes the packet to the computer for processing by the next layer in the protocol stack; if not, the network interface adapter discards the packet.
Data buffering
Network interface adapters transmit and receive data one frame at a time, so they have built-in buffers that enable them to store data arriving either from the computer or from the network until a frame is complete and ready for processing.
Serial/parallel conversion
The communication between the computer and the network interface adapter runs in parallel, that is, either 16 or 32 bits at a time, depending on the bus the adapter uses. Network communications, however, are serial (running one bit at a time), so the network interface adapter is responsible for performing the conversion between the two types of transmissions.
Media access control
The network interface adapter also implements the MAC mechanism that the data-link layer protocol uses to regulate access to the network medium. The nature of the MAC mechanism depends on the protocol used.
Network protocols
A networked computer must also have one or more protocol drivers (sometimes called a transport protocol or just a protocol). The protocol driver works between the upper-level network software and the network adapter to package data to be sent on the network.
In most cases, for two computers to communicate on a network, they must use identical protocols. Sometimes, a computer is configured to use multiple protocols. In this case, two computers need only one protocol in common to communicate. For example, a computer running File and Printer Sharing for Microsoft Networks that uses both NetBEUI and TCP/IP can communicate with computers using only NetBEUI or TCP/IP.
ISDN (Integrated Services Digital Network) adapters

Integrated Services Digital Network adapters can be used to send voice, data, audio, or video over standard telephone cabling. ISDN adapters must be connected directly to a digital telephone network. ISDN adapters are not actually modems, since they neither modulate nor demodulate the digital ISDN signal.
Like standard modems, ISDN adapters are available both as internal devices that connect directly to a computer's expansion bus and as external devices that connect to one of a computer's serial or parallel ports. ISDN can provide data throughput rates from 56 Kbps to 1.544 Mbps (using a T1 carrier service).
ISDN hardware requires an NT (network termination) device, which converts network data signals into the signaling protocols used by ISDN. Sometimes the NT interface is included, or integrated, with ISDN adapters and ISDN-compatible routers. In other cases, an NT device separate from the adapter or router must be implemented. ISDN works at the physical, data link, network, and transport layers of the OSI Model.
WAPs (Wireless Access Point)

A wireless network adapter card with a transceiver, sometimes called an access point, broadcasts and receives signals to and from the surrounding computers and passes data back and forth between the wireless computers and the cabled network.

Networking Devices-Gateways


Gateways

A gateway is a device used to connect networks using different protocols. Gateways operate at the network layer of the OSI model. In order to communicate with a host on another network, an IP host must be configured with a route to the destination network. If a configured route is not found, the host uses the gateway (default IP router) to transmit the traffic to the destination host. The default gateway is where IP sends packets that are destined for remote networks. If no default gateway is specified, communication is limited to the local network. A gateway receives data from a network using one type of protocol stack, removes that protocol stack, and repackages the data with the protocol stack that the other network can use.


Examples
E-mail gateways-for example, a gateway that receives Simple Mail Transfer Protocol (SMTP) e-mail, translates it into a standard X.400 format, and forwards it to its destination
Gateway Service for NetWare (GSNW), which enables a machine running Microsoft Windows NT Server or Windows Server to be a gateway for Windows clients so that they can access file and print resources on a NetWare server
Gateways between a Systems Network Architecture (SNA) host and computers on a TCP/IP network, such as the one provided by Microsoft SNA Server
A packet assembler/disassembler (PAD) that provides connectivity between a local area network (LAN) and an X.25 packet-switching network
CSU / DSU (Channel Service Unit / Data Service Unit)

A CSU/DSU is a device that combines the functionality of a channel service unit (CSU) and a data service unit (DSU). These devices are used to connect a LAN to a WAN, and they take care of all the translation required to convert a data stream between these two methods of communication.
A DSU provides all the handshaking and error correction required to maintain a connection across a wide area link, similar to a modem. The DSU will accept a serial data stream from a device on the LAN and translate this into a useable data stream for the digital WAN network. It will also take care of converting any inbound data streams from the WAN back to a serial communication.
A CSU is similar to a DSU except it does not have the ability to provide handshaking or error correction. It is strictly an interface between the LAN and the WAN and relies on some other device to provide handshaking and error correction.

Networking Devices-Routers


Routers

Routers are networking devices used to extend or segment networks by forwarding packets from one logical network to another. Routers are most often used in large internetworks that use the TCP/IP protocol suite and for connecting TCP/IP hosts and local area networks (LANs) to the Internet using dedicated leased lines.

Routers work at the network layer (layer 3) of the Open Systems Interconnection (OSI) reference model for networking to move packets between networks using their logical addresses (which, in the case of TCP/IP, are the IP addresses of destination hosts on the network). Because routers operate at a higher OSI level than bridges do, they have better packet-routing and filtering capabilities and greater processing power, which results in routers costing more than bridges.

Routing tables

Routers contain internal tables of information called routing tables that keep track of all known network addresses and possible paths throughout the internetwork, along with the cost of reaching each network. Routers route packets based on the available paths and their costs, thus taking advantage of redundant paths that can exist in a mesh topology network.
Because routers use destination network addresses of packets, they work only if the configured network protocol is a routable protocol such as TCP/IP or IPX/SPX. This is different from bridges, which are protocol independent. The routing tables are the heart of a router; without them, there's no way for the router to know where to send the packets it receives.
Unlike bridges and switches, routers cannot compile routing tables from the information in the data packets they process. This is because the routing table contains more detailed information than is found in a data packet, and also because the router needs the information in the table to process the first packets it receives after being activated. A router can't forward a packet to all possible destinations in the way that a bridge can.
Static routers: These must have their routing tables configured manually with all network addresses and paths in the internetwork.

Dynamic routers: These automatically create their routing tables by listening to network traffic.
Routing tables are the means by which a router selects the fastest or nearest path to the next "hop" on the way to a data packet's final destination. This process is done through the use of routing metrics.
Routing metrics are the means of determining how much distance or time a packet will require to reach the final destination. Routing metrics are provided in different forms.
A hop is simply a router that the packet must travel through.
Ticks measure the time it takes to traverse a link. Each tick is 1/18 of a second. When the router selects a route based on tick and hop metrics, it chooses the one with the lowest number of ticks first.
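As a rough sketch of how these metrics drive route selection (the entry fields and the sample routes below are invented), a routing table can be modelled as entries carrying both metrics, with selection preferring the lowest tick count and breaking ties on hops:

#include <stdio.h>

/* Hypothetical routing-table entry: destination network plus the two
   metrics discussed above (ticks and hops). */
typedef struct {
    const char *dest_net;   /* destination network */
    const char *next_hop;   /* next router on the path */
    int ticks;              /* time metric, 1 tick = 1/18 second */
    int hops;               /* number of routers to cross */
} Route;

/* Select the route with the lowest tick count, breaking ties on hops. */
static const Route *best_route(const Route *r, int n) {
    const Route *best = NULL;
    for (int i = 0; i < n; i++)
        if (!best || r[i].ticks < best->ticks ||
            (r[i].ticks == best->ticks && r[i].hops < best->hops))
            best = &r[i];
    return best;
}

int main(void) {
    /* Two candidate paths to the same network (values invented). */
    Route table[] = {
        { "10.1.0.0", "routerA", 4, 2 },
        { "10.1.0.0", "routerB", 3, 5 },
    };
    const Route *b = best_route(table, 2);   /* routerB: fewer ticks */
    printf("via %s (%d ticks, %d hops)\n", b->next_hop, b->ticks, b->hops);
    return 0;
}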
You can use routers to segment a large network and to connect local area segments to a single network backbone that uses a different physical layer and data link layer standard. They can also be used to connect LANs to WANs.
Brouters

Brouters are a combination of router and bridge. This is a special type of equipment used for networks that can be either bridged or routed, based on the protocols being forwarded. Brouters are complex, fairly expensive pieces of equipment and as such are rarely used.
A Brouter transmits two types of traffic at the exact same time: bridged traffic and routed traffic. For bridged traffic, the Brouter handles the traffic the same way a bridge or switch would, forwarding data based on the physical address of the packet. This makes the bridged traffic fairly fast, but slower than if it were sent directly through a bridge because the Brouter has to determine whether the data packet should be bridged or routed.

Networking Devices-Bridges


Bridges
A bridge is used to join two network segments together; it allows computers on either segment to access resources on the other. Bridges can also be used to divide large networks into smaller segments. Bridges have all the features of repeaters, but can have more nodes, and since the network is divided, there are fewer computers competing for resources on each segment, thus improving network performance.
Bridges can also connect networks that run at different speeds, different topologies, or different protocols. But they cannot join an Ethernet segment with a Token Ring segment, because these use different networking standards. Bridges operate at both the Physical Layer and the MAC sublayer of the Data Link layer. Bridges read the MAC header of each frame to determine on which side of the bridge the destination device is located; the bridge then repeats the transmission to the segment where the device is located.

Networking Devices-Switch


Switches

Switches are a special type of hub that offers an additional layer of intelligence to basic, physical-layer repeater hubs. A switch must be able to read the MAC address of each frame it receives. This information allows switches to repeat incoming data frames only to the computer or computers to which a frame is addressed. This speeds up the network and reduces congestion.

Switches operate at both the physical layer and the data link layer of the OSI Model.


Networking Devices-Hub


HUB
Networks using a Star topology require a central point for the devices to connect. Originally this device was called a concentrator since it consolidated the cable runs from all network devices. The basic form of concentrator is the hub.


The hub is a hardware device that contains multiple, independent ports that match the cable type of the network. Most common hubs interconnect Category 3 or 5 twisted-pair cable with RJ-45 ends, although Coax BNC and Fiber Optic BNC hubs also exist. The hub is considered the least common denominator in device concentrators. Hubs offer an inexpensive option for transporting data between devices, but hubs don't offer any form of intelligence. Hubs can be active or passive.
An active hub strengthens and regenerates the incoming signals before sending the data on to its destination.
Passive hubs do nothing with the signal.

Ethernet Hubs

An Ethernet hub is also called a multiport repeater. A repeater is a device that amplifies a signal as it passes through it, to counteract the effects of attenuation. If, for example, you have a thin Ethernet network with a cable segment longer than the prescribed maximum of 185 meters, you can install a repeater at some point in the segment to strengthen the signals and increase the maximum segment length. This type of repeater only has two BNC connectors, and is rarely seen these days.


The hubs used on UTP Ethernet networks are repeaters as well, but they can have many RJ45 ports instead of just two BNC connectors. When data enters the hub through any of its ports, the hub amplifies the signal and transmits it out through all of the other ports. This enables a star network to have a shared medium, even though each computer has its own separate cable. The hub relays every packet transmitted by any computer on the network to all of the other computers, and also amplifies the signals.
The maximum segment length for a UTP cable on an Ethernet network is 100 meters. A segment is defined as the distance between two communicating computers. However, because the hub also functions as a repeater, each of the cables connecting a computer to a hub port can be up to 100 meters long, allowing a segment length of up to 200 meters when one hub is inserted in the network.

Multistation Access Unit -MAU

A Multistation Access Unit (MAU) is a special type of hub used for token ring networks. The word "hub" is used most often in relation to Ethernet networks, and MAU only refers to token ring networks. On the outside, the MAU looks like a hub. It connects to multiple network devices, each with a separate cable.
Unlike a hub that uses a logical bus topology over a physical star, the MAU uses a logical ring topology over a physical star.

When the MAU detects a problem with a connection, the ring will beacon. Because it uses a physical star topology, the MAU can easily detect which port the problem exists on and close the port, or "wrap" it. The MAU does actively regenerate signals as it transmits data around the ring.

Network Topologies-Mesh Topology


Mesh Topology

Mesh topologies involve the concept of routes. Unlike each of the previous topologies, messages sent on a mesh network can take any of several possible paths from source to destination. (Recall that even in a ring, although two cable paths exist, messages can only travel in one direction.) Some WANs, most notably the Internet, employ mesh routing.
A mesh network in which every device connects to every other is called a full mesh. As shown in the illustration below, partial mesh networks also exist in which some devices connect only indirectly to others.

Network Topologies-Tree Topology


Tree Topology

Tree topologies integrate multiple star topologies together onto a bus. In its simplest form, only hub devices connect directly to the tree bus, and each hub functions as the root of a tree of devices. This bus/star hybrid approach supports future expandability of the network much better than a bus (limited in the number of devices due to the broadcast traffic it generates) or a star (limited by the number of hub connection points) alone.


Network Topologies-Star Topology


Star Topology

Many home networks use the star topology. A star network features a central connection point called a "hub node" that may be a network hub, switch or router. Devices typically connect to the hub with Unshielded Twisted Pair (UTP) Ethernet.


Compared to the bus topology, a star network generally requires more cable, but a failure in any star network cable will only take down one computer's network access and not the entire LAN. (If the hub fails, however, the entire network also fails.)

Network Topologies-Bus Topology


Bus Topology

Bus networks (not to be confused with the system bus of a computer) use a common backbone to connect all devices. A single cable, the backbone, functions as a shared communication medium that devices attach or tap into with an interface connector. A device wanting to communicate with another device on the network sends a broadcast message onto the wire that all other devices see, but only the intended recipient actually accepts and processes the message.
Ethernet bus topologies are relatively easy to install and don't require much cabling compared to the alternatives. 10Base-2 ("ThinNet") and 10Base-5 ("ThickNet") both were popular Ethernet cabling options many years ago for bus topologies. However, bus networks work best with a limited number of devices. If more than a few dozen computers are added to a network bus, performance problems will likely result. In addition, if the backbone cable fails, the entire network effectively becomes unusable.




Ring Topology

In a ring network, every device has exactly two neighbors for communication purposes. All messages travel through a ring in the same direction (either "clockwise" or "counterclockwise"). A failure in any cable or device breaks the loop and can take down the entire network.
To implement a ring network, one typically uses FDDI, SONET, or Token Ring technology. Ring topologies are found in some office buildings or school campuses.

Network Topologies-Types


Types of Network Topologies

bus
ring
star
tree
mesh

Network Topologies-Definition


Network Topologies

Think of a topology as a network's virtual shape or structure. This shape does not necessarily correspond to the actual physical layout of the devices on the network. For example, the computers on a home LAN may be arranged in a circle in a family room, but it would be highly unlikely to find a ring topology there.

Elementary Data Link Protocols-Sliding Window Protocols


Sliding Window Protocols


Piggybacking technique

In most practical situations there is a need for transmitting data in both directions (i.e. between 2 computers). A full duplex circuit is required for the operation.
If protocol 2 or 3 is used in these situations the data frames and ACK (control) frames in the reverse direction have to be interleaved. This method is acceptable but not efficient. An efficient method is to absorb the ACK frame into the header of the data frame going in the same direction. This technique is known as piggybacking.

When a data frame arrives at an IMP (receiver or station), instead of immediately sending a separate ACK frame, the IMP restrains itself and waits until the host passes it the next message. The acknowledgement is then attached to the outgoing data frame using the ACK field in the frame header. In effect, the acknowledgement gets a free ride in the next outgoing data frame.

This technique makes better use of the channel bandwidth. The ACK field costs only a few bits, whereas a separate frame would need a header, the acknowledgement, and a checksum.

An issue arising here is the time period that the IMP waits for a message onto which to piggyback the ACK. Obviously the IMP cannot wait forever and there is no way to tell exactly when the next message is available. For these reasons the waiting period is usually a fixed period. If a new host packet arrives quickly the acknowledgement is piggybacked onto it; otherwise, the IMP just sends a separate ACK frame.
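In outline, the receiver's piggybacking decision looks like the C sketch below. The helper functions are stubs standing in for the real link-layer machinery, not an actual API.

#include <stdio.h>

/* Stubs standing in for the real link-layer machinery. */
static int wait_for_outgoing_data(int timeout_ms) {
    (void)timeout_ms;
    return 1;   /* pretend the host produced a reverse-direction message in time */
}

static void send_data_frame_with_ack(int ack) {
    printf("data frame sent, piggybacked ack=%d\n", ack);
}

static void send_separate_ack(int ack) {
    printf("separate ACK frame sent, ack=%d\n", ack);
}

/* When a data frame arrives, hold the ACK for a bounded time hoping a
   reverse-direction message shows up to carry it; otherwise send a
   separate ACK frame. */
static void on_data_frame_received(int seq_received, int ack_timeout_ms) {
    if (wait_for_outgoing_data(ack_timeout_ms))
        send_data_frame_with_ack(seq_received);   /* ACK rides for free */
    else
        send_separate_ack(seq_received);          /* timer expired */
}

int main(void) {
    on_data_frame_received(0, 50);
    return 0;
}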

Sliding window

When one host sends traffic to another it is desirable that the traffic should arrive in the same sequence as that in which it is dispatched. It is also desirable that a data link should deliver frames in the order sent.
A flexible concept of sequencing is referred to as the sliding window concept and the next three protocols are all sliding window protocols.

In all sliding window protocols, each outgoing frame contains a sequence number SN ranging from 0 to 2^n − 1 (where n is the number of bits reserved for the sequence number field).

At any instant of time the sender maintains a list of consecutive sequence numbers corresponding to frames it is permitted to send. These frames are said to fall within the sending window. Similarly, the receiver maintains a receiving window corresponding to frames it is permitted to accept.

The size of the window relates to the available buffers of a receiving or sending node at which frames may be arranged into sequence.

At the receiving node, any frame falling outside the window is discarded. Frames falling within the receiving window are accepted and arranged into sequence. Once sequenced, the frames at the left of the window are delivered to the host and an acknowledgement of the delivered frames is transmitted to their sender. The window is then rotated to the position where the left edge corresponds to the next expected frame, RN.

Whenever a new frame arrives from the host, it is given the next highest sequence number, and the upper edge of the sending window is advanced by one. The sequence numbers within the sender's window represent frames sent but as yet not acknowledged. When an acknowledgement comes in, it gives the position of the receiving left window edge which indicates what frame the receiver expects to receive next. The sender then rotates its window to this position, thus making buffers available for continuous transmission.
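Because sequence numbers wrap around modulo 2^n, testing whether a number lies inside a window requires a circular comparison rather than a plain range check. The C sketch below (with an arbitrarily chosen MAXSEQ) shows the three-way test; protocol 5 later in this post uses the same test when processing acknowledgements.

#include <stdio.h>

#define MAXSEQ 7   /* 3-bit sequence numbers: 0..7 (illustrative choice) */

/* Return 1 if b lies in the circular interval [a, c), i.e. a <= b < c
   when the numbering wraps around modulo MAXSEQ+1. */
static int between(int a, int b, int c) {
    return (a <= b && b < c) || (c < a && a <= b) || (b < c && c < a);
}

int main(void) {
    /* Window currently covers frames 6, 7, 0, 1 (a = 6, c = 2). */
    printf("%d\n", between(6, 7, 2));  /* 1: inside the window */
    printf("%d\n", between(6, 0, 2));  /* 1: inside, past the wrap-around */
    printf("%d\n", between(6, 3, 2));  /* 0: outside the window */
    return 0;
}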

A one bit sliding window protocol: protocol 4

The sliding window protocol with a maximum window size of 1 uses stop-and-wait, since the sender transmits a frame and waits for its acknowledgement before sending the next one.
/* protocol 4 */

Send_and_receive()
{
        NFTS = 0;                /* sequence number of the next frame to send (0 or 1) */
        FE = 0;                  /* sequence number of the frame expected next */
        from_host(buffer);       /* fetch the first message from the host */
        S.info = buffer;
        S.seq = NFTS;
        S.ack = 1-FE;            /* acknowledge the last frame received in sequence */
        sendf(S);
        start_timer(S.seq);
        forever
        {
                wait(event);     /* frame arrival, checksum error or timeout */
                if(event == frame_arrival)
                {
                        getf(R);
                        if(R.seq == FE)          /* the expected frame: deliver it */
                        {
                                to_host(R.info);
                                FE = 1-FE;       /* advance expected number modulo 2 */
                        }
                        if(R.ack == NFTS)        /* our outstanding frame was acknowledged */
                        {
                                from_host(buffer);
                                NFTS = 1-NFTS;   /* advance send number modulo 2 */
                        }
                }
                S.info = buffer; /* send the next frame, or repeat the old one on timeout */
                S.seq = NFTS;
                S.ack = 1-FE;
                sendf(S);
                start_timer(S.seq);
        }
}
Pipelining

In many situations the long round-trip time can have important implications for the efficiency of the bandwidth utilisation.
As an example, consider a satellite channel with a 500 ms round-trip propagation delay. At time t = 0 the sender starts sending the first frame. Not until at least t = 500 ms has the acknowledgement arrived back at the sender. This means that the sender was blocked most of the time, causing a reduction in efficiency.
As another example, if the link is operated in a two-way alternating mode (half-duplex), the line might have to be "turned around" for each frame in order to receive an acknowledgement. This acknowledgement delay could severely impact the effective data transfer rate.
The effects of these problems can be overcome by allowing the sender to transmit multiple contiguous frames (say up to w frames) before it receives an acknowledgement. This technique is known as pipelining.

In the satellite example, with a channel capacity of 50 kbps and 1000-bit frames, each frame takes 20 ms to transmit. By the time the sender has finished sending 26 frames, at t = 520 ms, the acknowledgement for frame 0 will have just arrived, allowing the sender to continue sending frames. At all times, 25 or 26 unacknowledged frames will be outstanding, so the sender's window size needs to be at least 26.
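A quick back-of-the-envelope calculation reproduces these figures. The little program below is only a sketch of the arithmetic for the stated satellite example (50 kbps, 1000-bit frames, 500 ms round trip); it is not part of any protocol.

/* Sketch: frame time, required window size and stop-and-wait utilisation
   for a 50 kbps channel, 1000-bit frames and a 500 ms round-trip delay. */

#include <stdio.h>

int main(void)
{
        double bit_rate   = 50000.0;    /* bits per second */
        double frame_bits = 1000.0;     /* bits per frame */
        double rtt        = 0.500;      /* round-trip delay in seconds */

        double t_frame = frame_bits / bit_rate;              /* 0.020 s = 20 ms */
        int    window  = (int)((t_frame + rtt) / t_frame);   /* 26 frames in flight */
        double util    = t_frame / (t_frame + rtt);          /* stop-and-wait efficiency */

        printf("frame time          : %.0f ms\n", t_frame * 1000.0);
        printf("window size needed  : %d frames\n", window);
        printf("stop-and-wait usage : %.1f%%\n", util * 100.0);
        return 0;
}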

Pipelining frames over an unreliable communication channel raises some serious issues. What happens if a frame in the middle of a long stream is damaged or lost? What should the receiver do with all the correct frames following the bad one?

There are two basic Automatic Repeat reQuest (ARQ) methods for dealing with errors in the presence of pipelining.

One method, often regarded as the normal mode of ARQ, is called go-back-N. If the receiver detects an error in frame N, it signals the sender and then discards all subsequent frames.

The sender, which may be sending frame N+X when the error report arrives, goes back and retransmits frame N and all subsequent frames.

The other method is called selective reject. In this method the receiver stores all the correct frames following the bad one. When the sender learns which frame was in error, it retransmits only that frame, not all its successors.
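The difference between the two methods is easiest to see from the receiver's point of view. The sketch below contrasts what each receiver does with an out-of-sequence frame; it deliberately ignores window-membership and duplicate checks, and the names are only illustrative.

/* Sketch: receiver reaction to a frame whose sequence number is not the
   one currently expected (window and duplicate checks omitted). */

typedef enum { DELIVER, DISCARD, BUFFER } rx_action;

rx_action go_back_n_receiver(unsigned seq, unsigned expected)
{
        return (seq == expected) ? DELIVER : DISCARD;   /* out-of-order frames are dropped */
}

rx_action selective_reject_receiver(unsigned seq, unsigned expected)
{
        return (seq == expected) ? DELIVER : BUFFER;    /* out-of-order frames are held */
}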

Protocol 5: Pipelining, Multiple outstanding frames (MaxSeq)

In this protocol, the sender may transmit up to MaxSeq frames without waiting for an acknowledgement. In addition, unlike the previous protocols, the host is not assumed to have a new message all the time. Instead, the host causes host ready events when there is a message to send.
This protocol employs the Go-back-N technique. In the example below, the window size of the receiver is equal to 1, and a maximum of MaxSeq frames may be outstanding at any instant.

/* protocol 5 */

send_data(frame_number)
{
        S.info = buffer[frame_number];
        S.seq = frame_number;
        S.ack = (Frame_expected + MaxSeq) % (MaxSeq + 1);   /* piggybacked ack for the last in-sequence frame */
        sendf(S);
        start_timer(frame_number);
}

send_receive()
{
        enable_host();
        NFTS = 0;
        Ack_expected = 0;
        Frame_expected = 0;
        nbuffered = 0;
        forever
        {
                wait(event);
                switch(event)
                {
                case host_ready:
                        from_host(buffer[NFTS]);
                        ++nbuffered;
                        send_data(NFTS);
                        NFTS = (NFTS + 1) % (MaxSeq + 1);
                        break;
                case frame_arrival:
                        getf(R);
                        if(R.seq == Frame_expected)
                        {
                                to_host(R.info);
                                Frame_expected = (Frame_expected + 1) % (MaxSeq + 1);
                        }
                        /* R.ack cumulatively acknowledges every frame up to and including itself */
                        while( (Ack_expected <= R.ack && R.ack < NFTS)
                             ||(NFTS < Ack_expected && Ack_expected <= R.ack)
                             ||(R.ack < NFTS && NFTS < Ack_expected))
                        {
                                --nbuffered;
                                stop_timer(Ack_expected);
                                Ack_expected = (Ack_expected + 1) % (MaxSeq + 1);
                        }
                        break;
                case checksum_error:
                        /* just ignore the bad frame */
                        break;
                case timeout:
                        NFTS = Ack_expected;    /* retransmit all outstanding frames */
                        i = 0;
                        do
                        {
                                send_data(NFTS);
                                NFTS = (NFTS + 1) % (MaxSeq + 1);
                                ++i;
                        } while(i < nbuffered);
                        break;
                }
                if(nbuffered < MaxSeq)
                        enable_host();
                else
                        disable_host();
        }
}

Elementary Data Link Protocols-A simplex protocol for a noisy channel

Elementary Data Link Protocols-A simplex protocol for a noisy channel

A simplex protocol for a noisy channel

In this protocol the unrealistic "error free" assumption of protocol 2 is dropped: frames may be either damaged or lost completely. We assume that transmission errors in a frame are detected by the hardware checksum.
One suggestion is for the sender to send a frame and for the receiver to send an ACK frame only if the frame is received correctly. If the frame is in error, the receiver simply ignores it; the transmitter times out and retransmits it.

One fatal flaw with the above scheme is that if the ACK frame is lost or damaged, duplicate frames are accepted at the receiver without the receiver knowing it.

Imagine a situation where the receiver has just sent an ACK frame back to the sender saying that it correctly received and already passed a frame to its host. However, the ACK frame gets lost completely, the sender times out and retransmits the frame. There is no way for the receiver to tell whether this frame is a retransmitted frame or a new frame, so the receiver accepts this duplicate happily and transfers it to the host. The protocol thus fails in this aspect.

To overcome this problem it is required that the receiver be able to distinguish a frame that it is seeing for the first time from a retransmission. One way to achieve this is to have the sender put a sequence number in the header of each frame it sends. The receiver then can check the sequence number of each arriving frame to see if it is a new frame or a duplicate to be discarded.

The receiver needs to distinguish only two possibilities, a new frame or a duplicate, so a 1-bit sequence number is sufficient. At any instant the receiver expects a particular sequence number. A frame arriving with the wrong sequence number is rejected as a duplicate. A correctly numbered frame is accepted, passed to the host, and the expected sequence number is incremented by 1 (modulo 2).

The protocol is depicted below:

/* protocol 3 */

Sender()
{
        NFTS = 0;               /* NFTS = Next Frame To Send */
        from_host(buffer);
        forever
        {
                S.seq = NFTS;
                S.info = buffer;
                sendf(S);
                start_timer(S.seq);
                wait(event);
                if(event == frame_arrival)      /* on timeout, loop around and resend the same frame */
                {
                        stop_timer(S.seq);      /* the outstanding frame has been acknowledged */
                        from_host(buffer);
                        NFTS = 1 - NFTS;        /* modulo 2 operation */
                }
        }
}

Receiver()
{
        FE = 0;                 /* FE = Frame Expected */
        forever
        {
                wait(event);
                if(event == frame_arrival)
                {
                        getf(R);
                        if(R.seq == FE)
                        {
                                to_host(R.info);
                                FE = 1 - FE;    /* modulo 2 operation */
                        }
                        sendf(S);       /* ACK */
                }
        }
}
This protocol can handle lost frames by timing out. The timeout interval has to be long enough to prevent premature timeouts, which would cause unnecessary retransmissions and could lead to duplicate frames being accepted.

Elementary Data Link Protocols-A simplex stop-and-wait protocol

Elementary Data Link Protocols-A simplex stop-and-wait protocol

A simplex stop-and-wait protocol

In this protocol we assume that:
- data are transmitted in one direction only;
- no errors occur (perfect channel);
- the receiver can only process the received information at a finite rate.

These assumptions imply that the transmitter cannot send frames at a rate faster than the receiver can process them. The problem here is how to prevent the sender from flooding the receiver. A general solution is to have the receiver provide some sort of feedback to the sender. The process could be as follows: the receiver sends an acknowledgement frame back to the sender telling it that the last received frame has been processed and passed to the host, and that permission to send the next frame is granted. The sender, after having sent a frame, must wait for the acknowledgement frame from the receiver before sending another frame. This protocol is known as stop-and-wait.

The protocol is as follows:

/* protocol 2 */

Sender()
{
        forever
        {
                from_host(buffer);
                S.info = buffer;
                sendf(S);
                wait(event);
        }
}

Receiver()
{
        forever
        {
                wait(event);
                getf(R);
                to_host(R.info);
                sendf(S);
        }
}

Elementary Data Link Protocols-An unrestricted simplex protocol

Elementary Data Link Protocols-An unrestricted simplex protocol

An unrestricted simplex protocol

In order to appreciate the step-by-step development of efficient and complex protocols such as SDLC, HDLC, etc., we will begin with a simple but unrealistic protocol. In this protocol:
- data are transmitted in one direction only;
- the transmitting (Tx) and receiving (Rx) hosts are always ready;
- processing time can be ignored;
- infinite buffer space is available;
- no errors occur, i.e. no damaged frames and no lost frames (perfect channel).

The protocol consists of two procedures, a sender and a receiver, as depicted below:


/* protocol 1 */

Sender()
{
        forever
        {
                from_host(buffer);
                S.info = buffer;
                sendf(S);
        }
}

Receiver()
{
        forever
        {
                wait(event);
                getf(R);
                to_host(R.info);
        }
}

Synchronous Data Link Control(SDLC)

Synchronous Data Link Control(SDLC)

SDLC is very similar to HDLC; the main difference is in the frame format: in SDLC the size of the information field must be a multiple of a byte, whereas HDLC permits an information field of arbitrary length.

SLIP (Serial Line Internet Protocol) is a protocol for connecting to the Internet over a dial-up connection. Developed in the 1980s, when modem communications were typically limited to 2400 bps, it was designed for simple communication over serial lines. SLIP can be used on RS-232 serial ports and supports asynchronous links.

PPP (Point-to-Point Protocol) is the more common choice because it is faster and more reliable and supports functions that SLIP does not, such as error detection, dynamic assignment of IP addresses and data compression. Working at the data link layer of the OSI model, PPP frames the computer's TCP/IP packets and sends them to a server that puts them onto the Internet.
In general, Internet service providers offer only one of the two protocols, although some support both.

HDLC - High Level Data Link Control

HDLC - High Level Data Link Control


HDLC Protocol Overall Description:

Layer 2 of the OSI model is the data link layer. One of the most common layer 2 protocols is HDLC. In fact, many other common layer 2 protocols are heavily based on HDLC, particularly its framing structure: namely SDLC, SS#7, LAPB, LAPD and ADCCP. HDLC uses a zero insertion/deletion process (commonly known as bit stuffing) to ensure that the bit pattern of the delimiter flag does not occur in the fields between flags. The HDLC frame is synchronous and therefore relies on the physical layer to provide a method of clocking and of synchronising the transmission and reception of frames. HDLC is defined by ISO for use on both point-to-point and multipoint (multidrop) data links. It supports full-duplex, transparent-mode operation and is now extensively used in both multipoint and computer networks.
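The zero insertion rule can be made concrete with a short sketch. The function below stuffs a 0 bit after every run of five consecutive 1 bits; for readability it stores one bit per array element, which is not how a real transmitter packs bits, and the function name is only illustrative.

/* Sketch of HDLC zero insertion (bit stuffing): after five consecutive 1 bits
   a 0 is inserted so the flag pattern 01111110 cannot appear between flags.
   One bit per byte, purely for illustration. */

#include <stddef.h>
#include <stdint.h>

size_t bit_stuff(const uint8_t *in, size_t nbits, uint8_t *out)
{
        size_t ones = 0, o = 0;

        for (size_t i = 0; i < nbits; i++) {
                out[o++] = in[i];
                if (in[i] == 1) {
                        if (++ones == 5) {      /* five 1s in a row ... */
                                out[o++] = 0;   /* ... insert a 0 */
                                ones = 0;
                        }
                } else {
                        ones = 0;
                }
        }
        return o;                               /* number of bits after stuffing */
}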

HDLC Operation Modes:

HDLC has three operational modes:
1. Normal Response Mode (NRM)
2. Asynchronous Response Mode (ARM)
3. Asynchronous Balanced Mode (ABM)

Frame Formats:

The standard frame of the HDLC protocol carries both data and control messages. It has the following format: an opening flag (01111110), an address field, a control field, an information (data) field, a frame check sequence (FCS) and a closing flag. The length of the address field is commonly 0, 8 or 16 bits, depending on the data link layer protocol.
For instance, SDLC uses only an 8-bit address, while SS#7 has no address field at all because it is always used on point-to-point links. The 8- or 16-bit control field provides a flow control number and defines the frame type (control or data). The exact use and structure of this field depends on the protocol using the frame. Data is transmitted in the data field, which can vary in length depending on the protocol using the frame. Layer 3 frames are carried in the data field.

Error Control is implemented by appending a cyclic redundancy check (CRC) to the frame, which is 16 bits long in most protocols.
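For the 16-bit case this is normally the CCITT CRC (generator polynomial x^16 + x^12 + x^5 + 1). The function below is a minimal bit-by-bit sketch using the reflected form of that polynomial with the usual 0xFFFF preload and final complement, as commonly used for the HDLC frame check sequence; real implementations are usually table-driven, and the function name is only illustrative.

/* Sketch: bitwise CRC-16 with the CCITT polynomial (reflected form 0x8408),
   initial value 0xFFFF and final one's complement. */

#include <stddef.h>
#include <stdint.h>

uint16_t hdlc_fcs(const uint8_t *data, size_t len)
{
        uint16_t crc = 0xFFFF;

        for (size_t i = 0; i < len; i++) {
                crc ^= data[i];
                for (int bit = 0; bit < 8; bit++)
                        crc = (crc & 1) ? (crc >> 1) ^ 0x8408 : (crc >> 1);
        }
        return (uint16_t)~crc;          /* transmitted least significant byte first */
}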
Frame Classes: In the HDLC protocol, three classes of frames are used:
1. Unnumbered frames - used for link management. For example, they are used to set up the logical link between the primary station and a secondary station, and to inform the secondary station of the mode of operation to be used.
2. Information frames - used to carry the actual data. Information frames can also be used to piggyback acknowledgement information relating to the flow of information frames in the reverse direction when the link is operated in ABM or ARM.
3. Supervisory frames - used for error and flow control. They contain send and receive sequence numbers.

Frame types: Some of the different types of frame in each class are described below. Among the unnumbered frames, SNRM and SABM frames, for example, are used both to set up the logical link between the primary and the secondary station and to inform the secondary station of the mode of operation to be used. A logical link is subsequently cleared by the primary station sending a DISC frame. The UA frame is used as an acknowledgement to the other frames in this class. There are four types of supervisory frames, but only RR and RNR are used in both NRM and ABM. These frames are used both to indicate the willingness (or otherwise) of a secondary station to receive an information frame from the primary station, and for acknowledgement purposes. REJ and SREJ frames are used only in ABM, which permits simultaneous two-way communication across a point-to-point link. These two frames are used to indicate to the other station that a sequence error has occurred, that is, an information frame containing an out-of-sequence N(s) has been received. The SREJ frame is used with a selective repeat transmission procedure, whereas the REJ frame is used with a go-back-N procedure.

Protocol operation: The two basic functions in the protocol are link management and data transfer (which includes error and flow control).

Link management
Prior to any kind of transmission (either between two stations connected by a point-to-point link, or between a primary and a secondary station on a multidrop link), a logical connection between the two communicating parties must be established.

Data transfer

In NRM, all data (information frames) is transferred under the control of the primary station. The unnumbered poll frame with the P bit set to 1 is normally used by the primary to poll a secondary. If the secondary has no data to transmit, it returns an RNR frame with the F bit set. If data is waiting, it transmits the data, typically as a sequence of information frames.
