Computer Engineering Loksewa Note

This document is for Loksewa Computer Engineering preparation.
Course

Computer Engineering (CSE123)

Academic year: 2023/2024

COMPUTER ENGINEERING LOKSEWA PREPARATION

SUBJECTIVES SOLUTION

FOR

SECTION-A

Prepared by: Er. Prasad Kafle (Marks: 24)

1. Computer Networks (5 marks): refer Tanenbaum for more references
2. Computer Architecture & Organization and Microprocessor (5 marks): refer William Stallings for more references
3. Digital Design (4 marks): refer Morris Mano for more information
4. Electrical and Electronics (5 marks): refer B.L. Theraja for more information
5. Basics of Electronics and Communication (5 marks): refer Sanjay Sharma

Date:2073-07-

Chapter 1: Networks (5 marks)

Computer Network: A computer network or data network is a telecommunications network which allows computers to exchange data. In computer networks, networked computing devices exchange data with each other using a data link. The connections between nodes are established using either cable media or wireless media. The best-known computer network is the Internet. Computer devices that originate, route and terminate the data are called network nodes.[1] Nodes can include hosts such as personal computers, phones and servers, as well as networking hardware. Two such devices can be said to be networked together when one device is able to exchange information with the other device, whether or not they have a direct connection to each other.

1.1 Protocol Stack, Switching

1.1.a Protocol stack

The protocol stack is an implementation of a computer networking protocol suite. The terms "protocol stack" and "protocol suite" are often used interchangeably. Strictly speaking, the suite is the definition of the protocols, and the stack is the software implementation of them.

Protocol   Layer
HTTP       Application
TCP        Transport
IP         Internet/Network
Ethernet   Data Link/Link
IEEE 802   Physical

Figure: protocol stack

A protocol stack is the set of protocols used in a communications network: a prescribed hierarchy of software layers, starting from the application layer at the top (the source of the data being sent) down to the data link layer at the bottom (transmitting the bits on the wire). The stack resides in each client and server, and the layered approach lets different protocols be swapped in and out to accommodate different network architectures.

The Protocol Stack: Using TCP/IP as a model, the sending application hands data to the transport layer, which breaks it up into the packets required by the network and stores the sequence number and other data in its header. The network layer adds source and destination data in its header, and the data link layer adds station data in its header. On the receiving side, each corresponding layer reads and processes its header and then discards it.

A protocol stack is a complete set of network protocol layers that work together to provide networking capabilities. It is called a stack because it is typically designed as a hierarchy of layers, each supporting the one above it and using those below it.

A protocol is a mutually agreed-upon format for doing something. With regard to computers, it most commonly refers to a set of rules (i.e., a standard) that enables computers to connect and transmit data to one another; this is also called a communications protocol. A protocol can be implemented by hardware, software, or a combination of the two. Individual protocols are typically designed with a single purpose in mind. This modularization, which is consistent with the Unix philosophy, facilitates both design and evaluation. The layered approach also allows different protocols to be substituted for each other, for example to accommodate new protocols and different network architectures.

The number of layers varies according to the particular protocol stack. For example, TCP/IP (Transmission Control Protocol/Internet Protocol), which defines communication over the Internet and most other computer networks, has five layers (application, transport, network, data link and physical). The also widely used OSI (Open Systems Interconnection) reference model defines seven layers (application, presentation, session, transport, network, data link and physical). Regardless of the number of layers, the lowest protocols always deal with the low-level, physical interaction of the hardware. Each higher layer adds additional features, and user applications typically interact only with the uppermost layers. The layers can be broadly classified into media, transport and application layers.

The terms protocol stack and protocol suite are often used interchangeably. However, the two are sometimes used with subtle differences, such as the former being a complete set of protocols and the latter being a subset of them, often supplied by a particular vendor, or the latter being the definition of the protocols and the former being the software implementation of them.
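As a toy illustration of the layering described above, the sketch below shows each layer prepending its own header on the way down and the receiver stripping them on the way up. The header contents are simplified placeholders invented for this example, not real protocol formats.

```python
# Minimal sketch of layered encapsulation (TCP/IP-style).
# Header strings here are made-up placeholders.

def encapsulate(payload: str) -> str:
    frame = payload
    # Each layer prepends its own header to what the layer above produced,
    # so the last header added (data link) ends up outermost.
    for header in ("TCP|seq=1", "IP|src=10.0.0.1;dst=10.0.0.2", "ETH|mac=aa:bb"):
        frame = f"[{header}]{frame}"
    return frame

def decapsulate(frame: str) -> str:
    # The receiving side strips headers in the opposite order.
    while frame.startswith("["):
        frame = frame[frame.index("]") + 1:]
    return frame

message = "GET /index.html"
wire = encapsulate(message)
assert decapsulate(wire) == message
```

The outermost header belongs to the lowest layer, mirroring the description above: each receiving layer reads and processes its own header and discards it before handing the rest upward.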

1.1.b Switching Techniques

In large networks there might be multiple paths linking sender and receiver. Information may be switched as it travels through various communication channels. There are four typical switching techniques for digital traffic: circuit switching, packet switching, message switching and cell switching.

Circuit switching: Circuit switching is a technique that directly connects the sender and the receiver in an unbroken path.

  • Telephone switching equipment, for example, establishes a path that connects the caller's telephone to the receiver's telephone by making a physical connection.
  • With this type of switching technique, once a connection is established, a dedicated path exists between both ends until the connection is terminated.
  • Routing decisions must be made when the circuit is first established, but there are no decisions made after that time
  • Circuit switching in a network operates almost the same way as the telephone system works.
  • A complete end-to-end path must exist before communication can take place.
  • The computer initiating the data transfer must ask for a connection to the destination.
  • Once the connection has been initiated and completed to the destination device, the destination device must acknowledge that it is ready and willing to carry on a transfer.

Advantages:
  • The communication channel (once established) is dedicated.

Disadvantages:
  • Possibly a long wait to establish a connection (10 seconds, more on long-distance or international calls), during which no data can be transmitted.
  • More expensive than other switching techniques, because a dedicated path is required for each connection.
  • Inefficient use of the communication channel, because the channel is idle whenever the connected systems have nothing to send.

Packet switching: A simple definition of packet switching is: the routing and transferring of data by means of addressed packets, so that a channel is occupied during the transmission of the packet only, and upon completion of the transmission the channel is made available for the transfer of other traffic.

Packet switching features delivery of variable-bit-rate data streams, realized as sequences of packets, over a computer network which allocates transmission resources as needed using statistical multiplexing or dynamic bandwidth allocation techniques. As they traverse network nodes, such as switches and routers, packets are received, buffered, queued, and transmitted (stored and forwarded), resulting in variable latency and throughput depending on the link capacity and the traffic load on the network.

Packet switching contrasts with another principal networking paradigm, circuit switching, a method which pre-allocates dedicated network bandwidth specifically for each communication session, each having a constant bit rate and latency between nodes. In cases of billable services, such as cellular communication services, circuit switching is characterized by a fee per unit of connection time, even when no data is transferred, while packet switching may be characterized by a fee per unit of information transmitted, such as characters, packets, or messages. Packet mode communication may be implemented with or without intermediate forwarding nodes (packet switches or routers). Packets are normally forwarded by intermediate network nodes asynchronously using first-in, first-out buffering, but may be forwarded according to some scheduling discipline for fair queuing, traffic shaping, or for differentiated or guaranteed quality of service, such as weighted fair queuing or leaky bucket. In case of a shared physical medium (such as radio or 10BASE5), the packets may be delivered according to a multiple access scheme. OR  Packet switching can be seen as a solution that tries to combine the advantages of message and circuit switching and to minimize the disadvantages of both.

  • There are two methods of packet switching: Datagram and virtual circuit.
  • In both packet switching methods, a message is broken into small parts, called packets.
  • Each packet is tagged with appropriate source and destination addresses.
  • Since packets have a strictly defined maximum length, they can be stored in main memory instead of disk; therefore access delay and cost are minimized.
  • Also the transmission speeds, between nodes, are optimized.
  • With current technology, packets are generally accepted onto the network on a first-come, first-served basis. If the network becomes overloaded, packets are delayed or discarded ("dropped"). The packet size can vary: from 180 bits for the Datakit virtual circuit switch, designed by Bell Labs for communications and business applications; to 1,024 or 2,048 bits for the 1PSS switch, also designed by Bell Labs for public data networking; to 53 bytes for ATM switching, such as Lucent Technologies' packet switches.
  • In packet switching, the analog signal from your phone is converted into a digital data stream. That series of digital bits is then divided into relatively tiny clusters of bits, called packets. Each packet has at its beginning the digital address -- a long number -- to which it is being sent. The system blasts out all those tiny packets, as fast as it can, and they travel across the nation's digital backbone systems to their destination: the telephone, or rather the telephone system, of the person you're calling.
  • They do not necessarily travel together; they do not travel sequentially. They don't even all travel via the same route. But eventually they arrive at the right point -- that digital address added to the front of each string of digital data -- and at their destination are reassembled into the correct order, then converted to analog form, so your friend can understand what you're saying.
  • Datagram packet switching is similar to message switching in that each packet is a self-contained unit with complete addressing information attached.
  • This fact allows packets to take a variety of possible paths through the network.
  • So the packets, each with the same destination address, do not follow the same route, and they may arrive out of sequence at the exit point node (or the destination).
  • Reordering is done at the destination point based on the sequence number of the packets.
  • It is possible for a packet to be destroyed if one of the nodes on its route crashes momentarily; in that case all of that node's queued packets may be lost.
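The datagram behaviour in the points above (packets taking different routes, arriving out of sequence, and being reordered at the destination by sequence number) can be sketched as follows. This is a minimal Python illustration; the sequence-number tag stands in for a real packet header.

```python
# Sketch: datagram packets may arrive out of order; the receiver
# reorders them by sequence number before reassembling the message.
import random

def packetize(message: str, size: int):
    # Tag each fragment with a sequence number (stand-in for a header).
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(packets):
    # Sort on the sequence number, then concatenate the payloads.
    return "".join(payload for _, payload in sorted(packets))

msg = "datagrams may arrive out of sequence"
packets = packetize(msg, 5)
random.shuffle(packets)          # simulate packets taking different routes
assert reassemble(packets) == msg
```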
  • In the virtual circuit approach, a preplanned route is established before any data packets are sent.
  • A logical connection is established when a sender sends a "call request packet" to the receiver and the receiver sends back an acknowledgement ("call accepted") packet to the sender, if the receiver agrees on the conversational parameters.
  • The conversational parameters can be maximum packet sizes, path to be taken, and other variables necessary to establish and maintain the conversation.

  • Virtual circuits imply acknowledgements, flow control, and error control, so virtual circuits are reliable; that is, they have the capability to inform upper protocol layers if a transmission problem occurs.

  • In a virtual circuit, the existence of a route between stations does not mean there is a dedicated path, as in circuit switching.

  • A packet is still buffered at each node and queued for output over a line.

The difference between virtual circuit and datagram approaches:
  • With a virtual circuit, the node does not need to make a routing decision for each packet; it is made only once, for all packets using that virtual circuit.
  • Virtual circuits offer the guarantee that packets arrive in the order sent, with no duplicates, omissions or errors (with high probability), regardless of how they are implemented internally.

Advantages:
  • Packet switching is cost effective, because switching devices do not need massive amounts of secondary storage.
  • Packet switching offers improved delay characteristics, because there are no long messages in the queue (the maximum packet size is fixed).
  • Packets can be rerouted if there is any problem, such as busy or disabled links.
  • Many network users can share the same channel at the same time; packet switching can maximize link efficiency by making optimal use of link bandwidth.

Disadvantages:
  • Protocols for packet switching are typically more complex.

  • It can add some initial costs in implementation.

  • If a packet is lost, the sender needs to retransmit the data.
  • Packet-switched systems still cannot deliver the same quality as dedicated circuits in applications requiring very little delay, like voice conversations or moving images.

Message switching: Message switching is a network switching technique in which data is routed in its entirety from the source node to the destination node, one hop at a time. During message routing, every intermediate switch in the network stores the whole message. If the entire network's resources are engaged or the network becomes blocked, the message-switched network stores and delays the message until ample resources become available for effective transmission.

Before the advancements in packet switching, message switching acted as an efficient substitute for circuit switching. It was initially employed in data communications such as telex networks and paper tape relay systems. Message switching has largely been replaced by packet switching, but the technique is still employed in ad hoc sensor networks, military networks and satellite communications networks.

In message switching, the source and destination nodes are not directly connected. Instead, the intermediary nodes (mainly switches) are responsible for transferring the message from one node to the next. Thus, every intermediary node inside the network needs to store every message prior to retransferring the messages one by one as adequate resources become available. If the resources are not available, the messages are stored indefinitely. This characteristic is known as store and forward. Every message should include a header, which typically contains routing information such as the source and destination, expiry time and priority level. Because message switching implements the store-and-forward technique, it uses the network efficiently. Also, there is no size limit for the messages.

However, this technique also has several disadvantages:
    • Because the messages are fully packaged and saved indefinitely at every intermediate node, the nodes demand substantial storage capacity.
    • Message-switched networks are very slow, as processing takes place in each and every node, which may result in poor performance.
    • This technique is not adequate for interactive and real-time processes, such as multimedia games and voice communication.

OR

Message Switching:
    • With message switching there is no need to establish a dedicated path between two stations.
  • When a station sends a message, the destination address is appended to the message.

  • The message is then transmitted through the network, in its entirety, from node to node.

  • Each node receives the entire message, stores it in its entirety on disk, and then transmits the message to the next node.

  • This type of network is called a store-and-forward network. A message-switching node is typically a general-purpose computer. The device needs sufficient secondary-storage capacity to store the incoming messages, which could be long. A time delay is introduced by this scheme due to the store-and-forward time, plus the time required to find the next node in the transmission path.

Advantages:
  • Channel efficiency can be greater than in circuit-switched systems, because more devices share the channel.

  • Traffic congestion can be reduced, because messages may be temporarily stored in route.

  • Message priorities can be established due to store-and-forward technique.

  • Message broadcasting can be achieved with the use of a broadcast address appended to the message.

Disadvantages:

    • Message switching is not compatible with interactive applications.
  • Store-and-forward devices are expensive, because they must have large disks to hold potentially long messages.

Cell switching: Cell switching is similar to packet switching, except that switching does not necessarily occur on packet boundaries. This is ideal for an integrated environment and is found within cell-based networks such as ATM. Cell switching can handle both digital voice and data signals.

1.2 Data Link Layer: Services, error detection and correction, multiple access protocols, LAN addressing and ARP (Address Resolution Protocol), Ethernet, CSMA/CD multiple access protocol, hubs, bridges and switches, wireless LANs, PPP (Point-to-Point Protocol), wide area protocols.

1.2.a Data Link layer services and functions:

The Data Link layer contains two sublayers that are described in the IEEE 802 LAN standards:

    • Media Access Control (MAC)
    • Logical Link Control (LLC)

The Data Link layer ensures that an initial connection has been set up, divides output data into data frames, and handles the acknowledgements from a receiver confirming that the data arrived successfully. It also ensures that incoming data has been received successfully by analyzing bit patterns at special places in the frames.

Services:
    • Encapsulation of network layer data packets into frames
    • Frame synchronization
    • Logical link control (LLC) sublayer:
      o Error control (automatic repeat request, ARQ), in addition to the ARQ provided by some transport-layer protocols, to the forward error correction (FEC) techniques provided on the physical layer, and to the error detection and packet canceling provided at all layers, including the network layer. Data-link-layer error control (i.e. retransmission of erroneous packets) is provided in wireless networks and V.42 telephone network modems, but not in LAN protocols such as Ethernet, since bit errors are so uncommon in short wires; in that case, only error detection and canceling of erroneous packets are provided.
      o Flow control, in addition to that provided on the transport layer. Data-link-layer flow control is not used in LAN protocols such as Ethernet, but in modems and wireless networks.
    • Media access control (MAC) sublayer:
      o Multiple access protocols for channel-access control, for example CSMA/CD for collision detection and retransmission in Ethernet bus networks and hub networks, or CSMA/CA for collision avoidance in wireless networks.
      o Physical addressing (MAC addressing)
      o LAN switching (packet switching), including MAC filtering, Spanning Tree Protocol (STP) and Shortest Path Bridging (SPB)
      o Data packet queuing or scheduling
      o Store-and-forward switching or cut-through switching
      o Quality of Service (QoS) control
      o Virtual LANs (VLAN)

Some extra notes on the Data Link layer:

  1. Data link layer receives the data from the network layer & divide it into manageable units called frames.
  2. It then provides the addressing information by adding header to each frame. Physical addresses of source & destination machines are added to each frame.
  3. It provides a flow control mechanism to ensure that the sender does not send data faster than the receiver can process it.
  4. It also provide error control mechanism to detect & retransmit damaged, duplicate, or lost frame, thus adding reliability to physical layer.
  5. Another function of the data link layer is access control. When two or more devices are attached to the same link, data link layer protocols determine which device has control over the link at any given time.

Logical Link Control sublayer: The uppermost sublayer is Logical Link Control (LLC). This sublayer multiplexes protocols running atop the data link layer, and optionally provides flow control, acknowledgment, and error recovery. The LLC provides addressing and control of the data link. It specifies which mechanisms are to be used for addressing stations over the transmission medium and for controlling the data exchanged between the originator and recipient machines.

Media Access Control sublayer: The sublayer below it is Media Access Control (MAC). Sometimes this refers to the sublayer that determines who is allowed to access the media at any one time. Other times it refers to a frame structure with MAC addresses inside. There are generally two forms of media access control: distributed and centralized; both may be compared to communication between people. The Media Access Control sublayer also determines where one frame of data ends and the next one starts. There are four means of doing that: time-based, character counting, byte stuffing and bit stuffing.
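The byte-stuffing framing method just mentioned can be sketched as follows. This is a minimal Python illustration with arbitrary FLAG and ESC bytes chosen for the example, not any particular protocol's values.

```python
# Sketch of byte stuffing: FLAG marks frame boundaries, and ESC escapes
# any FLAG or ESC byte that happens to occur inside the payload.
FLAG, ESC = b"~", b"}"

def frame(payload: bytes) -> bytes:
    stuffed = bytearray()
    for b in payload:
        if bytes([b]) in (FLAG, ESC):
            stuffed += ESC          # escape special bytes in the payload
        stuffed.append(b)
    return FLAG + bytes(stuffed) + FLAG

def unframe(data: bytes) -> bytes:
    body = data[1:-1]               # strip the flag delimiters
    out, escaped = bytearray(), False
    for b in body:
        if not escaped and bytes([b]) == ESC:
            escaped = True          # next byte is literal payload
            continue
        out.append(b)
        escaped = False
    return bytes(out)

payload = b"data with ~ and } inside"
assert unframe(frame(payload)) == payload
```

Because every payload occurrence of the flag is escaped, the receiver can always tell where one frame ends and the next begins.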

The three major types of services offered by the data link layer are:
  1. Unacknowledged connectionless service.
  2. Acknowledged connectionless service.
  3. Acknowledged connection-oriented service.

  1. Unacknowledged Connectionless Service

(a) In this type of service, the source machine sends frames to the destination machine, but the destination machine does not send any acknowledgement of these frames back to the source. Hence it is called an unacknowledged service.
(b) There is no connection establishment between source and destination machines before data transfer, or release after data transfer. Therefore it is known as a connectionless service.
(c) There is no error control, i.e. if any frame is lost due to noise on the line, no attempt is made to recover it.
(d) This type of service is used when the error rate is low.
(e) It is suitable for real-time traffic such as speech.

  2. Acknowledged Connectionless Service

(a) In this service, the connection is neither established before the data transfer nor released after it.
(b) When the sender sends data frames to the destination, the destination machine sends back acknowledgements of these frames.
(c) This type of service provides additional reliability because the source machine retransmits a frame if it does not receive its acknowledgement within the specified time.
(d) This service is useful over unreliable channels, such as wireless systems.

3. Acknowledged Connection-Oriented Service

(a) This is the most sophisticated service provided by the data link layer to the network layer.
(b) It is connection-oriented: a connection is established between source and destination before any data is transferred.
(c) In this service, data transfer has three distinct phases: (i) connection establishment, (ii) actual data transfer, (iii) connection release.
(d) Each frame transmitted from source to destination is given a specific number and is acknowledged by the destination machine.
(e) All frames are received by the destination in the same order in which they are sent by the source.

1.2.b Error Detection and Control

The network is responsible for the transmission of data from one device to another. The end-to-end transfer of data from a transmitting application to a receiving application involves many steps, each subject to error. With an error control process, we can be confident that the transmitted and received data are identical. Data can be corrupted during transmission, so for reliable communication errors must be detected and corrected. Error control is the process of detecting and correcting both bit-level and packet-level errors.

Types of Errors
Single bit error: only one bit of the data unit is changed, from 1 to 0 or from 0 to 1.
Burst error: two or more bits in the data unit are changed. A burst error is also called a packet-level error, covering errors such as packet loss, duplication and reordering.

Error Detection
Error detection is the process of detecting errors during transmission between the sender and the receiver. Types of error detection:
  - Parity checking
  - Cyclic Redundancy Check (CRC)
  - Checksum

Redundancy
Redundancy allows a receiver to check whether received data was corrupted during transmission, so that it can request a retransmission. Redundancy is the concept of using extra bits for error detection.
As shown in the figure, the sender adds redundant bits (R) to the data unit and sends it to the receiver; the receiver passes the received bit stream through a checking function. If no error is detected, the data portion of the data unit is accepted and the redundant bits are discarded; otherwise the receiver asks for retransmission.

Parity checking
Parity adds a single bit that indicates whether the number of 1 bits in the preceding data is even or odd. If a single bit is changed in transmission, the message will change parity and the error can be detected. Parity checking is not very robust: if an even number of bits is changed, the check bit will appear valid and the error will not be detected. There are two variants:
  1. Single-bit parity
  2. Two-dimensional parity
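A single-bit even-parity check can be sketched as follows (a minimal Python illustration):

```python
# Even parity: the parity bit makes the total number of 1s even,
# so any single flipped bit is detectable.
def add_parity(bits: str) -> str:
    return bits + str(bits.count("1") % 2)

def check_parity(codeword: str) -> bool:
    return codeword.count("1") % 2 == 0

word = add_parity("1011001")     # four 1s, so the parity bit is 0
assert check_parity(word)
corrupted = "0" + word[1:]       # flip the first bit
assert not check_parity(corrupted)
```

Note that flipping two bits of `word` would leave the parity valid, which is exactly the weakness described above.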

Moreover, parity does not indicate which bit contained the error, even when it can detect one. The data must be discarded entirely and retransmitted from scratch, so on a noisy transmission medium a successful transmission could take a long time, or even never occur. Parity does have the advantage, however, that it is about the best possible code that uses only a single bit of space.

Cyclic Redundancy Check
CRC is a very efficient redundancy checking technique. It is based on binary division of the data unit; the remainder (the CRC) is appended to the data unit and sent to the receiver. The receiver divides the data unit by the same divisor. If the remainder is zero, the data unit is accepted and passed up the protocol stack; otherwise it is considered to have been corrupted in transit, and the packet is dropped.

The sender follows these steps:
  - The data unit is extended with a string of 0s, one fewer than the number of bits in the divisor.
  - The result is divided by the predefined divisor using binary division; the remainder is the CRC.
  - The CRC is appended to the data unit and sent to the receiver.
The receiver follows these steps:
  - When the data unit arrives followed by the CRC, it is divided by the same divisor that was used to find the CRC (the remainder).
  - If the remainder of this division is zero, the data is error-free; otherwise it is corrupted.
The diagram shows how the CRC process works: [a] sender CRC generator, [b] receiver CRC checker.

Checksum
The checksum is the third error detection mechanism. Checksums are used in the upper layers, while parity checking and CRC are used in the physical layer. The checksum is also based on the concept of redundancy, and the mechanism involves two operations.

Checksum generator: The sender uses the checksum generator. First the data unit is divided into equal segments of n bits. All segments are added together using 1's complement arithmetic, and the sum is then complemented. This becomes the checksum, which is sent along with the data unit. Example: if the 16 bits 10001010 00100011 are to be sent, the segments sum to 10101101, whose complement 01010010 is the checksum; the final data unit is 10001010 00100011 01010010.

Checksum checker: The receiver divides the received data unit into segments of equal size, adds all segments using 1's complement, and complements the result once more. If the result is zero, the data is accepted; otherwise it is rejected. Example: if the final result is nonzero, the data unit is rejected.

Error Correction
Error correction allows a receiver to reconstruct the original information when it has been corrupted during transmission.

Hamming Code
The Hamming code is a single-bit error correction method using redundant bits. Redundant bits are included with the original data, arranged so that different incorrect bits produce different error results, allowing the corrupt bit to be identified. Once the bit is identified, the receiver can reverse its value and correct the error. The Hamming code can be applied to data units of any length, using the relationships between the data and the redundancy bits.

Algorithm:
  1. Parity bits are placed at the positions that are powers of two (2^r).
  2. The remaining positions are filled with the original data bits.
  3. Each parity bit covers its set of bit positions in the code.
  4. The final code is sent to the receiver.
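The algorithm above can be sketched for a 4-bit data unit, the classic (7,4) Hamming code, as a minimal Python illustration. Parity bits sit at positions 1, 2 and 4, and each even-parity bit covers exactly the positions whose index has that power of two set.

```python
# (7,4) Hamming code sketch: encode 4 data bits into 7 code bits, and
# correct any single flipped bit at the receiver.
def hamming_encode(data: str) -> str:          # data: 4 bits, e.g. "1011"
    code = [0] * 8                              # positions 1..7; index 0 unused
    for pos, bit in zip((3, 5, 6, 7), data):    # data bits fill non-power-of-2 slots
        code[pos] = int(bit)
    for p in (1, 2, 4):                         # parity bit p covers positions i with i & p
        code[p] = sum(code[i] for i in range(1, 8) if i & p) % 2
    return "".join(str(b) for b in code[1:])

def hamming_correct(code: str) -> str:
    bits = [0] + [int(b) for b in code]
    # Recompute each parity; the failing parity positions sum to the error index.
    error = sum(p for p in (1, 2, 4)
                if sum(bits[i] for i in range(1, 8) if i & p) % 2)
    if error:
        bits[error] ^= 1                        # flip the corrupted bit back
    return "".join(str(b) for b in bits[1:])

sent = hamming_encode("1011")
corrupted = sent[:4] + str(1 - int(sent[4])) + sent[5:]   # flip bit position 5
assert hamming_correct(corrupted) == sent
```

The key property is the one described above: each possible single-bit error produces a distinct combination of failing parity checks, and that combination directly names the corrupted position.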

In the example above, we calculate the even parities for the various bit combinations; the value of each combination is the value of the corresponding r (redundancy) bit. For instance, r1 covers bits 1, 3, 5, 7, 9, 11 and is set based on even parity over those bits; the same method applies to the rest of the parity bits. If an error occurs at bit 7, changing it from 1 to 0, the receiver recalculates the same sets of bits used by the sender and can thereby identify the exact location of the error. Once the bit is identified, the receiver reverses its value and corrects the error.

Flow Control
Flow control is an important design issue for the data link layer: it controls the flow of data between sender and receiver. In communication there is a medium between sender and receiver, and when the sender sends data to the receiver a problem can arise in the following case:

  1. The sender sends data at a higher rate than the receiver can handle.

To solve this problem, flow control is introduced in the data link layer (it also operates at several higher layers). The main purpose of flow control is to introduce efficiency in computer networks.

Approaches to Flow Control
  1. Feedback-based flow control
  2. Rate-based flow control
Feedback-based flow control is used in the data link layer; rate-based flow control is used in the network layer.

Feedback-based Flow Control
In feedback-based flow control, the sender does not send the next data until it receives feedback from the receiver. Types of feedback-based flow control:
  A. Stop-and-Wait Protocol
  B. Sliding Window Protocols
     i. A One-Bit Sliding Window Protocol
     ii. A Protocol Using Go Back N
     iii. A Protocol Using Selective Repeat

A. A Simplex Stop-and-Wait Protocol
In this protocol we make the following assumptions:
  1. It provides unidirectional flow of data from sender to receiver.
  2. The communication channel is assumed to be error-free.
In this protocol the sender simply sends data and waits for the acknowledgment from the receiver; that is why it is called the Stop-and-Wait protocol. It is not very efficient, but it is the simplest form of flow control. In this scheme the communication channel is taken to be error-free; if the channel does have errors, the receiver may not get the correct data, the sender will never receive an acknowledgment, and so it cannot send the next data and the communication ends. To solve this problem, two new concepts were introduced:
  1. TIMER: if the sender does not get an acknowledgment within a particular time, it sends the buffered data once again to the receiver. The sender starts the timer when it starts to send the data.
  2. SEQUENCE NUMBER: the sender sends the data with a specific sequence number; after receiving the data, the receiver acknowledges that sequence number, and the sender expects the acknowledgment of the same sequence number. This scheme is called Positive Acknowledgment with Retransmission (PAR).

B. Sliding Window Protocols
Problems with the Stop-and-Wait protocol: the sender must wait for either a positive acknowledgment from the receiver or a timeout before sending the next frame, so even if the sender is ready to send new data, it cannot; the sender is dependent on the receiver. Also, the previous protocols have only one-sided flow: only the sender sends data and the receiver just acknowledges it, so twice the bandwidth is used. To solve these problems the sliding window protocol was introduced. In it, the sender and receiver both use buffers of the same size, so the sender does not need to wait for an acknowledgment after every frame; it can send frames one after another.

Sliding window also solves the problem of wasted bandwidth: both sender and receiver use the channel to send data, and the receiver simply attaches its acknowledgment to the data it wants to send to the sender. No separate bandwidth is used for acknowledgments, so bandwidth is saved; this technique is called PIGGYBACKING.

Types of Sliding Window Protocol:
i. A One-Bit Sliding Window Protocol
ii. A Protocol Using Go Back N
iii. A Protocol Using Selective Repeat

i. A One-Bit Sliding Window Protocol
This protocol has a buffer size of one bit, so the only sequence numbers the sender and receiver can use are 0 and 1. The protocol carries sequence, acknowledgment, and packet numbers and uses a full-duplex channel, so there are two possibilities:
  1. The sender starts sending first, and the receiver starts sending data after it receives the first frame.
  2. The receiver and sender both start sending packets simultaneously.
The first case is simple and works perfectly, but there will be an error in the second one: duplication of packets, even without any transmission error.

ii. A Protocol Using Go Back N
The problem with pipelining is that if the sender is sending 10 packets and a problem occurs in the 8th one, the whole data must be resent. The protocols Go Back N and Selective Repeat were introduced to solve this problem. In Go Back N there are two possibilities at the receiver's end: it may have a large window size or a window size of one.

The window size at the receiver end may be large or only one. In the case where the receiver window size is one, as we can see in figure (a), suppose the sender wants to send packets one to ten but an error occurs in the 2nd packet. Assume the sender has a timeout interval of 8, so the timeout occurs after 8 packets; up to that point it does not wait for an acknowledgment. At the receiver side the 2nd packet arrives with an error, and the 3rd through 8th packets are discarded by the receiver, so the sender must restart from the 2nd packet; the loss of data is large. In the other case, with a large window size at the receiver end as in figure (b), if the 2nd packet arrives with an error the receiver accepts the 3rd packet, sends a NAK for packet 2 to the sender, and buffers the 3rd packet. The receiver does the same for the 4th and 5th packets. When the sender receives the NAK for the 2nd packet it immediately resends it. After receiving the 2nd packet, the receiver sends the ACK for the 5th, saying that it has received everything up to packet 5. So there is no need to resend the 3rd, 4th and 5th packets again; they are buffered at the receiver side.

iii. A Protocol Using Selective Repeat
Go Back N works well when errors are rare, but if the line is poor it wastes a lot of bandwidth on retransmitted frames. To provide reliability, the Selective Repeat protocol was introduced. In this protocol the sender's window starts at 0 and grows to some predefined maximum number. The receiver's window size is fixed and equal to the sender's maximum window size. The receiver has a buffer reserved for each sequence number within its fixed window. When a frame arrives, its sequence number is checked to see whether it falls within the window; if so, and if it has not already been received, it is accepted and stored, whether or not it is the frame the network layer expects next.
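The contrast between the two receiver policies can be sketched in a few lines of Python. This is a toy model added for illustration, not part of the original notes: it models only frame loss and receiver buffering, with no timers or ACK traffic.

```python
# Toy receivers for the two sliding-window policies described above.
# Frame 2 is lost in transit; everything else arrives in order.

def go_back_n_receive(frames, lost):
    """Go-Back-N receiver with window size 1: out-of-order frames discarded."""
    delivered, expected = [], 0
    for seq in frames:
        if seq in lost:
            continue                 # frame never arrives
        if seq == expected:
            delivered.append(seq)    # in order: deliver to network layer
            expected += 1
        # anything out of order is simply dropped, forcing a resend of 2..6
    return delivered

def selective_repeat_receive(frames, lost):
    """Selective-repeat receiver: buffers out-of-order frames for later."""
    buffered, delivered, expected = set(), [], 0
    for seq in frames:
        if seq in lost:
            continue
        buffered.add(seq)            # accept any in-window frame
        while expected in buffered:  # deliver the longest in-order run
            buffered.remove(expected)
            delivered.append(expected)
            expected += 1
    return delivered, sorted(buffered)

print(go_back_n_receive(range(7), lost={2}))         # [0, 1]
print(selective_repeat_receive(range(7), lost={2}))  # ([0, 1], [3, 4, 5, 6])
```

Under Go-Back-N the receiver delivers only frames 0 and 1 and frames 3-6 must be retransmitted, while the selective-repeat receiver keeps 3-6 buffered so that a single retransmission of frame 2 releases them all.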

Here the buffer size of sender and receiver is 7, and as we can see in figure (a), the sender sends 7 frames to the receiver and starts the timer. When the receiver gets the frames, it sends the ACK back to the sender and passes the frames to the Network Layer. After doing this, the receiver empties its buffer, increases the sequence number, and expects sequence numbers 7,0,1,2,3,4,5. But if the ACK is lost, the sender will not receive the ACK. So when the timer expires, the sender retransmits the original frames 0 to 6 to the receiver. In this case the receiver accepts frames 0 to 5 (which are duplicates) and sends them to the network layer; in this case the protocol fails. To solve the problem of duplication, the buffer size of sender and receiver should be (MAX SEQ + 1)/2, that is, half of the frames to be sent. As we can see in figure (c), the sender sends frames 0 to 3, as its window size is 4. The receiver accepts the frames, sends acknowledgment to the sender, passes the frames to the network layer, and increases the expected sequence numbers from 4 to 7. If the ACK is lost, the sender will send 0 to 3 to the receiver again, but the receiver is expecting 4 to 7, so it will not accept them. In this way the problem of duplication is solved.

 MAC
The data link layer is divided into two sublayers: the Media Access Control (MAC) layer and the Logical Link Control (LLC) layer. The MAC sublayer controls how a computer on the network gains access to the data and permission to transmit it. The LLC layer controls frame synchronization, flow control and error checking. The MAC layer is one of the sublayers that make up the data link layer of the OSI reference model. The MAC layer is responsible for moving packets from one Network Interface Card (NIC) to another across the shared channel. The MAC sublayer uses MAC protocols to ensure that signals sent from different stations across the same channel don't collide. Different protocols are used for different shared networks, such as Ethernets, Token Rings, Token Buses, and WANs.

1.2.c Access Protocols:
 ALOHA
ALOHA is a simple communication scheme in which each source in a network sends its data whenever there is a frame to send, without checking to see if any other station is active. After sending the frame, each station waits for implicit or explicit acknowledgment. If the frame successfully reaches the destination, the next frame is sent; if the frame fails to be received at the destination, it is sent again.

 Pure ALOHA
Pure ALOHA is the simplest technique in multiple access. The basic idea of this mechanism is that a user can transmit the data whenever they want. If the data is successfully transmitted there is no problem, but if a collision occurs the station will transmit again. The sender can detect the collision if it does not receive the acknowledgment from the receiver.

In ALOHA the collision probability is quite high; ALOHA is suitable for networks where there is little traffic. Theoretically it is proved that the maximum throughput for ALOHA is 18%.
P(success by given node) = P(node transmits) · P(no other node transmits in [t0-1,t0]) · P(no other node transmits in [t0,t0+1]) = p·(1-p)^(N-1)·(1-p)^(N-1)
P(success by any of N nodes) = N·p·(1-p)^(N-1)·(1-p)^(N-1)
Choosing the optimum p as N --> infinity gives 1/(2e) ≈ 0.18 = 18%
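The pure-ALOHA success probability above can be checked numerically. The short sketch below (added here; not part of the original notes) maximizes N·p·(1-p)^(2(N-1)) over p for a large N and compares the result with the theoretical limit 1/(2e):

```python
import math

def pure_aloha_throughput(N, p):
    # A node transmits (probability p) and none of the other N-1 nodes
    # transmits in either vulnerable interval [t0-1,t0] or [t0,t0+1].
    return N * p * (1 - p) ** (2 * (N - 1))

N = 1000
# Brute-force maximization over a grid of p values.
best = max(pure_aloha_throughput(N, k / 10000) for k in range(1, 10000))
print(round(best, 3), round(1 / (2 * math.e), 3))  # both are ~0.184, i.e. ~18%
```

The optimum is reached near p = 1/(2N-1), and the maximum throughput stays pinned at roughly 18% no matter how large N grows, which is exactly the figure quoted above.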

 Slotted ALOHA
In ALOHA a newly emitted packet can collide with a packet in progress. If all packets are of the same length and take L time units to transmit, it is easy to see that a packet collides with any other packet transmitted in a time window of length 2L. If this time window is decreased somehow, the number of collisions decreases and the throughput increases. This mechanism is used in slotted ALOHA or S-ALOHA: time is divided into equal slots of length L, and when a station wants to send a packet it waits till the beginning of the next time slot.

Advantages of slotted ALOHA:
  • a single active node can continuously transmit at the full rate of the channel
  • highly decentralized: only the slots in the nodes need to be in sync
  • simple
Disadvantages of slotted ALOHA:
  • collisions, wasting slots
  • idle slots
  • clock synchronization
Efficiency of slotted ALOHA:
  • Suppose there are N nodes with many frames to send. The probability that each node sends a frame into the slot is p.
  • Probability that node 1 has a success in getting the slot is p·(1-p)^(N-1).
  • Probability that any of the N nodes has a success is N·p·(1-p)^(N-1).
  • For max efficiency with N nodes, find the p* that maximizes N·p·(1-p)^(N-1).
  • For many nodes, take the limit of N·p*·(1-p*)^(N-1) as N goes to infinity, which gives 1/e ≈ 0.37.
The clear advantage of slotted ALOHA is higher throughput, but it introduces complexity in the stations and bandwidth overhead because of the need for time synchronization.

 Carrier Sense Multiple Access protocols (CSMA)
With slotted ALOHA, the best channel utilization that can be achieved is 1/e. Several protocols have been developed to improve this; protocols that listen for a carrier and act accordingly are called carrier sense protocols. Carrier sensing allows the station to detect whether the medium is currently being used. Schemes that use carrier sense circuits are classed together as carrier sense multiple access, or CSMA, schemes. There are two variants of CSMA: CSMA/CD and CSMA/CA. The simplest CSMA scheme is for a station to sense the medium and send packets immediately if the medium is idle. If the station waits for the medium to become idle it is called persistent; otherwise it is called non-persistent.

a. Persistent
When a station has data to send, it first listens to the channel to check whether anyone else is transmitting. If it senses the channel idle, the station starts transmitting the data; if it senses the channel busy, it waits until the channel is idle. When a station that detects an idle channel transmits its frame with probability P, the protocol is called p-persistent CSMA; this variant applies to slotted channels. When a station that finds the channel idle transmits the frame with probability 1, the protocol is known as 1-persistent; 1-persistent is the most aggressive protocol.

b. Non-Persistent
Non-persistent CSMA is less aggressive than the p-persistent protocol. In this protocol, before sending the data, the station senses the channel, and if the channel is idle it starts transmitting. But if the channel is busy, the station does not continuously sense it; instead it waits for a random amount

of time and repeats the algorithm. This algorithm leads to better channel utilization but also results in longer delay compared to 1-persistent.

CSMA/CD
Carrier Sense Multiple Access/Collision Detection is a technique for multiple access protocols. If no transmission is taking place at the time, a station can transmit. If two stations attempt to transmit simultaneously, this causes a collision, which is detected by all participating stations. After a random time interval, the stations that collided attempt to transmit again. If another collision occurs, the time interval from which the random waiting time is selected is increased step by step. This is known as exponential back-off.

Exponential back-off algorithm:
 The adapter gets a datagram and creates a frame.
 If the adapter senses the channel idle (for 9.6 microseconds), it starts to transmit the frame. If it senses the channel busy, it waits until the channel is idle and then transmits.
 If the adapter transmits the entire frame without detecting another transmission, the adapter is done with the frame!
 If the adapter detects another transmission while transmitting, it aborts and sends a jam signal.
 After aborting, the adapter enters exponential back-off: after the mth collision, the adapter chooses a K at random from {0,1,2,...,2^m - 1}. The adapter waits K×512 bit times (i.e., slots) and returns to Step 2.
 After the 10th retry, the random number range stops growing at 1023. After the 16th retry, the system stops retrying.

CSMA/CA
CSMA/CA is Carrier Sense Multiple Access/Collision Avoidance. In this multiple access protocol, the station senses the medium before transmitting the frame. This protocol was developed to improve the performance of CSMA. CSMA/CA is used in 802.11-based wireless LANs. In wireless LANs it is not possible to listen to the medium while transmitting, so collision detection is not possible. In CSMA/CA, when the station detects a collision, it waits for a random amount of time; then, before transmitting the packet, it listens to the medium.
If the station senses the medium idle, it starts transmitting the packet. If it detects the medium busy, it waits for the channel to become idle.
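The exponential back-off schedule described for CSMA/CD above can be sketched directly. This is an illustrative model (added here, not from the original notes) of the waiting-time rule only; the function name and interface are invented for the example:

```python
import random

def backoff_bit_times(m, rng=random.randrange):
    """Bit times to wait after the m-th collision, per the rules above:
    K drawn uniformly from {0, ..., 2**min(m,10) - 1}, wait K*512 bit times;
    the range is capped at 1023 after the 10th retry; give up after 16."""
    if m > 16:
        raise RuntimeError("too many collisions: giving up")
    k = rng(2 ** min(m, 10))   # after the 10th retry, K never exceeds 1023
    return k * 512

random.seed(0)
print([backoff_bit_times(m) for m in (1, 3, 10, 16)])
```

The average wait doubles with every collision for the first ten attempts, spreading the colliding stations out in time, then stays flat until the adapter abandons the frame on the 17th attempt.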

When A wants to transmit a packet to B, it first sends an RTS (Request to Send) packet of 30 bytes to B, carrying the length L of the data. If B is idle, it responds to A with a CTS (Clear to Send) packet. Every station that overhears the CTS packet remains silent for the duration of L. When A receives the CTS, it sends its data of length L to B. There are several issues in this protocol:
 Hidden Station Problem
 Exposed Station Problem

  1. Hidden Station Problem (Figure a)
When a station sends a packet to another station/receiver, some other station which is not in the sender's range may start sending a packet to the same receiver, which creates a collision of packets. This problem is explained more specifically below.

Suppose A is sending a packet to B. Now at the same time D also wants to send a packet to B. Here D does not hear A, so D will also send its packets to B, and a collision will occur.

  2. Exposed Station Problem (Figure b)
When A is sending a packet, C will also hear it. So even if C wants to send a packet to D, it will not send it. This reduces the efficiency of the protocol. This problem is called the Exposed Station problem. To deal with these problems, 802.11 supports two kinds of operation:
    1. DCF (Distributed Coordination Function)
    2. PCF (Point Coordination Function)

 DCF
DCF does not use any central control. It uses the CSMA/CA protocol, with both physical channel sensing and virtual channel sensing. When a station wants to send packets, it first senses the channel. If the channel is idle, it immediately starts transmitting. While transmitting, it does not sense the channel; it emits its entire frame. The frame can be destroyed at the receiver side if the receiver has started transmitting at the same time. If a collision occurs, the colliding stations wait for a random amount of time, using the binary exponential back-off algorithm, and try again later. Virtual sensing is explained in the figure given below.

Here, A wants to send a packet to B. Station C is within A's range. Station D is within B's range but not A's range. When A wants to send a packet to B, it first sends the RTS (30 bytes) packet to B, asking for permission to send the packet. If B wants to grant the permission, it responds with a CTS packet, giving A permission to send. When A sends its frame it starts the ACK timer. When the frame is successfully received, B sends an ACK frame. If A's ACK timer expires before it receives B's ACK frame, the whole process runs again.

As for stations C and D: when station A sends the RTS to station B, the RTS is also received by C. From the information carried in the RTS, C realizes that someone is sending a packet and also how long the sequence will take, including the final ACK. So C asserts a kind of virtual channel-busy signal for itself, indicated by NAV (Network Allocation Vector) in the figure above, and remains silent for that amount of time. Station D will not receive the RTS, but it will receive the CTS from B, so D also asserts the NAV signal for itself.

If the channel is too noisy, then when A sends a frame to B and the frame is too large, there are more chances of the frame getting damaged, so the frame must be retransmitted, and C and D must also remain silent until the whole frame is transmitted successfully. To deal with noisy channels, 802.11 allows the frame to be fragmented into smaller fragments. Once the channel has been acquired using RTS and CTS, multiple fragments can be sent in a row; a sequence of fragments is called a fragmentation burst. Fragmentation increases the throughput by restricting retransmissions to the bad fragments rather than the entire frame.

 PCF

The PCF mechanism uses a base station to control all activity in its cell. The base station polls the other stations, asking them if they have any frames to send. Since PCF is centralized, no collisions will occur. In the polling mechanism, the base station broadcasts a beacon frame periodically (10 to 100 times per second). The beacon frame contains system parameters such as hopping sequences, dwell times and clock synchronization, and it also invites new stations to sign up. All signed-up stations are guaranteed a certain fraction of the bandwidth, so in PCF quality of service is guaranteed.

All implementations must support DCF, but PCF is optional; PCF and DCF can coexist within one cell. Distributed control and centralized control can both operate at the same time, using interframe time intervals. There are four intervals defined, shown in the figure given below:
  • SIFS – Short InterFrame Spacing
  • PIFS – PCF InterFrame Spacing
  • DIFS – DCF InterFrame Spacing
  • EIFS – Extended InterFrame Spacing
More about this has been explained in section 3 of the Data Link Layer.

 Taking Turns MAC protocols
 Polling
In polling, the master node invites slave nodes to transmit in turn. Single point of failure (master node failure), polling overhead and latency are the concerns in polling.
 Bit-map Reservation
In bit-map reservation, stations reserve contention slots in advance. Polling overhead and latency are the concerns in this protocol.
 Token Passing
In this protocol, a token is passed from one node to the next sequentially. Single point of failure (the token), token overhead and latency are the concerns in token passing.

Networking Basics: Network addressing

Internet vs. local area network

When a group of computers is connected together within a relatively small area, it is referred to as a local area network (LAN). If a LAN is available only to certain people (such as employees of a company), it is called a private or internal network. The Internet is a public network because it is accessible to many users and computers from different networks. The network shown in Figure 6 is a LAN that can be used to connect to the Internet.

A gateway is a combination of hardware and software that connects two different types of networks, for example a private network and a public network. There must be at least two network adapters installed on a gateway: one to connect to the Internet (the ISP network adapter) and the other to connect to the private or local network (the local network adapter), as shown in Figure 6.

Public vs. private addressing
An IP address is a unique numerical value that is used to identify a computer on a network. There are two kinds of IP addresses: public (also called globally unique IP addresses) and private.
- Public IP addresses are assigned by the Internet Assigned Numbers Authority (IANA). The addresses are guaranteed to be globally unique and reachable on the Internet; this assures that multiple computers do not have the same IP address. An Internet service provider (ISP) obtains a range of public IP addresses from IANA, and then the ISP assigns the addresses to customers to use when they connect to the Internet through the ISP. Public IP addresses are routable on the Internet, which means that a computer with a public IP address is visible to other computers on the Internet.
- Private IP addresses cannot be used on the Internet. IANA has set aside three blocks of IP addresses that cannot be used on the global Internet. These three blocks are the private IP addresses, and they are used for networks that do not directly connect to the Internet. A private IP address is within one of the following blocks or ranges of addresses:
o 192.168.0.0/16: This block allows valid IP addresses within the range 192.168.0.0 to 192.168.255.255.
o 172.16.0.0/12: This block allows valid IP addresses within the range 172.16.0.0 to 172.31.255.255.
o 10.0.0.0/8: This block allows valid IP addresses within the range 10.0.0.0 to 10.255.255.255.
For more information about private IP addresses, see RFC 1918, "Address Allocation for Private Internets," at go.microsoft/fwlink/?LinkID=16424.
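Python's standard ipaddress module already knows the RFC 1918 blocks, so the public/private distinction described above can be checked directly. A small sketch (added here for illustration; the sample addresses are arbitrary):

```python
import ipaddress

# Addresses drawn from each of the three private blocks, plus one public one.
for addr in ("192.168.1.10", "172.16.0.1", "10.255.255.254", "8.8.8.8"):
    ip = ipaddress.ip_address(addr)
    print(addr, "->", "private" if ip.is_private else "public")
```

The three RFC 1918 addresses report private and will never be routed on the global Internet, while the last one is public and globally reachable.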

Most small businesses prefer to use private IP addresses for the local network, because ISPs generally charge a fee for each public IP address that the small business uses. As a result, using public IP addresses on a local network is costly. Rather than purchasing a globally unique IP address for each client computer on your local network, you can purchase one globally unique IP address and use it for the router interface that connects to your ISP. In most cases, a private address range of 192.168.0.x, 192.168.1.x, or 192.168.2.x for the local network is recommended during Windows SBS 2008 installation.
 IPv4 vs. IPv6 addresses
Windows SBS 2008 requires an IPv4 address, but it also supports IPv6 addresses when they are used on the same network.

 IPv4

The version of the Internet Protocol (IP) that is commonly used is version 4 (IPv4), which has not changed substantially since RFC 791 was published in 1981. IPv4 is robust, easily implemented, interoperable, and capable of scaling to a global utility the size of today's Internet. However, the Internet continues to grow exponentially, and the adoption of broadband technologies such as cable modems, of mobile information appliances such as personal digital assistants (PDAs), and of cellular phones means that many more addresses are needed.
 IPv6

IPv6 significantly increases the number of addresses that are available. The most obvious difference between IPv6 and IPv4 is the size of the addresses: an IPv4 address is 32 bits long, and an IPv6 address is 128 bits long, four times the length of an IPv4 address.
 Dynamic vs. static IP addresses
A local area network can have static and dynamic IP addresses. To configure a network that is easy to support, configure all client computers to obtain an IP address from the Dynamic Host Configuration Protocol (DHCP) Server service that is in Windows SBS 2008.
Dynamic IP addresses - Dynamic IP addresses are acquired from a DHCP server, and they may change from time to time. You can provide dynamic IP addresses to the computers on your network by configuring one or more DHCP servers. The DHCP server itself must be assigned a static IP address.
Static IP addresses - A static IP address does not change. It is assigned by the network administrator, and it is manually entered into the properties for the network adapter on a server or on a client computer. A static IP address does not require that a DHCP server be running on the network. Certain types of servers must have a static IP address: these include DHCP servers, DNS servers, WINS servers, and any server that is providing access to users who are using the Internet. If the computer has more than one network adapter, you must assign a separate static IP address to each adapter.
 Introduction to TCP/IP
TCP/IP is the suite of protocols used by the Internet and most LANs throughout the world. In TCP/IP, every host (computer or other communications device) that is connected to the network has a unique IP address. An IP address is composed of four octets (numbers in the range of 0 to 255) separated by decimal points, and it is used to uniquely identify a host or computer on the LAN. For example, a computer with the hostname Morpheus could have an IP address such as 192.168.7.x.
You should avoid giving two or more computers the same IP address by using the range of IP addresses that are reserved for private, local area networks; this range of IP addresses usually begins with the octets 192.168.
LAN network address
The first three octets of an IP address should be the same for all computers in the LAN. For example, if a total of 128 hosts exist in a single LAN, the IP addresses could be assigned starting with 192.168.1.x, where x represents a number in the range of 1 to 128. You could create consecutive LANs within the same company in a similar manner, each consisting of up to another 128 computers. Of course, you are not limited to 128 computers, as there are other ranges of IP addresses that allow you to build even larger networks. There are different classes of networks that determine the size and total possible unique IP addresses of any given LAN. For example, a class A LAN can have over 16 million unique IP addresses, and a class B LAN can have over 65,000. The size of your LAN depends on which reserved address range you use and the subnet mask (explained later in the article) associated with that range (see Table 1).
Table 1. Address ranges and LAN sizes

Address range                   Subnet mask     Provides          Addresses per LAN
10.0.0.0 - 10.255.255.255       255.0.0.0       1 class A LAN     16,777,216
172.16.0.0 - 172.31.255.255     255.255.0.0     16 class B LANs   65,536
192.168.0.0 - 192.168.255.255   255.255.255.0   256 class C LANs  256
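The "Addresses per LAN" column follows directly from the prefix length implied by each subnet mask: a /8 leaves 24 host bits (2^24 addresses), a /16 leaves 16, and a /24 leaves 8. A quick check with Python's standard ipaddress module (a sketch added for illustration):

```python
import ipaddress

# One LAN from each row of the table above, written in CIDR notation.
for net in ("10.0.0.0/8", "172.16.0.0/16", "192.168.0.0/24"):
    n = ipaddress.ip_network(net)
    print(net, "->", n.num_addresses, "addresses")
```

The printed counts (16,777,216; 65,536; 256) match the table; note that two of those addresses in each LAN are reserved as the network and broadcast addresses, as discussed below.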

Network and broadcast addresses
Another important aspect of building a LAN is that the addresses at the two extreme ends of the address range are reserved for use as the LAN's network address and broadcast address. The network address is used by an application to represent the overall network. The broadcast address is used by an application to send the same message to all other hosts in the network simultaneously. For example, if you use addresses in the range of 192.168.1.0 to 192.168.1.128, the first address (192.168.1.0) is reserved as the network address, and the last address (192.168.1.128) is reserved as the broadcast address. Therefore, you only assign individual computers on the LAN IP addresses in the range of 192.168.1.1 to 192.168.1.127:

Network address: 192.168.1.0

Individual hosts: 192.168.1.1 to 192.168.1.127

Broadcast address: 192.168.1.128
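The same network/broadcast convention can be derived programmatically. The sketch below (added for illustration) uses a standard /24 subnet, since CIDR blocks are powers of two; it therefore reserves .0 and .255 rather than the 0-128 range used in the example above:

```python
import ipaddress

lan = ipaddress.ip_network("192.168.1.0/24")
print("network:  ", lan.network_address)      # first address, reserved
print("broadcast:", lan.broadcast_address)    # last address, reserved
hosts = list(lan.hosts())                     # everything in between
print("hosts:    ", hosts[0], "to", hosts[-1])
```

For a /24, the library reports 192.168.1.0 as the network address, 192.168.1.255 as the broadcast address, and 192.168.1.1 through 192.168.1.254 as assignable host addresses.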

Subnet masks
Each host in a LAN has a subnet mask. The subnet mask is a set of four octets that uses the value 255 to mark the network-address portion of the IP address and zero to identify the host portion of the address. For example, the subnet mask 255.255.255.0 is used by each host to determine which LAN or class it belongs to; the zero at the end of the subnet mask marks the part that is unique to each host within that network.

Domain name
The domain name, or network name, is a unique name followed by a standard Internet suffix such as .com, .org, .mil, .net, etc. You can pretty much name your LAN anything if it has a simple dial-up connection and your LAN is not a server providing some type of service to other hosts directly. In addition, our sample network is considered private since it uses IP addresses in the range of 192.168.1.x. Most importantly, the domain name of choice should not be accessible from the Internet if the above constraints are strictly enforced. Lastly, to obtain an "official" domain name you could register through InterNIC, Network Solutions or Register. See the Related topics section later in this article for the Web sites with detailed instructions for obtaining official domain names.

Hostnames
Another important step in setting up a LAN is assigning a unique hostname to each computer in the LAN. A hostname is simply a unique, made-up name used to identify a computer in the LAN, and it should not contain any blank spaces or punctuation. For example, the following are valid hostnames that could be assigned to each computer in a LAN consisting of 5 hosts: hostname 1 - Morpheus; hostname 2 - Trinity; hostname 3 - Tank; hostname 4 - Oracle; and hostname 5 - Dozer. Each of these hostnames conforms to the requirement that no blank spaces or punctuation marks are present. Use short hostnames to eliminate excessive typing, and choose a name that is easy to remember.

Table 2 summarizes what we have covered so far in this article.
Every host in the LAN will have the same network address, broadcast address, subnet mask, and domain name, because those addresses identify the network in its entirety. Each computer in the LAN will have a hostname and IP address that uniquely identify that particular host. The network address is 192.168.1.0, and the broadcast address is 192.168.1.128. Therefore, each host in the LAN must have an IP address between 192.168.1.1 and 192.168.1.127.
Table 2. Sample IP addresses for a LAN with 127 or fewer interconnected computers

IP address          Example           Same/unique
Network address     192.168.1.0       Same for all hosts
Domain name         yourcompanyname   Same for all hosts
Broadcast address   192.168.1.128     Same for all hosts
Subnet mask         255.255.255.0     Same for all hosts
Hostname            Any valid name    Unique to each host
Host addresses      192.168.1.x       x must be unique to each host
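A static addressing plan like the one summarized in Table 2 can be sanity-checked for duplicates and out-of-range entries before it is rolled out. The sketch below is illustrative only: the hostnames reuse the examples from the text, and the host range matches the sample LAN (192.168.1.1 to 192.168.1.127):

```python
import ipaddress

# Hypothetical static plan: hostname -> IP address (one per host).
plan = {
    "Morpheus": "192.168.1.1",
    "Trinity":  "192.168.1.2",
    "Tank":     "192.168.1.3",
    "Oracle":   "192.168.1.4",
    "Dozer":    "192.168.1.5",
}

addrs = [ipaddress.ip_address(a) for a in plan.values()]

# Every host address must be unique ...
assert len(set(addrs)) == len(addrs), "duplicate IP address in plan"

# ... and must fall inside the assignable host range of the sample LAN.
lo = ipaddress.ip_address("192.168.1.1")
hi = ipaddress.ip_address("192.168.1.127")
assert all(lo <= a <= hi for a in addrs), "address outside host range"

print("plan is consistent")
```

Running the same checks after every manual change is a cheap way to avoid the address-collision problem the article warns about.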

Assigning IP addresses in a LAN

There are two ways to assign IP addresses in a LAN. You can manually assign a static IP address to each computer in the LAN, or you can use a special type of server that automatically assigns a dynamic IP address to each computer as it logs into the network.
Static IP addressing
Static IP addressing means manually assigning a unique IP address to each computer in the LAN. The first three octets must be the same for each host, and the last octet must be a unique number for each host. In addition, a unique hostname needs to be assigned to each computer. Each host in the LAN will have the same network address (192.168.1.0), broadcast address (192.168.1.128), subnet mask (255.255.255.0), and domain name (yourcompanyname). It's a good idea to start by visiting each computer in the LAN and jotting down the hostname and IP address for future reference.
Dynamic IP addressing
Dynamic IP addressing is accomplished via a server or host called a DHCP (Dynamic Host Configuration Protocol) server that automatically assigns a unique IP address to each computer as it connects to the LAN. A similar service called BootP can also automatically assign unique IP addresses to each host in the network. The DHCP/BootP service is a program or device that acts as a host with a unique IP address. An example of a DHCP device is a router that acts as an Ethernet hub (a communications device that allows multiple hosts to be connected via an Ethernet jack and a specific port) on one end and allows a connection to the Internet on the opposite end. Furthermore, the DHCP server will also assign …

COMPUTER ENGINEERING LOKSEWA PREPARATION
SUBJECTIVES SOLUTION
FOR
SECTION-A
Prepared by: Er. Rudra Prasad Kafle
(Marks: 24 marks)
1. Computer networks (5 marks): refer to Tanenbaum's books for more references
2. Computer architecture & organization and microprocessor (5 marks): refer to William Stallings' books for more references
3. Digital design (4 marks): refer to Morris Mano for more information
4. Basic electrical and electronics (5 marks): refer to B.L. Theraja for more information
5. Principles of electronics and communication (5 marks): refer to Sanjay Sharma
Date: 2073-07-13