Computer Systems Servicing New Normal
Flat head screwdriver
2. Which tool is used to clean different computer components? d. Lint-free cloth
3. Which tool is used to loosen or tighten screws that have a star-like depression on the top, a feature that is mainly found on laptops? a. Anti-static mat b. Phillips head screwdriver c. Torx screwdriver d. Wire cutter
4. Which tool is sometimes called a nut driver? It is used to tighten nuts in the same way that a screwdriver tightens screws. c. Hex driver d. Torx screwdriver
5. Which tool is used for hardware to stand on to prevent static electricity from building up?
For that reason, this book is not about computer organization, but rather concerns ongoing issues related to computer hardware and the solutions provided by the industry for these issues. Figure 0 depicts the layers involved in running a program. In some cases, the high-level programming languages are compiled directly into the machine language.
The translated program, the executable, will be able to run using services provided by the operating system, which is an additional software component usually considered part of the infrastructure. The next level down is the machine instructions. These binary values represent the instructions to be executed and are the only instructions the machine recognizes. These building blocks are defined in the next layer of Figure 0.

One reason operating systems use buffers is to cope with data transfer size differences between the producer and the consumer of a data stream.
Buffers are used in particular in networking systems to break messages up into smaller packets for transfer, and then for re-assembly at the receiving side.
A second reason for buffering is to support copy semantics. For example, when an application makes a request for a disk write, the data is copied from the user's memory area into a kernel buffer. The application can then change its copy of the data, but the data that eventually gets written out to disk is the version that existed at the time the write request was made.
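As a minimal sketch of this idea (not from the original text; all names are hypothetical), the following Python fragment simulates copy semantics by snapshotting the user's buffer at the moment of the write request:

```python
# Minimal sketch of copy semantics (illustrative; names are hypothetical).
# The "kernel" snapshots the user's buffer at write time, so later changes
# by the application do not affect the data that eventually gets written.
pending_disk_writes = []

def write_request(user_buffer: bytearray) -> None:
    # Copy from the user's memory area into a "kernel buffer" at request time.
    pending_disk_writes.append(bytes(user_buffer))

buf = bytearray(b"version-1")
write_request(buf)
buf[-1:] = b"2"                      # the application changes its copy afterwards
print(pending_disk_writes[0])        # b'version-1': the snapshot is what gets written
```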
Virtual Memory

This section describes the concepts of virtual memory, demand paging, and various page replacement algorithms. Virtual memory is a technique that allows the execution of processes that are not completely resident in memory. The main visible advantage of this scheme is that programs can be larger than physical memory. Virtual memory is the separation of user logical memory from physical memory. This separation allows an extremely large virtual memory to be provided for programmers when only a smaller physical memory is available.
The entire program is not always required to be loaded fully in main memory. Virtual memory is commonly implemented by demand paging. It can also be implemented in a segmentation system, and demand segmentation can likewise be used to provide virtual memory.

Page Replacement Algorithms

Page replacement algorithms are the techniques by which the operating system decides which memory pages to swap out and write to disk when a page of memory needs to be allocated.
Paging happens whenever a page fault occurs and a free page cannot be used for the allocation, either because no pages are available or because the number of free pages is lower than the number required. The time spent waiting for page-ins determines the quality of a page replacement algorithm: the less time spent waiting, the better the algorithm.
A page replacement algorithm looks at the limited information about accessing the pages provided by hardware, and tries to select which pages should be replaced to minimize the total number of page misses, while balancing it with the costs of primary storage and processor time of the algorithm itself.
There are many different page replacement algorithms. We evaluate an algorithm by running it on a particular string of memory references and computing the number of page faults.

Reference String

The string of memory references is called a reference string. Reference strings are generated artificially or by tracing a given system and recording the address of each memory reference.
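To make this evaluation concrete, here is a small illustrative Python function (an assumption of this edit, not part of the original text) that counts page faults for a reference string under FIFO or LRU replacement:

```python
from collections import OrderedDict

def count_page_faults(reference_string, num_frames, policy="FIFO"):
    """Count page faults for a reference string under FIFO or LRU replacement."""
    frames = OrderedDict()                  # resident pages, ordered by arrival/recency
    faults = 0
    for page in reference_string:
        if page in frames:
            if policy == "LRU":
                frames.move_to_end(page)    # refresh recency on a hit
            continue
        faults += 1                         # page fault: page not resident
        if len(frames) >= num_frames:
            frames.popitem(last=False)      # evict oldest (FIFO) / least recent (LRU)
        frames[page] = None
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print(count_page_faults(refs, 3, "FIFO"))   # 10 faults
print(count_page_faults(refs, 3, "LRU"))    # 9 faults
```

Running both policies on the same reference string is exactly the evaluation procedure described above: the algorithm that produces fewer faults is the better one for that workload.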
Tracing a given system in this way produces a large amount of data, about which we note two things.

Translation Look-aside Buffer (TLB)

A translation look-aside buffer (TLB) is a memory cache that stores recent translations of virtual memory addresses to physical addresses for faster retrieval.
When a virtual memory address is referenced by a program, the search starts in the CPU. First, the instruction caches are checked. At this point, the TLB is checked for a quick reference to the location in physical memory. When an address is searched in the TLB and not found, the page table in memory must be walked to find the translation. As virtual memory addresses are translated, the values referenced are added to the TLB. TLBs also add the support required for multi-user computers to keep memory separate, by having a user and a supervisor mode as well as using permissions on read and write bits to enable sharing.
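The following toy Python class (illustrative only; a real TLB is a hardware structure, and the eviction policy here is deliberately simplistic) mimics the hit/miss behaviour just described:

```python
class TLB:
    """Toy TLB (illustrative): caches virtual-page-number -> frame-number entries."""

    def __init__(self, capacity: int = 16):
        self.capacity = capacity
        self.entries = {}                 # vpn -> pfn

    def translate(self, vpn, page_table):
        if vpn in self.entries:           # TLB hit: fast path, no page-table walk
            return self.entries[vpn], "hit"
        pfn = page_table[vpn]             # TLB miss: walk the page table (slow)
        if len(self.entries) >= self.capacity:
            self.entries.pop(next(iter(self.entries)))  # evict the oldest entry
        self.entries[vpn] = pfn           # cache the translation for next time
        return pfn, "miss"

page_table = {0: 5, 1: 9, 2: 3}           # hypothetical per-process page table
tlb = TLB(capacity=2)
print(tlb.translate(1, page_table))       # (9, 'miss')
print(tlb.translate(1, page_table))       # (9, 'hit')
```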
TLBs can suffer performance issues from multitasking and code errors. This performance degradation is called cache thrashing. Cache thrashing is caused by an ongoing computer activity that fails to progress because of excessive use of resources or conflicts in the caching system.

Operating System Security

This section describes various security-related aspects such as authentication, one-time passwords, threats, and security classifications.
A computer system must be protected against unauthorized access, malicious access to system memory, viruses, worms, and so on. One-time passwords provide additional security along with normal authentication. In a one-time password system, a unique password is required every time a user tries to log in to the system.
Once a one-time password is used, it cannot be used again. One-time passwords are implemented in various ways: for example, the system may ask for the numbers corresponding to a few randomly chosen letters of a secret phrase, or for a secret id that is freshly generated before every login.
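One concrete way to implement such one-time passwords is a hash chain (a Lamport-style scheme). The sketch below is an assumption of this edit, not a description of any particular product:

```python
import hashlib

def hash_chain(seed: str, n: int) -> list:
    """Hash the seed n times; passwords are later revealed in reverse order."""
    value = seed.encode()
    chain = []
    for _ in range(n):
        value = hashlib.sha256(value).digest()
        chain.append(value)
    return chain

chain = hash_chain("shared-secret", 1000)   # computed once, e.g. at enrollment
server_state = chain[-1]                    # the server stores only the last link

def verify(candidate: bytes) -> bool:
    # A candidate is the next valid one-time password iff hashing it once
    # yields the value the server currently stores.
    return hashlib.sha256(candidate).digest() == server_state

print(verify(chain[-2]))                    # True; chain[-2] is usable exactly once
```

After a successful login, the server replaces its stored value with the password just used, so each link of the chain can authenticate exactly once.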
An operating system's processes and kernel perform their designated tasks as instructed. If a user program makes these processes perform malicious tasks, this is known as a program threat. One common example of a program threat is a program installed on a computer that can store and send user credentials over the network to some hacker. Among the well-known program threats is the virus, which is harder to detect than most: a virus is generally a small piece of code embedded in a program.
System threats refer to the misuse of system services and network connections to put the user in trouble. System threats can be used to launch program threats on a complete network; this is called a program attack. A well-known system threat is the worm: a worm process generates multiple copies of itself, where each copy uses system resources and prevents all other processes from getting the resources they require. Worm processes can even shut down an entire network.

This definition motivates a generic model of language processing activities.
We refer to the collection of language processor components engaged in analyzing a source program as the analysis phase of the language processor. Components engaged in synthesizing a target program constitute the synthesis phase. Hardware is just a piece of machinery whose functions are controlled by compatible software. Hardware understands instructions in the form of electronic charge, the counterpart of binary language in software programming. Binary language has only two symbols, 0 and 1.
To instruct the hardware, codes must be written in binary format, which is simply a series of 1s and 0s. It would be a difficult and cumbersome task for computer programmers to write such codes directly, which is why we have compilers to generate them.

Language Processing System

We have learnt that any computer system is made of hardware and software. The hardware understands a language which humans cannot understand. So we write programs in a high-level language, which is easier for us to understand and remember.
These programs are then fed into a series of tools and OS components to get the desired code that can be used by the machine. This is known as the language processing system. The first of these tools is typically the preprocessor, which may perform the following functions. Macro processing: a preprocessor may allow a user to define macros, which are shorthands for longer constructs. File inclusion: a preprocessor may include header files into the program text.
Rational preprocessor: these preprocessors augment older languages with more modern flow-of-control and data structuring facilities.
An important part of a compiler is the reporting of errors to the programmer. Early programmers began to use mnemonic symbols for each machine instruction, which they would subsequently translate into machine language by hand. Such a mnemonic machine language is now called an assembly language. Programs known as assemblers were written to automate the translation of assembly language into machine language.
The input to an assembler program is called the source program; the output is a machine language translation, the object program.

What is an assembler?
A tool called an assembler translates assembly language into binary instructions. Symbolic names for operations and locations are one facet of this representation. An assembler reads a single assembly language source file and produces an object file containing machine instructions and bookkeeping information that helps combine several object files into a program. Figure 1 illustrates how a program is built.
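As a rough illustration of that translation step, the toy assembler below maps mnemonics of a made-up 16-bit instruction set to machine words; the opcodes, field widths, and register syntax are invented for the example and do not describe a real machine:

```python
# Toy one-pass assembler for a made-up 16-bit ISA (purely illustrative).
OPCODES = {"LOAD": 0x1, "ADD": 0x2, "STORE": 0x3, "HALT": 0xF}

def assemble(source_lines):
    """Translate mnemonic lines such as 'ADD R1, R2' into machine words."""
    words = []
    for line in source_lines:
        mnemonic, *operands = line.replace(",", " ").split()
        opcode = OPCODES[mnemonic]                     # unknown mnemonic -> KeyError
        regs = [int(op.lstrip("R")) for op in operands]
        r1 = regs[0] if len(regs) > 0 else 0
        r2 = regs[1] if len(regs) > 1 else 0
        words.append((opcode << 12) | (r1 << 6) | r2)  # pack opcode and registers
    return words

program = ["LOAD R1, R0", "ADD R1, R2", "STORE R1, R0", "HALT"]
print([f"{w:04x}" for w in assemble(program)])         # ['1040', '2042', '3040', 'f000']
```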
Most programs consist of several files, also called modules, that are written, compiled, and assembled independently. A program may also use prewritten routines supplied in a program library. A module typically contains references to subroutines and data defined in other modules and in libraries. The code in a module cannot be executed when it contains unresolved references to labels in other object files or libraries. Another tool, called a linker, combines a collection of object and library files into an executable file, which a computer can run.
The assembler provides:
a. Access to the entire instruction set of the machine.
b. A means for specifying the run-time locations of program and data in memory.
c. Symbolic labels for the representation of constants and addresses.
d. Assemble-time arithmetic.
e. The use of any synthetic instructions.
f. Machine code emitted in a form that can be loaded and executed.
g. Reporting of syntax errors and program listings.
h. An interface to the module linkers and program loader.
i. Expansion of programmer-defined macro routines.

Interpreters take a different route. Pure interpretation analyzes and executes the source text directly, which requires more overhead and makes the process complex. With impure interpretation, the source code is subjected to some initial preprocessing before the code is eventually interpreted.
The actual analysis overhead is then reduced, and the processor speed enables faithful and efficient interpretation. Java also uses an interpreter.
The process of interpretation can be carried out in the following phases:
1. Lexical analysis
2. Syntax analysis
3. Semantic analysis
4. Direct execution

e. Loader and link-editor: Once the assembler produces an object program, that program must be placed into memory and executed.
The assembler could place the object program directly in memory and transfer control to it, thereby causing the machine language program to be executed.
Also, the programmer would have to retranslate the program with each execution, thus wasting translation time. Loaders and link-editors were introduced to overcome these problems of wasted translation time and memory. It is also expected that a compiler should make the target code efficient and optimized in terms of time and space. Compiler design principles provide an in-depth view of the translation and optimization process, which includes lexical, syntax, and semantic analysis as the front end, and code generation and optimization as the back end.
Analysis Phase

Known as the front end of the compiler, the analysis phase reads the source program, divides it into core parts, and then checks for lexical, grammar, and syntax errors. The analysis phase generates an intermediate representation of the source program and a symbol table, which are fed to the synthesis phase as input.

Figure: Analysis and synthesis phases of a compiler.

Synthesis Phase

Known as the back end of the compiler, the synthesis phase generates the target program with the help of the intermediate source code representation and the symbol table.
A compiler can have many phases and passes. Pass: A pass refers to the traversal of a compiler through the entire program. Phase: A phase of a compiler is a distinguishable stage, which takes input from the previous stage, processes it, and yields output that can be used as input for the next stage.
A pass can have more than one phase. A common division into phases is described below. In some compilers, the ordering of phases may differ slightly, some phases may be combined or split into several phases, or some extra phases may be inserted between those mentioned below.

Lexical analysis

This is the initial part of reading and analysing the program text: the text is read and divided into tokens, each of which corresponds to a symbol in the programming language, e.g. a variable name, keyword, or number.
Syntax analysis

This phase takes the list of tokens produced by the lexical analysis and arranges them in a tree structure, called the syntax tree, that reflects the structure of the program. This phase is often called parsing.

Type checking

This phase analyses the syntax tree to determine if the program violates certain consistency requirements, e.g. if a variable is used but not declared, or if it is used in a context that does not match its declaration.
Intermediate code generation

The program is translated to a simple machine-independent intermediate language.

Register allocation

The symbolic variable names used in the intermediate code are translated to numbers, each of which corresponds to a register in the target machine code.
In terms of programming languages, words are objects like variable names, numbers, keywords, etc. Lexical analysis is the first phase of a compiler. It takes the modified source code from the language preprocessors, written in the form of sentences.
The lexical analyzer breaks this text into a series of tokens, removing any whitespace and comments in the source code. If the lexical analyzer finds a token invalid, it generates an error. The lexical analyzer works closely with the syntax analyzer.
It reads character streams from the source code, checks for legal tokens, and passes the data to the syntax analyzer on demand.

Tokens

A lexeme is a sequence of (alphanumeric) characters in a token.
There are some predefined rules for every lexeme to be identified as a valid token. These rules are defined by grammar rules, by means of a pattern. A pattern explains what can be a token, and these patterns are defined by means of regular expressions.
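A minimal tokenizer sketch in Python shows how such regular-expression patterns drive lexical analysis; the token classes here are illustrative assumptions, not taken from the text:

```python
import re

# Token patterns (illustrative): each token class is defined by a regular expression.
TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("IDENT",  r"[A-Za-z_]\w*"),
    ("OP",     r"[+\-*/=]"),
    ("SKIP",   r"[ \t]+"),        # whitespace is discarded, not emitted as a token
]

def tokenize(code):
    regex = "|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC)
    for m in re.finditer(regex, code):
        if m.lastgroup != "SKIP":
            yield (m.lastgroup, m.group())

print(list(tokenize("x = 42 + y")))
# [('IDENT', 'x'), ('OP', '='), ('NUMBER', '42'), ('OP', '+'), ('IDENT', 'y')]
```

A production lexer would additionally track line numbers and raise an error on characters that match no pattern, as described above.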
Syntax Analysis

Syntax analysis, or parsing, is the second phase of a compiler. In this chapter, we shall learn the basic concepts used in the construction of a parser. We have seen that a lexical analyzer can identify tokens with the help of regular expressions and pattern rules.
But a lexical analyzer cannot check the syntax of a given sentence, due to the limitations of regular expressions: regular expressions cannot check balanced tokens, such as parentheses.

Syntax Analyzers

A syntax analyzer, or parser, takes the input from a lexical analyzer in the form of token streams. The parser analyzes the source code (token stream) against the production rules to detect any errors in the code.
The output of this phase is a parse tree. This way, the parser accomplishes two tasks, i.e., parsing the code while looking for errors, and generating a parse tree as the output of the phase. Parsers are expected to parse the whole code even if some errors exist in the program. Parsers use error recovery strategies, which we will learn later in this chapter.

Parse Tree

A parse tree is a graphical depiction of a derivation. It is convenient for seeing how strings are derived from the start symbol. The start symbol of the derivation becomes the root of the parse tree.
Let us see this with an example from the last topic.

Types of Parsing

Syntax analyzers follow production rules defined by means of a context-free grammar. The way the production rules are implemented (derivation) divides parsing into two types: top-down parsing and bottom-up parsing.

Top-down Parsing

When the parser starts constructing the parse tree from the start symbol and then tries to transform the start symbol to the input, it is called top-down parsing.
Recursive Descent Parsing

Recursive descent is a top-down parsing technique that constructs the parse tree from the top, with the input read from left to right. It uses procedures for every terminal and non-terminal entity. It is called recursive as it uses recursive procedures to process the input. This technique recursively parses the input to build a parse tree, which may or may not require backtracking: the parser may process the input string more than once to determine the right production.
But the grammar associated with it, if not left-factored, cannot avoid backtracking. A form of recursive-descent parsing that does not require any backtracking is known as predictive parsing. This parsing technique is regarded as recursive because it uses a context-free grammar, which is recursive in nature.
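The sketch below (illustrative; the grammar is invented for the example) shows the shape of such a parser: one procedure per nonterminal, with a single token of lookahead, which makes it predictive rather than backtracking:

```python
# Minimal recursive-descent parser for a toy grammar (not from the text):
#   expr -> term (('+' | '-') term)*
#   term -> NUMBER | '(' expr ')'

def parse(tokens):
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def eat(expected):
        nonlocal pos
        if peek() != expected:
            raise SyntaxError(f"expected {expected!r}, got {peek()!r}")
        pos += 1

    def expr():
        node = term()
        while peek() in ("+", "-"):           # one token of lookahead decides
            op = peek(); eat(op)
            node = (op, node, term())         # build the tree as rules complete
        return node

    def term():
        tok = peek()
        if tok == "(":
            eat("("); node = expr(); eat(")")
            return node
        if tok and tok.isdigit():
            eat(tok)
            return int(tok)
        raise SyntaxError(f"unexpected token {tok!r}")

    result = expr()
    if peek() is not None:
        raise SyntaxError("trailing input")
    return result

print(parse(["1", "+", "(", "2", "-", "3", ")"]))   # ('+', 1, ('-', 2, 3))
```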
Back-tracking

Top-down parsers start from the root node (start symbol) and match the input string against the production rules, replacing them when they match. When a derived symbol does not match the next input symbol, the parser backs up and tries another production, advancing through the input until all the input letters have been matched in an ordered manner.
The string is then accepted.

Predictive Parser

A predictive parser is a recursive descent parser that has the capability to predict which production is to be used to replace the input string. The predictive parser does not suffer from backtracking. To accomplish its tasks, the predictive parser uses a look-ahead pointer, which points to the next input symbols. To make the parser backtracking-free, the predictive parser puts some constraints on the grammar and accepts only the class of grammars known as LL(k) grammars.
Predictive parsing uses a stack and a parsing table to parse the input and generate a parse tree. The parser refers to the parsing table to decide what to do for each combination of input symbol and stack element.
In recursive descent parsing, the parser may have more than one production to choose from for a single instance of input, whereas in a predictive parser, each step has at most one production to choose. There might be instances where no production matches the input string, making the parsing procedure fail.
LL grammar is a subset of context-free grammar, with some restrictions that yield a simplified version that is easy to implement. An LL grammar can be implemented by means of either of two algorithms: recursive descent or table-driven parsing.
An LL parser is denoted LL(k). The first L in LL(k) stands for parsing the input from left to right, the second L stands for left-most derivation, and k represents the number of lookaheads.
Bottom-up Parsing

As the name suggests, bottom-up parsing starts with the input symbols and tries to construct the parse tree up to the start symbol. Bottom-up parsing starts from the leaf nodes of a tree and works upward until it reaches the root node. Here, we start from a sentence and then apply production rules in reverse in order to reach the start symbol.

Shift-Reduce Parsing

Shift-reduce parsing uses two unique steps for bottom-up parsing.
These steps are known as the shift step and the reduce step. In the shift step, the input pointer advances over the next input symbol, and this symbol is pushed onto the stack. The shifted symbol is treated as a single node of the parse tree. The reduce step occurs when the top of the stack contains a handle. To reduce, a pop is performed on the stack, which pops off the handle and replaces it with the LHS non-terminal symbol. The LR parser takes this approach; it accepts a wide class of context-free grammars, which makes it the most efficient syntax analysis technique. LR parsers are also known as LR(k) parsers, where L stands for left-to-right scanning of the input stream, R stands for the construction of a right-most derivation in reverse, and k denotes the number of lookahead symbols used to make decisions.
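A minimal Python sketch (the grammar and the hard-coded decisions are assumptions of this edit) shows shift and reduce in action; a real LR parser would consult a parsing table instead:

```python
# Toy shift-reduce recognizer for the grammar  S -> S '+' n  |  n  (illustrative).

def shift_reduce(tokens):
    stack, i = [], 0
    while True:
        if stack[-3:] == ["S", "+", "n"]:
            stack[-3:] = ["S"]                 # reduce by S -> S + n (handle on top)
        elif stack[-1:] == ["n"]:
            stack[-1:] = ["S"]                 # reduce by S -> n
        elif i < len(tokens):
            stack.append(tokens[i]); i += 1    # shift the next input symbol
        else:
            return stack == ["S"]              # accept iff only the start symbol remains

print(shift_reduce(["n", "+", "n", "+", "n"]))  # True
print(shift_reduce(["n", "+"]))                 # False
```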
LL parsers and LR parsers can be contrasted point by point:
- LL: does a leftmost derivation. LR: does a rightmost derivation in reverse.
- LL: starts with the root nonterminal on the stack. LR: ends with the root nonterminal on the stack.
- LL: ends when the stack is empty. LR: starts with an empty stack.
- LL: uses the stack for designating what is still to be expected. LR: uses the stack for designating what has already been seen.
- LL: builds the parse tree top-down. LR: builds the parse tree bottom-up.
- LL: continuously pops a nonterminal off the stack and pushes the corresponding right hand side. LR: tries to recognize a right hand side on the stack, pops it, and pushes the corresponding nonterminal.
- LL: expands the non-terminals. LR: reduces the non-terminals.