Monday, July 16, 2018

Computer generations and the components used in each generation

If you examine computer architectures with the 20/20 hindsight of a historian, it is possible to identify six distinct generations of design.

The first three generations are distinguished by fundamental changes in technology; the successive generations simply pack more second-generation devices into a smaller space.

0. Electromechanical (gears, motors, relays): 1920s to the 1950s
1. Vacuum tube (electronic): 1946 to 1959
2. Transistor (discrete, solid state): 1959 to early 1965
3. Integrated circuit: late 1965 to the late 1970s
4. VLSI: mid 1970s to 1985
5. Parallel systems: early 1985 to present

                Notice that these generations had significant overlap; new technologies rarely replace old ones overnight. The pattern that can be regularly observed in the transition between generations is that, initially, the architectures of the previous generation are faithfully translated into the new technology. Then, once designers become comfortable with the new technology, they begin to explore the additional capabilities it offers, and new architectures are created that push the limits of the technology. The desire to exceed the current capabilities results in demand for technological advance, and the cycle repeats.

              There has not been a radical shift in technology since the transistor was introduced, but rather a steady progression in capability. In a pattern recurring throughout the history of architecture, a steady progression occasionally crosses a threshold that leads to a radical change in how we perceive the technology. When such a threshold is crossed in device density, for example, a new generation results.

              In the first generations, there was complete flexibility of design, because every circuit was developed out of elementary electronic components for every computer. If you wanted a particular kind of flip-flop, you designed and built the circuit yourself, using analog components. Computer designers had many options for such circuit designs, and even for memory technology; for example, one could design a computer to use negative and positive voltages to represent logic states instead of circuits that are either on or off. In the third generation, however, component manufacturers started supplying larger building blocks. It became too costly to develop unique circuits, so designers built computers out of commonly available building blocks such as gates, flip-flops, registers, multiplexers, adders, and even complete arithmetic-logic units. There was still a reasonable amount of flexibility in design: many unique designs could be created from these building blocks. The advantage was that anyone who understood digital logic and a modest amount of analog electronics could design and build a computer. The cost of development was dramatically reduced, and so designs proliferated. Any company that had some background in digital electronics could enter the market quickly with a new design.

The fourth generation

With the fourth generation, there was in one sense a return to the earlier generations: for the architect, developing a computer meant working at the transistor level again. For the chip designers, it meant that they were no longer just building bits and pieces; they were designing a whole computer, with many larger implications that they were not accustomed to handling. Thus the cost to develop an architecture increased, but that increase was in non-recurring engineering costs rather than recurring component costs. In past generations, the cost of producing the hardware was much closer to the cost of designing it. With the fourth generation, the cost of production fell dramatically below the cost of design, with a corresponding decrease in profit margin. Where it had previously been profitable to design and build (often by hand) a few copies of a large machine, it became necessary to amortize the design cost over thousands to millions of units.

     The result of the fourth generation was that processor design shifted from the early computer manufacturers to the companies that built chips. A few of the early makers made this transition to some extent, but the shift in technology gave existing chip fabrication companies like Intel, Motorola, and Texas Instruments the opportunity to enter the computer business on an even level with the long-established manufacturers. Now, however, there is enough capacity on a single chip to equal the best mainframe designs; thus, mainframes are now being surpassed in raw computational speed. The remaining advantage of the mainframe is essentially that it is designed for very large data processing applications that involve online transaction processing and access to immense databases. Mainframes can still outperform microprocessors in terms of I/O capacity and memory bandwidth, but this is more due to the implementation of their I/O and memory subsystems than to their CPU architecture. In the fourth generation, there was also a great falling out and standardization of architectures, because the use of an off-the-shelf microprocessor made it possible for a computer company to bring a machine to market cheaply, while a new architecture required a complete software infrastructure, which is very costly to develop. Except for a few large companies from the earlier generations, who could afford to invest in such an effort, there was no way for a computer manufacturer to go into the business with a competitive product that did not use one of the standard microprocessors.

        The fifth generation

We have been in the fifth generation for some time now; arguably, it has always been with us. Charles Babbage recognized the potential for parallel processing, as did John von Neumann. Parallel processors were under construction as early as 1963, and commercial ones were being delivered by the end of that decade. Whenever there has been a market for more processing power than can be delivered by a single CPU, there have been parallel machines to sell into that market. However, they only began to be mass produced in the late 1990s, in the form of multiprocessor servers. As of 2002, there is one mainstream production microprocessor that places multiple processors on a single chip.

    The first IC processors were just smaller transistorized designs, but then such concepts as caching, virtual memory, and microprogramming developed. Microprocessors were initially patterned after minicomputers (a step backward to cacheless, single-user designs for embedded applications), but they quickly advanced to match IC machines and then went past them with reduced instruction sets, pipelines, and multiple functional units. Now we see supercomputers moving into massive parallelism, and more mainstream architectures going to multiple processors. As in previous generations, there is an attempt to pattern the new generation after the old. Thus we see processors with multiple functional units that try to hide their presence under a traditional instruction set. We see multiprocessors that try hard to maintain a single memory space and sequential semantics for processor interactions. We see massively parallel machines that either have just a single control thread or try to hide their multiple threads behind a single-thread model. The degree of parallelism in most machines is limited to two or four processors, and they are used to support multitasking in operating systems. Chip multiprocessors simply pack these designs onto a single IC.

Types of parallel processors

The most popular taxonomy in parallel processing is one proposed by Michael Flynn in 1972. It defines an orthogonal taxonomy of instruction streams and data streams.
SISD: uniprocessors
SIMD: an instruction stream directs the same operation to be performed on multiple data values simultaneously
MIMD: independent instruction and data streams that can interact
MISD: multiple instruction streams operating on the same data stream
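As a rough illustration of the two most common categories (my sketch, not part of Flynn's paper), the SISD/SIMD distinction can be modeled in Python: an SISD machine steps one instruction stream over one data stream, while SIMD directs a single operation at many data elements at once.

```python
# Illustrative model of Flynn's SISD vs SIMD categories.
# Real SIMD hardware applies one decoded instruction to many lanes in lockstep;
# here a single map() call stands in for that one broadcast instruction.

def sisd_add(a, b):
    # SISD: one instruction stream, one data stream -- element by element,
    # each loop iteration corresponding to a separately issued instruction.
    result = []
    for x, y in zip(a, b):
        result.append(x + y)
    return result

def simd_add(a, b):
    # SIMD: one "add" instruction directed at all data lanes simultaneously.
    return list(map(lambda pair: pair[0] + pair[1], zip(a, b)))

print(sisd_add([1, 2, 3], [10, 20, 30]))  # [11, 22, 33]
print(simd_add([1, 2, 3], [10, 20, 30]))  # [11, 22, 33]
```

Both produce the same result; the taxonomy classifies how the work is organized, not what is computed.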
    

        Components used in each computer generation

1. Mechanical/electromechanical
2. Vacuum tube
3. Transistor
4. Integrated circuit
5. Very large scale integration (VLSI)/microprocessor
6. Homogeneous parallel processors

1. Mechanical/electromechanical

Mechanical computers were built with trains of gears, much like clocks. Typically, they used decimal arithmetic, and each gear or wheel had ten positions. The hardest part of designing such a machine was to get the carry to propagate cleanly from one digit to the next. The other difficulty was that the sheer complexity of a large calculator, together with the friction of all of the gears, made construction very difficult prior to the advent of modern machining technology.
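To see why carry propagation dominated mechanical design, here is a minimal sketch (my example, not from the original text) of decimal ripple carry across ten-position wheels, which is the operation a gear train had to perform mechanically:

```python
def add_on_wheels(wheels, digit, position):
    """Add `digit` to the wheel at `position` in a row of decimal wheels
    (least significant first). Each wheel holds 0-9; a wheel passing 9
    must nudge its neighbour -- the ripple carry that was so hard to
    achieve cleanly with gears."""
    carry = digit
    i = position
    while carry and i < len(wheels):
        total = wheels[i] + carry
        wheels[i] = total % 10        # the wheel wraps past 9
        carry = total // 10           # overflow nudges the next wheel
        i += 1
    return wheels

# 999 + 1: the carry ripples through every wheel in turn.
print(add_on_wheels([9, 9, 9, 0], 1, 0))  # [0, 0, 0, 1]
```

The worst case (all wheels at 9) forces every wheel to turn on a single increment, which is exactly the situation where friction and mechanical slop accumulated in real machines.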

  Storage in mechanical computers was by the position of the gears. In the later electromechanical machines, relays were able to store some of the machine's state. The program, however, was always stored in a separate medium, typically a punched paper card or tape. Some analog mechanical computers could be programmed by changing the gear train, but this was really just equivalent to changing the parameters of the program, since they generally computed just one type of function.

 Relays work on the principle that when a voltage is applied to a coil, a magnetic rod is driven outward so that a hinged or flexible electrical contact is forced to touch a fixed contact, thus closing a circuit. We thus have an electrically controlled switch. The significance of the electrically controlled switch is that information can be transmitted over significant distances without loss and without interference. In a mechanical system, carrying the state of one wheel to another at a distance involves long shafts, and often extra gears to allow the shafts to bypass other shafts. Also of major significance is that the relay is more naturally used with a binary number system, because of the on/off nature of circuits.
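The electrically controlled switch is the key abstraction that every later generation (tube, transistor, IC) reimplements in a faster technology. As an illustrative sketch (my example, not the author's), two relays wired in series behave as an AND gate, and two wired in parallel as an OR gate:

```python
def relay(coil, contact_in):
    # A relay passes current through its contact only while the coil is
    # energized; otherwise the circuit is open.
    return contact_in if coil else 0

def relay_and(a, b):
    # Series wiring: current from the supply (1) must pass through
    # relay a's contact and then relay b's contact to reach the output.
    return relay(b, relay(a, 1))

def relay_or(a, b):
    # Parallel wiring: either closed contact completes the circuit.
    return relay(a, 1) | relay(b, 1)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", relay_and(a, b), "OR:", relay_or(a, b))
```

The same series/parallel idea carries over directly to tube and transistor logic, which is why the rest of this article keeps returning to the phrase "electrically controlled switch."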

   Different types of mechanical devices

  • Edmund Gunter's scale
  • Wilhelm Schickard's calculator
  • Blaise Pascal's "box"
  • Gottfried Wilhelm Leibniz's calculator
  • Variations of Leibniz's calculator
  • Joseph Jacquard's loom
  • Charles Thomas's arithmometer

   2. Vacuum tube

      
A vacuum tube is, reasonably enough, a sealed glass tube containing a vacuum, in which several electronic elements are present: the cathode, anode, grid, and filament. When the cathode is heated by the filament and a voltage is applied across the cathode and anode, current flows between them. If a grid is inserted between them, the flow can be controlled by switching the grid between a positive and a negative voltage.

    The grid voltage can be quite small and the plate voltages quite high, thus providing an amplifying capability. More importantly for computers, switching the grid voltage causes the tube to act as a switch with respect to the plates. Thus, we have an electrically controlled switch that is much faster than a relay.
   
  A type of vacuum tube, the cathode ray tube, also served as a popular storage mechanism. Other memory devices used during the period include mercury and glass delay lines, and magnetic core memory. Vacuum tubes, however, are large, require a lot of power, and produce a lot of waste heat. In fact, for one rather large vacuum tube machine, it was once estimated that if its four turbine-powered air conditioners were to fail, the heat buildup within 15 minutes would be sufficient to melt the concrete and steel building containing it. It has also been estimated that if a modern computer were built with vacuum tubes, it would be the size of the Empire State Building.

  Different types of machines which used vacuum tubes

  • Atanasoff's machine
  • Colossus 
  • ENIAC
  • MADM
  • EDVAC and EDSAC

   3. Transistor

The transistor, invented in 1948, performed the same basic function as the vacuum tube, but with much lower voltage and current, and very little waste heat. It is interesting to note that many engineers at the time pronounced it a useless device precisely because it couldn't handle the power of a tube! By using a material called a semiconductor, which conducts electricity when a charge is applied to it and acts as an insulator when the charge is removed, an electronic switch can be built: current flows between the collector and emitter when a charge is applied to the base. Transistors are also much smaller than tubes. Rather than being an inch in diameter and three inches long, transistors are about 1/4 inch in diameter and 3/16 inch long. They also require fewer wires, since there are no filaments. Computers of this era mostly used magnetic core memory, although registers were built from transistor circuits, eventually leading to modern solid-state memory.

    Still, a computer equivalent to a modern-day microprocessor, built with transistors, would have occupied several floors of the Empire State Building. A typical machine of the period had 16K 32-bit words of core, and filled a large room.

       4. Integrated circuit

The concept behind the integrated circuit is that transistors can be formed by crossing two semiconducting materials on a silicon substrate. Wherever a wire or line of polysilicon crosses a line of silicon with ions diffused into it, a transistor is formed. If a charge is applied to the polysilicon line, current flows through the junction; if the charge is taken away, the junction becomes non-conductive. Thus, we have another example of an electrically controlled switch.

    The important point about the integrated circuit is that multiple transistors can be formed on a single substrate. Thus, a logic circuit that once occupied a whole PC board can be reduced to fit on a single chip of silicon. Also, because the transistors can be connected directly on the chip, they can be smaller and need less power to communicate. Thus, ICs require less power and generate less waste heat.

  Early ICs contained just a few transistors; these were later called SSI, for Small Scale Integration. Later on, there were chips with a hundred or so transistors, which were called Medium Scale Integration (MSI); one of these might contain a whole register or part of an arithmetic unit. Then came Large Scale Integration (LSI), in which as many as a thousand transistors could be placed on a chip, so that fairly complicated building blocks fit into one IC. The other novel use of the IC, pioneered by Texas Instruments, was as a storage device: it was found that IC circuits could also store charge, and could be used as memory. Thus, the IC revolutionized computer design in two ways: by shrinking the size of computers, and by making the memory technology compatible with the processing technology.

       5. VLSI/microprocessor

Very large scale integration (VLSI) was simply the next step beyond LSI, to thousands of transistors on a chip. For a while, a few people tried calling newer levels of integration ULSI (ultra), but the name never caught on. Basically, the division between LSI and VLSI is the difference in design approach between the two. With LSI, you think in terms of standard modules that are wired together on a circuit board to build a computer or custom logic system. With VLSI, an entire system can be placed on a chip, and the design of custom chips is standard practice. Some notable VLSI microprocessors:

  • Intel 4004
  • Intel 8008
  • Intel 8080/8085
  • Zilog Z80/Z8/Z8000
  • Intel 8086/8088
  • Intel 80186/80286/80386/80486
  • Intel IA-32: Pentium, PII, PIII, P4, Celeron, Xeon, ...
  • Intel Itanium: Merced, McKinley, Madison, Deerfield, ...
Intel developed the first microprocessor, the 4-bit 4004, in 1971, as the basis for a desktop calculator. The next year, they produced the 8008, an 8-bit microprocessor intended to control a terminal. Fortunately for Intel, the terminal manufacturer chose not to use the 8008, thus forcing Intel to look for other uses for the device. It was not easy to sell a microprocessor in 1972: most engineers designed in MSI and LSI, and were leery of this stuff called software. However, computer hobbyists caught on to the potential, and a new industry sprang up overnight. Intel followed the 8008 with the 8080, a more powerful 8-bit system, and the 8085, which required fewer support chips.


   Intel was not the first to jump to 16 bits (National Semiconductor's IMP-16 was actually contemporary with the 8085), but they were the first to market one successfully. It is widely agreed that the Intel architectures are poorly conceived, but that their success was always due to the fact that Intel has been the first to bring a usable product to market. The 80186 and 80286 are extensions of the 8086 16-bit processor. The 8088 is an 8086 with an 8-bit external data path. The 80386 is a 32-bit extension of the family, and the 80486 adds floating point and virtual memory support to the processor. The Pentium (IA-32) family added cache memory and pipelined execution. Subsequent generations of the Pentium have continued to increase on-chip cache and pipeline depth (the P4 has a 20-stage branch pipe), as well as adding special features such as multimedia instructions. It is interesting to note that Intel's "family" of processors is not architecturally consistent: there is upward compatibility from the 8086, but not downward compatibility.
                 Intel recognized that the legacy of x86/IA-32 would make it difficult to extend the architecture to 64 bits. Meanwhile, there had been a movement in architecture toward simplifying instruction sets and the corresponding control logic, so that the basic machine cycle would be faster. Code would be larger, and more instructions would be required for infrequently used operations, but overall speed would increase. The first of these new microprocessors to be sold commercially was the SPARC, based on designs developed at UC Berkeley. Shortly after that came the MIPS R2000, developed at Stanford. IBM also had an entry, the RISC System/6000, whose design was operating in a laboratory well before the other two. There are a lot of claims and counterclaims about who first developed the concept of reduced instruction set computers (RISC). In reality, the first machines were mostly RISC designs, and it is really a question of who rediscovered this and repopularized the notion.
         

          6. Parallel processors

  • Originally proposed by Babbage
  • STARAN
  • MPP
  • Transputer
  • Connection Machine CM-2, CM-5
  • MasPar, DAP
  • Intel iPSC, Paragon, Teraflop
  • IBM SP, Meiko CS
  • Sequent Symmetry, SGI Challenge, SGI Origin, etc.
STARAN was the first commercially successful massively parallel processor, and it is still being sold as the ASPRO. MPP was a one-of-a-kind processor built for NASA to process the vast amount of data being gathered by earth-resources satellites; it had 16,384 1-bit processors arranged in a grid. Another similar design is the AMT/CPP DAP. The transputer was a parallel processor cell with communications channels and the ability to be configured into different network topologies. The first version was marginally successful; a second generation was repeatedly set back by delays and never went into volume production.

           Summary of the generations of computers

The computers produced during the period 1946-1959 with the technology of the day are regarded as the first generation computers. These computers were manufactured with vacuum tubes (triodes, diodes, etc.) as their basic elements; these tubes were used in arithmetic and logical operations.

Advantages

They were capable of performing arithmetic and logical operations.
They used electronic valves in place of the key-punch machines.

Disadvantages

They were too big in size, very slow, and had a low level of accuracy and capability.
They consumed a lot of electricity, generated a lot of heat, and were subject to frequent breakdowns.
They had very low storage capacity and used machine language.

       The computers produced during 1959-1965 are known as second generation computers. These computers used transistors in place of vacuum tubes as their basic elements to perform all computational and logical work.

Advantages

They required very small space, and were very fast, reliable, and dependable.
They used less power, dissipated less heat, and had larger storage capacity.
They used better peripheral devices, like card readers and printers.

Disadvantages

They did not have any operating system and used assembly language. They lacked intelligence and decision-making ability, and needed constant upkeep and maintenance.
They handled data processing in batch mode only.

      The computers developed during the period 1965-1970 are branded as third generation computers. The significant feature of these computers was that they were built with monolithic integrated circuits (ICs), each of which consisted of thousands of transistors and other electronic components on a single crystal.

Advantages

They were very small in size by comparison, less costly, and built with thousands of transistors, which were very cheap.
They used faster, better devices for storage, called auxiliary, backing, or secondary storage.
They used operating systems for better resource management, and used the concepts of time sharing and multiprogramming.

Disadvantages

They created a lot of problems for the manufacturers in their initial stages.
They lacked thinking power and decision-making capability.
They could not provide any insight into their internal workings.

     The computers that came on the scene with improved technology during the period 1970-1985 are marked as the fourth generation computers. They used large scale and very large scale integrated circuits, in the form of microprocessors and memory. These computers placed millions of transistors and other electronic components on a single silicon chip. A microprocessor is a single chip which can itself perform the controlling, arithmetic, and logical functions of a computer, and at fast speed too.

Advantages

They were very small in size, and their cost of operation was very low.
They were very compact, fast, and reliable, as they used very large scale integrated circuits.
They were capable of facilitating interactive on-line remote programming, by which a user sitting at a distant place can get his programs executed by a centrally located computer.

Disadvantages

They were less powerful and slower than the mainframe computers.
They lacked thinking power and decision-making ability.
They had less storage capacity and needed further improvement.

       The computers emerging after 1985 with further improved technology are considered the fifth generation computers. These machines are designed to incorporate "artificial intelligence" and use stored reservoirs of knowledge to make expert judgments and decisions like human beings. They are also designed to process non-numerical information, like pictures and graphs, using very large scale integrated circuits.

Advantages

They are oriented towards integrated database development, to provide decision models.
They are faster, very cheap, and have very high storage capacity.
They have thinking power and decision-making capability, and thereby will be able to aid executives in management.

Disadvantages

They need very high-level languages, and they may replace the human workforce and cause grievous unemployment problems.
They may make human brains dull and doomed.
Since the computer is made up of electronic and electrical components, there is a need to know about electricity, its units and measures.

