A microprocessor incorporates the functions of a computer's central processing unit (CPU) on a single integrated circuit (IC, or microchip).[1][2] It is a multipurpose, programmable, clock-driven, register-based electronic device that accepts binary data as input, processes it according to instructions stored in its memory, and provides results as output.
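As a rough, hypothetical illustration of that description (not a model of any real microprocessor), the C sketch below steps a single register through a stored program, one instruction per simulated clock cycle; the opcode names and the program itself are invented for this example.

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical 4-instruction machine, purely illustrative. */
    enum { OP_LOAD, OP_ADD, OP_OUT, OP_HALT };

    int main(void) {
        /* Stored program: each instruction is an opcode plus an operand. */
        uint8_t program[][2] = {
            {OP_LOAD, 5},   /* register = 5            */
            {OP_ADD,  7},   /* register = register + 7 */
            {OP_OUT,  0},   /* output the register     */
            {OP_HALT, 0},
        };

        uint8_t reg = 0;   /* a single general-purpose register */
        size_t  pc  = 0;   /* program counter                   */

        /* Each loop iteration stands in for one clock cycle:
           fetch an instruction, decode it, execute it. */
        for (;;) {
            uint8_t opcode  = program[pc][0];
            uint8_t operand = program[pc][1];
            pc++;

            switch (opcode) {
            case OP_LOAD: reg = operand;        break;
            case OP_ADD:  reg += operand;       break;
            case OP_OUT:  printf("%u\n", reg);  break;
            case OP_HALT: return 0;
            }
        }
    }

Compiled and run, the sketch fetches each stored instruction in turn and prints 12, the result of loading 5 and adding 7.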
The first microprocessors emerged in the early 1970s and were used for electronic calculators, performing binary-coded decimal (BCD) arithmetic on 4-bit words. Other embedded uses of 4-bit and 8-bit microprocessors, such as terminals, printers, and various kinds of automation, soon followed. Affordable 8-bit microprocessors with 16-bit addressing also led to the first general-purpose microcomputers from the mid-1970s on.
During the 1960s, computer processors were often constructed out of small and medium-scale ICs containing from tens to a few hundred transistors. The integration of a whole CPU onto a single chip greatly reduced the cost of processing power. From these humble beginnings, continued increases in microprocessor capacity have rendered other forms of computers almost completely obsolete (see history of computing hardware), with one or more microprocessors used in everything from the smallest embedded systems and handheld devices to the largest mainframes and supercomputers.
Since the early 1970s, the increase in capacity of microprocessors has followed Moore's law, which suggests that the number of transistors that can be fitted onto a chip doubles every two years. Moore originally calculated the rate as a doubling every year,[3] but later revised the period to two years.[4] It is often incorrectly quoted as a doubling of transistors every 18 months.
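Written out as a formula (simply a restatement of the two-year doubling above, with the symbols chosen here for illustration), a chip's transistor count N after t years grows from an initial count N_0 as

    N(t) = N_0 \cdot 2^{t/T}, \qquad T = 2 \text{ years.}

After six years, for example, the two-year period gives 2^{6/2} = 8 times as many transistors, whereas the misquoted 18-month period (T = 1.5) would give 2^{6/1.5} = 16 times.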