Thursday, May 24, 2007

Civil Engineering

Civil engineering, like military engineering, emerged in large part from the employments of Renaissance architects. Many Renaissance cities and regional princes engaged an architect-engineer to oversee the construction of all public works, including defensive structures, bridges, and the maintenance of roads and waterways. Well into the eighteenth century, a number of engineers maintained versatile skills in both military and civil engineering, although men of more specialized backgrounds, such as surveyors, millwrights, and drainage engineers, also brought expertise to the construction of public works and often styled themselves more broadly as engineers. Mathematicians, too, consulted on engineering works and helped develop the relationship between engineering and the emerging sciences of mechanics and hydrology. The rise of absolutism combined with growing capital interests to fund a broad range of city-planning, communication, and, above all, water-management programs. Civil engineers were those experts who rose to the challenges, and the perquisites, these projects offered.

Cities and Villas

The vision of the Renaissance city developed out of new conceptions of the role cities played and an idealized notion of classical urbanism. Building programs to reshape major capitals or plan new military strongholds created cityscapes that demonstrated the power of the rulers, but also served pedestrian traffic, the easy transport of goods (or munitions), water-supply needs, and public theaters and hospitals. The work of Domenico Fontana (1543–1607) for Sixtus V is emblematic: Fontana not only designed new, more convenient traffic patterns for Rome but was also involved in the vaulting of St. Peter's cupola, and he is best known for directing the removal of a giant Egyptian obelisk from the site of the ancient Circus of Nero and its re-erection in the center of St. Peter's piazza. The latter was itself a theatrical technological feat that involved massive scaffolding and numerous windlasses, tackles, and pulleys. It drew a huge audience of spectators, reportedly hushed under threat of death so that workers could hear the bell prompts.

Structural Engineering

Expertise with materials was largely a tacit knowledge among Renaissance architects and engineers. The astounding heights achieved by the domes and basilicas of the period tested artisanal acumen in the analysis of tensional stress and outward thrust. Filippo Brunelleschi's (1377–1446) pioneering octagonal dome atop Santa Maria del Fiore in Florence featured a double-shelled construction, tapered walls that distributed stress to the thicker walls at the base, and a wooden chain that fortified the structure precisely at the point where tensional strain was greatest. A number of engineers consulted on the challenges posed by the even larger and higher circular dome of St. Peter's in Rome, finally completed under Michelangelo Buonarroti (1475–1564). In designing St. Paul's Cathedral in London, Christopher Wren (1632–1723) drew on structural ideas provided by the Royal Society's curator, Robert Hooke (1635–1703). By the beginning of the eighteenth century, rules for the proportioning of a masonry dome were available through the Swiss architect Carlo Fontana (1634–1714), and an easy geometrical construction for determining the thickness of abutments, known as "Blondel's Rule," was widely applied. The French mathematician Philippe de la Hire (1640–1718) investigated dome equilibrium from the point of view of theoretical statics. Three mathematicians, hired to analyze the cracks in St. Peter's dome in 1742–1743, partially employed de la Hire's work, but it seems to have been little utilized by practicing engineers.

Arched bridges were also a favorite form for experimentation by early modern engineers. Their construction was detailed by technical experts from Leon Battista Alberti (1404–1472) to Jean Rodolphe Perronet (1708–1794). Some of the most acclaimed examples of early modern engineering are bridges, such as the Rialto Bridge in Venice (Antonio da Ponte, begun 1588), Santa Trinità in Florence (Bartolomeo Ammannati, begun 1567), and the Pont Neuf in Paris (Jacques Androuet and Guillaume Marchand, begun 1578).

Galileo Galilei (1564–1642), himself trained as a military engineer, attempted to address some of the problems posed by structural engineering mathematically in the first half of his Discourses on Two New Sciences, devoted to material strength. The "new" science presented ways of determining the tensile strength of beams and ways of proportioning machines at larger scales. Galileo also discussed centers of gravity, a subject that had been developed by the mathematicians Luca Valerio (1552–1618) and Federico Commandino (1509–1575), as a key to determining the equilibrium of rigid systems. This approach, rooted both in engineering practice and the Archimedean revival so influential to Renaissance engineers, contrasted dramatically with the prevalent Aristotelian approach to materials.

Water supply and fountains. Water was supplied to city residents through aqueducts or pipes. Raising enough water from nearby river sources with pumps was a constant occupation of engineers. One of the most ingenious pumping stations was constructed in 1602 by the Flemish hydraulic engineer Jean Lintlaer, whose water-wheel-driven pump, constructed under the Pont Neuf, could rise and fall with the level of the river.

Lintlaer had been hired by Henry IV (ruled 1589–1610) not only to supply Paris but also because the king wanted to improve his gardens. The baroque fountains that engineers designed for the gardens of very wealthy houses across Europe were largely inspired by the work of the ancient engineer Hero of Alexandria. Hero had used the natural flow of water, the effects of air pressure and steam, and the creation of a vacuum to achieve delightful effects, such as the playing of music or the operation of mechanical birds. Hero's Pneumatica was translated numerous times between 1575 and 1700, with many vernacular editions brought out by engineers. The book not only inspired technological marvels but also set out a newly revived matter theory. Hero maintained that air was elastic, composed of tiny bits of matter separated by vacua, a theory discounted by traditional Aristotelians.

Water Management

The professions of water management commanded ever greater attention in the early modern period. Hydraulic engineering was necessary not only to raise water for drinking and fountains, but to drain and reclaim wetlands, dredge ports and harbors, build canals, and turn mills for industry. In Venice, a sea-empire into which several rivers flowed, nine out of ten patents were requested by inventors of machines that could control or utilize water. The various demands on waterways could also conflict. Too many mills constructed on a river would hinder commercial traffic or even the delivery of drinking water. A river diverted to serve the needs of one town might render another town's waterways unnavigable.

The leaders in hydraulic engineering were the Dutch, who had developed their expertise through long experience maintaining their below-sea-level landscape with dykes, dredging machines, and canals. Regarding the interrelation of hydraulic works and Dutch government, the English poet Andrew Marvell quipped, "To make a bank, was a great plot of state/Invent a shov'l and be a Magistrate." Indeed, administrative skills were often an indispensable requisite for engineers who directed the huge labor force that large water management schemes demanded.

Land reclamation. Europeans began to drain the wetlands of alluvial plains beginning at least in the twelfth century. In the sixteenth century, the desire to create productive land from the swampy river valleys was translated into capital investment. Olivier de Serres (1539–1619) gave full attention to the conversion of marshlands into arable land in his Théâtre d'Agriculture. Sixteenth- and seventeenth-century land improvement schemes were carried out from Andalusia through Italy, the Languedoc, the lower Rhône, and the fens of England. The English fens were a favorite project of James I (ruled 1603–1625), for which he hired the Dutch engineer Cornelius Vermuyden (1595?–1683). The reclaimed land fell to the control of regional noblemen and investors, and head engineers were sometimes given grants from them.

Ports, rivers, and canals. Rivers and tidal ports prone to silting required periodic dredging. This was usually accomplished with bucket or scraper dredgers. Ports often needed seawalls or the installation of locks. Salvage operations were also a matter of import to the state and to entrepreneurs, as wrecked ships blocked harbors. Sometimes, inventive but ultimately inefficacious schemes were conducted, such as the attempt of Bartolomeo Campi (1525–1573) to raise a sunken ship in the Venetian lagoon with a machine built on two caissons, on Archimedean hydrostatic principles suggested by the mathematician Niccolò Tartaglia (1500–1557). However, the use of diving bells and diving suits, such as those developed by the mathematicians Giovanni Alfonso Borelli (1608–1679) and Edmond Halley (1656–1742), were the more promising means of removing wreckage.

Rivers and their tributaries were constantly diverted, channeled, or dammed in order to irrigate land, avoid flood, or improve navigation. Engineers reinforced banks with piers and the planting of trees and straightened and deepened numerous tributaries. The greatest boon to intracontinental navigation was the development of canal locks.

The invention of the lock was of signal importance to commerce and communication. The construction of intercity turnpikes and well-drained roads did not accelerate until the second half of the eighteenth century. Systems of canals, however, greatly extended alluvial navigation beyond the paths of naturally navigable rivers, and made possible commercial transport between many more cities. Canal waters were also employed to turn the water wheels that powered numerous mills.

While single gates had been employed in regulating water flow, the first lock, with gates at either end of a short section of the canal, appears to have been constructed by Bertola da Novate in the mid-1450s. Bertola, commissioned by the Duke of Milan, Francesco Sforza (ruled 1450–1466), to enlarge the Bereguardo Canal, devised the scheme by which boats could ascend or descend the elevation of the waterway step-wise: the chamber between the gates was filled or emptied until it matched the level of the adjacent section of canal, at which point the corresponding gate could be opened. In the seventeenth-century Netherlands, where canals had defined the landscape since the Middle Ages, new intercity canals were dug that carried passenger traffic on horse-drawn boats. England almost doubled its river navigation in the second half of the century, from 685 miles to 1,160 miles. In France, the ambitious project to connect the Mediterranean with the Atlantic by canal, originally promoted by Leonardo da Vinci in the service of Francis I, was half completed with the Canal du Midi in 1681. Beginning in 1642, the foodstuffs of the Loire Valley could be carried to Paris via a canal that included thirty-five locks and featured a seven-rise staircase of consecutive locks. The fortifications chief, Sébastien le Prestre de Vauban (1633–1707), extended the canal system through Belgium.
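The stepwise principle Bertola exploited can be sketched in a few lines of code. This is a modern illustration only, with hypothetical elevations, not a reconstruction of any historical canal:

```python
def lock_through(levels):
    """Simulate a boat ascending a staircase of lock chambers.

    levels: water elevations (in feet) of successive canal reaches,
    lowest first. At each lock, the chamber between the gates is
    filled to match the next reach before the upper gate opens.
    """
    elevation = levels[0]
    steps = []
    for lower, upper in zip(levels, levels[1:]):
        # Boat enters at the lower level; gates close behind it.
        # Sluices fill the chamber until it matches the upper reach.
        lift = upper - lower
        elevation += lift
        steps.append((lower, upper, lift))
    return elevation, steps

# A hypothetical seven-rise staircase of consecutive locks.
reaches = [0, 9, 18, 27, 36, 45, 54, 63]  # feet
top, steps = lock_through(reaches)
print(top)         # final elevation reached: 63
print(len(steps))  # number of chamber fillings: 7
```

Each chamber need only overcome the difference to the next reach, which is why a staircase of modest lifts can carry a boat over a large total change in elevation.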

Hydraulics and mathematicians. Attempts to systematize the artisanal knowledge of hydraulic engineering within a more learned framework were available by the seventeenth century in the work of Alvise Cornaro (1484–1566) and Simon Stevin (1548–1620). Although Stevin was a preeminent mathematician, his hydraulics did not significantly depart from contemporary engineering practices. The work of Galileo's pupil Benedetto Castelli (1577–1644), in response to papal plans to (re)divert the Reno into the Po flowing past Ferrara, extended the geometrical study of motion to waters. While Renaissance engineers like Leonardo had grappled with questions of water velocity, Castelli carved out new territory in his 1628 On the Measurement of Running Waters (Della misura dell'acque correnti). Castelli articulated the law of constant flow: a river discharges equal quantities of water in equal times through every cross-section, regardless of the cross-section's size. While this work had little direct effect on practice, the science of fluids was studied intensively over the next century. Fluid mechanics was developed experimentally by the French physicist Edme Mariotte (1620–1684), and the mathematician Daniel Bernoulli (1700–1782) formulated the relationship between the density of a fluid in a pipe, its speed, and its pressure. By the eighteenth century, figures such as Giovanni Poleni (1683–1761), professor of mathematics as well as hydraulic engineer and government administrator, were not rare.
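In modern notation (which neither Castelli nor Bernoulli used), the two results mentioned above can be summarized as follows: Castelli's law of constant flow is the continuity equation, and Bernoulli's relation ties the pressure, speed, and height of an incompressible fluid along a streamline:

```latex
% Castelli: equal discharge through every cross-section of a river
Q = A_1 v_1 = A_2 v_2 = \text{const.}

% Bernoulli: steady, incompressible flow along a streamline
p + \tfrac{1}{2}\rho v^{2} + \rho g h = \text{const.}
```

Here $Q$ is the discharge, $A$ the cross-sectional area, $v$ the flow speed, $p$ the pressure, $\rho$ the density, and $h$ the height.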

Industrial Machines

Early modern engineers constantly designed and redesigned the wheeled machines that lifted stones for building; pumps that drained mines and swamps and raised water for drinking or ornamental fountains; and a vast array of machines that milled wheat, crushed minerals, lifted hammers, beat cloth, and operated the bellows of the new iron blast furnaces. Until the employment of the steam engine in the eighteenth century, these machines were powered by a water wheel; by a human-turned treadmill, winch, capstan, or crank; or by an animal-turned device such as the horse whim. The cam, which translated rotational motion into reciprocating (often vertical) motion, was greatly developed by sixteenth-century engineers and was of huge industrial import. Printed machine books produced by Agostino Ramelli (1531–c. 1600), Jacques Besson (1540–1576), and Vittorio Zonca (b. c. 1580) demonstrate how combinations of toothed wheels, worm gears, crown gears, and lanterns might redirect motion in various ways. The treadmill that powered a sixteenth-century crane employed several men running on the inside of a huge wheel; thanks to gearing and other improvements, eighteenth-century cranes were smaller and could be turned externally with a crank.

With the mutually reinforcing developments of mining, metallurgy, and steam engines, the mechanical engineer had, literally, to retool. The new steam engines were first used in the drainage of mines; the new product of cast iron found one of its premier uses in the cylinders of the steam engine. While engineers had increasingly employed metal in eighteenth-century machines, its wide adoption in the final years of the eighteenth century not only added strength but also made precision industrial tooling possible. The circle around the steam-engine moguls James Watt (1736–1819) and Matthew Boulton (1728–1809) recruited watchmakers and other artisans skilled in machining gears. With the invention of the industrial lathe in 1716 by Christopher Polhem (1661–1751) of Sweden, its development by Jacques de Vaucanson (1709–1782) and others, and the 1776 cylinder-boring machine of the ironmaster John Wilkinson (1728–1808), it became possible to produce machines that produced machines.

Engineers, Science, and Professionalism

Throughout the early modern period, civil engineers were artisans of more and less learning, or mathematicians of more and less experience. The relationship between the practices of engineering and the new mathematical sciences of mechanics and hydraulics, however, was never unidirectional or static, nor was it easy to generalize. The engineer and machine book author Agostino Ramelli wrote an elaborate preface insisting on the necessity of mathematics as the foundation for machine design. On the other hand, practicing engineers often resisted the advice of mathematicians employed as consultants and sneered at theoreticians. In both cases, the relationship seems rhetorically constructed. Only in the eighteenth century did a more stable professional identity for engineers emerge, as technical education was formally organized and the social role of the technical expert more clearly defined. By that time, the sciences of rational mechanics and hydrology had developed within the framework of engineering problems.

John Smeaton (1724–1792) was the first Englishman to adopt the title "civil engineer." Although he was trained, as were many engineers, as a millwright, Smeaton performed systematic experimentation on the superior efficiency of overshot waterwheels, engaged in investigations regarding Leibnizian and Newtonian mechanics, and advocated a more rigorous technical education. The leaders in the establishment of the latter were the French.

In keeping with the rational systematization of absolutist, Enlightenment France, the Corps des Ponts et Chaussées was founded in 1719 to organize the network of roads and waterways throughout the country. Members of the corps tested the bending of various materials and invented machines for compression tests on stone and mortar; Henri de Pitot (1695–1771) invented the Pitot tube, by which the velocity of a current could be measured. The corps also founded a school. Cadets would have available to them the textbooks of Bernard Forest de Belidor (1697–1761), books reprinted so often that the copper plates wore out and had to be reengraved in the early nineteenth century. There was nothing new or cutting-edge in these handbooks, but they offered both traditional guidelines of practice and the possibility of applying static and dynamic theorems to practical problems. The French engineering organizations were the apotheosis and production line for engineers who could combine knowledge, machines, and the organization of human labor in order to fulfill corporate demands for huge undertakings.
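Pitot's instrument is still in use, and its principle is simple: the difference between the stagnation pressure at the tube's mouth and the static pressure of the stream fixes the flow speed. A minimal sketch in modern terms (SI units; the sample reading is invented for illustration):

```python
import math

def pitot_velocity(delta_p, density=1000.0):
    """Flow speed from a Pitot tube reading.

    delta_p: stagnation minus static pressure, in pascals.
    density: fluid density in kg/m^3 (default: water).
    Follows delta_p = (1/2) * rho * v**2, so v = sqrt(2*delta_p/rho).
    """
    return math.sqrt(2.0 * delta_p / density)

# A 500 Pa differential in water corresponds to a current of 1 m/s.
print(pitot_velocity(500.0))  # → 1.0
```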

Automobile

In 1908 Henry Ford began production of the Model T automobile. Based on his original Model A design first manufactured in 1903, the Model T took five years to develop. Its creation inaugurated what we know today as the mass production assembly line. This revolutionary idea was based on the concept of simply assembling interchangeable component parts. Prior to this time, coaches and buggies had been hand-built in small numbers by specialized craftspeople who rarely duplicated any particular unit. Ford's innovative design reduced the number of parts needed as well as the number of skilled fitters who had always formed the bulk of the assembly operation, giving Ford a tremendous advantage over his competition.

Ford's first venture into automobile assembly with the Model A involved setting up assembly stands on which the whole vehicle was built, usually by a single assembler who fit an entire section of the car together in one place. This person performed the same activity over and over at his stationary assembly stand. To provide for more efficiency, Ford had parts delivered as needed to each work station. In this way each assembly fitter took about 8.5 hours to complete his assembly task. By the time the Model T was being developed Ford had decided to use multiple assembly stands with assemblers moving from stand to stand, each performing a specific function. This process reduced the assembly time for each fitter from 8.5 hours to a mere 2.5 minutes by rendering each worker completely familiar with a specific task.

Ford soon recognized that walking from stand to stand wasted time and created jam-ups in the production process as faster workers overtook slower ones. In Detroit in 1913, he solved this problem by introducing the first moving assembly line, a conveyor that moved the vehicle past a stationary assembler. By eliminating the need for workers to move between stations, Ford cut the assembly task for each worker from 2.5 minutes to just under 2 minutes; the moving assembly conveyor could now pace the stationary worker. The first conveyor line consisted of metal strips to which the vehicle's wheels were attached. The metal strips were attached to a belt that rolled the length of the factory and then, beneath the floor, returned to the beginning area. This reduction in the amount of human effort required to assemble an automobile caught the attention of automobile assemblers throughout the world. Ford's mass production drove the automobile industry for nearly five decades and was eventually adopted by almost every other industrial manufacturer. Although technological advancements have enabled many improvements to modern day automobile assembly operations, the basic concept of stationary workers installing parts on a vehicle as it passes their work stations has not changed drastically over the years.
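The figures quoted above imply an enormous jump in throughput per worker. A quick back-of-the-envelope check, using only the task times given in the text:

```python
def tasks_per_shift(task_minutes, shift_hours=8):
    """How many assembly tasks one worker completes in a shift."""
    return (shift_hours * 60) / task_minutes

stand = tasks_per_shift(8.5 * 60)  # single stationary stand: 8.5 hours/task
stations = tasks_per_shift(2.5)    # multiple stands: 2.5 minutes/task
conveyor = tasks_per_shift(2.0)    # moving line: just under 2 minutes/task

print(round(stations / stand))  # → 204: roughly a 200-fold rise per worker
print(round(conveyor / stand))  # → 255
```

The eight-hour shift length is an assumption for the comparison; the ratios between the methods do not depend on it.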

Raw Materials

Although the bulk of an automobile is virgin steel, petroleum-based products (plastics and vinyls) have come to represent an increasingly large percentage of automotive components. The light-weight materials derived from petroleum have helped to lighten some models by as much as thirty percent. As the price of fossil fuels continues to rise, the preference for lighter, more fuel efficient vehicles will become more pronounced.

Design

Introducing a new model of automobile generally takes three to five years from inception to assembly. Ideas for new models are developed to respond to unmet public needs and preferences. Trying to predict what the public will want to drive in five years is no small feat, yet automobile companies have successfully designed automobiles that fit public tastes. With the help of computer-aided design equipment, designers develop basic concept drawings that help them visualize the proposed vehicle's appearance. Based on this simulation, they then construct clay models that can be studied by styling experts familiar with what the public is likely to accept. Aerodynamic engineers also review the models, studying air-flow parameters and conducting feasibility studies on crash tests. Only after all models have been reviewed and accepted are tool designers permitted to begin building the tools that will manufacture the component parts of the new model.

The Manufacturing Process

Components

* The automobile assembly plant represents only the final phase in the process of manufacturing an automobile, for it is here that the components supplied by more than 4,000 outside suppliers, including company-owned parts suppliers, are brought together for assembly, usually by truck or railroad. Those parts that will be used in the chassis are delivered to one area, while those that will comprise the body are unloaded at another.

Chassis

* The typical car or truck is constructed from the ground up (and out). The frame forms the base on which the body rests and from which all subsequent assembly components follow. The frame is placed on the assembly line and clamped to the conveyor to prevent shifting as it moves down the line. From here the automobile frame moves to component assembly areas where complete front and rear suspensions, gas tanks, rear axles and drive shafts, gear boxes, steering box components, wheel drums, and braking systems are sequentially installed.
* An off-line operation at this stage of production mates the vehicle's engine with its transmission. Workers use robotic arms to install these heavy components inside the engine compartment of the frame. After the engine and transmission are installed, a worker attaches the radiator, and another bolts it into place. Because of the nature of these heavy component parts, articulating robots perform all of the lift and carry operations while assemblers using pneumatic wrenches bolt component pieces in place. Careful ergonomic studies of every assembly task have provided assembly workers with the safest and most efficient tools available.

Body

* Generally, the floor pan is the largest body component to which a multitude of panels and braces will subsequently be either welded or bolted. As it moves down the assembly line, held in place by clamping fixtures, the shell of the vehicle is built. First, the left and right quarter panels are robotically disengaged from pre-staged shipping containers and placed onto the floor pan, where they are stabilized with positioning fixtures and welded.
* The front and rear door pillars, roof, and body side panels are assembled in the same fashion. The shell of the automobile assembled in this section of the process lends itself to the use of robots because articulating arms can easily introduce various component braces and panels to the floor pan and perform a high number of weld operations in a time frame and with a degree of accuracy no human workers could ever approach. Robots can pick and load 200-pound (90.8-kilogram) roof panels and place them precisely in the proper weld position with tolerance variations held to within 0.001 inch. Moreover, robots can also tolerate the smoke, weld flashes, and gases created during this phase of production.
* As the body moves from the isolated weld area of the assembly line, subsequent body components including fully assembled doors, deck lids, hood panel, fenders, trunk lid, and bumper reinforcements are installed. Although robots help workers place these components onto the body shell, the workers provide the proper fit for most of the bolt-on functional parts using pneumatically assisted tools.

Paint

* Prior to painting, the body must pass through a rigorous inspection process known as the "body in white" operation. The shell of the vehicle passes through a brightly lit white room where it is fully wiped down by visual inspectors using cloths soaked in hi-light oil. Under the lights, this oil allows inspectors to see any defects in the sheet metal body panels. Dings, dents, and any other defects are repaired right on the line by skilled body repairmen. After the shell has been fully inspected and repaired, the assembly conveyor carries it through a cleaning station where it is immersed and cleaned of all residual oil, dirt, and contaminants.
* As the shell exits the cleaning station it goes through a drying booth and then through an undercoat dip—an electrostatically charged bath of undercoat paint (called the E-coat) that covers every nook and cranny of the body shell, both inside and out, with primer. This coat acts as a substrate surface to which the top coat of colored paint adheres.
* After the E-coat bath, the shell is again dried in a booth as it proceeds on to the final paint operation. In most automobile assembly plants today, vehicle bodies are spray-painted by robots that have been programmed to apply the exact amounts of paint to just the right areas for just the right length of time. Considerable research and programming has gone into the dynamics of robotic painting in order to ensure the fine "wet" finishes we have come to expect. Our robotic painters have come a long way since Ford's first Model Ts, which were painted by hand with a brush.
* Once the shell has been fully covered with a base coat of color paint and a clear top coat, the conveyor transfers the bodies through baking ovens where the paint is cured at temperatures exceeding 275 degrees Fahrenheit (135 degrees Celsius). After the shell leaves the paint area it is ready for interior assembly.

Interior assembly

* The painted shell proceeds through the interior assembly area where workers assemble all of the instrumentation and wiring systems, dash panels, interior lights, seats, door and trim panels, headliners, radios, speakers, all glass except the automobile windshield, steering column and wheel, body weatherstrips, vinyl tops, brake and gas pedals, carpeting, and front and rear bumper fascias.
* Next, robots equipped with suction cups remove the windshield from a shipping container, apply a bead of urethane sealer to the perimeter of the glass, and then place it into the body windshield frame. Robots also pick seats and trim panels and transport them to the vehicle for the ease and efficiency of the assembly operator. After passing through this section the shell is given a water test to ensure the proper fit of door panels, glass, and weatherstripping. It is now ready to mate with the chassis.

Mate

* The chassis assembly conveyor and the body shell conveyor meet at this stage of production. As the chassis passes the body conveyor the shell is robotically lifted from its conveyor fixtures and placed onto the car frame. Assembly workers, some at ground level and some in work pits beneath the conveyor, bolt the car body to the frame. Once the mating takes place the automobile proceeds down the line to receive final trim components, battery, tires, anti-freeze, and gasoline.
* The vehicle can now be started. From here it is driven to a checkpoint off the line, where its engine is audited, its lights and horn checked, its tires balanced, and its charging system examined. Any defects discovered at this stage require that the car be taken to a central repair area, usually located near the end of the line. A crew of skilled trouble-shooters at this stage analyze and repair all problems. When the vehicle passes final audit it is given a price label and driven to a staging lot where it will await shipment to its destination.

Quality Control

All of the components that go into the automobile are produced at other sites. This means the thousands of component pieces that comprise the car must be manufactured, tested, packaged, and shipped to the assembly plants, often on the same day they will be used. This requires no small amount of planning. To accomplish it, most automobile manufacturers require outside parts vendors to subject their component parts to rigorous testing and inspection audits similar to those used by the assembly plants. In this way the assembly plants can anticipate that the products arriving at their receiving docks are Statistical Process Control (SPC) approved and free from defects.
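The SPC audits mentioned above typically amount to checking that a measured dimension stays inside statistically derived control limits, conventionally the process mean plus or minus three standard deviations. A minimal sketch of such a check (the torque readings are invented sample data, not real specifications):

```python
import statistics

def control_limits(samples, sigmas=3):
    """Lower and upper control limits for a measured dimension."""
    mean = statistics.mean(samples)
    sd = statistics.stdev(samples)
    return mean - sigmas * sd, mean + sigmas * sd

def in_control(value, limits):
    """True if a new measurement falls inside the control limits."""
    low, high = limits
    return low <= value <= high

# Hypothetical bolt-torque readings from a supplier's test rig (N*m):
baseline = [44.8, 45.1, 45.0, 44.9, 45.2, 45.0, 44.9, 45.1]
limits = control_limits(baseline)
print(in_control(45.1, limits))  # → True: within normal variation
print(in_control(47.0, limits))  # → False: flag the batch for inspection
```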

Once the component parts of the automobile begin to be assembled at the automotive factory, production control specialists can follow the progress of each embryonic automobile by means of its Vehicle Identification Number (VIN), assigned at the start of the production line. In many of the more advanced assembly plants a small radio frequency transponder is attached to the chassis and floor pan. This sending unit carries the VIN information and allows the vehicle's progress along the assembly process to be monitored. Knowing what operations the vehicle has been through, where it is going, and when it should arrive at the next assembly station gives production management personnel the ability to electronically control the manufacturing sequence. Throughout the assembly process quality audit stations keep track of vital information concerning the integrity of various functional components of the vehicle.
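The transponder scheme described here is, in essence, an event log keyed by VIN. A toy sketch of such a production-control record (the station names, class, and method names are invented for illustration):

```python
from collections import defaultdict

class ProductionTracker:
    """Track each vehicle's progress through assembly stations by VIN."""

    def __init__(self, route):
        self.route = route               # ordered list of station names
        self.events = defaultdict(list)  # VIN -> stations already reported

    def report(self, vin, station):
        """Record that a vehicle's transponder was read at a station."""
        self.events[vin].append(station)

    def next_station(self, vin):
        """Where the vehicle should arrive next, or None if complete."""
        done = len(self.events[vin])
        return self.route[done] if done < len(self.route) else None

route = ["chassis", "body", "paint", "interior", "mate", "audit"]
tracker = ProductionTracker(route)
tracker.report("1FT123", "chassis")
tracker.report("1FT123", "body")
print(tracker.next_station("1FT123"))  # → paint
```

Real plant systems add timing, routing, and audit data to each event, but the core idea is the same: the VIN ties every reading back to one vehicle.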

This idea comes from a change in quality control ideology over the years. Formerly, quality control was seen as a final inspection process that sought to discover defects only after the vehicle was built. In contrast, today quality is seen as a process built right into the design of the vehicle as well as the assembly process. In this way assembly operators can stop the conveyor if workers find a defect. Corrections can then be made, or supplies checked to determine whether an entire batch of components is bad. Vehicle recalls are costly and manufacturers do everything possible to ensure the integrity of their product before it is shipped to the customer. After the vehicle is assembled a validation process is conducted at the end of the assembly line to verify quality audits from the various inspection points throughout the assembly process. This final audit tests for properly fitting panels; dynamics; squeaks and rattles; functioning electrical components; and engine, chassis, and wheel alignment. In many assembly plants vehicles are periodically pulled from the audit line and given full functional tests. All efforts today are put forth to ensure that quality and reliability are built into the assembled product.

The Future

The development of the electric automobile will owe more to innovative solar and aeronautical engineering and advanced satellite and radar technology than to traditional automotive design and construction. The electric car has no engine, exhaust system, transmission, muffler, radiator, or spark plugs. It will require neither tune-ups nor—truly revolutionary—gasoline. Instead, its power will come from alternating current (AC) electric motors with a brushless design capable of spinning up to 20,000 revolutions per minute. Batteries to power these motors will come from high-performance cells capable of generating more than 100 kilowatts of power. And, unlike the lead-acid batteries of the past and present, future batteries will be environmentally safe and recyclable. Integral to the braking system of the vehicle will be a power inverter that, once the accelerator is let off, converts the motors' output back into direct current to recharge the battery pack, so that the motors act as generators even as the car is driven.
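The regenerative arrangement described above recovers part of the car's kinetic energy instead of dissipating it as brake heat. A rough illustration of the energy at stake (the vehicle mass and the recovery efficiency are assumed for the example, not taken from the text):

```python
def recoverable_energy(mass_kg, speed_mps, efficiency=0.6):
    """Energy (in joules) a regenerative system might return to the
    battery when braking from speed_mps to a stop.

    efficiency is an assumed round-trip fraction for illustration,
    not a quoted figure; real values vary with the drivetrain.
    """
    kinetic = 0.5 * mass_kg * speed_mps ** 2
    return efficiency * kinetic

# A hypothetical 1,200 kg car braking from 20 m/s (about 45 mph):
energy_j = recoverable_energy(1200, 20)
print(energy_j)  # → 144000.0 joules, i.e. 0.04 kWh per stop
```

Per stop the recovery is modest, but in urban stop-and-go driving it adds up to a meaningful extension of range.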

The growth of automobile use and the increasing resistance to road building have made our highway systems both congested and obsolete. But new electronic vehicle technologies that permit cars to navigate around congestion, and even drive themselves, may soon become practical. Turning the operation of our automobiles over to computers would let them gather information from the roadway about congestion and find the fastest route to their destination, making better use of limited highway space. The advent of the electric car will come because of a rare convergence of circumstance and ability: growing intolerance for pollution, combined with extraordinary technological advances, will change the global transportation paradigm that carries us into the twenty-first century.
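
The routing idea above, finding the fastest route given congestion data, is a classic shortest-path problem. The sketch below uses Dijkstra's algorithm over a tiny invented road network whose edge weights are travel times in minutes; a real system would update those weights continuously from roadside sensors.

```python
# Minimal sketch of congestion-aware routing: Dijkstra's shortest-path
# algorithm over travel times. The road network below is invented.
import heapq

def fastest_route(graph, start, goal):
    """Return (total minutes, route) for the quickest path from start to goal."""
    queue = [(0, start, [start])]   # (elapsed minutes, node, route so far)
    seen = set()
    while queue:
        minutes, node, route = heapq.heappop(queue)
        if node == goal:
            return minutes, route
        if node in seen:
            continue
        seen.add(node)
        for nxt, cost in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(queue, (minutes + cost, nxt, route + [nxt]))
    return float("inf"), []

# Travel times in minutes; the direct road A->B is congested (30 min),
# so the detour through C wins.
roads = {
    "A": {"B": 30, "C": 5},
    "C": {"B": 10},
    "B": {"D": 5},
}
print(fastest_route(roads, "A", "D"))   # -> (20, ['A', 'C', 'B', 'D'])
```

Re-running the search whenever edge weights change is exactly the "find the fastest route around congestion" behavior the paragraph envisions.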

Saturday, April 28, 2007

The History of American English

The history of American English can be divided into the colonial (1607-1776), the national (1776-1898), and the international (1898-present) periods. During nearly four hundred years of use in North America, the English language changed in small ways in pronunciation and grammar but extensively in vocabulary and in the attitude of its speakers.

English settlements along the Atlantic Coast during the seventeenth century provided the foundation for English as a permanent language in the New World. But the English of the American colonies was bound to become distinct from that of the motherland. When people do not talk with one another, they begin to talk differently. The Atlantic Ocean served as an effective barrier to oral communication between the colonists and those who stayed in England, ensuring that their speech would evolve in different directions.

On the one hand, changes in the English of England were slow to reach America, and some never made the crossing, so American English became in certain respects old-fashioned and eventually archaic, from the standpoint of the British. But on the other hand, the colonists were forced to talk about new physical features, flora, and fauna. For example, an Americanism early noted (and objected to) by British travelers was the use of bluff for the steep, high bank of a river. British rivers usually do not have such banks but are nearly level with the surrounding land, so when the colonists encountered the new fluvial topography, they had no name for it. Consequently, they pressed into service a word that means "steep" in naval jargon.

Americans also came cheek-to-jowl with Amerindians of several linguistic stocks, as well as French and Dutch speakers. They had to talk in new ways to communicate with their new neighbors. Moreover, the settlers had come from various districts and social groups of England, so there was a homogenizing effect: those in a given colony came to talk more like one another and less like any particular community in England. All these influences combined to make American English a distinct variety of the language.

Despite such changes, the norm of usage in the colonies remained that of the motherland until the American Revolution. Thereafter American English was no longer a colonial variety of the English of London but had entered its national period. Political independence was soon followed by cultural independence, of which a notable Founding Father was Noah Webster. As a schoolmaster, Webster recognized that the new nation needed a sense of linguistic identity. Accordingly he set out to provide dictionaries and textbooks for recording and teaching American English with American models. The need Webster sought to fill was twofold: to help Americans realize they should no longer look to England for a standard of usage and to foster a reasonable degree of uniformity in American English. To those ends, Webster's dictionary, reader, grammar, and blue-backed speller were major forces for institutionalizing what he called Federal English.

The language preserved its unity through the challenge of the Civil War (1861-1865); it assimilated immigrant languages and dialects, such as Spanish, German, and Irish, and replaced aboriginal Amerindian languages. The extension of American English and the preservation of its relative uniformity as the country expanded westward were aided by the railroads spanning the continent, the invention of the telegraph and telephone, and the explosion of journalism and popular education, all of which broadened communication.

By the end of the nineteenth century, the Manifest Destiny of American English had been achieved, along with that of the territorial expansion of the nation. Because the domestic frontier had been exhausted, the nation had to look abroad if it was to continue to expand, territorially and linguistically. The Spanish-American War in 1898, though lasting barely four months, was a turning point in the history of the language. Before that war, American English played no more than a walk-on role on the world stage; foreign influences usually had to come to it. Afterward, international activity sharply increased, and the prominence of American English around the globe became proportionately greater.

In the course of war or commerce American English spread to Hawaii, Puerto Rico, the Philippines, China, Panama and other countries in Latin America, the Virgin Islands, and nations throughout the world. To the consternation of some, American popular culture followed. Through music, films, recordings, television, computers, aeronautics, multinational companies, and the military, the second half of the twentieth century became the Age of America, for good or for ill. The linguistic consequences have been profound--both on the use of English internationally and on the language itself.

As American English has been institutionalized and used internationally, the nature of its relationship to British English has changed. From the national period until the present day, there have been two opposing attitudes: Americanizing and Briticizing. The Americanizing attitude recognizes American usage as independent of British, not inferior to it; at its most extreme it seeks to exaggerate the differences. The Briticizing attitude emphasizes the connections between American and British; at its most extreme it regards American as subordinate to British. Exemplifications of the Americanizing versus the Briticizing attitudes are Noah Webster's dictionary versus Joseph Worcester's; Mark Twain versus Henry James; H. L. Mencken's The American Language versus George Philip Krapp's The English Language in America; and Robert Frost versus T. S. Eliot.

If Americans have been divided on their view of the relationship between American and British English, few Britishers have had any doubt, and their confidence is widely shared by continental Europeans. To them English means British English, and American is a dialect, if not an aberration. The international prestige of British English has been maintained by both the geographical proximity of continental Europe to the British Isles and the residual influence of the British Empire around the world. It is also supported by England's reputation as a source of high culture. America, in contrast, is seen as a source of technology, commercialism, and pop culture.

Today, however, there are two main branches of English in the world, both including several national varieties: British English in the United Kingdom, Ireland, Australia, New Zealand, South Africa, and elsewhere; and American English in the United States, Canada, and elsewhere. Although British English is more widely distributed, American English is spoken by nearly three times as many persons. That numerical preponderance has as an inevitable consequence that American English is now the principal representative of the English language and the major determinant of its future.

American English, although remarkably uniform considering that over 230 million people speak it, is by no means monolithic. It varies by location, social level, ethnic group, and other factors. There are four primary regional dialects in the United States: Southern or Coastal Southern, South Midland or Southern Mountain, North Midland or Lower Northern, and Northern or Upper Northern. The boundaries between them, which are traceable to the earliest settlements, are clearest in the eastern part of the country, where settlement came first. They become less distinct and more overlapping in the West.

The dialect regions are distinguished mainly by differences of pronunciation and vocabulary and only to a small extent by grammar. Pronunciation differences include the sounding or nonsounding of r in words like mother and mirth; the quality of the "aw" vowel in words like lawn and caught; the use of an "s" or "z" sound in greasy; and many other such features. Vocabulary differences include choices among faucet, spigot (spicket), and tap; downtown and uptown for a main business district; soda, pop, Coke, tonic, and soft drink for a carbonated beverage; and many other variations, including more restricted ones, such as schlepp for "mosey" or "lug" in the New York area or arroyo in the Southwest.

Ethnic dialects have phonological and grammatical characteristics, but they are most easily recognized by vocabulary. Many ethnic communities have contributed to the general American word stock: Louisiana creole gumbo and lagniappe, New York Dutch cookie and boss, Yiddish schnook and chutzpa, Mexican-Spanish lasso and ranch, Irish shebang and blarney, African-American jazz and goober, and many others.

Black English, one of the most prominent ethnic dialects, is the subject of great controversy concerning its history and present use. There are two opinions about its origin. One holds that slaves came from many tribes in Africa; they had no common language and therefore learned English from whites. In this view, Black English is a historical evolution of forms of nonstandard English that can be traced to the British Isles. The other holds that sailors and natives along the African coast used an English-based pidgin (or reduced language used for communication among persons speaking no other common tongue). Slaves brought to America knew this pidgin or soon learned it, and on the plantations it developed into a creole (a full language of mixed origins). In this view, Black English is a remnant of an independent language that has been gradually assimilating to general English, so that it now appears to be only a dialect. There is evidence for both opinions; it is impossible to say which is nearer to the historical reality.

The other controversy over Black English concerns its use and social status today. Some view it as a "home dialect" whose speakers need also to learn standard English to live effectively in the dominant society. Others regard that position as linguistic imperialism. They believe that the dominant society should respect minority cultures, including dialects, instead of expecting minorities to do the adapting. In practical terms, those who hold the second view would use Black English as a medium of instruction in the schools and would provide pedagogical materials written in it. Among the strong opponents of the second view are older-generation, middle-class African-Americans, who believe this would limit opportunities for social and economic advancement among blacks.

Hispanic English, another major ethnic dialect in the United States, exists in several subvarieties, notably Puerto Rican English in New York City, Cuban English in south Florida, and Chicano English in the Southwest. The main issue about Hispanic English (and other immigrant languages with sizable numbers of speakers) is that of bilingualism. It is parallel to the issue of the use and status of Black English. Specifically, the question is, should those who do not speak English be provided with schools, public services, legal proceedings, and so on in their native languages or should they be expected to learn English quickly and be linguistically disadvantaged until they do? In a larger sense, the question is whether non-English ethnic cultures should be preserved and fostered in Anglophone America or assimilated as quickly and completely as possible.

Both Black English and bilingualism are highly emotional issues with political overtones. The English First movement, which arose in opposition to other languages' achieving official status within American life, seeks the constitutional establishment of English as the only official language. Although seen by its opponents as xenophobic, the movement is a contemporary version of Noah Webster's Federal English--that is, an effort to provide a distinctive standard language for all citizens of the United States.

The very existence of a standard language has been called into question, but several things are clear. First, there is a standard written form of the language, extensively described in dictionaries and grammar books and used for most printed matter and public discourse. Second, this written standard is by no means monolithic but has a good deal of variation in it. Third, most arguments about what is or is not "good" English are concerned not with differentiating standard from nonstandard use but with variations within the standard. Fourth, standard English is chiefly a matter of grammar, spelling, and word choice; being primarily a matter of written English, it has little to do with pronunciation. Fifth, there is no standard pronunciation in the United States comparable to the Received Standard (or BBC English) of the United Kingdom.

Some particular pronunciations have low prestige ("ax" for ask or "liberry" for library), but pronunciation has not been institutionalized--there is no standard American accent. Recent presidents have spoken the easily identifiable regional accents of Massachusetts, Texas, and Georgia. What is called "General American" is a myth. Persons who deal with those from other regions may modify their pronunciation to eliminate phonetic features that are most readily identified as local dialect, but the result is not a unified, consistent accent. Rather, it is a pronunciation that has been "smoothed out" by avoiding easily recognized regionalisms.

Today English is an international language, widely used as a second and foreign language as well as a primary one. Although British English is more prestigious, American English is increasingly used. But in fact, the differences between them, especially in their written forms, are not great. In the foreseeable future, the unity of English--internationally and nationally in the United States--seems assured.

Friday, April 06, 2007

Painting

Painting, taken literally, is the practice of applying color to a surface (support) such as paper, canvas, wood, glass, or lacquer. When used in an artistic sense, however, the term "painting" means the use of this activity, in combination with drawing, composition, and other aesthetic considerations, to manifest the expressive and conceptual intention of the practitioner.

Painting is used as a mode of representing, documenting, and expressing intents and subjects as varied as the practitioners of the craft. Paintings can be naturalistic and representational (as in a still life or landscape), photographic, or abstract; they can be loaded with narrative content, symbolism, or emotion, or be political in nature. A large portion of the history of painting is dominated by spiritual motifs and ideas; sites of this kind of painting range from mythological figures on pottery, to the biblical scenes on the interior walls and ceiling of the Sistine Chapel, to depictions of the human body itself as a spiritual subject.


Overview

What enables painting is the perception and representation of intensity: every point in space has a different intensity. The means of representing intensity in painting is therefore shade, the span between white and black with all the visible gray values in between. A line can be considered a reduced surface, and a difference in surface intensity (that is, in the intensity of reflected light) is marked by the thickness of a line. In practice, a painter can articulate shapes only through shades: if two meeting surfaces are very different in intensity, the line between them will be thick; if they are close in intensity, it will be pale. Color and texture are separate qualities; they cannot by themselves articulate form, but they can be mixed (for instance with gray) without restriction. Using color alone, of a single intensity, one can represent only symbolic shapes, not three-dimensional space or the construction of an object.

It is important to distinguish these basic painterly means from ideological means such as geometrical figures, points of view and their organization (perspective), and symbols. A "white wall," for instance, is an idea: for a painter, a white wall has a different intensity at each point, and the painter perceives all the various shades and reflections cast on it by nearby objects, whereas the ideal white wall is still white even in pitch darkness. In technical drawing, the thickness of a line is likewise ideal; that kind of drawing gives the ideal outlines of an object within a perceptual frame different from the one painters use.

Color and tone are to painting what pitch and rhythm are to music. Color is highly subjective, but it has observable psychological effects, although these can differ from one culture to the next: black is associated with mourning in the West, but elsewhere white may be. Some painters, theoreticians, writers, and scientists, including Goethe, Kandinsky, and Newton, have written their own color theories. Moreover, a color word is only a generalization over a range of color equivalents: "red," for example, can cover a wide range of variations on the pure red of the visible spectrum. There is no formalized register of different colors in the way that there is agreement on different notes in music, such as C or C-sharp, although the Pantone system is widely used in the commercial printing and graphic design industries for this purpose.

For a painter, color is not simply divided into basic and derived (complementary or mixed) colors such as red, blue, green, or brown. Painters deal practically with pigments, so "blue" for a painter can be any of the blues: phthalocyanine, Paris blue, indigo, cobalt, ultramarine, and so on. Psychological and symbolic meanings of color are not, strictly speaking, means of painting; colors only add to the potential, derived context of meanings, and because of this the perception of a painting is highly subjective. The analogy with music is clear: tones in music (like C) are analogous to shades in painting, and coloration in painting is like the specific timbre of a particular instrument; neither forms a melody by itself, but each can add a different context to it.

Rhythm is important in painting as well as in music. Rhythm is basically a pause incorporated into a sequence; this pause allows creative force to intervene and add new creations of form, melody, and coloration. The distribution of form, or of any kind of information, is of crucial importance in a given work of art, and it directly affects the work's aesthetic value, because aesthetic value depends on functionality: freedom of movement in perception is perceived as beauty. A free flow of energy, in art as in other forms of "techne," contributes directly to aesthetic value.

Modern artists have extended the practice of painting considerably to include, for example, collage, which began with Cubism and is not painting in the strict sense. Some modern painters incorporate different materials such as sand, cement, straw or wood for their texture. Examples of this are the works of Jean Dubuffet and Anselm Kiefer. (There is a growing community of artists who use computers to literally paint color onto a digital canvas using programs such as Photoshop, Painter, and many others. These images can be printed onto traditional canvas if required.)

In the mid-1820s, the first photograph was produced. From the mid to late 19th century, photographic processes improved and, as photography became more widespread, painting lost much of its historic purpose of providing an accurate record of the observable world. A series of art movements into the 20th century, through Impressionism, Post-Impressionism, Fauvism, Expressionism, Cubism, and Dadaism, steadily eroded the Renaissance view of the world. Eastern and African painting, however, continued a long history of stylization and did not undergo an equivalent transformation at the same time.

Modern and contemporary art has moved away from the historic value of craft and documentation in favour of concept. This has led some to say that painting, as a serious art form, is dead, although it has not deterred the majority of artists from continuing to practise it, either as the whole or as part of their work.


History of painting


The oldest known paintings are at the Grotte Chauvet in France, claimed by some historians to be about 32,000 years old. They are engraved and painted using red ochre and black pigment, and show horses, rhinoceroses, lions, buffalo, and mammoths, as well as humans, often hunting. There are examples of cave paintings all over the world: in France, Spain, Portugal, China, Australia, and elsewhere.

In Western cultures, oil painting and watercolor are the best-known media, with rich and complex traditions in style and subject matter. In the East, ink and color-ink painting historically predominated, with equally rich and complex traditions.

Aesthetics and theory of painting

Aesthetics aspires to be the "science of beauty," and it was an important issue for eighteenth- and nineteenth-century philosophers like Kant and Hegel. Classical philosophers like Plato and Aristotle also theorized about art, and about painting in particular. Plato disregarded painters (as well as sculptors) in his philosophical system, maintaining that a painting is a copy of reality, a shadow of the world of ideas that cannot depict the truth, and is nothing but a craft, similar to shoemaking or iron casting. Leonardo da Vinci, on the contrary, said that "la pittura è cosa mentale" (painting is an intellectual thing). Kant identified beauty with the sublime, not referring particularly to painting, but the concept was taken up by painters like Turner and Caspar David Friedrich. Hegel recognized the failure of attaining a universal concept of beauty, and in his aesthetic essay wrote that painting is one of the three "romantic" arts, along with poetry and music, for its symbolic, highly intellectual purpose. Painters like Kandinsky and Paul Klee also wrote theories of painting. Kandinsky, in his essay, maintains that painting has a spiritual value, and he attaches primary colors to essential feelings or concepts, something writers like Goethe had already attempted.

Iconography also has something to say about painting. The creator of this discipline, Erwin Panofsky, sought to analyse visual symbols in their cultural, religious, social, and philosophical depth, to attain a better comprehension of mankind's symbolic activity.

Beauty, however, the concept to which painting is essentially linked, cannot be defined as an objective matter, purpose, or idea. Much aesthetics and theory of art is connected with painting. In 1890, the Parisian painter Maurice Denis famously asserted: "Remember that a painting – before being a warhorse, a naked woman or some story or other – is essentially a flat surface covered with colors assembled in a certain order." Thus, many twentieth-century developments in painting, such as Cubism, were reflections on the business of painting itself rather than on the external world, nature, which had previously been its core subject.

Julian Bell, a painter himself, examines in his book What is Painting? the historical development of the notion that paintings can express feelings and ideas:

"Let us be brutal: expression is a joke. Your painting expresses – for you; but it does not communicate to me. You had something in mind, something you wanted to ‘bring out’; but looking at what you have done, I have no certainty that I know what it was...."

Tuesday, March 27, 2007

Materials science

Materials science is an interdisciplinary field involving the properties of matter and its applications to various areas of science and engineering. It includes elements of applied physics and chemistry, as well as chemical, mechanical, civil, and electrical engineering. With significant media attention to nanoscience and nanotechnology in recent years, materials science has been propelled to the forefront at many universities, sometimes controversially.

History

The choice material of a given era is often its defining point: the Stone Age, the Bronze Age, and the age of steel are examples. Materials science is one of the oldest forms of engineering and applied science. Modern materials science evolved directly from metallurgy, which itself evolved from mining. A major breakthrough in the understanding of materials occurred in the late 19th century, when Willard Gibbs demonstrated that the thermodynamic properties related to atomic structure in various phases are connected to the physical properties of a material. Important elements of modern materials science are a product of the space race: the understanding and engineering of the metallic alloys and other materials that went into space vehicles was one of the enablers of space exploration. Materials science has driven, and been driven by, the development of revolutionary technologies such as plastics, semiconductors, and biomaterials.

Before the 1960s (and in some cases decades after), many materials science departments were named metallurgy departments, from a 19th and early 20th century emphasis on metals. The field has since broadened to include every class of materials, including: ceramics, polymers, semiconductors, magnetic materials, medical implant materials and biological materials.

In 2006 the Minerals, Metals & Materials Society (TMS) voted on and published the Top 50 Moments in the History of Materials.

Fundamentals of Materials Science

In materials science, rather than haphazardly looking for and discovering materials and exploiting their properties, one instead aims to understand materials fundamentally so that new materials with the desired properties can be created.

The basis of all materials science involves relating the desired properties and relative performance of a material in a certain application to the structure of the atoms and phases in that material through characterization. The major determinants of the structure of a material and thus of its properties are its constituent chemical elements and the way in which it has been processed into its final form. These, taken together and related through the laws of thermodynamics, govern the material’s microstructure, and thus its properties.

An old adage in materials science says: "materials are like people; it is the defects that make them interesting". The manufacture of a perfect crystal of a material is physically impossible. Instead materials scientists manipulate the defects in crystalline materials such as precipitates, grain boundaries (Hall-Petch relationship), interstitial atoms, vacancies or substitutional atoms, creating a material with the desired properties.

Not all materials have a regular crystal structure. Polymers display varying degrees of crystallinity. Glasses, some ceramics, and many natural materials are amorphous, possessing no long-range order in their atomic arrangements. These materials are much harder to engineer than crystalline materials. Polymers are a mixed case, and their study commonly combines elements of chemical and statistical thermodynamics to give thermodynamic, rather than mechanical, descriptions of physical properties.

In addition to its industrial interest, materials science has gradually developed into a field that provides tests for condensed-matter and solid-state theories. New physics emerges because diverse new material properties need to be explained.

Materials in Industry

Radical materials advances can drive the creation of new products or even new industries, but stable industries also employ materials scientists to make incremental improvements and troubleshoot issues with currently used materials. Industrial applications of materials science include materials design, cost-benefit tradeoffs in industrial production of materials, processing techniques (casting, rolling, welding, ion implantation, crystal growth, thin-film deposition, sintering, glassblowing, etc.), and analytical techniques (characterization techniques such as electron microscopy, x-ray diffraction, calorimetry, nuclear microscopy (HEFIB), Rutherford backscattering, neutron diffraction, etc.).

Besides materials characterisation, the materials scientist or engineer also deals with the extraction of materials and their conversion into useful forms. Thus ingot casting, foundry techniques, blast-furnace extraction, and electrolytic extraction are all part of the required knowledge of a materials scientist or engineer. Often the presence, absence, or variation of minute quantities of secondary elements and compounds in a bulk material will greatly affect the final properties of the materials produced; for instance, steels are classified on the basis of tenths and hundredths of a weight percent of the carbon and other alloying elements they contain. Thus, the extraction and purification techniques employed in the extraction of iron in the blast furnace will have an impact on the quality of steel that may be produced.

The overlap between physics and materials science has led to the offshoot field of materials physics, which is concerned with the physical properties of materials. The approach is generally more macroscopic and applied than in condensed matter physics. See the important publications in materials physics for more details on this field of study.

Metallic alloys are an important and significant part of materials science. Of all the metallic alloys in use today, the alloys of iron (steel, stainless steel, cast iron, tool steel, alloy steels) make up the largest proportion, both by quantity and by commercial value. Iron alloyed with various weight percentages of carbon gives low-, medium-, and high-carbon steels. The hardness and tensile strength of a steel are directly related to the amount of carbon present, while increasing carbon levels lead to lower ductility and toughness. The addition of silicon and graphitization produce cast irons (although some cast irons are made with precisely no graphitization). The addition of chromium, nickel, and molybdenum to carbon steels (at more than 10%) gives us stainless steels.
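
The carbon-content classification above can be sketched as a simple lookup. The cutoffs below (roughly 0.3% and 0.6% carbon by weight) are common textbook figures, not a formal standard, so treat them as assumptions.

```python
# Illustrative classification of plain carbon steels by carbon weight percent.
# Thresholds are common textbook values, assumed here for the example.

def classify_steel(carbon_wt_pct):
    """Classify a plain carbon steel by its carbon content (weight percent)."""
    if carbon_wt_pct < 0.30:
        return "low-carbon steel"      # ductile, easily formed
    elif carbon_wt_pct < 0.60:
        return "medium-carbon steel"   # balance of strength and toughness
    else:
        return "high-carbon steel"     # hard and strong, but less ductile

for pct in (0.08, 0.45, 0.95):
    print(f"{pct:.2f} wt% C -> {classify_steel(pct)}")
```

The hundredth-of-a-percent resolution of the argument reflects the point made above: steels are distinguished by tenths and hundredths of a weight percent of carbon.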

Other significant metallic alloys are those of aluminium, titanium, copper, and magnesium. Copper alloys have been known since the Bronze Age, while the alloys of the other three metals have been developed relatively recently: because of the chemical reactivity of these metals, their extraction, accomplished electrolytically, was not possible until modern times. The alloys of aluminium, titanium, and magnesium are known and valued for their high strength-to-weight ratios and, in the case of magnesium, the ability to provide electromagnetic shielding. These materials find special application where high strength-to-weight ratios are desired, as in the aerospace industry.

Other than metals, polymers and ceramics are also an important part of materials science. Polymers are the raw materials (the resins) used to make what we commonly call plastics. Plastics are really the final product, after one or more polymers and various additives have been processed and shaped into a final form. Polymers that have long been in widespread use include polyethylene, polypropylene, polyvinyl chloride, polystyrene, nylons, polyesters, acrylics, polyurethanes, and polycarbonates. Plastics are generally classified as "commodity," "specialty," and "engineering" plastics.

PVC is a commodity plastic: it is widely used, low in cost, and produced in huge annual quantities. It lends itself to an incredible array of applications, from faux leather to electrical insulation, cabling, packaging and vessels, and its fabrication and processing are simple and well established. The versatility of PVC is due to the wide range of additives it accepts. In polymer science, "additives" refers to the chemicals and compounds added to the polymer base to modify its physical and material properties.

Polycarbonate would normally be considered an engineering plastic (other examples include PEEK and ABS). Engineering plastics are valued for their superior strength and other special material properties, and unlike commodity plastics they are usually not used for disposable applications.
Specialty plastics are materials with unique characteristics, such as ultra-high strength, electrical conductivity, electroluminescence, or high thermal stability.

It should be noted here that the dividing line between the various types of plastics is based not on the material itself but on its properties and applications. For instance, polyethylene (PE) is a cheap, slippery polymer commonly used to make disposable shopping bags and trash bags; that is a commodity plastic. But a variety of polyethylene called ultra-high-molecular-weight polyethylene (UHMWPE) is an engineering plastic, used extensively as glide rails for industrial equipment.

Another application of materials science in industry is the making of composite materials.
Composite materials are structured materials composed of at least two different macroscopic phases. One example is steel-reinforced concrete. Another is the plastic casing of a television set or cell phone: such casings are usually a composite made up of a thermoplastic matrix, such as acrylonitrile-butadiene-styrene (ABS), into which fillers such as calcium carbonate chalk, talc, glass fibres or carbon fibres have been added for strength, bulk, or electrostatic dissipation.
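A standard first estimate of how such a two-phase composite behaves, not mentioned above but common in introductory texts, is the rule of mixtures: the stiffness along the reinforcement direction is the volume-weighted average of the phases' stiffnesses, E_c = Vf·Ef + (1 − Vf)·Em. A minimal sketch, with hypothetical example numbers:

```python
def composite_modulus(vol_frac_fibre: float, e_fibre: float, e_matrix: float) -> float:
    """Rule-of-mixtures estimate of a composite's elastic modulus
    along the fibre direction (upper-bound, equal-strain assumption)."""
    if not 0.0 <= vol_frac_fibre <= 1.0:
        raise ValueError("volume fraction must be between 0 and 1")
    return vol_frac_fibre * e_fibre + (1.0 - vol_frac_fibre) * e_matrix

# Illustrative values: glass fibre (~70 GPa) in an ABS matrix (~2.3 GPa),
# 30% fibre by volume.
print(composite_modulus(0.30, 70.0, 2.3))  # 22.61 (GPa)
```

The equal-strain assumption makes this an upper bound; loading transverse to the fibres gives a much lower (inverse rule of mixtures) estimate.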

Sunday, May 14, 2006

Emigration

Emigration is the act and the phenomenon of leaving one's native country to settle abroad. It is the same movement as immigration, but seen from the perspective of the country of origin. Human movement before the establishment of state boundaries, or within one state, is termed migration. There are many reasons why people might choose to emigrate: some move for political or economic reasons, some have found a spouse while visiting another country and emigrate to be with them, and many older people living in rich nations with cold climates choose to move to warmer climates when they retire.
Many political or economic emigrants move together with their families toward new regions or new countries, where they hope to find peace or job opportunities not available to them in their original location. Throughout history, many emigrants have returned to their homelands, often after earning sufficient money in the other country. Those who move to countries with large cultural differences may always feel like guests in their destinations, preserving their original culture, traditions and language and sometimes transmitting them to their children. The conflict between the native and the newer culture can easily create social friction, generally leaving the "foreigners" in an uncomfortable position as they work to understand legal and social systems that are sometimes new and strange to them. Often, communities of emigrants grow up in the destination areas, gathering immigrants of common provenance and helping them to integrate.