(From Encyclopedia Americana)

The U.S. Economy

As we have seen, the U.S. economy is the largest in the world, producing between one third and one fourth of total world output in the years since World War II. It was not always thus. In the middle of the 19th century, the GNP of the United States was probably a little less than that of France or the United Kingdom, while at the end of the 18th century, at the beginning of U.S. national history, the American economy was a very small one. Since then U.S. economic growth has been exceptionally rapid, and the current scale of the American economy reflects both the great speed of U.S. growth and the very extended period over which it has taken place. According to one estimate, America's GNP in 1973 was about 1,100 times as large in real dollars as it had been in 1790. Another way to put this is to say that the entire economy of the United States in 1790 was about half as large as that of Vermont in the early 1970s.
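The 1,100-fold estimate can be translated into an average annual growth rate. A minimal sketch (the 1790 and 1973 endpoints and the 1,100 ratio are the article's figures; the per-year rate is derived here for illustration):

```python
# Derive the average annual real growth rate implied by the estimate that
# real GNP in 1973 was about 1,100 times its 1790 level.
ratio = 1100            # 1973 real GNP relative to 1790 (article's estimate)
years = 1973 - 1790     # 183 years

annual_rate = ratio ** (1 / years) - 1
print(f"about {annual_rate:.1%} per year in real terms")
```

This works out to roughly 3.9% per year sustained over nearly two centuries, which is the sense in which U.S. growth was "exceptionally rapid."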

The rapidity of growth in real national product was due to dramatic increases in U.S. supplies of factors of production—land and other natural resources, labor, and capital—and to improvements in the productivity of these factors. The Louisiana Purchase, the acquisition of Texas, the various cessions from Mexico and Spain, the purchase of Alaska, and the acquisition of Hawaii increased the territory of the United States until it is, today, over four times as large as it was in 1790. Over the years this territory was explored, land was gradually brought under agricultural cultivation, mineral deposits were discovered and developed, and scientific knowledge found new and valuable uses for various minerals.

Thus, throughout most of the history of the United States, land and other natural resources have been abundant. As growth has proceeded, these resources have been steadily brought into production, the pace being particularly rapid in the 19th century.

The U.S. labor supply also increased dramatically, chiefly as a result of the growth of population, although the labor-force participation rate also increased. The U.S. birthrate, although gradually trending downward, was high, while the death rate was moderate. The rate of natural increase was unusually high through most of the 19th century. Furthermore, in the 1830s there began an extraordinary development in world history, which persisted down to World War I. European countries—in the midst of revolutionary social, economic, and political modernization and of pronounced population growth—began to experience a massive intercontinental emigration, mostly to the United States. Roughly one tenth of the average population in Europe, including almost one fourth of the European labor force, participated in this movement. The population and labor force of the United States thus grew with extraordinary speed down to World War I.

Finally, the U.S. capital stock also increased steadily and rapidly. While Europeans invested heavily in U.S. canals and railroads, European capital played a much smaller role in U.S. growth than did European immigrants. Most U.S. capital formation came from the investment by Americans of American savings. The rate of savings rose gradually and steadily through the 19th century until, by the 1890s, almost 30% of the GNP was typically being saved and invested.

Moreover, American land, labor, and capital also became persistently more productive. There were several reasons for this, but three were probably the most important. First, as time passed, markets developed and became more perfect, and the volume of information they provided increased, as did the speed with which information and goods could be moved. The allocation of resources throughout the economy probably improved, which made for greater efficiency. Second, the level of education and training of workers rose so that the quality and efficiency of the work force became steadily better. Finally, the level of scientific knowledge increased more and more rapidly, and there was a lessening in the lag between scientific discoveries and their subsequent applications in improved productive processes.

In summary, U.S. long-term economic growth has been due both to increases in the supplies of factors of production and to improvements in the quality and productivity of these factors. According to one set of estimates, the growth of factors of production—labor, capital, and land (natural resources)—accounted for almost three quarters of U.S. economic growth between 1840 and 1960, while productivity improvements were responsible for the rest.

The rate and nature of growth changed in the early 20th century, around the time of World War I. The end of large-scale immigration and the persistently declining birthrate meant that the rate of growth of population slowed down. As a result, the growth rate of the labor force, despite increasing labor force participation by women, also slowed. The savings and investment rate trended downward until it reached, in recent decades, levels about half those attained in the 1890s. No new, large masses of land were added to the U.S. stock of natural resources.

On the other hand, factors affecting productivity were much more favorable until sometime in the late 1960s or early 1970s. Educational levels of the work force improved, as did markets, and technical change went on apace. As a result, the apportionment of responsibility for economic growth also changed dramatically. Between 1840 and 1900, factors of production—land (natural resources), labor, and capital—accounted for over 80% of U.S. economic growth; productivity change, for less than 20%. On the other hand, between 1900 and 1960, productivity advance was responsible for almost 45% of the economic growth experienced in the United States. The nature of U.S. growth was clearly changing, becoming more and more dependent on the improvement of productivity.

Despite these favorable developments with respect to productivity, however, the overall rate of U.S. economic growth slowed down. For example, between 1840 and 1900 real net national product increased on average about 4% per year, a very high rate. But after 1900 this figure fell, reaching a level of about 3% per year between 1900 and 1960.
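A one-point difference in the annual rate matters a great deal when compounded over a 60-year span. A quick check using the article's figures:

```python
# Cumulative expansion implied by the article's average annual growth rates
# of real net national product over two 60-year periods.
for period, rate in [("1840-1900", 0.04), ("1900-1960", 0.03)]:
    factor = (1 + rate) ** 60
    print(f"{period}: output multiplied about {factor:.1f} times")
```

At 4% per year, output multiplies roughly tenfold in 60 years; at 3%, roughly sixfold.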

In the years following World War II the U.S. rate of growth increased, but improvements in Japan and western European countries were even greater. Thus the gaps between the size of the U.S. economy and the economies of these countries narrowed, at least in relative terms.

Business Cycles and Kuznets Cycles

Capitalist economies have been subject to various periodic undulations, and the U.S. economy has been no exception. The best-known of these fluctuations is the business cycle, which in the United States typically lasts three to five years and is associated with inventory changes. Thus a crisis is typically caused by excess inventories. Business executives cut back on orders so as to reduce inventories, and thus production is reduced and unemployment grows. Once inventories have been run down, firms again begin to order, production increases, and unemployment is reduced. The impact of the business cycle can be observed in U.S. records at least as far back as the beginning of the 19th century.

The American records show that the severity of U.S. business-cycle depressions has varied widely. Roughly every 20 years down to World War II, there was an exceptionally serious one. Thus in the 1840s there was a deep and long depression, following the crises of 1837 and 1839. In 1857 there was another sharp financial crisis, but the subsequent depression was cut short by the expansive impact of the Civil War. Or perhaps it would be best to say that it was put off by the war. At least it is true that another great depression occurred in the 1870s, beginning in 1873 and continuing to 1879, an unprecedentedly long period. Twenty years later, in the 1890s, there was another long and deep depression. The pattern was then broken, presumably by the impact of World War I, but in 1929 there began the most severe depression in U.S. history.

Some economists believe that the pattern of deep depression roughly every 20 years is an element in a 20-year economic cycle, called the Kuznets cycle (after the U.S. economist Simon Kuznets). Unlike the business cycle, the Kuznets cycle involves variations in investment in machinery and buildings, not in inventories. The cycle takes as long as it does because the planning and gestation period for fixed investment is very long. In the U.S. the cycle was also exacerbated by the pattern of immigration.

While there is as yet no complete theory of the Kuznets cycle, most accounts of it run something like this: At the pit of a great depression, such as the one in the 1840s, businessmen engage in little or no investment and may even run down past investment by failing to replace worn-out equipment. Eventually, however, firms begin again to order goods for inventory. Production then picks up and unemployment drops. As the economy recovers, bottlenecks begin to appear, and firms buy new equipment and perhaps also new plants to break the bottlenecks. Labor markets tighten up, wage rates rise, and workers are drawn into the industrial cities from the surrounding countryside, producing a demand for housing and a secondary building boom. Workers are also drawn in from overseas. Immigration picks up. The demand for housing increases further, with rising immigration, and the booming economy also stimulates demand for transportation services, which calls for investment in transportation systems.

The boom is now full-fledged. There are periodic inventory adjustments and business-cycle depressions, but they are short and mild. Eventually, however, the big investment boom comes to an end. The next inventory cycle brings a sharp depression and heavy unemployment. Migration to the cities from the countryside and from overseas is discouraged, and the economy moves into a long depressed period. There are periodic recoveries, but they are short, weak, and incomplete, until the imbalance in investment is eventually worked off, when a new long boom begins.

The evidence shows that there have been long undulations of this type in investment, the growth rate of national product, immigration, and even in marriage and birthrates. The Kuznets cycle, then, seems to have been a pervasive phenomenon, affecting the U.S. economy and population, at least up to the great depression of the 1930s. Since that time the strength of the cycle may have weakened because of the altered roles of immigration and government finance in the post-World War II U.S. economy. First, immigration has played a much smaller role in the United States than before World War I, so that one of the stimulants to the Kuznets cycle has been missing. Second, government expenditures are now very much larger, relative to national product, than they were before World War II. They communicate their own shocks to the economy, but they do not follow the 20-year pattern of the Kuznets cycle, and thus they help to counter those forces in the economy that tend to produce the cycle.

According to one authority, however, the great Baby Boom after World War II was a Kuznets-cycle phenomenon. Following World War II there was an investment boom, just like those that had occurred in the 19th century. The labor market became very tight, and particularly favorable for young people. The tightness of the labor market was not relieved, as it had been in the 19th century, by a flood of immigrants. Thus wage rates stayed high and the careers of young people developed quickly and successfully. Marriage and child-bearing were encouraged among them. Thus the Baby Boom had clear economic origins.

The Baby Boom, in turn, has had profound economic implications and may result in echo effects. Thus the 1970s and early 1980s experienced unusually high rates of youth unemployment and reduced marriage and birth rates. In some measure, these rates may reflect the coming to maturity of the very large Baby Boom generation, and they may lead to continued cyclical changes in marriage and fertility, as well as long economic cycles.

Economic Performance

One measure of the performance of an economy is the level of material well-being enjoyed by those who derive their incomes from the economy. A commonly used index of the average level of material well-being produced within a country is real national product per capita—that is, the national product in real dollars divided by the population.

In 1970 the United States had the highest real per capita product in the world. The nearest rivals were Canada, several of the industrial countries of northwest Europe, and a few oil-exporting nations, all of them with per capita product levels 10%-25% below that of the United States. The margin of difference, thus, was not great and had probably narrowed in the preceding two decades. This was not because U.S. growth had slowed down—it had not—but because the growth of the other countries had speeded up. On the other hand, the gap between the developed world, including the United States, Canada, Western Europe, and Japan, and the underdeveloped world remained very wide indeed. Per capita product levels of one third or less of the U.S. level were common in the underdeveloped world.

By 1974 the success of the Organization of Petroleum Exporting Countries (OPEC) had catapulted Kuwait to the top of the ranking in terms of real national product per capita. Even Libya drew near to the U.S. level, although the United States remained in second place. Since then, a number of European countries have continued to experience rates of growth higher than the U.S. rate and have therefore drawn closer to, and possibly even surpassed, the United States. Comparisons of per capita product levels that involve incomplete price adjustments, and therefore are not fully reliable, suggest that Switzerland, Luxembourg, Sweden, and Denmark are in the latter category, although the United States remains near the top of the world rankings.

The superior performance of the United States economy is not simply a recent phenomenon. The United States was one of the first countries in the world to begin the process of industrialization and general economic modernization. By the mid-19th century the U.S. real per capita product was only a little below the levels achieved by the richest countries in the world at that time—Britain and the Netherlands—and exceeded French per capita product. Indeed, as early as the years just before the Revolution, the American colonies constituted one of the richest countries in the world. Thus recent U.S. success has had a long history.

Since the middle of the 19th century the level of the U.S. real national product per capita has risen at an average rate of about 1.6% per year. This rate may seem small, but it compounds over the years into very large values. For example, in 100 years, it will achieve a magnitude about five times as large as its original value; in 150 years, ten times; and in 200 years, almost 24 times. Thus the volume of goods and services available for each member of the U.S. population today is many times as large as it was in the early years of the last century, when the process of modern growth got well under way in the United States. The current superior economic position of the United States reflects the fact that the nation was already rich in its early history, that it experienced economic modernization at an early date, and that the process of economic modernization raised U.S. per capita product levels at a rapid pace. The speed of U.S. per capita product growth was not unusually high, however, as compared with other modernizing countries. In this respect, the American experience has been about average.
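The compounding arithmetic behind these figures can be checked directly; a minimal sketch using the 1.6% rate cited:

```python
# Check the compounding claims for a 1.6% annual growth rate:
# about 5x after 100 years, 10x after 150, and almost 24x after 200.
rate = 0.016
for horizon in (100, 150, 200):
    multiple = (1 + rate) ** horizon
    print(f"{horizon} years: x{multiple:.1f}")
```

The exact multiples come out to roughly 4.9, 10.8, and 23.9, in line with the rounded figures in the text.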

The pace of U.S. growth has not been steady, of course. Changes in per capita product have shown the influence of business cycles and Kuznets cycles. The rate of improvement has also shown a long-term tendency to rise. Thus early in the 19th century, per capita product grew at a rate of perhaps 1% per year on average, a figure that increased to perhaps 1.4% or 1.5% per year later in the century and to 1.7%-1.8% in the 20th century. The influence of the process of growth on the average level of material well-being in the United States has thus been increasing with time.

The sources of the improvement in per capita real national product are the same as the sources of the rise in real national product, but the relative importance of these sources is not the same. Increasing productivity explained almost two thirds of the gain in per capita product in the years between 1840 and 1960 and over three quarters of the gain between 1900 and 1960. The improvements in the standard of American life thus have depended principally on productivity gains and on the factors underlying them. These factors include better markets, better workers, a vastly more capable science, and the technical skills and apparatus to put improved scientific knowledge to practical use.

The Functional Distribution of Income

The classical economists of the 19th century believed that capitalist economic growth eroded profits in the long run and tended to move the economy toward a stagnant condition in which all incomes would be divided between workers and landlords, while capitalist profit-takers would receive nothing. Karl Marx, on the other hand, foresaw harsh competition among capitalists, growing concentration of capital in the hands of a few, and the deterioration of the material condition of the worker. Neither of these sets of forecasts has been realized. The modern world has been dynamic, not stagnant, with the power of technical change constantly refreshing profit opportunities. Economic concentration has taken place, but workers are far better off than in Marx's time.

In recent decades the lion's share of U.S. national income has been paid out in the form of wages and wage supplements. Over 70% of national income took this form. The incomes received by unincorporated businesses and corporate businesses each accounted for 10% or 12% of the total, while the remainder, about 7%, consisted of rents and interest paid to individuals. Put in another way, owners of property appear to have received as payment for the use of their property less than one third of the national income, while laborers have received well over two thirds. ("Laborers" refers to all workers, including very well-paid professionals, business managers, sports figures, and entertainers.)

Data for the first two decades of the present century seem to tell a very different story. At that time only about 55% of the national income consisted of wages and supplements, the rest representing income flowing to owners of property. Within that broad group, unincorporated businesses received very much more, relatively, than they do today—almost 25% of national income—while corporate businesses received 7% to 10% and persons earning rents and interest, 13% to 15%. These data, taken at face value, suggest that there have been major structural changes during this century, with labor being the chief gainer from them and unincorporated business the chief loser. However, a little thought will show that the shifts in the structure of the economy may not have been so pronounced as the data at first suggest.

The decline in the relative importance of the income of unincorporated businesses probably does reflect, in part, a set of true structural changes. One stronghold of unincorporated business, the farm sector, has been a shrinking part of the national economy, and this fact no doubt has influenced the changing stream of income received by unincorporated businesses. Furthermore, in some parts of the nonagricultural sector of the economy, small independent firms have been replaced by branches of large corporations, such as in the retail grocery sector. But in some measure the decline in the fraction of national income received by unincorporated businesses must simply reflect the fact that the corporate form of organization has become cheaper to use and more popular. The advantages of limited liability, indefinite life, and the ability to sue and be sued no doubt have induced many small, independent firms to give up the partnership or proprietorship forms of organization and to incorporate instead.

The increase in the fraction of national income consisting of wages is probably more apparent than real. The data cited do not really distinguish exactly and clearly between labor income and property income. For example, the returns to unincorporated businesses, which would seem, at first blush, to consist of payments to property owners, and were so treated above, are actually a conglomeration of property and labor incomes. Shopkeepers earn income from their shops both because they have made an investment in them (property income) and because they wait on customers and manage their firms (labor income). If proper allowance were made for this fact, the share of income flowing to labor in the early part of the century would surely be found to be much closer to 70% than the 55% given above. Indeed, the best estimates presently available suggest that the fraction of the national income paid out for all labor services showed no important long-term trend between the middle of the 19th century and the early 20th century, and only the most modest upward trend thereafter. Thus property and labor have divided the national income in roughly the same proportions for the last 130 years or so. The very dramatic expansion in labor union power in the late 1930s and the 1940s thus seems not to have produced a major redistribution of income from property owners to laborers.

The Size Distribution of Income and Wealth

What degree of inequality in the distribution of income exists in the United States? What factors account for it? How does U.S. experience in these matters compare with that of other societies? Has inequality been increasing or decreasing over the course of U.S. history, or has it been unchanging?

Since World War II those families (including individuals living alone) that composed the richest fifth of the population received about 45% of total income, before taxes, and a few percent less, after taxes, while the poorest fifth received a little less than 4% before taxes and perhaps a little more than 4% after taxes. Thus the average income of families in the upper fifth was 10 to 12 times as high as the average income of families in the lower fifth. However, the true extent of inequality in the U.S. may be exaggerated somewhat by these figures, for various reasons, some of the more important of which follow:
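Because each fifth contains the same number of families, the 10-to-12-fold gap follows directly from the income shares. A small check using the article's pre-tax figures:

```python
# Ratio of average incomes between the richest and poorest fifths.
# With equal-sized fifths, the ratio of average incomes equals the
# ratio of income shares (shares are the article's pre-tax estimates).
top_share = 0.45      # richest fifth's share of total income
bottom_share = 0.04   # poorest fifth's share ("a little less than 4%")
print(round(top_share / bottom_share, 2))   # 11.25, within "10 to 12 times"
```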

First, families are not, after all, entirely independent. Young people starting out in life usually receive some help from their parents and even if not, have the assurance that, in the event of a catastrophe, they would be able to seek parental help. In the same way, elderly parents sometimes are assisted by mature adult children. Thus the income received by a family does not comprise all of its resources. Formerly, different generations frequently lived together, sharing their incomes. In the post-World War II period, rising incomes have permitted generations to live more often apart. The result of this tendency has been to give the appearance of greater inequality than would be exhibited if different generations lived together.

Second, family incomes are affected by transient phenomena. A stroke of fortune may produce an unusually high income for one family in a given year, while a temporary loss of a job may result in an unusually low income for a second family. Measures of inequality that pertain to a single year—and all existing measures are of this type—capture these unusual events and exaggerate the true degree of inequality.

Third, for most families income follows a life-cycle pattern. The income of the family is low, relative to its lifetime average, when the family is young. It rises as the family ages and perhaps peaks just before the retirement of the household head. It then falls, and the family perhaps begins to live partly from wealth, as well as from income. Part of the inequality observed in any given year, then, simply reflects the fact that different families are at different stages in the life cycle. If we were able to look not at income for a given year, but at the average lifetime earnings of families, we would no doubt find less inequality than the figures given above seem to show.

Nonetheless, even after all adjustments were made, inequality would remain. Family background, family wealth, education, talent, luck, health, race, the sex of the household head—all influence the income of the family and the place it will occupy in the income distribution. But granting the existence of inequality in the United States, how extreme has it been compared with inequality in other societies? The results of careful comparative work suggest that the U.S. income distribution is substantially more egalitarian than the distributions emerging in underdeveloped and developing societies. On the other hand, the U.S. distribution seems to fall within the range of results exhibited by the developed countries of western Europe.

In the years since the mid-1950s the U.S. income distribution has remained fairly stable, exhibiting no clear, strong tendency for inequality either to rise or to decline. It is possible, however, that the apparent stability has been maintained by changes in the nature of the family and the structure of population, which mask trends of a fairly general nature toward greater equality. It is also clear that for about the first 30 of the postwar years black-white family-income differentials narrowed quite markedly, dropping from about 50% to about 30% (that is, black families, on average, had about 50% less income than white families after World War II and about 30% less by the mid-1970s), a dramatic change promoting greater equality. That improvement has not persisted, however, and black families remain, on average, poorer than white families. On the other hand, part of the black-white income difference reflects the higher proportion of young people among blacks. If blacks and whites of the same age are compared, the income difference is considerably smaller.

For female-headed households postwar developments have been much less favorable. In the first 30 postwar years, female-headed households dropped from a situation in which they received incomes equal to about three quarters of the incomes of male-headed households, to a situation in which this proportion had fallen to about half. This experience seems to be related to the dramatic increase in the labor-force participation of women, particularly since about 1950.

Over the longer term there is a much clearer tendency toward greater equality than is exhibited in the postwar period. Thus in 1929 the richest fifth of families (including individuals living alone) received over 54% of total income and the poorest fifth, 3.5%. By the beginning of World War II these figures had shifted to less than 50% and over 4%, respectively, and by 1962, to 45.5% and over 4.5%, respectively. (The income concept is different from the one underlying the postwar figures previously given, so that these two sets of data are not directly comparable.) Furthermore, the period was one in which regional per capita income differences, income differences among workers in different sectors of the economy, and skill-level wage differences all narrowed, producing powerful forces operating in the direction of greater equality. Thus the testimony of narrowing inequality given by the income-distribution data is plausible.

Industrial Distribution of National Income

Three broad industrial divisions of economic activity may be distinguished: primary, secondary, and tertiary. The primary industries consist of agriculture, forestry, and fisheries—industries that work directly with renewable natural resources. Mining, manufacturing, and construction—industries engaged in the extraction of mineral resources and in the processing of the goods produced by mining and the primary sector—compose the secondary sector. Sometimes transportation and public utilities are also counted as part of the secondary sector, although here they will be treated as members of the tertiary sector, along with finance, commerce, services, and government.

One of the most dramatic features of the economic history of the United States has to do with the primary sector. Early in the history of the United States, about 1840, the gross income of the primary sector composed almost 45% of the gross income earned by all sectors. A substantial fraction of factors of production was concentrated in this sector, particularly in agriculture. For the following decades the data show a steady decline in the importance of this sector, a decline that has accelerated over time. Thus by 1900 the sector's share in gross income had fallen to less than 20%, while today it is about 3%.

This change has reflected complex developments. On the one hand, and most importantly, agricultural productivity has vastly improved. At the beginning of the 19th century, agriculture required almost 9 of every 10 workers in the United States in order to produce the food and textile fibers to feed and clothe Americans and to provide foreign earnings needed to pay for imports. Today just a small percentage of the total work force is required to perform the same tasks. While American and foreign demand for the products of U.S. agriculture has risen over the years, it has not gone up fast enough to provide remunerative employment for the same fraction of the work force as in 1800, 1840, or 1900. Thus over the decades the sons and daughters of farmers have steadily migrated from the land to the city, to join immigrants from overseas in the creation of a modern industrial economy.

The course of the development of the secondary sector—the industrial sector—has differed markedly from that of the primary sector. But it has not been quite the obverse of that development. The fraction of gross income earned by the industrial sector did rise, in the 19th century, as the share flowing to the primary sector declined. It rose from less than 20% about 1840 to almost 35% in 1900, chiefly due to the expansion of the manufacturing sector. The United States had begun to experience an industrial revolution early in the 19th century and had made considerable progress by the time of the Civil War. American manufacturing grew up and began to take the domestic market for manufactures away from foreign producers. By the end of the 19th century the United States had even begun to export some manufactures, whereas agriculture had previously accounted for nearly all U.S. exports.

In the 20th century the story began to change. While U.S. industry continued to expand and to change in structure, its growth no longer exceeded the growth of the overall economy. About 1900, industry earned about 35% of the aggregate gross income of the economy, a figure that has changed very little since then. Thus while the U.S. economy often is spoken of as an industrial economy, the income earned by the sector has never accounted for much more than one third of total U.S. income.

The tertiary sector, on the other hand, has shown steady growth in excess of the rate of growth of the overall economy. About 1840, well over one third of the aggregate U.S. income was earned by factors of production in the tertiary sector, a fraction that rose to almost half in 1900 and to almost two thirds in recent years. The U.S. economy could properly be called a services economy, and that title is apt not simply for the present but for early in this century as well.

While the tertiary sector has grown, it has also changed in structure. For example, since 1900 the finance industry has increased nearly as fast as the overall economy, as has the transportation industry. (Before 1900 the transportation industry grew much faster than the entire economy.) On the other hand, wholesale and retail trade, services of all kinds, and government have expanded much faster than the rest of the economy. About 1840, factors of production in these industries earned gross income equal to about one fifth of the GNP, most of it going to wholesale and retail trade. Currently, these three industries earn income equal to more than two fifths of GNP, and they divide it almost equally among the three of them.

These structural changes have accompanied the modernization of the economy and the concentration of population in and near urban places. They reflect enormous increases in productivity, which in turn have affected the pattern of consumer demands. Thus, improved agricultural productivity has provided cheap food, enabling consumers to spend more for nonagricultural industrial goods and for services. Rising per capita income also has provided a means for expanding the demand for nonagricultural goods. The spatial concentration of population and production has called for the expansion of those parts of the tertiary sector—trade, transportation, and finance—that have facilitated industrial production and distribution. Thus the changing income distribution among the primary, secondary, and tertiary sectors captures the central features of economic modernization.

The geographic distribution of economic activity has also changed as time has passed, shifting gradually to the West and, more recently, to the South. Population and economic activity 140 years ago were still concentrated along the East Coast. Almost 60% of the income of the country was earned in the Northeast—in New England and the Middle Atlantic states—while roughly 25% was accounted for by the South—which includes the South Atlantic and the East South Central states. The Midwest and the Southwest earned less than 20% of total U.S. income, while the Far West—which includes the Pacific Coast and Mountain regions—was not yet in the Union.

The acquisition of the Far West and the discovery of gold and silver there extended and hastened the westward movement, but the Civil War sharply retarded Southern economic growth. By 1900 only about 10% of total U.S. income was earned in the South and the share of the Northeast had fallen to a little over 40%, while the proportions earned in the Midwest (35%) and the Southwest and Far West (about 13%) were now relatively very large. In the years since, the movement to the Far West and the Southwest has continued, while the South has experienced vigorous growth. In recent years, the Northeast and Midwest have each accounted for well under one third of aggregate income, the South for about one fifth, and the Southwest, Mountain, and Pacific Coast regions for well over one quarter.

Not only has the geographic distribution of economic activity changed, but the relative levels of material well-being among regions have also shifted, and even more dramatically. For example, in 1880 the per capita income level of the poorest region in the country, the South Atlantic, was probably less than one quarter as high as the per capita income of the richest region at that time, the Pacific Coast. All the other Southern states were poor, still suffering from the effects of the Civil War and from the subsequent reorganization of the Southern economy. Even in the Midwest, per capita income was substantially lower than in the Far West and Northeast. In the century since 1880, the levels of per capita income in the various regions of the country have grown more and more similar. The pace of change has varied from one decade to the next, and there have been periods, such as the 1920s, when the income gaps among regions widened. Nonetheless, the long-term tendency clearly has been in the direction of narrowing differentials. Thus, in recent years per capita income in the poorest region (East South Central) has been no more than 25% below the national average, while in the richest region (Pacific Coast) it has been less than 15% above the national average.
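The convergence described above can be made concrete with a short sketch. The figures used are the approximate bounds quoted in the text, not precise data:

```python
# Rough illustration of regional income convergence.
# 1880: the poorest region (South Atlantic) earned less than 1/4
# of the per capita income of the richest (Pacific Coast).
ratio_1880 = 1.0 / 0.25            # richest / poorest: at least 4.0

# Recent years: the poorest region is no more than 25% below the
# national average, the richest less than 15% above it.
ratio_recent = 1.15 / 0.75         # richest / poorest: about 1.53

print(ratio_1880)                  # 4.0
print(round(ratio_recent, 2))      # 1.53
```

By this crude measure, the gap between richest and poorest regions shrank from a factor of at least four to roughly one and a half.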

Three kinds of developments have produced this homogenization. First, workers have tended to move from regions in which income and employment levels were low (such as the South in the 19th century) to regions where they were high (such as the Far West). These movements improved the regional distribution of labor and lessened wage-rate differentials among regions. Similar geographic redistribution has occurred in the case of capital. These are instances of the market at work.

Second, the process of economic modernization has spread geographically. In 1880 the richest region apart from the labor-scarce Far West was the Northeast, a region that was well off because it had experienced an industrial revolution. In the years since then (particularly since 1900) industrial activity spread from the Northeast to the South and the West, increasing the relative per capita income levels of these two broad regions. Regional differences in the extent of industrialization are much narrower today than they were a hundred years ago.

Third, the discovery of new resources and the evolution of new techniques have influenced regional per capita income levels. Both points can be illustrated by the experience of the Southwest. The invention of the automobile, the consequent development of the petroleum industry, and the discovery of rich deposits of oil in the Southwest clearly promoted the growth of that region. Furthermore, among the most vigorously growing U.S. manufacturing industries in the 20th century has been the chemicals industry, which has been based importantly on petroleum. Finally, the development of speedier and cheaper means of transportation and communications (including the computer) and the emergence of important industries (such as tertiary industries) that depend very little on natural resources have meant that the climatic advantages of the Southwest carry much greater weight today in the determination of the location of industries than they once did.

Two of the forces discussed above—the mobility of labor and capital and the diffusion of modernization—have operated systematically to narrow regional per capita income differentials. The third has probably also done so, but only fortuitously, not as the result of systematic forces necessarily favorable to poor regions. It simply happened that resource discoveries and climatic advantages operated, in some measure, to promote the relative advance of regions that, before the fact, were relatively poor. That need not have been the case.

Composition of Final Output

The commodities and services that make up the output of the U.S. economy may be divided into three broad groups: consumption goods and investment goods, which will be discussed in this section, and goods purchased by government, which will be discussed in the following section.

The broad changes in the composition of the output of the U.S. economy are easily summarized. As we have already seen, in the 19th century the fraction of U.S. output invested each year rose, while in the 20th century it fell; in the 20th century the fraction of total output taken by government also rose. These movements were quite pronounced. Thus the fraction of the GNP that was saved and invested increased from 15%-20% before the Civil War to about 30% in the 1890s, then declined to about 14% in recent years. Government took only a very small fraction of GNP in the 19th century, but in recent years has typically acquired about 20%. The fraction of GNP privately consumed, then, fell from about 80%-85% early in the 19th century, to perhaps 70% in the late 19th century, and to less than 65% in recent years.
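The share arithmetic here is simply that consumption is what remains of GNP after investment and government take their portions. A small sketch, using the approximate shares quoted in the text rather than exact data, makes the accounting explicit:

```python
# Illustrative check of the GNP share arithmetic.
# All figures are the rough shares quoted in the text, not exact data.

def consumption_share(invested: float, government: float) -> float:
    """Consumption share of GNP = what remains after investment and government."""
    return 1.0 - invested - government

# Early 19th century: roughly 15%-20% invested, negligible government share.
early = consumption_share(0.175, 0.0)       # roughly 0.80-0.85

# Late 19th century: about 30% invested, government still very small.
late_1800s = consumption_share(0.30, 0.0)   # roughly 0.70

# Recent years: about 14% invested, about 20% taken by government.
recent = consumption_share(0.14, 0.20)      # roughly two thirds

print(round(early, 3), round(late_1800s, 3), round(recent, 3))
```

The sketch reproduces the broad movement described in the text: the consumption share falls from over four fifths early in the 19th century to about two thirds in recent years.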


The decline in the fraction of output invested in the 20th century has represented a puzzle and, to some analysts, a worrisome puzzle, since they associate the drop in the investment share with the 20th century slowdown in the growth rate of real national product.

The development is puzzling for the following reasons. The investment rate of an economy depends upon the willingness of the people to save. Two developments of the 20th century should have increased the willingness to save and raised the fraction of total income saved.

First, federal tax treatment of dividends and capital gains has, over the last 40-odd years, provided an impulse to save out of corporate earnings. Second, it is well known that in any given year the fraction of income saved by families and individuals is directly associated with income. That is, other things being equal, the richer a person is, the larger the fraction of income that he or she can save.

We have already seen that the level of per capita income has risen dramatically over the last 90 years, and this holds for income after taxes as well as for income before taxes. If the proportion of income saved is associated with the size of income, which is apparently the case, then one would expect to find that with rising per capita incomes since the 1890s the fraction of income saved had gone up. Thus two powerful forces have been at work that should have raised the proportion of income saved and, thus, the proportion of output invested: rising per capita income and the increased tax advantages of saving corporate earnings. Why, then, has the proportion of income saved and the proportion of output invested actually fallen?

Several possibilities present themselves. First, most of U.S. savings appear to have been made by the rich. As we have seen, the rich receive a smaller fraction of total income today than they did 50 years ago and probably less than they received in the 1890s as well. Thus the declining savings rate may reflect, in some measure, the declining relative affluence of the rich.

Second, as incomes have risen, each income group probably adopted the consumption standards observed among higher income groups and thus saved no larger a fraction of income than it had before it experienced the income increase.

A third possibility is that insofar as those other than the rich have saved in the past, they saved chiefly to support retirement. The emergence of social security and its widespread application may, thus, have diminished the incentive to save.

Finally, it may be that people are actually saving today as much as or more than ever before, relative to their incomes, but they may be saving in forms that the above treatment of savings and investment neglects. Two come immediately to mind. In the 1890s a child typically might be expected to leave school and go to work in the early teens. Today most children complete high school and many go on to college. On the whole, education has paid off very well for those who have received it, so that we may think of expenditures on education as investment in human capital. Parents today invest in education by paying education bills and by supporting their children while they are at school. Children invest by sharing in these expenditures and also by putting off earnings from full-time work until after they have completed their education. If a proper accounting were made for these expenditures, the U.S. savings rate today would be shown to be substantially higher than the 14% mentioned earlier.

The typical American today also probably spends much more than his or her counterpart of the 1890s on consumer durable goods such as washing and drying machines, vehicles of all kinds, stoves, refrigerators, radios, television sets, and record and tape systems. All of these durables provide services as they are used, a form of income, and the purchase of them may, therefore, be thought of as a type of investment.

The decline in the U.S. savings rate, then, may be more apparent than real. Insofar as it is real, it may reflect changes in the structure of the economy (for example, the growing importance of education) and the partly unforeseen consequences of social policy (for example, social security).

The composition of investments and consumption has also changed over the years. Investments, traditionally defined (that is, exclusive of investments in human capital), take three forms: the accumulation of inventories, the construction of buildings and other improvements to land, and the acquisition of machinery and tools. But over the long run there has been just one major change in the structure of such investment, and, therefore, in the structure of the capital stock: machinery and equipment have increased in relative importance, whereas buildings and other improvements to land have declined. These developments reflect the changing nature of the U.S. economy and the role and character of technical change, much of which has been embodied in machinery, in the last 80 years or so.


The long-term trends in the composition of consumption goods have involved a decline in the relative importance of perishables (chiefly foods and fuels), increases in the relative importance of durables and services, and stability in the relative importance of semidurables (chiefly clothing). We have seen that income earned in agriculture—the principal source of perishables—has composed a declining fraction of total income, while precisely the reverse is true of the tertiary sector. Thus it is not surprising to find that the output produced by the first of these sectors composed a dwindling fraction of total output, whereas the output of the second increased relative to GNP. But the data on income earned in the various industrial sectors lead us to expect a more pronounced shift in the structure of final output than actually took place. Thus, between the middle of the 19th century and the present, income earned in agriculture as a fraction of GNP dropped by over nine tenths, reaching a level of only about 3% in recent years. On the other hand, the fraction of total U.S. final output consisting of consumer perishables fell by only about six tenths, amounting in recent years to nearly 20% of GNP. Why should this be so?

The most plausible explanation lies in the changing nature of perishables purchased by American consumers. In the mid-19th century most Americans obtained much of their food locally in raw form. It had not been transported far, and it had not been processed much. Most processing, such as bread baking and food preservation, took place in the home. Therefore the value of agricultural perishables purchased by consumers was close to the value of agricultural goods as they came off the farm. Today, however, most agricultural goods travel long distances from farm to consumer and pass through the hands of one or more intermediaries. Many of them also undergo extensive processing—including cooking, freezing, dehydration, and elaborate packaging—en route. Thus the modern consumer buying perishables pays for much more than the efforts of the farmer, while his or her counterpart 150 years ago did not. The difference reflects both the modern concentration of Americans in and near urban places—and far from sources of agricultural products—and the growing productivity of the economy, which makes it possible for consumers to avoid the effort involved in the home processing of food products.

Improved techniques, particularly in transportation and refrigeration, have greatly increased the variety of foods available and the nutritional quality of the American diet. In the middle of the 19th century, choices typically were very restricted, especially in the winter. Diet involved large amounts of bread and meats; relatively little in the way of vegetables, fruits, and dairy products; and relatively few choices of vegetables and fruits.

In the case of durables, too, consumers are able to buy many durables that either did not exist 150 years ago or, if they did, were of much lower quality than their modern counterparts (for example, stoves). The tertiary sector, also, offers more and better products today than it formerly did, as a consideration of the medical services available then and now would demonstrate.


The Role of Government

In recent years expenditures of all levels of government on factors of production (specifically, wage payments) have amounted to about one eighth of the GNP. Government expenditures on all goods (services and commodities, such as munitions) have been equal in value to over one fifth of the GNP, while total government expenditures, including transfer payments (such as social security and welfare expenditures), have been equal in value to about one third of the GNP.

Until the Great Depression of the 1930s, governmental tax collections and spending were dominated by state and local governments. These levels of government were largely responsible for performing the principal functions of government at that time: the provision of education, welfare services, and roads and highways. The federal government's most costly activity was the maintenance of the armed forces, but prior to World War II this did not demand large amounts of resources during peacetime.

Since the 1930s, however, circumstances have changed importantly. First, the federal government moved strongly into the area of social services by introducing the social security system during the 1930s, extending the coverage and scope of the system in the decades since then, and by moving strongly into the welfare area, especially in the 1960s. Second, the federal government has provided grants-in-aid to state and local governments, usually with the requirement that these governments carry out certain programs. Thus the financial relations among the three levels of government have become more intimate in recent decades than ever before. Third, in the post-World War II years the federal government launched a major highway building program, which for the first time placed it solidly in this sphere of operations. Finally, while military expenditures dropped off very quickly after World War II, they increased nearly as quickly with the Korean War and have remained high as the Cold War has persisted.

As a result of these changes, the federal government in recent years has typically accounted for over six tenths of total government expenditures in the United States. Transfer payments to individuals have been the most important of these federal expenditures, typically amounting to more than twice the expenditures on the armed forces and about three times the value of grants-in-aid to state and local governments, the third most important federal-expenditure category. Interest payments on the federal debt plus expenditures on nondefense goods and services together amount to a little less than the total of national-security expenditures. Transfers have represented a very much smaller part of state and local expenditures—something over one tenth. The lion's share of state and local resources has gone to government payrolls and to the educational system.

All three levels of government cover their expenses chiefly by means of taxes, with each level of government tending to depend chiefly on one or two types. Thus local governments have received most of their revenues from property taxes and from grants of one kind or another from the state and federal governments. State governments have depended mostly on sales and excise taxes, although the income tax has grown in importance in the post-World War II years.

While the federal government has borrowed heavily in recent years, its chief source of revenue has also been taxes, especially the income tax, which has brought in about two fifths of total federal revenues; the corporate profit tax, accounting for about one fifth; and social insurance taxes, yielding over a quarter. All three taxes are relatively recent additions to the federal tax system.
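Treating "over a quarter" as exactly one quarter for illustration, the three quoted fractions can be summed to show how much of federal revenue they account for. This is a rough check on approximate figures, not a precise accounting:

```python
from fractions import Fraction

# Approximate shares of federal revenue quoted in the text
# ("about two fifths", "about one fifth", "over a quarter" -- taken here as 1/4).
income_tax = Fraction(2, 5)
corporate_tax = Fraction(1, 5)
social_insurance = Fraction(1, 4)

covered = income_tax + corporate_tax + social_insurance
print(covered)          # 17/20 -- roughly 85% of federal revenue
print(1 - covered)      # 3/20 -- what remains for all other sources
```

The three taxes thus account for roughly 85% of federal revenue, leaving only about 15% to all other sources, including borrowing-related receipts and miscellaneous fees.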

Social security taxation was introduced in the 1930s, and as late as the 1950s it contributed a relatively small fraction of federal revenues. Its importance has increased dramatically in recent years and will almost certainly continue to increase, unless major changes are made in the benefits or funding arrangements for social security. Strains on the system will become particularly severe when the Baby Boom generation reaches retirement age early in the next century.

The income tax system was introduced by a constitutional amendment just before World War I, but except for that war was not a major source of revenue until the 1930s. Until then, the federal government had financed itself chiefly from tariffs. In the 1930s both individual and corporate income taxes became more important, but it was during World War II that the modern systems for managing these taxes were introduced, that tax rates were raised to very high levels, that huge amounts of revenue were raised, and that these taxes became the principal sources of federal revenues.

International Transactions

Measured against total economic activity, U.S. international transactions in commodities and finance generally have been relatively small. In the early history of the country the United States may have exported as much as 10% or 15% of the gross national product on average, but that proportion fell in the early 19th century to about 5% to 7%, as the U.S. economy modernized and diversified and as the scale of the U.S. domestic market expanded. It rose during World War I but fell back again thereafter, particularly during the Great Depression of the 1930s. Exports increased during and after World War II, but not much faster than the GNP, so that the proportion of GNP exported remained well below 10% until the early years of the 1980s, when it surpassed that figure.

A similar pattern has been followed in the case of international finance. Until the end of World War II the proportion of the U.S. domestic capital stock financed by foreign investors was very small, as was the proportion of the assets of U.S. investors that were held overseas. But in the postwar years, international financial relations have grown much more important and much more complex than ever before.

While the volume of U.S. trade relative to the GNP has been typically small, its role in U.S. economic growth and economic performance has not been unimportant. The existence of the international economy gave Americans the opportunity to specialize in those economic activities in which they excelled and to acquire from foreign suppliers those goods in which foreigners had a comparative advantage. Thus trade presented the opportunity for the U.S. economy to be more efficient than it would have been in the absence of trade. The opportunity was not fully exploited. Tariffs—particularly in the years between the end of the Civil War and World War II—and other restrictions kept trade flows below the maximum level they would otherwise have achieved. Nonetheless, the United States has participated in international trade in a substantial way.

From the beginning, the United States has held an important comparative advantage in agricultural goods. Following the War of 1812, American cotton entered world markets on a large scale, providing one of the bases for the British Industrial Revolution. In the two decades before the Civil War, cotton accounted for as much as two thirds of U.S. export earnings. Beginning in the 1850s, perhaps partly as a result of the interruption of normal grain supplies by the Crimean War, U.S. grains began to enter the European markets. In the decades after the Civil War, low-cost grain from the Western states and various technical and institutional innovations increased the flow of U.S. grains and meats overseas. Meanwhile the United States had been importing foreign consumer goods, food products from tropical countries, and European manufactures. Capital goods did not figure importantly in international trade. The American capital goods industries supplied the U.S. market.

In the 20th century, changes affected this pattern, not all of them of a permanent nature. American high-technology products—particularly from the capital-goods industries—began to enter world markets. The products of these industries have contributed an increasing fraction of U.S. exports and have been persistently important. On the other hand, the United States also began to depend more than formerly on imports of natural resources, petroleum being particularly important.

Temporary changes of a large scale were set off by World War II. The United States emerged from the war relatively unscathed and with enormous productive capacity, while the economies of Europe and Japan were a shambles. The United States controlled at least 60% of world industrial capacity at the end of the war and accounted for the lion's share of world exports of manufactures. For the first time, large amounts of U.S. consumer goods were exported. Dollars were scarce in world markets, and various special devices had to be worked out to permit the world financial system to operate in the face of such unbalanced trade.

This situation proved to be temporary. American aid helped to promote overseas recovery, and European and Japanese workers, entrepreneurs, and institutions proved equal to the challenge of long-term economic development. These economies entered an extended and unprecedented period of rapid growth. Overseas industrial capacity increased, and Europe and Japan began to reclaim the markets that they had held before the war. American consumer goods were edged out of the positions they had newly occupied and also began to face sharp foreign competition at home, in the U.S. market. Thus in recent years the United States has returned to its traditional position, exporting agricultural products and high-technology goods and importing consumer goods and natural resources. The increase in U.S. dependence on foreign oil, however, has been large enough to amount to a qualitative change in U.S. circumstances.

In the decades since World War II an enormous increase occurred in the volume of world trade, a phenomenon both flowing from the extraordinarily high rates of postwar economic growth and helping to promote this growth. The financing of world economic activity has also experienced marked, even revolutionary changes. Capital has moved easily from one economy to another, and the volume of international financial claims has increased to unprecedentedly high levels. The institution of the multinational corporation has arisen and become important. Many firms now operate in more than one country, shifting investments from place to place to take advantage of changing markets and production costs. American firms have invested in foreign facilities and have merged with foreign firms, while foreign firms have likewise moved into the United States. This has been one of the most striking postwar economic developments. Almost one sixth of the assets of nonfinancial U.S. firms were in foreign countries in 1980.

Most of this international investment takes place among the developed countries of the West, although some finance has also flowed from these countries to underdeveloped, natural-resource-producing countries. These developments helped to make the postwar economy, down to the late 1960s or early 1970s, extraordinarily dynamic. But they have also meant that financial flows through the foreign exchanges can change size and direction relatively quickly, putting pressure on the exchanges and creating exchange-rate instability.

Finally, these new financial developments have meant that economic policies instituted in one country, if the economy of that country is powerful and if the policy is pursued aggressively, can have important foreign impacts. For example, foreign critics have argued that high interest rates caused by the U.S. anti-inflation policy of the early 1980s brought unemployment and recession not only to the United States but to the economies of Europe as well.

Industrial Concentration

Modern economic growth in the United States has been accompanied by an increase in the size of the typical business firm, particularly in the industrial sector. Several kinds of developments have figured in this experience.

Many of the technical changes of the late 19th century required for their efficient exploitation large amounts of capital and large batch or continuous production runs. They thus encouraged the construction of much bigger plants than had previously been common. Production opportunities alone, however, do not guarantee that the appropriate investments will be made. There must be markets for the product, the management capacity to organize production and to sell the output, and finance for the investment.

In the latter part of the 19th century national markets for U.S. industrial output were being created and expanded. Following the Civil War, tariff rates were raised to protect the domestic market for U.S. producers. The railroad and telegraph networks were extended and articulated in detail, and rail rates declined faster than the price level. Capital markets, which had handled chiefly government securities before and during the Civil War, expanded to accommodate railroads after the war and then industrial firms toward the end of the century. Industrial managers, learning from the railroads, developed techniques for controlling large production units and for marketing their output.

These changes permitted the adoption of the new techniques, with their large-scale production plants. But they permitted more than that. The increased capacity of management to organize production, finance, and sales meant that many firms did not stop with control of one plant, but became multiplant firms. Thus while changes in production techniques and in markets gave the first impetus to growth in firm size, improved methods of management carried the process forward. Multiplant firms often were created by the merger of two or more firms. Giant enterprises thus emerged, which threatened a loss of competitive vigor in many industrial markets.

There have been three major waves of mergers in the United States: one at the turn of the century, the second in the 1920s, and the third in the 1960s. According to one estimate, about a third of the manufacturing sector at the turn of the century lay in industries in which the four leading firms controlled at least half of output. In these industries pure price competition could be assumed to be virtually dead.

Interestingly enough, the extent of this concentration seems not to have changed much since, despite the merger waves of the 1920s and 1960s. There appear to be three reasons for this. First, while mergers increase concentration, other forces are at work to diminish it. Thus there has been a certain amount of turnover among leading firms. Second, while the mergers at the turn of the century were chiefly carried out within a single industry or between two industries lying in sequence in the chain of production, recent mergers have involved firms from altogether different industries. These mergers typically seek reduced risks through diversification, rather than market control, and they do not make for increased concentration or reduced competition within any given industry. However, they do lead to concentration of economic power. For example, in 1980 the 200 largest manufacturing firms controlled about six tenths of all of the assets owned by U.S. manufacturing firms. In the mid-1950s this proportion was not much over one half. By this measure, concentration in manufacturing has increased substantially in less than three decades.

Outside of manufacturing, concentration levels are high in a few industries, notably in branches of retail trade. Otherwise they fall well below the levels attained in manufacturing.

The Structure of the Labor Force

The composition of the U.S. work force has changed with the structure of the U.S. economy and with economic and social modernization. Thus the fraction of the work force in agriculture has fallen while the fractions in industry and the tertiary sector have risen. Within sectors the skills required and the jobs done have also changed. In relative terms, there are fewer blue-collar workers and more white-collar workers. The latter are responsible for the control of production, sales, the shipment of goods, and accounting. Within the blue-collar ranks, relatively fewer workers are directly engaged in production, whereas more are responsible for the monitoring of machines.

Economic growth in the United States has demanded constant changes in the location and skills of workers, and the work force and the institutions responsible for training workers have responded effectively. Thus for many decades the skill structure of wage rates changed little, despite the pressing demands for skilled workers, and in recent decades the wage-rate gap between skilled and unskilled workers has actually narrowed. This is a clear indication that the supply of skilled workers has grown rapidly, relative to the demands for these workers.

The sex composition of the U.S. work force has also evolved. Males, who 100 years ago constituted the lion's share of the work force, today contribute a much smaller share of total labor, less than six tenths. Three factors appear to account for this change. First, educational opportunities have improved, so that a large fraction of male adolescents and young adults who 100 years ago would have been at work are today still in school. Second, retirement, which was uncommon in the 19th century, has become common in the 20th, and the typical age of retirement has also dropped in this century. Thus many adult males who would have been at work 100 years ago are retired today. This shift has apparently been due chiefly to the advent and elaboration of the Social Security system and to the proliferation of private retirement plans.

Education and retirement have thus reduced the fraction of adult males in the work force and have slowed down the growth of the adult male work force. On the other hand, the participation rate of females has risen. At the end of the 19th century probably not much more than one fifth of women over 16 years of age were in the work force. This figure rose to about one third in 1950 and to over one half in 1980. Thus while the participation rate of men was falling, that of women was rising, as was the proportion of the work force accounted for by women. While older women entered the work force in particularly large numbers after World War II and younger women in subsequent decades, virtually all classes of women today participate in the work force in larger proportions than was true in decades past. These shifts have been related to changes in the nature and stability of the family, they have figured in the evolution of income distribution, and they are clearly of great social and economic importance.

The fraction of the U.S. work force that is unionized is small by the standards of many other developed Western economies; roughly 25% of the nonagricultural work force belongs to labor unions. For comparison, unionized workers account for about 40% of the work force in Britain and about 80% in Sweden, although in France, Italy, and Japan the proportion organized falls below that of the United States.

American unions also differ from European unions in important respects. In the United States, trade unions have played a much larger role than they have in Europe, where the industrial union and the general union have great importance. Trade unions organize members of a given occupation, whereas industrial unions recruit from all workers in a given industry almost regardless of their trades, and general unions organize across both trade and industry lines.

Unions in the United States typically have focused on bread-and-butter issues, whereas European unions, particularly on the Continent, have concerned themselves with social and political issues and are very often closely allied with political parties.

The American labor movement prior to the 1930s ebbed with bad times and flowed with good ones, but rarely organized more than a very small fraction of the labor force. Some success was attained during World War I, but most of these gains were lost during the 1920s. With the economic collapse of the 1930s the labor movement, if it had followed historical precedent, would have shrunk even further. But in fact it did not. It recovered and experienced a decade and a half of solid growth. In 1930 only 12% of the nonagricultural labor force was organized. By 1945 this figure had grown to about 35%.

Why the labor movement was so successful during this period is not entirely clear, but the following developments appear to be relevant. In the 1920s management had successfully opposed independent unions by offering company unions and packages of fringe benefits—so-called welfare capitalism. The economic distress of the 1930s may have made workers less willing to repose faith in the ability of employers to continue to perform successfully along these lines. Further, the labor legislation of the 1930s—the Norris-La Guardia Act, the NIRA, and the Wagner Act—outlawed the yellow-dog contract (a contract by which an employee agreed not to join a union in exchange for a job), did away with the company union, enumerated legally prohibited unfair labor practices on the part of employers, and established a government body, the National Labor Relations Board, to act on complaints of unfair labor practices. This legislation represented a major change in the conditions under which unions organized workers, and it probably contributed in an important way to union success. The courts, which in the 1920s had weakened pro-union legislation by very narrow interpretations of the laws, began to hand down decisions in the late 1930s that were favorable to unions, as when, for example, the Supreme Court upheld the constitutionality of the Wagner Act.

The New Deal, while it was not chiefly responsible for those parts of the legislative program that favored unions, was at least not hostile to unions. In view of the record of previous administrations, this represented an improvement, in the eyes of union leaders. During World War II the administration brought pressure on both employers and unions to avoid interruptions of work, enabling the unions to expand membership rolls with limited employer opposition.

Finally, in the late 1930s the union movement made a sustained, powerful, and ultimately successful effort to organize the mass-production industries, which had successfully staved off unionization until then.

In the decades since World War II, the fraction of the U.S. work force that is unionized first declined and then held roughly stable. The workers who proved easiest to organize were male blue-collar workers in manufacturing, mining, construction, and transportation. By the end of the war, however, these workers were already heavily unionized. Further union gains would require recruiting among groups with whom unions had previously had less success: women and male white-collar and professional workers. Some important breakthroughs were made among these groups. For example, President John F. Kennedy's Executive Order 10988 permitted unions to organize federal workers, and they had considerable success in doing so. Teachers and other state and local government workers have also been successfully organized. Indeed, the greatest new victories by unions in the postwar years have been among public employees.

Despite these successes, through most of the postwar years unions have been unable even to keep pace with the growth of the labor force. Apart from the shifts in the makeup of the work force described above, two factors seem to have accounted for union failures. After several years of token opposition during the war, employers in the postwar years increased their resistance to union organizers. They were abetted by new legislation, notably the Taft-Hartley Act, which amended the Wagner Act to enumerate unfair labor practices on the part of unions and which permitted employers to engage in anti-union campaigning formerly denied to them. Thus the law, the new will to resist on the part of employers, and the changing structure of the economy undercut union power.

The Financial System

The U.S. banking system is unusual in that it consists of many banks, many of them quite small. Historically, branch banking has played a modest role in the U.S. economy, although one that has been increasing in relative importance in recent decades. In most other developed Western countries, a few large banks with branches all over the country dominate the financial system.

The peculiar character of the U.S. system derives ultimately from the fact that banks are chartered by each of the states, as well as by the federal government. Early in U.S. history, so-called free banking developed and spread among the states, a system in which banks were chartered under general incorporation laws and in which branching was typically discouraged. When the federal government began chartering banks during the Civil War, under the National Banking Act, most of the rules adopted were taken from the New York Free Banking Act.

American banks have been subject also to unusual amounts of regulation and oversight. Rules have been applied with respect to reserves, capital, loans, investments, interest paid, and interest received. In principle, banks are subject to examination by state authorities (if they are state banks) and by several federal authorities (if they are national banks or members of the Federal Reserve System or are insured by the Federal Deposit Insurance Corporation). The depositors of many of them are also insured against loss (within limits) by the Federal Deposit Insurance Corporation.

From time to time the federal government has created banks intended to perform fiscal functions for the government and also to exercise some discretionary control over the activities of the rest of the system. In the late 18th and early 19th centuries two such banks were chartered, each for a 20-year term. Each of these banks, the First and the Second Bank of the United States, became the subject of controversy, and each lost its federal charter when its 20-year period was up. The Second Bank of the United States was destroyed in a bitter struggle with President Andrew Jackson that came to be known as the Bank War.

For nearly 80 years the United States had no central bank, although on occasion the Treasury performed some central banking functions. Then in 1913 the Federal Reserve System was created, a system that remains in operation. Today it is directed by a board of governors in Washington, D.C. Twelve regional Federal Reserve Banks hold deposits for member commercial banks, make loans to them from time to time, and carry out the policy established by the Board of Governors.

Central banks typically influence the economy by affecting the terms on which commercial banks lend and the amounts they lend. Bank lending depends upon the demand for bank credit, the terms on which banks can lend, bank expectations about the future, and the volume of excess bank reserves in existence. All banks are obliged by law or custom and good business practice to hold reserves against deposits. In the United States the law establishes reserve requirements. Individual banks can lend only when they hold reserves in excess of their requirements.

The Federal Reserve (hereafter, "the Fed") can influence the volume of bank lending by influencing the volume of excess reserves. This it can do through its own lending policy to banks and through "open-market operations." Open-market operations consist of the purchase and sale of securities by the Fed in the open market. When the Fed buys securities, bank reserves are created, whereas when the Fed sells securities, bank reserves are destroyed. The Fed is constantly engaged in activities that influence the volume of bank reserves, in many instances simply offsetting the effects of other influences in the market and thus preventing sudden sharp, temporary changes in the flow of credit. When the Fed follows a policy of monetary ease, by allowing bank reserves to grow rapidly, other things being equal, bank lending will increase and the rate of interest will be low. On the other hand, a policy of monetary tightness will restrict the growth of bank credit and increase the rate of interest.
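The arithmetic behind this mechanism can be sketched in a few lines. The 10% reserve requirement and the $100 purchase below are illustrative assumptions, not figures from this article:

```python
# Stylized sketch of how an open-market purchase can expand bank lending.
# When the Fed buys securities, the seller's bank gains an equal amount of
# reserves. If banks must hold a fraction of deposits as reserves and may
# lend out the rest, relending round after round forms a geometric series
# that sums to 1 / reserve_ratio per dollar of new reserves.

def max_deposit_expansion(new_reserves: float, reserve_ratio: float) -> float:
    """Upper bound on total new deposits the banking system can support."""
    return new_reserves / reserve_ratio

# Hypothetical numbers: a 10% reserve requirement and a $100 Fed purchase.
expansion = max_deposit_expansion(100.0, 0.10)
print(expansion)  # up to $1,000 of new deposits in the limiting case
```

The result is an upper bound: in practice banks may hold excess reserves and borrowers may hold cash, so actual expansion falls short of the full multiplier.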

Other factors also bear on the volume of credit and the rate of interest, in particular the expectations of potential borrowers and lenders about the future, especially expectations about the rate of inflation. Other things being equal, the higher the expected rate of inflation, the higher the rate of interest.
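This relationship is often summarized by the Fisher relation. A minimal sketch follows; the 3% real rate and 5% expected inflation are hypothetical inputs chosen only to show the arithmetic:

```python
# Fisher relation: lenders demand compensation for expected inflation, so
# (1 + nominal) = (1 + real) * (1 + expected inflation). For small rates
# this is approximately nominal = real + expected inflation.

def nominal_rate(real_rate: float, expected_inflation: float) -> float:
    return (1 + real_rate) * (1 + expected_inflation) - 1

# Hypothetical numbers: a 3% real rate combined with 5% expected inflation.
print(round(nominal_rate(0.03, 0.05), 4))  # about 0.0815, near the 8% sum
```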

In the mid-19th century, banks were virtually the only financial intermediaries in the United States. Since then others of great importance have grown up, some of them in the post-World War II period. Insurance companies, savings and loan institutions, mutual savings banks, and credit unions are among the most important intermediaries that had appeared before World War II.

In recent decades, these institutions have become diversified. Thus, insurance companies have developed and sold various retirement plans, while savings and loan institutions and savings banks have taken on a number of the functions of commercial banks. Other intermediaries, notably mutual funds and money market funds, either have been created or have gained importance in the postwar years.

Financial markets today are more numerous and diverse than they were 100 years ago. A number of them, the most famous and largest of which is the New York Stock Exchange, handle trades in stocks and bonds. Access to these markets is usually obtained through a broker or a bank. There are also short-term money markets, in which debts of private businesses (commercial paper) and of the federal government (treasury bills) are bought and sold, and secondary markets for mortgages, in which large mortgage lenders participate.

Contemporary Issues

Although the U.S. economy has experienced problems throughout the country's history, even in prosperous years, the period from the mid-1970s through late 1998 presented some unusual difficulties. The country met with success in dealing with some of them, while others remained problematic. The main issues in the economy during this period were the economic growth rate, labor productivity, inflation, and unemployment, as well as growth in earnings and governmental policy toward the economy.

Economic Growth

Historically, the U.S. economy was a powerhouse of growth, averaging 3.8% annual growth from the early 19th century into the 1970s. This trend was responsible for the persistent, pronounced gains in material well-being to which Americans became accustomed and on which rested their expectation that each generation would have more than the last. In the 1980s, however, growth averaged only 2.7% per year; from 1989 to 1996, the average fell to 2%. In 1997, government projections for the next five years were pegged at 2.3%, reflecting an overall improvement in the economy in the late 1990s but still far below historical rates. At that time mainstream economists held the opinion that a 2.0% to 2.5% annual increase was healthy and sustainable. Other economists believed that legitimate adjustments to the statistics would have brought the growth rate for 1997-2002 to 3.5%, much closer to historical levels.
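The gap between these rates matters more than it may appear, because growth compounds. The sketch below compares the 3.8% historical average with the 2% of 1989-1996 over a 30-year generation; the 30-year horizon is an illustrative assumption:

```python
# Compounding makes small differences in annual growth rates large over a
# generation. 3.8% and 2.0% are the averages cited in the text; the
# 30-year horizon is an illustrative assumption.

def growth_factor(annual_rate: float, years: int) -> float:
    return (1 + annual_rate) ** years

years = 30
print(round(growth_factor(0.038, years), 2))  # about 3.06x output
print(round(growth_factor(0.020, years), 2))  # about 1.81x output
```

At the historical rate, output roughly triples in a generation; at the slower rate it falls well short of doubling, which is why the slowdown threatened the expectation that each generation would have more than the last.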

Labor Productivity

The two key factors determining a sustainable growth rate for the economy are growth in the labor force and growth in labor productivity. Growth beyond the "speed limit" established by the sum of these two factors is believed to result in inflation. Both factors were weak from the 1980s to the late 1990s, limiting economic growth for much of the period. Labor-force growth was flat in the 1980s and early 1990s, and since such growth is largely determined by demographics, this suggested it would not soon accelerate. Productivity growth also remained low; it improved from 1995 to 1998 over previous years but still lagged historical rates. In the view of most economists, the low rates of growth in both the workforce and productivity justified modest growth projections for the future. Others disagreed, arguing that the workforce was more elastic in size than assumed and that the use of technology in the workplace would raise the productivity rate. These economists consequently believed there was room for more growth than the government projected.
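The "speed limit" itself is simple addition of the two growth rates. A minimal sketch with hypothetical inputs (the 1% and 1.2% figures are illustrative, not estimates from the article):

```python
# Noninflationary "speed limit" for output growth: roughly the sum of
# labor-force growth and labor-productivity growth. Inputs are hypothetical.

def speed_limit(labor_force_growth: float, productivity_growth: float) -> float:
    return labor_force_growth + productivity_growth

print(round(speed_limit(0.010, 0.012), 3))  # 1% + 1.2% -> about 2.2%
```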

Inflation

Inflation was a very serious problem throughout much of the 1970s, and its control remained a key policy issue through the 1990s. In the 1970s inflation was triggered by factors such as the cost of the Vietnam War, Lyndon Johnson's social programs, and the tripling of oil prices. During this period inflation reached 8.8% in 1979, and a new term, "stagflation," was coined for the unusual combination of recession and inflation. So recalcitrant was the stagflation of the 1970s that government efforts to curb it had little effect. The main tool used in affecting the business cycle, and therefore inflation, was the fiscal policy of the U.S. government. Taxes could be lowered and government spending increased to stimulate the economy, or taxes could be raised and government spending reduced to slow it down. This kind of policy is often referred to as Keynesianism, for its chief exponent, the economist John Maynard Keynes.

In the years after World War II, however, some of the practical shortcomings of Keynesianism had become apparent. It proved nearly impossible to adjust expenditures quickly enough to meet anticyclical fiscal-policy requirements: the interval between the decision and the actual change in expenditures was too long for such policies to be effective. After all, the average postwar downturn lasted less than a year. A decision halfway into such a downturn to increase federal expenditures often resulted in the spending arriving not at the trough of the recession but during the recovery. Tax changes could be effected more quickly, and in a number of instances in the postwar years, tax increases or decreases were well timed and apparently effective. But there were also some instances of badly timed changes.

In 1979 President Jimmy Carter appointed Paul Volcker as chairman of the Federal Reserve Board. Inflation responded to Volcker's policy of reducing the money supply and drastically raising interest rates, but the policy also produced the worst recession (1981-1983) since the Great Depression. In contrast to Keynesian economics, Volcker's approach, often called monetarism, holds that controlling the supply of money in the economy, largely through raising and lowering interest rates, is the best way to affect business cycles. Monetarism, whose chief exponent is the economist Milton Friedman, also suggests that using fiscal policy to control the economy is often more damaging than helpful. Whatever the merits of Keynesianism versus monetarism, after Volcker's policies had taken effect, inflation was no longer a serious issue in the U.S. economy, and from the mid-1990s it was held at about 2.5% per year, a level considered acceptable by most economists.

Unemployment

Unemployment was another difficult issue in the 1970s, since the economy slowed down in response to the inflation-reduction policies of presidents Gerald Ford and Jimmy Carter. The rate of unemployment soared from 4.9% in 1970 to 9.2% in 1974. By 1980 it had come down to 7.1%, but in response to the recession of 1981-1983, it peaked at 9.6% in 1983. Unemployment began to improve only after the recession ended and a business recovery began in 1984, and it finished the decade at 5.3%.

Unemployment took another leap in the early 1990s. President Ronald Reagan had cut taxes to stimulate business investment, with the expectation that the new wealth created would yield even more in tax revenues. The tax cuts and heavy military spending during this time indeed served to prompt the business recovery of the 1980s, but tax revenues did not increase as expected. As a result, the budget deficit skyrocketed during the Reagan and Bush years, and public demand for credit crowded out private borrowers. The resulting recession of 1991-1992 pushed unemployment back up to 7.5% in 1992. This, along with a tax increase that was politically difficult for Bush, contributed heavily to his 1992 defeat by Bill Clinton, who made the state of the country's economy a major campaign issue. After the recovery from the 1991-1992 recession, unemployment, like inflation, remained low. The Federal Reserve Board, after Volcker's retirement in 1987, was headed by another monetarist, Alan Greenspan, who continued the policy of adjusting interest rates to affect the economy. Fiscal year 1998 ended with the unemployment rate at 4.5%, a 25-year low.

A stable environment of steady business expansion, low inflation, and low unemployment characterized most of the decade of the 1990s in the United States. International trade boomed, and the "global economy" became a reality for many companies. At the same time there were growing concerns among the workforce, especially about job security. Furthermore, underlying economic weaknesses in Japan and in many developing areas, especially in Asia but also in Latin America and Russia, became evident in 1997, sending shock waves through world financial markets and raising concern among some economists and political leaders about a global recession.

U.S. Stock Markets

Before the crisis in some national economies began in 1997, the value of U.S. stocks increased unabated throughout the decade and reached a record peak in July 1998. This unprecedented increase in value created the opportunity for enormous wealth for stockholders, among them millions of ordinary citizens participating in the stock markets through their retirement savings plans.

Despite concerns about the international economic crisis, U.S. stock prices continued to rise throughout most of 1998, but they were much more volatile than in past years because of the worsening economic conditions abroad. These concerns finally caused a downturn in the markets after the July peak and raised fears that, despite the underlying strength of the U.S. economy, international troubles could prompt a downturn in the U.S. economy and, indeed, a global recession. The Federal Reserve Board lowered its key interest rate by 0.25% in late September, with the intention of stimulating the economy. This first reduction did little to reassure the markets, and stock prices immediately fell about 3%, prompting the Federal Reserve to enact further cuts in October.

Corporations and Employees

In the 1990s the rising trend in the stock market and increased competition induced many companies to focus keenly on improving profitability. The drive to achieve economies of scale prompted a wave of mergers in several sectors, for example, health care and telecommunications. Companies restructured their operations and moved manufacturing operations out of the United States to cut their expenses. Severe cutbacks in the workforce were a frequent result of these cost-saving measures. Throughout much of the 1990s many employees experienced job insecurity. In 1991, a time of high unemployment, 25% of workers at large firms said they were insecure about their jobs; in 1996, with low unemployment and many jobs available, 46% were insecure.

In addition to job insecurity, workers also experienced very slow growth in wages over this decade of great business success. One reason for this lack of growth may have been the declining influence of labor unions. Another was the tendency of workers to accept smaller wage increases in return for greater job security or increased employer-subsidized benefits such as health care and retirement plans. In 1998 there was finally a significant real increase in wages (2.7% after adjustment for inflation), but it was not clear whether this marked the beginning of a reversal of the long-term trend.
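The 2.7% figure is a real, inflation-adjusted gain. The conversion from a nominal wage increase can be sketched as follows; the 5.3% nominal figure is a hypothetical input chosen so that, at roughly the 2.5% inflation rate cited earlier, the real gain comes out near 2.7%:

```python
# Converting a nominal wage increase into a real, inflation-adjusted gain:
# real = (1 + nominal) / (1 + inflation) - 1.
# The 5.3% nominal increase is hypothetical; 2.5% matches the mid-1990s
# inflation rate cited in the text.

def real_growth(nominal: float, inflation: float) -> float:
    return (1 + nominal) / (1 + inflation) - 1

print(round(real_growth(0.053, 0.025), 3))  # about 0.027, i.e. roughly 2.7%
```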

Government Spending

As noted above, prior to the Reagan era, the federal government generally showed a small deficit at the end of each fiscal year. This era of relatively balanced budgets was ended by the tax cuts of the 1980s, which were coupled with heavy spending on the military. Thereafter the federal budget was in deficit until 1998, when the fiscal year closed with a $70 billion surplus. The question remaining was what to do with the money, with Clinton insisting it be reserved to ensure the solidity of the Social Security system. Republicans in Congress pushed for a tax cut instead.

During this period some economists maintained that a balanced federal budget was not such an important goal. They feared that focusing on a balanced budget (and the cuts in spending required to achieve it) would endanger future economic well-being by limiting investment in education, job training, and basic research and development. In the 1990s, however, this point of view went largely unheeded by the public, which seemed to want both a balanced budget and lower taxes.

Robert E. Gallman, University of North Carolina at Chapel Hill

