The Evolution of the Data Center

Image: The US Army's "Giant Brain," ENIAC (Electronic Numerical Integrator and Computer), circa 1946

The history of the data center is often told as if it’s the history of computing, though they’re not the same thing. The earliest facilities actually ran more than 100 years ago, well before electronic computers, and provided centralized sites for managing punch card operations.

Today, of course, data centers mean computers but also so much more. With virtualization, orchestration, the cloud and cutting-edge software, data center computing is at the heart of virtually everything we do online, from our purchases and financial transactions to our travel arrangements and workplace interactions.

That’s a far cry from 1890, when the U.S. Census Bureau first used punch cards and mechanical tabulating machines invented by a New York-born statistician named Herman Hollerith. Hollerith founded the Tabulating Machine Company (which eventually merged with several other companies to become IBM), and his machines enabled the 1890 Census count to be completed in just one year, compared to the eight it took in 1880.

While other early computers were developed in the late 1930s and early 1940s, the first general-purpose electronic computing device to enter the data center arrived in 1946. ENIAC (for Electronic Numerical Integrator and Computer), billed as a “Giant Brain,” was built to perform calculations for the U.S. Army. Eventually housed at the Aberdeen Proving Ground in Maryland, ENIAC was a 30-ton “grotesque monster” by modern standards, a military account states. It took up 1,800 square feet of floor space and required six full-time operators to keep it running.

The computer’s 19,000 vacuum tubes, coupled with 1,500 relays and “hundreds of thousands of resistors, capacitors and inductors,” ate up almost 200 kilowatts of power and gave off a tremendous amount of heat. Military records recall: “Power-line fluctuations and power failures made continuous operation directly off transformer mains an impossibility. The substantial quantity of heat which had to be dissipated into the warm, humid Aberdeen atmosphere created a heat-removal problem of major proportions. Down times were long; error-free running periods were short.”

Over the next decade-and-a-half, computers — used mostly for government purposes — remained massive devices that required large rooms to house them. As technology continued to improve, though, transistors eventually replaced the vacuum tubes of older devices, making it possible to build smaller and more reliable computers. These devices, though, were still big enough to require substantial floor space. UNIVAC, for example, which came out in 1951, “was about the size of a one-car garage,” according to the History of Computing Project.

By 1960, American Airlines, working with IBM, moved two IBM 7090 mainframes (which were closer in size to a car than a garage) into a new data center in New York. That facility helped power the air carrier’s first-ever computer-based reservation and booking system, SABRE.

With the first commercial microprocessor (released by Intel in 1971), computers could be made increasingly compact, as well as more affordable. By the end of the ‘70s, an account by Rackspace recalls, “air-cooled computers moved into offices. Consequently, data centers died.”

And then came the personal computer. The advent of the IBM 5150 PC in 1981 started a revolution in computing trends as banks of microcomputers began moving back into the old office computer rooms. These new in-house data centers handled ever-more complex tasks for organizations, but they couldn’t manage the challenge that arrived in the late 1990s. The rise of the internet, especially the early dot-com boom, fueled rapid development in the commercial data centers we are all familiar with today. Even with the boom’s subsequent bust, the demand for data center services just kept skyrocketing as everything from news to education, voice communications to movies and videos went online.

Today, virtualization, containerization and the emergence of software-led infrastructure (SLI) are helping data centers to become ever-more efficient as they help to deliver business-critical services around the globe.

“(C)loud computing is ultimately changing the way we think about data centers,” notes an exhaustive look at the data center on Wikibon. “It’s changing the way data centers are structured and managed. It’s also creating new technologies that apply the same virtualization principles of the cloud to the other components of a data center in the form of SDS (software-defined storage) and SDN (software-defined networks).”

Wikibon concludes, “Data centers are moving towards a model of efficiency and better scalability.”
