October 28, 2022

5 Grid Computing Projects That Rocked the 90’s

As essential as the internet is today, there was once a time when it didn’t exist. Computers could be found only in research centers, and the only way to look up a person or business was the local phonebook. Grid computing may shift society in a similar way. Since the 1990’s, grid computing has advanced dramatically. Though early systems were designed to increase computing power for scientific research, commercial applications are likely to be forthcoming.

The term “grid” in grid computing is a reference to traditional electrical grids. In a grid computing system, computers, tablets, and phones distributed around the world form a network to perform computations in parallel. The result is faster and more powerful processing, with some systems rivaling top supercomputers. Here are 5 early projects that were key to the emerging development of grid computing.

ARPANET - The Foundation for the Internet

In 1966, an agency at the US Department of Defense laid the groundwork for the internet. The Advanced Research Projects Agency (ARPA, later renamed DARPA) was responsible for developing emerging military technologies. Under its direction, the Advanced Research Projects Agency Network (ARPANET) was created to enable access to remote computers.

The first successful exchange on the network happened in October of 1969, when UCLA student programmer Charley Kline typed the letters “lo” (an attempt at the command “login” that crashed the system after two characters) to a host computer at Stanford. UCLA and the Augmentation Research Center at Stanford Research Institute were just two of the four “nodes” on the system. The other two, UC Santa Barbara and the University of Utah, were also in the western US.


By March 1970, ARPANET had reached the eastern half of the US through a computer in Cambridge, Massachusetts. The device was hosted by Bolt Beranek and Newman (BBN, now Raytheon BBN), an official government partner that developed the first protocol for the network. The project hopped across the pond in 1973 to connect with the NORSAR research facility in Norway and University College London in England.

ARPANET, 1969-1977 (image: Wikipedia)

Early ARPANET Etiquette

In contrast to today’s internet, ARPANET users were required to keep interactions strictly professional. Non-governmental uses were prohibited, and one famous early infraction involved using the system to track down a lost electric razor. A 1982 handbook from MIT’s AI Lab states:

“It is considered illegal to use the ARPANET for anything which is not in direct support of Government business ... personal messages to other ARPANET subscribers (for example, to arrange a get-together or check and say a friendly hello) are generally not considered harmful ... Sending electronic mail over the ARPANET for commercial profit or political purposes is both anti-social and illegal. By sending such messages, you can offend many people, and it is possible to get MIT in serious trouble with the Government agencies which manage the ARPANET.”

ARPANET and Al Gore

ARPANET closed in 1990, but its influence lives on to this day. In 1991 it inspired then-Senator Al Gore to write the High Performance Computing Act of 1991. The bill spurred the creation of high-speed fiber optic computer networks and one of the first web browsers (Mosaic). In 1993, the New York Times reported, “One of the technologies Vice President Al Gore is pushing is the information superhighway, which will link everyone at home or office to everything else - movies and television shows, shopping services, electronic mail and huge collections of data.” So yes, Al Gore did invent the internet (in a way).

Information Wide Area Year (I-WAY) (1995)

The Information Wide Area Year (I-WAY) was the first modern grid computing project. Though innovative, it was not available to the general public. The I-WAY connected 17 supercomputer centers, a dozen advanced networking testbeds, 5 virtual-reality research sites, and more than 60 applications groups into one network. The project debuted to much fanfare at the ‘95 Supercomputing conference in San Diego. 

One of the project founders said his goal in forming I-WAY was to impress attendees at the conference. The effort was led by a community of volunteer scientists from roughly 30 different research institutes, as well as major telecommunications giants like AT&T, Sprint, and Pacific Bell. The three project founders were scientists Rick Stevens (director at Argonne National Laboratory), Tom DeFanti (director of the Electronic Visualization Laboratory at University of Illinois at Chicago), and Larry Smarr (director of National Center for Supercomputing Applications). 

In a 1995 news report, co-founder Stevens predicted the I-WAY would have a lasting impact: “Historically, what the scientific community has done [is what] the Internet will look like. Just as…Mosaic and other Internet tools influenced a broader range of users than just scientists, this project is not just about science.”

Networking supercomputers together allowed researchers to perform large-scale computations. Attendees at the ‘95 Supercomputing Conference were even able to view data in a 3D “virtual reality” simulator. Keeping with the trend of government and universities leading technical innovation, the sites connected to the I-WAY included:

  • DOE supercomputer centers
  • National Center for Supercomputing Applications
  • National Aeronautics and Space Administration Supercomputers
  • Lockheed Martin Missiles and Space
  • Advanced Research Projects Agency, Enterprise Integration Testbed
  • Maui High Performance Computing Center
  • U.S. Army Waterways Experiment Station
  • University of Illinois at Chicago, Electronic Visualization Laboratory 

GIMPS (1996)

George Woltman founded the Great Internet Mersenne Prime Search (GIMPS) in 1996 as a way to discover Mersenne primes far larger than hand computation could ever reach. Mersenne primes get their name from Marin Mersenne, a French monk who studied them in the 17th century. They are prime numbers of the form 2^n − 1, and early mathematicians showed that the exponents n = 2, 3, 5, 7, 13, 17, 19, and 31 all produce Mersenne primes.
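For readers curious how a project like GIMPS can test such enormous candidates, the classic tool is the Lucas-Lehmer test, which decides whether 2^p − 1 is prime using nothing but repeated squaring modulo the candidate. Here is a minimal Python sketch (the function name is ours, not actual GIMPS code):

```python
def is_mersenne_prime(p: int) -> bool:
    """Lucas-Lehmer test: for an odd prime p, 2**p - 1 is prime
    iff s == 0 after p-2 iterations of s -> s*s - 2 (mod 2**p - 1),
    starting from s = 4."""
    if p == 2:
        return True  # 2**2 - 1 = 3 is prime; the test below needs odd p
    m = 2 ** p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

# The exponents named in the article each yield a Mersenne prime:
for p in [3, 5, 7, 13, 17, 19, 31]:
    print(p, 2 ** p - 1, is_mersenne_prime(p))
```

Real GIMPS clients apply the same mathematics, but use FFT-based multiplication so that squaring numbers with tens of millions of digits stays tractable.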

GIMPS in the Early 2000’s

The first version of GIMPS was so rudimentary that it operated by email: volunteers requested work assignments and sent the results back to founder Woltman. In 2003, a news report from Orlando, Florida described then 26-year-old Michael Shafer as having discovered the largest known prime number in the world. Shafer, a grad student at Michigan State University, told reporters that he had been running the program on his Dell PC for 19 days. "The software runs great without affecting the computer. I get my work done and contribute to the project at the same time." His computer was reportedly one of 211,000 in multiple countries connected to GIMPS. In 2009, the Electronic Frontier Foundation awarded GIMPS a $100,000 Cooperative Computing Award for the discovery of the 47th known Mersenne prime (Shafer’s was the 40th).

Here's how they found the world’s biggest prime number: https://www.youtube.com/watch?v=jNXAMBvYe-Y


GIMPS is still in operation, and users can download the software at https://www.mersenne.org/download/. The client uses only 8 MB of memory and 10 MB of disk space per personal computer, and the project awards prizes of $3,000 for a new prime below 100 million decimal digits and $50,000 for a prime of 100 million digits or more.

As of 2022, GIMPS has found 17 Mersenne primes, setting a new world record for the largest known prime number 15 times. The current largest known prime, 2^82,589,933 − 1, is a Mersenne prime, and every new Mersenne prime found since 1997 has been discovered through the Great Internet Mersenne Prime Search.

The SETI@home Project (1999)

SETI@home was the first distributed computing project available to the general public. When it was released in May of 1999, it was only the third large-scale grid computing effort in history. Its popularity was no mystery: it was designed by the UC Berkeley Space Sciences Lab (SSL) to search for extraterrestrial life. SETI@home proved to be a massive hit, attracting over 1.8 million volunteers during its six-year classic run. Users could set the program to run as an alien-themed screensaver, with 3D graphics that were positively cutting edge for the early 2000’s. The project’s large-scale computing power did not disappoint: by 2001, SETI@home had performed 10^21 floating point operations, which the 2008 Guinness Book of World Records listed as the largest computation in history.

Before SETI@Home

Before SETI@home, the UC Berkeley SSL looked for aliens with a project called the Search for Extraterrestrial Radio Emissions from Nearby Developed Intelligent Populations (SERENDIP). SERENDIP had begun some 20 years earlier with a radio dish at Arecibo Observatory in Puerto Rico; SETI@home refined its analysis. In a 1999 interview with UC Berkeley Public Affairs, chief scientist Dan Werthimer told a reporter that computing capacity before SETI@home only allowed SSL to detect the most obvious radio signals. “We’re not asking people to call the press when they see a spike on the screen,” said Werthimer. “We get strong signals all the time and have to sift through them.”

In the same interview, SETI@home project director and computer scientist David Anderson explained, “I’m amazed at the extreme eagerness of people to use this… you can download enough data through the internet in five minutes to keep the computer analyzing for several days. A computer then sends back a summary of the interesting stuff it found and gets another chunk of data.”
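Anderson is describing a simple fetch-analyze-report loop running on each volunteer’s machine. The Python sketch below illustrates the shape of that loop; the function names and the spike-detection rule are illustrative stand-ins, not the real SETI@home client API:

```python
import random

def fetch_work_unit():
    """Stand-in for downloading a chunk of radio data from the
    project server (illustrative, not the real protocol)."""
    return [random.random() for _ in range(1024)]

def analyze(samples):
    """Stand-in analysis: flag 'interesting' spikes well above the mean."""
    mean = sum(samples) / len(samples)
    return [i for i, s in enumerate(samples) if s > mean * 1.9]

def report(spikes):
    """Stand-in for uploading a short summary of candidate signals."""
    print(f"reporting {len(spikes)} candidate spikes")

# The volunteer loop Anderson describes: fetch a chunk of data,
# crunch it locally for days, send back only a summary, repeat.
for _ in range(3):
    work = fetch_work_unit()
    report(analyze(work))
```

The key design choice this captures is asymmetry: a small download can keep a computer busy for days, and only a tiny summary travels back upstream, which is what made dial-up-era volunteer computing practical.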

Arecibo Telescope

SETI@Home Now

SETI@home classic ran from May 17, 1999 to December 15, 2005, when it was replaced by the Berkeley Open Infrastructure for Network Computing (BOINC) and SETI@home enhanced. In March of 2020, the project stopped distributing new work to volunteers and went into hibernation.

BOINC (Berkeley Open Infrastructure for Network Computing)

BOINC, one of the best known grid computing projects to date, was launched in 2002 from the UC Berkeley SSL. As an open-source middleware system, it can run on a variety of operating systems. According to Guinness World Records, BOINC is currently the largest computing grid in the world. With over 310,000 participants and 800,000 devices, it rivals the processing power of the world’s top supercomputers.


The BOINC platform is currently used by universities around the world to enhance computing power. Volunteers can download the application at https://boinc.berkeley.edu/download.php and choose the scientific project they want to work on. Projects that have run on the platform include:

  • Predictor@home 
  • Acoustics@home 
  • Asteroids@home 
  • Rosetta@home
  • Einstein@home
  • yoyo@home
  • Universe@home
  • LHC@home
  • World Community Grid
  • Moo!Wrapper

How BOINC Works

BOINC only runs when a machine is idle, and it requires mobile devices to be plugged in and charged to at least 90%. Volunteers can earn credit for their work through projects like Charity Engine and Gridcoin. Charity Engine was created by BOINC founder David Anderson and former journalist Mark McAndrew, who is said to have come up with the idea while writing a science fiction novel. Charity Engine users who donate computing power to BOINC are entered into a lottery for cash prizes.
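The scheduling policy described above boils down to a few checks. This is a hedged sketch of that decision logic in Python, with illustrative names and thresholds; BOINC’s real computing-preferences system is far more configurable:

```python
from dataclasses import dataclass

@dataclass
class DeviceState:
    idle_seconds: float     # time since last user input
    on_battery: bool        # for mobile: True means not plugged in
    battery_percent: float

# Illustrative thresholds mirroring the policy described in the text.
IDLE_THRESHOLD_S = 180.0
MIN_BATTERY_PERCENT = 90.0

def may_compute(state: DeviceState, is_mobile: bool) -> bool:
    """Return True if volunteer computation is allowed to run now."""
    if state.idle_seconds < IDLE_THRESHOLD_S:
        return False  # only run when the machine is idle
    if is_mobile:
        # mobile devices must be charging and near a full battery
        return (not state.on_battery) and state.battery_percent >= MIN_BATTERY_PERCENT
    return True
```

The point of a policy like this is that volunteers never notice the donated cycles: computation yields instantly to the user and never drains a phone.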

The Future of Grid Computing

Just as scientific researchers pioneered the internet through projects like ARPANET, efforts like I-WAY, GIMPS, SETI@home, and BOINC foreshadow new applications for grid computing. Developments in grid computing inspired the cloud computing revolution of the 2000’s that consumers enjoy today. As projects like SETI@home create networks out of personal computers, tablets, and smartphones, commercial applications are likely to follow. Thanks to grid computing, the potential for better, faster, and cheaper computations is nearly limitless. 


Lauren Glazer



