Folding@home

From Wikipedia, the free encyclopedia

This is an old revision of this page, as edited by Jesse V. (talk | contribs) at 01:32, 4 August 2011 (Function: better link). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

Folding@home
Original author(s): Vijay Pande
Developer(s): Stanford University / Pande Group
Initial release: October 1, 2000
Stable release:
Windows: 6.23 (uniprocessor), 6.32 (GPU)
Mac OS X: 6.29.3 (PPC uniprocessor), 6.29.3 (x86 SMP)
Linux: 6.02 (uniprocessor), 6.34 (x86-64 SMP)
PlayStation 3: 1.3.1[1] / April 12, 2010
Preview release: 7.1.24 (all platforms) / April 9, 2011
Platform: Cross-platform
Available in: English
Type: Distributed computing
License: Proprietary[1]
Website: folding.stanford.edu

Folding@home ("Folding at Home", FAH, F@h) is a distributed computing (DC) project designed to perform computationally intensive simulations of protein folding and other molecular dynamics (MD), and to improve on the methods available to do so. It was launched on October 1, 2000, and is currently managed by the Pande Group, within Stanford University's chemistry department, under the supervision of Professor Vijay Pande.

In 2007 Guinness[2] recognized Folding@home as the most powerful distributed computing cluster in the world. Folding@home is one of the world's largest distributed computing projects.[3] The goal of the project is "to understand protein folding, misfolding, and related diseases."[4]

Accurate simulations of protein folding and misfolding enable the scientific community to better understand the development of many diseases, including sickle-cell disease (drepanocytosis), Alzheimer's disease, Parkinson's disease, Bovine spongiform encephalopathy, cancer, Huntington's disease, cystic fibrosis, osteogenesis imperfecta, alpha 1-antitrypsin deficiency, and other aggregation-related diseases.[5] More fundamentally, understanding the process of protein folding — how biological molecules assemble themselves into a functional state — is one of the outstanding problems of molecular biology. So far, the Folding@home project has successfully simulated folding in the 1.5 millisecond range[6] — which is a simulation thousands of times longer than it was previously thought possible to model.

The Pande Group's goal is to refine and improve the MD and Folding@home DC methods to the point where they become essential tools for MD research,[7] and to achieve that goal it collaborates with various scientific institutions.[8] As of November 23, 2010, seventy-eight scientific research papers have been published using the project's work.[9] A University of Illinois at Urbana-Champaign report dated October 22, 2002 states that Folding@home distributed simulations of protein folding are demonstrably accurate.[10]

Function

When running, Folding@home takes advantage of unused CPU cycles on a computer system, as shown by this computer's 99% CPU usage.

Folding@home does not rely on powerful supercomputers for its data processing; instead, the primary contributors to the project are many hundreds of thousands of personal computer users who have installed a client program. The client runs in the background, utilizing otherwise unused CPU (or in some cases GPU, see below) power. An older, now-retired CPU version of the client could run as a screen saver, folding only while the user was away. Most modern personal computers rarely use the CPU to its full capacity, and the Folding@home client takes advantage of this unused processing power.
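The idea of consuming only otherwise-idle cycles can be sketched (as a hypothetical illustration, not the client's actual implementation) by running the work at the lowest scheduling priority, so interactive programs always get the CPU first:

```python
import os

def run_in_background(work, niceness=19):
    """Run `work` at the lowest scheduling priority (Unix only);
    the kernel then schedules it mainly when the CPU would
    otherwise be idle."""
    if hasattr(os, "nice"):
        # Raise our niceness; unprivileged processes may always
        # lower their own priority this way.
        os.nice(niceness)
    return work()

def fake_work_unit():
    # Stand-in for a molecular-dynamics step.
    total = 0
    for i in range(100_000):
        total += i * i
    return total

result = run_in_background(fake_work_unit)
print(result)
```

The real client achieves the same effect natively through the operating system's idle-priority scheduling class.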

The Folding@home client periodically connects to a server to retrieve "work units", which are packets of data upon which to perform calculations. Each completed work unit is then sent back to the server. As data integrity is a major concern for all distributed computing projects, all work units are validated through the use of a 2048-bit digital signature.
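As an illustration of how such signature validation works in principle, here is a toy RSA signing/verification round trip; the key parameters are tiny stand-ins (the real work units use 2048-bit keys), and none of this reflects the client's actual code:

```python
import hashlib

# Toy RSA parameters (p = 61, q = 53). Purely illustrative: real
# Folding@home work units are signed with 2048-bit keys.
n = 61 * 53   # public modulus
e = 17        # public exponent
d = 2753      # private exponent (held only by the work server)

def digest(data: bytes) -> int:
    # Reduce the SHA-256 digest modulo n so it fits this toy key size.
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % n

def sign(data: bytes) -> int:
    # Performed server-side before the work unit is distributed.
    return pow(digest(data), d, n)

def verify(data: bytes, sig: int) -> bool:
    # Performed by the client on download (and by the server on upload).
    return pow(sig, e, n) == digest(data)

work_unit = b"coordinates and parameters for one trajectory"
sig = sign(work_unit)
print(verify(work_unit, sig))  # True
```

A tampered work unit would (with overwhelming probability at real key sizes) change the digest and fail verification.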

Contributors to Folding@home may register a user name to keep track of their contributions. Each user may run the client on one or more CPUs; for example, a user with two computers could run the client on both. Users may also contribute under one or more team names; many users may join together to form a team. Contributors are assigned a score indicating the number and difficulty of completed work units. Rankings and other statistics are posted to the Folding@home website at https://folding.stanford.edu/English/Stats.

Software

The Folding@home client consists of three separate components.

  • The client software acts as a download and file manager for work units and scientific cores, controls the cores, and is the software with which the user interacts. Separating the client from the core enables the scientific methods to be updated automatically (or new methods to be added) without a client update.
  • The Work Unit is the actual data that the client is being asked to process.
  • The Core performs the calculations on the work unit. Folding@home's cores are based on modified versions of seven molecular simulation programs for calculation: TINKER, GROMACS, AMBER, CPMD, SHARPEN, ProtoMol and Desmond.[11][12] Where possible, optimizations are used to speed the process of calculation. There are many variants on these base simulation programs, each of which is given an arbitrary identifier (Core xx).[13]
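The client/work-unit/core split described above can be sketched roughly as follows; all class and field names here are invented for illustration and do not come from the actual client:

```python
class Core:
    """A pluggable scientific core: the client only knows this
    interface, so cores can be updated or added without a client
    release — the separation described above."""
    name = "base"
    def compute(self, work_unit: dict) -> dict:
        raise NotImplementedError

class ToyGromacsCore(Core):
    name = "toy-gromacs"
    def compute(self, work_unit):
        # Stand-in for a molecular-dynamics run: just sum the
        # fake "atom positions" carried by the work unit.
        return {"result": sum(work_unit["positions"])}

class Client:
    """Downloads work units, picks the matching core, returns results."""
    def __init__(self, cores):
        self.cores = {c.name: c for c in cores}
    def process(self, work_unit):
        core = self.cores[work_unit["core"]]   # dispatch by core ID
        return core.compute(work_unit)

client = Client([ToyGromacsCore()])
wu = {"core": "toy-gromacs", "positions": [1.0, 2.5, 3.5]}
print(client.process(wu))  # {'result': 7.0}
```

Because the client dispatches by core identifier, adding a new core is a server-side change plus one new class, with no change to the client loop itself.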

Active cores

Cores listed in this section are not necessarily in use by the project at any given time, but are included because their use may resume at any time without notice. Deprecated and forthcoming cores are in the next section.

  • GROMACS (all variants of this core use SIMD optimizations including SSE, 3DNow+ or AltiVec, where available, unless otherwise specified)
    • Gromacs (Core 78)
    • DGromacs (Core 79)
    • DGromacsB (Core 7b)
      • Nominally an update of DGromacs, but is actually based on the SMP/GPU codebases (and is therefore a completely new core). As a result, both are still in use.
      • Double precision Gromacs, uses SSE2 only.
      • Available for all uniprocessor clients only.
    • DGromacsC (Core 7c)
      • Double precision Gromacs, uses SSE2 only.
      • Available on Windows and Linux uniprocessor clients only.
    • GBGromacs (Core 7a)
    • Gromacs SREM (Core 80)
      • Gromacs Serial Replica Exchange Method.
      • The Gromacs Serial Replica Exchange Method core, also known as GroST (Gromacs Serial replica exchange with Temperatures), uses the Replica Exchange method (also known as REMD or Replica Exchange Molecular Dynamics) in its simulations.
      • Available for Windows and Linux uniprocessor clients only.
    • GroSimT (Core 81)
      • Gromacs with simulated tempering.
      • Available for Windows and Linux uniprocessor clients only.
    • Gromacs 33 (Core a0)
      • Uses the Gromacs 3.3 codebase.
      • Available for all uniprocessor clients only.
        [Image: NVIDIA GPU v2.0 r1 client for Windows]
    • Gro-A3 core (Core a3)
      • SMP version of the Gromacs A4 core.
      • Uses threads rather than MPI for multicore support.
      • Available for SMP2 client only.
      • In open beta testing before general release.
      • Released January 24, 2010.
    • Gro-A4 (Core a4)
      • A one-core version of the Gromacs SMP2 core.
      • Available for Windows and Linux uniprocessor clients only.
    • Gro-A5 (Core a5)
      • An SMP version of the Gromacs A4 core.
      • Only runs using the -bigadv client flag.
      • Uses the same codebase as Gro-A3.
    • Gro-A6 (Core a6)
      • Same as Gro-A5, except it is a newer version.
      • Only runs using the -bigadv client flag.
      • Uses the same codebase as Gro-A3.
    • GroGPU2 (Core 11)
      • Graphics processing unit variant for ATI CAL-enabled and nVidia CUDA-enabled GPUs.
      • Comes in two separate versions, one each for ATI and nVidia, but both have the same Core ID.
      • GPUs do not support SIMD optimizations by design, so none are used in this core.
      • Available for GPU2 client only.
    • ATI-DEV (Core 12)
      • Graphics processing unit developmental core for ATI CAL-enabled GPUS.
      • Does not support SIMD optimizations.
      • Available for GPU2 client only.
    • NVIDIA-DEV (Core 13)
      • Graphics processing unit developmental core for nVidia CUDA-enabled GPUs.
      • Does not support SIMD optimizations.
      • Available for GPU2 client only.
    • GroGPU2-MT (Core 14)[14]
      • Graphics processing unit variant for nVidia CUDA-enabled GPUs.
      • Contains additional debugging code compared to the standard Core 11.
      • Does not support SIMD optimizations.
      • Released March 2, 2009.
      • Available for GPU2 client only.
    • Gro-PS3 (Does not have a known ID number, but also called SCEARD core)
      • PlayStation 3 variant.
      • No SIMD optimizations, uses SPE cores for optimization.
      • Available for PS3 client only.
  • AMBER
    • PMD (Core 82)[13]
      • No optimizations.
      • Available for Windows and Linux uniprocessor clients only.
  • ProtoMol [8]
    • Protomol Core (Core b4)
      • In open beta testing before general release.
      • Released to open testing on February 11, 2010.
  • OpenMM
    • Gromacs-OpenMM Core (Core 15)
      • In open beta testing before general release.
      • Uses CUDA and is available for nVidia only.
      • Available for GPU2 client only.

Inactive cores

  • TINKER
    • Tinker core (Core 65)
      • Retired, as the GBGromacs cores (Cores 7a and a4) perform the same tasks much faster.
      • No optimizations.
      • Available for all uniprocessor clients only.
  • GROMACS
    • GroGPU (Core 10)
      • Graphics processing unit variant for ATI series 1xxx GPUs.
      • No SIMD optimizations are used; none are needed, since GPU hardware is already designed around SIMD-style parallel execution.
      • Retired as of June 6, 2008 due to end of distribution of GPU1 client units.
      • Available for GPU1 client only.
    • Gro-SMP (Core a1)
    • GroCVS (Core a2)
      • Symmetric multiprocessing variant with scalable numbers of threads.
      • Runs only on multi-core x86 or x86-64 hardware, with four or more cores.
      • Uses the Gromacs 4.0 codebase.
      • Was available for Linux and Mac OS X SMP1 clients only.
      • Retired due to move to threads-based SMP with the SMP2 client.
  • CPMD
    • QMD (Core 96)
      • Currently inactive, due to QMD developer graduating from Stanford University and to current research shifting away from Quantum MD.
      • Caused controversy due to SSE2 issues involving Intel libraries and AMD processors.[15]
      • Uses SSE2 (currently only on Intel CPUs, see above).
      • Available for Windows and Linux uniprocessor clients only.
  • SHARPEN[16]
    • SHARPEN Core[17]
      • Currently inactive, in closed beta testing before general release.
      • Uses a different format from standard F@H cores, as each work packet sent to clients contains more than one "Work Unit" (by the normal definition).
  • Desmond
    • Desmond Core
      • Currently inactive, in closed beta testing before general release.
      • Will be available for uniprocessor and SMP2 clients.
  • OpenMM
    • OpenMM-Gromacs core (Core 16)
      • An updated version of Core 15, using OpenCL to support both ATI and nVidia graphics cards.
      • Currently inactive, in closed beta testing before general release.
      • Will be available for GPU3 client only.

Participation

Folding@home computing power shown, by device type, in teraFLOPS as recorded semi-daily from November 2006 until September 2007. Note the large spike in total compute power after March 22, when the PlayStation 3 client was released.

Shortly after breaking the 200,000 active CPU count on September 20, 2005, the Folding@home project celebrated its fifth anniversary on October 1, 2005.

Interest and participation in the project has grown steadily since its launch. The number of active devices participating in the project increased substantially after receiving much publicity during the launch of their high performance clients for both ATi graphics cards and the PlayStation 3, and again following the launch of the high performance client for nVidia graphics cards.

As of March 28, 2011 the peak speed of the project overall has reached over 7 native PFLOPS (12.4 x86 PFLOPS)[18] from around 464,000 active machines, and the project has received computing results from over 6.07 million devices since it first started.[3]

Google collaboration

Folding@home once cooperated with Google Labs through the Google Toolbar. Google Compute supported Folding@home during its early stages, when the project had about 10,000 active CPUs; at that time, a boost of 20,000 machines was very significant. As the project grew, few new clients joined through Google Compute, with most people opting for the Folding@home client instead. The Google Compute clients also had certain limits: they could only run the TINKER core and had restricted naming and team options. Folding@home is no longer supported on Google Toolbar, and even the old Google Toolbar client no longer works.[19]

Genome@home

Folding@home absorbed the Genome@home project on March 8, 2004. The work begun by Genome@home has since been completed using the Folding@home network (as work units without deadlines), and no new work is being distributed for the project. All donors were encouraged to download the Folding@home client (the F@h 4.xx client had a Genome@home option), and once the Genome@home work was complete these clients were asked to donate their processing power to Folding@home instead.

PetaFLOPS milestones

Native petaFLOPS threshold Date crossed
1.0 September 16, 2007
2.0 early May 2008
3.0 August 20, 2008
4.0 September 28, 2008
5.0 February 18, 2009
6.0 March 25, 2011
7.0 March 28, 2011

On September 16, 2007, the Folding@home project officially attained a sustained performance level above one native petaFLOPS, becoming the first computing system of any kind to do so, although it had briefly peaked above one native petaFLOPS in March 2007; the achievement received a large amount of mainstream media coverage.[20][21] In early May 2008 the project attained a sustained performance level above two native petaFLOPS, followed by the three and four native petaFLOPS milestones on August 20 and September 28, 2008 respectively. On February 18, 2009, Folding@home reached just above 5 native petaFLOPS, again becoming the first computing system of any kind to do so.[22] In late March 2011 Folding@home briefly peaked above the 6 and 7 native petaFLOPS barriers, but then fell back to 5.6.[3]

The Folding@home computing cluster currently operates at about 5.6 native petaFLOPS (about 9.3 x86 petaFLOPS), with a large majority of the performance coming from GPU and PlayStation 3 clients.[3] By comparison, the fastest standalone (non-distributed) supercomputer in the world (as of November 2010, Tianhe-1A) peaks at approximately 2.56 petaFLOPS.[23]

Starting in April 2009, Folding@home began reporting performance in both "native" FLOPS and x86 FLOPS.[3] The x86 figure is reported at a much higher mark than the native figure. A detailed explanation of the difference between the two was given in the FLOP section of the Folding@home FAQ.[18]
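Using the figures quoted in this article, the x86-to-native scaling factor can be estimated. It is not a fixed constant: it varies with the hardware mix, since GPU and Cell work is converted to an x86 equivalent.

```python
# Scaling between the two measures, from the figures quoted above:
# 5.6 native vs 9.3 x86 petaFLOPS (current), and 7 native vs 12.4
# x86 petaFLOPS (the March 2011 peak).
pairs = [(5.6, 9.3), (7.0, 12.4)]
ratios = [round(x86 / native, 2) for native, x86 in pairs]
print(ratios)  # [1.66, 1.77]
```

So at the time of this revision each native FLOP was being reported as roughly 1.7 x86 FLOPS, with the exact factor shifting as the client mix shifted.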

Results

These peer-reviewed papers (in chronological order) all use research from the Folding@home project:[9]

2000–2001

  • M. R. Shirts and V. S. Pande. (2000). "Screen Savers of the World, Unite!". Science. 290 (5498): 1903–1904. doi:10.1126/science.290.5498.1903. PMID 17742054.
  • Michael R. Shirts and Vijay S. Pande (2001). "Mathematical Analysis of Coupled Parallel Simulations". Physical Review Letters. 86 (22): 4983–4987. Bibcode:2001PhRvL..86.4983S. doi:10.1103/PhysRevLett.86.4983. PMID 11384401.
  • Bojan Zagrovic, Eric J. Sorin and Vijay Pande (2001). "b-Hairpin Folding Simulations in Atomistic Detail Using an Implicit Solvent Model". Journal of Molecular Biology. 313 (1): 151–169. doi:10.1006/jmbi.2001.5033. PMID 11601853.

2002

  • Stefan M. Larson, Christopher D. Snow, Michael R. Shirts, and Vijay S. Pande (2002). "Folding@home and Genome@home: Using distributed computing to tackle previously intractable problems in computational biology". In Richard Grant (ed.), Computational Genomics, Horizon Press, 2004.
  • Bojan Zagrovic, Christopher D. Snow, Michael R. Shirts, and Vijay S. Pande (2002). "Simulation of Folding of a Small Alpha-helical Protein in Atomistic Detail using Worldwide distributed Computing". Journal of Molecular Biology. 323 (5): 927–937. doi:10.1016/S0022-2836(02)00997-X. PMID 12417204.
  • Bojan Zagrovic, Christopher D. Snow, Siraj Khaliq, Michael R. Shirts, and Vijay S. Pande (2002). "Native-like Mean Structure in the Unfolded Ensemble of Small Proteins". Journal of Molecular Biology. 323 (1): 153–164. doi:10.1016/S0022-2836(02)00888-4. PMID 12368107.
  • Christopher D. Snow, Bojan Zagrovic, and Vijay S. Pande (2002). "The Trp Cage: Folding Kinetics and Unfolded State Topology via Molecular Dynamics Simulations". Journal of the American Chemical Society. 124 (49): 14548–14549. doi:10.1021/ja028604l. PMID 12465960.

2003

2004

2005

2006

2007

2008

2009

2010

2011

  • Peter M. Kasson, Erik Lindahl, and Vijay S. Pande (2011). "Water Ordering at Membrane Interfaces Controls Fusion Dynamics". Journal of the American Chemical Society. doi:10.1021/ja200310d.

High performance platforms

Graphics processing units

On October 2, 2006, the Folding@home Windows GPU client was released to the public as a beta test. After nine days of processing, the beta client had contributed 31 teraFLOPS of computing performance from only 450 ATI Radeon X1900 GPUs, averaging over 70 times the performance of contemporary CPU submissions. GPU clients remain the most powerful clients available in performance per client; as of March 28, 2011, they accounted for about 80% of the entire project's throughput, at an approximate ratio of 3.6 clients per teraFLOP.[3] On April 10, 2008, the second-generation Windows GPU client was released to open beta testing, supporting ATI/AMD's Radeon HD 2000 and HD 3000 series and debuting a new core (GroGPU2, Core 11). Inaccuracies with DirectX were cited as the main reason for migrating to the new version, which uses AMD/ATI's CAL; the original GPU client was officially retired June 6, 2008.[24] On June 17, 2008, a version of the second-generation Windows GPU client for CUDA-enabled Nvidia GPUs was released for public beta testing.[25] The GPU clients proved reliable enough to be promoted out of the beta phase and were officially released August 1, 2008.[26] Newer GPU cores continue to be released for both CAL and CUDA.

While the only officially released GPU v2.0 client is for Windows, this client can be run on Linux under Wine with NVIDIA graphics cards.[27] The client can operate on both 32- and 64-bit Linux platforms, but in either case the 32-bit CUDA toolkit is needed. This configuration is not officially supported, though initial results have shown comparable performance to that of the native client and no problems with the scientific results have been found.[citation needed] An unofficial installation guide has been published.[27]

On September 25, 2009, Vijay Pande revealed in his blog that a new third version of the GPU client was in development.[28] GPU3 will use OpenCL (preferred over DirectCompute) as the software interface, which may mean that the GPU core will be unified for both ATI and nVidia, and may also mean the addition of support for other platforms with OpenCL support.

On May 25, 2010, Vijay Pande announced an open beta of the GPU3 client on the Folding@home blog.[29] The new core initially supports only Nvidia GPUs, but will support ATI/AMD GPUs in a subsequent release.

PlayStation 3

The PlayStation 3's Life With PlayStation client replaced the Folding@home application on September 18, 2008.

Folding@home is also a channel of the application Life with PlayStation for PlayStation 3. The client was originally a standalone application, but is now part of a virtual globe which depicts news, weather and encyclopedic information (notably from Wikipedia).[3][30]

Multi-core processing client

As newer multi-core CPUs reach the public, the Pande Group has added symmetric multiprocessing (SMP) support to the Folding@home client in the hope of harnessing the additional processing power. SMP support is achieved via Message Passing Interface (MPI) protocols; in its current state it is confined to a single node through hard-coded use of localhost.

On November 13, 2006, the beta SMP Folding@home clients for x86-64 Linux and x86 Mac OS X were released. The beta win32 SMP Folding@home client is out as well, and a 32-bit Linux client is currently in development.[31]

On June 17, 2009 the Pande Group revealed that a second generation SMP client (known as the SMP2 client) was in development. This client will use threads rather than MPI[12] to spread the processing load across multiple cores and thereby remove the overhead of keeping the cores synced, as they should share a common data bank in RAM. On January 24, 2010, the first open beta release of the SMP2 client was made, trialling the new processing methods and a new points bonus system rewarding quick unit returns.
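The structural difference from MPI can be sketched as follows: worker threads update one shared in-memory data bank in place instead of exchanging message copies. (In the real native-code cores such threads run truly in parallel; Python is used here only to illustrate the structure.)

```python
import threading

# Shared "data bank" in RAM, in the spirit of the SMP2 design:
# all workers read positions and write forces in place, so no
# copies are exchanged -- the overhead MPI-based SMP had to pay.
positions = list(range(8))
forces = [0.0] * len(positions)

def worker(lo, hi):
    # Each thread handles its own disjoint slice of the arrays,
    # so no locking is needed for these writes.
    for i in range(lo, hi):
        forces[i] = 2.0 * positions[i]

threads = [threading.Thread(target=worker, args=(i, i + 4)) for i in (0, 4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(forces)
```

Because the slices are disjoint, the threads stay in sync simply by joining at the end of each step, rather than through explicit message passing.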

Teams

A typical Folding@home user, running the client on one PC, will likely not be ranked high on the list of contributors. However, if the user were to join a team, they would add the points they receive to a larger collective. Teams work by using the combined score of all their members. Thus, teams are ranked much higher than individual submitters. Rivalries between teams create friendly competition that benefits the folding community. Many teams publish their own stats, so members can have intra-team competitions for top spots.[32] Some teams offer prizes in an attempt to increase participation in the project.[33][34]

Development

The Folding@home project does not make its source code available to the public, citing security and integrity concerns.[35] At the same time, most of the scientific codes used by FAH (e.g. Cosm, GROMACS, TINKER, AMBER, CPMD, BrookGPU) are largely open-source software or under similar licenses.

A development version of Folding@home once ran on the open source BOINC framework; however, this version remained unreleased.[36]

Estimated energy use

The original PlayStation 3 has a maximum power rating of 380 watts (newer versions have a lower rating). However, according to Stanford's PS3 FAQ, "We expect the PS3 to use about 200W while running Folding@home."[37] As of December 27, 2008, 55,291 PS3s were providing 1,559,000,000 MFLOPS of processing power. This amounts to 28,196 MFLOPS per PS3 and, with Stanford's estimate of 200 W per PS3 (for original units manufactured on the 90 nm process), 140.98 MFLOPS/watt.[3] This would put the PS3 portion of Folding@home at 95th on the November 2008 Green500 list.[38] (As of November 2008, the most efficient computer on that list, also based on a version of the Cell BE, runs at 536.24 MFLOPS/watt.[39]) The Cell processors used in 65 nm PlayStation 3s lower power use to around 140 W per PS3, while the 45 nm PS3s reduce it again to around 100 W, further increasing the power efficiency of the contribution from PlayStation 3 units.
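The per-console figures above follow directly from the quoted totals, as a quick check:

```python
# Re-deriving the efficiency figures from the article's own numbers.
ps3_count = 55_291
total_mflops = 1_559_000_000
watts_per_ps3 = 200  # Stanford's estimate for 90 nm units

mflops_per_ps3 = total_mflops / ps3_count
print(round(mflops_per_ps3))                     # 28196
print(round(mflops_per_ps3 / watts_per_ps3, 2))  # 140.98
```

The later 65 nm (about 140 W) and 45 nm (about 100 W) revisions raise the MFLOPS/watt figure simply by shrinking the denominator.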

Estimates of energy usage per time period are more difficult than estimates of energy usage per processing instruction. This is because Folding@home clients are often run on computers that would be powered-on even in the absence of the Folding@home client, and that run other programs simultaneously.

Looking at the energy balance in a larger context requires a number of assumptions. While Folding@home increases processor utilization, and thus (usually) power use, the extent to which it does so depends on the processor's normal operating load and on its ability to reduce clock speeds under less-than-full utilization (a process known as dynamic frequency scaling). In a home that is being heated, the excess heat generated by a Folding@home client would somewhat reduce the energy needed to heat the building; however, such gains would be offset over the year in locations where air conditioners run during warmer months.

See also

References

  1. ^ "Folding@home distributed computing client". Stanford University. Retrieved 26 August 2010.
  2. ^ Engadget, among other sites, announces that Guinness has recognized FAH as the most powerful distributed cluster, October 31, 2007. Retrieved November 5, 2007
  3. ^ a b c d e f g h "Client Statistics by OS". Folding@home distributed computing. Stanford University. Updated automatically. Retrieved 2008-01-05.
  4. ^ Vijay Pande (2006). "Folding@home distributed computing home page". Stanford University. Retrieved 2006-11-12.
  5. ^ "Folding@home diseases studied FAQ". Stanford University.
  6. ^ "Folding@home: Paper #72: Major new result for Folding@home: Simulation of the millisecond timescale".
  7. ^ "Futures in Biotech 27: Folding@home at 1.3 Petaflops" (Interview, webcast).
  8. ^ a b "Folding@home - About" (FAQ).
  9. ^ a b Vijay Pande and the Folding@home team (2009). "Folding@home - Papers". Folding@home distributed computing. Stanford University. Retrieved 2009-12-23.
  10. ^ C. Snow, H. Nguyen, V. S. Pande, and M. Gruebele (2002). "Absolute comparison of simulated and experimental protein-folding dynamics". Nature. 420 (6911): 102–106. Bibcode:2002Natur.420..102S. doi:10.1038/nature01160. PMID 12422224.
  11. ^ Vijay Pande (2005-10-16). "Folding@home with QMD core FAQ" (FAQ). Stanford University. Retrieved 2006-12-03. The site indicates that Folding@home uses a modification of CPMD allowing it to run on the supercluster environment.
  12. ^ a b Vijay Pande (2009-06-17). "Folding@home: How does FAH code development and sysadmin get done?". Retrieved 2009-06-25.
  13. ^ a b "Cores - FaHWiki" (FAQ). Retrieved 2007-11-06.
  14. ^ "Folding Forum: Announcing project 5900 and Core_14 on advmethods". 2009. Retrieved 2009-03-02.
  15. ^ "FAH & QMD & AMD64 & SSE2" (FAQ).
  16. ^ "SHARPEN: Systematic Hierarchical Algorithms for Rotamers and Proteins on an Extended Network" (About).
  17. ^ "SHARPEN".
  18. ^ a b Folding@home FLOP FAQ
  19. ^ "What is the state of Google Compute client?" (Blog). Folding@home support forum. Stanford University. Retrieved 2006-11-12.
  20. ^ Folding@home: Crossing the petaFLOPS barrier
  21. ^ Folding@home: Post petaflop
  22. ^ "Folding@home passes the 5 petaflop mark" from the official Folding@home blog
  23. ^ "TOP500 Tianhe-1A Performance Data". Retrieved 2010-11-29.
  24. ^ "Folding@home: GPU1 has been retired, GPU2 for NVIDIA release nearing".
  25. ^ "Folding@home: GPU2 beta client for NVIDIA now released".
  26. ^ "Folding@home: New clients are out (6.20)". {{cite web}}: Unknown parameter |name= ignored (help)
  27. ^ a b "Folding@home GPU v2.0 Windows Client on Linux Wiki". 2008-08-23. Retrieved 2008-11-06.[dead link]
  28. ^ "Folding@home: Update on new FAH core and clients".
  29. ^ "Folding@home: Open beta release of the GPU3 client/core".
  30. ^ Vijay Pande (2006-10-22). "PS3 FAQ". Stanford University. Retrieved 2006-11-13.
  31. ^ Vijay Pande (2006-11-13). "Folding@home SMP Client FAQ". Stanford University. Retrieved 2006-11-13.
  32. ^ Folding-community: why have teams?[dead link]
  33. ^ "The Mprize-".
  34. ^ Team Jiggmin F@H Prizes
  35. ^ "Folding@home Open Source FAQ".
  36. ^ "FAH on BOINC". Folding@home high performance client FAQ.
  37. ^ "PS3 FAQ" (FAQ).
  38. ^ "Lists :: November 2008 :: Ranks 1-100". Green 500.
  39. ^ "The Green500 List". Retrieved 2008-12-27.