
HPX

Open source

The C++ Standard Library for Parallelism and Concurrency

Contribute


Become a financial contributor.

Financial Contributions

Recurring contribution
Backer

Become a backer for $5.00 per month and support us.

Starts at
$5 USD / month

Recurring contribution
Sponsor

Become a sponsor for $20.00 per month and support us.

Starts at
$20 USD / month

Custom contribution
Donation
Make a custom one-time or recurring contribution.

Top financial contributors

Organizations

1. GitHub Sponsors: $438.55 USD since May 2022

Individuals

1. Nikunj Gupta: $400 USD since Aug 2024

HPX is all of us

Our contributors: 8

Thank you for supporting HPX.

Nikunj Gupta (sponsor): $400 USD

Guest

Budget


Transparent and open finances.

NumFocus, from Dimitra Karatza to HPX
-$10,000.00 USD
Pending
Invoice #227643

Credit from Nikunj Gupta to HPX

+$100.00 USD
Completed
Contribution #783815

Today’s balance

$4,415.86 USD

Total raised

$13,441.89 USD

Total disbursed

$9,026.03 USD

Estimated annual budget

$1,200.00 USD

About


HPX is shaking up high-performance computing through a unique combination of C++ language development and parallelism research. At its heart, it is a general-purpose C++ many-task runtime system for parallel and distributed applications of any scale. It strives to provide a unified programming model that transparently utilizes the available resources to achieve unprecedented levels of scalability. The library strictly adheres to the C++ Standard, which makes HPX easy to use, highly optimized, and very portable. HPX is being developed for conventional architectures, including Linux-based systems, Windows, Mac, and the BlueGene/Q, as well as for accelerators such as the Xeon Phi.
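Because the API follows the C++ Standard, code written against std::async and std::future maps almost one-to-one onto HPX. Below is a minimal sketch of that correspondence; the header names are assumptions that vary across HPX releases, so treat it as illustrative rather than canonical:

    // Minimal sketch: HPX's standard-conforming futures.
    // Assumption: hpx/hpx_main.hpp and hpx/future.hpp are the module
    // headers of a recent HPX release; older releases spell them differently.
    #include <hpx/hpx_main.hpp>  // lets a plain main() run on the HPX runtime
    #include <hpx/future.hpp>

    #include <iostream>

    int square(int x) { return x * x; }

    int main()
    {
        // Schedule work asynchronously, exactly as std::async would.
        hpx::future<int> f = hpx::async(square, 7);

        // Attach a continuation instead of blocking on get() right away,
        // so the runtime can hide the latency of the first task.
        hpx::future<int> g =
            f.then([](hpx::future<int> r) { return r.get() + 1; });

        std::cout << g.get() << '\n';  // prints 50
        return 0;
    }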

HPX is the first open-source implementation of a new asynchronous C++ Standard programming model. This model focuses on overcoming the four main barriers to scalability:

  • Starvation: insufficient concurrent work available to maintain high utilization of all resources.
  • Latencies: the time-distance delay intrinsic to accessing remote resources and services.
  • Overhead: the work required for the management of parallel actions and resources on the critical execution path which is not necessary in a sequential variant.
  • Waiting for contention resolution: the delay due to the lack of availability of oversubscribed shared resources.
To overcome these challenges, HPX follows a set of governing principles: latency hiding, fine-grained parallelism, constraint-based synchronization, adaptive locality control, work following the data, and message-driven computation. None of these principles is new in itself; it is their novel combination that makes HPX stand out. The proper integration of these concepts has enabled features such as transparent data migration and a policy-based decision engine, allowing HPX to be truly adaptive at runtime. We believe that by adhering to these guidelines, HPX allows applications to efficiently utilize today's petascale machines and to scale on future exascale architectures.
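As a hedged illustration of constraint-based synchronization, the sketch below uses hpx::dataflow, which runs a task as soon as all of its argument futures become ready; as above, the header names are assumptions that differ between HPX releases:

    // Hedged sketch: constraint-based synchronization with hpx::dataflow.
    // Assumption: hpx/hpx_main.hpp and hpx/future.hpp from a recent HPX
    // release; older releases expose dataflow through different headers.
    #include <hpx/hpx_main.hpp>
    #include <hpx/future.hpp>

    #include <iostream>
    #include <utility>

    int main()
    {
        hpx::future<int> a = hpx::async([] { return 2; });
        hpx::future<int> b = hpx::async([] { return 3; });

        // 'sum' is constrained only by the readiness of a and b, not by
        // program order: the work follows the data.
        hpx::future<int> sum = hpx::dataflow(
            [](hpx::future<int> x, hpx::future<int> y) {
                return x.get() + y.get();
            },
            std::move(a), std::move(b));

        std::cout << sum.get() << '\n';  // prints 5
        return 0;
    }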

HPX offers a unique solution to your challenging scalability problems, so give HPX a whirl! You can download HPX directly from GitHub, or, if you prefer, you can download a release version of HPX here. If you are just getting started, please check out our HPX documentation, which includes a getting started guide complete with setup instructions and a walkthrough of some simple examples. If you run into problems, you can search our mailing list archives or contact us directly at [email protected]. In addition, we maintain a near-constant presence on IRC in the #ste||ar chat room hosted on Freenode. We have outlined all the ways to get in contact on our support page.


Please feel free to share your thoughts, report bugs, and suggest ideas. We would love to hear from you!



Our team