PhysioNet/Computing in Cardiology Challenges


In cooperation with the annual Computing in Cardiology conference, PhysioNet hosts a series of challenges, inviting participants to tackle clinically interesting problems that are either unsolved or not well-solved.

Year  Topic                                                        Papers Contributed  Software
2000  Detecting Sleep Apnea from the ECG                                           13         1
2001  Predicting Paroxysmal Atrial Fibrillation                                     9
2002  RR Interval Time Series Modeling                                             12        10
2003  Distinguishing Ischemic from Non-Ischemic ST Changes                          3         1
2004  Spontaneous Termination of Atrial Fibrillation                               12         1
2005  The First Five Challenges Revisited                                           5
2006  QT Interval Measurement                                                      20         6
2007  Electrocardiographic Imaging of Myocardial Infarction                         6
2008  Detecting and Quantifying T-Wave Alternans                                   19     5 + 1
2009  Predicting Acute Hypotensive Episodes                                        11         4
2010  Mind the Gap                                                                 13         5
2011  Improving the Quality of ECGs Collected using Mobile Phones                  17         7
2012  Predicting Mortality of ICU Patients                                         17        58
2013  Noninvasive Fetal ECG                                                        29        17
2014  Robust Detection of Heart Beats in Multimodal Data                           15        35
2015  Reducing False Arrhythmia Alarms in the ICU                                  20        28
2016  Classification of Normal/Abnormal Heart Sound Recordings                     11        48
2017  AF Classification from a Short Single Lead ECG Recording                     57        64
2018  You Snooze, You Win                                                     Ongoing   Ongoing
2019  Early Prediction of Sepsis from Clinical Data                           Ongoing   Ongoing

Background

In complementary ways, PhysioNet and Computing in Cardiology (CinC) catalyze and support scientific communication and collaboration between basic and clinical scientists. The annual CinC meetings gather researchers from many nations and disciplines, bridging the geographic and specialty chasms that separate understanding from practice, while PhysioNet provides on-line data and software resources that support collaborations of basic and clinical researchers throughout the year.

The annual PhysioNet/CinC Challenges seek to provide stimulating yet friendly competitions, while offering specialists and non-specialists alike opportunities to make progress on significant open problems whose solutions may be of profound clinical value. The shared data provided via PhysioNet make it possible for participants to work independently toward a common objective. At CinC, participants can make meaningful results-based comparisons of their methods; lively and well-informed discussions are the norm at scientific sessions dedicated to these challenges. Discovering the complementary strengths of diverse approaches to a problem, coupled with a deep understanding of that problem, frequently sparks new collaborations and opportunities for further study.

A new challenge topic is announced each year. For each challenge, we assemble the raw materials needed to begin work and post them here on PhysioNet. In a typical challenge, these raw materials consist of a collection of signals or other data to be analyzed, sometimes together with a sample entry that can be used as a starting point. In each case, the required analyses are provided for a subset of the data (the "learning set"), and the challenge is to analyze the remaining data (the "test set").
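Most challenge data sets are distributed in PhysioNet's WFDB format. As a rough sketch of getting started, assuming the open-source wfdb Python package is installed (the record and database names below are stand-ins borrowed from the MIT-BIH Arrhythmia Database, not from any particular challenge):

    # A minimal sketch of reading one learning-set record, assuming the
    # open-source wfdb Python package (pip install wfdb). Record '100' of the
    # MIT-BIH Arrhythmia Database ('mitdb') stands in for a challenge record.
    import wfdb

    # Fetch the waveforms directly from PhysioNet; pn_dir names the database.
    record = wfdb.rdrecord('100', pn_dir='mitdb')
    print(record.fs, record.sig_name)    # sampling frequency and channel names
    print(record.p_signal.shape)         # (number of samples, number of channels)

    # Fetch the reference ("gold standard") annotations for the same record.
    ann = wfdb.rdann('100', 'atr', pn_dir='mitdb')
    print(ann.sample[:5], ann.symbol[:5])  # annotation sample indices and labels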

Each challenge begins when the announcement is posted here, and ends in August or early September, shortly before the Computing in Cardiology conference. An important milestone for participants is the deadline for submitting abstracts for Computing in Cardiology, which is usually 1 May each year. Those wishing to qualify as official entrants, with eligibility for awards, must submit an acceptable abstract describing their work as well as an entry for scoring by this deadline. A limited number of revised entries may be submitted between 1 May and the final challenge deadline, which varies from year to year.

Challenges are open to all. Beginning in 2010, participants enter by joining a PhysioNetWorks project that allows them to submit entries and receive scores via this web site. Instructions and details, which vary from year to year, are on the home page for each challenge.

(You are also encouraged to work on challenges from previous years. In most cases, the solutions have been posted, so you can score your own entries. If the solutions have been withheld, as in a few cases in which follow-up studies are planned or ongoing, the challenge home pages have information about how to obtain scores.)

When the structure of the challenge permits, the top scores (including those of both official and unofficial entrants) are posted anonymously during the challenge period. The top-scoring eligible participant in each challenge event receives an award at that year's Computing in Cardiology conference.

After the final challenge deadline, we post the names of the top scorers, their scores, the number of entries they submitted to achieve those scores, and (for official entrants) the papers they submitted to Computing in Cardiology to qualify. Many of the Challenges have open-source events, in which participants submit their entries as code to be tested using data they have not seen; the most successful of these entries are also posted after the challenge concludes, as a basis for follow-up studies.

By presenting these challenges, we aim to stimulate work on important clinical problems and to foster rapid progress towards their solution. Collaborations among those who have developed complementary approaches to challenge problems are easily established. We consider it especially significant that many of those who have participated in these challenges would not otherwise have had access to the data needed to study these topics. By bringing with them the insights and methods they have acquired from their own areas of expertise, these researchers enrich our fields of interest. We look forward to future challenges, and invite you to join in!

What will be the topic of the next challenge? It might be image analysis, or simulation, or forecasting.... An ideal challenge problem is interesting, clinically important, and possible to study using available materials that have not been widely circulated previously. Moreover, there must be an objective way to evaluate the quality of a challenge entry (for an analysis problem, this usually means there must be a known set of correct analyses of the data, i.e., a "gold standard" against which entries can be compared).
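To make gold-standard scoring concrete, here is a minimal sketch, illustrative only and not any challenge's official scoring code (the function name, the 150 ms matching tolerance, and the event times are all invented for this example). It compares an event detector's output against reference annotations and reports sensitivity and positive predictive value:

    # Illustrative sketch only, not any challenge's official scoring code:
    # score event detections against gold-standard reference times by matching
    # each reference event to at most one detection within a tolerance window.
    def score_detections(reference, detected, tolerance=0.15):
        """reference, detected: lists of event times in seconds."""
        matched = set()
        true_pos = 0
        for ref_t in sorted(reference):
            # Choose the closest not-yet-matched detection within the tolerance.
            candidates = [(abs(det_t - ref_t), i)
                          for i, det_t in enumerate(detected)
                          if i not in matched and abs(det_t - ref_t) <= tolerance]
            if candidates:
                _, best = min(candidates)
                matched.add(best)
                true_pos += 1
        sensitivity = true_pos / len(reference) if reference else 0.0
        ppv = true_pos / len(detected) if detected else 0.0
        return sensitivity, ppv

    # Three reference beats; the detector misses one and adds one false alarm,
    # so both sensitivity and positive predictive value come out to 2/3.
    print(score_detections([0.8, 1.6, 2.4], [0.82, 2.38, 3.0]))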

What should be the topic of next year's challenge? Do you have a data set that can help in creating a challenge? Please send us your ideas!

Questions and Comments

If you would like help understanding, using, or downloading content, please see our Frequently Asked Questions.

If you have any comments, feedback, or particular questions regarding this page, please send them to the webmaster.

Comments and issues can also be raised on PhysioNet's GitHub page.


PhysioNet is supported by the National Institute of General Medical Sciences (NIGMS) and the National Institute of Biomedical Imaging and Bioengineering (NIBIB) under NIH grant number 2R01GM104987-09.