
VITAL Prize Official Rules

1. Challenge overview

 The National Science Foundation (“NSF”) and Digital Promise (together, the “Organizers”) present the Visionary Interdisciplinary Teams Advancing Learning Prize Challenge (the “VITAL Prize Challenge” or “Challenge”). The Challenge is funded by NSF, the Bill & Melinda Gates Foundation, Schmidt Futures, and the Walton Family Foundation (together, the “Partners”) and administered by Digital Promise. This document describes the Challenge and contains the official rules (these “Rules”) for the Challenge. The Challenge is governed by these Rules. By submitting an application to enter the Challenge, each participant (“Participant”) is deemed to have accepted these Rules and agreed to abide by them.

The VITAL Prize Challenge will focus on providing interdisciplinary teams the funding and training to build K-12 learning technology innovations at speed and scale for adoption nationwide. The Challenge will be composed of four (4) progressive activities (Concept Paper, Discovery Round, Semi-Final Round, and Final Round). Teams applying to the VITAL Prize Challenge will submit an application to one (1) of three (3) K-12 technology translation tracks, depending on the anticipated application, end user, and area of impact: Rapid and Continuous Learning Assessment; Mathematical Literacy to Promote a Future STEM Workforce; or Other Innovations in Translational Learning Technologies. More details can be found in the NSF VITAL Dear Colleague Letter and on the Digital Promise VITAL Prize Challenge webpage.

Participating teams who progress through the VITAL Prize Challenge will be supported - through training, coaching, resources, and research and development (R&D) seed funding - to develop a marketable education technology prototype that can equitably impact learning. They will also simultaneously build capacities in entrepreneurship, research-based design, learner variability, inclusion, and equity in the education marketplace.

Winning teams will demonstrate a marketable technology prototype that can equitably impact learning in K-12 education and may receive over $300,000 in seed funding support and cash prizes.

2. Challenge structure

The Challenge is structured into three (3) rounds over approximately twelve (12) months. Table 1 displays the initial Challenge calendar (“Challenge Calendar”).

Table 1: Challenge Calendar

February 16, 2023 Applications Open for Team Concept Paper Submission
March 19, 2023 Applications Close
April 11, 2023 Judging Complete and Participating Teams for the Discovery Round Announced [approx. 100 teams; 30+ teams per track]
April 18, 2023 - June 15, 2023 Discovery Round Programming
July 14, 2023 Judging Complete and Semi-Final Round Participating Teams Announced [approx. 54 teams; 18+ teams per track]
August 7, 2023 - October 16, 2023 Semi-Final Round Programming
November 16, 2023 Judging Complete and Final Round Participating Teams Announced [approx. 18 teams; 6+ teams per track]
November 27, 2023 - January 29, 2024 Final Round Programming
February 13, 2024 Judging Complete and Winning Teams Announced

Note: The above dates are estimates only and are subject to change at any time in Digital Promise’s sole discretion.

2.1 Team Applications (Concept paper)

Teams with a research-based idea for an emerging technology that could serve K-12 students or contribute to more equitable learning systems should consider applying, if they meet the eligibility requirements stated in these Rules. Team members can come from any field (e.g., engineering, computer science, neuroscience, psychology, anthropology, economics), and need not have existing experience in education or technology. Team members can also be at any stage of their careers across academia, industry, or elsewhere (e.g., professors, students, researchers, teachers, computer scientists, product developers, early-stage entrepreneurs).

In order to participate in the VITAL Prize Challenge, teams will need at least one (1) individual team member with the capability to represent a concept and develop a prototype throughout the training and mentorship opportunities provided by the Challenge (read more on designating an Official Representative). Teams must be made up of at least two (2) individual team members, and must not exceed ten (10) individual team members. Individuals cannot be on multiple teams.

Any participant who registers or submits an entry (whether a private entity, a team, or anyone acting on behalf of a private entity or team) to participate in this Challenge represents that they (including all members of their team) have read, understood, and agree to all terms and conditions of these Rules. This must be agreed to prior to the submission and review of any team application (Concept Paper).

Teams interested in applying will submit the initial application (Concept Paper) on the VITAL Prize Challenge website https://www.vitalprize.org/ (the “Challenge Website”) during the application window. Each application must align to one (1) of three (3) “Translational Learning Technology Tracks” (as defined herein) identified by Digital Promise, depending on the anticipated learning technology, end user, and area of impact:

  1. Rapid and Continuous Learning Assessment: Advancing measures and tools that dramatically increase the speed and utility of student learning information for educators, students, and families.
  2. Mathematical Literacy to Promote a Future STEM Workforce: Advancing a student’s capacity to employ fundamental critical thinking skills and quantitative reasoning in a variety of contexts.
  3. Other Innovations in Translational Learning Technologies: Advancing novel concepts and technologies for diverse communities of K-12 student learners and teachers, outside of assessment and mathematical literacy.

The application must include brief responses to prompts that describe the innovation and potential translational learning technology the applying team hopes to develop, and that demonstrate the team’s capacity to achieve the goals of the VITAL Prize Challenge. A complete application must include the following information:

  • Identification and a description of all team members, including their expertise;
  • Designated Official Representative for the team;
  • Any challenges the team foresees with regard to implementing the prototype;
  • The team’s ability to meaningfully demonstrate capabilities in the timeline of the Challenge;
  • A description of the proposed translational learning technology, focusing specifically on how the concept is a technically viable and innovative idea that is likely to fulfill a need and support learning in K-12 education;
  • A description of how the prototype supports aspects of learner variability;
  • An overview of the team’s commitment to equity and cited metrics and projections on the technology’s potential to fulfill a need for historically and systematically excluded learners;
  • An overview of how the team might work with educators and other stakeholders in education; and
  • The degree to which the team has the capacity to use lean, iterative, evidence-based design methods to refine an idea into a prototype.

Throughout the application phase, Digital Promise will host a series of webinars for all registered teams and teams considering registering for the Challenge. Digital Promise webinars will allow teams to get to know each other and also to receive important Challenge updates. Participation in these webinars, while not mandatory, is strongly encouraged.

A Judging Panel will review each application and the potential translational learning technologies outlined in the submission. Upon review, up to one hundred (100) teams, roughly split between each of the three (3) Translational Learning Technology Tracks (i.e., roughly thirty-three (33) teams per track), will be selected and invited to participate in the “Discovery Round” phase.



2.2 Challenge rounds

Upon completion of the application (Concept Paper) phase, the Challenge embarks on three (3) progressive rounds of programming, judging, and team selection: Discovery, Semi-Final, and Final. During each of these rounds, teams will be asked to submit, within certain time periods, white papers, video footage, supporting documentation, and/or product prototypes that demonstrate the teams’ solutions in accordance with the Challenge criteria (each, a “submission”). Team submissions will be reviewed by an independent Judging Panel, which is responsible for making the final decisions on advancing teams from one (1) round of the Challenge to the next. Team submissions will be assessed in a way that maintains confidentiality as well as fair and equal consideration of all Challenge criteria, without favoring one (1) criterion over another unless explicitly specified.

Discovery Round (April 2023 - June 2023)

Participating teams will go through a brief orientation to concepts and activities they will develop during the Challenge before beginning the Discovery Round. The Discovery Round will be based on teachings from the NSF Innovation Corps (I-Corps™) program, which supports researchers and developers interested in entrepreneurial education and mentoring, with the goal of reducing the time it takes to bring technologies from the laboratory to the marketplace. Up to one hundred (100) teams will be selected for the Discovery Round, roughly thirty-three (33) teams from each of the Translational Learning Technology Tracks.

The I-Corps™ course provides participants with real-world, hands-on learning experience through customer discovery activities, which may include engaging with industry stakeholders – including potential customers, partners, and competitors.

During this round, teams will spend time interviewing commercial stakeholders and testing their concepts. Teams should assume an approximate effort of twenty (20) hours per week cumulatively across team members for the I-Corps™ program learning goals and associated development activities.

I-Corps™ Learning Team Expectations:

  1. Participate in experiential learning opportunities to help determine whether a significant commercial need exists for their technology.
  2. Engage in activities that will support them in articulating clear decisions regarding the commercial effort.
  3. Develop a transition plan to move the technology forward to market.

After the Discovery Round, teams will present their findings and customer discovery outcomes via an eight (8)-minute presentation that will be recorded, scored by I-Corps™ coaches, and presented to the Judging Panel. The Judging Panel will review these submissions, and up to fifty-four (54) highly ranked teams, eighteen (18) from each of the Translational Learning Technology Tracks, will be selected to participate in a “Semi-Final Round”.

Semi-Final Round (August 2023 - October 2023)

Each of the up to fifty-four (54) teams selected to advance to the Semi-Final Round will receive $20,000 in research and development seed funding to help offset costs associated with early solution development.

During the Semi-Final Round, teams will have approximately three (3) months to further develop their prototypes. Teams will be partnered with paid educator co-design mentors (“Mentors”) to provide contextual and application-driven feedback as they further assess the feasibility of their proposed translational learning technology concept.

All teams will receive additional training focused on designing emerging technologies for learning, including in-depth training on educational equity, participatory design, and the science of learning and learner variability.

At the end of the Semi-Final Round, teams will be expected to be able to:

  1. Provide evidence and cited metrics that the prototype is likely to fulfill a need and support learning in K-12 education;
  2. Articulate how the prototype is designed to support learner variability;
  3. Articulate how their concept could support the needs of historically and systematically excluded learners;
  4. Collaborate with teachers using participatory design practices sustainably over time to improve their products; and
  5. Use lean, iterative, evidence-based design methods to improve the development of the prototype.

Teams, along with their educator Mentors, will be expected to make significant progress towards the design and feasibility of a minimum viable prototype (“MVP”) during the Semi-Final Round. At the conclusion of the Semi-Final Round, teams will present their revised concept and/or initial prototype via the Challenge Website. The Judging Panel will review the submissions and select a subset of these teams to participate in the “Final Round” based on criteria below.

Final Round (November 2023 - January 2024)

Up to eighteen (18) teams, roughly six (6) teams in each Translational Learning Technology Track, will be selected to advance to the Final Round. Each team selected to advance to the Final Round (“finalists”) will receive an additional $50,000 in research and development seed funding to help offset costs associated with the development of the prototype. These finalists will continue their partnership with an educator Mentor in an effort to further develop and technically validate their learning technology MVP, and refine their prototypes for impact across the education marketplace. They will receive mentorship and support on developing their prototypes into market-ready solutions that are well placed to garner further support from investors, and can be further tested and scaled in inclusive and equitable ways. Teams, along with their educator Mentors, will be expected at the end of the Final Round to have developed an MVP that meets the following criteria.

The MVP must demonstrate how the prototype:

  • Represents a marketable education technology prototype that is likely to fulfill a need and support learning in K-12 education;
  • Is designed to support learner variability;
  • Is designed to support historically and systematically excluded learners;
  • Has integrated findings from participatory design practices to improve their concept; and
  • Has integrated findings from lean, iterative, evidence-based design methods.

Upon completion of the Final Round, the Judging Panel will identify three (3) winning teams within each of the Translational Learning Technology Tracks. These winning teams will present their MVP via a live “Pitch Session” to a panel composed of committee members, NSF staff, sponsoring Partner representatives, and private sector investors in learning technologies. The Judging Panel will then identify within each of the Translational Learning Technology Tracks a first, second, and third prize-winning team. Partner representatives may provide input to the Judges but will not be voting members of the Judging Panel.

2.3 Criteria

The Judging Panel will use criteria to review applications and judge each round of the Challenge. The following criteria represent examples of the types of criteria the Judging Panel will apply. The final criteria for each round will be published on the Challenge Website before submissions for that round are accepted.

Table 2: Application Criteria

Team expertise
1 - Does not meet standard: The team, or their partner organizations, does not have the necessary technical, research, or subject matter expertise to design and refine the concept into a prototype.
2: The team, or their partner organizations, has limited technical, research, or subject matter expertise to design and refine the concept into a prototype, and will need significant additional support.
3: The team, or their partner organizations, has some relevant technical, research, or subject matter expertise to design and refine the concept into a prototype, but is likely to need additional support.
4 - Meets Standard: The team, or their partner organizations, has relevant technical, research, and subject matter expertise to design and refine the concept into a prototype.
Represents a technically innovative concept and has market potential
1 - Does not meet standard: The concept is not technically innovative and has low potential for adoption in the market.
2: The concept has a variety of technical risks or has limited potential for adoption in the market.
3: The concept has minor technical risks or has some potential for adoption in the market.
4 - Meets Standard: The concept is technically innovative and has high potential for adoption in the market.
Potential to fulfill a need in K-12 education
1 - Does not meet standard: The concept is not relevant to K-12 education.
2: The concept makes a case for fulfilling a need in K-12 education, but there is little clear logic or evidence to support the idea.
3: The concept makes a strong, logical case for fulfilling a need in K-12 education or learning in a new manner.
4 - Meets Standard: The concept makes a strong, logical, and evidence-based case for fulfilling a need in K-12 education and learning in a transformational manner.
Potential to support learner variability of underserved learners
1 - Does not meet standard: The application does not describe how it may support learner variability or underserved learners.
2: The concept may support learner variability or underserved learners. It provides some logic on how the problem is relevant to diverse learners.
3: The concept is likely to support learner variability, especially for underserved learners. It provides some evidence or examples on how the problem is relevant to underserved learners.
4 - Meets Standard: The concept is very likely to support learner variability, especially for underserved learners. It provides strong evidence on how the problem is relevant to underserved learners.
Participatory design practices
1 - Does not meet standard: The team does not adequately describe an understanding of the underserved populations and contexts their concept is intended to serve, or show a commitment to work with underserved educators and learners as equal partners in their prototype design.
2: The team has limited understanding of the underserved populations and contexts their concept is intended to serve, or shows a limited commitment to work with underserved educators and learners as equal partners in their prototype design.
3: The team has some professional or lived experiences that relate to the underserved populations and contexts their concept is intended to serve, or shows a commitment to work with underserved educators and learners as equal partners in their prototype design.
4 - Meets Standard: The team has professional or lived experiences that relate to the underserved populations and contexts their concept is intended to serve, and shows a clear commitment to work with underserved educators and learners as equal partners in their prototype design.

Table 3: Discovery Round Criteria

Deliverable: I-Corps 8 Minute Presentation

Team followed the process of customer discovery
1 - Does not meet standard: The team did not perform their interviews to understand the customer ecosystem.
2: The team struggled with the interview process and was unable to locate and procure on-target interviews to the level of depth required.
3: The team completed some good interviews but will need to continue the interview process to build out their understanding of the customer ecosystem.
4 - Meets Standard: The team completed many interviews that were both on target and provided a deep understanding of the customer ecosystem.
The team articulated the potential impact of their idea
1 - Does not meet standard: The team did not clearly identify their potential impact on the education system.
2: While the team stated a purpose, mission, or impact, they were not able to clearly relate the impact to information gleaned from the customer discovery process.
3: The team has solid information and ideas about their potential impact and how the data from interviews supports it, but simply needs more work.
4 - Meets Standard: The team clearly linked their customer discovery learning to potential impact measures.
Problem understanding
1 - Does not meet standard: The team is still in the problem-finding stage of the process. Their interview process didn't uncover an existing problem to solve. They may have a solution looking for a problem.
2: The team has a strong hypothesis for a problem but doesn't yet have enough data to show that the problem is real and current.
3: The team has a stronger understanding of the problem based on evidence but is still learning the customer ecosystem.
4 - Meets Standard: The team clearly articulated a problem that they want to solve, and they understand which parts of the customer ecosystem are users, buyers, and decision-makers.
Business model
1 - Does not meet standard: The team doesn't have a business model hypothesis. They are still in the search phase.
2: The business model is beginning to emerge from the team's presentation but may suffer from desirability, feasibility, or viability issues.
3: The team expressed a good hypothetical business model but needs to do more work to understand the model in more depth.
4 - Meets Standard: The team presented a coherent business model based on their interviews and hypotheses about desirability, feasibility, and viability.
Next steps
1 - Does not meet standard: The team has not been able to advance past the idea phase and may need to restart the process. Or, the team has unrealistic expectations of how to become successful.
2: The team has encountered setbacks during the program and is working diligently to overcome them. They have a solid hypothesis about their next steps and how to gather more data to support it.
3: The team has data, a sense of their business, and potential impact, but needs to work on understanding the critical resources they require to proceed.
4 - Meets Standard: The team is aware of their current situation and what they must do to continue to make progress. They have both short- and long-term milestones defined and an understanding of the resources they need.

Table 4: Semi-Final Round Criteria

Deliverable: Revised Concept or Initial Prototype

Fulfills a need and can support learning in K-12 education
1 - Does not meet standard: The submission does not provide clear goals for impacting learners or other stakeholders in K-12 education.
2: The submission provides goals; however, the goals are not clear and/or the submission does not address how the concept will impact learners or other stakeholders in K-12 education.
3: The submission clearly provides goals; however, the goals do not address how the concept will impact learners or other stakeholders in K-12 education.
4 - Meets Standard: The submission clearly defines the intended goals and logic for how the solution can impact learners or other education stakeholders. The submission also clearly defines the need and why it is important.
Supports learner variability
1 - Does not meet standard: The concept does not show support for learner variability.
2: The concept shows some limited support for learner variability.
3: The concept shows some supports for learner variability across more than one element of the whole child.
4 - Meets Standard: The concept shows multiple evidence-based supports for learner variability across varied elements of the whole child.
Supports underserved learners
1 - Does not meet standard: The team has not integrated evidence on how the prototype is likely to support underserved learners.
2: The team has integrated limited evidence on how the prototype is likely to support underserved learners.
3: The team has integrated some new evidence on how the prototype is likely to support underserved learners.
4 - Meets Standard: The team has integrated strong new evidence on how the prototype is likely to support underserved learners.
Participatory design practices
1 - Does not meet standard: The submission does not show sufficient evidence of using or engaging in participatory design.
2: The submission provides limited evidence of basic participatory design practices with educators, or shows few changes to their prototype design as a result.
3: The submission provides evidence of some participatory design practices with educators or other stakeholders, and shows some ways these practices have improved their prototype.
4 - Meets Standard: The submission provides evidence of strong participatory design practices with educators and other stakeholders, and shows how these practices have improved their prototype.

Table 5: Final Round Criteria

Deliverables: Minimum Viable Prototype and Pitch

Product-market fit and fulfills a need
1 - Does not meet standard: The submission represents a minimum viable prototype but does not demonstrate potential for market adoption or evidence of product-market fit.
2: The submission represents a minimum viable prototype that has limited potential for market adoption or evidence of product-market fit.
3: The submission represents a minimum viable prototype that is market testable and has reasonable potential for market adoption and evidence of product-market fit.
4 - Meets Standard: The submission represents a minimum viable prototype that is market testable and has strong potential for market adoption and evidence of product-market fit. The submission also demonstrates strong alignment to the intended goals for impacting learners or other education stakeholders.
Improves student learning
1 - Does not meet standard: The prototype has limited potential to impact teaching and learning.
2: The prototype has some potential to impact teaching and learning.
3: The prototype has the potential to meaningfully impact teaching and learning for underserved learners and contexts.
4 - Meets Standard: The prototype has strong potential to transformationally impact teaching and learning for underserved learners and contexts.
Uses participatory and evidence-based design practices
1 - Does not meet standard: The submission does not represent a minimum viable prototype with plans for future collection of feedback.
2: The submission represents a minimum viable prototype that provides a limited plan for future collection of feedback.
3: The submission represents a minimum viable prototype that provides a basic plan for feedback loops to guide future development.
4 - Meets Standard: The submission represents a minimum viable prototype that provides a plan for participatory and evidence-based feedback loops to guide future development.
Demonstrates functioning market-testable prototype
1 - Does not meet standard: The team does not have the capacities or business plan to productize the prototype and take it to market.
2: The team has some capacities and a limited business plan to productize the prototype and take it to market.
3: The team has many key capacities and a business plan to successfully productize the prototype and take it to market.
4 - Meets Standard: The team has strong key capacities and a clear business plan to successfully productize the prototype and take it to market.

2.4 Prizes

Discovery round prizes

During the Discovery Round, up to one hundred (100) teams will receive in-kind training and support.

Semi-final round prizes

During the Semi-Final Round, up to fifty-four (54) teams will receive $20,000 in seed funding to be used towards research & development (“R&D”) activities associated with the Challenge. Educator Mentors will also receive a $15,000 stipend during this round to support the team.

Final round prizes

During the Final Round, up to eighteen (18) teams will receive an additional $50,000 in seed funding to be used towards development and commercialization activities associated with the Challenge. Educator Mentors will also receive a $10,000 stipend during this round.

Grand prize

At the conclusion of the Challenge, three (3) teams from each Translational Learning Technology Track will receive final prize winnings of up to $250,000 per team.

Table 6: Prizes and Mentorship Stipends

Phase/Prize          Number of Teams   Seed Funding/Prize Per Team     Mentor Stipend Per Team
Discovery Round      100               In-kind training and support    -
Semi-Final Round     54                $20,000                         $15,000
Final Round          18                $50,000                         $10,000
First Place Prize    3                 $250,000                        -
Second Place Prize   3                 $150,000                        -
Third Place Prize    3                 $100,000                        -