Industry Leader in Software Development Skills Assessment Introduces Real-World Benchmark of AI Software Development Capabilities

CUPERTINO, Calif., Feb. 11, 2025 (GLOBE NEWSWIRE) -- HackerRank, the Developer Skills Company, today introduced its new ASTRA Benchmark. ASTRA, which stands for Assessment of Software Tasks in Real-World Applications, is designed to evaluate the ability of advanced AI models, such as ChatGPT, Claude, and Gemini, to perform tasks across the entire software development lifecycle.

The ASTRA Benchmark consists of multi-file, project-based problems designed to mimic real-world coding tasks. It is intended to measure the correctness and consistency of an AI model's coding ability on practical applications.

"With the ASTRA Benchmark, we're setting a new standard for evaluating AI models,” said Vivek Ravisankar, co-founder and CEO of HackerRank. "As software development becomes more human + AI, it's important that we have a very good understanding of the combined abilities. Our experience pioneering the market in assessing software development skills makes us uniquely qualified to assess the abilities of AI models acting as agents for software developers.”

A key highlight from the benchmark: OpenAI's o1 was the top performer, while Claude 3.5 Sonnet produced more consistent results.

Key features of ASTRA Benchmark include:

  • Diverse skill domains: The current version includes 65 project-based coding questions, primarily focused on front-end development. These questions are categorized into 10 primary coding skill domains and 34 subcategories.
  • Multi-file project questions: To mimic real-world development, ASTRA's dataset includes an average of 12 source code and configuration files per question as model inputs, and solutions average 61 lines of code per question.
  • Model correctness and consistency evaluation: To provide a more precise assessment, ASTRA prioritizes comprehensive metrics such as average scores, average pass@1, and median standard deviation (a brief illustration of these metrics follows this list).
  • Wide test case coverage: ASTRA's dataset contains an average of 6.7 test cases per question, designed to rigorously evaluate the correctness of implementations.
  • Benchmark results: For a full report and analysis of the initial benchmark results, please visit hackerrank.com/ai/astra.
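
The release does not spell out how these metrics are computed. Conventionally, pass@1 is the share of a model's attempts at a question that pass every test case, averaged over questions, and the median standard deviation summarizes run-to-run consistency. A minimal sketch in Python, assuming each model is run several times per question and each run records the fraction of test cases passed plus whether all passed (the names and data shapes here are hypothetical, not HackerRank's published methodology):

    import statistics

    # scores[question_id] = list of per-run results for one model;
    # each run is a tuple (fraction_of_test_cases_passed, passed_all).
    # Hypothetical structure for illustration only.
    def summarize(scores):
        # average score: mean over questions of the mean per-run score
        avg_score = statistics.mean(
            statistics.mean(frac for frac, _ in runs)
            for runs in scores.values()
        )
        # pass@1 per question: share of runs that pass every test case
        pass_at_1 = statistics.mean(
            sum(ok for _, ok in runs) / len(runs)
            for runs in scores.values()
        )
        # consistency: median over questions of the run-to-run std dev
        median_std = statistics.median(
            statistics.pstdev(frac for frac, _ in runs)
            for runs in scores.values()
        )
        return avg_score, pass_at_1, median_std

Under this reading, a lower median standard deviation indicates more consistent output across repeated runs, which is how a model could trail on average score yet lead on consistency.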

Ravisankar added, "By open-sourcing our ASTRA Benchmark, we're offering the AI community the opportunity to run their models against a high-quality, independent benchmark. This supports the continued advancement of AI while fostering more collaboration and transparency in the AI community to ensure the integrity of new models."

For more information about HackerRank's ASTRA Benchmark, contact [email protected].

About HackerRank

HackerRank, the Developer Skills Company, leads the market with over 2,500 customers and a community of over 25 million developers. Having pioneered this space, HackerRank is trusted by companies to help them set up a skills strategy, showcase their brand to developers, implement a skills-based hiring process, and ultimately upskill and certify employees, all driven by AI. Learn more at hackerrank.com.

Note to editors: Trademarks and registered trademarks referenced herein remain the property of their respective owners. Interview requests will be coordinated through the media contacts listed below.

Media Contact:

Kate Achille

The Devon Group for HackerRank

[email protected]