I first met HackerRank back in 2016, as a wholly new solution for assessing developer skills—and they’ve been a leader in the category for ages. But the space we’re in and the challenges their customers face are both evolving rapidly. Back then, the future of work was more esoteric, more philosophical than real. I remember during my days at IDC when Lisa Rowan and I were invited to contribute to their new Future of Work practice, and AI and automation were positioned as an extension and enablement of the human workforce… It was a nice idea, but it felt like we were a decade or more from it being more than just that—more than just an idea.
Fast forward to today, and every CHRO across every industry is fielding questions about how AI is integrated into their workforce strategies. Hot on the heels of the rapid emergence of Gen AI use cases across the world of work, we’re swiftly entering the age of the AI agent—and suddenly, the notion of a digital workforce is very, very real. This shift isn’t just a challenge for IT leaders—it’s an urgent priority for HR and talent leaders, who now have to rethink workforce planning, hiring, and the very nature of work itself.
So back to HackerRank.
They recently launched the ASTRA Benchmark, a tool designed to evaluate various AI models’ specific and unique software development capabilities (e.g. GPT-4o vs. Claude 3.5 vs. Gemini 1.5). I had an advance briefing on this new offering—at the same time as Oracle and Workday made huge AI Agent announcements—and I had to share what I see as a major play in our space.
So, What’s ASTRA and Why Should We Care?
ASTRA (Assessment of Software Tasks in Real-World Applications) is HackerRank’s new benchmarking framework for AI models like OpenAI’s GPT-4, Anthropic’s Claude, and Google’s Gemini. But instead of testing them on abstract coding puzzles, ASTRA throws them into the deep end—real-world, project-based coding challenges that mimic what actual developers and programmers face daily.

Here’s why this is a game changer:
- Assessing AIs’ Ability to Do Real Work: Most AI coding benchmarks focus on problem-solving in isolation. ASTRA looks at multi-file projects, giving us a better read on how AI can contribute to real software development.
- Managing AI as a Dev Partner, Not Just a Dev Tool: This isn’t just about whether AI can write code—it’s about its role in the entire development lifecycle. Engineers are shifting from coders to AI orchestrators, and ASTRA helps measure that evolution.
- Accounting for Correctness vs. Consistency: One of the biggest revelations? OpenAI’s latest model (o1) was the most “correct,” but Claude-3.5-sonnet was the most consistent. If you’re a developer, reliability might matter more than occasional flashes of brilliance. Knowing which AI is the best fit for a given use case is how the best work gets done—and that fit can change fast.**
**For example, the HackerRank team sent over an UPDATED leaderboard between the time they briefed me and the publication of this article:
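To make the correctness-vs-consistency distinction concrete, here’s a minimal sketch of how the two measures pull apart. The scores below are entirely made up for illustration—they are not ASTRA results—but they show how one model can post the higher average (correctness) while another is far steadier from run to run (consistency).

```python
from statistics import mean, stdev

# Hypothetical pass rates across five repeated runs of the same task set.
# Illustrative numbers only -- not actual ASTRA leaderboard data.
runs = {
    "model_a": [0.95, 0.60, 0.92, 0.62, 0.96],  # flashes of brilliance, big swings
    "model_b": [0.78, 0.80, 0.77, 0.81, 0.79],  # steadier, more predictable output
}

for name, scores in runs.items():
    # Mean captures "correctness"; standard deviation captures (in)consistency.
    print(f"{name}: correctness={mean(scores):.2f}, stdev={stdev(scores):.3f}")
```

In this toy example, model_a wins on average score but model_b is the one you’d trust to behave the same way tomorrow—exactly the trade-off the leaderboard surfaces.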

When the HackerRank team first shared the details of ASTRA with me, it felt very much like an IT/engineering tool rather than something for HR or TA. But here’s the thing: ASTRA doesn’t just benchmark AI’s coding skills; it provides a window into how AI is shifting job roles and workflows by assessing where and how it works most effectively.
The current use case for solutions like ASTRA is in software development, but I sincerely believe this is just the first place we’ll see it. The same fundamental challenge exists across all knowledge work: how do we integrate AI in a way that enhances, rather than replaces, human expertise?
As I see it, there are three major takeaways for HR and talent leaders:
1. AI is Reshaping Work Faster Than Anyone Expected
Forget automation as an incremental improvement; AI is fundamentally reshaping entire professions—and we need to be tracking this trend closely. Software development is the current example: we’re moving away from dev teams writing code and toward engineers orchestrating AI agents. Sales, marketing, HR—every field is undergoing this shift to varying degrees.
So talent leaders need to take note: We need to be on the front lines of rethinking roles, skill development, and hiring criteria—or fall out of the loop. How do you hire for a job that didn’t exist a year ago? That’s the challenge we’re facing; it’s the very real question we need to be ready to answer. Start by working with business and technology leaders to map out how AI is being used today, what gaps exist, and where human expertise is still critical.
2. AI’s Impact on Hiring is Massive—and Messy
Here’s the kicker: AI is making hiring both easier and harder. On one hand, AI-driven assessments help us evaluate candidates better. On the other hand, AI-assisted cheating is running rampant.
We need to make sure our teams are ready both to make the most of these capabilities, ethically and effectively, and to ensure candidates are doing the same. This means establishing clear guidelines on when and how AI should be used in hiring—not just detecting AI use, but defining acceptable AI collaboration in assessments.
The use of AI in hiring—by job seekers, hiring teams, and/or recruiters—is not just a tech problem; it’s an HR issue begging to be treated as an HR strategy.
I got on my soapbox with the HackerRank team (who totally agreed with me, I’m sure!) about how poorly we managed this same challenge with mobile, social, and video tech… I’m hopeful that we can do better this time by promoting best practices rather than policing any and all use of AI.
3. AI Agents Are Changing Workforce Dynamics—Are You Ready?
The shift from human-only work to AI-augmented work is accelerating, and now AI isn’t just assisting—it’s acting as an agent, making decisions, generating code, and performing tasks once limited to skilled professionals. This isn’t a future problem; it’s today’s reality. HR needs to be thinking beyond hiring and upskilling—it’s about workforce planning at a whole new level.
How will AI agents be integrated into teams? What skills are essential for managing AI-driven workflows? How does this impact performance measurement, collaboration, and job design? These are questions HR leaders need to answer now, not later. Just as ASTRA is benchmarking AI’s coding capabilities, HR needs frameworks for assessing AI’s role in the workforce and ensuring human-AI collaboration is productive, ethical, and sustainable.
Final Thoughts
The ASTRA Benchmark isn’t just a technical innovation—it’s a wake-up call for those of us paying attention. AI is evolving at a breakneck pace, no doubt, and its impact on work is no longer theoretical. With AI agents emerging that are ready to take on more sophisticated work and more autonomous roles, HR leaders need to play an active role in shaping how AI integrates into their organizations’ workforce strategies.
Rethinking job roles, redefining skills, and ensuring fair, ethical hiring practices that account for AI’s influence: that’s the work ahead. The opportunity is clear: AI isn’t just reshaping how work gets done; it’s redefining who (or what) is doing it.
When I first met Vivek, questions about AI’s role in the workforce felt abstract—something to debate, not something to solve. Now AI isn’t just assisting work; it’s actively shaping how work gets done. The need for answers isn’t theoretical anymore—it’s immediate. Tools like ASTRA matter because they don’t just help us understand AI’s capabilities; they help us prepare for the workforce shifts already happening. The future of work isn’t ahead of us; it’s here, and it’s moving faster than we could have imagined.
Author
Kyle Lagunas is the Head of Strategy & Principal Analyst at Aptitude Research. He’s spent over a decade studying innovation cycles in HR and talent technology—with leadership roles in both the solution provider and practitioner space—and brings that breadth of experience and insight to his work advising vendors and practitioners on the ever-evolving world of talent. He’s a transformational talent leader, top industry analyst, and no-nonsense strategist with a reputation for bringing a fresh perspective to the table. Before joining Aptitude, he served as the Head of Talent Attraction, Sourcing & Insight at General Motors, where he led the go-to-market functions of the company’s global talent acquisition and played a pivotal role in transforming their recruiting strategies and processes. At Beamery, his role as Director of Strategy put him at the intersection of customer experience, sales and marketing, and product. At Aptitude, Kyle brings together all aspects of his career to bridge the ever-growing gap between talent leaders, their stakeholders, and their solution providers to power more meaningful outcomes.