Why Most Technical Interviews Are Broken (And What Companies Should Do Instead)
Published on BirJob.com · March 2026 · by Ismat
Last month a developer friend of mine — someone I'd trust to build production systems in his sleep — failed a technical interview at a Baku-based fintech company. The reason? He couldn't implement a red-black tree on a whiteboard in 20 minutes. He's been writing production code for six years. He's never once needed a red-black tree.
He called me that evening, half angry and half deflated. "Am I actually bad at my job?" he asked. No. He's excellent at his job. The interview was bad at its job.
I've been on both sides of technical interviews now — as a candidate who's bombed them and as someone who's built a system (BirJob) complex enough that I occasionally have to evaluate technical people who want to help. And I've come to a conclusion that I think the data supports: most technical interviews are fundamentally broken. They measure the wrong things, they're biased in ways nobody wants to talk about, and there are better alternatives that companies just refuse to adopt.
This is going to be a controversial article. I'm okay with that.
The Problem, in Numbers
I analyzed 1,847 tech job postings from BirJob's database and cross-referenced them with interview process descriptions (from Glassdoor reviews, LinkedIn posts, and direct conversations with 23 developers who interviewed at Azerbaijani companies in 2025-2026).
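The aggregation itself was nothing fancy: tag each interview-process description with stage keywords, then count. A minimal sketch of the approach (the file name, column name, and keyword patterns here are hypothetical; the real dataset and pipeline aren't public):

```python
import pandas as pd

# Hypothetical file and schema; the real BirJob dataset is not public.
df = pd.read_csv("postings_with_interview_notes.csv")  # one row per posting

# Case-insensitive keyword patterns per interview stage (illustrative only).
STAGE_PATTERNS = {
    "algorithm/DS coding test": r"leetcode|algorithm|data structure|coding test",
    "technical knowledge quiz": r"quiz|technical questions",
    "take-home project": r"take.?home|home assignment|test task",
    "system design discussion": r"system design",
    "pair programming": r"pair programming|live coding",
}

for stage, pattern in STAGE_PATTERNS.items():
    hits = df["interview_notes"].str.contains(pattern, case=False, na=False)
    print(f"{stage}: {hits.mean():.0%} of {len(df)} postings")
```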
Here's what the typical tech interview pipeline looks like in Azerbaijan:
| Stage | % of Companies That Use It | Avg. Time Spent |
|---|---|---|
| HR phone screen | 89% | 15 min |
| Algorithm/DS coding test | 64% | 60-90 min |
| Technical knowledge quiz | 52% | 30-45 min |
| Take-home project | 27% | 4-8 hrs |
| System design discussion | 18% | 45 min |
| Pair programming | 8% | 60 min |
| Portfolio/project review | 11% | 30 min |
64% of companies use algorithm and data structure coding tests as their primary technical evaluation. Let me say that differently: nearly two-thirds of Azerbaijani tech companies decide who to hire based on whether candidates can solve puzzles they'll never encounter on the job.
Why Algorithm Interviews Don't Work
I'm not the first person to say this. But I've got a specific angle most critics miss.
1. They measure preparation, not ability. Anyone can learn to solve LeetCode problems. You grind 200 problems over two months, you memorize the patterns, and you pass. This tests discipline and free time, not engineering talent. The best LeetCode solver I know is a mediocre developer who happens to be extremely good at pattern recognition on artificial problems.
2. They're biased toward recent CS graduates. If you graduated last year, algorithms and data structures are fresh in your mind. If you've been building real products for five years, you've forgotten the implementation details of Dijkstra's algorithm, because you've been using libraries that implement it for you (see the sketch after this list). This isn't a weakness. This is how professional development works. We build on abstractions.
3. They're biased toward people with free time. A 24-year-old living with parents has weeks to grind LeetCode. A 32-year-old with two kids and a full-time job has... evenings. Maybe. The interview process systematically favors candidates with the least life responsibilities, not the most capability.
4. The correlation with job performance is weak. Google famously studied this. Their own research (reported by Laszlo Bock, former SVP of People Operations, in Work Rules!) found that structured behavioral interviews predicted job performance, while brainteaser-style puzzle questions predicted essentially nothing. Google. The company that did more than any other to popularize the modern algorithm interview. Even they found it doesn't work.
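To make point 2 concrete, here's what "using Dijkstra's algorithm" actually looks like in a working developer's day: you call a library, you don't re-derive the priority-queue invariants on a whiteboard. A toy graph with standard networkx calls:

```python
import networkx as nx

# A small weighted graph: edges are (from, to, weight).
G = nx.Graph()
G.add_weighted_edges_from([
    ("A", "B", 4), ("A", "C", 2),
    ("C", "B", 1), ("B", "D", 5), ("C", "D", 8),
])

# This is the whole "implementation" in practice.
path = nx.shortest_path(G, "A", "D", weight="weight", method="dijkstra")
cost = nx.shortest_path_length(G, "A", "D", weight="weight")
print(path, cost)  # ['A', 'C', 'B', 'D'] 8
```

The skill worth testing is knowing when shortest-path is the right model for a problem, not reciting the relaxation step from memory.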
The "Technical Quiz" Problem
Azerbaijani companies love technical knowledge quizzes. Fifty-two percent use them. These are questions like "What's the difference between an abstract class and an interface?" or "Explain the CAP theorem" or "What are SOLID principles?"
These questions test memorization, not understanding. I can recite SOLID principles in my sleep. That doesn't mean my code follows them. Conversely, I know developers who write beautifully architected code but would struggle to name all five principles on demand.
The worst version of this is the "gotcha" question. "What's the output of this JavaScript snippet?" where the snippet involves some obscure hoisting edge case that no reasonable developer would write in production. You're testing whether someone has memorized JavaScript trivia, not whether they can build anything useful.
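The article's genre example is JavaScript hoisting; to keep the sketches here in one language, the closest Python equivalent is the mutable default argument, a quiz staple that a sane code review process catches anyway:

```python
def add_tag(tag, tags=[]):  # the gotcha: the default list is created once, at def time
    tags.append(tag)
    return tags

print(add_tag("python"))  # ['python']
print(add_tag("sql"))     # ['python', 'sql'], not ['sql']: state leaks between calls
```

Knowing this piece of trivia and writing maintainable software are almost entirely separate skills.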
I made this mistake myself when I first started evaluating contributors for BirJob. I asked algorithm questions because that's what I thought you were supposed to do. The candidate who aced the algorithm questions wrote scraper code that was technically correct but completely unmaintainable. The candidate who couldn't solve the algorithm problem wrote clean, well-structured code that's still running in production today.
That experience permanently changed how I evaluate people.
The Take-Home Project Trap
Take-home projects sound better in theory. Give the candidate a real-world problem, let them work on their own schedule, evaluate the output. What could go wrong?
A lot.
First, the time investment. Most take-home projects advertised as "2-4 hours" actually take 6-10 hours if you do them properly. I've done take-homes where the "small project" required setting up a database, building a REST API, writing a frontend, adding tests, and documenting everything. That's not a take-home project — that's unpaid labor.
Second, they advantage people who can copy. Nothing stops a candidate from using someone else's code, asking ChatGPT, or paying a freelancer. Companies know this but pretend they don't.
Third, they're brutally disrespectful of candidates' time. If a candidate is interviewing at three companies (which is normal in an active job search), and each gives a 6-hour take-home, that's 18 hours of unpaid work on top of their current job. This is why senior developers — the ones you most want to hire — often refuse to do take-homes.
What Actually Predicts Job Performance
Okay, enough criticism. What should companies do instead?
Based on research (I'll cite sources at the bottom), conversations with hiring managers who've refined their processes, and my own experience, here's what the evidence supports:
1. Structured Behavioral Interviews
"Tell me about a time you had to debug a production issue under time pressure. What happened, what did you do, what was the outcome?"
These questions measure actual experience, problem-solving approach, and communication skills. They can't be gamed as easily as algorithm questions. And research consistently shows they're among the strongest predictors of job performance.
The key word is structured. Every candidate gets the same questions. Responses are scored on the same rubric. This removes the "vibes-based hiring" problem where interviewers just chat and then decide based on whether they liked the person.
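Here's what "structured" can look like as an artifact, sketched as code. The criteria and anchors are illustrative, not a validated rubric:

```python
# Illustrative rubric: same question, same criteria, same anchors for everyone.
RUBRIC = {
    "diagnosis":     {1: "no method", 3: "systematic but slow", 5: "hypothesis-driven"},
    "ownership":     {1: "blamed others", 3: "patched the symptom", 5: "fixed root cause"},
    "communication": {1: "hard to follow", 3: "clear with prompting", 5: "clear, structured"},
}

def score(answers: dict[str, int]) -> float:
    """Every interviewer fills the same sheet; no criterion may be skipped."""
    assert answers.keys() == RUBRIC.keys(), "score every criterion"
    assert all(1 <= v <= 5 for v in answers.values())
    return sum(answers.values()) / len(answers)

print(score({"diagnosis": 4, "ownership": 5, "communication": 3}))  # 4.0
```

The point isn't the arithmetic; it's that two interviewers scoring the same answer should land within a point of each other.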
2. Pair Programming on Real Problems
Only 8% of Azerbaijani tech companies use pair programming interviews. This is a missed opportunity.
Give the candidate a real bug from your codebase (anonymized if needed). Sit with them while they debug it. Watch how they read code, form hypotheses, test assumptions, and communicate their thinking. This tells you more about how they'll actually work at your company than any whiteboard algorithm ever will.
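A hypothetical example of the kind of exercise that works well here: small, realistic, self-contained (this is not BirJob's actual code). You hand the candidate the bug report "postings from different companies are vanishing from the feed" and this function:

```python
# Planted bug: the dedup key ignores the company, so two different companies
# posting the same job title collapse into one listing.

def dedupe_jobs(jobs: list[dict]) -> list[dict]:
    seen, unique = set(), []
    for job in jobs:
        key = job["title"].lower()  # BUG: key should be (title, company)
        if key not in seen:
            seen.add(key)
            unique.append(job)
    return unique

jobs = [
    {"title": "Backend Developer", "company": "FinCo"},
    {"title": "Backend Developer", "company": "RetailCo"},  # silently dropped
]
print(len(dedupe_jobs(jobs)))  # 1, should be 2
```

Whether the candidate reproduces the bug first, reads the calling code, or reaches straight for a fix tells you more than any whiteboard tree rotation.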
Yes, it requires more effort from the interviewer. That's the point. If you're not willing to invest an hour of a senior engineer's time to make a good hiring decision, your hiring process is optimized for convenience, not quality.
3. Portfolio and Project Review
This is my favorite and it's criminally underused (11% of companies).
Ask the candidate to walk you through something they've built. Not a tutorial project — something they actually shipped. Dig into the technical decisions. "Why did you use Postgres instead of MongoDB here?" "What would you change if you rebuilt this?" "What's the worst bug you shipped?"
This approach has massive advantages: it evaluates real work, it tests communication skills, it reveals how the candidate thinks about tradeoffs, and it's impossible to fake (you can fake a LeetCode solution, but you can't fake deep understanding of a system you didn't build).
The objection I hear: "What about candidates who don't have public projects?" Fair point. But in 2026, most serious developers have something — a GitHub profile, a personal site, a contribution to an open-source project. If they truly have nothing, the take-home project makes sense as a last resort — but keep it under 3 hours, pay for the candidate's time, and actually review it thoughtfully.
4. Paid Trial Days
Some companies in Azerbaijan have started doing this and I think it's the future.
Instead of an interview, hire the candidate for one paid day. Give them a real task (not a critical one, but a real one). See how they work. Do they ask good questions? Do they read the codebase before writing code? Do they communicate blockers? Do they write tests?
One day tells you more than five hours of interviews. It's the closest simulation to actual employment that exists. And paying for it (even at a day rate of 100-200 AZN) shows respect for the candidate's time.
I've heard objections: "What about confidentiality?" Use non-sensitive tasks. "What about the time investment?" It's less total time than a multi-round interview process. "What about candidates who can't take a day off?" Schedule it on a weekend and pay a premium.
The Azerbaijan-Specific Problem
Here's something nobody in the global discourse about broken interviews is talking about. In Azerbaijan, there's an additional layer of dysfunction: interviews as power displays.
I've heard this from multiple candidates and, off the record, from a few recruiters who were honest about it. Some technical interviewers — not all, but enough to be a pattern — use the interview to demonstrate their own knowledge rather than evaluate the candidate's. They ask impossibly obscure questions, then lecture the candidate on the answer. The interview becomes a performance by the interviewer, not an evaluation of the candidate.
This isn't unique to Azerbaijan. It happens everywhere. But in a smaller market where the same interviewers keep showing up at different companies, it's particularly toxic. Candidates share names. "Oh, you're interviewing at X? Watch out for Y, he'll spend 40 minutes explaining why your approach is wrong instead of letting you explain your approach."
Companies: if your technical interviewers are doing this, you're losing good candidates. Fix it. Train your interviewers. Or better yet, have someone sit in on interviews and provide feedback to the interviewer, not just the candidate.
A Better Interview Process: My Proposal
If I were building a technical hiring process from scratch (and I have, for BirJob's small team), here's what it would look like:
Stage 1: Resume/portfolio review (async, 15 min). Does this person have relevant experience or projects? Skip the HR screen for technical roles — have a technical person do the initial review.
Stage 2: 30-minute video call. Not technical. Just a conversation. What have you built? What problems do you enjoy solving? What do you want to learn? This is where you evaluate communication, curiosity, and culture fit.
Stage 3: Pair programming session (60 min). Real codebase, real problem. The interviewer works alongside the candidate, not as an adversary but as a collaborator. Score on: problem decomposition, code quality, communication, handling of ambiguity.
Stage 4: Paid trial day (optional, for senior roles). One day, real tasks, real team interaction. This is the final evaluation.
Total candidate time: under two hours of interviews across two sessions, plus one paid day for senior roles. Compare that to the 15-20 hours some companies put candidates through (algorithm test + take-home + three interview rounds + culture fit interview + final round with management). It's faster, more respectful, and more predictive.
What Candidates Can Do in the Meantime
I know most of you reading this aren't hiring managers. You're the candidates dealing with broken interviews. Here's practical advice:
Grind LeetCode if you must, but strategically. Focus on the top 50 most common problems, not all 2,000+. Most companies reuse the same patterns. Spending 200 hours on LeetCode buys you little that the first 40 hours didn't.
Build something public. A project that you can walk through in detail. If an interviewer lets you redirect the conversation to your projects, you're on your strongest ground.
Ask about the interview process upfront. "Could you walk me through your technical interview format?" Any company that gets defensive about this question is waving a red flag.
Decline disrespectful processes. If a company gives you a 15-hour take-home project, it's okay to say "I'd be happy to do a shorter exercise or walk you through an existing project instead." The best companies will accommodate this. The companies that won't... well, think about what that tells you about how they value people's time.
Browse IT job listings on BirJob and you'll notice which companies mention their interview process in the posting. Those companies tend to have more thoughtful hiring practices overall. It's a useful signal.
The Uncomfortable Conclusion
Most companies don't fix their interview processes because the current process works well enough. They hire people. Some of those people are good. The ones who aren't get managed out. The cost of bad interviews is invisible — you never see the excellent engineer who failed your algorithm test and went to a competitor.
But "well enough" has a cost. Every false negative is a missed opportunity. Every great developer who fails a red-black tree whiteboard question is a developer building software for someone else. In a small market like Azerbaijan, where the total pool of experienced developers is maybe 15,000-20,000 people, you can't afford to waste a single good candidate on a bad process.
Fix your interviews. Or don't, and wonder why your competitors keep building better products with the people you rejected.
Sources
- Bock, L. "Work Rules!" (2015) — Google's research on interview predictiveness
- BirJob.com analysis of 1,847 tech job postings, 2025-2026
- 23 structured interviews with Azerbaijani developers, January-February 2026
- Schmidt, F.L. & Hunter, J.E. "The Validity and Utility of Selection Methods in Personnel Psychology" (1998; updated working paper, 2016)
- Behroozi et al. "Does Stress Impact Technical Interview Performance?" (North Carolina State University, 2020)
- Glassdoor interview reviews for Azerbaijani tech companies, 2024-2026
I'm Ismat, and I build BirJob — a job aggregator that scrapes 80+ Azerbaijani job sites so you don't have to. If this helped, check our blog for more.
