How to Design APIs That Developers Actually Love
Last year I rebuilt the entire BirJob scraper backend. We hit 80+ job sources, each with its own quirky API or HTML structure, and I had to design an internal API layer that would keep things sane. Along the way, I consumed hundreds of third-party APIs — some beautiful, some nightmarish. The difference between a good API and a bad one is not cleverness. It is empathy.
This article is a distillation of what I have learned from building, consuming, and cursing at APIs. Whether you are designing a public REST API, an internal microservice contract, or a GraphQL schema, the principles here will help you build something developers genuinely enjoy using — not just tolerate.
1. Why API Design Matters More Than You Think
APIs are the user interfaces for developers. Just as a confusing button layout drives users away from an app, a confusing endpoint structure drives developers away from your platform. According to Postman's 2024 State of the API Report, 74% of organizations now consider themselves "API-first," meaning APIs are designed before the code that implements them. That is a dramatic shift from five years ago, when APIs were often an afterthought bolted onto existing codebases.
The business case is straightforward. Nordic APIs estimates that the global API economy generates over $14 billion in revenue annually, with Stripe, Twilio, and Plaid serving as poster children. These companies did not win because they solved unique problems — payment processing, SMS, and banking data existed before them. They won because their APIs were a joy to integrate.
Poor API design, on the other hand, has measurable costs. A SmartBear survey found that developers spend an average of 8.5 hours per week working with APIs, and roughly 30% of that time is spent debugging integration issues that better design could have prevented. That is nearly three hours per developer per week wasted; in a team of ten, more than 25 hours of lost productivity every single week.
2. The Five Pillars of Developer-Loved APIs
After years of building and integrating APIs, I have identified five core principles that separate beloved APIs from merely functional ones:
| Pillar | What It Means | Anti-Pattern |
|---|---|---|
| Consistency | Same patterns everywhere — naming, errors, pagination | Mixing camelCase and snake_case across endpoints |
| Predictability | Developers can guess how things work without reading docs | Using POST for read operations, GET for mutations |
| Debuggability | When things go wrong, errors tell you exactly what and why | Returning {"error": "Something went wrong"} |
| Evolvability | The API can grow without breaking existing integrations | No versioning strategy, breaking changes in-place |
| Documentation | Complete, accurate, with runnable examples | Auto-generated docs with no context or examples |
Let us dig into each of these with concrete examples and code.
3. Consistency: The Foundation of Trust
Consistency is the single most important attribute of a well-designed API. When an API is consistent, developers learn the pattern once and can apply it everywhere. When it is inconsistent, every endpoint feels like a new puzzle.
Naming Conventions
Pick one naming convention and stick to it across your entire API. The most common choice for REST APIs is snake_case for JSON fields; for URL paths, either snake_case or kebab-case works, as long as you never mix them. Stripe uses snake_case for both, and they are widely regarded as having one of the best APIs in the industry:

```
GET /v1/payment_intents/{id}

Response: { "payment_method": "pm_xxx", "amount_received": 1000, "created_at": 1679012345 }
```
Compare that with an API I once integrated that used camelCase in some responses, PascalCase in others, and snake_case in yet others — sometimes within the same response body. Every field access required checking the docs, and the cognitive load was exhausting.
Resource Naming
Use plural nouns for collections, singular for sub-resources. Be specific. /users is better than /getUsers because the HTTP method already communicates the action:
```
GET    /users       → List users
POST   /users       → Create a user
GET    /users/{id}  → Get a specific user
PATCH  /users/{id}  → Update a user
DELETE /users/{id}  → Delete a user
```
This pattern is so standard that any experienced developer can guess your endpoints without reading documentation. That is the power of consistency.
Response Envelope
Wrap all responses in a consistent envelope structure. Here is a pattern I have used successfully:
```json
// Success
{
  "data": { ... },
  "meta": { "request_id": "req_abc123", "timestamp": "2026-03-26T10:00:00Z" }
}

// Collection
{
  "data": [ ... ],
  "meta": { "total": 150, "page": 1, "per_page": 20 },
  "links": { "next": "/users?page=2", "prev": null }
}

// Error
{
  "error": {
    "code": "VALIDATION_FAILED",
    "message": "The request body contains invalid fields.",
    "details": [
      { "field": "email", "issue": "Must be a valid email address" }
    ]
  },
  "meta": { "request_id": "req_abc123" }
}
```
When every response follows this structure, consumers can write generic error handling and pagination logic once. The request_id in meta is crucial for debugging — it lets support teams correlate a user's complaint with server logs instantly.
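To show what that generic handling can look like on the consumer side, here is a minimal Python sketch that unwraps any response following this envelope (the ApiError class and function names are illustrative, not part of any SDK):

```python
# Illustrative client-side helper for the envelope structure above.
class ApiError(Exception):
    """Raised for error envelopes; carries the fields support teams need."""
    def __init__(self, code, message, request_id=None, details=None):
        super().__init__(f"{code}: {message}")
        self.code = code
        self.request_id = request_id   # correlate with server logs
        self.details = details or []

def unwrap(body: dict):
    """Return the `data` payload, or raise ApiError for error envelopes."""
    meta = body.get("meta", {})
    if "error" in body:
        err = body["error"]
        raise ApiError(
            err.get("code", "UNKNOWN"),
            err.get("message", ""),
            request_id=meta.get("request_id"),
            details=err.get("details"),
        )
    return body["data"]
```

Because every endpoint uses the same envelope, this one function covers the entire API surface.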
4. Error Design: Where Most APIs Fail
I have a strong opinion here: error handling is the most neglected aspect of API design, and it is the area where good design pays the biggest dividends. A developer spends 10% of their time on the happy path and 90% debugging errors. If your errors are unhelpful, you are making 90% of their experience miserable.
The Hierarchy of Error Quality
| Level | Example | Developer Experience |
|---|---|---|
| Terrible | 500 Internal Server Error | No information at all |
| Bad | {"error": "Bad request"} | Knows something is wrong, not what |
| Okay | {"error": "Invalid email format"} | Knows the field, not the fix |
| Good | {"error": {"code": "INVALID_EMAIL", "field": "email", "message": "Email must contain @"}} | Knows exactly what to fix |
| Excellent | Above + "docs_url": "https://api.example.com/errors/INVALID_EMAIL" | Can self-serve the solution |
Stripe nails this. Every error includes a type, code, message, and a doc_url that links to a page explaining what went wrong and how to fix it. According to their developer experience team, this single feature reduced support tickets related to integration issues by over 40%.
Use HTTP Status Codes Correctly
This seems obvious, but I have seen production APIs that return 200 OK with an error message in the body. Use status codes as they were intended:
```
200 → Success (GET, PATCH that returns data)
201 → Created (POST that creates a resource)
204 → No Content (DELETE, or PATCH with no response body)
400 → Bad Request (validation errors, malformed JSON)
401 → Unauthorized (missing or invalid authentication)
403 → Forbidden (authenticated but not authorized)
404 → Not Found (resource does not exist)
409 → Conflict (duplicate resource, race condition)
422 → Unprocessable Entity (valid JSON but semantic errors)
429 → Too Many Requests (rate limited)
500 → Internal Server Error (your bug, not theirs)
```
The distinction between 401 and 403 matters. The distinction between 400 and 422 matters. These codes exist so that client code can handle different failure modes programmatically without parsing error messages.
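As a sketch of what that programmatic handling looks like, here is a small Python classifier mapping the codes above to client behavior (the category names are mine, chosen for illustration):

```python
def classify(status: int) -> str:
    """Coarse client-side classification of an HTTP status code."""
    if status in (200, 201, 204):
        return "success"
    if status == 401:
        return "reauthenticate"   # credentials missing or expired
    if status == 403:
        return "forbidden"        # authenticated but not allowed
    if status in (400, 404, 409, 422):
        return "fix_request"      # client-side bug; retrying will not help
    if status == 429:
        return "backoff"          # rate limited; retry after a delay
    if status >= 500:
        return "retry"            # server bug; safe to retry idempotent calls
    return "unknown"
```

Note that the 4xx family splits into three very different behaviors, which is exactly why collapsing everything into one status code hurts clients.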
5. Pagination Done Right
Pagination is one of those features that seems simple but has surprising depth. There are three common approaches, each with trade-offs:
| Method | Pros | Cons | Best For |
|---|---|---|---|
| Offset-based (?page=3&per_page=20) | Simple, allows jumping to any page | Slow on large datasets, inconsistent if data changes | Admin dashboards, small datasets |
| Cursor-based (?after=cursor_abc) | Fast, consistent even with data changes | Cannot jump to arbitrary pages | Feeds, timelines, large datasets |
| Keyset-based (?created_after=2026-01-01) | Very fast, natural for time-series | Requires a sortable unique field | Logs, events, analytics data |
For most APIs, I recommend cursor-based pagination as the default. It scales well, handles real-time data correctly, and the implementation is not significantly more complex than offset-based. Here is what the response should look like:
```json
{
  "data": [ ... ],
  "meta": { "has_more": true },
  "links": {
    "next": "/jobs?after=eyJpZCI6MTIzfQ&limit=20"
  }
}
```
The cursor itself should be opaque — an encoded string that means nothing to the consumer. This gives you freedom to change the underlying implementation without breaking clients. Base64-encoding a JSON object like {"id": 123, "created_at": "2026-01-15"} works well.
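A minimal Python sketch of that encoding, assuming a keyset of id and created_at (the helper names are illustrative, not a library API):

```python
import base64
import json

def encode_cursor(last_row: dict) -> str:
    """Base64-encode the keyset of the last returned row into an
    opaque cursor string handed back to the client."""
    payload = {"id": last_row["id"], "created_at": last_row["created_at"]}
    return base64.urlsafe_b64encode(json.dumps(payload).encode()).decode()

def decode_cursor(cursor: str) -> dict:
    """Server side: recover the keyset to resume the query after it."""
    return json.loads(base64.urlsafe_b64decode(cursor.encode()))
```

Because the client never inspects the cursor, you can later change the payload (for example, adding a snapshot ID) without breaking anyone.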
6. Versioning: Plan for Change
Your API will change. The question is not whether, but how you handle those changes without breaking existing integrations. There are three common versioning strategies:
| Strategy | Example | Pros | Cons |
|---|---|---|---|
| URL path | /v1/users, /v2/users | Simple, obvious, cacheable | Duplicates entire API surface |
| Header | API-Version: 2026-03-01 | Clean URLs, fine-grained control | Less discoverable, harder to test |
| Query param | /users?version=2 | Easy to test in browser | Pollutes query string, cache issues |
My recommendation: use URL-path versioning for major versions (/v1, /v2) and date-based headers for minor, non-breaking changes. Stripe uses this approach with their Stripe-Version header, and it gives them incredible flexibility to evolve their API while maintaining backward compatibility for years.
The key rule: never break existing behavior in an existing version. Additive changes (new fields, new endpoints) are fine. Removing fields, changing types, or altering semantics requires a new version.
7. Authentication and Rate Limiting
Authentication should be simple to set up and hard to get wrong. Here is my opinionated ranking of auth methods for APIs:
For server-to-server APIs: API keys in headers (Authorization: Bearer sk_live_xxx). Simple, stateless, and well-understood. Use separate keys for test and production environments, as Stripe does with sk_test_ and sk_live_ prefixes.
For user-facing APIs: OAuth 2.0 with short-lived access tokens and refresh tokens. Yes, the OAuth spec is complex, but the user experience of "Login with Google/GitHub" is unbeatable.
For rate limiting: Always include rate limit headers in every response:
```
X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 997
X-RateLimit-Reset: 1679012400
```
When a client hits the limit, return 429 Too Many Requests with a Retry-After header. This lets well-behaved clients implement automatic backoff without guessing. According to Google Cloud's API design guide, clear rate limiting reduces abuse by 60% while improving the experience for legitimate users who can plan around the limits.
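On the client side, that backoff can be as small as this Python sketch: honor Retry-After when the server sends it, otherwise fall back to exponential backoff (the function name and defaults are illustrative):

```python
def retry_delay(headers: dict, attempt: int, base: float = 1.0) -> float:
    """Seconds to wait before retrying a 429 response."""
    retry_after = headers.get("Retry-After")
    if retry_after is not None:
        try:
            return float(retry_after)   # Retry-After given in seconds
        except ValueError:
            pass  # Retry-After may also be an HTTP date; not handled in this sketch
    return base * (2 ** attempt)        # exponential backoff fallback
```

Adding a small random jitter to the fallback is a common refinement to avoid synchronized retries from many clients.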
8. Real-World API Design Patterns
Idempotency Keys
Network failures happen. Clients retry requests. Without idempotency, a retry can create duplicate resources. The solution: require an Idempotency-Key header on POST requests.
```
POST /v1/payments
Idempotency-Key: unique-request-id-abc123

// If this request is sent twice with the same key,
// the second request returns the result of the first
// instead of creating a duplicate payment.
```
Stripe popularized this pattern, and it has saved countless applications from double-charging customers. Implement it by storing the key and response in a cache (Redis works well) with a TTL of 24-48 hours.
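Here is a hedged server-side sketch of that logic in Python, with an in-memory dict standing in for the Redis cache (all names and the TTL handling are illustrative):

```python
import time

# In-memory stand-in for Redis: key → (stored_at, cached_response).
_idempotency_cache: dict = {}
TTL_SECONDS = 24 * 3600  # 24h, per the 24-48h recommendation above

def idempotent(key: str, handler):
    """Return the cached response for a repeated Idempotency-Key;
    otherwise run the handler once and cache its result."""
    now = time.time()
    entry = _idempotency_cache.get(key)
    if entry is not None and now - entry[0] < TTL_SECONDS:
        return entry[1]                   # replay the first response
    result = handler()                    # e.g. actually create the payment
    _idempotency_cache[key] = (now, result)
    return result
```

A production version also has to handle concurrent requests with the same key (for example, by locking the key while the first request is in flight), which this sketch omits.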
Bulk Operations
When clients need to create or update many resources at once, provide bulk endpoints rather than forcing them to make hundreds of individual requests:
```
POST /v1/users/bulk

{
  "operations": [
    { "method": "create", "data": { "name": "Alice", "email": "alice@example.com" } },
    { "method": "create", "data": { "name": "Bob", "email": "bob@example.com" } },
    { "method": "update", "id": "usr_123", "data": { "name": "Charlie Updated" } }
  ]
}
```

Response:

```json
{
  "results": [
    { "index": 0, "status": "created", "data": { "id": "usr_456", ... } },
    { "index": 1, "status": "error", "error": { "code": "DUPLICATE_EMAIL", ... } },
    { "index": 2, "status": "updated", "data": { "id": "usr_123", ... } }
  ],
  "meta": { "succeeded": 2, "failed": 1, "total": 3 }
}
```
Note how each operation has its own status. Partial failures are a reality in bulk operations, and your API should handle them gracefully rather than failing the entire batch.
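A rough Python sketch of that per-operation processing on the server side; create and update stand in for your real persistence layer and are assumed to raise on failure:

```python
def run_bulk(operations, create, update):
    """Process each operation independently, collecting per-item
    results instead of failing the whole batch."""
    results, succeeded = [], 0
    for i, op in enumerate(operations):
        try:
            if op["method"] == "create":
                data = create(op["data"])
                results.append({"index": i, "status": "created", "data": data})
            elif op["method"] == "update":
                data = update(op["id"], op["data"])
                results.append({"index": i, "status": "updated", "data": data})
            else:
                raise ValueError("UNSUPPORTED_METHOD")
            succeeded += 1
        except Exception as exc:
            results.append({"index": i, "status": "error",
                            "error": {"code": str(exc)}})
    return {"results": results,
            "meta": {"succeeded": succeeded,
                     "failed": len(operations) - succeeded,
                     "total": len(operations)}}
```

The key design choice is the try/except around each operation: one bad item never poisons the rest of the batch.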
Webhooks
For asynchronous events, webhooks beat polling. But webhook design has its own pitfalls:
- Sign payloads with HMAC-SHA256 so consumers can verify the webhook came from you
- Include the full resource in the payload, not just the ID — this saves the consumer a follow-up API call
- Implement retry logic with exponential backoff for failed deliveries
- Provide a webhook testing tool in your dashboard so developers can trigger test events
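The first bullet, HMAC-SHA256 signing, needs only the Python standard library. Here is a sketch; the secret value and the idea of an X-Signature header are illustrative, not a fixed convention:

```python
import hashlib
import hmac

def sign_payload(secret: bytes, payload: bytes) -> str:
    """Provider side: signature sent alongside the webhook,
    e.g. in an X-Signature header (header name is illustrative)."""
    return hmac.new(secret, payload, hashlib.sha256).hexdigest()

def verify_payload(secret: bytes, payload: bytes, signature: str) -> bool:
    """Consumer side: recompute the signature and compare in constant time."""
    expected = sign_payload(secret, payload)
    return hmac.compare_digest(expected, signature)
```

The compare_digest call matters: a plain == comparison can leak timing information that helps an attacker forge signatures.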
9. Documentation That Developers Actually Read
I have a confession: I often judge an API entirely by its documentation before writing a single line of integration code. If the docs are good, I trust the API. If the docs are bad, I expect the API to be bad too. And I am almost always right.
What makes documentation great? According to a survey by Red Hat, the top three things developers want in API docs are:
- Working code examples (mentioned by 89% of respondents)
- Complete parameter descriptions (mentioned by 82%)
- Error code reference (mentioned by 76%)
Auto-generated OpenAPI/Swagger docs are a starting point, not a destination. They show you what endpoints exist but not why you would use them or how they fit together. The best API documentation includes:
- Quick start guide: "Make your first API call in 5 minutes"
- Conceptual guides: Explain the domain model and how resources relate
- Code examples: In multiple languages, using your official SDKs
- Changelog: What changed, when, and what consumers need to do
- API reference: Every endpoint, parameter, and response field documented
Tools like Mintlify, ReadMe, and Redocly make it easier to create beautiful, interactive documentation. But the content still has to be written by humans who understand both the API and its consumers.
10. GraphQL vs REST vs gRPC: When to Use What
This is another area where I have strong opinions. The "GraphQL vs REST" debate is often framed as a binary choice, but in practice, each has a sweet spot:
| Technology | Best For | Avoid When |
|---|---|---|
| REST | Public APIs, CRUD operations, cacheable resources | Complex nested queries, real-time subscriptions |
| GraphQL | Client-driven UIs, mobile apps (bandwidth-sensitive), complex data graphs | Simple CRUD, server-to-server, public APIs without auth |
| gRPC | Internal microservices, high-throughput, streaming | Browser clients (without a proxy), public APIs |
At BirJob, we use REST for our public-facing API because it is simple, cacheable, and every developer knows how to use it. If we were building a complex dashboard with deeply nested data relationships, GraphQL would make more sense. For internal service-to-service communication in a microservices architecture, gRPC's performance advantages (according to Google's benchmarks, up to 10x faster than REST for certain workloads) make it the clear winner.
The mistake I see most often is choosing GraphQL because it is trendy, then spending weeks building a schema that could have been three REST endpoints. Technology choices should serve business needs, not the other way around.
11. Security Checklist
API security breaches are among the most common attack vectors. OWASP's API Security Top 10 should be required reading for anyone designing APIs. Here is a practical checklist:
- Always use HTTPS. No exceptions, not even for internal APIs.
- Validate all input. Never trust client data. Validate types, lengths, formats, and ranges on the server.
- Implement proper authorization. Just because a user is authenticated does not mean they can access any resource. Check ownership on every request.
- Rate limit aggressively. Different limits for different endpoints — login endpoints should have much lower limits than read-only data endpoints.
- Log everything. Every request, every response code, every authentication attempt. You cannot investigate what you did not log.
- Use API keys for identification, tokens for authorization. API keys tell you who is calling. Tokens tell you what they are allowed to do.
- Implement request signing for webhooks. Consumers need to verify that webhook payloads actually came from your service.
- Never expose internal IDs. Use UUIDs or prefixed IDs (usr_abc123) instead of auto-incrementing integers that reveal your database structure.
12. My Opinionated API Design Manifesto
After years of building and consuming APIs, here are the hills I am willing to die on:
1. Design the API before writing any code. Write the OpenAPI spec first. Share it with potential consumers. Get feedback. Then build. This "API-first" approach catches design problems when they are cheap to fix.
2. Every error should be actionable. If your error message does not tell the developer what to do differently, it is useless. "Invalid request" is useless. "The 'email' field must be a valid email address (received: 'not-an-email')" is actionable.
3. Consistency beats cleverness. I would rather have a slightly verbose but perfectly consistent API than a clever one with shortcuts that only make sense if you read the docs carefully.
4. Versioning is not optional. Even if you think your API will never change, add versioning from day one. The cost is nearly zero, and the pain of adding it later is enormous.
5. SDKs are part of the API. If you provide a REST API without official SDKs in the top 3-4 languages your consumers use, you are making them write boilerplate that you should own. SDKs are not a nice-to-have; they are table stakes.
6. Deprecation is a feature, not a punishment. When you deprecate an endpoint, give consumers at least six months' notice, provide a migration guide, and include Sunset and Deprecation headers in responses so automated tools can flag the usage.
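As an illustrative sketch of point 6, those headers might be assembled like this in Python. Note the assumptions: the Deprecation value below follows the widely deployed draft form (check the current RFC before relying on it), and the docs URL is made up:

```python
def deprecation_headers(sunset_http_date: str) -> dict:
    """Headers to attach to every response from a deprecated endpoint."""
    return {
        "Deprecation": "true",        # draft-form value; newer specs use a timestamp
        "Sunset": sunset_http_date,   # RFC 8594, e.g. "Sat, 26 Sep 2026 00:00:00 GMT"
        "Link": '<https://api.example.com/docs/migrations>; rel="sunset"',
    }
```

The Link header with rel="sunset" points automated tooling at the migration guide, so a consumer's monitoring can flag deprecated calls without a human reading the changelog.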
13. Action Plan: Designing Your Next API
Whether you are building a new API from scratch or improving an existing one, here is a step-by-step plan:
Week 1: Research and Design
- List all the resources your API needs to expose
- Map out the relationships between resources
- Write an OpenAPI 3.1 spec with all endpoints, request/response schemas, and error codes
- Review the spec with at least two potential consumers
Week 2: Prototype
- Build a mock server from your OpenAPI spec (tools like Prism can do this automatically)
- Have consumers test against the mock to validate the design
- Iterate on the spec based on feedback
Week 3-4: Implementation
- Implement the API with consistent error handling from day one
- Write integration tests that verify response shapes match the spec
- Set up rate limiting, authentication, and monitoring
- Generate SDK stubs from the OpenAPI spec
Week 5: Documentation and Launch
- Write a quick start guide with working examples
- Document every error code with resolution steps
- Set up a changelog that consumers can subscribe to
- Launch with a beta period to catch issues before going stable
Sources
- Postman — 2024 State of the API Report
- Nordic APIs — API Economy Trends
- SmartBear — State of API Quality
- Google Cloud — Rate Limiting Strategies
- OWASP — API Security Top 10
- gRPC — Introduction
- Stripe API Documentation — Design Reference
I'm Ismat, and I build BirJob — Azerbaijan's job aggregator scraping 80+ sources daily.
